Some of the coolest things in augmented reality are coming out of Microsoft Research, and their newest project, "SemanticPaint," is no exception. The project lets you scan objects, or whole rooms, in 3D using a Kinect and then separate and define individual objects in the scene. This kind of accurate visualization of the world opens up great possibilities for things like self-driving cars and robots.
After the initial scan, which produces a single unsegmented model, a person can touch a chair with a hand or foot, say "Label: chair," and continue through the rest of the environment until each object is logged in the computer as a separate item. The computer then sorts the labeled items into classes.
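To make the labeling loop concrete, here is a toy sketch of that idea: segments of the scan get a class name from a user command, and the system groups them by class. This is purely illustrative; the class and names (`SceneLabeler`, `label`, `classes`) are hypothetical and not part of SemanticPaint's actual implementation.

```python
class SceneLabeler:
    """Toy store of user-provided labels for scene segments,
    grouping labeled segments into object classes."""

    def __init__(self):
        self.labels = {}            # segment_id -> class name
        self.known_classes = set()  # classes seen so far

    def label(self, segment_id, class_name):
        # Record a label given by touch + voice command ("Label: chair").
        self.labels[segment_id] = class_name
        self.known_classes.add(class_name)

    def classes(self):
        # Group labeled segments by their class.
        grouped = {}
        for seg, cls in self.labels.items():
            grouped.setdefault(cls, []).append(seg)
        return grouped

labeler = SceneLabeler()
labeler.label("segment-1", "chair")
labeler.label("segment-2", "table")
labeler.label("segment-3", "chair")
print(labeler.classes())
# {'chair': ['segment-1', 'segment-3'], 'table': ['segment-2']}
```

The real system works on dense 3D geometry and propagates labels with a learned model; this sketch only mirrors the user-facing flow of labeling and class grouping.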
The system runs fully online, providing continuous feedback and a personalized experience, and it can be processed on a laptop's CPU for better interactivity. Microsoft claims the software can learn, remember, and differentiate between labels so that it operates faster in the future.
Microsoft's ideal set-up is for the user to wear a depth camera; as the person moves within a space, "the dense 3D geometry of the room is automatically scanned in real-time."
Microsoft wants this to be more than just another cool augmented reality project. They see it as a way of recreating the world around us and then using the semantic models in a variety of ways, such as aiding partially sighted people and creating and navigating maps.
"Our system also hopefully moves us closer to the vision of lifelong learning: where semantic models adapt and extend to new object classes online, as users continuously interact with the world," Microsoft says.