The project produces malleable 3D rather than stereoscopic 3D. Stereoscopic 3D films two 2D images from slightly different angles and shows each of the viewer's eyes a slightly different image, so it can only be viewed in 3D from the angle at which the footage was originally shot. The Kinect, by contrast, uses its sensors and cameras to construct a genuine 3D model of its environment, meaning you can move the viewpoint while retaining the 3D effect. Blind spots still need to be eliminated by using more than one Kinect, of course, since a single device can't see the far side of objects.
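The core of that "genuine 3D" idea is that each Kinect depth frame can be reprojected into a cloud of 3D points, which can then be rendered from any viewpoint. A minimal sketch of that reprojection, using the standard pinhole camera model (the intrinsics below are illustrative placeholders, not calibrated Kinect values):

```python
import numpy as np

# Assumed, uncalibrated intrinsics for a 640x480 depth camera;
# a real system would use per-device calibration values.
FX = FY = 594.0          # focal length in pixels (assumption)
CX, CY = 320.0, 240.0    # principal point (assumption)

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Reproject an (H, W) depth image in meters into an (N, 3) point cloud.

    Each pixel (u, v) with depth z maps to camera-space coordinates via the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy. Because the
    result is true 3D geometry, the viewpoint can be moved freely afterwards,
    unlike a fixed stereoscopic image pair.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]    # drop pixels with no depth reading
```

Merging the point clouds from two or more Kinects (after calibrating their relative poses) is what fills in the blind spots a single camera leaves behind objects.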
"Real-time 'holographic' video chat using two Kinect cameras to capture one participant, and a custom compression algorithm and network protocol to stream the resulting 3D video data across the Internet," reads the video's description. "The other side of the conversation was filmed off a consumer 3D TV with a regular video camera; I apologize for the bad video quality. Note: the Wiimote was not used for head tracking; only to control the program and to move through the virtual space. The network protocol uses lossless compression using a Hilbert-curve traversal and run-length and delta encoding for the depth stream, and a Theora video codec for the color stream. The resulting bandwidth is about 750 kB/s for one Kinect camera."
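The quoted description doesn't spell out the depth codec, but the named ingredients (Hilbert-curve traversal, delta encoding, run-length encoding) are standard. A rough illustrative sketch, not Kreylos's actual code: walk the depth image along a Hilbert curve, take differences between consecutive samples, and run-length encode the resulting deltas.

```python
import itertools

def hilbert_d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n x n grid
    (n must be a power of two). Classic iterative algorithm."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def encode_depth(depth):
    """Delta-encode depth values along the Hilbert curve, then
    run-length encode the deltas as [delta, count] pairs."""
    n = len(depth)
    vals = [depth[y][x] for x, y in (hilbert_d2xy(n, d) for d in range(n * n))]
    deltas = [vals[0]] + [b - a for a, b in zip(vals, vals[1:])]
    runs = []
    for delta in deltas:
        if runs and runs[-1][0] == delta:
            runs[-1][1] += 1
        else:
            runs.append([delta, 1])
    return runs

def decode_depth(runs, n):
    """Invert encode_depth: expand the runs, integrate the deltas,
    and scatter the values back along the Hilbert curve."""
    deltas = [delta for delta, count in runs for _ in range(count)]
    vals = list(itertools.accumulate(deltas))
    depth = [[0] * n for _ in range(n)]
    for d, v in enumerate(vals):
        x, y = hilbert_d2xy(n, d)
        depth[y][x] = v
    return depth
```

The point of the Hilbert-curve order is that pixels adjacent along the curve are also adjacent in the image, so smooth depth surfaces turn into long runs of small, repeating deltas; the scheme stays lossless because decoding exactly reverses each step.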