There’s always something interesting cooking at Microsoft Research -- even if the technology doesn’t end up in commercial products. The latest involves a rather conventional LCD panel, but combined with force sensors and a robotic arm that moves it back and forth, it can simulate the shape and weight of objects on screen.

The technology was demonstrated in public for the first time at TechFest 2013. One demo presented three virtual 3D boxes, each assigned the weight and friction of a different material: stone, wood, and sponge. Force sensors measure how hard the user's fingertip presses, and the system simulates the appropriate resistance: as the user pushes, the robotic arm pulls the screen back in a matching smooth motion, and as the finger retracts, it moves the screen forward again, all while the graphics adjust to preserve the 3D effect.
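The push-and-retract behavior can be pictured as a simple spring-like control loop. The sketch below is purely illustrative: the stiffness values, travel range, and function names are invented, since Microsoft has not published its implementation.

```python
# Hypothetical sketch of the force-feedback loop described above.
# All constants and APIs here are assumptions for illustration only.

# Assumed spring stiffness per material (N/mm): stiffer materials yield less.
STIFFNESS = {"stone": 50.0, "wood": 20.0, "sponge": 2.0}
MAX_TRAVEL_MM = 40.0  # assumed travel range of the robotic arm


def target_displacement(material: str, force_n: float) -> float:
    """Map the measured fingertip force to a screen displacement.

    Hooke's-law model: displacement = force / stiffness, clamped to the
    arm's travel range. A softer material moves the screen further for
    the same push, which the finger perceives as less resistance.
    """
    disp = force_n / STIFFNESS[material]
    return min(disp, MAX_TRAVEL_MM)


def feedback_step(material: str, force_n: float, screen_mm: float,
                  gain: float = 0.5) -> float:
    """One control-loop tick: ease the screen toward its target position.

    The smoothing gain keeps the motion gradual, so the screen retreats
    as the user presses harder and returns as the finger retracts. The
    renderer would read the same position each tick to keep the visuals
    consistent with the physical screen depth.
    """
    target = target_displacement(material, force_n)
    return screen_mm + gain * (target - screen_mm)
```

With a 10 N push, the "sponge" box lets the screen retreat toward 5 mm while the "stone" box yields only 0.2 mm; dropping the force to zero makes each tick ease the screen back toward its resting position.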

“Your finger is always aware of motion,” explains Michel Pahud, an engineer in the Natural Interaction Research group at Microsoft Research Redmond. “As your finger pushes on the touchscreen and the senses merge with stereo vision, if we do the convergence correctly and update the visuals constantly so that they correspond to your finger’s depth perception, this is enough for your brain to accept the virtual world as real.”

Taking the experiment a step further, some participants were asked to identify objects by touch while blindfolded. Surprisingly, many got the shapes right. Though the user touches only a single point at a time, the haptic feedback delivered as the finger moves conveys enough information to identify a shape.

Haptics are already common in devices such as smartphones, but Microsoft is bringing them to virtual reality. Researchers envision applications in gaming, 3D modeling, medicine, and education.