Sony researchers presented a paper titled "Evaluation of Machine Learning Techniques for Hand Pose Estimation on Handheld Device with Proximity Sensor" at the 2020 CHI Conference on Human Factors in Computing Systems. The work describes a prototype that uses capacitive proximity sensors to detect how the player's hand is positioned, tracking 14 points of articulation. A machine-learning algorithm then uses that data to mimic the player's hand and finger movements in the simulation. The model was trained using optical tracking of a dozen people with different hand sizes performing numerous poses.
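To make the pipeline concrete, here is a minimal sketch of the general idea: learn a mapping from raw proximity-sensor readings to joint angles using optically tracked poses as ground truth. The sensor count, the synthetic data, and the simple linear-regression model below are illustrative assumptions for this sketch, not the actual models or setup evaluated in Sony's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 32   # assumed number of capacitive pads on the controller grip
N_JOINTS = 14    # the prototype tracks 14 points of articulation

# Synthetic training data standing in for optically tracked ground truth:
# each row pairs one frame of sensor readings with measured joint angles.
true_map = rng.normal(size=(N_SENSORS, N_JOINTS))
X_train = rng.uniform(0.0, 1.0, size=(500, N_SENSORS))  # proximity readings
y_train = X_train @ true_map                            # "ground truth" poses

# Fit the sensor-to-pose map with ordinary least squares, one of the
# simplest ML baselines one could evaluate for this task.
weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# At runtime, each new frame of sensor readings yields a 14-joint
# pose estimate that drives the virtual hand.
frame = rng.uniform(0.0, 1.0, size=(1, N_SENSORS))
pose = frame @ weights
print(pose.shape)  # one pose estimate with 14 joint angles: (1, 14)
```

In practice a nonlinear model would be needed, since capacitive readings fall off nonlinearly with finger distance, but the train-on-optical-ground-truth, predict-from-sensors structure is the same.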
While seeing your hands fully articulated in VR would be much more immersive, the practical applications are less clear. Aside from giving your buddy the middle finger, it does not seem necessary for current-generation games and would just be a waste of processing power.
The demonstration does show the controller operator stacking blocks in VR, but that's not really a game mechanic that anyone over the age of five would find exciting and fun. However, the technology could prove more useful in social settings, particularly for deaf users communicating in sign language. That is, assuming somebody launches a social VR platform that achieves broad adoption, as we discussed in our recent feature on the subject.
We have long known that Sony has big aspirations for the PlayStation VR headset. A little over a year ago, Sony Network Entertainment International Vice President Shawn Layden said that the evolution of the PSVR over the next 10 years would be "dramatic." Thanks to some recent patents, we have a general idea of where Sony is going with PSVR 2, even though the company has not announced one in the works.
Finger tracking seems further down the road, however. As can be seen in the video, it is far from perfect when it comes to certain gestures, but what Sony has now seems promising. It will be up to developers to come up with fun new ways to use the tech in games, though.