Microsoft has taken the wraps off some of the natural user interface (NUI) projects currently in development within its research group.

As a follow-up to a NUI-centric and Kinect SDK TechForum event yesterday, the company today posted videos of 3D talking heads, as well as new initiatives in smart displays--both of which make use of camera technologies to create new types of interaction experiences.

The first is an evolution of Microsoft's face-mapping technology, which is soon to be introduced as part of the Xbox 360's Avatar Kinect feature. Instead of mapping facial movements to a virtual character, though, the technology grabs high-quality 2D video of a person from a webcam (or Kinect) and maps it onto a 3D facial mesh model. The end result is a 3D face that can track the speech and head movements it picks up on camera and display them back on the 3D avatar.

In the demo, Microsoft Research principal researcher Zhengyou Zhang demonstrates that you can take that model and have it play back any text as well:

The other set of demos features a number of smart display technologies. These are shown off by Steven Bathiche, Microsoft's director of the applied sciences group and the co-inventor of Microsoft Surface. Bathiche demos the company's efforts to track movements on top of surfaces using camera systems mounted above a computing area. This is combined with a surface computing system so that users can interact with the same system through either interface.

Bathiche also demos a company prototype that makes use of "wedge lenses," which can pick up activity above a surface using cameras mounted below a flat screen (as opposed to above it, as in the previous example). Those same wedges can be used to transmit light back out the other way as two separate images to create an auto-stereoscopic image; this allows for glasses-free 3D in displays.

The wedge technology can also be combined with head tracking from Kinect to determine where users are situated in front of a panel. This has led to two advances in the displays, Bathiche notes. The first is that you can set the system up to recognize when there are two viewers and offer them different angles of the same image. The second is that you can dial that control up even higher and give each viewer a different image on the same display, as some TV manufacturers like Vizio have implemented (though with glasses).

You can see examples of this in action, along with a demo of head tracking being used to mimic the experience of looking out a window while viewing an LCD display, in the video below:

In a separate video about the projects, Craig Mundie, chief research and strategy officer for Microsoft, described them as part of one of the biggest shifts in the evolution of computing.

"Computing is becoming more invisible. It isn't in things you call 'computers' anymore, you call them something else," Mundie said. "If we want to bring the benefits of computing to literally billions of more people, and in many many other parts of their lives, we've got to make it a lot easier for them. We've got to, in essence, get rid of the learning curve."
