There is lots of potential for Kinect’s touchless interface in business, perhaps most immediately in retail and healthcare.
By Ellen Muraskin
January 14, 2013
In October 2012, Microsoft officially launched the SDK for its Kinect sensor device, giving its blessing to the marriage of Windows 8 applications with the voice-and-movement user interface familiar to Xbox game players. Developers, many of whom got a head start by hacking Kinect, are now rushing to download the free SDK, buy a Kinect device (it went on sale last February for $249), and produce a range of applications for scenarios in which hands can't, or shouldn't, touch controls.
Kinect for Windows is steadily gaining capabilities beyond those of its game-oriented sibling. Analyst Rob Sanfilippo, who covers gaming for independent research and analysis firm Directions On Microsoft, says, for example, that "today, Kinect for Windows can read facial expressions. It already can recognize people's faces and voices."
Sanfilippo sees enormous, if still unmeasured, potential for Kinect's touchless user interface in business applications. That potential is already giving rise to start-ups whose offerings are still in the testing and venture-capital-seeking phase. Microsoft is promoting this effort through BizSpark, a program that includes Microsoft Developer Network access to Windows Azure, the cloud-based app development platform, as well as other software tools, training, tech support, and investor and peer networking.
Sanfilippo sees the biggest applications to date in retail and healthcare. Bloomingdale's, for example, is working with Microsoft on a "virtual dressing room." Here, Kinect scans the contours of a person's body (it recognizes faces and can distinguish men from women) and reads gestures to let people virtually try on clothes without entering the store, perhaps during off hours. This could work through clear storefront glass, giving "window shopping" a whole new meaning. It could also work with a consumer's Kinect at home, taking some of the risk out of online clothing purchases. Touch-free kiosks present another possibility, improving self-service for an increasingly germ-phobic public. InterKnowlogy, an ISV headquartered in Carlsbad, Calif., has prototyped and demoed such kiosks.
In the healthcare field, a Toronto-based company called GestSure Technologies is developing a gesture-reading app for surgeons, who can better navigate MRIs and CAT scans in the operating room if they don’t have to relay instructions to assistants, or scrub out and back in again to touch unsterile keyboards and mice.
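GestSure's implementation is proprietary, but the core idea, mapping a tracked hand's motion to a navigation command such as "next slice," can be sketched in a few lines. The function below is an illustrative example only; the names, thresholds, and coordinate convention are assumptions, not code from GestSure or the Kinect SDK, which reports hand positions through its skeletal-tracking API.

```python
# Illustrative sketch: a minimal horizontal-swipe detector over a short
# window of tracked (x, y) hand positions, in meters, of the kind a
# skeletal-tracking sensor reports. Thresholds are invented for the example.

def detect_swipe(hand_positions, min_distance=0.4, max_vertical_drift=0.15):
    """Return 'left', 'right', or None for a sequence of (x, y) positions."""
    if len(hand_positions) < 2:
        return None
    xs = [p[0] for p in hand_positions]
    ys = [p[1] for p in hand_positions]
    # Reject motions that wander too far vertically to count as a swipe.
    if max(ys) - min(ys) > max_vertical_drift:
        return None
    dx = xs[-1] - xs[0]  # net horizontal travel over the window
    if dx >= min_distance:
        return 'right'
    if dx <= -min_distance:
        return 'left'
    return None
```

An application might then bind 'right' to "advance to the next scan slice" and 'left' to "go back," letting the surgeon page through imagery without touching anything.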
Emilie Hersh, InterKnowlogy CEO, finds the most innovation and investment currently in healthcare and life sciences. For physical therapists, the company is now working in Azure to develop a palette of physical exercises that can be assigned to patients and observed over the Internet. At home, Kinect tracks patients' range of motion. A home-based dashboard tracks their progress; a second one for the clinician confirms patients' compliance with the regimen. Hersh admits that the sensor presents some privacy issues, but believes these should not be insurmountable, especially given healthcare reform's added accountability for patient outcomes.
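Measuring range of motion from a skeletal-tracking sensor comes down to simple vector math on the 3-D joint positions it reports. As an illustrative sketch (not InterKnowlogy's code), the angle at a knee can be computed from the hip, knee, and ankle positions like this:

```python
import math

# Illustrative sketch: the angle at joint b (e.g. the knee), formed by the
# segments running from b to a (hip) and from b to c (ankle). Positions are
# (x, y, z) tuples in meters, as a skeletal-tracking sensor might report.

def joint_angle(a, b, c):
    """Return the angle at b, in degrees, between segments b->a and b->c."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

Sampling this angle over the course of an exercise gives the patient's achieved range of motion, which is the kind of number a home dashboard could chart session over session and a clinician's dashboard could check against the prescribed regimen.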
“With the price point—about $200—and the ability to link [Kinect] up to an inexpensive desktop or tablet and deliver this virtually anywhere, it seems like a no-brainer to get this into the market,” says Hersh. “If I had just had knee surgery, I could keep up with my PT program even while on the road.”
Even factoring in the cost of a commercial Microsoft license, Kinect deployment will have a very low barrier to entry. The device dramatically lowers the cost of 3-D scanning, widening its range far beyond mere controller replacement or even motion capture. Kinect is also about to bring down the cost of 3-D rendering, of particular interest to SMBs in such fields as architecture, industrial design, and prosthetic modeling.
This will happen when Microsoft adds its KinectFusion software to the SDK, although Redmond has not specified when that might be. With KinectFusion, instead of reading moving objects in front of it, the Kinect device itself can move, scanning rooms and hallways and opening up newly affordable applications in augmented reality and robot navigation. MIT is working in this area, says Sanfilippo. With its infrared camera, Kinect can even work in low-light conditions, suggesting future robotic disaster-response applications that lessen risk to humans.
For More Information
A look at InterKnowlogy apps
Microsoft’s Accelerator 11 start-up incubatees and their Kinect apps
The downloadable Kinect SDK