“User interface started with the command prompt, moved to graphics, then touch, and then gestures,” Microsoft research executive Yoram Yaakobi told the Wall Street Journal. “It’s now moving to invisible UI, where there is nothing to operate. The tech around you understands you and what you want to do. We’re putting this at the forefront of our efforts.”
With the push, dubbed “UI.Next,” Microsoft is pursuing a future in which users do not need to tell their device what to do — by touching or speaking to it, for instance — and instead passively consume information that the device has already prepared in anticipation of their needs.
Both Apple and Google have nodded in this direction already, though the technology is far from mature. Apple’s Passbook, for instance, can dynamically surface information like event tickets based on the user’s location, while Google Now will adjust a user’s schedule based on traffic conditions.
Microsoft is reportedly “investing heavily” in the so-called “invisible user interface” technology. “We were in an AI winter, and now we’re in an AI spring,” Microsoft research vice president Jeannette Wing said at a Microsoft event in Tel Aviv, referring to the industry-wide resurgence of interest in artificial intelligence. She pointed to Cortana’s natural language processing as Microsoft’s opening salvo.
“I speak to Cortana, Cortana responds. I speak back to it, and it understands that we’re still in the same conversation. It knows from the first sentence I said what I’m referring to,” she said. “That seems like such a small thing for human beings, but it’s huge.”
Similar natural language abilities have been a hallmark of Apple’s Siri since the feature’s debut, and the company has been seen making moves to bolster the underlying technology as well as expand Siri’s capabilities. Reports of a new Boston-based team of speech recognition experts tasked with improving Siri began circulating last summer. Last week, word surfaced that Apple had acquired speech recognition firm Novauris Technologies, whose system could allow Siri to process voice input locally rather than in the cloud.