While we're slowly
reimagining touch UI, a new and complementary form of UI is emerging that may
feel even more intuitive to the average person: speech. Amazon made a cultural
splash with the release of Alexa, its artificially intelligent (AI) personal
assistant, and the family of voice-activated home assistant devices built
around it. Google, the supposed leader in AI, rushed to follow suit with its
own suite of home assistant products.
Whether you prefer
Amazon's Alexa, Google's Assistant, Apple's Siri, or Microsoft's Cortana, these
services are designed to let you interface with your phone or smart device and
access the knowledge bank of the web with simple verbal commands, telling these
'virtual assistants' what you want. It's an amazing feat of engineering. And
even though it's not quite perfect, the technology is improving quickly, with
speech recognition error rates falling year after year. When you combine this
falling error rate with the massive innovations happening in microchips and
cloud computing (outlined in upcoming chapters of this series), we can expect
virtual assistants to become pleasantly accurate by 2020.
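To make the speech-to-text step concrete, here is a minimal sketch using Python's open-source SpeechRecognition package, which wraps free web speech services such as Google's. This is an illustrative assumption, not any vendor's actual pipeline: it covers only transcription, while the wake-word detection, intent parsing, and answer retrieval inside real assistants are far more involved.

# Minimal speech-to-text sketch using the SpeechRecognition package
# (pip install SpeechRecognition pyaudio). This covers only the first
# step of an assistant's pipeline: turning a spoken utterance into text.
import speech_recognition as sr

def listen_for_command() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        # Calibrate for background noise, then capture one utterance.
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        print("Listening...")
        audio = recognizer.listen(source)
    try:
        # Send the audio to Google's free web speech API for transcription.
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""  # Speech was unintelligible: the error rate in action.
    except sr.RequestError as err:
        raise RuntimeError(f"Speech service unreachable: {err}")

if __name__ == "__main__":
    command = listen_for_command()
    print(f"You said: {command!r}")

Running this prints whatever the service transcribed from your microphone; the empty-string case is exactly the recognition error the paragraph above expects to shrink over time.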
Even better, the
virtual assistants currently being engineered will not only understand your
speech perfectly, but they will also understand the context behind the
questions you ask; they will recognize the indirect signals given off by your
tone of voice; they will even engage in long-form conversations with you, Her-style.
Overall, voice-recognition-based virtual assistants will become the primary way
we access the web for our day-to-day informational needs.