Meet the Star of TED 2020: A Glass App That Coaches You As You Talk

Vic Gundotra, senior vice president of engineering at Google Inc., wears Project Glass internet glasses while speaking at the Google I/O conference in San Francisco, California, U.S., on Thursday, June 28, 2012. Photo: David Paul Morris/Bloomberg via Getty Images

The nightmare of public speaking is set to become slightly less vomit-inducing, thanks to an app for smart glasses that provides real-time advice on how to modulate volume and cadence. Its creators have so far built only a first version, but once it's perfected, you can bet that every speaker at TED 2020 will be using it.

Developed by a team at the University of Rochester’s Human-Computer Interaction Group, the system is called Rhema, and it works by recording a speaker, analyzing the volume and rate of the recorded words, and immediately displaying recommendations for changes. Speaking too slowly? Rhema will tell you to speed up. Beginning to whisper? Rhema will tell you to get louder.
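For the curious, the measurements involved are simpler than they might sound. Here's a rough Python sketch of the idea; the use of RMS energy as a loudness proxy and a recognizer-supplied word count are our assumptions, since the article doesn't describe the team's actual signal processing.

```python
import numpy as np

def rms_volume(samples: np.ndarray) -> float:
    """Root-mean-square energy of an audio window: a crude loudness proxy
    (an assumption here, not necessarily what Rhema computes)."""
    return float(np.sqrt(np.mean(samples.astype(np.float64) ** 2)))

def speaking_rate(num_words: int, window_seconds: float) -> float:
    """Words per minute over the analysis window. The word count would
    come from a speech recognizer running alongside the recording."""
    return num_words / window_seconds * 60.0
```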

In testing, the researchers used Google Glass, but in theory the system could work with any future smart glasses. After experimenting with a variety of ways of delivering suggestions, including moving graphs and a series of red and green lights, they settled on the simplest: every 20 seconds, Rhema flashes “louder” or “softer” based on its most recent volume measurements and “faster” or “slower” based on its speed measurements. If you’re already speaking within acceptable ranges, the display pats you on the back with a “good.”
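That decision rule boils down to a handful of threshold checks. Here's a hedged sketch of what it might look like; the target ranges below are invented placeholders, since the article doesn't give Rhema's actual thresholds.

```python
# Hypothetical target ranges, for illustration only.
TARGET_VOLUME = (0.05, 0.20)   # RMS energy, placeholder units
TARGET_RATE = (120.0, 160.0)   # words per minute, placeholder range

def pick_feedback(volume: float, rate_wpm: float) -> str:
    """Choose the word(s) to flash on the display every 20 seconds."""
    words = []
    if volume < TARGET_VOLUME[0]:
        words.append("louder")
    elif volume > TARGET_VOLUME[1]:
        words.append("softer")
    if rate_wpm < TARGET_RATE[0]:
        words.append("faster")
    elif rate_wpm > TARGET_RATE[1]:
        words.append("slower")
    return " ".join(words) if words else "good"
```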

The biggest challenge the researchers faced was designing a system that didn’t distract the speaker. “A significant enough distraction can introduce unnatural behaviors, such as stuttering or awkward pausing,” they wrote in a paper that will be presented today in Atlanta. “Because the human brain is not particularly adept at multitasking, this is a significant issue to address in our feedback design.”

A 20-second interval between recommendations and the placement of the Glass display in the user’s peripheral vision help solve that problem, but attempts to measure distraction from the audience’s perspective (is the speaker consistently breaking eye contact or pausing awkwardly?) were inconclusive. Like every smart-glasses application, this one still has to clear the technology’s main hurdle: making sure users don’t end up acting even more awkward than they would without it.