Augmented Reality and the Future of Language

In the next few years, smartphones will be augmented, and eventually replaced, by wearables. When that happens, we will be able to see and hear digital information floating in the air between and around us, even in casual conversation. This will have profound effects on everything from work to education to entertainment to social interaction.

Here at the NYU Future Reality Lab we are building prototypes to explore this future. Our team of students, research scientists and designers is creating what we call “The Metaroom”: an interactive, computationally enhanced multi-person space that combines virtual, augmented and mixed reality technology with shared spatial audio and both passive and active haptics, mirroring the real world one-to-one. The Metaroom lets us freely experiment with how a virtually augmented world can enrich our shared human experience. Because it is built on a web-based client/server architecture, the Metaroom supports instant joining, updating and changing of shared worlds, as well as live coding of the shared world's properties by all participants.
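
To make that architecture concrete, here is a minimal sketch of how a web-based shared-world client could stay in sync, assuming a WebSocket transport. The endpoint URL, message schema and setShared helper below are hypothetical illustrations, not the Metaroom's actual protocol.

```ts
// Hypothetical sketch of a shared-world client. The message schema
// and endpoint are illustrative assumptions, not the real protocol.

type WorldUpdate =
  | { kind: "join"; id: string }
  | { kind: "setProperty"; path: string; value: unknown }
  | { kind: "liveCode"; source: string };

// The local copy of the shared world's properties.
const world: Record<string, unknown> = {};

const socket = new WebSocket("wss://example.org/metaroom"); // placeholder URL

socket.onmessage = (event: MessageEvent) => {
  const update: WorldUpdate = JSON.parse(event.data);
  switch (update.kind) {
    case "join":
      console.log(`participant ${update.id} joined`);
      break;
    case "setProperty":
      // Every client applies the same ordered stream of updates,
      // so all participants converge on the same shared state.
      world[update.path] = update.value;
      break;
    case "liveCode":
      // Live coding: a participant broadcasts a small script that
      // redefines behavior in the shared world. (Evaluating remote
      // code like this assumes a trusted room.)
      new Function("world", update.source)(world);
      break;
  }
};

// A participant changes a shared property; the server rebroadcasts
// it to everyone, including the sender.
function setShared(path: string, value: unknown): void {
  socket.send(JSON.stringify({ kind: "setProperty", path, value }));
}
```

Because every client applies the same ordered stream of updates, all participants converge on the same shared state, and the same channel that carries property changes can also carry live-coded behavior.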

We are particularly interested in the difference between collocated and non-collocated real-time communication and shared experience. The former represents the gold standard for human-to-human communication, because it makes maximum use of the way our brains and bodies have evolved to communicate. Every form of real-time non-collocated communication, from the telegraph and telephone onward, has functioned as a sort of prosthetic, a way to compensate for not being in the same physical location. By focusing primarily on collocated communication at this stage, we set a higher bar: we can study how augmented communication might build and maintain trust, empathy and effective conflict resolution, without starting from the compromises that non-collocation imposes. We are also studying how virtual objects in such a shared space are perceived and remembered as being “real”, and how, like physical objects in our world, they have the potential to tap into our innate sense of object permanence.

We are combining this research focus with our project Chalktalk, which lets people collaborate through “intelligent drawings”: freehand sketches that software interprets as a way to rapidly build complex interactive meaning structures and simulations. Our view is that when wearables become ubiquitous, such modes of intelligent drawing will be learned, and most likely evolved, by children at an early age, and will become an integral part of literacy.
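
As a rough illustration of how freehand strokes can be matched to known glyphs, here is a simplified recognizer in the spirit of the well-known $1 gesture recognizer: resample each stroke to a fixed number of points, normalize for position and scale, and compare point by point against stored templates. This is only a hedged sketch; Chalktalk's actual interpretation of drawings is considerably more sophisticated.

```ts
// Simplified stroke recognition, in the spirit of the $1 recognizer.
// Not Chalktalk's actual algorithm.

type Point = { x: number; y: number };

function pathLength(pts: Point[]): number {
  let d = 0;
  for (let i = 1; i < pts.length; i++)
    d += Math.hypot(pts[i].x - pts[i - 1].x, pts[i].y - pts[i - 1].y);
  return d;
}

// Resample a stroke to n evenly spaced points along its arc length.
function resample(pts: Point[], n: number): Point[] {
  const interval = pathLength(pts) / (n - 1);
  const src = pts.slice();
  const out: Point[] = [src[0]];
  let acc = 0;
  for (let i = 1; i < src.length; i++) {
    const d = Math.hypot(src[i].x - src[i - 1].x, src[i].y - src[i - 1].y);
    if (acc + d >= interval && d > 0) {
      const t = (interval - acc) / d;
      const q = {
        x: src[i - 1].x + t * (src[i].x - src[i - 1].x),
        y: src[i - 1].y + t * (src[i].y - src[i - 1].y),
      };
      out.push(q);
      src.splice(i, 0, q); // continue measuring from the new point
      acc = 0;
    } else acc += d;
  }
  while (out.length < n) out.push(src[src.length - 1]);
  return out;
}

// Translate the centroid to the origin and scale to unit size, so
// the same shape matches regardless of where or how big it is drawn.
function normalize(pts: Point[]): Point[] {
  const cx = pts.reduce((s, p) => s + p.x, 0) / pts.length;
  const cy = pts.reduce((s, p) => s + p.y, 0) / pts.length;
  const centered = pts.map(p => ({ x: p.x - cx, y: p.y - cy }));
  const scale = Math.max(...centered.map(p => Math.hypot(p.x, p.y))) || 1;
  return centered.map(p => ({ x: p.x / scale, y: p.y / scale }));
}

// Average point-to-point distance between two equal-length strokes.
function strokeDistance(a: Point[], b: Point[]): number {
  let d = 0;
  for (let i = 0; i < a.length; i++)
    d += Math.hypot(a[i].x - b[i].x, a[i].y - b[i].y);
  return d / a.length;
}

// Match a drawn stroke against named glyph templates.
function recognize(stroke: Point[], templates: Map<string, Point[]>): string {
  const s = normalize(resample(stroke, 64));
  let best = "";
  let bestD = Infinity;
  for (const [name, template] of templates) {
    const d = strokeDistance(s, normalize(resample(template, 64)));
    if (d < bestD) { bestD = d; best = name; }
  }
  return best;
}
```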

As a new generation of children experiences these enhancements of reality as natives, everyday spoken language will begin to acquire a visual component. Given that language is the greatest superpower of our species, this change may have an even more fundamental positive effect on language and human capability than the invention of written language itself.

This research must address two important questions that revolve around trust: (1) Is there an intrinsic and fundamental difference between the level of trust that can be achieved in collocated versus non-collocated communication? (2) To what degree can a person meaningfully be said to “trust” a non-human AI agent, as opposed to a fellow human being?
