Jon Wiley is a director of AR and VR at Google. He co-founded the AR/VR team in 2014 and was director of UX (user experience) until 2018. Having joined Google in 2006, Wiley was the first designer for Google Autocomplete (predictive search queries) and led the Search UX team through three redesigns, the transition from desktop to mobile, and the addition of new features. He is a co-founder of Material Design, Google’s design language for all applications and the Android platform. He spoke in the 2019 session Designing in New Dimensions.
We caught up with him about the possibilities for human interaction with virtual reality, and just how far off that future could be.
You spearheaded the development of Google’s AR (augmented reality) and VR (virtual reality) team. What kinds of technologies has your team dreamed up and created?
In 2014, we announced Google Cardboard at Google’s annual I/O event. Google Cardboard was a simple VR viewer that highlighted how continuous innovation in mobile phones had made all the basic ingredients for VR available: a capable computer that could render reasonably high-quality graphics, paired with sensors that could detect the device’s orientation.
On the VR front, our team has pioneered some of the very best immersive VR applications, like Google Earth VR and Tilt Brush. We collaborated closely with YouTube on their VR app. And we’ve done groundbreaking work on stereo 3D video capture and rendering.
On the AR front, our team developed ARCore — the set of capabilities necessary to deliver great AR applications on supported Android phones. And we created Google Lens to help people search what they see.
How do you see humans interacting with, and even relying on, these mind-bending technologies in the future?
Many credit advances in computing, such as greater speed or memory, as the foundation for people becoming ever more productive and capable with computers. I think much of the credit should go to advances in the interface itself, what is often called human-computer interaction, or HCI.
Initially, computers were opaque. They required specialized knowledge to operate: people in lab coats fiddling with knobs, dials, and punch cards. Over time, computers adopted input and output conventions more closely aligned with how people naturally experience the world. First came the keyboard and the screen, which rely on an alphabet and basic written language. Then the mouse, which suddenly let us point at things, and eventually graphical user interfaces and physical metaphors like the desktop.
The smartphone is among the fastest and most widely adopted pieces of technology in history. It continues this progression toward ever more human interfaces: it fits in your hand, you can touch the software directly, and it goes where you go.
But computers, even smartphones, are still far behind. The interface isn’t human enough yet. The most intuitive interface is the natural one: the fully 3D world in which we perceive and move and interact. Computers today barely understand that world. But the progression continues, and investments in AR and VR are all about enabling people to work with computers and software as they do with the world around them. This, in turn, will enable people to be far more creative and productive than ever.
Right now, technological advances like AR/VR seem to only be available to a privileged few. How will the future of technology work to help everyone, not just those who can afford it?
Google’s vision is to bring helpful technology to everyone. More than half of all people on Earth have a smartphone and most of those devices run Android. Hundreds of millions of those devices are capable of AR experiences and that number is rapidly growing.
Google provides a streamlined search app called Google Go which works on a large variety of devices and internet connections. At I/O last month, we announced that we’ve integrated Google Lens features into Google Go.
For the nearly 800 million people in the world who struggle to read, Google engineers have built a text-to-speech feature into Google Lens. Now anyone can point a phone at text and hear it spoken out loud. This new feature, along with its availability through Google Go, is just one way we’re working to help more people understand the world around them.
How far off is this future? When do you think AR and VR are going to “break through” in a meaningful way? When will these technologies be a part of people’s day-to-day routines?
VR and AR are here today across a range of device capabilities and costs. On the inexpensive side, you can have a basic VR experience with your phone and Google Cardboard, and AR applications are available today on your phone. More expensive devices are headsets, often called Head-Mounted Displays or HMDs. These can provide a rich immersive experience and include VR devices like the HTC Vive and the Oculus Rift, and AR devices like Magic Leap and HoloLens.
We’re unlikely to see a “breakthrough” moment. Rather, smartphones will continue to become more capable and more perceptive, enabling more advanced AR capabilities. And HMDs will become more portable and comfortable over time. Becoming part of people’s day-to-day routines will involve massive improvements in accessibility and ease-of-use. It’ll likely take a decade or so to achieve that level of capability for a large number of people.
That said, AR and VR are already part of some people’s day-to-day routines. Many people use these technologies in their jobs, in fields like architecture, industrial design, and manufacturing. Google Glass was a pioneer here and provides helpful augmentation today.
The views and opinions of the author are his own and do not necessarily reflect those of the Aspen Institute.
By Maya Kobe-Rundio, Editorial Intern, Aspen Ideas Festival