The VR/AR revolution is just getting started
Context
Virtual & Augmented Reality are revolutionary and exciting technologies that have the potential to change our lives completely. In recent times, however, there has been much debate about whether Virtual Reality is dead or over-hyped. 2016 was supposed to be the year that VR took off, but consumer headset sales fell short of expectations. 2017 was better, but some analysts pointed out that most headset manufacturers cut prices, and others abandoned their VR projects entirely (e.g., Intel’s Project Alloy). Augmented Reality continues to enjoy a more positive outlook, though even there analysts are primarily upbeat about mobile AR (on smartphones and tablets) rather than smart-glasses-based AR.
The Rush to Judgment
Part of the reason for this rush to judgment on VR/AR is that we have grown accustomed to the narrative that consumer adoption only gets faster with each new technology. Historical adoption data show that the number of years until 25% of the American population adopts a new technology has been steadily shrinking, from electricity through the mobile phone and the World Wide Web.
Extrapolate this trend and VR/AR should reach 25% of the population within roughly three years. Since that hasn’t happened, the reasoning goes, it must be dead or at least over-hyped.
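To see where that three-year figure comes from, here is a minimal sketch of the naive extrapolation, in Python. The adoption-lag values are rough, illustrative assumptions (approximations of commonly cited figures, not the exact data behind the trend described above):

```python
# Illustrative only: approximate "years until 25% of the US population adopted"
# figures for past technologies. These numbers are assumptions standing in for
# the adoption data discussed above, not authoritative statistics.
import numpy as np

# (year introduced, years until ~25% US adoption) -- rough, illustrative values
adoption_lag = {
    "electricity":    (1873, 46),
    "telephone":      (1876, 35),
    "radio":          (1897, 31),
    "television":     (1926, 26),
    "PC":             (1975, 16),
    "mobile phone":   (1983, 13),
    "world wide web": (1991, 7),
}

years = np.array([year for year, _ in adoption_lag.values()], dtype=float)
lags = np.array([lag for _, lag in adoption_lag.values()], dtype=float)

# Fit a straight line: adoption lag as a function of introduction year.
slope, intercept = np.polyfit(years, lags, 1)

# Naively extrapolate the trend to consumer VR's launch (roughly 2016).
vr_launch = 2016
predicted_lag = slope * vr_launch + intercept
print(f"Lag shrinks by about {-slope:.2f} years per calendar year")
print(f"Naive prediction for VR/AR: ~{predicted_lag:.0f} years to 25% adoption")
```

Run against these assumed figures, the fitted line predicts roughly three years for a technology launched in 2016 – which is precisely the kind of straight-line reasoning this post argues is fallacious.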
The Fallacy
While it is true that VR and AR projections have not been met, and that several barriers to adoption do exist, this analysis is fundamentally flawed. There are three reasons why we need to think differently about Virtual & Augmented Reality.
- Virtual & Augmented Reality represent a fundamental change in user interface, and every change in user interface requires time to develop industry standards, re-write applications, and re-train consumers
- Virtual & Augmented Reality are complex, multi-discipline technologies that encompass every technology in the adoption story above – from the telephone, to the radio, television, PC, mobile phone, and the internet
- Virtual & Augmented Reality have the potential to disrupt every single Enterprise and Consumer application, so the amount of education and migration required is that much higher
Let us take each of these points one by one.
User Interface Disruption
The earliest computing user interface was the punch card, from the days of batch computing. Punch cards first gave way to the command-line interface (CLI), which in turn was disrupted by the Graphical User Interface (GUI). The GUI was complemented by the introduction of the mouse, and eventually the mouse gave way to tactile interfaces with the introduction of touchscreens. Let us look at the history of these three user interface disruptions.
The Graphical User Interface (GUI)
The GUI’s immediate predecessor was the command line interface (CLI). The big innovation with CLIs was that the user could make requests and view responses interactively, much faster than with batch processing. Graphical interfaces further improved on this by organizing applications, files, and commands visually. The first computer to use a GUI was Xerox’s Alto workstation in 1973. Given the Alto’s prohibitive price, though, it wasn’t until 1984, with the introduction of the original Macintosh, that GUIs became widely available. Microsoft adopted the GUI with the launch of Windows 1.0 in 1985, but it was only in 1990, when Microsoft introduced Windows 3.0, that consumers truly began to get comfortable with the GUI as a user interface. It took nearly 18 years to establish two standard operating systems with well-defined GUIs, to write a suite of applications for those operating systems, and to train consumers on how to use them.
The Mouse
The mouse, similarly, took a long time to be adopted. The first public demonstration of a mouse controlling a computer system came in Douglas Engelbart’s famous 1968 demo; his prototype was little more than a wooden shell over two perpendicular wheels.
Over the next several years, the mouse went through numerous design evolutions.
The Xerox Alto, released in 1973, was one of the first computers to make use of a mouse. However, the mouse remained relatively obscure until Microsoft released a PC-compatible mouse in 1983 and the Macintosh 128K shipped with a mouse in 1984.
Connection protocols for the mouse also kept changing. Early PC mice used a serial interface and protocol. Then, in 1986, Apple implemented the Apple Desktop Bus, and in 1987, IBM introduced the PS/2 interface – later color-coded green for mice and purple for keyboards – which other manufacturers rapidly adopted.
It wasn’t until the late 1990s – Apple with the 1998 iMac, and most PC manufacturers by 1999 – that USB became the universal protocol for connecting mice.
The Touchscreen
The most recent revolution in user interfaces was the emergence of the touchscreen, widely credited to Apple with the release of the iPhone. However, the history of touchscreens is much longer than that. The first publications on capacitive touchscreens can be traced back to 1965. In 1975, the American inventor George Samuel Hurst received a US patent for a resistive touchscreen, though it wasn’t produced until 1982. The HP-150, released in 1983, was one of the first computers to employ a touchscreen. Multi-touch technology was first developed by researchers at the University of Toronto in 1982, and in 1985 they built a multi-touch tablet based on capacitive touchscreen technology. By this time, Bell Labs and Carnegie Mellon University had also developed prototypes, some even demonstrating the multi-touch pinch-to-zoom gesture. Yet the technology was only widely adopted by consumers after Apple introduced the first iPhone in January 2007. It took Apple standardizing multi-touch gestures at its scale, and the emergence of an app economy with applications re-written for the multi-touch interface, for touchscreens to achieve mass adoption. Other manufacturers quickly standardized on the same interface.
As these examples show, when a user interface shift is involved, mass adoption requires industry standardization, re-writing of applications, and re-training of users. The VR/AR industry is a long way from this. There are at least five major operating systems or platforms in the world of VR/AR today – Oculus, SteamVR (HTC), Google Cardboard/Daydream, Sony PlayStation VR, and Microsoft’s Windows Mixed Reality and HoloLens – and that’s not counting all the other headsets being introduced by other manufacturers. Each platform has its own set of controllers and its own gestures for hands-free operation.
It is going to take a while before the industry arrives at a standard as universal as the USB mouse, before all manufacturers align on a standard set of hands-free gestures, and before we settle on a set of new peripherals that provide haptic or aural feedback.
Technological Complexity
The telephone and the mobile phone changed how we communicate. Radio and television changed how we get news, entertain ourselves, and learn. The computer increased our productivity, and the internet gave us access to a vast repository of knowledge. However, each of these technologies relied heavily on innovation in a few specific sectors – from semiconductors to communications protocols (e.g., 3G, WiFi) to software. Virtual and Augmented Reality, however, do all of the above and do it better (e.g., 3D content, spatial sound, six degrees of freedom). The complexity of these technologies is therefore significantly higher, and the pace of adoption will depend on the rate of innovation in multiple areas – from chipsets, to image and audio processing techniques, new display technologies, wireless transmission infrastructure and protocols, and software advances. Further, Virtual & Augmented Reality are driving innovation in entirely new technologies like eye tracking, light field displays, robotics, and artificial intelligence (AI) that will also be relevant to other innovations like autonomous vehicles. It will take a few years for these technologies to mature and deliver the full promise of VR/AR.
Breadth of Applications
Virtual & Augmented Reality impact literally every industry – from Travel & Tourism to Real Estate, Retail, Education, Healthcare, Industrial & Manufacturing, and Media & Entertainment. Participants in each of these industries need to understand what is possible with VR/AR, develop and deploy new applications, and re-train users on the new applications and interfaces.
Conclusion
We need to think about the adoption of Virtual & Augmented Reality differently than simply in terms of the number of headsets sold or the percentage of consumers adopting the technology. Claiming that VR/AR is dead or over-hyped based on these metrics would be like claiming that personal computing was dead right after the first Macintosh was released in 1984.
Instead, we should be looking at how quickly the VR/AR industry aligns around standards, how quickly content and applications are written or re-written and in how many industries, how quickly the enabling technologies mature, and finally how many users have experienced VR/AR and its compelling features. In fact, the industry’s first goal should simply be exposure, since a user seldom dons a VR/AR headset and walks away unimpressed.
Recent developments in the industry – the introduction of multiple new, cheaper, wireless headsets with higher resolution and improved field of view (e.g., Pico, Lenovo, HTC Vive Pro, Meta, Magic Leap), and the rapidly growing number of Consumer and Enterprise use cases (more on those in the next posts in this series) – demonstrate that the VR/AR revolution is just getting started. We will eventually arrive at a place where we cannot imagine how things were done without VR or AR, just as we can no longer imagine a world without multi-touch interfaces.