The IT crowd is addicted to acronyms. Most people can live a long and happy life without ever needing to know the difference between a CPU and a GPU or tell ROM from RAM. But as tech creeps further into our lives, the explosion of acronyms leaves me SMH (shaking my head).
The latest tech industry buzzwords are AR (augmented reality) and VR (virtual reality). The most prominent example of VR is the Facebook-owned Oculus Rift headset, while AR is more commonly associated with smartphone apps such as Snapchat and Pokémon Go. Last week, Apple launched ARKit — a toolkit for developers that will make creating apps like those much easier, thrusting AR back into the headlines. To the casual observer, though, it can be hard to tell the difference.
The easiest way to distinguish between VR and AR is whether you can still see the real world (or RR, short for “real reality”, as some insiders perversely enjoy calling it) behind the virtual Pokémon. If you can, that is AR. If everything around you is digital — even the hand you hold up in front of your begoggled face — you know you are immersed in VR.
Many in Silicon Valley see all this “immersive computing” as a spectrum, running from smartphone camera effects at one end to glasses at the other. When they finally emerge from the research labs, these glasses will be able to display messages, games and other internet smarts right before our eyes, conjuring virtual images that mingle realistically with actual objects.
So, the next time Mark Zuckerberg or Tim Cook starts talking about VR or AR, here is the Financial Times’s survival guide to the alphabet soup of immersive computing.
Apple, Google and Facebook have all held their annual conferences for app developers in the past couple of months and, at each, AR was a big theme — turning the smartphone into a magic window on the real world.
Facebook’s chief, Mr Zuckerberg, talks about three main uses for AR: displaying information depending on what the camera sees, such as directions or messages; adding digital objects; and enhancing real objects, like your home or your face. Facebook’s “camera effects platform” allows developers and designers to put 3D digital objects inside photos or videos in Facebook, ranging from Snapchat-style selfie masks to virtual graffiti that appears on real walls.
Apple’s vision for AR turns this inside out, giving developers access to its camera technology and allowing them to bring the results to their own apps. Its ARKit makes it easy for a designer or app maker to position a digital object in a scene, like putting a virtual coffee cup on your kitchen table.
Google is focusing less on special effects and more on object recognition. Its forthcoming Google Lens app understands what it sees, to show you the name of the flower in your camera window or reveal restaurant reviews when you point your phone at a storefront.
The company has another AR project called Tango. This takes smartphone-based AR up a level by using special depth-detecting cameras and sensors that are not yet found on most iPhones or Android devices. So far, only a couple of Tango phones are on the market, from Asus and Lenovo. One Tango app by Lowe’s, an American DIY retailer, helps customers navigate its stores and place virtual chairs and tables in their living room, at actual size.
However, innovation in software is narrowing the gap between the more accurate placement of digital objects offered by Tango devices and the camera capabilities that Apple and Facebook are building.
If AR mostly uses the devices we already have — smartphones — then VR requires some extra kit. Think of it as a video game that you wear on your face, but with the ability to simulate anything from travel (Google Earth is now available in VR) to education (surgery simulators are a popular example).
VR headsets such as Oculus Rift and HTC Vive put a screen not dissimilar to a smartphone’s just a few inches away from your eyes. When the head moves, the virtual world moves around realistically, thanks to an array of sensors, both inside the headset and, sometimes, set up around the room.
The highest-quality VR headsets today must be tethered to a powerful PC or PlayStation games console to function. As of this week, some of Apple’s Macs can support the HTC Vive, too.
More accessible are mobile VR viewers, which slot a smartphone into a simplified headset to provide its screen and processing. If you are curious about VR but do not want to splurge hundreds of dollars on the high-end systems, something like Samsung’s Gear VR or Google’s Daydream viewer is the best place to start.
A note about video: VR tends to refer to applications and experiences conjured entirely in computer graphics, however realistic. If the world inside your headset is real-world footage filmed using a 360-degree camera, that is generally described as “spherical video” rather than VR.
Alongside AR and VR, the next acronym being bandied about is MR — mixed reality. This is used to describe headsets such as Microsoft’s HoloLens. To confuse things even more, I am now starting to see some techies use XR, a sort of super-acronym to rule them all, where X just seems to stand for “all of the above”.
Mixed reality is in some ways the hardest to define because there seems to be little agreement in the industry about what the term means. One way to think about it is that it puts AR effects into something like a VR headset, so you see the real world through its lenses.
Magic Leap, which is working on a pair of mixed-reality glasses and has raised more than $1bn from the likes of Google and Alibaba, uses the term to distinguish its system’s realistic hologram-style graphics from smartphone-based AR apps.
Microsoft’s HoloLens headset, which is only for sale to app developers at the moment, is aiming for a similar goal: eyewear that places digital objects directly in your field of vision — objects that, one day, will be indistinguishable from real reality.
Recently, Microsoft has muddled things a bit by working with a range of PC manufacturers on VR headsets that run what it calls Windows Mixed Reality. These have some of the same sensors as its HoloLens for tracking head movements and depth sensing, but the wearer cannot see the real world, only VR.
One day, perhaps in 10 years if Mr Zuckerberg is to be believed, the difference will be moot: we will all be wearing glasses, all the time, that can do VR, AR and MR. By that time, I hope Silicon Valley has come up with a better name than “XR” specs.