Augmented reality (AR; also described as computer-mediated reality) is the enrichment of human sensory perception by means of information, generally manipulated and conveyed electronically, that would not be perceivable with the five senses.
A car dashboard, exploring a city by pointing a smartphone at it, and remote robotic surgery are all examples of augmented reality.
Augmented-reality glasses made their first appearance in a 1968 work by Ivan Sutherland.
The 1990s produced the first coherent, organized visions of how miniaturized electronics, portable devices, the Internet and geolocation could lead to virtual and/or enriched, mediated worlds. The vision matured and stabilized in the early 2000s, and the first consumer products reached the market at the end of that decade.
The elements that “augment” reality can be added through a mobile device such as a smartphone, through a PC equipped with a webcam or other sensors, or through devices for vision (e.g. glasses that project onto the retina), hearing (earphones) and manipulation (gloves) that add multimedia information to the reality we normally perceive.
The “additional” information may in fact also consist of a reduction of the amount of information normally perceivable through the senses, again with the aim of presenting a clearer, more useful or more entertaining situation. This case is also referred to as AR.
In virtual reality (VR), the electronically added or removed information is preponderant, to the point that people find themselves immersed in a situation in which the natural perceptions of many of the five senses no longer even seem to be present, having been replaced by others. In augmented reality (AR), by contrast, the person continues to experience ordinary physical reality but benefits from additional or manipulated information about that same reality.
The distinction between VR and AR is, however, somewhat artificial: mediated reality can be regarded as a continuum on which VR and AR sit adjacent to one another, rather than as two simply opposed concepts.
Mediation usually takes place in real time. Information about the real world surrounding the user can become interactive and digitally manipulable.
Already used in very specific fields such as the military, medicine and research, augmented reality reached the general public in 2009 thanks to improvements in technology, both through “augmented advertising” campaigns published in newspapers or on the web and through an ever-growing number of mobile applications, in particular for Windows Phone, Android and iPhone.
With augmented reality it is now possible to find information about the place where one is standing (such as hotels, bars, restaurants and metro stations), but also to view photos from social networks such as Flickr, or Wikipedia entries, superimposed on reality; to find nearby Twitter users; to locate a parked car; to play at catching invisible ghosts and fairies using an entire city as the playing field; and to tag places or leave short augmented-reality messages tied to a specific location (a method used by Japanese teenagers to meet up, and the subject of a film by Keiichi Matsuda).
The progressive spread of augmented-reality techniques, alongside the advantages it brings, also raises increasingly acute problems concerning user privacy.
Types of augmented reality
There are two main types of augmented reality:
- Augmented reality on mobile devices. The phone (or latest-generation smartphone) must be equipped with a Global Positioning System (GPS) receiver and a magnetometer (compass), must be able to display a real-time video stream, and must have an Internet connection to receive data online. The phone frames the surrounding environment in real time, and layers of content, from geolocated Point of Interest (POI) data to 3D elements, are superimposed on the real world.
- Augmented reality on computers. It is based on markers (ARtags): stylized black-and-white patterns that are shown to a webcam, recognized by the computer, and overlaid in real time with multimedia content: video, audio, 3D objects, and so on. Such augmented-reality applications have typically been built on Adobe Flash technology.
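The geolocated-POI overlay described above reduces to a small amount of spherical trigonometry: compute the compass bearing from the user's GPS position to each POI, compare it with the device's magnetometer heading, and place the label horizontally on screen accordingly. A minimal sketch in Python (the 60-degree field of view and 1080-pixel screen width are illustrative assumptions, not values from any particular device):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def poi_screen_x(user_lat, user_lon, heading_deg, poi_lat, poi_lon,
                 fov_deg=60, screen_w=1080):
    """Horizontal pixel position for a POI label, or None if the POI is
    outside the camera's horizontal field of view."""
    # Signed angle between the device heading and the POI, folded into [-180, 180).
    rel = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon) - heading_deg + 180) % 360 - 180
    if abs(rel) > fov_deg / 2:
        return None  # POI is off-screen
    return round(screen_w / 2 + rel / (fov_deg / 2) * (screen_w / 2))
```

A real application would repeat this for every video frame, filter POIs by distance, and derive vertical placement from the device's pitch in the same way.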
“Augmented advertising” exploded in 2009 through numerous campaigns by brands (Toyota, Lego, Mini, Kellogg’s, General Electric), musicians (Eminem, John Mayer) and magazines (Colors, Esquire and Wallpaper). Spam Magazine, launched in Italy in 2012, was the first free magazine entirely in augmented reality, for both its editorial and its commercial content.
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match.
With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual or real, e.g. seeing other real, sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space. Augmented reality brings components of the digital world into a person’s perceived real world. One example is an AR helmet for construction workers which displays information about the construction site. The first functional AR systems that provided immersive mixed reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force’s Armstrong Labs in 1992.
Hardware components for augmented reality are: processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements which often include a camera and MEMS sensors such as accelerometer, GPS, and solid state compass, making them suitable AR platforms.
Various technologies are used in Augmented Reality rendering including optical projection systems, monitors, hand held devices, and display systems worn on the human body.
A head-mounted display (HMD) is a display device worn on the head, for example as part of a harness or helmet. HMDs place images of both the physical world and virtual objects over the user’s field of view. Modern HMDs often employ sensors for six-degrees-of-freedom monitoring that allow the system to align virtual information to the physical world and adjust accordingly with the user’s head movements. HMDs can provide VR users with mobile and collaborative experiences. Specific providers, such as uSens and Gestigon, are even including gesture controls for full virtual immersion.
In January 2015, Meta secured funding in a round led by Horizons Ventures, Tim Draper, Alexis Ohanian, BOE Optoelectronics and Garry Tan. On February 17, 2016, Meta announced their second-generation product, the Meta 2, at TED. The Meta 2 head-mounted display uses a sensory array for hand interactions and positional tracking, a 90-degree (diagonal) field of view, and a display resolution of 2560 × 1440 (20 pixels per degree), at the time considered the largest field of view (FOV) available.
Virtual Fixtures, the first AR system (1992, U.S. Air Force, WPAFB)
AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employ cameras to intercept the real world view and re-display its augmented view through the eye pieces and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear lens pieces.
NASA X-38 display showing video map overlays including runways and obstacles during flight test in 2000.
A head-up display, also known as a HUD, is a transparent display that presents data without requiring users to look away from their usual viewpoints. A precursor technology to augmented reality, heads-up displays were first developed for pilots in the 1950s, projecting simple flight data into their line of sight thereby enabling them to keep their “heads up” and not look down at the instruments. Near eye augmented reality devices can be used as portable head-up displays as they can show data, information, and images while the user views the real world. Many definitions of augmented reality only define it as overlaying the information. This is basically what a head-up display does; however, practically speaking, augmented reality is expected to include registration and tracking between the superimposed perceptions, sensations, information, data, and images and some portion of the real world.
CrowdOptic, an existing app for smartphones, applies algorithms and triangulation techniques to photo metadata including GPS position, compass heading, and a time stamp to arrive at a relative significance value for photo objects. CrowdOptic technology can be used by Google Glass users to learn where to look at a given point in time.
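The triangulation idea behind such photo-metadata analysis can be illustrated with a toy example: two observers at known positions, each with a compass heading, define two lines of sight, and their intersection locates the shared point of interest. Below is a hedged sketch on a local flat plane; it illustrates the general technique only and is not CrowdOptic's actual algorithm:

```python
import math

def intersect_bearings(p1, b1_deg, p2, b2_deg):
    """Intersect two lines of sight given as (x, y) origins plus compass
    bearings (degrees clockwise from north) on a local flat plane.
    Returns the intersection point, or None if the bearings are parallel."""
    # Compass bearing -> unit direction vector: north is +y, east is +x.
    d1 = (math.sin(math.radians(b1_deg)), math.cos(math.radians(b1_deg)))
    d2 = (math.sin(math.radians(b2_deg)), math.cos(math.radians(b2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines of sight never meet
    # Solve p1 + t*d1 = p2 + s*d2 for the parameter t.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

With more than two observers, a production system would intersect all pairs and cluster or least-squares-fit the results, since real compass headings are noisy.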
In January 2015, Microsoft introduced HoloLens, an independent smartglasses unit. Brian Blau, Research Director of Consumer Technology and Markets at Gartner, said that “Out of all the head-mounted displays that I’ve tried in the past couple of decades, the HoloLens was the best in its class.” First impressions and opinions have generally been that HoloLens is a superior device to Google Glass, and that it manages to do several things “right” where Glass failed.
Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for wireless communication. The first contact lens display was reported in 1999, with further prototypes following 11 years later, in 2010/2011. Another version of contact lenses, in development for the U.S. military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and on distant real-world objects at the same time. The futuristic short film Sight features contact lens-like augmented reality devices.
Virtual retinal display
A virtual retinal display (VRD) is a personal display device under development at the University of Washington’s Human Interface Technology Laboratory. With this technology, a display is scanned directly onto the retina of a viewer’s eye. The viewer sees what appears to be a conventional display floating in space in front of them.
The EyeTap (also known as Generation-2 Glass) captures rays of light that would otherwise pass through the center of the lens of the wearer’s eye, and substitutes synthetic computer-controlled light for each ray of real light. The Generation-4 Glass (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source) except that it also has infinite depth of focus and causes the eye itself to function, in effect, as both a camera and a display, by way of exact alignment with the eye and resynthesis (in laser light) of the rays of light entering the eye.
Handheld displays employ a small display that fits in a user’s hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiducial markers, and later GPS units and MEMS sensors such as digital compasses and six-degrees-of-freedom accelerometer–gyroscopes. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portability of handheld devices and the ubiquity of camera phones. The disadvantages are the physical constraint of the user having to hold the device out in front of them at all times, as well as the distorting effect of the typically wide-angle mobile phone camera compared with the real world as viewed through the eye. Examples such as Pokémon Go and Ingress utilize an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map for the user to interact with.
Spatial Augmented Reality (SAR) augments real world objects and scenes without the use of special displays such as monitors, head mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users.
Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance the object’s appearance with a simple unit: a projector, a camera, and a sensor.
Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle. Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.
A SAR system can display on any number of surfaces of an indoor setting at once. SAR supports both a graphical visualisation and passive haptic sensation for the end users. Users are able to touch physical objects in a process that provides passive haptic sensation.
Modern mobile augmented-reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user’s head. Tracking the user’s hand(s) or a handheld input device can provide a 6DOF interaction technique.
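A common way to fuse two of these sensors, the gyroscope and the accelerometer, into a stable orientation estimate is a complementary filter: the gyro rate is integrated for short-term accuracy, while the accelerometer's gravity-based estimate is blended in to cancel long-term gyro drift. The sketch below is a generic illustration of one pitch update, not any particular system's tracker; the 0.98 blending factor is an assumed typical value:

```python
import math

def complementary_pitch(pitch_prev, gyro_rate_dps, accel, dt, alpha=0.98):
    """One update step of a complementary filter for pitch, in degrees.
    pitch_prev: previous pitch estimate (degrees)
    gyro_rate_dps: pitch rate from the gyroscope (degrees/second)
    accel: (ax, ay, az) accelerometer reading in any consistent unit
    dt: time step in seconds
    alpha: blend weight; high alpha trusts the gyro short-term."""
    ax, ay, az = accel
    # Pitch implied by the direction of gravity in the accelerometer frame.
    pitch_accel = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Blend the integrated gyro estimate with the gravity-based estimate.
    return alpha * (pitch_prev + gyro_rate_dps * dt) + (1 - alpha) * pitch_accel
```

Run per sensor sample, the filter slowly pulls a drifted gyro estimate back toward the accelerometer's reading; full 6DOF trackers extend the same idea to all axes, often with Kalman filtering.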
Mobile augmented reality applications are gaining popularity due to the wide adoption of mobile and especially wearable devices. However, they often rely on computationally intensive computer vision algorithms with extreme latency requirements. To compensate for the lack of computing power, offloading data processing to a distant machine is often desired. Computation offloading introduces new constraints in applications, especially in terms of latency and bandwidth. Although there is a plethora of real-time multimedia transport protocols, support from the network infrastructure is needed as well.
Techniques include speech recognition systems that translate a user’s spoken words into computer instructions and gesture recognition systems that can interpret a user’s body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear. Some of the products which are trying to serve as a controller of AR Headsets include Wave by Seebright Inc. and Nimble by Intugine Technologies.
The computer analyzes the sensed visual and other data to synthesize and position augmentations.
Software and algorithms
A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real-world coordinates, independent of the camera, from camera images. That process is called image registration, and it uses different methods of computer vision, mostly related to video tracking. Many computer vision methods of augmented reality are inherited from visual odometry.
Usually those methods consist of two stages. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods such as corner detection, blob detection, edge detection or thresholding, and/or other image processing methods. The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene; in some of those cases the scene’s 3D structure should be precalculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure-from-motion methods like bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.
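As a concrete instance of the second stage: with four matched points on a known planar marker, the camera-to-marker mapping is a homography that can be recovered by solving a small linear system. The self-contained sketch below assumes exact, noise-free correspondences with a homography whose bottom-right entry is nonzero; real systems add coordinate normalization and robust estimation on top of this:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography mapping four src points onto four dst points,
    normalized so that H[2][2] = 1. Each correspondence contributes two
    linear equations in the eight unknown entries of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    """Map a 2D point through homography H (with perspective division)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Once the homography (or, with camera intrinsics, the full pose) is known, virtual content can be rendered in the marker's coordinate frame and reprojected into the camera image.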
Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC), which consists of XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
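A rough idea of what such a description looks like, and how an application might consume it, is sketched below. The fragment is modeled loosely on ARML's structure; the element names are illustrative and not guaranteed to match the OGC schema:

```python
import xml.etree.ElementTree as ET

# Schematic, hand-written fragment loosely modeled on ARML: a geolocated
# feature anchored to a GML point. Element names are illustrative only.
ARML_SNIPPET = """\
<arml xmlns:gml="http://www.opengis.net/gml">
  <ARElements>
    <Feature id="restaurant-1">
      <name>Trattoria Roma</name>
      <anchors>
        <Geometry>
          <gml:Point>
            <gml:pos>45.4642 9.1900</gml:pos>
          </gml:Point>
        </Geometry>
      </anchors>
    </Feature>
  </ARElements>
</arml>
"""

def feature_positions(arml_text):
    """Return {feature id: (lat, lon)} for every gml:pos in the document."""
    ns = {"gml": "http://www.opengis.net/gml"}
    root = ET.fromstring(arml_text)
    out = {}
    for feat in root.iter("Feature"):
        pos = feat.find(".//gml:pos", ns)
        if pos is not None:
            lat, lon = map(float, pos.text.split())
            out[feat.get("id")] = (lat, lon)
    return out
```

An AR browser would parse such a document once, then feed each feature's coordinates into its rendering and tracking pipeline; the ECMAScript bindings mentioned above let scripts mutate those virtual objects at runtime.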
Virtual reality (VR)
Virtual reality (VR) typically refers to computer technologies that use virtual reality headsets, sometimes in combination with physical spaces or multi-projected environments, to generate realistic images, sounds and other sensations that simulate a user’s physical presence in a virtual or imaginary environment. A person using virtual reality equipment is able to “look around” the artificial world, and with high quality VR move about in it and interact with virtual features or items. VR headsets are head-mounted goggles with a screen in front of the eyes. Programs may include audio and sounds through speakers or headphones.
VR systems that include transmission of vibrations and other sensations to the user through a game controller or other devices are known as haptic systems. This tactile information is generally known as force feedback in medical, video gaming and military training applications. Virtual reality also refers to remote communication environments which provide a virtual presence of users through telepresence and telexistence or the use of a virtual artifact (VA). The immersive environment can be similar to the real world, in order to create a lifelike experience grounded in reality, or it can be fantastical, as in sci-fi. Augmented reality systems may also be considered a form of VR that layers virtual information over a live camera feed into a headset, or through a smartphone or tablet device.
Etymology and terminology
In 1938, Antonin Artaud described the illusory nature of characters and objects in the theatre as “la réalité virtuelle” in a collection of essays, Le Théâtre et son double. The English translation of this book, published in 1958 as The Theater and its Double, is the earliest published use of the term “virtual reality”. The term “artificial reality”, coined by Myron Krueger, has been in use since the 1970s. The term “virtual reality” was used in The Judas Mandala, a 1982 science fiction novel by Damien Broderick. “Virtual” has had the meaning “being something in essence or effect, though not actually or in fact” since the mid-1400s, probably via the sense of “capable of producing a certain effect” (early 1400s). The term “virtual” has been used in the computer sense of “not physically existing but made to appear by software” since 1959.
A dictionary definition for “cyberspace” states that this word is a synonym for “virtual reality”, but the two terms are fundamentally different (something that is “virtual” does not necessarily need to rely on a network, for instance).
Virtual reality shares some elements with “augmented reality” (or AR). AR is a type of virtual reality technology that blends what the user sees in their real surroundings with digital content generated by computer software. The additional software-generated images of the virtual scene typically enhance how the real surroundings look in some way. Some AR systems use a camera to capture the user’s surroundings, or some type of display screen which the user looks at (e.g., Microsoft’s HoloLens, Magic Leap).
The Virtual Reality Modelling Language (VRML), first introduced in 1994, was intended for the development of “virtual worlds” without dependency on headsets. The Web3D consortium was subsequently founded in 1997 for the development of industry standards for web-based 3D graphics. The consortium subsequently developed X3D from the VRML framework as an archival, open-source standard for web-based distribution of VR content.
All modern VR displays are based on technology developed for smartphones, including: gyroscopes and motion sensors for tracking head, hand, and body positions; small HD screens for stereoscopic displays; and small, lightweight and fast processors. These components led to relative affordability for independent VR developers, and led to the 2012 Oculus Rift Kickstarter offering the first independently developed VR headset.
Independent production of VR images and video has increased with the development of omnidirectional cameras, also known as 360-degree cameras or VR cameras, that have the ability to record in all directions, although at low resolutions or in highly compressed formats for online streaming. In contrast, photogrammetry is increasingly used to combine several high-resolution photographs for the creation of detailed 3D objects and environments in VR applications.
Before the 1950s
View-Master, a stereoscopic visual simulator, was introduced in 1939.
The exact origins of virtual reality are disputed, partly because of how difficult it has been to formulate a definition for the concept of an alternative existence. Elements of virtual reality surfaced as early as the 1860s; later, the French playwright Antonin Artaud used avant-garde work to blur illusion and reality into one and the same, arguing that a theatre audience should suspend its disbelief and consider the performance to be reality. The first references to the more modern concept of virtual reality came from science fiction. Stanley G. Weinbaum’s 1935 short story “Pygmalion’s Spectacles” describes a goggle-based virtual reality system with holographic recording of fictional experiences, including smell and touch.
Morton Heilig wrote in the 1950s of an “Experience Theatre” that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision dubbed the Sensorama in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device. Heilig also developed what he referred to as the “Telesphere Mask” (patented in 1960). The patent application described the device as “a telescopic television apparatus for individual use…The spectator is given a complete sensation of reality, i.e. moving three dimensional images which may be in colour, with 100% peripheral vision, binaural sound, scents and air breezes”.
Also notable among the earlier hypermedia and virtual reality systems was the Aspen Movie Map, created at MIT in 1978. The program was a crude virtual simulation of Aspen, Colorado, in which users could wander the streets in one of three modes: summer, winter, and polygons. The first two were based on photographs—the researchers actually photographed every possible movement through the city’s street grid in both seasons—and the third was a basic 3-D model of the city. Atari founded a research lab for virtual reality in 1982, but the lab was closed after two years due to the Atari Shock (the North American video game crash of 1983). However, its former employees, such as Tom Zimmerman, Scott Fisher, Jaron Lanier and Brenda Laurel, continued their research and development of VR-related technologies. By the 1980s the term “virtual reality” was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1985, which developed several VR devices such as the Data Glove, the Eye Phone, and the Audio Sphere. VPL licensed the Data Glove technology to Mattel, which used it to make an accessory known as the Power Glove. While the Power Glove was hard to use and not popular, at US$75 it was an early affordable VR device.
The VR industry mainly provided VR devices for medical, flight simulation, automobile industry design, and military training purposes from 1970 to 1990.
In 1991, Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. DeFanti from the Electronic Visualization Laboratory created the first cubic immersive room, The Cave. Developed as Cruz-Neira’s PhD thesis, it involved a multi-projected environment, similar to the holodeck, allowing people to see their own bodies in relation to others in the room.
The 1990s saw the first widespread commercial releases of consumer headsets. In 1991, Sega announced the Sega VR headset for arcade games and the Mega Drive console. It used LCD screens in the visor, stereo headphones, and inertial sensors that allowed the system to track and react to the movements of the user’s head. In the same year, Virtuality launched and went on to become the first mass-produced, networked, multiplayer VR entertainment system. It was released in many countries, including a dedicated VR arcade at Embarcadero Center in San Francisco. Costing up to $73,000 per multi-pod Virtuality system, they featured headsets and exoskeleton gloves that gave one of the first “immersive” VR experiences. Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to “drive” Mars rovers from Earth in apparent real time despite the substantial delay of Mars-Earth-Mars signals.
In 1991, Computer Gaming World predicted “Affordable VR by 1994”. By 1994, Sega released the Sega VR-1 motion simulator arcade attraction in SegaWorld amusement arcades. It was able to track head movement and featured 3D polygon graphics in stereoscopic 3D, powered by the Sega Model 1 arcade system board. Also in 1994, Apple released QuickTime VR which, despite using the term “VR”, did not offer true virtual reality, instead displaying 360-degree photographic panoramas.
A non-VR system called the Virtual Boy was created by Nintendo and was released in Japan on July 21, 1995 and in North America on August 15, 1995. Also in 1995, a group in Seattle created public demonstrations of a “CAVE-like” 270 degree immersive projection room called the Virtual Environment Theater, produced by entrepreneurs Chet Dagit and Bob Jacobson. The same system was shown in 1996 in tradeshow exhibits sponsored by Netscape Communications. Forte released the VFX1, a PC-powered virtual reality headset in 1995, which was supported by games including Descent, Star Wars: Dark Forces, System Shock and Quake.
In 1999, entrepreneur Philip Rosedale formed Linden Lab with an initial focus on the development of VR hardware. In its earliest form, the company struggled to produce a commercial version of “The Rig”, which was realized in prototype form as a clunky steel contraption with several computer monitors that users could wear on their shoulders. The concept was later adapted into the personal computer-based, 3D virtual world Second Life.
In 2001, SAS3 (or SAS Cube) became the first PC-based cubic room; developed by Z-A Production (Maurice Benayoun, David Nahon), Barco and Clarté, it was installed in Laval, France, in April 2001. The SAS library gave birth to Virtools VRPack. In 2007, Google introduced Street View, a service that shows panoramic views of an increasing number of worldwide positions such as roads, indoor buildings and rural areas. It also features a stereoscopic 3D mode, introduced in 2010.
In 2010, Palmer Luckey designed the first prototype of the Oculus Rift. This prototype, built on a shell of another virtual reality headset, was only capable of rotational tracking. However, it boasted a 90-degree field of vision that was previously unseen in the consumer market at the time. This initial design would later serve as a basis from which the later designs came.
In 2013, Valve discovered and freely shared the breakthrough of low-persistence displays which make lag-free and smear-free display of VR content possible. This was adopted by Oculus and was used in all their future headsets.
In early 2014, Valve showed off their SteamSight prototype, the precursor to both consumer headsets released in 2016. It shared major features with the consumer headsets, including separate 1K displays per eye, low persistence, positional tracking over a large area, and Fresnel lenses.
On March 25, 2014, Facebook purchased Oculus VR for $2 billion. This purchase occurred before any of the devices ordered through Oculus’ 2012 Kickstarter had shipped. In that same month, Sony announced Project Morpheus (its code name for PlayStation VR), a virtual reality headset for the PlayStation 4 video game console, and Google announced Cardboard, a do-it-yourself stereoscopic viewer for smartphones in which the user places their smartphone in a cardboard holder worn on the head. In 2015, the Kickstarter campaign for Gloveone, a pair of gloves providing motion tracking and haptic feedback, was successfully funded, with over $150,000 in contributions.
In February–March 2015, HTC and Valve Corporation announced the virtual reality headset HTC Vive and controllers. The set included tracking technology called Lighthouse, which utilized wall-mounted “base stations” for positional tracking using infrared light.
By 2016 there were at least 230 companies developing VR-related products. Facebook had 400 employees focused on VR development; Google, Apple, Amazon, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays were still of a low enough resolution and frame rate that images were still identifiable as virtual. On April 5, 2016, HTC shipped its first units of the HTC Vive SteamVR headset. This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space.
In early 2017, a patent filed by Sony showed they were developing a similar location tracking technology to the VIVE for PlayStation VR, with the potential for the development of a wireless headset.
Several virtual reality head mounted displays (HMD) were released for gaming during the early-mid 1990s. These included the Virtual Boy developed by Nintendo, the iGlasses developed by Virtual I-O, the Cybermaxx developed by Victormaxx and the VFX1 Headgear developed by Forte Technologies. Other modern examples of narrow VR for gaming include the Wii Remote, the Kinect, and the PlayStation Move/PlayStation Eye, all of which track and send motion input of the players to the game console somewhat accurately.
Commercial tethered headsets released for VR gaming include the Oculus Rift and the HTC Vive. Systems in development include Sony’s PlayStation VR, requiring a PlayStation instead of a PC to run; the StarVR; FOVE; and the Magic Leap.
Following the widespread release of commercial VR headsets in the mid-2010s, several VR-specific games and VR versions of popular video games have been released. Guild Software’s Vendetta Online was widely reported as the first MMORPG to support the Oculus Rift, making it potentially the first persistent online world with native support for a consumer virtual reality headset. Since 2013, several virtual reality devices have sought to enter the market as complements to the Oculus Rift, aiming to enhance the gaming experience. One, the Virtuix Omni, lets players move through a three-dimensional environment on an omnidirectional treadmill. On April 27, 2016, Mojang announced that the popular children’s video game Minecraft was playable on the Gear VR. A separate version was released to the Oculus Store for use with the Gear VR, similar to the Pocket Edition of Minecraft.
Some companies are adapting VR for fitness by using gamification concepts to encourage exercise.
Cinema and entertainment
Films produced for VR permit the audience to view a 360-degree environment in every scene. Production companies such as Fox Searchlight Pictures and Skybound use VR cameras to produce films and series that are interactive in VR. Pornographic studios such as Naughty America, BaDoinkVR and Kink have incorporated VR into their products since late 2015 or early 2016. The clips and videos are shot from an angle that resembles POV-style porn.
In September 2016, two announcements were made for broadcast of sporting events in VR. Agon announced that the upcoming World Chess Championship match between Magnus Carlsen and Sergey Karjakin, scheduled for that November, would be “the first in any sport to be broadcast in 360-degree virtual reality.” This title was taken by Fox Sports’ Fox Sports VR, a series of virtual reality broadcasts consisting mainly of Fox College Football broadcasts. The telecasts (which use roughly 180 degrees of rotation) were made available through smartphone apps and head-mounted displays, through a TV Everywhere paywall. The first VR telecast, which featured Oklahoma hosting Ohio State, took place September 17.
Since 2015, virtual reality has been installed onto a number of roller coasters and theme parks, including Galactica at Alton Towers, The New Revolution at Six Flags Magic Mountain and Alpenexpress at Europapark, amongst others. The Void is a virtual reality theme park in Pleasant Grove, Utah that has attractions where, by using virtual reality, AR and customized mechanical rooms, an illusion of tangible reality is created by the use of multiple senses.
Healthcare and clinical therapies
According to a recent report from Goldman Sachs, healthcare could be one of the next markets that VR/AR disrupts. VR devices are already being used in clinical therapy, with promising early results.
Anxiety disorder treatment
Virtual Reality Exposure Therapy (VRET) is a form of exposure therapy for treating anxiety disorders such as post-traumatic stress disorder (PTSD) and phobias. Studies have indicated that when VRET is combined with other forms of behavioral therapy, patients experience a reduction of symptoms. In some cases, patients no longer meet the DSM-5 criteria for PTSD after a series of treatments with VRET.
Immersive VR has been studied for acute pain management, on the theory that it may distract people, reducing their experience of pain. Researchers theorize that immersive VR helps with pain reduction by distracting the mind and flooding the senses with a positive experience.
Education and training
VR is used to provide learners with a virtual environment where they can develop their skills without the real-world consequences of failing.
Thomas A. Furness III was one of the first to develop the use of VR for military training when, in 1982, he presented the Air Force with a working model of his virtual flight simulator, the Visually Coupled Airborne Systems Simulator (VCASS). The second phase of his project, which he called the “Super Cockpit”, was even more advanced, with high-resolution graphics (for the time) and a responsive display. Furness is often credited as a pioneer in virtual reality for this research. The Ministry of Defence in the United Kingdom has been using VR in military training since the 1980s. The United States military announced the Dismounted Soldier Training System in 2012. It was cited as the first fully immersive military VR training system.
NASA has used VR technology for twenty years. Most notable is its use of immersive VR to train astronauts while they are still on Earth. Such applications of VR simulations include exposure to zero-gravity work environments and training on how to spacewalk. Astronauts can even simulate what it is like to work with tools in space using low-cost 3D-printed mock-up tools.
Flight and vehicular applications
Flight simulators are a form of VR pilot training. They can range from a fully enclosed module to a series of computer monitors providing the pilot’s point of view. By the same token, virtual driving simulations are used to train tank drivers on the basics before allowing them to operate the real vehicle. Similar principles are applied in truck driving simulators for specialized vehicles such as firetrucks. As these drivers often have less opportunity for real-world experience, VR training provides additional training time.
VR technology has many useful applications in the medical field. Simulated surgeries allow surgeons to practice their technical skills without any risk to patients. Numerous studies have shown that physicians who receive surgical training via VR simulations improve dexterity and performance in the operating room significantly more than control groups. Through VR, medical students and novice surgeons have the ability to view and experience complex surgeries without stepping into the operating room. On April 14, 2016, Shafi Ahmed was the first surgeon to broadcast an operation in virtual reality; viewers followed the surgery in real time from the surgeon’s perspective. The VR technology allowed viewers to explore the full range of activities in the operating room as it was streamed by a 4K 360fly camera.
David Em was the first fine artist to create navigable virtual worlds in the 1970s. His early work was done on mainframes at Information International, Inc., Jet Propulsion Laboratory, and California Institute of Technology. Jeffrey Shaw explored the potential of VR in fine arts with early works like Legible City (1989), Virtual Museum (1991), and Golden Calf (1994).
Virtopia was the first VR Artwork to be premièred at a film festival. Created by artist/researcher Jacquelyn Ford Morie with researcher Mike Goslin, it debuted at the 1992 Florida Film Festival. Subsequent screenings of a more developed version of the project were at the 1993 Florida Film Festival and at SIGGRAPH 1994’s emerging tech venue, The Edge. Morie was one of the first artists to focus on emotional content in VR experiences.
Canadian artist Char Davies created the immersive VR art pieces Osmose (1995) and Ephémère (1998). Maurice Benayoun’s work introduced metaphorical, philosophical or political content, combining VR, network, generation and intelligent agents, in works like Is God Flat? (1994), Is the Devil Curved? (1995), The Tunnel under the Atlantic (1995), and World Skin, a Photo Safari in the Land of War (1997). Other pioneering artists working in VR have included Knowbotic Research, Rebecca Allen and Perry Hoberman. In 2016, the first such project in Poland, The Abakanowicz Art Room, was realized: a documentation of Magdalena Abakanowicz’s art office made by Jarosław Pijarowski and Paweł Komorowski.
Some museums, including the British Museum and the Guggenheim, have begun making some of their content accessible in virtual reality.
The use of 3D computer-aided design (CAD) data was limited by 2D monitors and paper printouts until the mid-to-late 1990s, when video projectors, 3D tracking, and computer technology enabled a renaissance in the use of 3D CAD data in virtual reality environments. With the use of active shutter glasses and multi-surface projection units, immersive engineering was made possible by companies like VRcom and IC.IDO. Virtual reality has been used by automotive, aerospace, and ground transportation original equipment manufacturers (OEMs) in their product engineering and manufacturing engineering. Virtual reality adds further dimensions to virtual prototyping, product building, assembly, service, and performance use cases. This enables engineers from different disciplines to view their design as its final product. Engineers can view the virtual bridge, building or other structure from any angle, and some computer models allow engineers to test their structure’s resistance to winds, weight, and other elements. Immersive VR engineering systems enable engineers to see virtual prototypes before any physical prototypes are available.
Virtual reality in occupational safety and health
VR simulates real workplaces for occupational safety and health (OSH) purposes. Information and projection technology are used to produce a virtual, three-dimensional, dynamic work environment. Within work scenarios, for example, some parts of a machine move of their own accord while others can be moved by human operators. Perspective, angle of view, and acoustic and haptic properties change according to where the person is standing and how he or she moves relative to the environment. VR technology thus allows human information processing to be studied in conditions close to real-life situations. VR enables all phases of a product life cycle, from design through use to disposal, to be simulated, analysed and optimised. VR can be used for OSH purposes to:
- Review and improve the usability of products and processes whilst their development and design are still in progress. This enables errors in development and the need for subsequent modifications to be avoided.
- Systematically and empirically review design solutions for the human-system interfaces and their influence upon human behaviour. This reduces the need for physical modifications to machinery, and for extensive field studies.
- Safely test potentially hazardous products, processes and safety concepts. This avoids actual hazards during the study of human-system interaction.
- Identify cause-effect relationships in accidents involving products. This saves the material, personnel, time and financial outlay associated with in-situ testing.
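The position-dependent perspective described above comes down to a view transform: the tracked head pose determines how each world-space point maps into the viewer’s frame, and the scene is re-rendered as the person moves. A minimal sketch, assuming a simplified yaw-only rotation (real systems track full six-degree-of-freedom poses; the function name and coordinates are illustrative, not tied to any particular VR system):

```python
import math

def view_transform(point, head_pos, yaw):
    """Map a world-space point (x, y, z) into the viewer's frame,
    given the tracked head position and yaw angle in radians.
    As the user moves or turns, the same world point lands at a
    different place in the rendered view."""
    # Translate so the head becomes the origin of the view frame
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    dz = point[2] - head_pos[2]
    # Rotate about the vertical axis, opposite to the head's yaw
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * dx + s * dz, dy, -s * dx + c * dz)

# A machine part 2 m ahead at eye height; the viewer steps 1 m to the right,
# so the part now appears 1 m to the viewer's left.
print(view_transform((0.0, 1.5, 2.0), (1.0, 1.5, 0.0), 0.0))
```

Acoustic and haptic cues in such simulations are updated from the same tracked pose, which is what keeps the virtual workplace consistent from every standpoint.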
Heritage and archaeology
The first use of a VR presentation in a heritage application was in 1994, when a museum visitor interpretation system provided an interactive “walk-through” of a 3D reconstruction of Dudley Castle in England as it was in 1550. This consisted of a computer-controlled, laserdisc-based system designed by British engineer Colin Johnson. The system was featured in a conference held by the British Museum in November 1994, and in the subsequent technical paper, Imaging the Past – Electronic Imaging and Computer Graphics in Museums and Archaeology. Virtual reality enables heritage sites to be recreated extremely accurately, so that the recreations can be published in various media. The original sites are often inaccessible to the public or, due to the poor state of their preservation, hard to picture. This technology can be used to develop virtual replicas of caves, natural environments, old towns, monuments, sculptures and archaeological elements.
Architectural and urban design
One of the first recorded uses of virtual reality in architecture was in the late 1980s when the University of North Carolina modeled its Sitterman Hall, home of its computer science department, in a virtual environment.
By 2010, VR programs were developed for urban regeneration, planning and transportation projects.
Music and concerts
VR has the potential to change how audiences experience live music, allowing them to stand right up front with the band or to attend virtual concerts such as Coachella. Virtual reality can also make music videos more intense and powerful. Music visualization may likewise be transformed by VR, with multiple apps being created for the Oculus and the HTC Vive, although some people are dubious about how popular these will be. Virtual reality is also used in visual music applications.
On May 3, 2016, Norwegian pop band a-ha gave a multimedia performance in collaboration with Void, a Norwegian computational design studio. The stereoscopic VR experience was made available to Android users directly through a YouTube app, and was also made available to iPhone users and other platforms.
Virtual reality presents a unique opportunity for advertisers to reach a completely immersed audience. Companies such as Paramount Pictures, Coca-Cola, McDonald’s, Disney, The North Face and Innis & Gunn have used VR in marketing campaigns. Non-profit organizations such as Amnesty International, UNICEF, and the World Wide Fund for Nature (WWF) have used virtual reality to bring potential supporters closer to their work, effectively bringing distant social, political and environmental issues and projects to members of the public in immersive ways not possible with traditional media. Panoramic 360-degree views of conflict in Syria and face-to-face encounters with CGI tigers in Nepal have been used in experiential activations and shared online for educational and fundraising purposes.
Lowe’s, IKEA, Wayfair and other retailers have developed systems that allow their products to be seen in virtual reality, to give consumers a better idea of how the product will fit into their home, or to allow the consumer to get a better look at the product from home. Consumers looking at digital photos of the products can “turn” the product around virtually, and see it from the side or the back.
Several companies develop software or services that allow architectural design firms and real estate clients to tour virtual models of proposed building designs. During the design process, architects can use VR to experience the designs they are working on before they are built. Seeing a design in VR can give architects a correct sense of scale and proportion. VR models can replace physical miniatures to demonstrate a design to clients or the public. Developers and owners can create VR models of built spaces that allow potential buyers or tenants to tour a space in VR, even if real-life circumstances make a physical tour unfeasible.
In July 2015, OnePlus became the first company to launch a product using virtual reality.
In fiction and popular culture
There have been many novels that reference and describe forms of virtual reality. Neal Stephenson’s Snow Crash (1992) and Ernest Cline’s Ready Player One (2011) are novels that have been influential for VR engineers working in the early 21st century.
In the 1980s and 1990s, the cyberpunk subculture viewed the technology as a potential means for social change, and the recreational drug subculture praised virtual reality not only as a new art form, but as an entirely new frontier.
Concerns and challenges
Virtual reality technology faces a number of challenges, including health and safety, privacy and technical issues. The long-term effects of virtual reality on vision and neurological development are unknown; users might become disoriented in a purely virtual environment, causing balance issues; computer latency might affect the simulation, providing a less-than-satisfactory end-user experience; and navigating the non-virtual environment (if the user is not confined to a limited area) might prove dangerous without external sensory information. There have been rising concerns that with the advent of virtual reality, some users may experience virtual reality addiction. From an economic and financial perspective, early entrants to the virtual reality market may spend significant amounts of time and money on the technology; if it is not adopted by enough customers, the investment will not pay off.
Health and safety
There are many health and safety considerations of virtual reality. Most virtual reality systems come with consumer warnings, including: seizures; developmental issues in children; trip-and-fall and collision warnings; discomfort; repetitive stress injury; and interference with medical devices.
A number of unwanted symptoms have been caused by prolonged use of virtual reality, and these may have slowed proliferation of the technology. Virtual reality sickness (also known as cybersickness) occurs when a person’s exposure to a virtual environment causes symptoms similar to those of motion sickness. The most common symptoms are general discomfort, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. Other symptoms include postural instability and retching. Virtual reality sickness differs from motion sickness in that it can be caused by the visually induced perception of self-motion; real self-motion is not needed. It also differs from simulator sickness: non-virtual-reality simulator sickness tends to be characterized by oculomotor disturbances, whereas virtual reality sickness tends to be characterized by disorientation. A 2016 publication assessed the effects of exposure to 2D versus 3D dissection videos on nine pathology resident physicians, using self-reported physiologic symptoms. Watching the content in 3D rather than 2D did not increase simulator sickness. Although the average simulator sickness questionnaire score increased with time, statistical analysis did not suggest significance.
The persistent tracking required by all VR systems makes the technology particularly useful for, and vulnerable to, mass surveillance. The expansion of VR will increase the potential, and reduce the costs, of gathering information on personal actions, movements and responses. In networked VR spaces with capacity for public interaction, there is the potential for unexpected modifications to the environment.