
‘Help me, Obi-Wan Kenobi. You’re my only hope.’
This phrase, repeatedly delivered by the looping holographic projection of Princess Leia, has become one of the most memorable moments of Star Wars Episode IV: A New Hope.
Full 3D holographic projections, along with many other futuristic technologies depicted in Star Wars, have become sci-fi staples: omnipresent computers, computer-generated environments, sentient robots, and spacefaring vehicles.
Engineers in the real world have found ways to make some of those technologies real, with mixed results. That’s not to diminish the actual achievements: A portion of the wondertech prophesied in science fiction has gradually found its way into reality, even if in a different form.
Technologies that create a believable presence of illusory objects in physical reality, or that immerse the user in a fully illusory environment, have reached some real milestones.
The Reality-Virtuality Continuum
There is a longstanding concept of a ‘reality-virtuality continuum’, first introduced in the mid-1990s in a scientific paper by Paul Milgram et al.
This continuum has the physical world at its left extreme and a purely virtual, usually digital, space at the right. Everything in between is understood as ‘mixed reality’ – the best or worst of both worlds. An illusion of depth and perspective is usually considered important, if not paramount, to the sense of immersion.
This goes back a long way: the stereoscope, for instance, emerged in the 1830s, and the first attempts at 3D cinema date back to the very early 20th century.
The ways of creating the stereo effect differed, but in general they either exploited visual inertia – better known as persistence of vision, the human brain’s limited ability to process rapid changes in the light falling on the retina – or addressed our natural binocular vision: because each eye sees a slightly different image, the brain constructs a scene with perceived depth.
This simulates not just positive depth within the scene, but negative depth too – the effect of objects coming out of the screen. You may have seen this in IMAX or other theaters showing 3D (stereo) films. It is good for thrills, but the overuse of ‘negative depth’ has long been considered a sign of bad taste.
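To make the geometry concrete, here is a minimal, illustrative sketch (in Python, with made-up distances) of how the horizontal offset between the two eye images – the on-screen parallax – corresponds to an object being perceived behind the screen, on it, or in front of it. It is a toy model under simple pinhole assumptions, not any product’s actual rendering code.

```python
# Toy model of stereoscopic parallax; all distances are illustrative assumptions.

EYE_SEPARATION_M = 0.063      # typical interpupillary distance, ~63 mm
VIEWER_TO_SCREEN_M = 2.0      # assumed distance from the viewer to the screen plane

def screen_parallax(object_distance_m: float) -> float:
    """Horizontal offset between the left- and right-eye images of a point.

    Positive parallax -> the point is perceived behind the screen plane,
    zero              -> on the screen plane,
    negative          -> in front of the screen ('popping out').
    """
    z = object_distance_m
    d = VIEWER_TO_SCREEN_M
    return EYE_SEPARATION_M * (z - d) / z

if __name__ == "__main__":
    # Behind, on, and in front of the screen plane, respectively
    for z in (4.0, 2.0, 1.0):
        print(f"object at {z:.1f} m -> parallax {screen_parallax(z) * 1000:+.1f} mm")
```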
Virtual reality, which implies the user’s full immersion, encompasses such illusion-making tech too, yet goes further, simulating full interaction with digital objects perceived as three-dimensional and even tactile. Usually this requires extra devices: not only 3D headgear, but also gloves or an entire jumpsuit with built-in micromotors that give the user feedback. It can even involve motorized chairs, whose vibrations and movement enhance the feeling of being somewhere else.
VR in Action
Various VR solutions have found extensive use in fields where training within a virtual environment is cost-efficient and/or less hazardous than training in the real world, such as manufacturing or the military.
In the latter case, some variants of virtual/mixed reality are widely used for training military personnel, from pilots and tank crews to foot soldiers.
For instance, Bohemia Interactive, a Czech firm famed for its Operation Flashpoint/ArmA military simulator games, has a division that sells the VBS (Virtual Battlefield Systems, later Virtual BattleSpace) simulation series to multiple Western armies. The first iteration of VBS had the original 2001 Operation Flashpoint game engine at its core and was used by the US Marine Corps. The current iteration, VBS4, probably retains portions of the newer engines (ArmA 2, ArmA 3) in its codebase, but by now it has reportedly diverged from the games beyond comparison.
VBS isn’t the only such solution, and the military isn’t the only user of VR: virtual tours, virtual classrooms, and warehouse training simulators are just a few other examples.
Civilian pilots traditionally use realistic flight simulators like Microsoft Flight Simulator, in combination with multi-screen setups and a realistic set of physical controllers, to practice the most challenging aspects of their job.
Then there is Augmented Reality, which is basically the visual overlaying of digital data on the real-world environment. Like VR, it usually requires some hardware: a smartphone, a tablet, or AR glasses with a built-in camera.
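At its core, that overlay boils down to projecting virtual 3D anchors into the device camera’s image so they appear attached to the real scene. The following is a minimal, hypothetical sketch of that projection step in Python; the camera intrinsics and anchor coordinates are assumed example values, not any particular AR SDK’s API.

```python
# Hedged sketch of the core step behind a visual AR overlay: projecting a 3D
# anchor point (given in camera coordinates) into 2D pixel coordinates.
# The intrinsics below are made-up example values; a real device supplies its
# own calibration and pose tracking.

from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    fx: float  # focal length in pixels (x)
    fy: float  # focal length in pixels (y)
    cx: float  # principal point x
    cy: float  # principal point y

def project_to_pixel(point_cam, intr: CameraIntrinsics):
    """Pinhole projection of a point in camera coordinates (metres) to pixels."""
    x, y, z = point_cam
    if z <= 0:
        return None            # anchor is behind the camera; nothing to draw
    u = intr.fx * x / z + intr.cx
    v = intr.fy * y / z + intr.cy
    return u, v

if __name__ == "__main__":
    intr = CameraIntrinsics(fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
    anchor = (0.2, -0.1, 1.5)   # a virtual label 1.5 m in front of the camera
    pixel = project_to_pixel(anchor, intr)
    print(f"draw overlay at pixel {pixel}")  # the app would render text/3D here
```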
The waters run even deeper: the content can span multiple sensory modalities, from the commonplace visual and auditory to haptic, somatosensory, and olfactory. It’s all about the proper application of the stimuli.
The term Augmented Reality itself dates back to the aforementioned 1994 scientific paper by Milgram and his colleagues, linked above.
…And the first head-mounted display with computer-rendered graphics was created back in 1968. Quite a long history, isn’t it?
Cyberpunk in Reverse
Of late, mixed reality has acquired an additional layer of meaning: while largely synonymous with AR, mixed reality is expected to blend digital content and physical reality to a greater degree, which, among other things, means more tangible feedback from illusory objects.
Then there is so-called Extended Reality (XR), an umbrella term covering VR, AR, and MR alike, as well as their numerous combined use cases. The latter encompass the entertainment, education, and specialized training described above, as well as marketing (well, of course!), real estate, maintenance, and remote work. Therapeutic treatment, data exploration, and data analysis are listed among the areas of practical XR application as well.
The technology is intended to combine and/or mirror the physical environment with an interactive “digital twin”, in which users get an immersive experience of being in a virtual or augmented “world”.
And finally, there is Spatial Computing – the newest thing. Whereas ‘extended reality’ is an umbrella term, ‘spatial computing’ or ‘spatial reality’ can be called ‘nebulous’: there is very little certainty about what falls within its scope and what sits outside it. Spatial computing does encompass AR, VR, XR, you-name-it-R, but it also includes what they call ‘natural user interfaces’, ‘contextual computing’, ‘affective computing’, and ‘ubiquitous computing’.
Traditionally, it is humans who learn to interact with computers to get something done; ‘spatial computing’ reverses this concept, as this time it is computers that are expected and trained to better understand and interact with people in the human world. Cyberpunk in reverse, if you please.
Computers involved in spatial computing are expected to carry lots of sensors – RGB cameras, depth cameras, 3D and motion trackers, inertial measurement units, and so on – and to be capable of ‘understanding’ real-world scenes such as rooms, streets, or stores: reading labels, recognizing objects, building 3D maps, and more – in other words, perceiving physical context. And if at this point you start thinking of AI technologies – and their proper, non-generative use – you are correct.
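One elementary building block of that perception is turning raw depth-camera readings into 3D points the device can use for mapping and object recognition. Here is a minimal sketch of that back-projection, with assumed camera parameters rather than any vendor’s actual API.

```python
# Illustrative sketch (assumed values, no specific SDK) of one basic building
# block of spatial computing: back-projecting a depth image into a 3D point
# cloud in the camera's frame, from which a device can start mapping a room.

import numpy as np

# Made-up pinhole intrinsics of a hypothetical depth camera
FX, FY = 600.0, 600.0        # focal lengths in pixels
CX, CY = 320.0, 240.0        # principal point

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth image in metres to an (H*W, 3) point cloud."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (us - CX) * z / FX
    y = (vs - CY) * z / FY
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 2.0)     # pretend everything is 2 m away
    cloud = depth_to_points(fake_depth)
    print(cloud.shape, cloud[0])              # (307200, 3) points in camera space
```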
Spatial Reality
In the meantime, the concept of ‘spatial reality’ has apparently been ‘borrowed’ for something much more specific and, let’s say, mundane: displays. For example, Sony’s Spatial Reality Display is said to provide ‘incredibly realistic 3D images without the use of glasses or VR headsets’. The core of the technology is a micro-optical lens positioned precisely over the LCD panel, which divides the image between the left and right eyes, creating a stereoscopic effect – including negative depth, i.e. ‘objects popping out of the screen’ (a toy sketch of this view-splitting idea follows below).
So yes, you can get the illusion of a full-fledged holographic projection that you can view from every angle. Although again, it’s just the same illusion as in 3D cinema, sans the device sitting on your nose.
This is obviously good for professional applications such as engineering or architectural design, where most of the work revolves around a specific object. It may not be ideal for large scenes, but that’s in the eye of the beholder. One of the displays Sony offers has a 27″ diagonal, which is by no means small.
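Here is the promised toy illustration of the view-splitting idea behind glasses-free stereo panels: the panel’s pixel columns are divided between a left-eye and a right-eye image, and optics in front of the panel steer each set of columns toward the appropriate eye. Sony’s actual display adds eye tracking and its own optical design; the code below is only a conceptual sketch with made-up image data.

```python
# Conceptual sketch of column interleaving for a glasses-free stereo panel.
# Not Sony's implementation; image data and layout are illustrative only.

import numpy as np

def interleave_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two (H, W, 3) eye views column by column into one panel image."""
    assert left.shape == right.shape
    panel = left.copy()
    panel[:, 1::2] = right[:, 1::2]   # even columns -> left eye, odd -> right eye
    return panel

if __name__ == "__main__":
    h, w = 4, 8
    left_view = np.zeros((h, w, 3), dtype=np.uint8)        # rendered for the left eye
    right_view = np.full((h, w, 3), 255, dtype=np.uint8)   # rendered for the right eye
    print(interleave_views(left_view, right_view)[0, :, 0])  # alternating 0/255 columns
```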
It’s hard to say where all of this will eventually lead. Some of this tech may look engaging at first, but practical applications may lag, and public interest may wane before something properly effective arrives.
3D cinema, for example, has seen a few boom-and-bust cycles. It is once again in decline, so deep that even IMAX cinemas now screen more traditional 2D films than 3D ones. In the end, moviegoers prefer something that exploits their physiology a little less.
Time will tell whether mixed reality technologies become de facto standards across various fields or fade into obscurity, dismissed as fleeting fads.
Ion Hatzithomas is CEO at RenderHub
