XR Talks is a series that features the best presentations and educational videos from the XR universe. It includes embedded video, as well as narrative analysis and top takeaways. Speakers’ opinions are their own. For a deeper indexed and searchable archive, subscribe to ARtillery PRO.
In daily interactions in spatial computing, we can get lost in the weeds. That makes the exercise of stepping back to look at the big picture valuable for perspective. How does spatial computing sit in a long line of technologies that got us here? And how will it carry that torch forward?
Answering this is the job of futurists, and one who’s particularly adept at breaking it down is Convergence* author Charlie Fink. As he outlined at AWE in May (video below), past innovation cycles can inform the course and pace of spatial computing. It’s all about converging tech.
“AR is maturing alongside a number of technologies in the same way that aerospace developed alongside the piston engine. In this case we have a number of digital technologies which are coming together to change life as we know it… The Holy Grail is the moment when we live in a world where computing is invisible, ubiquitous, and wearable. The evidence is all around us in little pieces; they haven’t yet converged [in] a device that we all use the way we use our smartphone, but it’s happening.”
Part of the fun and frustration of that convergence is that we don’t know how it will materialize (cue Bill Gates quote about overestimating near term, underestimating long term). Part of that also involves the law of unintended consequences, which can mean good and bad outcomes.
“We thought the internet was going to make us more free… because the truth can’t be suppressed on the Internet. And of course we didn’t realize that lies cannot be suppressed on the Internet either… Convergence also means platform innovation. The guys who created the steam engine didn’t know what was going to happen. They didn’t understand the factory, automation, and the assembly line, but it became the first source of power.”
Back to historical metaphors, one piece of the convergence puzzle that Fink says is missing is interoperability of spatial experiences. This is akin to HTML, which created a common language for the web and enabled companies like Google to create value in an organizational layer.
“Without HTML, there is no Google. How do you search the internet without HTML? In the same way, [AR] needs an underlying layer which can unify it and make it searchable. Mirrorworld refers specifically to the digital twin of the real world mapped exactly on it, so there’s a digital twin of a tree and it’s automatically and perfectly mapped to that tree so that you can anchor content to that tree [which] becomes searchable.”
Another missing piece of the AR future we all envision is a system of filters and permissions. Without it, there will be digital overload. Content will have to be filtered in personalized ways, governed by explicit preferences that act as a sort of training set for ongoing intelligent filtration.
“How is the AI that controls these devices going to know? How is it going to say ‘hey Charlie, there’s relevant information here that you might want to see’? Well of course we will need a system of filters to do that… it’s going to be a lot like the content we already are consuming, but placed spatially and consumed in a contextually relevant time and place… I don’t want to see pictures of food in a news feed, I want to see pictures of food when I’m in the restaurant.”
One of those filters will likely draw from social signals, or a sort of spatial version of what we now know as the social graph. This is a concept we’ve examined in light of Facebook’s long quest to build a “social layer” for the spatial web, part of several XR efforts to future-proof its business.
“I don’t want to see everybody’s pictures of food in the restaurant, I just want to see the pictures of food from people that I know, from people in my social graph. So that social graph will continue to have a tremendous amount of importance in a visual-spatial world.”
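Fink’s filter combines two signals: contextual relevance (the right place) and the social graph (the right author). A minimal sketch of that logic, using invented names and sample data purely for illustration:

```python
# Hedged sketch of the contextual + social filter from the talk: show
# spatially anchored content only when the viewer is in the relevant place
# AND the author is in the viewer's social graph. Illustrative, not a real API.
posts = [
    {"author": "alice", "place": "restaurant", "content": "great ramen"},
    {"author": "stranger", "place": "restaurant", "content": "ad: 10% off"},
    {"author": "bob", "place": "park", "content": "nice sunset"},
]

social_graph = {"charlie": {"alice", "bob"}}  # who each user follows

def visible_posts(viewer, current_place, posts, graph):
    """Apply both filters: right place AND an author the viewer knows."""
    friends = graph.get(viewer, set())
    return [p for p in posts
            if p["place"] == current_place and p["author"] in friends]

# Charlie, standing in the restaurant, sees Alice's post but not the ad.
feed = visible_posts("charlie", "restaurant", posts, social_graph)
```

The point of the sketch is the conjunction: neither location context nor the social graph alone prevents overload, but together they cut the feed down to what is relevant here and now.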
Speaking of Facebook, tech giants’ already-apparent motivations for the spatial web (a topic of our contributed chapter* to Convergence) will naturally lead to walled gardens or at least proprietary silos of some sort. As long as they’re interoperable, per the above HTML metaphor, it could work.
“One of the biggest topics in AR is the AR cloud. There is no one AR cloud of course; there are going to be hundreds of AR clouds… Facebook does not want Google in its AR cloud, that’s not going to happen. What we’re going to need is a system of filters that is smart enough to cut across all of those layers and all of those silos. We can only do it if we have a base layer that every device understands and can be anchored to.”
Sticking with tech giants, the multi-billion-dollar question everyone is asking is what market-inflecting hardware Apple will launch in the next 1-2 years. As we’ve speculated, v1 of its rumored AR glasses could be a “notification layer,” followed by years of evolution… just like the iPhone.
“So there’s one big wild card: What’s Apple going to do? I’ve been known to flirt with Mac rumors so I’ll throw one out there for you: I keep thinking it’s going to be like the iPod iterated itself into the iPhone. What if it’s not really AR the way we’re thinking about it… maybe it’s a media consumption machine… it’s like you just bought the most high-definition big-screen TV in the world even though you might live in a 600 square foot apartment. So I think it’s very possible that this is where Apple is going and they will iterate their way out of it as 5G improves… that becomes more and more mobile as the form factors and the supporting technologies improve.”
Meanwhile, another flavor of AR is deviating from common connotations: audio AR. This is our new favorite topic as it builds on a base of hearables like AirPods, which are conditioning users for a persistent audio channel at massive scale. And it could beat its graphical cousin to market.
“The solution lies in sound because it’s something we’re already doing… Why can’t we do a lot of the things that we’re thinking that glasses are going to do?… I’m wearing my AirPods and I’m listening to NPR news and it pauses to tell me the A-train is on time and your first meeting is at 9:30. That would be useful and it wouldn’t be interruptive… If we’re at lunch and my face starts lighting up and I’m reading, it’s gonna be pretty rude. These social issues with regard to new technology are real. But people wear these earbuds all day long and it is not weird to talk to somebody wearing AirPods. There are two ways into the brain: One is optically through the eyes and the other is [aurally] through the ears. So it may be that the first beachhead for AR is going to be sound.”
See the full talk below.
*Disclosure: AR Insider has no financial stake in the companies mentioned in this post, nor did it receive payment for its production. AR Insider Editor Mike Boland contributed a chapter to the book reviewed in this article, but neither he nor AR Insider receives financial incentive or direct payment from book sales. Our disclosure and ethics policy can be seen here.
Header image credit: AWE