AI is the new black… especially generative AI such as DALL-E 2 and Stable Diffusion. Generating artwork on the fly from text inputs seems to be blowing everyone’s mind. Similarly, conversational AI like ChatGPT continues to preview the future of digital information retrieval.
Some have even called it the death of the Google-dominant era of search. Though that could be overblown or premature, there’s ample promise in the applications and endpoints for technology from the likes of OpenAI. AI could even replace the m-word as 2023’s tech topic.
Amidst all this excitement, one question we’re often asked is how generative AI will converge with AR. And though it’s early in the conceptualization of generative AI, we can see some high-level ways that the technology could merge with AR and engender new use cases.
Activate and Observe
Jumping into some of those potential points of intersection, they’re split between user-facing features and creator-facing features. Starting with the former, generative AI could be used as a way to create serendipitous AR experiences on the fly, based on text/verbal prompts.
This contrasts with the way AR is experienced today, which is mostly preordained. AR experiences and interactions are painstakingly developed in advance, then triggered in deliberate, predefined use cases. Generative AR could be more open-ended.
For example, envision educational settings where teachers dynamically activate AR animations and interactions for a given lecture using keywords. Of course, several technical kinks need to be worked out to realize this vision, but that’s the high-level speculative (and early) take.
Similarly, with the help of generative AI, AR could evolve into more of a utility. Like search today, information could be pull-based. Users could activate and observe 3D animations on demand, with a range of possibilities from entertainment to “how-to” content (e.g., tying a tie).
Moving on to the creator end of the spectrum, generative AI could be a key component of AR developer workflows. This could include generating 3D models that are the ingredients for their AR experiences. It could even evolve with neural radiance fields (NeRFs) for 3D scenes.
These measures would fill a key gap in AR’s value chain as 3D content is currently a bottleneck. That goes for individual AR developers as well as brands that are interested in marketing their products through AR visualization. They first need 3D models before any of that can happen.
Of course, the current state of generative AI isn’t good enough to produce 3D models that carry the nuance and dimension needed for things like brand marketing. But it could add value to the creation process, handling some of the generative heavy lifting that’s then human-refined.
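To make that generate-then-refine workflow concrete, here’s a minimal Python sketch of how the pieces could fit together. Every function and type name here is a hypothetical stand-in for illustration, not a real API:

```python
from dataclasses import dataclass

# Hypothetical pipeline: a text prompt yields a rough 3D asset, a human
# refines it, and an AR runtime anchors it in the scene. All names below
# are illustrative placeholders, not real library calls.

@dataclass
class Asset3D:
    prompt: str
    format: str  # glTF is a common interchange format for AR assets

def generate_3d_asset(prompt: str) -> Asset3D:
    """Stand-in for a text-to-3D generative model."""
    return Asset3D(prompt=prompt, format="glTF")

def human_refine(asset: Asset3D) -> Asset3D:
    """Stand-in for the human-in-the-loop cleanup step: the model does
    the generative heavy lifting, a person polishes the result."""
    return asset

def place_in_ar(asset: Asset3D) -> str:
    """Stand-in for handing the finished asset to an AR runtime."""
    return f"anchored '{asset.prompt}' as {asset.format}"

result = place_in_ar(human_refine(generate_3d_asset("sneaker product model")))
print(result)
```

The point of the division of labor in this sketch is the one the article describes: generative output as raw material, with human refinement before the asset is brand-ready.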
Similarly, some AR creators are already using generative AI as an inspirational tool. As part of their ideation process, they can prototype or storyboard AR lenses by generating artwork through tools like DALL-E 2. This can help them refine ideas or spark new creative directions.
All the above is admittedly early, speculative, and surface-level. For any of this future-gazing to materialize, generative AI’s capabilities and execution will need to catch up to the concepts. But until then, it doesn’t hurt to conceptualize how these two influential technologies will converge.
Watching it play out will be the fun part. As we’ve learned from past emerging-tech cycles, use cases and endpoints are discovered organically and slowly. In the early days of the smartphone era, for example, it took a few years for native thinking to seep into the developer mindset.
After those wheels started turning, we saw apps like Foursquare and Uber, which tapped into the native capabilities of the form factor (e.g., GPS). But before that point, apps were mostly just smaller websites. A similar evolutionary cycle will play out with generative AI and AR.
Put another way, we’ll look back at this article someday and laugh at the elementary level of speculated use cases and convergence points. Against our own interests, that’s the way we hope it plays out. That way, we’ll have ample innovation and surprises to look forward to.