Thanks to extended reality (XR), users can interact with environments in real time like never before. One challenge developers face is finding the right rendering speed to make the setting feel as realistic as possible. Real-time rendering, as opposed to more traditional offline methods, takes VR where it has never gone before.

Real-time rendering generates and displays three-dimensional visuals on the fly as the user interacts with a virtual world. A high enough frame rate can make it seem as though the user is walking through a realistic world rather than interacting with a computer program.
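
Conceptually, real-time rendering is a loop that produces a fresh frame every few milliseconds in response to input. Below is a minimal sketch in C++; the engine hooks are illustrative stubs, not a real API:

```cpp
#include <chrono>
#include <cstdio>

// Illustrative engine stubs -- placeholder names, not a real API.
void pollUserInput() {}
void updateWorld(double /*dtSeconds*/) {}
void renderFrame() {}
void presentToDisplay() {}

int main() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();

    // The core real-time loop: each pass produces one frame, so the
    // scene reacts to the user's input within milliseconds.
    for (int frame = 0; frame < 3; ++frame) {  // bounded here only for the demo
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;

        pollUserInput();     // read controller and head-tracking state
        updateWorld(dt);     // advance the simulation by the elapsed time
        renderFrame();       // draw the 3D scene for this instant
        presentToDisplay();  // hand the finished image to the display
        std::printf("frame %d took %.6f s\n", frame, dt);
    }
}
```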

History of Real-Time Rendering

Although computer scientists as far back as the 1960s were looking for ways to produce images faster, they were limited by processing speeds and memory. By the 1980s, programmers were using techniques such as ray tracing to model how light distributes through a scene and make images look more realistic.

Researchers like Cook and Kajiya came up with rendering methods that drove the technology forward into the 1990s, when designers began adopting physically based approaches to creating worlds. By 1997, Veach had formalized rendering techniques such as bidirectional path tracing, multiple importance sampling and Metropolis light transport. New theory, combined with faster machines and more memory, drove VR and AR forward.

In the early 2000s, the mental ray renderer became popular. Marcos Fajardo created the Arnold renderer, which combined global illumination with complex geometry and textures at rapid speeds. Sony Pictures Imageworks helped him improve motion blur and deferred loading, where scene data is loaded only when the renderer actually needs it.

As computers shipped with better graphics processing units (GPUs), demand for more lifelike gaming emerged. Today, thanks to real-time rendering, scenes appear faster than the blink of an eye while conserving computing resources.

Challenges When Working With Real-Time Rendering for XR

Although technology has advanced far beyond its 1960s roots, real-time rendering for XR in particular still poses several challenges for developers.

1. Battery Longevity and Overheating

Programs serve images at a rate measured in frames per second (FPS). The faster the frames arrive, the more realistic the scene feels. However, rendering that quickly requires a lot of processing power, which can shorten battery life and create extreme heat that could cause the entire system to fail or make headsets and laptop keyboards hot to the touch.
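
To put the pressure in perspective, the time budget per frame shrinks as the target frame rate rises. A quick arithmetic sketch (the frame-rate targets are common headset values, chosen here purely for illustration):

```cpp
#include <cstdio>

int main() {
    // The time budget per frame shrinks as the target frame rate rises,
    // which is why high-FPS XR pushes processors (and batteries) so hard.
    const double targetsFps[] = {60.0, 72.0, 90.0, 120.0};
    for (double fps : targetsFps) {
        double budgetMs = 1000.0 / fps;  // milliseconds available per frame
        std::printf("%6.1f FPS -> %5.2f ms to simulate, render and present\n",
                    fps, budgetMs);
    }
}
```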

2. Latency

VR enthusiasts have likely noticed the delay between a user’s movements and what happens on the screen. Higher latency can make users feel nauseated and leave a negative impression.

As more and more people — particularly those in dense population centers — join the world of XR, latency-related challenges may become more prevalent. Researchers predict around 68% of the world will live in urban centers by 2050, and higher population densities and complex infrastructures tend to exacerbate latency issues due to increased network traffic and limited bandwidth.

3. Complex Scenes

Stepping into XR means entering a scene with vivid details. Programmers must create the right lighting, textures, and realistic objects to make the person feel as though they are in the setting. The hours of work required to reach something that feels convincing add up quickly.

Real-Time Rendering Optimization Solutions

Although there are some challenges with optimizing XR for real-time rendering, the solutions are fairly straightforward.

1. Fix Battery Drain and Excessive Heat

Designers can reduce strain on the system by rendering only part of the scene in full detail, an approach known as foveated rendering (see the sketch below). Cooling solutions can help manage thermal issues and may become more common in non-gaming computers. In the meantime, developers can teach users to make simple after-market adjustments to their gaming machines, such as adding cooling pads and fans.
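
To illustrate the idea, here is a minimal fixed-foveation sketch that picks a shading resolution based on a pixel’s angular distance from the gaze center. The angle thresholds are invented for illustration and are not taken from any particular headset:

```cpp
#include <cstdio>

// Fixed-foveation sketch: full detail near the gaze center, coarser
// shading toward the periphery. Angle thresholds are illustrative.
double detailScale(double degreesFromGaze) {
    if (degreesFromGaze < 10.0) return 1.0;  // fovea: full resolution
    if (degreesFromGaze < 25.0) return 0.5;  // mid-periphery: half resolution
    return 0.25;                             // far periphery: quarter resolution
}

int main() {
    const double angles[] = {0.0, 15.0, 40.0};
    for (double angle : angles) {
        std::printf("%5.1f deg from gaze -> %.0f%% shading resolution\n",
                    angle, detailScale(angle) * 100.0);
    }
}
```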

2. Design Efficiently for Complex Scenes

The future is likely to bring XR to ordinary tasks, such as watching the evening news broadcast. Rather than passively watching, users will interact with the screen, diving into the stories that interest them the most and personalizing the experience.

Advances in artificial intelligence (AI) and enhanced 5G communication are likely to help developers produce the high-resolution images and code needed to deliver complexity yet to be seen. Embracing AI tooling can accelerate that development.

3. Lower Latency

Some ways to reduce latency include using higher-performance GPUs and displays with higher refresh rates. Other techniques that help include asynchronous spacewarp and timewarp, where the program predicts the user’s next movements and re-projects or renders frames as needed (see the sketch below).
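
Here is a minimal sketch of that prediction idea, reduced to a single rotation axis with an assumed 20 ms motion-to-photon latency. Real timewarp implementations re-project a fully rendered frame; this shows only the extrapolation step:

```cpp
#include <cstdio>

// Predict where the head will point when the frame actually reaches the
// display, then render (or re-project) for that pose instead of the stale
// one. Single-axis model; all numbers are illustrative.
double predictYawDegrees(double currentYaw, double yawRateDegPerSec,
                         double motionToPhotonSec) {
    return currentYaw + yawRateDegPerSec * motionToPhotonSec;
}

int main() {
    double yawNow = 30.0;   // head orientation when tracking was sampled
    double yawRate = 90.0;  // turning right at 90 degrees per second
    double latency = 0.020; // assumed 20 ms from sample to photons on screen

    std::printf("render for yaw %.1f deg instead of %.1f deg\n",
                predictYawDegrees(yawNow, yawRate, latency), yawNow);
}
```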

4. Cull Unseen Objects

Optimize real-time rendering with culling techniques. Rather than serving up every tiny detail, you can speed up frames by drawing only the objects the user can actually see. With frustum culling, the program excludes objects outside the camera’s viewing volume until they are needed; with occlusion culling, it ignores things blocked by other objects, such as a tree behind a building. A minimal frustum-culling example follows below.
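
In the sketch, each object’s bounding sphere is tested against six inward-facing planes, and the object is skipped when it lies entirely outside any one of them. The box-shaped “frustum” in the demo is simplified for illustration:

```cpp
#include <cstdio>

struct Vec3   { double x, y, z; };
struct Plane  { Vec3 n; double d; };  // inside when n . p + d >= 0
struct Sphere { Vec3 center; double radius; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// An object is culled when its bounding sphere lies entirely on the
// outside of any one of the six frustum planes.
bool insideFrustum(const Sphere& s, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i) {
        if (dot(planes[i].n, s.center) + planes[i].d < -s.radius)
            return false;  // fully outside this plane -> skip the object
    }
    return true;           // potentially visible -> submit for rendering
}

int main() {
    // Simplified box-shaped "frustum" spanning -1..1 on each axis,
    // expressed as six inward-facing planes.
    Plane box[6] = {
        {{ 1, 0, 0}, 1}, {{-1, 0, 0}, 1},
        {{ 0, 1, 0}, 1}, {{ 0, -1, 0}, 1},
        {{ 0, 0, 1}, 1}, {{ 0, 0, -1}, 1},
    };
    Sphere inView{{0.0, 0.0, 0.0}, 0.5};
    Sphere offscreen{{5.0, 0.0, 0.0}, 0.5};
    std::printf("sphere at origin drawn? %d\n", insideFrustum(inView, box));
    std::printf("sphere far right drawn? %d\n", insideFrustum(offscreen, box));
}
```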

Small Changes Create Faster Frames

Small changes, such as culling off-screen objects, can lead to a higher FPS. Anything you can do to make the XR environment more realistic and deliver smooth movement through the world improves the experience and makes people more likely to interact with the program again in the future.

Eleanor Hecks is Editor-in-Chief of Designerly Magazine where she specializes in design, development, and UX topics. Follow Designerly on X @Designerlymag.

