Enterprise mixed reality at the 5G Edge

May 31, 2019


By: Alisha Seam, AT&T Foundry in Palo Alto

Earlier this year, we expanded the AT&T Foundry Edge Computing Zone (EC Zone) to include an Ericsson 5G testing facility in Santa Clara, Calif. In the time since our launch event, Designing the Edge, we have been working with early EC Zone program participants to quantitatively assess the performance of our first 5G installations. Our experiments with Ericsson Research and Arvizio, a Canadian startup enabling enterprise mixed reality (MR) applications, demonstrated the potential 5G and edge computing bring over earlier generations of networks.

Mixed reality for immersive visualization of large-scale 3D models

Recent advances in hardware, spatial mapping and display technology have propelled MR as a medium to seamlessly bring 3D digital content into our physical world. While increasing numbers of MR experiences capture the imagination and showcase the future, few have been able to exhibit measurable value to existing businesses or consumers.


Figure 1: Example of MR. Photo courtesy of Arvizio.

Immersive visualization of complex industrial models stands out as one of the most compelling MR applications in the fog of this novel medium’s ambiguous future. Industries such as architecture, engineering and construction (AEC), industrial engineering, energy and mining have been using computer-aided design (CAD) software for decades to create complex virtual representations. The ability of MR to transport these models from a 2D viewing window into an immersive, 3D holographic experience is exhibiting real-world value. For example, the aerospace industry is leading in the use of MR, showing reduced error rates and significant time savings for inspection of complex systems simply by having manuals available to inspectors via MR headsets.

Remote rendering of complex structures

Though leading MR headsets incorporate more processing power with each generation, they still struggle to accommodate the immense number of polygons, complex textures, and dense LiDAR point clouds built into these industrial models. As they reach their peak capacity for model complexity, the headsets often must compensate for limited on-board computing power by reducing the display frame rate, creating a jarring experience for the end user.
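As a back-of-the-envelope illustration of why polygon count forces frame-rate compromises, consider a device whose rendering is bottlenecked by a fixed polygon throughput. The throughput figure below is an assumption for illustration, not a spec of any real headset:

```python
# Hypothetical figure for illustration: a headset GPU that can process
# roughly 30 million polygons per second (not a measured spec).
POLYS_PER_SECOND = 30_000_000

def max_fps(polygon_count: int) -> float:
    """Upper bound on frame rate when polygon throughput is the bottleneck."""
    return POLYS_PER_SECOND / polygon_count

# A 500K-polygon model caps out at 60 fps on this hypothetical device,
# while a 5M-polygon model would drop to a jarring 6 fps.
print(max_fps(500_000))    # 60.0
print(max_fps(5_000_000))  # 6.0
```

The same arithmetic run in reverse explains the frame-rate drops observed on real headsets: once a model exceeds the device's polygon budget at the target frame rate, something has to give.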

The Arvizio platform is designed to distribute the rendering and display functions of these heavyweight models between the MR headset and a remote server, offering the freedom of a wireless MR experience without compromising the complexity of the data source. However, its ability to deliver a smooth, high-fidelity experience depends heavily on the quality of the wireless link. (See figure 2 below.) The necessary high-bandwidth, extremely low-latency streaming is currently achievable only over WiFi.
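Conceptually, the split works as a loop in which the headset ships its pose to a server that holds the heavy model and gets back an encoded frame. The sketch below is a generic illustration, not Arvizio’s actual protocol; the function names are placeholders and the network hop is elided:

```python
import time
from dataclasses import dataclass

@dataclass
class Pose:
    """Headset position; a real system would also carry orientation."""
    x: float
    y: float
    z: float

def server_render(pose: Pose, model_polygons: int) -> bytes:
    """Stand-in for GPU rendering plus video encoding on the remote server.

    A real implementation would rasterize the full model (here only its
    polygon count is passed, and it goes unused) and return a compressed
    video frame for the headset to decode.
    """
    return f"frame@({pose.x},{pose.y},{pose.z})".encode()

def headset_frame(pose: Pose) -> tuple[bytes, float]:
    """One client iteration: send the pose, receive a frame, time the trip."""
    t0 = time.monotonic()
    frame = server_render(pose, model_polygons=880_000)  # network hop elided
    return frame, (time.monotonic() - t0) * 1000.0
```

Because every displayed frame makes this round trip, the per-frame latency and its variance are set by the wireless link, which is why the quality of that link dominates the user experience.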


Figure 2: Example of on-board vs. remote rendering.

Developing the ability to deploy the Arvizio remote rendering solution over a cellular network is a critical step for this platform to realize its full potential. Integration with a managed network and scalable server-side hosting can deliver the ubiquity, consistency and control that are necessary to open the market for these solutions. The impact of network integration becomes even more pronounced when considering use cases with multiple remote users interacting or simultaneously accessing the same content.

“We believe the ability to collaborate in near real-time is going to revolutionize many industries,” said Jonathan Reeves, CEO of Arvizio. “The near seamless integration of holographic computing and live communications allows for sharing real-world visual information and instant feedback among members of your team, wherever they are. The network needs to be able to offer multiple users, across remote locations, the ability to access and interact with content at-will, regardless of location and device. This would not be possible without 5G and edge computing.”

Previous generations of cellular networks were not designed for interactive content, but with the combination of 5G and edge computing, we will start to enable scenarios that previously would have only been feasible over local WiFi.

Bringing remote rendering to the 5G network Edge

Experimentation with Arvizio’s software on the 5G testbed gave us valuable insight into the interplay between network delay and the entire end-to-end remote rendering pipeline.

Previous EC Zone investigation highlighted that, in absolute milliseconds, network delay is only a fraction of the entire end-to-end pipeline, with much of the latency coming from the compression and streaming components. Migrating from the public cloud to the network edge did measurably improve the performance of the application under test. Still, even the modest delay introduced by using the production LTE radio network, rather than a local WiFi link, forced all of the other elements of the pipeline to work extremely hard to compensate, significantly affecting overall end-to-end latency.

We inferred that simply shaving off milliseconds from the average network delay alone would not be sufficient to deliver the desired end-to-end experience. Rather, we observed the need for more predictable network delay and tighter coupling between the network and application functions.
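The intuition that predictability matters more than average delay can be sketched with a quick simulation. The frame budget, fixed pipeline cost, and delay distributions below are illustrative assumptions, not measurements from our testbed:

```python
import random

random.seed(42)

FRAME_BUDGET_MS = 50.0    # assumed end-to-end deadline per frame
OTHER_PIPELINE_MS = 40.0  # assumed capture/encode/decode/render cost

def missed_deadline_rate(mean_ms: float, jitter_ms: float,
                         n: int = 100_000) -> float:
    """Fraction of frames whose total latency blows the budget, modeling
    network delay as uniform in [mean - jitter, mean + jitter]."""
    misses = 0
    for _ in range(n):
        network = random.uniform(mean_ms - jitter_ms, mean_ms + jitter_ms)
        if OTHER_PIPELINE_MS + network > FRAME_BUDGET_MS:
            misses += 1
    return misses / n

# Same 8 ms average network delay in both cases; only predictability differs.
print(missed_deadline_rate(mean_ms=8.0, jitter_ms=1.0))  # 0.0 (no misses)
print(missed_deadline_rate(mean_ms=8.0, jitter_ms=8.0))  # ~0.375
```

With identical mean delay, the jittery link misses the frame deadline more than a third of the time, while the predictable link never does; shaving milliseconds off the average alone would not close that gap.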

Thus, the goal of integrating Arvizio’s platform into our new 5G-equipped lab was to understand whether the reduced network delay and increased delay predictability offered by combining a 5G network with edge computing would alleviate some of the challenges encountered in our previous cellular experimentation. We executed tests comparing the connection between the server and the MR headset client over three configurations: a local WiFi link only, an intermediary 5G core and radio, and an intermediary 4G core and radio.

We evaluated two models of different complexities and corresponding file sizes. (See table 1 below.) Models with a higher polygon count and textured surfaces had larger file sizes and greater rendering complexity. Model B was significantly larger and more complex (as represented by the number of polygons) than Model A, making an off-board rendering solution a more compelling option. If rendering were done on-board the headset, it would strain the entire solution: the file would take longer to download, and the headset might be unable to store the model, render it, or process it in real time.

 
         Number of polygons   Surface type        File size
Model A  140K                 Solid surfaces      10 MB
Model B  880K                 Textured surfaces   300 MB

Table 1: Comparison of model size and complexity by surface type in our experiment.
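Simple transfer arithmetic shows why the larger model favors off-board rendering. The link rate below is an assumption for illustration, not a figure measured in our trial:

```python
def download_seconds(file_mb: float, link_mbps: float) -> float:
    """Idealized transfer time: convert megabytes to megabits, divide by rate."""
    return file_mb * 8 / link_mbps

# At an assumed sustained 100 Mbps, Model A downloads almost instantly,
# while Model B keeps the user waiting nearly half a minute.
for name, size_mb in [("Model A", 10), ("Model B", 300)]:
    print(name, round(download_seconds(size_mb, link_mbps=100), 1), "s")
# Model A 0.8 s
# Model B 24.0 s
```

And the download is only the first hurdle: once on the headset, the 880K-polygon model still has to fit in memory and render at an acceptable frame rate.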

Results and future work

We discovered that management and delivery over the ultra-low-latency 5G configuration did allow the entire pipeline to operate with more regularity. While introducing the 4G component increased end-to-end latency by an average of approximately 200% relative to the local WiFi setup, introducing 5G added only about 10%.

Additionally, we observed that the application performance difference between the network configurations became more pronounced with increased complexity and file size of the model. The introduction of 5G demonstrated similar end-to-end latency effects for both Model A and Model B. However, the 4G introduction took a 25% higher toll on end-to-end latency in tests with Model B vs. Model A. Pipeline disruption in 4G testing for complex Model B also caused the backend, server-side calculation to drop to a mere 25% of the targeted frame rate, while it held steady at WiFi levels for 5G testing.
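The reported percentages can be sanity-checked with a line of arithmetic, normalizing the WiFi baseline to an arbitrary 100 units:

```python
# Arbitrary normalized WiFi baseline; the percentage increases are the
# figures reported in the trial.
WIFI = 100.0
four_g = WIFI * (1 + 2.00)  # ~200% increase over WiFi -> 300 units
five_g = WIFI * (1 + 0.10)  # ~10% increase over WiFi  -> 110 units

print(four_g / five_g)  # 4G end-to-end latency is ~2.7x the 5G figure
```

In other words, for the same application and server, moving the radio leg from 4G to 5G cut the cellular latency penalty from roughly 3x the WiFi baseline to barely above it.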

These are important results for EC Zone because they indicate that the 5G edge computing configuration exhibits a viable delay profile for the development of a cellular-enabled, remote rendering MR solution. This offers the opportunity to now turn our focus to optimizing performance with real-time coordination between the network and the application functions.

Our future work will also explore the multiuser coordination and the scalability of such platforms. These present a new set of challenges as we begin to address server-side resource optimization and the robust management of multiple simultaneous connections. We believe that the 5G network will become even more valuable as we enable people to collaborate remotely, annotate models, and change their own perspectives in real-time.

1 Polygons are 3D digital objects that are drawn as a mesh of individual polygonal surfaces. Complex objects can be composed of millions of polygons in order to achieve the desired resolution and level of detail.
2 LiDAR, or light detection and ranging, is an optical remote-sensing method that uses laser light to produce highly accurate coordinate measurements of a target. Point clouds are collections of points that represent a 3D shape or feature.
3 The last hop to the headset in all three scenarios was over WiFi as the headset does not currently have cellular capabilities.
