The cloud has transformed personal computing – connecting us to each other and delivering an endless supply of media to our fingertips, changing the way we live, work, and travel. But as digital content becomes more interactive, and data generation becomes increasingly decentralized, we need to reshape our internet and bring the power of cloud computing to the 5G edge.

By building this processing power right into the network and bringing it closer to connected devices, we can enable a new class of real-time applications: lightweight displays can seamlessly blend digital content into our physical world; connected sensors can deliver actionable intelligence to avert disaster; and autonomous transport systems can work together to create safer and more efficient roadways.

This paradigm shift can’t be achieved by the network alone. We need the entire technology ecosystem to work together – designing the infrastructure, hardware, software, and content to utilize the power of the 5G network.

Cutting the cord on XR and gaming


Despite some industry skepticism that truly mobile 3D content streaming will ever be necessary or even possible, the AT&T Foundry remains determined to “cut the cord” on cloud-driven XR and gaming applications. We believe that providing end users with ubiquitous access to computing power and the highest-quality experiences will finally allow content creators to reach their desired audience. We hope this can help overcome a major sticking point in the growth of this fertile ecosystem and allow the 3D content medium to realize its full potential. However, achieving the required end-to-end, motion-to-photon latency over a mobile network while preserving high resolution and lossless image quality is no small feat.

The AT&T Foundry has been able to demonstrate the potential of edge computing to improve the performance of cloud XR and gaming applications. In September 2018, we collaborated with Ericsson and the NVIDIA GeForce NOW cloud gaming platform to showcase “Shadow of the Tomb Raider” running over edge computing and 5G. Furthermore, our experimentation with GridRaster allowed us to quantitatively understand how improved network performance metrics, such as delay and packet jitter, would translate to improvements in application performance metrics, such as motion-to-photon latency and frame loss, therefore yielding a better experience for the end user.
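The link between network metrics and application metrics can be pictured as a simple latency budget: motion-to-photon latency is the sum of every stage between head motion and light leaving the display, and moving compute to the edge shrinks only the transport terms. The sketch below illustrates this with purely hypothetical stage timings – none of these numbers come from the demonstrations described above.

```python
# Hypothetical motion-to-photon latency budget for cloud-rendered XR.
# All stage timings are illustrative assumptions, not measured figures.

def motion_to_photon_ms(sensor_ms, uplink_ms, render_ms,
                        encode_ms, downlink_ms, decode_ms, display_ms):
    """Sum the pipeline stages between head motion and photon emission."""
    return (sensor_ms + uplink_ms + render_ms +
            encode_ms + downlink_ms + decode_ms + display_ms)

# Centralized cloud: a longer network round trip dominates the budget.
cloud = motion_to_photon_ms(2, 25, 8, 4, 25, 3, 8)   # 75 ms total
# 5G edge: compute close to the radio cuts only the transport terms.
edge = motion_to_photon_ms(2, 6, 8, 4, 6, 3, 8)      # 37 ms total

print(cloud, edge)
```

Note that rendering, encoding, and display times are unchanged between the two cases; this is why the text argues that network optimization alone is necessary but not sufficient.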

Though network optimization is critical to “cut the cord” on XR and gaming applications, it is certainly not sufficient. The ecosystem must work to streamline functions throughout the entire capture and rendering pipeline and devise new techniques to distribute functions between the cloud and mobile devices. We further believe that cloud-based immersive media applications will likely benefit from network functions and applications working more synergistically in real time.

This is one of the goals that has motivated the launch of our AT&T/Ericsson 5G Edge Computing Lab. We will take a deeper dive with all parts of the cloud XR and gaming ecosystem in order to reimagine and re-architect how applications are designed and implemented.

Designing the Edge event panel video

Real-time sensor fusion for public safety


Real-time analysis of data collected by connected sensors such as video cameras, audio devices, and temperature monitors has the potential to provide a great deal of actionable intelligence to public safety systems. Furthermore, the data that is collected and understood over time can be used for more intelligent infrastructure planning and decision making in the future. Traffic camera data has been used to identify hazardous intersections and even assist with license plate recognition in “Amber Alert” incidents. Firefighters can be equipped with an array of sensors to provide better situational awareness and proactive scene management.

The technical challenges of integrating intelligence from multiple connected devices while communicating usable, real-time insights make this a compelling use case for the 5G edge. Many sensor analytics tasks are computationally intensive, and better insights can be generated by aggregating several sources of IoT data, so it is often insufficient to deploy analytic solutions on individual IoT nodes. On the other hand, the use of centralized cloud resources is not ideal for applications that are time-sensitive or subject to security and privacy concerns. Furthermore, scalability requires that analytics be performed close to the point of capture, because shipping video to the cloud from myriad cameras places excessive bandwidth stress on the ingress networks of a metropolitan area.
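The bandwidth-stress argument is easy to make concrete with back-of-envelope arithmetic. The camera count and per-stream bitrate below are illustrative assumptions, not AT&T figures:

```python
# Back-of-envelope: aggregate uplink bandwidth if every city camera
# streamed raw video to a centralized cloud. All figures are assumptions.

cameras = 10_000          # assumed cameras across one metro area
mbps_per_stream = 4       # assumed bitrate of one 1080p H.264 stream

total_gbps = cameras * mbps_per_stream / 1000
print(total_gbps)  # sustained Gbps of ingress traffic
```

Even under these modest assumptions the result is tens of gigabits per second of sustained ingress, which is why performing analytics near the point of capture scales better than backhauling every stream.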

The introduction of the 5G network edge creates an opportunity to design a multi-tier deep learning system that provides sufficient computing power to connected devices for time-sensitive functions while still integrating with centralized resources for functions such as historical analysis and long-term storage. Integration with the AT&T network creates an opportunity to seamlessly add functions such as real-time alerts and event-triggered video transmission.
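One way to think about such a multi-tier system is as a placement policy that routes each analytics task to a device, edge, or cloud tier based on its latency budget, privacy sensitivity, and need for historical data. The sketch below is a minimal illustration of that idea; the tier names and thresholds are assumptions, not a description of any deployed AT&T system.

```python
# Illustrative placement policy for a multi-tier IoT analytics system.
# Tier names and latency thresholds are assumptions for this sketch.

def choose_tier(latency_budget_ms, privacy_sensitive, needs_history):
    """Pick the compute tier for one analytics task."""
    if needs_history:
        return "cloud"      # long-term storage and historical analysis
    if privacy_sensitive or latency_budget_ms < 10:
        return "device"     # keep raw data local to the sensor node
    if latency_budget_ms < 100:
        return "edge"       # real-time inference near the point of capture
    return "cloud"          # latency-tolerant batch work

print(choose_tier(50, False, False))    # edge
print(choose_tier(5, False, False))     # device
print(choose_tier(500, False, True))    # cloud
```

A real system would also weigh model size, tier load, and cost, but the core trade-off – time-sensitive functions near the device, historical functions in the centralized cloud – is the one described above.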

However, while many of the component technologies have been validated independently, the implementation of such end-to-end systems remains largely theoretical. The goal of the EC Zone Program is to enable the collective experimentation and engineering that will determine how to optimally integrate cloud, edge, and on-device computing for public safety IoT analytics.


Live HD-3D mapping for vehicle automation


As the prevalence of autonomous vehicles (AV) and advanced driver-assistance systems (ADAS) continues to grow, it becomes increasingly possible, efficient, and even necessary for cars to act as cooperative fleets rather than independent data centers on wheels. Factors such as safety, traffic flow coordination, regulatory compliance, and cost reduction will require these vehicles to combine their own powerful on-board intelligence with data and decision-support systems that exist elsewhere. Vehicles must always be able to perform elementary operating functions regardless of network connectivity. However, the enhanced situational awareness afforded by connected fleets, particularly in dense traffic areas, means that automated vehicles are unlikely to develop as solely self-contained systems.

Collaborative mapping systems are a prime example of information sharing between vehicles. Unlike the flat maps employed by human drivers today, maps for AV and ADAS are complete three-dimensional (3D) recreations of the physical roadside environment, requiring high-definition (HD) precision often down to the centimeter scale. Access to this information can effectively allow a vehicle to “see around corners” and anticipate its future environment well beyond the 100- to 200-meter range typical of current AV sensors. Development of a scalable and highly accurate mapping system is critical to the widespread deployment of AV and ADAS technology in the real world. Thus, it’s not surprising that many of the same companies at the forefront of AV technology have also taken leading positions in the race toward an effective mapping system.

To maintain their accuracy, HD/3D maps must also be “live” – i.e., continuously updated. Some companies are developing end-to-end frameworks that efficiently manage a dynamic representation of environments by “crowd-sourcing” data from existing AV and ADAS operating on the road or, occasionally, from sensors flown overhead. The same sensors (cameras, lidar, radar, etc.) used to enable autonomous perception can also be used to update a shared mapping system. The maps provide valuable context to the onboard sensors, and the discrepancies detected by these sensors deliver live updates to the centralized resource. This framework allows all connected vehicles to increase their safety and efficiency by operating as a cooperative fleet, sensing the world in a distributed manner from various points of view.
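The discrepancy-driven update loop can be sketched very simply: each vehicle reports where its sensor view disagrees with the shared map tile, and the map host commits a change once enough independent vehicles agree. Everything below – the class, the feature model, and the confirmation threshold – is a hypothetical illustration, not any vendor's actual protocol.

```python
# Minimal sketch of crowd-sourced HD map updating: vehicles report
# discrepancies between their sensor view and the shared map tile, and
# an update is committed once enough independent reports agree.
# The confirmation threshold is an assumption for illustration.

from collections import defaultdict

CONFIRMATIONS_NEEDED = 3  # independent vehicles that must agree

class MapTile:
    def __init__(self):
        self.features = {}                  # feature_id -> current state
        self._pending = defaultdict(set)    # (feature_id, state) -> vehicles

    def report(self, vehicle_id, feature_id, observed_state):
        """Record one vehicle's observation; commit it once confirmed."""
        if self.features.get(feature_id) == observed_state:
            return False                    # matches the map; nothing to do
        key = (feature_id, observed_state)
        self._pending[key].add(vehicle_id)
        if len(self._pending[key]) >= CONFIRMATIONS_NEEDED:
            self.features[feature_id] = observed_state
            del self._pending[key]
            return True                     # live update committed
        return False

tile = MapTile()
tile.report("car-1", "lane-42", "closed")
tile.report("car-2", "lane-42", "closed")
committed = tile.report("car-3", "lane-42", "closed")
print(committed, tile.features)
```

Requiring agreement from multiple vehicles is one simple way to keep a single faulty sensor from corrupting the shared map; hosting this logic at the 5G edge, as the next paragraph discusses, keeps the report-confirm-commit loop geographically close to the vehicles that depend on it.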

Hosting the HD/3D data and live update functions at the 5G network edge has the potential to improve the responsiveness and efficiency of these collaborative mapping systems. Mapping information and road conditions are geographically specific by nature, and thus well suited for processing at the edge. Furthermore, as the sophistication of these maps and the number of vehicles utilizing them continue to grow, the data volume and real-time processing requirements may outpace the capabilities of a centralized cloud framework. Integration with 5G will also provide positioning information that is much more precise than traditional GPS while being much less computationally expensive to the vehicle’s onboard system. By taking advantage of real-time connections to incorporate external decision support into mapping and driving algorithms, the 5G edge could become the catalyst for the widespread deployment of AV and ADAS technology.

Developing a framework to optimally integrate 5G and edge capabilities into a live HD/3D mapping system will require input and collaboration across the ecosystem. The EC Zone Program aims to bring together mapping software companies, OEMs, and other infrastructure players to determine how to effectively deploy and synchronize functions across each tier of the computing framework (e.g., vehicle, edge, and centralized) using our next-generation network. Together, we hope to improve the traffic flow and safety of our future roads.
