To become safer, smarter and greener, cities need to be connected. And few companies know more about connecting than Verizon.
It’s joined nearly 100 other companies already using NVIDIA Metropolis, NVIDIA’s edge-to-cloud video platform for building smarter, faster deep learning-powered applications.
Verizon is a leading technology company with the nation’s most reliable network service. Its Smart Communities group has been busy working with cities to connect communities and set them up for the future, including attaching NVIDIA Jetson-powered smart camera arrays to street lights and other urban vantage points.
“LED street lighting delivers big savings in operating expenditures,” said David Tucker, head of product management in the Smart Communities Group at Verizon. “Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart city platform that will enable them to link applications now and in the future, helping to create efficiency savings and a new variety of citizen services.”
The arrays — which Verizon calls video nodes — use Jetson’s deep learning prowess to analyze multiple streams of video data to look for ways to improve traffic flow, enhance pedestrian safety, optimize parking in urban areas, and more.
Beta tests using proprietary datasets and models generated from neural network training are wrapping up on both coasts, and Verizon is expected to announce details of a commercial release soon.
Predicting Accidents Before They Occur
Released last year, the NVIDIA Metropolis platform includes tools, technologies and support to build deep learning applications for everything from traffic and parking management to law enforcement and city services.
High-performance deep learning inferencing happens at the edge with the NVIDIA Jetson embedded computing platform, and in servers and data centers with NVIDIA Tesla GPU accelerators.
Verizon’s video nodes leverage Jetson TX1 to collect and analyze data on the furthest edges of a city’s network. This supercomputer on a module accelerates deep learning at the edge, enabling real-time video analytics. All of this edge computing means more efficient, near real-time data analysis, and less high-cost streaming and storing of video over LTE and Wi-Fi networks.
The video nodes capture and classify objects such as vehicles, cyclists and pedestrians, and identify interactions in near real time, providing city officials with a 24/7 data stream of everything from illegal right turns on red lights to pedestrian movement outside of designated crosswalks to parking lot metrics.
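As a rough sketch of what this kind of edge analytics involves, the Python below reads frames from a camera, runs an off-the-shelf object detector and reports only per-frame counts of pedestrians, cyclists and vehicles. It is a minimal illustration, not Verizon’s proprietary pipeline: the torchvision detector, the OpenCV capture source and the 0.6 confidence threshold are all assumptions made for the example.

```python
# Minimal edge-analytics sketch (illustrative only; not Verizon's models or code):
# detect pedestrians, cyclists and vehicles in each frame and keep just the counts,
# so only a small summary ever needs to leave the device.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# COCO label IDs for the object classes mentioned in the article:
# 1 = person, 2 = bicycle, 3 = car.
CLASSES_OF_INTEREST = {1: "pedestrians", 2: "cyclists", 3: "vehicles"}

device = "cuda" if torch.cuda.is_available() else "cpu"  # a Jetson would use its GPU
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval().to(device)

cap = cv2.VideoCapture(0)  # 0 = local camera; an RTSP stream URL works the same way
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # The detector expects RGB tensors with values in [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        detections = model([to_tensor(rgb).to(device)])[0]

    # Keep confident detections of the classes we care about and count them.
    counts = {name: 0 for name in CLASSES_OF_INTEREST.values()}
    for label, score in zip(detections["labels"].tolist(), detections["scores"].tolist()):
        if score > 0.6 and label in CLASSES_OF_INTEREST:
            counts[CLASSES_OF_INTEREST[label]] += 1

    print(counts)  # in a deployed node, this summary is what would be sent upstream

cap.release()
```

The pattern, rather than the particular libraries, is the point: the heavy video processing stays on the device, and only compact metadata has to cross the LTE or Wi-Fi link.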
“In Jetson, we saw an ability to leverage GPUs to create a consistent deep learning view from the cloud to the edge — through the full stack,” said Tucker.
While the Jetson-powered nodes can identify speeding vehicles, track cyclist movements and handle other real-time tasks at the edge, once the data is back in the cloud, it can be used for predictive analytics.
“We’re trending toward being able to spot something happening at intersection A and understanding the near real-time impact on intersections B and C a few blocks away,” Tucker explained.
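A hypothetical cloud-side counterpart to that kind of analysis is sketched below: it loads the small per-minute event summaries the nodes might upload and checks how vehicle counts at one intersection correlate with counts a few minutes later at nearby ones. The file name, column names and intersection labels are illustrative assumptions, not details of Verizon’s system.

```python
# Hypothetical cloud-side sketch: do counts at intersection "A" foreshadow
# congestion at "B" and "C" a few minutes later? (All names are assumptions.)
import pandas as pd

# Each row is a per-minute summary a video node might upload:
# timestamp, intersection ID and the number of vehicles counted.
events = pd.read_csv("node_events.csv", parse_dates=["timestamp"])

# One column of vehicle counts per intersection, one row per minute.
counts = (
    events.pivot_table(index="timestamp", columns="intersection",
                       values="vehicle_count", aggfunc="sum")
    .resample("1min").sum()
)

# Shift intersection A forward in time and correlate it with the others;
# each row is one minute, so shifting by N rows is an N-minute lag.
for lag_minutes in (1, 3, 5):
    lagged_a = counts["A"].shift(lag_minutes)
    print(f"lag={lag_minutes}min",
          {other: round(lagged_a.corr(counts[other]), 2) for other in ("B", "C")})
```

A lagged correlation is only a stand-in for whatever predictive models Verizon ultimately uses, but it captures the idea of turning many edge observations into a city-wide forecast.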
Smart Lights for Smarter Streets
In Boston and Sacramento, Calif., Verizon has deployed video nodes on existing street light infrastructure. Down the road, these smart lights may be able to communicate with autonomous vehicles, supporting street light-to-car communication that could help reduce congestion and keep pedestrians and drivers safer.
Verizon’s video nodes are part of its upcoming solution for supporting the safety of vehicles, pedestrians and cyclists. City leaders across the globe are increasingly turning to such technology to make their communities safer and friendlier.
More than 35 U.S. cities have signed up for Vision Zero, a global initiative to eliminate traffic deaths and severe injuries. First implemented in Sweden in the 1990s, Vision Zero has spread across Europe and is now making its way to the U.S., with deep learning at its core.