For self-driving cars
US-based Nvidia has developed the Parker multi-core processor. Two of them are integrated on the company’s Drive PX 2 platform.
The Drive PX 2 platform comprises two Parker system-on-chips and two Pascal-based GPUs. Each Parker integrates two 64-bit Denver CPU cores paired with four 64-bit ARM Cortex-A57 cores. The platform is equipped with a dual-CAN interface for connecting to the in-vehicle networks and a Gigabit Ethernet port for communication with domain ECUs. More than 80 carmakers, suppliers, and research centers are using the Drive PX 2 platform.
The Denver CPU is a seven-way superscalar processor that supports the ARM instruction set, implements an improved dynamic code-optimization algorithm, and adds low-power retention states for better energy efficiency. The two Denver cores and the four Cortex-A57 cores are interconnected through a proprietary coherent fabric. The 256-core Pascal processor delivers the performance needed to run advanced deep-learning inference algorithms for self-driving: the platform computes 24 trillion deep-learning operations per second, a level of performance otherwise found in supercomputers.
Self-driving cars need to know where they are, to recognize the objects around them, and to continuously calculate a safe driving path. This situational and contextual awareness of the vehicle and its surroundings demands a powerful visual computing system that can merge data from cameras, other sensors, and navigation sources while also determining the safest path. The Drive PX 2 platform can fuse data from 12 cameras as well as lidar, radar, and ultrasonic sensors. The accompanying Drive Works software development kit includes reference applications, tools, and library modules. It also provides a run-time pipeline framework that covers detection, localization, path planning, and visualization. The software is intended to help developers get started on the platform.
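A staged run-time pipeline of this kind can be sketched in a few lines. The sketch below is purely illustrative: the stage names, the `Frame` data structure, and the placeholder detector, localizer, and planner are assumptions for demonstration, not the actual Drive Works API.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Frame:
    """Hypothetical fused sensor snapshot handed from stage to stage."""
    camera: list                                  # raw camera inputs
    detections: list = field(default_factory=list)
    pose: Tuple[float, float] = (0.0, 0.0)
    path: list = field(default_factory=list)

def detect(frame: Frame) -> Frame:
    # Placeholder detector: label every camera input as one object.
    frame.detections = [f"object_{i}" for i, _ in enumerate(frame.camera)]
    return frame

def localize(frame: Frame) -> Frame:
    # Placeholder localization: report a fixed pose.
    frame.pose = (52.52, 13.40)
    return frame

def plan_path(frame: Frame) -> Frame:
    # Placeholder planner: emit one waypoint per detected object.
    frame.path = [(i, i) for i, _ in enumerate(frame.detections)]
    return frame

def run_pipeline(frame: Frame,
                 stages: List[Callable[[Frame], Frame]]) -> Frame:
    """Run the frame through each stage in order."""
    for stage in stages:
        frame = stage(frame)
    return frame

frame = run_pipeline(Frame(camera=["img0", "img1"]),
                     [detect, localize, plan_path])
print(frame.detections)  # -> ['object_0', 'object_1']
```

The point of the structure is that detection, localization, and path planning remain independent, swappable stages sharing one data contract, which is how such frameworks typically keep the processing chain extensible.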
Advanced driver assistance systems (ADAS) installed today can detect some objects and alert the driver, and in some cases they slow down or stop the vehicle. Examples include blind-spot monitoring, lane-change assistance, and forward-collision warnings. Autonomous cars, however, additionally require deep-learning capability. Nvidia therefore delivers the Digits deep-neural-network development software, a deep-learning training system that lets computers train themselves to understand the objects in the world around them. The control outputs remain as simple as today's: steering, acceleration, and deceleration commands, which can be transmitted via CAN or CAN FD networks. According to the provider, the hardware and software can be used to develop solutions that discern a police car from a taxi, an ambulance from a delivery truck, or a parked car from one that is just pulling out into traffic. Other applications range from identifying cyclists on the sidewalk to spotting absent-minded pedestrians.
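To make the "simple commands over CAN" idea concrete, the sketch below packs a steering angle and an acceleration value into an 8-byte classical CAN payload. The signal layout and scaling factors are illustrative assumptions, not a real vehicle's signal database.

```python
import struct

# Assumed scaling: these factors are hypothetical, chosen only so the
# physical values fit signed 16-bit raw signals.
STEER_SCALE = 0.1    # degrees per raw count
ACCEL_SCALE = 0.01   # m/s^2 per raw count

def encode_command(steer_deg: float, accel_mps2: float) -> bytes:
    """Pack both signals as little-endian int16 plus 4 pad bytes (8 bytes total)."""
    steer_raw = round(steer_deg / STEER_SCALE)
    accel_raw = round(accel_mps2 / ACCEL_SCALE)
    return struct.pack("<hh4x", steer_raw, accel_raw)

def decode_command(payload: bytes):
    """Recover the physical values from an 8-byte payload."""
    steer_raw, accel_raw = struct.unpack("<hh4x", payload)
    return steer_raw * STEER_SCALE, accel_raw * ACCEL_SCALE

payload = encode_command(-12.5, 1.5)
assert len(payload) == 8     # fits one classical CAN data field
```

In practice the payload would be handed to a CAN driver together with an identifier; a CAN FD frame could carry up to 64 data bytes, leaving room for richer command sets in the same message.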