Nvidia has launched the Jetson TX2 Internet-of-Things (IoT) platform. It comprises a multicore graphics processing unit (GPU) and a multicore central processing unit (CPU).
The credit card-sized platform is designed for artificial intelligence (AI) computing. It provides twice the performance of its predecessor, the Jetson TX1. Additionally, it features CAN connectivity. "Jetson TX2 brings powerful AI capabilities at the edge, making possible a new class of intelligent machines," said Deepu Talla, vice president and general manager of the Tegra business at Nvidia. "These devices will enable intelligent video analytics that keep our cities smarter and safer, new kinds of robots that optimize manufacturing, and new collaboration that makes long-distance work more efficient."
The product is equipped with the 256-core Pascal GPU, a hex-core CPU complex combining the dual-core 64-bit Denver 2 with a quad-core Arm Cortex-A57, 8 GiB of LPDDR4 memory, and 32 GiB of eMMC storage capacity. On board there are also 12 CSI lanes supporting up to six cameras, as well as a video encoder and decoder. Besides the CAN interface, the platform provides 1-Gbit/s Ethernet, WLAN, and Bluetooth connectivity.
The Jetson family is supported by the Jetpack 3.0 AI computing software platform. It also allows integrating the TensorRT neural network inference engine for building deep-learning applications. A library of deep neural network primitives is available, too. Other software packages include VisionWorks for developing vision and image-processing programs, graphics driver software, and Linux for Tegra.
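An inference engine's job is to execute a trained network's forward pass as efficiently as possible on the GPU. As a rough illustration of what that forward pass computes (a toy NumPy sketch with made-up weights, not the TensorRT API), consider a minimal two-layer network:

```python
import numpy as np

# Illustrative only: a toy two-layer forward pass of the kind an
# inference engine such as TensorRT optimizes and runs on the GPU.
# All weights below are hypothetical placeholders, not a trained model.

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, w1, b1, w2, b2):
    """One inference pass: input -> hidden layer (ReLU) -> class scores."""
    hidden = relu(x @ w1 + b1)
    return hidden @ w2 + b2

rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 8)) * 0.1   # input -> hidden weights
b1 = np.zeros(8)
w2 = rng.standard_normal((8, 3)) * 0.1   # hidden -> 3 output scores
b2 = np.zeros(3)

x = np.array([0.5, -1.2, 0.3, 0.9])      # a single 4-feature input
scores = forward(x, w1, b1, w2, b2)
print(scores.shape)                      # one score per output class
```

In a deployed application, an engine like TensorRT would fuse and quantize such layers and schedule them on the Jetson's GPU cores rather than executing them naively as above.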
The Jetson platform enabled Cisco to add AI features such as facial and speech recognition to its Cisco Spark products. Thanks to the Jetson TX2's AI computing and graphics capabilities, Cisco is able to drive new experiences and remove the barriers between physical and virtual spaces.
"For years, Nvidia has demonstrated its commitment to First through multifaceted support by providing Jetson developer kits for robot builds, online training resources, and team and event funding," said Donald E. Bossi, president of First, an international K-12 nonprofit project focused on science and technology. "Through these efforts, Nvidia is helping to inspire more young students to become innovators and inventors."
The Jetson TX2 developer kit, which includes the carrier board and the Jetson TX2 module, can be preordered today for US-$ 599 in the United States and Europe. It will be available in other regions in the coming weeks. The Jetson TX2 module will be available in the summer for US-$ 399 (in quantities of 1000 or more). It measures 50 mm x 87 mm, weighs 85 g, and typically consumes 7.5 W.
Nvidia introduced its first GPU in 1999, originally addressing the PC gaming market. More recently, GPU-accelerated deep learning ignited modern AI – the next era of computing – with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. The US company regards itself as "the AI computing company".
Many of today's IoT devices function as simple gateways, just forwarding data from embedded systems. Dustin Franklin posted: “They rely on cloud connectivity to perform their heavy lifting and number-crunching. Edge computing is an emerging paradigm, which uses local computing to enable analytics at the source of the data.” With its performance, the Jetson TX2 is ideal for deploying advanced AI to remote field locations with poor or expensive Internet connectivity, he added. The platform offers near-real-time responsiveness and minimal latency — key for “intelligent” machines that need mission-critical autonomy. Its two CAN controllers enable autopilot integration to control robots and drones.
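When an onboard controller talks to sensors and actuators over CAN, application data must fit the 8-byte payload of a Classical CAN frame. A minimal pure-Python sketch of such packing (the identifier and signal layout here are hypothetical, chosen only for illustration):

```python
import struct

# Hypothetical signal layout for an autopilot status frame:
# two 16-bit sensor readings plus a 32-bit timestamp, big-endian,
# filling exactly the 8-byte data field of a Classical CAN frame.
ARBITRATION_ID = 0x123  # illustrative 11-bit CAN identifier

def pack_payload(speed_cmps, heading_cdeg, timestamp_ms):
    """Pack speed (cm/s), heading (centi-degrees), and a millisecond
    timestamp into an 8-byte payload: uint16, uint16, uint32."""
    return struct.pack(">HHI", speed_cmps, heading_cdeg, timestamp_ms)

def unpack_payload(payload):
    """Inverse of pack_payload."""
    return struct.unpack(">HHI", payload)

payload = pack_payload(1250, 905, 123456)
assert len(payload) == 8                      # fits one Classical CAN frame
assert unpack_payload(payload) == (1250, 905, 123456)
```

On real hardware, a library such as python-can (or the SocketCAN interface on Linux for Tegra) would transmit this payload under the chosen identifier; the packing logic itself stays the same.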
Self-driving cars and trucks
Nvidia cooperates with Bosch and Paccar to develop control systems for self-driving vehicles. Recently, Bosch launched its AI Car Computer based on Nvidia’s hardware. During the presentation, it was made clear that deep learning plays a vital role throughout the entire computational pipeline of a self-driving vehicle, enabling it to get increasingly smarter with experience. Consider the processing horsepower required to make sense of the ocean of data that streams in from a car’s array of sensors, including cameras, radars, Lidar sensors, and ultrasonics. This is where deep learning comes in. By first developing and training a deep neural network in the data center, the Nvidia Drive PX system becomes able to understand everything happening around the car in real time.
Many cars on the road today have some basic safety features, known as advanced driver assistance systems (ADAS). These systems are often based on smart cameras and offer basic detection of obstacles and identification of lane markings. These capabilities can help carmakers increase their New Car Assessment Program safety ratings.
The path from ADAS to a self-driving car is long, and the amount of processing required for an autonomous vehicle is orders of magnitude greater. Jen-Hsun Huang, CEO of Nvidia, estimated the incremental processing to be at least 50 times greater. And that does not include the addition of an AI co-pilot, which Nvidia presented at the CES 2017 tradeshow.
Nvidia and Paccar, a leading truck maker, cooperate to develop self-driving vehicles. Paccar manufactures trucks under several brands: DAF, Kenworth, and Peterbilt. The intention is to develop commercial vehicles using the Nvidia Drive PX 2 technology to achieve SAE Level 4 self-driving capability. During his keynote at Bosch’s Connected World event, Huang showcased a video of a Paccar semi-tractor trailer driving on a closed course, handling a wide range of situations without a driver behind the wheel. The solution is intended to improve driver productivity, enhance transportation efficiency, and increase safety.
In most of these self-driving computing systems, CAN is used as a deeply embedded network collecting data from sensors and ECUs. CAN FD, with its higher throughput, fills the gap between Classical CAN and high-performance networks such as Ethernet.
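The throughput gap comes largely from payload efficiency: a Classical CAN frame carries at most 8 data bytes, while CAN FD carries up to 64 and can switch to a faster bit rate in the data phase. A rough back-of-the-envelope comparison (the overhead bit counts are approximations that ignore bit stuffing and inter-frame space):

```python
# Simplified payload-efficiency comparison of Classical CAN vs. CAN FD.
# Overhead figures are approximate (arbitration, control, CRC, ACK, EOF);
# bit stuffing and the CAN FD dual-bit-rate data phase are ignored.

def efficiency(payload_bytes, overhead_bits):
    """Fraction of frame bits that carry application data."""
    frame_bits = payload_bytes * 8 + overhead_bits
    return payload_bytes * 8 / frame_bits

classical = efficiency(8, 47)    # Classical CAN: max 8 data bytes
canfd = efficiency(64, 70)       # CAN FD: up to 64 data bytes

print(f"Classical CAN: {classical:.0%}, CAN FD: {canfd:.0%}")
```

Even before the faster data-phase bit rate is taken into account, the larger payload alone already raises the share of useful bits per frame considerably.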