Nvidia’s New Jetson Xavier NX Adds Horsepower to AI at the Edge

Jetson Xavier NX

Whether or not Nvidia’s bold claim that its new Xavier NX ($399) is “the world’s smallest supercomputer for AI at the Edge” holds up, there is no doubt the module will be an impressive accomplishment when it ships in March. Packing performance similar to the current Jetson AGX Xavier into the tiny form factor of a Jetson Nano, it will greatly multiply the amount of inferencing and training that can be performed in small, low-power devices.

Nvidia expects the new module to find a home in a wide variety of AI and vision-centric applications, including autonomous machines, high-resolution sensors for robots, video analytics, and a variety of embedded devices. Basically, Xavier NX should be a solid upgrade for any application wanting to move up from a Nano, or shrink its form factor from a full-sized Xavier.

Nvidia’s Jetson Xavier NX By the Numbers

Featuring an Nvidia Volta GPU with 384 CUDA cores and 48 tensor cores, along with two Nvidia Deep Learning Accelerators, the Xavier NX is capable of 21 TOPS (INT8) or 6 TFLOPS (FP16) of AI performance while consuming only 15 watts of power. When limited to 10 watts, it can still deliver 14 TOPS. That compares with the 0.5 TFLOPS (FP16) of the similarly sized Nano ($129) and comes close to matching the much larger Jetson AGX Xavier ($599).
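For readers curious what switching between those power envelopes looks like in practice, here is a minimal sketch that wraps the stock nvpmodel utility found on Jetson boards. The numeric mode IDs are board-specific assumptions, so check /etc/nvpmodel.conf on the device for the actual table before setting anything.

```python
# Minimal sketch (not Nvidia's code): querying and setting Jetson power modes
# via the stock nvpmodel command-line tool.
import subprocess

def current_power_mode() -> str:
    """Return nvpmodel's description of the active power mode."""
    result = subprocess.run(
        ["sudo", "nvpmodel", "-q"], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

def set_power_mode(mode_id: int) -> None:
    """Switch to the given mode ID (board-specific; see /etc/nvpmodel.conf)."""
    subprocess.run(["sudo", "nvpmodel", "-m", str(mode_id)], check=True)

if __name__ == "__main__":
    print(current_power_mode())
    # set_power_mode(0)  # uncomment after confirming which ID maps to the 15W mode
```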

Nvidia’s Jetson family showing relative size, price, and specs

The Jetson Xavier NX is pin-compatible with the current Jetson Nano, which should mean existing embedded designs can quickly benefit from an order-of-magnitude increase in performance in essentially the same power and size envelope.

As its main processor, the Xavier NX features a 6-core Carmel Arm 64-bit CPU with 6MB of L2 and 4MB of L3 cache. It can directly support up to six CSI cameras over 12 MIPI CSI-2 lanes, and it comes equipped with 8GB of 128-bit LPDDR4x RAM capable of 51.2GB/s of data transfer. The 70mm x 45mm module also supports Gigabit Ethernet and, like other products in the Jetson family, runs an Ubuntu-based Linux.
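As an illustration of that CSI camera support, the sketch below shows the usual way to pull frames from a CSI sensor on a Jetson: an nvarguscamerasrc GStreamer pipeline read through OpenCV. The sensor ID, resolution, and framerate are placeholder values, and it assumes an OpenCV build with GStreamer enabled, as JetPack ships.

```python
# Minimal sketch (assumed setup): grabbing a frame from a CSI camera on a
# Jetson using the standard nvarguscamerasrc GStreamer pipeline and OpenCV.
import cv2

def csi_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for one CSI sensor (placeholder values)."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
try:
    ok, frame = cap.read()
    print("Captured frame:", frame.shape if ok else "no frame")
finally:
    cap.release()
```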

Xavier NX Poised to be a Benchmark Leader in Edge-based Inferencing

In related news, Nvidia announced today that its current Xavier SoC was the top edge computing performer in the newly released MLPerf Inference v0.5 benchmark. Since the Xavier NX is a lower-power version of the same Xavier design, it should prove an even more competitive entrant in the battle for edge computing design wins.

Xavier NX packs most of the power of a Xavier into a Nano form factor with more performance than a TX2

Xavier NX Fits Into Nvidia’s AI Ecosystem

One of the strongest points in favor of using Nvidia’s products for AI development and applications is the company’s extensive AI ecosystem and how well it scales up and down across the product line. The Xavier NX should be no exception: it is supported by Nvidia’s popular JetPack SDK and runs the same CUDA-X AI software architecture as existing devices. Developers who want a head start on the Xavier NX can work with the Jetson AGX Xavier Development Kit today, after applying a patch that lets it emulate the Xavier NX.
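As a rough illustration of that workflow, the sketch below converts an ONNX model into an FP16 TensorRT engine with the trtexec tool bundled in JetPack. The model filename and the trtexec path are assumptions to adjust for your own install; trtexec also prints latency and throughput figures, which makes it a quick way to gauge how a model will run on the module.

```python
# Minimal sketch (paths and model are placeholders): building an FP16 TensorRT
# engine from an ONNX model with the trtexec tool that ships in JetPack.
import subprocess

TRTEXEC = "/usr/src/tensorrt/bin/trtexec"  # typical JetPack location; adjust if needed

subprocess.run(
    [
        TRTEXEC,
        "--onnx=model.onnx",          # placeholder: a model exported from your framework
        "--fp16",                     # use the tensor cores' FP16 path
        "--saveEngine=model.engine",  # serialized engine to load at inference time
    ],
    check=True,
)
```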

Showcasing the broad adoption of its Jetson platform, Nvidia says there are over 400,000 registered developers using Jetson and over 3,000 customers using it in their applications. Examples the company provided included Postmates’ delivery robot, Skydio’s new drone, and an AI microscope developed at Stanford University. Speaking from personal experience developing on and for a Jetson Nano and a JetBot, the tools, platform, and support are excellent, so there is every reason to believe the Xavier NX will be a big hit with those looking to push AI at the Edge to the next level.
