Intel’s Neuromorphic Loihi Processor Scales to 8M Neurons, 64 Chips

 


Intel has announced a significant advance for its neuromorphic research processor, codenamed Loihi. The company has now scaled its Loihi implementation up to 64 chips, allowing it to create a system with more than 8 million neurons (8.3M). This new configuration (codenamed Pohoiki Beach) delivers 1,000x better performance than conventional CPUs in applications like sparse coding, graph search, and constraint-satisfaction problems. Intel also claims that Pohoiki Beach delivers a 10,000x improvement in energy efficiency over conventional CPU architectures in these sorts of tests.

Neuromorphic computing is a subset of computing that attempts to mimic the brain’s architecture using modern technological analogues. Instead of relying on a typical clocked CPU design, for example, Loihi is based on a spiking neural network architecture. The basic Loihi processor contains 128 neuromorphic cores, three Lakemont (Intel Quark) CPU cores, and an off-chip communication network. In theory, Loihi can scale all the way up to 4,096 on-chip cores and 16,384 chips, though Intel has said it has no plans to commercialize a design that large.
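To make the spiking idea more concrete, below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the textbook model of the spike-based signaling that neuromorphic hardware emulates. This is an illustrative toy in Python, not Loihi’s actual neuron model or programming interface; the function name, parameter names, and values are assumptions chosen for readability.

# A minimal leaky integrate-and-fire (LIF) neuron, simulated in discrete time.
# Illustrative only -- this is not Loihi's neuron model or its API.

import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the spike times produced by a stream of input current.

    All parameters here are illustrative assumptions, not hardware values.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # while accumulating the input drive.
        v += (dt / tau) * (-(v - v_rest) + i_in)
        if v >= v_threshold:
            spike_times.append(step * dt)  # the neuron "spikes"
            v = v_reset                    # and resets
    return spike_times

# A constant drive above threshold makes the neuron fire periodically.
spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 second of simulated time")

In a network of such neurons, information is carried by the timing and rate of spikes rather than by densely clocked numeric activations, which is what lets event-driven hardware stay largely idle, and therefore save power, when nothing is firing.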


A close-up shot of one of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware,” said Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo. “Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.”


One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips, shown here interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, announced in July 2019, is made up of multiple Nahuku boards and contains 64 Loihi chips. (Credit: Tim Herman/Intel Corporation)

The Pohoiki Beach implementation is not the largest planned deployment for the neuromorphic chip. Intel states that it intends to roll out an even larger design, codenamed Pohoiki Springs, which will deliver “an unprecedented level of performance and efficiency for scaled-up neuromorphic workloads.”

We’ve covered advances and research in neuromorphic computing for several years at ExtremeTech. The work being done on these processors is closely related to the work being conducted in AI and machine intelligence overall, but neuromorphic computing isn’t just concerned with running AI/ML workloads efficiently on existing chips. The ultimate goal is to build processors that more closely resemble the human brain.

One of the oddities of computing is how prevalent analogies between human brain function and computer operation are, given that human brains and classic computers have very little overlap in how they actually work. Transistors are not equivalent to neurons, and the spiking neural network model Loihi uses to transmit information across its processor cores is intended to be closer to the biological processes humans and other animals rely on than traditional silicon designs are.

Projects like this have a number of long-term research goals, of course, but one of the most fundamental is to better understand how brains work in order to copy some of their energy efficiency. The human brain runs on roughly 20W. Exascale supercomputing, which is considered the minimum for advanced neural simulation of anything more complex than an earthworm, is expected to consume megawatts of power per machine, a gap of five to six orders of magnitude. That gap explains why we’re so interested in the long-term energy efficiency and computational potential of the brain in the first place. Architectures like Loihi aren’t just an effort to write programs that mimic what humans can do; the goal is to copy aspects of our neurology as well. That makes their progress a bit more interesting.

Feature Image Credit: Tim Herman/Intel Corporation
