Intel introduces Loihi, the next big thing in neuromorphic computing
The name might sound fancy, but if you look closely at "neuro" and "morphic" you will see that it has something to do with neurons and change, or more accurately adaptability. Neuromorphic computer chips behave more like biological brains, which have networks of billions of neurons that constantly change as they learn over time.

Neuromorphic chips work on the same principle: they can change and improve as they learn from data and experience. To put it simply, this could be a huge milestone in developing AI.

What is Loihi? What do we know about it?

Loihi is Intel's first neuromorphic processor. While still in its development stages, it is showing impressive results in terms of energy consumption: Intel claims Loihi is "1,000 times more energy-efficient". The other notable feature of Loihi, as mentioned above, is its ability to learn by itself, eliminating the need for training in the conventional way. Here is a brief look at some of Loihi's capabilities.

Intel will be sharing samples of Loihi with leading universities and research institutions.

Loihi test chip highlights

• Fully asynchronous neuromorphic many-core mesh that supports a wide range of sparse, hierarchical and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons.

• Each neuromorphic core includes a learning engine that can be programmed to adapt network parameters during operation, supporting supervised, unsupervised, reinforcement and other learning paradigms (a toy illustration of this kind of on-line plasticity follows the list below).

• Fabrication on Intel's 14 nm process technology.


• A total of 130,000 neurons and 130 million synapses.

• Development and testing of several algorithms with high algorithmic efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
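
To give a rough sense of what "adapting network parameters during operation" means in a spiking system, here is a minimal Python sketch of a leaky integrate-and-fire neuron whose input weights are strengthened whenever recently active inputs drive it to spike. This is only a conceptual illustration: the neuron model, the Hebbian-style update, and every parameter in it are assumptions made for the sketch, not Loihi's actual hardware design or programming interface.

import numpy as np

# Illustrative sketch only -- NOT Loihi's neuron model or learning engine.
# A leaky integrate-and-fire (LIF) neuron whose input weights adapt
# on-line with a simple Hebbian-style, spike-timing-inspired rule.

rng = np.random.default_rng(0)

n_inputs = 8                                   # number of presynaptic inputs
weights = rng.uniform(0.1, 0.5, n_inputs)      # initial synaptic weights (arbitrary)

v = 0.0              # membrane potential
tau = 0.9            # leak factor per time step (assumed)
threshold = 1.0      # spike threshold (assumed)
learning_rate = 0.01 # plasticity step size (assumed)

input_trace = np.zeros(n_inputs)  # decaying record of recent presynaptic spikes

for step in range(100):
    # random presynaptic spikes for this time step (1 = spike, 0 = silent)
    spikes_in = (rng.random(n_inputs) < 0.3).astype(float)

    # decay the trace and add the inputs that just fired
    input_trace = 0.8 * input_trace + spikes_in

    # leaky integration of the weighted input current
    v = tau * v + weights @ spikes_in

    if v >= threshold:
        # postsynaptic spike: strengthen weights of recently active inputs,
        # i.e. the parameters change while the network is running
        weights += learning_rate * input_trace
        v = 0.0  # reset membrane potential after the spike
        print(f"step {step:3d}: spike, weights now {np.round(weights, 3)}")

On Loihi itself, a rule like this would be expressed through the per-core learning engine rather than written as host code; the point of the sketch is simply that the synaptic weights change while the network runs, instead of being fixed after an offline training phase.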
