BRN Discussion Ongoing

7für7

Top 20
Is this a concern for BRN?!
I don't think a new AI chip entering the market always has to be a problem for BrainChip. In fact, it can be quite beneficial: it helps potential customers get a better understanding of what's on offer. The more AI systems there are, the more acceptance the field enjoys, and the topic is taken seriously rather than treated as just a scientific experiment. It should finally become firmly established in the market. Competition is good for business, and we certainly don't want any issues with antitrust authorities. 😯

Edit: In essence, I want to add (what you already know) that Akida has the potential to enhance the performance of other applications while operating more efficiently at lower power. It always depends on what you aim to achieve with Akida. That's what makes our chip special.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 19 users

Diogenese

Top 20
View attachment 47474

Here’s why Quadric has made our Top 13 AI/ML Startups to Bet On Q4 Update:

CEO Veerbhan Kheterpal exemplifies focus, and Quadric’s mission is to create ML-optimized processor IP that empowers rapid AI SoC development and model porting. This unwavering focus on efficiency is how Quadric is going to make AI more accessible and responsible, and democratize it.

It's also Kheterpal’s boldness in challenging norms that stands out. I needed no more evidence than his fearlessly posing a probing question to Andrew Ng, a luminary of AI, during a Q&A session at the AI Hardware Summit. Fortune favors the brave!

Let’s explore how he and Quadric are making AI responsible and democratizing it:

The landscape of Large Language Models is growing exponentially, creating great potential yet also the risk of negative consequences like biased and inaccurate models. It also brings to light the issue of skyrocketing, unsustainable energy consumption in training LLMs.

Only large companies can manage the rising costs of training and retraining models, contradicting the core principle of democratization.

However, Meta introduced the highly anticipated Llama2 LLM in July, which stands out because it’s open-source, free, and designed for both commercial AND research use. It’s also trained with significantly more parameters than other models, emphasizing safety and responsibility.

Meta launched Llama2 along with a groundbreaking announcement of partnering with Qualcomm, which will integrate Llama2 into next-gen Snapdragon chips in smartphones and laptops beginning next year. This is considered a milestone since LLMs have been considered viable only in data centers with access to vast power resources.

Yet Quadric doesn’t view this through rose-colored lenses. CMO Steve Roddy voiced a contrarian perspective, asking, “Why would titans of the semiconductor and IP worlds need to wait until 2024 or 2025 or beyond to support today’s newest, hottest ML model?”

With the rate of change in LLMs and vision models intensifying, the reality is that most accelerators designed for AI at the Edge would require a respin for each evolution. And FPGAs, like GPUs, require more power than is suitable for Edge applications.

Quadric’s approach is different. Their general-purpose Neural Processing Unit, known as "Chimera," combines field programmability, like a GPU, with a power-performance profile that makes Edge AI feasible across a variety of consumer devices. What’s more, they support this blend of programmability and performance with a dedicated Developer Studio to significantly expedite the porting process.

Quadric’s emphasis on efficiency, driven by Kheterpal’s leadership, not only empowers developers but also paves the way with fewer hurdles, faster time-to-market, and reduced costs, leaving us with no doubt that Quadric is playing a pivotal role in making AI genuinely accessible to all.


Quadric is addicted to MACs, but does not like them spiky:

US2023083282A1 SYSTEMS AND METHODS FOR ACCELERATING MEMORY TRANSFERS AND COMPUTATION EFFICIENCY USING A COMPUTATION-INFORMED PARTITIONING OF AN ON-CHIP DATA BUFFER AND IMPLEMENTING COMPUTATION-AWARE DATA TRANSFER OPERATIONS TO THE ON-CHIP DATA BUFFER

Systems and methods for implementing accelerated memory transfers in an integrated circuit includes configuring a region of memory of an on-chip data buffer based on a neural network computation graph, wherein configuring the region of memory includes: partitioning the region of memory of the on-chip data buffer to include a first distinct sub-region of memory and a second distinct sub-region of memory; initializing a plurality of distinct memory transfer operations from the off-chip main memory to the on-chip data buffer; executing a first set of memory transfer operations that includes writing a first set of computational components to the first distinct sub-region of memory, and while executing, using the integrated circuit, a leading computation based on the first set of computational components, executing a second set of memory transfer operations to the second distinct sub-region of memory for an impending computation.

[0045] … Accordingly, a technical benefit achieved by an arrangement of the large register file 112 within each array core 110 is that the large register file 112 reduces a need by an array core 110 to fetch and load data into its register file 112 for processing. As a result, a number of clock cycles required by the array core 112 to push data into and pull data out of memory is significantly reduced or eliminated altogether. That is, the large register file 112 increases the efficiencies of computations performed by an array core 110 because most, if not all, of the data that the array core 110 is scheduled to process is located immediately next to the processing circuitry (e.g., one or more MACs, ALU, etc.) of the array core 110 . For instance, when implementing image processing by the integrated circuit 100 or related system using a neural network algorithm(s) or application(s) (e.g., convolutional neural network algorithms or the like), the large register file 112 of an array core may function to enable a storage of all the image data required for processing an entire image. Accordingly, a majority, most or if not, all layer data of a neural network implementation (or similar compute-intensive application) may be stored locally in the large register file 112 of an array core 110 with the exception of weights or coefficients of the neural network algorithm(s), in some embodiments. Accordingly, this allows for optimal utilization of the computing and/or processing elements (e.g., the one or more MACs and ALU) of an array core 110 by enabling an array core 110 to constantly churn data of the register file 112 and further, limiting the fetching and loading of data from an off-array core data source (e.g., main memory, periphery memory, etc.)
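Stripped of the patentese, the abstract describes classic double ("ping-pong") buffering: compute out of one half of the on-chip buffer while the DMA engine fills the other half for the next layer. A minimal sketch of the idea (the function names and the threading model here are mine, not the patent's):

```python
import threading

def run_layers(layers, dma_load, compute):
    """Ping-pong over two sub-regions of the on-chip buffer: while the core
    computes on one half, DMA fills the other half for the impending layer."""
    buffers = [bytearray(1024), bytearray(1024)]  # two distinct sub-regions
    dma_load(buffers[0], layers[0])               # prime the first half
    for i, layer in enumerate(layers):
        cur = buffers[i % 2]
        nxt = buffers[(i + 1) % 2]
        t = None
        if i + 1 < len(layers):
            # start the transfer for the impending computation...
            t = threading.Thread(target=dma_load, args=(nxt, layers[i + 1]))
            t.start()
        compute(cur, layer)                       # ...while the leading computation runs
        if t:
            t.join()                              # both must finish before the halves swap
```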
 
  • Like
  • Fire
Reactions: 23 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers ,

Could be just my over-exuberant imagination.......... but I'm thinking we may see a decent rise in the share price today.

Seeing a bit of compression happening.

🙂.

Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Stockbob

Regular
IBM came out of stealth mode today with NorthPole, an extension of TrueNorth…



19 Oct 2023
News
6 minute read

A new chip architecture points to faster, more energy-efficient AI

A new chip prototype from IBM Research’s lab in California, long in the making, has the potential to upend how and where AI is used efficiently.

We’re in the midst of a Cambrian explosion in AI. Over the last decade, AI has gone from theory and small tests to enterprise-scale use cases. But the hardware used to run AI systems, although increasingly powerful, was not designed with today’s AI in mind. As AI systems scale, the costs skyrocket. And Moore’s Law, the observation that the density of transistors in processors doubles roughly every two years, has slowed.

But new research out of IBM Research’s lab in Almaden, California, nearly two decades in the making, has the potential to drastically shift how we can efficiently scale up powerful AI hardware systems.

Since the birth of the semiconductor industry, computer chips have primarily followed the same basic structure, where the processing units and the memory storing the information to be processed are stored discretely. While this structure has allowed for simpler designs that have been able to scale well over the decades, it’s created what’s called the von Neumann bottleneck, where it takes time and energy to continually shuffle data back and forth between memory, processing, and any other devices within a chip. The work by IBM Research’s Dharmendra Modha and his colleagues aims to change this, taking inspiration from how the brain computes. “It forges a completely different path from the von Neumann architecture,” according to Modha.

Over the last eight years, Modha has been working on a new type of digital AI chip for neural inference, which he calls NorthPole. It’s an extension of TrueNorth, the last brain-inspired chip that Modha worked on prior to 2014. In tests on the popular ResNet-50 image recognition and YOLOv4 object detection models, the new prototype device has demonstrated higher energy efficiency, higher space efficiency, and lower latency than any other chip currently on the market, and is roughly 4,000 times faster than TrueNorth.

The first promising set of results from NorthPole chips was published today in Science. NorthPole is a breakthrough in chip architecture that delivers massive improvements in energy, space, and time efficiencies, according to Modha.

Using the ResNet-50 model as a benchmark, NorthPole is considerably more efficient than common 12-nm GPUs and 14-nm CPUs. (NorthPole itself is built on 12-nm node processing technology.) In both cases, NorthPole is 25 times more energy efficient in terms of the number of frames interpreted per joule of power required. NorthPole also outperformed in latency, as well as in the space required to compute, measured in frames interpreted per second per billion transistors required. According to Modha, on ResNet-50, NorthPole outperforms all major prevalent architectures — even those that use more advanced technology processes, such as a GPU implemented using a 4 nm process.

How does it manage to compute with so much more efficiency than existing chips? One of the biggest differences with NorthPole is that all of the memory for the device is on the chip itself, rather than connected separately. Without that von Neumann bottleneck, the chip can carry out AI inferencing considerably faster than other chips already on the market. NorthPole was fabricated with a 12-nm node process and contains 22 billion transistors in 800 square millimeters. It has 256 cores and can perform 2,048 operations per core per cycle at 8-bit precision, with the potential to double and quadruple the number of operations at 4-bit and 2-bit precision, respectively. “It’s an entire network on a chip,” Modha said.

The NorthPole chip on a PCIe card.
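From the figures quoted above you can back out NorthPole's peak throughput; the article doesn't state the clock frequency, so it's left as a parameter here (the 400 MHz below is purely a placeholder, not a published spec):

```python
CORES = 256
OPS_PER_CORE_PER_CYCLE = 2048      # at 8-bit precision, per the article

def peak_tops(clock_ghz, precision_bits=8):
    """Peak tera-ops/sec; the article says ops double at 4-bit
    and quadruple at 2-bit precision."""
    scale = {8: 1, 4: 2, 2: 4}[precision_bits]
    ops_per_cycle = CORES * OPS_PER_CORE_PER_CYCLE * scale  # 524,288 at 8-bit
    return ops_per_cycle * clock_ghz * 1e9 / 1e12

print(peak_tops(0.4))                    # ~210 TOPS at 8-bit, placeholder clock
print(peak_tops(0.4, precision_bits=2))  # ~839 TOPS at 2-bit
```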

“Architecturally, NorthPole blurs the boundary between compute and memory,” Modha said. “At the level of individual cores, NorthPole appears as memory-near-compute, and from outside the chip, at the level of input-output, it appears as an active memory.” This makes NorthPole easy to integrate in systems and significantly reduces load on the host machine.

But the biggest advantage of NorthPole is also a constraint: it can only easily pull from the memory it has onboard. All of the speedups that are possible on the chip would be undercut if it had to access information from another place.

Via an approach called scale-out, NorthPole can actually support larger neural networks by breaking them down into smaller sub-networks that fit within NorthPole’s model memory, and connecting these sub-networks together on multiple NorthPole chips. So while there is ample memory on a NorthPole (or collectively on a set of NorthPoles) for many of the models that would be useful for specific applications, this chip is not meant to be a jack of all trades. “We can’t run GPT-4 on this, but we could serve many of the models enterprises need,” Modha said. “And, of course, NorthPole is only for inferencing.”

This efficiency means that the device also doesn’t need bulky liquid-cooling systems to run — fans and heat sinks are more than enough — meaning that it could be deployed in some rather small spaces.
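The scale-out approach described above is, at heart, a greedy partitioning of a network into chip-sized pieces. A toy sketch of the idea (the layer sizes and the per-chip memory budget below are made-up illustration values, not NorthPole specs):

```python
def partition(layer_sizes_mb, chip_memory_mb):
    """Greedily split a network's layers into sub-networks that each
    fit within one chip's on-chip model memory."""
    chips, current, used = [], [], 0
    for size in layer_sizes_mb:
        if size > chip_memory_mb:
            raise ValueError("a single layer exceeds one chip's memory")
        if used + size > chip_memory_mb:
            chips.append(current)
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        chips.append(current)
    return chips

# e.g. a 9-layer model against a hypothetical 224 MB model memory per chip
print(partition([40, 60, 80, 100, 50, 70, 90, 30, 20], 224))
# -> [[40, 60, 80], [100, 50, 70], [90, 30, 20]]: three chips, connected in series
```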


Potential applications for NorthPole

While research into the NorthPole chip is still ongoing, its structure lends itself to emerging AI use cases, as well as more well-established ones.

In testing, the NorthPole team focused primarily on computer vision-related uses, in part because funding for the project came from the U.S. Department of Defense. Some of the primary applications in consideration were detection, image segmentation, and video classification. But it was also tested in other arenas, such as natural language processing (on the encoder-only BERT model) and speech recognition (on the DeepSpeech2 model). The team is currently exploring mapping decoder-only large language models to NorthPole scale-out systems.

When you think of these AI tasks, all sorts of fantastical use cases spring to mind, from autonomous vehicles to robotics, digital assistants, or spatial computing. Many sorts of edge applications that require massive amounts of data processing in real time could be well-suited for NorthPole. For example, it could potentially be the sort of device that’s needed to move autonomous vehicles from machines that require set maps and routes to operate on a small scale, to ones that can think and react to the rare edge-case situations that make navigating in the real world so challenging even for proficient human drivers. These sorts of edge cases are the exact sweet spot for future NorthPole applications. NorthPole could enable satellites that monitor agriculture and manage wildlife populations, monitor vehicles and freight for safer and less congested roads, operate robots safely, and detect cyber threats for safer businesses.

What’s next

This is just the start of the work for Modha on NorthPole. The current state of the art for CPUs is 3 nm — and IBM itself is already years into research on 2 nm nodes. That means there’s a handful of generations of chip processing technologies NorthPole could be implemented on, in addition to fundamental architectural innovations, to keep finding efficiency and performance gains.

Modha, center, with most of the team working on NorthPole.

But for Modha, this is just one important milestone along a continuum that has dominated the last 19 years of his professional career. He’s been working on digital brain-inspired chips throughout that time, knowing that the brain is the most energy-efficient processor we know, and searching for ways to replicate that digitally. TrueNorth was fully inspired by the structures of neurons in the brain — and had as many digital “synapses” in it as the brain of a bee. But sitting on a park bench in 2015 in San Francisco, Modha said he was thinking through his work to date. He had the belief that there was something in marrying the best of traditional processing devices with the structure of processing in the brain, where memory and processing are interspersed throughout the brain. The answer was “brain-inspired computing, with silicon speed,” according to Modha.

Over the next eight years, Modha and his colleagues were single-minded and hermetic in their goal of turning this vision into a reality. Toiling inconspicuously in Almaden, the team didn’t give any lectures or publish any papers on their work until this year. Each person brought different skills and perspectives, yet everyone collaborated so that the team’s contribution as a whole was much greater than the sum of its parts. Now, the plan is to show what NorthPole can do, while exploring how to translate the designs into smaller chip production processes and further exploring the architectural possibilities.

This work stemmed from a simple idea — how can we make computers that work like the brain? — and after years of fundamental research, it has arrived at an answer. Something like this is really only possible today at a place like IBM Research, where there is the time and space to explore the big questions in computing, and where they can take us. “NorthPole is a faint representation of the brain in the mirror of a silicon wafer,” Modha said.


Here is a 61-page PDF file for the techies…





View attachment 47481
Great find FP. I'm not nearly qualified to dissect this, but for a layman, if anything it just cements what the folks at BrainChip have achieved with peanuts compared to the Goliaths:

1) They say it's highly efficient at 12nm, while Akida can be built on more cost-efficient processes (although Anil has said we could go down to 7nm if necessary).
2) It says there's potential to double and quadruple with 4-bit and 2-bit precision, while, thanks to the genius of the team, Akida is already capable of this.
3) It says this is not meant to be a jack of all trades - Akida is.
4) Research into NorthPole is still ongoing.
5) I'm assuming it's not available as IP (I could be wrong, or TrueNorth was never meant to be available as IP). And note they say heat sinks and fans are more than enough - for the edge AI market where BRN operates, this is critical.
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Diogenese

Top 20
IBM came out of stealth mode today with NorthPole, an extension of TrueNorth… […]


For those interested in seeing how IBM Neural Inference Processor works, there's a group of patents here:

https://worldwide.espacenet.com/patent/search/family/072852599/publication/WO2021078486A1?q=nftxt = "ibm" AND nftxt = "neural inference"



Once again, a traditional processor maker from last millennium is addicted to calorie-laden, nutrition-deficient MACs fast food and processed instructions.


US11537859B2 Flexible precision neural inference processing unit

WO2019207376A1 CENTRAL SCHEDULER AND INSTRUCTION DISPATCHER FOR A NEURAL INFERENCE PROCESSOR

I think Akida's lead just extended to 10 years.

Edit: It seems that IBM's employees are prevented from thinking outside the box by the Schwarzschild Radius formed by the company's size and history.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 109 users

IloveLamp

Top 20
For those interested in seeing how IBM Neural Inference Processor works, there's a group of patents here: […] I think Akida's lead just extended to 10 years.
1000006875.gif
 
  • Haha
  • Like
  • Fire
Reactions: 31 users

hotty4040

Regular
Oh for goodness sake, the guy asks a simple relevant question, then gets the usual flogging by mindless GIFs. Come on, guys, instead of childish GIFs, respond with some sensible answers.
Hi Tels61, relevant indeed. (The mind boggles, doesn't it?)

Akida is relevant too, isn't it - very much so, IMHO. These GIF thingies irritate a lot at times.

Akida Ballista

hotty...
 
  • Like
  • Love
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.
Chippers ,
America's Love of 'Yellowstone' Helps Launch Bull Riding as a Team Sport -  WSJ


HANG ON..... more hand chalk may be required....... giddy up

Regards ,
Esq
 
  • Like
  • Haha
  • Fire
Reactions: 17 users

Slade

Top 20
Tinky Winky Dance GIF by Teletubbies
 
  • Haha
  • Like
Reactions: 12 users

toasty

Regular
Chippers ,
America's Love of 'Yellowstone' Helps Launch Bull Riding as a Team Sport - WSJ


HANG ON..... more hand chalk may be required....... giddy up

Regards ,
Esq
Rather the hand chalk than the lube we've been subjected to lately...................:ROFLMAO:
 
  • Haha
  • Like
  • Fire
Reactions: 13 users

wilzy123

Founding Member
Rather the hand chalk than the lube we've been subjected to lately...................:ROFLMAO:


200w.gif



Please don't speak for others or share details of what you've been subjected to in your personal life with us. Ok, thank you.
 
  • Haha
  • Like
  • Sad
Reactions: 12 users

Slade

Top 20
NVISO’s website is looking very impressive.
“NVISO Neuro Model performance can be accelerated by an average of 3.67x using BrainChip Akida™ neuromorphic processor at 300MHz over a single core ARM Cortex A57 as found in a NVIDIA Jetson Nano (4GB) running at close to 5x the clock frequency. On a clock frequency normalization basis, this represents an acceleration of 18.1x.”
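That clock-normalized figure is just the raw speedup scaled by the clock ratio. A quick sanity check (the A57 clock below is my assumption, back-solved from NVISO's "close to 5x"):

```python
raw_speedup = 3.67      # Akida @ 300 MHz vs a single Cortex-A57 core
akida_mhz = 300
a57_mhz = 1480          # assumed: "close to 5x the clock frequency"

normalized = raw_speedup * (a57_mhz / akida_mhz)
print(round(normalized, 1))  # -> 18.1, matching NVISO's quoted figure
```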
 
  • Like
  • Fire
  • Love
Reactions: 96 users
For those interested in seeing how IBM Neural Inference Processor works, there's a group of patents here: […] I think Akida's lead just extended to 10 years.
Great find @Frangipani 👍

I've been wondering for a while if IBM would do anything with TrueNorth, as it is often mentioned along with Loihi 2 as competition to AKIDA (in a strictly neuromorphic sense).

I didn't have to read much, with my limited understanding, to see that they have not met the challenge..

"I think Akida's lead just extended to 10 years"

Gold comment and assessment, coming from someone who knows more than most here about the ins and outs of tech..
Even if it is obviously partly in jest..

One thing I noticed is that at 12nm it's still a damn big chip..

"NorthPole was fabricated with a 12-nm node process, and contains 22 billion transistors in 800 square millimeters"

Still research too..

Love the new name too 🙄..
Great imagination, but I think SouthPole would have been more appropriate 🤣..
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 34 users

Boab

I wish I could paint like Vincent
  • Like
  • Fire
  • Haha
Reactions: 6 users

IloveLamp

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 14 users
Great find @Frangipani 👍 […] One thing I noticed is that at 12nm it's still a damn big chip..

"NorthPole was fabricated with a 12-nm node process, and contains 22 billion transistors in 800 square millimeters"
The chip works out to be about an inch square in a 12nm process.
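For the record, the maths: a square 800 mm² die is √800 ≈ 28.3 mm on a side, i.e. about 1.1 inches. A one-liner to check it:

```python
import math

area_mm2 = 800                    # from the IBM article
side_mm = math.sqrt(area_mm2)     # assumes a square die
print(side_mm / 25.4)             # -> ~1.11 inches per side
```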

I hope not too many saw my previous shitty maths 😬..
 
  • Haha
  • Like
  • Fire
Reactions: 7 users
  • Like
  • Thinking
  • Love
Reactions: 26 users

schuey

Regular
Chippers ,
America's Love of 'Yellowstone' Helps Launch Bull Riding as a Team Sport - WSJ


HANG ON..... more hand chalk may be required....... giddy up

Regards ,
Esq
No hand chalk there, mate. Resin for grip.
 
  • Like
Reactions: 3 users

Perhaps

Regular
A short look at IBM:

There seems to be a significant change in IBM's research. As I wrote yesterday, the analog design of TrueNorth had a lot of issues making it unsuitable for commercial use. In mixed architectures with CNNs there's a lack of compatibility because of the clock-free functionality. The analog design is also very sensitive to higher temperatures, which makes the processes run wild; without additional cooling it's impossible to match with a CPU/GPU. Also, no foundry has the facilities to run mass production of analog chips, and even if it were possible, it would lead to a wide quality spread in production.

The new IBM NorthPole chip is a digital design, so it's not an extension of TrueNorth but a new attempt. This suggests many years of research at IBM have been wasted. The fresh start throws them back in the timeline to roughly where Intel was five years ago. As Intel is still in the research phase, there is no reason to worry about IBM.

To complete this, here's an older article from the BrainChip site:


 
  • Like
  • Fire
  • Love
Reactions: 35 users