BRN Discussion Ongoing

Hi DB,

Our N:M coding is applied to the activation signal, a time-varying signal. The first N of M signals are processed and the remainder discarded. This is because the strongest signals from the optic nerves arrive before weaker signals (the stronger input signal causes the nerve to reach its firing threshold earlier). The strongest signals carry the most relevant information.

Renesas apply N:M coding to the static weights stored in memory, so there is no time element. Renesas base their selection on the magnitude of the stored signal. It's not about which arrives first. It's about which is strongest.

It's similar but different (and derivative).
So basically they've found a workaround to achieve a similar result to AKIDA's N-of-M coding.
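
To make the distinction concrete, here is a rough Python sketch of the two selection principles as described above. It's purely illustrative (the function names and numbers are made up, and it is not BrainChip's or Renesas's actual implementation): the first function picks events by arrival time, the second picks stored weights by magnitude.

```python
import numpy as np

# Purely illustrative sketch; the function names and numbers are hypothetical,
# not BrainChip's or Renesas's actual implementation.

def temporal_n_of_m(spike_times, n):
    """Rank-order style selection on a time-varying signal: keep the N events
    that arrive first (stronger inputs cross the firing threshold earlier)
    and discard the rest."""
    order = np.argsort(spike_times)               # earliest arrivals first
    keep = np.zeros(len(spike_times), dtype=bool)
    keep[order[:n]] = True
    return keep                                   # mask of the N "winning" events

def magnitude_n_of_m(weights, n, m):
    """Static selection on stored weights: in each block of M values, keep the
    N largest magnitudes and zero the remainder. No time element at all."""
    blocks = weights.copy().reshape(-1, m)        # assumes len(weights) is a multiple of m
    for block in blocks:
        drop = np.argsort(np.abs(block))[:m - n]  # indices of the smallest magnitudes
        block[drop] = 0.0
    return blocks.reshape(weights.shape)

# Example with 8 made-up inputs.
arrival_ms = np.array([3.1, 0.4, 2.2, 0.9, 5.0, 1.7, 4.3, 2.8])
print(temporal_n_of_m(arrival_ms, n=2))           # True only for the two earliest spikes

stored_w = np.array([0.05, -0.80, 0.10, 0.62, -0.02, 0.33, -0.91, 0.07])
print(magnitude_n_of_m(stored_w, n=2, m=4))       # two largest-magnitude weights kept per block of 4
```

Same N-of-M idea in both cases; the difference is the selection key: arrival order for a time-varying activation signal versus stored magnitude for static weights.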

That's only one aspect of what makes AKIDA special though. But considering their use case for our IP was limited (2 nodes), it looks like this may be the reason why Renesas "may" not have proceeded with producing chips with our IP (the "tape-out" was supposed to be at the end of 2023?).
Edit: end of 2022.

Sorry folks... but I'm the kind of person who will turn over stones, even though they may have a scorpion or centipede underneath...
(as a kid, I was actually looking for lizards).
 
Last edited:
  • Like
  • Sad
  • Fire
Reactions: 16 users

Slade

Top 20
What is your understanding of the term ‘tape out’?
 
  • Haha
  • Like
Reactions: 2 users
What is your understanding of the term ‘tape out’?
Tape-out is actually a redundant term, from what they used to do in preparation for making chips (the finished design data was once shipped out on magnetic tape).

Industry still uses the term though.
(I learnt that from LDN).

Without Googling, my simple explanation is that it's preparing the "masks" for the chips.

It's one of the first steps of producing a chip; it isn't "producing" the chip, and taping out a chip doesn't guarantee a chip will actually be produced.
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

Guzzi62

Regular
Article from Dec 2022.

Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.




I don't know if it actually happened; the article is quite interesting to read.

Quote:

Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.

The take up of the technology depends on the market adoption, he says.

“We want to see where the market reception is the highest, that is what determines whether we bring things in house or through a third party.”
 
  • Like
  • Love
  • Fire
Reactions: 14 users

Slade

Top 20
The reason I asked is because the tape-out was in Dec 2022, not the end of 2023. At this stage I am not concerned that they haven't released the chip. With time needed for fabrication and testing, along with a timed release to the market, I am not surprised that we haven't heard any news yet. We saw how long it took for Akida between taping out and getting the engineering chips to our EAPs. Renesas have been releasing a lot of new products lately and will want the market to absorb them before marketing yet another chip. That's not to say that some established Renesas customers haven't already got engineering samples in their hands. IMO
 
Last edited:
  • Like
  • Love
Reactions: 17 users
From the same article posted above: I mulled over this comment when it came out, wondering whether the "third party" reference was a client or a foundry, and also noting the reference to a "device", which implies a product of some sort, not just the chip, IMO.

If it's a client, then we wouldn't see anything from Renesas, I suspect.

“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.

It was also interesting timing, as Renesas said they were doing 22nm CMOS, and we came out around Jan saying we'd taped out (past tense) the 1500 in 22nm, but in FDSOI, and there was an article by Nick Flaherty at the time saying we were working with Renesas to use the 1500 IP in an MCU, which would be a path for the Minsky AI engine in industrial use cases.
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Yeah, I thought end of '23 didn't sound right...

Renesas are huge, so who knows what their plans are...

According to the recent BrainChip presentation, there is another physical chip with our IP in it, other than AKD1000 and AKD1500, but nobody seems too interested, curious, or excited about it...

[Attached image: slide from the recent BrainChip presentation]


I've already made the observation that not only did they "highlight" the customer SoC, like "Look at this, folks!", but also that AKD2000 isn't there, because it's not physical yet.

This isn't a slide showing theoretical products, but actual physical integrated circuits, in my opinion.
 
  • Like
  • Fire
  • Love
Reactions: 35 users

TopCat

Regular

Probably old, but I hadn’t seen this before.​

Cupcake Edge AI Server in Full Production​

NEWS PROVIDED BY
EIN Presswire
Feb 21, 2024, 12:24 PM ET
Unigen Corporation Announces Milestone Achievement
NEWARK, CALIFORNIA, UNITED STATES, February 21, 2024 /EINPresswire.com/ -- Unigen Corporation proudly announces the successful production launch of its highly anticipated Cupcake Edge AI Server. The first units have been produced at our cutting-edge facilities in Hanoi, Vietnam, and Penang, Malaysia, marking a significant milestone in Unigen's commitment to delivering AI solutions to the global market.

Certified for compliance with FCC, CE, VCCI, KCC, and WEEE standards, Cupcake has successfully completed rigorous testing protocols, ensuring its adherence to the highest industry regulations and quality benchmarks. The initial production units have been delivered from our state-of-the-art facilities in Vietnam and Malaysia. With mass production tooling now in place, we are fully equipped to meet the escalating demand for Cupcake, empowering businesses worldwide with unparalleled AI capabilities.

"Bringing our Cupcake Edge AI Server to life has been an exciting journey for us at Unigen," shared Paul W. Heng, Unigen’s founder and CEO. "It's been a company-wide effort to quickly bring groundbreaking technology to the market. By seamlessly integrating every aspect of Cupcake, from the motherboard to the enclosures, and collaborating closely with our Silicon partners, we’re finally able to see our customers receiving the fruits of our effort."

About Cupcake
Unigen’s Cupcake Edge AI Server delivers a reliable, high-performance, low-latency, low-power platform for Machine Learning and Inference AI in a compact and rugged enclosure. Cupcake integrates a flexible combination of I/O Interfaces and expansion capabilities to capture and process video and multiple types of signals through its Power-Over-Ethernet (POE) ports, and then delivers the processed data to the client either over a wired or wireless network. Neural Networks are supported by the leading ISV providers allowing for a highly customizable solution for multiple applications.
Cupcake is a small form factor fanless design in a ruggedized case perfect for environments where Visual Security is important (e.g., secure buildings, transportation, warehouses, or public spaces). External interfaces included are Ethernet, POE, HDMI, USB 3.0, USB Type-C, CANbus, RS232, SDCard, antennas for WIFI, and internal interfaces for optional M.2 SATA III, M.2 NVMe and SO-DIMMs. The flexibility in IO renders the Cupcake Edge AI Server suitable for multiple applications and markets.


(my bold)
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 53 users

cosors

👀
Last edited:
  • Like
  • Love
  • Fire
Reactions: 37 users

Boab

I wish I could paint like Vincent
  • Like
  • Love
  • Thinking
Reactions: 12 users

GStocks123

Regular
  • Like
  • Love
  • Fire
Reactions: 42 users

Esq.111

Fascinatingly Intuitive.
It's tomorrow-


3... 2... 0... and 🚀
Morning Cosors & Fellow Chippers,

Two hours till LIFTOFF.

Regards,
Esq.
 
  • Like
  • Fire
  • Love
Reactions: 41 users

IloveLamp

Top 20
  • Like
  • Love
  • Haha
Reactions: 5 users

IloveLamp

Top 20
  • Like
  • Love
  • Thinking
Reactions: 10 users

MDhere

Regular
  • Like
  • Haha
  • Love
Reactions: 5 users

Tothemoon24

Top 20
Perhaps old news...

February 19, 2024
by Michelle Cometa

Computer engineering faculty member joins national initiative on neuromorphic computing​

Cory Merkel contributes expertise in system development and testing strategies for the Center of Neuromorphic Computing under Extreme Environments​

Cory Merkel, assistant professor of computer engineering at Rochester Institute of Technology, will represent the university as one of five collegiate partners in the new Center of Neuromorphic Computing under Extreme Environments, also referred to as CONCRETE.
Based at the University of Southern California, center partners will build neuromorphic computing devices and software that can be used in extreme application domains from intense temperatures to dangerous conditions, such as radiation or highly corrosive elements.
Each university will bring its own expertise to the field of neuromorphic computing, with Merkel’s research group bringing its experience in development and testing methodologies for the new devices, circuits, and materials being used to build neuromorphic computing systems.
“For a long time, neuromorphic computing has been at the fundamental stage, but now we are thinking about how to scale it up. In this project, we’re interested in overcoming scaling challenges for neuromorphic systems that are exposed to extreme environments, especially how the behavior of the system changes as a result of these conditions,” said Merkel.
Neuromorphic computing, sometimes referred to as brain-inspired computing, is a growing field of artificial intelligence focusing on developing computing infrastructure. The physical, neural network architecture and its complex processing mechanisms are inspired by natural learning mechanisms in the human brain—its evolutionary ability to process data and signals efficiently. It is a $47 million global industry and expected to increase to $1 billion by 2028, according to industry research and the American Institute of Physics, because of increased demands from fields such as automotive, healthcare and defense.
“The community is looking at scalability as a challenge, but if we want people to pay attention to neuromorphic computing, we have to demonstrate its utility in large-scale applications and applying our techniques to real-world problems,” said Merkel, director of RIT’s Brain Lab in the Kate Gleason College of Engineering. Work in the lab is advancing the security of computing systems and developing bio-inspired artificial intelligence technologies. He also is one of the inaugural members of the BrainChip University AI Accelerator Program and a former researcher with the Air Force Research Lab.
Funded by the Air Force Office of Scientific Research and the Air Force Research Laboratory, RIT joins center leader University of Southern California and partners University of California-Los Angeles, Duke University, and University of Texas-San Antonio for the five-year, $5 million initiative. Work in developing the advanced computing system will also entail supporting university-Air Force workforce initiatives to educate the next generation workforce.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

chapman89

Founding Member
Morning Cosors & Fellow Chippers ,

Two hours till LIFTOFF.

Regards,
Esq.
Here is the SpaceX link as the other one isn’t working-

 
  • Like
  • Love
  • Fire
Reactions: 28 users

Learning

Learning to the Top 🕵‍♂️
  • Like
  • Love
  • Fire
Reactions: 73 users

davidfitz

Regular
Nice write-up on the Wevolver site re the edge AI box.

The exposure we are getting at the moment is like nothing I have seen in the 9 years I have been holding!

 
  • Like
  • Fire
  • Love
Reactions: 33 users