BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Following @Fullmoonfever’s discovery yesterday of Fernando Sevilla Martinez’s (Data Science Specialist at Volkswagen) recent GitHub project - demonstrating Akida-powered neuromorphic processing for V2X and federated learning prototypes - there may be broader implications worth noting.

The article below, published a month ago, discusses how Verizon is launching its first commercial V2X platform and has signed on Volkswagen, the Arizona Commerce Authority, the Delaware Department of Transportation and Rutgers University as its first customers.

The article suggests that Volkswagen could possibly incorporate V2X in next year's models.

In August 2024, the Biden administration announced a plan to deploy V2X nationwide by 2036, which "means that the highways of the near future could be dotted with 5.9 GHz radio transceivers that communicate with cellular radios inside your car and update vehicles as they're driving to prevent accidents."

Verizon has been developing V2X capabilities for some time. In 2022, Verizon and Cisco collaborated on a successful demonstration in Las Vegas, showing that cellular and mobile edge compute (MEC) technology can enable autonomous driving solutions without the need for physical roadside units to extend radio signals for C-V2X communication.

It’s also worth noting that BrainChip and Cisco partnered in 2016, specifically around neuromorphic video-analytics demos via Cisco’s IoE Innovation Center. While not directly connected to current V2X initiatives, that collaboration could become relevant again if neuromorphic approaches gain traction.

View attachment 88451






In addition to the above, we also have the following snippet...


Screenshot 2025-07-13 at 8.59.12 pm.png





Screenshot 2025-07-13 at 8.51.05 pm.png


 
  • Like
  • Love
  • Fire
Reactions: 25 users

Frangipani

Top 20
Keep in mind that Raúl Parada Medina has a telecommunications background and works as a Senior Researcher for CTTC in Castelldefels near Barcelona, the Centre Tecnològic de Telecomunicacions de Catalunya.

So when he describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, the emphasis is on “connected” rather than on “automobile”. Therefore, any upcoming research projects may not involve cars at all.

Just to give you an idea of the diverse topics Raúl Parada Medina has been working on over the past two years:



297C8D0E-8F65-4939-9C4A-C80322C7511C.jpeg
DF53A3E8-95BC-44BA-AE6A-7EF3179F566C.jpeg
05288B26-E4D4-4906-A054-40637138DAAC.jpeg
1D508558-9DAF-4ADA-B5E7-C8723AB4B0FA.jpeg
665C56BE-BA6D-470E-A329-737D88399E2D.jpeg
B952E996-2E03-4DE2-8B85-4A7234BFA543.jpeg
B770DC6E-0909-4905-AC95-0D5ACC1A95E3.jpeg
F9800809-2CAB-4420-8E27-E8D3B724129E.jpeg
BB22D2C0-49CE-4DA1-9E68-FA03CD53C819.jpeg
743B736D-14C3-448C-912B-856D825A01E5.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 29 users

Frangipani

Top 20
I believe this confirmation of an “active alliance” with BrainChip is new on the Tata Elxsi website?

Why Tata Elxsi?

(…)
  • Active alliances with Brainchip for neuromorphic computing and IISc Bangalore for edge cybersecurity.”




View attachment 88392

View attachment 88393 View attachment 88394 View attachment 88395
View attachment 88398 View attachment 88399

Despite our years of collaboration with TCS researchers and the above encouraging affirmation of an “active alliance” with Tata Elxsi, we should not ignore that TCS are also exploring other neuromorphic options for what will ultimately be their own clients.

And while a number of TCS patents do refer to Akida as an example of a neuromorphic processor that could be utilised, they always also refer to Loihi, as far as I'm aware.

A recent case in point for TCS Research's polyamory is the June 2025 paper "The Promise of Spiking Neural Networks for Ubiquitous Computing: A Survey and New Perspectives", co-authored by five Singapore Management University (SMU) researchers as well as Sounak Dey and Arpan Pal from TCS, both very familiar names to regular readers of this forum.

Although we know those two TCS researchers to be fans of Akida, they sadly did not express a preference for BrainChip's neuromorphic processor over those of our competitors in the paper below, published less than six weeks ago.
On the contrary, in their concluding "key takeaway" recommendations on neuromorphic hardware ("We make the following recommendations for readers with different needs considering neuromorphic hardware chipsets"), the seven co-authors do not even mention Akida.

Even more surprisingly, the section on Akida is factually incorrect:
- The AKD1500 is a first-generation reference chip and is not based on Akida 2.0, BrainChip's second-generation platform that supports TENNs and vision transformers.
- An AKD2000 reference chip does not (yet) exist - it may or may not materialise. At present, only Akida 2.0 IP is commercially available - not an actual silicon chip, as claimed by the paper's authors.
- The paper entirely ignores the ultra-low-power Akida Pico, which runs on less than 1 mW, is based on Akida 2.0, and was revealed by our company back in October 2024.
It is highly unlikely that this (possibly revised) version of the paper, published on 1 June 2025, was submitted to arXiv prior to BrainChip's announcement of Akida Pico, and we can safely assume Sounak Dey and Arpan Pal were aware of that October 2024 announcement (unlike, perhaps, their SMU co-authors).

One could argue that Akida Pico is not mentioned because an actual Akida Pico chip is not yet commercially available, given the authors state

“5.2 Neuromorphic Hardware
In this subsection, we summarize the latest commercially available neuromorphic hardware chipsets, highlighting their capabilities and development support for building and deploying spiking neural networks.”,

which, however, begs the question why Loihi 2 is listed then, as it was always conceived as a research chip and is not commercially available either. In the paragraph on Loihi 2, the authors correctly state that "this neuromorphic research chipset is available only through the Intel Neuromorphic Research Community (INRC)."

Given that Sounak Dey and Arpan Pal co-authored this paper, the above inaccuracies are bewildering, to say the least. Did the two TCS researchers, who both have firsthand experience with Akida, contribute to only part of this paper and not proofread the final version before it was submitted?

Either way not a good look…




65AF922B-BF11-4B8B-88CB-B56AAB53AE02.jpeg


(…)

18F94192-34EE-433B-A749-C1780323F6D3.jpeg
5CAB37BF-8FC9-4CE9-B46B-74D720DE5373.jpeg
9DCDFA61-5266-4C51-A826-AAE07C6E6073.jpeg

(…)

6779A981-B39B-4282-B09E-018FCB2097BA.jpeg
 
Last edited:
  • Like
  • Sad
  • Fire
Reactions: 24 users

MegaportX

Regular
  • Like
  • Fire
  • Love
Reactions: 11 users

7für7

Top 20
It’s been so quiet from the company side lately…

Almost eerie.

I’m starting to think they’re mad at us and just don’t want to talk anymore…

Fine. I’ll go first:


Dear BrainChip, we’re sorry.
We’re sorry for asking so much of you.
Sorry for being stressed out that the share price is still stuck in traffic.

But try to understand us, too:

We paid out of our own pockets. No insider bonuses. No free millions of shares. Just hope, patience… and a sprinkle of desperation.

So let’s just move on, shall we?

After all – philosophically speaking – we’re just artificial intelligence too.
(Emotional beta version, though.)

Sorry Cry Me A River GIF by Offline Granny!
 
  • Like
Reactions: 4 users

7für7

Top 20
I keep having doubts…

“What if you picked the wrong horse? What if it botched the start and collapses right before the finish line?”

But then I remind myself…
This isn’t a horse race…. It’s a rollercoaster experience…. A pricy one….
 
  • Thinking
Reactions: 1 users

Tezza

Regular
I keep having doubts…

“What if you picked the wrong horse? What if it botched the start and collapses right before the finish line?”

But then I remind myself…
This isn’t a horse race…. It’s a rollercoaster experience…. A pricy one….
I keep thinking of Beta or BlackBerry. I just hope we are VHS or iPhone.
 
Last edited:
  • Like
  • Sad
  • Haha
Reactions: 4 users

Gazzafish

Regular
Probably silence because they are busy negotiating exclusive rights to Akida 2.0 with Apple for a fixed 3-year period… 🤪 Disclaimer: this is me dreaming and completely made up… but I wonder
 
  • Like
  • Haha
  • Love
Reactions: 6 users

Rach2512

Regular


Sorry if already posted.

Screenshot_20250714_135922_Samsung Internet.jpg
Screenshot_20250714_135943_Samsung Internet.jpg
Screenshot_20250714_135953_Samsung Internet.jpg
Screenshot_20250714_140001_Samsung Internet.jpg
Screenshot_20250714_140015_Samsung Internet.jpg
Screenshot_20250714_140023_Samsung Internet.jpg
 
  • Like
  • Fire
  • Love
Reactions: 23 users
I keep having doubts…

“What if you picked the wrong horse? What if it botched the start and collapses right before the finish line?”

But then I remind myself…
This isn’t a horse race…. It’s a rollercoaster experience…. A pricy one….
Most expensive one in town
Let me on
 

Rach2512

Regular
  • Like
  • Thinking
  • Love
Reactions: 6 users
Interesting channel for BRN, hopefully they are speaking with them soon 🤔


 
  • Like
  • Love
  • Thinking
Reactions: 6 users

Frangipani

Top 20

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision​

This use case simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.

(…) Given that Raúl Parada Medina describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, I had already suggested a connection to the Volkswagen Group via SEAT or CUPRA at the time.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424590

View attachment 88422
View attachment 88424





View attachment 88425

Extremely likely the same Raúl Parada Medina whom you recently spotted asking for help with Akida in the DeGirum Community - very disappointingly, no one from our company appears to have been willing to help solve this problem for more than 3 months!

Why promote DeGirum for developers wanting to work with Akida and then not give assistance when needed? Not a good look, if we are to believe shashi from the DeGirum team, who wrote on February 12 that he would forward Parada's request to the BrainChip team but apparently never got a reply.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-461608

View attachment 88428

The issue continued until it was eventually resolved on 27 May by another DeGirum team member, stephan-degirum (presumably Stephan Sokolov, who recently demonstrated running the DeGirum PySDK directly on BrainChip hardware at the 2025 Embedded Vision Summit - see the video here: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-469037)





raul.parada.medina
May 27

Hi @alex and @shashi for your reply, it looks there is no update from Brianchip in this sense. Please, could you tell me how to upload this model in the platform? Age estimation (regression) example — Akida Examples documentation. Thanks!

1 Reply



shashiDeGirum Team
May 27

@stephan-degirum
Can you please help @raul.parada.medina ?




stephan-degirum
raul.parada.medina
May 27

Hello @raul.parada.medina , conversion of a model from BrainChip’s model zoo into our format is straightforward:
Once you have an Akida model object, like Step 4 in the example:
model_akida = convert(model_quantized_keras)

You’ll need to map the model to your device and then convert it to a compatible binary:


from akida import devices

# Map model onto your Akida device
dev = devices()[0]
try:
    model_akida.map(dev, hw_only=True)
except RuntimeError:
    model_akida.map(dev, hw_only=False)

# Extract the C++-compatible program blob
blob = model_akida.sequences[0].program
with open("model_cxx.fbz", "wb") as f:
    f.write(blob)

print("C++-compatible model written to model_cxx.fbz")

Note: You want to be sure that the model is supported on your Akida device. There are many models on the BrainChip model zoo that are not compatible with their “version 1 IP” devices.
If your device is a v1 device, you’ll need to add a set_akida_version guard:

from cnn2snn import convert, set_akida_version, AkidaVersion

# Convert the model within the v1 compatibility context
with set_akida_version(AkidaVersion.v1):
    model_akida = convert(model_quantized_keras)
model_akida.summary()

from akida import devices
# Map model onto your Akida device
# ... (see above)

for more information on v1/v2 model compatibility please see their docs: Akida models zoo — Akida Examples documentation

Once you have a model binary blob created:

Create a model JSON file adjacent to the blob by following Model JSON Structure | DeGirum Docs or by looking at existing BrainChip models on our AI Hub for reference: https://hub.degirum.com/degirum/brainchip
  • ModelPath is your binary model file
  • RuntimeAgent is AKIDA
  • DeviceType is the middle output of akida devices in all caps. For example, if akida devices shows PCIe/NSoC_v2/0, you put NSOC_V2.
Your JSON + binary model blob are now compatible with PySDK. Try running the inference on your device locally by specifying the full path to the JSON as a zoo_url, see: PySDK Package | DeGirum Docs
"For local AI hardware inferences you specify the zoo_url parameter as either a path to a local model zoo directory, or a path to the model's .json configuration file."
You can then zip them up and upload them to your model zoo in our AI Hub.
Let me know if this helped.
P.S. we currently have v1 hardware in our cloud farm, and this model is the face estimation model for NSoC_v2:
https://hub.degirum.com/degirum/brainchip/vgg_regress_age_utkface--32x32_quant_akida_NSoC_1
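
For anyone wanting to try that last step themselves, here is a minimal sketch of what loading such a locally converted model through the DeGirum PySDK might look like, based on the zoo_url mechanism described above. The folder, file and model names, the test image and the exact dg.connect(dg.LOCAL, ...) call are my own illustrative assumptions, not Parada's actual setup:

import degirum as dg

# Illustrative sketch only - paths and the model name are assumptions.
# Assumes model_cxx.fbz and its model JSON (written per the steps above)
# both live in ./my_akida_zoo/
zoo = dg.connect(dg.LOCAL, zoo_url="./my_akida_zoo")  # local inference against a local model zoo folder
model = zoo.load_model("model_cxx")                   # model name = JSON file name without extension
result = model("face_32x32.png")                      # run the inference on the Akida PCIe board
print(result.results)

If that runs locally, zipping the JSON and the .fbz blob and uploading them to the AI Hub model zoo, as suggested above, should behave the same way.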


Anyway, as you had already noticed in your first post on this DeGirum enquiry, Raúl Parada Medina (assuming it is the same person, which I have no doubt about) and Fernando Sevilla Martínez are both co-authors of a paper on autonomous driving:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450543

View attachment 88429

In fact, they have co-published two papers on autonomous driving together with another researcher, Jordi Casas-Roma. He is director of the Master in Data Science at the Universitat Oberta de Catalunya, a Barcelona-based private online university - the same department where Fernando Sevilla Martínez got his Master's degree in 2022 before moving to Wolfsburg the following year; he now works as a data scientist at the headquarters of the Volkswagen Group.


View attachment 88430


View attachment 88426 View attachment 88427

After digging a little further, it looks more and more likely to me that CUPRA is the Spanish automobile manufacturer whose connected car project Raúl Parada Medina is currently involved in (cf. his LinkedIn profile).

Which in turn greatly heightens the probability that he and Fernando Sevilla Martínez (who works for the Volkswagen Group as a data scientist in the Volkswagen logistics data lake) have been collaborating once again, this time jointly experimenting on “networked neuromorphic AI for distributed intelligence” with the help of an Akida PCIe board paired with a Raspberry Pi 5. (https://github.com/SevillaFe/SNN_Akida_RPI5)

While the GitHub repository SNN_Akida_RPI5 is described very generally as "Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware", and one of its use cases (involving MQTT - Message Queuing Telemetry Transport) "supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics" - hence a very broad range of applications - one focus is evidently on V2X (= Vehicle-to-Everything) communication systems. Use case 4.2 in the GitHub repository demonstrates how "neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles."
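
To make the MQTT part a little more concrete: broadcasting such a neuromorphic detection event would conceptually look something like the following paho-mqtt sketch. This is my own illustration, not code from the SNN_Akida_RPI5 repository - the broker address, topic name and payload fields are all assumptions:

import json
import time

import paho.mqtt.publish as publish

# Illustrative sketch only - not taken from the SNN_Akida_RPI5 repo.
BROKER = "localhost"             # e.g. an edge gateway or roadside unit running an MQTT broker
TOPIC = "v2x/alerts/pedestrian"

# Pretend the Akida-based vision pipeline has just flagged a pedestrian
event = {
    "source": "vehicle_042",
    "event": "pedestrian_detected",
    "confidence": 0.93,
    "timestamp": time.time(),
}

# Publish the event; any nearby vehicle or infrastructure node subscribed
# to the topic receives the alert in near real time
publish.single(TOPIC, json.dumps(event), qos=1, hostname=BROKER)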

F7B8CD08-3233-4F94-B3D2-E228670574D6.jpeg



CUPRA is nowadays a standalone car brand owned by SEAT, a Spanish (Catalonian to be precise) automobile manufacturer headquartered in Martorell near Barcelona. In fact, CUPRA originated from SEAT’s motorsport division Cupra Racing. Both car brands are part of the Volkswagen Group.

CUPRA’s first EV, the CUPRA Born, introduced in 2021 and named after a Barcelona neighbourhood, is already equipped with Car2X technology as standard (see video below). Two more CUPRA models with Car2X functions have since been released: CUPRA Tavascan and CUPRA Terramar.

Broadly speaking, Car2X/V2X (often, but not always, used interchangeably - see the Telekom article below) stands for technologies that enable vehicles to communicate with one another and with their environment in real time. They help to prevent accidents by warning nearby vehicles of hazards ahead that are not yet visible, as V2X can "see" around corners and through obstacles within a radius of several hundred meters, connecting all users who have activated V2X (provided, of course, their vehicles support it) to a safety network.

“B. V2X technology

I. Principles

Your vehicle is equipped with V2X technology. If you activate the V2X technology, your vehicle can exchange important road traffic information, for example about accidents or traffic jams, with other road users or traffic infrastructure if they also support V2X technology. This makes your participation in road traffic even safer. When you log into the vehicle for the first time, you must check whether the V2X setting is right for you and you can deactivate V2X manually as needed.

Communication takes place directly between your vehicle and other road users or the traffic infrastructure within a close range of approximately 200 m to 800 m. This range can vary depending on the environment, such as in tunnels or in the city.

(…)


III. V2X functionalities

V2X can assist you in the following situations:

1. Warning of local hazards

The V2X function scans the range described above around your vehicle in order to inform you of relevant local hazards. To do this, driving information from other V2X users is received and analysed. For example, if a vehicle travelling in front initiates emergency braking and sends this information via V2X, your vehicle can display a warning message. Please note that your vehicle does not perform automatic driving interventions due to such warnings. In other words, it does not automatically initiate emergency braking, for example.

2. Supplement to adaptive cruise control

The V2X technology can supplement your vehicle's predictive sensor system (e.g. radar and camera systems) and detect traffic situations even more quickly to give you more time to react to them. With more precise information about a traffic situation, adaptive cruise control, for example, can respond to a tail end of a traffic jam in conjunction with the cruise control system and automatically adjust the speed. Other functions, such as manual lane change assistance, are also improved.

3. Other functionalities

Further V2X functions may be developed in future. We will inform you separately about data processing in connection with new V2X functions.

IV. Data exchange

If you activate the V2X technology, it continuously sends general traffic information to other V2X users (e.g. other vehicles, infrastructure) and allows them to evaluate the current traffic situation. The following data is transmitted for this: information about the V2X transmitter (temporary ID, type), vehicle information (vehicle dimensions), driving information (acceleration, geographical position, direction of movement, speed), information from vehicle sensors (yaw rate, cornering, light status, pedal status and steering angle) and route (waypoints, i.e. positioning data, of the last 200 m to 500 m driven).

The activated V2X technology also transmits additional data to other V2X users when certain events occur. In particular, these events include a vehicle stopping, breakdowns, accidents, interventions by an active safety system and the tail end of traffic jams. The data is only transmitted when these events occur. The following data is additionally transmitted: event information (type of event, time of event and time of message, geographical position, event area, direction of movement) and route (waypoints, i.e. positioning data, of the last 600 m to 1,000 m driven).

The data sent to other V2X users is pseudonymised. This means that you are not displayed as the sender of the information to other V2X users.

Volkswagen AG does not have access to this data and does not store it.”




Here is a February 2024 overview of Car2X hazard warning dashboard symbols in Volkswagen Group automobiles, which also shows 8 models that were already equipped with this innovative technology early last year: 7 VW models as well as the CUPRA Born. Meanwhile the number has increased to at least 13 - see the screenshot of a Motoreport video uploaded earlier this month.


2D9278F8-0660-499F-9306-FDDCA70707C1.jpeg



And here is an informative article on Car2X/V2X by a well-known German telecommunications provider that - like numerous competitors in the field - has a vested interest in the expansion of this technology, especially relating to the "development towards nationwide 5G Car2X communications".


255ED097-709F-4C17-B90B-2D4EE50BBBA2.jpeg
2D633CA0-F4FD-455A-B7D9-878110B11994.jpeg
E03843A1-AF78-413B-BBFD-D618DBE46694.jpeg



A5D2E96C-C609-480C-A56D-4A2850A13D21.jpeg

AA6C28C7-BE7B-43BB-998F-1D5172ABD30F.jpeg




76798919-57CB-40B2-A8F3-1FB4685312C6.jpeg




B84FD131-41AB-4E2F-8663-CF28FAA9E522.jpeg
 

Attachments

  • 1BF7BB9F-41DF-4A65-8746-FB48998E9FA5.jpeg
    1BF7BB9F-41DF-4A65-8746-FB48998E9FA5.jpeg
    524.9 KB · Views: 41
  • Like
  • Fire
  • Love
Reactions: 30 users
I just finished listening to the latest episode of the "Brains and Machines" podcast. It's an interview with Professor Gordon Cheng of the Technical University of Munich in Germany about creating artificial skin for machines. The title of the episode is "Event-Driven E-Skins Protect Both Robots and Humans".


Once again, I found it a very interesting interview. But what particularly caught my attention were statements at about 28:19 and 29:00, where Gordon Cheng talks about Intel Munich shutting down their neuromorphic efforts and a certain telephone company. I'm not sure whether this telephone company is meant to be part of Intel or whether he's referring to another company.

Does anybody have a guess which company he's talking about?

The relevant quote from the available transcript is:
SB: Absolutely. And you wouldn’t like to tell me which companies these are that have dropped their neuromorphic efforts, would you?

GC: I think Intel closed the lab in Munich already, and the other big company is the, you know, the telephone company that they are.


Another section I found quite interesting was part of the discussion between Sunny Bains, Giulia D’Angelo and Ralph Etienne-Cummings about the interview afterwards:

GDA:
[...]
But anyway, one thing that I think is very important—he talks finally about the elephant in the room! He said that Intel already closed the neuromorphic lab, and many other companies are leaving us. And he’s mentioning the chicken-and-egg problem: people would use technology if it has a use case, but the technology hasn’t been created enough to satisfy the use case. What do you think about this specific point that he made?

REC: So I think timing is everything in the development of these types of technologies. Something that comes too early does not get turned into an actual application as quickly as something that comes at the right time. And you can see that all along the way, right? In fact, Intel itself, before the Loihi—back in the ’90s there was the ETANN chip, right? Which was, for all intents and purposes, an entire replication, if you will, of neural networks and implementation of backprop, and so on and so forth. That was done at that time, but that died as well. And then Loihi came back in the 2000s and survived a bit. And now we’re saying that it’s going away.

But, at the end of the day, we have a number of startups now that are looking at the application space. And you have companies that have well-developed—you look at PROPHESEE, right? Yes, they’re having a little bit of difficulty recently but, on the other hand, you had 120 people and they still have 70 people doing this stuff. And that’s in the vision space. And you have a company like BrainChip. I know that there’s a few other big neuromorphs—meaning people who’ve been doing neuromorphic engineering for a long time in our field—who are now running the technology part of that company.

SB: There’s got to be at least 30 or 40 people at BrainChip, because I was there last year—it’s quite a lot of people.

REC: Yeah. And they are putting out ideas and chips that are being developed very much with ideas of application right into it.
 
  • Like
  • Love
  • Fire
Reactions: 24 users
I just finished listening to the latest episode of the "Brains and Machines" podcast. It's an interview with Professor Gordon Cheng of the Technical University of Munich in Germany about creating artificial skin for machines. The title of the episode is "Event-Driven E-Skins Protect Both Robots and Humans".


Once again, I found it a very interesting interview. But what particularly caught my attention were statements at about 28:19 and 29:00, where Gordon Cheng talks about Intel Munich shutting down their neuromorphic efforts and a certain telephone company. I'm not sure whether this telephone company is meant to be part of Intel or whether he's referring to another company.

Does anybody have a guess which company he's talking about?

The relevant quote from the available transcript is:



Another section I found quite interesting was part of the discussion between Sunny Bains, Giulia D’Angelo and Ralph Etienne-Cummings about the interview afterwards:
@CrabmansFriend take a look at Ericsson's 6G zero-energy white paper. Sorry, I don't have it on hand.

I'm sure the white paper was talking neuromorphic; I can't recall if it was BC-specific, but I'm thinking them and Loihi.

Then Ericsson shelved it for whatever reason. Possibly poor timing for various global strategic reasons, and maybe technical issues also.

The good news is most of the work's been done. They can take it off the shelf when the time is right, they have the issues ironed out and the world's ready to implement it!

Ericsson at the time laid off thousands. An unfortunate set of circumstances for all concerned. If that had gone ahead, we'd all be fighting over berths for our yachts instead of watching paint dry.

The RTX/AFRL announcement earlier will still get us a return. Once the initial development is completed, it should lead to a contract and trailing revenue. Defence and drones will be massive!

There’s plenty of irons in the fire. 🔥

(DYOR)
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Diogenese

Top 20

Hi TTM,

This is very encouraging. The microDoppler project was announced 7 months ago.

The icing on the cake is that this article refers to ISL as endorsing Akida for the project, even though RTX is the sub-contractor.


The initiative builds on prior successful demonstrations of radar processing algorithms on our Akida™ neuromorphic hardware. These demonstrations were independently validated by the RTX team and the team at ISL 2, who each reported significant performance benefits, with ISL further endorsing our participation in the AFRL program,” commented Brightfield.

We know that ISL has been engaged with Akida for a while,

https://www.edgeir.com/brainchip-an...red-radar-for-military-and-aerospace-20250407

BrainChip and ISL advance AI-powered radar for military and aerospace​

Apr 7, 2025


and ISL is in good standing with AFRL:

ISL has a successful record of commercialization (recipient of three (3) Phase III SBIR contracts) and was the subject of an Air Force Research Laboratory (AFRL) SBIR Success story (see https://www.sbir.gov/node/1526807 ).

https://www.islinc.com/company-bio

I recall Dr Guerci's enthusiastic anticipation of TENNs, and more recently:


“We have proven the efficacy of using BrainChip’s Akida neuromorphic chip to implement some of the most challenging real-time radar/EW signal processing algorithms,”
said Dr. Joseph R. Guerci, ISL President and CEO.

https://brainchip.com/
- 3rd dot along in "What innovators are saying"

So to bring this up again after 7 months, "Where there's smoke, ..."
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Diogenese

Top 20
I just finished listening to the latest episode of the "Brains and Machines" podcast. It's an interview with Professor Gordon Cheng of the Technical University of Munich in Germany about creating artificial skin for machines. The title of the episode is "Event-Driven E-Skins Protect Both Robots and Humans".


Once again, I found it a very interesting interview. But what particularly caught my attention were statements at about 28:19 and 29:00, where Gordon Cheng talks about Intel Munich shutting down their neuromorphic efforts and a certain telephone company. I'm not sure whether this telephone company is meant to be part of Intel or whether he's referring to another company.

Does anybody have a guess which company he's talking about?

The relevant quote from the available transcript is:



Another section I found quite interesting was part of the discussion between Sunny Bains, Giulia D’Angelo and Ralph Etienne-Cummings about the interview afterwards:
Was that Ericsson?

Edit: um ... SNAP! @Stable Genius
 
  • Like
  • Fire
  • Love
Reactions: 10 users

CHIPS

Regular
Neuromorphic Computing: Mimicking the Brain for Next-Gen AI

https://medium.com/@marketing_30607...f9d67c---------------------------------------
3 days ago





The digital age is constantly pushing the boundaries of computing. As Artificial Intelligence (AI) becomes more complex and pervasive, the energy demands and processing limitations of traditional computer architectures (known as von Neumann architectures) are becoming increasingly apparent. Our current computers, with their separate processing and memory units, face a “memory bottleneck” — a constant back-and-forth movement of data that consumes significant power and time.
But what if we could design computers that work more like the most efficient, parallel processing machine known: the human brain? This is the promise of Neuromorphic Computing, a revolutionary paradigm poised to redefine the future of AI.
What is Neuromorphic Computing?
Inspired by the intricate structure and function of the human brain, neuromorphic computing aims to build hardware and software that mimic biological neural networks. Unlike traditional computers that process instructions sequentially, neuromorphic systems feature processing and memory integrated into the same unit, much like neurons and synapses in the brain.
This fundamental architectural shift allows them to process information in a highly parallel, event-driven, and energy-efficient manner, making them uniquely suited for the demands of next-generation AI and real-time cognitive tasks.
How Does it Work? The Brain-Inspired Blueprint
The core of neuromorphic computing lies in replicating key aspects of neural activity:
  1. Spiking Neural Networks (SNNs): Instead of continuous data flow, neuromorphic chips use Spiking Neural Networks (SNNs). In SNNs, artificial neurons “fire” or “spike” only when a certain threshold of input is reached, similar to how biological neurons communicate via electrical impulses. This “event-driven” processing drastically reduces power consumption compared to constantly active traditional circuits (a minimal code sketch of this behaviour appears right after this list).
  2. Event-Driven Processing: Computations occur only when and where there is relevant information (an “event” or a “spike”). This contrasts with conventional CPUs/GPUs that execute instructions continuously, even when processing redundant data.
  3. Synaptic Plasticity: Neuromorphic systems implement artificial synapses that can strengthen or weaken their connections over time based on the activity patterns, mirroring the brain’s ability to learn and adapt (synaptic plasticity). This allows for on-chip learning and continuous adaptation without extensive retraining.
  4. Parallelism: Billions of artificial neurons and synapses operate in parallel, enabling highly efficient concurrent processing of complex information, much like the human brain handles multiple sensory inputs simultaneously.
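As a minimal illustration of the "fire only when a threshold is reached" behaviour in point 1, a leaky integrate-and-fire neuron can be sketched in a few lines of plain Python (the parameter values are arbitrary, purely for demonstration):

import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron - arbitrary parameters for illustration
def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train produced by a stream of input currents."""
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = leak * potential + i   # integrate the input, with leak
        if potential >= threshold:         # fire only when the threshold is reached
            spikes.append(1)
            potential = reset              # reset the membrane potential after a spike
        else:
            spikes.append(0)               # otherwise stay silent - no event, no work
    return spikes

# A mostly quiet input stream: the neuron only spikes around the brief burst of activity
inputs = np.concatenate([np.zeros(5), np.full(3, 0.6), np.zeros(5)])
print(lif_neuron(inputs))  # [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

Most of the time nothing happens, so nothing is computed or transmitted, which is exactly where the energy savings of event-driven processing come from.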
Leading the charge in hardware development are chips like Intel’s Loihi and IBM’s TrueNorth, alongside innovative startups like BrainChip with its Akida processor. These chips are designed from the ground up to embody these brain-inspired principles. For example, Intel’s recently launched Hala Point (April 2024), built with 1,152 Loihi 2 chips, represents the world’s largest neuromorphic system, pushing the boundaries of brain-inspired AI.
Why is it the “Next Frontier”? Unlocking AI’s Potential
Neuromorphic computing offers critical advantages over traditional architectures for AI workloads:
  • Superior Energy Efficiency: This is perhaps the biggest draw. By processing data only when an event occurs and integrating memory and processing, neuromorphic chips can achieve orders of magnitude greater energy efficiency compared to GPUs, making powerful AI feasible for edge devices and continuous operations where power is limited.
  • Real-Time Processing: The event-driven and parallel nature allows for ultra-low latency decision-making, crucial for applications like autonomous vehicles, robotics, and real-time sensor data analysis.
  • On-Device Learning & Adaptability: With built-in synaptic plasticity, neuromorphic systems can learn and adapt from new data in real-time, reducing the need for constant cloud connectivity and retraining on large datasets.
  • Enhanced Pattern Recognition: Mimicking the brain’s ability to recognize patterns even from noisy or incomplete data, neuromorphic chips excel at tasks like image, speech, and natural language processing.
  • Fault Tolerance: Just like the brain can compensate for damage, neuromorphic systems, with their distributed processing, can exhibit greater resilience to component failures.
Real-World Applications: From Smart Homes to Space
The unique capabilities of neuromorphic computing are opening doors to revolutionary applications:
  • Edge AI & IoT: Enabling billions of connected devices (smart home sensors, industrial IoT, wearables) to perform complex AI tasks locally and efficiently, reducing reliance on cloud processing and enhancing privacy. Imagine a wearable that can detect complex health anomalies in real-time, or a smart city sensor that predicts pollution patterns without constantly sending data to the cloud.
  • Autonomous Systems: Powering self-driving cars and drones with ultra-fast, energy-efficient decision-making capabilities, allowing them to react instantly to dynamic environments.
  • Robotics: Giving robots more adaptive perception and real-time learning capabilities, enabling them to navigate complex factory layouts or interact more naturally with humans.
  • Advanced Sensing: Developing smart sensors that can process complex data (e.g., visual or auditory) with minimal power, leading to breakthroughs in areas like medical imaging and environmental monitoring.
  • Cybersecurity: Enhancing anomaly detection by rapidly recognizing unusual patterns in network traffic or user behavior that could signify cyberattacks, with low latency.
  • Biomedical Research: Providing platforms to simulate brain functions and model neurological disorders, potentially leading to new treatments for conditions like epilepsy or Parkinson’s.
Challenges and the Road Ahead
Despite its immense promise, neuromorphic computing is still in its nascent stages and faces significant challenges:
  • Hardware Limitations: Developing neuromorphic chips that can scale to the complexity of the human brain (trillions of synapses) while remaining manufacturable and cost-effective is a monumental engineering feat.
  • Software Ecosystem: There’s a lack of standardized programming languages, development tools, and frameworks tailored specifically for neuromorphic architectures, making it challenging for developers to easily create and port algorithms.
  • Integration with Existing Systems: Integrating these fundamentally different architectures with existing IT infrastructure poses compatibility challenges.
  • Algorithm Development: While SNNs are powerful, developing efficient algorithms that fully leverage the unique strengths of neuromorphic hardware is an active area of research.
  • Ethical Considerations: As AI becomes more brain-like, concerns around conscious AI, accountability, and the ethical implications of mimicking biological intelligence will become increasingly relevant.
Conclusion
Neuromorphic computing represents a profound shift in how we approach computation. By learning from the brain’s incredible efficiency and parallelism, it offers a pathway to overcome the limitations of traditional computing for the ever-increasing demands of AI. While significant research and development are still required to bring it to widespread commercialization, the momentum is palpable.
As we move forward, neuromorphic computing holds the potential to unlock new frontiers in AI, creating intelligent systems that are not just powerful, but also remarkably energy-efficient, adaptable, and truly integrated with the world around us. It’s a journey to build the next generation of AI, one synapse at a time.
 
  • Like
  • Fire
Reactions: 18 users

CHIPS

Regular
Do you see the above post in English or German? Sometimes it translates automatically...
 