BRN Discussion Ongoing

Frangipani

Regular
Here is another preprint published yesterday, in which Prof. Osvaldo Simeone from King’s College London is listed as one of the authors. His five co-authors are all from Berlin (Fraunhofer Heinrich Hertz Institute and Technical University of Berlin).

While no specific neuromorphic hardware is mentioned (although the paper's first reference is to a 2021 paper by Intel researchers on Loihi, and the penultimate one is to Prophesee's EVK4 HD), it is nevertheless an interesting article: it shows that the wireless community is at the forefront of advancing cutting-edge technologies, confirming what ISL's CEO Dr. Joe Guerci said in the recent From The Crow's Nest podcast:


“Well, and to amplify that point, all the advanced capabilities that we have in our RF systems, radar and EW, most of that is driven by the wireless community, the trillion-dollar wireless community compared to a paltry radar and EW ecosystem.”

Also note the funding support from the governments of Germany and the UK as well as from the EU's Horizon Europe funding programme (luckily for King's College London, post-Brexit UK is still an associated country…).

A6F8DB2F-CFC2-42AC-9DB7-F3D01622DA47.jpeg




78C9400B-C2BD-4BBE-B0B0-5E7BE2BBB623.jpeg



VII. CONCLUSIONS

In this work, we introduced a new system solution for device-edge co-inference that targets energy efficiency at the end device using neuromorphic hardware and signal processing, while implementing conventional radio and computing technologies at the edge. The investigated communication scheme combines an on-device SNN with a server-based ANN, leveraging a variational directed information bottleneck technique to perform inference tasks. From a deployment perspective, our model demonstrates superior performance, lower communication overhead, and robustness under time-varying channel conditions, implying promising potential for future 6G work. Aspects of the proposed system solution were validated in a preliminary testbed setup that implements a wireless robotic control application based on gesture recognition via a neuromorphic sensor. The testbed setup is currently being expanded to integrate end-to-end learning via an impulse-radio communication link. The proposed architecture and the corresponding testbed setup are general in the sense that they can support the implementation of different semantic tasks. Besides applications in robotics, in the future we will also consider bio-medical applications that leverage the energy and communication efficiency of this architecture.
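For anyone curious how such a device-edge split looks in practice, here is a minimal sketch. This is purely my own illustration, not the paper's actual scheme: a toy rate-coding SNN encoder standing in for the device, a binary symmetric channel standing in for the radio link, and a simple ANN readout standing in for the edge server (all function names, the threshold, and the flip probability are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def snn_encode(x, threshold=1.0, steps=16):
    """Toy rate-coding SNN encoder: integrate-and-fire over discrete time steps."""
    v = np.zeros_like(x)                    # membrane potentials
    spikes = np.zeros((steps,) + x.shape)
    for t in range(steps):
        v += x                              # integrate the input
        fired = v >= threshold
        spikes[t] = fired                   # emit binary spikes
        v[fired] -= threshold               # soft reset
    return spikes                           # binary spike trains, cheap to transmit

def noisy_channel(spikes, flip_prob=0.05):
    """Binary symmetric channel: each spike bit flips with probability flip_prob."""
    flips = rng.random(spikes.shape) < flip_prob
    return np.abs(spikes - flips.astype(float))

def ann_decode(spikes, w):
    """Edge-side ANN head: classify from per-neuron spike rates."""
    rates = spikes.mean(axis=0)             # decode rates from the spike trains
    logits = rates @ w
    return int(np.argmax(logits))

x = rng.random(8)                           # device-side feature vector
w = rng.standard_normal((8, 4))             # stand-in for trained readout weights
pred = ann_decode(noisy_channel(snn_encode(x)), w)
```

The point of the split is that only binary spike trains cross the link, which keeps the device side cheap, while the heavier readout runs on conventional edge hardware.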
 
  • Like
  • Love
  • Fire
Reactions: 27 users

cosors

👀
But they don’t lie 😉
That's not really true. I listened to a science podcast about that issue.

Even a system that was programmed to be extra honest as a strategy-game participant deliberately lied.
AI systems have learned how to deceive humans. What does that mean for our future?
Just one of several examples. It is also about a ChatGPT-4 trading bot programmed by Apollo Research in London to manage a fictitious portfolio.
Hopa mentioned the famous example of the CAPTCHA, where the AI/computer lied to the person it called and claimed it was blind.

Unfortunately, the podcast is only in German.

By the way, I find it interesting that AI systems have been shown to answer absolutely correctly and then flounder after being accused of making false statements.
However, AI can definitely lie.


"AI and lies
Will artificial intelligence soon trick us?

Artificial intelligence has learnt to deceive. There are already cases in which the systems have lied to people. Some experts are asking themselves: can we still trust the machines we create?"

Maybe one for the Germans among us. I find this very interesting:

How can the goal be achieved? By circumventing the security barriers, by pretending not to lie.
 
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 10 users

Tothemoon24

Top 20
IMG_8735.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 62 users

RobjHunt

Regular
That's not really true. I listened to a science podcast about that issue.

Even a system that was programmed to be extra honest as a strategy-game participant deliberately lied.
AI systems have learned how to deceive humans. What does that mean for our future?
Just one of several examples. It is also about a ChatGPT-4 trading bot programmed by Apollo Research in London to manage a fictitious portfolio.
Hopa mentioned the famous example of the CAPTCHA, where the AI/computer lied to the person it called and claimed it was blind.

Unfortunately, the podcast is only in German.

By the way, I find it interesting that AI systems have been shown to answer absolutely correctly and then flounder after being accused of making false statements.
However, AI can definitely lie.


"AI and lies
Will artificial intelligence soon trick us?

Artificial intelligence has learnt to deceive. There are already cases in which the systems have lied to people. Some experts are asking themselves: can we still trust the machines we create?"

Maybe one for the Germans among us. I find this very interesting:

How can the goal be achieved? By circumventing the security barriers, by pretending not to lie.
Mmmm, yes. I was actually relaying part of an old yarn my father told me years ago, from when big companies first converted to computers, about an older fellow who was sent an excessively high electricity bill. When he queried the bill, the electricity company's employee exclaimed, "I'm sorry, sir, but there is no mistake, because computers do not lie," and he had to pay the full amount. A couple of months later, the gentleman received quite a substantial credit from the electricity company in the form of a cheque. When the company tried to retrieve the amount mistakenly sent to him, he replied, "I'm sorry, sir, but computers do not lie" ;)
 
  • Haha
  • Love
  • Like
Reactions: 15 users

skutza

Regular
Well done to all the dot joiners and investigators on how good AKIDA is. I'm more convinced than ever that it works well and could be the future. But I'll keep doing what Sean said and watch the financials. As old mate Whitlam might have said, "God save the Queen, because only revenue is going to save the CEO!"
 
  • Like
  • Haha
  • Love
Reactions: 7 users

stuart888

Regular
  • Like
  • Wow
Reactions: 5 users

IloveLamp

Top 20
🤔



1000014755.jpg
1000014757.jpg
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
  • Fire
Reactions: 26 users

stuart888

Regular

The Digital Twin that Jensen speaks of so often appears so perfect on the Tesla car monitor. When I watch these videos, I focus on that Digital Twin monitor display.

First time I saw it in night mode, and it seems to work fantastically.

Thanks mrgds for the video. Tesla is good for the world, and ultimately their massive AI video learning library is going to shine.

1712180751672.png
 
  • Like
  • Fire
Reactions: 5 users

stuart888

Regular
In Germany, you would ask: where are the lights at the front and rear? No lights at both ends, no permission to use it on public roads.
Good point, Chips. They actually do come with integrated lights front and rear. It is called the Equipped version, it is $100 more, and it comes with the fenders too.

1712181295440.png
 
  • Like
  • Love
Reactions: 3 users

TECH

Regular




Extract 1
View attachment 60255




Extract 2
View attachment 60256

Very nice indeed !

Apple was well behind the eight ball when I first connected with BrainChip, and I believe that they scoured the planet in search of top-quality talent, including poaching staff in a bid to "catch up", so to speak.

Over the last two years they appear to have made big strides, and the only thing lacking in that article, Bravo, are the words we ALL wish to see: BRAINCHIP & AKIDA. It's still possible we are or have been chatting with Apple; purely speculation on my part, as Apple are well known for liking to go it alone.

Cheers for the post....Tech :cool:
 
  • Like
  • Love
Reactions: 9 users

stuart888

Regular
Although there are actually many chip foundries around the world, there are only a few major ones.

TSMC is by far the largest, by revenue.


View attachment 60211

GlobalFoundries is right up there too, and Intel Foundry Services is only just ramping up its external customer base (wanting to take on TSMC).

Seeing as BrainChip doesn't have a reputation for dealing with outliers, I'm guessing the next foundry to be announced would have to be Samsung?
The SIA (Semiconductor Industry Association) has up-to-date info: a USA map with dots showing all the players, and you can sort nicely by specific sub-industries.

https://www.semiconductors.org/ecosystem/
Last updated March 28, 2024
The U.S. semiconductor industry is one of the world’s most advanced manufacturing and R&D sectors. The U.S. Semiconductor Ecosystem Map demonstrates the breadth of the industry, including locations conducting research and development (R&D), intellectual property and chip design software providers, chip design, semiconductor fabrication, and manufacturing by suppliers of semiconductor manufacturing equipment and materials. Adjust the filters below, hover over a pin, and zoom in on the map to see more information. A glossary of key terms can also be found below. For a map of semiconductor ecosystem projects announced since introduction of the CHIPS Act, please visit this page.

1712182539564.png
 
  • Like
  • Love
  • Wow
Reactions: 11 users

stuart888

Regular
Nothing new, just a blog about the recent podcast


I thought Fact Finder nailed it: a short, to-the-point update, an easy read. Not too laborious.

Well done @Fact Finder - Big fan here, cheers to you sir. 🍻🍻🍺🍺
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Very nice indeed !

Apple was well behind the eight ball when I first connected with BrainChip, and I believe that they scoured the planet in search of top-quality talent, including poaching staff in a bid to "catch up", so to speak.

Over the last two years they appear to have made big strides, and the only thing lacking in that article, Bravo, are the words we ALL wish to see: BRAINCHIP & AKIDA. It's still possible we are or have been chatting with Apple; purely speculation on my part, as Apple are well known for liking to go it alone.

Cheers for the post....Tech :cool:
Hi Tech,

Yes, purely speculation on my behalf also, however I live in hope.

Here's another recent Apple patent which doesn't mention BrainChip's Akida, but it would be a perfect fit IMO, particularly in light of Nandan's previous comments (refer to the extract below).

Screenshot 2024-04-04 at 9.44.09 am.png


EXTRACT FROM THE PATENT

Screenshot 2024-04-04 at 9.43.35 am.png




TRANSCRIPT EXTRACT ( Leadership Insights Series Video)

Screenshot 2024-04-04 at 9.45.01 am.png


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 31 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers ,

Feeling a little excited this morning ..

Be on guard.



Regards,
Esq.
 
  • Haha
  • Like
  • Fire
Reactions: 18 users

7für7

Top 20
Very nice indeed !

Apple was well behind the eight ball when I first connected with BrainChip, and I believe that they scoured the planet in search of top-quality talent, including poaching staff in a bid to "catch up", so to speak.

Over the last two years they appear to have made big strides, and the only thing lacking in that article, Bravo, are the words we ALL wish to see: BRAINCHIP & AKIDA. It's still possible we are or have been chatting with Apple; purely speculation on my part, as Apple are well known for liking to go it alone.

Cheers for the post....Tech :cool:
Apple is well known for stealing technology… I expect an announcement from them soon, with something like "We introduce our all-new brain-inspired neuromorphic chip, THE AKIZA".
 
  • Haha
  • Like
Reactions: 10 users

stuart888

Regular
SynSense has both analog and digital SNNs, and they have used their DYNAP (Speck) SNN with Prophesee for vision:

US2023385617A1 SIGNAL PROCESSING METHOD FOR NEURON IN SPIKING NEURAL NETWORK AND METHOD FOR TRAINING SAID NETWORK 20210716

View attachment 60138




SynSense | Neuromorphic Developers Partner to Integrate Sensor, Processor – EETimes | SynSense



SALLY WARD-FOXTON

And remind me, this is mixed-signal or digital?

DYLAN MUIR

This is fully digital. It’s an asynchronous digital architecture. I mean the pixel itself obviously has got some analog components for the photo detector and so on. But the processing side is fully digital.

SALLY WARD-FOXTON

Seems like you have two families of chip. The one that’s in this Speck module. And then there’s another family, which is newer, or older?

DYLAN MUIR

Xylo – it's a little bit newer. We're targeting that for natural signal processing, meaning audio, biosignals, vibration, IMU/accelerometers, things like this. We've had a few tapeouts already for Xylo; we have a dev kit available with an audio version, so that includes a little digital processor core plus a very efficient audio front end. We have a recent publication and some presentations from the end of last year where we're demonstrating ultra-low-power audio processing applications on that chip at a few hundred microwatts. So the idea, then, of course, is also smart low-power edge sensory processing. The family will have a common processor architecture plus a number of very efficient sensory interfaces. We have audio out already, and we have an IMU sensory interface which we're bringing up and testing at the moment. We plan to have samples available at the end of this year or beginning of next year, and then there will be other sensory interfaces for other classes of application as well. Yeah, we'll take these in turn for the use cases that look the most commercially accessible.

It's a different architecture: Speck is really tailored for vision processing applications, and it does very efficient convolutional neural network inference. Xylo is a more general-purpose architecture for mostly non-convolutional network architectures. It also has a more advanced neuron model. It's still spiking neurons, of course; the spiking neurons on Speck are very simple (essentially just a digital counter with no temporal dynamics), whereas the neurons on Xylo are a simulation of a leaky integrate-and-fire spiking neuron, a very standard neuron model, including these temporal dynamics, which are very configurable on the chip, so that's really suited for temporal signal processing. So when we do things like keyword spotting, for example, for audio processing, the standard neural network approach is to buffer 100 milliseconds or 50 milliseconds of audio, produce a spectrogram for that 100 milliseconds, and then treat that as an image.

We don't do that. We really operate in a continuous streaming mode, where we process the audio as it comes into the device, meaning we can have potentially lower latency. We don't need to do buffering, and we can get away with smaller resources for the temporal signal processing, because we've got this temporal integration of information in the neurons themselves, and so this lets us operate at lower power.

The Xylo chip is synchronous digital. The reason for the difference is that the DVS sensor is fundamentally asynchronous itself, and if you've got a static camera application, then there can be basically nothing changing, nothing going on in the visual scene, and then you get basically no input and you don't need to do anything. Whereas for audio processing, you've always got ambient sound coming in, so you essentially need to be processing continuously, and then the synchronous digital design is a simpler design.

So essentially we have a digital clocked simulation of the leaky integrate-and-fire dynamics on Xylo. Your input clock frequency might be, for example, 5 megahertz, but the network dynamics can be slower than this. You can choose to integrate over several seconds, and then the simulation of each individual neuron is computed inside Xylo, but you can choose a long time scale to continuously integrate information inside the neuron, if you so desire. It's essentially a little synchronous ASIC core for inference in spiking neurons, including these temporal dynamics. So we're just running a little digital simulation of the spiking neuron dynamics.
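The clocked leaky integrate-and-fire update Muir describes can be sketched in a few lines. This is a generic LIF simulation, not SynSense's actual Xylo implementation; the function name, the leak factor, and the layer sizes are all illustrative assumptions:

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One clocked update of a layer of leaky integrate-and-fire neurons:
    decay the membrane, integrate weighted input spikes, fire and reset."""
    v = leak * v + weights @ spikes_in   # temporal integration with decay
    fired = v >= threshold
    v = np.where(fired, 0.0, v)          # reset neurons that fired
    return v, fired.astype(float)

rng = np.random.default_rng(1)
weights = rng.standard_normal((4, 8)) * 0.5  # 8 inputs feeding 4 neurons
v = np.zeros(4)                              # membrane state persists across steps
out_spikes = []
for t in range(20):                          # streaming input, no buffering
    spikes_in = (rng.random(8) < 0.2).astype(float)
    v, fired = lif_step(v, spikes_in, weights)
    out_spikes.append(fired)
```

Because the membrane state `v` carries information between clock ticks, the layer integrates its input over time on its own, which is the point Muir makes about processing streaming audio without first buffering a spectrogram.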

SALLY WARD-FOXTON

Last time we spoke, which admittedly was a while ago, you had partnered with Prophesee, also. How is that collaboration going?

DYLAN MUIR

That’s going very well. We’ve fabricated a device with Prophesee, and this is… we’re testing this, examining this at the moment, in conjunction with them.

SALLY WARD-FOXTON

OK, to be clear, it uses a different processor that you’ve made compared to the Speck module – or the same?

DYLAN MUIR


The processor IP is basically the same, same design, that’s our processor design for Speck. These little DYNAP-CNN cores and that’s what we apply for CNN processing. So we’ve also looked at, for example, an RGB standard CMOS camera interface which we could then also process using the same spiking CNN architecture.
Just tossing out a thought: many neuromorphic techniques will likely win hugely.

Look at the cyber security stocks: Fortinet, CrowdStrike, and Palo Alto Networks are all blasting off together. AI is going to produce a lot of very valuable data that must be protected. Cyber security spending is a multi-decade trend.

Rocket faster? Neuromorphic intelligence techniques are the same kind of multi-decade trend. My thought is they could grow even faster.

SynSense, BrainChip, Tesla, DeepMind, etc. are on this track.
Similar technology, different end use cases. 🍀🌷🍩
 
  • Like
  • Love
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Thinking we might pull a few G's shortly.....

 
  • Like
  • Wow
  • Fire
Reactions: 10 users

Boab

I wish I could paint like Vincent
  • Like
  • Love
Reactions: 8 users

RobjHunt

Regular
Thinking we might pull a few G's shortly.....


I don't have live data on my platform, Esqy. Was wondering what volume has come through up to now, mate?
 
  • Like
  • Fire
Reactions: 4 users