BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess
You've been talking about partnership. This is a presentation from 2019. The actual known partnership is with Tata Elxsi.
Yep, I agree that I shouldn't have said partnership with TCS, but it's likely that we are working with them. Isn't it all under the Tata umbrella?
 
Last edited:
  • Like
  • Love
Reactions: 5 users

Perhaps

Regular
Yep, I agree that I shouldn't have said partnership with TCS, but it's likely that we are working with them
Hard to say what's really going on. My view is that they try everything neuromorphic and build up their own patent portfolio. Whether this should be filed under partner or competitor, who knows. TCS is big in patent filings.

Here is the actual TCS portfolio dedicated to neuromorphic technology:

And here is a white paper from TCS, worth a read; Brainchip is mentioned as well:
 
  • Like
  • Fire
Reactions: 4 users

Perhaps

Regular
Additional to TCS discussion:

[two screenshots attached]


Here is the full package of neuromorphic research at TCS.
Worth a deeper look; I don't have the time to dig into it right now:

 
Last edited:
  • Like
  • Love
Reactions: 7 users

Frangipani

Regular
The podcast with Dr Alexandre Marcireau from ICNS reminded me that I had long wanted to bring another podcast to your attention: the first episode of the new Brains and Machines podcast, in which Sunny Bains interviews Prof André van Schaik, Director of ICNS. It was recorded during the Capo Caccia Neuromorphic Workshop 2023 (organised by the Zurich Institute of Neuroinformatics) in the first week of May.


Now the tech is way above my head, hence I could be totally off, but I just thought I'd mention it anyway so that the more tech-savvy can give their opinions. So here are my thoughts:

Even though Brainchip is targeting the Edge AI market with Akida, I have been wondering whether André van Schaik could be hinting at Akida being used in the ICNS Deep South project, a large FPGA-based neuromorphic simulator that's currently in the works. He talks about the commercial hardware they will be using to build it, and since completion of the platform, which will consist of "a bit over a hundred FPGAs", is scheduled for the end of the year, the timeline would align nicely with the recent release of Akida 2000.

ICNS seems to have originally started on this project back in 2021 in collaboration with Intel, set out as a two-year proof-of-concept project at the time:

[screenshot attached]

More details here:



I am wondering, though, whether the ICNS researchers started out with Intel, then may have gotten their hands on the Akida 1000 reference chip, realised that Brainchip's product(s) would be a much better choice for their envisaged large neural simulator, and hence switched to Akida for any further planning once their proof-of-concept model with Intel was done and dusted, or have been waiting with bated breath for Akida 2000 to be released.

I find it very weird that André van Schaik does not mention Intel at all when talking about the Deep South project, as they did start out with them in 2021. Also, the option to scale up and down sounds very familiar, doesn't it? Does anyone know the price tag for an Akida 2000 reference chip?


Here are some excerpts from the podcast that I found relevant to my thoughts:

SB: More recently, you’ve been working on a very ambitious project to build large neural simulators using FPGAs. So, can you start by telling us what you hope to achieve with this project, and then I’ll ask you a little bit more about how it works.

AVS: Sure. What I’m trying to achieve with this project is a similar enabling technology to what GPUs were for neural networks. As I mentioned at the beginning, neural networks tanked in the 90s just as I wanted to start working on them, and came back when GPUs made it possible to simulate really large, deep neural networks in a reasonable amount of time.

Now a problem for spiking neural networks, which is what we’re interested in at the moment because brains are spiking neural networks, is that they are terrible to simulate on a computer—it’s very slow to do large networks. And so, I want to create a technology that enables you to simulate these large-scale spiking neural networks in a reasonable amount of time.
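To get a feel for why this is so slow, here is a minimal toy sketch of a time-stepped leaky integrate-and-fire simulation. This is just my own illustration with made-up sizes, nothing from ICNS or Brainchip:

```python
# Toy example: dense, time-stepped leaky integrate-and-fire (LIF) network.
# Every time step pays for a full weight-matrix multiply, whether or not
# anything spiked, which is why large SNNs are painful on conventional CPUs.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 1_000, 1_000          # made-up sizes; 1 s at 1 ms steps
dt, tau = 1e-3, 20e-3                      # time step and membrane time constant
v_thresh, v_reset = 1.0, 0.0

weights = rng.normal(0.0, 0.05, (n_neurons, n_neurons))  # dense connectivity
input_current = rng.uniform(0.0, 2.0, n_neurons)
v = np.zeros(n_neurons)
total_spikes = 0

for _ in range(n_steps):
    fired = v >= v_thresh
    total_spikes += int(fired.sum())
    v[fired] = v_reset
    # cost: O(n_neurons^2) per step, even if only a handful of neurons fired
    synaptic_input = weights @ fired.astype(float)
    v += dt / tau * (input_current - v) + synaptic_input

print(f"{total_spikes} spikes simulated with {n_steps * n_neurons**2:,} synapse updates")
```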

And I want to do it in such a way that we don’t build our own chips for this, but that we use commercial hardware instead.
Because, similar to the GPU, it was not developed for neural networks. The GPU was a technology developed commercially for graphics processing on computers. FPGAs, as reconfigurable hardware, are another commercial technology that’s being used for various applications, and one application that we think they’re good for is simulating spiking neural networks.

The advantage of using commercial technology is that we don’t have to develop our own chips in a small university research group, where you can maybe do one generation of chip every so many years. Some of the other groups around the world that are doing spiking neural network accelerators are building their own chips, and you’re seeing that going from generation one to generation two typically takes five, six years. So, that’s a very slow iteration—and my contention is that we don’t yet know what it is exactly that we want on these hardware systems, on these accelerators. What neural model do we need? What plasticity model? What connectivity patterns? That should still be open.

So, the advantage of FPGAs is that we can reconfigure them, so they can be very flexible. We’ll start with an initial design, but then the design can be iterated, and it will be open source as well. So, anybody in the world will be able to work with the machine, but also add things to it if we think we need them.


(…)

SB: Do you have a name for it, by the way?

AVS: Not really. The original design we call Deep South in response to True North, but True North now is getting really old and is not really that active anymore, so we need to come up with a better name. But it was Deep South because we’re based in Australia, down under, so it was a nice balance to the True North.

SB: So, the new FPGA machine is essentially a simulator. Although you could use it to solve problems in its own right, to do stuff, that’s actually not its intention. The main intention is to understand the principles and to optimize models that could then be built into the next generation of optimized, small, power-efficient hardware to do those things in all sorts of applications, right?

So, this is almost like an intermediate step, an experimental platform much in the same way that Spinnaker and Loihi are intermediate steps and experimental platforms.

AVS: Absolutely. We just think that this platform based on FPGAs provides more flexibility for this intermediate step, to figure out what it is that we actually want on the system before you distil that down into a really efficient, low-power chip if that’s what you need.

Also, the design is modular, so we can make really large systems, or you can use one FPGA and use a smaller system that you’d want on a robot or on a drone or something like that to do the processing locally. So, you can scale up and down with this system, as well. In theory, all the way to human brain level computation and beyond in terms of the number of operations per second that you’re doing.
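Since he mentions scaling "all the way to human brain level computation" in operations per second, here is a back-of-the-envelope using commonly quoted ballpark figures for the human brain. This is my own arithmetic, not an ICNS number:

```python
# Back-of-the-envelope only; all three figures are rough textbook estimates.
neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4   # order-of-magnitude estimate
mean_rate_hz = 1.0          # average firing rates are roughly 0.1-10 Hz

synaptic_ops_per_second = neurons * synapses_per_neuron * mean_rate_hz
print(f"~{synaptic_ops_per_second:.1e} synaptic operations per second")
# ~8.6e+14, i.e. approaching a quadrillion synaptic events per second
```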

SB: Now this is a long-term project that you’re sort of at the beginning of, right? As I understand it, you’ve done some proof of concept, a first iteration of your design, but you’ve got quite a bit of work to do to get to what you’ve just described, am I right?

AVS: Yes, but we’ve made a fair bit of progress behind the curtains, I guess. And so, we are looking to build a system that can do a human-brain-scale number of operations per second this year using commercially available FPGAs. That system hopefully will exist at the end of the year. Then it’s a matter of making it also user-friendly, because we don’t want people to have to do FPGA programming to use the system. So, it’s a matter of providing a software interface, a user interface, that allows people to specify the networks that they want on it. That might take a little bit longer to be ready for people to use, but I hope that we’ll be pretty far along on that next year.

And again, I hope that we’ll do that in an open-source way with contributions from a global community.
And then an even longer-term aspect of it is: if you have the data of a billion neurons in your spiking neural network, how do you analyze that data? That’s an interesting research question. How do you visualize it? Those are clear areas where we’re going to need help, because I don’t really know the answer to those questions.

SB: And because we’re recording this at the Capo Caccia Neuromorphic Workshop, I noticed you’re looking for postdocs and people to come and help you in this endeavor.

AVS: Yes, the more the merrier, basically. And we have positions open in Sydney pretty much constantly. It’s just hard to find the number of people that we need for these efforts. And we’re in an interesting time at the moment in neuromorphic engineering, where funding is easier to get than people, and that hasn’t always been the case.

But there’s a lot of interest from industry, defense, all those non-academic players, in neuromorphic engineering and what it can do. That’s been a real change over the years. It used to be that I first had to explain to whoever came through the door what neuromorphic engineering was. Whereas now, company representatives contact me and ask what neuromorphic engineering can do for them or how we can collaborate.

And that’s a massive change that has happened over the last five years.

SB: So, you’re expecting that by the end of…certainly by the end of 2024, you would have people in different labs around the globe playing with your machine in the cloud, essentially.

AVS: Absolutely. Yes.

SB: And presumably, because it’s based on commercial FPGAs, if they wanted to have a local one, they could do that very simply as well, right? So, that’s your goal.

AVS: With the funding, I can buy one FPGA or a few FPGAs. These are high-end FPGAs, so they are about $10,000 each. So, it’s not something that everybody will buy, but at the same time, as a piece of university equipment, buying several of them is obviously possible. And our system will cost several million dollars to put all the hardware together—that’s a lot of money. But at the same time, it’s not impossible for somebody to replicate it somewhere else if they realize they need a system at their university or at their company.

SB: So how many chips will be on the version of the system that you’re building right now?

AVS: A bit over a hundred FPGAs will be on that system.


(…)

SB: Now I don’t want to encourage you to become a betting man, but you talked about applications. Looking forward over the next 10-15 years, that kind of timescale, which are the ones that you think are most likely to have neuromorphic elements to them before the likes of you and me retire?

AVS: The safest bet for me would be neuromorphic vision systems. Sensing in a larger sense too, but vision systems are the ones that have developed the furthest, and I think there will be a fair bit of development in that area. At the moment, we use event cameras, as I described earlier. There is only one form of camera, of neuromorphic camera.

All each pixel does is detect changes in time. Biological retinas also do spatial processing (what are the neighboring photoreceptors doing?), but that’s not in these cameras. We can try and build that in. An advantage of the current cameras is that if you keep the camera still, only things that move will generate changes, and therefore you automatically extract the things that move.


But if you start moving the camera, everything changes all the time, which is actually a disadvantage for these cameras. We can build cameras that try and compensate for that. We can build cameras that don’t just use visible light, but infrared or ultraviolet, or hyperspectral versions of these cameras.

So, there’s a whole range of applications in sensing, vision sensing in particular, but we’re also doing work in the lab on audio and on olfaction, smell. I’m interested in tactile. I’m interested in the electric sense of the shark, or radar, or neuromorphic versions of that. So, I think that’s where we’ll see a lot of the first applications happening.
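As a toy illustration of the change-detection principle he describes, here is a minimal sketch of a single event-camera pixel. The log-intensity model and the threshold value are my own arbitrary choices, not any vendor's actual design:

```python
# Toy model of one event-camera pixel: emit an event whenever the log
# intensity has changed by more than a fixed contrast threshold since the
# last event. A static scene produces nothing; only change generates output.
import numpy as np

def pixel_events(intensity: np.ndarray, threshold: float = 0.2):
    """Yield (sample_index, polarity) events for one pixel's intensity trace."""
    log_i = np.log(np.maximum(intensity, 1e-6))
    ref = log_i[0]
    for t, value in enumerate(log_i[1:], start=1):
        while value - ref > threshold:    # brightness increased: ON event
            ref += threshold
            yield t, +1
        while ref - value > threshold:    # brightness decreased: OFF event
            ref -= threshold
            yield t, -1

# A constant signal generates no events; a brightening ramp generates a few.
trace = np.concatenate([np.full(50, 1.0), np.linspace(1.0, 3.0, 50)])
print(list(pixel_events(trace)))
```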

I’m hopeful that we will get applications out of neuromorphic computation with spiking neural networks, and that the system we’re building inspires things, as we saw with GPUs. They reinvigorated the field, and once you have a critical mass, things go really, really fast and snowball. Progress in deep neural networks has been so fast over the last decade that we might trigger the same in spiking neural networks, but that’s much harder to predict, so I wouldn’t want to bet on it.

(…)

REC: Yeah, so definitely, André speaks to this, right. He indicates that what constituted a neuromorphic system in the olden days, if you will, is very different from what constitutes a neuromorphic system now. For example, back in the day, a neuromorphic system had to be hardware, had to be analog, and had to mimic as parsimoniously as possible the biological system being modelled.

Today, we have various models like the Deep South that André speaks about, which is strictly a digital system. Back then, that would not have been considered neuromorphic.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

[image attachment]
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Tothemoon24

Top 20

Audio sensors and beyond

“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”

In the meantime, event-based cameras are meeting the needs of several other markets. These include industrial use cases that require ultra-high-speed counting, particle size monitoring and vibration monitoring for predictive maintenance. Other applications include eye tracking, visual odometry and gesture detection for AR and VR. And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

Neuromorphic principles can also be applied to audio sensors. Like the retina, the cochlea does not sample spectrograms at fixed intervals. It just conveys changes in sensory input. So far, there are not many examples of neuromorphic audio sensors, but that’s likely to change soon since audio-based AI is now in high demand. Neuromorphic principles can also be applied to sensors with no biological counterpart, like radar or LiDAR.
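A rough toy sketch of the cochlea-like idea described above: split the sound into frequency bands and emit events only when a band's energy changes, rather than sampling full spectrogram frames at a fixed rate. The band pooling and threshold here are arbitrary choices of mine, not any actual sensor's design:

```python
# Toy event-based audio front end: a crude "cochlea" made of pooled FFT
# bands, emitting a change event per band only when its log energy moves
# by more than a threshold since the previous frame.
import numpy as np

def band_events(signal, n_bands=8, frame=256, delta=3.0):
    """Yield (frame_index, band_index, polarity) change events per band."""
    n_frames = len(signal) // frame
    prev = None
    for i in range(n_frames):
        spectrum = np.abs(np.fft.rfft(signal[i * frame:(i + 1) * frame]))
        # pool the spectrum into a handful of frequency bands
        bands = np.array([chunk.sum() for chunk in np.array_split(spectrum, n_bands)])
        log_e = np.log1p(bands)
        if prev is not None:
            for b in np.flatnonzero(np.abs(log_e - prev) > delta):
                yield i, b, int(np.sign(log_e[b] - prev[b]))
        prev = log_e

# Silence followed by a 440 Hz tone: events fire only where the sound changes.
tone = np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)
burst = np.concatenate([np.zeros(8_000), tone])
print(list(band_events(burst)))
```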
 
  • Like
  • Fire
Reactions: 23 users

jtardif999

Regular
In the US, the Constitution only permits the actual inventor to apply for a patent. Where the inventor works for a company and the invention relates to the company's business, the patent will normally be assigned to the company.
Yeah I think the only patent directly owned by PVDM is the original, filed in 2008 before BrainChip the company came into existence.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Yeah I think the only patent directly owned by PVDM is the original, filed in 2008 before BrainChip the company came into existence.
No. US8250011 was the one filed in 2008.
 
  • Like
Reactions: 13 users

Tothemoon24

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 13 users

MDhere

Regular
I may as well tick it all!!

[screenshot attached]
 
  • Like
  • Love
  • Fire
Reactions: 31 users
In 1963, Control Data Corporation's supercomputer, the CDC 6600, outperformed IBM's.

This prompted IBM's CEO, Thomas J. Watson Jr., to pen this memo.

Thought this group might like it.

[image of the memo]
 
  • Like
  • Fire
  • Love
Reactions: 46 users

buena suerte :-)

BOB Bank of Brainchip
In 1963, Control Data Corporation's supercomputer, the CDC 6600, outperformed IBM's.

This prompted IBM's CEO, Thomas J. Watson Jr., to pen this memo.

Thought this group might like it.

Love it.. not a happy TJ ...("including the janitor") :) ..Thanks AI
 
  • Like
Reactions: 9 users

buena suerte :-)

BOB Bank of Brainchip
So far looking good!! :)

[chart screenshot]
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Damo4

Regular
[chart screenshot]

[GIF: Jurassic Park "whoa"]
 
  • Haha
  • Like
Reactions: 20 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers ,

There's that expected volume I referred to yesterday......... albeit three to four hours late..... Having flashbacks to my reporting on the Indian Moon Landing.

😀.

Certainly set up for a cracking day.

Regards,
Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 36 users

Damo4

Regular
Morning Chippers ,

There's that expected volume I referred to yesterday......... albeit three to four hours late..... Having flashbacks to my reporting on the Indian Moon Landing.

😀.

Certainly set up for a cracking day.

Regards,
Esq.

Love your work Esq, we deserve a little reprieve

[GIF: Gattuso "sometimes maybe good, sometimes maybe shit"]
 
  • Haha
  • Like
Reactions: 11 users

AARONASX

Holding onto what I've got
I'm having a guess here, but this looks very good!?

So, anyone wondering about the importance of TCS (Tata Consultancy Services)?

US 20230325001 A1

Published yesterday: 2023-10-12

Abstract

Conventional gesture detection approaches demand large memory and computation power to run efficiently, thus limiting their use in power and memory constrained edge devices. Present application/disclosure provides a Spiking Neural Network based system which is a robust low power edge compatible ultrasound-based gesture detection system. The system uses a plurality of speakers and microphones that mimics a Multi Input Multi Output (MIMO) setup thus providing requisite diversity to effectively address fading. The system also makes use of distinctive Channel Impulse Response (CIR) estimated by imposing sparsity prior for robust gesture detection. A multi-layer Convolutional Neural Network (CNN) has been trained on these distinctive CIR images and the trained CNN model is converted into an equivalent Spiking Neural Network (SNN) via an ANN (Artificial Neural Network)-to-SNN conversion mechanism. The SNN is further configured to detect/classify gestures performed by user(s).
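For anyone wondering what the ANN-to-SNN conversion mentioned in the abstract involves, here is a minimal sketch of the general rate-coding technique (my own illustration of the textbook approach, not TCS's actual method): a trained ReLU layer is rerun with integrate-and-fire neurons whose spike rates approximate the ReLU activations.

```python
# Sketch of rate-coded ANN-to-SNN conversion for one layer: drive
# integrate-and-fire neurons with the ANN layer's weighted input and count
# spikes; with "reset by subtraction" and a threshold set to the largest
# activation, spike rate * threshold approximates the ReLU output.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.5, (4, 8))   # weights from a "trained" ANN layer (made up)
x = rng.uniform(0, 1, 8)         # input, e.g. a flattened CIR image patch

ann_out = np.maximum(W @ x, 0.0)          # ReLU activations to approximate

v_thresh = ann_out.max()                  # threshold normalisation
current = W @ x                           # constant input current per step
T = 1_000
v = np.zeros(4)
counts = np.zeros(4)
for _ in range(T):
    v += current
    fired = v >= v_thresh
    counts += fired
    v[fired] -= v_thresh                  # reset by subtraction

print("ANN ReLU:    ", np.round(ann_out, 3))
print("SNN estimate:", np.round(counts / T * v_thresh, 3))
```

Negative activations never fire, which is exactly the ReLU behaviour; positive ones fire at a rate proportional to their activation.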


[patent figures attached]
 
  • Like
  • Fire
  • Love
Reactions: 82 users

buena suerte :-)

BOB Bank of Brainchip
Morning Chippers ,

There's that expected volume I referred to yesterday......... all be it three to four hours late..... Having flash backs to my reporting on the Indian Moon Landing.

😀.

Certainly set up for a cracking day.

Regards,
Esq.
Huge accumulation @ $0.195!! Will we finish at 20/21c today????
 
  • Like
  • Fire
  • Love
Reactions: 19 users

Esq.111

Fascinatingly Intuitive.
Guessing $0.2325 to $0.245

[screenshot attached]


Regards,
Esq.
 
  • Like
  • Fire
  • Haha
Reactions: 25 users