BRN Discussion Ongoing

Slade

Top 20
  • Haha
  • Like
Reactions: 6 users
Not listened to yet.

 
  • Like
  • Fire
Reactions: 2 users

MDhere

Top 20
M, I really hope you are right, but something fishy is going on.
 
  • Haha
Reactions: 2 users

CHIPS

Regular

Does the report link work? I'm getting a 404 error.

It works fine from my side.
 
  • Like
  • Love
Reactions: 2 users

7für7

Top 20
It works fine from my side.
And once again we have valuable comments under the LinkedIn post from highly intelligent investors who are putting our company in a positive light… congratulations… with your contributions, BrainChip is gaining appeal. More than from the report itself…
 
  • Haha
Reactions: 1 user
Good to see MB also running a joint project with the University below, this one on neuromorphic vision / cameras.

Haven't dug deep to try to find any connection yet, but trust we're in the mix in the background with the other players they mentioned in the recent presso.

It's in German, so a translated version is pasted below.




Advancing autonomous driving with neuromorphic computing




Neuromorphic computing mimics the human brain, making AI calculations of intelligent systems more energy efficient and faster. For example, safety systems could detect traffic signs, lanes and objects much better and react more quickly even in poor visibility – without affecting the vehicle's range.

11.12.2024

Through collaboration in the EVSC (Event Vision Stream Compression) project, Karlsruhe University of Applied Sciences (HKA) supports Mercedes-Benz in the further development of autonomous driving technologies. The focus is on optimizing complex camera technologies in the area of neuromorphic computing.​

Alongside the other sensors, intelligent camera systems are considered a core technology in autonomous driving. Current cameras create an image at fixed intervals, continuously providing the autonomous driving system with snapshots of the environment. However, this approach has a drawback: the driving system receives no information from the camera system in the time that passes between two images. A typical frame rate of current cameras is 30 frames per second, so at a driving speed of 100 km/h the vehicle travels almost a meter before the autonomous system receives new visual information. In addition, today's camera data processing systems require a lot of energy, because 30 frames per second have to be completely processed.
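A quick sanity check of the arithmetic in that paragraph (a minimal sketch in plain Python; the 100 km/h and 30 fps figures are taken from the article):

```python
# Distance a car travels between two consecutive camera frames,
# using the article's figures: 100 km/h and 30 frames per second.
speed_kmh = 100
fps = 30

speed_ms = speed_kmh / 3.6        # convert km/h to m/s (~27.8 m/s)
frame_interval_s = 1 / fps        # time between frames (~33 ms)
blind_distance_m = speed_ms * frame_interval_s

print(f"{blind_distance_m:.2f} m travelled between frames")  # ~0.93 m
```

So the "almost a meter" claim checks out.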

New approach with Neuromorphic Computing​

Mercedes-Benz has now launched a comprehensive international research project. The automobile giant aims to advance the technical possibilities for intelligent mobility, including through neuromorphic computing. This approach mimics the human brain, making the AI calculations of intelligent systems more energy efficient and faster. For example, safety systems could detect traffic signs, lanes and objects much better and react more quickly even in poor visibility – without affecting the vehicle's range.

Neuromorphic computing has the potential to reduce the energy required for data processing in autonomous driving by 90 percent compared to today's systems.
Neuromorphic cameras are an essential component of neuromorphic computing.

The so-called event cameras are the focus of the HKA's EVSC project. This camera technology represents a paradigm shift: unlike conventional cameras, event cameras dynamically perceive changes in their field of vision instead of taking an image at fixed intervals. This is accompanied by a dramatic improvement in temporal resolution: event cameras can provide the autonomous driving system with new information within a few microseconds, while conventional image sensors are "blind" in the meantime.

Applied to the example from above, an event camera reduces the response distance to around 3 cm at a speed of 100 km/h – an improvement over current systems by a factor of 30. "The greatest difficulty in implementing this revolutionary technology," says Prof. Dr. Jan Bauer, Professor of Image Processing & Neural Networks at the Faculty of Electrical and Information Technology at HKA, "is the integration into the overall system, which is often associated with complex cabling and high power consumption of the cameras. In our project, we are also looking at how we can improve the ability of event cameras to be integrated into the car. Our main goal is to use compression to limit the peak bit rate in data transmission, reducing the cost and net power consumption of transmitting the event camera data."
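Continuing the same back-of-the-envelope check (my own sketch, not from the article; only the ~3 cm and 30 fps figures are the article's):

```python
# Compare the frame camera's blind distance against the event camera's
# quoted ~3 cm response distance, both at 100 km/h.
speed_ms = 100 / 3.6            # 100 km/h in m/s

frame_blind_m = speed_ms / 30   # ~0.93 m between 30 fps frames
event_blind_m = 0.03            # ~3 cm quoted for the event camera

print(f"improvement factor: {frame_blind_m / event_blind_m:.0f}x")  # ~31x
```

Which is where the article's "factor 30" comes from.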

With the project cooperation, Karlsruhe University of Applied Sciences is consistently taking a further step in helping to shape the future of autonomous driving, because Prof. Dr. Bauer is sure that the use of event cameras will significantly expand the ability of autonomous vehicles to capture the environment more quickly and accurately. "And with the EVSC project we can significantly improve the ability of this new camera technology to be integrated into the vehicle," said the HKA researcher.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Diogenese

Top 20
Gosh ... here it is 4:30 pm on Friday and I just received an email from the bank asking me to confirm my details ...

I spose I'll just send them the information and then check back on Monday morning to see if they've got it sorted.
 
  • Haha
  • Fire
Reactions: 9 users

HopalongPetrovski

I'm Spartacus!
Gosh ... here it is 4:30 pm on Friday and I just received an email from the bank asking me to confirm my details ...

I spose I'll just send them the information and then check back on Monday morning to see if they've got it sorted.
Don't forget to click on their link.
The Nigerian Princes and lonely Ukrainian girls can't understand why they haven't heard from you lately baby. 🤣
 
  • Haha
  • Wow
Reactions: 7 users

Diogenese

Top 20
Don't forget to click on their link.
The Nigerian Princes and lonely Ukrainian girls can't understand why they haven't heard from you lately baby. 🤣
Actually, we're engaged!
 
  • Haha
Reactions: 8 users

In the article, why do they need to say, "We reiterate.....", in the opening paragraph? Are they trying to make this comment stick?


All I glean from this report is the board getting their ducks in a row to sell the company for $1/share!

The way I see it is like this:
- SP continually being suppressed/maintained at ~$0.20 for no apparent reason.
- Company building a case for its worth at $1/share to ward off ill feelings from people.
- Around 25% SP dilution since the ATH of $2.34.
- In the meantime, senior execs continue to meet their KPIs and get their bonuses.

The time will come when they will offload the company. The report says fair value is $1.00; the company will offer shareholders anything between, say, $1.10 and $1.50 per share, with commentary that it's a good deal since it's higher than the media report, and that the ATH was a freak event.
'SP continually being suppressed/maintained at ~$0.20 for no apparent reason.'
I think most of us agree that the lack of information announced on the ASX is partly to blame. Where are the 'Investor presentations' that other companies announce on the ASX as not price sensitive? As it stands at the moment, the average Joe Blow would not even know about BrainChip or what they do.
By not announcing milestones, Company news and other 'non-upsetting ASX' info, the manipulators will continue to have a field day.
I thought to myself, Self, does the low share price help the Company? Self replied, it does help those who take some of their remuneration in shares.
Now, I'm just pulling some figures out of the air, but if an employee under this scheme takes $50K in shares @ 20 cents, he/she would get 250,000 shares. If the share price was $1, he/she would get 50,000.
Now, when the share price does get to, say, $3, those shares are worth $750K versus $150K. So all I know is that if I were confident in the success of the Company, I know which parcel of shares I would rather have!!
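For what it's worth, those plucked-from-the-air numbers do check out; a quick sketch (illustrative figures only, as in the post):

```python
# $50K of remuneration taken in shares at two different issue prices,
# then both parcels valued at a hypothetical future price of $3.
remuneration = 50_000

shares_at_20c = remuneration / 0.20   # 250,000 shares
shares_at_1d = remuneration / 1.00    # 50,000 shares

future_price = 3.00
print(f"${shares_at_20c * future_price:,.0f} vs ${shares_at_1d * future_price:,.0f}")
# $750,000 vs $150,000
```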

Now, I'm not suggesting anything.....................................

*No AI assistance was used to prepare this post, in fact very little intelligence at all was used.*
 
  • Like
  • Haha
  • Thinking
Reactions: 9 users
FF

Just in case you think the above is just an academic oddity, both Mercedes Benz and Nissan have been working on this exact same idea for quite some time, and in fact it features as part of Mercedes Benz's future vehicle:

https://www.bitbrain.com/blog/nissan-brain-to-vehicle-technology

Now perhaps it is a big leap, but Mercedes Benz is working with BrainChip, and this paper confirms the benefits of using BrainChip's technology for this purpose. Not to mention the Onsor Nexa glasses as further proof of AKIDA's claim to fame in this area. It would be a brave person who would discount that this is one of the many areas Mercedes Benz is looking at with BrainChip, particularly now that AKIDA 2.0 TENNs is fully in play, and particularly as we know, thanks to @FullMoonFever finding and sharing the presentation by the neuromorphic research arm at Mercedes Benz, that it is aware of not just AKD1000 but also AKIDA 2.0.
 
  • Like
  • Fire
Reactions: 7 users

TheDrooben

Pretty Pretty Pretty Pretty Good
FF

Just in case you think the above is just an academic oddity, both Mercedes Benz and Nissan have been working on this exact same idea for quite some time, and in fact it features as part of Mercedes Benz's future vehicle:

https://www.bitbrain.com/blog/nissan-brain-to-vehicle-technology

Now perhaps it is a big leap, but Mercedes Benz is working with BrainChip, and this paper confirms the benefits of using BrainChip's technology for this purpose. Not to mention the Onsor Nexa glasses as further proof of AKIDA's claim to fame in this area. It would be a brave person who would discount that this is one of the many areas Mercedes Benz is looking at with BrainChip, particularly now that AKIDA 2.0 TENNs is fully in play, and particularly as we know, thanks to @FullMoonFever finding and sharing the presentation by the neuromorphic research arm at Mercedes Benz, that it is aware of not just AKD1000 but also AKIDA 2.0.
Interesting... Tata getting involved in this field as well, possibly using Akida (patent previously posted)...

 
  • Like
Reactions: 1 user
Sean needs to address these statements.
Surely
I hope BrainChip paid for the report
It's as crap as our sales team and our runs on the board.
Piss poor
 

Diogenese

Top 20
FF

Just in case you think the above is just an academic oddity, both Mercedes Benz and Nissan have been working on this exact same idea for quite some time, and in fact it features as part of Mercedes Benz's future vehicle:

https://www.bitbrain.com/blog/nissan-brain-to-vehicle-technology

Now perhaps it is a big leap, but Mercedes Benz is working with BrainChip, and this paper confirms the benefits of using BrainChip's technology for this purpose. Not to mention the Onsor Nexa glasses as further proof of AKIDA's claim to fame in this area. It would be a brave person who would discount that this is one of the many areas Mercedes Benz is looking at with BrainChip, particularly now that AKIDA 2.0 TENNs is fully in play, and particularly as we know, thanks to @FullMoonFever finding and sharing the presentation by the neuromorphic research arm at Mercedes Benz, that it is aware of not just AKD1000 but also AKIDA 2.0.
Hi SS,

Most of the following is speculation.

When we had the big MB reveal (2022?), they would have been using the COTS Akida 1 chip. The neurons in this were designed using an amazing SNN architecture which was far superior to any other chip in performing classification. At the same time, the Akida 2 architecture was being designed and patented. As well as TENNs, this offered ViT in the top-level configuration with lots of neurons/NPUs.

Meanwhile, back at the ranch, Rudy, Olivier and co were developing TENNs. At first, TENNs proved its value by implementing analysis of the temporal element of input signals (video, voice, ...).

The TENNs team proceeded to develop TENNs models. Then, perhaps by a stroke of serendipity, they found that TENNs had an affinity for MACs. Akida 1 had adopted a few MACs per node when it was upgraded to 4-bit, so maybe these were used to implement some TENNs functionality - I don't know. The outcome was that TENNs seems to have met every challenge, and - this is my personal heresy, much to my technical fan-boy disappointment - TENNs seems to have now totally displaced the brilliant original Akida 1 neurons/NPUs, and Akida now runs on 128-MAC nodes using TENNs models.

In any case, the developers tested TENNs on a largeish bundle of MACs (possibly on an FPGA?). The result = duck/water.

Now it appears that all the Akida nodes include 128 MACs, but ViT is no longer offered as an option. So I assume that TENNs produces the result that ViT produced, only more efficiently.

This would leave us with a load of legacy Akida 1000/1500 chips built using the original NPUs which are supported by the original models. It is still possible to build new models for the original Akida architecture, presumably on a "by request" or do-it-yourself basis, but the model development now will be directed to TENNs. At present, some basic TENNs models are available for download, but the more sophisticated models are provided only on request.

Another thing that Tony Lewis (?) said was that they have used look-up tables (LUTs) to implement activation functions, a task which our competitors calculate in software.

So Akida architecture now includes 128 MAC nodes with LUTs.
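For readers unfamiliar with the LUT trick: here is a hedged sketch of what a look-up-table activation might look like (my own illustration in Python/NumPy; BrainChip hasn't published its implementation, so the 8-bit table size, the quantisation step, and the choice of tanh are all assumptions):

```python
import numpy as np

# Precompute an activation function (tanh, as an example) over the whole
# range of 8-bit quantised inputs. At run time the "calculation" is just
# an array index - no transcendental maths on the datapath.
INPUT_SCALE = 0.05                          # assumed quantisation step
lut = np.tanh(np.arange(-128, 128) * INPUT_SCALE)  # 256-entry table

def lut_activation(x_int8: np.ndarray) -> np.ndarray:
    """Apply the activation by table look-up on int8 inputs."""
    return lut[x_int8.astype(np.int16) + 128]  # shift to 0..255 index

acts = lut_activation(np.array([-128, 0, 64, 127], dtype=np.int8))
print(acts)  # tanh of the dequantised values, fetched rather than computed
```

The appeal in hardware is that one small ROM/RAM read replaces a per-value function evaluation, which is presumably the saving Tony Lewis was pointing at.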

Since MB is an early adopter and we have confirmation that they are still interested in Akida 2, MB will be fully familiar with TENNs.

My guess is that MB and other select EAPs have been playing with Akida 2 in FPGA format (6 nodes like the online version?) for some months now.

Tony also mentioned that we use a lot fewer MACs than the competition (ARM ETHOS, maybe Qualcomm Hexagon?).

We know that MB is using Qualcomm, and Hexagon uses MACs:

https://www.qualcomm.com/products/technology/processors/hexagon

The Hexagon NPU mimics the neural network layers and operations of popular models, such as activation functions, convolutions, fully-connected layers, and transformers, to deliver peak performance, power efficiency, and area efficiency crucial for executing the numerous multiplications, additions, and other operations in machine learning.

Distinguished by its system approach, custom design, and fast innovation, the Hexagon NPU stands out. The Hexagon NPU fuses together the scalar, vector, and tensor accelerators for better performance and power efficiency. A large, dedicated, shared memory allows these accelerators to share and move data efficiently. Our cutting-edge micro tile inferencing technology delivers ultra-low power consumption and sets a new benchmark in AI processing speed and efficiency.

US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS Priority: 20180831

[FIG. 4 of US2020073636A1]


[0056] FIG. 4 is a block diagram illustrating an exemplary software architecture 400 that may modularize artificial intelligence (AI) functions. Using the architecture, applications may be designed that may cause various processing blocks of an SOC 420 (for example a CPU 422 , a DSP 424 , a GPU 426 and/or an NPU 428 ) to support fast multiply-accumulate (MAC) computations during run-time operation of an AI application 402 , according to aspects of the present disclosure.

[0058] A run-time engine 408 , which may be compiled code of a runtime framework, may be further accessible to the AI application 402 . The AI application 402 may cause the run-time engine, for example, to request an inference at a particular time interval or triggered by an event detected by the user interface of the application. When caused to provide an inference response, the run-time engine may in turn send a signal to an operating system in an operating system (OS) space 410 , such as a Linux Kernel 412 , running on the SOC 420 . The operating system, in turn, may cause a fast MAC computation to be performed on the CPU 422 , the DSP 424 , the GPU 426 , the NPU 428 , or some combination thereof. The CPU 422 may be accessed directly by the operating system, and other processing blocks may be accessed through a driver, such as a driver 414 , 416 , or 418 for, respectively, the DSP 424 , the GPU 426 , or the NPU 428 . In the exemplary example, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU 422 , the DSP 424 , and the GPU 426 , or may be run on the NPU 428 .

Qualcomm suggest that the MAC computation may be performed on the CPU, GPU, DSP or NPU. With Akida, the NPU would be so far ahead there would be no choice.
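For anyone skimming past the jargon: a multiply-accumulate is just the inner loop of a dot product. A minimal generic sketch (my illustration, not Qualcomm's or BrainChip's code):

```python
# A single neuron's output is a chain of multiply-accumulate (MAC) ops:
# acc += weight * input, once per connection, then an activation.
def mac_neuron(inputs, weights, bias=0.0):
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w          # one MAC per input - the op NPUs accelerate
    return max(acc, 0.0)      # ReLU, as an example activation

print(mac_neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1]))  # 0.3
```

Billions of these per inference is why dedicated MAC arrays (and where they sit: CPU, GPU, DSP or NPU) matter so much for power.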
 
  • Like
  • Love
  • Fire
Reactions: 8 users

Diogenese

Top 20
How thick are these security services?

Surely our Aquirema drone could spot a rooftop shooter?
 
  • Like
Reactions: 3 users