BRN Discussion Ongoing

I'm afraid annb0t has left us or is broken. We'll be waiting a long time here. 😅
 
  • Haha
Reactions: 5 users

Preparing For The Next Frontier​



At this year’s AGM, I shared my vision and BrainChip’s goal to extend technology leadership, increase customer focus, and accelerate AI deployment at the Edge. With the recent release of Akida products based on the 2nd-generation architecture, we are building on the foundation set by the first generation of Akida. Beyond this, we consistently invest in developing an extensive technology pipeline that contributes to our competitive advantage, with 18 patents granted and 30 pending. We are also looking to empower our organization to bring this technology to market and prepare for the next step in our growth.
To that effect, I want to announce the planned retirement of co-founder and CTO Peter Van der Made, which is scheduled for the end of the year. Peter has been the face of BrainChip since its inception. His intelligence and energy have been foundational in the creation and growth of the company. As the company has moved firmly into the commercialization stage with the addition of several qualified industry veterans to the leadership team, Peter feels confident in the company’s position and that it is the proper time to retire from day-to-day activities. However, Peter will continue to be involved with BrainChip in three meaningful ways: he will continue to sit on the Board of Directors and the Scientific Advisory Board and, most importantly, will advise the company as Technologist Emeritus.
It is also my pleasure to introduce M. Anthony (Tony) Lewis as the new CTO for BrainChip. Tony has had a stellar career in industry and academia and is, at heart, an entrepreneur, with a strong focus on Artificial Intelligence, Neuromorphic Computing, and Robotics. As the former VP and Global Head of the AI and Emerging Compute Lab at HP, Inc., he played a pivotal role in integrating AI into a variety of print and computer products, customer service, and commercial printer maintenance. Tony also made significant contributions at Qualcomm, Inc., where he led the Zeroth© Neuromorphic Engineering Project. He also contributed to projects in intelligent AI agents and robotics and collaborated closely with Qualcomm Ventures in new business development activities. In the academic arena, Tony has served as a visiting or adjunct professor at the University of Arizona, the University of Illinois, Urbana-Champaign, and the University of Waterloo. His entrepreneurial spirit is evident in his founding of a government-funded R&D company specializing in neuromorphic computing and robotics. Tony has also made his mark as a startup advisor and investor, bringing his technical acumen to various innovative ventures.
Tony holds a Ph.D. and a master’s degree in electrical engineering, with a specialization in robotics and neuroscience, from the University of Southern California, and a BS in Cybernetics from the University of California, Los Angeles.
I would like to welcome Tony to our leadership team and personally thank Peter for his energy, intelligence, and drive that have helped bring BrainChip to where it is today. While it is bittersweet, I’m also excited that, with this seamless transition to Tony and his immense experience, we will further accelerate Akida’s technology pipeline to market.

Sean Hehir, CEO

 
  • Like
  • Love
  • Fire
Reactions: 73 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 67 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I just tried to count how many times I've mentioned Qualcomm in my posts on TSEx and my calculator caught fire!





PS: I'm a little bit excited!

 
  • Like
  • Haha
  • Love
Reactions: 55 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Mr Lewis has some very influential people congratulating him on his appointment with Brainchip....





OMG!!!!! WHAT???? PHOAR!!!!! I'M DEFINITELY NOT GOING TO BE ABLE TO SLEEP TONIGHT!!!! NO BEDTIME!!!! NO RULES!!!!! 🥳💃🥂🍸🍾🍹🥃🍻🍷


 
  • Like
  • Haha
  • Fire
Reactions: 35 users

Pepsin

Regular
I just tried to count how many times I've mentioned Qualcomm in my posts on TSEx and my calculator caught fire!




PS: I'm a little bit excited!

You can actually use the advanced search function. Since February 2022 you've mentioned Qualcomm about 200 times (ten pages of search results, 20 results each). You're welcome ;)
Can't wait for the announcement!
 
  • Like
  • Haha
  • Fire
Reactions: 21 users

Flenton

Regular
Anyone who thinks that Peter will not be involved in the next generations of Akida is kidding themselves.

I think he is completely comfortable with where the company is at, and with the people now running it, such that he can now step back from a full-time capacity and enjoy the rewards coming from the amount of time he's spent and the volume of work he has done.

My sense is that he now trusts he has someone to whom he can explain his vision for future Akida generations, and that he will provide input or advice when required... whether that is something like one day every couple of months, a week straight each month, or three days a week is anyone's guess.

Sit back and enjoy these next few years, Peter, because I know I'm looking forward to them.
 
  • Like
  • Love
  • Fire
Reactions: 48 users

Proga

Regular
This is the lowest three-day cumulative total of daily shorting I've seen in years. They bought back 250k more than they shorted on the 15th.

Latest Reported Shorts (Daily)​

| DATE | REPORTED SHORT | ISSUED SHARES | % SHORT |
|---|---|---|---|
| 20 November 2023 | 51,191 | 1,790,058,145 | 0.00% |
| 17 November 2023 | 37,000 | 1,790,058,145 | 0.00% |
| 16 November 2023 | 58,835 | 1,790,058,145 | 0.00% |

Latest Reported Shorts (Aggregate)​

| DATE | REPORTED SHORT | ISSUED SHARES | % SHORT | DAILY RANK |
|---|---|---|---|---|
| 15 November 2023 | 101,740,887 | 1,790,058,145 | 5.6837% | 26th 1 |
| 14 November 2023 | 101,978,527 | 1,790,058,145 | 5.6969% | 27th 1 |
| 13 November 2023 | 101,499,784 | 1,790,058,145 | 5.6702% | 28th |
| 10 November 2023 | 100,760,171 | 1,790,058,145 | 5.6289% | 28th |
| 9 November 2023 | 100,208,571 | 1,790,058,145 | 5.5981% | 28th 1 |
| 8 November 2023 | 100,428,249 | 1,790,058,145 | 5.6103% | 29th 1 |
Early days still, but they only shorted 6,109 yesterday. Why would you bother? You can see above that they shorted 58,835 on 16/11 and, below, bought back just under 1.2m. They might be buying them back to return the borrowed stock under their contractual obligations, or beginning to exit their positions. It will take about a week to get a better understanding. (If you want to check the percentages yourself, there's a quick Python sketch after the tables below.)

Latest Reported Shorts (Daily)​

| DATE | REPORTED SHORT | ISSUED SHARES | % SHORT |
|---|---|---|---|
| 21 November 2023 | 6,109 | 1,790,058,145 | 0.00% |
| 20 November 2023 | 51,191 | 1,790,058,145 | 0.00% |
| 17 November 2023 | 37,000 | 1,790,058,145 | 0.00% |

Latest Reported Shorts (Aggregate)​

| DATE | REPORTED SHORT | ISSUED SHARES | % SHORT | DAILY RANK |
|---|---|---|---|---|
| 16 November 2023 | 100,585,378 | 1,790,058,145 | 5.6191% | 27th 1 |
| 15 November 2023 | 101,740,887 | 1,790,058,145 | 5.6837% | 26th 1 |
| 14 November 2023 | 101,978,527 | 1,790,058,145 | 5.6969% | 27th 1 |
| 13 November 2023 | 101,499,784 | 1,790,058,145 | 5.6702% | 28th |
| 10 November 2023 | 100,760,171 | 1,790,058,145 | 5.6289% | 28th |
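For anyone who wants to verify the percentages, here's a minimal Python sketch using only the figures from the tables above. Note that the daily column displays 0.00% purely because it is rounded to two decimal places:

```python
# Back-of-envelope check of the % SHORT columns, using only figures from
# the tables above: reported short position divided by issued shares.
issued = 1_790_058_145

daily = {
    "21 Nov 2023": 6_109,
    "20 Nov 2023": 51_191,
    "17 Nov 2023": 37_000,
    "16 Nov 2023": 58_835,
}
aggregate = {
    "16 Nov 2023": 100_585_378,
    "15 Nov 2023": 101_740_887,
}

# Daily figures are tiny relative to issued shares, so a column rounded to
# two decimal places displays 0.00% even though the true value is non-zero.
for date, short in daily.items():
    print(f"{date}: {short / issued:.4%}")   # e.g. 21 Nov -> 0.0003%

for date, short in aggregate.items():
    print(f"{date}: {short / issued:.4%}")   # ~5.62% and ~5.68%, matching the table

# Aggregate drop from 15 Nov to 16 Nov: the "just under 1.2m" buyback.
print(f"{101_740_887 - 100_585_378:,}")      # 1,155,509
```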
 
  • Like
  • Fire
  • Love
Reactions: 19 users

The Pope

Regular
Hi Bravo
This is a very nice lead-in to a point I wanted to make to those who question the significance of Tony Lewis accepting the offer of employment as Chief Technology Officer at BrainChip and doubt that he is genuinely excited by the prospect.

To prove that he is excited by the prospect, I have reproduced below Part 2 of the article that appeared in EE Times in March 2023 covering the release of AKIDA 2.0. To this article I will add only two later developments which, had they been known to the article's author, would no doubt have prompted him to add an adjective or two to his description of AKIDA 2.0 as "Mind-Boggling Neuromorphic":

1. At the time this article was written, AKIDA 2.0 P was running at 50 TOPS. As a result of early customer feedback, BrainChip added more nodes (a further 128), and when AKIDA 2.0 P was finally released it was running at 113 TOPS.

2. At the time this article was written, AKIDA 2.0 was able to run Vision Transformers; it has since added the ability to run Large Language Models (LLMs) on chip, unconnected.

My opinion only DYOR
Fact Finder
AKIDA BALLISTA


March 9, 2023

Mind-Boggling Neuromorphic Brain Chips (Part 2)​


by Max Maxfield
In my previous column, we discussed how the year 2030 seems set to be an exciting time to be in artificial intelligence (AI) and machine learning (ML) space (where no one can hear you scream). For example, in addition to the industrial IoT (IIoT) we also have the artificial intelligence of things (AIoT).
“What’s the AIoT when it’s at home?” I hear you cry. Well, according to the IoT Agenda, “The AIoT is the combination of AI technologies with the IoT infrastructure to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics […] the AIoT is transformational and mutually beneficial for both types of technology as AI adds value to IoT through machine learning capabilities and IoT adds value to AI through connectivity, signaling, and data exchange.”
I’ve said it before and I’ll say it again, I couldn’t have said this any better myself.
As we noted in Part 1 of this 2-part miniseries, the folks at PwC project that the impact of AI on global GDP by 2030 will be around $15T. According to Forbes, the AIoT piece of this pie could be $1.2T in 2030. Meanwhile, the 2023 Edge AI Hardware Report from VDC Research estimates that the market for Edge AI hardware processors will be $35B by 2030.
All of this goes to explain why the folks at BrainChip are so excited by the recent announcement of their Akida 2.0 platform. As I mentioned in my earlier musings, the Akida is a fully digital neuromorphic event-based processor.
The first big point to note about this new incarnation is that it can run humongous models like ResNet-50 completely on the neural processor, thereby freeing up the host processor.
The second point you need to know is that, once you’ve trained your original model in the cloud, this platform supports the unique ability to learn on-device without cloud retraining, thereby allowing developers to extend previously trained models.
The third point that will excite those in the know is that the guys and gals at BrainChip have added spatial temporal convolutions and the innovative handling of time series data. This means that, in addition to traditional 2D data (like images), these features allow the Akida platform to treat various types of 1D and 3D data smartly, thereby enabling much better predictive analysis, video analytics, speech analysis, etc. All of this goes to provide better accuracy at the edge while substantially reducing models in terms of size and weights.
And the fourth big point is that the little scamps at BrainChip have added hardware support for vision transformers at the edge, thereby delivering a dramatic boost in machine vision performance.
What are vision transformers? I’m so glad you asked. Until recently, typical image processing systems like ResNet have employed convolutional neural networks (CNNs). In 2017, a transformer architecture was introduced for use in natural language processing (NLP), which involves analyzing, extracting, and comprehending information from human language. NLP includes both reading text and listening to spoken words.
In artificial neural networks (ANNs), the term “attention” refers to a technique that is meant to mimic cognitive attention (i.e., the concentration of awareness on some phenomenon to the exclusion of other stimuli). Transformers measure the relationships between pairs of input tokens (words in the case of text strings), and this is what we mean by attention in this context.
This concept was extended in 2019 into a vision transformer architecture for processing images without the need for convolutions. The idea is basically to break down input images into a series of patches, which—once transformed into vectors—are essentially equivalent to the words processed by a normal transformer. The bottom line is that vision transformers can do a much better job of vision analysis by treating a picture like a sentence or a paragraph, as it were.
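To make the patch-and-attention idea concrete, here is a minimal NumPy sketch. All sizes are illustrative, the learned Q/K/V projections are omitted, and none of this is BrainChip's implementation; it just shows the two mechanisms described above:

```python
import numpy as np

# 1) Chop an image into patch tokens; 2) score pairwise attention between them.
rng = np.random.default_rng(0)

img = rng.standard_normal((32, 32, 3))            # toy image
P = 8                                             # patch size -> 4x4 = 16 patches
patches = img.reshape(4, P, 4, P, 3).swapaxes(1, 2).reshape(16, P * P * 3)

d = 64                                            # embedding dimension
W_embed = rng.standard_normal((P * P * 3, d))
tokens = patches @ W_embed                        # 16 tokens, the visual "words"

scores = tokens @ tokens.T / np.sqrt(d)           # pairwise relationships ("attention")
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the 16 tokens
out = weights @ tokens                            # attention-mixed token representations
print(out.shape)                                  # (16, 64)
```

The key point is that after the reshape, the image really is just a sequence of 16 vectors, so the attention arithmetic is identical to the text case.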
By “better” in this context I mean smaller and faster. How much smaller? How much faster? We will come to that in a moment. Before we go there, however, let’s first note that this latest incarnation of the Akida comes in three temptingly tasty flavors as shown below.

Akida: One platform for many scalable products (Source: BrainChip)

All of these flavors are intended for processing at the edge, but this leads us to the question, “What exactly is the edge?” In its broadest sense, the edge is where the internet meets the real world (where the rubber meets the road). For myself, I tend to think about the “extreme edge” or the “sensor edge” as being so close to the real world that you can smell it (assuming your sensor is of an olfactory nature, of course). By comparison, the “network edge” includes things like fog and mist computing in which edge devices (like servers) are used to carry out a substantial amount of computation, storage, and communication locally before ultimately connecting into the cloud.
The Akida-E, which has 1 to 4 nodes and provides up to 200 GOPS (giga operations per second), is targeted at low-end sensor nodes. This can be run standalone or in conjunction with a min-spec MCU, and it's ideal for always-on applications.
Next, we have the sensor-balanced Akida-S, which boasts 2 to 8 nodes and can provide up to 1 TOPS (tera operations per second). And for those applications demanding the maximum performance, we have the Akida-P, which flaunts (yes, I said “flaunts” and I’m not ashamed of myself) 8 to 128 nodes to deliver a squealing 50 TOPS (where the “squealing” qualifier would be me squealing in excitement). This bodacious beauty boasts temporal event-based neural nets and vision transformers. It handles detection, classification, segmentation, and prediction. Basically, it can handle all types of complex networks with minimum CPU intervention.
Another way to look at this is as shown below. There are a bunch of simple AI/ML tasks like vibration detection, anomaly detection, keyword spotting, and sensor fusion that can be implemented using an MCU (although an MCU is not optimal, these tasks are certainly within its performance envelope). However, these tasks, which are shown in gray in the image below, could be better served using an Akida-E, which will be orders of magnitude more efficient.

Example spectrum of edge AI/ML tasks and computational offerings (Source: BrainChip)
The applications shown as orange in the diagram cannot be implemented using MCUs by themselves; instead, they require some level of ML acceleration. These tasks can be easily accomplished using an Akida-S accompanied by a min-spec or mid-spec MCU.
Finally, we get to applications like speech recognition, gesture recognition, and object detection and classification, which are colored blue in the above diagram. Tasks of this ilk are usually performed using something like a Jetson (MPU+GPU) or a Snapdragon (AP+GPU). This is the application area where the Akida-P strides to the fore.
We don’t have the time (and I don’t have the energy) to take too deep a dive into the technology (rest assured that the folks at BrainChip will be delighted to talk your ears off if you ask them), but a super high-level view is as shown below.

High-level architectural view of the Akida (Source: BrainChip)
Everything inside the dotted line is the IP that developers will integrate into their system-on-chip (SoC) devices. Communication with the rest of the SoC is realized via standard AXI bus interconnect. In addition to the temporal event-based neural nets (TENNs) and vision transformers, we have a local scratch pad memory, a system interface DMA, and an enhanced high resolution coding (HRC) DMA. This fully digital event-based design, which is portable across different foundries, accelerates all types of networks, including CNNs, DNNs, vision transformers, native SNN, and sequence prediction.
The Akida efficiently computes event domain convolutions, temporal convolutions, and vision transformer encoding. The DMAs supporting all of this minimize the load on the system and the host CPU because they access data when they need it, as opposed to relying on the CPU to manage everything when it has better things to do. Moreover, there’s an intelligent runtime software layer that handles everything transparent to the user.
One distinguishing characteristic of the Akida is its ability to support multi-pass processing, which means developers can implement more complex networks on smaller die areas. For example, suppose you have a complex and parallel network that would ideally be served by four Akida nodes. Now suppose you don’t have space budgeted on the die for this, in which case you could use a single node and perform multiple passes.
Additional features supported by the Akida (Source: BrainChip)
The great thing here is that the runtime software takes care of everything, such as performing segmentation, partitioning, and then running the multiple levels. Obviously, there’s latency involved, but this latency is minimized by having the runtime and the DMA take care of everything, which means the Akida doesn’t need to keep on transferring control back to the CPU.
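The multi-pass idea can be pictured with a toy sketch. This is illustrative only; the "node" function and partial-result list here are stand-ins of my own, not BrainChip's runtime API or DMA machinery:

```python
import numpy as np

# A network whose four parallel branches would ideally occupy four hardware
# nodes is instead executed as four sequential passes on a single node,
# trading latency for die area.
rng = np.random.default_rng(1)

def run_on_node(weights, activations):
    """Pretend hardware node executing one partition of the network."""
    return np.maximum(activations @ weights, 0.0)   # toy layer: matmul + ReLU

x = rng.standard_normal((1, 128))
branches = [rng.standard_normal((128, 32)) for _ in range(4)]   # 4 parallel branches

partials = []
for w in branches:                        # one branch per pass: more latency,
    partials.append(run_on_node(w, x))    # but only one node's worth of silicon
merged = np.concatenate(partials, axis=-1)
print(merged.shape)   # (1, 128): same output a 4-node device would produce in one pass
```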
As we previously noted, the Akida also supports continuous on-chip learning, which allows developers to use existing trained models, extract features, and then extend classes on the last fully connected layer. All of this customization is performed on the device without the need for cloud retraining.
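One hedged way to picture "extending classes on the last fully connected layer" is the generic one-shot scheme below: freeze the trained feature extractor and append a new weight row derived from a few examples of the new class. This is for illustration only, not BrainChip's actual on-chip learning rule:

```python
import numpy as np

# The trained feature extractor stays frozen; adding a class appends one
# weight row built from example features of the new class.
rng = np.random.default_rng(2)
frozen_w = rng.standard_normal((64, 32))   # stand-in for the trained backbone
fc = rng.standard_normal((3, 32))          # final layer: 3 known classes

def extract_features(x):
    return np.maximum(x @ frozen_w, 0.0)   # frozen: never retrained on-device

new_class_examples = rng.standard_normal((5, 64))         # a few shots of class 4
proto = extract_features(new_class_examples).mean(axis=0)  # average feature vector
fc = np.vstack([fc, proto])                # one new row: class added, no cloud retraining

sample = rng.standard_normal((1, 64))
print(int(np.argmax(extract_features(sample) @ fc.T)))    # now classifies over 4 classes
```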
In the case of vision transformers, the Akida supports the encoder block fully in hardware. This encoder block is fully contained and—once again—managed by DMA and runtime software. Just to give a sense of what this means, two nodes running at 800 MHz provide 30 frames per second performance for high-resolution video, which is compelling for an edge device.
Another new feature is the concept of temporal event-based neural nets, which are easy to train and extremely data efficient. The idea here is to regard 3D data as spatial frames with a temporal component, and to train the network with backpropagation, like a CNN, extracting both spatial and temporal kernels. Subsequently, when inferencing, the system employs the spatial kernel as a 2D frame and the timing aspect, which is the third dimension, as a recurrent inference. The result is to facilitate tasks like object and target tracking.
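A toy sketch of that factorization, under my own simplifying assumptions (this is not the real TENN): a 2D spatial kernel is applied to each incoming frame, and a short temporal kernel is then applied recurrently over a rolling buffer of recent frame responses, so video is processed one frame at a time rather than as a full 3D block:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
spatial_k = rng.standard_normal((3, 3))    # spatial kernel, trained like a CNN's
temporal_k = rng.standard_normal(4)        # temporal kernel: the third dimension

buffer = []                                # rolling window of frame responses
for t in range(10):                        # streaming video, frame by frame
    frame = rng.standard_normal((16, 16))
    buffer.append(convolve2d(frame, spatial_k, mode="same"))
    buffer = buffer[-len(temporal_k):]     # keep only the temporal receptive field
    if len(buffer) == len(temporal_k):
        # Recurrent temporal inference: weighted sum over the buffered frames.
        out = sum(w * f for w, f in zip(temporal_k, buffer))
        print(t, out.shape)                # one spatio-temporal response per step
```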

Yet another way all of this becomes interesting is when dealing with 1D data or signals. For example, the Akida is extremely effective when working on things like raw audio signals or health care signals.
And so we come to the old proverb that states, “The proof of the pudding is in the eating.” Just how well does the Akida perform with industry-standard, real-world benchmarks?
Well, the lads and lasses at Prophesee.ai are working on some of the world’s most advanced neuromorphic vision systems. From their website we read: “Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was invisible until now using standard frame-based technology.”
According to the paper Learning to Detect Objects with a 1 Megapixel Event Camera, Gray.Retinanet is the latest state-of-the-art in event-camera based object detection. When working with the Prophesee Event Camera Road Scene Object Detection Dataset at a resolution of 1280×720, the Akida achieved 30% better precision while using 50X fewer parameters (0.576M compared to 32.8M with Gray.Retinanet) and 30X fewer operations (94B MACs/sec versus 2432B MACs/sec with Gray.Retinanet). The result was improved performance (including better learning and object detection) with a substantially smaller model (requiring less memory and less load on the system) and much greater efficiency (a lot less time and energy to compute).
As another example, if we move to a frame-based camera with a resolution of 1352×512 using the KITTI 2D Dataset, then ResNet-50 is kind of a standard benchmark today. In this case, Akida returns equivalent precision using 50X fewer parameters (0.57M vs. 26M) and 5X fewer operations (18B MACs/sec vs. 82B MACs/sec) while providing much greater efficiency (75mW at 30 frames per second in a 16nm device). This is the sort of efficiency and performance that could be supported by untethered or battery-operated cameras.
Another very interesting application area involves networks that are targeted at 1D data. One example would be processing raw audio data without the need for all the traditional signal conditioning and hardware filtering.
Consider today’s generic solution as depicted on the left-hand side of the image below. This solution is based on the combination of Mel-frequency cepstral coefficients (MFCCs) and a depth-wise separable CNN (DSCNN). In addition to hardware filtering, transforms, and encoding, this memory-intensive solution involves a heavy software load.
max-0216-05-simplifying-raw-audio.png

Raw audio processing: Traditional solution (left) vs. Akida solution (right)
(Source: BrainChip)
By comparison, as we see on the right-hand side of the image, the raw audio signal can be fed directly into an Akida TENN with no additional filtering or DSP hardware. The result is to increase accuracy from 92% to 97%, lower the memory footprint (26 kB vs. 93 kB), and use 16X fewer operations (19M MACs/sec vs. 320M MACs/sec). All of this delivers a single inference while consuming two microjoules of energy. Looking at this another way, assuming 15 inferences per second, we’re talking less than 100 µW for always-on keyword detection.
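Those figures are easy to sanity-check with back-of-envelope arithmetic, using only the numbers quoted in the paragraph above:

```python
# 2 uJ per inference at 15 inferences per second = 30 uW of compute,
# comfortably inside the "less than 100 uW" always-on budget.
energy_per_inference_j = 2e-6
inferences_per_second = 15

compute_power_w = energy_per_inference_j * inferences_per_second
print(f"{compute_power_w * 1e6:.0f} uW")   # 30 uW

print(f"{320 / 19:.1f}x")   # MACs/sec ratio (320M vs 19M), the quoted ~16x
print(f"{93 / 26:.1f}x")    # memory ratio (93 kB vs 26 kB), ~3.6x
```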
Similar 1D data is found in the medical arena for tasks like vital signs prediction based on a patient’s heart rate or respiratory rate. Preprocessing techniques don’t work well with this kind of data, which means we must work with raw signals. Akida’s TENNs do really well with raw data of this type.
In this case, comparisons are made between Akida and the state-of-the-art S4 (SOTA) algorithm (where S4 stands for structured state space sequence model) with respect to vital signs prediction based on heart rate or respiratory rate using the Beth Israel Deaconess Medical Center Dataset. In the case of respiration, Akida achieves ~SOTA accuracy with 2.5X fewer parameters (128k vs. 300k) and 80X fewer operations (0.142B MACs/sec vs. 11.2B MACs/sec). Meanwhile, in the case of heart rate, Akida achieves ~SOTA accuracy with 5X fewer parameters (63k vs. 600k) and 500X fewer operations (0.02B MACs/sec vs. 11.2B MACs/sec).
It’s impossible to list all the applications for which Akida could be used. In the case of industrial, obvious apps are robotics, predictive maintenance, and manufacturing management. When it comes to automotive, there’s real-time sensing and the in-cabin experience. In the case of health and wellness, we have vital signs monitoring and prediction; also, sensory augmentation. There are also smart home and smart city applications like security, surveillance, personalization, and proactive maintenance. And all of these are just scratching the surface of what is possible.
Hi FF

A mate of mine (a disciple) is a little shy about posting on TSE but wanted to share this in connection with your post.
He feels it surely can't be just a coincidence, given Mercedes' post on LinkedIn earlier today about LLMs.

Refer link below


Cheers
The Pope
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Labsy

Regular
  • Like
  • Love
  • Fire
Reactions: 9 users

Rach2512

Regular


This guy was congratulating Tony on his appointment.

Interesting to see what Atlazo are doing, wonder if they were previously aware of Brainchip?


Experience

Atlazo, Inc.

6 yrs 4 mos

CTO · Aug 2017 - Present (6 yrs 4 mos) · San Diego County, California, United States

Atlazo is developing innovative application-focused electronics platform and software to enable next generation health tracking and medical devices. Our mission is to make health tracking and medical devices smarter, smaller, lower power, more secure and seamlessly connected to improve quality of care and life.
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Esq.111

Fascinatingly Intuitive.

  • Like
  • Love
Reactions: 37 users

🤣...

 
  • Haha
  • Like
  • Sad
Reactions: 29 users

Sirod69

bavarian girl ;-)

🔍🔋 Meet GENX320, the industry’s smallest and most power-efficient event-based vision sensor to date!

Choose from a comprehensive product range built to precisely match your project requirements and accelerate time to design and production:


👉 Order a Bare die to productize an event-based vision system:

• 3x4mm size, 1/5” optical format 320x320 pixel Event-based Metavision® sensor with embedded features


👉 Experiment adapting to size constraints using the off-the-shelf Optical Flex Module:

• Experience GENX320 in a compact integrated optics module housing


👉 Connect Prophesee Evaluation Kit (available in two versions) and maximize your experience with our free Metavision Intelligence Suite - the most comprehensive event-based vision software with more than 95 algorithms, 79 code samples and 24 tutorials:

• Chip on board development module variant: Flexibility to pick your S-mount lens for evaluation as part of the EVK3 platform

• Compact optical module: Compact integrated optics system evaluation as part of the EVK3 platform


👉 Explore low-power, low-latency applications with the STM32 Discovery Board Assembly Kit (available in two versions):

• Chip-on-board development module: Plug-and-play, ready-to-use platform to get a head start on evaluation with added S-mount lens holder format flexibility

• Compact optical module: Plug and play, ready-to-use platform to get a head start on evaluation of compact integrated optics system


🚀 Experience the power of the world’s smallest and most power-efficient event-based vision sensor today: https://bit.ly/3QgQoRY

#EdgeComputing #VisionSensing #eventsensor #ebv #eventbasedvision #neuromorphic #eventcamera #metavision
 
  • Like
  • Love
  • Thinking
Reactions: 35 users

Sirod69

bavarian girl ;-)
  • Like
  • Love
  • Fire
Reactions: 34 users

Glen

Regular
  • Like
  • Thinking
  • Fire
Reactions: 26 users

Sirod69

bavarian girl ;-)
Brainchip!!!!
BrainChip has integrated Akida with the Arm Cortex-M85 processor, and the new Cortex-M52 extends the Armv8.1-M Cortex-M line-up (which includes the Cortex-M55 and Cortex-M85) to a new efficiency point, a critical milestone in bringing ML capabilities to microcontrollers.
 
  • Like
  • Love
  • Thinking
Reactions: 36 users

Getupthere

Regular
  • Like
  • Wow
Reactions: 5 users