BRN Discussion Ongoing

DJM263

LTH - 2015
I'm excited:

What would the attraction be for Tony to leave his current employment situation and move to a first mover with no revenue!!! I'd love to know what Sean's pitch was, and whether any NDAs were signed or disclosed during the negotiations :unsure:

I'm also happy that the succession plan retains PVDM in the loop in key roles.


Congratulations and thank you, PVDM
 
  • Like
  • Love
  • Fire
Reactions: 62 users

Damo4

Regular
Not sure that's correct @Damo4? I was under the impression it was the company that submits it as price sensitive or not. I could be wrong, but I believe this was clarified by t.dawe a while ago?
Hi Ilovelamp

In the last few weeks a company, WBT, released an announcement to the ASX marked price sensitive, and the ASX changed it to not price sensitive.

The CEO of the company put out a statement on social media saying, in effect, that he could not understand why the ASX had downgraded it.

I am sure WBT investors on here will be able to confirm this and perhaps supply the announcement and the CEO’s response.

The actual Guidance Note from the ASX also states that if a company is unsure if an announcement is price sensitive they should mark it that way and the ASX will decide.

If I were the ASX, then given that Peter van der Made is remaining on the Board, moving across to the Scientific Advisory Board, being given an honorary title, and not changing position until the end of 2023, with no suggestion he is selling his shares, I would consider it a steady-as-she-goes, non-price-sensitive announcement.

My opinion only DYOR
Fact Finder

AKIDA BALLISTA

Sorry, my wording wasn't great; you are correct that BRN chooses what they "think" it is, but ultimately the ASX decides.
As FF says, they can mark it Price Sensitive even if they aren't sure, and the ASX can decide.
I would assume price-sensitive but I'm not well-read on the continuous disclosure rules.
 
  • Like
Reactions: 4 users
Sorry, my wording wasn't great; you are correct that BRN chooses what they "think" it is, but ultimately the ASX decides.
As FF says, they can mark it Price Sensitive even if they aren't sure, and the ASX can decide.
I would assume price-sensitive but I'm not well-read on the continuous disclosure rules.
Thanks PVDM.. Good luck with your semi-retirement..

Early Dec is my timeline to start seeing some more price appreciation above 21c.. Still lots of churn to go through.. Then holding above 27-28c by early next year…

IMO that would be somewhat constructive from a TA point of view..
 
  • Like
  • Fire
Reactions: 4 users

Home101

Regular
Who is bloody selling now? This is such a positive hire.
 
  • Like
Reactions: 11 users

7für7

Top 20
Who is bloody selling now? This is such a positive hire.
Sell on good news ? 🤔
Also, some people just get the information wrong… and a lot of people overreact because of a lack of understanding. Just my point of view!
 
  • Like
Reactions: 4 users

MrNick

Regular
I'm excited:

What would the attraction be for Tony to leave his current employment situation and move to a first mover with no revenue!!! I'd love to know what Sean's pitch was, and whether any NDAs were signed or disclosed during the negotiations :unsure:

I'm also happy that the succession plan retains PVDM in the loop in key roles.


Congratulations and thank you, PVDM
Tony would have been made fully aware of where BC stands in its development timeline, and with which companies. He would also have been under no illusion about it: no beans would be allowed to spill from his mouth once Sean's patter concluded.

A great appointment and one that comes with a huge back catalogue of potential development areas to be explored for Akida.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

KMuzza

Mad Scientist


1700611452123.png



Well, now I think Elon and his cohorts knew about Brainchip :cool: as a Qualcomm man now works for BRAINCHIP :giggle:(y)(y)
Remember the "Elon didn't see this one coming" taunt from BRAINCHIP.

AKIDA BALLISTA UBQTS.-
 
  • Like
  • Fire
Reactions: 17 users

HopalongPetrovski

I'm Spartacus!


Put up by Rickk over on the crapper.
Nice to see and hear Tony in action.
Talking our kind of talk and happy to hear his emphasis on beneficent usage of AI.
Looks to be a good fit.
Has anyone found his age anywhere?
50's-60's?
Keen to hear more from him on just where he wants to guide us.
 
  • Like
  • Fire
  • Love
Reactions: 38 users

Labsy

Regular
I'm excited:

What would the attraction be for Tony to leave his current employment situation and move to a first mover with no revenue!!! I'd love to know what Sean's pitch was, and whether any NDAs were signed or disclosed during the negotiations :unsure:

I'm also happy that the succession plan retains PVDM in the loop in key roles.


Congratulations and thank you, PVDM
Indeed! Very encouraging....🤔🤔🤔🤔😉😉😉🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Labsy

Regular
Amongst other projects, of course, I would guess his emphasis would be on running LLMs on next-gen Akida-assisted processors, if not solely on Akida... closely working with Qualcomm, perhaps, and their Snapdragon...
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Who is bloody selling now? This is such a positive hire.
Volatility around Nvidia earnings mainly.. I doubt it has much to do with new CTO and PVDM retiring..
 
  • Like
Reactions: 3 users
Volatility around Nvidia earnings mainly.. I doubt it has much to do with new CTO and PVDM retiring..
I think you might be right here SL. The Aussie All Tech Index is currently down 0.90%, and the Nvidia Q3 earnings report may have something to do with it.

For anyone interested, here is a link to an article about the Nvidia report https://www.cnbc.com/2023/11/21/nvidia-nvda-q3-earnings-report-2024.html

"Nvidia faces obstacles, including competition from AMD and lower revenue because of export restrictions that can limit sales of its GPUs in China. But ahead of Tuesday's report, some analysts were nevertheless optimistic.

“GPU demand continues to outpace supply as Gen AI adoption broadens across industry verticals,” Raymond James’ Srini Pajjuri and Jacob Silverman wrote in a note Monday to clients, with a “strong buy” recommendation on Nvidia stock. “We are not overly concerned about competition and expect NVDA to maintain >85% share in Gen AI accelerators even in 2024.”

Interesting bit of info here "During the quarter, Nvidia announced the GH200 GPU, which has more memory than the current H100 and an additional Arm processor onboard. The H100 is expensive and in demand. Nvidia said Australia-based Iris Energy, an owner of bitcoin mining data centers, was buying 248 H100s for $10 million, which works out to about $40,000 each." Oh wow!
 
  • Like
  • Fire
  • Wow
Reactions: 15 users

Labsy

Regular
I think you might be right here SL. The Aussie All Tech Index is currently down 0.90%, and the Nvidia Q3 earnings report may have something to do with it.

For anyone interested, here is a link to an article about the Nvidia report https://www.cnbc.com/2023/11/21/nvidia-nvda-q3-earnings-report-2024.html

"Nvidia faces obstacles, including competition from AMD and lower revenue because of export restrictions that can limit sales of its GPUs in China. But ahead of Tuesday's report, some analysts were nevertheless optimistic.

“GPU demand continues to outpace supply as Gen AI adoption broadens across industry verticals,” Raymond James’ Srini Pajjuri and Jacob Silverman wrote in a note Monday to clients, with a “strong buy” recommendation on Nvidia stock. “We are not overly concerned about competition and expect NVDA to maintain >85% share in Gen AI accelerators even in 2024.”

Interesting bit of info here "During the quarter, Nvidia announced the GH200 GPU, which has more memory than the current H100 and an additional Arm processor onboard. The H100 is expensive and in demand. Nvidia said Australia-based Iris Energy, an owner of bitcoin mining data centers, was buying 248 H100s for $10 million, which works out to about $40,000 each." Oh wow!
Mental...buying 248 horse and ploughs to dig for fairy dust....
 
  • Haha
  • Like
Reactions: 8 users

TECH

Regular
Good afternoon back in Australia,

Gee, I was speaking with my son in Perth at 6.45am this morning. Perth is in the grips of a heatwave: 35c on Sunday through to this Sunday, with temps ranging between 37c and 40c... I'm sweating it out here at the top of NZ in 22c :ROFLMAO:... A November heatwave; pretty sure it's going to be a new record. I personally can't remember temps like that in November since I first arrived back in 1985.

Well here's a nice comment, some might say it's also hot !

"we’re using Akida 1 chip to run machine learning models for the vision system of our space robotics system. We plan to train it while in space. For us the major decision factor was the low power consumption, and subsequently less heat generation, which is a significant consideration for space applications."

No names mentioned, but nobody does it better than Peter's brilliant architecture. We have numerous points of difference between us and the mob; the MOST OBVIOUS is that we have a commercial, fully functioning technology, proven beyond a reasonable doubt. More and more, including tech media giants, have FINALLY RECOGNISED that the (current) future is Spiking Neural Networks and Neuromorphic Computing, thanks to pioneers like Peter and the ones who came before him.

Our journey is feeling very uplifting, roll on 2024.

Tech ;)
 
  • Like
  • Love
  • Fire
Reactions: 53 users
Good afternoon back in Australia,

Gee, I was speaking with my son in Perth at 6.45am this morning. Perth is in the grips of a heatwave: 35c on Sunday through to this Sunday, with temps ranging between 37c and 40c... I'm sweating it out here at the top of NZ in 22c :ROFLMAO:... A November heatwave; pretty sure it's going to be a new record. I personally can't remember temps like that in November since I first arrived back in 1985.

Well here's a nice comment, some might say it's also hot !

"we’re using Akida 1 chip to run machine learning models for the vision system of our space robotics system. We plan to train it while in space. For us the major decision factor was the low power consumption, and subsequently less heat generation, which is a significant consideration for space applications."

No names mentioned, but nobody does it better than Peter's brilliant architecture. We have numerous points of difference between us and the mob; the MOST OBVIOUS is that we have a commercial, fully functioning technology, proven beyond a reasonable doubt. More and more, including tech media giants, have FINALLY RECOGNISED that the (current) future is Spiking Neural Networks and Neuromorphic Computing, thanks to pioneers like Peter and the ones who came before him.

Our journey is feeling very uplifting, roll on 2024.

Tech ;)
You're lucky it's cool in Perth, as it's a cold 34 degrees and full humidity here in Cairns today

1700630754659.gif
 
  • Haha
  • Like
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Tony also made significant contributions at Qualcomm, Inc., where he led the Zeroth Neuromorphic Engineering Project and contributed to projects in intelligent AI agents and robotics while collaborating closely with Qualcomm Ventures.

View attachment 50179

Plus this...


And this...

Screen Shot 2023-11-22 at 4.32.08 pm.png

And this..
Screen Shot 2023-11-22 at 4.35.30 pm.png



And finally this...


wasted-drunk.gif
 
  • Like
  • Love
  • Fire
Reactions: 77 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Phew! I've been out all day and I was literally busting to post that!

cheeky-cute-smile.gif
 
  • Haha
  • Like
  • Love
Reactions: 25 users
Plus this...


And this...

View attachment 50205
And this..
View attachment 50206


And finally this...


View attachment 50207
Hi Bravo
This is a very nice lead-in to a point I wanted to make to those who question the significance of Tony Lewis accepting the offer of employment as Chief Technology Officer at Brainchip and doubt that he is genuinely excited by the prospect.

To prove that he is excited by the prospect, I have reproduced part 2 of the article that appeared in EETimes in March 2023 covering the release of AKIDA 2.0. To this article I will add only two later developments which, had they been known to the article's author, would no doubt have prompted him to add an adjective or two to his description of AKIDA 2.0 as more than Mind-Boggling Neuromorphic:

1. At the time this article was written, AKIDA 2.0 P was running at 50 TOPS. As a result of early customer feedback, Brainchip added more nodes (a further 128) and, when it was finally released, AKIDA 2.0 P was running at 113 TOPS.

2. At the time this article was written, AKIDA 2.0 was able to run vision transformers; since then it has added the ability to run Large Language Models (LLMs) on-chip, unconnected.

My opinion only DYOR
Fact Finder
AKIDA BALLISTA

max-0216-image-for-home-page-brainchip-deux.jpg

March 9, 2023

Mind-Boggling Neuromorphic Brain Chips (Part 2)


by Max Maxfield
In my previous column, we discussed how the year 2030 seems set to be an exciting time to be in the artificial intelligence (AI) and machine learning (ML) space (where no one can hear you scream). For example, in addition to the industrial IoT (IIoT) we also have the artificial intelligence of things (AIoT).
“What’s the AIoT when it’s at home?” I hear you cry. Well, according to the IoT Agenda, “The AIoT is the combination of AI technologies with the IoT infrastructure to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics […] the AIoT is transformational and mutually beneficial for both types of technology as AI adds value to IoT through machine learning capabilities and IoT adds value to AI through connectivity, signaling, and data exchange.”
I’ve said it before and I’ll say it again, I couldn’t have said this any better myself.
As we noted in Part 1 of this 2-part miniseries, the folks at PwC project that the impact of AI on global GDP by 2030 will be around $15T. According to Forbes, the AIoT piece of this pie could be $1.2T in 2030. Meanwhile, the 2023 Edge AI Hardware Report from VDC Research estimates that the market for Edge AI hardware processors will be $35B by 2030.
All of this goes to explain why the folks at BrainChip are so excited by the recent announcement of their Akida 2.0 platform. As I mentioned in my earlier musings, the Akida is a fully digital neuromorphic event-based processor.
The first big point to note about this new incarnation is that it can run humongous models like ResNet-50 completely on the neural processor, thereby freeing up the host processor.
The second point you need to know is that, once you’ve trained your original model in the cloud, this platform supports the unique ability to learn on-device without cloud retraining, thereby allowing developers to extend previously trained models.
The third point that will excite those in the know is that the guys and gals at BrainChip have added spatial temporal convolutions and the innovative handling of time series data. This means that, in addition to traditional 2D data (like images), these features allow the Akida platform to treat various types of 1D and 3D data smartly, thereby enabling much better predictive analysis, video analytics, speech analysis, etc. All of this goes to provide better accuracy at the edge while substantially reducing models in terms of size and weights.
And the fourth big point is that the little scamps at BrainChip have added hardware support for vision transformers in the edge, thereby delivering a dramatic boost in machine vision performance.
What are vision transformers? I’m so glad you asked. Until recently, typical image processing systems like ResNet have employed convolutional neural networks (CNNs). In 2017, a transformer architecture was introduced for use in natural language processing (NLP), which involves analyzing, extracting, and comprehending information from human language. NLP includes both reading text and listening to spoken words.
In artificial neural networks (ANNs), the term “attention” refers to a technique that is meant to mimic cognitive attention (i.e., the concentration of awareness on some phenomenon to the exclusion of other stimuli). Transformers measure the relationships between pairs of input tokens (words in the case of text strings), and this is what we mean by attention in this context.
This concept was extended in 2019 into a vision transformer architecture for processing images without need of convolutions. The idea is basically to break down input images into a series of patches, which—once transformed into vectors—are essentially equivalent to the words processed by a normal transformer. The bottom line is that vision transformers can do a much better job of vision analysis by treating a picture like a sentence or a paragraph, as it were.
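The patch-to-token step is easy to sketch. Here is a minimal, framework-free illustration; the 224-pixel image and 16-pixel patch size are conventional vision-transformer defaults rather than anything specific to Akida, and the learned linear projection and positional encodings that follow this step are omitted:

```python
import numpy as np

def image_to_tokens(image, patch=16):
    """Split an H x W x C image into non-overlapping patches and
    flatten each patch into one vector, so a picture becomes a
    sequence of 'words' a transformer can attend over."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    rows, cols = h // patch, w // patch
    tokens = []
    for r in range(rows):
        for col in range(cols):
            block = image[r*patch:(r+1)*patch, col*patch:(col+1)*patch, :]
            tokens.append(block.reshape(-1))  # one token per patch
    return np.stack(tokens)  # shape: (num_patches, patch*patch*c)

img = np.random.rand(224, 224, 3)
tokens = image_to_tokens(img)
print(tokens.shape)  # (196, 768): a 14 x 14 grid of patches, each a 768-dim token
```

The 196 tokens then play the same role as the words of a sentence in the attention computation described above.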
By “better” in this context I mean smaller and faster. How much smaller? How much faster? We will come to that in a moment. Before we go there, however, let’s first note that this latest incarnation of the Akida comes in three temptingly tasty flavors as shown below.
max-0216-01-akidae-s-p.png

Akida: One platform for many scalable products (Source: BrainChip)

All of these flavors are intended for processing at the edge, but this leads us to the question, “What exactly is the edge?” In its broadest sense, the edge is where the internet meets the real world (where the rubber meets the road). For myself, I tend to think about the “extreme edge” or the “sensor edge” as being so close to the real world that you can smell it (assuming your sensor is of an olfactory nature, of course). By comparison, the “network edge” includes things like fog and mist computing in which edge devices (like servers) are used to carry out a substantial amount of computation, storage, and communication locally before ultimately connecting into the cloud.
The Akida-E, which has 1 to 4 nodes and provides up to 200 GOPS (giga operations per second), is targeted at low-end sensor nodes. This can be run standalone or in conjunction with a min-spec MCU, and it's ideal for always-on applications.

Next, we have the sensor-balanced Akida-S, which boasts 2 to 8 nodes and can provide up to 1 TOPS (tera operations per second). And for those applications demanding the maximum performance, we have the Akida-P, which flaunts (yes, I said "flaunts" and I'm not ashamed of myself) 8 to 128 nodes to deliver a squealing 50 TOPS (where the "squealing" qualifier would be me squealing in excitement). This bodacious beauty boasts temporal event-based neural nets and vision transformers. It has detection, classification, segmentation, and prediction. Basically, it can handle all types of complex networks with minimum CPU intervention.
Another way to look at this is as shown below. There are a bunch of simple AI/ML tasks like vibration detection, anomaly detection, keyword spotting, and sensor fusion that can be implemented using an MCU (although an MCU is not optimal, these tasks are certainly within its performance envelope). However, these tasks, which are shown in gray in the image below, could be better served using an Akida-E, which will be orders of magnitude more efficient.
max-0216-02-akida-apps.png

Example spectrum of edge AI/ML tasks and computational offerings (Source: BrainChip)
The applications shown as orange in the diagram cannot be implemented using MCUs by themselves; instead, they require some level of ML acceleration. These tasks can be easily accomplished using an Akida-S accompanied by a min-spec or mid-spec MCU.
Finally, we get to applications like speech recognition, gesture recognition, and object detection and classification, which are colored blue in the above diagram. Tasks of this ilk are usually performed using something like a Jetson (MPU+GPU) or a Snapdragon (AP+GPU). This is the application area where the Akida-P strides to the fore.
We don’t have the time (and I don’t have the energy) to take too deep a dive into the technology (rest assured that the folks at BrainChip will be delighted to talk your ears off if you ask them), but a super high-level view is as shown below.
max-0216-03-akidae-architecture.png

High-level architectural view of the Akida (Source: BrainChip)
Everything inside the dotted line is the IP that developers will integrate into their system-on-chip (SoC) devices. Communication with the rest of the SoC is realized via standard AXI bus interconnect. In addition to the temporal event-based neural nets (TENNs) and vision transformers, we have a local scratch pad memory, a system interface DMA, and an enhanced high resolution coding (HRC) DMA. This fully digital event-based design, which is portable across different foundries, accelerates all types of networks, including CNNs, DNNs, vision transformers, native SNN, and sequence prediction.
The Akida efficiently computes event domain convolutions, temporal convolutions, and vision transformer encoding. The DMAs supporting all of this minimize the load on the system and the host CPU because they access data when they need it, as opposed to relying on the CPU to manage everything when it has better things to do. Moreover, there’s an intelligent runtime software layer that handles everything transparent to the user.
One distinguishing characteristic of the Akida is its ability to support multi-pass processing, which means developers can implement more complex networks on smaller die areas. For example, suppose you have a complex and parallel network that would ideally be served by four Akida nodes. Now suppose you don’t have space budgeted on the die for this, in which case you could use a single node and perform multiple passes.
max-0216-04-simplifying-and-differentiating.png
Additional features supported by the Akida (Source: BrainChip)
The great thing here is that the runtime software takes care of everything, such as performing segmentation, partitioning, and then running the multiple levels. Obviously, there’s latency involved, but this latency is minimized by having the runtime and the DMA take care of everything, which means the Akida doesn’t need to keep on transferring control back to the CPU.
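The area-versus-latency trade in multi-pass processing comes down to simple ceiling division. A toy sketch of the idea (the segment and node counts are hypothetical, and real scheduling and DMA overheads are ignored):

```python
def passes_needed(segments, nodes):
    """A network partitioned into `segments` parallel chunks needs
    ceil(segments / nodes) sequential passes on `nodes` hardware nodes."""
    return -(-segments // nodes)  # ceiling division without math.ceil

# A workload that would ideally occupy 4 nodes in one pass...
print(passes_needed(4, 4))  # 1 pass on a 4-node die
# ...can still run on a single-node die, at roughly 4x the latency:
print(passes_needed(4, 1))  # 4 passes
```

The die area scales down with the node count while the pass count (and hence latency) scales up, which is exactly the trade-off the runtime manages transparently.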
As we previously noted, the Akida also supports continuous on-chip learning, which allows developers to use existing trained models, extract features, and then extend classes on the last fully connected layer. All of this customization is performed on the device without the need for cloud retraining.
In the case of vision transformers, the Akida supports the encoder block fully in hardware. This encoder block is fully contained and—once again—managed by DMA and runtime software. Just to give a sense of what this means, two nodes running at 800MHz provide 30 frames per second performance for high resolution video, which is compelling for an edge device.
Another new feature is the concept of temporal event-based neural nets, which are easy to train and extremely data efficient. The idea here is to regard 3D data as spatial frames with a temporal component, and to train the network with back propagation like a CNN, extracting both spatial and temporal kernels. Subsequently, when inferencing, the system employs the spatial kernel as a 2D frame and the timing aspect, which is the third dimension, as a recurrent inference. The result is to facilitate tasks like object and target tracking.
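The "spatial frame plus temporal component" idea can be sketched with plain convolutions. This is only an illustrative factorised spatio-temporal filter in NumPy, not BrainChip's event-based implementation; the kernel sizes and weights are made up:

```python
import numpy as np

def spatial_conv(frame, k2d):
    """Valid-mode 2D cross-correlation of one frame with a spatial kernel."""
    kh, kw = k2d.shape
    H, W = frame.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i+kh, j:j+kw] * k2d)
    return out

def tenn_like(video, k2d, k1d):
    """Factorised spatio-temporal filtering: a 2D spatial kernel applied
    per frame, then a causal 1D temporal kernel across frames -- the
    '2D frame + recurrent time axis' idea, sketched with plain loops."""
    spatial = np.stack([spatial_conv(f, k2d) for f in video])  # (T, H', W')
    out = np.zeros_like(spatial)
    for t in range(spatial.shape[0]):
        for d in range(len(k1d)):      # causal: only past frames contribute
            if t - d >= 0:
                out[t] += k1d[d] * spatial[t - d]
    return out

video = np.random.rand(8, 16, 16)              # 8 frames of 16x16
y = tenn_like(video, np.ones((3, 3)) / 9.0, np.array([0.5, 0.3, 0.2]))
print(y.shape)  # (8, 14, 14)
```

Because the temporal kernel only looks backwards in time, inference can proceed frame by frame, which is the recurrent behaviour described above.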

Yet another way all of this becomes interesting is when dealing with 1D data or signals. For example, the Akida is extremely effective when working on things like raw audio signals or health care signals.
And so we come to the old proverb that states, “The proof of the pudding is in the eating.” Just how well does the Akida perform with industry-standard, real-world benchmarks?
Well, the lads and lasses at Prophesee.ai are working on some of the world’s most advanced neuromorphic vision systems. From their website we read: “Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was invisible until now using standard frame-based technology.”
According to the paper Learning to Detect Objects with a 1 Megapixel Event Camera, Gray.Retinanet is the latest state-of-the-art in event-camera based object detection. When working with the Prophesee Event Camera Road Scene Object Detection Dataset at a resolution of 1280×720, the Akida achieved 30% better precision while using 50X fewer parameters (0.576M compared to 32.8M with Gray.Retinanet) and 30X fewer operations (94B MACs/sec versus 2432B MACs/sec with Gray.Retinanet). The result was improved performance (including better learning and object detection) with a substantially smaller model (requiring less memory and less load on the system) and much greater efficiency (a lot less time and energy to compute).
As another example, if we move to a frame-based camera with a resolution of 1352×512 using the KITTI 2D Dataset, then ResNet-50 is kind of a standard benchmark today. In this case, Akida returns equivalent precision using 50X fewer parameters (0.57M vs. 26M) and 5X fewer operations (18B MACs/sec vs. 82B MACs/sec) while providing much greater efficiency (75mW at 30 frames per second in a 16nm device). This is the sort of efficiency and performance that could be supported by untethered or battery-operated cameras.
Another very interesting application area involves networks that are targeted at 1D data. One example would be processing raw audio data without the need for all the traditional signal conditioning and hardware filtering.
Consider today’s generic solution as depicted on the left-hand side of the image below. This solution is based on the combination of Mel-frequency cepstral coefficients (MFCCs) and a depth-wise separable CNN (DSCNN). In addition to hardware filtering, transforms, and encoding, this memory-intensive solution involves a heavy software load.
max-0216-05-simplifying-raw-audio.png

Raw audio processing: Traditional solution (left) vs. Akida solution (right)
(Source: BrainChip)
By comparison, as we see on the right-hand side of the image, the raw audio signal can be fed directly into an Akida TENN with no additional filtering or DSP hardware. The result is to increase the accuracy from 92% to 97%, lower the memory (26kB vs. 93kB), and use 16X fewer operations (19M MACs/sec vs. 320M MACs/sec). All of this basically returns single inference while consuming two microjoules of energy. Looking at this another way, assuming 15 inferences per second, we’re talking less than 100µW for always-on keyword detection.
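That always-on power figure follows directly from the per-inference energy quoted above; a quick sanity check:

```python
energy_per_inference_j = 2e-6      # two microjoules per inference (quoted above)
inferences_per_second = 15         # the assumed always-on keyword-spotting rate
avg_power_w = energy_per_inference_j * inferences_per_second
print(f"{avg_power_w * 1e6:.0f} uW")  # 30 uW, comfortably under the 100 uW figure
```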
Similar 1D data is found in the medical arena for tasks like vital signs prediction based on a patient’s heart rate or respiratory rate. Preprocessing techniques don’t work well with this kind of data, which means we must work with raw signals. Akida’s TENNs do really well with raw data of this type.
In this case, comparisons are made between Akida and the state-of-the-art S4 (SOTA) algorithm (where S4 stands for structured state space sequence model) with respect to vital signs prediction based on heart rate or respiratory rate using the Beth Israel Deaconess Medical Center Dataset. In the case of respiration, Akida achieves ~SOTA accuracy with 2.5X fewer parameters (128k vs. 300k) and 80X fewer operations (0.142B MACs/sec vs. 11.2B MACs/sec). Meanwhile, in the case of heart rate, Akida achieves ~SOTA accuracy with 5X fewer parameters (63k vs. 600k) and 500X fewer operations (0.02B MACs/sec vs. 11.2B MACs/sec).
It’s impossible to list all the applications for which Akida could be used. In the case of industrial, obvious apps are robotics, predictive maintenance, and manufacturing management. When it comes to automotive, there’s real-time sensing and the in-cabin experience. In the case of health and wellness, we have vital signs monitoring and prediction; also, sensory augmentation. There are also smart home and smart city applications like security, surveillance, personalization, and proactive maintenance. And all of these are just scratching the surface of what is possible.
 
  • Like
  • Love
  • Fire
Reactions: 96 users

cosors

👀
  • Haha
Reactions: 7 users