BRN Discussion Ongoing

 

Sirod69

bavarian girl ;-)
PROPHESEE
1 hr •


Our own Luca Verre weighs in with some thoughts on how #AI and #Machinelearning are enabling more effective vision sensing, even in resource-constrained applications.

Thanks Vision Systems Design for including our views. With the explosion of data generated by ubiquitous video, combined with the increasing use of vision at the Edge, more efficient methods are required to capture and process scenes with mobile, wearable and IoT devices.

Neuromorphic-enabled #eventbased vision can address the performance, power, and challenging lighting and motion conditions in which many of these use cases operate.

👉 https://lnkd.in/gjWmnhjF

#AR #neuromorphic #Eventcamera #EdgeAI
 

Quiltman

Regular
Slade said: “Part 2 is out and it’s a ripper of a read.”


What a wonderful read, although my comprehension of all the new capabilities is somewhat lacking.
However, I think most of us understand real-world comparisons and benchmarking.
Hence, I think this extract from the article is particularly powerful:
......

And so we come to the old proverb that states, “The proof of the pudding is in the eating.” Just how well does the Akida perform with industry-standard, real-world benchmarks?

Well, the lads and lasses at Prophesee.ai are working on some of the world’s most advanced neuromorphic vision systems. From their website we read: “Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was invisible until now using standard frame-based technology.”

According to the paper Learning to Detect Objects with a 1 Megapixel Event Camera, Gray.Retinanet is the latest state-of-the-art in event-camera based object detection. When working with the Prophesee Event Camera Road Scene Object Detection Dataset at a resolution of 1280×720, the Akida achieved 30% better precision while using 50X fewer parameters (0.576M compared to 32.8M with Gray.Retinanet) and 30X fewer operations (94B MACs/sec versus 2432B MACs/sec with Gray.Retinanet). The result was improved performance (including better learning and object detection) with a substantially smaller model (requiring less memory and less load on the system) and much greater efficiency (a lot less time and energy to compute).

As another example, if we move to a frame-based camera with a resolution of 1352×512 using the KITTI 2D Dataset, then ResNet-50 is kind of a standard benchmark today. In this case, Akida returns equivalent precision using 50X fewer parameters (0.57M vs. 26M) and 5X fewer operations (18B MACs/sec vs. 82B MACs/sec) while providing much greater efficiency (75mW at 30 frames per second in a 16nm device). This is the sort of efficiency and performance that could be supported by untethered or battery-operated cameras.
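Purely as a sanity check (a back-of-the-envelope sketch of my own, not from the article or BrainChip), the quoted figures can be turned back into ratios; the exact quotients land near, though not exactly on, the article's rounded 50X/30X/5X factors:

```python
# Sanity check of the parameter/operation ratios quoted above.
# All inputs are the article's figures; the rounding is theirs.
benchmarks = {
    "Prophesee 1280x720 (Akida vs Gray.Retinanet)":
        {"params": (0.576e6, 32.8e6), "macs": (94e9, 2432e9)},
    "KITTI 2D 1352x512 (Akida vs ResNet-50)":
        {"params": (0.57e6, 26e6), "macs": (18e9, 82e9)},
}

for name, figures in benchmarks.items():
    p_akida, p_ref = figures["params"]   # Akida first, reference model second
    m_akida, m_ref = figures["macs"]
    print(f"{name}: {p_ref / p_akida:.0f}X fewer parameters, "
          f"{m_ref / m_akida:.0f}X fewer MACs/sec")
# Prophesee: ~57X fewer parameters, ~26X fewer MACs/sec
# KITTI: ~46X fewer parameters, ~5X fewer MACs/sec
```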

Another very interesting application area involves networks that are targeted at 1D data. One example would be processing raw audio data without the need for all the traditional signal conditioning and hardware filtering.

Consider today’s generic solution as depicted on the left-hand side of the image below. This solution is based on the combination of Mel-frequency cepstral coefficients (MFCCs) and a depth-wise separable CNN (DSCNN). In addition to hardware filtering, transforms, and encoding, this memory-intensive solution involves a heavy software load.

[Image: Raw audio processing, traditional solution (left) vs. Akida solution (right). Source: BrainChip]
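For anyone who hasn't met that conventional front end, here is a minimal sketch of the MFCC feature-extraction step the traditional pipeline (left-hand side of the image above) performs before the DSCNN ever sees the data. It uses librosa and is purely illustrative: the filename is a placeholder and this is not BrainChip's code.

```python
import librosa

# Load one second of 16 kHz audio (placeholder filename).
signal, sr = librosa.load("keyword.wav", sr=16000, duration=1.0)

# Conventional front end: 13 Mel-frequency cepstral coefficients per
# ~32 ms frame; these features, not the raw samples, are what a DSCNN
# keyword-spotting model is trained on.
mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                             n_fft=512, hop_length=256)
print(mfccs.shape)  # (13, number_of_frames)
```

The point of the TENN approach described next is that this whole stage, plus any hardware filtering in front of it, disappears.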


By comparison, as we see on the right-hand side of the image, the raw audio signal can be fed directly into an Akida TENN with no additional filtering or DSP hardware. The result is to increase the accuracy from 92% to 97%, lower the memory (26kB vs. 93kB), and use 16X fewer operations (19M MACs/sec vs. 320M MACs/sec). All of this basically returns a single inference while consuming two microjoules of energy. Looking at this another way, assuming 15 inferences per second, we’re talking less than 100µW for always-on keyword detection.
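That last power claim is easy to verify; the arithmetic below is mine, while the energy and rate figures are the article's:

```python
# Always-on keyword detection: energy per inference -> average power.
energy_per_inference = 2e-6    # joules (~2 µJ per inference, per the article)
inferences_per_second = 15     # assumed always-on duty rate

average_power = energy_per_inference * inferences_per_second
print(f"{average_power * 1e6:.0f} µW")  # ~30 µW, well under the quoted 100 µW
```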

Similar 1D data is found in the medical arena for tasks like vital signs prediction based on a patient’s heart rate or respiratory rate. Preprocessing techniques don’t work well with this kind of data, which means we must work with raw signals. Akida’s TENNs do really well with raw data of this type.

In this case, comparisons are made between Akida and the state-of-the-art S4 (SOTA) algorithm (where S4 stands for structured state space sequence model) with respect to vital signs prediction based on heart rate or respiratory rate using the Beth Israel Deaconess Medical Center Dataset. In the case of respiration, Akida achieves ~SOTA accuracy with 2.5X fewer parameters (128k vs. 300k) and 80X fewer operations (0.142B MACs/sec vs. 11.2B MACs/sec). Meanwhile, in the case of heart rate, Akida achieves ~SOTA accuracy with 5X fewer parameters (63k vs. 600k) and 500X fewer operations (0.02B MACs/sec vs. 11.2B MACs/sec).
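Running the same quick ratio check over these figures (my arithmetic, the article's numbers) reproduces the operation counts almost exactly, although the heart-rate parameter quotient comes out closer to 10X than the quoted 5X:

```python
# Vital-signs benchmarks: Akida TENN vs S4, figures from the article.
vital_signs = {
    "respiration": {"params": (128e3, 300e3), "macs": (0.142e9, 11.2e9)},
    "heart rate":  {"params": (63e3, 600e3),  "macs": (0.02e9, 11.2e9)},
}

for task, figures in vital_signs.items():
    p_akida, p_s4 = figures["params"]
    m_akida, m_s4 = figures["macs"]
    print(f"{task}: {p_s4 / p_akida:.1f}X fewer parameters, "
          f"{m_s4 / m_akida:.0f}X fewer MACs/sec")
# respiration: 2.3X fewer parameters, 79X fewer MACs/sec
# heart rate: 9.5X fewer parameters, 560X fewer MACs/sec
```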

It’s impossible to list all the applications for which Akida could be used. In the case of industrial, obvious apps are robotics, predictive maintenance, and manufacturing management. When it comes to automotive, there’s real-time sensing and the in-cabin experience. In the case of health and wellness, we have vital signs monitoring and prediction; also, sensory augmentation. There are also smart home and smart city applications like security, surveillance, personalization, and proactive maintenance. And all of these are just scratching the surface of what is possible.
 

RobjHunt

Regular
Market/shorts have decided to keep the price where it is. 2nd half of the year I expect some more action or it might be time for me to reconsider my investment
Especially recently, I too have been continually reconsidering my investment, but unfortunately, of late, I don’t have available funds to top up.

Pantene Peeps!
 

JDelekto

Regular
Quiltman said: (post and article extract quoted in full above)
I have to say that this is my favorite article to date: written by the industry for its target audience, as opposed to being written by investors who do not seem to understand the technology.

One of my favorite sentences in this article: "The bottom line is that vision transformers can do a much better job of vision analysis by treating a picture like a sentence or a paragraph, as it were."

That puts a whole new spin on a picture worth a thousand words!
 

chapman89

Founding Member
Show me another company that’s going to change the world in every industry?

This is how I see the Brainchip story playing out.
I believe later in the year, from Q3 onwards, we will see 1-2 material contracts signed, one being Prophesee, although, as @Diogenese sees it, Prophesee is more of a co-development partnership. I see multiple licensing agreements/payments in 2024.

Now, as we know factually, Renesas will have finished taping out MCUs containing Akida IP that will be available for the market.

I see more partners being revealed, and I see updates on MegaChips, and MegaChips taping out, possibly with the AKD1500.

I see Socionext taping out silicon containing Akida IP.

AI is the hottest topic right now, and Brainchip is set to be the dominant Edge AI player.

Now what comes with this? Yes, a much, much higher share price, but personally I see Renesas hitting the market with MCUs containing Akida IP as being huge, huge, huge, because we will no longer be just a company talking about neuromorphic; we will be a company with commercial applications containing neuromorphic technology, and the wider market will see this. Those who are skeptical will no longer be skeptics; their interest will rise, and I believe 2024 will be our hockey-stick growth.

Big funds like Cathie Wood’s ARK Invest, which has likened neural networks to an opportunity bigger than the internet, will be scrambling to get a piece of Brainchip shares.



Those who are patient and have done their own research and have the foresight imo will reap the rewards.



You only have to look at what those in the industry are saying about Brainchip, and how amazing, differentiated and unique it is.



Truly exciting times.

My opinion only.
 
Interesting article from Design & Reuse. The Renesas spokesperson just gave AKIDA 2nd gen a rap, and Plumerai, well, some will remember the name, and the connections to Brainchip and ARM should be immediately recognised, but where did the AI come from? If you believe the article, it came from Renesas ecosystem partners:

https://www.design-reuse.com/redir3/35974/352793/8IDCcehr87FC7QuZSKvfeNO7vLEwt

My opinion only DYOR
FF

AKIDA BALLISTA
 

TechGirl

Founding Member
Slade said: “Part 2 is out and it’s a ripper of a read.”


Thanks Slade ❤️

Wow what a read 💥

This is my new favourite, Mind Boggling.

But I must admit I do love science fiction oh so much too, so how am I supposed to choose now?

I know I’ve slapped them together, so from here on in, peeps, BrainChip’s Akida shall be known as

BRAINCHIP’s AKIDA - MIND BOGGLING SCIENCE FICTION - as described by industry experts and respected press

Gotta love the Chip 💪
 

chapman89

Founding Member
FF said: (Design & Reuse post quoted in full above)


“At embedded world in 2022, Renesas became the first company to demonstrate working silicon based on the Arm Cortex-M85 processor. This year, Renesas is extending its leadership by showcasing the features of the new processor in demanding AI use cases. The first demonstration showcases a people detection application developed in collaboration with Plumerai, a leader in Vision AI, that identifies and tracks persons in the camera frame in varying lighting and environmental conditions. The compact and efficient TinyML models used in this application lead to low-cost and lower power AI solutions for a wide range of IoT implementations. The second demo showcases a motor control predictive maintenance use case with an AI-based unbalanced load detection application using Tensorflow Lite for Microcontrollers with CMSIS-NN.

Delivering over 6 CoreMark/MHz, Cortex-M85 enables demanding IoT use cases that require the highest compute performance and DSP or ML capability, realized on a single, simple-to-program Cortex-M processor. The Arm Cortex-M85 processor features Helium technology, Arm’s M-Profile Vector Extension, available as part of the Armv8.1M architecture. It delivers a significant performance uplift for machine learning (ML) and digital signal processing (DSP) applications, accelerating compute-intensive applications such as endpoint AI. Both demos will showcase the performance uplift made possible by the application of this technology in AI use cases. Cortex-M hallmarks such as deterministic operation, short interrupt response time, and state-of-the-art low-power support are uncompromised on Cortex-M85.

“We’re proud to again lead the industry in implementing the powerful new Arm Cortex-M85 processor with Helium technology,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “By showcasing the performance of AI on the new processor, we are highlighting technical advantages of the new platform and at the same time demonstrating Renesas’ strengths in providing solutions for emerging applications with our innovative ecosystem partners.”
 
chapman89 said: (Renesas press release quoted in full above)
“We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge,” says Roger Wendelken, Senior Vice President of Renesas’ IoT and Infrastructure Business Unit.

Despite this, a WANCA said today, while recommending Retail Food Group as the pick for 2023, “what’s a neuromorphic chip anyway?”

My opinion only DYOR
FF

AKIDA BALLISTA
 

Evermont

Stealth Mode
Just FYI - Subscription only unfortunately.

9 March 2023

BrainChip’s second-gen neuromorphic silicon gunning for the big boys


 
Slade said: “Part 2 is out and it’s a ripper of a read.”

I must say, for my money (and Brainchip has now got quite a bit of it), the killer part is the Prophesee confirmation:

“According to the paper Learning to Detect Objects with a 1 Megapixel Event Camera, Gray.Retinanet is the latest state-of-the-art in event-camera based object detection. When working with the Prophesee Event Camera Road Scene Object Detection Dataset at a resolution of 1280×720, the Akida achieved 30% better precision while using 50X fewer parameters (0.576M compared to 32.8M with Gray.Retinanet) and 30X fewer operations (94B MACs/sec versus 2432B MACs/sec with Gray.Retinanet). The result was improved performance (including better learning and object detection) with a substantially smaller model (requiring less memory and less load on the system) and much greater efficiency (a lot less time and energy to compute).”

If you take these results together with the multiple statements by Luca Verre, and throw in that Tim Llewellyn of Nviso stated that, with Anil Mankar’s assistance, they were able to optimise AKIDA to their specific needs, then, given that Anil Mankar and his team are no doubt likewise optimising AKIDA for Prophesee’s needs, the signing off on some form of commercial relationship must be a foregone conclusion.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Worrying about the SP every day must be truly exhausting for those who find it hard to handle the psychology of the 'red' days of recent times. It's only painful if you NEED to sell now or don't believe in Brainchip anymore. These SP fluctuations are surely going to seem trivial when Brainchip start closing deals (no matter whether these deals are announced or we see it via revenue in quarterlies). Getting excited for Brainchip's promising future seems a much wiser use of time, given the announcement on 6th March.

The company recently released a fantastic announcement that has strengthened Brainchip's lead in neuromorphic edge AI. Some of our partners are quoted, gushing about us. Brainchip has implemented game-changing new features in its second-generation Akida platform that customers asked for to meet their needs. I don't know about you, but no major company I have ever worked for spends massive $$$ to improve functionality on a whim, without being confident that said changes will secure deals.

Lead adopters are engaging with the second-gen Akida NOW. We will be in a Renesas chip this year. How anyone could be anything except excited is beyond me.

DYOR, all in my excited opinion.
Has anyone noticed the change in messaging, from partnering with companies and building an ecosystem to “Join our Essential AI Ecosystem”? We now have a number of partners quoted on the website in relation to our second-generation technology.
 

jtardif999

Regular
FF said:

As some seem intent on finding comparisons on the ASX against which to rate Brainchip’s performance, when there are absolutely none, I will remind readers both here and in the background of what Steven Leibson of Tirias Research said in his Forbes Magazine article dated 6 March 2023:

"Brainchip’s bio-inspired Akida platform is certainly an unusual way to tackle AI/ML applications. While most other NPU vendors are figuring out how many MACs they can fit – and power – on the head of a pin, Brainchip is taking an alternative approach that’s been proven by Mother Nature to work over many tens of millions of years.

In Tirias Research’s opinion, it’s not the path taken to the result that’s important, it’s the result that counts. If Brainchip’s Akida event-based platform succeeds, it won’t be the first time that a radical new silicon technology has swept the field. Consider DRAMs (dynamic random access memories), microprocessors, microcontrollers, and FPGAs (field programmable gate arrays), for example. When those devices first appeared, there were many who expressed doubts. No longer. It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations. Time will tell."


Edge Impulse described AKIDA technology as science fiction. A data scientist and PhD candidate described the first-generation AKIDA 1000 as a “beast”.

Against this background the WANCAs have continued to ask “what’s a neuromorphic chip anyway?”, well, at least those who can say ‘neuromorphic’.

"It’s possible that Brainchip has developed yet another breakthrough that could rank with those previous innovations." Mark these words as they are only directed at AKIDA 2000 not the overall mission which Brainchip has embarked upon of creating Artificial General Intelligence.

Brainchip, Peter van der Made and Anil Mankar are creating an entirely new computing paradigm well beyond anything that exists.

They are on track with the timetable set out by Peter van der Made and with each step are opening up engineering possibilities that have only existed in unfulfilled patents and dreams.

The first and second generation AKIDA technology will create industries that do not yet exist.

Peter van der Made has stated recently that his vision of Artificial General Intelligence will be fulfilled in about 7 years which puts it at 2030.

The enormity of what Artificial General Intelligence means is found in Bill Gates’ remark that the person who invents Artificial General Intelligence will have created a company worth ten times Microsoft. I am not saying that his valuation was or is correct; what I am pointing you to is the significance he attaches to what Brainchip and Peter van der Made are pursuing.

The late Stephen Hawking and the still living Elon Musk have both stated that Artificial General Intelligence could lead to the destruction of mankind. There are of course many others who are saying and who have said the same thing.


Now, in the overly optimistic hope that the above will be sufficient to shut down the mindless comparisons: can anyone here point to a technology company on the ASX that could, just by its mere existence, lead to the destruction of mankind in seven years’ time?

Of course not. This is why Brainchip is not understood by normal retail investors.

Why it is not understood by WANCA commentators who cannot even say neuromorphic.

Why it is extremely obvious that anyone who persists down this road of comparison is either ignorant of just what Brainchip is doing or has some other motive, innocent or otherwise.

My opinion only DYOR
FF

AKIDA BALLISTA




We need another category for appreciating FF’s posts - maybe LOL+Fire 🤓
 
FF said: (post quoted in full above)

So good to have you back FF.
 

Steve10

Regular
Excellent write-up in part 2.

Appears Akida has limitless applications.

It's a no-brainer for companies to make their devices smart at the edge for 20-30c of BRN IP per chip.

1.15 trillion chips per year.

1% = 11.5B chips pa x 25c = $2.875B.
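Spelled out as a quick check (the 1% share and 25c royalty are assumptions, not company guidance):

```python
# Back-of-the-envelope TAM maths from the figures above.
chips_per_year = 1.15e12   # total chips shipped annually (figure from the post)
market_share = 0.01        # hypothetical 1% penetration
royalty_per_chip = 0.25    # assumed 25c BRN IP royalty per chip

annual_revenue = chips_per_year * market_share * royalty_per_chip
print(f"${annual_revenue / 1e9:.3f}B pa")  # $2.875B pa
```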

New industry with limited players allows BRN with superior tech to gain large market share.

Edge processor market forecast to be worth AUD $53.1B by 2030.

Massive TAM. BRN has potential to become very big fast.

Overnight, US markets sold off. There is an inverse correlation between the US 10-year bond yield and SPX/NDX: overlay SPX/NDX on the US 10-year yield chart and the yield highs line up with SPX/NDX lows, and vice versa.

The US 10-year yield and its RSI peaked on 2nd March. The yield has started to roll over, and the MACD turned red today on the daily chart. There may be a few shaky days until the US CPI print next Tuesday. CPI should come down next month, as it will use the same calculation method as last month, which caused a higher reading; that should send the 10-year yield lower and markets higher. If, for some reason, CPI comes in hot again, markets will be rattled. The 10-year yield chart looks similar to 3rd October 2022 and 6th January 2023, both just prior to market rallies.
 