BRN Discussion Ongoing

Diogenese

Top 20
Here are a couple of links to articles by Weebit Nano that, to my limited understanding, sound like they have similarities to Akida. Sorry if posted previously.

The topics are in the URLs

June 2023


Dec 2022
Hi wasMadx,

Weebit ReRAM is analog. Analog suffers from manufacturing variations, which result in unreliable operation, as shown in this graph:

1711707667578.png


As shown, the analog neuron's output amplitude varies by over 30%. This may pass muster for single-bit storage, but it presents problems where multi-value voltage calculations are concerned.
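To put a rough number on why that matters, here is a quick simulation of my own (not from Weebit's material) showing how a ~30% amplitude spread blurs a 4-level analog value while leaving a single-bit read almost untouched; the evenly spaced levels and the Gaussian gain error are illustrative assumptions only:

```python
# Back-of-the-envelope illustration (my own, not from the Weebit material):
# how a ~30% spread in analog output amplitude blurs a 4-level value while
# barely affecting a single-bit read. The 4 evenly spaced levels and the
# Gaussian gain error are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

levels = np.array([0.0, 1.0, 2.0, 3.0])        # ideal 2-bit analog levels
ideal = rng.choice(levels, size=100_000)        # values a MAC column should output

# ~30% total amplitude variation modelled as a +/-15% (1-sigma) gain error
gain = rng.normal(loc=1.0, scale=0.15, size=ideal.shape)
measured = ideal * gain

# read back by snapping to the nearest ideal level (what the ADC thresholds do)
readout = levels[np.argmin(np.abs(measured[:, None] - levels[None, :]), axis=1)]
print(f"multi-level read error rate: {np.mean(readout != ideal):.2%}")

# single-bit storage only has to separate "empty" from "full", so the same
# spread is far less of a problem
binary_ideal = (ideal > 0).astype(float) * 3.0
binary_read = (binary_ideal * gain) > 1.5
print(f"single-bit read error rate:  {np.mean(binary_read != (binary_ideal > 0)):.2%}")
```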

We discussed EnCharge yesterday with Anastasi's video:

3 New Groundbreaking Chips Explained: Outperforming Moore's Law (youtube.com)




EnCharge uses metal capacitors in an attempt to overcome this problem, and uses a single ADC for multiple "MAC" columns, as described in this patent application:

US2023370082A1 SHARED COLUMN ADCS FOR IN-MEMORY-COMPUTING MACROS 20220516

1711708018904.png
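For a feel for the shared-ADC idea, the toy model below is my own behavioural sketch (not the patent's actual circuit): several capacitor-based MAC columns accumulate charge in parallel and a single ADC is time-multiplexed across them, rather than one ADC per column. The 1% capacitor mismatch and 8-bit ADC resolution are assumed values:

```python
# Behavioural sketch of a shared-column-ADC scheme (my reading of the idea,
# not the patent's actual circuit): capacitor-based MAC columns accumulate
# charge in parallel; one ADC digitises the columns one after another.
import numpy as np

rng = np.random.default_rng(1)

def capacitor_mac_column(weights, activations, cap_mismatch=0.01):
    """Charge-domain dot product; metal caps keep mismatch small (~1% assumed)."""
    caps = 1.0 + rng.normal(0.0, cap_mismatch, size=weights.shape)
    return np.sum(weights * activations * caps)      # accumulated charge ~ w . x

def shared_adc(column_charges, bits=8):
    """Single ADC applied to each column's result in turn (here vectorised)."""
    charges = np.asarray(column_charges, dtype=float)
    full_scale = np.max(np.abs(charges)) or 1.0
    step = 2.0 * full_scale / (2 ** bits)
    return np.round(charges / step) * step           # quantised digital outputs

x = rng.uniform(-1, 1, size=64)                      # input activations
W = rng.uniform(-1, 1, size=(4, 64))                 # 4 weight columns share one ADC
column_charges = [capacitor_mac_column(w, x) for w in W]
digital_out = shared_adc(column_charges)

print("exact MACs :", np.round(W @ x, 3))
print("ADC outputs:", np.round(digital_out, 3))
```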
 
  • Like
  • Love
  • Fire
Reactions: 30 users

JDelekto

Regular
... Just like an owner always believes their house is worth so much more than the market believes. That owner will not sell their house....
I tend to think there is a duality in the housing market. One believes their house is worth much more than the market does when it comes to selling it, but worth much less than it is when it comes to paying taxes.

As my parents did, I strongly believe that anyone allowed to appraise my house for taxation purposes should also be ready to find a buyer to purchase my home at their appraised price. But I digress...
 
  • Like
Reactions: 3 users

Diogenese

Top 20
I tend to think there is a duality in the housing market. One believes their house is worth much more than the market does when it comes to selling it, but worth much less than it is when it comes to paying taxes.

As my parents did, I strongly believe that anyone allowed to appraise my house for taxation purposes should also be ready to find a buyer to purchase my home at their appraised price. But I digress...
Rates notice?
 
  • Haha
  • Fire
Reactions: 6 users
Not sure where the debate is? IMO the situation is kinda simple. I first bought BrainChip at 4.3c or so a while back, not as long ago as FF and many others here. As FF has apparently posted on the other site, there isn't really a question whether AKIDA is real or if it works. The IP is good, everything is good and yes, we'll have the boxes out soon. I did sell the majority of my holdings a while back, but have since bought more than I once owned, and at a much higher price than 4.3c, though of course none of this matters. What matters, IMO, are 2 things.
Partnerships are great for retail but fluff for the big money.
Revenue is the thing that will see us climb out of the place we currently find ourselves in. I am concerned about the coming 4C, as I don't believe it will be great, but it no longer matters for me personally. At first I wrote that the only thing that matters is revenue, but I deleted that, because a full-on Mercedes statement like "we are continuing to make progress with AKIDA", or Intel coming out and saying "we look forward to working with BrainChip directly on XYZ", would of course also blow a hole in the ass of any shorter's position. Is AKIDA the bomb? According to many companies, yes. Are we seeing large revenue starting to grow? We'll see, but again, not sure where the debate is. $$$$ talks. Simple. Happy Easter, let's see if revenue grows.

Edit: FF can call all the doubters lacking in intelligence all he likes, but here are the facts. The market decides the SP, not one poster. Just like an owner always believes their house is worth so much more than the market believes. That owner will not sell their house. Currently the market believes BrainChip is worth a MC of $500 million. The market is always right. Sorry.
I agree with most of your post, except the parts about the "Market" deciding the share price and the "Market" is always right.

The Market is comprised of humans, "entities", bots, etc...

The humans are governed mostly by their emotions.

The entities will use whatever manipulation they can, legal and/or illegal, to make a profit, through the servitude of the bots.

The statement that "the Market is always right" has huge holes in it, in my opinion, because more often than not "it" is wrong.

And that can be both when the share price is flying, or in the gutter.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 24 users

JDelekto

Regular
Rates notice?
LOL. I live in South Florida in the US, so I assume the property taxes here are the closest thing to council rates in AU. I've been living in my home for about two decades now, and the valuation of my home has increased by about 285%. We have a "homestead exemption", which caps the amount on which taxes are calculated if it is one's primary residence.

What's more egregious right now is property insurance, which is required if you have a mortgage on your home, to ensure the bank gets its money if anything happens. Two years ago I paid $6,000 annually for homeowner's insurance. Last year it was $8,000; this year, it's gone up to $11,000 (USD, of course).

The housing market in Florida is insane right now. The insurance situation is out of control: if insurance companies aren't trying to drop people, they're raising their rates to ludicrous levels for those who have mortgages and have no choice but to pay, or to spend time shopping around in a shrinking insurance market. Certain areas of Florida are likely to see prices collapse as more and more people put their homes up for sale at these high prices, but no buyer wants to pay the insurance or taxes on such large amounts.

I've accepted that when I retire I will still be paying rent, but hopefully I'll have earned enough from my investments to put that on autopilot and not have to stress about it. I'll also have savings to cover emergencies, so I can put insurance companies in my rearview mirror.
 
  • Like
  • Wow
  • Love
Reactions: 23 users

Diogenese

Top 20
Edge Impulse has got Nvidia TAO working for CPUs:

https://www.edgeimpulse.com/blog/nv...JzQ&utm_content=300346629&utm_source=hs_email

Edge Impulse is pleased to announce that we have unlocked previously inaccessible NVIDIA AI capabilities for any edge device with NVIDIA TAO and Omniverse, alongside our launch of native support for NVIDIA Jetson Orin hardware. These developments further amplify what’s achievable at the edge for developers and enterprises who want to accelerate time to market and gain a competitive advantage with world-class AI solutions.

With these new integrations, Edge Impulse provides the only way for developers to deploy NVIDIA technology directly to MCU and CPU. Engineers can speed up the use of large NVIDIA GPU trained models on low-cost MCUs and MPUs with AI accelerators, while accessing a powerful set of tools to create digital twins, synthetic datasets, and virtual model testing environments.

NVIDIA TAO delivers a low-code, open-source AI framework to accelerate vision AI model development suitable for all skill levels — from beginners to expert data scientists.

With the NVIDIA TAO Toolkit integration, Edge Impulse’s enterprise customers can now use the power and efficiency of transfer learning to achieve state-of-the-art accuracy and production-class throughput in record time with adaptation and optimization. This is integrated directly into the Edge Impulse platform for any existing object detection project, available for all enterprise users today.

The TAO Toolkit provides a faster, easier way to create highly accurate, customized, and enterprise-ready AI models to power your vision AI applications. The open-source TAO toolkit for AI training and optimization delivers everything you need, putting the power of the world’s best Vision Transformers (ViTs) in the hands of every developer and service provider.

Built on TensorFlow and PyTorch, the NVIDIA TAO toolkit uses the power of transfer learning while simultaneously simplifying the model training process and optimizing the model for inference throughput on the target platform. The result is an ultra-streamlined workflow. Take your own models or pre-trained models, adapt them to your own real or synthetic data, then optimize for inference throughput. All without needing AI expertise or large training datasets.

The integration of NVIDIA TAO into Edge Impulse means that engineers can finally utilize NVIDIA’s industry-leading AI models on hardware outside of that offered by NVIDIA, a capability exclusively provided by Edge Impulse.

Over 100 accurate, custom, production-ready computer vision models are accessible via the Edge Impulse and NVIDIA TAO Toolkit, allowing engineers to seamlessly deploy to edge-optimized hardware, including the Arm® Cortex®-M based NXP I.MXRT1170, Alif E3, STMicro STM32H747AI, and Renesas EK-RA8D1.

"The advent of generative AI and the growth of IoT deployments means the industry must evolve to run AI models at the edge,” said Paul Williamson, senior vice president and general manager, IoT Line of Business, Arm. “NVIDIA and Edge Impulse have now made it possible to deploy state-of-the-art computer vision models on a broad range of technology based on Arm Cortex-M and Cortex-A CPUs and Arm Ethos™-U NPUs, unlocking a multitude of new AI use cases at the edge."

... and who's Edge Impulse's golden haired edge AI boy since forever?

https://brainchip.com/brainchip-and-edge-impulse-partner-to-accelerate-ai-ml-deployments/

BrainChip and Edge Impulse Partner to Accelerate AI/ML Deployments



Laguna Hills, Calif. – May 15, 2022 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI IP, and Edge Impulse, the leading development platform for machine learning (ML) on edge devices, are partnering to deliver next-generation platforms to customers looking to develop products utilizing the companies’ unique machine learning capabilities.

Edge Impulse is ushering in the future of embedded machine learning by empowering developers to create and optimize solutions with real-world data. The company is making the process of building, deploying, and scaling embedded ML applications easier and faster than ever, unlocking massive value across every industry, with millions of developers making billions of devices smarter.

"Organizations are understanding more and more the importance of implementing machine learning capabilities within their products to turn them into the 'smart' devices that consumers are clamoring for," said Zach Shelby, CEO and co-founder at Edge Impulse. "By integrating solutions, such as deploying BrainChip's neuromorphic IP with our ML platform, developers and enterprise customers are empowered to build advanced machine learning solutions quickly and efficiently so that they are well-positioned as leaders within their respective markets."


https://developer.nvidia.com/blog/a...ment-workflows-with-nvidia-tao-toolkit-5-0-2/

NVIDIA TAO Toolkit provides a low-code AI framework to accelerate vision AI model development suitable for all skill levels, from novice beginners to expert data scientists. With the TAO Toolkit, developers can use the power and efficiency of transfer learning to achieve state-of-the-art accuracy and production-class throughput in record time with adaptation and optimization.

NVIDIA released TAO Toolkit 5.0, bringing groundbreaking features to enhance any AI model development. The new features include source-open architecture, transformer-based pretrained models, AI-assisted data annotation, and the capability to deploy models on any platform.

Release highlights include:

  • Model export in open ONNX format to support deployment on GPUs, CPUs, MCUs, neural accelerators, and more.
  • Advanced Vision Transformer training for better accuracy and robustness against image corruption and noise.
  • New AI-assisted data annotation, accelerating labeling tasks for segmentation masks.
  • Support for new computer vision tasks and pretrained models for optical inspection, such as optical character detection and Siamese Network models.
  • Open source availability for customizable solutions, faster development, and integration.



This may simplify the process of implementing TAO models on Akida, which would broaden Akida's market appeal.
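The ONNX export bullet above is the piece that matters for running TAO models on non-NVIDIA silicon. A minimal sanity check of an exported model, before handing it to a vendor toolchain such as BrainChip's MetaTF, might look like the sketch below; the file name is hypothetical, and the actual Akida conversion step is left as a comment because the exact entry point depends on the toolchain version:

```python
# Minimal sketch: sanity-check a TAO-exported ONNX model with ONNX Runtime
# before handing it to a vendor toolchain for conversion to an edge target.
# "tao_model.onnx" is a hypothetical file name.
import numpy as np
import onnx
import onnxruntime as ort

model_path = "tao_model.onnx"

# structural check of the exported graph
onnx.checker.check_model(onnx.load(model_path))

# one dummy inference on CPU to confirm the graph executes end to end
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]   # fill dynamic dims
dummy = np.random.rand(*shape).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print("output shapes:", [o.shape for o in outputs])

# From here, quantisation/conversion for a given accelerator is toolchain
# specific (for Akida, via BrainChip's MetaTF flow); see the vendor docs
# rather than treating this sketch as that API.
```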
 
  • Like
  • Love
  • Fire
Reactions: 69 users
Edge Impulse has got Nvidia TAO working for CPUs:
...
This may simplify the process of implementing TAO models on Akida, which would broaden Akida's market appeal.
Wow Dio. Would I be correct in saying the use of ViTs means Akida 2.0?
SC
 
  • Like
Reactions: 5 users

stuart888

Regular
Nice view of the Brainchip Space!

 
  • Like
  • Fire
  • Love
Reactions: 18 users

Diogenese

Top 20
Wow Dio. Would I be correct in saying the use of ViTs means Akida 2.0?
SC
Hi SC,

ViTs are not proprietary to BRN, and several other companies are using them.
 
  • Like
Reactions: 12 users
Hi SC,

ViTs are not proprietary to BRN, and several other companies are using them.
Thanks Dio. Wasn't too sure on that one. Hopefully we have a hand in this.

SC
 
  • Like
Reactions: 3 users

Tothemoon24

Regular


Join Dr. Tony Lewis, BrainChip's Chief Technology Officer, for this expansive conversation about SpaceTech with Laurent Hili, Microelectronics and Data Handling Engineer at the European Space Agency, and Luis Mansilla, AI expert in the software technology section at the European Space Research and Technology Centre. In this episode, the trio talks about why space technology matters, how far it has come, and how neuromorphic AI plays a major and pivotal role in the future of SpaceTech. Learn how the European Space Agency is investing in and driving forward this growth, and just where BrainChip's Akida fits in.

IMG_8715.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 33 users

IloveLamp

Top 20
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users

IloveLamp

Top 20
  • Like
  • Love
  • Wow
Reactions: 14 users

IloveLamp

Top 20
1000014622.jpg
 
  • Like
  • Thinking
  • Fire
Reactions: 20 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 14 users

IloveLamp

Top 20
1000014632.jpg
 
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20
Run!

1000014634.jpg
1000014636.jpg
 
  • Like
  • Thinking
  • Fire
Reactions: 11 users
Question on BRN breaking out of NDA silence.

At what point will BRN be included in discussions concerning neuromorphic compute with their partners in the commodity arena?

It is clear across the board that neuromorphic compute is being implemented and is going to be a huge part of the future of the edge market, and soon of data centres as well.

Will we ever be included in these discussions with the likes of Dell, Nvidia, Apple 🍎, Sony, Samsung etc., or will we always be in the background, hiding from the masses as the integral part of development that's not to be mentioned outside of podcasts?

It seems everyone else gets their name discussed on a daily basis regarding edge compute, and we don't.

I am on BRN's side 100%, yet I want to understand this question more clearly, as all I hear is others' names and not ours every day.

Honest question 🙋‍♂️?
 
Last edited:
  • Like
  • Love
Reactions: 16 users

IloveLamp

Top 20
1000014639.jpg
 
  • Like
  • Fire
Reactions: 8 users