BRN Discussion Ongoing

D

Deleted member 118

Guest
That thing could follow you around carrying an esky full of beer. A mobile, self-driving esky... it's a winner!
As long as they make 'em go this fast, I'm keen.

 
  • Like
  • Haha
  • Love
Reactions: 9 users

Shadow59

Regular
  • Like
  • Fire
  • Love
Reactions: 6 users

Dozzaman1977

Regular
The time is getting closer to when BRN explodes. It will be a greater explosion than Krakatoa in 1883, and the lava flow will represent every edge device being transformed by AKIDA. Nothing can stand in its way. Game over for the old technology of yesteryear.
 
  • Like
  • Haha
  • Fire
Reactions: 42 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
Reactions: 27 users
D

Deleted member 118

Guest
  • Haha
  • Love
Reactions: 14 users
Morning Brain Fam,

Everyone, including Blind Freddie 😝, knows about my obsession with Cerence as per multiple previous posts, so, continuing in that vein, I thought I'd share this conversation between Christophe Couvreur (Vice-President Core Technologies and Hybrid Platforms at Cerence), Dean Harris (Automotive Business Development, NVIDIA), Alexandra Baleta (Manufacturing and Automotive Industry Director, VMware) and Sunil Samel (VP Products, Akridata).

Every single one of the 'challenges' they raised back in June 2021 (latency, scalability, security, etc.) can be addressed via the adoption of Akida, which Christophe Couvreur would already be aware of, having worked with Mercedes on the EQXX concept car.

Don't forget Cerence prides itself on having the Fastest, Most Powerful and Intelligent AI Assistant Platform for Global Mobility.

For Cerence it's all about providing the best user experience, which they are unable to offer without AKIDA IMO.




View attachment 6330


View attachment 6331


View attachment 6332
If they have a connection to Nviso you have won me over completely. FF
 
  • Like
  • Fire
Reactions: 9 users
@Fact Finder I have to ask, are you and @Blind Freddie one and the same? Perhaps similar to Sir Les and Dame Edna?



Poor Jacki Weaver!


All I can say is that the FBI claims we have never been photographed together??? Makes you wonder???
😂🤣😌😇 FF
 
  • Like
  • Haha
  • Thinking
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Maybe someone (@Dhm) 😛 would like to contact Eli and put him on the right track. It'd be pretty groovy to have an article on BrainChip published in Forbes!


AI On The Edge: The Path To Maturity For A 40-Year-Old Industry?

Forbes Technology Council
Eli David, Forbes Councils Member
COUNCIL POST | Membership (fee-based)
May 10, 2022, 09:15 am EDT


(Extract Only)
The good news is that when you have that type of accuracy ratio, there is ample room for improvement. At this stage of technology development, the accuracy of the print drives the rest of the metrics: yield, waste and efficiency. So you can imagine the enormous profit incentive and competitive advantage for a solution that could raise accurate outcomes in AM by 25% to 30%.

Deep neural networks have spurred revolutions in image, voice and text recognition. Traditional machine learning methods rely on features provided by human experts. Thus, instead of directly learning from raw data (e.g., pixels in images), they only process those specific patterns that humans can think of.


Deep learning, on the other hand, is the first and currently the only AI method that can directly learn from raw data. It takes inspiration from how our own brains work and, like our brains, it processes all of the data it observes.

The advancements in the last few years in deep learning have resulted in great leaps in the history of artificial intelligence. Suddenly, we see improvement in accuracy in numerous computer vision, speech recognition and language understanding tasks.

There is enormous potential in the realm of digital manufacturing. If we could apply this deep learning inference engine to the 3D printing process, we could boost accuracy immensely. There would be very little waste, much lower materials costs and a giant leap in yield and efficiency—because we wouldn’t be making a lot of rejected parts anymore.

More technically speaking, if we use sensors and deep learning to detect the very early stages of a flaw, could we correct the course of a print job to avoid it developing further?

The short answer is yes. Deep learning can identify slight manufacturing flaws that the human eye would not even notice, something we will cover in my next column. And yes, many of these flaws can be compensated for during the printing of the object.
But there is a caveat to this, and it goes back to the tension between theory and practice. Using AI in a laboratory setting is extremely intensive in computing and memory requirements, so you need the requisite high-performance hardware. In theory, we could have the sophisticated AI hardware attached to every printer, but that would make the machines prohibitively expensive.

Many AI-driven solutions, Alexa or Google Home, for instance, work around this by deploying basic processors in their edge devices and connecting to AI that operates on servers in the cloud. This works well for some applications, but not for others.
If the edge device moves around, like a vehicle or a drone, it might lose connectivity. The second problem with this approach is latency—the time it takes to send data to the cloud and retrieve an AI answer back. This latency does not lend itself to procedures requiring an immediate, real-time response, like discriminating between the shadows of trees and pedestrians on a roadway—or correcting a 3D printer without stopping it every time.
The dream of integrating deep learning into AM is still very much alive—it is merely a practical design problem that stands in the way of a mature, perfected manufacturing method.

What is required is a two-tier software and hardware architecture: one for computationally heavy learning and another for local, autonomous and immediate decision-making. Future columns in this series will look at how these two systems can coordinate to bring the best insights of AI from the lab out to the edge.
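To make that two-tier split concrete, here is a minimal Python sketch of the pattern the author is describing: the computationally heavy learning happens once, offline, and the device keeps only the small trained parameters so it can decide locally without a network round trip. Everything here (the function names, the toy "flaw detector") is my own illustration, not anyone's actual product API.

```python
# Illustrative two-tier split (names are hypothetical, not a vendor API):
# tier 1 trains a model with heavy compute; tier 2 runs cheap local inference.
import numpy as np

rng = np.random.default_rng(0)

# ---- Tier 1: computationally heavy learning (cloud / lab, done offline) ----
def train_offline(features, labels, epochs=200, lr=0.1):
    """Fit a tiny logistic-regression 'flaw detector' by gradient descent."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # predicted flaw probability
        grad = p - labels
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Synthetic training data standing in for labelled sensor readings.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
weights, bias = train_offline(X, y)

# ---- Tier 2: local, autonomous, immediate decision-making (on the device) ----
def infer_on_device(sensor_reading, w, b, threshold=0.5):
    """Cheap local inference: no network round trip needed for the decision."""
    p = 1.0 / (1.0 + np.exp(-(sensor_reading @ w + b)))
    return p > threshold  # e.g. flag a developing print flaw immediately

reading = rng.normal(size=3)  # one live sensor sample
print("correct course now?", infer_on_device(reading, weights, bias))
```

The shape is the whole point: the expensive fit never runs on the printer; the printer only evaluates a tiny model against each sensor reading as it arrives.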

 
  • Like
  • Fire
  • Love
Reactions: 19 users

Cyw

Regular
I know a few of us are getting old, and I came across this and thought I'd share it with a few of you, because there will come a day when you might not be able to drive a car.

Before we can use self-driving cars, we need a lot of liability and insurance laws and road rules to be changed. As the legal system is always 10 steps behind technology, it may be quite a while before we see some serious self-driving cars around.

I don't expect to enjoy one before my time is up.
 
  • Like
Reactions: 4 users

The emerging field of neuromorphic processing isn’t an easy one to navigate. There are major players in the field that are leveraging their size and ample resources – the highest profile being Intel with its Loihi processors and IBM’s TrueNorth initiative – and a growing list of startups that include the likes of SynSense, Innatera Nanosystems and GrAI Matter Labs.

Included in that latter list is BrainChip, a company that has been developing its Akida chip – Akida is Greek for “spike” – and accompanying IP for more than a decade. We’ve followed BrainChip over the past few years, speaking with them in 2018 and then again two years later, and the company has proven to be adaptable in a rapidly evolving space. The initial plan was to get the commercial SoC into the market by 2019, but BrainChip extended the deadline to add the capability to run convolutional neural networks (CNNs) along with spiking neural networks (SNNs).

In January, the company announced the full commercialization of its AKD1000 platform, which includes its Mini PCIe board that leverages the Akida neural network processor. It’s a key part of BrainChip’s strategy of using the technology as reference models as it pursues partnerships with hardware and chip vendors that will incorporate it in their own designs.

“Looking at our fundamental business model, is it a chip or IP or both?” Jerome Nadel, BrainChip’s chief marketing officer, tells The Next Platform. “It’s an IP license model. We have reference chips, but our go-to-market is definitely to work with ecosystem partners, especially who would take a license, like a chip vendor or an ASIC designer and tier one OEMs. … If we’re connected with a reference design to sensors for various sensor modalities or to an application software development, when somebody puts together AI enablement, they want to run it on our hardware and there’s already interoperability. You’ll see a lot of these building blocks as we’re trying to penetrate the ecosystem, because ultimately when you look at the categoric growth in edge AI, it’s really going to come from basic devices that leverage intelligent sensors.”

BrainChip is aiming its technology at the edge, where more data is expected to be generated in the coming years. Pointing to IDC and McKinsey research, BrainChip expects the market for edge-based devices needing AI to grow from $44 billion this year to $70 billion by 2025. In addition, at last week’s Dell Technologies World event, CEO Michael Dell reiterated his belief that while 10 percent of data now is generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow. BrainChip has designed Akida for the high-processing, low-power environment and to be able to run AI analytic workloads – particularly inference – on the chip to lessen the data flow to and from the cloud and thus reduce latency in generating results.

Neuromorphic chips are designed to mimic the brain through the use of SNNs. BrainChip broadened the workloads Akida can run by adding the ability to handle CNNs as well, which are useful in edge environments for such tasks as embedded vision, embedded audio, automated driving with LiDAR and RADAR remote sensing devices, and industrial IoT. The company is looking at such sectors as autonomous driving, smart health and smart cities as growth areas.




BrainChip already is seeing some success. Its Akida 1000 platform is being used in Mercedes-Benz’s Vision EQXX concept car for in-cabin AI, including driver and voice authentication, keyword spotting and contextual understanding.

The vendor sees partnerships as an avenue for increasing its presence in the neuromorphic chip field.

“If we look at a five-year strategic plan, our outer three years probably look different than our inner two,” Nadel says. “In the inner two we’re still going to focus on chip vendors and designers and tier-one OEMs. But the outer three, if you look at categories, it’s really going to come from basic devices, be they in-car or in-cabin, be they in consumer electronics that are looking for this AI enablement. We need to be in the ecosystem. Our IP is de facto and the business model wraps around that.”

The company has announced a number of partnerships, including with nViso, an AI analytics company. The collaboration will target battery-powered applications in the robotics and automotive sectors, using Akida chips for nViso’s AI technology for social robots and in-cabin monitoring systems. BrainChip also is working with SiFive to integrate the Akida technology with SiFive’s RISC-V processors for edge AI computing workloads, and with MosChip, running its Akida IP with the vendor’s ASIC platform for smart edge devices. BrainChip also is working with Arm.

To accelerate the strategy, the company this week rolled out its AI Enablement Program to offer vendors working prototypes of BrainChip IP atop Akida hardware to demonstrate the platform’s capabilities for running AI inference and learning on-chip and in a device. The vendor also is offering support for identifying use cases for sensor and model integration.



The program includes three levels – the Basic and Advanced prototypes and the Functioning Solution – with the number of AKD1000 chips scaling to 100, custom models for some users, 40 to 160 hours with machine learning experts, and two to ten development systems. The prototypes will enable BrainChip to get its commercial products to users at a time when competitors are still developing their own technologies in the relatively nascent market.

“There’s a step of being clear about the use cases and perhaps a road map of more sensory integration and sensor fusion,” Nadel says. “This is not how we make a living as a business model. The intent is to demonstrate real, tangible working systems out of our technology. The thinking was, we could get these into the hands of people and they could see what we do.”

BrainChip’s Akida IP includes support for up to 1,024 NPUs, configurable into two to 256 nodes connected over a mesh network, with each node comprising four neural processing units. Each NPU includes configurable SRAM, can be configured for CNNs if needed, and is event- or spike-based, exploiting sparsity in data, activations and weights to reduce the number of operations by at least two-fold. The Akida Neural SoC can be used standalone or integrated as a co-processor for a range of use cases, and provides 1.2 million neurons and 10 billion synapses.
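For anyone trying to keep the scaling straight, here is a toy sketch of that node/NPU arithmetic. The field names are invented for illustration; the real configuration lives in BrainChip's tooling, not in anything that looks like this.

```python
# Toy model of the Akida IP scaling described above (field names are mine,
# purely illustrative; this is not BrainChip's configuration API).
from dataclasses import dataclass

NPUS_PER_NODE = 4  # each node comprises four neural processing units


@dataclass
class AkidaIPConfig:
    nodes: int  # article: configurable from 2 up to 256 nodes

    def __post_init__(self):
        if not 2 <= self.nodes <= 256:
            raise ValueError("nodes must be between 2 and 256")

    @property
    def npus(self) -> int:
        return self.nodes * NPUS_PER_NODE


# Smallest and largest configurations mentioned in the article.
print(AkidaIPConfig(nodes=2).npus)    # 8 NPUs
print(AkidaIPConfig(nodes=256).npus)  # 1,024 NPUs at the top end
```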

The offering also includes the MetaTF machine learning framework for developing neural networks for edge applications and three reference development systems for PCI, PC shuttle and Raspberry Pi systems.
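For the curious, the MetaTF workflow, as I understand it from BrainChip's public docs, is: train an ordinary Keras CNN, quantize it, then convert it into an Akida model. The sketch below sticks to plain TensorFlow/Keras so it runs anywhere; the MetaTF-specific conversion is left as a comment because I have not verified the exact call signatures here.

```python
# Minimal sketch of the "train a CNN, then convert for Akida" flow.
# Only plain TensorFlow/Keras is used so this snippet runs anywhere;
# the MetaTF-specific step is left as a comment (assumed from BrainChip's docs).
import numpy as np
import tensorflow as tf

# 1. Build and train an ordinary small CNN on (fake) 32x32 grayscale images.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 example classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(64, 32, 32, 1).astype("float32")
y = np.random.randint(0, 4, size=64)
model.fit(x, y, epochs=1, verbose=0)

# 2. Quantize and convert for the Akida runtime (the MetaTF / cnn2snn step).
#    Per BrainChip's documentation this is roughly:
#        from cnn2snn import convert
#        akida_model = convert(quantized_keras_model)
#    Exact function names and arguments should be checked against the current
#    MetaTF release; they are not verified here.
```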

The platform can be used for one-shot on-chip learning by using the trained model to extract features and adding new classes onto it or in multi-pass processing that leverages parallel processing to reduce the number of NPUs needed.
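The "use the trained model to extract features and add new classes onto it" idea is easier to see in code. Below is a generic nearest-centroid sketch of that pattern in plain numpy; it is not Akida's on-chip mechanism, just the same concept in software, which shows why adding a class does not require retraining the backbone.

```python
# Generic few-shot "add a new class without retraining" sketch (plain numpy).
# This illustrates the concept only, not BrainChip's on-chip implementation.
import numpy as np

rng = np.random.default_rng(1)

FROZEN_PROJECTION = rng.normal(size=(16, 8))  # pretend these are trained weights


def extract_features(raw_sample):
    """Stand-in for a frozen, pre-trained backbone producing an embedding."""
    return np.tanh(raw_sample @ FROZEN_PROJECTION)


class IncrementalClassifier:
    """Adds classes by storing one centroid per class; no backbone retraining."""

    def __init__(self):
        self.centroids = {}  # class name -> mean embedding

    def learn_class(self, name, raw_examples):
        feats = np.stack([extract_features(x) for x in raw_examples])
        self.centroids[name] = feats.mean(axis=0)  # one/few-shot "learning"

    def predict(self, raw_sample):
        f = extract_features(raw_sample)
        return min(self.centroids, key=lambda c: np.linalg.norm(f - self.centroids[c]))


clf = IncrementalClassifier()
clf.learn_class("coffee_mug", [rng.normal(size=16)])      # one shot
clf.learn_class("car_key", [rng.normal(size=16) + 2.0])   # another class, added later
print(clf.predict(rng.normal(size=16) + 2.0))             # classifies without retraining
```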

Here is the one shot:



And there is the multi-pass:




“The idea of our accelerator being close to the sensor means that you’re not sending sensor data, you’re sending inference data,” Nadel said. “It’s really a systems architectural play that we envision our micro hardware is buddied up with sensors. The sensor captures data, it’s pre-processed. We do the inference off of that and the learning at the center, but especially the inference. Like an in-car Advanced Driver Assistance System, you’re not tasking the server box loaded with GPUs with all of the data computation and inference. You’re getting the inference data, the metadata, and your load is going to be lighter.”
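Nadel's point about shipping inference data rather than sensor data is largely a bandwidth argument, and a quick back-of-envelope sketch shows the scale of it. The metadata payload below is invented purely for illustration; the takeaway is megabytes of raw frames per second versus a couple of kilobytes of results.

```python
# Back-of-envelope comparison: raw sensor frames vs. inference metadata.
# The metadata schema here is invented for illustration only.
import json

# A single uncompressed 1080p RGB camera frame, streamed at 30 fps:
frame_bytes = 1920 * 1080 * 3
raw_per_second = frame_bytes * 30
print(f"raw video to the server: {raw_per_second / 1e6:.0f} MB/s")  # ~187 MB/s

# What an edge accelerator sitting next to the sensor might send instead:
inference_result = {
    "ts": 1652320000.123,
    "event": "driver_drowsy",
    "confidence": 0.91,
}
meta_per_second = len(json.dumps(inference_result).encode()) * 30
print(f"inference metadata instead: {meta_per_second} bytes/s")     # a couple of kB/s
```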

The on-chip data processing is part of BrainChip’s belief that for much of edge AI, the future will not require clouds. Rather than send all the data to the cloud – bringing in the higher latency and costs – the key will be doing it all on the chip itself. Nadel says it’s a “bit of a provocation to the semiconductor industry talking about cloud independence. It’s not anti-cloud, but the idea is that hyperscale down to the edge is probably the wrong approach. You have to go sensor up.”

Going back to the cloud also means having to retrain the model if there is a change in object classification, Anil Mankar, co-founder and chief development officer, tells The Next Platform. Adding more classes means changing the weights in the classification.

“On-chip learning,” Mankar says. “It’s called incremental learning or continuous learning, and that is only possible because … we are working with spikes and we actually copy similarly how our brain learns faces and objects and things like that. People don’t want to do transfer learning – go back to the cloud, get new weights. Now you can classify more objects. Once you have an activity on the device, you don’t need cloud, you don’t need to go backwards. Whatever you learn, you learn” and that doesn’t change when something new is added.
Great article, Slade.
One of the highlights for me, relating to a future customer, is this:
"CEO Michael Dell reiterated his belief that while 10 percent of data now is generated at the edge, that will shift to 75 percent by 2025. Where data is created, AI will follow."

I'm confident Dell, knowing the above, also knows it needs Akida in its PCs, monitors, cameras, printers, and data storage and management centers.
Someone posed that each signed contract will add 50c to the BRN share price. May I suggest a contract with Dell will blow that away.
There are a number of candidates on FF's running list that would do the same.
Can't afford to be out of this company for a day in case one of those names is dropped.
 
  • Like
  • Fire
Reactions: 43 users
Having just watched this video, can I say that Blind Freddie encourages everyone to watch it and has asked me to CONGRATULATE the 1,000 Eyes for calling the partnership with Nvidia BEFORE ALL THE INVESTMENT COMMUNITY WORLDWIDE.

I personally would like to remind everyone here how much we have achieved by moving over here and bidding goodbye to the conflict model.

I argued before that we all carry sufficient personal doubt and insecurity that there is nothing to be gained from having deliberate purveyors of negative feedback for so-called balance in our lives, particularly not when making business or investment decisions.
Congratulations to everyone here; may it long continue.

2025 - 2025 - 2025 - 2025 - 2025

My opinion only DYOR
FF

AKIDA BALLISTA
Hi FF,
I'm sure you are not making the same error I made the other day when I confused NVISO with NVIDIA.
You have listed them as separate entities, each with a BRN relationship, in a later post, but I am missing something; please elaborate.
Where is the confirmed link with NVIDIA, apart from Blind Freddie's vision in this video?

Cheers
 
  • Like
  • Fire
Reactions: 7 users

Slade

Top 20
I remember feeling a little disappointed at the last 4C because I thought we would have had more revenue from the sale of boards. Not now, as it is becoming clearer by the day who the majority of these boards would have gone to: the companies that were EAPs and who are now coming out of the woodwork. My guess is they were all given boards, with settlement due once they sign up for our IP. Rob Telson was not exaggerating when he said ‘tip of the iceberg’.
 
  • Like
  • Love
  • Fire
Reactions: 43 users

Proga

Regular
"This little green thing with flashing lights" - Love it!







View attachment 6309

Just to get my facts straight: we're working with Nvidia and Valeo to install the new architecture in Mercedes vehicles in 2024, and Nviso will release their version to their customers in 2025.

Hearing BrainChip and Akida mentioned so many times was so awesome. If I run into any of the doubters, I'll just whip out the vid.
 
  • Like
  • Love
  • Fire
Reactions: 25 users
Apologies to the other posters. Blind Freddie just checked the above post and kicked me in the shins for leaving out the most exciting FACT, so here it is.

European and US legislatures are adopting standards and legislation to enshrine those standards for DRIVER ALERTNESS MONITORING INCLUDING FATIGUE & PASSENGER PRESENCE IN VEHICLES to protect CHILDREN & PETS.

These standards will demand always-on working systems that are unaffected by a lack of cloud connectivity before vehicles, including autonomous vehicles, can leave the garage, or before children and pets can be transported, including commercially.

Guess what Nviso, Brainchip and Nvidia have demonstrated in this video.

If Brainchip did not have any other product category covered this one single achievement would blow the lid off the reservoir of its commercial success.

If you read the post by Tim Llewellyn, CEO of Nviso, it is clear he understands.

“Tip of the Iceberg” people.

As @MC is taking his 500th curtain call, I will point out that it was my High Court decision that discovered micro-sleeps, which set in place the need for driver alertness monitoring worldwide.

My opinion only DYOR
FF


AKIDA BALLISTA
The following linked article is very informative and well worth reading, but if I wanted one single takeaway, the following extracted quote is it.
My opinion only DYOR
FF

AKIDA BALLISTA:

"The auto industry has committed to introducing rear-seat reminders that include a combination of auditory and visual alerts in essentially all cars and trucks by the 2025 model year."
 
  • Like
  • Fire
Reactions: 31 users
I do not know where the link went, but here it is:

 
  • Like
  • Fire
Reactions: 12 users
Hi FF,
I'm sure you are not making the same error I made the other day when I confused NVISO with NVIDIA.
You have listed them as separate entities, each with a BRN relationship, in a later post, but I am missing something; please elaborate.
Where is the confirmed link with NVIDIA, apart from Blind Freddie's vision in this video?

Cheers
Hi FK

@Proga has partly answered your question but here are the links that satisfied Blind Freddie and FF:

Rob Telson was asked about Nvidia being a competitor and the risks this brought and he responded that they (Brainchip) see Nvidia more as a partner moving forward.

It was then revealed that Mercedes, Nvidia and Brainchip were the brains underpinning the EQXX compute and Mercedes nominates/describes Brainchip as the Artificial Intelligence experts.

Then Nviso was revealed and it transpired they are working with Nvidia for automotive as well as with Brainchip for automotive and other technology areas.

Today's video shows Nviso, Brainchip and Nvidia working in partnership to demonstrate in-vehicle monitoring.

These are the primary reasons why I have promoted Nvidia into the Hall of Fame alongside the companies fully revealed by press release or ASX announcement.

Of course, they might just be friends with benefits, but 'partners' is probably a better way to describe the two companies.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 39 users

Boab

I wish I could paint like Vincent
  • Like
  • Love
Reactions: 7 users

Dhm

Regular
Maybe someone (@Dhm) 😛 would like to contact Eli and put him on the right track. It'd be pretty groovy to have an article on BrainChip published in Forbes!



@Bravo mission accomplished.

Screen Shot 2022-05-12 at 1.07.00 pm.png
 
  • Like
  • Love
  • Fire
Reactions: 37 users

db1969oz

Regular
  • Like
Reactions: 2 users