BRN Discussion Ongoing

Diogenese

Top 20


This patent application has two applicants:

EP4060983A1 A METHOD FOR ACCUMULATING EVENTS USING AN EVENT-BASED VISION SENSOR AND OVERLAPPING TIME WINDOWS

Applicants TOYOTA MOTOR CO LTD [JP]; INIVATION AG [CH]



A method for elaborating image frames using an event-based vision sensor (PM) which comprises a plurality of pixels and is configured to output a stream of change events (ES), wherein each change event corresponds to a change in brightness detected by one pixel and comprises a time stamp, a pixel address, and a parameter describing a sign of said brightness change, wherein a first image frame and a second image frame are elaborated, using change events accumulated in buffers, these change events appearing in time windows which have a relative time position.
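
To make the claim a little less abstract, here's a rough Python sketch (my own illustration, not code from the patent) of accumulating signed change events into two frame buffers over overlapping time windows. The event field names, window length and overlap are assumptions made for the example.

```python
import numpy as np

def accumulate_overlapping_frames(events, t0, window_us, overlap_us,
                                  height, width):
    """Accumulate DVS change events into two frames whose time windows
    overlap (illustrative sketch only, not the patented method as such).

    events: iterable of dicts with keys 't' (timestamp, microseconds),
            'x', 'y' (pixel address) and 'p' (sign of brightness change, +1/-1).
    """
    frame_a = np.zeros((height, width), dtype=np.int32)
    frame_b = np.zeros((height, width), dtype=np.int32)

    # Window A covers [t0, t0 + window_us); window B starts before A ends,
    # so both buffers accumulate the events falling inside the overlap.
    a_lo, a_hi = t0, t0 + window_us
    b_lo, b_hi = a_hi - overlap_us, a_hi - overlap_us + window_us

    for ev in events:
        if a_lo <= ev['t'] < a_hi:
            frame_a[ev['y'], ev['x']] += ev['p']   # signed accumulation
        if b_lo <= ev['t'] < b_hi:
            frame_b[ev['y'], ev['x']] += ev['p']
    return frame_a, frame_b
```

Because the second window starts before the first one ends, consecutive elaborated frames share part of the event stream, which is the "relative time position" of the two windows referred to in the claim.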

Inivation is working with Toyota on event vision.

Rob Telson and Chris Stevens like Inivation's CES 2023 event vision post, which promises longer battery life and reduced bandwidth requirements ...

I'm including Toyota in my "Possibles" team.
 
  • Like
  • Fire
  • Love
Reactions: 61 users

JK200SX

Regular
In the same vein, sort of (because it is not Intel related), I noticed that Valeo posted a short video on LinkedIn of a group of happy employees dancing to the song YMCA. Cute and self-deprecating, but also about them just celebrating a great CES event that they had, backstopped and supported of course by their A.I. technology. That post was originally pointed out by others here, ...Learning... I believe.

The video is on a Valeo LinkedIn post and generated lots of backslapping and celebration of THEIR moment, with some comments and hundreds of (y) reactions. Many of the (y) were from Valeo employees.

Note that they were celebrating and thumbs-upping THEIR moment. Not ours.....but theirs. And among the actual comments there is one that simply says, "Akida 1000 ?" Meaning, I presume: is Valeo's A.I. tech powered by Akida technology, and is that what made it worth demoing, dancing and celebrating? I ask this poster/commenter, who is obviously a shareholder and may be among this TSE crowd: 1) what did you expect Valeo to say to your comment? 2) Did you seriously think you would actually get a response? 3) If not, why say it? 4) What did you hope to accomplish by asking about Akida 1000 during their party? Feel free to respond to those questions.

As far as I'm concerned the commenter crashed their "feel good moment party" and threw a turd in their punchbowl by commenting with the "Akida 1000?" words. A thumbs-up reaction would have been fine, but your comment crossed the line simply because it was inappropriate in this case.

I ask this person ....do you visit Brainchip's website? Do you see Valeo listed as a trusted partner? If yes, then why might Brainchip do that, what possible reason would they have?

Perhaps yet another cringeworthy moment for Brainchip's management, unfortunately. Regards, .... dippY

All in my opinion
I found the post you referred to above on LinkedIn, and I don't see anything wrong with what that poster did. He made a comment, "AKIDA 1000", and so what? It's not derogatory or distasteful, as referenced by Tech.
And when people from Valeo read that post, are they going to wonder how someone from Australia might have known that AKIDA IP was included? ....... I don't think so. Honestly, I think they will grin and be amused.

As I mentioned before, I don't think it's suitable to post anything derogatory or distasteful (whether it be here or on other platforms), but let's not turn the forum here into a nanny state either.
 
  • Like
  • Fire
  • Love
Reactions: 40 users

Diogenese

Top 20
A blog on the Brainchip website; take note of the language used.


“Conventional AI silicon and cloud-centric inference models do not perform efficiently at the automotive edge. As many semiconductor companies have already realized, latency and power are two primary issues that must be effectively addressed before the automotive industry can manufacture a new generation of smarter and safer cars. To meet consumer expectations, these vehicles need to feature highly personalized and responsive in-cabin systems while supporting advanced assisted driving capabilities.

That’s why automotive companies are untethering edge AI functions from the cloud – and performing distributed inference computation on local neuromorphic silicon using BrainChip’s AKIDA. This Essential AI model sequentially leverages multiple AKIDA-powered smart sensors and AKIDA AI SoC accelerators to efficiently capture and analyze inference data within designated regions of interest or ROI (samples within a data set that provide value for inference).



In-Cabin Experience
According to McKinsey analysts, the in-cabin experience is poised to become one of the most important differentiators for new car buyers. With AKIDA, automotive companies are designing lighter, faster, and more energy efficient in-cabin systems that can act independent of the cloud. These include advanced facial detection and customization systems that automatically adjust seats, mirrors, infotainment settings, and interior temperatures to match driver preferences.

AKIDA also enables sophisticated voice control technology that instantly responds to commands – as well as gaze estimation and emotion classification systems that proactively prompt drivers to focus on the road. Indeed, the Mercedes-Benz Vision EQXX features AKIDA-powered neuromorphic AI voice control technology which is five to ten times more efficient than conventional systems.

Neuromorphic silicon – which processes data with efficiency, precision, and economy of energy – is playing a major role in transforming vehicles into transportation capsules with personalized features and applications to accommodate both work and entertainment. This evolution is driven by smart sensors capturing yottabytes of data from the automotive edge to create holistic and immersive in-cabin experiences.

Assisted Driving – Computer Vision
In addition to redefining the in-cabin experience, AKIDA allows advanced driver assistance systems (ADAS) such as computer vision to detect vehicles, pedestrians, bicyclists, signs, and objects with incredibly high levels of precision.

Specifically, pairing two-stage object detection inference algorithms with local AKIDA silicon enables computer vision systems to efficiently perform processing in two primary stages – at the sensor (inference) and AI accelerator (classification). Using this paradigm, computer vision systems rapidly and efficiently analyze vast amounts of inference data within specific ROIs.

Intelligently refining inference data eliminates the need for compute heavy hardware such as general-purpose CPUs and GPUs that draw considerable amounts of power and increase the size and weight of computer vision systems. It also allows ADAS to generate incredibly detailed real-time 3D maps that help drivers safely navigate busy roads and highways.

Assisted Driving – LiDAR
Automotive companies are also leveraging a sequential computation model with AKIDA-powered smart sensors and AI accelerators to enable the design of new LiDAR systems. This is because most LiDAR typically rely on general-purpose GPUs – and run cloud-centric, compute-heavy inference models that demand a large carbon footprint to process enormous amounts of data.

Indeed, LiDAR sensors typically fire 8 to 108 laser beams in a series of cyclical pulses, each emitting billions of photons per second. These beams bounce off objects and are analyzed to identify and classify vehicles, pedestrians, animals, and street signs. With AKIDA, LiDAR processes millions of data points simultaneously, using only minimal amounts of compute power to accurately detect – and classify – moving and stationary objects with equal levels of precision.

Limiting inference to a ROI helps automotive manufacturers eliminate compute and energy heavy hardware such as general-purpose CPUs and GPUs in LiDAR systems – and accelerate the rollout of advanced assisted driving capabilities.

Essential AI
Scalable AKIDA-powered smart sensors bring common sense to the processing of automotive data – freeing ADAS and in-cabin systems to do more with less by allowing them to infer the big picture from the basics. With AKIDA, automotive manufacturers are designing sophisticated edge AI systems that deliver immersive end-user experiences, support the ever-increasing data and compute requirements of assisted driving capabilities, and enable the self-driving cars and trucks of the future.

To learn more about how BrainChip brings Essential AI to the next generation of smarter cars, download our new white paper here.
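
The two-stage "inference at the sensor, classification at the accelerator" flow described in the blog boils down to something like the toy sketch below. This is entirely my own illustration for the sake of discussion; the threshold-based proposal step and the placeholder classifier are made up, not BrainChip's actual pipeline.

```python
import numpy as np
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def propose_rois(frame: np.ndarray, thresh: float, box: int = 32) -> List[Box]:
    """Stage 1 (near the sensor): a cheap proposal step. Here it just drops a
    fixed-size box on a coarse sample of pixels brighter than the threshold;
    a real smart sensor would use its own detection logic."""
    ys, xs = np.where(frame > thresh)
    return [(max(int(x) - box // 2, 0), max(int(y) - box // 2, 0), box, box)
            for y, x in zip(ys[::50], xs[::50])]

def classify_roi(crop: np.ndarray) -> int:
    """Stage 2 (on the accelerator): placeholder classifier. In a real system
    this is where the quantized model would run on the Akida device."""
    return int(crop.mean() > 0.5)  # dummy decision

def run_pipeline(frame: np.ndarray, thresh: float = 0.8) -> List[Tuple[Box, int]]:
    """Only the proposed crops are classified; the heavy stage never
    touches the full frame."""
    results = []
    for (x, y, w, h) in propose_rois(frame, thresh):
        crop = frame[y:y + h, x:x + w]
        results.append(((x, y, w, h), classify_roi(crop)))
    return results

# Example: a random "frame" with pixel values in [0, 1)
print(run_pipeline(np.random.rand(480, 640)))
```

The point is simply that the expensive classification stage only ever sees the proposed crops, never the full frame, which is where the claimed compute and power savings come from.
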
Hi Jesse,

As you say, "take note of the language used".

The words I like:

"That’s why automotive companies are untethering edge AI functions from the cloud – and performing distributed inference computation on local neuromorphic silicon using BrainChip’s AKIDA. This Essential AI model sequentially leverages multiple AKIDA-powered smart sensors and AKIDA AI SoC accelerators to efficiently capture and analyze inference data within designated regions of interest or ROI (samples within a data set that provide value for inference)"

"multiple AKIDA-powered smart sensors and AKIDA AI SoC accelerators"

"With AKIDA, automotive companies are designing lighter, faster, and more energy efficient in-cabin systems that can act independent of the cloud."

"AKIDA also enables sophisticated voice control technology that instantly responds to commands – as well as gaze estimation and emotion classification systems that proactively prompt drivers to focus on the road."

"Assisted Driving – LiDAR
A
utomotive companies are also leveraging a sequential computation model with AKIDA-powered smart sensors and AI accelerators to enable the design of new LiDAR systems."

Present tense = it's happening now!

[Sorry - I have to stop - I'm getting a nose bleed.]
 
  • Like
  • Fire
  • Love
Reactions: 101 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers,

Snippets from Weekend Financial Review...

1st is self-explanatory.

2nd, interesting: if Brainchip is not classed as DEEP TECH & worth a mention, I'll go he.....

He , he.

Regards,
Esq
 

Attachments

  • 20230114_114645.jpg (2.4 MB)
  • 20230114_114702.jpg (2.3 MB)
  • Like
  • Fire
  • Haha
Reactions: 27 users

Diogenese

Top 20
Hi Jesse,

As you say, "take note of the language used".

The words I like:

"That’s why automotive companies are untethering edge AI functions from the cloud – and performing distributed inference computation on local neuromorphic silicon using BrainChip’s AKIDA. This Essential AI model sequentially leverages multiple AKIDA-powered smart sensors and AKIDA AI SoC accelerators to efficiently capture and analyze inference data within designated regions of interest or ROI (samples within a data set that provide value for inference)"

"multiple AKIDA-powered smart sensors and AKIDA AI SoC accelerators"

"With AKIDA, automotive companies are designing lighter, faster, and more energy efficient in-cabin systems that can act independent of the cloud."

"AKIDA also enables sophisticated voice control technology that instantly responds to commands – as well as gaze estimation and emotion classification systems that proactively prompt drivers to focus on the road."

"Assisted Driving – LiDAR
A
utomotive companies are also leveraging a sequential computation model with AKIDA-powered smart sensors and AI accelerators to enable the design of new LiDAR systems."

Present tense = it's happening now!

[Sorry - I have to stop - I'm getting a nose bleed.]
All this and transformers still to come!
 
  • Like
  • Fire
  • Love
Reactions: 39 users
In the same vein, sort of (because it is not Intel related), I noticed that Valeo posted a short video on LinkedIn of a group of happy employees dancing to the song YMCA. Cute and self-deprecating, but also about them just celebrating a great CES event that they had, backstopped and supported of course by their A.I. technology. That post was originally pointed out by others here, ...Learning... I believe.

The video is on a Valeo LinkedIn post and generated lots of backslapping and celebration of THEIR moment, with some comments and hundreds of (y) reactions. Many of the (y) were from Valeo employees.

Note that they were celebrating and thumbs-upping THEIR moment. Not ours.....but theirs. And among the actual comments there is one that simply says, "Akida 1000 ?" Meaning, I presume: is Valeo's A.I. tech powered by Akida technology, and is that what made it worth demoing, dancing and celebrating? I ask this poster/commenter, who is obviously a shareholder and may be among this TSE crowd: 1) what did you expect Valeo to say to your comment? 2) Did you seriously think you would actually get a response? 3) If not, why say it? 4) What did you hope to accomplish by asking about Akida 1000 during their party? Feel free to respond to those questions.

As far as I'm concerned the commenter crashed their "feel good moment party" and threw a turd in their punchbowl by commenting with the "Akida 1000?" words. A thumbs-up reaction would have been fine, but your comment crossed the line simply because it was inappropriate in this case.

I ask this person ....do you visit Brainchip's website? Do you see Valeo listed as a trusted partner? If yes, then why might Brainchip do that, what possible reason would they have?

Perhaps yet another cringeworthy moment for Brainchip's management, unfortunately. Regards, .... dippY

All in my opinion
2nd hand embarrassment 🤦
 
  • Like
  • Fire
Reactions: 6 users

stuart888

Regular
A blog on the Brainchip website; take note of the language used.


“Conventional AI silicon and cloud-centric inference models do not perform efficiently at the automotive edge. As many semiconductor companies have already realized, latency and power are two primary issues that must be effectively addressed before the automotive industry can manufacture a new generation of smarter and safer cars. To meet consumer expectations, these vehicles need to feature highly personalized and responsive in-cabin systems while supporting advanced assisted driving capabilities.

That’s why automotive companies are untethering edge AI functions from the cloud – and performing distributed inference computation on local neuromorphic silicon using BrainChip’s AKIDA. This Essential AI model sequentially leverages multiple AKIDA-powered smart sensors and AKIDA AI SoC accelerators to efficiently capture and analyze inference data within designated regions of interest or ROI (samples within a data set that provide value for inference).



In-Cabin Experience
According to McKinsey analysts, the in-cabin experience is poised to become one of the most important differentiators for new car buyers. With AKIDA, automotive companies are designing lighter, faster, and more energy efficient in-cabin systems that can act independent of the cloud. These include advanced facial detection and customization systems that automatically adjust seats, mirrors, infotainment settings, and interior temperatures to match driver preferences.

AKIDA also enables sophisticated voice control technology that instantly responds to commands – as well as gaze estimation and emotion classification systems that proactively prompt drivers to focus on the road. Indeed, the Mercedes-Benz Vision EQXX features AKIDA-powered neuromorphic AI voice control technology which is five to ten times more efficient than conventional systems.

Neuromorphic silicon – which processes data with efficiency, precision, and economy of energy – is playing a major role in transforming vehicles into transportation capsules with personalized features and applications to accommodate both work and entertainment. This evolution is driven by smart sensors capturing yottabytes of data from the automotive edge to create holistic and immersive in-cabin experiences.

Assisted Driving – Computer Vision
In addition to redefining the in-cabin experience, AKIDA allows advanced driver assistance systems (ADAS) such as computer vision to detect vehicles, pedestrians, bicyclists, signs, and objects with incredibly high levels of precision.

Specifically, pairing two-stage object detection inference algorithms with local AKIDA silicon enables computer vision systems to efficiently perform processing in two primary stages – at the sensor (inference) and AI accelerator (classification). Using this paradigm, computer vision systems rapidly and efficiently analyze vast amounts of inference data within specific ROIs.

Intelligently refining inference data eliminates the need for compute heavy hardware such as general-purpose CPUs and GPUs that draw considerable amounts of power and increase the size and weight of computer vision systems. It also allows ADAS to generate incredibly detailed real-time 3D maps that help drivers safely navigate busy roads and highways.

Assisted Driving – LiDAR
Automotive companies are also leveraging a sequential computation model with AKIDA-powered smart sensors and AI accelerators to enable the design of new LiDAR systems. This is because most LiDAR typically rely on general-purpose GPUs – and run cloud-centric, compute-heavy inference models that demand a large carbon footprint to process enormous amounts of data.

Indeed, LiDAR sensors typically fire 8 to 108 laser beams in a series of cyclical pulses, each emitting billions of photons per second. These beams bounce off objects and are analyzed to identify and classify vehicles, pedestrians, animals, and street signs. With AKIDA, LiDAR processes millions of data points simultaneously, using only minimal amounts of compute power to accurately detect – and classify – moving and stationary objects with equal levels of precision.

Limiting inference to a ROI helps automotive manufacturers eliminate compute and energy heavy hardware such as general-purpose CPUs and GPUs in LiDAR systems – and accelerate the rollout of advanced assisted driving capabilities.

Essential AI
Scalable AKIDA-powered smart sensors bring common sense to the processing of automotive data – freeing ADAS and in-cabin systems to do more with less by allowing them to infer the big picture from the basics. With AKIDA, automotive manufacturers are designing sophisticated edge AI systems that deliver immersive end-user experiences, support the ever-increasing data and compute requirements of assisted driving capabilities, and enable the self-driving cars and trucks of the future.

To learn more about how BrainChip brings Essential AI to the next generation of smarter cars, download our new white paper here.
On this part: "latency and power are two primary issues that must be effectively addressed".


Brainchip's competitive advantage differentiators are blasting off. Use cases: fast and low power. Cream of the crop low power AI/ML smarts, best in class, etc.

The ARM top guy's blog was, from its first sentence to the last sentence of its first paragraph, about energy conservation. He stayed off big data. I could not find it, or I would have posted it here!

Exact verbiage, and the order of that verbiage, reveals lots of clues. All these companies think about every sentence. The first statement in the lead paragraph gets more weight than the second sentence in the second paragraph, and so on. Human writing has clues all over the place.
 
  • Like
  • Fire
Reactions: 22 users

stuart888

Regular
The clues say: wow, Brainchip and their fast, low-power solutions.

It is in our partners' text and words, and the vibe is strong.

 
  • Like
  • Thinking
Reactions: 9 users

Kachoo

Regular
I would step back and make the following comments:

1. CES 2023 was a great success for BRN and its partners.
2. The potential leads have really increased and our technology is out there.

The pull-back in SP is the market fighting: shorting, pumping, longs, and many other variables outside the control of retail investors.

The lack of revenue is not a result of a lack of activity. I think the end product needs to be sold by the end suppliers before we see payments down the line.

I would also ask the few that poke these partners to be humble and let them market their products, whether it's Akida or not.

Not many companies are going to thank IP suppliers for the innovation, and they don't need to: it's their product, they spent the money, and that's it. Poking these people and companies will only slow us down and make it harder for BRN employees to market products.

I would say that blackouts on info could even be a result of how shareholders will circle the company behind the next product and tell them to put Akida on the package, lol.

Let's look at MegaChips: we don't even know who they work for, but they grow.

In the end, like the CEO says, watch the financial numbers. I believe they are coming; there is a lot of activity and soon we will start seeing it roll in.

Let's be professional. I know most have been, but please, to those that poke these companies: let them enjoy their moments too. They engineered the product as well, and lots of amazing work went into the whole product.
 
  • Like
  • Fire
  • Love
Reactions: 51 users

JK200SX

Regular
Published today in the Edge Impulse Blog :)

Edge Impulse

BrainChip Akida™ Platform Features in Edge Impulse

Blog


mathijsTeam
6h

Edge Impulse enables developers to rapidly build enterprise-grade ML algorithms, trained on real sensor data, in a low- to no-code environment. With the complete integration of BrainChip Akida™ technology, these trained algorithms and new algorithms can now be converted into spiking neural networks (SNNs) and deployed to BrainChip Akida target devices such as the Akida mini PCIe development board. This blog highlights the new features with BrainChip technology that are now available in Edge Impulse Studio to provide easy, quick, and advanced model development and deployment for BrainChip Akida neuromorphic technology.
The development of models for BrainChip Akida is now integrated into Edge Impulse Studio. Developers can select BrainChip Akida (refer Figure 1) in the learning block of an impulse design. There are two learning blocks available today: classification, which supports development of new models, and transfer learning, which provides access to a model zoo that is optimized for BrainChip Akida devices. The type of learning blocks visible depends on the type of data collected and the intent of the project, such as classification or object detection. Using the BrainChip Akida learning blocks ensures that the models developed are optimized and can be successfully deployed to the BrainChip Akida mini PCIe development board.
Figure 1: Choose BrainChip Classification or Transfer learning blocks for development on BrainChip Akida devices
In the learning block of the impulse design one can compare float, quantized, and Akida versions of a model. If you added a processing block to your impulse design, you will need to generate features before you can train your model. Developers can use Edge Impulse Studio (refer Figure 2) to edit predefined models and training parameters.
Figure 2: Visual mode of the BrainChip Akida learning block with options to profile float, quantized, and Akida accuracies
Edge Impulse Studio also gives users the ability to modify pre-generated Python code as a way to get more exact behavior from the learn blocks. In this area the more advanced user can also call into the Akida MetaTF Python package, which is integrated into the Akida learn blocks (Figure 3a and 3b).
Figure 3a: How to access Expert Mode of a BrainChip Akida learn block
Figure 3b: Example of Expert Mode code that calls BrainChip’s MetaTF package functions
While training the BrainChip Akida learn block, useful information such as model compatibility, summary, sparsity, and the number of NPs required is also displayed in the log output (refer Figure 4). This helps developers review and modify their models to generate custom, optimized configurations for BrainChip Akida devices (Figure 5).
Figure 4: Profile information for BrainChip Akida
If the project uses a transfer learning block, the developer will be presented with a list of models (refer Figure 5) from BrainChip’s Model Zoo that are pre-trained and optimized for BrainChip Akida. These models provide a great starting point for developers to implement transfer learning in their projects. As of today, several AkidaNet-based models are integrated into Edge Impulse Studio, and many more will be integrated over time. If developers have a specific request on this, please let us know via the Edge Impulse forums.
Figure 5: List of models available when BrainChip Transfer Learning block is chosen
Any model that has been developed in the impulse design on Edge Impulse Studio can be deployed to BrainChip Akida target devices. In order to download these models for custom deployments, developers must choose the BrainChip MetaTF™ model block (refer Figure 5a) under the deployment stage to obtain a .zip file with the converted model in it. Alternatively, there is also a BrainChip Akida PCIe deployment block (refer Figure 5b) available which will generate a pre-built binary that can be used with Edge Impulse Linux CLI to run on any compatible Linux installation where this board is installed.
The pre-built binary that is provided from the deployment block also has in-built tools that provide performance metrics for the AKD1000 mini PCIe reference board (refer Figure 7). This is a unique integration, as developers can not only deploy their favorite projects to BrainChip Akida devices but can also capture information such as efficiency that helps build a prototype system with Akida technology.
Figure 7: Performance information unique to BrainChip Akida devices
To quickly get started, please see these example projects that are available to clone immediately for quick training and deployment to the AKD1000 development platform.
If you are new to Edge Impulse, please try out one of the Getting Started tutorials before continuing with Edge Impulse BrainChip Akida features. Please reach out to us on Edge Impulse forums for technical support. If you need more information about getting started with BrainChip Akida on Edge Impulse, visit the documentation page.

This is a companion discussion topic for the original entry at https://www.edgeimpulse.com/blog/brainchip-akidatm-platform-features-in-edge-impulse
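
For anyone curious what the Expert Mode / MetaTF integration boils down to, here is a minimal sketch of the equivalent standalone workflow using BrainChip's MetaTF tooling (the akida and cnn2snn Python packages). The model file name is made up, and the quantize/convert signatures have changed between MetaTF releases, so treat this as an outline rather than a copy-paste recipe.

```python
# Outline only: quantize a trained Keras model and convert it to an Akida
# (SNN) model with BrainChip's MetaTF tooling. Function signatures vary by
# MetaTF release, and the model file name below is hypothetical.
from tensorflow import keras
import cnn2snn

keras_model = keras.models.load_model("my_trained_model.h5")  # hypothetical file

# Quantize weights/activations to the low bit-widths Akida expects
# (older MetaTF releases expose cnn2snn.quantize; newer ones use quantizeml).
quantized = cnn2snn.quantize(keras_model,
                             weight_quantization=4,
                             activ_quantization=4)

# Convert the quantized Keras model into an Akida model.
akida_model = cnn2snn.convert(quantized)
akida_model.summary()   # reports layer mapping, NPs used, etc.

# With an AKD1000 PCIe card installed, the model can be mapped to hardware:
# import akida
# akida_model.map(akida.devices()[0])
```

Edge Impulse's learning blocks effectively wrap this flow, which is why the MetaTF model block in the deployment stage hands back the converted model as a .zip.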
 
  • Like
  • Fire
  • Love
Reactions: 44 users
Might not be related but it's interesting that both Dell and Panasonic are using it. Reads like it's using a radar solution similar to what Socionext was developing while using ultra low power:


Synaptics’ Emza Visual Sense AI Powers Human Presence Detection in Latest Dell and Panasonic Mobile PCs

Ultra-low-power machine learning (ML) algorithms analyze user and on-looker behavior to conserve power while enhancing privacy and security.

Las Vegas, NV, January 3rd, 2023 – Synaptics® Incorporated (Nasdaq: SYNA) announced today that Dell and Panasonic have deployed its Emza Visual Sense artificial intelligence (AI) technology to enable Human Presence Detection (HPD) in their latest Latitude and SR mobile PCs, respectively. Running advanced Emza ML algorithms on dedicated, ultra-low power edge AI hardware, Synaptics provides PC OEMs with a turnkey solution that enables both longer battery life and enhanced privacy and security. By analyzing context, Synaptics’ technology goes far beyond basic presence detection to automatically wake the system when a user is engaged, dim the screen when they are not, hide information when an on-looker is detected, and lock the PC when the user walks away—all while the PC’s webcam is off.

CES 2023: To see the latest demonstration of Emza Visual Sense and HPD, visit us in the Venetian Hotel, Level 2 Exhibitor, Bellini Ballroom, #2105. Email press@synaptics.com for an appointment.

“By using Emza Visual Sense AI for HPD, Dell and Panasonic are leading an era of context-aware computing that will combine multiple sensing modes with edge AI to enable user-friendly, efficient, and secure intelligent platform experiences,” said Saleel Awsare, SVP & GM at Synaptics. “We are as excited for our partners as they embark on this journey as we are for the end users who will quickly reap the benefits.”

Both the Dell Latitude laptop and Panasonic SR notebook are shipping today.


Synaptics’ KatanaTM AI SoC platform also pairs with Emza Visual Sense ML algorithms to form ultra-low power, highly intelligent computer vision platforms with broad deployability. Together, they expand HPD applications to devices beyond PCs and notebooks to include smart TVs and assisted living cameras.

Availability
Synaptics’ Emza Visual Sense technology is available now. For more information, contact your local Synaptics sales representative.
 
  • Like
  • Fire
Reactions: 17 users

Foxdog

Regular
I would step back and make the following comments:

1. CES 2023 was a great success for BRN and its partners.
2. The potential leads have really increased and our technology is out there.

The pull-back in SP is the market fighting: shorting, pumping, longs, and many other variables outside the control of retail investors.

The lack of revenue is not a result of a lack of activity. I think the end product needs to be sold by the end suppliers before we see payments down the line.

I would also ask the few that poke these partners to be humble and let them market their products, whether it's Akida or not.

Not many companies are going to thank IP suppliers for the innovation, and they don't need to: it's their product, they spent the money, and that's it. Poking these people and companies will only slow us down and make it harder for BRN employees to market products.

I would say that blackouts on info could even be a result of how shareholders will circle the company behind the next product and tell them to put Akida on the package, lol.

Let's look at MegaChips: we don't even know who they work for, but they grow.

In the end, like the CEO says, watch the financial numbers. I believe they are coming; there is a lot of activity and soon we will start seeing it roll in.

Let's be professional. I know most have been, but please, to those that poke these companies: let them enjoy their moments too. They engineered the product as well, and lots of amazing work went into the whole product.
Good post, thanks.
After the CEO podcast I've turned my attention (and expectations) to 2024. There's already a groundswell around AI growing in the media, and I think this will continue throughout the remainder of this year. As for our revenue, perhaps a bit more in the next 4C, followed by bigger 'lumps' towards the end of 2023. We should see some solid revenue in 2024 as customers' products mature and enter the marketplace.
I don't know if we'll ever see 'Akida Inside' advertised on products, but we should see a steady increase in SP reflective of a successful company providing an amazing product to many other successful companies.
That's my takeaway from the CEO's podcast anyway.
 
  • Like
  • Love
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Howdy Brain Fam,

Hope everyone is having a great weekend. Let's hope I can make it even better!

I just watched Cerence's presentation at the 25th Annual Needham Growth Conference, which was filmed on 10 January 2023. It's an approximately 40-minute video presentation that you have to sign up for to watch (full name and email address required for access). The link is here if you're interested in watching: https://wsw.com/webcast/needham

I'm itching to share a bit of information from the presentation because I believe numerous points raised throughout it quite strongly indicate the possible use of our technology in Cerence's embedded voice solution, IMO.

For some background, Cerence is a global leader in conversational AI, and they state that they are the only company in the world to offer the "complete stack", including conversational AI, audio, and speech-to-text AI. Cerence state that every second newly defined SOP (start of production) car uses their technology, and they're working with some very big names such as BYD, NIO, GM, Ford, Toyota, Volkswagen, Stellantis, Mercedes, and BMW.

In the presentation they discussed how in November they held their second Analyst Day, at which they outlined their new strategy called "Destination Next". They said that from a technology perspective this strategy or transition means they are going to be evolving from a voice-only driver-centric solution, via their Cerence Assistant or Co-Pilot, to a truly immersive in-cabin experience. Stefan Ortmanns (CEO of Cerence) said early in the presentation something like "which means we're bringing in more features and applications beyond conversational AI, for example wellness sensing, for example surrounding awareness, emotional AI or the interaction inside and outside the car with passengers, and we have all these capabilities for creating a really immersive companion”. He also said something about the underlying strategy being based on 3 pillars, "scalable AI, teachable AI, and the immersive in-cabin experience", which has been brought about as a result of a "huge appetite for AI".

At about 6 mins Stefan Ortmanns says they have recently been shifting gear to bring in more proactive AI, and he said something along these lines: "What does it mean? So you bring everything you get together, so you have access to the sensors in the car, you have an embedded solution, you have a cloud solution, and you also have this proactive AI, for example the road conditions or the weather conditions. And if you can bring everything together you have a personalised solution for the driver and also for the passengers, and this combines with what we call the (??? mighty ?? intelligence). And looking forward, for the immersive experience, you need to bring in more together; it's not just about speech, it's about AI in general, right, so, with what I said, wellness sensing, drunkenness detection, you know, we're working on all this kind of cool stuff. We're working on emotional AI to have a better engagement with the passengers and also with the driver. And this is our future road map and we have vetted this with 50-60 OEMs across the globe, and we did it together with a very well-known consultancy firm."

At about 13 mins they describe how there will be very significant growth in fiscal years 23/24 because of the bookings they have won over the last 18 to 24 months that will go into production at the end of this year and very early in 2024 and a lot of them will have the higher tech stack that Stefan talked about.

At roughly 25 mins Stefan Ortmanns is asked how they compete with big tech like Alexa, Google and Apple, and how they are co-existing, because there are certain OEMs using Alexa and certain ones using Cerence as well. In terms of what applications Cerence is providing, Stefan replied stating something like "Alexa is a good example, so what you're seeing in the market is that OEMs are selecting us for their OEM-branded solution and we are providing the wake word for interacting with Alexa; that's based on our core technology".

Now here comes the really good bit. At 29 mins the conversation turns to partnership statements, and they touch on NVIDIA and whether Cerence views NVIDIA as a competitor or a partner (sounds familiar). This question was asked in relation to NVIDIA having its own chauffeur product, which enables some voice capabilities with its own hardware and software; however, Cerence has also been integrated into NVIDIA's DRIVE platform. In describing this relationship, Stefan Ortmanns says something like "So you're right. They have their own technology, but our technology stack is more advanced. And here we're talking about specifically Mercedes, where they're positioned with their hardware and with our solution. There's also another big semiconductor player, namely Qualcomm; now they are working with the Volkswagen group and they're also using our technology. So we're very flexible and open with our partners".

Following on from that, they discuss how Cerence is also involved in the language processing for BMW, which has to be "seamlessly integrated" with "very low latency".

So, a couple of points I wanted to throw in to emphasise why I think all of this so strongly indicates that BrainChip's technology is part of Cerence's stack.
  • Cerence presented Mercedes as the premium example with which to demonstrate how advanced their voice technology is in comparison to NVIDIA's. Since this presentation is only a few days old, I don't think they'd be referring to Mercedes' old voice technology, but rather the new advanced technology developed for the Vision EQXX. And I don't think Cerence would be referring to Mercedes at all if they weren't still working with them.
  • This is after Mercedes worked with BrainChip on the “wake word” detection for the Vision EQXX, which made it 5-10 times more efficient. So it only seems logical, if Cerence's core technology is to provide the wake word, that they should incorporate BrainChip’s technology to make the wake word detection 5-10 times faster.
  • In November 2022 Nils Shanz, who was responsible for user interaction and voice control at Mercedes and who also worked on the Vision EQXX voice control system was appointed Chief Product Officer at Cerence.
  • Previous post in which Cerence describe their technology as "self-learning",etc #6,807
  • Previous post in which Cerence technology is described as working without an internet connection #35,234 and #31,305
  • I’m no engineer but I would have thought the new emotion detection AI and contextual awareness AI that are connected to the car’s sensors must be embedded into Cerence’s technology for it all to work seamlessly.
Anyway, naturally I would love to hear what @Fact Finder has to say about this. As we all know he is the world's utmost guru in being able to sort the chaff from the wheat and always stands at the ready to pour cold water on any outlandish dot joining attempts when the facts don't stack up.

Of course, these are only the conclusions I arrived at after watching the presentation, and I would love to hear what everyone else's impressions are. Needless to say, I hope I'm right.

B 💋

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 134 users

Foxdog

Regular
Mate, I reckon we're in good shape while the Fool keeps bagging us. Look out when they decide to pump our stock and recommend it - that might be the time to sell. Don't fret tho, it'll be 10 bucks plus by then 😜
 
  • Like
  • Fire
Reactions: 10 users

Vladsblood

Regular
Mate, I reckon we're in good shape while the Fool keeps bagging us. Look out when they decide to pump our stock and recommend it - that might be the time to sell. Don't fret tho, it'll be 10 bucks plus by then 😜
I’m not selling until at least 4 stock splits after our full Nasdaq listing, folks. By then we could be 50-100 dollars or MORE. Cheers Chippers, Vlad
 
  • Like
  • Love
  • Fire
Reactions: 31 users

Esq.111

Fascinatingly Intuitive.
Evening Chippers,

Breaking news...

World first: a Pioneer DJ mixing table utilising BrainChip's Akida neuromorphic chip on the International Space Station.
Personally, I can't imagine having to wash the external windows, whilst attached via umbilical, without some groovy tunes.

😄 .

* With any luck, this may pull Fact Finder back to give me a dressing down.
Seemed to work last time.

All in good humour.

ARi - Matasin, Live Series, Ep.003 ( Melodic Techno Progressive House Mix) 7th Jan 2023.

If a savvy individual could post the link, thank you in advance.
This may be our only hope of retrieving Fact Finder.

Cheers for all the great finds and posts today.

Regards,
Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 29 users