BRN Discussion Ongoing

wilzy123

Founding Member
Yes, but it's not the only indicator showing that BrainChip is moving forward. You would have to be completely ignorant not to see the progress. People are just watching the share price and complaining about lower revenue. If you have ever had anything to do with startups, you know you have to suffer for at least three years (depending on the industry you are in), and we are talking here about a completely new business! IMO
1000025690.jpg
 
  • Haha
  • Like
Reactions: 5 users

TECH

Regular
How come VVDN doesn't have any AKIDA Edge Boxes for sale on its website? I can only find this link to NVIDIA's Edge Box.


Yes, I have been monitoring the VVDN site for quite a while now and noticed the same thing. Maybe Jensen said, "pump my product or else!" :ROFLMAO::ROFLMAO:
 
Last edited:
  • Like
Reactions: 5 users

Tothemoon24

Top 20
IMG_8965.jpeg



🎇 Unleashing GenAI & 5G for the automotive industry!

🎤 We are having a blast at Viva Technology talking innovation in the automotive industry.

David Roine, Director of Connectivity Strategy at Valeo discussed with Verizon Business the opportunities offered by 5G for safer and more autonomous mobility.

"5G revolutionizes the automotive industry by enhancing in-car experiences, supporting autonomous driving, and improving safety with V2X technologies,” Roine shared. “Connectivity is crucial for the modern car, transforming it into a 'smartphone on wheels' and ensuring ongoing innovation and safety."

Valeo's AI4ALL Director Cédric MERLIN covered current applications and the enormous potential of Gen AI for the automotive industry in a discussion with Thomas Morel from McKinsey & Company and Ozgur Tohumcu and Ralph Hengstenberg from Amazon Web Services (AWS).

"At Valeo, we're harnessing AI and GenAI to revolutionize the automotive industry, from enhancing EV range and smart lighting to advancing autonomous driving,” Merlin shared. “We’re integrating AI into every aspect of our business, ensuring rapid development and heightened security. Our GenAI initiatives are transforming employee efficiency and product innovation."

You can bet Valeo is at the heart of both of these revolutions 💚
 
  • Like
  • Love
  • Thinking
Reactions: 11 users

Diogenese

Top 20
View attachment 63628


🎇 Unleashing GenAI & 5G for the automotive industry!

🎤 We are having a blast at Viva Technology talking innovation in the automotive industry.

David Roine, Director of Connectivity Strategy at Valeo discussed with Verizon Business the opportunities offered by 5G for safer and more autonomous mobility.

"5G revolutionizes the automotive industry by enhancing in-car experiences, supporting autonomous driving, and improving safety with V2X technologies,” Roine shared. “Connectivity is crucial for the modern car, transforming it into a 'smartphone on wheels' and ensuring ongoing innovation and safety."

Valeo's AI4ALL Director Cédric MERLIN covered current applications and the enormous potential of Gen AI for the automotive industry in a discussion with Thomas Morel from McKinsey & Company and Ozgur Tohumcu and Ralph Hengstenberg from Amazon Web Services (AWS).

"At Valeo, we're harnessing AI and GenAI to revolutionize the automotive industry, from enhancing EV range and smart lighting to advancing autonomous driving,” Merlin shared. “We’re integrating AI into every aspect of our business, ensuring rapid development and heightened security. Our GenAI initiatives are transforming employee efficiency and product innovation."

You can bet Valeo is at the heart of both of these revolutions 💚


Given the BRN/Valeo JD partnership, and the fact that Akida 2 has not made it to silicon, I've been developing the hypothesis that Valeo Scala 3 is provided with Akida software for classification of point cloud objects, so I looked at some recent Valeo patents which do use CNN software running on a system computer for point cloud classification. Using software would have made it possible to continually upgrade the system as TeNNs/ViT was refined.

"At Valeo, we're harnessing AI and GenAI to revolutionize the automotive industry, from enhancing EV range and smart lighting to advancing autonomous driving,” Merlin shared. “We’re integrating AI into every aspect of our business, ensuring rapid development and heightened security."

This Valeo patent relates to training a NN:

US2023146935A1 CONTENT CAPTURE OF AN ENVIRONMENT OF A VEHICLE USING A PRIORI CONFIDENCE LEVELS - 20211109

[0014] The steps of capturing the environment of the vehicle by the at least one environment sensor and of processing the point cloud using the trained artificial intelligence for the content capture of the environment relate to the operation of the vehicle. These steps are therefore each performed individually in each driving assistance system. These steps are furthermore performed repeatedly in the driving assistance system in order to perform continuous content capture of the environment.

In effect, this would relate to the preparation of NN models from lidar data.


This application was filed in October 2022:

WO2024088937A1 METHOD TO ANALYZE AT LEAST ONE OBJECT IN AN ENVIRONMENT OF A SENSOR DEVICE FOR A VEHICLE 20221027

The analysis algorithm may, in particular, comprise a convolutional neural network. The analysis algorithm is at least configured to, for example, detect, identify and/or classify the object. By means of the convolutional neural network, it is, for example, possible to perform a classification of the object if the convolutional neural network is trained to classify objects.
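
To make that concrete, here is a minimal, purely illustrative sketch of the kind of software classifier the patent describes: a small PointNet-style network that takes a segmented cluster of lidar points and outputs class scores. The architecture, layer sizes and class count are assumptions for illustration only, not Valeo's or BrainChip's design.

```python
# Illustrative only: a PointNet-style point cloud classifier of the general kind
# the patent describes (NN software running on a host processor).
# Architecture, sizes and classes are assumptions, not Valeo's or BrainChip's design.
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):  # e.g. car, pedestrian, cyclist, other
        super().__init__()
        # Shared per-point feature extractor (1x1 convolutions over the point axis)
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
        )
        # Classifier head applied to the pooled global feature vector
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, 3, num_points) -- x, y, z of each lidar return
        per_point = self.features(points)          # (batch, 256, num_points)
        global_feat = per_point.max(dim=2).values  # order-invariant pooling
        return self.head(global_feat)              # (batch, num_classes) logits

# Classify one segmented cluster of 1024 lidar points (random data here)
model = PointCloudClassifier().eval()
cluster = torch.randn(1, 3, 1024)
print(model(cluster).softmax(dim=-1))
```

The point is simply that everything above runs as ordinary software on a host processor; no neuromorphic silicon is needed for this step.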

It seems highly probable that BRN were working with Valeo to develop NN software to process lidar signals.

Remembering also that the TeNNs patent applications were filed in June 2022:
WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

it would not have been possible to have TeNNs silicon prepared at that time. In fact we know that TeNNs was not taped out until much later.

This being the case, and in view of the recent Valeo patents, I'm prepared to bet that the BRN/Valeo JD partnership encompasses the development of software NNs, including the creation of NN models. The milestone payments would have included model development milestones.

So, in my view, there is no Akida SoC in Scala 3, but there is Akida NN model software provided with Scala 3 to run on the ADAS processor, and adaptable to run on an Akida 2 processor with TeNNs when the silicon is available.

I can't say whether or not this is the model that Mercedes has adopted, but since they don't have Akida 2 silicon, there is a high probability that it is the case.
 
  • Like
  • Love
  • Fire
Reactions: 70 users

IloveLamp

Top 20
Last edited:
  • Like
  • Fire
Reactions: 14 users

Learning

Learning to the Top 🕵‍♂️
Given the BRN/Valeo JD partnership, and the fact that Akida 2 has not made it to silicon, I've been developing the hypothesis that Valeo Scala 3 is provided with Akida software for classification of point cloud objects, so I looked at some recent Valeo patents which do use CNN software running on a system computer for point cloud classification. Using software would have made it possible to continually upgrade the system as TeNNs/ViT was refined.

"At Valeo, we're harnessing AI and GenAI to revolutionize the automotive industry, from enhancing EV range and smart lighting to advancing autonomous driving,” Merlin shared. “We’re integrating AI into every aspect of our business, ensuring rapid development and heightened security."

This Valeo patent relates to training a NN:

US2023146935A1 CONTENT CAPTURE OF AN ENVIRONMENT OF A VEHICLE USING A PRIORI CONFIDENCE LEVELS - 20211109

[0014] The steps of capturing the environment of the vehicle by the at least one environment sensor and of processing the point cloud using the trained artificial intelligence for the content capture of the environment relate to the operation of the vehicle. These steps are therefore each performed individually in each driving assistance system. These steps are furthermore performed repeatedly in the driving assistance system in order to perform continuous content capture of the environment.

In effect, this would relate to the preparation of NN models from lidar data.


This application was filed in October 2022:

WO2024088937A1 METHOD TO ANALYZE AT LEAST ONE OBJECT IN AN ENVIRONMENT OF A SENSOR DEVICE FOR A VEHICLE 20221027

The analysis algorithm may, in particular, comprise a convolutional neural network. The analysis algorithm is at least configured to, for example, detect, identify and/or classify the object. By means of the convolutional neural network, it is, for example, possible to perform a classification of the object if the convolutional neural network is trained to classify objects.

It seems highly probable that BRN were working with Valeo to develop NN software to process lidar signals.

Remembering also that the TeNNs patent applications were filed in June 2022:
WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

it would not have been possible to have TeNNs silicon prepared at that time. In fact we know that TeNNs was not taped out until much later.

This being the case, and in view of the recent Valeo patents, I'm prepared to bet that the BRN/Valeo JD partnership encompasses the development of software NNs, including the creation of NN models. The milestone payments would have included model development milestones.

So, in my view, there is no Akida SoC in Scala 3, but there is Akida NN model software provided with Scala 3 to run on the ADAS processor, and adaptable to run on an Akida 2 processor with TeNNs when the silicon is available.

I can't say whether or not this is the model that Mercedes has adopted, but since they don't have Akida 2 silicon, there is a high probability that it is the case.
I hope you are on the money, Dio.

It would make my comment from about 1 1/2 years ago look smart. 😁😁😁 Although I was only guessing at the time, before the release of TeNNs.

Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-192307

Learning 🪴
 
  • Like
  • Fire
  • Love
Reactions: 18 users
Hadn't seen this one before or I've missed it. Quick search on the patent title didn't show anything on TSEx.

It was lodged by the Research Institute, not BRN, at this point :unsure:

Picked it up from the Aus IP Official Journal released June 23.

@Diogenese may have an opinion.




Australian application number: 2023901547
Patent application type: Provisional
Serial number:
Application status: Filed
First IPC mark:
Currently under opposition: No
Proceeding types:
Invention title: METHOD AND SYSTEM FOR UNSUPERVISED FEATURE EXTRACTIONS IN A LAYERED NEURAL NETWORK
Inventors: Not Given

Applicant details
Applicant 1 name: BrainChip Research Institute Pty Ltd
Applicant 1 address: WA 6000 Australia

Publication history
Publication action: Provisional Applications Filed
Volume/issue: 57/21
Publication date: 01-June-2023
Document kind:
 
  • Like
  • Love
  • Fire
Reactions: 14 users

Diogenese

Top 20
I hope you are on the money, Dio.

It would make my comment from about 1 1/2 years ago look smart. 😁😁😁 Although I was only guessing at the time, before the release of TeNNs.

Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-192307

Learning 🪴
Hi Learning,

Thanks for the archeological link - I think I missed the presentation in November 2022. The presentation stopped about halfway through just now.

I think we are talking about two different things with the software v hardware debate, but your clairvoyance was remarkable.

My understanding of what Anil was talking about is that the functions of Akida 1 were divided between the hardware and the algorithms (software run on the host processor).
1716558805988.png

The left hand side is done in silicon without the need for computer intervention to produce inference/classification. Once the NN is configured during setup, the NPUs run automatically in response to input spike streams.

The right hand side does use algorithms running on the host processor.

For example, a CNN is converted to an SNN using software, and the resulting SNN models are used to configure the NPUs during setup. (See, e.g., slide 10, where each layer can work on a different frame at the same time, passing its frame on to the next layer as it finishes.)
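
As a toy illustration of the conversion idea (a ReLU activation approximated by a spike count, so the same trained weights can be reused with spike-based arithmetic), here is a self-contained sketch. This shows only the general rate-coding principle, not BrainChip's actual conversion scheme, and the numbers are made up.

```python
# Toy illustration of CNN-to-SNN conversion by rate coding.
# Not BrainChip's scheme; weights and inputs are random for demonstration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))          # one dense layer's trained weights
x = rng.uniform(0, 1, size=16)        # normalised input activations

# ANN view: analog activations through a ReLU
ann_out = np.maximum(W @ x, 0.0)

# SNN view: inputs rate-coded as spikes over T timesteps,
# so the mean spike rate stands in for the analog activation
T = 256
spikes = (rng.uniform(size=(T, 16)) < x).astype(float)
snn_out = np.maximum(W @ spikes.mean(axis=0), 0.0)

print(np.round(ann_out, 3))
print(np.round(snn_out, 3))           # close to ann_out; the error shrinks as T grows
```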

With few-shot learning, the name of the object needs to be supplied via the computer.
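
To illustrate what "the name of the object needs to be supplied via the computer" means in practice, here is a hypothetical nearest-class-mean sketch of few-shot learning on top of a frozen feature extractor. The embed() stand-in and the learning rule are placeholders of my own, not Akida's edge-learning algorithm.

```python
# Hypothetical sketch of few-shot class addition: the host supplies the label string,
# the device only ever sees feature vectors. Nearest-class-mean stands in for the
# real on-chip learning rule, which this is not.
import numpy as np

rng = np.random.default_rng(1)

def embed(image):
    """Placeholder for a frozen feature extractor running on the device."""
    return image.reshape(-1)[:32]

class FewShotHead:
    def __init__(self):
        self.prototypes = {}  # label (supplied by the host) -> mean feature vector

    def learn(self, label, examples):
        self.prototypes[label] = np.mean([embed(x) for x in examples], axis=0)

    def classify(self, image):
        f = embed(image)
        return min(self.prototypes, key=lambda k: np.linalg.norm(self.prototypes[k] - f))

head = FewShotHead()
head.learn("coffee mug", [rng.normal(size=(8, 8)) for _ in range(4)])        # 4-shot
head.learn("stapler", [rng.normal(size=(8, 8)) + 2.0 for _ in range(4)])
print(head.classify(rng.normal(size=(8, 8)) + 2.0))  # expected: "stapler"
```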

However, the Valeo hypothesis is that there is no Akida silicon, and the whole function is simulated in software. After all, an earlier business model was Brainchip Studio, a purely software solution.

I had thought that the simulation software would be adequate for proof-of-concept demonstrations for customers, but would probably not be sufficient for commercial applications such as automotive.

Now that we have the hyper-efficient TeNNs, this may still be better than the competition.

Looking back to 2022, the JD partnership was probably developing the software option, the models being a major part of the job.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Diogenese

Top 20
Hadn't seen this one before or I've missed it. Quick search on the patent title didn't show anything on TSEx.

It was lodged by the Research Institute, not BRN, at this point :unsure:

Picked it up from the Aus IP Official Journal released June 23.

@Diogenese may have an opinion.




Australian application number: 2023901547
Patent application type: Provisional
Serial number:
Application status: Filed
First IPC mark:
Currently under opposition: No
Proceeding types:
Invention title: METHOD AND SYSTEM FOR UNSUPERVISED FEATURE EXTRACTIONS IN A LAYERED NEURAL NETWORK
Inventors: Not Given

Applicant details
Applicant 1 name: BrainChip Research Institute Pty Ltd
Applicant 1 address: WA 6000 Australia

Publication history
Publication action: Provisional Applications Filed
Volume/issue: 57/21
Publication date: 01-June-2023
Document kind:
It's a provisional application. The complete application or PCT will need to be filed by 1 June 2024. The specification will not be published for another 6 months.
 
  • Like
  • Fire
Reactions: 16 users

JDelekto

Regular
BrainChip Proxy Voting with Fidelity and DTC Eligibility

This afternoon, I spent about an hour chatting with a representative from Fidelity to get answers about requesting proxy voting materials and DTC eligibility, which affects the foreign transaction fee they assess.

Fidelity confirmed the information I received from Boardroom Pty Ltd. HSBC Bank Australia Limited is the sub-custodian that holds the shares of BRN. While this is the case for Fidelity, I am unsure if this is true for those trading on other platforms. To request proxy voting materials, contact Fidelity to initiate the process.

Fidelity advises contacting them 5-7 business days before the proxy voting cutoff date, though adding a few extra days as a buffer is recommended. You can call them or send a message through their platform, specifying that you hold a foreign security (specify OTCMKTS: BRCHF). Indicate that the proxy materials should be requested through their back office, which will submit a request on your behalf to their sub-custodian, HSBC Bank Australia Limited.

Fidelity informed me that BrainChip Ltd. is not "fully" DTC eligible. While Investor Relations stated that BrainChip is DTC eligible, this is only part of the story. According to Fidelity, the company did not fully register as DTC eligible and still requires a clearing agent for their trades. Because of this clearing agent, Fidelity assesses the $50 fee per trade for this security.

Although I cannot be certain without further inquiry, it seems likely that BrainChip is saving money by doing this. However, I wonder what benefit they get from being only partially DTC eligible, because none seems to be conveyed to those purchasing the stock.

For those using other trading platforms besides Fidelity, I suggest contacting your representatives to see if you receive similar information and get the steps required to vote by proxy.

Have a great weekend all!
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Frangipani

Regular
C4E1A4AF-42F6-4385-9374-B7989B5B094A.jpeg


7B80762E-C44B-4111-8BF7-F48ABC2CA5E6.jpeg

ED92A618-C005-43A0-BB93-0FD5F0C75C9E.jpeg



And also an interesting like:

E202AED8-EDB1-4F73-A9A1-D02B05874E83.jpeg

8E5B60B3-5882-458D-91DC-88D6A1513435.jpeg


F37C7585-806B-4DB0-8D38-C60CACB9A5D0.jpeg



When thinking of Spanish automobile manufacturers that could be involved in a connected car project, SEAT or CUPRA (a subsidiary of SEAT) - both part of the Volkswagen Group - come to mind.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 31 users

IloveLamp

Top 20
1000015949.jpg
1000015951.jpg
 
  • Like
  • Love
  • Fire
Reactions: 18 users

IloveLamp

Top 20
Us?




1000015959.jpg
1000015956.jpg
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 21 users

FJ-215

Regular

No...

Saw this the other week but didn't post it. Need @Diogenese to give the full rundown. Above my paygrade.


IoT: new energy-efficient chips could expand the scope of artificial intelligence in edge computing

"Last year, in an article in Nature Communication published in collaboration with scientists from, among others, Robert Bosch GmbH, the Technical University of Munich, and the Indian Institute of Technology in Kanpur, Kämpfe unveiled an innovative chip design that makes use of ferroelectric field effect transistors (FeFET) that can store information even when they are disconnected from a power source. The new chip also has the key advantage of being able to simultaneously store and process data in transistors, which greatly reduces the bottleneck between data processing and memory."

“The chip we developed with Bosch and Fraunhofer IPMS, which is currently in production in the USA at GlobalFoundries, can deliver 885 TOPS/W”, explains the researcher. For an idea of what this means in practical terms, consider that GPU chips currently used in AI deliver 10 to 20 TOPS/W [Tera Operations Per Second and per Watt]. For the time being, however, the developers of the new architecture are not aiming to replace GPU-based systems but to target a range of uses in edge computing, where AI is deployed at the point where data is collected: in IoT devices, sensors and autonomous vehicles. “One use case, for example, is in automobile systems that pre-analyse objects captured by cameras” without relaying data to a central processing unit. The new chips will therefore open up opportunities to implement AI in highly miniaturized low-latency systems. “In the future, we will get around to integrating them into larger systems,” points out Thomas Kämpfe.
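
Taking the article's figures at face value, the claimed efficiency gap works out to roughly 44x to 88x:

```python
# Quick arithmetic on the article's headline numbers (taken at face value).
fefet_tops_per_w = 885
for gpu_tops_per_w in (10, 20):
    ratio = fefet_tops_per_w / gpu_tops_per_w
    print(f"~{ratio:.0f}x more operations per watt than a {gpu_tops_per_w} TOPS/W GPU")
# -> roughly 88x vs a 10 TOPS/W GPU, 44x vs a 20 TOPS/W GPU
```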
 
Last edited:
  • Like
  • Love
  • Sad
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Morning ILoveLamp ,

Looks promising... at the 1:43 minute mark, the black box on this chap's desk looks similar to ours, though it is hard to get a clear shot of it.

May need to blow the image up on a bigger screen to confirm or rule it out... it's black, boxy and about the right size.

Regards,
Esq.
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

hotty4040

Regular
Given the BRN/Valeo JD partnership, and the fact that Akida 2 has not made it to silicon, I've been developing the hypothesis that Valeo Scala 3 is provided with Akida software for classification of point cloud objects, so I looked at some recent Valeo patents which do use CNN software running on a system computer for point cloud classification. Using software would have made it possible to continually upgrade the system as TeNNs/ViT was refined.

"At Valeo, we're harnessing AI and GenAI to revolutionize the automotive industry, from enhancing EV range and smart lighting to advancing autonomous driving,” Merlin shared. “We’re integrating AI into every aspect of our business, ensuring rapid development and heightened security."

This Valeo patent relates to training a NN:

US2023146935A1 CONTENT CAPTURE OF AN ENVIRONMENT OF A VEHICLE USING A PRIORI CONFIDENCE LEVELS - 20211109

[0014] The steps of capturing the environment of the vehicle by the at least one environment sensor and of processing the point cloud using the trained artificial intelligence for the content capture of the environment relate to the operation of the vehicle. These steps are therefore each performed individually in each driving assistance system. These steps are furthermore performed repeatedly in the driving assistance system in order to perform continuous content capture of the environment.

In effect, this would relate to the preparation of NN models from lidar data.


This application was filed in October 2022:

WO2024088937A1 METHOD TO ANALYZE AT LEAST ONE OBJECT IN AN ENVIRONMENT OF A SENSOR DEVICE FOR A VEHICLE 20221027

The analysis algorithm may, in particular, comprise a convolutional neural network. The analysis algorithm is at least configured to, for example, detect, identify and/or classify the object. By means of the convolutional neural network, it is, for example, possible to perform a classification of the object if the convolutional neural network is trained to classify objects.

It seems highly probable that BRN were working with Valeo to develop NN software to process lidar signals.

Remembering also that the TeNNs patent applications were filed in June 2022:
WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

it would not have been possible to have TeNNs silicon prepared at that time. In fact we know that TeNNs was not taped out until much later.

This being the case, and in view of the recent Valeo patents, I'm prepared to bet that the BRN/Valeo JD partnership encompasses the development of software NNs, including the creation of NN models. The milestone payments would have included model development milestones.

So, in my view, there is no Akida SoC in Scala 3, but there is Akida NN model software provided with Scala 3 to run on the ADAS processor, and adaptable to run on an Akida 2 processor with TeNNs when the silicon is available.

I can't say whether or not this is the model that Mercedes has adopted, but since they don't have Akida 2 silicon, there is a high probability that it is the case.


Don't allow those " thought processes " to quell, Doggy, any time soon.

Just keep on ( DELVING ), the " jackpot " will emerge, eventually, and become " IMMINENT " i.e. ( IMMINENT ) IMO.

I like your viewpoint and I like it a lot, I'm thinking.

Have a good weekend comrades ;)


Akida Ballista >>>>> Imminent - Could this be possible, imminently, ............ HOPE SO ........... <<<<<

hotty...
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Learning

Learning to the Top 🕵‍♂️
Hi Learning,

Thanks for the archeological link - I think I missed the presentation in November 2022. The presentation stopped about halfway through just now.

I think we are talking about two different things with the software v hardware debate, but your clairvoyance was remarkable.

My understanding of what Anil was talking about is that the functions of Akida 1 were divided between the hardware and the algorithms (software run on the host processor).
View attachment 63633
The left hand side is done in silicon without the need for computer intervention to produce inference/classification. Once the NN is configured during setup, the NPUs run automatically in response to input spike streams.

The right hand side does use algorithms running on the host processor.

For example, a CNN is converted to an SNN using software, and the resulting SNN models are used to configure the NPUs during setup. (See, e.g., slide 10, where each layer can work on a different frame at the same time, passing its frame on to the next layer as it finishes.)

With few-shot learning, the name of the object needs to be supplied via the computer.

However, the Valeo hypothesis is that there is no Akida silicon, and the whole function is simulated in software. After all, an earlier business model was Brainchip Studio, a purely software solution.

I had thought that the simulation software would be adequate for proof-of-concept demonstrations for customers, but would probably not be sufficient for commercial applications such as automotive.

Now that we have the hyper-efficient TeNNs, this may still be better than the competition.

Looking back to 2022, the JD partnership was probably developing the software option, the models being a major part of the job.
Thanks Dio,

Me not so smart after all. 😆😆😆

Once again, thank you so much for your vast knowledge and for kindly sharing it.

Learning 🪴
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Damo4

Regular
Last edited:
  • Like
  • Love
Reactions: 6 users

Dijon101

Regular
Don't allow those " thought processes " to quell, Doggy, any time soon.

Just keep on ( DELVING ), the " jackpot " will emerge, eventually, and become " IMMINENT " i.e. ( IMMINENT ) IMO.

I like your viewpoint and I like it a lot, I'm thinking.

Have a good weekend comrades ;)


Akida Ballista >>>>> Imminent - Could this be possible, imminently, ............ HOPE SO ........... <<<<<

hotty...


As an AVZ and BRN shareholder, please never use the phrase "imminent" again...

It's giving me PTSD.
 
  • Haha
  • Like
  • Fire
Reactions: 22 users

7für7

Top 20
Given the BRN/Valeo JD partnership, and the fact that Akida 2 has not made it to silicon, I've been developing the hypothesis that Valeo Scala 3 is provided with Akida software for classification of point cloud objects, so I looked at some recent Valeo patents which do use CNN software running on a system computer for point cloud classification. Using software would have made it possible to continually upgrade the system as TeNNs/ViT was refined.

"At Valeo, we're harnessing AI and GenAI to revolutionize the automotive industry, from enhancing EV range and smart lighting to advancing autonomous driving,” Merlin shared. “We’re integrating AI into every aspect of our business, ensuring rapid development and heightened security."

This Valeo patent relates to training a NN:

US2023146935A1 CONTENT CAPTURE OF AN ENVIRONMENT OF A VEHICLE USING A PRIORI CONFIDENCE LEVELS - 20211109

[0014] The steps of capturing the environment of the vehicle by the at least one environment sensor and of processing the point cloud using the trained artificial intelligence for the content capture of the environment relate to the operation of the vehicle. These steps are therefore each performed individually in each driving assistance system. These steps are furthermore performed repeatedly in the driving assistance system in order to perform continuous content capture of the environment.

In effect, this would relate to the preparation of NN models from lidar data.


This application was filed in October 2022:

WO2024088937A1 METHOD TO ANALYZE AT LEAST ONE OBJECT IN AN ENVIRONMENT OF A SENSOR DEVICE FOR A VEHICLE 20221027

The analysis algorithm may, in particular, comprise a convolutional neural network. The analysis algorithm is at least configured to, for example, detect, identify and/or classify the object. By means of the convolutional neural network, it is, for example, possible to perform a classification of the object if the convolutional neural network is trained to classify objects.

It seems highly probable that BRN were working with Valeo to develop NN software to process lidar signals.

Remembering also that the TeNNs patent applications were filed in June 2022:
WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622

it would not have been possible to have TeNNs silicon prepared at that time. In fact we know that TeNNs was not taped out until much later.

This being the case, and in view of the recent Valeo patents, I'm prepared to bet that the BRN/Valeo JD partnership encompasses the development of software NNs, including the creation of NN models. The milestone payments would have included model development milestones.

So, in my view, there is no Akida SoC in Scala 3, but there is Akida NN model software provided with Scala 3 to run on the ADAS processor, and adaptable to run on an Akida 2 processor with TeNNs when the silicon is available.

I can't say whether or not this is the model that Mercedes has adopted, but since they don't have Akida 2 silicon, there is a high probability that it is the case.
Very wild, I have to say. It shows how difficult it is to see through the web of speculation, possibilities and truth. I hope we will know more soon, with a huuuuge announcement!!!





Or maybe a new partnership??

1716594422814.gif
 
  • Like
  • Fire
  • Haha
Reactions: 3 users