BRN Discussion Ongoing

BrainShit

Regular
ChatGPT question


Companies Using Renesas with BrainChip Inside

Renesas Electronics Corporation has partnered with BrainChip to integrate their Akida neuromorphic processor technology into various applications, particularly in the automotive and IoT sectors. One notable company that utilizes Renesas’ technology alongside BrainChip’s Akida is Nissan Motor Co., Ltd. Nissan has adopted Renesas’ innovative automotive technology for its ProPILOT 2.0 system, which features advanced driver assistance capabilities. This system relies on Renesas’ R-Car automotive system-on-chip (SoC) and RH850 automotive control microcontroller (MCU), which can potentially incorporate BrainChip’s Akida IP for enhanced AI processing.

Additionally, other companies in the industrial and consumer IoT sectors are likely to leverage the combination of Renesas’ SoCs and BrainChip’s Akida technology due to its low power consumption and high performance for edge AI applications. However, specific names of these companies may not be explicitly mentioned in the available information.

In summary, Nissan Motor Co., Ltd. is a prominent example of a company using Renesas technology that could integrate BrainChip’s solutions within its systems.
While ChatGPT often provides truthful information to users, it does have the ability to "lie." Of course, ChatGPT doesn't decide to maliciously lie to users, as it simply can't do so. After all, ChatGPT is a language-processing tool, not an imitation of a real, sentient person.

However, ChatGPT can technically still lie through a phenomenon known as AI hallucination.

AI hallucination involves an AI system providing information that seems reasonable or plausible but, in reality, is not true at all. In fact, AI hallucination can provide information that it was never fed during its training period. Alternatively, it can occur when an AI system provides information unrelated to the prompt or request. An AI system may even claim to be human in a hallucination event.

AI systems, such as chatbots, fall into the hallucination trap for several reasons: their lack of real-world understanding, software bugs, and limitations on the data provided.

As previously stated, ChatGPT can only provide information using data published up to 2021, which certainly limits what kinds of prompts it can fulfill.

One of ChatGPT's big problems is that it can also fall victim to bias when giving users information. Even ChatGPT's creators have stated that the AI system has been "politically biased, offensive," and "otherwise objectionable" in the past. As reported by The Independent, ChatGPT's developers are committed to tackling this issue, but that doesn't mean it no longer poses a risk.

Source: https://www.makeuseof.com/does-chatgpt-lie/



The RH850 microcontroller (MCU) from Renesas does not include neuromorphic architecture. The architecture supports various control functions but does not integrate neuromorphic computing features, which are typically associated with AI and machine learning tasks.

Source: https://www.renesas.com/en/about/pr...ps-adopted-nissan-its-new-skyline-propilot-20

Source: https://www.renesas.com/en/about/pr...tion-worlds-first-28nm-cross-domain-flash-mcu

 
  • Like
  • Fire
  • Love
Reactions: 17 users
Thanks for explaining AI hallucination; it's something to consider moving forward with ChatGPT.
 
  • Like
Reactions: 3 users
Hello everyone

So apparently the only way the new technology is going to have a good uptake and do well is if the echo system works together and is accepted by all and adopted by everyone in the echo system.
Now I would think that Sean hair has a great lot of people he can count on, work with, and knows through his time at all the different places he's worked at. He's been very successful in every other job. I'm sure that he is not only doing his utmost but pushing harder and powering through this first phase of the initial start-up. I think we're close, and hopefully by the end of this year we will not only have started our role but will have actually increased our potential, and the journey will be nothing but exciting. So hang in there chippers, because the bus, the train, the rocket are about to take off.
 
  • Like
  • Love
  • Thinking
Reactions: 26 users

itsol4605

Regular
Do you mean the ecosystem and Sean Hehir?
 
  • Haha
  • Like
Reactions: 5 users

Tezza

Regular
Very few people, if any, are successful at everything they do. We ALL fail at times. I am hoping this is not Sean's time.
 
  • Like
Reactions: 4 users

itsol4605

Regular
Has anyone ordered the Akida Edge Box?
If so, what is your experience?
 
  • Like
Reactions: 1 users
Very few people, if any, are successful at everything they do. We ALL fail at times. I am hoping this is not Sean's time.
Since Sean brought Steve Brightfield on board I have noticed some great forward momentum in the company. I believe he is a great asset to us; we are getting very close.
Go BrainChip.
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Sabrina Eickelkamp, Head of International Corporate and Direct and Guard Sales at Mercedes-Benz, discussing the company's outlook on where the fleet is heading.

It's great to see Sabrina specifically mentioning the voice control system and efficiency when she talks about AI and the future fleet. I mean she could have talked about all sorts of AI but she specifically picked out this example.





17 Oct 2024

Sabrina Eickelkamp (Mercedes-Benz): 'AI is driving change across the entire industry'​


Mercedes-Benz sponsors the Global Fleet Manager Award at this year’s Fleet Europe Awards (24 October, Milan). A great opportunity to have a quick Q&A with Sabrina Eickelkamp, Head of International Corporate and Direct and Guard Sales at Mercedes-Benz, on the company’s outlook on where global fleet management is heading.

What's Mercedes-Benz's vision on the road to sustainability? Is it all BEV, or other drivelines as well?

"Sustainability and climate protection are key pillars of the business strategy of the Mercedes-Benz Group AG. With our Ambition 2039, we have already set the course towards net carbon-neutrality for our new vehicle fleet in 2019. With “Ambition 2039,” we are pursuing the goal of a fully networked and net carbon-neutral fleet of new vehicles in 2039 – eleven years earlier than EU legislation stipulates. We are looking at the entire life cycle: from development to the supplier network, our own production, the electrification of our products, renewable energy throughout the utilisation phase of electric vehicles, and finally, the recycling of vehicles. Stepping forward to the circular economy approach.
*Net carbon-neutral means that carbon emissions that are not avoided or reduced at Mercedes-Benz are compensated for by certified offsetting projects.
Mercedes-Benz Cars plans to be in a position to cater to different customer needs, whether it‘s an all-electric drivetrain or an electrified combustion engine, until well into the 2030s. To achieve this, we made our production flexible in terms of drive systems. For the further development of our product portfolio, we will sustainably leverage efficiencies between new and existing models. All new models from Mercedes-Benz will meet the highest demands of our customers.

Artificial Intelligence is entering the fleet market. How do you think AI will change cars – and mobility as a whole?

I’m really excited about the endless possibilities AI is bringing us – and we’re just getting started. AI is definitely going to play a key role in the future, driving change and innovation across the entire automotive industry.
AI has the potential to make mobility safer, more efficient, and more sustainable, while also offering a more convenient and personalized experience for users. As AI continues to evolve, it will play a critical role in shaping the future of transportation. For example, the MBUX Virtual Assistant, powered by Generative AI, will take in-car interaction to the next level. It offers 😍 intuitive voice control 😍, allowing drivers to manage navigation, entertainment, and vehicle settings effortlessly. With its ability to understand natural language and learn from user preferences, the MBUX Virtual Assistant provides a personalized and seamless driving experience, making every journey more enjoyable and 😍 efficient 😍.

You’re on the jury for this year’s Fleet Europe Awards. What are you looking for in the candidates?​

It has been my absolute pleasure listening to the manifold views and strategies presented by this year’s award candidates. For me personally, innovation in steering a transforming fleet is key to success. So I am looking for a well-rounded, innovative, and sustainable fleet strategy that is clearly communicated and backed by solid data, with a strong focus on cost-effectiveness and risk management.
View the candidates of this year's Fleet Europe Awards here and here. The Fleet Europe Awards are part of the Fleet Europe Days (24 October) in Milan, Italy.




I know most of us don't need a reminder but, what the heck!

29 Sept 2022
[screenshot]
 
  • Like
  • Love
  • Fire
Reactions: 45 users

Guzzi62

Regular
I agree with some of the thoughts in BrainShit's post above, but I was searching as follows:

AKIDA tenns and pico?


ChatGPT said:

Searched 6 sites:

The Akida Pico is a cutting-edge ultra-low-power AI co-processor developed by BrainChip, designed to handle event-based neural networks (TENNs) efficiently. It's particularly optimized for applications requiring low power and real-time processing, making it ideal for Edge AI tasks like wearables, IoT devices, and healthcare monitoring. The Akida Pico can run standalone or as a co-processor, with power consumption often below 1 milliwatt, which is a major advantage for battery-operated and always-on devices.

Developers can use the MetaTF software to build AI models in familiar environments like TensorFlow or PyTorch, avoiding the need to learn a new framework. This tool optimizes models for Akida’s temporal event-based architecture, ensuring real-time performance with minimal energy use. It supports common AI tasks such as voice recognition, anomaly detection, and sensor data processing.

The platform is versatile, finding applications in industries like smart homes, healthcare, and defense, where devices need to stay idle until triggered by specific events. This architecture helps reduce false alarms and boosts efficiency by only waking up the full processor when necessary.

Source:
BrainChip
TechPowerUp
Geeky Gadgets
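
To make the workflow in the answer above concrete, here is a minimal sketch of the MetaTF flow as I understand it from BrainChip's public docs: build a Keras model, quantize it to the low bit widths Akida runs natively, convert it, and predict. The cnn2snn function names come from the MetaTF documentation, but treat the exact signatures, parameters and supported layers as assumptions, not a verified recipe.

```python
# Minimal MetaTF sketch (unverified): Keras -> quantized Keras -> Akida model.
import numpy as np
import tensorflow as tf
from cnn2snn import quantize, convert  # BrainChip's MetaTF converter package

# An ordinary Keras model, e.g. a tiny image or keyword classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4),
])

# Quantize weights/activations down to the low bit widths Akida expects.
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# Convert to an Akida model; predict() runs in software simulation, or on
# Akida silicon (e.g. an Edge Box or a Pico co-processor) if one is attached.
akida_model = convert(model_q)
sample = np.random.randint(0, 256, (1, 32, 32, 1), dtype=np.uint8)
print(akida_model.predict(sample))
```

The design point is the one the answer makes: developers stay in a familiar framework such as TensorFlow, and only the final conversion step is Akida-specific.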

You can also subscribe to ChatGPT, which I don't. You get more, and more recent, information if you pay for it, but I don't need it in my life.

Is it a glorified Google search?
 
  • Like
  • Fire
Reactions: 7 users

Diogenese

Top 20
Hi Bravo,

Sabrina makes the point that AI is still evolving:

"I’m really excited about the endless possibilities AI is bringing us – and we’re just getting started. AI is definitely going to play a key role in the future, driving change and innovation across the entire automotive industry."

I recall that when the EQXX announcement was made, it also said that AI was in the early stages. At the time, I thought that was a bit of puffery, because we had the Akida 1 SoC. Back then, I did not know about TENNS, but Mercedes may well have been not merely aware of TENNS, but actually trialling the early-stage software embodiments.

As we know, TENNS/AKIDA2 has numerous potential uses in ADAS/AD:
DMS,
occupancy,
MBUX,
lidar and other navigation sensors such as radar, ultrasonic, camera.

In this 20240109 MB press release on the CLA concept vehicle, while referencing generative AI,
Mercedes makes a virtue of necessity by citing the advantages of separating hardware and software development.

https://media.mbusa.com/releases/re...ke-virtual-assistant-powered-by-generative-ai

[image]


As I may have alluded to in the past, TENNS/Akida 2 was not in a sufficiently developed and tested state to be set in silicon while the CLA concept vehicle was being assembled. So, while this blurb refers to the water-cooled Nvidia chip, which certainly is capable of running TENNS software, there remains the possibility that that chip's hardware will be "enhanced" in the future while readily being amenable to software upgrades in the interim.
 
  • Like
  • Love
  • Fire
Reactions: 27 users

buena suerte :-)

BOB Bank of Brainchip
And Action!!! :)

[image]
Wake up time :) 🙏🤞🙏




[image]
 
  • Like
  • Love
Reactions: 12 users
ChatGPT question… about a market which I personally think should be, and hope will be, a target after listening to one of BRN's podcasts.

Potential Integration of BrainChip in the Chiplet Market

Given BrainChip’s focus on neuromorphic processors and their unique capabilities, there is significant potential for integration into the chiplet market. Here are several reasons why:

  1. Modularity and Flexibility: The nature of chiplets allows for modular design, which aligns well with BrainChip’s technology that can be integrated into various applications such as AIoT (Artificial Intelligence of Things), industrial automation, and consumer electronics.
  2. Low Power Consumption: One of the critical advantages of BrainChip’s Akida processors is their ultra-low power requirements. In an era where energy efficiency is paramount, especially in edge computing scenarios, this characteristic makes them attractive candidates for inclusion in chiplet designs.
  3. Enhanced Performance at the Edge: As industries increasingly demand real-time processing capabilities at the edge, integrating BrainChip’s technology into chiplets could enhance overall system performance by providing efficient local processing capabilities.
  4. Compatibility with Advanced Packaging Technologies: The integration of BrainChip’s processors into chiplet architectures would benefit from advanced packaging technologies that facilitate communication between multiple chiplets while managing thermal issues effectively.
  5. Market Demand for AI Solutions: With the growing demand for AI-driven solutions across various sectors, incorporating BrainChip’s neuromorphic technology into chiplets could provide competitive advantages by enabling faster and more efficient processing capabilities tailored to specific applications.
In conclusion, BrainChip could indeed be utilized in the chiplet market, leveraging its innovative neuromorphic technology to enhance modular semiconductor designs aimed at improving performance and efficiency in edge computing environments.
 
  • Like
  • Fire
Reactions: 10 users

HopalongPetrovski

I'm Spartacus!
[GIF]
 
  • Haha
Reactions: 4 users



Tothemoon24

Top 20
[images]

AMHERST — A University of Massachusetts Amherst effort to get computer hardware for artificial intelligence from lab to factory will get a $7.9 million boost from the Department of Defense.

The award, announced Wednesday at a Massachusetts Institute of Technology research facility in Lexington, is part of more than $38 million in federal grants distributed through the Northeast Microelectronics Coalition Hub.


The Massachusetts hub, created in 2023, is a network of more than 200 organizations pushing the nation’s development of semiconductors and microelectronics.

UMass Amherst’s partners in the project include Berkshire Community College, defense contractor Raytheon and Global Foundries, according to a press release announcing the award.

Global Foundries is a multinational computer-chip manufacturer with a plant in Burlington, Vermont, and a second plant in Malta, New York, just north of Albany.

At the helm of the effort is Qiangfei Xia, a professor of electrical and computer engineering at UMass Amherst and head of the Nanodevices and Integrated Systems Lab.

These are the first multimillion-dollar project awards under the Microelectronics Commons program, funded through the CHIPS and Science Act.

“Today’s awards highlight the innovation and expertise that exists across the Northeast when it comes to microelectronics and semiconductors, especially right here in Massachusetts,” Gov. Maura Healey said in a news release. “The NEMC Hub is creating research and partnerships that will have long-lasting, positive impacts on our national security, bring more of this critical chip research back to the U.S. and our region, and result in the development of cutting-edge tools that will help protect our country and empower our service members stationed around the world.”


She thanked the Biden-Harris administration for “investing in Massachusetts.”

The Department of Defense announced $269 million in awards this week for 33 technical projects under the Microelectronics Commons initiative.



U.S. Sen. Elizabeth Warren said in a statement that she pushed to “lock in” funding through the CHIPS Act. That money, she said, is “paving the way for advancements in our phones and hearing aids. With this award, Massachusetts will continue to be a leader in microelectronic technology research and innovation.”
 

  • Like
  • Fire
  • Love
Reactions: 21 users

McHale

Regular
A lot to think about McH,

Note to self:
A. models
B. Mercedes NAOMI4
C. s/w

A. Models are what the NN has to search through.

I'll confine my thoughts to images and speech, but other sensor inputs are treated on the same principles.

Images: Static (photos, drawings); Moving (Video)


Sound: Key word spotting, NLP; other sounds.

Each of these can be divided into several layers of sub-categories with increasing specificity. In a NN, the larger the model, the more power is consumed in making an inference/classification, because the processor needs to examine each example in the model to see which the sensor input most nearly resembles.

Thus it makes sense to have specific models for specific tasks. The narrower the task, the smaller the model can be.

For example, with image classification in an ADAS/AV, images of astronomy or scuba diving are irrelevant. So ADAS models are compiled from millions of images captured from vehicle-mounted cameras/videos.
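
To put rough numbers on that point, here is a back-of-the-envelope sketch (plain Python, nothing Akida-specific) comparing the multiply-accumulate work per inference for a broad general-purpose classifier versus a narrow ADAS-only one. Every layer size below is invented purely for illustration.

```python
# Illustrative only: per-inference compute grows with model size, so a
# task-specific model is much cheaper to run than a catch-all one.
def dense_macs(layer_sizes):
    """Multiply-accumulate ops for one pass through a fully-connected net."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

general = dense_macs([1024, 4096, 4096, 10_000])  # broad, many-class model
adas    = dense_macs([1024, 512, 512, 40])        # road-object classes only

print(f"general-purpose: {general:,} MACs per inference")
print(f"ADAS-only:       {adas:,} MACs per inference")
print(f"ratio:           {general / adas:.0f}x")
```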

Akida excels at classifying static images, and can do this at many frames per second. However, Akida 1 then relied on the associated CPU running software to process the classified images to determine an object's speed and direction. That's the genius of TENNS: it is capable of performing the speed analysis, in silicon or in software, far more efficiently than conventional software.

I prefer to talk about images/video because Natural Language processing is something I struggle to comprehend, but apparently TENNS makes this a cakewalk too.

OpenAI tries to have everything in its model, but that burns a massive amount of energy for a single inquiry - a bit like biting off more than it can chew.

So now we have RAG (retrieval-augmented generation), where subject-specific models can be downloaded depending on what the NN processor is intended to do.
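
As a toy sketch of the RAG idea: only a small, subject-specific slice of knowledge is fetched per query, rather than baking everything into one giant model. The crude word-overlap scoring below stands in for the learned embeddings and vector databases real systems use, and the example documents are invented.

```python
# Toy retrieval-augmented generation: fetch the most relevant snippet,
# then hand only that context to the model along with the question.
documents = [
    "Lidar point clouds are processed to estimate object range and velocity.",
    "Scuba diving depth tables describe safe ascent rates.",
    "Keyword spotting wakes the main processor when a trigger word is heard.",
]

def overlap(query: str, doc: str) -> float:
    """Crude Jaccard word overlap; real systems use embedding similarity."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

query = "how is lidar range estimated"
context = max(documents, key=lambda doc: overlap(query, doc))
print(f"Context: {context}\nQuestion: {query}")
```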

B. NAOMI4 - Yes. This is a German government-funded research project and will not produce a commercial outcome any time soon.

C. H/W v S/W

Valeo does not have Akida silicon in its SCALA 3. It uses software to process the lidar sensor signals. Because we've been working with them for several years in a JD, I'm hopeful that the software will include Akida 2/TENNS simulation software. Sean did mention that we now have an algorithm product line.

The rationale for this was explained in the Derek de Bono/Valeo podcast posted yesterday: software allows for continual upgrading. He also mentioned that provision for some H/W upgrades could be accommodated. Given TENNS' young age, it will have developed significantly in the last couple of years, so it could not be set in silicon at this early stage, although Anil did announce some now-deferred preparations for taping out some months ago.

Again, I am hopeful that Akida2/TENNS will be included in the software of both Valeo and Mercedes SDVs (and in other EAP participants' products) because it produces real-time results at a much lower power consumption.

Then there's PICO ... the dormant watchdog ...

Hi Dio, thanks for your response to my post from last Thursday, but I must admit to not having really put what I was trying to say in a clear or properly worded fashion.

When I was talking about models I was really meaning to speak to the different programming/coding languages that can be (need to be) used to interface with the various versions/iterations of Akida, for instance Python, PyTorch, Keras and a number of others I have seen mentioned.

So in my post, I said models in an incorrect context, because although I do not understand a good deal of the pertinent technical niceties, I do, I believe, know that a model is like a library that can be used for certain applications/uses of Akida, which you also described.

Going back to the coding languages: why do the various iterations of Akida require the use of different coding languages, if that statement is in fact correct? Regardless, why are different coding languages required? I do know that several different languages are used.
 
  • Like
  • Love
Reactions: 8 users
Hi @McHale. I have no expertise in this field, but one of the best things I learnt in the last year or so was that via Edge Impulse we have access to Nvidia's TAO library (which I expect would be bountiful), as well as our own smaller model library. And then of course models can be "trained" for specific use cases using "live" data.

This is one of my favourite photos because at some point there will be use cases at the far edge where the others can't go; that's where hopefully we can clean up and get a large market share. Even more so with Pico now.


[photo]


Not sure if that’s answered your question or gone off on a tangent?
 
  • Like
  • Fire
  • Love
Reactions: 17 users