BRN Discussion Ongoing

So many people wondering who the heck I am 🤣

thelittlepest

View attachment 19904
View attachment 19903
You must be very influential in the world of neuromorphic computing for gaming, automotive and wireless communications.

We are very fortunate to have you take an interest in us mere retail investors.

Regards
FF

AKIDA BALLISTA
 
Reactions: Like, Haha, Fire (27 users)
Or a common pest committed to the cause
 
Reactions: Haha, Like, Fire (20 users)
I don't even know why I'm posting this here; I think few of us will understand French.
Nevertheless, I like it! 🥰😘

Listen to France Inter 📻
On the occasion of the Mondial de l'Auto - Paris, the show Regards Croisés dedicates two episodes to the "car of the future" 🚘
Dive into Valeo's autonomous vehicle, with Antoine Lafay.
View attachment 19899
Laetitia Bernard offers us a trip in an autonomous vehicle alongside Antoine Lafay, director of research and innovation at Valeo, a specialist in the sensors that are the eyes and ears of these cars.

 
Reactions: Like, Fire, Love (5 users)

Worker122

Regular
Hi, does this model have a neuromorphic chip in any of the systems?



MERCEDES BENZ

Hi Xxxxxx, thanks for your interest. We don't have much information on its technical data yet, but be sure to stay tuned for more exciting updates. Meanwhile, you may register your interest and stay up to date with its latest news here: https://www.mercedes-benz.com.au/pa...=CP2jkOWh-PoCFQDDGAIdy20MxQ&_knopii=1#contact. Thanks, Mercedes-Benz Australia

Didn’t deny it, lol. Mercedes’ EQE model advertised on FB.
 
Reactions: Like, Fire, Haha (16 users)

AusEire

Founding Member. It's ok to say No to Dot Joining
Reactions: Haha, Like (8 users)

Sirod69

bavarian girl ;-)
Great to see the PSA Certified APIs on GitHub...
Enabling easier access to hardware Root of Trust services such as crypto operations, firmware update (new) and secure storage.
All in one place, where a community can shape their future.
With this change comes a new name... PSA Functional APIs become PSA Certified APIs.
https://lnkd.in/eSfbg-iX
 
Reactions: Like (3 users)

Diogenese

Top 20
Hey Brain Fam,

Check out this SoundHound announcement from 15 Sep 2022 which describes their new EdgeLite option. It states "With this fully-embedded voice solution companies can process data locally without cloud-related data privacy concerns. Developers have access to natural language commands with less memory and CPU impact, a bundled wake word, and the ability to instantly update commands".

SoundHound has a partnership with Mercedes and they also have a "multi-year agreement with Qualcomm Technologies Inc. to enable SoundHound’s advanced voice AI technology, consisting of its automatic speech recognition, natural language understanding, and text-to-speech conversion software with select Qualcomm Technologies’ Snapdragon® platforms."






View attachment 19892 View attachment 19897
Hi Bravo,

Not sure that that hound dog has got with the programme yet:

https://www.bing.com/videos/search?...2E988B66B2CB6A1A80E92E988B66B2CB6A1&FORM=VIRE

"Hound dog howlin' all forlorn
Laziest dog that ever was born.
He's howlin' 'cause he's sittin' on a thorn -
just too tired to move over."

This is a patent from November 2020. It uses NN models to generate a voice response, but it uses a transformer (26) to do the speech recognition. Funnily enough, we were discussing transformers earlier - in a derogatory way.

But, if they wanted to improve it to
"Hey Mercedes!" standard ...

US2022165257A1 NEURAL SENTENCE GENERATOR FOR VIRTUAL ASSISTANTS





[0002] The present subject matter is in the field of artificial intelligence systems and Automatic Speech Recognition (ASR). More particularly, embodiments of the present subject matter relate to methods and systems for neural sentence generation models.
[0054] Instead of creating these unlimited ways of utterance sentences by experienced developers, the present subject matter can employ neural network models and machine learning to automate the generation of numerous, thorough, and effective sample utterance sentences to invoke one intent. Generated by finetuned natural language generators and trained classifiers, these sample utterance sentences can have a semantic meaning to invoke the specific intent they were created for.

[0080] According to some embodiments, the system can train the classifier model 26 to predict the probability of a generated utterance sentence being correct by finetuning from a pre-trained NLG model such as a transformer. Various transformer models such as XLNET, BART, BERT, or ROBERTA, and their distilled versions can provide sufficient accuracy and acceptable training and inference-time performance for different datasets and applications.
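To make [0054] and [0080] concrete: the recipe is essentially "generate lots of candidate utterances with a pre-trained NLG model, then keep only the ones a classifier scores as expressing the target intent". Here is a rough Python sketch of that loop using the Hugging Face pipeline API; the model names are placeholders I picked for illustration, not anything disclosed in the patent, and a real system would use models fine-tuned for paraphrasing and for the specific intent:

```python
# Illustrative sketch of [0054]/[0080]: generate candidate utterance sentences
# with a pre-trained seq2seq model, then filter them with a classifier.
# Model names are stand-ins, not the ones used in the patent.
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-small")
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # stand-in for a fine-tuned intent classifier
)

seed = "paraphrase: turn on the cabin lights"  # the one intent we want more sample utterances for
candidates = generator(seed, num_beams=8, num_return_sequences=8, max_length=32)

kept = []
for cand in candidates:
    sentence = cand["generated_text"]
    verdict = classifier(sentence)[0]          # e.g. {'label': ..., 'score': 0.97}
    if verdict["score"] > 0.8:                 # keep only utterances the classifier is confident about
        kept.append(sentence)

print(kept)
```

The point of [0080] is that the scoring classifier is itself fine-tuned from a pre-trained transformer (BERT, ROBERTA, etc.), so "correct for this intent" is learned rather than hand-written as grammar rules.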

https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. It is used primarily in the fields of natural language processing (NLP)[1] and computer vision (CV).[2]

Like recurrent neural networks (RNNs), transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization. However, unlike RNNs, transformers process the entire input all at once. The attention mechanism provides context for any position in the input sequence. For example, if the input data is a natural language sentence, the transformer does not have to process one word at a time. This allows for more parallelization than RNNs and therefore reduces training times.[1]

Transformers were introduced in 2017 by a team at Google Brain[1] and are increasingly the model of choice for NLP problems,[3] replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.[4][5]
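Since the quoted text leans on "self-attention, differentially weighting the significance of each part of the input data", here is a minimal NumPy sketch of single-head scaled dot-product self-attention, with random weights purely for illustration:

```python
# Minimal single-head scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projections."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # every position attends to every other position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # context-weighted mix of all positions

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                           # 4 "tokens", 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)            # (4, 8)
```

The "processes the entire input all at once" point is visible in `scores`: one matrix product compares every position with every other, which is what makes transformers more parallel-friendly than RNNs.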
 
Reactions: Like, Love, Fire (14 users)

MDhere

Regular
If you reverse the 4C to C4

C4 = Explosion 🔥
Freaky shit going on. First my car radio actually shows Blu Wireless and I can't hit play as it's possessed, then I walk into the chemist aisle and see your C4 explosion.
 

Attachment: 20221024_184559.jpg (1 MB)
Reactions: Haha, Wow, Like (15 users)

Motty

Member
So glad Jerome is on our team. His enthusiasm is infectious.

"he oiled his way across the floor
oozing charm from every pore"
I found this interview quite outstanding and could not stop watching it. How bloody informative was it! As a shareholder, it has instilled in me a high level of confidence in what Brainchip has developed. Clearly they are the industry leaders and it is only a matter of time before the dam walls burst. As I reflect on all the dot joining, how can I not be excited by the prospects going forward? So glad to be a part of this journey and appreciative of the knowledge and information so freely shared on this forum. Beers and cheers to all.
 
Reactions: Like, Love, Fire (22 users)

HopalongPetrovski

I'm Spartacus!

Really loved this presentation.
To my mind this simply explains what Brainchip has been doing for the past year or so:
setting us up for the coming, and required, ecosystems behind so much of the nascent and currently developing tech that will dominate the next 10-20 years.
And we are still just getting started on our initial iterations of the Akida tech, with enhancement after enhancement already in the pipeline.
 
Reactions: Like, Fire, Love (22 users)

HopalongPetrovski

I'm Spartacus!
A couple of comments on the video:
*impressive level of enthusiasm (Peter's first Akida AGM talk would give him a run for his money - though a great deal less awkward)
*students, just smile and don't look frightened
*I have traumatic memories of an unsupervised Van de Graaff generator (and pencil sharpener generator). All boys school
*is it just me or is that a confusing accent?
*is that the guy in the chocolate ad (or am I thinking of toothpaste?) TV has cooked my brain
I'm pretty sure he went on to an illustrious career on "Thunderbirds".
Mainly due to his outrageous eyebrows. 🤣
 
Reactions: Like, Haha (9 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's something pending, filed recently for SoundHound's EdgeLite.
(Screenshot attached: Screen Shot 2022-10-24 at 4.09.28 pm.png)
 
Reactions: Like, Fire (7 users)

Deleted member 118

Guest

RISC-V included in Android for the first time, taking aim at the mobile phone processor market

2022/10/23 16:20

Wei Zhihao, reporter of Juheng.com, Taipei

RISC-V enters the mobile phone processor market for the first time. (Image: AFP)

Tags: T-Head (Pingtouge), SiFive, Andes Technology, RISC-V, mobile phones, CPU

Foreign media report that Alibaba subsidiary T-Head's (Pingtouge's) RISC-V patches have been merged into the Android source code. This is not only the first time Android has officially supported the RISC-V architecture, it also marks the expansion of the RISC-V ecosystem into a mobile phone processor market of more than one billion units a year. Companies such as SiFive and the Taiwanese firm Andes Technology (6533-TW) are expected to benefit.

The industry believes that with more and more software and hardware supporting RISC-V, with the world's top two mobile phone processor vendors, Qualcomm (QCOM-US) and MediaTek (2454-TW), having invested in SiFive and Andes respectively, and with Apple (AAPL-US) also having set up a RISC-V team, there is plenty of room for imagination about RISC-V's growth.

Foreign media point out that T-Head signed a Contributor License Agreement (CLA) with Google in June this year to cooperate on Android support for the RISC-V architecture, and Google's own AOSP (Android Open Source Project) has since begun to merge T-Head's patches into the system.

According to the data, T-Head submitted 76 base code patches this time, covering areas such as Bionic libc and emulators, of which 18 were merged directly into official AOSP, 12 are key patches, and 4 are external project patches, making these the world's first RISC-V patches for the Android system.

The industry also believes that with US-China relations becoming more and more tense, and China squeezed by US technology restrictions, China is actively looking for alternatives, hoping to reduce its dependence on the x86 and Arm architectures through RISC-V.
 
Reactions: Like, Fire, Wow (21 users)

Deleted member 118

Guest
Reactions: Like, Wow, Thinking (7 users)

Deleted member 118

Guest
[Techworld News, reporter Noh Tae-min] IAR Systems announced on the 19th that it is continuing its support for SiFive's RISC-V automotive CPU IP.

IAR Embedded Workbench for RISC-V has added support for the latest SiFive Automotive E6-A and S7-A families, which meet the needs of automotive applications such as infotainment, connectivity, and ADAS. IAR’s complete development toolchain enables automotive OEMs and partner embedded software developers to make the most of the energy efficiency, simplicity, security, and flexibility provided by RISC-V.

RISC-V uses a single instruction-set architecture (ISA) across all product lines to increase code portability, dramatically reducing the cost and time to market for automotive applications. The SiFive family of automotive processors provides a high level of flexibility, with options that support area and performance optimization for a variety of integrity levels, such as ASIL B, ASIL D, or split-lock, in accordance with ISO 26262.

The recently released E6-A series covers a variety of real-time 32-bit applications, from system control to hardware security modules (HSMs) and safety islands, and standalone microcontrollers are also a target. The new S7-A is a 64-bit, high-performance real-time core that fully meets the requirements of the latest SoCs, aimed at safety islands that need both low-latency interrupt support and the same 64-bit memory space visibility as the main application CPUs.

The IAR Embedded Workbench for RISC-V is a complete development toolchain that includes a powerful IAR C/C++ compiler and a comprehensive debugger. Users can use the tools that come with comprehensive safety products to create compact code with best-in-class performance and safety.

The functional safety edition of the IAR Embedded Workbench for RISC-V is certified by TÜV SÜD in accordance with the requirements of several functional safety standards, including ISO 26262 and IEC 61508. To ensure the code quality of automotive applications, the fully integrated C-STAT tool for static code analysis checks code compliance with industry standards such as MISRA C:2012, MISRA C++:2008, and MISRA C:2004.



Anders Holmberg, CTO of IAR Systems, said: “IAR Systems is a leading member of the RISC-V community. Through our close relationship with SiFive, we can support SiFive’s latest CPU IP. This allows our customers in the automotive industry to use IAR’s professional tools to maximize the benefits of development and accelerate the development process to shorten time to market.”
 
Reactions: Like, Love (7 users)

Deleted member 118

Guest

This article is part of the TechXchange: RISC-V: The Instruction-Set Alternative

What you'll learn:

How RISC-V will facilitate automotive SoC design, especially for mixed-criticality apps.
The lineup of compute IP being developed by SiFive.


Key trends in automotive electronic architecture these days include centralization of computing, increased computing at the sensing edge, higher software complexity due to mixed criticality (a system containing computer hardware and software that can execute several applications of different criticality, such as safety critical and non-safety critical), and a shift from domain to zonal controllers, to list but a few. These requirements have created the need for new, more capable electronic control units (ECUs), and a higher degree of functional integration in fewer devices.

An open standard is one obvious answer, allowing multiple vendors to be used. It helps drive costs down and enables designers to accommodate new capabilities as they become available.

SiFive, an organization that’s not a chip manufacturer but makes the plans for chipmakers to use, employs a RISC-V instruction set architecture (ISA)—a base for building chips that defines what kind of software can run on the chips. Arm Ltd.'s Arm ISA and Intel's x86 are the dominant ISAs used today for general-purpose processors, but both are proprietary while RISC-V is an open standard.


The global RISC-V ecosystem is growing rapidly and now consists of more than 3,000 members. Working without proprietary lock-in, companies can license from multiple vendors and have more flexibility to design their own IP where needed, while maintaining software and ecosystem compatibility.

Renesas, for example, “has been closely collaborating with SiFive to bring the strong benefits of RISC-V to many of our products,” said Takeshi Kataoka, Senior Vice President and General Manager of the Automotive Solution Business Unit at Renesas. “RISC-V continues to gain momentum around the world, and we plan to leverage SiFive’s portfolio of automotive RISC-V products in our future automotive SoC solutions to meet the exacting demands of these global customers.”

Automotive Compute IP
SiFive is creating a complete lineup of compute IP for MCUs, MPUs, and soon, SoCs, as well as vector-processing solutions tailored for automotive applications. The first automotive family cores will be available later in 2022. And by the second half of 2023, two more product series will be added.

Using a single ISA across its product offerings—from safety islands to real-time products to ADAS and central zone compute—increases code portability and can reduce cost and time-to-market. RISC-V vector extensions will bring enhanced machine-learning and DSP capabilities.

An increasingly important trend in the design of real-time and embedded systems is the integration of components with different levels of criticality onto a common hardware platform. Criticality is a designation of the level of assurance against failure needed for a system component. A mixed-criticality system is one that has two or more distinct levels (for example safety critical, mission critical, and low-critical) like ASIL B, ASIL D, or mixed criticalities with split-lock, which is high compute performance coupled with high safety-integrity support.

SiFive solutions are being developed to address automotive needs for current and future applications like infotainment, cockpit, connectivity, ADAS, and electrification. This is as the automotive market transitions to zonal architectures and manufacturers ask for energy efficiency, simplicity, security, and software flexibility. The RISC-V ecosystem enables a workload-targeted chip design, based on an open specification base that enables industry-wide collaboration to build standards and specifications for commercial competition.

The New Set of Chip Series
SiFive has launched a new product portfolio and its first three automotive product series, each with area- and performance-optimized variants. The company says it’s the only RISC-V IP supplier to offer multiple processor series that meets automotive designers’ needs with regard to compute, integrity, and security.

These latest chips include the E6-A series for digital control applications like steering, S7-A for so-called "safety islands" that act as a failsafe for other critical applications, and X280-A to manage data from image sensors and do machine-learning work, including for autonomous driving.


E6-A

The automotive E6-A series has IP options that are both area- and performance-optimized for different integrity levels such as ASIL B, ASIL D, or split-lock, in line with ISO 26262. With “split-lock” capability, high compute performance can be coupled with high safety-integrity support.

Split-lock differs from lock-step by adding the flexibility not available in lock-stepped CPU implementations. It allows the system to be configured either in a “split mode” (two independent CPUs that can be used for diverse tasks and applications), or “lock mode” (the CPUs are lock-stepped for high safety-integrity applications) at boot up.
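If the split/lock distinction is easier to see in code, here is a toy Python sketch of the behaviour described above. It is purely conceptual (my own illustration, not SiFive code); in real silicon the mode is a hardware configuration latched at boot, not something an application toggles:

```python
# Toy model of a split-lock dual-core: "lock" runs the same task redundantly and
# compares results; "split" runs two independent tasks for throughput.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DualCore:
    mode: str  # "lock" or "split", chosen at boot in the real hardware

    def run(self, task_a: Callable[[], int], task_b: Callable[[], int]):
        if self.mode == "lock":
            r1, r2 = task_a(), task_a()      # both cores execute the same task
            if r1 != r2:
                raise RuntimeError("lock-step mismatch -> raise a safety fault")
            return r1
        return task_a(), task_b()            # split mode: independent workloads

print(DualCore("lock").run(lambda: 2 + 2, lambda: 3 * 3))   # 4
print(DualCore("split").run(lambda: 2 + 2, lambda: 3 * 3))  # (4, 9)
```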

SiFive’s automotive products are accompanied by complete safety packages that include documentation to accelerate Safety Element out of Context (SEooC) integration and, with it, time-to-market. In the automotive world, the SEooC defined in ISO 26262-10 is the method for using components in a vehicle that weren’t originally designed for that specific project. Components that are developed without any idea of where they will be fitted fall under the purview of SEooC.

The E6-A series targets a variety of real-time, 32-bit applications from system control to hardware security modules and safety islands, and as standalone in microcontrollers. E6-A has a 32-bit real-time core and is the first commercially available offering from SiFive with broad availability later in 2022.


Building on the foundations of the SiFive Essential portfolio, the multicore-capable design of the E6-A series offers what’s claimed to be the industry’s best-in-class functional-safety support as a SEooC to enable use in applications with Automotive Safety Integrity Levels (as per ISO 26262) up to ASIL B and ASIL D. The automotive products are accompanied by complete safety packages that include documentation to accelerate SEooC integration.

E6-AB, targeted at ASIL B integrity levels, employs minimal hardware redundancy of the logic to guarantee safety metrics are satisfied without the use of Software Test Libraries. This enables faster time-to-market and minimizes software integration work.

S7-A

By the second half of 2023, two more product series will be added to the portfolio—the S7-A and X280-A. S7-A is an optimized 64-bit CPU tailored for safety islands requiring both low-latency interrupt support and the same memory space visibility as the main application CPUs. These are often used in ADAS, gateways, and domain controllers.

The combination of out-of-the-box features and design scalability enables designers to achieve a balance of power, performance, and area while achieving fast time-to-market for automotive MCUs, system controllers, battery-management systems (BMS), and safety islands for SoC markets. In addition to the automotive market, the E6-A series is also a good choice for broader deployment in MCU applications requiring functional-safety capabilities.


X280-A

The X280-A is a vector-capable processor well-suited for edge sensing in ADAS, sensor-fusion applications, and any ML/AI-accelerated functionality in the vehicle. Building on the performance and power efficiency of the X280, it’s aimed at sensors, sensor fusion, and other vector or ML-intensive workloads in automotive applications.

SiFive’s growing portfolio of IP will range from small 32-bit real-time CPUs all the way up to high-performance 64-bit application processors. Later in 2023, a new high-end processor, configurable to up to 16 cores, will be added to the portfolio.
 
Reactions: Like, Love (17 users)
Hi... A week or so ago I emailed both Peter and Anil together.

I said to Anil that I'd personally like to meet him face-to-face, so to save me having to fly to San Francisco, I politely planted a seed: it would be great for both Peter and Anil to attend next year's AGM together, assuring them that they would be very well received by the loyal Australian shareholder base.

I don't know if any shareholders have had the opportunity to meet with Anil, but I'd consider it a real privilege, as with Peter.

Whatever our 4C does or doesn't deliver this week, there's more to come before this year is out... listen very carefully to Nikunj during his presentation the other day with Edge Impulse; he seemed to give a hint about something coming at one point, in my own opinion of course.

Love Brainchip x
Do you have the link to the talk? I must have missed it; work has been crazy, the Christmas rush has started already.
 

Mt09

Regular
 
Reactions: Like, Love, Fire (10 users)