BRN Discussion Ongoing

mrgds

Regular
I think I know why the share price is falling - someone left the table cloth behind! Someone should contact BRN and ask where the ASX announcement about the missing table cloth is.
No, no, no, @Fox151 ... you're reading it wrong.
This is a "huge DOT join" with TESLA ... you see, Tesla's mantra is, "No part is the best part."
So, a subliminal message to all shareholders is why we've ditched the tablecloth !!! 😂😂😂

An interesting use of words too in the recent podcast from our CEO.
Quote ... "Maniacal Focus" ... really?? :eek:
Let's hope it's with reference to #2 😲

Screenshot (61).png
 
  • Like
  • Haha
Reactions: 10 users

Gemmax

Regular
And here is a beautiful old find that may be unknown to many here. Unfortunately I can't take a better photo of this magazine myself. It just lists the speed records from 1898 onwards, and the story begins with EVs. You know how it went from there.
View attachment 32745

Elektrowagen means EV
It began with EVs ;)
FWIW, a young Ferdinand Porsche exhibited the Lohner-Porsche, with electric hub motors on its front wheels, at the 1900 World Exposition in Paris, and he followed it soon after with a hybrid version that added an internal combustion engine. (Truly ahead of his time.)
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Tothemoon24

Top 20
This abridged article is by Sally Ward-Foxton of EE Times.


Embedded World 2023

Also on the STMicro booth were another couple of fun demos, including a washing machine that could tell how much laundry was in the machine in order to optimize the amount of water added. This system is sensorless; it is based on AI analysis of the current required to drive the motor, and predicted the weight of the 800g laundry load to within 30g. A robot vacuum cleaner equipped with a time-of-flight sensor also used AI to tell what type of floor surface it was cleaning, to allow it to select the appropriate cleaning method.
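
A rough sketch of how such a sensorless estimate could work (my own illustration with made-up data; ST has not published its pipeline): extract a few features from the motor-current trace recorded during drum spin-up and fit a small regression model against known load weights.

```python
# Illustrative only: estimate laundry weight from motor-current features.
# The features, data and model below are assumptions, not ST's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

def current_features(trace: np.ndarray) -> np.ndarray:
    """Summarise a motor-current trace (amps) sampled during drum spin-up."""
    return np.array([
        trace.mean(),   # average load current
        trace.max(),    # peak draw during acceleration
        trace.std(),    # ripple, which varies with drum imbalance
        trace.sum(),    # integrated current at a fixed sample rate ~ energy used
    ])

# Hypothetical training data: traces recorded with known laundry weights (grams).
rng = np.random.default_rng(0)
weights = rng.uniform(200, 2000, size=200)
traces = [np.abs(rng.normal(0.5 + w / 2000, 0.05, size=500)) for w in weights]
X = np.stack([current_features(t) for t in traces])

model = Ridge().fit(X, weights)
print(model.predict(current_features(traces[0])[None, :]))  # grams; ST quotes ~30 g accuracy
```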

Renesas

Next stop was the Renesas booth, to see the Arm Cortex-M85 up and running in a not-yet-announced product (due to launch in June). This is the first time EE Times has seen AI running on a Cortex-M85 core, which was announced by Arm a year ago.


The M85 is a larger core than the Cortex-M55, but both are equipped with Helium, Arm's vector extensions for the Cortex-M series, which are ideal for accelerating ML applications. Renesas' figures had the M85 running inference 5.3× faster than a Renesas Cortex-M7-based design, though the M85 was also clocked faster (480 MHz versus 280 MHz).
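
A quick back-of-the-envelope check of those figures (my arithmetic, treating the quoted numbers as exact): normalising the 5.3× speedup for the clock difference still leaves roughly a 3× per-cycle advantage for the Helium-equipped M85.

```python
# Clock-normalised comparison of the quoted Renesas figures.
speedup_measured = 5.3        # M85 inference throughput vs. the M7-based design
f_m85, f_m7 = 480e6, 280e6    # reported clock rates in Hz

per_cycle_speedup = speedup_measured * (f_m7 / f_m85)
print(f"~{per_cycle_speedup:.1f}x more work per clock cycle")  # ~3.1x
```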

Renesas’ demo had Plumerai’s person-detection model up and running in 77 ms per inference.

Renesas' not-yet-announced Cortex-M85 device is the first we've seen running AI on the M85, shown here running Plumerai's person-detection model. (Source: EE Times/Sally Ward-Foxton)
Renesas field application engineer Stefan Ungerechts also gave EE Times an overview of the DRP-AI (dynamically reconfigurable processor for AI), Renesas’ IP for AI acceleration. A demo of the RZ/V2L device, equipped with a 0.5 TOPS @ FP16 (576 MACs) DRP-AI engine, was running tinyYOLOv2 in 27 ms at 500 mW (1 TOPS/W). This level of power efficiency means no heat sink is required, Ungerechts said.
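
Those numbers hang together if you do the arithmetic (my own sanity check, assuming 2 ops per MAC and the roughly 400 MHz clock mentioned for the RZ/V2L below):

```python
# Sanity-checking the quoted DRP-AI figures (assumptions: 2 ops per MAC, 400 MHz clock).
macs = 576
clock_hz = 400e6
peak_ops = macs * 2 * clock_hz                 # ~0.46 TOPS, close to the quoted 0.5 TOPS

power_w, latency_s = 0.5, 0.027
energy_per_inference_j = power_w * latency_s   # 13.5 mJ per tinyYOLOv2 frame
tops_per_watt = (peak_ops / 1e12) / power_w    # ~0.9, in line with the quoted 1 TOPS/W
print(peak_ops / 1e12, energy_per_inference_j, tops_per_watt)
```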

The DRP-AI is, in fact, a two-part accelerator: the dynamically reconfigurable processor handles acceleration of non-linear functions, with a MAC array alongside it. Non-linear functions in this case might be image preprocessing functions or the pooling layers of a neural network. While the DRP is reconfigurable hardware, it is not an FPGA, Ungerechts said. The combination is optimized for feed-forward networks such as the convolutional neural networks commonly found in computer vision, and Renesas' software stack allows either the whole AI workload to be passed to the DRP-AI or a combination of the DRP-AI and the CPU to be used.
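
As a purely illustrative sketch of that kind of dispatch (the operator names and placement rules here are invented, not Renesas' actual software stack), MAC-heavy layers would go to the MAC array, non-linear pre- and post-processing to the DRP, and anything unsupported would fall back to the CPU:

```python
# Hypothetical dispatcher illustrating the DRP-AI split described above.
MAC_OPS = {"conv2d", "dense"}                            # feed-forward, MAC-dominated layers
DRP_OPS = {"resize", "normalize", "maxpool", "softmax"}  # non-linear / preprocessing work

def place_operator(op_name: str) -> str:
    if op_name in MAC_OPS:
        return "mac_array"
    if op_name in DRP_OPS:
        return "drp"
    return "cpu"                                         # fallback for unsupported operators

model = ["resize", "normalize", "conv2d", "maxpool", "conv2d", "dense", "softmax"]
print([(op, place_operator(op)) for op in model])
```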

Also available with a DRP-AI engine are the RZ/V2MA and RZ/V2M, which offer 0.7 TOPS @ FP16 (they run faster than the RZ/V2L, at 630 MHz versus 400 MHz, and have higher memory bandwidth).

A next-generation version of the DRP-AI that supports INT8 for greater throughput, and is scaled up to 4K MACs, will be available next year, Ungerechts said.

Squint

Squint, an AI company launched earlier this year, is taking on the challenge of explainable AI.

Squint CEO Kenneth Wenger told EE Times that the company wants to increase trust in AI decision making for applications like autonomous vehicles (AVs), healthcare and fintech. The company takes pre-production models and tests them for weaknesses—identifying in what situations they are more likely to make a mistake.

This information can be used to set up mitigating factors, which might include a human in the loop (perhaps flagging a medical image to a doctor) or triggering a second, more specialized model that has been trained specifically for that situation. Squint's techniques can also be used to tackle "data drift", helping maintain models over longer periods of time.
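
A minimal sketch of that routing idea (my own construction, not Squint's product; the confidence threshold and the weak-region test are placeholders):

```python
# Illustrative routing of predictions based on confidence and known weak spots.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float
    in_weak_region: bool  # input resembles cases where the model is known to fail

def route(pred: Prediction) -> str:
    if pred.in_weak_region:
        return "specialist_model"  # second model trained specifically for this situation
    if pred.confidence < 0.8:
        return "human_review"      # human in the loop, e.g. flag the image to a doctor
    return "accept"

print(route(Prediction("tumour", 0.65, False)))  # -> human_review
```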

Embedl

Swedish AI company Embedl is working on retraining models to optimize them for specific hardware targets. The company has a Python SDK that fits into the training pipeline. Techniques include replacing operators with alternatives that may run more efficiently on the particular target hardware, as well as quantization-aware retraining. The company's customers so far have included automotive OEMs and tier 1s, but it is expanding into Internet of Things (IoT) applications.
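
To make the operator-replacement idea concrete, here is a generic PyTorch sketch (not Embedl's SDK): swap standard convolutions for cheaper depthwise-separable pairs suited to a hypothetical target, then retrain, optionally quantization-aware.

```python
# Generic hardware-aware operator replacement; not Embedl's actual SDK.
import torch.nn as nn

def depthwise_separable(conv: nn.Conv2d) -> nn.Sequential:
    """Replace a standard Conv2d with a cheaper depthwise + pointwise pair."""
    return nn.Sequential(
        nn.Conv2d(conv.in_channels, conv.in_channels, conv.kernel_size,
                  stride=conv.stride, padding=conv.padding, groups=conv.in_channels),
        nn.Conv2d(conv.in_channels, conv.out_channels, kernel_size=1),
    )

def replace_convs(model: nn.Module) -> None:
    for name, child in model.named_children():
        if isinstance(child, nn.Conv2d) and child.kernel_size != (1, 1):
            setattr(model, name, depthwise_separable(child))
        else:
            replace_convs(child)  # recurse into submodules

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 32, 3, padding=1))
replace_convs(model)
print(model)  # the 3x3 convolutions are now depthwise + pointwise pairs
# ...followed by (quantization-aware) retraining on the original task.
```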

Embedl has also been part of VEDL-IoT, an EU-funded project in collaboration with Bielefeld University that aims to develop an IoT platform distributing AI across a heterogeneous cluster.

The demo showed AI workloads being managed across different hardware: an Nvidia AGX Xavier GPU in a 5G basestation and an NXP i.MX8 application processor in a car. With sufficient 5G bandwidth available, "difficult" layers of the neural network could be computed remotely in the basestation and the rest in the car, for optimum latency. Reduce the available 5G bandwidth, and more or all of the workload shifts to the i.MX8. Embedl had optimized the same model for both hardware types.

The VEDL-IoT project demo shows splitting AI workloads across 5G infrastructure and embedded hardware. (Source: EE Times/Sally Ward-Foxton)
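
A toy version of that scheduling decision (all layer timings and activation sizes are invented, and the uplink of the camera frame is ignored for simplicity): offload the leading layers only while the measured 5G bandwidth keeps the transfer of intermediate activations cheap.

```python
# Toy split-point chooser for a VEDL-IoT-style setup; all numbers are invented.
# Layers before the split run remotely (AGX Xavier); the rest run locally (i.MX8).
ACTIVATION_MB = [4.0, 2.0, 1.0, 0.5, 0.1]  # data shipped back if we split after layer i
REMOTE_MS = [1, 1, 1, 1, 1]                # per-layer time on the basestation GPU
LOCAL_MS = [20, 15, 10, 5, 2]              # per-layer time on the i.MX8

def best_split(bandwidth_mbps: float) -> int:
    """Return how many leading layers to offload for the lowest total latency."""
    best = (sum(LOCAL_MS), 0)              # baseline: run everything on the i.MX8
    for k in range(1, len(LOCAL_MS) + 1):
        transfer_ms = ACTIVATION_MB[k - 1] * 8 / bandwidth_mbps * 1000
        total = sum(REMOTE_MS[:k]) + transfer_ms + sum(LOCAL_MS[k:])
        best = min(best, (total, k))
    return best[1]

print(best_split(500.0))  # plenty of 5G bandwidth -> offload all five layers
print(best_split(5.0))    # constrained link -> keep everything on the i.MX8
```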

Silicon Labs

Silicon Labs had several xG24 dev kits running AI applications. One had a simple Sparkfun camera with the xG24 running people counting, and calculating the direction and speed of movement.

A separate wake word demo ran in 50 ms on the xG24’s accelerator, and a third board was running a gesture recognition algorithm.
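
For what it's worth, direction and speed can be derived by simply tracking detection centroids from frame to frame; a minimal sketch follows (nothing to do with Silicon Labs' actual firmware, and the pixel-to-metre scale is an assumption):

```python
# Minimal centroid tracking to get direction and speed from per-frame detections.
import math

def motion(prev_xy, curr_xy, dt_s, metres_per_pixel=0.02):
    """Speed (m/s) and heading (degrees, 0 = +x axis) between two centroids."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    speed = math.hypot(dx, dy) * metres_per_pixel / dt_s
    direction_deg = math.degrees(math.atan2(dy, dx)) % 360
    return speed, direction_deg

print(motion((40, 60), (55, 60), dt_s=0.1))  # person moving right at ~3 m/s
```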

BrainChip

BrainChip had demos running on a number of partner booths, including Arm's and Edge Impulse's. Edge Impulse's demo showed the company's FOMO (faster objects, more objects) object detection network running on a BrainChip Akida AKD1000 at under 1 mW.
 
  • Like
  • Love
  • Fire
Reactions: 23 users
So what is people's take on how Socionext does not already have an IP licence? They have literally designed products that rely on Akida.

I ask because, if they have somehow avoided the need, what sort of deal has been struck?

Why would they not sign it now rather than later, as they are not hiding the relationship or the product?

Because if they can get away with not having one, is this the same situation that will apply to Prophesee?
At a guess, I would say they may have received permission to use Akida in return for their help in taping out the AKD1000 and will only pay a royalty on chips produced. Who the hell knows what deals are being done.
 
  • Like
Reactions: 3 users
So what's the consensus on whether Mercedes-Benz is going to have vehicles containing Akida in 2024? I've read a few reports now that it "may" contain the tech that was used in the EQXX (Akida).



“Mercedes-Benz have also been eager to reveal that this new electric saloon will be the pioneer of the company’s proprietary MB Operating System, which will be used instead of the Google connectivity options integrated into the vehicles of competitors such as Volvo. This homemade infotainment system may even feature the experimental processor being developed by Mercedes that executes tasks in “neuromorphic spikes”, which essentially translates to the processor steadily accumulating tasks until a certain quantity has been reached, when they are all carried out simultaneously”
Look into the EQA with the new MBOS
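
The quoted "accumulating tasks until a certain quantity has been reached" wording is a loose description of how a spiking neuron integrates its inputs until a threshold is crossed and it fires. A textbook leaky integrate-and-fire sketch, purely generic and not Mercedes' or BrainChip's design:

```python
# Textbook leaky integrate-and-fire neuron; generic illustration only.
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Integrate inputs each timestep; spike and reset when the threshold is crossed."""
    v, spikes = 0.0, []
    for i in input_current:
        v = v * leak + i      # accumulate input, with leak
        if v >= threshold:    # "a certain quantity has been reached"
            spikes.append(1)  # fire a spike...
            v = 0.0           # ...and reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9, 0.6]))  # -> [0, 0, 1, 0, 0, 1]
```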
 
  • Like
Reactions: 4 users
AKIDA 2nd Generation

“Availability:

Engaging with lead adopters now.

General availability in Q3’2023”


WHO ARE THE LEAD ADOPTERS???

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 30 users

cosors

👀
I have to look into that and understand it first. I'm posting it as someone who loves TSE and takes the rules seriously. I have a hunch, but I have to check first. So I'm posting this for transparency, and because we all need to learn:
Screenshot_2023-03-21-22-19-42-89_40deb401b9ffe8e1df2f1cc5ba480b12.jpg

Screenshot_2023-03-21-22-57-11-90_40deb401b9ffe8e1df2f1cc5ba480b12.jpg

I still haven't figured out which of my posts Dreddb0t thought was worth deleting. But I have cleaned up some more myself. I fear that Dreddb0t has overreached, especially when I look at a lot of the posts here. A lot of joy, fun and laughter would be sacrificed for a purely clean, factual thread. By that logic, this post would also have to be deleted and my avatar banned. I don't know if I'd like that. Imagine if every post that is off topic were deleted.
I am the law. I will think about that for the time being.
____
Screenshot_2023-03-22-00-07-00-69_40deb401b9ffe8e1df2f1cc5ba480b12.jpg
 
Last edited:
  • Like
  • Thinking
  • Love
Reactions: 8 users

chapman89

Founding Member
  • Like
  • Fire
  • Love
Reactions: 107 users

mototrans

Regular

GStocks123

Regular
  • Like
  • Fire
  • Love
Reactions: 28 users

Dhm

Regular
Round 1 between Hailo & BRN has been won by BRN. Renesas, which is partnered with both, chose BRN IP for its new RA series MCU based on the Arm Cortex-M85.

Round 1 between SynSense & BRN appears to be going in favour of BRN. Prophesee is partnered with both and appears likely to choose the BRN IP. BRN is a superior product, better suited to Prophesee's Metavision sensor, and SynSense released Speck using iniVation's sensor (iniVation being a Prophesee competitor) instead of Prophesee's.
@Steve10 when Qualcomm revealed their Snapdragon with Prophesee at CES, the blur-free photography contribution was not from BrainChip, even though many of us thought it was. Therefore it must have been SynSense. Round one to SynSense, although there are many rounds to play out over the coming years.
 
  • Like
  • Thinking
Reactions: 6 users

TopCat

Regular
And this from Bill Gates…..

The next frontiers

There will be an explosion of companies working on new uses of AI as well as ways to improve the technology itself. For example, companies are developing new chips that will provide the massive amounts of processing power needed for artificial intelligence. Some use optical switches—lasers, essentially—to reduce their energy consumption and lower the manufacturing cost. Ideally, innovative chips will allow you to run an AI on your own device, rather than in the cloud, as you have to do today.

 
  • Like
  • Fire
  • Love
Reactions: 34 users

TechGirl

Founding Member
  • Like
  • Love
  • Fire
Reactions: 44 users
Intellisense-


View attachment 32749
This is potentially huge. I have rabbited on about the cognitive communications market and how important it is to Brainchip’s commercial success.

And before anyone flogs the dead horse that this is only NASA etc., read this:

The global radio frequency front end market size was valued at $18.8 billion in 2021, and is projected to reach $69.9 billion by 2031, growing at a CAGR of 13.8% from 2022 to 2031.
1679434831441.png

https://www.alliedmarketresearch.com › ...

Radio Frequency Front End Market Size and Forecast
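
A quick check that those market figures are internally consistent (my arithmetic only):

```python
# Checking the quoted RF front-end market figures for internal consistency.
base_2021_bn = 18.8
cagr = 0.138
years = 10                    # 2021 -> 2031
projected = base_2021_bn * (1 + cagr) ** years
print(f"${projected:.1f}B")   # ~$68.5B, close to the quoted $69.9B
```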


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 77 users

Tezza

Regular
  • Like
Reactions: 2 users

JB49

Regular
  • Like
  • Love
  • Fire
Reactions: 19 users

TechGirl

Founding Member
This is potentially huge. I have rabbited on about the cognitive communications market and how important it is to Brainchip’s commercial success.

And before anyone flogs the dead horse that this is only NASA etc., read this:

The global radio frequency front end market size was valued at $18.8 billion in 2021, and is projected to reach $69.9 billion by 2031, growing at a CAGR of 13.8% from 2022 to 2031.
View attachment 32750
https://www.alliedmarketresearch.com › ...

Radio Frequency Front End Market Size and Forecast


My opinion only DYOR
FF

AKIDA BALLISTA

And they mentioned for “commercial and government markets”

That doesn’t sound like just for NASA

“Intellisense Systems Inc. has selected its neuromorphic technology to improve the cognitive communication capabilities on size, weight and power (SWaP) constrained platforms (such as spacecraft and robotics) for commercial and government markets.”
 
  • Like
  • Love
  • Fire
Reactions: 48 users

Onboard21

Member
Brainchip March newsletter

March 2023 Newsletter

 
  • Like
  • Love
  • Fire
Reactions: 23 users
@Steve10 when Qualcomm revealed their Snapdragon with Prophesee at CES, the blur-free photography contribution was not from BrainChip, even though many of us thought it was. Therefore it must have been SynSense. Round one to SynSense, although there are many rounds to play out over the coming years.
No, it does not have to be SynSense. Prophesee can run their sensor using von Neumann computing, but it is less efficient. So just because Prophesee is not using AKIDA does not mean it defaults to SynSense, far from it.

What Prophesee has found in Brainchip AKIDA is the optimum processing solution for their technology.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 34 users