BRN Discussion Ongoing

Newspaper or not, this article makes perfectly clear why Brainchip’s

1. ASX announced,
2. confirmed,
3. paid for,
4. locked in, and
5. publicly acknowledged by Renesas,

deal was, is and will be HUGELY rewarding to retail shareholders who have done their research.

A great all-in-one-place refresher on the size and market position of Renesas.

My opinion only DYOR
FF

AKIDA BALLISTA
Renesas and MegaChips are company-transformative deals and some just can’t see it.

Sad really.😞

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 29 users

GStocks123

Regular
Hi All,

Speaking of 4-bit quantization, I found another reference to it in a different Qualcomm blog to the one @GStocks123 posted. This time Qualcomm refers to it in conjunction with Snapdragon mobile platforms and the Qualcomm Hexagon DSP, which seems to indicate that they consider it "highly beneficial to quantize values to smaller representations, down to as low as 4-bit". Why else would they mention it directly after discussing Snapdragon and Hexagon, if it didn't improve the actual performance and latency of these two specific processors?

Naturally I don't want to get everyone too excited and will leave you to draw your own conclusions.

B 💋



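For anyone wanting to see what 4-bit quantization actually does to the numbers, here is a rough sketch in Python (my own illustration only, not Qualcomm's or BrainChip's code): float weights get mapped onto 16 integer levels, so each value needs only half a byte instead of four bytes.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Symmetric per-tensor 4-bit quantization: floats -> integers in [-8, 7]."""
    scale = np.abs(w).max() / 7.0                      # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(1000).astype(np.float32)    # stand-in for a layer's weights
q, scale = quantize_int4(weights)
restored = dequantize_int4(q, scale)

print("worst-case error:", np.abs(weights - restored).max())
print("storage: 4 bits per weight vs 32 bits for FP32 -> 8x smaller")
```

On real hardware two of those int4 values are packed into each byte, and it is that cut in memory traffic (and multiplier width) that shows up as the performance, latency and power benefits Qualcomm is talking about.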
In addition to this, I’ve received an email this morning confirming there is an ongoing relationship/communication between Merc & Valeo... take it how you want it :)
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Whacko-the-diddle-oh! This looks interesting!


The KI Delta Learning project is being funded by the German Federal Ministry for Economic Affairs and Energy. In addition to Porsche Engineering, partners include BMW, CARIAD and Mercedes-Benz, major suppliers such as Bosch, and nine universities, including the Technical University of Munich and the University of Stuttgart.



Valeo Schalter und Sensoren GmbH

Valeo is an automotive supplier, partner to all automakers worldwide. As a technology company, Valeo proposes innovative products and systems that contribute to the reduction of CO2 emissions and to the development of intuitive driving. In 2019, the Group generated sales of 19.2 billion euros and invested 13% of its original equipment sales in Research and Development. Valeo has 190 plants, 20 research centers, 43 development centers and 15 distribution platforms, and, as of June 30, 2020, employed 102,400 people in 33 countries worldwide.
Valeo is today the world's leading supplier of sensors for driver assistance. Since the 1980s, Valeo has been developing sensors for driver assistance in the fields of ultrasound, radar, lidar, cameras and laser scanners.
Valeo maintains its own network of research sites (DAR - Driving Assistance Research) in Paris, Kronach and San Francisco. In 2017, Valeo created the Valeo.ai research centre in Paris, which is the first research centre in the field of AI and machine learning for ADAS and autonomous driving. Valeo also collaborates with universities and finances a number of doctoral theses.
Within the KI Delta Learning project, Valeo contributes its expertise in algorithms for sensors: ultrasound, radar, lidar, cameras and laser scanners. Valeo is particularly active in the fields of artificial intelligence and testing and validation. Here, particularly for laser scanners and cameras, research is being carried out into new sensor technologies that will be used after 2020.
Valeo's main focus in this project is to study the impact of using different sensor systems, with a focus on lidar and camera, in different scenarios (e.g. day and night), on the performance and domain-adaptation capabilities of models. Further work covers the processing and modification (augmentation) of real and synthetic data to improve model quality using different approaches to augmentation (classical, GANs); data acquisition with a variety of sensors in different environments, with annotations created in different ways; and, finally, studies towards the implementation of models on embedded hardware with a focus on quantization and pruning to reduce and accelerate neural networks.





KI Delta Learning is a project of the KI Familie. It was initiated and developed by the VDA Leitinitiative autonomous and connected driving and is funded by the Federal Ministry for Economic Affairs and Climate Action.


Mercedes-Benz AG

Mercedes-Benz AG is responsible for the global business of Mercedes-Benz Cars and Mercedes-Benz Vans with over 173,000 employees worldwide. Ola Källenius is Chairman of the Board of Management of Mercedes-Benz AG. The company focuses on the development, production and sales of passenger cars, vans and services. Furthermore, the company aspires to be leading in the fields of connectivity, automated driving and alternative drives with its forward-looking innovations. The product portfolio comprises the Mercedes-Benz brand with the sub-brands Mercedes-AMG, Mercedes-Maybach and Mercedes me - as well as the smart brand, and the EQ product and technology brand for electric mobility.
Mercedes-Benz AG is one of the largest manufacturers of premium passenger cars. In 2019 it sold nearly 2.4 million cars and more than 438,000 vans. In its two business divisions, Mercedes-Benz AG is continually expanding its worldwide production network with over 40 production sites on four continents, while aligning itself to meet the requirements of electric mobility. At the same time, the company is developing its global battery production network on three continents. Sustainable actions play a decisive role in both business divisions.
To the company, sustainability means creating value for all stakeholders on a lasting basis: customers, employees, investors, business partners and the society as a whole. The basis for this is the sustainable business strategy of Daimler in which the company takes responsibility for the economic, ecological and social effects of its business activities and looks at the entire value chain.
In addition to leading the KI Delta Learning project, Mercedes-Benz will ensure a uniform and consistent evaluation across all projects and will address the robustness of the modules in open world scenarios as well as the invariance to domain changes. Further priorities are active learning and training organisation.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 40 users

Earlyrelease

Regular
Newspaper, wow, I haven't seen one in a while. They still have classifieds as well, impressive.
EQ
You do know WA stands for Wait Awhile, well, that's if you are Eastern seaboard based. We like to see ourselves as keeping hold of the better bits of the past (old-style papers), and WA representing World-leading AI, since PVDM and his team are based here in sunny Perth. 😎
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

Slymeat

Move on, nothing to see.
  • Haha
  • Like
Reactions: 9 users

HopalongPetrovski

I'm Spartacus!
Whacko-the-diddle-oh! This looks interesting!
[Bravo's KI Delta Learning / Valeo / Mercedes-Benz post quoted in full above]

You get some love just for the...."Whacko-the-diddle-oh!"
Haven't heard that since my Dad went over the rainbow bridge. 🤣
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 8 users

krugerrands

Regular
Can I get some clarity around how exactly Akida is involved with the X280?

Breaking down the X280
  • X280 Processor - a 64-bit applications processor plus an expanded vector unit supporting the ratified RISC-V Vector 1.0 standard vector extensions
    ( not Akida )

  • SiFive Intelligence extensions, which add custom instructions to optimize AI/ML workloads.
    X280 can run by itself, performing inference tasks at the edge, where it can also act as an apps processor and inference engine for smaller workloads. ( ref )
    Are we saying that this is Akida IP?? The SiFive Intelligence extensions??

  • VCIX (Vector Coprocessor Interface eXtension)
    This feature was added to the latest version of the X280 ~ June 2022 ( ref )
    VCIX connects directly to the 32 × 512-bit vector registers in the X280 and also to the scalar instruction stream, creating a high-bandwidth, low-latency path directly coupled to the instruction stream of the X280 scalar processor.

    "X280 has many features to accelerate AI and ML, but it’s not a massive AI accelerator that is traditionally built for datacenters. Customers building these massive accelerators found they had a bunch of complex compute that needed to be done that their accelerator “really didn’t know how to do.” For example, many found that you need a full featured software stack to run the code that’s not in the kernel that sits on the accelerator. To solve this challenge, customers realized they could put the X280 core next to their large accelerator where it could provide maintenance code and operations code, run kernels that the big accelerator couldn’t do, or provide other valuable extra functions. And the X280 core allowed them to run very fast."
    ( does not sound like Akida IP; it's not as if Akida will be using this VCIX interface )

So either the SiFive Intelligence extensions are the Akida IP and are embedded in all X280 processors.

Or

Laguna Hills, Calif. – April 5, 2022 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI chips and IP, and SiFive, Inc., the founder and leader of RISC-V computing, have combined their respective technologies to offer chip designers optimized AI/ML compute at the edge.

That implies to me that only custom-designed chips will have the Akida Neuron fabric.

Not every X280 application that is AI/ML will be Akida.

Any insights?
@Diogenese
@uiux

Any other takers?
 
  • Like
Reactions: 2 users
Just on Qualcomm & wearables with their new Snapdragon W5 SoCs.

It appears to have been leaked that they are using a co-processor.

The ML is handled by a U55 NPU, which one can presume is the ARM Ethos U55, which I don't think uses SNN?

Second slide under performance.



Qualcomm announces 4nm Snapdragon W5 and W5+ Gen 1 SoC for wearables​

By Staff, July 19, 2022


After its SoC teasers last week, Qualcomm is announcing two new wearable chipsets today. To succeed the Snapdragon Wear 4100+ platform, Qualcomm announces the Snapdragon W5 Gen 1 and the W5+ Gen 1. Both SoCs are built on the 4nm process and represent a drastic improvement to Qualcomm’s offering for wearables.

Qualcomm touts that its new wearable chipsets will offer up to 2X the performance using half as much power compared to the Snapdragon Wear 4100+ platform. In typical scenarios, the W5+ can provide up to 50% longer battery life while taking up 30% less space than previous generations. Qualcomm has worked closely with Google over the past several quarters to optimize Wear OS for the new SoC.

The main difference between the Snapdragon W5 and W5+ Gen 1 is that the non-Plus model skips the AON co-processor. Qualcomm intends to equip segment-specific wearables with the Snapdragon W5 Gen 1, as they won’t require the extra power-saving capabilities. This includes wearables for the China, Kids, Seniors, Health, and Enterprise segments.



The W5+ Gen 1 is made up of four Cortex A53 cores and one Cortex M55 efficiency core at 250 MHz. Graphics are handled by the A702 GPU clocked at 1 GHz (a significant boost from the 4100+’s 320 MHz A504 GPU) and memory has been updated to support LPDDR4 at 2133 MHz. A new machine learning unit, the U55, has been added here as well.

The W5+’s new co-processor can handle more tasks in the background while using less power. All of the wearable’s sensing is handled by the AON (always on) co-processor (22nm), and with the W5+ Gen 1, speech processing, audio playback, and notifications can all be handed off to the co-processor. The AON chip will also enable low-power Bluetooth 5.3, Wi-Fi, GNSS, and adds support for new power states like Deep Sleep and Hibernate. LTE is also getting a low-power mode with an updated modem.



Qualcomm announces it has partnered with Compal and Pegatron to create reference smartwatches with the Snapdragon W5+ on board to help partners develop products faster.

Mobvoi and Oppo will be the first OEMs to launch wearables with Qualcomm’s new SoC and the chipmaker says 25 smartwatch designs are currently in the pipeline. Oppo is expected to make an announcement sometime in August for the Oppo Watch 3.
 
  • Like
  • Love
Reactions: 16 users

Diogenese

Top 20
Can I get some clarity around how exactly Akida is involved with the X280?
[krugerrands' X280 breakdown quoted in full above]

I think that a lot of weight has been placed on this statement from SiFive's vice president:

https://brainchip.com/brainchip-sifive-partner-deploy-ai-ml-at-edge/

“Employing Akida, BrainChip’s specialized, differentiated AI engine, with high-performance RISC-V processors such as the SiFive Intelligence Series is a natural choice for companies looking to seamlessly integrate an optimized processor to dedicated ML accelerators that are a must for the demanding requirements of edge AI computing,” said Chris Jones, vice president, products at SiFive. “BrainChip is a valuable addition to our ecosystem portfolio.”

So, yes, you can order one with the lot, or discard the anchovies, capsicum and SNN.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 42 users

krugerrands

Regular
I think that a lot of weight has been placed on this statement from SiFive's vice president... So, yes, you can order one with the lot, or discard the anchovies, capsicum and SNN. [quoted in full above]

Lol.

Sure, that still implies to me that it will be a custom X280.

Not every X280 application that talks about AI/ML would be Akida.
Even if we are saying it would be "unnatural" not to have Akida...... like ordering Pizza with pineapple on it.

The metaphor doesn't quite work since in this case you need to ADD Akida.
The base does not have Akida.
 
  • Like
  • Thinking
Reactions: 6 users
Newspaper, wow, I haven't seen one in a while. They still have classifieds as well, impressive.
I can better that one 😛

But we are currently in 5th place..

BrainChip is going to spend a lot of time in these leader boards now 😉

I meant to reply to the post above Equanimous's 🙄..
 

Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

equanimous

Norse clairvoyant shapeshifter goddess
  • Like
  • Love
  • Fire
Reactions: 10 users
You do know WA stands for Wait Awhile... WA representing World-leading AI, since PVDM and his team are based here in sunny Perth. 😎 [Earlyrelease's post quoted in full above]
You do know that this is the very reason why the Brainchip Innovation Centre is in Perth: security considerations. Those who might want to steal Brainchip IP, being from the 21st century, will stick out like a sore thumb.😂🤣😂🤡😎👍

Regards from the future
FF

AKIDA BALLISTA
 
  • Haha
  • Like
Reactions: 23 users

krugerrands

Regular
Lol. Sure, that still implies to me that it will be a custom X280... The base does not have Akida. [quoted in full above]



SiFive Intelligence Extensions for ML workloads – BF16/FP16/FP32/FP64, int8 to int64 fixed-point data types

My take.
The Intelligence Extensions are embedded in the X280 chip, which can run by itself, performing inference tasks at the edge, where it can also act as an apps processor and inference engine for smaller workloads.

For Akida IP to be employed, you need a custom chip where I'm guessing the Akida Neuron fabric integrates through this AXI4 bus interconnect.
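To picture what that would mean in practice, here is a purely hypothetical sketch in Python (register addresses and layout invented for illustration; this is not BrainChip's or SiFive's actual interface) of how a host core typically drives an accelerator block sitting behind an AXI-style memory-mapped interconnect: write the buffer addresses and a start bit into the block's registers, then poll for completion.

```python
import mmap
import os
import struct

# Hypothetical register map for an accelerator hanging off the interconnect.
# Every address and bit position here is made up purely to illustrate the idea.
ACCEL_BASE   = 0xA0000000   # physical base address of the block's register window
REG_CTRL     = 0x00         # bit 0 = start inference
REG_STATUS   = 0x04         # bit 0 = done
REG_IN_ADDR  = 0x08         # physical address of the input feature buffer
REG_OUT_ADDR = 0x0C         # physical address of the output buffer

def run_inference(in_buf_phys: int, out_buf_phys: int) -> None:
    """Kick the (imaginary) memory-mapped accelerator and busy-wait for completion."""
    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    try:
        regs = mmap.mmap(fd, 4096, offset=ACCEL_BASE)
        def write32(off, val):
            regs[off:off + 4] = struct.pack("<I", val)
        def read32(off):
            return struct.unpack("<I", regs[off:off + 4])[0]

        write32(REG_IN_ADDR, in_buf_phys)    # tell the block where the data lives
        write32(REG_OUT_ADDR, out_buf_phys)  # and where to put the result
        write32(REG_CTRL, 1)                 # start
        while not (read32(REG_STATUS) & 1):  # poll the done bit
            pass
        regs.close()
    finally:
        os.close(fd)
```

The point of the sketch is simply that a block on the interconnect is something the chip designer has to instantiate and wire up; it does not come for free with every X280, which is why a custom SoC is the natural reading of the partnership.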
 
  • Like
  • Fire
Reactions: 6 users

equanimous

Norse clairvoyant shapeshifter goddess
You do know WA stands for Wait Awhile... WA representing World-leading AI, since PVDM and his team are based here in sunny Perth. 😎 [Earlyrelease's post quoted in full above]
At one stage WA almost became its own country with its border closure.

 
  • Haha
  • Love
  • Like
Reactions: 5 users

Iseki

Regular
Lol. Sure, that still implies to me that it will be a custom X280... The base does not have Akida. [quoted in full above]
Sure, but it's available if needed - it may even become a standard feature if it can turn the power-hungry vector units on and off.
NASA had to go with SiFive, of course, as ARM isn't a US company.
But you can take heart that SiFive and Brainchip ensured that their technology works together, possibly at the behest of NASA, one of our earliest EAPs.

So what you have is both of the sunrise chipset designers keeping Brainchip close in their ecosystems.

It doesn't get any better than this really.
 
  • Like
  • Love
Reactions: 22 users

Diogenese

Top 20
You do know that this is the very reason why the Brainchip Innovation Centre is in Perth: security considerations. Those who might want to steal Brainchip IP, being from the 21st century, will stick out like a sore thumb. [quoted in full above]
They could easily blend in with a few corks strung from the hat brim and a geologist's pick.
 
  • Haha
  • Like
  • Love
Reactions: 7 users
Lol. Sure, that still implies to me that it will be a custom X280... The base does not have Akida. [quoted in full above]
Just one tiny point: acceleration is not Artificial Intelligence.

The industry quite bluntly lies in all its advertising when it describes a semiconductor as an AI accelerator where the edge is concerned.

An AI accelerator at the edge is simply compressing data to send it more efficiently to somewhere else to be processed.

The SiFive X280 is, as they say, an accelerator, full stop. The only way they can claim it to be an Intelligence Series is if they add AI that does some processing of the sensor data before it is sent somewhere else for action. Which is where AKIDA comes in.

Otherwise, in truth, it is not the Intelligence Series, it is the Accelerator Series.

Now if they wanted they could add Loihi 2 to the X280 to make it intelligent, but of course it is not commercially available.

Of SiFive's published partners, the only partner supplying AI to make the X280 intelligent is Brainchip.

So it is of course an add-on, but if you don't add it on you don't get intelligence, you get acceleration.

It is like going into a car dealership to buy a new car and when it’s delivered there are no wheels. When you point this out the sales person says wheels are an extra. You didn’t say you wanted wheels.

Buying an Intelligence Series processor without the intelligence is pretty unintelligent.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 38 users

Boab

I wish I could paint like Vincent
Looks like lots of small trades amongst themselves.

BRN $0.872  +$0.042 (5.06%)
BRAINCHIP LTD FPO (ORDINARY FULLY PAID)
Tue 27 Sep 2022 2:26 PM (Sydney time)

Share Quote
Bid ($): 0.870 | Offer ($): 0.875 | High ($): 0.885 | Low ($): 0.850 | Volume: 4,433,899 | Trades: 1,186 | Value ($): 3,858,991 | Open ($): 0.860 | Previous Close ($): 0.830

Course Of Sales (last 20 trades)
Time | Price ($) | Volume | Value ($) | Condition
2:25:55 PM 0.872 117 102.083 CXXT
2:25:35 PM 0.872 148 129.130 CXXT
2:25:23 PM 0.875 19 16.625 XT
2:25:12 PM 0.870 9,706 8,444.220
2:25:12 PM 0.872 294 256.515 CXXT
2:24:55 PM 0.872 138 120.405 CXXT
2:24:42 PM 0.875 24,671 21,587.125
2:24:42 PM 0.875 329 287.875
2:24:28 PM 0.875 6 5.250
2:24:28 PM 0.872 120 104.700 CXXT
2:23:57 PM 0.875 6 5.250 XT
2:23:35 PM 0.872 119 103.828 CXXT
2:22:55 PM 0.872 111 96.848 CXXT
2:22:35 PM 0.872 151 131.748 CXXT
2:22:30 PM 0.870 1,251 1,088.370
2:22:30 PM 0.872 294 256.515 CXXT
2:22:30 PM 0.872 102 88.995 CXXT
2:21:20 PM 0.875 114 99.750
2:21:05 PM 0.872 136 118.660 CXXT
2:20:52 PM 0.875 1,500 1,312.500
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

equanimous

Norse clairvoyant shapeshifter goddess

SNUFA 2022

Spiking Neural networks as Universal Function Approximators

Brief summary. This online workshop brings together researchers in the fields of computational neuroscience, machine learning, and neuromorphic engineering to present their work and discuss ways of translating these findings into a better understanding of neural circuits. Topics include artificial and biologically plausible learning algorithms and the dissection of trained spiking circuits toward understanding neural processing. We have a manageable number of talks with ample time for discussions.

Executive committee. Katie Schuman, Timothée Masquelier, Dan Goodman, and Friedemann Zenke.


Key information​

Workshop. 9-10 November 2022, European afternoons.

Registration. Free but mandatory.

Abstract submission deadline. 28 September 2022.

Final decisions. 12 October 2022.

Format​

  • Two half days
  • 4 invited talks
  • 8 contributed talks
  • Poster session
  • Panel debate (topic to be decided, let us know if you have a good idea)

Abstract submissions​

Abstracts will be made publicly available at the end of the abstract submissions deadline for blinded public comments and ratings. We will select the most highly rated abstracts for contributed talks, subject to maintaining a balance between the different fields of, broadly speaking, neuroscience, computer science and neuromorphic engineering. Abstracts not selected for a talk will be presented as posters, and there is an option to submit an abstract directly for a poster and not a talk if you prefer.
 
  • Like
  • Fire
Reactions: 8 users