Mocking? I'm pleased that he has (seemingly) recognized that Loihi is number 2. Humble pie is never delicious, but we have all eaten it. Thanks for the video.
Anyone else mocking him whilst viewing the vid?
Well, they may have had a premonition that their chip would be second best, considering the name of it: 'Loihi 2'.
Hi Bravo,
I know you are keen on establishing a link to Qualcomm, and I have been a little negative, but of course Qualcomm must be considering Akida.
After all, as you know, they are invested in SiFive ...
https://www.eetimes.com/qualcomm-takes-stake-in-sifive/
Qualcomm Takes Stake in SiFive
By Nitin Dahad 06.07.2019
Qualcomm Ventures is the newest investor in SiFive, the RISC-V processor IP startup. It’s a clear signal Qualcomm plans to exploit the potential of the RISC-V architecture in wireless and mobile. SiFive announced it raised $65.4 million in funding, with another $11m for its Chinese sister company SaiFan China.
https://www.notebookcheck.net/Qualc...rs-in-SiFive-an-ARM-alternative.423631.0.html
Qualcomm, Samsung and Intel revealed as investors in SiFive, an ARM alternative
Qualcomm, Samsung and Intel are all investors in RISC-V fabless US-based chip design company SiFive. (Source: SiFive)
RISC-V chip designer SiFive has been revealed to have some pretty interesting investors. A recent filing shows that it has raised US$65.4 million in its latest funding round, including a cash injection from Qualcomm that sees it join fellow heavyweights Samsung and Intel as investors.
Sanjiv Sathiah, Published 06/09/2019
... and BrainChip and SiFive are partners:
https://brainchip.com/brainchip-sifive-partner-deploy-ai-ml-at-edge/
BrainChip and SiFive Partner to Deploy AI/ML Technology at the Edge
Laguna Hills, Calif. – April 5, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power neuromorphic AI chips and IP, and SiFive, Inc., the founder and leader of RISC-V computing, have combined their respective technologies to offer chip designers optimized AI/ML compute at the edge.
... so it is likely that Qualcomm will see the light soonish if they were to make an objective comparison with the AI acceleration engine in their Snapdragon 8 Gen 2, particularly as Qualcomm has indicated they will be switching a lot of their production to RISC-V in light of the ARM litigation.
https://www.theregister.com/2022/11/15/qualcomm_snapdragon_8_gen_2/?td=readmore
Qualcomm pushes latest Arm-powered Snapdragon chip amid bitter license fight
The Snapdragon 8 Gen 2 system-on-chip features eight off-the-shelf cores from Arm, which is locked in a bitter legal fight with Qualcomm over licenses and contracts.
...
This includes an AI acceleration engine that is, we're told, up to 4.35 times faster than the previous generation, and with a potential 60 percent increase in performance-per-watt, depending on how it's used. This unit can be used to speed up machine-learning tasks on the device without any outside help, such as object recognition, and real-time spoken language translation and transcription. The dual-processor engine can handle as low as INT4 precision for AI models that don't need a lot of precision but do need it done fast on battery power, which the 4-bit integer format can afford developers, according to Qualcomm.
Qualcomm is pushing the INT4 capabilities as a precision ideal for modern mobile apps. It said a cross-platform Qualcomm AI Studio is due to be made available in preview form in the first half of next year that will optimize developers' models for this precision as well as other formats. This studio looks like a typical IDE in which programmers can organize their training workflows.
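For anyone wondering what INT4 actually buys, here is a minimal sketch of symmetric 4-bit quantization in plain Python. It is purely illustrative: the function names are made up, and this is not Qualcomm's (or BrainChip's) actual toolchain.

```python
# Toy symmetric INT4 quantization: map floats onto signed 4-bit
# integers in [-8, 7] using one shared scale factor.
def quantize_int4(values):
    """Return (quantized values, scale) for a list of floats."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 7.0 if max_abs else 1.0  # 7 = largest positive INT4
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate floats from 4-bit values and the scale."""
    return [x * scale for x in q]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize_int4(weights)
approx = dequantize_int4(q, scale)
# Each weight now fits in 4 bits instead of 32, at the cost of some
# rounding error: the trade-off that buys speed and battery life.
```

The point of the sketch is simply that low precision shrinks every stored number, which is why Qualcomm pitches INT4 for models that "don't need a lot of precision but do need it done fast on battery power".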
... and there are a couple of other investors in SiFive who can profit from incorporating Akida in their products, one of whom has recently shed the cloak of invisibility.
.
AKITA is a Japanese dog breed with the known traits of being loyal and fiercely protective of its family, and aggressive to strangers and other dogs, BUT very hard to train. Surely not?
Your scansion's a bit dodgy ...
On the 12th day of Christmas BrainChip sent to me
twelve unexpected announcements
eleven Intel Foundry Services partners
ten Mercedes MBUXes
nine NVISO SDKs
eight Valeo Lidars
seven new Socionext auto applications
six CES 2023 honourable mentions
five custom designs with SiFive
four lines of royalties from Renesas
three whiz-bang Prophesee event-based cameras
two NASA orbits and not even one ogre from my dear Dodgy-Knee (s)
Akida could also put the OpenAI chatbot out of a job. I proposed this question to the OpenAI chatbot... very interesting to read the response...
“How will Brainchip akida revolutionise the AI sector”
Brainchip Akida is a neuromorphic system-on-chip (SoC) that has been designed to enable artificial intelligence (AI) and machine learning (ML) capabilities in edge devices. The Akida SoC is based on a novel spiking neural network (SNN) architecture that is inspired by the way the brain works. According to Brainchip, the Akida SoC can perform a wide range of AI and ML tasks, including image recognition, object detection, and natural language processing, with high accuracy and low power consumption.
The potential benefits of the Akida SoC for the AI sector are significant. By enabling AI and ML capabilities on edge devices, the Akida SoC could potentially revolutionize the way that AI is deployed and used. For example, the Akida SoC could enable the development of AI-powered devices that can operate independently, without the need for a connection to a central server or cloud computing infrastructure. This could make AI more accessible and widely available, and enable a wide range of new applications and use cases.
It is important to note that the Akida SoC is still a relatively new technology, and it remains to be seen how it will be adopted and used in the AI sector. However, the potential benefits of the Akida SoC are significant, and it is likely that it will play a significant role in the future of AI and ML.
Don't be afraid to show us a bit of leg from time to time.
Merry Christmas from a serial lurker, Brainchip Fam.
The time, effort, and passion y'all put into researching and posting about this beautiful little nugget is absolutely incredible, and it's truly an honour to be involved (from the quiet backseat) in this community.
Don't be afraid to show us a bit of leg from time to time.
It takes all kinds.
Remember that, like Sgt. Schultz, I know nothing about Transformers.
Just having a little look around, and the orange xxxx's mark the spot where in the past week an interesting update was made to AKIDA models. @Diogenese can tell us if this amendment has any significance. And I almost forgot: since when has AKD500 been available? Peter van der Made said this might be used in white goods:
Upgrade to akida/cnn2snn 2.2.6 and akida_models 1.1.8
last week
ktsiknos-brainchip
2.2.6-doc-1
d334eea
Upgrade to akida/cnn2snn 2.2.6 and akida_models 1.1.8
Latest
Update akida and cnn2snn to version 2.2.6
New features
- [akida] Upgrade to quantizeml 0.0.13
- [akida] Attention layer
- [akida] Identify AKD500 devices
- [engine] Move mesh scan to host library
API changes
- [engine] toggle_learn must be called instead of program(p,learn_enabled)
- [engine] set_batch_size allows to preallocate inputs
Bug fixes
- [engine] Memory can grow indefinitely if queueing is faster than processing
Update akida_models to 1.1.8
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- updated CNN2SNN minimal required version to 2.2.6 and QuantizeML to 0.0.13
- VWW model and training pipeline refactored and aligned with TinyML
- Layer names in almost all models have been updated in preparation for quantization with QuantizeML
- Tabular data models and tools have been removed from the package
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- Transformers pretrained models updated to 4-bits
- Introduced calibration utils in training toolset
- KWS and ImageNet training scripts now offer a "calibrate" CLI action
- ImageNet training script will now automatically restore the best weights after training
Documentation update
- dropped quantizeml API details for now
Releases · Brainchip-Inc/akida_examples
Brainchip Akida Neuromorphic System-on-Chip examples and documentation. - Brainchip-Inc/akida_examples (github.com)
AI backing AI. Classic.
I proposed this question to the OpenAI chatbot... very interesting to read the response...
Legend, thanks Tech.
Good morning from another beautiful day in Perth... 30°C already at 8 am.
I'm not too sure that the above quote is 100% accurate; "other companies' chips with our IP embedded" would be a lot more accurate.
We simply don't supply chips; selling IP in blocks is how I understand it to be moving forward. I also understand what she is implying, and maybe I'm being a little pedantic.
And for Santa's little helpers still shaking our Christmas Tree, the only thing falling off is fluff, which we don't deal in anymore.
Our tree will never fall over no matter how much shaking you give it, why, because our foundations are rock solid.
See you on the other side of CES 2023. I believe that my neighbour has a meeting arranged with the Brainchip team in Las Vegas to discuss the possibility of having her engineers in the South African mining industry work with our guys to develop Akida technology for underground mining in the areas of gas sensing, predictive maintenance, vibration analysis etc.
I'll ask her to take some photos if possible while at CES.
Tech x
OKAY, thanks for being there yet again, and yes, at least on my reading, Transformers go hand in hand with LSTM.
Remember that, like Sgt. Schultz, I know nothing about Transformers.
As I understand it:
Update akida_models to 1.1.8
- Transformers pretrained models updated to 4-bits
means the model libraries (speech was the initial application of Transformers) have been updated.
There are about 44 phonemes in English, but dialect and accent will multiply this.
The updating was to convert the library weights to 4-bit values (from ... 8 bits?) so they are compatible with Akida NPUs.
This is only changing some numbers in memory - no silicon was harmed in performing this update.
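As a toy illustration of that point (just changing numbers in memory), re-quantizing stored 8-bit weights into the 4-bit range is simple arithmetic. This is a hypothetical sketch with made-up names, not BrainChip's actual quantizeml/cnn2snn pipeline:

```python
# Toy re-quantization: map signed 8-bit weights [-128, 127] onto the
# signed 4-bit range [-8, 7] by dividing out the lost bits.
def requantize_8bit_to_4bit(w8):
    """Rescale each 8-bit weight to a clamped 4-bit weight."""
    return [max(-8, min(7, round(w / 16))) for w in w8]  # 16 = 2**(8-4)

w8 = [127, -128, 41, -3, 0]
w4 = requantize_8bit_to_4bit(w8)
# The weights now need half the storage; nothing about the chip has to
# change to hold the new numbers, only NPUs that compute in 4 bits.
```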
Remembering that we are selling Akida IP, implementing Transformers in Akida may be as simple as updating the model library and designing the appropriate NPU configuration, the configuration being implemented by the ARM Cortex M MPU as part of the set up procedure.
Now I suppose you do need LSTM for transformers, so this will require some additional memory for previous words/sentences, so there may be some collateral damage to the silicon at this stage.
So I wonder if we have had to go back to the drawing board after implementing LSTM, to update the silicon for Transformers, which are a very recent phenomenon for Akida.