BRN Discussion Ongoing

I’m sick of hearing my own voice at this stage, and can see I’m ruffling feathers so I think I need to stop posting for a while.

I’ll be the first to congratulate the team if something positive is announced.

Cheers,
Mark
For what it's worth, I'm appreciative of your narrative mate. Granted, it's leaning towards the pessimistic, but I think that's needed and warranted. It'd be a shame to see you pull back, but if it's for your wellbeing, then by all means.
 
  • Like
  • Love
  • Fire
Reactions: 19 users
I also thought this tweet looked promising when I first saw it, but in the referenced article they state that transformers are superior to LSTMs. And they go on to give an example of visual processing where the parallel processing of distant pixels gives transformers another advantage.

I did not see that article as supporting BrainChip’s LSTM aspirations.

From the referenced article:
“Unlike RNNs and LSTMs that must read a string of text sequentially, transformers are significantly more parallelizable and can read in a complete sequence of words at once, allowing them to better learn contextual relationships between words in a text string.”

and

View attachment 26753
The thing is, AKD2000 or ACE or whatever it’s called is to have both transformers and LSTM, and while I am an old technophobe, the reading I did on transformers confirmed an interplay between them and LSTM.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Great find generously shared.

To assist I will add the following extract which makes clear that Accenture have most likely been testing either Loihi 2 or AKIDA technology as referenced at page 6:

“Some people approximate derivatives of spikes in order to use backpropagation (like SynSense) and some use another technique called spike timing dependent plasticity (STDP), which is closer to how biological brains function. STDP, however, is less mature as a technology (BrainChip uses this method for one-shot learning at the edge)”


We know that SynSense has been working with BMW and Prophesee but as this extract from an article promoted by SynSense on its website makes clear it does not use STDP.

It also confirms what we know that Brainchip uses STDP but what it does not mention is that Brainchip owns the IP for the STDP based JAST Learning Rules which they converted from a license to full ownership in the first half of 2023 under the leadership of the CEO Sean Hehir. Some think this was a stroke of commercial brilliance.

What some might not know is that when Peter van der Made bought the original license CERCO had been discussing terms with Intel and Peter van der Made slipped in underneath and walked away with the prize.

So while Mike Davies and Intel use a form of STDP they are at a significant disadvantage absent the right to use the JAST Learning Rules.

If the vehicle manufacturer is Honda we know Valeo is involved so Brainchip has feet on the ground so to speak.

If Mercedes Benz is the vehicle manufacturer, it cannot be anyone else unless this testing by Accenture was about a year or so before the CES 2022 EQXX reveal, when Intel was still working with Mercedes Benz before they switched to Brainchip.

The Accenture Brainchip podcast is going to be very interesting.

My opinion only DYOR
FF

AKIDA BALLISTA

Accenture and Intel


UPDATED 11:50 EST / SEPTEMBER 28 2022

Intel Project Amber pilot opens up with Leidos and Accenture onboard


BY DUNCAN RILEY
Intel Corp. today announced it’s opening up its “Project Amber” pilot program and already has two companies signed up: professional services firm Leidos Holdings Inc. and consulting giant Accenture plc.
Project Amber was announced in May and provides organizations with remote verification of trustworthiness in cloud, edge and on-premises environments. Designed to address growing security needs, the service focuses on trust and operates as an independent trust authority in the form of an innovative service-based security implementation.
In the proof-of-concept pilot with Leidos, a company with a $13.7 billion market cap and 44,000 employees, the company built an attestation service for potential use in its QTC Mobile Medical Clinics. The service uses large, specially equipped vans to perform “in-the-field” medical exams and health information processing for U.S. veterans in rural and underserved areas. Leidos is looking at using Project Amber to play a role in meeting security challenges, including those caused by “internet of things” and medical IoT connected devices.
In the pilot, Leidos’ Health Group used Project Amber Attestation and Intel Software Guard Extensions to create an independently verified hardware root of trust and a trusted distributed compute foundation. The capabilities are designed to protect connected clinical workloads and data in storage, in process and in transmission.
Accenture has integrated Project Amber into a new AI-based framework for privacy-protecting data cooperatives. Well-designed cooperatives are said to let companies share data and collaborate and help reduce concerns around trust, compliance, privacy and data control or ownership.
The Accenture proof-of-concept allows healthcare institutions to expand their knowledge by privately and more securely sharing data and an AI model for disease detection and prevention training from multiple sources.
In earlier testing, Accenture also worked with Intel to demonstrate federated data learning for a practical use case, detecting sepsis. After evaluating different techniques, Accenture chose Intel SGX secure enclave technology to train and use AI models. The POC is a prototyped extension of Accenture Applied Intelligence’s AIP+ service, a collection of modular, pre-integrated AI services and capabilities designed to simplify the adoption of AI solutions.
For the broader service expansion, further pilots of Project Amber begin in the fourth quarter. General availability, including support for cloud software-as-service and edge SaaS licensing, is expected in the first half of 2023.
 
  • Like
  • Love
  • Fire
Reactions: 25 users
 
  • Haha
  • Like
  • Fire
Reactions: 6 users

miaeffect

Oat latte lover
Read my post in context you FW. This is a possible alternative unless you have irrefutable proof otherwise.

But here's a start for you and all new investors who you've apparently taken under your wing: Revenue = sales = sales staff doing their jobs!
Screenshot_20230111-174400_Chrome.jpg
 
  • Haha
  • Like
  • Fire
Reactions: 13 users

Diogenese

Top 20
The thing is, AKD2000 or ACE or whatever it’s called is to have both transformers and LSTM, and while I am an old technophobe, the reading I did on transformers confirmed an interplay between them and LSTM.

My opinion only DYOR
FF

AKIDA BALLISTA
I imagine transformers as Big Long-Short Term Memory with added logic (BLSTMWAL).
 
  • Like
  • Haha
  • Love
Reactions: 10 users
Read my post in context you FW. This is a possible alternative unless you have irrefutable proof otherwise.

But here's a start for you and all new investors who you've apparently taken under your wing: Revenue = sales = sales staff doing their jobs!
I would like to be charitable but two things stick out for me in the post you see as reasonable.

1. It takes absolutely no account of the known fact that Prophesee has been working with Sony since 2020 and has just recently finalised a deal to bring a Sony Prophesee-powered camera lens to mass market sometime this year. Despite this, Prophesee still engaged in a $50 million capital raise in the second half of 2022 to cover operating expenses. At least three years to develop the product, and an unknown further period before the product hits the market.

2. I received a DRO HC notification today. I follow them because of possible AKIDA involvement. The release stated that they hoped to reach break-even, going-concern status in 2023 after six years of pursuing commercialisation.

So, taking these cast-iron, gold-clad facts, how is it you can claim, two years into commercialisation on your timetable, that the sales staff at Brainchip are not doing their job? Or is this another one of the uneducated, fabricated opinions that have been thrown around here since the shorts took renewed interest?

Also you might note that the Renesas timetable is identical to the Sony timetable.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 48 users

skutza

Regular
I don't want to be waiting for years here. We need to see contracts signed and actual revenue - soon. At least enough to pay company expenses and be a 'going concern'.
I agree. My thought on this was that we've now got close, with the OEMs, I suppose, catching up. They've played around, now understand, made their products and are now showing them off. Some have finished these steps and are ahead of the game; now they can start selling!!!
 
  • Fire
  • Like
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!
YES, and thank you JB.
Perhaps I am delirious from the unaccustomed sunshine and heat here in marvellous Melbourne and should be sedated, but.......
This, along with our podcast hosted by Sean tomorrow, has really got my spidey sense tingling..... but in a good way. 🤣
The sustained shorting attack, the call up of extra capital for new tech, getting past all the CES retail fluffery with new undercover announcements, Tech's cryptic strawberries and cream (we're playing Wimbledon???), the panic some are exhibiting here................it feels somewhat like the water getting sucked out before a Tsunami. 🤣
I don't know if we'll get an announcement sometime before the podcast is released or what it may say............. Keystone investor???
Pure speculation on my behalf. I know nothing.....nothing! 🤣
But it feels like a powerful wave is on its way.
Just in case.........be ready.
GLTAH
 
  • Like
  • Love
  • Haha
Reactions: 31 users

Foxdog

Regular
I would like to be charitable but two things stick out for me in the post you see as reasonable.

1. It takes absolutely no account of the known fact that Prophesee has been working with Sony since 2020 and has just recently finalised a deal to bring a Sony Prophesee-powered camera lens to mass market sometime this year. Despite this, Prophesee still engaged in a $50 million capital raise in the second half of 2022 to cover operating expenses. At least three years to develop the product, and an unknown further period before the product hits the market.

2. I received a DRO HC notification today. I follow them because of possible AKIDA involvement. The release stated that they hoped to reach break-even, going-concern status in 2023 after six years of pursuing commercialisation.

So, taking these cast-iron, gold-clad facts, how is it you can claim, two years into commercialisation on your timetable, that the sales staff at Brainchip are not doing their job? Or is this another one of the uneducated, fabricated opinions that have been thrown around here since the shorts took renewed interest?

Also you might note that the Renesas timetable is identical to the Sony timetable.

My opinion only DYOR
FF

AKIDA BALLISTA
Respectfully FF, I am not claiming that the sales staff are not doing their jobs. However, I'm not ruling it out either. In the absence of revenue-producing engagements with customers and the associated ASX announcements, which would be required, I think both scenarios are still possible. Regardless of how unpalatable my previous post may seem, it's merely looking at both sides of the coin. You have said yourself that the tipping point is when revenue outpaces operating expenses. I think we all expected this to occur Y/E 2022, regardless of said economic headwinds. Hopefully this expectation is realised in the next 4C and we can say the sales team are doing their jobs. Until then the coin will continue to have two sides.
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Easytiger

Regular
I know that, but if MERC stayed with the Akida chip rather than using Akida IP, I'm sure BRN would oblige.
If so, will the 4c show a debtor for an equivalent amount???
 
  • Like
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I wonder if Sean will ask anything about this in the Podcast?


OCTOBER 04, 2022
Accenture Collaborates with Mars to Develop “Factory of the Future” Using AI, Cloud, Edge and Digital Twins

NEW YORK; Oct. 4, 2022 – Accenture (NYSE: ACN) is working with Mars, the global leader in confectionery, food, and pet care products and services, to transform and modernize its global manufacturing operations with artificial intelligence (AI), cloud, edge technology and digital twins.

Accenture and Mars have been trialing digital twins for Mars’ manufacturing operations since late 2020. Digital twins are virtual representations of machines, products, or processes. Fed with real-time data, they can predict and optimize production processes and equipment performance, from reliability to quality to energy efficiency. Applied to its manufacturing plants, digital twins will enable Mars to simulate and validate the results of product and factory adjustments before allocating time and resources in the physical space.

Mars.jpg


The companies tested a digital twin to reduce instances of over-filling packages, a common problem in the food industry. The digital twin gave Mars a bird’s-eye view of the production lines at one of its factories in Illinois. The twin fed sensor data from manufacturing machinery into a predictive analytics model, which allowed factory line operators to monitor events in real-time and adjust the filling process. After the successful test, Accenture and Mars introduced the solution across the U.S. and developed similar solutions for its pet care business in Europe and China.

Under the new agreement, Accenture and Mars will work together to apply digital twin technology and models to the company’s manufacturing facilities globally. This will give Mars factory line operators real-time insights into current and predictive performance. Mars plans to apply them to dozens of use cases over the next three years.

Over the next two years, Accenture and Mars will create a new cloud platform for manufacturing applications, data and artificial intelligence (AI) to lay the foundation for its vision of the “Factory of the Future.” The new platform will provide next-generation robotics, AI and automation capabilities at the edge to make Mars manufacturing operations significantly more efficient and address essential sustainability goals such as water stewardship and reducing waste and total greenhouse gas emissions.

William Beery, vice president and global CIO at Mars Wrigley, said, “Our collaboration with Accenture, combined with our partnership with Microsoft, enables us to scale digital twin technology to reach this goal, delivering not just significant cost savings and sustainability, but preparing our manufacturing operations for the future of work.”

Larry Thomas, a senior managing director at Accenture and client account lead for Mars adds, “Our work with Mars is about using the power of data, cloud and edge computing to modernize factories, boost business agility in response to change, and put power in the hands of Mars Associates so they can make informed decisions faster.”

Accenture brings cloud, engineering, manufacturing, and supply chain capabilities to the project. It also works closely with Microsoft to leverage the Azure platform and Accenture’s proprietary edge accelerators. Earlier this year, Accenture was named Microsoft’s 2022 Manufacturing & Supply Chain Partner of the Year.

Thiago Veiga, senior director of Digital Supply, R&D & Procurement at Mars Inc., said, “We at Mars are constantly looking for innovative and sustainable ways to create value in our end-to-end supply chain, and digital manufacturing is a key priority.”

Simon Osborne, a managing director at Accenture leading its digital twin work with Mars, said, “The problems we’re solving aren’t new; what’s new is how we use advanced technologies to get real-time data into operators’ hands and apply AI to help them make decisions before problems occur. While many companies are beginning to experiment with digital twins, what sets this project apart is the speed and scaling of the technology across Mars’ operations globally.”

About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services and Accenture Song — all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 721,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com.

About Mars, Incorporated
For more than a century, Mars, Incorporated has been driven by the belief that the world we want tomorrow starts with how we do business today. This common purpose unites our 140,000+ Associates. It is at the center of who we are as a global, family-owned business, and it fuels how we are transforming, innovating, and evolving to make a positive impact on the world. 

Every year, our diverse and expanding portfolio of quality confectionery, food, and pet care products and services delight millions of people and supports millions of pets. With almost $45 billion in annual sales, we produce some of the world’s best-loved brands including Ben’s Original™, CESAR®, Cocoavia®, DOVE®, EXTRA®, KIND®, M&M’s®, SNICKERS®, PEDIGREE®, ROYAL CANIN®, and WHISKAS®. We are creating a better world for pets through nutrition, breakthrough programs in diagnostics, wearable health monitoring, DNA testing, pet welfare and comprehensive veterinary care with AniCura, BANFIELD™, BLUEPEARL™, Linnaeus and VCA™.  

We know we can only be truly successful if our partners and the communities in which we operate prosper. The Mars Five Principles – Quality, Responsibility, Mutuality, Efficiency and Freedom – inspire our Associates to act every day to help create a better world tomorrow in which the planet, its people and pets can thrive.  

For more information about Mars, please visit www.mars.com. Join us on Facebook, Twitter, Instagram, LinkedIn and YouTube.

 
  • Like
  • Love
  • Fire
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
  • Fire
Reactions: 11 users
Respectfully FF, I am not claiming that the sales staff are not doing their jobs. However, I'm not ruling it out either. In the absence of revenue-producing engagements with customers and the associated ASX announcements, which would be required, I think both scenarios are still possible. Regardless of how unpalatable my previous post may seem, it's merely looking at both sides of the coin. You have said yourself that the tipping point is when revenue outpaces operating expenses. I think we all expected this to occur Y/E 2022, regardless of said economic headwinds. Hopefully this expectation is realised in the next 4C and we can say the sales team are doing their jobs. Until then the coin will continue to have two sides.
Thanks for your response which ignores completely the two points I raised.

What I have decided is that the way in which you and others have turned this general discussion thread into a place of unfounded, non-factual negative commentary and empty rhetoric creates an environment which no longer favours serious research and analysis.

Everyone is entitled to have an opinion; indeed, like backsides, everyone has one.

My investment style is not based upon opinions but facts. Factual research no longer dominates the discussion and therefore it no longer has any value as far as I am concerned.

The facts that have been disclosed this week I would have found in one tenth of the time I have spent addressing the sort of rubbish you and others have heaped on here. It was a fruitless and useless task.

This place no longer serves the intended purpose so I bid you and others farewell.

Enjoy the new home you have made for yourselves. I am sure many from HC will love to call in and join you as you descend into abject, righteous self-pity, stamping your little feet, demanding things happen to your timetable and ignoring the facts.

Regards
FF

AKIDA BALLISTA
 
  • Sad
  • Like
  • Love
Reactions: 84 users

VictorG

Member
If so, will the 4c show a debtor for an equivalent amount???
If my assumption is correct, I think it will be in the half yearly report.
 
  • Like
Reactions: 1 users

HopalongPetrovski

I'm Spartacus!
CRAP! 😞
 
  • Like
  • Sad
Reactions: 13 users

Foxdog

Regular
Thanks for your response which ignores completely the two points I raised.

What I have decided is that the way in which you and others have turned this general discussion thread into a place of unfounded, non-factual negative commentary and empty rhetoric creates an environment which no longer favours serious research and analysis.

Everyone is entitled to have an opinion; indeed, like backsides, everyone has one.

My investment style is not based upon opinions but facts. Factual research no longer dominates the discussion and therefore it no longer has any value as far as I am concerned.

The facts that have been disclosed this week I would have found in one tenth of the time I have spent addressing the sort of rubbish you and others have heaped on here. It was a fruitless and useless task.

This place no longer serves the intended purpose so I bid you and others farewell.

Enjoy the new home you have made for yourselves. I am sure many from HC will love to call in and join you as you descend into abject, righteous self-pity, stamping your little feet, demanding things happen to your timetable and ignoring the facts.

Regards
FF

AKIDA BALLISTA
I'm sorry. I thought I addressed the sales force issue, which was the basis of my initial post and your response. I note your examples, but for me I'd like to see actual results in the form of revenue. If your intention on this forum is to live in a self-affirming echo chamber then there is no need to leave. I will continue to visit as a silent observer, as many here value your input; don't leave on my account.
 
  • Like
  • Love
Reactions: 14 users

buena suerte :-)

BOB Bank of Brainchip
Sorry FF, but that just isn't possible!! "I bid you and others farewell"? You give too much to this forum, and we are on the cusp of greatness; you have to be here to share those moments!!

Your levelling comments and Fact Finding are second to none.... Stay with us and keep giving this BRN TSE forum your well-respected input!

Everyone is a little down with the negative market at the moment and getting a bit edgy! (pun intended) :love: .... But that will change VERY soon

Let's get back to positivity Chippers 🙏🙏🙏
 
  • Like
  • Love
  • Fire
Reactions: 51 users
Thanks for your response which ignores completely the two points I raised.

What I have decided is that the way in which you and others have turned this general discussion thread into a place of unfounded, non-factual negative commentary and empty rhetoric creates an environment which no longer favours serious research and analysis.

Everyone is entitled to have an opinion; indeed, like backsides, everyone has one.

My investment style is not based upon opinions but facts. Factual research no longer dominates the discussion and therefore it no longer has any value as far as I am concerned.

The facts that have been disclosed this week I would have found in one tenth of the time I have spent addressing the sort of rubbish you and others have heaped on here. It was a fruitless and useless task.

This place no longer serves the intended purpose so I bid you and others farewell.

Enjoy the new home you have made for yourselves. I am sure many from HC will love to call in and join you as you descend into abject, righteous self-pity, stamping your little feet, demanding things happen to your timetable and ignoring the facts.

Regards
FF

AKIDA BALLISTA
FF,
Don't let these WANCA's win.
Stay and fight the good fight because WE will win in the end because we know the facts.
FACTS will prevail over FICTION...(but not science fiction:love:)
Those on here that know the truth via the facts take no notice of the infiltrators.
Keep Finding Facts, keep reporting the Facts because minorities might have the voice but the majority here are 100% behind you.

That's my opinion...

Baron
 
  • Like
  • Love
  • Fire
Reactions: 45 users

4 Bits Are Enough


Peter AJ van der Made
Traditional convolutional neural networks (CNNs) use 32-bit floating point parameters and activations. They require extensive computing and memory resources. Early convolutional neural networks such as AlexNet had 62 million parameters.
Over time, CNNs have increased in size and capabilities. GPT3 is a transformer network that has 175 billion parameters. The most significant AI training runs have increased exponentially, with an average doubling period of 3.4 months. Millions or billions of Multiply and Accumulate (MAC) functions must be executed for each inference. These operations are performed in batches of data on large servers with stacks of Graphics Processing Units (GPU) or costly cloud services, and these requirements keep accelerating.
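To put that doubling period in perspective, a quick back-of-the-envelope calculation (my own illustration, not from the blog) shows what a 3.4-month doubling time implies over a year:

```python
# If training compute doubles every 3.4 months, the annual growth factor
# is 2 raised to the power (12 / 3.4). Illustrative arithmetic only.
doubling_period_months = 3.4
growth_per_year = 2 ** (12 / doubling_period_months)
print(f"{growth_per_year:.1f}x per year")  # roughly 11.5x
```

That is roughly an order of magnitude more compute demanded every year, which is the backdrop for the efficiency argument that follows.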
At the other end, the increasing popularity of Deep Learning networks in small electronic devices demands energy-efficient, low-latency solutions, sometimes for similar models. Deep Learning networks are seen in smartphones, industrial IoT, home appliances, and security devices. Many of these devices are subject to stringent power and security requirements. Security issues can be mitigated by eliminating the uploading of raw data to the internet and performing all or most of the processing on the device itself. However, given the constraints at the edge, models running on these devices must be much more compact in every dimension, without compromising accuracy.
A new architectural approach such as event-based processing with at-memory compute is fundamental to addressing the efficiency challenge. These draw inspiration from neuromorphic principles, mimicking the brain to minimize operations and hence energy consumption. However, energy efficiency is the cumulative effect of not just the architecture, but model size including width of weights and activation parameters. In particular, support for 32-bit floating point requires complex and large-footprint hardware. The reduction in size of these parameters and weights can provide a substantial benefit in performance and in reducing the hardware needed to compute. However, this must be judiciously and innovatively done to keep the outcomes and accuracy similar to the larger models. With the process of quantization, activation parameters and weights can be converted to low bit-width values. Several sources have reported that lower precision computation can provide similar classification accuracy at lower power consumption and better latency. This enables smaller footprint hardware implementation, that reduces development, silicon, and packaging cost, enabling on-device processing in handheld, portable, and edge devices.
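To make the bit-width argument concrete, here is a small illustrative calculation (my own, not from the blog) of the weight storage a 62-million-parameter model like AlexNet needs at 32-bit versus 4-bit precision:

```python
def weight_storage_mb(n_params: int, bits_per_weight: int) -> float:
    """Storage for the weights alone, in megabytes (8 bits per byte)."""
    return n_params * bits_per_weight / 8 / 1e6

alexnet_params = 62_000_000  # AlexNet figure quoted above
fp32 = weight_storage_mb(alexnet_params, 32)
int4 = weight_storage_mb(alexnet_params, 4)
print(f"fp32: {fp32:.0f} MB, 4-bit: {int4:.0f} MB ({fp32 / int4:.0f}x smaller)")
# prints: fp32: 248 MB, 4-bit: 31 MB (8x smaller)
```

The 8x reduction applies to weight storage and memory traffic; the total energy saving also depends on the MAC hardware, which is why the architectural changes described above matter as well.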
MetaTF_Header-400x257-1.jpeg

To make the development process easier, Brainchip has developed the MetaTF™ software that integrates with TensorFlow ™ (and other edge AI development flows), including APIs for 4-bit processing and quantization functionality, to enable retraining and optimization.
The developers can therefore seamlessly build and optimize for the Akida Neural Processor and benefit from executing neural networks entirely on-chip, efficiently, with low latency.
Quantization is the process of mapping continuous infinite values to discrete finite values, or, in the case of modern AI, mapping larger floating-point values to a discrete set of smaller numbers. The quantization method obtains an efficient representation, manipulation, and communication of numeric values in Machine Learning (ML) applications. In 4-bit quantization, 32-bit floating point numbers are mapped onto a discrete set of 16 values (0 to 15) to minimize the number of bits required while maintaining the accuracy of the classification. Remarkable performance is achieved in 4-bit quantized models for diverse tasks such as object classification, face recognition, segmentation, object detection, and keyword recognition.
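As a minimal sketch of the mapping described above (uniform affine quantization; illustrative only, not BrainChip's actual algorithm), float weights can be snapped to the 16 levels a 4-bit code allows and then reconstructed:

```python
def quantize_4bit(weights):
    """Map float weights to 4-bit codes (0..15) plus a scale and zero point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 15 or 1.0  # 15 steps span the 16 levels (guard: constant input)
    codes = [min(15, max(0, round((w - w_min) / scale))) for w in weights]
    return codes, scale, w_min

def dequantize(codes, scale, zero_point):
    """Reconstruct approximate float weights from the 4-bit codes."""
    return [c * scale + zero_point for c in codes]

weights = [-0.8, -0.3, 0.0, 0.45, 1.2]
codes, scale, zp = quantize_4bit(weights)
restored = dequantize(codes, scale, zp)
# Rounding to the nearest level bounds the error by half a step (scale / 2).
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

In practice the integer codes, not the floats, are what the hardware stores and computes with; retraining, which MetaTF's flow supports, then recovers most of the accuracy lost to the rounding.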
The Brainchip Akida neural processor performs all the operations needed to execute a low bit-width Convolutional Neural Network, thereby offloading the entire task from the central processor or microcontroller. The design is optimized for high-performance Machine Learning applications, resulting in efficient, low power consumption while performing thousands of operations simultaneously on each phase of the 300 MHz clock cycle. A unique feature of the Akida neural processor is the ability to learn in real time, allowing products to be conveniently configured in the field without cloud access. The technology is available as a chip or a small IP block to integrate into an ASIC.
Table 1 provides the accuracy of several 4-bit CNN networks, comparable to floating-point accuracies. For example, AkidaNet is a version of MobileNet optimized for 4-bit classification, and many other example networks can be downloaded from the Brainchip website. In the quantization column below, ‘a/b/c’ means ‘a’ is the weight bits for the first layer, ‘b’ the weight bits for subsequent layers, and ‘c’ the output activation map bits for every layer.
Screenshot-2023-01-05-at-4.15.10-PM-1024x587.png

Table 1. Accuracy of inference.
AkidaNet is a feed-forward network optimized to work with 4-bit weights and activations. AkidaNet 0.5 has half the parameters of AkidaNet 1.0. The Akida hardware supports Yolo, DeviceNet, VGG, and other feed-forward networks. Recurrent networks and transformer networks are supported with minimal CPU participation. An example recurrent network implemented on the AKD1000 chip required just 3% CPU participation with 97% of the network running on Akida.
4-bit network resolution is not unique. Brainchip pioneered this Machine Learning technology as early as 2015 and, through multiple silicon implementations, tested and delivered a commercial offering to the market. Others have recently published papers on its advantages, such as IBM, Stanford University and MIT.
Untitled-design-1.png

Akida is based on a neuromorphic, event-based, fully digital design with additional convolutional features. The combination of spiking, event-based neurons, and convolutional functions is unique. It offers many advantages, including on-chip learning, small size, sparsity, and power consumption in the microwatt/milliwatt ranges. The underlying technology is not the usual matrix multiplier, but up to a million digital neurons with either 1, 2, or 4-bit synapses. Akida’s extremely efficient event-based neural processor IP is commercially available as a device (AKD1000) and as an IP offering that can be integrated into partner System on Chips (SoC). The hardware can be configured through the MetaTF software, integrated into TensorFlow layers equating up to 5 million filters, thereby simplifying model development, tuning and optimization through popular development platforms like TensorFlow/Keras and Edge Impulse. There are a fast-growing number of models available through the Akida model zoo and the Brainchip ecosystem.
To dive a little deeper into the value of 4-bit, in its 2020 NeurIPS paper IBM described the various pieces that are already present and how they come together. They prove the readiness and the benefit through several experiments simulating 4-bit training for a variety of deep-learning models in computer vision, speech, and natural language processing. The results show a minimal loss of accuracy in the models’ overall performance compared with 16-bit deep learning. The results are also more than seven times faster and seven times more energy efficient. And Boris Murmann, a professor at Stanford who was not involved in the research, calls the results exciting. “This advancement opens the door for training in resource-constrained environments,” he says. It would not necessarily make new applications possible, but it would make existing ones faster and less battery-draining “by a good margin.”
With the focus on edge AI solutions that are extremely energy-sensitive and thermally constrained and require efficient real-time response, this advantage of 4-bit weights and activations is compelling and shows a strong trend in the coming years. Brainchip has pioneered this path since 2016 and invested in a simplified flow and ecosystem to enable developers. BrainChip’s MetaTF compilation and tooling are integrated into TensorFlow™ and Edge Impulse. TensorFlow/Keras is a familiar environment to most data scientists, while Edge Impulse is a strong emerging platform for Edge AI and TinyML. MetaTF, many application examples, and source code are available free from the Brainchip website: https://doc.brainchipinc.com/examples/index.html
Brainchip continues to invest in advanced machine-learning technologies to further its market leadership.
Source: IBM NeurIPS proceedings 2020: https://proceedings.neurips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf
Source: MIT Technology Review. https://www.technologyreview.com/2020/12/11/1014102/ai-trains-on-4-bit-computers/
Skimming and catching up some reading and nice work & thanks to all for sharing the PVDM blog and the Accenture info.

A little section caught my eye as follows....

An example recurrent network implemented on the AKD1000 chip required just 3% CPU participation with 97% of the network running on Akida.
4-bit network resolution is not unique. Brainchip pioneered this Machine Learning technology as early as 2015 and, through multiple silicon implementations, tested and delivered a commercial offering to the market. Others have recently published papers on its advantages, such as IBM, Stanford University and MIT.

The following company was something I was looking at yesterday.

Anyway, who knows if they decided to maybe use already available COTS tech to assist?

Haven't dug any deeper yet.


IMG_20230111_152806.jpg



Screenshot_2023-01-11-15-25-10-36_4641ebc0df1485bf6b47ebd018b5ee76.jpg


IMG_20230111_153806.jpg

Screenshot_2023-01-11-15-26-52-03_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Love
Reactions: 31 users