BRN Discussion Ongoing

AARONASX

Holding onto what I've got
Merry Christmas everyone

1671749620990.png
 
Reactions: 41 users
AWESOME ..................... thanks @Baisyet
Why doesn't MD just say that Intel is adopting Akida SNN technology ...................... :rolleyes:

@Diogenese ........................ any thoughts much appreciated

AKIDA BALLISTA
One reason I can think of is if Mike Davies/Intel says this loud and clear it is a signal to every other company that they don’t need Intel and can go straight to Brainchip.

At the moment they can continue to try and control the narrative that Loihi is the way to neuromorphic ascendancy and Brainchip AKIDA IP is a niche chip for limited use cases and if you really want it they can supply it.

But Brainchip is building itself into as many ecosystems as it possibly can in much the same way as a virus moves through a community.

Renesas is tapping out at the low end while at the same time Edge Impulse is comparing AKIDA with GPUs.

Peter van der Made has stated that the market has no understanding of the significance of MegaChips to the future success of Brainchip.

Prophesee and the use cases it referenced for an AKIDA-Prophesee event-based intelligent sensor pick up many of the industry directions referenced in the report I posted above.

At the same time ARM the chip supply monster is promoting AKIDA across virtually every industrial use case.

NASA is clearly exploring the use of AKIDA as an essential element of deep space exploration, and DARPA is deeply embedding the AKIDA technology in radar and other use cases via ISL and others.

Then Biotome is one of the known medical research companies exploring the use of AKIDA for this industry.

Mercedes Benz is extolling the AKIDA advantage over all current technology options in the automotive industry.

Remember Alex the Rocket Scientist stating that he refers to autonomous vehicles as robots because that is what they are technically. Mercedes Benz has not simply extolled the benefits of using AKIDA for cars needing voice recognition but for every single robotic use case from drones to personal assistants.

Carnegie Mellon University and others are now teaching AKIDA Science Fiction to the next generation of technology entrepreneurs and innovators who will populate the research labs and offices of the technology giants.

Brainchip’s Board and Management are, in my opinion, brilliantly creating an environment where AKIDA is ubiquitous, and if you are not involved in some way then you are not on the right technology page.

In my opinion Intel had no choice but to join and will try to control an uncontrollable narrative which has Renesas offering AKIDA for making low end MCU’s smart and Edge Impulse describing AKIDA as a threat to GPUs and the stuff of Science Fiction.

The eventual release of AKIDA next gen into the established and growing ecosystem will be like hitting the nitrous switch on a dragster.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 102 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Interesting read.......


Editorial: Renaissance of Biomimicry Computing​

William A. Casey​

  • United States Naval Academy, Annapolis, Maryland, USA

Yang Cai​

  • Carnegie Mellon University, Pittsburgh, PA, USA


"We are in the era of the Renaissance of Biologically Inspired Computing. Our society has been increasingly digitized and become more and more complex. It needs more effective algorithms. For example, how to incorporate an artificial immune system to improve cyber security? How do we use evolutionary algorithms to solve the scalability problem in a Blockchain system? How to detect Deep Fake media content? On the other hand, modern biological discoveries provide new computational models for problem-solving, for example, CRISPR therapy and RNA vaccine, et al. Furthermore, modern computing technologies enable more powerful means to implement biomimicry algorithms, for example, biomorphic chips such as Akida Spiking Neural Network chip [17]."




Larry


 
Reactions: 42 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
An oldie, but a goodie. We're mixing with the "in" crowd now. Funny how the same names keep popping up.


Screen Shot 2022-12-23 at 9.54.29 am.png
 
Reactions: 34 users
Questions that are obvious in relation to this adoption..
- What’s royalty/R&D revenue likely to be, assuming test chips are built, tested and refined with BRN IP along the way, and assuming two years before commercial cars are on sale and in mass production?

- If there are existing customers that are testing Akida IP now, wouldn’t there be some amount of royalty revenue for their R&D phase?

Info on licensing deals structure would be beneficial with the view we all see the Akida tech infiltration into the edge industry, however no royalty revenue. Would I be wrong in assuming that royalty revenue does not happen until there’s a commercial sale of a product?
One of the hardest concepts I found to explain in legal practice was what could be in a contract.

Ninety percent at least of clients in small business had a kind of mystical view about contracts and what they could contain.

The simple fact however is that you can agree anything you want in a contract as long as it is not illegal and is capable of being performed.

So while your questions are intelligent and reasonable they are impossible to answer as Brainchip and ecosystem partners can agree to do anything under the sun in any imaginable way. As long as what they agree is not illegal or impossible.

Brainchip could agree to delay receipt of royalties for ten years if ARM agrees to implement AKIDA IP in fifty percent of all its chip designs.

I am not saying they would or have but all things legal and possible are possible.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Back in March 2022 NVIDIA was also in talks with Intel about using Intel's Foundry Services to manufacture some of its chips. I found this extract very interesting, and it shows how important collaboration is in this industry.


Screen Shot 2022-12-23 at 10.12.50 am.png


 
Reactions: 24 users

Learning

Learning to the Top 🕵‍♂️
Here is something for everyone to ponder also.

We as shareholders love Dell Technology.

And with this recent announcement of Brainchip joining Intel Foundry Services (IFS).

Dell Technology has primarily used Intel chips for the last 35 years.


We know that Dell Technology fully knows of Brainchip's AKIDA.


With such knowledge, I believe Dell Technology will adopt Brainchip's AKIDA to be part of the solution in the not too distant future, through Intel. (JMHO)

Screenshot_20221223_102312_Samsung Notes.jpg


Wishing everyone a Merry Christmas 🎅🎄🎁 and a prosperous New Year 🎆🧨🧧

Learning. 🥳🍾🏖
 
Reactions: 79 users

stuart888

Regular
Synonyms of Critical: crucial, decisive, momentous, deciding factor
Critical in Safety as Explained by the Auto Graphic says a lot! What can Brainchip AI do? The brains for all the sensors = Akida Fusion.

Big money is going to be made in all these areas. Love the Traffic Sign Recognition, which is so needed. Cannot wait until my girlfriend no longer impatiently says "Green" at a stop light because the car just starts moving immediately when it turns green.

The safety defensive smarts will be implemented first, (emergency braking and steering avoidance) then later the go on green. ADAS Safety could be the Brainchip star?

Happy to be on the ride.

1671753345707.png
 
Reactions: 23 users
D

Deleted member 2799

Guest
Drunk Santa Claus GIF
Wish you all a Merry Christmas and a good start into the new year! May we see next year a better market situation than this year! Stay healthy and happy

Best
7
 
Reactions: 28 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Good Morning Chippers,

Great news with the Socionext announcement last night.

WooooHoooo.

Quote from release..

Socionext uses state of the art process technologies, such as 7nm & 4nm , to produce automotive application performance requirements.

So.....

I think this is the FIRST TIME we have had confirmation of Brainchip's IP going into 7nm & 4nm chips.

WooooHoooo.

Regards,
Esq.


Great post @Esq.111!

Hmmm...All this talk of 4nm chips has got me thinking about Tesla since it has just placed a HUGE order for 4nm chips at TSMC's new US facility in Arizona with volume production expected to begin in 2024. Tesla's Hardware 3 in its electric cars will be replaced by Hardware 4 with the 4nm chips from TSMC. It says in another article that at the moment details are scarce but it's expected that this "new chip" will increase the range capacity as well as triple the power of the current model.

A girl can always dream...

Screen Shot 2022-12-23 at 11.08.33 am.png



giphy.gif




 
Reactions: 38 users

Boab

I wish I could paint like Vincent

Another year just about done and dusted .

Wishing all the contributors on here, the Brainchip Management and employees, and my favourite IT girl Anastasi a happy and safe Christmas and a prosperous new year. Looking forward to a successful 2023.

Just had a look at the ARM website. It's been 7 months since the Brainchip/ARM AI partnership was announced, and Brainchip is still the first company that comes up when you sort by relevance in the industries/tech listed below.

Industry and Technologies​


They also have a new case study article on their site which links back to the Brainchip site. I don't know if this is new, but it's below; a good read!!!!

What is the Akida Event Domain Neural Processor?​

By Brien M. Posey​


Previous generations of artificial intelligence and machine learning chips were useful, but their time is rapidly running out. The constraints on power and bandwidth imposed by edge devices mean that it’s time for a new paradigm, a new design that fulfills the promise of AI and ML at the edge. It’s time for the BrainChip Akida processor.


Although machine learning (ML) has existed for some time, the technology is still evolving. The BrainChip Akida processor overcomes many of the challenges that have long been associated with ML, particularly regarding deep learning neural networks.


The Evolving Artificial Intelligence Model​


Before we jump into the guts of how BrainChip’s Akida Neural Processor works, it’s important to understand what it does and how it will transform artificial intelligence (AI).


BrainChip has focused the past 15 years on evolving the art of AI to overcome the shortcomings of today’s deep learning technologies.


In utilizing AI, corporations are processing exabytes of data to extract information for a wide range of purposes, including surveillance and security, consumer behavior, advertising, language processing, video analysis, financial predictions, and many more.


These applications have spawned a monumental market for both software and hardware and have transformed nearly every industry. There can be no argument that the breakthroughs have been extraordinary, and the growth rate of applications has been explosive. Yet, this has represented, to date, only the tip of the iceberg for AI capabilities. With the expansion of the Internet of Things (IoT) comes a parallel expansion of AI into everyday appliances in the home, office, and industry.


Today’s systems, although impressive, are merely first- and second-generation solutions relying on over-simplified and limited representations of how nature’s intelligence—the brain—really functions. Today’s systems have limited to no ability to learn without huge amounts of labeled data and many repetitions of deep learning and training cycles.


Deep learning systems recognize an object by statistically determining the number of features that match an image—features that were extracted from millions of images that it was previously trained on. These systems use several orders of magnitude more power than the brain.


Currently, ML systems rely on power-hungry CPUs and GPUs physically located in large data centers to ingest, process, and retrain data which is generated in a highly distributed fashion all over the globe. This drives an ever-growing and insatiable need for communication bandwidth to move the data to the data center.




BrainChip deems this model ripe for a revolution, and that AI needs to evolve to support intelligence at the location where the data is generated or sensed. It believes that the future of AI lies in the ability to achieve ultra-low power processing as data is being interpreted and transformed into information, and that learning needs to be autonomous and continuous.


BrainChip has developed the Akida Neural Processor to solve the problems inherent in moving AI out of the data center and to the location where data is created: the edge, of which a large segment is often referred to as IoT.


This has several advantages. The most important one is privacy and a sharp reduction of dependency on the Internet. You would not want a device in your home that shoots images up to the internet, where they can be hacked and viewed by anyone—but a warning sent over the internet to your phone when an intruder or other unrecognized person enters your home would be an advantage.


What Is a Neural Network?​


A neural network lies at the core of all AI. As its name implies, a neural network is modeled on the principles of neural processing—the cells that make up the brain network. However, today’s technology (deep learning) is, at best, only loosely related to how the brain functions.


Neuromorphic computing is a field of computer science based on the study of the brain, and how the function of neural brain cells can be utilized in silicon to perform cognitive computing.


BrainChip has developed the Akida neural processor utilizing the fundamental concepts of Neuromorphic computing, in combination with the advances made in deep learning.


The Akida neural processor is a flexible, self-contained, event-based processor that can run today’s most common neural networks, Convolutional Neural Networks in event-based hardware, as well as the next-generation Spiking Neural Networks.


The Akida neural processor is ultra-low power, requires only internal memory, and can perform inference and instantaneous learning within an AI solution. It represents the third generation of neural networking, and the next step in the evolution of AI.


What Is the Akida Neural Processor?​


What makes Akida so different from first and second-generation neural processors? Unlike those legacy processors, the Akida processor is event-based, which means it processes data in the form of events.


Events are the occurrences where things happen, such as a change of contrast in a picture, or a change of color. The human visual system encodes images in the same way.


An event is expressed as a short burst of energy. In Akida, the burst can have a value that indicates neural behavior. No events are generated where zero values occur in the network—for instance, where blank areas occur in a picture—making Akida’s processing scheme intrinsically sparse. In other words, if no events exist or are generated, no processing needs to occur.
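The sparsity principle above can be sketched in a few lines of Python. This is purely illustrative (the function name, frame, and values are all invented, not BrainChip's implementation): only non-zero activations become events, so downstream work scales with the number of events rather than the size of the input.

```python
# Illustrative sketch of event sparsity -- not BrainChip's actual code.
# Only non-zero activations are converted into events; blank regions
# generate nothing, so nothing downstream has to process them.

def to_events(frame):
    """Turn a 2D activation map into (row, col, value) events, skipping zeros."""
    return [(r, c, v)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v != 0]

frame = [
    [0, 0, 0, 0],
    [0, 3, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

events = to_events(frame)
print(len(events))  # 2 events instead of 16 dense values
```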


The Akida processor uses an encoding scheme called “rank coding,” in which information is expressed as the time and place it occurs. Akida is not programmed in the traditional sense—it consists of physical neuron and synapse circuits configured for a specific task, defining the dimensions and types of network layers.
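A toy version of rank coding, to make the idea concrete (an illustration of the general concept only; the article does not spell out Akida's actual encoding scheme): stronger inputs fire earlier, so the order and position of events carry the information rather than analog magnitudes.

```python
# Toy rank-order coding sketch -- illustrative only.
def rank_code(values):
    """Return input indices in firing order: larger values spike earlier.
    Zero inputs never fire, so information is carried by *where* and
    *in what order* events occur, not by analog values."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    return [i for i in order if values[i] > 0]

print(rank_code([0.2, 0.9, 0.0, 0.5]))  # [1, 3, 0]: index 1 fires first
```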


The entire network is mapped to physical neuron and synapse circuits on the chip. Synapses store weight values and are connected to neurons, which integrate the weight values when they're released by an incoming event. Each neuron can have thousands of synapses. Each reconfigurable core can contain the equivalent of tens of thousands of neurons.


Power is consumed only when inputs to a neuron exceed the predetermined threshold and generate an action potential to be processed by subsequent layers in the network.


No output event is generated when the sum of synaptic inputs is zero or negative, significantly reducing the processing requirements in all the following layers. The neural and synapse functions in the Akida neural fabric are entirely implemented in digital hardware. Therefore, no computer code is running within any of the neural cores, resulting in a very low overall power consumption of approximately 3 pico-Joules per synaptic operation (in 28nm technology).
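The threshold behaviour described above, plus the quoted ~3 pJ-per-synaptic-operation figure, can be made concrete with a small sketch (the weights, threshold, and the `neuron_fires` helper are invented for illustration, not an Akida API):

```python
# Thresholded neuron sketch: an output event is generated only when the
# integrated synaptic input reaches the threshold; zero or negative sums
# produce no event (and hence no downstream work).
def neuron_fires(synaptic_weights, active_inputs, threshold):
    potential = sum(synaptic_weights[i] for i in active_inputs)
    return potential >= threshold

weights = {0: 2, 1: -1, 2: 3}
print(neuron_fires(weights, [0, 2], threshold=4))  # True: 2 + 3 >= 4
print(neuron_fires(weights, [1], threshold=4))     # False: no output event

# Back-of-envelope energy using the ~3 pJ/synaptic-op figure quoted above:
# one million synaptic operations is roughly 3 microjoules.
energy_joules = 1_000_000 * 3e-12
print(energy_joules)  # ~3e-06 J
```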


As stated previously, the Akida neural processor is a complete, self-contained, purpose-built neural processor. This is in stark contrast with traditional solutions, which utilize a CPU to run the neural network algorithm, a deep learning accelerator (such as a GPU) to perform multiply-and-accumulate mathematical operations (MACs), and memory to store network parameters (see Figure 1).


Screen-Shot-2022-05-07-at-6.59.18-PM-300x262.png



Figure 1: A traditional Neural Processing solution using a CPU, Deep Learning Accelerator, and external memory vs. the Akida solution as a fully integrated, purpose-built neural processor


By integrating all the required elements into a consolidated, purpose-built neural processor, the Akida processor eliminates the excess power consumption associated with the interaction and communication between the three separate elements, as well as minimizing the physical footprint.


How Does the Akida Event-Based Neural Processor Work at Ultra-Low Power?​


As described earlier, the Akida neural processor is differentiated from other solutions by two major factors:


  1. It is a complete, fully integrated, purpose-built neural processor
  2. It is an event-based processor

By fully integrating the neural network control, the parameter memory, and the neuronal mathematics, the Akida neural processor eliminates significant compute and data I/O power overhead. This factor alone can save multiple watts of unnecessary power consumption.


The Akida event processor is constructed from event-based neurons, which work in a manner much more like the way the brain operates than the “perceptron” style neurons used in today’s deep learning neural network hardware solutions.




All neural networks consist of some form of simulation or emulation of “neural cells” and the weighted connections between those cells. The connections between neural cells have memory, store a value, and are called “synapses” (see Figure 2).


In the end, only information is processed and consumes energy. In the Akida event domain processor, “events” or “spikes” indicate useful information, eliminating wasted effort. This is a core principle.


Screen-Shot-2022-05-07-at-7.07.52-PM-300x136.png



Figure 2: Biological neurons are cells that communicate with one another and store information in synapses. A neuron can have hundreds of thousands of synapses, the content of which is recalled by sensory input action potentials. The neuron integrates the values of active synapses and generates an action potential output when the integrated value reaches or exceeds a threshold value. Artificial Neural Networks model similar behavior.


This is fundamentally different from the function of artificial neurons in Deep Learning Convolutional Neural Network hardware implementations, which process all information without discerning whether it contains valuable information or not.


Every pixel in an image is converted to data and processed, whether it contains any information or not.


To illustrate how this works, consider an extreme case. You could give a “standard” Convolutional Neural Network a blank page to process, and it will take every pixel and process it through millions of multiply-accumulate instructions to find out that the page is blank.


The Akida event-based processing method works like how the human brain would process a blank page: since there are no lines or colors on the page, it receives no events, so it does not need to process anything. It is this reduction of data that must be processed, known as “sparsity,” that leads to significant power savings.
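Putting hypothetical numbers on the blank-page example (the frame size and per-pixel MAC budget below are invented for illustration): a dense convolutional pass costs the same whether the page is blank or not, while the event-based path does work proportional to the events received.

```python
# Hypothetical cost comparison for the blank-page example above.
width, height = 640, 480        # invented frame size
macs_per_pixel = 900            # invented per-pixel MAC budget

dense_ops = width * height * macs_per_pixel  # dense CNN: cost is fixed
blank_page_events = 0                        # no contrast -> no events

print(dense_ops)          # 276480000 operations even for a blank page
print(blank_page_events)  # 0 operations on the event-based path
```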


Combined with state-of-the-art circuit architecture and implementation, the Akida neural processor has demonstrated power reduction of up to 10x over the most power-efficient alternatives. In addition, power savings are up to 1,000x compared with standard data center architectures. For AI applications at the edge, where information is created, power budgets can be limited to micro-watts or milli-watts. The Akida platform, with its ultra-low power consumption, meets the power budget requirements for these applications.


Screen-Shot-2022-05-09-at-9.52.09-AM-300x285.png



Figure 3: the evolution of training and learning


How Does Akida Learn?​


Training is an extremely time- and energy-consuming process in today’s deep learning solutions, as it requires a tremendous amount of hand-labeled input data (datasets) and extremely powerful compute infrastructures to train a neural network.


All this has resulted in very useful and powerful solutions, but with a significant drawback—once an AI solution is trained, it's not easy for the system to learn new things without going through the entire training process again, this time including the new information.


The Akida processor offers a solution that can take a deep-learned neural network, run inference on that network, and then learn new things without going through retraining. Figure 3 shows the evolution of training and learning. Akida represents the third generation of AI, in which instantaneous learning is enabled.


In native learning mode, event domain neurons learn quickly through a biological process known as Spike Time Dependent Plasticity (STDP), in which synapses that match an activation pattern are reinforced. BrainChip is utilizing a naturally homeostatic form of STDP learning in which neurons don’t saturate or switch off completely.


STDP is possible because of the event-based processing method used by the Akida processor and can be applied to incremental learning and one-shot or multi-shot learning.
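A minimal STDP-flavoured update makes the mechanism concrete. This is illustrative only: the article does not disclose Akida's actual homeostatic rule, and `stdp_update`, its learning rate, and its bounds are invented. Synapses whose input events precede the neuron's output spike are strengthened, others are weakened, and weights are kept inside bounds so they neither saturate nor switch off.

```python
# Toy STDP sketch -- not Akida's actual learning rule.
def stdp_update(weights, pre_spike_times, post_spike_time,
                lr=0.1, w_min=0.05, w_max=1.0):
    """Strengthen synapses whose input spike preceded the output spike,
    weaken the rest, and clip so no weight saturates or dies."""
    new_w = []
    for w, t_pre in zip(weights, pre_spike_times):
        if t_pre is not None and t_pre <= post_spike_time:
            w += lr * (w_max - w)   # causal input: reinforce
        else:
            w -= lr * w             # silent/late input: depress
        new_w.append(min(w_max, max(w_min, w)))
    return new_w

w = stdp_update([0.5, 0.5], pre_spike_times=[1.0, None], post_spike_time=2.0)
print(w)  # first synapse strengthened, second weakened
```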


The next generation of AI solutions will evolve by utilizing the concepts learned from studying the biological brain. The BrainChip Akida neural processor embodies this evolution by incorporating event domain neural processors in a practical and commercially viable way.




The ability to move AI to the edge depends upon a fundamental shift in how the core of AI solutions is built. The Akida neural processor provides the means. It’s a self-contained, efficient neural processor that’s event-based for maximum efficiency, ultra-low power consumption, and real-time learning. Instantaneous learning reduces the need for retraining, and its processing capabilities eliminate the need for constant internet connectivity.


The BrainChip Akida Neural processor is the next generation in AI that will enable the edge. It overcomes the limitations of legacy AI chips that require too much power and bandwidth to handle the needs of today’s applications, and moves the technology forward in a significant leap, allowing AI to do more with less. Akida’s time has come.

Happy Season 9 GIF by Curb Your Enthusiasm
The guy who wrote this article is also a scientist and has a big connection with Microsoft.
 
Reactions: 18 users

Boab

I wish I could paint like Vincent
The guy who wrote this article is also a scientist and has a big connection with Microsoft.
Pretty sure it's the same guy.
Brien.jpg
 
Reactions: 20 users

TECH

Regular
Nice work @Baisyet ..... WOW!

Intel will manufacture Brainchip's chips


nuclear explosion GIF

Good morning from another beautiful day in Perth...30c already at 8am

I'm not too sure that the above quote is 100% accurate; other companies' chips with our IP embedded would be a lot more accurate.

We simply don't supply chips; IP blocks are how I understand it to be moving forward. I also understand what she is implying, and maybe I'm being a little pedantic.

And for Santa's little helpers still shaking our Christmas Tree, the only thing falling off is fluff, which we don't deal in anymore.
Our tree will never fall over no matter how much shaking you give it, why, because our foundations are rock solid.

See you on the other side of CES 2023. I believe that my neighbour has a meeting arranged with the Brainchip team in Las Vegas to discuss the possibility of having her engineers in the South African mining industry work with our guys to develop Akida technology for underground mining in areas such as gas sensing, predictive maintenance and vibration analysis.

I'll ask her to take some photos if possible while at CES.

Tech x (y)🎅
 
Reactions: 61 users

IloveLamp

Top 20
Reactions: 16 users

stuart888

Regular

I thought this was fantastic. For the people that want to see how a person implements the Edge Impulse solution, this is a great video. Edge Impulse is a huge enabler for Brainchip, making things easier for analysis of the entire solution.

I liked the trial-and-error process of implementing the SNN solution. The product lets you fine-tune the data, sensors, and the SNN to get a result as a factual percentage of success. I felt hands-on watching this, understanding what developers do.

Good for the novice. Our partners at Edge Impulse are working for us. Could they become the Adobe Photoshop of SNN tools?
 
Reactions: 20 users

Glen

Regular
It's a very delicate situation.

Imagine telling your supervisors, bosses and investors that they spent billions to develop an inferior product!

You don't just go "oh, we were wrong" and head the other way. It takes a plan and time to steer that giant vessel in a different direction. Think how many jobs, manager roles and friends would be booted by a sharp change of course. It's a big political game also.

Now Intel will make money and be in the chip race for edge AI. It's only a matter of time before the rest join.

I know we would love to hear them sing AKIDA but they won't not yet.

This partnership was the last DD I needed to really feel confident in BRN succeeding. As Arm, Merc, Renesas, Mega Chips and more weren't 🙄

This is a sign how quick others will adopt.
IMO

Merry Christmas Happy Holidays folks.
I believe NASA and the US defense dept needed an American company to produce our chips or IP.
 
Reactions: 15 users
Sorry if posted already


"It’s also putting out a lot of use case material which our friends at Edge Impulse have been really good at doing, picking a variety of hardware platforms, a variety of applications.
Hi @IloveLamp

I had not read this article but it is a must read for two reasons:

1. Everything they talk about now and future AKIDA 1.0 already offers COTS;

2. The messaging designed by Jerome Nadel for Brainchip is being adopted by the industry - “doing more with less” - mimicry of this kind is a sure sign that you are getting your message out and are starting to lead and control the narrative.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Yobe and SoundHound. Hmmm..

Look at what I found on Yobe's website (below).

I also noticed Yobe is on Arm's ecosystem catalogue. Says they're an "algorithm provider", so no SNN. Not sure if they could benefit from incorporating Akida into their solution, since we know Akida is compatible with Arm's entire product family??




Screen Shot 2022-12-23 at 12.22.32 pm.png






Screen Shot 2022-12-23 at 12.31.38 pm.png
 
Reactions: 24 users

Diogenese

Top 20
Yobe and SoundHound. Hmmm..

Look at what I found on Yobe's website (below).

I also noticed Yobe is on Arm's ecosystem catalogue. Says they're an "algorithm provider", so no SNN. Not sure if they could benefit from incorporating Akida into their solution, since we know Akida is compatible with Arm's entire product family??




View attachment 25185




View attachment 25187
View attachment 25188
View attachment 25190

Have Yobe been resting on their laurels?

This patent is from 2010 - nothing since unless in the last 18 months. The system diagram is from the last millennium.

US10403302B2 Enhancing audio content for voice isolation and biometric identification by adjusting high frequency attack and release times

1671760183216.png


Systems and methods for isolating audio content and biometric authentication include receiving, with an audio receiver, an audio signal spanning a plurality of frequency bands, identifying a speech signal carried by a voice frequency band selected from the plurality of frequency bands, enhancing the speech signal relative to other audio content within the audio signal, and extracting a voice profile key that uniquely identifies the speech signal, wherein enhancing the first speech signal comprises adjusting attack and release times of the speech signal based on sound events within the speech signal, the attack time being associated with very high frequency sounds that are not phase-shifted.

1671761143154.png



They bill themselves as "algorithm provider".

[0032] Some embodiments of the method also includes extracting a first voice profile key that uniquely identifies the first speech signal. Extracting the first voice profile key comprises generating a set of integers, wherein each integer is a function of a recurring frequency and a corresponding amplitude present in the speech signal. The set of integers may identify a unique code or voice print belonging to an individual voice donor. The voice print extracted from the audio signal may then be isolated using the method described above and used to biometrically identify the individual donor by comparing the voice profile key to a database of known voice profile keys. Biometric identification may also include comparison of voice frequency, amplitude, tempo, pitch, speech, or other audible queues that may be unique to an individual as known in the art. If the voice profile key is not found in the database, it may be added. For example, the method may include receiving, from a data store, a plurality of historic voice profile keys and corresponding identified individuals and identifying a first individual donor of the first speech signal by matching the first voice profile key to one of the historic voice profile keys.
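As a purely hypothetical sketch of the "set of integers" idea in paragraph [0032] (this is not Yobe's algorithm; the quantisation scheme below is invented): each recurring frequency/amplitude pair is quantised into a single integer, and the sorted, de-duplicated set becomes the voice-print key.

```python
# Hypothetical illustration of a "voice profile key" -- not Yobe's method.
def voice_profile_key(freq_amp_pairs, quantum=10):
    """Quantise each recurring (frequency, amplitude) pair into one
    integer and return the sorted, de-duplicated set as the key."""
    return sorted({int(f // quantum) * 1000 + int(a // quantum)
                   for f, a in freq_amp_pairs})

# Two nearby sightings of the same 440 Hz component collapse to one code.
print(voice_profile_key([(440.0, 62.0), (442.0, 60.0), (880.0, 31.0)]))
# [44006, 88003]
```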

There is certainly scope for Akida to enhance the performance of the Yobe algorithm.
 
Reactions: 20 users
Just some thoughts on Socionext, timelines and relationships that may or may not (?) have been covered previously.

Musing where all the tentacles lead to, and everything does appear to be moving to a critical mass type stage for the upcoming year.

Late 2018 this joint project occurred.

Note a couple of the project partners, project brief and expected end date ;)

Socionext and Partners to Start NEDO-Sponsored Project on Developing ʻEvolutionary, Low-Power AI Edge LSIʼ

Langen/Germany, 17 October 2018 --- Socionext Inc., ArchiTek Corporation, and Toyota Industries Corporation have signed an agreement to start a research and development project on ʻEvolutionary, Low-Power AI Edge LSIʼ.

The project is being sponsored by New Energy and Industrial Technology Development Organization (NEDO), a Japanese governmental organization promoting the development and introduction of new energy technologies. It is scheduled to conclude in March 2022 with the goal to commercialize technologies in autonomous driving, surveillance systems, drones, robots, AI powered home appliances and others.

The project consists of the following:

(1) Virtual Engine Architecture (ArchiTek Corporation)

To develop a new architecture that achieves a compact device, low power consumption and flexibility, all at the same time.

(2) Real-Time SLAM (Toyota Industries Corporation)

To establish real-time SLAM (Simultaneous Localization And Mapping) technology for self-driving machines.

(3) Quantification DNN (Socionext Inc.)

To address and solve low recognition rate problem with DNN quantization, required for high speed and low power AI processing.

(4) Edge Environment Optimization (Socionext Inc.)

To study a method to identify and optimize how to share functions between the cloud and the edge.

Socionext aims to establish the new "AI edge solution platform" based on the outcome of the project and apply it to a wide range of applications, expanding the company's business and global market reach.



Expanding the Five Senses with Edge Computing and
Solving Social Problems

Every minute and every second, a tremendous amount of information is drawn from edge devices up to the cloud.
Yet that information is by no means being used effectively.
From daily routines such as driving to medical care and disaster response,
people are stressed by being forced to choose from a huge number of options in every scene.

Innovation in edge computing is therefore required:
responding to human needs without waiting seconds for a round trip to the cloud,
always proactive, always responding to the situation.

It acts the moment people, or society, want something, or even before they become aware of that desire.
What we are aiming for is technology that expands the five senses.
Technological innovation within a radius of one metre.

I want to see more, I want to know more, I want to feel more.
When the edge changes, the world you feel changes.



In mid 2019 this occurred:

BrainChip and Socionext Sign a Definitive Agreement to Develop the Akida™ Neuromorphic System-on-Chip


In Mar 2020 this occurred:

BrainChip and Socionext Provide a New Low-Power Artificial Intelligence Platform for AI Edge Applications
Socionext to offer its SynQuacer™ multi-core processor with BrainChip's Akida™ SoC
BrainChip will provide training, technical and customer support
Companies will jointly identify target end markets and customers

Socionext also offers its high-efficiency, parallel multi-core processor SynQuacer™ SC2A11 as a server solution for various applications.

Socionext’s processor is available now and the two companies expect the Akida SoC engineering samples to be available in the third quarter of 2020.

In addition to integrating BrainChip’s AI technology in an SoC, system developers and OEMs may combine BrainChip’s proprietary Akida device and Socionext’s processor to create high-speed, high-density, low-power systems to perform image and video analysis, recognition and segmentation in surveillance systems, live-streaming and other video applications.



Also in Mar 2020:

Socionext Prototypes Low-Power AI Chip with Quantized Deep Neural Network Engine
Delivers Significant Expansion of Edge Computing Capabilities, Performance and Functionality

SANTA CLARA, Calif., March 17, 2020 ---Socionext Inc. has developed a prototype chip that incorporates newly-developed quantized Deep Neural Network (DNN) technology, enabling highly-advanced AI processing for small and low-power edge computing devices.

The prototype is a part of a research project on “Updatable and Low Power AI-Edge LSI Technology Development” commissioned by the New Energy and Industrial Technology Development Organization (NEDO) of Japan. The chip features a "quantized DNN engine" optimized for deep learning inference processing at high speeds with low power consumption.

Quantized DNN Engine
In place of conventional full-precision approaches, Socionext has developed a proprietary architecture based on "quantized DNN technology" for reducing the parameter and activation bits required for deep learning. The result is improved AI processing performance along with lower power consumption. The architecture incorporates bit reduction down to 1-bit (binary) and 2-bit (ternary) in addition to the conventional 8-bit, as well as the company's original parameter compression technology, enabling a large amount of computation with fewer resources and significantly less data.
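The release doesn't disclose how the engine quantizes internally, but 2-bit (ternary) weight quantization in general can be sketched like this. The 0.7 threshold factor is a common heuristic from the literature, not Socionext's, and all names are hypothetical:

```python
import numpy as np

def ternarize(weights, threshold_factor=0.7):
    """Quantize float weights to {-1, 0, +1} with one per-tensor scale.

    Weights whose magnitude falls below the threshold are zeroed; the
    survivors share a single float scale, so each weight needs only 2 bits.
    """
    threshold = threshold_factor * np.mean(np.abs(weights))
    ternary = np.where(weights > threshold, 1,
              np.where(weights < -threshold, -1, 0)).astype(np.int8)
    mask = ternary != 0
    # Scale chosen as the mean magnitude of the surviving weights.
    scale = float(np.mean(np.abs(weights[mask]))) if mask.any() else 0.0
    return ternary, scale
```

At inference time the dequantized approximation is simply `scale * ternary`, which turns most multiply-accumulates into additions, subtractions, and skips.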

Deep Learning Software Development Environment
Socionext has also built a deep learning software development environment. Incorporating TensorFlow as the base framework, it allows developers to perform original, low-bit "quantization-aware training" or "post-training quantization". When used in combination with the new chip, users can choose and apply the optimal quantization technology to various neural networks and execute highly accurate processing. The new chip will add the most advanced computer vision functionality to small form factor, low-power edge devices. Target applications include advanced driver assistance system (ADAS), security camera, and factory automation among others.
Socionext is currently conducting circuitry fine-tuning and performance optimization through the evaluation of this prototype chip. The company will continue working on research and development with the partner companies towards the completion of the NEDO-commissioned project, to deliver the AI Edge LSI as the final product.
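Socionext's toolchain isn't public, but generic post-training quantization, one of the two options mentioned above, reduces to computing a scale and zero-point from a tensor's observed range and mapping floats onto low-bit integers. A minimal NumPy sketch, with all names hypothetical:

```python
import numpy as np

def quantize_post_training(tensor, n_bits=8):
    """Affine post-training quantization: map floats to signed n-bit ints.

    Scale and zero-point are derived from the tensor's observed min/max,
    as in standard post-training quantization; no retraining is involved.
    """
    qmin, qmax = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    lo, hi = float(tensor.min()), float(tensor.max())
    scale = (hi - lo) / (qmax - qmin) or 1.0   # guard against zero range
    zero_point = int(round(qmin - lo / scale))
    q = np.clip(np.round(tensor / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover a float approximation of the original tensor."""
    return (q.astype(np.float32) - zero_point) * scale
```

Quantization-aware training differs in that this rounding is simulated inside the training loop, so the network learns weights that survive the precision loss; post-training quantization applies it after the fact.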

NEDO Project title:
Project for Innovative AI Chips and Next-Generation Computing Technology Development
Development of innovative AI edge computing technologies
Updatable and Low Power AI-Edge LSI Technology Development


They also have products like what has been covered previously:

4th Generation Smart Graphic Display Controllers Enable Panoramic and Multi-Displays

Langen, Germany, Milpitas, Calif., and Yokohama, Japan, July 15, 2022 --- Socionext, a global leader in the design and development of innovative System-on-Chip products, has announced a new series of smart display controllers, “SC1721/ SC1722/ SC1723 Series”, certified with ISO26262 for functional safety. Samples will be available at the end of July 2022.

The automotive industry is currently undergoing major transformations that occur approximately once every 100 years. The E/E (Electrical/Electronic) architecture, which is the system structure of automobiles, is changing from a distributed architecture to a domain/zone architecture. Automakers are adopting integrated cockpit systems linking multiple displays, such as meters, In-Vehicle Infotainment (IVI), and head-up displays. Larger display sizes and screen resolutions are also driving the demand for improved image quality. Due to the changes, complying with the ISO26262 functional safety standard is critical for developing new automotive ADAS and infotainment systems.

Socionext improves vehicle safety by adding mechanisms to monitor external LED driver error detection and internal algorithms, and supports functional safety (ASIL-B) by complying with the ISO26262 development process.

These features enable new architectures, such as panoramic displays for dashboards, to meet a growing trend of larger multi-display applications.

 