BRN Discussion Ongoing

JK200SX

Regular
I'm not sure if this paper has been posted before, but it was published online on 14SEP22:


Brainchip AKIDA is referenced and has its own subsection in the paper. The third sentence in section 3.8 has me intrigued:

[screenshot of section 3.8 attached]



I know a couple of them, but who are the rest that make up the 15 companies?
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Diogenese

Top 20
When I read the info contained in the article, this jumped out. Do we know that Accenture are partners with Mercedes? "Neuromorphic technologies make efficient onboard AI possible. In a recent collaboration with an automotive client, we demonstrated that spiking neural networks running on a neuromorphic processor can recognize simple voice commands up to 0.2 seconds faster than a commonly used embedded GPU accelerator, while using up to a thousand times less power. This brings truly intelligent, low latency interactions into play, at the edge, even within the power-limited constraints of a parked vehicle."

Hi BrnPos2022,

Unfortunately they are referring to Intel's Kapoho Bay research chip:


Neuromorphic Computing: Energy-efficient Smart Cars with Advanced Voice Control | Accenture

https://www.accenture.com/content/d...nt-Smart-Cars-with-Advanced-Voice-Control.pdf


Using edge AI devices to complement cloud-based AI could also increase responsiveness and improve reliability when connectivity is poor. So we've built a proof of concept system with one of our major automotive partners to demonstrate that neuromorphic computing can make cars smarter without draining the batteries. We're using Intel's Kapoho Bay to recognize voice commands that an owner would give to their vehicle. The Kapoho Bay is a portable and extremely efficient neuromorphic research device for AI at the edge.


As a first step, we trained the system to recognize simple commands, such as lights on and lights off, open door, close door, or start engine. Using a combination of open source voice recordings and a smaller sample of specific commands, we can approximate the kinds of voice processing needed for smart vehicles. We tested this approach by comparing our trained spiking neural networks running on Intel's neuromorphic research cloud against a convolutional neural network running on a GPU.


Both systems achieved acceptable accuracy recognizing our voice commands, but we found that the neuromorphic system was up to a thousand times more efficient than the standard AI system with a GPU. This is extremely impressive and it's consistent with the results from other labs, as Intel will show further in their session on benchmarking the Intel OAE. The neuromorphic system also responded up to 200 milliseconds faster than the GPU. This dramatic improvement in energy efficiency for our task comes from the fact that computation in Loihi is extremely sparse. While the GPU performs billions of computations per second, every second, the neuromorphic chip only processes changes in the audio signal, and neuron cores inside Loihi communicate efficiently with spikes.
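The sparsity point in that quote (process only the changes in the signal, not every sample) can be illustrated with a toy delta-encoding sketch in plain Python. This is purely illustrative, not Loihi or Akida code; the signal, threshold and event format below are all made up:

```python
import math

def delta_events(signal, threshold=0.05):
    """Delta-encode a signal: emit an 'event' only when the input has
    changed by more than `threshold` since the last event (a rough
    analogue of event-driven, spike-based processing)."""
    events = []
    last = signal[0]
    for t, x in enumerate(signal[1:], start=1):
        if abs(x - last) > threshold:
            events.append((t, x - last))
            last = x
    return events

# A mostly-silent "audio" signal: a long quiet stretch, one short burst.
signal = [0.0] * 90 + [math.sin(0.4 * t) for t in range(10)]

events = delta_events(signal)
dense_ops = len(signal)   # a dense pipeline touches every sample
sparse_ops = len(events)  # an event-driven pipeline touches only changes
print(dense_ops, sparse_ops)
```

With input that is mostly silence, the event-driven version does an order of magnitude less work, which is the intuition behind the power figures quoted above.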


Of course, now that we're friends, they may be able to get even better results.
 
  • Like
  • Fire
  • Love
Reactions: 11 users

AlesHome

Emerged
When was the AKD500 released, and what devices is it in?

From the MetaTF update:

New features

  • [akida] Upgrade to quantizeml 0.0.13
  • [akida] Attention layer
  • [akida] Identify AKD500 devices
  • [engine] Move mesh scan to host library
 

Murphy

Life is not a dress rehearsal!
  • Like
  • Haha
  • Love
Reactions: 13 users
As always, could be nothing, could be something, and could have already been posted.
[attachment: screenshot]
I don’t know how to react to this??? We have had assurances here that Rob Telson has been terminated and that is why he is not doing the podcast, so is it right and proper that he continues to masquerade as being employed by Brainchip, promoting contacts and relationships with Intel, as if he is still in charge of Ecosystems and Partner relationships???

Or could these allegations just be more of the ‘it’s my opinion’ posts not designed to mislead or manipulate???

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Fire
Reactions: 24 users

skutza

Regular
  • Like
  • Love
Reactions: 6 users
Hi BrnPos2022,

Unfortunately they are referring to Intel's Kapoho Bay research chip:

Neuromorphic Computing: Energy-efficient Smart Cars with Advanced Voice Control | Accenture

https://www.accenture.com/content/d...nt-Smart-Cars-with-Advanced-Voice-Control.pdf
Don’t forget Mercedes Benz trialled Intel before moving to Brainchip just as Prophesee did.

I wonder how many NDA covered companies have followed the same trodden path and could it be now described as well worn.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 31 users

skutza

Regular
Maybe Akida Controlling Everything!

:)
ACE is from the old English name ACEY which means amongst other things number one.

“Acey, a boy's name of English origin, means "number one" or "the best".”

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
Reactions: 10 users

Diogenese

Top 20
ChatGPT is either the world's greatest espionage agent (007), or the world's greatest liar (Walter Mitty, Richard Nixon, Donald Trump, Pinocchio, [insert name here]).

Can no one be trusted? All these years I've sworn by Ella, and now:

[screenshot attached]


Luckily, Accenture have the solution:

US2022382795A1 METHOD AND SYSTEM FOR DETECTION OF MISINFORMATION
A system and method for automatically detecting misinformation is disclosed. The misinformation detection system is implemented using a cross-stitch based semi-supervised end-to-end neural attention model which is configured to leverage the large amount of unlabeled data that is available. In one embodiment, the model can at least partially generalize and identify emerging misinformation as it learns from an array of relevant external knowledge. Embodiments of the proposed system rely on heterogeneous information such as a social media post's text content, user details, and activity around the post, as well as external knowledge from the web, to identify whether the content includes misinformation. The results of the model are produced via an attention mechanism.
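The "attention mechanism" the abstract mentions can be illustrated with a minimal scaled dot-product attention sketch. This is the generic textbook mechanism, not Accenture's patented cross-stitch model; the query, keys and "credibility" values below are invented for illustration:

```python
import math

def attention(query, keys, values):
    """Minimal scaled dot-product attention over 1-D feature vectors:
    score each key against the query, softmax the scores, and return
    the attention weights plus the weighted sum of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    pooled = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(len(values[0]))]
    return weights, pooled

# Two pieces of "evidence"; the first matches the query more closely.
query  = [1.0, 0.0]
keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[0.9], [0.1]]                   # e.g. a made-up per-item score
weights, pooled = attention(query, keys, values)
```

The model in the patent pools heterogeneous signals (post text, user details, web evidence) this way, so the output is weighted toward the most query-relevant evidence.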
 
  • Like
  • Love
  • Fire
Reactions: 8 users
I'm not sure if this paper has been posted before, but it was published online on 14SEP22. Brainchip AKIDA is referenced and has its own subsection in the paper.

I know a couple of them, but who are the rest that make up the 15 companies?
We know only those that have been disclosed, with seven or eight outstanding under NDA and not revealed.

I am sure you already know the names of all those we know, and which are the confirmed original NDAs, like Ford, Valeo, Renesas, MegaChips, Mercedes Benz, ISL and NASA. That makes seven names; add the seven or eight still-hidden NDA EAPs and you get fifteen.

There are of course all the other partners now in the ecosystem: ARM, SiFive, Intel, Edge Impulse, Nviso, Prophesee, etc.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 21 users
OK slackers, upping it to two months' free subscription for the first poster to provide the link to where Lou stated this.
Open to all here now, not just non-subscribers.
I know where that is
 

TechGirl

Founding Member
  • Like
  • Fire
  • Haha
Reactions: 12 users

Hrdwk

Regular
  • Like
  • Haha
Reactions: 9 users

Tothemoon24

Top 20
Another
Cocktail
Everyone?
 
  • Haha
  • Like
  • Fire
Reactions: 21 users
I don’t know how to react to this??? We have had assurances here that Rob Telson has been terminated...
Are we sure that is the real Rob Telson or perhaps....... AN IMPOSTER👀 🕵️‍♂️
No offense Rob if you happen to see this image.
Love your work!
[image attached]
 
  • Haha
  • Love
  • Like
Reactions: 13 users

Diogenese

Top 20
You get no marx for that!
 
  • Haha
  • Like
  • Fire
Reactions: 16 users
Akida
Conquering
Edge?
Nothing sus going on here
Well I suppose in a socialist country fairness dictates that if the state is inefficient and ruins the economy, then an efficient, well-run, profitable private enterprise has to be brought under state control and reduced to the same level as the state.

That’s what socialism is, is it not? Every opinion equal, everyone at the same level. As you cannot make all people highly intelligent, the only way to achieve the socialist state is to lower everyone to the lowest common denominator. After all, it's easy for smart people to play dumb, but not so easy for the dumb to play smart.🤣😂🤡😂🤣😎

I am just pleased he has escaped with his life.

I would suggest Brainchip give him a job but the CCP would probably shoot his relatives until he gave up AKIDA’s secret sauce.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
Reactions: 9 users

Shadow59

Regular
ACE is from the old English name ACEY which means amongst other things number one.
When my family first immigrated to Queensland in the '70s, part of the vernacular at that time, other than finishing a sentence with "anyhow but." or "but", was "ace mate", meaning the best. Maybe that's Akida... simply the best.
 
  • Like
  • Love
Reactions: 9 users

TechGirl

Founding Member
Just browsing around as I do & came across the Machine Learning Department at CMU


No mention of us, but their recent news is all very relevant to us :unsure:



New Research Investigates How the Brain Processes Language

Aaron Aupperlee
Tuesday, November 29, 2022



New research from a team in the Machine Learning Department shows which regions of the brain processed the meaning of combined words and how the brain maintained and updated the meaning of words.

Humans accomplish a phenomenal amount of tasks by combining pieces of information. We perceive objects by combining edges, categorize scenes by combining objects, interpret events by combining actions, and understand sentences by combining words. But researchers don't yet have a clear understanding of how the brain forms and maintains the meaning of the whole — such as a sentence — from its parts. School of Computer Science (SCS) researchers in the Machine Learning Department (MLD) have shed new light on the brain processes that support the emergent meaning of combined words.

Mariya Toneva, a former MLD Ph.D. student now faculty at the Max Planck Institute for Software Systems, worked with Leila Wehbe, an assistant professor in MLD, and Tom Mitchell, the Founders University Professor in SCS, to study which regions of the brain processed the meaning of combined words and how the brain maintained and updated the meaning of words. This work could contribute to a more complete understanding of how the brain processes, maintains and updates the meaning of words, and could redirect research focus to areas of the brain suitable for future wearable neurotechnology, such as devices that can decode what a person is trying to say directly from brain activity. These devices can help people with diseases like Parkinson's or multiple sclerosis that limit muscle control.

Toneva, Mitchell and Wehbe used neural networks to build computational models that could predict the areas of the brain that process the new meaning of words when they are combined. They tested this model by recording the brain activity of eight people as they read a chapter of "Harry Potter and the Sorcerer's Stone." The results suggest that some regions of the brain process both the meaning of individual words and the meaning of combined words, while others process only the meanings of individual words. Crucially, the authors also found that one of the neural activity recording tools they used, magnetoencephalography (MEG), did not capture a signal that reflected the meaning of combined words. Since future wearable neurotechnology devices might use recording tools similar to MEG, one potential limitation is their inability to detect the meaning of combined words, which could affect their capacity to help users produce language.

The team's work builds on past research from Wehbe and Mitchell that used functional magnetic resonance imaging to identify the parts of the brain engaged as people read a chapter of the same Potter book. The result was the first integrated computational model of reading, identifying which parts of the brain are responsible for such subprocesses as parsing sentences, determining the meaning of words and understanding relationships between characters.
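The encoding-model idea behind this line of work (fit a model that predicts recorded brain responses from stimulus-derived features) can be sketched as a one-feature least-squares fit. The feature values and "voxel" responses below are entirely hypothetical; the real models use high-dimensional word embeddings and regularized regression rather than this toy closed form:

```python
def fit_encoding_model(features, responses):
    """Ordinary least squares for one feature: find w, b minimizing
    sum((w*x + b - y)^2) -- the simplest form of an encoding model
    mapping stimulus features to recorded brain responses."""
    n = len(features)
    mx = sum(features) / n
    my = sum(responses) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, responses))
    var = sum((x - mx) ** 2 for x in features)
    w = cov / var          # slope: how strongly the voxel tracks the feature
    b = my - w * mx        # intercept: baseline response
    return w, b

# Hypothetical data: a 1-D "word meaning" feature per word, and a
# noiseless voxel response that tracks it linearly (y = 1.0*x + 0.5).
feature = [0.0, 1.0, 2.0, 3.0]
voxel   = [0.5, 1.5, 2.5, 3.5]
w, b = fit_encoding_model(feature, voxel)
```

Held-out prediction accuracy of such a model, voxel by voxel, is what lets researchers say which brain regions carry individual-word versus combined-word meaning.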

For more on the most recent findings, read the paper "Combining Computational Controls With Natural Text Reveals Aspects of Meaning Composition," in Nature Computational Science.

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
  • Like
  • Love
  • Fire
Reactions: 18 users