BRN Discussion Ongoing

Diogenese

Top 20
When I read this info contained in the article, this jumped out. Do we know whether Accenture is a partner of Mercedes? “Neuromorphic technologies make efficient onboard AI possible. In a recent collaboration with an automotive client, we demonstrated that spiking neural networks running on a neuromorphic processor can recognize simple voice commands up to 0.2 seconds faster than a commonly used embedded GPU accelerator, while using up to a thousand times less power. This brings truly intelligent, low-latency interactions into play, at the edge, even within the power-limited constraints of a parked vehicle.”

Hi BrnPos2022,

Unfortunately they are referring to Intel's Kapoho Bay research chip:


Neuromorphic Computing: Energy-efficient Smart Cars with Advanced Voice Control | Accenture

https://www.accenture.com/content/d...nt-Smart-Cars-with-Advanced-Voice-Control.pdf


Using edge AI devices to complement cloud-based AI could also increase responsiveness and improve reliability when connectivity is poor. So we've built a proof-of-concept system with one of our major automotive partners to demonstrate that neuromorphic computing can make cars smarter without draining the batteries. We're using Intel's Kapoho Bay to recognize voice commands that an owner would give to their vehicle. The Kapoho Bay is a portable and extremely efficient neuromorphic research device for AI at the edge.


As a first step, we trained the system to recognize simple commands, such as "lights on" and "lights off", "open door", "close door", or "start engine". Using a combination of open-source voice recordings and a smaller sample of specific commands, we can approximate the kinds of voice processing needed for smart vehicles. We tested this approach by comparing our trained spiking neural networks running on Intel's neuromorphic research cloud against a convolutional neural network running on a GPU.


Both systems achieved acceptable accuracy recognizing our voice commands, but we found that the neuromorphic system was up to a thousand times more efficient than the standard AI system with a GPU. This is extremely impressive, and it's consistent with the results from other labs, as Intel will show further in their session on benchmarking the Intel OAE. The neuromorphic system also responded up to 200 milliseconds faster than the GPU. This dramatic improvement in energy efficiency for our task comes from the fact that computation in Loihi is extremely sparse. While the GPU performs billions of computations per second, every second, the neuromorphic chip only processes changes in the audio signal, and neuron cores inside Loihi communicate efficiently with spikes.


Of course, now we're friends, they may be able to get even better results.
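The sparsity argument in that last paragraph can be sketched numerically. This is a toy illustration of event-driven versus dense computation (my own synthetic numbers, not the Accenture/Intel benchmark): a dense system touches every audio sample, while an event-driven system only computes when the signal changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1 second of 16 kHz "audio": mostly silence with a short command burst.
signal = np.zeros(16_000)
signal[6_000:7_000] = rng.normal(size=1_000)

threshold = 0.1  # change needed to emit an "event" (spike)

dense_ops = signal.size                                # GPU-style: every sample
events = np.abs(np.diff(signal, prepend=0.0)) > threshold
sparse_ops = int(events.sum())                         # event-driven: changes only

print(f"dense ops:  {dense_ops}")
print(f"sparse ops: {sparse_ops}")
print(f"ratio:      {dense_ops / max(sparse_ops, 1):.0f}x fewer ops")
```

With input that is mostly silence, the event-driven count tracks only the active burst, which is the intuition behind the quoted efficiency gap.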
 
Reactions: 11 users

AlesHome

Emerged
When was the AKD500 released, and what devices is it in?

From MetaTF update:

New features

  • [akida] Upgrade to quantizeml 0.0.13
  • [akida] Attention layer
  • [akida] Identify AKD500 devices
  • [engine] Move mesh scan to host library
 

Murphy

Life is not a dress rehearsal!
Reactions: 13 users
As always, could be nothing, could be something, and could have already been posted.
View attachment 26662
I don’t know how to react to this??? We have had assurances here that Rob Telson has been terminated, and that’s why he is not doing the podcast. So is it right and proper that he continues to masquerade as being employed by Brainchip, promoting contacts and relationships with Intel as if he is still in charge of Ecosystems and Partner relationships???

Or could these allegations just be more of the ‘it’s my opinion’ posts not designed to mislead or manipulate???

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 24 users

skutza

Regular
Reactions: 6 users
Hi BrnPos2022,

Unfortunately they are referring to Intel's Kapoho Bay research chip:


Neuromorphic Computing: Energy-efficient Smart Cars with Advanced Voice Control | Accenture

https://www.accenture.com/content/d...nt-Smart-Cars-with-Advanced-Voice-Control.pdf


Using edge AI devices to complement cloud-based AI could also increase responsiveness and improve reliability when connectivity is poor. So we've built a proof-of-concept system with one of our major automotive partners to demonstrate that neuromorphic computing can make cars smarter without draining the batteries. We're using Intel's Kapoho Bay to recognize voice commands that an owner would give to their vehicle. The Kapoho Bay is a portable and extremely efficient neuromorphic research device for AI at the edge.


As a first step, we trained the system to recognize simple commands, such as "lights on" and "lights off", "open door", "close door", or "start engine". Using a combination of open-source voice recordings and a smaller sample of specific commands, we can approximate the kinds of voice processing needed for smart vehicles. We tested this approach by comparing our trained spiking neural networks running on Intel's neuromorphic research cloud against a convolutional neural network running on a GPU.


Both systems achieved acceptable accuracy recognizing our voice commands, but we found that the neuromorphic system was up to a thousand times more efficient than the standard AI system with a GPU. This is extremely impressive, and it's consistent with the results from other labs, as Intel will show further in their session on benchmarking the Intel OAE. The neuromorphic system also responded up to 200 milliseconds faster than the GPU. This dramatic improvement in energy efficiency for our task comes from the fact that computation in Loihi is extremely sparse. While the GPU performs billions of computations per second, every second, the neuromorphic chip only processes changes in the audio signal, and neuron cores inside Loihi communicate efficiently with spikes.


Of course, now we're friends, they may be able to get even better results.
Don’t forget Mercedes Benz trialled Intel before moving to Brainchip just as Prophesee did.

I wonder how many NDA-covered companies have followed the same trodden path, and whether it could now be described as well worn.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 31 users

skutza

Regular
Maybe Akida Controlling Everything!

:)
ACE is from the old English name ACEY which means amongst other things number one.

“Acey, a boy's name of English origin, means 'number one' or 'the best.'”

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 10 users

Diogenese

Top 20
ChatGPT is either the world's greatest espionage agent (007), or the world's greatest liar ( Walter Mitty Richard Nixon Donald Trump Pinocchio [insert name here]).

Can no one be trusted - all these years I've sworn by Ella, and now:

1673320274519.png


Luckily, Accenture have the solution:

US2022382795A1 METHOD AND SYSTEM FOR DETECTION OF MISINFORMATION
A system and method for automatically detecting misinformation is disclosed. The misinformation detection system is implemented using a cross-stitch based semi-supervised end-to-end neural attention model which is configured to leverage the large amount of unlabeled data that is available. In one embodiment, the model can at least partially generalize and identify emerging misinformation as it learns from an array of relevant external knowledge. Embodiments of the proposed system rely on heterogeneous information such as a social media post's text content, user details, and activity around the post, as well as external knowledge from the web, to identify whether the content includes misinformation. The results of the model are produced via an attention mechanism.
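The attention mechanism the abstract mentions can be sketched in a few lines. This is a toy illustration only (the names, shapes, and random features are my own, not Accenture's actual model): score heterogeneous evidence vectors such as post text, user details, and web context against a query, then fuse them into one weighted representation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 8
evidence = {                        # hypothetical feature sources
    "post_text": rng.normal(size=d),
    "user_profile": rng.normal(size=d),
    "web_context": rng.normal(size=d),
}

query = rng.normal(size=d)          # a learned query vector in a real model
keys = np.stack(list(evidence.values()))
weights = softmax(keys @ query / np.sqrt(d))   # attention weights sum to 1
fused = weights @ keys                         # weighted combination of evidence

for name, w in zip(evidence, weights):
    print(f"{name}: {w:.2f}")
```

The weights make the model's verdict inspectable: you can see which evidence source the fused representation leaned on, which is what "the results of the model are produced via an attention mechanism" is getting at.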
 
Reactions: 8 users
I'm not sure if this paper has been posted before, but it was published online on 14 Sep 2022:


Brainchip AKIDA is referenced and has its own subsection in the paper. The third sentence in section 3.8 has me intrigued:

View attachment 26663


I know a couple of them, but who are the rest that make up the 15 companies?
We know only those that have been disclosed, with 7 to 8 outstanding under NDA and not revealed.

I am sure you know the names of all those already disclosed, and which are the confirmed original NDAs: Ford, Valeo, Renesas, MegaChips, Mercedes Benz, ISL, NASA. That makes seven names; add the seven or eight still-hidden NDA EAPs and you get fifteen.

There are of course all the other partners now in the ecosystem: ARM, SiFive, Intel, Edge Impulse, Nviso, Prophesee, etc.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 21 users
OK slackers, upping it to two months' free subscription for the first poster to provide the link to where Lou stated this.
Now open to all here, not just non-subscribers.
I know where that is
 

TechGirl

Founding Member
Reactions: 12 users

Hrdwk

Regular
Reactions: 9 users

Tothemoon24

Top 20
Another
Cocktail
Everyone ?
 
Reactions: 21 users
I don’t know how to react to this??? We have had assurances here that Rob Telson has been terminated, and that’s why he is not doing the podcast. So is it right and proper that he continues to masquerade as being employed by Brainchip, promoting contacts and relationships with Intel as if he is still in charge of Ecosystems and Partner relationships???

Or could these allegations just be more of the ‘it’s my opinion’ posts not designed to mislead or manipulate???

My opinion only DYOR
FF

AKIDA BALLISTA
Are we sure that is the real Rob Telson or perhaps....... AN IMPOSTER👀 🕵️‍♂️
No offense Rob if you happen to see this image.
Love your work!
3777764976049977551_.jpg
 
Reactions: 13 users

Diogenese

Top 20
You get no marx for that!
 
Reactions: 16 users
Akida
Conquering
Edge ?
Nothing sus going on here
Well, I suppose in a socialist country fairness dictates that if the state is inefficient and ruins the economy, then an efficient, well-run, profitable private enterprise has to be brought under state control and reduced to the same level as the state.

That’s what socialism is, is it not? Every opinion equal, everyone at the same level. As you cannot make all people highly intelligent, the only way to achieve the socialist state is to lower everyone to the lowest common denominator. After all, it's easy for smart people to play dumb but not so easy for the dumb to play smart.🤣😂🤡😂🤣😎

I am just pleased he has escaped with his life.

I would suggest Brainchip give him a job but the CCP would probably shoot his relatives until he gave up AKIDA’s secret sauce.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 9 users

Shadow59

Regular
ACE is from the old English name ACEY which means amongst other things number one.

“Acey, a boy's name of English origin, means 'number one' or 'the best.'”

My opinion only DYOR
FF

AKIDA BALLISTA
When my family first immigrated to Queensland in the '70s, part of the vernacular at that time, other than finishing a sentence with "anyhow but" or "but", was "ace mate", meaning the best. Maybe that's Akida... simply the best.
 
Reactions: 9 users

TechGirl

Founding Member
Just browsing around as I do & came across the Machine Learning Department at CMU


No mention of us, but their recent news is all very relevant to us :unsure:



New Research Investigates How the Brain Processes Language

Aaron Aupperlee
Tuesday, November 29, 2022

combined-words-brain.png


New research from a team in the Machine Learning Department shows which regions of the brain processed the meaning of combined words and how the brain maintained and updated the meaning of words.

Humans accomplish a phenomenal amount of tasks by combining pieces of information. We perceive objects by combining edges, categorize scenes by combining objects, interpret events by combining actions, and understand sentences by combining words. But researchers don't yet have a clear understanding of how the brain forms and maintains the meaning of the whole — such as a sentence — from its parts. School of Computer Science (SCS) researchers in the Machine Learning Department (MLD) have shed new light on the brain processes that support the emergent meaning of combined words.

Mariya Toneva, a former MLD Ph.D. student now faculty at the Max Planck Institute for Software Systems, worked with Leila Wehbe, an assistant professor in MLD, and Tom Mitchell, the Founders University Professor in SCS, to study which regions of the brain processed the meaning of combined words and how the brain maintained and updated the meaning of words. This work could contribute to a more complete understanding of how the brain processes, maintains and updates the meaning of words, and could redirect research focus to areas of the brain suitable for future wearable neurotechnology, such as devices that can decode what a person is trying to say directly from brain activity. These devices can help people with diseases like Parkinson's or multiple sclerosis that limit muscle control.

Toneva, Mitchell and Wehbe used neural networks to build computational models that could predict the areas of the brain that process the new meaning of words when they are combined. They tested this model by recording the brain activity of eight people as they read a chapter of "Harry Potter and the Sorcerer's Stone." The results suggest that some regions of the brain process both the meaning of individual words and the meaning of combined words, while others process only the meanings of individual words. Crucially, the authors also found that one of the neural activity recording tools they used, magnetoencephalography (MEG), did not capture a signal that reflected the meaning of combined words. Since future wearable neurotechnology devices might use recording tools similar to MEG, one potential limitation is their inability to detect the meaning of combined words, which could affect their capacity to help users produce language.
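The prediction step described above resembles a standard linear "encoding model". As a rough sketch (my own simplification on synthetic data, not the authors' actual pipeline), you can fit a ridge regression from stimulus features, such as word embeddings, to recorded brain responses, then score it on held-out data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_feat, n_voxels = 200, 50, 16, 10

# Synthetic stand-ins: stimulus features X and noisy brain responses Y.
true_map = rng.normal(size=(n_feat, n_voxels))
X = rng.normal(size=(n_train + n_test, n_feat))
Y = X @ true_map + 0.1 * rng.normal(size=(n_train + n_test, n_voxels))

# Ridge regression: closed-form solution (X'X + lam*I) W = X'Y.
lam = 1.0
Xtr, Ytr = X[:n_train], Y[:n_train]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat), Xtr.T @ Ytr)

# Evaluate on held-out trials: per-voxel correlation of predicted vs. actual.
pred = X[n_train:] @ W
corr = np.mean([np.corrcoef(pred[:, v], Y[n_train:, v])[0, 1]
                for v in range(n_voxels)])
print(f"mean held-out correlation: {corr:.2f}")
```

In the real studies, the features come from a language model and the responses from fMRI or MEG recordings; the paper's finding is, in these terms, that MEG responses were predictable from individual-word features but not from combined-meaning features.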

The team's work builds on past research from Wehbe and Mitchell that used functional magnetic resonance imaging to identify the parts of the brain engaged as people read a chapter of the same Potter book. The result was the first integrated computational model of reading, identifying which parts of the brain are responsible for such subprocesses as parsing sentences, determining the meaning of words and understanding relationships between characters.

For more on the most recent findings, read the paper "Combining Computational Controls With Natural Text Reveals Aspects of Meaning Composition," in Nature Computational Science.

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
Reactions: 18 users

TechGirl

Founding Member
And another one


A Low-Cost Robot Ready for Any Obstacle
CMU, Berkeley Researchers Design Robust Legged Robot System

Aaron Aupperlee
Wednesday, November 16, 2022

legged-robot.jpg


A robotic system designed by researchers at CMU and Berkeley allows small, low-cost legged robots to maneuver in challenging environments.
This little robot can go almost anywhere.

Researchers at Carnegie Mellon University's School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.

"Empowering small robots to climb stairs and handle a variety of environments is crucial to developing robots that will be useful in people's homes as well as search-and-rescue operations," said Deepak Pathak, an assistant professor in the Robotics Institute. "This system creates a robust and adaptable robot that could perform many everyday tasks."

The team put the robot through its paces, testing it on uneven stairs and hillsides at public parks, challenging it to walk across stepping stones and over slippery surfaces, and asking it to climb stairs that for its height would be akin to a human leaping over a hurdle. The robot adapts quickly and masters challenging terrain by relying on its vision and a small onboard computer.

The researchers trained the robot with 4,000 clones of it in a simulator, where they practiced walking and climbing on challenging terrain. The simulator's speed allowed the robot to gain six years of experience in a single day. The simulator also stored the motor skills it learned during training in a neural network that the researchers copied to the real robot. This approach did not require any hand-engineering of the robot's movements — a departure from traditional methods.

Most robotic systems use cameras to create a map of the surrounding environment and use that map to plan movements before executing them. The process is slow and can often falter due to inherent fuzziness, inaccuracies, or misperceptions in the mapping stage that affect the subsequent planning and movements. Mapping and planning are useful in systems focused on high-level control but are not always suited for the dynamic requirements of low-level skills like walking or running over challenging terrains.

The new system bypasses the mapping and planning phases and directly routes the vision inputs to the control of the robot. What the robot sees determines how it moves. Not even the researchers specify how the legs should move. This technique allows the robot to react to oncoming terrain quickly and move through it effectively.
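The "vision inputs directly to control" idea can be sketched as a single learned map. This toy is my own illustration, not the CMU/Berkeley architecture: a small feed-forward network takes a flattened depth frame straight to joint targets, with no mapping or planning stage in between.

```python
import numpy as np

rng = np.random.default_rng(3)

class VisionToControl:
    """Toy policy: pixels in, motor commands out, nothing in between."""

    def __init__(self, n_pixels=64, n_hidden=32, n_joints=12):
        # Random weights stand in for what training in simulation would learn.
        self.W1 = rng.normal(scale=0.1, size=(n_pixels, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_joints))

    def act(self, depth_image):
        h = np.tanh(depth_image @ self.W1)   # features of the oncoming terrain
        return np.tanh(h @ self.W2)          # joint targets bounded in [-1, 1]

policy = VisionToControl()
frame = rng.uniform(size=64)                 # one flattened depth frame
command = policy.act(frame)
print(command.shape)                         # one command per joint
```

Because the whole pipeline is one forward pass per frame, there is no slow map-then-plan loop to falter on mapping errors, which is the design point the paragraph above makes.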

Because there is no mapping or planning involved and movements are trained using machine learning, the robot itself can be low-cost. The robot the team used was at least 25 times cheaper than available alternatives. The team's algorithm has the potential to make low-cost robots much more widely available.

"This system uses vision and feedback from the body directly as input to output commands to the robot's motors," said Ananye Agarwal, an SCS Ph.D. student in machine learning. "This technique allows the system to be very robust in the real world. If it slips on stairs, it can recover. It can go into unknown environments and adapt."

This direct vision-to-control aspect is biologically inspired. Humans and animals use vision to move. Try running or balancing with your eyes closed. Previous research from the team had shown that blind robots — robots without cameras — can conquer challenging terrain, but adding vision and relying on that vision greatly improves the system.

The team looked to nature for other elements of the system, as well. For a small robot — less than a foot tall, in this case — to scale stairs or obstacles nearly its height, it learned to adopt the movement that humans use to step over high obstacles. When a human has to lift its leg up high to scale a ledge or hurdle, it uses its hips to move its leg out to the side, called abduction and adduction, giving it more clearance. The robot system Pathak's team designed does the same, using hip abduction to tackle obstacles that trip up some of the most advanced legged robotic systems on the market.

The movement of hind legs by four-legged animals also inspired the team. When a cat moves through obstacles, its hind legs avoid the same items as its front legs without the benefit of a nearby set of eyes. "Four-legged animals have a memory that enables their hind legs to track the front legs. Our system works in a similar fashion," Pathak said. The system's onboard memory enables the rear legs to remember what the camera at the front saw and maneuver to avoid obstacles.

"Since there's no map, no planning, our system remembers the terrain and how it moved the front leg and translates this to the rear leg, doing so quickly and flawlessly," said Ashish Kumar, a Ph.D. student at Berkeley.

The research could be a large step toward solving existing challenges facing legged robots and bringing them into people's homes. The paper "Legged Locomotion in Challenging Terrains Using Egocentric Vision," written by Pathak, Berkeley professor Jitendra Malik, Agarwal and Kumar, will be presented at the upcoming Conference on Robot Learning in Auckland, New Zealand.

For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
 
Reactions: 16 users