BRN Discussion Ongoing

JK200SX

Regular
Faster than a speeding GPU, more powerful than a CNN algorithm, able to classify giant libraries in a single inference (poetic licence), cooler than a cloud server - look! up in the sky! ... Is it a bird? ... Is it a plane? ... No!
...
It's PvdM
...
with his BVDs on the outside!?
For all you conspiracy theorists, I just noticed that VVDN rhymes with PVDM ..........
:)
 
  • Haha
  • Like
  • Thinking
Reactions: 37 users

cosors

👀
We here know how to do it better and what is to come, especially as far as MB is concerned. For now, this is still possible:

"On the Autobahn near Bamberg

Man driving behind police car while sleeping in Tesla

He was traveling at 110 km/h: On the A70 near Bamberg, a man fell asleep at the wheel of a self-driving electric car. Even with horns, the police couldn't wake him up.

Police stopped a sleeping Tesla driver on the Autobahn near Bamberg. The vehicle kept exactly the same distance from the patrol car in front on the A70 in the direction of Bayreuth, the police said, traveling at a constant 110 kilometers per hour.

The police patrol tried unsuccessfully for 15 minutes to wake the 45-year-old with horns and stop signals. Officers found the man sitting in the driver's seat with his eyes closed and his hands off the steering wheel. Finally the man woke up and followed the instructions. During the check, he showed "typical drug failure symptoms," the police said.

According to the information, the officers found a so-called steering wheel weight in the footwell. This device is attached to the steering wheel to trick a vehicle safety feature by pretending that the hand is on the wheel. The driver is now being investigated for endangering road traffic. His driver's license was withheld."
https://www.spiegel.de/panorama/a70...9f0?dicbo=v2-427a7c3dd75a107860bf607ca00a5e34

I guess this incident will set a legal precedent in our country. I'm also thinking of facial recognition and, for example, MB's wristband or watch and NFC, as well as breath recognition.
At the moment you can apparently trick Tesla's safety systems with a drink holder with a vodka bottle on the steering wheel.
 
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 20 users

cosors

👀
We had a meeting with friends who are also investing. Over the years they have sometimes made fun of my three favourite small companies as crude start-ups; to them, only big tech companies are good and safe. After a summary of the past year, no one makes fun of me any more. I have looked at maybe two dozen big techs and I can say that I have never been calmer and happier. And on top of that, I am following an exciting story that shows the future 🤗 Simply great!
I also note that I have learned a lot about the ASX in the last few years. Few people here are concerned with the comparison of stock market nations.
I wish you all a happy and pleasant new year. May all your monetisation money desires and wishes come true ❤️‍🔥🍾🤘
Well, I was talking about this to a friend of mine.
I've done much better than her, albeit a small amount.
But the good thing is, it is all my money, not someone else's.
 
Last edited:
  • Like
  • Love
Reactions: 49 users

Pmel

Regular

Attachments

  • Screenshot_20221231-065533_LinkedIn.jpg
    Screenshot_20221231-065533_LinkedIn.jpg
    676.5 KB · Views: 145
  • Like
  • Love
  • Fire
Reactions: 33 users
A great article here about what will be happening and what you could possibly expect at CES 2023:


Just a reminder as well: CES 2023 actually begins on 3rd January for the media, who will be in touch with all the exhibitors over the next couple of days, before things kick off on the floor on the 5th.

"Over 100,000 attendees are the goal for CES, according to the Association. Attendees are coming from 173 countries, territories, and regions, and over 4,700 members of the media have been registered."

Imagine all those eyeballs checking out Brainchip, their partners and whoever else there that could possibly be teaming up with them, wow!
 
  • Like
  • Fire
  • Love
Reactions: 60 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Found this interesting article in the weekend paper.

Nothing specifically Brainchip, though I'd be thinking they could definitely slip a little Akida IP in there.

Although this article is about military applications, one can see the flip side: search and rescue etc., for the better.

Hope we all have a calm & peaceful 2023.

Regards,
Esq.

* Thinking they should rename C.E.S 2023......
A.C.E.S 2023 ( Akida Consumer Electronics Show ).

😃.
 

Attachments

  • 20221231_071513.jpg
    20221231_071513.jpg
    2.8 MB · Views: 83
  • 20221231_071519.jpg
    20221231_071519.jpg
    2.7 MB · Views: 67
  • 20221231_071540.jpg
    20221231_071540.jpg
    2.6 MB · Views: 74
  • Like
  • Love
  • Fire
Reactions: 31 users
Apologies if posted before.

Just to make it easier:

Tim Llewellynn
CEO/Co-Founder of NVISO Human Behaviour AI | President Bonseyes Community Association | Coordinator Bonseyes AI Marketplace | IBM Beacon Award Winner #metaverse #edgeai #decentralizedai. 1w

On the 18th day of Christmas, Santa brought me …. eye tracking running on neuromorphic hardware.

🥇 Detects gaze direction, iris/pupil detection, eye openness, and eye landmarks.

🥇 Asymmetrical eye tracking (left/right eye independent)

🥇 2.84 degree gaze accuracy (on in-domain data)

🥇 1000+ FPS on neuromorphic hardware

🥇 <250 KB model size

Designed specifically for applications of:

1. digital avatars,
2. human machine interfaces,
3. robots, and
4. driver monitoring systems.”

Very hard for anyone to ignore 1,000 plus frames per second.
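Those headline figures are easy to sanity-check. A quick back-of-envelope sketch (taking the post's quoted numbers as assumptions) shows what 1,000+ FPS and a <250 KB model imply:

```python
# Back-of-envelope check of the figures quoted in the post (assumed values).
fps = 1000        # claimed throughput on neuromorphic hardware
model_kb = 250    # claimed upper bound on model size

# At 1000 frames per second, each inference takes at most 1 millisecond.
latency_ms = 1000 / fps
print(latency_ms)  # 1.0

# A model under 250 KB fits comfortably in the flash of many
# Cortex-M-class microcontrollers (512 KB is a common part size).
fits_in_512kb_flash = model_kb < 512
print(fits_in_512kb_flash)  # True
```

At a 1 ms frame budget the tracker could in principle keep up even with fast saccadic eye movements, which is presumably the point of quoting 1,000+ FPS.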

Overnight, probably while half asleep, the thought occurred to me that none of the companies that have now been disclosed have, to my understanding, a reputation for requiring absolute secrecy about commercial engagements.

So still lurking in the background is the company which is so large and important that it can demand absolute secrecy, and which, if that secrecy were breached, could cause a disaster for Brainchip and its shareholders.

When you look at the companies already made public some are far from being insignificant so it is very intriguing.

The level of secrecy required likely excludes any company about which disclosure of any kind has been made. On that basis Samsung, Tata and Nvidia can probably be excluded.

This then leaves Amazon, AWS, Apple, Microsoft, Tesla and ???

I wait and wonder: what, when, who, and will the eventual disclosure light up the market like the Sydney Harbour Bridge on New Year's Eve?

My speculation and opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 67 users

cosors

👀
And I tried to make it clear to them that something like ASN, which I wish for everyone, has nothing to do with a lottery or casino, but simply with more eyes than mine. A four-bagger in weeks for me, in the bag, and off to my three. Now I'm waiting for the other three.

Lastly, a serious-minded "joke" from me just now: I hadn't stopped by there for months and I'm not going to again; maybe my bad typing was a sign. I seriously just typed "hotcrapper" into the URL bar, wondered about the results, and only then realised that I had really closed that chapter. 🤣

Thank you zeeb0t and then a happy new year ❤️‍🔥💥🍾

___
Don't misunderstand, the Anson story is just beginning. But like AVZ you need patience, and if you don't have it, as with me and ASN because it's not my core, just follow the ASX mechanisms. Back later. I won't give up a single share from my core.
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 24 users

goodvibes

Regular
Last edited:
  • Like
  • Fire
  • Love
Reactions: 18 users

jtardif999

Regular
  • Like
  • Thinking
  • Fire
Reactions: 31 users
  • Like
  • Fire
Reactions: 26 users
And this is what he works on at AMD:


No opinion just a fact so DYOR
FF

AKIDA BALLISTA
AMD is making much of CES 2023:


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Thinking
Reactions: 24 users

jtardif999

Regular
MF motto:
"Where there's smoke ... there's a meme stock."
and I thought you were going to say ‘… there’s an ass worth blowing it up’ 🤓
 
  • Haha
  • Like
Reactions: 7 users

jtardif999

Regular
Does anyone remember how many metres the Mars rover can currently travel per hour? 860-something per hour?
The search function here is really lacking; I can't find that document from NASA.
4 cm per second
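Taking that 4 cm per second figure at face value, the conversion to metres per hour is straightforward:

```python
# Convert the quoted rover speed (4 cm per second) to metres per hour.
cm_per_s = 4
m_per_hour = cm_per_s * 3600 / 100  # 3600 seconds per hour, 100 cm per metre
print(m_per_hour)  # 144.0
```

That works out to 144 metres per hour, so the "860-something" remembered above may refer to a different figure or unit.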
 
  • Like
Reactions: 6 users

Deadpool

hyper-efficient Ai
Warning: up-ramping proceeding.

As another year ends, I was left wondering what the future has in store for us. Renowned futurist Ray Kurzweil predicted some 15 years ago that the singularity, the time when the abilities of a computer overtake those of the human brain, will occur around 2045.



As I have pondered what may come out of the very secretive early access partner program over the last couple of years, in which countless engineers have worked diligently in secure labs with BRN's SNN architecture, incorporating it into their own in-house designs, the resulting products must be more technologically advanced than any product that has come before. Next week some of these companies will showcase their wares at CES23 for the first time, for the world to admire. None of this would be possible without their collaboration with BRN and, ultimately, the vision and brilliance of one man.



The future as I see it: as Akida becomes more readily available to the masses, passing from the idea of one man to the many thousands of future computer science students and engineers with curious minds and imaginations, the mind boggles at what this new architecture will reveal and what may be achieved with beneficial AI. I would surmise that the current Akida-based SNN technology, and the beneficial AI built on it, will evolve exponentially from now on. But what happens when cortical columns, and maybe even quantum computing, are introduced into the mix?

Maybe the singularity might be coming a lot sooner than 2045.



I’m also thinking that 7th January 2023, Greek (Orthodox) Christmas Day, may just be the tipping point for BRN and shareholder fulfilment.



Anyway, toodles all. I must be off to the naval architects to design my new super yacht BRAiN STORM tm :LOL: and will be setting a course for somewhere in the Med in the not too distant future; all are welcome aboard.

Happy new year everybody.


4Th Of July Summer GIF
 
  • Like
  • Love
  • Fire
Reactions: 57 users
AKIDA outshines all the rest with its 1.2 million neurons, but are the unsung heroes its 10 billion synapses? The following recently published research suggests they might just be:

Science News | Keeping information in mind may mean storing it across synapses, study suggests



Published on 30 December 2022

Author: TeamAskbyGeeks


Washington [US], Dec. 29 (ANI): From the time you read the Wi-Fi password off the coffee shop’s menu board to the time you get back to your laptop to enter it, you have to remember it.
If you’re wondering how your brain does this, you’re asking a question about working memory, which researchers have been trying to explain for decades. Now neuroscientists at MIT have published an important new insight into how it works.

In a study in PLOS Computational Biology, scientists at the Picower Institute for Learning and Memory compared measurements of brain-cell activity in animals performing a working memory task with the output of computer models representing two competing theories of how the brain retains information.
The results strongly support the newer idea that networks of neurons store information by making transient changes to the pattern of their connections, or synapses, and contradict the traditional alternative that memory is maintained by neurons kept continuously active (like an idling engine).

While both models allow information to be kept in mind, only the version that allows synapses to change their connections transiently (“short-term synaptic plasticity”) produces patterns of neural activity that mimic what is actually observed in real brains at work. Senior author Earl K. Miller acknowledges that the idea that brain cells maintain memory by always being “on” may be simpler, but it doesn’t represent what nature is doing, and it doesn’t produce the intermittent neural activity that may underpin the flexibility of complex thinking, which short-term synaptic plasticity supports.
“You need these mechanisms to give working memory activity the freedom it needs to be flexible,” said Miller, a professor of neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS). “It’s not as simple as a light switch. Working memory is as complex and dynamic as our thoughts.”
Co-lead author Leo Kozachkov, who received his Ph.D. at MIT in November for theoretical modeling work that included this study, said matching computer models to real-world data was critical.
“Most people think that working memory ‘happens’ in neurons: that persistent neural activity produces persistent thoughts. However, this idea has recently come under scrutiny because it doesn’t quite line up with the data,” said Kozachkov, who was co-supervised by co-senior author Jean-Jacques Slotine, a professor in BCS and of mechanical engineering. “Using artificial neural networks with short-term synaptic plasticity, we show that synaptic activity (rather than neural activity) can underlie working memory. An important takeaway from our paper: these ‘plastic’ neural network models match the brain quantitatively, with the additional functional advantage of robustness.”
Models match nature

Together with co-lead author John Tauber, an MIT graduate student, Kozachkov’s goal was not just to determine how working memory information is retained, but to elucidate how nature actually does it. That meant starting with “ground truth” measurements of the electrical “spike” activity of hundreds of neurons in an animal’s prefrontal cortex as it plays a working memory game. In each of many rounds, the animal sees an image, which then disappears. After a second it sees two images, including the original, and has to look at the original to get a small reward. The critical moment is that second in the middle, known as the “delay period,” during which the image must be kept in mind before the test.
The team consistently observed what Miller’s lab had observed many times before: neurons spiked profusely when they saw the original image, spiked only intermittently during the delay, and spiked again when they had to recall the image during testing (these dynamics are driven by the interplay of beta- and gamma-frequency brain rhythms). In other words, spiking is strong when information must initially be stored and when it must be recalled, but not while it is merely being maintained; spiking is not continuous during the delay.
In addition, the team trained a software “decoder” to read out working memory information from measurements of spike activity. The decoder was very accurate when spiking was high, but not when it was low, such as during the delay periods. This suggests that spikes do not represent the information during the delay, which raises a key question: if the spikes don’t hold the information, what does?
Researchers, including Oxford’s Mark Stokes, have proposed that changes in the relative strength, or “weight,” of synapses could act as a proxy for storing information. The MIT team tested this idea by computationally modeling neural networks containing two versions of each major theory. Machine learning networks were trained to perform the same working memory tasks as real animals and output neural activity that could also be interpreted by the decoder.
The upshot was that the computational networks that used short-term synaptic plasticity to encode information spiked when the actual brain spiked and fell silent when it did not. The networks that spiked continuously to maintain memory spiked all the time, including when the natural brain was not spiking. The decoder results showed a drop in accuracy during the delays in the synaptic-plasticity models, but accuracy remained unrealistically high in the sustained-spiking models.
In another layer of analysis, the team created a decoder to read information from synaptic weights. They found that during delays, synapses represented working memory information, whereas spikes did not.
Of the two versions of the model with short-term synaptic plasticity, the most realistic one, called “PS-Hebb,” has a negative feedback loop that keeps the neural network stable and robust, Kozachkov said.
The role of working memory
In addition to better matching nature, the synaptic-plasticity models confer other benefits that may matter to real brains. One is that the plasticity model retains information in its synaptic weights even after as many as half of the artificial neurons have been “ablated,” whereas the persistent-activity model collapsed after losing only 10-20% of its synapses. And, Miller adds, spiking only occasionally takes less energy than spiking continuously.
Also, Miller said, brief spiking rather than sustained spiking leaves room in time to store multiple items in memory. Research shows that people can hold up to four different things in working memory. Miller’s lab plans new experiments to determine whether models with intermittent spiking and synaptic-weight-based storage better match real neural data when animals must remember multiple things rather than just a single image.
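The article's central claim, that information can sit "activity-silent" in transient synaptic weights during the delay and be read back out by a later cue, can be illustrated with a toy Hebbian fast-weight network. This is a minimal sketch of the concept only, not the paper's PS-Hebb model; the network size, decay rate, and partial cue are arbitrary assumptions:

```python
import numpy as np

# Toy illustration of "activity-silent" working memory via short-term
# synaptic plasticity (fast Hebbian weights). NOT the paper's PS-Hebb
# model; just a minimal sketch of the idea.

rng = np.random.default_rng(0)
n = 100
pattern = rng.choice([-1.0, 1.0], size=n)   # the item to hold in mind

# Encoding: a burst of activity imprints the item into fast weights.
W_fast = np.outer(pattern, pattern) / n

# Delay period: no spiking at all; the weights only decay slowly.
decay = 0.9
for _ in range(5):
    W_fast *= decay                          # activity stays zero throughout

# Recall: a partial cue (half the item missing) is cleaned up by the
# information still stored in the synaptic weights.
cue = pattern.copy()
cue[: n // 2] = 0.0
recalled = np.sign(W_fast @ cue)

overlap = float(recalled @ pattern) / n      # 1.0 means perfect recall
print(overlap)
```

Even after the weights have decayed through several silent steps, the half-missing cue still recovers the full pattern, which is the flavour of "activity-silent" storage the study argues for.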

My opinion only so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 30 users

stuart888

Regular
Instos increasing holdings. Vanguard, to be precise. Up 0.05%.

Institutional Ownership
7.79%

Top 10 Institutions
6.73%

Mutual Fund Ownership
7.05%

Float
83.08%
Institution Name | Shares Held (% Change) | % Outstanding
Vanguard Investments Australia Ltd. | 23,171,293 (+0.06%) | 1.34
The Vanguard Group, Inc. | 22,963,058 (+0.01%) | 1.33
BlackRock Institutional Trust Company, N.A. | 18,881,892 (+0.04%) | 1.09
BlackRock Advisors (UK) Limited | 12,901,595 (-0.01%) | 0.75
LDA Capital Limited | 10,000,000 (-0.07%) | 0.58
Irish Life Investment Managers Ltd. | 9,141,627 (-0.00%) | 0.53
FV Frankfurter Vermögen AG | 7,500,000 (+0.01%) | 0.43
BetaShares Capital Ltd. | 5,308,642 (+0.00%) | 0.31
BlackRock Investment Management (Australia) Ltd. | 3,157,867 (+0.00%) | 0.18
State Street Global Advisors Australia Ltd. | 3,133,230 (+0.00%) | 0.18
State Street Global Advisors (US) | 2,641,218 (+0.01%) | 0.15
First Trust Advisors L.P. | 2,016,088 (-0.00%) | 0.12
Nuveen LLC | 1,773,407 (+0.00%) | 0.10
Charles Schwab Investment Management, Inc. | 1,543,302 (-0.00%) | 0.09
California State Teachers Retirement System | 1,479,448 (+0.01%) | 0.09



https://www.msn.com/en-au/money/wat...dbe88e447bdea7c96&duration=1D&l3=L3_Ownership
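As a rough consistency check on the table, any single row lets you back out the implied total shares on issue (reading the figures above as shares held and percentage of shares outstanding):

```python
# Implied shares outstanding from the Vanguard Investments Australia row.
vanguard_shares = 23_171_293   # shares held
vanguard_pct = 1.34 / 100      # stated % of shares outstanding
implied_outstanding = vanguard_shares / vanguard_pct
print(round(implied_outstanding / 1e9, 2))  # 1.73
```

That implies roughly 1.73 billion shares on issue, which lines up with BRN's share count at the time.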
Ending the year up nicely today in the USA, when the market was down.

I really don't focus on the stock price, since Brainchip should do fantastically as the implementation kicks in. Dollar cost averaging seems like a great approach to accumulate. I like to focus on 36 months out.

Too many dots here with fantastic, much needed, low power AI smarts technology! Nothing certain, but very confident from here in Florida USA! Thanks to all the dot collectors! 🎯🎯🎯


1672438520902.png
 
  • Like
  • Love
  • Fire
Reactions: 44 users

cosors

👀
There is no emoji for that:
Screenshot_2022-12-30-23-38-20-11_40deb401b9ffe8e1df2f1cc5ba480b12.jpg
 
  • Like
  • Haha
  • Thinking
Reactions: 10 users

Boab

I wish I could paint like Vincent
Hi FF,
not sure if the acquisition of the JAST learning rule patent earlier this year would help in this regard?
JAST.jpg
JAST.jpg
 
  • Like
  • Love
Reactions: 13 users

Evermont

Stealth Mode
I noted that in the BRN CES announcement they stated ‘Akida processors’, plural. Maybe that’s part of the gig, to introduce Akida 2.0?

Great pick up @jtardif999 Do we have more than one? 🔥 Maybe this relates to the different levels of the enablement program.

Also note the reference to Akida 1.0 in the technology section of the website. Is this new?


Laguna Hills, Calif. – December 29, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that it will be joining partners Prophesee, Socionext, and VVDN January 5-8 at CES to showcase compelling solutions on constrained edge devices, featuring its Akida™ processors. Akida processors simplify development by supporting today’s mainstream network models and workflows while being future-proofed for next-generation edge AI solutions.
 
  • Like
  • Fire
  • Love
Reactions: 39 users
Top Bottom