BRN Discussion Ongoing

@Diogenese

Just came across these guys, based out of Canada, and their recent paper (Jul 22) in line with their commercial offering.

Wondering if you can run your eye over it whenever?

Can't see any mention of SNN, CNN2SNN, neuromorphic etc., but what jumped out was the bit quantization and the ARM connection/use, and I wondered how relevant their product is to our space.

Would be nice if they were incorporating our IP somewhere within, but I don't think they are.

TIA & fully expect an ogre haha



[Screenshots of the paper attached]
 
  • Like
Reactions: 12 users

Diogenese

Top 20
@Diogenese

Just came across these guys, based out of Canada, and their recent paper (Jul 22) in line with their commercial offering.

Wondering if you can run your eye over it whenever?

Can't see any mention of SNN, CNN2SNN, neuromorphic etc., but what jumped out was the bit quantization and the ARM connection/use, and I wondered how relevant their product is to our space.

Would be nice if they were incorporating our IP somewhere within, but I don't think they are.

TIA & fully expect an ogre haha




As you say fmf, not Akida.

The priority date is November 2018, which is probably too early for Akida.

They do talk about 2-bit quantization, which will save power and time, but they don't mention our secret sauce, n-of-m rank coding.

US2021350233A1 System and Method for Automated Precision Configuration for Deep Neural Networks




[0003] In modern intelligent applications and devices, deep neural networks (DNNs) have become ubiquitous when solving complex computer tasks, such as recognizing objects in images and translating natural language. The success of these networks has been largely dependent on high performance computing machinery, such as Graphics Processing Units (GPUs) and server-class Central Processing Units (CPUs). Consequently, the adoption of DNNs to solve real-world problems is typically limited to scenarios where such computing is available. Recently, many new computer processors specifically designed for artificial intelligence (AI) applications have emerged. These dedicated processors, such as Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs) and analog computers offer the promise of more efficient and accessible AI products and services. However, designing DNN models optimized for these new processors remains a significant challenge for AI engineers and application developers. Significant domain expertise and trial-and-error is often required to create an optimized DNN for a specialized hardware. One of the main challenges is how to enable a precision configuration for a given DNN architecture that maintains accuracy and optimizes for memory, energy and latency performance on a given hardware architecture. The task of quantizing individual layers of a DNN, which can contain dozens of layers, often results in sub optimal performance in a real-world environment. Thus, there is significant interest in automating the task of enabling a precision configuration for an entire DNN architecture that considers the properties of the hardware architecture to optimize memory, energy and latency as well as maintain a desired level of accuracy on the given dataset.
...
[0007] There exists a need for scalable, automated processes for model quantization on diverse DNN architectures and hardware back-ends. Generally, it is found that the current capacity for model quantization is outpaced by the rapid development of new DNNs and disparate hardware platforms that aim to increase the applicability and efficiency of deep learning workloads.
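For anyone wondering what 2-bit quantization means in practice, here is a minimal sketch of uniform symmetric weight quantization at a configurable bit width. This is my own illustration in Python/NumPy, not the method claimed in the patent and not Akida's scheme; the function name and layer sizes are made up.

```python
import numpy as np

def quantize_uniform_symmetric(w, bits=2):
    """Quantize a float weight tensor to `bits` signed levels.

    Illustrative only: maps weights onto a uniform grid centred on zero,
    then returns the de-quantized ("fake-quantized") values plus the scale.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 1 for 2-bit -> levels {-1, 0, +1}
    scale = max(np.max(np.abs(w)) / max(qmax, 1), 1e-12)
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale, scale

# Example: quantize one layer's weights and measure the reconstruction error
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 128)).astype(np.float32)

for bits in (8, 4, 2):
    w_q, scale = quantize_uniform_symmetric(w, bits)
    err = np.mean((w - w_q) ** 2)
    print(f"{bits}-bit: scale={scale:.5f}, MSE={err:.2e}")
```

The patent's pitch, as I read paragraphs [0003] and [0007], is automating the choice of bit width per layer against accuracy, memory, energy and latency targets, rather than hand-tuning each layer like this.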
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 19 users

Deleted member 118

Guest
 
  • Haha
  • Like
Reactions: 8 users

Deleted member 118

Guest
Looking forward to this week if this pans out
[Image attached]



 
  • Like
  • Haha
Reactions: 6 users
Morning
 
  • Like
  • Love
  • Fire
Reactions: 5 users

Deleted member 118

Guest
  • Like
  • Love
Reactions: 8 users
At least someone else is awake.

Haha, I like being an early riser. I'm tired from travelling, otherwise I would have been up at 5 am.
 
  • Like
  • Haha
Reactions: 6 users

Sirod69

bavarian girl ;-)
and I'm off to bed now
 
  • Like
  • Haha
  • Love
Reactions: 18 users

cosors

👀
Thanks again, I got it. Now it's time to learn. Two others have also ordered it!
 
  • Like
  • Love
Reactions: 3 users
  • Haha
  • Like
Reactions: 16 users
My personal experience with an ASX-listed company shifting to the NASDAQ was a disaster. Trying to communicate with the company and with Computershare US, which still operates using snail mail, from Australia was a nightmare. The business ended up in receivership, and trying to sort out what happened to my shareholding took months of emails and direct calls at midnight; I ended up losing everything. Many lessons learned and a very sour taste about NASDAQ listing.

I realise it will happen eventually, but it is expensive and I hope it does not happen in the near future. When it does happen I won’t agree unless it is a dual listing on ASX and NASDAQ.
It will be a NO vote for me. I'm here for the long haul and the franked dividends.
 
  • Like
Reactions: 11 users

stuart888

Regular
I'm continuing my LSTM learning focus. While I have a computer science degree, this kind of programming is really tough to get my head around. Every bit helps; thanks @Diogenese, you have been very helpful. I am so excited that the next version of Akida is moving forward nicely.

I can fully grasp that the LSTM memory addition opens up a lot of use cases, and is especially good for events that rarely happen. So long-term memory at the edge, with ultra-low power, is very important for sparse events.
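For anyone else wrestling with how the "long-term memory" part actually works: an LSTM carries one extra state vector (the cell state) through time, and three gates decide what gets written to it, kept in it, and read out of it. Here is a minimal sketch of a single LSTM step in NumPy, using the textbook equations; nothing here is Akida-specific, and all names and sizes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (standard formulation, illustrative only).

    x       : input vector at this time step, shape (n_in,)
    h_prev  : previous hidden state, shape (n_hid,)
    c_prev  : previous cell state (the long-term memory), shape (n_hid,)
    W, U, b : stacked parameters for the 4 gates (input, forget, cell, output)
    """
    z = W @ x + U @ h_prev + b                    # shape (4 * n_hid,)
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gates squashed into (0, 1)
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # forget some old memory, write some new
    h = o * np.tanh(c)                            # expose part of the memory as output
    return h, c

# Tiny usage example: run a short random sequence through one cell
n_in, n_hid, T = 8, 16, 5
rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(T):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape, c.shape)   # (16,) (16,)
```

The forget gate is what lets the cell state carry information across long stretches of input, which is why it suits rare or sparse events.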

All is rah-rah-go-brainchip!

 
  • Like
  • Fire
  • Love
Reactions: 27 users

M_C

Founding Member
Love the fact that we are potentially involved in positive changes in the world (curing blindness, no less, amongst other things)...

(Speculation only) - University of California, Santa Barbara


So far, the researchers evaluated the performance of their neural autoencoder-based approach in the context of visual neuroprostheses. They found that it achieved remarkable results, consistently leading to higher-quality visual perceptions across a wide range of virtual patients, which is a significant step forward in the path towards attaining reliable bionic vision.

The neural encoder created by Granley and his colleagues generated far more convincing visual stimuli than other conventional encoding strategies, using the same training datasets. Notably, it could also easily be applied to other neuroprostheses that can be described using a sensory model, including those designed to enhance the senses of hearing and touch.

"I'm excited about the potential broader impact of our framework," Granley said. "We were able to demonstrate the benefit gained by 'closing the loop on perception,' or in other words, including in-the-loop a model of the effects of stimulation on the patient's perception. This could be useful for a variety of prostheses. For example, cochlear implants could use this framework to improve auditory perceptions."

The model introduced by this team of researchers could eventually be used by developers to improve the quality of the vision enabled by visual neuroprosthetic devices. In addition, it could be applied to existing prosthetic limbs to produce more convincing feelings of cutaneous touch in patients who are missing specific limbs or have undergone amputations.

"In this project, we only used virtual, simulated patients," Granley added. "In the future, I would like to test our encoder on human patients with implanted visual prostheses. If we could attain the same improvement on real patients, then this would mark a huge step towards restoring vision to millions of people suffering from blindness."

[LinkedIn screenshot attached]
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 47 users

stuart888

Regular
Brainchip management seemed to be clear that 2022 is all about relationships and the ecosystem! Seems pretty clear that the Brainchip team is winning! Brainchip is blasting ahead in the ecosystem, and likely up on a NASA CubeSat soon too!

 
  • Like
  • Fire
  • Love
Reactions: 36 users

alwaysgreen

Top 20
It will be a NO vote for me. I'm here for the long haul and the franked dividends.
Just sitting here dreaming of $1 dividends... 💸
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Evermont

Stealth Mode
Top 20 is out.
 
  • Like
  • Fire
Reactions: 13 users

Evermont

Stealth Mode
[Image attached]
 
  • Like
  • Fire
  • Love
Reactions: 45 users

toasty

Regular
Not sure this tells us anything we didn't already know.......??
 
  • Like
  • Fire
Reactions: 7 users

stuart888

Regular
  • Love
  • Like
Reactions: 5 users

Harwig

Regular
Announcement of Top 20 holders. ASX. Non-sensitive.
 
  • Like
  • Fire
Reactions: 8 users