BRN Discussion Ongoing

Slymeat

Move on, nothing to see.
I received a research report on Weebit Nano that mentions Brainchip and states that ReRAM may be a good fit for neuromorphic SoC implementations.

Is that seed I sowed finally picking up some traction? How wonderful would it be to see (sing the following to Stevie Wonder and Paul McCartney's Ebony and Ivory—thanks for pointing out the correction @JDelekto )
“Weebit ReRAM and Brainchip Akida IP,
living together in perfect harmony,
providing essential functionality on the same SOC”.

The report is all about Weebit Nano but contains specific reference to Brainchip and the following tidbit on how ReRAM could be useful.

Weebit Nano has high potential in Neuromorphic Computing
Since conventional computing systems have separate processors and memory units, conventional chip architecture poses substantial challenges when it comes to hardware implementation and deployment of neuromorphic computing. To try and deal with that, several deployment techniques have been proposed, which entail cutting down on memory requirements to fit the hardware, which is actually undesirable. This has brought ReRAM into the picture.
ReRAM could offer the following advantages:
1) smaller footprint and non-volatility (compared to SRAM),
2) lower voltages and scaling below 28nm (compared to Flash),
3) lower cost, lower area, multi-level cell (compared to MRAM).
They didn't even mention the lower power consumption, cheaper manufacturing, temperature resilience, or radiation resilience—basically none of the cheaper, faster, stronger, lasts-longer benefits. Able to leap tall buildings in a single leap. (I made that last bit up)
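The in-memory computing angle is worth spelling out, since it is the main reason ReRAM keeps coming up alongside neuromorphic designs: a crossbar of resistive cells can perform a vector-matrix multiply right where the weights are stored, instead of shuttling data between a separate memory and processor. A toy sketch of the idea (the values and shapes are illustrative only, nothing Weebit- or Akida-specific):

```python
import numpy as np

# Hypothetical ReRAM crossbar: each cell's conductance stores one weight,
# so the vector-matrix multiply happens where the memory is (Ohm's law for
# the multiplies, Kirchhoff's current law for the accumulation).
conductances = np.array([[0.2, 0.7],
                         [0.5, 0.1],
                         [0.9, 0.4]])        # 3 input rows x 2 output columns

input_voltages = np.array([1.0, 0.0, 0.5])   # voltages applied to the rows

# Column currents are the MAC result; no data is shuttled to a separate ALU.
output_currents = input_voltages @ conductances   # -> approximately [0.65, 0.9]
```

The non-volatility in the list above matters here too: the "weights" persist in the conductances with no standby power.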

In the report, specific mention of Brainchip is made on page 9. It also has a link to prior research they have done on Brainchip. Nothing new for us, but it is excellent to see the flag being waved in papers dedicated to other companies.
 

Attachments

  • WBT research report_101122.pdf
    735.9 KB · Views: 168
  • Like
  • Fire
  • Love
Reactions: 47 users

alwaysgreen

Top 20

Media Alert: BrainChip Talks To Arm About the Importance of Ecosystem Partnerships on Latest Podcast

Wednesday, November 9, 2022 5:30 PM
Brainchip Holdings Limited/ADR

Topic:

Company Update
LAGUNA HILLS, CA / ACCESSWIRE / November 9, 2022 / BrainChip Holdings Ltd (ASX:BRN), (OTCQX:BRCHF), (ADR:BCHPY), the world's first commercial producer of ultra-low power neuromorphic AI IP, today announced that Kevin Ryan, Senior Director Go-to-Market and Ecosystem - IoT at Arm, shares his insight as part of the "This is our Mission" podcast. He joins BrainChip Vice President of Ecosystem and Partnerships Rob Telson to help listeners understand how Arm is working with its ecosystem to accelerate IoT design and developer enablement. The podcast will be available Thursday, November 10, 2022, at 3:00 p.m. PDT on BrainChip's website and across popular podcast platforms.
Arm is the leading technology provider of processor IP, offering the widest range of cores to address the performance, power, and cost requirements of every device - from IoT sensors to supercomputers, and from smartphones and laptops to autonomous vehicles. Ryan is an industry veteran with broad leadership background across all go-to-market functions. He works with partners to develop complete solutions that deliver real-world benefits to customers.
"The Arm ecosystem has shipped more than 230 billion chips and brings together contributions from partners to create solutions that benefit customers across a wide range of devices," said Telson. "Working with Arm allows us to offer pre-integrated designs that help developers get to market quicker and smoother. I'm excited that listeners will get a better understanding of the power of the ecosystem and how BrainChip helps solve specific use cases in the IoT market."
The "This is Our Mission" podcast provides AI industry insight to listeners including users, developers, analysts, technical and financial press, and investors. Past episodes are available at https://brainchipinc.com/brainchip-podcasts.
About BrainChip Holdings Ltd (ASX:BRN, OTCQX:BRCHF, ADR:BCHPY)
BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company's first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers' products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.
Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006
###
Media Contact:
Mark Smith
JPR Communications
818-398-1424
Investor Contact:
Mark Komonoski
Integrous Communications
Direct: 877-255-8483
Mobile: 403-470-8384
mkomonoski@integcom.us
SOURCE: Brainchip Holdings Limited/ADR

Arm-azing guest today! Let's go!
 
  • Like
  • Love
  • Fire
Reactions: 21 users

JK200SX

Regular
1668035711708.png
 
  • Like
  • Haha
  • Love
Reactions: 22 users

Quercuskid

Regular
Can't stop buying; managed to get a good parcel at $0.595. I hope that's the bottom, as I can't afford to buy any more 😳
 
  • Like
  • Haha
  • Fire
Reactions: 21 users

HopalongPetrovski

I'm Spartacus!
Can't stop buying; managed to get a good parcel at $0.595. I hope that's the bottom, as I can't afford to buy any more 😳
Oh Well!
Now that we know you're all set, I'll ring Sean and tell him to take his foot off the brakes. 🤣

 
  • Like
  • Haha
  • Love
Reactions: 13 users

Dr E Brown

Regular
I want to keep these ramblings for the next 2-3 years and then resend them to MF. I used to get upset by them and worry. Now I just smile, consider the repetitive nature of the comments, and go about my day.

 
  • Like
  • Haha
  • Fire
Reactions: 11 users

SERA2g

Founding Member
Got to visit the company's new offices this afternoon. They're much brighter and fresher than the last office: bigger, nicely set out, and, best of all from an accountant's perspective, the leasing fees are lower.

Made the point of thanking all the staff I briefly said hello to. I was also interested to see two full-time patent lawyers working in their own office space; that tells a story without me saying a word.

Has our technological lead increased? I believe it has; an opinion shared by someone who should know.

Sadly, I wasn't given permission to take any photos; the company has a new policy in place with regard to privacy and security, which I obviously had to respect, or there would have been no visit at all.

Can we expect Sean to honor his comments with regard to Akida 2.0? The short answer is yes, but remember timelines can change; things can be rather fluid (if that's the correct word), so the CEO will make an announcement when the time is right for the company as a whole, not based on shareholder anxiousness. I believe that to be as close as possible to what I was told.

I left the Brainchip offices this afternoon in Perth with a feeling of calm, a feeling of quiet confidence that my decision 7 years ago to invest in BRN is still today the best long-term investment strategy I have ever made.

My 2023/2024 prediction of growth trending upwards is now comfortably locked into my thinking; with that, I'm happy to wait.
A young "ARM" potentially in the making... my opinion is yes.

Thank you, Peter, Adam and Tony, for your warm, friendly hospitality; as always, I say thank you.

Tech x
Thank you for this post TECH.

I am very pleased to see the lease fees have reduced.

Over and out. SERA2g
 
  • Like
  • Haha
Reactions: 16 users

Beebo

Regular
5F57F544-78D6-4DDF-A8DF-AC52AFE926FA.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 25 users

alwaysgreen

Top 20
  • Like
Reactions: 1 users

Dhm

Regular
Can you guys please define for me, hopefully with examples, what an AI accelerator is? I see that our favourite YouTuber Anastasia nominated AI accelerators as the most important thing in computing at present.
 
  • Like
Reactions: 4 users

Beebo

Regular
Not sure why they would post a link to a podcast that hasn't even been released yet.
 
  • Like
  • Fire
Reactions: 5 users

jk6199

Regular
I hate shorters; still not sure where the volume comes from for them to buy low.

On a positive note, I bought more too today, so thank you 👌

One question: at these prices, are we at risk of dropping out of the 200? If so, how does that affect us?
 
  • Like
Reactions: 3 users

alwaysgreen

Top 20
I hate shorters; still not sure where the volume comes from for them to buy low.

On a positive note, I bought more too today, so thank you 👌

One question: at these prices, are we at risk of dropping out of the 200? If so, how does that affect us?
Unfortunately, we will almost certainly be dropped out of the 200 in the next rebalance.
 
  • Like
  • Sad
  • Thinking
Reactions: 6 users

alwaysgreen

Top 20
That's great, but it's still not out for two more hours.
 

Diogenese

Top 20
Can you guys please define for me, hopefully with examples, what an AI accelerator is? I see that our favourite YouTuber Anastasia nominated AI accelerators as the most important thing in computing at present.
My guess is that an AI accelerator is a circuit which performs basic computationally intensive classification functions in recognizing images, speech, or other sensor-type data faster than doing the job in software (CNN) on a CPU/GPU/cloud server. The CPU/GPU/cloud server can then perform some programmed function in response to the classification of the data.

AI accelerators in the past have been lumped with fixed model libraries which are difficult to update with additional images/sounds ...

Again in the past, AI accelerators used MAC (multiply accumulate) circuits of 8 bits or more.

Akida does data classification using compact model libraries adapted specifically for Akida using 1-bit to 4-bit "spikes", so Akida is capable of performing the function of an AI accelerator faster and more efficiently than previous AI accelerators ... and much more. [PS: The result of the classification by the AI accelerator uses much less data than the input sensor data.]

On-chip one-shot learning makes Akida capable of updating its model library to improve accuracy. Such modifications can be shared via the internet to update other Akidas within a specific group or which use the same model library.
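To make the MAC point concrete, here is a rough sketch of the multiply-accumulate at the heart of a conventional accelerator, and of how low-bit activations shrink the data involved (the 4-bit quantisation here is my own simplification for illustration, not BrainChip's actual method):

```python
import numpy as np

def mac(activations, weights):
    """Multiply-accumulate: the core operation an AI accelerator runs in bulk."""
    return int(np.dot(activations, weights))

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=8)   # 8-bit signed weights
acts_8bit = rng.integers(0, 256, size=8)    # conventional 8-bit activations
acts_4bit = acts_8bit >> 4                  # crude 4-bit quantisation (values 0..15)

# The 4-bit version moves half the activation bits and needs much narrower
# multipliers, trading a little precision for speed and energy.
full = mac(acts_8bit, weights)
approx = mac(acts_4bit, weights) << 4       # rescale to the same magnitude
print(full, approx)
```

Dropping to 1-bit activations pushes this to the extreme: the "multiply" degenerates into an add-or-skip, which is part of why spiking approaches can be so cheap in silicon.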

If Anastasi is talking about AI accelerators, I'm sure she understands Akida's capabilities in this field.

Someone needs to update Wiki:
https://en.wikipedia.org/wiki/AI_accelerator#Emergence_of_dedicated_AI_accelerator_ASICs

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency[34][35] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[citation needed] These accelerators employ strategies such as optimized memory use[citation needed] and the use of lower precision arithmetic to accelerate calculation and increase throughput of computation.[36][37] Some low-precision floating-point formats adopted for AI acceleration are half-precision and the bfloat16 floating-point format.[38][39][40][41][42][43][44] Companies such as Google, Qualcomm, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.[45][46][47][48][49][50] Cerebras Systems has also built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads.[51][52]
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Slymeat

Move on, nothing to see.
  • Like
  • Love
Reactions: 8 users
Can you guys please define for me, hopefully with examples, what an AI accelerator is? I see that our favourite YouTuber Anastasia nominated AI accelerators as the most important thing in computing at present.
Hi @Dhm

Back a long time ago I did a lot of reading about accelerators. It was probably at least two years ago now, so things may have moved on a little, but this is what I took away, in a nutshell and in my very lay technophobe language, which will likely cause the engineer among us to cringe - @Diogenese :

1. An accelerator is used to speed up data transmission from the sensor/source of the data acquisition to the processor.

2. The processor can be in the Cloud or it can just be in the room where they keep the main computer at an office or factory.

3. Traditionally, an accelerator simply compressed all of the data into batches, which reduced the bandwidth needed to transmit the data to the processor by sending multiple batches rather than one continuous stream.

4. This works OK in a factory as you can anticipate all the data you are trying to capture and might need to send and have sufficient bandwidth installed to cope with your closed system.

5. Out there on the web, where you are sending your data over the 3G, 4G or 5G network, you are in competition with others for the bandwidth, and your batches of compressed data can be larger than the available bandwidth, notwithstanding being compressed, and can still be forced to queue if there is a lot of traffic.

6. I do not understand the full technical reason, but if you have lots of compressed data batches (packets) queuing up waiting for a chance to send, they can actually become jammed (my word). Since I originally read about this problem I have read a couple of papers where solutions to the logjam were being proposed, so I am not sure about the present situation.

This then leads to using AKIDA as an accelerator and its advantages:

1. AKIDA does not compress the data coming from the sensor.

2. AKIDA actually processes the data coming from the sensor to arrive at an actionable insight or to sort out the needed relevant data to make a decision about what action to take.

3. AKIDA can then either take action at the sensor or it can send that actionable insight to somewhere else as meta data.

4. AKIDA's huge advantage as an accelerator is therefore that it sends only tiny little packets of meta data, which can reliably slot into the bandwidth with ease.

5. AKIDA's huge advantage continues when the meta data reaches the cloud or the processor where the action is to be taken, because that computer does not have to wade through every single piece of data collected and sent to it and sort out what is relevant before taking action. AKIDA gives it just the relevant data, so it can action that data almost instantaneously.

This ability or advantage can be critical in health, automotive, aeronautical, defence and space applications.
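Putting rough numbers on point 4 above (the frame size, frame rate and metadata size below are illustrative guesses of mine, not Akida specifications):

```python
# Raw sensor stream vs. actionable-insight metadata, in rough numbers.
frame_bytes = 640 * 480 * 2        # one 16-bit VGA frame from the sensor
fps = 30                           # frames per second
raw_bps = frame_bytes * fps * 8    # bits/s needed to stream everything upstream

# An event such as {"label": "person", "confidence": 0.97} fits in ~64 bytes
# and is only sent when something actually happens.
meta_bytes = 64
events_per_sec = 2
meta_bps = meta_bytes * events_per_sec * 8

print(f"raw: {raw_bps / 1e6:.1f} Mb/s, metadata: {meta_bps / 1e3:.2f} kb/s")
print(f"reduction: ~{raw_bps // meta_bps:,}x")
```

Even with generous assumptions for the event rate, the gap is several orders of magnitude, which is the whole point of processing at the sensor.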

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 36 users

alwaysgreen

Top 20
  • Haha
Reactions: 4 users

Cardpro

Regular
ISL Wins Urban Air Mobility Challenge

ISL’s Agility Prime project team submitted an innovative idea to the ATI Urban Air Mobility Innovation Challenge in July 2022. The ISL team was notified of selection to pitch the idea to a select panel of judges at the Defense TechConnect Innovation Summit & Expo in Washington D.C. on September 28th, 2022. The innovative idea was centered around providing neuromorphic radar capabilities to the urban air mobility community for safety and autonomous use.
The ISL team was one of 20 selected companies to pitch their autonomous and urban air mobility ideas to the panel. The top 5 pitches were selected to receive “no strings attached” funding. The pitch was limited to 5 minutes with a 3-minute Q&A after to help the judging panel better understand the technology’s use and project goals. The ISL pitch team received a fair number of questions from the panel, many of which were centered around our neuromorphic training capability and modeling and simulation tool RFview®. ISL’s pitch was selected as one of the winners of the innovation challenge and received an innovation award certificate and an innovation medal.

Remember our EAP, Information Systems Labs?

Information Systems Labs Using Neuromorphic Chip-Based Radar Research Solution for Air Force.

More on ISL, as per their website:

ISL’s team of scientists/engineers continue to understand how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations. Leveraging this understanding and the newly developed and emerging commercial neuromorphic chips, ISL is developing a new low-power, lightweight detect and avoid (DAA) system for very small UAS platforms that exploits automotive radar hardware, light-weight EO/IR sensors, advanced data fusion algorithms, and neuromorphic computing.

Additionally, ISL has pioneered an AI acceptance methodology that allows for DoD testing of AI solutions using essentially the same statistically based methodology in use today. ISL was awarded a US Patent for this in February (see link). The methodology leverages ISL's RF Digital Engineering tools (https://www.islinc.com/digital-engineering ).
 
  • Like
  • Fire
  • Love
Reactions: 21 users