Oh well! Can't stop buying; managed to get a good parcel at $0.595. I hope that's the bottom, as I can't afford to buy any more!
Thank you for this post TECH.

Got to visit the company's new offices this afternoon, much brighter and fresher than the last office. It's bigger, nicely set out, and, best of all from an accountant's perspective, the leasing fees are less.
Made a point of thanking all the staff I briefly said hello to. I was also interested to see two full-time patent lawyers working in their own office space; that tells a story without me saying a word.
Has our technological lead increased? I believe it has; an opinion shared by someone who should know.
Sadly, I wasn't given permission to take any photos. The company has a new policy in place with regard to privacy and security, which I obviously had to respect, or there would have been no visit as such.
Can we expect Sean to honor his comments with regard to Akida 2.0? The short answer is yes, but remember timelines can change; things can be rather fluid, if that's the correct word. The CEO will make an announcement when the time is right for the company as a whole, not based on shareholder anxiousness. I believe that to be as close as possible to what I was told.
I left the BrainChip offices in Perth this afternoon with a feeling of calm, a quiet confidence that my decision 7 years ago to invest in BRN is still, today, the best long-term investment strategy I have ever made.
My 2023/2024 prediction of growth trending upwards is now comfortably locked into my thinking; with that, I'm happy to wait.
A young "ARM" potentially in the making? My opinion is yes.
Thank you, Peter, Adam and Tony, for your warm, friendly hospitality. As always, I say thank you.
Tech x
Not sure why they would post a link to a podcast that hasn't even been released yet.
Unfortunately, we will almost certainly be dropped from the 200 at the next rebalance. Hate shorters; still not sure where the volume comes from for them to buy low.
On a positive, I bought more too today, so thank you
One question: at these prices, are we at risk of dropping out of the 200? If so, how does that affect us?
That's great, but it's still not out for 2 more hours.
Media Alert: BrainChip Talks To Arm About the Importance of Ecosystem Partnerships on Latest Podcast
BrainChip Holdings Ltd (ASX:BRN), (OTCQX:BRCHF), (ADR:BCHPY), the world's first commercial producer of ultra-low power neuromorphic AI IP, today announced that Kevin Ryan, Senior Director Go-to-Market and Ecosystem - IoT at Arm, shares his insight as part of the "This is our Mission" podcast.
My guess is that an AI accelerator is a circuit which performs basic, computationally intensive classification functions in recognizing images or speech or other sensor-type data faster than doing the job in software (CNN) on a CPU/GPU/cloud server. The CPU/GPU/cloud server can then perform some programmed function in response to the classification of the data.

"Can you guys please define for me, hopefully with examples, what an AI accelerator is? I see that our favourite youtuber Anastasia nominated AI accelerators the most important thing in computing at present."
3pm PDT: that's 10am AEDT on Thursday, Nov 11, i.e. tomorrow.

"That's great but it's still not out for 2 more hours."
Hi @Dhm

Can you guys please define for me, hopefully with examples, what an AI accelerator is? I see that our favourite youtuber Anastasia nominated AI accelerators the most important thing in computing at present.
Even worse, lol.

"3pm PDT: that's 10am AEDT on Thursday, Nov 11, i.e. tomorrow."
More on ISL, as per their website:

ISL Wins Urban Air Mobility Challenge
ISL’s Agility Prime project team submitted an innovative idea to the ATI Urban Air Mobility Innovation Challenge in July 2022. The ISL team was notified of selection to pitch the idea to a select panel of judges at the Defense TechConnect Innovation Summit & Expo in Washington D.C. on September 28th, 2022. The innovative idea was centered around providing neuromorphic radar capabilities to the urban air mobility community for safety and autonomous use.
The ISL team was one of 20 selected companies to pitch their autonomous and urban air mobility ideas to the panel. The top 5 pitches were selected to receive “no strings attached” funding. The pitch was limited to 5 minutes with a 3-minute Q&A after to help the judging panel better understand the technology’s use and project goals. The ISL pitch team received a fair number of questions from the panel, many of which were centered around our neuromorphic training capability and modeling and simulation tool RFview®. ISL’s pitch was selected as one of the winners of the innovation challenge and received an innovation award certificate and an innovation medal.
Remember our EAP, Information Systems Labs?
Information Systems Labs Using Neuromorphic Chip-Based Radar Research Solution for Air Force.
Hi @Dhm
A long time ago I did a lot of reading about accelerators, probably at least 2 years now, so some things may have moved on a little, but this is what I took away, in a nutshell and in my very lay, technophobe language, which will likely cause the engineers among us to cringe - @Diogenese:
1. An accelerator is used to speed up data transmission from the sensor/source of the data acquisition to the processor.
2. The processor can be in the cloud, or it can just be in the room where they keep the main computer at an office or factory.
3. Traditionally, an accelerator simply compressed all of the data in batches, which reduces the bandwidth needed to transmit the data to the processor by sending multiple batches rather than one continuous stream.
4. This works OK in a factory, as you can anticipate all the data you are trying to capture and might need to send, and have sufficient bandwidth installed to cope with your closed system.
5. Out on the web, where you are sending your data over the 3, 4 or 5G network, you are in competition with others for bandwidth, and your batches of compressed data can be larger than the available bandwidth, notwithstanding being compressed, and can still be forced to queue if there is a lot of traffic.
6. I do not understand the full technical reason, but if you have lots of compressed data batches (packets) queuing up, waiting for a chance to send, they can actually become jammed (my word). Since I originally read about this problem I have read a couple of papers proposing solutions to the log jam, so I'm not sure about the present situation.
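To make point 3 concrete, here is a toy Python sketch (my own illustration, not anything vendor- or Akida-specific) of compressing a batch of sensor readings before sending it over a link:

```python
import struct
import zlib

# Toy sensor stream: 1,000 samples of a slowly varying signal,
# packed as 32-bit floats (4 bytes each).
samples = [20.0 + 0.01 * i for i in range(1000)]
raw = b"".join(struct.pack("<f", s) for s in samples)

# Traditional accelerator approach (point 3): compress the whole
# batch before transmitting it.
compressed = zlib.compress(raw)

print(f"raw batch:        {len(raw)} bytes")
print(f"compressed batch: {len(compressed)} bytes")
# The compressed batch is smaller, but it is still the *entire*
# stream: the receiver must decompress and sift through all of it.
```

Note the batch shrinks but nothing is discarded, which is why (per points 5 and 6) large compressed batches can still contend for bandwidth and queue.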
This then leads to using AKIDA as an accelerator and its advantages:
1. AKIDA does not compress the data coming from the sensor.
2. AKIDA actually processes the data coming from the sensor to arrive at an actionable insight, or to sort out the relevant data needed to decide what action to take.
3. AKIDA can then either take action at the sensor, or send that actionable insight somewhere else as metadata.
4. AKIDA's huge advantage as an accelerator is therefore that it sends only tiny packets of metadata, which can reliably slot into the available bandwidth with ease.
5. AKIDA's huge advantage continues when the metadata reaches the cloud or the processor where the action is to be taken, because that computer does not have to wade through every single piece of data collected and sent to it, sorting out what is relevant before acting. AKIDA gives it just the relevant data, so it can action that data almost instantaneously.
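A back-of-the-envelope sketch of the metadata advantage described above: compare one second of raw audio with a hypothetical classification result sent as a small packet. The field names here are made up for illustration; they are not a real Akida format.

```python
import json

# Hypothetical raw input: one second of audio at 16 kHz, 16-bit mono.
raw_audio_bytes = 16_000 * 2

# What a processing-at-the-sensor accelerator might send instead:
# not the audio, just the actionable insight as a tiny metadata packet.
meta = {"event": "keyword_detected", "label": "alarm", "confidence": 0.97}
meta_bytes = len(json.dumps(meta).encode("utf-8"))

print(f"raw sensor data per second: {raw_audio_bytes} bytes")
print(f"metadata packet:            {meta_bytes} bytes")
print(f"reduction factor:           ~{raw_audio_bytes // meta_bytes}x")
```

The exact numbers are invented, but the shape of the argument holds: a classification result is orders of magnitude smaller than the sensor stream it summarises.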
This ability or advantage can be critical in health, automotive, aeronautical, defence and space applications.
My opinion only DYOR
FF
AKIDA BALLISTA
My most humble thank you to you both, @Diogenese and @Fact Finder.

"My guess is that an AI accelerator is a circuit which performs basic computationally intensive classification functions in recognizing images or speech or other sensor type data faster than doing the job in software (CNN) on a CPU/GPU/cloud server. The CPU/GPU/cloud server can then perform some programmed function in response to the classification of the data."
AI accelerators in the past have been lumped with fixed model libraries which are difficult to update with additional images/sounds ...
Again, in the past, AI accelerators used MAC (multiply-accumulate) circuits of 8 bits or more.
Akida does data classification using compact model libraries adapted specifically for Akida, using 1-bit to 4-bit "spikes", so Akida is capable of performing the function of an AI accelerator faster and more efficiently than previous AI accelerators ... and much more. [PS: The result of the classification by the AI accelerator uses much less data than the input sensor data.]
On-chip one-shot learning makes Akida capable of updating its model library to improve accuracy. Such modifications can be shared via the internet to update other Akidas within a specific group or which use the same model library.
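As a generic illustration of the low-bit arithmetic mentioned above, here is a toy 4-bit quantisation and integer multiply-accumulate. This is my own simplified sketch of the general technique, not Akida's actual scheme (Akida uses event-based spikes, not this textbook quantisation).

```python
def quantize_4bit(values, scale):
    """Map floats to signed 4-bit integers (-8..7) via a shared scale."""
    return [max(-8, min(7, round(v / scale))) for v in values]

# Toy weights and inputs, with illustrative scales of 0.1.
weights = [0.31, -0.12, 0.53, -0.44]
inputs = [0.6, 0.2, -0.5, 0.1]
w_scale, x_scale = 0.1, 0.1

wq = quantize_4bit(weights, w_scale)
xq = quantize_4bit(inputs, x_scale)

# The MAC itself runs entirely in small integers; the result is
# rescaled back to a float only at the end.
acc = sum(w * x for w, x in zip(wq, xq))
approx = acc * w_scale * x_scale
exact = sum(w * x for w, x in zip(weights, inputs))

print(f"4-bit weights: {wq}, 4-bit inputs: {xq}")
print(f"integer MAC result: {approx:.3f} vs exact float: {exact:.3f}")
```

The point is that each multiply operates on numbers of at most 4 bits, which is far cheaper in silicon than 8-bit-or-wider MACs, at the cost of some quantisation error.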
If Anastasi is talking about AI accelerators, I'm sure she understands Akida's capabilities in this field.
Someone needs to update Wiki:
https://en.wikipedia.org/wiki/AI_accelerator#Emergence_of_dedicated_AI_accelerator_ASICs
Emergence of dedicated AI accelerator ASICs[edit]
While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency[34][35] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[citation needed] These accelerators employ strategies such as optimized memory use[citation needed] and the use of lower precision arithmetic to accelerate calculation and increase throughput of computation.[36][37] Some low-precision floating-point formats adopted for AI acceleration are half-precision and the bfloat16 floating-point format.[38][39][40][41][42][43][44] Companies such as Google, Qualcomm, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.[45][46][47][48][49][50] Cerebras Systems has also built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads.[51][52]
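For what it's worth, the bfloat16 format mentioned in that excerpt is easy to illustrate: it keeps float32's 8 exponent bits but only 7 mantissa bits, so a float32 can be crudely converted by truncating its low 16 bits. (Real hardware typically rounds rather than truncates; this sketch is just to show the bit layout.)

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 pattern for x (truncation, no rounding)."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Expand a bfloat16 bit pattern back to a float32 value."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

x = 3.14159265
bx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(f"float32 value:       {x}")
print(f"bfloat16 round-trip: {bx}")  # 3.140625: same range, ~2-3 decimal digits
```

Because the exponent width matches float32, bfloat16 keeps the same dynamic range and only sacrifices precision, which is why it suits neural-network training and inference.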
Hey - just a tip - you can actually get the URL for a tweet by hitting share/send and then "copy URL"; then you can just paste that URL into your post here and it will natively embed the tweet (i.e. the actual tweet instead of just a screenshot of it). The upshot is that it is then more likely to be engaged with.
Media Alert: BrainChip Talks To Arm About the Importance of Ecosystem Partnerships on Latest Podcast
LAGUNA HILLS, CA / ACCESSWIRE / November 9, 2022 / BrainChip Holdings Ltd (ASX:BRN), (OTCQX:BRCHF), (ADR:BCHPY), the world's first commercial producer of ultra-low power neuromorphic AI IP, today announced that Kevin Ryan, Senior Director Go-to-Market and Ecosystem - IoT at Arm, shares his...