BRN Discussion Ongoing

Rob Telson likes:

Sally Ward-Foxton
Senior Reporter at EE Times
45 min.
On STMicroelectronics’s press call this morning, I asked Remi El-Ouazzane about the STM32N6 MCUs – set to be the first ST parts with dedicated AI acceleration – expected to sample around the end of '22.
“The baby is alive and doing well,” he said. “We will have more on this in the coming weeks, but so far so good, we are quite happy with where we are.”
We have already seen home-grown AI accelerators added to parts from companies like NXP Semiconductors and Silicon Labs. If you’re interested in this sector, stay tuned to EE Times | Electronic Engineering Times.
Yeah she's alright, but she ain't no rocket scientist..
 
  • Haha
  • Like
Reactions: 8 users

jtardif999

Regular
Quick question. The original roadmap shows Akida 2000 as being "an optimised version of AKD 1500 for LSTM and transformers". We now know that Akida 2000 includes the transformer part, but what happened to the LSTM part? Or have LSTMs been replaced with TENNs? I tried to google information on TNNs but kept getting articles about tennis, which was entertaining but not particularly helpful. But then I discovered something that @TechGirl posted from Carnegie Mellon University which describes TNNs as follows:

Processor Architecture: Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks for implementing a class of functions based on space-time algebra. By exploiting time as a computing resource, TNNs are capable of performing sensory processing with very low system complexity and very high energy efficiency as compared to conventional ANNs & DNNs. Furthermore, one key feature of TNNs involves using spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.
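To make STDP concrete, here is a minimal sketch of a pairwise STDP weight update in C. The amplitudes and time constant are illustrative assumptions, not parameters from the CMU work; the point is only that a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise, using nothing but locally observed spike times (which is why the learning is unsupervised and continuous).

```c
#include <stdio.h>
#include <math.h>

/* Pairwise STDP sketch. Amplitudes and time constant are made-up
 * illustrative values, not taken from the CMU TNN design. */
#define A_PLUS  0.05   /* potentiation amplitude */
#define A_MINUS 0.055  /* depression amplitude */
#define TAU_MS  20.0   /* plasticity time constant (ms) */

/* Weight change for one pre/post spike pairing. */
double stdp_dw(double t_pre_ms, double t_post_ms) {
    double dt = t_post_ms - t_pre_ms;                /* post minus pre */
    if (dt > 0) return  A_PLUS  * exp(-dt / TAU_MS); /* causal: strengthen */
    if (dt < 0) return -A_MINUS * exp( dt / TAU_MS); /* anti-causal: weaken */
    return 0.0;
}

int main(void) {
    double w = 0.5;
    w += stdp_dw(10.0, 14.0);  /* pre fires 4 ms before post -> w goes up */
    w += stdp_dw(30.0, 22.0);  /* pre fires 8 ms after post  -> w goes down */
    printf("weight after two pairings: %.4f\n", w);
    return 0;
}
```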

I think the LSTM capability is tied up in the screen/image segmentation addition to Akida 2.0: the ability to divide the screen/image into smaller segments and then infer relationships between some or all of the segments. Even though this intuitively seems related to vision applications, I think it will also apply to sentences and language structure, allowing Akida to make more complex inferences associated with language.
 
  • Like
  • Thinking
Reactions: 10 users
Had a mental note on this guy, as I'd seen him pop up last year during a surf with a comment on Akida.

I just did a quick TSE search and see that @Sirod69 has also picked up on him and posted not long ago.

Seems to be watching Akida, and he's the Director of Digital Solutions at a smaller Middle East company.

Can't find a connection as yet other than he's aware of us.


Partnered with some multinational companies in automation (Rockwell), drones (Skylark), subsea (Marlinks), fibre optic sensing (Fluves), oil & gas, mining, and manufacturing (Proclink).

Mohd Amirrudin Esa's Post

Mohd Amirrudin Esa
Technology Leadership | Artificial Intelligence Data Science | UAV Robotics & Automation | Energy Sustainability | R&D | Product Development Smart Manufacturing | IR5.0 Cognitive Solutions
4w Edited

'Neuromorphic Akida + Minsky AI Engine'. Wow! This will definitely bring a wealth of advanced AI-driven solutions to the table. BrainChip's Akida ultra-low power, fully digital, event-based neuromorphic AI IP combined with AI Labs' Minsky AI Engine's ease of use, smart models, and sensory #inference capabilities will further enhance the capabilities of the whole Edge-AI application, resulting in a powerful real-time solution for 'INDUSTRIAL SYSTEM HEALTH MONITORING'. The addition of AI Labs' expertise in #predictivemaintenance applications, such as #vibration #analysis and #temperature #sensing, is sure to take this #solution to new heights! BrainChip Ai Labs Link to news release: https://lnkd.in/gqtgMvmE #ai #digital #assetmonitoring #edgecomputing #edgeai #Akidachip #MinskyAIengine #lowpower #datascience #cognitiveanalytics #computervision #embeddedsystems #iiot #aiot #neuromorphic #internetofthings #autonomoussolution #autonomoussystems #autonomousmobility #automation #intelligentautomation #anomalydetection #dronetechnology #smartuav #5g #innovation #neuralnetwork #industrialautomation #oilandgasindustry #powerindustry #industrialrobotics #robotics #distributedcomputing #droneautonomy #machineautonomy
  • Akida edge computer chip & Minsky AI engine
 
  • Like
  • Fire
  • Love
Reactions: 20 users
Was thinking about the chatter on the X280 and went back to an old article I posted at the beginning of Jan.

Def appears a fit for our new platform.

Part snip below.


Vector Coprocessor Interface Extension (VCIX)​

At the 2022 AI Hardware Summit, Krste Asanovic, SiFive Co-Founder and Chief Architect, introduced a new Vector Coprocessor Interface Extension (VCIX).



As customer evaluation of the X280 got underway, SiFive says it started noticing new potential usage trends for the core. One such usage is not as the primary ML accelerator, but rather as a snappy side coprocessor/control processor with ML acceleration functionality. In other words, SiFive says it has noticed that companies were considering the X280 as a replacement coprocessor and control processor for their main SoC. Instead of rolling their own sequencers and other controllers, the X280 proved a good potential replacement.

To assist customers with such applications, SiFive developed the new Vector Coprocessor Interface Extension (VCIX, pronounced “Vee-Six”). VCIX allows for tight coupling between the customer’s SoC/accelerator and the X280. For example, consider a hardware AI startup with a novel way of processing neural networks or one that has designed a very large computational engine. Instead of designing a custom sequencer or control unit, they can simply use the X280 as a drop-in replacement. With VCIX, they are given direct connections to the X280. The interface includes direct access into the vector unit and memory units as well as the instruction stream, allowing an external circuit to utilize the vector pipeline as well as directly access the caches and vector register file.

The capabilities of essentially modifying the X280 core are far beyond anything you can get from someone like Arm. In theory, you could have an accelerator processing its own custom instructions by doing operations on its own side and sending various tasks to the X280 (as a standard RISC-V operation), or directly executing various operations on the X280 vector unit by going directly to that unit. Alternatively, the VCIX interface can work backward by allowing custom execution engines to be connected to the X280 for various custom applications (e.g., FFTs, image signal processing, matrix operations). Such an engine would then operate as if it were part of the X280, operating in and out of the X280's own vector register file. In other words, VCIX essentially allows you to much better customize the X280 core with custom instructions and custom operations on top of a fully working RISC-V core capable of booting full Linux and supporting virtualization.
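For flavour only, here is a hedged sketch of what exposing an accelerator operation as a native instruction can look like from C on RISC-V, using GCC's `.insn` directive in the custom-0 opcode space. The opcode/funct values and the `accel_op` name are hypothetical; the article does not publish VCIX encodings, so this is a generic custom-instruction sketch, not the actual VCIX interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical wrapper: hand two operands to a tightly coupled accelerator
 * and get the result back in a register. 0x0b is the RISC-V custom-0
 * opcode space; the funct3/funct7 values are made up for illustration. */
static inline uint64_t accel_op(uint64_t a, uint64_t b) {
    uint64_t rd;
    __asm__ volatile(".insn r 0x0b, 0x0, 0x00, %0, %1, %2"
                     : "=r"(rd)
                     : "r"(a), "r"(b));
    return rd;
}

int main(void) {
    /* From the core's point of view this is just one more instruction in
     * the stream; a VCIX-style interface additionally lets the attached
     * engine reach the vector register file and caches directly. */
    uint64_t out = accel_op(42, 7);
    printf("accelerator returned: %llu\n", (unsigned long long)out);
    return 0;
}
```

(This needs a RISC-V toolchain and, of course, hardware or a simulator that actually implements the custom opcode.)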
 
  • Like
  • Fire
  • Love
Reactions: 38 users

Tothemoon24

Top 20
[image attachment]
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Slade

Top 20
Is Akida in this? Just posted now.


[image attachment]
 
  • Like
  • Fire
  • Love
Reactions: 34 users

stockduck

Regular

"......Once you join the company you will work on the Neurokit2e project which is part of the EU Horizon Europe framework. Our goal in the Neurokit2e project is to design a RISC-V-based Application Specific Accelerator for Neuromorphic Computing.
......
  • Develop middle-ware SW (compatible with Spiking Neural Networks) for fast adoption of the newly proposed extensions.



I don't know if this is related to BrainChip, but the customers and partners of Codasip are interesting to me.

Thanks for the recent news on Monday!
 
  • Like
  • Thinking
Reactions: 11 users

VictorG

Member
  • Haha
  • Like
Reactions: 15 users

Slade

Top 20


March 7, 2023
Products/Services / Press Releases
Support menu expanded and two new colors added for general sale
Panasonic begins accepting general purchase orders for "NICOBO", a robot that makes you smile
Panasonic Entertainment & Communication Co., Ltd. (hereinafter referred to as Panasonic) has decided to sell "NICOBO", previously provided only to supporters who applied through crowdfunding, and began accepting purchase reservations for NICOBO today. General purchases will be available on the NICOBO official website from May 16th.

NICOBO is a "weak robot" born from a project proposed by an employee seeking to provide value in the form of "richness of mind." Unlike robots that perform tasks in place of humans, NICOBO doesn't do anything; instead, it amplifies the kindness and smiles of those around it. We will propose it as a new form of happy interaction between robots and humans that has never existed before.

Along with this general sale, two new colors, smoke navy and shell pink, have been added to the stone gray NICOBO available so far, aiming to reach a wider range of people with colors that blend well with interiors. Living with NICOBO requires purchasing the main unit and paying a monthly fee, which enables NICOBO to adapt to life with the purchaser and continue to evolve. Purchase reservations will be accepted on the NICOBO official website, and units will be shipped to those who reserved in conjunction with the start of sales in mid-May. Alongside the release, we are strengthening the support menu for living with NICOBO with peace of mind: NICOBO CLINIC offers a health checkup service and a knit exchange service to replace NICOBO's knitwear, and a care plan offers discounts on NICOBO CLINIC services such as treatment costs when hospitalization is required.

Panasonic's technology supports the realization of the "weak robot" concept behind NICOBO, including noise-reduction technology for the voice recognition that is indispensable for communication, and information linkage with smartphone applications. Through the commercialization of NICOBO, Panasonic will accelerate its efforts to create the new value of "impression and comfort."

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 11 users

Tothemoon24

Top 20
FF recently touched on the elephant in the room when it comes to the ever-growing list of large language chat platforms: the need to reduce the ridiculous amounts of energy they require to perform.

This recently written paper bodes well, I think?
 

Attachments
  • [image, 1.1 MB]
Last edited:
  • Like
  • Love
  • Fire
Reactions: 21 users

charles2

Regular
Way more efficient battery tech on the horizon. Consider the implications for BrainChip and its customers.

 
  • Like
  • Love
  • Wow
Reactions: 10 users

Yak52

Regular
150,000 shares? You mean parcels of 40,000 or less in the US? Because on the ASX I saw solid accumulation of nearly 20 million shares... on the back of great news.

We need to get off the ASX ASAP. This is why we have extremely few US investors. Today we had 150,000 shares traded; the average is 40,000 or less. We will not get the US investor until we are listed on NASDAQ.

A USA Investor Glen by any chance?

Perhaps Australian investors need to ask their Australian brokerages about how to trade BRN if/when it is de-listed from the ASX and re-listed on the NASDAQ. This seems to be the model presented in the past as the future direction for BrainChip.

Firstly, there will be that IRS form W-8BEN on withholding tax to be completed...............if using a US account.

Then most Australian retail investors will discover that they cannot hold/trade on NASDAQ, or at least not easily. That will knock off 95% of retail ASX holders, plus the market hours are approximately midnight AU until 7.30 am ..............which will be hard for many to follow anyway.

So don't wish too hard for something without looking into it properly, as it just might be the end of your BRAINCHIP ride!

Yak52 :cool:
 

Yak52

Regular
No offence but, it doesn't sit well with me when people say "Don't worry. Just day traders doing blah blah blah. It's short-term blah blah blah".

Our SP jumping for one day then slowly getting eroded away seems to be a common thing. Wouldn't surprise me if yesterday's gain will be gone by end of next week. Then it will be back to square one. Waiting for the next 4C. Getting over-excited about partnerships.
The sight of a blue sky seems to be the only "short-term" thing that's happening right now.

Frustrated, but still holding.
Not advice.

Agree with the comments by DK161 about day traders and "don't worry".

It is always the day traders themselves making BS comments like that, along with the fabled "GAP FILL" quotes. Baloney; it is just the DTs and pip traders hoping to make it a reality by convincing enough retail holders to sell out.
Some stocks have high numbers of DTs and little retail; other stocks have low or even no DTs and all retail, depending on the company.
The "gap voodoo" BS is always evident on stocks with high numbers of DTs, who perpetuate this concept mostly through social media.
Insto traders push it mercilessly.

Yak52
 

wilzy123

Founding Member
 
  • Like
  • Fire
  • Love
Reactions: 54 users

charles2

Regular

Does it get better than this.....?????

And timely.

Gobsmacked! You can tell!
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

Glen

Regular
NVISO and Panasonic will have their first mass consumer product, the NICOBO robot, for sale in Japan in May.
 
  • Like
  • Fire
Reactions: 13 users

stockduck

Regular
Dio, here is the link to the paper 😶‍🌫️
https://arxiv.org/pdf/2302.13939
Hallelujah... what does this mean...

"...4.1 Datasets

We test two variants of the 45 million parameter model; one where T = 1024 and another where T = 3,072. We used the Enwik8 dataset to conduct both training and testing. The findings of this experiment are presented in Table 1. To explore the efficiency of our 125 million parameter scale, we trained our model using the BookCorpus [47] dataset, and text generated samples are provided in Fig. 3. Our most extensive model with 260 million parameters was trained using the OpenWebText2 [17] dataset. Text samples of this experiment are shown in Fig. 2. At present, we are conducting additional experiments on the larger models and will update this preprint once completed. All experiments were conducted on four NVIDIA V100 graphic cards. For the models of 45M, 120M and 260M, we trained them for 12, 24 and 48 hours respectively.
...."

Can someone help here? Nvidia V100 graphics cards aren't meant to have SNN IP in them, right? Sorry, I'm not a "professional" in this case.:unsure:
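Short answer: no. The V100 is an ordinary CUDA GPU with no spiking hardware; papers like this one simulate the spiking dynamics as regular arithmetic, which any CPU or GPU can do. A minimal sketch of the idea, a single leaky integrate-and-fire neuron stepped in plain C with made-up parameters:

```c
#include <stdio.h>

/* Leaky integrate-and-fire neuron simulated with ordinary arithmetic.
 * Parameters are illustrative only; GPU training runs like the paper's
 * do this same kind of update in tensor form at scale. */
int main(void) {
    const double beta = 0.9;       /* membrane leak per timestep */
    const double threshold = 1.0;  /* firing threshold */
    double v = 0.0;                /* membrane potential */
    const double input[8] = {0.3, 0.4, 0.5, 0.0, 0.6, 0.7, 0.1, 0.9};

    for (int t = 0; t < 8; t++) {
        v = beta * v + input[t];      /* leak, then integrate input */
        int spike = (v >= threshold); /* binary spike out */
        if (spike) v -= threshold;    /* soft reset after firing */
        printf("t=%d v=%.3f spike=%d\n", t, v, spike);
    }
    return 0;
}
```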
 
  • Like
  • Haha
  • Thinking
Reactions: 5 users

Tothemoon24

Top 20

Impressive list!


Come Find Edge Impulse at Embedded World 2023​

EMBEDDED DEVICES
Mike Senese
8 March 2023

Each year, all the big names in embedded computing gather at Embedded World in Nuremberg, Germany to show off their latest innovations and developments, to meet with partners and customers, and to learn about new advancements in their fields. This year, Embedded World is happening from March 14–16, and Edge Impulse is excited to once again be participating with a range of activities.


First held in 2003, Embedded World is known as possibly the largest show in the world for the embedded industry. The exhibition focuses on products and services related to embedded systems, including hardware, software, and tools for developing and testing. The conference portion of the event features presentations and workshops from industry experts on a variety of topics, such as security, connectivity, and real-time operating systems. There’s a lot there for everyone.
With our machine learning toolkit that is ideally optimized for embedded applications, Edge Impulse and Embedded World are a perfect match. Here are some of the different places you will be able to find us and what we’ll be getting up to in each spot.

Edge Impulse Booth
Hall 2, Booth 2-238
This year we will be hosting our own space in the TinyML area of Embedded World. Our booth will have a demo from BrainChip, showing off our FOMO visual object-detection algorithm running on the BrainChip Akida AKD1000, featuring their neuromorphic IP.
Also at the booth: Meet BrickML, the first product based on the Edge Impulse "Industrial Monitoring" reference design, focused on providing machine learning processing for industrial/machine monitoring applications. Built in collaboration with Reloc and Zalmotek, BrickML can be used to track numerous aspects of industrial machinery performance via its multitude of embedded sensors. We'll be showing it in a motor-monitoring demonstration. BrickML is fully integrated into the Edge Impulse platform, which makes everything from data logging from the device to ML inference model deployment onto the device a real snap. (Our Industrial Monitoring reference design includes hardware and software source code to rapidly design your own product, available for Edge Impulse enterprise customers.)

We’ll additionally be showing off devices from companies we work with, including Oura, the health-monitoring wearable that is discreetly embedded in a ring you wear on your finger, and NOWATCH, a wrist-based wearable that tracks your stress levels and mental well-being.

Texas Instruments
Hall 3A, Booth 3A-215
In the TI booth you'll find our Edge Impulse/Texas Instruments demo, showing TI's YOLOX-nano-lite model. The model was trained on a Kaggle dataset to detect weeds and crops: the dataset was loaded into Edge Impulse and the YOLOX model was trained via the "Bring Your Own Model" extensions to Edge Impulse Studio. The trained model was then deployed to run on the TI Deep Learning framework.

Advantech
Hall 3, Booth 3-339
Scailable will be demonstrating their Edge Impulse FOMO-driven object detection implementation at the Advantech booth. It uses the Advantech ICAM camera to distinguish small washers, screws, and other items on several different trays. They’ll be demonstrating different trays and different models for the demo, and showing how to train new models at the booth.

AVSystem
Demo at the Zephyr booth: Hall 4, Booth 4-170
AVSystem's Coiote is an LwM2M-based IoT device-management platform, providing support for constrained IoT devices at scale. It integrates with a tinyML-based vibration sensor and can detect and report anomalies in vibrations. The demo is based on the Nordic Thingy:91, which runs the Zephyr OS, and uses the Edge Impulse platform.

Arduino
Hall 2, Booth 2-238
Check out the “vineyard pest monitoring” vision demo, running on the Arduino Nicla Vision and MKR WAN 1310, built by Zalmotek and using Edge Impulse for machine learning.

Alif
Hall 4, Booth 4-544
Alif will also be hosting an Edge Impulse-powered demo at the show. It is viewable in their private conference room by appointment; contact kirtana@edgeimpulse.com to set up a meeting.

Synaptics panel, featuring Edge Impulse
Tuesday, 3/14 @ 3PM (local time)
Hall 1, Booth 500
Edge Impulse co-founder/CEO Zach Shelby will be a participant in the “Rapid Development of AI Applications on the Katana SoC” panel, brought to you by one of our partner companies, Synaptics, and moderated by Rich Nass from Embedded Computing Design.
Come find us!
In addition to these locations and scheduled events, we’ll have numerous staff members from Edge Impulse on site and ready to answer any questions you may have about our tools and use cases. Be sure to stop by to say hi.
(And if you can’t make it in person, you can always drop us a note: hello@edgeimpulse.com)
 
  • Like
  • Fire
  • Love
Reactions: 27 users

cosors

👀
Quoting stockduck's post above: "Hallelujah... what does this mean... Nvidia V100 graphics cards aren't meant to have SNN IP in them, right?"
You are absolutely right. It's cold as shit here and it's snowing, and I'm out with my phone and I was lazy. I deleted my post. Still interesting or not?
Sorry for that and thanks for reading. With frozen fingers and feet it looked too complicated for me.
 
Last edited:
  • Haha
  • Wow
  • Fire
Reactions: 6 users
D

Deleted member 118

Guest
Can someone please help me and get up early enough to watch the Cerence presentation? It's on at about 5.30 am Eastern Standard Time (Aus), unless I'm mistaken. Any takers? TIA. 🥰
Rise and shine

 
  • Haha
  • Love
Reactions: 11 users