BRN Discussion Ongoing

cosors

👀
Maybe of general interest:

I have just heard an interesting R&D report on the radio about AI-improved noise cancelling. It was noted that other big companies are certainly still developing this exciting technology too.
The idea is to let selected sounds through the cancelling. Until now, there has been no system that allows the user to individually set, define or train which sounds the AI should filter out. The headphones have to learn what the person wants to hear, without the cloud, obviously. To do this, the team uses the time differences between the noise source and the left and right earpieces. This team solves it as follows: if the person wearing noise-cancelling headphones points their face in the direction of what they want to hear despite the suppression, the AI learns within around three seconds that the source is being targeted, because it recognises the runtime differences between left and right, and it lets those sounds through.
So far this works with an app on the smartphone. He also said that the team is working on earbuds (small in-ear headphones), which they want to introduce in about 6 to 8 months.
Up to now the processing is done on the phone, he said, but I can very well imagine the neural network being placed directly in the headphones, reducing latency even further.
I'm on the road and my research options on the phone are limited, but it's about Shyam Gollakota's team at the University of Washington.
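
For illustration only, here is a minimal sketch (my own, not the team's code) of the left/right runtime-difference idea: cross-correlating the two ear signals yields the interaural time delay, and a delay near zero suggests the wearer is facing the source. Sample rate, signal shapes and the toy click are all placeholders.

```python
import numpy as np

def interaural_delay(left: np.ndarray, right: np.ndarray, fs: int) -> float:
    """Estimate the interaural time difference (seconds) between the
    left and right microphone signals via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # lag in samples
    return lag / fs

# Toy example: the same click arriving 5 samples later at the right ear,
# i.e. the source is slightly to the wearer's left.
fs = 16000
click = np.zeros(1024)
click[100] = 1.0
left = click
right = np.roll(click, 5)

itd = interaural_delay(left, right, fs)
print(f"ITD = {itd * 1e6:.0f} us")  # about -312 us; near 0 would mean "facing the source"
```

In practice the real system presumably does something more robust (e.g. generalised cross-correlation with filtering, accumulated over the ~3 seconds of "targeting"), but the principle is the same.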

KEYWORDS
Augmented hearing, auditory perception, spatial computing

______
Older status:
______
We build an end-to-end hardware system that integrates a noise-canceling headset (Sony WH-1000XM4) and a pair of binaural microphones (Sonic Presence SP15C) with our real-time target speech hearing network running on an embedded IoT CPU (Orange Pi 5B).

We deploy our neural network on the embedded device by converting the PyTorch model into an ONNX model using a nightly PyTorch version (2.1.0.dev20230713+cu118), and we use the Python package onnxsim to simplify the resulting ONNX model.
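
Not the paper's actual code, but a minimal sketch of what such a PyTorch-to-ONNX-plus-onnxsim pipeline typically looks like; the tiny stand-in network, input shape and file names below are placeholders:

```python
import torch
import torch.nn as nn
import onnx
from onnxsim import simplify

# Tiny stand-in network (the real target speech hearing model is far
# larger; this placeholder just lets the pipeline run end to end).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(2, 2, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()

# Example binaural input: (batch, channels=2, samples) -- shape assumed.
dummy = torch.randn(1, 2, 16000)

# Export the PyTorch model to ONNX.
torch.onnx.export(model, dummy, "tsh.onnx", opset_version=17)

# Simplify the exported graph with onnxsim, as the paper describes.
m = onnx.load("tsh.onnx")
m_simplified, ok = simplify(m)
assert ok, "onnxsim failed to validate the simplified model"
onnx.save(m_simplified, "tsh_simplified.onnx")
```

On the embedded board, the simplified model would then presumably be executed with an ONNX runtime rather than PyTorch itself.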
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 17 users

Frangipani

Regular
AC6AEA3B-12B3-4404-A0A0-7CE668AC6D65.jpeg


1B5757A8-70E1-4E2E-8453-8B815973CB51.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Frangipani

Regular
A new Brains & Machines podcast (and transcript) is out. The latest episode’s guest was Dylan Muir from SynSense:


I merely skimmed the transcript - encouragingly, BrainChip gets mentioned a few times during the post-interview discussion, and once again, Ralph Etienne-Cummings makes a reference to our CTO's earlier robotics research and to the company he founded in 1999, Iguana Robotics.

Here is a transcript excerpt that deals with SynSense’s approach to commercialisation - the original interview was recorded in mid-2023, followed by a brief interview update in March 2024, after SynSense had acquired iniVation.

4D2E2941-3D36-4A92-8107-ABE33F7094D2.jpeg

9BFD7DA9-79C6-4F68-A232-2F0639B0B1B6.jpeg




During the post-interview discussion, not everyone agreed with Dylan Muir’s reasoning that on-chip learning wouldn’t be necessary any time soon:

03C1D93A-2CF4-4194-B0B1-234A5F271966.jpeg



Which raises the question: Are the folks at SynSense really convinced that on-chip learning isn't a big deal, or is it possibly a case of sour grapes? 🤔

Surprisingly, it is never mentioned that SynSense became a de facto Chinese company in 2020 (by moving its headquarters from Switzerland to China), or what kind of problems this brings with regard to commercialisation.
 
  • Like
  • Love
  • Thinking
Reactions: 12 users
Do we know why Rob left BRN at all, or is this still unknown?
 
  • Like
Reactions: 1 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 12 users

7für7

Regular
Do we know why Rob left BRN at all, or is this still unknown?
I think he wanted to install Japanese toilets with Akida inside in all their offices, but Sean said no! Very dramatic situation.
 
  • Haha
  • Thinking
Reactions: 6 users

JDelekto

Regular
  • Haha
  • Like
Reactions: 19 users

manny100

Regular
The only worry I have is about proxy votes, e.g. the shares in my super are voted by AustralianSuper; I don't believe they will be idiots about it, but they can vote against my wishes for my holdings.
DYOR
BRN has canvassed the funds.
They will vote against a spill. They will likely vote YES on remuneration, as a strike could lead to an SP drop, which would mean a fall in the value of the funds under their control.
Also, those controlling the voting have well-paid jobs themselves and probably have some empathy for their peers at BRN.
IMO there is no chance of a spill.
 
  • Like
  • Love
  • Thinking
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Thinking
  • Love
Reactions: 11 users

Kachoo

Regular
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
As we all know, we've been waiting a loooong time for the launch of Renesas' new MCU with AKIDA, built on a 22nm CMOS process with an integrated software-defined radio and Bluetooth 5.3 Low Energy.

Now, I don't wish to FREAK anyone out, but I just noticed this Renesas DA14535 Ultra Low Power Bluetooth 5.3 SoC.

Maybe @Diogenese could take a peek for us to see whether this is anything to get excited about?

Could AKIDA be the "SWD interface" which I've circled below?

Here's a link to the data sheet which was uploaded on 1 April 2024.








Screenshot 2024-05-18 at 11.40.48 am.png





Screenshot 2024-05-18 at 12.00.41 pm.png
 
Last edited:
  • Like
  • Love
  • Haha
Reactions: 15 users
We do not, but he seems to like the recent BRN LinkedIn post!
Yes, it's a good sign. I thought Rob was a good spokesperson for BRN.
 
  • Like
Reactions: 2 users

MrNick

Regular
Perhaps Celus are using Akida and needed someone with intimate knowledge of what’s possible. Pure speculation. Partnerships have always been the success mantra. Just ask Melania Trump.
We do not, but he seems to like the recent BRN LinkedIn post!
 
  • Haha
Reactions: 1 users

Kachoo

Regular
Perhaps Celus are using Akida and needed someone with intimate knowledge of what’s possible. Pure speculation. Partnerships have always been the success mantra. Just ask Melania Trump.
It is a possibility that those leaving will go into products that involve the IP. Basically, BRN is an IP company; yes, they could build a chip with the IP and TENNs, but the reality is that it's a processor for a component, so all of these paths are possible. If they believed it to be superior to the others, they would build it out into a product.
 
  • Like
Reactions: 3 users

Diogenese

Top 20
View attachment 63131
Hi Ill,

Nodar runs on Nvidia Jetson Orin:

https://www.agritechtomorrow.com/ne...al-automation-powered-by-nvidia-jetson/15184/

NODAR Announces Advanced Stereo Vision Technology for Next-Generation Agricultural Automation, Powered by NVIDIA Jetson

Visit http://www.nodarsensor.com for further information
NODAR's AgriView Revolutionizes the Agriculture Market with State-of-the-Art 3D Vision for Autonomous Farming, Powered by NVIDIA Jetson Orin System-on-Modules
01/09/24, 06:00 AM | Precision Farming
In a significant development for agricultural technology, NODAR announces its next-generation solutions for the farming industry, powered by the NVIDIA Jetson platform for edge AI and robotics.
As we all know, we've been waiting a loooong time for the launch of Renesas' new MCU with AKIDA [...] Could AKIDA be the "SWD interface" which I've circled below? [...]

That's not really an auspicious date to publish anything. Also, you should get a refund for the $49.95 option. Mine has the complimentary aluminium foil insert.

SwD = Software Defined?

Cortex M0+ is all thumbs.


1716003255784.png


6 Arm Cortex-M0+
6.1 Introduction

The Arm Cortex-M0+ processor is a 32-bit Reduced Instruction Set Computing (RISC) processor with a von Neumann architecture (single bus interface).
It uses an instruction set called Thumb, which was first supported in the ARM7TDMI processor, but it also uses several newer instructions from the Armv6 architecture and a few instructions from the Thumb-2 technology.
Thumb-2 technology extends the previous Thumb instruction set to allow all operations to be carried out in one CPU state. The instruction set in Thumb-2 includes both 16-bit and 32-bit instructions; most instructions generated by the C compiler use the 16-bit instructions, and the 32-bit instructions are used when the 16-bit version cannot carry out the required operations. This results in high code density and avoids the overhead of switching between two instruction sets.

In total, the Cortex-M0+ processor supports only 56 base instructions, although some instructions can have more than one form. Although the instruction set is small, the Cortex-M0+ processor is highly capable because the Thumb instruction set is highly optimized.

Academically, the Cortex-M0+ processor is classified as a load-store architecture, as it has separate instructions for reading and writing to memory, and instructions for arithmetic or logical operations that use registers. It has a two-stage pipeline (fetch+predecode and decode+execute), as opposed to its predecessor (Cortex-M0), which has a three-stage pipeline (fetch, decode, and execute). Figure 20 shows a simplified block diagram of the Cortex-M0+.


1716003553878.png
 
  • Like
  • Fire
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi Ill,

Nodar runs on Nvidia Jetson Orin [...] That's not really an auspicious date to publish anything. Also, you should get a refund for the $49.95 option. Mine has the complimentary aluminium foil insert. [...]


dsppntmnt.gif



PS: Unfortunately I can't ask for a refund because the "not going to hell" was an optional extra that I wasn't prepared to pay for. 😝
 
  • Haha
  • Like
Reactions: 10 users

Learning

Learning to the Top 🕵‍♂️
Just voted!

All 8: For
Last one: 110% Against.

Just my 2 cents.

Have a great weekend everyone 😀

Learning 🪴
 
  • Like
  • Love
  • Fire
Reactions: 23 users