BRN Discussion Ongoing

Don't talk like that, mate. The BrainChip secret-meeting mob will lynch you
1706638479149.gif
 
  • Haha
Reactions: 8 users

Townyj

Ermahgerd
Correct! 👍🏻
Turns out, though, Jacques Aschenbroich is actually no longer Valeo’s CEO or Chairman of the Board of Directors.

But we don’t mind him being Chairman of the Board of Directors of French multinational telecommunications and digital service provider Orange either, do we? 😉

Now which other company does this colour remind me of again? 🤔🤣
I reckon the two would make a great match, not only in the colour scheme of things…


View attachment 55569


View attachment 55570

Dang... Well they definitely need to update their Wiki page to show otherwise.

Updated my previous post slightly.
 
  • Like
Reactions: 1 users

marsch85

Regular

Very interesting interview with Dr Tobi Delbrück on neuromorphic engineering. Takes you through the history of the field and provides insight into the workings of academic research.

It mentions our own Tony Lewis being part of the neuromorphic community for decades, building robots with vision back in the ’90s.

Also confirms the importance of digital, being inspired by the brain but not exactly copying it, and making adoption as easy as possible.

I’m pre-coffee and on my phone so above is a bit crude :) Highly recommend having a listen / read yourself.

Edit: Looking into Tony’s background again, it is a very good reminder of the calibre of management we have been able to attract. Heavyweights in this industry who clearly see enough potential in BrainChip to come across from the Amazons, ARMs, HPs and Intels of this world… makes you wonder if we are on to something ;-)
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 56 users

Sirod69

bavarian girl ;-)
  • Like
  • Love
  • Fire
Reactions: 20 users

GStocks123

Regular
Microchip's Edge ML stream is 20h from now. Learn how we make Machine Learning (ML) easy and efficient for embedded designers. Join us for a livestream on Wednesday, January 31, 2024 at 9:00 a.m. PST: https://mchp.us/41QD9LZ. #MachineLearning #ML #EmbeddedDesigners #Engineering #MCUs #MPUs

 
  • Like
  • Fire
  • Love
Reactions: 10 users

Frangipani

Regular

Very interesting interview with Dr Tobi Delbrück on neuromorphic engineering. Takes you through the history of the field and provides insight into the workings of academic research.

It mentions our own Tony Lewis being part of the neuromorphic community for decades, building robots with vision back in the ’90s.

Also confirms the importance of digital, being inspired by the brain but not exactly copying it, and making adoption as easy as possible.

I’m pre-coffee and on my phone so above is a bit crude :) Highly recommend having a listen / read yourself.

Edit: Looking into Tony’s background again, it is a very good reminder of the calibre of management we have been able to attract. Heavyweights in this industry who clearly see enough potential in BrainChip to come across from the Amazons, ARMs, HPs and Intels of this world… makes you wonder if we are on to something ;-)

Absolutely agree - Tony Lewis was an excellent choice as CTO, as he is so well connected and respected in the neuromorphic community!

I recall Ralph Etienne-Cummings referring to him in another Brains & Machines podcast. The two of them go back a long way, by the way…

Just some examples:

47415705-0E3D-4C9B-B9D3-FAF21E41245B.jpeg


45A57CEC-78E5-4AF1-9C8C-A25CE596EE0A.jpeg
BED9211C-E874-4A51-B979-7B9A433A5835.jpeg


67F5E49C-62E0-40BA-86FC-039A4C39147C.jpeg

3E6C9E08-C4D8-4286-BC54-6F53E8669D20.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Frangipani

Regular
I recall Ralph Etienne-Cummings referring to him in another Brains & Machines podcast. The two of them go back a long way, by the way…

Found it: It was the discussion after the podcast with Yulia Sandamirskaya, who is heading the Research Centre "Cognitive Computing in Life Sciences" at Zurich University of Applied Sciences (ZHAW) and is a senior researcher in Intel's Neuromorphic Computing Lab and thus an expert on Loihi. Her background is in robotics, too.
She was also one of the neuromorphic researchers congratulating Tony Lewis on his appointment as Brainchip CTO, by the way:

5B65F3C8-43FC-4743-AE91-9DBEFDA53B12.jpeg




F165B7EF-F8A8-44E3-AA74-871A16503A71.jpeg
 

Attachments

  • 967B1C90-E6FE-475D-9DE8-70FE2D3630CA.jpeg
    967B1C90-E6FE-475D-9DE8-70FE2D3630CA.jpeg
    43.4 KB · Views: 39
  • Like
  • Fire
  • Love
Reactions: 17 users

Gies

Regular
  • Like
  • Fire
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
3rd eye is being liked by Rob Telson.
Soon out there


Thanks @Gies!

There's a video on the link below. At about the 1:30 mark it says that ThirdEye has about twenty years of experience developing AI/MR technology for the US Department of Defense (see logo on screenshot below), working with the US military, air force and marines; they are active with all the different branches of the government. Now they're shifting their focus to the commercial and consumer space as well.

They have 50 (or 15?) patents filed in optics, hardware and software. Would be interesting to check some of them out.

4:15 mins: Their glasses are entirely hands-free. They don't have to connect to a phone, laptop or processing pack.

Their goal is to be the most widely used smart glasses out there.

25:15 mins: By this time next year they want to be the first to have smart glasses with a built-in 4G or 5G modem, enabling them to be used out in the field where there is no Wi-Fi.

US DOD
Screenshot 2024-01-31 at 9.33.51 am.png



ThirdEye Customers/Partners
Screenshot 2024-01-31 at 10.26.25 am.png



Video
 
Last edited:
  • Like
  • Love
  • Wow
Reactions: 24 users

Sirod69

bavarian girl ;-)
Just to say briefly, I went up there with you in a Mercedes, of course in Germany back then, well, what about now? Hence the song.
AND I love this song

 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 8 users

Wickedwolf

Regular
This guy gets it: Byron Callaghan

In the boundless expanse of technological galaxies, there exists a singular constellation that outshines all - the Akida 2.0 system. It is not just a beacon of brilliance; it's a veritable black hole, drawing in all realms of possibility and spewing out pure innovation. In the theatre of Edge AI, where countless players jostle for the limelight, Akida 2.0 doesn't just steal the show; it is the show.
 
  • Like
  • Love
  • Fire
Reactions: 71 users
This guy gets it: Byron Callaghan

In the boundless expanse of technological galaxies, there exists a singular constellation that outshines all - the Akida 2.0 system. It is not just a beacon of brilliance; it's a veritable black hole, drawing in all realms of possibility and spewing out pure innovation. In the theatre of Edge AI, where countless players jostle for the limelight, Akida 2.0 doesn't just steal the show; it is the show.
1706660344806.gif
 
  • Like
  • Haha
  • Fire
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
NEWS

Microchip Prioritizes Customizable Logic in New 8-bit MCUs​

one day ago by Jake Hertz

Outfitted with a configurable logic block module, the new MCUs integrate customizable logic to reduce BOM and improve performance.​


As microcontrollers (MCUs) become more central to the operation of IoT devices, designers need low-power, high-performance MCUs that don't increase system complexity.
To answer this call, Microchip recently announced a new family of devices that integrates customizable logic directly into the MCU. What might this integration mean for the future of embedded systems?

PIC16F13145

Microchip claims that its new configurable logic block (CLB) module enables customizable hardware solutions and may even eliminate the need for external logic components.

New 8-bit MCUs Integrate Configurable Logic Block​

The new PIC16F13145 MCU family introduces a configurable logic block (CLB) peripheral. The CLB consists of 32 individual logic elements, each employing a look-up table (LUT)-based design. This feature enables designers to create hardware-based, custom combinational logic functions directly within the MCU, optimizing the speed and response time of embedded control systems. This integration eliminates the need for external logic components, thereby reducing bill of materials (BOM) costs and power consumption.

A diagram of a basic logic element in the PIC16F13145

A diagram of a basic logic element in the PIC16F13145.

Another important feature of the CLB is its independence from the central processing unit's (CPU) clock speed. This allows the CLB to make logical decisions while the CPU is in sleep mode, further reducing power consumption and software reliance.
The MCU family (datasheet linked) is available in various package sizes, including 8-, 14-, and 20-pin configurations, and offers up to 14 KB of program flash memory and up to 1 KB of RAM. This goes along with an integrated 10-bit ADC with computation (ADCC) capable of up to 100 ksps, an 8-bit DAC, and two fast comparators with a 50-ns response time. These features are complemented by a range of peripherals for timing control and serial communications, including SMBus compatibility.

Understanding Customizable Logic​

Customizable logic allows hardware-based logic functions to be implemented directly within the MCU. Traditionally, such functions required external components like programmable logic devices (PLDs) or additional microcontrollers. However, with customizable logic, these functions are integrated into the MCU itself, simplifying design, reducing system footprint, and minimizing system latency.
At the heart of customizable logic in MCUs like Microchip’s new family is the configurable logic block (CLB). A CLB generally consists of multiple logic elements, each of which can be individually programmed to perform various logic functions. These logic elements are commonly based on LUTs, which can be configured to implement complex combinational logic or simple logic gates like AND, OR, and XOR. By programming these LUTs, engineers can create custom logic circuits that operate independently of the MCU's CPU.
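A LUT is essentially a small truth-table memory addressed by its input lines. As a rough illustration of the idea only (not Microchip's actual CLB configuration format or tooling), a software sketch of an n-input LUT might look like this:

```python
def lut(config: int, *inputs: int) -> int:
    """Evaluate an n-input look-up table.

    `config` packs the 2**n output bits of the truth table;
    the input bits form an index that selects which bit to
    read, much as a hardware LUT's storage cells are
    addressed by its input lines.
    """
    index = 0
    for i, bit in enumerate(inputs):
        index |= (bit & 1) << i
    return (config >> index) & 1


# A 2-input LUT "programmed" as XOR: truth table outputs
# for inputs 00, 01, 10, 11 are 0, 1, 1, 0 -> config 0b0110.
XOR_CONFIG = 0b0110
print(lut(XOR_CONFIG, 1, 0))  # -> 1
```

Configuring the CLB amounts to choosing the equivalent of these `config` bits for each of its logic elements, which is what makes the block "software-defined hardware."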

Configurable logic blocks are software-defined hardware

Configurable logic blocks are software-defined hardware. (Click to enlarge.)

One key advantage of integrating customizable logic into MCUs is that it enhances real-time performance. Since these logic blocks operate independently of the CPU, they can make quick logical decisions, effectively reducing system latency. This is particularly advantageous in applications requiring rapid response times, such as motor control, industrial automation, or real-time data processing.

Another significant benefit is power efficiency. Customizable logic can often operate in low-power or sleep modes, making logical decisions without waking the CPU. This feature is invaluable in battery-powered or energy-sensitive applications where conserving power is crucial.


Emblazoning MCUs in Embedded Designs​

By embedding customizable logic into its family of MCUs, Microchip is offering designers new ways to get more performance and efficiency out of their embedded designs. Without the need for extra components, engineers can now create dedicated logic blocks to accelerate their product’s unique tasks, helping them balance the cost-performance tradeoff.

 
  • Like
  • Fire
  • Love
Reactions: 46 users

buena suerte :-)

BOB Bank of Brainchip
Morning Chippers :)

All eagerly awaiting 'The Big announcement/s'. Plenty of great articles being shared by our in-house researchers (much appreciated), but it would be nice to have BRN give us something solid! 🙏 📢📢📢 🙏

Well this is some good news to share 😍

1706664115780.png


1706664064273.png

Cheers
 
  • Like
  • Love
  • Fire
Reactions: 34 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
If you add the following known facts together in my opinion you get Microchip already working with Brainchip:

1. Brainchip partnered with SiFive with announced compatibility with the x280 Intelligence Series,

2. Brainchip partnered with NASA,

3. Brainchip partnered with GlobalFoundries, and

4. Brainchip taping out AKD1500 minus the ARM Cortex 4, plus

5. The following article:

SL-013023.jpg

January 30, 2023

NASA Recruits Microchip, SiFive, and RISC-V to Develop 12-Core Processor SoC for Autonomous Space Missions​


by Steven Leibson
NASA’s JPL (Jet Propulsion Lab) has selected Microchip to design and manufacture the multi-core High Performance Spaceflight Computer (HPSC) microprocessor SoC based on eight RISC-V X280 cores from SiFive with vector-processing instruction extensions organized into two clusters, with four additional RISC-V cores added for general-purpose computing. The project’s operational goal is to develop “flight computing technology that will provide at least 100 times the computational capacity compared to current spaceflight computers.” During a talk at the recent RISC-V Summit, Pete Fiacco, a member of the HPSC Leadership Team and JPL Consultant, explained the overall HPSC program goals.
Despite the name, the HPSC is not strictly a processor SoC for space. It’s designed to be a reliable computer for a variety of applications on the Earth – such as defense, commercial aviation, industrial robotics, and medical equipment – as well as being a good candidate for use in government and commercial spacecraft. Three characteristics that the HPSC needs beyond computing capability are fault tolerance, radiation tolerance, and overall platform security. The project will result in the development of the HPSC chip, boards, a software stack, and reference designs with initial availability in 2024 and space-qualified hardware available in 2025. Fiacco said that everything NASA JPL does in the future will be based on the HPSC.
NASA JPL set the goals for the HPSC based on its mission requirements to put autonomy into future spacecraft. Simply put, the tasks associated with autonomy are sensing, perceiving, deciding, and actuating. Sensing involves remote imaging using multi-spectral sensors and image processing. Perception instills meaning into the sensed data using additional image processing. Decision making includes mission planning that incorporates the vehicle’s current and future orientation. Actuation involves orbital and surface maneuvering and experiment activation and management.
Correlating these tasks with NASA’s overall objectives for its missions, Fiacco explained that the HPSC is designed to allow space-bound equipment to go, land, live, and explore extraterrestrial environments. Spacecraft also need to report back to earth, which is why Fiacco also included communications in all four major tasks. All of this will require a huge leap in computing power. Simulations suggest that the HPSC increases computing performance by 1000X compared to the processors currently flying in space, and Fiacco expects that number to improve with further optimization of the HPSC’s software stack.


It’s hard to describe how much of an upgrade the HPSC represents for NASA JPL’s computing platform without contrasting the new machine with computers currently operating off planet. For example, the essentially similar, nuclear-powered Curiosity and Perseverance rovers currently trundling around Mars with semi-autonomy are based on RAD750 microprocessors from BAE Systems. (See “Baby You Can Drive My Rover.”) The RAD750 employs the 32-bit PowerPC 750 architecture and is manufactured with a radiation-tolerant semiconductor process. This chip has a maximum clock rate of 200 MHz and represents the best of computer architecture circa 2001. Reportedly, more than 150 RAD750 processors have been launched into space. Remember, NASA likes to fly hardware that’s flown before. One of the latest space artifacts to carry a RAD750 into space is the James Webb Space Telescope (JWST), which is now imaging the universe in the infrared spectrum and is collecting massive amounts of new astronomical data while sitting in a Lagrange orbit one million miles from Earth. (That’s four times greater than the moon’s orbit.) The JWST’s RAD750 processor lopes along at 118 MHz.
Our other great space observatory, the solar-powered Hubble Space Telescope (HST), sports an even older processor. The HST payload computer is an 18-bit NASA Standard Spacecraft Computer-1 (NSSC-1) system built in the 1980s but designed even earlier. This payload computer controls and coordinates data streams from the HST’s various scientific instruments and monitors their condition. (See “Losing Hubble – Saving Hubble.”)
The original NSSC-1 computer was developed by the NASA Goddard Space Flight Center and Westinghouse Electric in the early 1970s. The design is so old that it’s not based on a microprocessor. The initial version of this computer incorporated 1700 DTL flat-pack ICs from Fairchild Semiconductor and used magnetic core memory. Long before the HST launched in 1990, the NSSC-1 processor design was “upgraded” to fit into some very early MSI TTL gate arrays, each incorporating approximately 130 gates of logic.
I’m not an expert in space-based computing, so I asked an expert for his opinion. The person I know who is most versed in space-based computing with microprocessors and FPGAs is my friend Adam Taylor, the founder and president of Adiuvo Engineering in the UK. I asked Taylor what he thought of the HPSC and he wrote:
“The HPSC is actually quite exciting for me. We do a lot in space and computation is a challenge. Many of the current computing platforms are based on older architectures like the SPARC (LEON series) or Power PC (RAD750 / RAD5545). Not only do these [processors] have less computing power, they also have ecosystems which are limited. Limited ecosystems mean longer development times (less reuse, more “fighting” with the tools as they are generally less polished) and they also limit attraction of new talent, people who want to work with modern frameworks, processors, and tools. This also limits the pool of experienced talent (which is an increasing issue like it is in many industries).
“The creation of a high-performance multicore processor based around RISC-V will open up a wide ecosystem of tools and frameworks while also providing attraction to new talent and widening the pool of experienced talent. The processors themselves look very interesting as they are designed with high performance in mind, so they have SIMD / Vector processing and AI (urgh such an overstated buzz word). It also appears they have considered power management well, which is critical for different applications, especially in space.
“It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA. Ultimately HPSC gives engineers another tool to choose from, and it is designed to prevent the all-too-common, start-from-scratch approach, which engineers love. Sadly, that approach always increases costs and technical risk on these projects, and we have enough of that already.”
One final note: During my research for this article, I discovered that NASA’s HPSC has not always been based on the RISC-V architecture. A presentation made at the Radiation Hardened Electronics Technology (RHET) Conference in 2018 by Wesley Powell, Assistant Chief for Technology at NASA Goddard Space Flight Center’s Electrical Engineering Division, includes a block diagram of the HPSC, which shows an earlier conceptual design based on eight Arm Cortex-A53 microprocessor cores with NEON SIMD vector engines and floating-point units. Powell continues to be the Principal Technologist on the HPSC program. At some point in the HPSC’s evolution over the past four years, at least by late 2020 when NASA published a Small Business Innovation Research (SBIR) project Phase I solicitation for the HPSC, the Arm processor cores had been replaced by a requirement for RISC-V processor cores. That change was formally cast in stone last September with the announcement of the project awards to Microchip and SiFive. A sign of the times, perhaps?

My opinion only DYOR
FF

AKIDA BALLISTA
Hi Facty,

I just noticed that the author of the article you posted previously on NASA's HPSC admits he's not an expert in space-based computing, so he asked an expert for his opinion — and that expert was Adam Taylor, the founder and president of Adiuvo Engineering in the UK. That happens to be the very same Adam Taylor who wrote the blog below on our website. Talk about a coincidence!😝

Adam Taylor states in the article “It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA."

So, straight from the horse's mouth, so to speak: Adiuvo has recently designed in several radiation-tolerant and radiation-hardened Microchip microcontrollers.

Very interesting.gif



Screenshot 2024-01-31 at 12.31.22 pm.png




Adam Taylor's opinion of BrainChip's Akida in a nutshell.🥳

Extract 1



Screenshot 2024-01-31 at 1.12.06 pm.png

Extract 2

Screenshot 2024-01-31 at 12.53.44 pm.png




 

Attachments

  • Screenshot 2024-01-31 at 12.52.34 pm.png
    Screenshot 2024-01-31 at 12.52.34 pm.png
    168.1 KB · Views: 56
Last edited:
  • Like
  • Fire
  • Love
Reactions: 69 users

Sirod69

bavarian girl ;-)
We here in Germany didn't go below 0.00 for BRN; we were with you at around 0.38. Well, what can I say, we were at 1.70 euros and some people like me didn't sell. And what can I say? I think we all want our price to rise again, right?

Sleep Sleeping GIF by yvngswag
 
  • Like
  • Love
  • Thinking
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I do not recall this being posted before, and the date of release suggests not, so guess which space program is using a COTS anomaly-detection SNN on space missions:

Small Business Innovation Research/Small Business Tech Transfer
Neuromorphic Spacecraft Fault Monitor, Phase II
Completed Technology Project (2020 - 2022)
Project Introduction
The goal of this work is to develop a low power machine learning anomaly detector. The low power comes from the type of machine learning (Spiking Neural Network (SNN)) and the hardware the neuromorphic anomaly detector runs on. The ability to detect and react to anomalies in sensor readings on board resource constrained spacecraft is essential, now more than ever, as enormous satellite constellations are launched and humans push out again beyond low Earth orbit to the Moon and beyond. Spacecraft are autonomous systems operating in dynamic environments. When monitored parameters exceed limits or watchdog timers are not reset, spacecraft can automatically enter a 'safe' mode where primary functionality is reduced or stopped completely. During safe mode the primary mission is put on hold while teams on the ground examine dozens to hundreds of parameters and compare them to archived historical data and the spacecraft design to determine the root cause and what corrective action to take. This is a difficult and time-consuming task for humans, but can be accomplished faster, in real-time, by machine learning. As humans travel away from Earth, light travel time delays increase, lengthening the time it takes for ground crews to respond to a safe mode event. The few astronauts onboard will have a hard time replacing the brain power and experience of a team of experts on the ground. Therefore, a new approach is needed that augments existing capabilities to help the astronauts in key decision moments. We provide a new machine learning approach that recognizes nominal and faulty behavior, by learning during integration, test, and on-orbit checkout. This knowledge is stored and used for anomaly detection in a low power neuromorphic chip and continuously updated through regular operations. Anomalies are detected and context is provided in real-time, enabling both astronauts onboard, and ground crews on Earth, to take action and avoid potential faults or safe mode events.
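To make the learn-then-monitor workflow described above concrete, here is a deliberately simplified, hypothetical sketch — plain per-channel range learning in Python, not the SNN-based NSFM algorithm itself — of learning nominal telemetry bounds during checkout and flagging deviations during operations:

```python
class RangeAnomalyDetector:
    """Toy anomaly detector: learns nominal per-channel bounds
    during a 'checkout' phase, then flags readings that fall
    outside those bounds (plus a small margin) during operations.

    Illustrative stand-in only: the NSFM project uses a spiking
    neural network on neuromorphic hardware, not this approach.
    """

    def __init__(self, margin: float = 0.1):
        self.margin = margin
        self.bounds = {}  # channel name -> (lo, hi)

    def train(self, channel: str, value: float) -> None:
        # Widen the nominal envelope to cover the observed value.
        lo, hi = self.bounds.get(channel, (value, value))
        self.bounds[channel] = (min(lo, value), max(hi, value))

    def is_anomalous(self, channel: str, value: float) -> bool:
        if channel not in self.bounds:
            return True  # a never-seen channel is suspicious
        lo, hi = self.bounds[channel]
        span = (hi - lo) or 1.0  # avoid a zero-width envelope
        return value < lo - self.margin * span or value > hi + self.margin * span


det = RangeAnomalyDetector()
for v in [3.2, 3.3, 3.4]:          # nominal readings seen during checkout
    det.train("bus_voltage", v)
print(det.is_anomalous("bus_voltage", 5.0))  # -> True
```

The appeal of the real SNN approach over a sketch like this is that it can capture temporal patterns across many correlated channels, not just static per-channel limits, while staying within a tight power budget.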
Anticipated Benefits
The software developed in Phase II can potentially be used by NASA for anomaly detection onboard the ISS, the planned Lunar Gateway, and future missions to Mars. The NSFM software can also be used by ground crews to augment their ability to monitor spacecraft and astronaut health telemetry once it reaches the ground. The NSFM software can furthermore be used during integration and test to better inform test operators of the functionality of the system during tests in real time.
The software developed in Phase II can potentially be used for anomaly detection onboard any of the new large constellations planned by private companies. It can also be applied to crewed space missions, deep space probes, UUVs, UAVs, and many industrial applications on Earth. The NSFM software developed in Phase II can also be used during Integration and Test of any commercial satellite.


My opinion only DYOR
FF

AKIDA BALLISTA
Hi Facty,

Me again. Re the Lunar Gateway mission (as per another of your previous posts above): in this video from a year ago, Adam Taylor said he was involved in the Lunar Gateway, the space station that will orbit the moon!

In June 2022, unless I'm mistaken, I believe Rob Telson said something along the lines that we'd helped NASA get into orbit. Putting these pieces together leads me to believe that Adam Taylor could have come to know about us through shared work with NASA.


Screenshot 2024-01-31 at 1.36.17 pm.png


He also goes on to say that he is designing an FPGA circuit card for NASA as well. So, given Adam thinks Akida is "stunning" and "very impressive", this would seem to bode extremely well for our inclusion in said FPGA IMO.

Screenshot 2024-01-31 at 2.24.59 pm.png



 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 26 users

HopalongPetrovski

I'm Spartacus!
Well Ok I guess.
Just make sure the Third Eye doesn't enter the Lunar Gateway!
Not that there's anything wrong with that! 🤣
 
  • Haha
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Well Ok I guess.
Just make sure the Third Eye doesn't enter the Lunar Gateway!
Not that there's anything wrong with that! 🤣
That's for sure! No doubt Doodle Labs would want to be part of that action too.😝
 
  • Haha
Reactions: 7 users
Top Bottom