BRN Discussion Ongoing

Had a crook day today and been passed out most of the day.

Didn't miss much by the looks of it haha

Anyway, just saw this CES update from about 5 hrs ago and a piece on Prophesee.


CES 2023: how Auralpine startups plan to do well​

Olivia Carter, January 4, 2023

Having already industrialized the first four generations of its neuromorphic, bio-inspired sensor and raised 50 million in a Series C at the end of 2022, another Isère company, Prophesee, has chosen to rent not a stand but a suite in the heart of one of the most prestigious hotels at the show, the Venetian, in order to meet a hundred potential prospects… and to present them with three technologies, each targeting a key market: a new sensor prototype co-developed with the Sony group and intended to improve image quality in mobile telephony; a second sensor intended for immersive experiences for augmented-reality players; and a sensor for detecting presence within a room for the IoT sector, co-developed with the American company BrainChip.

"This will be the first time that we show these demonstrators publicly, and some of them will also be subject to confidentiality clauses," Luca Verre, CEO and co-founder of Prophesee, tells La Tribune. The company today has 110 employees and three locations (Grenoble, Paris and Shanghai).
I was just about to turn in, as I'd had a hectic day, and almost didn't have a final look. Am I glad that I did!

Great find generously shared FMF.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Getupthere

Regular
 
A thought bubble which I think is supported by logic and known facts.

The Thought:

Brainchip AKIDA technology has already cracked automotive.

The facts.

Socionext is presenting AKIDA technology for automotive use.

Renesas is presenting AKIDA technology for automotive use - the world's 3rd largest supplier of MCUs to automotive.

VVDN is presenting AKIDA technology for automotive use.

ARM presents AKIDA technology for just about everything including automotive.

Nviso presents AKIDA technology for automotive use.

Brainchip AKIDA technology is trusted by Mercedes Benz.

Brainchip AKIDA technology is trusted by Valeo and the original EAP was to explore use cases in ADAS and AV.

FORD continues to be an ASX-announced customer for automotive.

EDGE IMPULSE supports AKIDA FOMO, which has an in-cabin automotive use case for driver fatigue and attention monitoring.

Brainchip has consistently stated in presentations for years now that they are working with automotive OEMs and vehicle manufacturers.

NASA & DARPA approved firms are working with AKIDA technology for radar guidance, cognitive communications and autonomous vehicle navigation all of which are extreme technology use cases that could scale well into automotive on Earth.

Prophesee event based sensors are enhanced by AKIDA technology and use cases for these sensors are most certainly shown by Prophesee as being in automotive ADAS.

The issue for every electric vehicle, now and in the future, will always be battery life and range. Range increases the more of the battery's capacity can be reserved for the driving wheels. AKIDA outcompetes GPUs on power and price by large factors before you even get to its unique one-shot learning and real-time performance, so it presents a compelling argument for adoption in ADAS and automotive sensors - capabilities Edge Impulse has described as science fiction.
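To put rough numbers on the power argument, here's a back-of-envelope sketch in Python. Every figure in it is an illustrative assumption, not a vendor spec; it just shows the shape of the calculation:

```python
# Back-of-envelope: how much EV range does low-power inference buy?
# All numbers below are illustrative assumptions, not vendor figures.

GPU_POWER_W = 30.0            # assumed continuous draw of a GPU-class ADAS accelerator
NEURO_POWER_W = 1.0           # assumed draw of a neuromorphic accelerator on the same task
DRIVE_HOURS = 5.0             # assumed trip length
EFFICIENCY_WH_PER_KM = 160.0  # assumed vehicle consumption

saved_wh = (GPU_POWER_W - NEURO_POWER_W) * DRIVE_HOURS
extra_km = saved_wh / EFFICIENCY_WH_PER_KM

print(f"Energy saved: {saved_wh:.0f} Wh -> ~{extra_km:.2f} km of extra range")
# ~145 Wh saved -> roughly 0.9 km per trip for one accelerator: small on its
# own, but it compounds across every sensor and every trip, and it also
# shrinks the cooling and wiring overhead that a hot GPU drags along.
```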

If you do not think the above is sufficient to justify the ‘Thought’ then please present the opposing facts for consideration.

My opinion only DYOR
FF

AKIDA BALLISTA
Let me get this straight FF. Are you saying Akida may be in cars? Nooo Waaay. That's outrageous! We could end up taking over the world at this rate.

SC
 

HME909

Emerged
Rough day
 

GeorgHbg

Emerged
As a European from Germany, I am currently wondering whether there is a regulation in Australia similar to the European one

(https://registers.esma.europa.eu/publication/searchRegister?core=esma_registers_mifid_shsexs)

and here for Germany


I have the feeling that in Europe, short selling of shares is not the rule.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Guess we'll have to wait until then...then...


 

Deleted member 118

Guest
Had a crook day today and been passed out most of the day. […]
Hope you're feeling better soon
 
Please tag or message me if you answered the quiz question or took the other option to win a month of subscription - blah, whatever rules applied. Will sort it later.
Dad's Live Life alarm went off early this morning, so I'm pretty stressed with the current situation.
I hope everything is ok with your Dad @Rise from the ashes
 

Deadpool

Did someone say KFC

The BMW i Vision Dee is a future EV sport sedan that can talk back to you​

View attachment 26280
View attachment 26281

AFEELA movement deep in my heart.

____
https://www.theverge.com/2023/1/4/23538742/bmw-i-vision-dee-concept-ces-av-ar-voice-assistant-e-ink
https://www.theverge.com/2023/1/4/23539863/sony-honda-electric-vehicle-afeela-ces-reveal-photos
_____
Why are they producing graphics like this ☝️ for CES 2023 and why a name like AFEELA?! I can't follow that. By the way, DEE stands for Digital Emotional Experience.
That's one ugly-looking vehicle 🤢 - even the female avatar in the window looks like she wants to get out of it. 🆘😂
 

Proga

Regular
Yeah. I wonder if BrainChip plan to do the big 2K reveal @ CES to gain maximum exposure.
Looks like Edge Impulse and BRN have that very plan in mind.
 

Proga

Regular
Let me get this straight FF. Are you saying Akida may be in cars? Nooo Waaay. That's outrageous! We could end up taking over the world at this rate.

SC
Not yet, but it soon will be. Everything points to 2025 models, which begin production in the second half of 2024. All models are planned years in advance.
 

Gman

Member
I think Xilinx AI (which they recently acquired) is said to be behind this.
They seem to have been working together in some capacity since 2017…


 

Diogenese

Top 20
"We’re proving that on-chip AI, close to the sensor, has a sensational future, for our customers’ products, as well as the planet."
I have to ask something about Qualcomm again. What exactly do we know about this "AI accelerator chip"? Can someone say something about that?

The Snapdragon Ride Flex was first mentioned during Qualcomm’s Automotive investor day in September 2022, but more details are available now. The original Ride platform was based around a two-chip solution with an ADAS SoC and an AI accelerator chip.

Hi Sirod69,

This Qualcomm patent application relates to a large split NN over 2 or more SoCs because the weights are too large for the on SoC memory of a single NN SoC.

US2020250545A1 SPLIT NETWORK ACCELERATION ARCHITECTURE

Priority: 20190206

(Patent drawing from US2020250545A1)


[0022] As noted, an artificial intelligence accelerator may be used to train a neural network. Training of a neural network generally involves determining one or more weights associated with the neural network. For example, the weights associated with a neural network are determined by hardware acceleration using a deep learning accelerator. Once the weights associated with a neural network are determined, an inference may be performed using the trained neural network, which computes results (e.g., activations) by processing input data based on the weights associated with the trained neural network.

[0023] In practice, however, a deep learning accelerator has a fixed amount of memory (e.g., static random access memory (SRAM) with a capacity of 128 megabytes (MB)). As a result, the capacity of a deep learning accelerator is sometimes not large enough to accommodate and store a single network. For example, some networks have weights of a larger size than the fixed amount of memory available from the deep learning accelerator. One solution to accommodate large networks is to split the weights into a separate storage device (e.g., a dynamic random access memory (DRAM)). These weights are then read from the DRAM during each inference. This implementation, however, uses more power and can result in a memory bottleneck.

[0024] Another solution to accommodate large networks is splitting the network into multiple pieces and passing intermediate results from one accelerator to another through a host. Unfortunately, passing intermediate inference request results through the host consumes host bandwidth. For example, using a host interface (e.g., a peripheral component interconnect express (PCIe) interface) to pass intermediate inference request results consumes the host memory bandwidth. In addition, passing intermediate inference request results through the host (e.g., a host processor) consumes central processing unit cycles of the host processor and adds latency to an overall inference calculation.

[0025] One aspect of the present disclosure splits a large neural network into multiple, separate artificial intelligence (AI) inference accelerators (AIIAs). Each of the separate AI inference accelerators may be implemented in a separate system-on-chip (SoC). For example, each AI inference accelerator is allocated and stores a fraction of the weights or other parameters of the neural network. Intermediate inference request results are passed from one AI inference accelerator to another AI inference accelerator independent of a host processor. Thus, the host processor is not involved with the transfer of the intermediate inference request results.

The system passes partial results from one partial NN SoC to another NN SoC.
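To make the split-network idea concrete, here is a minimal Python/NumPy sketch of what the patent describes: each "SoC" holds only its own slice of the layers, and the intermediate activations pass device-to-device rather than through a host. The shapes, weights, and the class itself are made up purely for illustration:

```python
import numpy as np

# Minimal illustration of the split-inference idea in US2020250545A1:
# each "SoC" stores only its share of the weights, and intermediate
# activations are handed directly to the next SoC, not through a host.

rng = np.random.default_rng(0)

class AcceleratorSoC:
    """One inference accelerator holding a contiguous slice of layers."""
    def __init__(self, layer_dims):
        # (in, out) pairs for this SoC's slice of the network
        self.weights = [rng.standard_normal((m, n)) * 0.1
                        for m, n in zip(layer_dims, layer_dims[1:])]

    def forward(self, x):
        for w in self.weights:
            x = np.maximum(x @ w, 0.0)   # ReLU layer
        return x                          # intermediate (or final) activations

# A 4-layer network split across two SoCs: 64->128->128 on the first,
# 128->64->10 on the second. Neither SoC stores the other's weights.
soc_a = AcceleratorSoC([64, 128, 128])
soc_b = AcceleratorSoC([128, 64, 10])

x = rng.standard_normal(64)
activations = soc_a.forward(x)   # would travel over a chip-to-chip link
logits = soc_b.forward(activations)
print(logits.shape)              # (10,)
```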

Now, I don't know how this differs from having 2 or more Akida 1000s connected up.

But, if Qualcomm think they've invented it, that suggests that 2 years ago, they were not planning to use Akida.

Our patent has a priority of 20181101 which pre-dates Qualcomm's priority by 3 months.
 

Edge AI Chip Company Syntiant Unveils NDP115 Neural Decision Processor at CES 2023​

https://www.semiconductor-digest.co...ndp115-neural-decision-processor-at-ces-2023/
"The Syntiant NDP115 is now shipping in production volumes. Pricing for 10Ku quantities is $3.25 per unit"

That's damn expensive actually, considering...
Then you have the cost of fitting it to whatever product it's going into, and no on-chip learning?

What's their cost to manufacture the chips?..

I wonder why they can't offer it as IP?
Maybe because of the analog/digital architecture 🤔..

BrainChip's IP royalties from customers could easily be a third of that cost to them (in volume), at next to no cost to us..
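As a back-of-envelope check on that claim (only the $3.25 figure comes from the article; the royalty fraction and the volume are pure assumptions):

```python
# Rough comparison of a merchant chip vs. licensed IP, per unit.
# Only the $3.25 Syntiant price comes from the article; everything
# else is an assumed, illustrative number.

syntiant_unit_price = 3.25                   # USD at 10K-unit quantities (from the article)
assumed_royalty = syntiant_unit_price / 3    # the "a third of the cost" guess

units = 1_000_000
print(f"Customer spend on chips:   ${syntiant_unit_price * units:,.0f}")
print(f"Customer spend on royalty: ${assumed_royalty * units:,.0f}")
# The IP licensor also avoids fab, packaging and inventory costs, which is
# the "next to no cost to us" part of the argument.
```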

They'd better hope they have good OEM marketing..
 

Sirod69

bavarian girl ;-)
"We’re proving that on-chip AI, close to the sensor, has a sensational future, for our customers’ products, as well as the planet."

Hi Sirod69,

This Qualcomm patent application relates to a large split NN over 2 or more SoCs because the weights are too large for the on SoC memory of a single NN SoC.

US2020250545A1 SPLIT NETWORK ACCELERATION ARCHITECTURE

Priority: 20190206

View attachment 26284

[0022] As noted, an artificial intelligence accelerator may be used to train a neural network. Training of a neural network generally involves determining one or more weights associated with the neural network. For example, the weights associated with a neural network are determined by hardware acceleration using a deep learning accelerator. Once the weights associated with a neural network are determined, an inference may be performed using the trained neural network, which computes results (e.g., activations) by processing input data based on the weights associated with the trained neural network.

[0023] In practice, however, a deep learning accelerator has a fixed amount of memory (e.g., static random access memory (SRAM) with a capacity of 128 megabytes (MB)). As a result, the capacity of a deep learning accelerator is sometimes not large enough to accommodate and store a single network. For example, some networks have weights of a larger size than the fixed amount of memory available from the deep learning accelerator. One solution to accommodate large networks is to split the weights into a separate storage device (e.g., a dynamic random access memory (DRAM)). These weights are then read from the DRAM during each inference. This implementation, however, uses more power and can result a memory bottleneck.

[0024] Another solution to accommodate large networks is splitting the network into multiple pieces and passing intermediate results from one accelerator to another through a host. Unfortunately, passing intermediate inference request results through the host consumes host bandwidth. For example, using a host interface (e.g., a peripheral component interconnect express (PCIe) interface) to pass intermediate inference request results consumes the host memory bandwidth. In addition, passing intermediate inference request results through the host (e.g., a host processor) consumes central processing unit cycles of the host processor and adds latency to an overall inference calculation.

[0025] One aspect of the present disclosure splits a large neural network into multiple, separate artificial intelligence (AI) inference accelerators (AIIAs). Each of the separate AI inference accelerators may be implemented in a separate system-on-chip (SoC). For example, each AI inference accelerator is allocated and stores a fraction of the weights or other parameters of the neural network. Intermediate inference request results are passed from one AI inference accelerator to another AI inference accelerator independent of a host processor. Thus, the host processor is not involved with the transfer of the intermediate inference request results.

The system passes partial results from one partial NN SoC to another NN SoC.

Now, I don't know how his differs from having 2 or more Akida 1000s connected up.

But, if Qualcomm think they've invented it, that suggests that 2 years ago, they were not planning to use Akida.

Our patent has a priority of 20181101 which pre-dates Qualcomm's priority by 3 months.

Thank you @Diogenese for your answer.
Are you now completely ruling out that they are using Akida, or could it be? I ask because I see very strong connections between Qualcomm and BrainChip.
 

Diogenese

Top 20
Thank you @Diogenese for your answer.
Are you now completely ruling out that they are using Akida, or could it be? I ask because I see very strong connections between Qualcomm and BrainChip.
Well, the patent is over two years old, and we do have partners in common with Qualcomm, so anything is possible, but I fear that, like Renesas, they will be reluctant to abandon their in-house development.

On the other hand, if their split NN infringes one or more of our patents ... ?

They may be able to avoid infringement because they use a Frankenstein (hybrid) NN which has several analog layers and a final digital layer.
 

Sirod69

bavarian girl ;-)
We've had Innoviz on the list before, but haven't talked about it for a long time.




Omer David Keilaf • CEO and Co-Founder at Innoviz Technologies
2 minutes ago

I'm super excited about our new product, opening new industries for Innoviz and solving critical barriers for the world. Kudos to the team for once again making the impossible. Innoviz Technologies

Innoviz Technologies to Unveil Breakthrough Innoviz360 LiDAR at CES® 2023​


 
I see our friends at Intellisense looking for someone to assist with NNs on neuromorphic processors.

Presumably also to assist with the awarded Phase II NECR project, which we know about, and possibly the Phase I ADORBL project.




Intellisense Systems Inc

Senior Software Engineer

Company: Intellisense Systems Inc
Location: Torrance, California, United States
Posted on: January 01
Intellisense Systems innovates what seemed impossible. We are a fast-growing Southern California technology innovator that solves tough, mission-critical challenges for our customers in advanced military, law enforcement, and commercial markets. We design, develop, and manufacture novel technology solutions for ground, vehicle, maritime, and airborne applications. Our products have been deployed in every extreme environment on Earth!
We are looking for an exceptional Senior Software Engineer to join our Artificial Intelligence (AI) and Radio Frequency (RF) Systems team. The team works on cutting-edge technologies for government customers and DoD applications.
As part of the team, you will work alongside other experienced scientists and engineers to develop novel cutting-edge solutions to several challenging problems. From creating experiments and prototyping implementations to designing new machine learning algorithms, you will contribute to algorithmic and system modeling and simulation, transition your developments to software and hardware implementations, and test your integrated solutions in accordance with project objectives, requirements, and schedules.
Projects You May Work On:
  • Real-time object detection, classification, and tracking
  • RF signal detection, classification, tracking, and identification
  • Fully integrated object detection systems featuring edge processing of modern deep learning algorithms
  • Optimization of cutting-edge neural network architectures for deployment on neuromorphic processors
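That last bullet is the one of interest here. As a hypothetical illustration of what "optimizing a network for deployment on a neuromorphic processor" typically starts with, here is a generic low-bit weight-quantization sketch in NumPy. It is not BrainChip's toolchain or any vendor's actual flow, just the general technique:

```python
import numpy as np

# Sketch of symmetric 4-bit weight quantization, the kind of preparation
# step usually needed before a CNN can run on neuromorphic silicon.
# Generic illustration only, not any vendor's toolchain.

def quantize_weights(w, bits=4):
    """Map float weights to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (bits - 1) - 1                # 7 for 4-bit
    scale = np.max(np.abs(w)) / qmax          # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale                           # dequantize with q * scale

w = np.random.default_rng(1).standard_normal((128, 64)) * 0.05
q, scale = quantize_weights(w)
err = np.abs(w - q * scale).mean()
print(f"4-bit ints in [{q.min()}, {q.max()}], mean abs error {err:.5f}")
```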

Neuromorphic Enhanced Cognitive Radio​

Award Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Contract: 80NSSC22CA063
Agency Tracking Number: 211743
Amount: $799,985.00
Phase: Phase II
Program: SBIR
Solicitation Topic Code: H6
Solicitation Number: SBIR_21_P2
Timeline
Solicitation Year: 2021
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-05-25
Award End Date (Contract End Date): 2024-05-24

Intellisense Systems, Inc. proposes in Phase II to advance development of a Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). NECR is a low-SWaP cognitive radio built on open-source frameworks, i.e., GNU Radio and RFNoC™, with new enhancements in environment learning and improvements in transmission quality and data processing. Due to the high efficiency of spiking neural networks and their low-latency, energy-efficient implementation on neuromorphic computing hardware, NECR can be integrated into SWaP-constrained platforms in spacecraft and robotics, to provide reliable communication in unknown and uncharacterized space environments such as the Moon and Mars. In Phase II, Intellisense will improve the NECR system for cognitive communication capabilities accelerated by neuromorphic hardware. We will refine the overall NECR system architecture, with a special focus on the mapping, optimization, and implementation of smart sensing algorithms on the neuromorphic hardware. The Phase II smart sensing algorithm library will include Kalman filter, carrier frequency offset estimation, symbol rate estimation, energy detection- and matched filter-based spectrum sensing, signal-to-noise ratio estimation, and automatic modulation identification. These algorithms will be implemented on COTS neuromorphic computing hardware such as the Akida processor from BrainChip, and then integrated with radio frequency modules and radiation-hardened packaging into a Phase II prototype. At the end of Phase II, the prototype will be delivered to NASA for testing and evaluation, along with a plan describing a path to meeting fault tolerance requirements for mission deployment and API documents for integration with CubeSat, SmallSat, and rover for flight demonstration.
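Of the smart sensing algorithms listed there, energy-detection spectrum sensing is the easiest to picture. A minimal NumPy sketch with a synthetic signal and an assumed threshold rule (nothing here comes from the NECR contract itself):

```python
import numpy as np

# Energy-detection spectrum sensing: decide whether a channel is occupied
# by comparing windowed signal energy to a noise-derived threshold.
# Synthetic data and threshold rule are illustrative assumptions.

rng = np.random.default_rng(42)
fs, n = 1_000, 4_000                       # sample rate (Hz), total samples

noise = rng.normal(0, 1, n)
tone = 3 * np.sin(2 * np.pi * 100 * np.arange(n) / fs)
signal = noise.copy()
signal[1500:2500] += tone[1500:2500]       # a burst occupies the middle windows

window = 500
energies = np.array([np.mean(signal[i:i + window] ** 2)
                     for i in range(0, n, window)])

noise_floor = np.median(energies)          # robust noise estimate
threshold = 2.0 * noise_floor              # assumed detection margin
occupied = energies > threshold
print("window energies:", np.round(energies, 2))
print("occupied windows:", np.flatnonzero(occupied))   # expect [3 4]
```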


Adaptive Deep Onboard Reinforcement Bidirectional Learning System​

Award Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Contract: 80NSSC22PB053
Agency Tracking Number: 221780
Amount: $149,996.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: H6
Solicitation Number: SBIR_22_P1
Timeline
Solicitation Year: 2022
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-07-22
Award End Date (Contract End Date): 2023-01-25

NASA is seeking innovative neuromorphic processing methods and tools to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). To address this need, Intellisense Systems, Inc. (Intellisense) proposes to develop an Adaptive Deep Onboard Reinforcement Bidirectional Learning (ADORBL) processor based on neuromorphic processing and its efficient implementation on neuromorphic computing hardware. Neuromorphic processors are a key enabler to the cognitive radio and image processing system architecture, which play a larger role in mitigating complexity and reducing autonomous operations costs as communications and control become complex. ADORBL is a low-SWaP neuromorphic processing solution consisting of multispectral and/or synthetic aperture radar (SAR) data acquisition and an onboard computer running the neural network algorithms. The implementation of artificial intelligence and machine learning enables ADORBL to choose processing configurations and adjust for impairments and failures. Due to its speed, energy efficiency, and higher performance for processing, ADORBL processes raw images, finds potential targets and thus allows for autonomous missions and can easily integrate into SWaP-constrained platforms in spacecraft and robotics to support NASA missions to establish a lunar presence, to visit asteroids, and to extend human reach to Mars. In Phase I, we will develop the CONOPS and key algorithms, integrate a Phase I ADORBL processing prototype to demonstrate its feasibility, and develop a Phase II plan with a path forward. In Phase II, ADORBL will be further matured, implemented on available commercial neuromorphic computing chips, and then integrated into a Phase II working prototype along with documentation and tools necessary for NASA to use the product and modify and use the software. The Phase II prototype will be tested and delivered to NASA to demonstrate for applications to CubeSat, SmallSat, and rover flights.
 

cassip

Regular
Mercedes prepares for its tech talks at CES today…they start in about 6 hours

 