BRN Discussion Ongoing

Proga

Regular
Let me get this straight, FF. Are you saying Akida may be in cars? Nooo Waaay. That's outrageous! We could end up taking over the world at this rate.

SC
Not yet, but it soon will be. Everything points to 2025 models, which begin production in the second half of 2024. All models are planned years in advance.
 
  • Like
  • Love
  • Fire
Reactions: 6 users

Gman

Member
I think Xilinx AI (which they recently acquired) is said to be behind this.
They seem to have been working together in some capacity since 2017…


 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Diogenese

Top 20
"We’re proving that on-chip AI, close to the sensor, has a sensational future, for our customers’ products, as well as the planet."
I have to ask something about Qualcomm again. What exactly do we know about this "an AI accelerator chip"? Can someone say something about that?

The Snapdragon Ride Flex was first mentioned during Qualcomm’s Automotive investor day in September 2022, but more details are available now. The original Ride platform was based around a two-chip solution with an ADAS SoC and an AI accelerator chip.

Hi Sirod69,

This Qualcomm patent application relates to a large split NN over 2 or more SoCs because the weights are too large for the on SoC memory of a single NN SoC.

US2020250545A1 SPLIT NETWORK ACCELERATION ARCHITECTURE

Priority: 20190206



[0022] As noted, an artificial intelligence accelerator may be used to train a neural network. Training of a neural network generally involves determining one or more weights associated with the neural network. For example, the weights associated with a neural network are determined by hardware acceleration using a deep learning accelerator. Once the weights associated with a neural network are determined, an inference may be performed using the trained neural network, which computes results (e.g., activations) by processing input data based on the weights associated with the trained neural network.

[0023] In practice, however, a deep learning accelerator has a fixed amount of memory (e.g., static random access memory (SRAM) with a capacity of 128 megabytes (MB)). As a result, the capacity of a deep learning accelerator is sometimes not large enough to accommodate and store a single network. For example, some networks have weights of a larger size than the fixed amount of memory available from the deep learning accelerator. One solution to accommodate large networks is to split the weights into a separate storage device (e.g., a dynamic random access memory (DRAM)). These weights are then read from the DRAM during each inference. This implementation, however, uses more power and can result in a memory bottleneck.

[0024] Another solution to accommodate large networks is splitting the network into multiple pieces and passing intermediate results from one accelerator to another through a host. Unfortunately, passing intermediate inference request results through the host consumes host bandwidth. For example, using a host interface (e.g., a peripheral component interconnect express (PCIe) interface) to pass intermediate inference request results consumes the host memory bandwidth. In addition, passing intermediate inference request results through the host (e.g., a host processor) consumes central processing unit cycles of the host processor and adds latency to an overall inference calculation.

[0025] One aspect of the present disclosure splits a large neural network into multiple, separate artificial intelligence (AI) inference accelerators (AIIAs). Each of the separate AI inference accelerators may be implemented in a separate system-on-chip (SoC). For example, each AI inference accelerator is allocated and stores a fraction of the weights or other parameters of the neural network. Intermediate inference request results are passed from one AI inference accelerator to another AI inference accelerator independent of a host processor. Thus, the host processor is not involved with the transfer of the intermediate inference request results.

The system passes partial results from one partial NN SoC to another NN SoC.
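
To make the plumbing concrete, here's a toy Python sketch (entirely my own illustration, with made-up names and sizes; not Qualcomm's code and not any Akida API) of a network whose weights are split across two accelerators, with the intermediate activations handed device-to-device rather than round-tripping through the host:

```python
import numpy as np

# Toy illustration of the patent's idea: weights split across two
# "accelerators", with the intermediate result passed straight from
# one device to the next (think chip-to-chip link), not via the host.
rng = np.random.default_rng(0)

class ToyAccelerator:
    """Holds the fraction of the network's weights resident on this device."""
    def __init__(self, layers):
        self.layers = layers  # weight matrices held in "on-chip SRAM"

    def run(self, x):
        for w in self.layers:           # compute only the layers this device owns
            x = np.maximum(x @ w, 0.0)  # dense layer + ReLU
        return x

# Pretend this 4-layer network is too big for one device's memory...
weights = [rng.standard_normal((64, 64)) * 0.1 for _ in range(4)]

# ...so layers 1-2 live on accelerator A and layers 3-4 on accelerator B.
acc_a = ToyAccelerator(weights[:2])
acc_b = ToyAccelerator(weights[2:])

x = rng.standard_normal(64)
intermediate = acc_a.run(x)       # the claimed novelty is this hop: A -> B directly,
output = acc_b.run(intermediate)  # no host CPU / PCIe round trip in between
print(output.shape)               # (64,)
```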

Now, I don't know how this differs from having 2 or more Akida 1000s connected up.

But, if Qualcomm think they've invented it, that suggests that 2 years ago, they were not planning to use Akida.

Our patent has a priority of 20181101 which pre-dates Qualcomm's priority by 3 months.
 
  • Like
  • Fire
  • Thinking
Reactions: 37 users

Edge AI Chip Company Syntiant Unveils NDP115 Neural Decision Processor at CES 2023

https://www.semiconductor-digest.co...ndp115-neural-decision-processor-at-ces-2023/
"The Syntiant NDP115 is now shipping in production volumes. Pricing for 10Ku quantities is $3.25 per unit"

That's damn expensive actually, considering..
Then you have the cost of fitting it to whatever product it's going into, and no on-chip learning?

What's their cost to manufacture the chips?..

I wonder why they can't offer it as IP?
Maybe because of the analog/digital architecture 🤔..

BrainChip's IP royalties could easily come in at a third of that cost to customers (in volume), at next to no cost to us..
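
Back-of-envelope on that, with the royalty rate being pure guesswork on my part (only the $3.25 at 10Ku figure comes from the article):

```python
# Rough cost comparison; illustrative assumptions only.
syntiant_unit_price = 3.25   # USD per unit at 10K quantities, per the article
volume = 1_000_000           # hypothetical production run

# Guess: a licensed-IP route lands around a third of that per unit
# once the function is folded into the customer's own SoC.
assumed_ip_cost_per_unit = syntiant_unit_price / 3

print(f"Discrete chip, BOM add: ${syntiant_unit_price * volume:,.0f}")
print(f"Assumed IP route:       ${assumed_ip_cost_per_unit * volume:,.0f}")
# Discrete chip, BOM add: $3,250,000
# Assumed IP route:       $1,083,333
```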

They'd better hope they have good OEM marketing..
 
  • Like
  • Fire
Reactions: 11 users

Sirod69

bavarian girl ;-)
"We’re proving that on-chip AI, close to the sensor, has a sensational future, for our customers’ products, as well as the planet."

Hi Sirod69,

This Qualcomm patent application relates to a large split NN over 2 or more SoCs because the weights are too large for the on SoC memory of a single NN SoC.

US2020250545A1 SPLIT NETWORK ACCELERATION ARCHITECTURE

Priority: 20190206

View attachment 26284

[0022] As noted, an artificial intelligence accelerator may be used to train a neural network. Training of a neural network generally involves determining one or more weights associated with the neural network. For example, the weights associated with a neural network are determined by hardware acceleration using a deep learning accelerator. Once the weights associated with a neural network are determined, an inference may be performed using the trained neural network, which computes results (e.g., activations) by processing input data based on the weights associated with the trained neural network.

[0023] In practice, however, a deep learning accelerator has a fixed amount of memory (e.g., static random access memory (SRAM) with a capacity of 128 megabytes (MB)). As a result, the capacity of a deep learning accelerator is sometimes not large enough to accommodate and store a single network. For example, some networks have weights of a larger size than the fixed amount of memory available from the deep learning accelerator. One solution to accommodate large networks is to split the weights into a separate storage device (e.g., a dynamic random access memory (DRAM)). These weights are then read from the DRAM during each inference. This implementation, however, uses more power and can result a memory bottleneck.

[0024] Another solution to accommodate large networks is splitting the network into multiple pieces and passing intermediate results from one accelerator to another through a host. Unfortunately, passing intermediate inference request results through the host consumes host bandwidth. For example, using a host interface (e.g., a peripheral component interconnect express (PCIe) interface) to pass intermediate inference request results consumes the host memory bandwidth. In addition, passing intermediate inference request results through the host (e.g., a host processor) consumes central processing unit cycles of the host processor and adds latency to an overall inference calculation.

[0025] One aspect of the present disclosure splits a large neural network into multiple, separate artificial intelligence (AI) inference accelerators (AIIAs). Each of the separate AI inference accelerators may be implemented in a separate system-on-chip (SoC). For example, each AI inference accelerator is allocated and stores a fraction of the weights or other parameters of the neural network. Intermediate inference request results are passed from one AI inference accelerator to another AI inference accelerator independent of a host processor. Thus, the host processor is not involved with the transfer of the intermediate inference request results.

The system passes partial results from one partial NN SoC to another NN SoC.

Now, I don't know how his differs from having 2 or more Akida 1000s connected up.

But, if Qualcomm think they've invented it, that suggests that 2 years ago, they were not planning to use Akida.

Our patent has a priority of 20181101 which pre-dates Qualcomm's priority by 3 months.

Thank you @Diogenese for your answer.
Are you now completely ruling out that they are using Akida, or could it still be possible? I do see very strong connections between Qualcomm and BrainChip.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Thank you @Diogenese for your answer.
Are you now completely ruling out that they are using Akida, or could it still be possible? I do see very strong connections between Qualcomm and BrainChip.
Well, the patent is over 2 years old, and we do have partners in common with Qualcomm, so anything is possible, but I fear that, like Renesas, they will be reluctant to abandon their in-house development.

On the other hand, if their split NN infringes one or more of our patents ... ?

They may be able to avoid infringement because they use a Frankenstein (hybrid) NN which has several analog layers and a final digital layer.
 
  • Like
  • Love
  • Sad
Reactions: 19 users

Sirod69

bavarian girl ;-)
We've had Innoviz on the list before, but haven't talked about it for a long time.




Omer David Keilaf, CEO and Co-Founder at Innoviz Technologies • 2 minutes ago

I’m super excited about our new product, opening new industries for Innoviz and solving critical barriers for the world. Kudos to the team for once again making the impossible happen. Innoviz Technologies

Innoviz Technologies to Unveil Breakthrough Innoviz360 LiDAR at CES® 2023


 
  • Like
  • Fire
  • Love
Reactions: 13 users
I see our friends at Intellisense are looking for someone to assist with NNs on neuromorphic processors.

Presumably also to assist with the awarded Phase II NECR project, which we know about, and possibly the Phase I ADORBL project.




Intellisense Systems Inc

Senior Software Engineer

Company: Intellisense Systems Inc
Location: Torrance, California, United States
Posted on: January 01
Intellisense Systems innovates what seemed impossible. We are a fast-growing Southern California technology innovator that solves tough, mission-critical challenges for our customers in advanced military, law enforcement, and commercial markets. We design, develop, and manufacture novel technology solutions for ground, vehicle, maritime, and airborne applications. Our products have been deployed in every extreme environment on Earth!
We are looking for an exceptional Senior Software Engineer to join our Artificial Intelligence (AI) and Radio Frequency (RF) Systems team. The team works on cutting-edge technologies for government customers and DoD applications.
As part of the team, you will work alongside other experienced scientists and engineers to develop novel cutting-edge solutions to several challenging problems. From creating experiments and prototyping implementations to designing new machine learning algorithms, you will contribute to algorithmic and system modeling and simulation, transition your developments to software and hardware implementations, and test your integrated solutions in accordance with project objectives, requirements, and schedules.
Projects You May Work On:
  • Real-time object detection, classification, and tracking
  • RF signal detection, classification, tracking, and identification
  • Fully integrated object detection systems featuring edge processing of modern deep learning algorithms
  • Optimization of cutting-edge neural network architectures for deployment on neuromorphic processors

Neuromorphic Enhanced Cognitive Radio

Award Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Contract: 80NSSC22CA063
Agency Tracking Number: 211743
Amount: $799,985.00
Phase: Phase II
Program: SBIR
Solicitation Topic Code: H6
Solicitation Number: SBIR_21_P2
Timeline
Solicitation Year: 2021
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-05-25
Award End Date (Contract End Date): 2024-05-24

Intellisense Systems, Inc. proposes in Phase II to advance development of a Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). NECR is a low-SWaP cognitive radio built on open-source frameworks, i.e., GNU Radio and RFNoC™, with new enhancements in environment learning and improvements in transmission quality and data processing. Due to the high efficiency of spiking neural networks and their low-latency, energy-efficient implementation on neuromorphic computing hardware, NECR can be integrated into SWaP-constrained platforms in spacecraft and robotics, to provide reliable communication in unknown and uncharacterized space environments such as the Moon and Mars. In Phase II, Intellisense will improve the NECR system for cognitive communication capabilities accelerated by neuromorphic hardware. We will refine the overall NECR system architecture to achieve cognitive communication capabilities accelerated by neuromorphic hardware, with a special focus on the mapping, optimization, and implementation of smart sensing algorithms on the neuromorphic hardware. The Phase II smart sensing algorithm library will include Kalman filtering, carrier frequency offset estimation, symbol rate estimation, energy detection- and matched filter-based spectrum sensing, signal-to-noise ratio estimation, and automatic modulation identification. These algorithms will be implemented on COTS neuromorphic computing hardware such as the Akida processor from BrainChip, and then integrated with radio frequency modules and radiation-hardened packaging into a Phase II prototype. At the end of Phase II, the prototype will be delivered to NASA for testing and evaluation, along with a plan describing a path to meeting fault tolerance requirements for mission deployment and API documents for integration with CubeSat, SmallSat, and rover for flight demonstration.
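
For anyone wondering what one item from that algorithm library looks like in its classical (non-spiking) form, here's a minimal energy-detection spectrum-sensing sketch in Python; the sizes, threshold, and test signal are my own toy assumptions, and Intellisense's actual version would be a spiking implementation mapped onto neuromorphic hardware such as Akida:

```python
import numpy as np

# Energy-detection spectrum sensing: declare a band occupied when the
# received energy exceeds the known noise floor by a margin (in dB).
rng = np.random.default_rng(1)

n_samples = 1024
noise_var = 1.0  # assumed known noise power

def band_occupied(samples, noise_var, threshold_db=3.0):
    """True if average sample energy sits threshold_db above the noise floor."""
    energy = np.mean(np.abs(samples) ** 2)
    return 10 * np.log10(energy / noise_var) > threshold_db

# Case 1: noise only -> should report an idle band.
noise = (rng.standard_normal(n_samples)
         + 1j * rng.standard_normal(n_samples)) * np.sqrt(noise_var / 2)
print(band_occupied(noise, noise_var))         # False

# Case 2: noise plus a complex tone -> should report an occupied band.
t = np.arange(n_samples)
tone = 2.0 * np.exp(2j * np.pi * 0.1 * t)
print(band_occupied(noise + tone, noise_var))  # True
```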


Adaptive Deep Onboard Reinforcement Bidirectional Learning System

Award Information
Agency: National Aeronautics and Space Administration
Branch: N/A
Contract: 80NSSC22PB053
Agency Tracking Number: 221780
Amount: $149,996.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: H6
Solicitation Number: SBIR_22_P1
Timeline
Solicitation Year: 2022
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-07-22
Award End Date (Contract End Date): 2023-01-25

NASA is seeking innovative neuromorphic processing methods and tools to enable autonomous space operations on platforms constrained by size, weight, and power (SWaP). To address this need, Intellisense Systems, Inc. (Intellisense) proposes to develop an Adaptive Deep Onboard Reinforcement Bidirectional Learning (ADORBL) processor based on neuromorphic processing and its efficient implementation on neuromorphic computing hardware. Neuromorphic processors are a key enabler to the cognitive radio and image processing system architecture, which play a larger role in mitigating complexity and reducing autonomous operations costs as communications and control become complex. ADORBL is a low-SWaP neuromorphic processing solution consisting of multispectral and/or synthetic aperture radar (SAR) data acquisition and an onboard computer running the neural network algorithms. The implementation of artificial intelligence and machine learning enables ADORBL to choose processing configurations and adjust for impairments and failures. Due to its speed, energy efficiency, and higher performance for processing, ADORBL processes raw images, finds potential targets and thus allows for autonomous missions and can easily integrate into SWaP-constrained platforms in spacecraft and robotics to support NASA missions to establish a lunar presence, to visit asteroids, and to extend human reach to Mars. In Phase I, we will develop the CONOPS and key algorithms, integrate a Phase I ADORBL processing prototype to demonstrate its feasibility, and develop a Phase II plan with a path forward. In Phase II, ADORBL will be further matured, implemented on available commercial neuromorphic computing chips, and then integrated into a Phase II working prototype along with documentation and tools necessary for NASA to use the product and modify and use the software. The Phase II prototype will be tested and delivered to NASA to demonstrate for applications to CubeSat, SmallSat, and rover flights.
 
  • Like
  • Fire
  • Love
Reactions: 45 users

cassip

Regular
Mercedes prepares for its tech talks at CES today…they start in about 6 hours

 
  • Like
  • Love
  • Fire
Reactions: 38 users

dippY22

Regular
The idea that every company at CES 2023 is going to announce they are working with Brainchip is just that: an idea.

Rob Telson set out the FACTS and based on those FACTS the certainty is that:

1. Renesas and Brainchip will present;

2. Socionext and Brainchip will present;

3. Prophesee and Brainchip will present;

4. Nviso and Brainchip will present;

5. VVDN and Brainchip will present;

6. EDGE IMPULSE and Brainchip will present.

Prior to announcing the VVDN presentation with Brainchip, Rob Telson said there was more to come, so VVDN and Edge Impulse may have been the "more" and that is it.

Everything else is speculation by posters here not FACTS from the company.

Valeo not mentioning AKIDA should be seen in light of the fact that they have not mentioned AKIDA since 2020, when the deal was announced on the ASX.

SCALA 2, which we speculate may contain AKIDA, is yet to hit the streets. Having kept quiet about AKIDA since May 2020, what fact do we rely upon to say they will ever disclose that AKIDA is being used, when CEO Sean Hehir implied, by saying watch the 4Cs, that some customers will not reveal they use AKIDA IP?

Creating your own narrative and then feeling let down because the company did not follow it is not sensible.

All it does is allow the trolls and manipulators a chance to play on your emotions.

My opinion only DYOR
FF

AKIDA BALLISTA

PS: Any sane investor will be over the moon and more than satisfied with the 6 CES events listed above. I know I am. OK, and Blind Freddie says he is too.

FF says, "The idea that every company at CES 2023 is going to announce they are working with Brainchip is just that an idea." And a wacko idea too I would add. Delusional even.

But what would be interesting and very revealing is to learn how many companies or individuals coming to CES are at least talking to Brainchip. To answer that, it certainly would be insightful to know how many of the umpteen half-hour meeting slots Brainchip made available, to anyone who would like to schedule or reserve a spot, were actually filled (or reserved).

For example, if there were 100 half-hour meeting spots originally available and Brainchip only had 10 meetings reserved ... well, a 10% response could be construed as a win, or maybe not. But if 85 of the allotted meeting times were booked, well, I for one would consider that to be very good and would presume Brainchip is quite happy about that. Zero attendees booking a meeting time would be telling too, but I find that unlikely.

Anyone know if that is something we could discover, i.e. how many of the available meeting times were booked or reserved? From an investor's perspective, that is information I would want to know. Regards, dippY

From Brainchip.com:

BrainChip Inc

Meet with BrainChip Inc. @ CES 2023

30 min

Las Vegas, NV
Meet with BrainChip Inc. to discuss how the technology can benefit your company or organization.


 
  • Like
  • Fire
  • Love
Reactions: 17 users
So, just on recent Edge Impulse news.

I see they have already updated their site as below.





[screenshots of the updated site]


Whilst sniffing around on the site, I also saw the support for Renesas.

We know they are taping out at the mo, and I read something below that surprised me and made me curious as to how many feed-forward neural networks are out there.

Must be honest, I've not really looked that hard, but I didn't suspect there were too many more than ours?


[screenshots attached]
 
  • Like
  • Fire
  • Love
Reactions: 35 users
Had a crook day today and been passed out most of the day.

Didn't miss much by the looks of it, haha.

Anyway, just saw this CES update from about 5 hrs ago and a piece on Prophesee.


CES 2023: how Auralpine startups plan to do well

Olivia Carter, January 4, 2023

After having already industrialized the first four generations of its neuromorphic, bio-inspired sensor and raised 50 million in a Series C at the end of 2022, another Isérois company, Prophesee, has meanwhile chosen to rent not a stand but a suite in the heart of one of the most prestigious hotels at the show, the Venetian, in order to meet a hundred potential prospects, and to present them with three technologies, each targeting a key market: a new sensor prototype, co-developed with the Sony group, intended to improve imaging for the mobile telephony world; a second sensor aimed at immersive experiences for augmented-reality players; and a sensor for detecting presence within a room for the IoT sector, co-developed with the American Brainchip.

"This will be the first time that we show these demonstrators publicly, some of which will also be subject to confidentiality clauses," Luca Verre, CEO and co-founder of Prophesee, tells La Tribune. The company today has 110 employees and three locations (Grenoble, Paris and Shanghai).
@dippY22

We at least know Prophesee is presenting with Brainchip to around a hundred prospects, so a good start ;)

Prophesee "has meanwhile chosen to rent not a stand but a suite ... in order to meet a hundred potential prospects", including "a sensor for detecting presence within a room for the IoT sector, co-developed with the American Brainchip".
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
 
  • Like
  • Love
Reactions: 7 users

ODAAT

one day at a time
I know it's probably too soon for it to be part of Intel's products, but wouldn't it be nice if their claim of a unique piece of technology were our little secret sauce :D

The announcement of the new Intel Core i9-13980HX marks a renewed focus on performance platforms for the tech giant, just a few short years after Apple decided to step away from Intel and build chipsets in-house. The decision has apparently lit a fire under Intel, which now lays claim to a unique piece of technology that rivals the M2 in more ways than one.

https://www.msn.com/en-au/money/new...sor/ar-AA15WDjL?ocid=emmx-mmx-feeds&PC=EMMX01
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

dippY22

Regular
Received my Edge Impulse newsletter today, and one of the topics is "ML on Edge, Episode 3", a podcast they host (hmmmm, podcasts??? ... sounds familiar, eh?). This episode's guest is Guido Jouret. How long before Rob Telson, Sean Hehir, etc. pop up as guests on some future episode??? I'd bet a shiny new dime that it happens.

Regards, dippY
 
  • Like
  • Thinking
  • Fire
Reactions: 15 users

Sirod69

bavarian girl ;-)
VVDN joined the NVIDIA Partner Network to expand opportunities for advanced AI-enabled camera vision applications.


Interestingly, they were previously trying to provide this very same solution on an Ambarella SoC. Perhaps they too, like Prophesee, came to the house-of-straw realisation prior to getting their hands on Akida.

hmm?:unsure:

#CES2023: Today, Ambarella announced the first production member of our CV3 family of automotive AI domain controllers.

Targeting L2+ to L4 autonomous vehicles, the CV3-AD685 provides a single-chip solution for multi-sensor perception, fusion and path planning, and features a next-generation CVflow® AI engine with neural network processing that is 20x faster than the previous generation of CV2 SoCs.

“Our CV3-AD domain controller family is transforming the automotive AD and ADAS market, through its unique combination of highly efficient AI processing, advanced image processing and ultra-low power consumption,” said Fermi Wang (王奉民), President and CEO of Ambarella.

Live demonstrations of Ambarella’s CV3 technology are offered by invitation-only during CES 2023. Contact your Ambarella representative to schedule a meeting.
 
  • Like
  • Thinking
  • Fire
Reactions: 16 users

stuart888

Regular
[quoting dippY22's post above about the Edge Impulse "ML on Edge" podcast episode with guest Guido Jouret]
Yes, Dippy! Very, very good discussion. Years from now, this relationship with Edge Impulse might prove to be one huge winner! Maybe our best partnership!



Guido Jouret
I'm passionate about developing high-tech solutions to solve our biggest problems. At Cisco, it was "connecting the unconnected" with the Internet, TelePresence, and IoT. At Envision Energy, it was building a software platform to provide clean and renewable sources of energy. At Nokia, it was helping to improve people's lives via digital health and immersive experiences (virtual reality). At ABB, we used digital technologies to help solve some of the biggest challenges we face: providing greener power, transportation, and the benefits of automation to people all over the world. At Plume, we made our homes smarter to enrich our daily lives.
https://www.linkedin.com/in/gjouret
 
  • Like
Reactions: 16 users

stuart888

Regular
Edge Impulse now has 13 "Strategic Partners". They also list others under "Solution Partners" and "Community Partners". Strategic Partners in addition to Brainchip:

Alif Semiconductor,
Arduino,
Infineon,
Nordic Semiconductor,
POLYN Technology,
Renesas,
SAIC,
Seeed Studio,
Silicon Labs,
Sony,
Synaptics,
Syntiant.


Zach Shelby is really sharp. The more I listen/watch/learn, the positive impression just builds. Great thoughtful delivery on all his videos. 🎅

https://www.edgeimpulse.com/ecosystem-partners

 
  • Like
  • Love
  • Fire
Reactions: 40 users

charles2

Regular
Initially in the US and Canada: 10,000 charging stations.



The Wall Street Journal

Mercedes-Benz Plans to Install Its Own Network of EV Chargers

 
  • Like
  • Love
  • Fire
Reactions: 23 users