BRN Discussion Ongoing

Diogenese

Top 20
Hi Doz,

This is an interesting PCT application which includes the N-of-M coding. It resulted in the grant of this US patent:

US11227210B2 Event-based classification of features in a reconfigurable and temporally coded convolutional spiking neural network

It has a priority date of 20190725, which is too early for transformers.

But there was a subsequent "continuation-in-part" [patent of addition] application filed in the US:
US2022147797A1 EVENT-BASED EXTRACTION OF FEATURES IN A CONVOLUTIONAL SPIKING NEURAL NETWORK
which has a priority date of 20220114, and is awaiting examination by the USPTO.

This application does refer to transformation modules, and the description of the transformation module was added to the parent description on 20220114.

[006] ... However, up to now temporal spiking neural networks have not been able to meet the accuracy demands of image classification. Spiking neural networks comprise a network of threshold units, and spike inputs connected to weights that are additively integrated to create a value that is compared to one or more thresholds. No multiplication functions are used. Previous attempts to use spiking neural networks in classification tasks have failed because of erroneous assumptions and subsequent inefficient spike rate approximation of conventional convolutional neural networks and architectures. In spike rate coding methods, the values that are transmitted between neurons in a conventional convolutional neural network are instead approximated as spike trains, whereby the number of spikes represent a floating-point or integer value which means that no accuracy gains or sparsity benefits may be expected. Such rate-coded systems are also significantly slower than temporal-coded systems, since it takes time to process sufficient spikes to transmit a number in a rate-coded system. The present invention avoids those mistakes and returns excellent results on complex data sets and frame-based images.
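To see what [006] is driving at, here is a minimal Python sketch (my own illustration, not BrainChip's implementation) of such a threshold unit: a binary spike merely selects which weights get accumulated, so the data path needs only additions and a comparison, and no multipliers:

```python
import numpy as np

def integrate_and_fire(spike_indices, weights, threshold):
    """Additively integrate the weights selected by incoming spikes and
    compare the result to a threshold, per [006]: no multiplications."""
    potential = 0.0
    for i in spike_indices:
        potential += weights[i]  # a spike just selects a weight to add
    return potential >= threshold

weights = np.array([0.4, -0.2, 0.9, 0.1])
print(integrate_and_fire([0, 2], weights, threshold=1.0))  # True: 0.4 + 0.9 >= 1.0
```

For reference, the system claim reads: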

A system, comprising:
a memory for storing data representative of at least one kernel;
a plurality of spiking neuron circuits;
an input module for receiving spikes related to digital data, wherein each spike is relevant to a spiking neuron circuit and each spike has an associated spatial coordinate corresponding to a location in an input spike array;
a transformation module configured to:
transform a kernel to produce a transformed kernel having an increased resolution relative to the kernel; and/or
transform the input spike array to produce a transformed input spike array having an increased resolution relative to the input spike array;
a packet collection module configured to collect spikes until a predetermined number of spikes relevant to the input spike array have been collected in a packet in memory, and to organize the collected relevant spikes in the packet based on the spatial coordinates of the spikes; and
a convolutional neural processor configured to perform event-based convolution using memory and at least one of the transformed input spike array and the transformed kernel.
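Reading the claim as an algorithm, a toy software version of the packet-collection and event-based-convolution steps might look like the sketch below (my own illustration with invented names; the real modules are hardware, and the transformation step is omitted):

```python
import numpy as np

def collect_packet(spike_stream, packet_size):
    """Collect spikes (y, x) until the predetermined count is reached,
    then organize them by spatial coordinate, per the claim language."""
    packet = [s for _, s in zip(range(packet_size), spike_stream)]
    packet.sort()  # order by spatial coordinates
    return packet

def event_based_convolution(packet, kernel, out_shape):
    """With binary spikes, convolution degenerates to scatter-adding the
    kernel around each spike location: additions only, and work scales
    with the number of spikes rather than the number of pixels."""
    out = np.zeros(out_shape)
    kh, kw = kernel.shape
    for y, x in packet:
        for dy in range(kh):
            for dx in range(kw):
                oy, ox = y + dy - kh // 2, x + dx - kw // 2
                if 0 <= oy < out_shape[0] and 0 <= ox < out_shape[1]:
                    out[oy, ox] += kernel[dy, dx]
    return out

spikes = iter([(1, 3), (0, 0), (1, 2), (0, 1)])
packet = collect_packet(spikes, packet_size=3)
print(event_based_convolution(packet, np.ones((3, 3)), out_shape=(3, 4)))
```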

However, this use of the term "transformation" is not the same as the "transformer" architecture which is supplanting LSTMs with its "attention" capability.

https://blogs.nvidia.com/blog/2022/...

[attached image]


Transformers use positional encoders to tag data elements coming in and out of the network. Attention units follow these tags, calculating a kind of algebraic map of how each element relates to the others.
Attention queries are typically executed in parallel by calculating a matrix of equations in what’s called multi-headed attention.
With these tools, computers can see the same patterns humans see.
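For contrast with the patent's "transformation module", the transformer's attention unit is only a few lines of linear algebra. A textbook NumPy sketch of scaled dot-product attention (generic, not any vendor's code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each query scores every key (the 'algebraic map of how each element
    relates to the others'), and the scores weight a sum over the values.
    Multi-headed attention just runs several of these in parallel on
    lower-dimensional projections."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```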
 
  • Like
  • Fire
  • Love
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Looks like we're still in the running for future smartphones, etc. from Samsung then.

Samsung: Reports of new internal CPU development team are not true


Last updated: March 6th, 2023 at 15:15 UTC+01:00

After a series of disappointments with Exynos processors, Samsung completely switched to Qualcomm chips for the Galaxy S23. The company will reportedly use a Qualcomm chip for the Galaxy S24 series as well. Over the past few days, reports claimed that Samsung restarted the development of custom CPU cores for future Galaxy devices. However, the South Korean firm claims those reports are untrue.
Regarding recent reports, Samsung Electronics reached out to us and said, “A recent media report that Samsung has established an internal team dedicated to CPU core development is not true. Contrary to the news, we have long had multiple internal teams responsible for CPU development and optimization, while constantly recruiting global talents from relevant fields.” This clearly indicates that the company hasn’t started developing custom CPU cores for its future smartphones, tablets, and laptops.


Samsung might continue using ARM’s stock CPU cores in its phones​

Samsung’s statement indicates that it might continue using ARM’s stock CPU cores in its future smartphones. ARM has reportedly changed its licensing terms, disallowing OEMs from making changes to its stock designs. ARM and Qualcomm have been at odds with each other regarding licensing terms of ARM’s CPU core designs ever since Qualcomm acquired Nuvia.

Some reports claimed in the past that Samsung MX, Samsung Electronics’ mobile phone division, has created an internal team to develop in-house smartphone chipsets from scratch. However, there hasn’t been any official information in this regard. It is being reported that Samsung will switch to an in-house chipset for the Galaxy S25 in early 2025.

Samsung has been under constant pressure from consumers due to the sub-par performance of Exynos processors. The company was also embroiled in the controversy related to GOS (Game Optimization Service) that reduced peak gaming performance on the Galaxy S22 series for stable sustained performance. Later, the company had to offer an option to turn off GOS on its smartphones through a software update.

Sam
 
  • Like
  • Fire
  • Wow
Reactions: 11 users

Diogenese

Top 20
View attachment 31435
I think this is the reason why Brainchip is pushing for a different way to benchmark Edge AI processors, as TOPS has been weighted too much in terms of power and capability. Akida’s top end is 50 TOPS, yet according to Nviso, Akida outperformed Nvidia in every way.

So if Akida, with much less TOPS than Nvidia processors, can outperform them, then benchmarking has to change to ensure true value gets highlighted.

Sorry if I misunderstand this as I am not a super technical guy.
Hi TFM,

That is the beauty of Akida - it does not need to perform all those operations to get the result because of its sparsity. Spikes are sparse, only occurring in response to an event.

In addition, the N-of-M coding (rank coding) takes advantage of the retinal receptor/pixel response time to eliminate spikes that carry no significant data.
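A minimal illustration of the idea (my own, simplified): with latency coding the strongest stimuli spike first, so keeping only the first N of M spikes retains the most significant inputs and silently drops the rest:

```python
import numpy as np

def n_of_m_code(intensities, n):
    """Rank (N-of-M) coding: stronger stimuli spike sooner, so the first
    n spike arrivals identify the n most significant of the M inputs;
    the remaining spikes carry little extra information and are dropped."""
    order = np.argsort(-np.asarray(intensities))  # earliest spikes first
    return order[:n]

receptors = [0.1, 0.9, 0.05, 0.7, 0.3]  # e.g., retinal receptor activations
print(n_of_m_code(receptors, n=2))       # -> [1 3]
```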

So frames per second per Watt would be a more meaningful comparison, or its equivalent for voice - say, eg, inferences per second per Watt.

A benchmark which compares results is better than a benchmark which counts the steps to get to the result.
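The arithmetic behind such a benchmark is deliberately trivial: divide delivered results by power drawn. A throwaway helper with purely hypothetical numbers (not measured figures) shows how the ranking can invert once power enters the denominator:

```python
def frames_per_watt(frames_per_second, watts):
    # Results-oriented benchmark: frames (or inferences) per second per Watt.
    return frames_per_second / watts

# Hypothetical devices, illustrative numbers only.
print(frames_per_watt(30, 0.3))    # low-power edge device: 100 fps/W
print(frames_per_watt(300, 30.0))  # many-TOPS accelerator: 10 fps/W
```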
 
  • Like
  • Love
  • Fire
Reactions: 36 users

ndefries

Regular
Ok all this excitement. Another 40k added to my drawer.
 
  • Like
  • Fire
  • Wow
Reactions: 27 users

HopalongPetrovski

I'm Spartacus!
I have just noticed that Nabtrade is not showing if transactions are cross trades or any of that type of info today?
Not sure how long it's been absent.
Are other platforms still routinely showing this info when you look at the course of trades?
 
  • Thinking
Reactions: 2 users

Esq.111

Fascinatingly Intuitive.
I have just noticed that Nabtrade is not showing if transactions are cross trades or any of that type of info today?
Not sure how long it's been absent.
Are other platforms still routinely showing this info when you look at the course of trades?
Afternoon HopalongPetrovski,

Comsuck is displaying all trades, codes.
Whether it's an accurate depiction of what's actually happening is debatable.

😊.

Regards,
Esq.
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Spot on. No doubt about Ipsolon.

Had a little read about Ipsolon.

I have said it a few times too many but I believe AKIDA and cognitive communications across radio, wifi, 3,4,5,6G will be a stellar market.

My opinion only DYOR
FF

AKIDA BALLISTA
Maybe some further connection here to dig into with BrainChip + Tata + Ipsolon + Renesas, because of mutual interests in radio frequency stuff.

Funny how the same names keep cropping up all over the place.🥳
[attached screenshots]
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Makeme 2020

Regular
RENESAS SHOWCASING NEW PRODUCTS AT EMBEDDED WORLD MARCH 14-16

Renesas
Date: March 14-16, 2023
Location: Nuremberg, Germany
Booth: Hall 1, Stand 234 (1-234)

Visit us at Embedded World and discover our latest solutions to make our lives easier. Whatever your area of electronic design, our experts will be on hand to showcase our latest technologies in the following demos:

ADAS, xEV, and E/E Architecture Change
Explore our extensive automotive roadmaps for ADAS, xEV, and E/E architecture change.

Winning Combination Reference Designs
Delve into our Analog + Power + Embedded Processing + Connectivity product portfolios and the resulting comprehensive solutions for consumer, industrial and automotive applications. Join us for a major announcement around Quick-Connect IoT and see solutions from various partners.

MCUs & MPUs
Explore our wide portfolio of industry-leading MCUs & MPUs.

Connectivity Solutions & Wi-Fi 6/6E
Check out our connectivity solutions featuring the Future with Matter protocol and Wi-Fi 6/6E.

Analog & Power
See our Analog & Power offering including sensor solutions with Arduino and GreenPAK.
 
  • Like
  • Fire
  • Thinking
Reactions: 36 users

jtardif999

Regular
So are we finally done with stealth mode? Time will tell.
Don’t think so. All of the companies listed in the scroll comments have been happy to tell everyone they love BRN; they are not hiding behind NDAs. That leaves all those who do have NDAs still same same..
 
  • Like
Reactions: 10 users

Steve10

Regular
Looks ok on 15min chart. Has golden crossed & MACD turned green. We'll see if it holds the breakaway gap.

It's about to GC on 30min chart as well but MACD still not green.
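For anyone wanting to reproduce those signals, both are standard moving-average constructions. A pandas sketch using the conventional parameters (50/200 bars for the cross and 12/26/9 for MACD; on an intraday chart the periods are bars, not days):

```python
import pandas as pd

def chart_signals(close: pd.Series) -> pd.DataFrame:
    """Golden cross: short MA crosses above long MA.
    MACD 'turning green': the histogram (MACD minus its
    9-period signal line) goes positive."""
    out = pd.DataFrame(index=close.index)
    out["ma_short"] = close.rolling(50).mean()
    out["ma_long"] = close.rolling(200).mean()
    out["golden_cross"] = (out["ma_short"] > out["ma_long"]) & (
        out["ma_short"].shift(1) <= out["ma_long"].shift(1)
    )
    macd = close.ewm(span=12).mean() - close.ewm(span=26).mean()
    out["macd_hist"] = macd - macd.ewm(span=9).mean()
    return out
```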

[attached chart image]
 
Last edited:
  • Like
  • Thinking
  • Love
Reactions: 25 users

Reuben

Founding Member
Awesome thanks, you beat me to it.

A good friend of mine, Reuben, asked me to post the following screenshots from a Facebook group plus the article.

Here are the Facebook screenshots

View attachment 31404


View attachment 31403



Here is the Google SiFive X280 article


Google deploys SiFive's Intelligence X280 processor for AI workloads​

Hybridizes the RISC-V cores with TPU architecture
September 22, 2022, by Sebastian Moss

Google is using the RISC-V-based SiFive Intelligence X280 processor in combination with the Google TPU, as part of its portfolio of AI chips.
Fabless chip designer SiFive said that it was also being used by NASA, Tenstorrent, Renesas, Microchip, Kinara, and others.
RISC-V is an open standard instruction set architecture based on established RISC principles, which is provided under open source licenses that do not require fees.
[Image: SiFive/Google]
At the AI Hardware Summit in Santa Clara, Krste Asanovic, SiFive's co-founder and chief architect, took to the stage with Cliff Young, Google TPU Architect and MLPerf co-founder.
The SiFive Intelligence X280 is a multi-core capable RISC-V processor with vector extension, optimized for AI/ML applications in the data center.
At the summit, the two companies explained that the X280 processor is being used as the AI Compute Host to provide flexible programming combined with the Google MXU (systolic matrix multiplier) accelerator in the data center. However, they did not disclose the scale of the deployment.
SiFive has introduced the Vector Coprocessor Interface eXtension (VCIX), allowing customers to plug an accelerator directly into the vector register file of the X280.
Google already uses third-party ASIC design services with Broadcom for its in-house TPU AI chip, instead focusing on developing its strengths - the Matrix Multiply Unit and the Inter-Chip Interconnect.
Now it is adding the X280 in what Google calls "an elegant division of labor with the TPU," taking the MXUs and combining them with the X280.
Google's Cliff Young said that with SiFive VCIX-based general purpose cores “hybridized” with Google MXUs, you can build a machine "that lets you have your cake and eat it too."
Thanks for posting it @TechGirl .... huge news.... The X280 processor from SiFive is something I will keep my eyes on from now on... now that we know it is with Google...
 
  • Like
  • Fire
  • Love
Reactions: 28 users
Just revisited this research article I skimmed mid last year.

Only 'cause it discussed SNNs and had Infineon and BMW authors amongst the others haha

Too technical for me but maybe @Diogenese could have a skim?

Reason for revisit is I recalled some of the new capabilities, or similar, that are in our new platform being discussed.

Scene / semantic segmentation, spatial-temporal etc, and I also noted a reference in one section to Thorpe's rank order coding.

They discuss some of these needs and the benefits for radar in ADAS.

They discuss SpiNNaker and Loihi at this time, however I suspect the new Akida functionalities could assist?



Automotive Radar Processing With Spiking Neural Networks: Concepts and Challenges

Frequency-modulated continuous wave radar sensors play an essential role for assisted and autonomous driving as they are robust under all weather and light conditions. However, the rising number of transmitters and receivers for obtaining a higher angular resolution increases the cost for digital signal processing. One promising approach for energy-efficient signal processing is the usage of brain-inspired spiking neural networks (SNNs) implemented on neuromorphic hardware. In this article we perform a step-by-step analysis of automotive radar processing and argue how spiking neural networks could replace or complement the conventional processing. We provide SNN examples for two processing steps and evaluate their accuracy and computational efficiency. For radar target detection, an SNN with temporal coding is competitive to the conventional approach at a low compute overhead. Instead, our SNN for target classification achieves an accuracy close to a reference artificial neural network while requiring 200 times less operations. Finally, we discuss the specific requirements and challenges for SNN-based radar processing on neuromorphic hardware. This study proves the general applicability of SNNs for automotive radar processing and sustains the prospect of energy-efficient realizations in automated vehicles.
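The "temporal coding" referred to in the abstract usually means latency coding: the stronger a radar return, the earlier its spike. A simplified sketch of the general technique (my reading, not the paper's code):

```python
import numpy as np

def latency_encode(magnitudes, t_max=1.0, threshold=0.1):
    """Latency (temporal) coding of range-Doppler cells: spike time is
    inversely related to magnitude; sub-threshold cells never spike,
    which is where the sparsity (and energy saving) comes from."""
    m = np.asarray(magnitudes, dtype=float)
    m = m / m.max()
    times = t_max * (1.0 - m)      # strong returns spike early
    times[m < threshold] = np.inf  # weak cells stay silent
    return times

cells = [0.05, 0.8, 1.0, 0.3]  # toy range-Doppler magnitudes
print(latency_encode(cells))    # strongest cell spikes at t = 0.0
```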


5.3. Toward Neuromorphic Automated Driving
As motivated in the introduction, the use of neuromorphic hardware has a high potential to significantly reduce the energy demands for highly-automated driving. Besides radar signals, also camera and LIDAR data need to be processed in order to get a complete understanding of the automotive scene. For image processing there already exist first attempts to solve complex tasks with SNNs, e.g., for object detection (Kim et al., 2020) or semantic segmentation (Kim et al., 2021). Also recently, Viale et al. (2021) realized an SNN on Loihi for car detection using a dynamic vision sensor. Using LIDAR data, which is naturally sparse and thus predestined for SNNs, Zhou et al. (2020) showed a spiking convolutional network for real-time 3D object detection. Shalumov et al. (2021) use LIDAR data for SNN-based collision avoidance with a control network based on the neural engineering framework. All these examples show that SNN-based sensor processing for autonomous driving is a trending topic. Besides the development of SNNs and their implementation on neuromorphic hardware, also the combined processing, i.e., sensor fusion using SNNs, will become an important topic.

When it comes to AI-based autonomous driving, ensuring functional safety of both software and hardware is a critical issue. The principles that are currently developed to support machine learning models (Henriksson et al., 2018; Mohseni et al., 2019) will also apply to SNNs. Similarly, neuromorphic hardware will have to fulfill the same standards as any automotive electronic system: adhere to temperature ranges, be resistant to vibrations, be deterministic and redundant, or contain self-monitoring. For that reason, only digital neuromorphic systems are candidates for integration in cars, while the use of analog or mixed-signal neuromorphic hardware seems out of scope at the moment due to their intrinsic variability. Hence, we suggest to focus on advanced digital systems such as SpiNNaker2 (Yan et al., 2021) or Loihi2 (Orchard et al., 2021) to further explore neuromorphic hardware for automotive radar processing and automated driving in general.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users

robsmark

Regular
The same thing that happens every day after we run up 15% is my guess.
I think this proves that without deals being signed with guarantees of near-term revenue, or proven revenue from the existing two contracts, we aren’t going anywhere.

Shorts will continue to devalue this company until the company provides the market with further proof of adoption.

Non-material announcements don’t get much better than yesterday’s, but unfortunately it didn’t generate the financial traction required to reverse the trend.

It is crucial that some of these EAP customers show some commitment to Brainchip by signing a deal. I see no reason why this shouldn’t be the case either, given that the company has already announced that changes made to this latest revision were directly requested by the EAP customers.

More time is required I guess.
 
  • Like
  • Love
  • Fire
Reactions: 25 users
I think this proves that without deals being signed with guarantees of near-term revenue, or proven revenue from the existing two contracts, we aren’t going anywhere.

Shorts will continue to devalue this company until the company provides the market with further proof of adoption.

Non-material announcements don’t get much better than yesterday’s, but unfortunately it didn’t generate the financial traction required to reverse the trend.

It is crucial that some of these EAP customers show some commitment to Brainchip by signing a deal. I see no reason why this shouldn’t be the case either, given that the company has already announced that changes made to this latest revision were directly requested by the EAP customers.

More time is required I guess.
And maybe throw in some nefarious forces with deep pockets who are pressuring prices down to accumulate.
Which imo is a very good possibility.
 
  • Like
  • Love
  • Fire
Reactions: 14 users

robsmark

Regular
And maybe throw in some nefarious forces with deep pockets who are pressuring prices down to accumulate.
Which imo is a very good possibility.
That’s just wishful thinking until proven though unfortunately Rise.

Those institutions which have accumulated thus far are questionable to me also - someone is making a fortune lending their shares to shorters, so how do we know this isn’t their strategy with this company?
 
  • Like
  • Thinking
  • Love
Reactions: 7 users
That’s just wishful thinking until proven though unfortunately Rise.

Those institutions which have accumulated thus far are questionable to me also - someone is making a fortune lending their shares to shorters, so how do we know this isn’t their strategy with this company?
I doubt that would ever be able to be proven though, so yes, really pointless of me to have mentioned it.
Those shorts will eventually have to unwind, which will logically move the price up. Yes, logic seems to be non-existent these days. But like you mentioned in a previous post, a little more time is needed.
Not that it means shit, but my confidence level is pretty high that this is going to rock hard this year, mate. And I had that same feeling before we got the sugar hit yesterday. Lots of new info to dissect from that. I'll cut it short before I start to ramble on.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 19 users
And... Ramble I shall 😂
Think of all the companies that are working with BRN, then think of all the employees who work with them, then think of all the friends and associates connected to them... oh boy, here comes the tin foil 😂 Quite a few of them could be collaborating to keep prices down for accumulation. And that's it for my ramblings, before I bring UFOs into the post 🛸
Because we all know PvdM is a time traveller from the future. 🤔
 
Last edited:
  • Haha
  • Like
  • Fire
Reactions: 14 users
SiFive and BrainChip Partner to Demo IP Compatibility
SiFive and BrainChip have partnered to show their IP is compatible in SoC designs for embedded artificial intelligence (AI). The companies have demonstrated BrainChip's neuromorphic processing unit (NPU) IP working alongside SiFive's RISC-V host processor IP.
BrainChip's NPU processor IP, the basis for its Akida chip, is a neuromorphic processor designed to accelerate spiking neural networks. This IP can be used to analyze inputs from most sensor types, including cameras, to provide ultra-low-power analysis in real-time applications. A recent BrainChip demo showed its Akida chip in a vehicle, detecting the driver, recognizing the driver's face, and identifying their voice simultaneously. Keyword spotting required 600 μW, facial recognition needed 22 mW, and the visual wake-word inference used to detect the driver was 6-8 mW.
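Those power figures convert directly to energy per inference once you assume a latency; the 100 ms used below is my assumption, purely to show the arithmetic (E = P x t):

```python
def energy_mj(power_w, latency_s):
    # Energy per inference in millijoules: E = P * t.
    return power_w * latency_s * 1e3

# Assuming 100 ms per inference -- a hypothetical latency, not from the article.
print(energy_mj(600e-6, 0.1))  # keyword spotting at 600 uW -> 0.06 mJ
print(energy_mj(22e-3, 0.1))   # face recognition at 22 mW  -> 2.2 mJ
```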

 
  • Like
  • Fire
  • Love
Reactions: 37 users
VVDN Technologies Pvt. Ltd. (Renesas partner page)

Website: https://www.vvdntech.com/
Product Category: Microcontrollers & Microprocessors; Clocks & Timing; Interface; Power & Power Management; Sensor Products
Partner Type: Preferred
Country: India; Japan; Singapore
Application Category: IoT Applications; Consumer Electronics; Artificial Intelligence; FPGA Designs; Communication & Computing Infrastructure
ODM Capability: Large lot (> 1000)
  • IoT
  • Network and Wi-Fi
  • 5G and Data Center
  • Vision
  • Cloud & Applications
Solutions
  • Various RZ modules (System On Modules)
  • Retail Automation
  • IoT Devices
 
  • Like
  • Fire
  • Love
Reactions: 12 users