BRN Discussion Ongoing

7für7

Top 20
What a great few weeks it's been to be alive: first becoming a grandad, then we just had an offer accepted on a new house and get the keys in 3 weeks, plus work, which hasn't been so good lately, picked up and I landed a few decent contracts worth a few million as well. Then BrainChip, after so much pain over the last few years, seems like it's just about to turn a corner ❤️❤️❤️

I’ve always believed in karma and maybe Karma is a thing 😁
Congrats man!!! What fantastic news! 👍☺️
 
  • Like
  • Fire
  • Love
Reactions: 5 users

MDhere

Top 20
I'm not really jealous ...

...

I'm really, really, really jealous.
I know I know, I will take plenty of pics and will chat with my French friends and of course Valeo :)
I paid for the trip and the ticket "to represent all my fellow brners" :)
 
  • Like
  • Love
  • Fire
Reactions: 39 users

7für7

Top 20
Some day traders made a profit at the end… but it still closed green
 
  • Fire
  • Love
Reactions: 2 users

Terroni2105

Founding Member


 
  • Like
  • Love
  • Fire
Reactions: 57 users

CHIPS

Regular

I can beat that 😂! 445,491 shares is a high volume for Germany less than an hour after market open, and that is only for the Tradegate marketplace.



This is the chart of the trades across all German markets for one hour. It adds up to roughly 651,349. This is a lot for Germany!

 
  • Like
  • Love
  • Fire
Reactions: 12 users

Xray1

Regular
Could Akida by any chance be the integrated AI accelerator used in the newly launched Raspberry Pi AI Camera, jointly developed by Raspberry Pi and Sony Semiconductor Solutions (SSS)? (For the sake of completeness, I should, however, add that while Sony’s IMX500 Camera Module is being promoted as having on-chip AI image processing, there is no specific mention of this involving neuromorphic technology.)

I am afraid I can’t answer the legitimate question regarding potential revenue, though. It is a fact that Sony Semiconductor Solutions has not signed a license with us, so it would have to be a license through either Megachips or Renesas (both of which also happen to be Japanese companies).
I guess - as usual - we will have to resort to watching the financials, unless we find out sooner one way or the other…

So here goes my train of thought: Yesterday (Sept 30) was the Raspberry Pi AI Camera’s official launch - coincidentally (or not?), this happened to be the day when the BRN share price soared without any official news as a catalyst… Could there have been some kind of leak, though?

Also, BrainChip has promised “exciting demos, including our Temporal Event-based Neural Networks (TENNs) and the Raspberry Pi 5 with Face and Edge Learning” for the Embedded World North America, taking place from October 8-10, 2024, at the Convention Center in Austin, TX.

The now sold-out Akida Raspberry Pi Dev Kit was based on the Raspberry Pi 4, so they won’t be using one of those for the announced Raspberry Pi 5 demo. Since we are nowadays primarily an IP company, would manufacturing and releasing a new Akida Dev Kit based on a Raspberry Pi 5 make sense? Not really. How about demonstrating an affordable AI camera using Akida technology manufactured by someone else (the mysterious Custom Customer SoC?)… 🤔

The Raspberry Pi AI Kit that came out in June and uses a Hailo AI acceleration module will only work with a Raspberry Pi 5 (introduced in October 2023), whereas the new Raspberry Pi AI Camera will work with all Raspberry Pi models. So while it may perform best on the latest Raspberry Pi 5, it will still be useful for developers with older RPI models as well.

Maybe one of the resident TSE techies will be able to tell us right away whether this is a dead end or pie in the sky, but until then I'll keep my fingers crossed…




Sony Semiconductor Solutions and Raspberry Pi Launch the Raspberry Pi AI Camera​

Accelerating the development of edge AI solutions​

Sony Semiconductor Solutions Corporation
Raspberry Pi Ltd.
Atsugi, Japan and Cambridge, UK — Sony Semiconductor Solutions Corporation (SSS) and Raspberry Pi Ltd today announced that they are launching a jointly developed AI camera. The Raspberry Pi AI Camera, which is compatible with Raspberry Pi’s range of single-board computers, will accelerate the development of AI solutions which process visual data at the edge. Starting from September 30, the product will be available for purchase from Raspberry Pi’s network of Approved Resellers, for a suggested retail price of $70.00*.
* Not including any applicable local taxes.


In April 2023, it was announced that SSS would make a minority investment in Raspberry Pi Ltd. Since then, the companies have been working to develop an edge AI platform for the community of Raspberry Pi developers, based on SSS technology. The AI Camera is powered by SSS’s IMX500 intelligent vision sensor, which is capable of on-chip AI image processing, and enables Raspberry Pi users around the world to easily and efficiently develop edge AI solutions that process visual data.
  • AI camera features
  • Because vision data is normally massive, using it to develop AI solutions can require a graphics processing unit (GPU), an accelerator, and a variety of other components in addition to a camera. The new Raspberry Pi AI Camera, however, is equipped with the IMX500 intelligent vision sensor which handles AI processing, making it easy to develop edge AI solutions with just a Raspberry Pi and the AI Camera.
  • The new AI Camera is compatible with all Raspberry Pi single-board computers, including the latest Raspberry Pi 5. This enables users to develop solutions with familiar hardware and software, taking advantage of the widely used and powerful libcamera and Picamera2 software libraries.
“SSS and Raspberry Pi Ltd aim to provide Raspberry Pi users and the development community with a unique development experience,” said Eita Yanagisawa, General Manager, System Solutions Division, Sony Semiconductor Solutions Corporation. “I’m very excited to share SSS edge AI sensing technology with the world’s largest development community as the first fruits of our strategic partnership. We look forward to further collaboration with Raspberry Pi using our AITRIOS™ edge AI solution development and operations platform. We aim to make the most of AI cameras equipped with our image sensors in our collaborative efforts with Raspberry Pi.”

“AI-based image processing is becoming an attractive tool for developers around the world,” said Eben Upton, CEO, Raspberry Pi Ltd. “Together with our longstanding image sensor partner Sony Semiconductor Solutions, we have developed the Raspberry Pi AI Camera, incorporating Sony’s image sensor expertise. We look forward to seeing what our community members are able to achieve using the power of the Raspberry Pi AI Camera.”

Specifications
  • Sensor model: SSS's approx. 12.3 effective megapixel IMX500 intelligent vision sensor with a powerful neural network accelerator
  • Sensor modes: 4,056(H) x 3,040(V) at 10 fps / 2,028(H) x 1,520(V) at 40 fps​
  • Unit cell size: 1.55 µm x 1.55 µm​
  • 76 degree FoV with manual/mechanical adjustable focus​
  • Integrated RP2040 for neural network firmware management​
  • Works with all Raspberry Pi models using only Raspberry Pi standard camera connector cable​
  • Pre-loaded with MobileNetSSD model​
  • Fully integrated with libcamera​
About Sony Semiconductor Solutions Corporation
Sony Semiconductor Solutions Corporation is a wholly owned subsidiary of Sony Group Corporation and the global leader in image sensors. It operates in the semiconductor business, which includes image sensors and other products. The company strives to provide advanced imaging technologies that bring greater convenience and fun. In addition, it also works to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both humans and machines to greater heights.
For more information, please visit https://www.sony-semicon.com/en/index.html.

About Raspberry Pi Ltd
Raspberry Pi is on a mission to put high-performance, low-cost, general-purpose computing platforms in the hands of enthusiasts and engineers all over the world. Since 2012, we’ve been designing single-board and modular computers, built on the Arm architecture, and running the Linux operating system. Whether you’re an educator looking to excite the next generation of computer scientists; an enthusiast searching for inspiration for your next project; or an OEM who needs a proven rock-solid foundation for your next generation of smart products, there’s a Raspberry Pi computer for you.
Note: AITRIOS is the registered trademark or trademark of Sony Group Corporation or its affiliates.






After reading this post early this morning, I'm a little surprised that there hasn't been further discussion of the Akida Pico and Raspberry Pi.

When I read about the "Akida Pico" this morning, I immediately thought of the Raspberry Pi board known as the "Raspberry Pi Pico". This is a board whose use cases are typically battery-powered, so power draw needs to be kept to a minimum.

What are people's thoughts on the Akida Pico and Raspberry Pi Pico being linked somehow? (Keep in mind that the Raspberry Pi Pico 2 has just been released.)
 
  • Like
  • Love
  • Fire
Reactions: 25 users

manny100

Regular
According to Simply Wall Sxxxt, insider ownership of BRN is 16.7%, and the employee share scheme holds 0.0445%.
That is an indication of faith by people who know.
No wonder we have had several enhancements in a short period of time.
 
  • Like
  • Love
Reactions: 11 users

CHIPS

Regular
I am in love ... with PICO :love:

Just imagine you had sold your BRN shares a few days ago to make some extra money, planning to buy them back at a lower price :eek:. Who was it again who had to sell some of their shares to avoid an unhappy wife? :unsure:
I feel very sorry for your wife now 😂😂😂

 
  • Haha
  • Fire
  • Like
Reactions: 4 users

GStocks123

Regular
Good to see us listed as partners on the Neurobus website. Looks like we may already be embedded in some of their products..

 
  • Like
  • Love
  • Fire
Reactions: 22 users

TECH

Regular
"Single neural processing engine
Minimal core for TENNS
"

I had assumed that the Akida NN would need at least 2 NPEs, but TENNS can run in a single NPE???!!!!

That is truly astonishing.

... and it does not need a microprocessor????!!!!!

https://www.epdtonthenet.net/articl...ion-for-Resource-Constrained-Deployments.aspx

This IP relies on the Akida2 event-based computing platform configuration engine as its foundation, meaning that the data quantities needing to be dealt with are kept to a minimum. Consequently only a small logic die area is required (0.18mm x 0.18mm on a 22nm semiconductor process with 50kBytes of SRAM memory incorporated), plus the associated power budget remains low (with <1mW operation being comfortably achieved). It can serve as either a standalone device (without requiring a microcontroller) or alternatively become a co-processor.

It's a self-contained package that sets the benchmark for low power.

Proving that we ARE the benchmark leaders at the "far edge"....PICO.....Pico Brainchip when you want first mover advantage, stop
procrastinating and sign up, our bus feels like it's warming the engines up again.

Fancy that, the company making a nice announcement on 1 October, my birthday 66 clickity...click.....thanks for sharing the love back in the US.

Is it just me, or has the trading pattern changed somewhat ?

Regards to all....Tech.
 
  • Like
  • Love
  • Fire
Reactions: 37 users

Tothemoon24

Top 20




Redefining AI Processing with Event-Based Architecture

BrainChip has launched the Akida Pico, enabling the development of compact, ultra-low power, intelligent devices for applications in wearables, healthcare, IoT, defense, and wake-up systems, integrating AI into various sensor-based technologies.

Akida Pico offers the lowest power standalone NPU core (less than 1mW), supports power islands for minimal standby power, and operates within an industry-standard development environment.

This new technology makes it possible for common things like drills, hand tools, and other consumer products to have smart features without costing a lot more.

Steve Brightfield, CMO at BrainChip:
“Today, a battery with a built-in tester can show how healthy it is with a simple color code:
green means it’s good, red means it needs to be replaced.

Providing a similar indicator, AI in these products can tell you when parts are wearing out before they break.

BrainChip’s low-power, low-maintenance AI works in the background without being noticed, so advanced tests can be used by anyone without needing to know a lot about them”

Steve Brightfield claimed that ordinary NPUs—including those with multiplier-accumulator arrays—run on fixed pipelines, processing every input whether or not it is useful.

Particularly with sparse data, a typical occurrence in AI applications where most input values have little impact on the final outcome, this inefficiency often results in wasted computation.

By using an event-based computing architecture, BrainChip saves computational resources and power by activating calculations only when relevant data is present.

Akida's main benefit comes from exploiting sparsity in both data and neural weights.

Traditional NPU architectures can take advantage of weight sparsity with pre-compilation, benefiting from model weight pruning, but cannot dynamically schedule around data sparsity; they must process all of the inputs.

By processing data only when needed, BrainChip's SNN technology can drastically lower power usage, depending on the degree of sparsity in the data.

BrainChip's Akida NPU, for instance, could execute only when the sensor detects a significant signal in audio-based edge applications such as gunshot recognition or keyword detection, conserving energy in the absence of relevant data.




BrainChip’s Akida NPU: Redefining AI Processing with Event-Based Architecture​



Embedded Staff
BrainChip has launched the Akida Pico, enabling the development of compact, ultra-low power, intelligent devices for applications in wearables, healthcare, IoT, defense, and wake-up systems, integrating AI into various sensor-based technologies. According to BrainChip, Akida Pico offers the lowest power standalone NPU core (less than 1 mW), supports power islands for minimal standby power, and operates within an industry-standard development environment. Its very small logic die area and configurable data buffer and model parameter memory help optimize the overall die size.

AI era​

In today's sophisticated artificial intelligence (AI) era, building smart technology into consumer items is usually associated with cloud services, complicated infrastructure, and high expense. In edge AI, computational power and energy efficiency are often in conflict. Designed for deep learning workloads, traditional neural processing units (NPUs) draw significant amounts of power, so they are less suited for always-on, ultra-low-power applications such as sensor monitoring, keyword detection, and other extreme edge AI uses. BrainChip is offering a fresh approach to this challenge.
BrainChip’s solution addresses one of the major challenges in edge AI: how to perform continuous AI processing without draining power. Traditional microcontroller-based AI solutions can manage low-power requirements but often lack the processing capability for complex AI tasks.


BrainChip launched in 2014, taking its inspiration from Peter Van Der Made's work on neuromorphic computing. Using spiking neural networks (SNNs), this technique replicates how the brain handles information, representing a fundamentally different approach from traditional convolutional neural networks (CNNs). Rather than performing continuous calculations, BrainChip's SNN-based systems only compute when triggered by events, optimizing power efficiency.

In an interview with Embedded, Steve Brightfield, CMO at BrainChip, talked about how this new method will change the game for ultra-low-power AI applications, marking a big step forward for the field. Brightfield said this technology makes it possible for common things like drills, hand tools, and other consumer products to have smart features without costing a lot more. “Today, a battery with a built-in tester can show how healthy it is with a simple color code: green means it’s good, red means it needs to be replaced. Providing a similar indicator, AI in these products can tell you when parts are wearing out before they break. BrainChip’s low-power, low-maintenance AI works in the background without being noticed, so advanced tests can be used by anyone without needing to know a lot about them,” Brightfield said.

Traditional NPUs vs. Event-Based Computing​

Brightfield claimed that ordinary NPUs—including those with multiplier-accumulator arrays—run on fixed pipelines, processing every input whether or not it is useful. Particularly with sparse data, a typical occurrence in AI applications where most input values have little impact on the final outcome, this inefficiency often results in wasted computation. By using an event-based computing architecture, BrainChip saves computational resources and power by activating calculations only when relevant data is present.

“Most NPUs keep calculating all data values, even for sparse data,” Brightfield remarked. “We schedule computations dynamically using our event-based architecture, cutting out unnecessary processing.”

The Influence of Sparsity​

BrainChip's main benefit comes from exploiting sparsity in both data and neural weights. Traditional NPU architectures can take advantage of weight sparsity with pre-compilation, benefiting from model weight pruning, but cannot dynamically schedule around data sparsity; they must process all of the inputs.

By processing data only when needed, BrainChip's SNN technology can drastically lower power usage, depending on the degree of sparsity in the data. BrainChip's Akida NPU, for instance, could execute only when the sensor detects a significant signal in audio-based edge applications such as gunshot recognition or keyword detection, conserving energy in the absence of relevant data.
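As a toy illustration of the sparsity argument above (my own sketch, not BrainChip code): when only inputs that cross a threshold trigger computation, the work scales with the number of "events" rather than with the input size, yet the result matches the dense calculation.

```python
import numpy as np

def dense_layer(x, w):
    """Conventional NPU style: every input is multiplied, zero or not."""
    return x @ w

def event_driven_layer(x, w, threshold=0.0):
    """Event-based style: only inputs above threshold trigger work.

    Each active "event" adds its scaled weight row to the output;
    silent inputs cost nothing.
    """
    out = np.zeros(w.shape[1])
    active = np.nonzero(np.abs(x) > threshold)[0]  # the "events"
    for i in active:
        out += x[i] * w[i]  # work scales with activity, not input width
    return out

rng = np.random.default_rng(0)
x = rng.random(128) * (rng.random(128) < 0.1)  # ~90% of inputs silent
w = rng.random((128, 16))

# Same answer, but the event-driven path touched only the active rows
assert np.allclose(dense_layer(x, w), event_driven_layer(x, w))
```

The real hardware schedules this dynamically rather than looping in software, but the accounting is the same: with 90% data sparsity, roughly 90% of the multiply-accumulates never happen.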

Akida Pico Block Diagram (Source: Brainchip)

Introducing the Akida Pico: Ultra-Low Power NPU for Extreme Edge AI​

Built on a spiking neural network (SNN) architecture, BrainChip's Akida Pico processor transforms event-based computing. Unlike conventional AI models that demand constant processing, Akida runs only in response to particular events. For always-on uses like anomaly detection or keyword identification, where power efficiency is vital, this makes it a perfect fit. The latest innovation from BrainChip is built on the Akida2 event-based computing platform configuration engine, which can execute at less than a single milliwatt, suitable for battery-powered operation.

The Akida Pico is well suited to wearables, IoT devices, and industrial sensors, jobs that call for continual awareness without draining the battery. Operating in the microwatt-to-milliwatt power range, this NPU is among the most efficient available; it even surpasses microcontrollers in several AI applications.

For some always-on artificial intelligence uses, “the Akida Pico can be lower power than microcontrollers,” Brightfield said. “Every microamp counts in extreme battery-powered use cases, depending on how long it is intended to perform.”

The Akida Pico can stay always-on without significantly affecting battery life, whereas microcontroller-based AI systems often require duty cycling, turning the CPU off and on in bursts to save power. For edge AI devices that must run constantly while keeping power consumption low, this advantage is vital.

BrainChip's MetaTF software flow allows developers to compile and optimize Temporal Event-based Neural Networks (TENNs) on the Akida Pico. Supporting models created with TensorFlow/Keras and PyTorch, MetaTF eliminates the need to learn a new machine learning framework, facilitating rapid AI application development for the edge.

Akida Pico die area versus process (mm2) (Source: Brainchip)

Standalone Operation Without a Microcontroller​

Another remarkable feature of the Akida Pico is its ability to function alone, without a host microcontroller to manage its tasks. Where an accelerator would usually rely on a microcontroller to begin, regulate, and halt operations, the Akida Pico includes an integrated micro-sequencer that manages the full neural network execution on its own. This architecture reduces total system complexity, latency, and power consumption.
For applications that do need a microcontroller, the Akida Pico is a useful co-processor for offloading AI tasks and lowering power requirements. From battery-powered wearables to industrial monitoring tools, this flexibility appeals to a wide range of edge AI applications.

Targeting Key Edge AI Applications​

The ultra-low power characteristics of the Akida Pico benefit medical devices that need continuous monitoring, such as glucose sensors or wearable heart rate monitors.

Likewise, speech recognition tasks such as voice-activated assistants or security systems listening for keywords are good candidates for this technology. Edge AI's toughest obstacle is balancing compute requirements against power consumption. In markets where battery life is crucial, the Akida Pico can scale performance while staying within limited power budgets.


One of the most notable uses of BrainChip's AI, according to Brightfield, is anomaly detection for motors and other mechanical systems. Traditional methods, both costly and power-intensive, monitor and diagnose equipment health using cloud-based infrastructure and edge servers. BrainChip embeds AI directly within the motor or device, flipping this concept on its head.

BrainChip's ultra-efficient Akida Neural Processor Unit (NPU), for example, can continually examine vibration data from a motor. Should an abnormality, such as an odd vibration, be found, the system sets off a basic alert, akin to turning on an LED. Rather than depending on remote servers or sophisticated diagnostic suites, this "dumb and simple" option warns maintenance staff that the motor needs attention, with no internet access or detailed examination required.
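The "dumb and simple" alert described here can be sketched in a few lines. This is purely illustrative (the RMS metric and the 0.8 limit are made-up choices of mine, not BrainChip's method): compute a health statistic over a window of vibration samples and raise the "red LED" flag when it exceeds a healthy baseline.

```python
import math

def vibration_alert(samples, rms_limit=0.8):
    """Toy anomaly check: RMS of a window of vibration samples.

    Returns True (light the red LED) when the RMS exceeds the
    healthy baseline `rms_limit`, an arbitrary illustrative value.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > rms_limit

# Normal vibration: small amplitude, no alert
assert vibration_alert([0.1, -0.2, 0.15, -0.1]) is False
# Abnormal vibration: large amplitude, alert fires
assert vibration_alert([1.5, -1.4, 1.6, -1.5]) is True
```

In practice the neural network learns what "abnormal" looks like rather than relying on a fixed threshold, but the output contract is the same: a single go/no-go signal a technician can read at a glance.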

“In the field, a maintenance technician can just glance at the motor,” Brightfield said. “If they spot a red light, they know it's time to replace the motor before it fails.” This method eliminates the need for costly software upgrades or cloud access, benefiting equipment in remote areas where connectivity may be limited.

For keyword detection, BrainChip has built the AI right into the device. According to Brightfield, the Akida Pico delivers impressive results at just under 2 milliwatts of power, with 4-5% higher accuracy than historical methods, working from raw audio data with modern algorithms. This achievement is enabled by Temporal Event-based Neural Networks (TENNs), a novel architecture built from state space models that permits high-quality performance without power-hungry microcontrollers.

As demand for edge AI grows, BrainChip’s advancements in neuromorphic computing and event-based processing are poised to contribute significantly to the development of ultra-efficient, always-on AI systems, providing flexible solutions for various applications.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 50 users

FJ-215

Regular
"Single neural processing engine
Minimal core for TENNS
"

I had assumed that the Akida NN would need at least 2 NPEs, but TENNS can run in a single NPE???!!!!

That is truly astonishing.

... and it does not need a microprocessor????!!!!!

https://www.epdtonthenet.net/articl...ion-for-Resource-Constrained-Deployments.aspx

This IP relies on the Akida2 event-based computing platform configuration engine as its foundation, meaning that the data quantities needing to be dealt with are kept to a minimum. Consequently only a small logic die area is required (0.18mm x 0.18mm on a 22nm semiconductor process with 50kBytes of SRAM memory incorporated), plus the associated power budget remains low (with <1mW operation being comfortably achieved). It can serve as either a standalone device (without requiring a microcontroller) or alternatively become a co-processor.

It's a self-contained package that sets the benchmark for low power.
Hi Dio,

A little left field, but can you see a way for the JAST learning rules to be implemented into TENNs? (orthogonal polynomials)

From memory the JAST rules only took up something like 64K lines of code.

Way, way, way out side of my pay grade.
 
  • Like
  • Thinking
  • Fire
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
Proving that we ARE the benchmark leaders at the "far edge"....PICO.....Pico Brainchip when you want first mover advantage, stop
procrastinating and sign up, our bus feels like it's warming the engines up again.

Fancy that, the company making a nice announcement on 1 October, my birthday 66 clickity...click.....thanks for sharing the love back in the US.

Is it just me, or has the trading pattern changed somewhat ?

Regards to all....Tech.
Happy Bday Tech. Mine next week. Let's hope the combined gravity of our good karma manifests in a couple of big, juicy deals that Brainchip lands, signed, sealed, delivered and announced, providing a steady flow of revenue into our coffers. 🤣


 
  • Like
  • Haha
  • Love
Reactions: 22 users

FJ-215

Regular
Proving that we ARE the benchmark leaders at the "far edge"....PICO.....Pico Brainchip when you want first mover advantage, stop
procrastinating and sign up, our bus feels like it's warming the engines up again.

Fancy that, the company making a nice announcement on 1 October, my birthday 66 clickity...click.....thanks for sharing the love back in the US.

Is it just me, or has the trading pattern changed somewhat ?

Regards to all....Tech.
Happy Birthday Tech.....

Have you made the trek to the Melbourne sandbelt yet?

Should be on every golfer's bucket list.

Mind you, a couple out of play atm but the Big 3 (Royal, Heath, Vic) are a must play.

You deserve it, great Birthday present to yourself.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 7 users

Derby1990

Regular
BRN back to nearly half a billion MC on just a sniff of something. Look out BHP, we're coming for you!
 
  • Haha
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20
  • Haha
  • Fire
Reactions: 5 users

Diogenese

Top 20
Good to see us listed as partners on the Neurobus website. Looks like we may already be embedded in some of their products..

Thanks GS,

... and their other industry partners are Intel and Prophesee.







Prophesee's DVS can instantaneously detect changes above a set threshold in a field of view. This is another project on which Brainchip and Prophesee are working together.
Hi Dio,

A little left field, but can you see a way for the JAST learning rules to be implemented into TENNs? (orthogonal polynomials)

From memory the JAST rules only took up something like 64K lines of code.

Way, way, way out side of my pay grade.
Well, until you asked the question, I hadn't thought about it.

In my mind, I had considered the division of labour as being Akida doing the inference/classification and TENNs handling the temporal element. But we have learnt that TENNs can be used without Akida, so there's another neat hypothesis down the plughole.

I haven't really got into how TENNs works, but we do know that, unlike Akida, it uses MACs. However, it also uses the same sparse weights and activations.

I'm more a pictures person than a words person and I'm not personally acquainted with converging orthogonal polynomials, but some of my best friends are.

This is an abstract from the TENNS patent:

WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622
COENEN OLIVIER JEAN-MARIE DOMINIQUE [US]; PEI YAN RU [US]


[0012] According to an embodiment of the present disclosure, disclosed herein is a neural network system that includes an input interface, a memory including a plurality of temporal and spatial layers, and a processor.

The input interface is configured to receive sequential data that includes temporal data sequences. The memory is configured to store a plurality of groups of first temporal kernel values and a first plurality of First-In-First-Out (FIFO) buffers corresponding to a current temporal layer.

The memory further implements a neural network that includes a first plurality of neurons for the current temporal layer, a corresponding group among the plurality of groups of the first temporal kernel values is associated with each connection of a corresponding neuron of the first plurality of neurons.

The processor is configured to allocate the first plurality of FIFO buffers to a first group of neurons among the first plurality of neurons.

The processor is then configured to receive a first temporal sequence of the corresponding temporal data sequences into the first plurality of FIFO buffers allocated to the first group of neurons from corresponding temporal data sequences over a first time window.

Thereafter, the processor is configured to perform, for each connection of a corresponding neuron of the first group of neurons, a first dot product of the first temporal sequence of the corresponding temporal data sequences within a corresponding FIFO buffer of first plurality of FIFO buffers with a corresponding temporal kernel value among the corresponding group of the first temporal kernel values.

The corresponding temporal kernel values are associated with a corresponding connection of the corresponding neuron of the first group of neurons.

The processor is then further configured to determine a corresponding potential value for the corresponding neurons of the first group of neurons based on the performed first dot product, and then generates a first output response based on the determined corresponding potential values.
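The buffered temporal convolution the abstract describes can be sketched in a few lines. All names here are mine, not the patent's: each time step pushes a sample into a FIFO, and the neuron's potential is the dot product of the buffer contents with a learned temporal kernel.

```python
from collections import deque
import numpy as np

class TemporalConvNeuron:
    """Toy sketch of one neuron's buffered temporal convolution:
    a FIFO holds the last T input samples, and the potential at each
    step is the dot product of the buffer with the temporal kernel."""

    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)
        # maxlen makes deque behave as a FIFO: oldest sample falls out
        self.fifo = deque([0.0] * len(kernel), maxlen=len(kernel))

    def step(self, sample):
        self.fifo.append(sample)
        buf = np.array(self.fifo)
        return float(buf @ self.kernel)  # potential for this time step

neuron = TemporalConvNeuron(kernel=[0.5, 0.3, 0.2])  # window T = 3
outputs = [neuron.step(s) for s in [1.0, 0.0, 2.0, 1.0]]
# outputs == [0.2, 0.3, 0.9, 0.8]
```

The patent's "recurrent mode" avoids storing the whole buffer by updating a running state instead, but the buffer-mode picture above is the easier one to hold in your head.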




The TENNs as disclosed herein may effectively learn both spatial and temporal correlations from the input data.

[0050] According to an embodiment, the spatiotemporal networks may be configured to perform the temporal convolution operations either in a buffered temporal convolution mode or a recurrent temporal convolution mode, and may be alternatively referred to as a “buffer mode” or a “recurrent mode”, respectively.

[0051] According to an embodiment, the spatiotemporal network may be configured with a plurality of spatiotemporal convolution layers. Each of the spatiotemporal layers may be further split into a plurality of temporal and spatial convolution layers. The kernels for the temporal and spatial convolution layers are represented as a sum over a set of basis functions, such as orthogonal polynomials, where the coefficients of the basis functions are trainable parameters of the network. This basis function representation compresses the number of parameters of the spatiotemporal network, which makes the training of the spatiotemporal network stable and resistant to overfitting.*

* Overfitting is responsible for the "hallucinations" experienced with OpenAI's models.
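To make the compression idea in [0051] concrete, here is a sketch of a long temporal kernel parameterized by a handful of orthogonal-polynomial coefficients. The basis choice (Chebyshev) and the sizes are my own illustration, not taken from the patent:

```python
import numpy as np

def kernel_from_basis(coeffs, length):
    """Build a `length`-tap temporal kernel as a weighted sum of
    Chebyshev polynomials evaluated on [-1, 1].

    Only the few entries of `coeffs` would be trainable, however long
    the kernel is, which is the parameter compression the patent text
    describes."""
    t = np.linspace(-1.0, 1.0, length)
    # chebvander returns the basis matrix: column k is T_k(t)
    basis = np.polynomial.chebyshev.chebvander(t, len(coeffs) - 1)
    return basis @ np.asarray(coeffs, dtype=float)

# 4 trainable coefficients parameterize a 64-tap kernel
kernel = kernel_from_basis([0.2, -0.1, 0.05, 0.01], length=64)
assert kernel.shape == (64,)
```

Training then backpropagates into the four coefficients rather than into 64 free taps, which is where the stability and overfitting resistance claimed above comes from: the kernel is constrained to be a smooth low-order curve.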



The JAST patent summarizes the rules as follows:

US11853862B2 Method, digital electronic circuit and system for unsupervised detection of repeating patterns in a series of events 20161121

  • Input events (“spikes”) are grouped into fixed-size packets. The temporal order between events of a same packet is lost, which may seem a drawback, but indeed increases robustness as it eases the detection of distorted patterns and makes the method insensitive to changes of the event rate.
  • Weighted or un-weighted synapses are replaced by a set of binary weights. Learning only requires flipping some of these binary weights and performing sums and comparisons, thus minimizing the computational burden.
  • The number of binary weights which is set to “1” for each neuron does not vary during the learning process. This avoids ending up with non-selective or non-sensible neurons.
In other words, the synapses are GO-NO GO gates, depending on how they are programmed from the model.
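The bullet points above condense into a very small sketch (mine, not the patented circuit): a packet is an unordered set of event addresses, a neuron's synapses are a set of binary go/no-go weights, and matching needs only an intersection count and a comparison.

```python
def neuron_matches(packet, weights, threshold):
    """Toy version of the summarized JAST matching step.

    `packet`  - unordered set of event addresses (temporal order lost)
    `weights` - addresses of synapses currently set to "1"
    The neuron fires when enough packet events land on set weights;
    only a sum (set intersection size) and a comparison are needed.
    """
    hits = len(packet & weights)
    return hits >= threshold

packet = {3, 7, 11, 20}        # fixed-size packet of input events
weights = {1, 3, 7, 11, 15}    # binary weights set to "1"

assert neuron_matches(packet, weights, threshold=3) is True
assert neuron_matches({2, 4}, weights, threshold=1) is False
```

Learning, per the bullets, would just flip some weight addresses in and out of the set while keeping its size constant; no multiplications anywhere, which is why the computational burden stays so low.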

[0062] The neural processor 320 may correspond to a neural processing unit (NPU). The NPU is a specialized circuit that implements all the control and arithmetic logic necessary to execute machine learning algorithms, typically by operating on models such as artificial neural networks (ANNs) and spiking neural networks (SNNs).

However, does the loss of temporal order within a packet exclude TENNS? I wouldn't think so, as the temporal order between the packets is retained.


So the question is:

A - whether the JAST rules are compatible with TENNS COPs; and
B - whether there is any advantage in combining them?

The short answer is: the chooks ate my homework.



Truth is I haven't dug in depth into the TENNS patent or looked at the pictures.

So I think the old engineering adage "If it ain't broke, ..." applies.

As my mother used to say "Leave it alone - you'll make it explode!"
 

  • Like
  • Love
  • Fire
Reactions: 34 users

Guzzi62

Regular
I forgot who Thomas Hulsing is and had to check him on Linkedin.

A systems of systems engineer at Airbus Defence and Space GmbH, working on: Life Support Systems, Scientific Experiments, Satellite SW, Data Evaluation, Automation, Innovation & Improvement.

Clearly a very good sign that he reposted the Akida Pico news, showing he is interested.

I am not usually one for this kind of dot-joining, but he seems to be an Akida fanboy who has been posting BrainChip news for months, and considering what he is doing in that 40k-employee company, surely he is playing around with Akida2 & TENNs in his department.

Also positive that Akida Pico is out in the news in several specialist news outlets, last one I saw is newelectronics UK below.

 
  • Like
  • Fire
  • Love
Reactions: 19 users
I forgot who Thomas Hulsing is and had to check him on Linkedin.

A systems of systems engineer at Airbus Defence and Space GmbH, working on: Life Support Systems, Scientific Experiments, Satellite SW, Data Evaluation, Automation, Innovation & Improvement.

Clearly a very good sign that he reposted the Akida Pico news, showing he is interested.

I am not usually one for this kind of dot-joining, but he seems to be an Akida fanboy who has been posting BrainChip news for months, and considering what he is doing in that 40k-employee company, surely he is playing around with Akida2 & TENNs in his department.

Also positive that Akida Pico is out in the news in several specialist news outlets, last one I saw is newelectronics UK below.

@Sirod69, who we haven't seen around for a while, confirmed a while back that Thomas is an enthusiastic shareholder and part of her WhatsApp(?) group.
 
  • Like
  • Love
  • Fire
Reactions: 10 users