BRN Discussion Ongoing

Wags

Regular
Hi all, not sure of the correct protocol, if any.
Just advising a name change from Maccareadsalot to Wags.
Same person, same attitude, same ingredients.
Stay well all,
Macca
 
  • Like
  • Haha
  • Fire
Reactions: 26 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 12 users

Deadpool

Did someone say KFC
Not technically BRN related, but Space Jesus is worried 😟

 
  • Haha
  • Like
Reactions: 4 users

cosors

👀
Good question; I wonder myself where all these dumb-nonsense speculations without any supporting context come from.
According to Karl Popper (and me), a theory stands, or at least is not wrong, until it is disproved. So make an effort to disprove it.
I stick to a plausible causal chain until it has been disproved.
Simply stating that something is dumb nonsense is not enough for me.
Before, you were specific about lidar and MB; here you are much too general for me.
Someone once claimed that the world is a sphere. That was considered heresy. We know the rest of the story.
Just an example of why I think this approach makes more sense than calling others out for their research and dot-joining.
 
  • Like
  • Love
Reactions: 17 users

cosors

👀
Great suggestion! Let’s see some “serious research” from you then!
Thank you for your tireless work. Please never let up!
And, I really like your feet. Maybe this will help motivate you.
 
  • Haha
  • Like
  • Love
Reactions: 19 users

Satchmo25

Member
  • Like
  • Thinking
Reactions: 2 users

cosors

👀
I am thrilled! This Sunday I have only three pages of posts to check to see if anyone is saying anything wrong.
 
  • Haha
  • Like
  • Love
Reactions: 16 users

IloveLamp

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 11 users

Dhm

Regular
This is Apple's toaster patent:

US2022222510A1 MULTI-OPERATIONAL MODES OF NEURAL ENGINE CIRCUIT 20210113



[Patent figure attachment]

[0052] Referring to FIG. 3, an example neural processor circuit 218 may include, among other components, neural task manager 310 , a plurality of neural engines 314 A through 314 N (hereinafter collectively referred as “neural engines 314 ” and individually also referred to as “neural engine 314 ”), kernel direct memory access (DMA) 324 , data processor circuit 318 , data processor DMA 320 , and planar engine 340

[0053] Each of neural engines 314 performs computing operations for machine learning in parallel. Depending on the load of operation, the entire set of neural engines 314 may be operating or only a subset of the neural engines 314 may be operating while the remaining neural engines 314 are placed in a power-saving mode to conserve power. Each of neural engines 314 includes components for storing one or more kernels, for performing multiply-accumulate operations, for performing parallel sorting operations, and for post-processing to generate an output data 328 , as described below in detail with reference to FIGS. 4A and 4B. Neural engines 314 may specialize in performing computation heavy operations such as convolution operations and tensor product operations. Convolution operations may include different kinds of convolutions, such as cross-channel convolutions (a convolution that accumulates values from different channels), channel-wise convolutions, and transposed convolutions.


[Patent figure attachment]


[0063] FIG. 4A is a block diagram of neural engine 314 , according to one embodiment. Specifically, FIG. 4A illustrates neural engine 314 perform operations including operations to facilitate machine learning such as convolution, tensor product, and other operations that may involve heavy computation in the first mode. For this purpose, neural engine 314 receives input data 322 , performs multiply-accumulate operations (e.g., convolution operations) on input data 322 based on stored kernel data, performs further post-processing operations on the result of the multiply-accumulate operations, and generates output data 328 . Input data 322 and/or output data 328 of neural engine 314 may be of a single channel or span across multiple channels.

It has 16 neural engines each with an array of MAC (multiply accumulate) processors.

They need to find some sort of neural processor which does not rely on MACs.
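
For anyone who wants a feel for the distinction, here is a rough Python sketch (my own toy simplification, not Apple's or BrainChip's actual implementation) contrasting a dense MAC-style dot product with an event-driven accumulation that only does work on non-zero activations:

```python
import numpy as np

def mac_dot_product(activations, weights):
    """Dense MAC style: one multiply-accumulate per element,
    regardless of whether the activation is zero."""
    acc = 0.0
    for a, w in zip(activations, weights):
        acc += a * w
    return acc

def event_driven_dot_product(activations, weights):
    """Event-driven style: only non-zero activations ("events")
    trigger work, so sparse inputs skip most of the arithmetic."""
    acc = 0.0
    for idx, a in enumerate(activations):
        if a != 0:            # silent inputs cost nothing
            acc += a * weights[idx]
    return acc

# Toy example: a mostly-silent (sparse) activation vector
acts = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 2.0, 0.0])
wts = np.random.randn(8)

print(mac_dot_product(acts, wts))           # 8 multiply-accumulates
print(event_driven_dot_product(acts, wts))  # only 2, same answer
```

Real silicon is obviously far more involved (weight sparsity, quantisation, event routing, and so on), but it gives a sense of why "not relying on MACs" matters when activations are mostly zero.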
Hi @Diogenese, thanks for your sterling efforts explaining this. Rhetorical question coming: why couldn't our senior management make it abundantly clear we have the solution to Apple's problem?
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

Esq.111

Fascinatingly Intuitive.
  • Like
  • Wow
  • Thinking
Reactions: 17 users

jtardif999

Regular
Looks like our mates at the US Air Force - AFRL got another $7m approved for the kitty, as below.

Hopefully they allocate some of it to development with Akida.



Washington, September 25, 2023

Amendments to H.R. 4365 – Department of Defense Appropriations Act, 2024



Carey (R-OH) – Amendment No. 116 – Increases and decreases by $7 million for research, development, test and evaluation for the Air Force with the intent that the $7 million will be used for the development of a cognitive EW machine learning/neuromorphic processing device to counter AI-enabled adaptive threats (Air Force RDT&E, Line 162, PE# 0207040F, Multi-Platform Electronic Warfare Equipment)
I worked with EW years ago and have always thought it would be a likely eventual use case for Akida.
 
  • Like
Reactions: 10 users
I do agree with your views, but I would like to add another dynamic: maybe, just maybe, two things are (or were) at play this week. Either the ASX didn't accept the way the company's planned announcement was presented, or there is a material aspect to the announcement/s that wasn't accepted in the way it was presented for release. Personally, I'm happy with my earlier view of something within the first two weeks of October; time shall tell.

The company could be erring on the side of caution as AKD 2.0 wasn't quite ready for release, and that's a mature and responsible approach also.

Go Brisbane 🦁 Regards...Tech
It's definitely a possibility that a clarification email exchange between the ASX and BRN has occurred. It's fairly common.
 
  • Like
Reactions: 3 users
Probably not what we want to be honest. That would just give shorters further ammunition. Though arguably there are a couple of members that could do with the flick for adding no value.

I have met Antonio (and a couple of other board members) personally, and I'm comfortable with him leading our board. Well spoken, knowledgeable, and very experienced with this type of business at this stage of its life.

Let’s just hope that the landscape is different by the next AGM and shareholders are happy with what the company has delivered.

They're publicly stating that AKD2000 is a game changer for us, so in my eyes we should see some contracts before then. The EAP participants have apparently had this chip for some months, and AKD2000 was itself a baby born of customer direction. With this in mind, I see no reason why the company can't get these contracts across the line. I would also expect our two existing contracts to bear fruit soon, and I'm particularly interested in the Valeo LiDAR system piece. I believe we could be involved, and with $billions in preorders we should get a slice.

The next 12 months are make or break in my eyes; let's see what they can produce, hopefully without any unnecessary restructures.
I'm 💯 in agreement with you Rob. Surely there'll be at least 1-2 new IP deals before the year is out, as Chapman alluded to, and that should ward off the wolves at least until the next AGM.
 
  • Like
Reactions: 6 users

Mt09

Regular
  • Like
Reactions: 8 users

Deadpool

Did someone say KFC
  • Like
  • Love
  • Fire
Reactions: 23 users
They make an NPU (Neural Processing Unit), but aren't touting neuromorphic? I think there's a difference, @Diogenese?


There's a lot of talk about use for ChatGPT models, so, like in your article Esq, they're definitely trying to "ride" the AI wave.
They know where the attention is and they're going for it.

I don't think what they've got is a technical threat to us, but they are competition.
 
  • Like
  • Thinking
  • Fire
Reactions: 8 users

A neural processor, a neural processing unit (NPU), or simply an AI accelerator is a specialized circuit that implements all the control and arithmetic logic necessary to execute machine learning algorithms, typically by operating on predictive models such as artificial neural networks (ANNs) or random forests (RFs).

Nothing special, just an accelerator.
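
Purely to illustrate the kind of workload such an accelerator speeds up (a hypothetical sketch of mine, not any vendor's API): a dense ANN layer boils down to a matrix multiply plus a non-linearity, which is exactly the arithmetic an NPU is built to push through quickly.

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully connected ANN layer: matrix multiply, bias add, ReLU.
    This matmul is the bulk of what an NPU / AI accelerator speeds up."""
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 64))        # one input vector
W = rng.normal(size=(64, 32))       # layer weights
b = rng.normal(size=(32,))          # layer bias

y = dense_layer(x, W, b)
print(y.shape)                      # (1, 32)
```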
 
  • Like
  • Fire
Reactions: 10 users
Hi all,

If you're on this forum you probably already understand the content of this article, but it explains the differences between edge, far edge, cloud, etc. quite well. BRN is not mentioned, but it describes the landscape and where the technology is going.

Network-On-Chips Enabling Artificial Intelligence/Machine Learning Everywhere


What goes on between the sensor and the data center.
SEPTEMBER 28TH, 2023 - BY: FRANK SCHIRRMEISTER

Recently, I attended the AI HW Summit in Santa Clara and Autosens in Brussels. Artificial intelligence and machine learning (AI/ML) were critical themes for both events, albeit from different angles. While AI/ML as a buzzword is very popular these days, in all its good and bad ways, in discussions with customers and prospects it became clear that we need to be precise in defining what type of AI/ML we are talking about when discussing the requirements of networks-on-chips (NoCs).

Where is AI/ML happening?

To discuss where the actual processing is happening, I found it helpful to use a chart that shows what is going on between sensors that create the data, the devices we all love and use, the networks transmitting the data, and the data centers where a lot of the “heavy” computing takes place.

From sensors to data centers – AI/ML happens everywhere.
Sensors are the starting point of the AI/ML pipeline, and they collect raw data from the environment, which can be anything from temperature readings to images. At Autosens, in the context of automotive, this was all about RGB and thermal cameras, radar, and lidar. On-chip AI processing within sensors is a burgeoning concept where basic data preprocessing happens. For instance, IoT sensors utilize lightweight ML models to filter or process data, reducing the load and the amount of raw data to be transmitted. This local processing helps mitigate latency and preserve bandwidth. As discussed in some panels at Autosens, the automotive design chain needs to make some tough decisions about where computing happens and how to distribute it between zones and central computing as EE architectures evolve.
Edge devices are typically mobile phones, tablets, or other portable gadgets closer to the data source. In my view, cars are yet another device, albeit pretty complex, with its own “sensor to data center on wheels” computing distribution. The execution of AI/ML models on edge devices is crucial for applications that require real-time processing and low latency, like augmented reality (AR) and autonomous vehicles that cannot rely on “always on” connections. These devices deploy models optimized for on-device execution, allowing for quicker responses and enhanced privacy, as data doesn’t always have to reach a central server.
Edge computing is an area where AI/ML may happen without the end user realizing it. The far edge is the infrastructure most distant from the cloud data center and closest to the users. It is suitable for applications requiring more computing resources and power than edge devices can offer but also needing lower latency than cloud solutions. Examples might include advanced analytics or inference models that are too heavy for edge devices but are latency-sensitive; the industry seems to be adopting the term "Edge AI" for the computing going on here. Notable examples include facial recognition and real-time traffic updates on semi-autonomous vehicles, connected devices, and smartphones.
Data centers and the cloud are the hubs of computing resources, providing unparalleled processing power and storage. They are ideal for training complex, resource-intensive AI/ML models and managing vast datasets. High-performance computing clusters in data centers can handle intricate tasks like training deep neural networks or running extensive simulations, which are not feasible on edge devices due to resource constraints. Generative AI originally resided here, often requiring unique acceleration, but we already see it moving to the device edge as “On-Device Generative AI,” as shown by Qualcomm.
When considering a comprehensive AI/ML ecosystem, the layers of AI/ML are intricately connected, creating a seamless workflow. For example, sensors might collect data and perform initial processing before sending it to edge devices for real-time inference. More detailed analysis takes place at far- or near-edge computing resources before the data reaches data centers for deep insights and model (re-)training.
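
To make the layering a little more concrete, here is a small illustrative Python sketch (my own toy example, not from the article) of sensor-side filtering feeding edge-side inference, with only flagged results forwarded upstream:

```python
import numpy as np

def sensor_preprocess(raw_frames, threshold=0.9):
    """Sensor layer: lightweight filtering so only 'interesting' frames
    leave the sensor, reducing the raw data sent to the edge device."""
    return [f for f in raw_frames if np.abs(f).mean() > threshold]

def edge_inference(frame, weights):
    """Edge-device layer: a tiny on-device model (here just a linear
    score) makes a real-time decision without a round trip to the cloud."""
    score = float(frame @ weights)
    return {"score": score, "alert": score > 1.0}

def forward_to_datacenter(results):
    """Cloud layer: only alerts are uploaded, where heavier analytics
    and model (re-)training would happen."""
    return [r for r in results if r["alert"]]

rng = np.random.default_rng(0)
raw = [rng.normal(0, 1, size=16) for _ in range(100)]   # raw sensor frames
weights = rng.normal(0, 0.2, size=16)                    # toy edge model

kept = sensor_preprocess(raw)                            # filtered at the sensor
decisions = [edge_inference(f, weights) for f in kept]   # low-latency edge inference
uploaded = forward_to_datacenter(decisions)              # only a trickle goes upstream

print(len(raw), "raw ->", len(kept), "kept ->", len(uploaded), "uploaded")
```

The point is just the division of labour the article describes: cheap filtering at the sensor, low-latency inference at the edge device, and only a small fraction of the data travelling on to the data center.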

How are NoCs an enabler?

As outlined above, AI/ML is happening everywhere, literally. However, as described, the resource requirements vary widely. NoCs come into play in three main areas here: (1) connecting the often very regular AI/ML subsystems, (2) de-risking the integration of all the various blocks on chips, and (3) connecting various silicon dies in a chiplet (die-to-die, D2D) scenario or various chips in a chip-to-chip (C2C) environment.

Networks-on-Chips (NoCs) as a critical enabler of AI/ML.
The first aspect – connecting AI/ML subsystems – is all about fast data movement, and for that, broad bit width, the ability to broadcast, and virtual channel functionality are critical. Some application domains are unique, as outlined in “Automotive AI Hardware: A New Breed.” In addition, the general bit-width requirements vary significantly between sensors, devices, and edges.
To enable the second aspect, connecting all the bits and pieces on a chip, it is all about the support of the various protocols – I discussed them last month in "Design Complexity In The Golden Age Of Semiconductors." Tenstorrent's Jim Keller described the customer concern regarding de-risking best in a recent joint press release regarding Arteris' FlexNoC and Ncore technology: "The Arteris team and IP solved our on-chip network problems so we can focus on building our next-generation AI and RISC-V CPU products."
Finally, the industry controversially discusses the connections between chiplets across all application domains. The physical interfaces with competing PHYs (XSR, BOW, OHBI, AIB, and UCIe) and their digital controllers are at the forefront of discussion. In the background, NoCs and “SuperNoCs” across multiple chiplets/chips must support the appropriate protocols. We are currently discussing Arm’s CHI C2C and other proposals. It will require the proverbial village of various companies to make the desired open chiplet ecosystem a reality.

Where are we heading from here?

AI/ML’s large universe of resource requirements makes it an ideal fuel for what we experience as a semiconductor renaissance today. NoCs will be a crucial enabler within the AI/ML clusters, connecting building blocks on-chip and connecting chiplets carrying AI/ML subsystems. Brave new future, here we come!



Happy Daylight Savings!

😀
 
  • Like
  • Fire
  • Love
Reactions: 27 users
Top Bottom