jtardif999
This is Apple's toaster patent:
US2022222510A1, MULTI-OPERATIONAL MODES OF NEURAL ENGINE CIRCUIT, 2021-01-13
[0052] Referring to FIG. 3, an example neural processor circuit 218 may include, among other components, neural task manager 310, a plurality of neural engines 314A through 314N (hereinafter collectively referred to as “neural engines 314” and individually also referred to as “neural engine 314”), kernel direct memory access (DMA) 324, data processor circuit 318, data processor DMA 320, and planar engine 340.
[0053] Each of neural engines 314 performs computing operations for machine learning in parallel. Depending on the load of operation, the entire set of neural engines 314 may be operating or only a subset of the neural engines 314 may be operating while the remaining neural engines 314 are placed in a power-saving mode to conserve power. Each of neural engines 314 includes components for storing one or more kernels, for performing multiply-accumulate operations, for performing parallel sorting operations, and for post-processing to generate an output data 328, as described below in detail with reference to FIGS. 4A and 4B. Neural engines 314 may specialize in performing computation-heavy operations such as convolution operations and tensor product operations. Convolution operations may include different kinds of convolutions, such as cross-channel convolutions (a convolution that accumulates values from different channels), channel-wise convolutions, and transposed convolutions.
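The last sentence of [0053] distinguishes cross-channel convolutions (which accumulate across input channels) from channel-wise ones. A toy pure-Python sketch of that difference, purely my own illustration (nothing to do with Apple's actual circuit; all function names are made up):

```python
# Toy 1-D illustration of the two convolution styles the patent mentions.

def conv1d_valid(x, k):
    """Plain 'valid' 1-D convolution (really cross-correlation) of x with kernel k."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n)) for i in range(len(x) - n + 1)]

def cross_channel_conv(channels, kernels):
    """One output channel: convolve each input channel with its own kernel,
    then accumulate the results across channels (a cross-channel convolution)."""
    per_channel = [conv1d_valid(x, k) for x, k in zip(channels, kernels)]
    return [sum(vals) for vals in zip(*per_channel)]

def channel_wise_conv(channels, kernels):
    """Each input channel produces its own output channel; nothing is
    accumulated across channels (a channel-wise / depthwise convolution)."""
    return [conv1d_valid(x, k) for x, k in zip(channels, kernels)]

channels = [[1, 2, 3, 4], [10, 20, 30, 40]]
kernels = [[1, 1], [1, -1]]
print(cross_channel_conv(channels, kernels))  # one output channel: [-7, -5, -3]
print(channel_wise_conv(channels, kernels))   # two output channels: [[3, 5, 7], [-10, -10, -10]]
```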
[0063] FIG. 4A is a block diagram of neural engine 314, according to one embodiment. Specifically, FIG. 4A illustrates neural engine 314 performing operations including operations to facilitate machine learning such as convolution, tensor product, and other operations that may involve heavy computation in the first mode. For this purpose, neural engine 314 receives input data 322, performs multiply-accumulate operations (e.g., convolution operations) on input data 322 based on stored kernel data, performs further post-processing operations on the result of the multiply-accumulate operations, and generates output data 328. Input data 322 and/or output data 328 of neural engine 314 may be of a single channel or span across multiple channels.
It has 16 neural engines, each with an array of MAC (multiply-accumulate) units.
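The per-engine pipeline [0063] describes (input in, MAC against stored kernel data, post-process, output out) can be sketched in a few lines. This is just my reading of the paragraph, not Apple's code; the ReLU post-processing step is a stand-in assumption:

```python
# Rough per-engine pipeline sketch: input -> multiply-accumulate -> post-process -> output.

def mac(window, kernel):
    """One multiply-accumulate: dot product of an input window with a kernel."""
    return sum(a * b for a, b in zip(window, kernel))

def neural_engine(input_data, kernel, post=lambda v: max(v, 0)):
    """Slide the kernel over the input, MAC each window, then apply a
    post-processing step (ReLU here, purely as an assumed example)."""
    n = len(kernel)
    windows = [input_data[i:i + n] for i in range(len(input_data) - n + 1)]
    return [post(mac(w, kernel)) for w in windows]

print(neural_engine([1, -2, 3, -4, 5], [2, 1]))  # [0, 0, 2, 0]
```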
But we know Akida is agnostic, both in terms of the data it can process and the preprocessor it works with to specify the synaptic weights. Akida 1500 has been created to plug into any preprocessor tech, so the info in that paper is a little outdated and somewhat misleading, IMO. I skimmed it and these guys look like they know their sheet..
And they are bagging AKIDA, in a sense, lumping us with TrueNorth and Loihi 2..
What they're doing is "open source". WTF is their game?..
I'd like to know what Diogenese and Peter think on this one, too..