Edit / add to my previous post.
Merry Christmas and Happy times everyone, irrespective of beliefs or religions.
The religion of Akida is what we are witnessing.
Someone said the only sure thing in life is death. Yeah, I guess so.
Though IMO, Akida will provide xmas presents for many many years to come.
This year's, I believe, is the gift of an SP that, in the not-too-distant future, will be considered ridiculously cheap.
I know there are always curve balls and "didn't see that coming" moments in life, but I just can't comprehend not getting at least @Fact Finder 's 1% of the $Gazillion proposed market.
Talk is 2 to 4 years product-to-market time. I also think this timeframe will accelerate once the snowball effect starts, and there are many snowballs at different stages.
Ogre Master @Diogenese and his techo accomplices will be inundated with products, patents and associations to compare. The new year sounds like it's going to be very, very busy, given the hype around CES 2023.
To @zeeb0t and all the regular contributors, sincerely, thank you for your efforts. I enjoy the continued learning, balanced debate, mateship and humour. I don't enjoy the rudeness that occurs from time to time, but I guess tensions get tested. Just keep it grown up, I reckon.
Stay well, and stay safe everyone. Enjoy the festivities, family, and what promises to be a great year ahead... boom... could be any day.
My God in heaven, Diogenese.... a three-way conversation between Lewis Carroll, James Joyce, and a crate of scotch?
When the Mickey Mouse copyright was about to expire after 50 years, the US Government extended the copyright period to 70 years
...
and forced the rest of the world to follow suit.
And one that needs to be made for all those new to this industry.
Over at the other place, after a few months, the absence of announcements about Socionext bringing product to market was a stated concern of some investors.
Some were genuine but the majority were manipulators playing with the emotions of inexperienced retail.
Ford and Valeo came onboard mid 2020 and would not have had the AKD1000 engineering sample chip until around October, 2020.
The very same concerns were raised and promoted by WANCAs as the manipulation gained pace.
Prophesee came onboard late last year at the earliest. Already some genuine investors are ignoring historical timelines and looking for AKIDA to magically appear.
ARM & Intel in this context came onboard only yesterday.
There are no shortcuts in this semiconductor game, but 2023 is shaping up very nicely as the earliest engagements from 2019, which likely included Mercedes Benz, start to mature.
Congratulations to all the visionary long term investors who were not influenced and stayed the course.
And the curse of a thousand camel farts to all WANCAs.
My opinion only DYOR
FF
AKIDA BALLISTA
If my wife reads this she will confirm that I am a camel.
He-he-he! The curse of a thousand camel farts! Just one fart seems like it would be intolerable, let alone a thousand pop-offs. Lends a whole new meaning to the term "silent but deadly".
View attachment 25093
View attachment 25095
And ONE tiny little percent of $US8.3 billion is $US83 million.
Just reflecting on how big the Prophesee-BrainChip partnership could be…
“The global vision sensor market size was USD 8.03 Billion in 2021 and is expected to register a revenue CAGR of 17.8% over the forecast period, according to the latest analysis by Emergen Research”
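For anyone who wants to sanity-check those numbers, here's a quick back-of-envelope sketch in Python. The 2021 base and the CAGR are from the Emergen quote above (the post above rounds the base up to $8.3 B); the four-year horizon and the 1% share are purely illustrative assumptions, not anything the company has guided.

```python
# Back-of-envelope sketch of the "1% of the market" figure quoted above.
# Base size and CAGR are from the Emergen Research quote; the horizon
# and share are illustrative assumptions only.

def projected_market(base: float, cagr: float, years: int) -> float:
    """Compound the base market size forward by `years` at `cagr`."""
    return base * (1 + cagr) ** years

base_2021 = 8.03e9          # USD, vision sensor market in 2021
cagr = 0.178                # 17.8% per the quoted forecast

market_2025 = projected_market(base_2021, cagr, 4)
one_percent_2021 = 0.01 * base_2021

print(f"2021 market:        ${base_2021 / 1e9:.2f} B")
print(f"1% of 2021 market:  ${one_percent_2021 / 1e6:.1f} M")
print(f"Projected 2025:     ${market_2025 / 1e9:.2f} B")
```

On those assumptions the market roughly doubles within four years, so even a static 1% share grows with it.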
Thanks for that @Diogenese
Hi Fmf,
This patent has nothing to do with Akida. As you said, it is about the physical 3D layout of an array of near-memory NPU cores. They save power by jiggling the supply voltage to the cores.
WO2019046835A1 ULTRA-LOW POWER NEUROMORPHIC ARTIFICIAL INTELLIGENCE COMPUTING ACCELERATOR
View attachment 25070
[0008] A three-dimensional (3D) ultra-low power neuromorphic accelerator is described. The 3D ultra-low power neuromorphic accelerator includes a power manager as well as multiple tiers. The 3D ultra-low power neuromorphic accelerator also includes multiple cores defined on each tier and coupled to the power manager. Each core includes at least a processing element, a non-volatile memory, and a communications module.
[0062] The homogenous configuration may be implemented in the local power manager 740 using a power management integrated circuit (PMIC). In this configuration, the local power manager may be fabricated using an ultra-low voltage process, such as a fully depleted (FD)-semiconductor-on-insulator (FD-SOI) wafer process, or other ultra-low voltage process. The local power manager 740 may be configured to perform snoops on adjacent cores using, for example, handshaking circuitry to communicate core-to-core to decide the power state of a corresponding core.
[0063] In one aspect of the present disclosure, the local power manager 740 may be configured to provide adaptive voltage scaling to enable sub-threshold voltage (e.g., 0.2 V to 0.25 V) operation. In this configuration, smart power management is provided by including a global power manager (GPM) 710 to coordinate with each local power manager 740 to provide dynamic voltage frequency scaling (DVFS) and power collapse control for each tier 702. In aspects of the present disclosure, the GPM 710 (shown off- chip) can be either on-chip or off-chip. In this example, the GPM 710 delivers power to a set of cores (e.g., the cores on one tier or multiple tiers), whereas the local power manager 740 derives power for each individual core.
85% increase in traffic in November. I wonder what's happening in Vietnam... Hairy coconut factory
View attachment 24450
View attachment 24451
Just what I'm saying: if Akida is a major difference, it's time to pay up.
Looks like Intel have undertaken a massive internal review as they were struggling to remain competitive ... and clearly BrainChip was a key part of that review. Intel needed the BrainChip IP to compete .... gee, I hope Sean is negotiating really hard on royalties. First-mover discounts are over, you have to pay more to hop on the BrainChip IP bus now !!
Thanks for that @Diogenese
What are your thoughts on this one from our mates at NVIDIA?
Anything in there of value?
View attachment 25104
View attachment 25105
View attachment 25106
[0256] In at least one embodiment, neurons 2202 and synapses 2208 may be interconnected such that neuromorphic processor 2200 operates to process or analyze information received by neuromorphic processor 2200. In at least one embodiment, neurons 2202 may transmit an output pulse (or “fire” or “spike”) when inputs received through neuron input 2204 exceed a threshold. In at least one embodiment, neurons 2202 may sum or integrate signals received at neuron inputs 2204. For example, in at least one embodiment, neurons 2202 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a “membrane potential”) exceeds a threshold value, neuron 2202 may generate an output (or “fire”) using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2204 into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2204 rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). In at least one embodiment, neurons 2202 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Furthermore, in at least one embodiment, neurons 2202 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2206 when result of applying a transfer function to neuron input 2204 exceeds a threshold. In at least one embodiment, once neuron 2202 fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. In at least one embodiment, once membrane potential is reset to 0, neuron 2202 may resume normal operation after a suitable period of time (or refractory period).
Don’t suppose Foxconn and Socionext having a long association and multiple product partnerships would have any implications for BrainChip.
Macbooks
Marco Mezger on LinkedIn: #vietnam #iphone #china #macbook #tech #washington #beijing #taiwan…
Apple to start making MacBooks in #Vietnam by mid-2023 💡 #iPhone maker aims to have 'out of #China' production alternatives for key products Apple plans to…www.linkedin.com
I apply the nappy theory to whether it is AKIDA or not.
Nothing to suggest Akida - the highlighted bits suggest not Akida.
[0255] FIG. 22 is a block diagram of a neuromorphic processor 2200, according to at least one embodiment. In at least one embodiment, neuromorphic processor 2200 may receive one or more inputs from sources external to neuromorphic processor 2200. In at least one embodiment, these inputs may be transmitted to one or more neurons 2202 within neuromorphic processor 2200. In at least one embodiment, neurons 2202 and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor 2200 may include, without limitation, thousands or millions of instances of neurons 2202, but any suitable number of neurons 2202 may be used. In at least one embodiment, each instance of neuron 2202 may include a neuron input 2204 and a neuron output 2206. In at least one embodiment, neurons 2202 may generate outputs that may be transmitted to inputs of other instances of neurons 2202. For example, in at least one embodiment, neuron inputs 2204 and neuron outputs 2206 may be interconnected via synapses 2208.
[0256] In at least one embodiment, neurons 2202 and synapses 2208 may be interconnected such that neuromorphic processor 2200 operates to process or analyze information received by neuromorphic processor 2200. In at least one embodiment, neurons 2202 may transmit an output pulse (or “fire” or “spike”) when inputs received through neuron input 2204 exceed a threshold. In at least one embodiment, neurons 2202 may sum or integrate signals received at neuron inputs 2204. For example, in at least one embodiment, neurons 2202 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a “membrane potential”) exceeds a threshold value, neuron 2202 may generate an output (or “fire”) using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2204 into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2204 rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). In at least one embodiment, neurons 2202 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Furthermore, in at least one embodiment, neurons 2202 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2206 when result of applying a transfer function to neuron input 2204 exceeds a threshold. In at least one embodiment, once neuron 2202 fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. 
In at least one embodiment, once membrane potential is reset to 0, neuron 2202 may resume normal operation after a suitable period of time (or refractory period).
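For readers who'd rather see the [0256] behaviour as code than as patent prose, here's a minimal leaky integrate-and-fire neuron in Python: sum the inputs into a membrane potential, apply a decay, fire and reset to 0 on crossing the threshold, then sit out a refractory period. The class name and the threshold/leak/refractory values are my own illustrative choices, not from the patent.

```python
# Minimal leaky integrate-and-fire (LIF) neuron along the lines of the
# patent text above. Parameter values are illustrative only.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.1, refractory_steps=2):
        self.threshold = threshold
        self.leak = leak                  # decay factor applied each step
        self.refractory_steps = refractory_steps
        self.potential = 0.0              # the "membrane potential"
        self.cooldown = 0                 # steps left in refractory period

    def step(self, input_sum: float) -> bool:
        """Integrate one timestep of summed synaptic input; True on a spike."""
        if self.cooldown > 0:             # ignore input while refractory
            self.cooldown -= 1
            return False
        self.potential = self.potential * (1 - self.leak) + input_sum
        if self.potential >= self.threshold:
            self.potential = 0.0          # reset on fire, as in [0256]
            self.cooldown = self.refractory_steps
            return True
        return False

n = LIFNeuron()
spikes = [n.step(x) for x in [0.4, 0.4, 0.4, 0.4, 0.0, 0.0]]
print(spikes)  # [False, False, True, False, False, False]
```

Note how the third 0.4 input pushes the decayed potential over 1.0 and fires, then the refractory period swallows the fourth input, exactly the "rapidly enough to exceed a threshold" behaviour the paragraph describes.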
[0342] Tensor cores are configured to perform matrix operations in accordance with at least one embodiment. In at least one embodiment, one or more tensor cores are included in processing cores 3010. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4x4 matrix and performs a matrix multiply and accumulate operation D = A X B + C, where A, B, C, and D are 4x4 matrices.
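The per-tensor-core operation in [0342] is just a 4x4 multiply-accumulate, D = A x B + C. A plain-Python sketch of that single step (no claim to match NVIDIA's actual data paths or mixed-precision behaviour):

```python
# Plain-Python sketch of the operation described in [0342]:
# D = A x B + C on 4x4 matrices, one multiply-accumulate step.

def mma_4x4(A, B, C):
    """Return D = A @ B + C for 4x4 matrices given as lists of rows."""
    return [
        [sum(A[i][k] * B[k][j] for k in range(4)) + C[i][j] for j in range(4)]
        for i in range(4)
    ]

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
C = [[1] * 4 for _ in range(4)]                                 # all ones
D = mma_4x4(I, I, C)    # I @ I + C = I + C
print(D)
```

Real tensor cores pipeline thousands of these small MMAs per cycle; the arithmetic per core is this simple.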
[0359] In at least one embodiment, training pipeline 3204 (FIG. 32) may include a scenario where facility 3102 is training their own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 3108 generated by imaging device(s), sequencing devices, and/or other device types may be received. In at least one embodiment, once imaging data 3108 is received, AI-assisted annotation 3110 may be used to aid in generating annotations corresponding to imaging data 3108 to be used as ground truth data for a machine learning model. In at least one embodiment, AI-assisted annotation 3110 may include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that may be trained to generate annotations corresponding to certain types of imaging data 3108 (e.g., from certain devices) and/or certain types of anomalies in imaging data 3108.
This is the Akida NPU:
View attachment 25111
There is no sigmoid function.
The synapse elements 105, 106, 113 are closely tied to the neuron circuit elements, including via the learning feedback loop.
Just a refresher on Akida - these changes were implemented after customer feedback (remember when the whole team was burning the candle at both ends?):
WO2020092691A1 AN IMPROVED SPIKING NEURAL NETWORK
[0038] But conventional SNNs can suffer from several technological problems. First, conventional SNNs are unable to switch between convolution and fully connected operation. For example, a conventional SNN may be configured at design time to use a fully-connected feedforward architecture to learn features and classify data. Embodiments herein (e.g., the neuromorphic integrated circuit) solve this technological problem by combining the features of a CNN and a SNN into a spiking convolutional neural network (SCNN) that can be configured to switch between a convolution operation or a fully- connected neural network function. The SCNN may also reduce the number of synapse weights for each neuron. This can also allow the SCNN to be deeper (e.g., have more layers) than a conventional SNN with fewer synapse weights for each neuron.
Embodiments herein further improve the convolution operation by using a winner-take-all (WTA) approach for each neuron acting as a filter at particular position of the input space. This can improve the selectivity and invariance of the network. In other words, this can improve the accuracy of an inference operation.
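A toy version of that winner-take-all step, assuming for illustration that each filter holds one membrane potential per input position; at each position only the strongest filter spikes. Names and shapes are mine, not from the patent.

```python
# Hedged sketch of a winner-take-all (WTA) step: at each input position,
# only the filter with the highest potential emits a spike, which is the
# selectivity-sharpening behaviour described above.

def winner_take_all(potentials_per_filter):
    """potentials_per_filter[f][p] -> spikes[f][p], 1 only for the winner at p."""
    n_filters = len(potentials_per_filter)
    n_pos = len(potentials_per_filter[0])
    spikes = [[0] * n_pos for _ in range(n_filters)]
    for p in range(n_pos):
        winner = max(range(n_filters), key=lambda f: potentials_per_filter[f][p])
        spikes[winner][p] = 1
    return spikes

pots = [
    [0.2, 0.9, 0.1],   # filter 0's potentials at 3 positions
    [0.8, 0.3, 0.4],   # filter 1's potentials at the same positions
]
print(winner_take_all(pots))  # [[0, 1, 0], [1, 0, 1]]
```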
[0039] Second, conventional SNNs are not reconfigurable. Embodiments herein solve this technological problem by allowing the connections between neurons and synapses of a SNN to be reprogrammed based on a user defined configuration. For example, the connections between layers and neural processors can be reprogrammed using a user defined configuration file.
[0040] Third, conventional SNNs do not provide buffering between different layers of the SNN. But buffering can allow for a time delay for passing output spikes to a next layer. Embodiments herein solve this technological problem by adding input spike buffers and output spike buffers between layers of a SCNN.
[0041] Fourth, conventional SNNs do not support synapse weight sharing. Embodiments herein solve this technological problem by allowing kernels of a SCNN to share synapse weights when performing convolution. This can reduce memory requirements of the SCNN.
[0042] Fifth, conventional SNNs often use 1-bit synapse weights. But the use of 1-bit synapse weights does not provide a way to inhibit connections. Embodiments herein solve this technological problem by using ternary synapse weights. For example, embodiments herein can use two-bit synapse weights. These ternary synapse weights can have positive, zero, or negative values. The use of negative weights can provide a way to inhibit connections which can improve selectivity. In other words, this can improve the accuracy of an inference operation.
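The ternary-weight idea in [0042] is easy to demonstrate: with only {0, 1} weights a spike can never subtract from a neuron's potential, while a -1 weight can actively inhibit it. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
# Sketch of ternary (2-bit) synapse weights: each weight is -1 (inhibitory),
# 0 (no connection), or +1 (excitatory), so negative weights can suppress
# a neuron's membrane potential, which 1-bit weights cannot.

def integrate(spikes, weights):
    """Accumulate incoming spikes (0/1) through ternary weights."""
    assert all(w in (-1, 0, 1) for w in weights)
    return sum(s * w for s, w in zip(spikes, weights))

spikes  = [1, 1, 0, 1]
excite  = [1, 1, 0, 0]    # 1-bit style: can only ever add
ternary = [1, -1, 0, 1]   # -1 inhibits, improving selectivity

print(integrate(spikes, excite))   # 2
print(integrate(spikes, ternary))  # 1 + (-1) + 0 + 1 = 1
```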
[0043] Sixth, conventional SNNs do not perform pooling. This results in increased memory requirements for conventional SNNs. Embodiments herein solve this technological problem by performing pooling on previous layer outputs. For example, embodiments herein can perform pooling on a potential array outputted by a previous layer. This pooling operation reduces the dimensionality of the potential array while retaining the most important information.
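The pooling step in [0043] can be sketched as a 2x2 max-pool over a small potential array: the spatial dimensions halve while the strongest response in each window survives. Window size and values below are my own example.

```python
# Toy 2x2 max-pool over a "potential array": halves each spatial dimension
# while keeping the strongest response in each window, reducing memory as
# described in [0043].

def max_pool_2x2(arr):
    """arr is a list of rows with even dimensions; returns the pooled array."""
    return [
        [max(arr[i][j], arr[i][j + 1], arr[i + 1][j], arr[i + 1][j + 1])
         for j in range(0, len(arr[0]), 2)]
        for i in range(0, len(arr), 2)
    ]

potentials = [
    [0.1, 0.9, 0.2, 0.0],
    [0.3, 0.4, 0.8, 0.1],
    [0.0, 0.2, 0.5, 0.6],
    [0.7, 0.1, 0.0, 0.3],
]
print(max_pool_2x2(potentials))  # [[0.9, 0.8], [0.7, 0.6]]
```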
[0044] Seventh, conventional SNN often store spikes in a bit array. Embodiments herein provide an improved way to represent and process spikes. For example, embodiments herein can use a connection list instead of bit array. This connection list is optimized such that each input layer neuron has a set of offset indexes that it must update. This enables embodiments herein to only have to consider a single connection list to update all the membrane potential values of connected neurons in the current layer.
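The connection-list representation in [0044] can be sketched as a simple mapping from each input neuron to the (target, weight) pairs it must update, so delivering a spike touches only those entries instead of scanning a whole bit array. The structure and numbers here are illustrative only.

```python
# Sketch of the connection-list idea: each input neuron keeps the list of
# (target index, weight) pairs it updates, so one spike touches only its
# own connections rather than a full bit-array scan.

connections = {
    0: [(2, 1), (3, -1)],   # input neuron 0 -> targets with ternary weights
    1: [(2, 1)],            # input neuron 1 -> one excitatory connection
}

def deliver_spike(src, potentials):
    """Apply one spike from neuron `src` to the membrane potentials."""
    for target, weight in connections.get(src, []):
        potentials[target] += weight
    return potentials

pots = deliver_spike(0, [0, 0, 0, 0])
print(pots)  # [0, 0, 1, -1]
```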
[0045] Eighth, conventional SNNs often process spike by spike. In contrast, embodiments herein can process packets of spikes. This can cause the potential array to be updated as soon as a spike is processed. This can allow for greater hardware parallelization.
[0046] Finally, conventional SNNs do not provide a way to import learning (e.g., synapse weights) from an external source. For example, SNNs do not provide a way to import learning performed offline using backpropagation. Embodiments herein solve this technological problem by allowing a user to import learning performed offline into the neuromorphic integrated circuit.
[0047] In some embodiments, a SCNN can include one or more neural processors. Each neural processor can be interconnected through a reprogrammable fabric. Each neural processor can be reconfigurable. Each neural processor can be configured to perform either convolution or classification in fully connected layers.
[0048] Each neural processor can include a plurality of neurons and a plurality of synapses. The neurons can be simplified Integrate and Fire (I&F) neurons. The neurons and synapses can be interconnected through the reprogrammable fabric. Each neuron of the neural processor can be implemented in hardware or software. A neuron implemented in hardware can be referred to as a neuron circuit.
[0049] In some embodiments, each neuron can use an increment or decrement function to set the membrane potential value of the neuron. This can be more efficient than using an addition function of a conventional I&F neuron.
[0050] In some embodiments, a SCNN can use different learning functions. For example, a SCNN can use a STDP learning function. In some other embodiments, the SCNN can implement an improved version of the STDP learning function using synapse weight swapping. This improved STDP learning function can offer built-in homeostasis (e.g., stable learned weights) and improved efficiency.
[0051] In some embodiments, an input to a SCNN is derived from an audio stream. An Analog to Digital (A/D) converter can convert the audio stream to digital data. The A/D converter can output the digital data in the form of Pulse Code Modulation (PCM) data. A data to spike converter can convert the digital data to a series of spatially and temporally distributed spikes representing the spectrum of the audio stream.
[0052] In some embodiments, an input to a SCNN is derived from a video stream. The A/D converter can convert the video stream to digital data. For example, the A/D converter can convert the video stream to pixel information in which the intensity of each pixel is expressed as a digital value. A digital camera can provide such pixel information. For example, the digital camera can provide pixel information in the form of three 8-bit values for red, green and blue pixels. The pixel information can be captured and stored in memory. The data to spike converter can convert the pixel information to spatially and temporally distributed spikes by means of sensory neurons that simulate the actions of the human visual tract.
[0053] In some embodiments, an input to a SCNN is derived from data in the shape of binary values. The data to spike converter can convert the data in the shape of binary values to spikes by means of Gaussian receptive fields. As would be appreciated by a person of ordinary skill in the art, the data to spike converter can convert the data in the shape of binary values to spikes by other means.
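A hedged illustration of the Gaussian-receptive-field encoding in [0053]: each sensory neuron has a preferred value, and an input most strongly excites the neurons whose centres lie nearest to it. The centres and sigma below are arbitrary choices for the demo, not values from the patent.

```python
# Illustrative Gaussian-receptive-field encoder: a scalar input excites
# several "sensory" neurons, each tuned to a different centre, with a
# response that falls off as a Gaussian of the distance to the centre.
import math

def gaussian_rf_encode(value, centres, sigma=0.15):
    """Return each neuron's response in [0, 1] for a value in [0, 1]."""
    return [math.exp(-((value - c) ** 2) / (2 * sigma ** 2)) for c in centres]

centres = [0.0, 0.25, 0.5, 0.75, 1.0]
responses = gaussian_rf_encode(0.5, centres)

# The neuron centred at 0.5 responds maximally; its neighbours respond less,
# so the population as a whole encodes the value.
best = max(range(len(responses)), key=responses.__getitem__)
print(best)  # 2
```

In a full encoder these graded responses would then be turned into spike times or counts; this sketch stops at the tuning-curve stage.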
[0054] In some embodiments, a digital vision sensor (e.g., a Dynamic Vision Sensor (DVS) supplied by iniVation AG or another manufacturer) is connected to a spike input interface of a SCNN. The digital vision sensor can transmit pixel event information in the form of spikes. The digital vision sensor can encode the spikes over an Address-event representation (AER) bus. Pixel events can occur when a pixel is increased or decreased in intensity.
Special guest appearance:
View attachment 25112 View attachment 25108