Deleted member 118
Guest
Ok ok calm down everyone.. we all knew this was going to happen..
Hahaha just kidding! PARTAAAY TIME!
As my mother used to say, "You'll get stuck like that."

Well, I for one am getting a bit worried, because my friends and I like to pull faces at each other in the car. It's just a "thing" we've developed over time, which has turned into a bit of a competition, I'm afraid. So my question is: what happens if NVISO's technology detects a human emotion it doesn't know how to classify? Do you think it brings the car safely to a stop and calls for emergency services to attend? Because that would be a bit embarrassing...
Thank you for the response mate, appreciated greatly!

Hi Sera,
I flagged graiMatter a couple of years ago as one to watch.
This article discusses the different tech approaches of graiMatter and Akida:
Spiking Neural Networks: Research Projects Or Commercial Products?
Opinions differ widely, but in this space that isn’t unusual.
MAY 18TH, 2020 - BY: BRYON MOYER
https://semiengineering.com/spiking-neural-networks-research-projects-or-commercial-products/
Temporal coding is said by some to be closer to what happens in the brain, although there are differing opinions on that, with some saying that that’s the case only for a small set of examples: “It’s actually not that common in the brain,” said Jonathan Tapson, GrAI Matter’s chief scientific officer. An example where it is used is in owls’ ears. “They use their hearing to hunt at night, so their directional sensitivity has to be very high.” Instead of representing a value by a frequency of spikes [ # rate coding # ], the value is encoded as the delay between spikes. Spikes then represent events, and the goal is to identify meaningful patterns in a stream of spikes.
…
Temporally coded SNNs can be most effective when driven by sensors that generate temporal-coded data – that is, event-based sensors. Dynamic vision sensors (DVS) are examples. They don’t generate full frames of data on a frames-per-second basis. Instead, each pixel reports when its illumination changes by more than some threshold amount. This generates a “change” event, which then propagates through the network. Valentian said these also can be particularly useful in AR/VR applications for “visual odometry,” where inertial measurement units are too slow.
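As a rough sketch of the DVS behaviour described above (my own toy model, class and parameter names are made up): each pixel reports only when its illumination changes past a threshold, so a static scene generates no data at all.

```python
# Toy model of a single dynamic vision sensor pixel (illustrative only).
# The pixel emits an event only when illumination changes by more than a
# threshold relative to the last reported level -- no full frames.

class DVSPixel:
    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.last_level = None

    def update(self, level):
        """Return a +1/-1 change event if the shift exceeds the threshold, else None."""
        if self.last_level is None:
            self.last_level = level   # first sample just sets the baseline
            return None
        delta = level - self.last_level
        if abs(delta) >= self.threshold:
            self.last_level = level
            return 1 if delta > 0 else -1
        return None                   # quiet pixel generates no data

pixel = DVSPixel(threshold=0.2)
events = [pixel.update(v) for v in [0.5, 0.55, 0.8, 0.8, 0.3]]
print(events)  # [None, None, 1, None, -1]
```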
…
Meanwhile, BrainChip started with rate coding, but decided that wasn’t commercially viable. Instead, it uses rank coding (or rank-order coding), which uses the order of arrival of spikes (as opposed to literal timing) to a neuron as a code. This is a pattern-oriented approach, with arrivals in the prescribed order (along with synaptic weighting) stimulating the greatest response and arrivals in other orders providing less stimulation.
…
All of these coding approaches aside, GrAI Matter uses a more direct approach. “We encode values directly as numbers – 8- or 16-bit integers in GrAI One or Bfloat16 in our upcoming chip. This is a key departure from other neuromorphic architectures, which have to use rate or population or time or ensemble codes. We can use those, too, but they are not efficient,” said Tapson.
…
The [ # BrainChip #] neural fabric is fully configurable for different applications. Each node in the array contains four neural processing units (NPUs), and each NPU can be configured for event-based convolution (supporting standard or depthwise convolution) or for other configurations, including fully connected. Events are carried as packets on the network.
While NPU details or images are not available, [ # WO2020092691 published 20202507 # ] BrainChip did further explain that each NPU has digital logic and SRAM, providing something of a processing-in-memory capability, but not using an analog-memory approach. An NPU contains eight neural processing engines that implement the neurons and synapses. Each event is multiplied by a synaptic weight upon entering a neuron.
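A toy picture of the event-driven processing the article describes (purely illustrative, not BrainChip's NPU logic; names are made up): each incoming event is multiplied by its synaptic weight on arrival and accumulated, rather than running a dense multiply-accumulate over a full frame.

```python
# Illustrative event-driven neuron: each event is multiplied by a synaptic
# weight as it enters and accumulated; the neuron spikes past a threshold.
# (Toy sketch only -- not BrainChip's actual NPU implementation.)

def run_neuron(events, weights, threshold):
    """events: list of (source_id, value); weights: {source_id: weight}."""
    potential = 0.0
    for src, val in events:
        potential += val * weights[src]   # weight applied on entry
        if potential >= threshold:
            return True, potential        # neuron spikes
    return False, potential

fired, v = run_neuron([("p0", 1), ("p3", 1), ("p0", 1)],
                      weights={"p0": 0.6, "p3": 0.2}, threshold=1.0)
print(fired)  # True -- 0.6 + 0.2 + 0.6 crosses the threshold
```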
According to this article, GrAI Matter is not using SNNs. From their choice of 8-bit or 16-bit integers/FP, I assume they need a MAC matrix circuit to process weights and activations, as in a CNN. This is not a sparse process, as every bit must be processed. Hence GrAI Matter would use more power and would be slower than Akida.
GrAI Matter's assertion about "other neuromorphic architectures, which have to use rate or population or time or ensemble codes" does not apply to Akida, which uses rank coding, from which Simon Thorpe's N-of-M code is derived. This is based on the discovery that the strongest signals trigger retinal receptors and pixels earlier than weaker signals. Most of the information is carried in the earlier-arriving spikes, and the later-arriving spikes can be discarded. When you think about it, this is quite like how the DVS/event camera works. N-of-M coding uses the order of arrival, and does not need to track the time of arrival. It just counts the first N spikes and closes the gate.
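To make the "count the first N spikes and close the gate" idea concrete, here's a toy sketch (my own illustration, not BrainChip's actual implementation; names and weights are made up): the code is simply the identity and order of the first N arrivals out of M inputs, with later spikes discarded.

```python
# Toy N-of-M rank-order coding (illustrative, not Akida's implementation).
# The strongest inputs spike first, so most of the information is in the
# earliest arrivals; we keep only the first N spikes and ignore the rest.

def n_of_m_code(spike_times, n):
    """spike_times: {input_id: arrival_time}. Return the first n input ids
    in order of arrival -- the rank-order code."""
    ranked = sorted(spike_times, key=spike_times.get)
    return ranked[:n]  # gate closes after n spikes

def rank_order_response(code, preferred_order, weights=(8, 4, 2, 1)):
    """Score how well the observed arrival order matches a neuron's
    preferred order: earlier matching ranks contribute larger weights."""
    return sum(w for w, obs, pref in zip(weights, code, preferred_order)
               if obs == pref)

# Four inputs; a stronger stimulus spikes earlier.
arrivals = {"a": 1.2, "b": 0.3, "c": 2.5, "d": 0.9}
code = n_of_m_code(arrivals, n=3)
print(code)                                             # ['b', 'd', 'a']
print(rank_order_response(code, ["b", "d", "a"]))       # 14: perfect match
print(rank_order_response(code, ["d", "b", "a"]))       # 2: weaker response
```

Note how arrivals in the prescribed order stimulate the greatest response, and other orders provide less stimulation, matching the article's description of the pattern-oriented approach.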
GrAI Matter uses 8-bit or 16-bit precision mathematics, whereas Akida uses inference based on probability. You may recall that some demonstrations of Akida show a bar chart with the probabilities of the subject item being one of a number of different articles, e.g. dog, cat, parrot, elephant. Akida does the comparison and selects the one which is the best fit. Of course, the model libraries are much larger than that.
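That bar-chart demo amounts to taking the class with the highest probability. A minimal sketch (my own, with made-up numbers):

```python
# Toy version of the demo bar chart: the model outputs a probability per
# class and the best fit is simply the highest-scoring one.
probs = {"dog": 0.07, "cat": 0.82, "parrot": 0.08, "elephant": 0.03}
best = max(probs, key=probs.get)
print(best)  # cat
```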
I find this an amazing leap of imagination, to conceive that such a process could be implemented in silicon, and N-of-M is pretty clever too.
Seems like the Police were right, Socionext is a cult member.

Yes, an exorcist is en route.
The one thing people need to understand, if they are selling or waiting for the revenue to kick in: they will be paying way more than what the current sp is at. Are you okay to take that risk? You may have the chance of a few-fold gain then, or get in now and have double that few-fold. I am waiting patiently and hopefully it's not too long. Fingers crossed.
My opinion.
Dyor
Finally showing up several days after the announcement... Hmmm I wonder why our logo is smaller than everyone else's?

Because we are very efficient. We don't even waste pixels on a screen.
It's not the size that matters, it's how you use it.
IP Alliance - Intel Foundry Accelerator
See how IP Alliance partners enable designers to access high-quality IPs, support design & project schedules, & optimize PPA. (www.intel.com)
It's not the size that matters, it's how you use it.

Yeah, I keep trying to reassure myself of that, .....................

Anyone got any thoughts on what today's "triple witching" may produce?
Anyone got any thoughts on what today's "triple witching" may produce?

Afternoon Mrgds,
Surely funds/instos would be abreast of all the "newly surfaced info"?
Will the volume traded so far be doubled/tripled at this s/p?
@Esq.111 .................. any intuitive fascination you'd like to volunteer?
AKIDA BALLISTA
Nah, it just starts bullying you: “what’s wrong with your face, retard”
Can we finish green after a wonderful week of 1000 eyes sleuthing?

$1m on close yeah boi
C’mon you good thing. Kick…..kiiiick!