BRN Discussion Ongoing

Science fiction anyone?




Not sure of the commercial benefits. I’m guessing Hossam is using Akida as he did for the Nanose device.

:)
 
  • Like
  • Wow
  • Fire
Reactions: 24 users

TECH

Regular
Our patent expansion worldwide, with many, many patents currently awaiting examination, will, if granted, narrow the channel even further for current and emerging players. The question is: how many ways are there to achieve what Peter, Anil and their teams have created to this point? If it were as easy as that, many companies would already be boasting that they have achieved Edge AI on a grand scale, but all I hear is talk. We may be in the same category at this current point in time, but the IBMs, the Qualcomms, the Intels etc. are all in the same boat. Who has the code to crack open the future? The race is on. Are we still in front? Who really knows.

This future pie will be shared between many, as in my opinion many are already sleeping with the enemy, so to speak. As we have all learnt, many companies have something to bring to the table, so letting go of egos and power plays will smooth the runway to success for all.

Brainchip has "proven" technology that the others don't, we can both benefit through mutual co-operation, keeping the doors open
will ultimately benefit us all.

That's why our growing partnership list is so important, it's the clients of clients of clients that ultimately builds a huge network of
solid trust between all parties concerned.

Thanks to all the dedicated researchers still contributing to this forum, I and many others appreciate all your efforts, cheers.

Tech (y)
 
  • Like
  • Love
  • Fire
Reactions: 49 users
Science fiction anyone?



Not sure of the commercial benefits. I’m guessing Hossam is using Akida as he did for the Nanose device.

:)


I found the article relative to this: https://onlinelibrary.wiley.com/doi/10.1002/adma.202209125

It talks about DNNs; no mention of Akida, SNNs or Brainchip :(

Pretty amazing all the same!
 
  • Like
Reactions: 3 users

equanimous

Norse clairvoyant shapeshifter goddess

Even though this is from China, it validates the significance of SNNs, which is probably why the US is accelerating its university programs with Brainchip.



Front. Neurosci., 12 June 2023
Sec. Neuroprosthetics
Volume 17 - 2023 | https://doi.org/10.3389/fnins.2023.1174760

This article is part of the Research Topic​

Neural Information Processing and Novel Technologies to Read and Write the Neural Code

Feasibility study on the application of a spiking neural network in myoelectric control systems​

Antong Sun, Xiang Chen*, Mengjuan Xu, Xu Zhang and Xun Chen
  • Department of Electronic Science and Technology, University of Science and Technology of China (USTC), Hefei, Anhui, China
In recent years, the effectiveness of a spiking neural network (SNN) for Electromyography (EMG) pattern recognition has been validated, but there is a lack of comprehensive consideration of the problems of heavy training burden, poor robustness, and high energy consumption in the application of actual myoelectric control systems. In order to explore the feasibility of the application of SNN in actual myoelectric control systems, this paper investigated an EMG pattern recognition scheme based on SNN. To alleviate the differences in EMG distribution caused by electrode shifts and individual differences, the adaptive threshold encoding was applied to gesture sample encoding. To improve the feature extraction ability of SNN, the leaky-integrate-and-fire (LIF) neuron that combines voltage–current effect was adopted as a spike neuron model. To balance recognition accuracy and power consumption, experiments were designed to determine encoding parameter and LIF neuron release threshold. By conducting the gesture recognition experiments considering different training test ratios, electrode shifts, and user independences on the nine-gesture high-density and low-density EMG datasets respectively, the advantages of the proposed SNN-based scheme have been verified. Compared with a Convolutional Neural Network (CNN), Long Short-Term Memory Network (LSTM) and Linear Discriminant Analysis (LDA), SNN can effectively reduce the number of repetitions in the training set, and its power consumption was reduced by 1–2 orders of magnitude. For the high-density and low-density EMG datasets, SNN improved the overall average accuracies by about (0.99 ~ 14.91%) under different training test ratios. For the high-density EMG dataset, the accuracy of SNN was improved by (0.94 ~ 13.76%) under electrode-shift condition and (3.81 ~ 18.95%) in user-independent case. 
The advantages of SNN in alleviating the user training burden, reducing power consumption, and improving robustness are of great significance for the implementation of user-friendly low-power myoelectric control systems.
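For anyone wondering what the leaky-integrate-and-fire (LIF) neuron mentioned in the abstract actually computes, here is a minimal discrete-time sketch. The parameter names and values are purely illustrative and are not taken from the paper:

```python
def lif_simulate(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    Each step, the membrane potential decays by `leak`, integrates the input
    current, and emits a spike (1) when it crosses `threshold`, after which
    it is reset. Returns the spike train as a list of 0s and 1s.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)      # threshold crossed: emit a spike
            v = v_reset           # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires once charge accumulates:
print(lif_simulate([0.6, 0.6, 0.6]))
```

The leak term is what makes the neuron forget old input; the paper's LIF variant additionally combines a voltage-current effect, which this sketch omits.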

Figure 2. Nine kinds of gestures.

 
  • Like
  • Fire
Reactions: 16 users

equanimous

Norse clairvoyant shapeshifter goddess


Kwangwoon University, South Korea

Wi-Fi frame detection via spiking neural networks with memristive synapses​

Hyun-Jong Lee, Dong-Hoon Kim, Jae-Han Lim (Department of Software, Kwangwoon University, Seoul, South Korea)
Received 1 October 2022, Revised 29 April 2023, Accepted 7 June 2023, Available online 13 June 2023.
https://doi.org/10.1016/j.comcom.2023.06.006

Abstract​

With increasing performance of deep learning, researchers have employed Deep Neural Networks (DNNs) for wireless communications. In particular, mechanisms for detecting Wi-Fi frames using DNNs demonstrate excellent performances in terms of detection accuracy. However, DNNs require significant amount of computation resources. Thus, if the DNN based mechanisms are used in mobile devices or low-end devices, their battery would be quickly depleted. Spiking Neural Networks (SNNs), which are regarded as next generation of neural network, have advantages over DNNs: low energy consumption and limited computational complexity. Motivated by these advantages, in this paper, we propose a mechanism to detect a Wi-Fi frame using SNNs and show the feasibility of SNNs for Wi-Fi detection. The mechanism is composed of a preprocessing module for collecting an actual RF signal and an SNN module for detecting a Wi-Fi frame. The SNN module employs Leaky Integrate and Fire (LIF) neurons and Spike-Timing Dependent Plasticity (STDP) learning rule. To reflect the features of an actual neuromorphic system, our SNN module considers memristive synaptic features such as nonlinear weight update. Experimental study demonstrates that the detection capabilities of the proposed mechanism are comparable to those of previous mechanisms using DNNs, CNNs and RNNs while consuming much less energy than the previous mechanism.
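The abstract mentions the Spike-Timing Dependent Plasticity (STDP) learning rule. A toy pairwise STDP weight update looks roughly like the sketch below; the constants are illustrative, and the paper's memristive nonlinear weight update is more involved than this:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pairwise STDP: if the presynaptic spike precedes the postsynaptic one,
    the synapse is potentiated; otherwise it is depressed. The change decays
    exponentially with the spike-time gap, and the weight is clipped."""
    dt = t_post - t_pre
    if dt > 0:                              # pre before post -> potentiate
        w += a_plus * math.exp(-dt / tau)
    else:                                   # post before (or with) pre -> depress
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)

# Pre fires 5 time units before post: the weight grows.
print(stdp_update(0.5, t_pre=0.0, t_post=5.0))
```

The "memristive synaptic features such as nonlinear weight update" the paper models would make `a_plus`/`a_minus` depend on the current weight, which this sketch leaves constant.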

 
  • Like
  • Fire
  • Love
Reactions: 17 users

equanimous

Norse clairvoyant shapeshifter goddess


University of Strathclyde, Glasgow, UK

published 13th of June



This is interesting 👇

Photonic Spiking Neural Networks with Highly Efficient Training Protocols for Ultrafast Neuromorphic Computing Systems

Dafydd Owen-Newns, Joshua Robertson, Matěj Hejda, and Antonio Hurtado
Institute of Photonics, SUPA Department of Physics, University of Strathclyde, Glasgow, UK. Corresponding author: dafydd.owen-newns@strath.ac.uk

Abstract

Photonic technologies offer great prospects for novel, ultrafast, energy-efficient, and hardware-friendly neuromorphic (brain-like) computing platforms. Moreover, neuromorphic photonic approaches based on ubiquitous, technology-mature, and low-cost vertical-cavity surface-emitting lasers (VCSELs) (devices found in fiber-optic transmitters, mobile phones, and automotive sensors) are of particular interest. Given that VCSELs have shown the ability to realize neuronal optical spiking responses (at ultrafast GHz rates), their use in spike-based information-processing systems has been proposed. In this study, spiking neural network (SNN) operation, based on a hardware-friendly photonic system of just one VCSEL, is reported alongside a novel binary weight 'significance' training scheme that fully capitalizes on the discrete nature of the optical spikes used by the SNN to process input information. The VCSEL-based photonic SNN was tested with a highly complex multivariate classification task (MADELON) before its performance was compared using a traditional least-squares training method and an alternative novel binary weighting scheme. Excellent classification accuracies of >94% were achieved by both training methods, exceeding the benchmark performance of the dataset in a fraction of the processing time. The newly reported training scheme also dramatically reduces the training set size requirements and the number of trained nodes (≤1% of the total network node count). This VCSEL-based photonic SNN, in combination with the reported 'significance' weighting scheme, therefore grants ultrafast spike-based optical processing highly reduced training requirements and hardware complexity for potential application in future neuromorphic systems and artificial intelligence applications.
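The binary 'significance' weighting idea (keep only nodes that spike strongly for one class but not the others) can be caricatured in a few lines. This is a loose sketch with made-up helper names and parameters, not the authors' method:

```python
import numpy as np

def significance_weights(spike_counts, labels, n_classes, top_k=10):
    """Toy version of binary 'significance' weighting: score each node by how
    much more it spikes for its preferred class than for the others, then
    assign binary weight 1 to the top-scoring nodes and 0 to the rest.

    spike_counts: (n_samples, n_nodes) array of per-node spike counts
    labels:       (n_samples,) class label per sample
    """
    # Mean spike rate of every node for every class: (n_classes, n_nodes)
    class_rates = np.stack([spike_counts[labels == c].mean(axis=0)
                            for c in range(n_classes)])
    best = class_rates.max(axis=0)                       # rate for preferred class
    rest = (class_rates.sum(axis=0) - best) / max(n_classes - 1, 1)
    score = best - rest                                  # high = class-selective node
    weights = np.zeros(spike_counts.shape[1], dtype=int)
    weights[np.argsort(score)[-top_k:]] = 1              # binary weights: top_k on
    return weights
```

The point of the scheme is that classification then only needs this tiny subset of output-layer nodes, which is why the paper reports ≤1% of the nodes being trained.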

Conclusion


This study demonstrates the high classification performance of a novel laser-based photonic spiking neural network for tackling a highly complex, multivariate, nonlinear classification task (MADELON) with 500 datapoint features. Importantly, this study also introduces a novel 'significance' training approach that makes use of binary weights (0 or 1), and that leverages the advantages of the discrete optical spiking signals found in the photonic SNN. The experimental approach to an SNN combines the spiking dynamics of a VCSEL with a novel network architecture inspired by the reservoir computing paradigm to process data entirely optically at extremely high speeds (GHz rates). The SNN uses all-optical neuron-like spikes to create a time-multiplexed feed-forward spiking neural network in which the values of each time-multiplexed (virtual) node are linked through the VCSEL's nonlinear temporal dynamics. The computational power of the photonic SNN was demonstrated using an OLS method of weight training. We demonstrated that, by training the output layer weights with this OLS approach, we could achieve high accuracies of up to 91% and 94.4% for the MADELON classification task using SNN architectures with 2,048 and 4,096 nodes, respectively. Additionally, we introduced a new 'significance' training approach that assigns binary weights to (optical spiking) nodes according to their overall usefulness and significance score.

Figure 9: Photonic SNN performance versus training set size, using a 4,096-node architecture. (a) The optimal classification accuracy (averaged over 10 random selections of the training set) plotted against training set size (Nt). The inset highlights the consistently high performance over various training set sizes (Nt ranges from 1 to 100). (b) The number of training nodes (Nn) used to achieve the corresponding optimal accuracy, plotted against increasing training set size. The peak accuracy, 95.7%, occurs when Nn = 18 and Nt = 91.

In this approach, only high-significance scoring nodes, that is, nodes that spike frequently for one class but not others, were considered and used for network training and performance evaluation. We showed that only a very small fraction (≤1% in the presented case) of the total number of nodes (in the output layer) were required to classify the data successfully. We demonstrated that classification accuracies of 94.4% and 95.7% could be achieved using this new training method. The accuracies provided by the significance-training approach showed an improvement over those achieved by the OLS method, while also significantly reducing the number of training nodes. Moreover, the photonic SNN demonstrated a classification performance that improved upon the benchmark accuracy (93.78%) achieved by software-implemented NNs reported by the dataset authors [49]. Additionally, we demonstrated that a photonic SNN trained with the new significance method could realize high-level performance with small training set sizes (<10 datapoints), further reducing the overall resources necessary for training the optical system.

Finally, the proposed photonic SNN offers several inherent physical and computational benefits over traditional digital semiconductor processing systems, notably ultrafast performance (250 ps/node), low power usage (~150 µW average optical power, and 3.5 mA of applied bias current), and hardware-friendly implementation (using just one VCSEL to process all virtual nodes). Furthermore, a VCSEL-based photonic SNN can adjust the performance and processing rate by changing the number of virtual nodes used in the system, which can be performed arbitrarily and on the fly during pre-processing. In conclusion, we believe these results open possibilities for further photonics-based processing systems that run and operate entirely on optical hardware, and that are capable of solving highly complex tasks with high accuracy and ultrafast, energy-efficient operation.

Funding: UKRI Turing AI Acceleration Fellowships Programme (EP/V025198/1), the European Commission (Grant 828841-ChipAI-H2020-FETOPEN-2018-2020), and UK EPSRC (EP/N509760/1, EP/P006973/1).

 
  • Like
  • Fire
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess


Another Chinese article addressing some concerns about SNNs mentions Green Artificial Intelligence, which is the first time I have heard of this term.

Last but not least, more special applications for SNNs also should be explored still. Though SNNs have been used widely in many fields, including the neuromorphic camera, HAR task, speech recognition, autonomous driving, etc., as aforementioned and the object detection (Kim et al., 2020; Zhou et al., 2020), object tracking (Luo et al., 2020), image segmentation (Patel et al., 2021), robotic (Stagsted et al., 2020; Dupeyroux et al., 2021), etc., where some remarkable studies have applied SNNs on recently, compared to ANNs, their real-world applications are still very limited. Considering the unique advantage, efficiency of SNNs, we think there is a great opportunity for applying SNNs in the Green Artificial Intelligence (GAI), which has become an important subfield of Artificial Intelligence and has notable practical value.
We believe many studies focusing on using SNNs for GAI will emerge soon.
 
  • Like
  • Fire
Reactions: 9 users

Frangipani

Regular
Here is a page with the CVPR schedule.
It takes place this coming week.
Scroll down the page and there are papers and posters you can click on also.

https://tub-rip.github.io/eventvision2023/


Our man Nandan is in session 4.
It says it is the Industrial session and speakers have been invited to speak.
I assume they do each talk one after the other.

Session #4 (16:00 h, Vancouver time)

The million dollar question (quite literally when we think of future revenue) is:
Were those five companies just randomly grouped together by the workshop organisers, or were the speakers deliberately invited to present in the same session (or did they ask to be grouped together) because they are linked through more than just being players in the same industry sector?

As for Brainchip, the only confirmed link so far is the one to Prophesee, however, both @TopCat and I have been speculating in recent weeks whether or not iniVation’s new Aeveon technology might have become possible thanks to Akida.

There is also a confirmed collaboration between Sony and Prophesee. As to whether or not Brainchip is involved with Sony as well, we can only speculate.

And the fifth member of the illustrious Table Round, OmniVision?
A quick search on their website did not yield links to any of the other companies except for Sony, but that was 20 years ago.

However, their recent release of the OV02E sensor (see below) suggests to me they have incorporated neuromorphic technology that sounds all too familiar.

Maybe the workshop session 4 will give us more hints as to whether Brainchip might be involved?
For those of you interested in following the hybrid session remotely, a zoom link will be provided here: https://cvpr2023.thecvf.com/virtual/2023/workshop/18456

Session 4 will be held on
June 19th at 4 pm Vancouver time
June 20th at 1 am Central European Summer Time
June 20th at 7 am Perth/Singapore time
June 20th at 9 am Sydney/Melbourne/Brisbane Time

Since OmniVision are planning on rolling out mass production for their sensor in Q4 2023, how many different commercially available neuromorphic chips can they choose from? 🤔

Then again, I just noticed on Wikipedia that OmniVision Technologies Inc. is “an American subsidiary of Chinese semiconductor device and mixed-signal integrated circuit design house Will Semiconductor”, which would be an argument against Brainchip being involved, I guess.

Oh well, time will tell…

——————————————————————————————————————

OMNIVISION Announced New 1080p Full HD Image Sensor​

OMNIVISION’s new OV02E enables some of the world’s thinnest full HD laptop computers with AI-powered always-on face ID recognition for ultra-low power mode​

May 25, 2023


OMNIVISION announced the new OV02E 1080p full high-definition (HD) image sensor with staggered high dynamic range (HDR) for devices with thin bezel designs, including mainstream and premium notebooks, tablets and IoT devices. The feature-packed 1/7.3-inch-format sensor works with artificial intelligence (AI) chips to sense human presence in always-on ultra-low power mode, extending battery life for portable devices.



“Our new OV02E is a single-die solution that meets the computing industry’s need for high video quality and low bill of materials (BOM) cost,” said Akeem Chen, product marketing manager, OMNIVISION. “We’ve all been on video calls where the backlighting is less than ideal, reducing image quality and blowing out the background. Now, with staggered HDR support, troublesome backlighting during a videoconference call is no longer an issue. In addition, we’ve added new features like ultra-low power mode with AI functionality for best-in-class always-on capabilities. These are some of the trending features demanded by consumers in 2023 and 2024 laptop models.”

All of these features are packed into the smallest die size for cameras with the most compact footprint, ideal for devices with a screen-to-body ratio of less than 3mm Y size, such as tablets and wearable devices. OMNIVISION’s OV02E sensor has a 1.12-micron (µm) backside-illuminated (BSI) pixel based on the company’s proven and proprietary PureCel®Plus architecture for advanced pixel sensitivity and quantum efficiency. The sensor features 2-megapixel (MP) full HD 1080p video at 60 frames per second (fps). It supports multiple camera synchronization for machine vision and IoT applications where depth detection is needed. The OV02E sensor’s always-on capability features an ultra-low power state that works with the mobile industry processor interface (MIPI) and serial peripheral interface (SPI).
Samples of the OV02E are available now, and it will be in mass production in Q4 2023.
For more information, contact an OMNIVISION sales representative: www.ovt.com/contact-sales.
Tags: AI, artificial intelligence, image sensor, IoT, semiconductor
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 38 users

charles2

Regular

Another Chinese article addressing some concerns of SNN mentions Green artificial Intelligence which is a first I have heard of this term.

Last but not least, more special applications for SNNs also should be explored still. Though SNNs have been used widely in many fields, including the neuromorphic camera, HAR task, speech recognition, autonomous driving, etc., as aforementioned and the object detection (Kim et al., 2020; Zhou et al., 2020), object tracking (Luo et al., 2020), image segmentation (Patel et al., 2021), robotic (Stagsted et al., 2020; Dupeyroux et al., 2021), etc., where some remarkable studies have applied SNNs on recently, compared to ANNs, their real-world applications are still very limited. Considering the unique advantage, efficiency of SNNs, we think there is a great opportunity for applying SNNs in the Green Artificial Intelligence (GAI), which has become an important subfield of Artificial Intelligence and has notable practical value.
We believe many studies focusing on using SNNs for GAI will emerge soon.
Worth emphasizing how (green) AKIDA/Brainchip ultimately reduces the carbon footprint.

 
  • Like
  • Love
  • Fire
Reactions: 17 users

Tothemoon24

Top 20
Potentially a huge week ahead for the mighty chip; the event vision conference looks to be right in our wheelhouse.
The program of events is listed below.

Session #4, say no more $$$


Session #4 (16:00 h, Vancouver time)​

 
  • Like
  • Fire
Reactions: 27 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

The weekend Financial Review is riddled with AI articles, yet there is not even a mention of Brainchip?

Below is an article on the Australian military and our collaboration on the Ghost Bat project.

* For overseas investors.. the Ghost Bat project should not be confused with the Australian Fat Bat program.

😃.

Regards,
Esq.
 

Attachments

  • 20230618_081003.jpg (2.6 MB)
Last edited:
  • Like
  • Love
  • Fire
Reactions: 20 users

Boab

I wish I could paint like Vincent
Last edited:
  • Like
Reactions: 8 users

Foxdog

Regular
It's obvious he is not interested in BRN one iota.
Yes, but why is that, and what will it take to get mainstream enthusiasm for BRN? Our team cleverly started positioning in the AI space well before the current buzz around AI, yet we constantly appear to be overlooked. I hope this type of commentary does not reflect what our potential customers and industry participants think of AKIDA. Uptake of GEN 2 needs to be significant and rapid once it is fully released in the next couple of months. Looking forward to the next GEN 2 announcement.
 
  • Like
  • Fire
Reactions: 15 users

HopalongPetrovski

I'm Spartacus!
It's obvious he is not interested in BRN one iota.
Just another article pushing ETF's.
It's a trend. They like their fees. 🤣
 
  • Haha
  • Like
Reactions: 8 users

HopalongPetrovski

I'm Spartacus!
Yes, but why is that and what will it take to get mainstream enthusiasm for BRN. Our team cleverly started positioning in the AI space well before the current buzz around AI yet we constantly appear to be overlooked. I hope this type of commentary does not reflect what our potential customers and industry participants think of AKIDA. Uptake of GEN 2 needs to be significant and rapid once it's fully released in the next couple of months. Looking forward to the next GEN 2 announcement.
Without committing some time and energy to research Akida, I think it is difficult for most to understand just what we do.

Making processes more efficient by reducing bandwidth requirements and energy consumption, and doing some computation before transferring data up the chain, all sounds good and worthwhile, but for the mainstream it's just a bit ho-hum.

Most people generally don't care much about the inner workings of their tech. They just want results. They respond to something that works noticeably better, faster or cheaper.
For this to happen we would need some product with both a wide application and low barriers to participation where people could have a firsthand experience of our efficiency and efficacy. The covid detector could have been a great platform for us as well as a boon to society.

I asked Sean after the AGM for his opinion of just what might be our killer app, and he said automotive.

This does make sense from the perspective of showcasing our many benefits including our ability of independent operation when there is no cloud connectivity.
And, over time, many people will have this experience first hand as vehicle fleets are upgraded.
But, as we now know to our chagrin, this will not happen quickly.

I think that, for the most part, most people won't know that Akida tech is a part of their user experience of whatever tech they happen to be using at the time. Peter VDM pointed to his phone at the AGM and stated that, of the dozens of chips therein, most people would not know they were ARM enabled. This was in response to the oft-suggested idea of having 'AKIDA INSIDE' labels on everything we are a part of.

Also look at the changed style of our website. Gone are the cute robots designed for mainstream appeal; the emphasis has shifted to professionalism and promotion of the technical details, which are more likely to interest fellow engineers and people involved in complementary operations.

Basically, at this stage of the game anyway, we are not chasing mainstream eyeballs, but rather just those relatively few educated, influential and commercially connected persons who may recognise and grasp the opportunity for enhancement and advancement, and the greater commercial viability, that we represent.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 85 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 24 users

rgupta

Regular
Without committing some time and energy to research Akida, I think it is difficult for most to understand just what we do.

Making processes more efficient by reducing bandwidth, energy consumption and doing some computation prior to transferring data up the chain all sounds good and worthwhile, but for the mainstream, it's just a bit HoHum.

Most people generally don't care much about the inner workings of their tech. They just want results. They respond to something that works noticeably better, faster or cheaper.
For this to happen we would need some product with both a wide application and low barriers to participation where people could have a firsthand experience of our efficiency and efficacy. The covid detector could have been a great platform for us as well as a boon to society.

I asked Sean after the AGM for his opinion of just what might be our killer Ap and he said Automotive.

This does make sense from the perspective of showcasing our many benefits including our ability of independent operation when there is no cloud connectivity.
And, over time, many people will have this experience first hand as vehicle fleets are upgraded.
But, as we now know to our chagrin, this will not happen quickly.

I think, that for the most part, most people won't know that Akida tech is a part of their user experience of whatever tech they happen to be using at the time. Peter VDM pointed to his phone at the AGM and stated that, of the dozens of chips therein, most people would not know they were ARM enabled. This was in response to the oft suggested idea of having 'AKIDA INSIDE' labels on everything we are a part off.

Also look to the changed style of our website. Gone are the cute robots designed to have mainstream appeal and the shift of emphasis to professionalism and promotion of the technical details which are more likely to be of interest to fellow engineers and people involved in complimentary operations.

Basically, at this stage of the game anyway, we are not chasing mainstream eyeballs, but rather, just those relatively few, educated, influential and commercially connected persons, who may recognise and grasp the opportunity for enhancement and advancement, lending them greater commercial viability, that we represent.
My assumption is that data matters more to a lot of organisations than becoming green.
That is why everyone keeps investing in GPUs.
With extra data they believe they can reprocess the same later on or incorporate new features.
It will take time for them to understand that junk is junk, whether it is at home, in the workplace, or in data.
 
  • Like
  • Fire
Reactions: 7 users

HopalongPetrovski

I'm Spartacus!
Agree, much lip service paid towards green concepts atm, but this will change as younger staff progress in their careers and become more influential, and as practical implications become more noticeable. As we gradually become integrated into systems and prove our dependability, reliability and versatility, our efficiency dividend will become measurable, and engineers who love both efficiency and elegance will incorporate us more naturally, with less hesitation, in their future designs.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Diogenese

Top 20


Helium isn't hot air, but it has a similar effect:

https://developer.arm.com/documenta... Digital Signal Processing (DSP) applications.

ML is a subset of Artificial Intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Helium helps to boost Matrix Multiplication operations, which are the foundation of Convolutional Neural Networks or Classical based Machine Learning kernels.

Applications that can be greatly accelerated by Helium are Fast Fourier Transform (FFT) and Complex Dot Product as there are specific instructions which help implement these calculations
.
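The matrix multiplications the ARM quote refers to boil down to multiply-accumulate (MAC) loops. A purely illustrative scalar sketch of the kernel that vector extensions like Helium speed up (by processing several lanes per instruction) is:

```python
def dot_mac(a, b):
    """Scalar multiply-accumulate loop: the inner kernel that SIMD/vector
    extensions such as Helium accelerate by doing several MACs at once."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y              # one multiply-accumulate (MAC) per element
    return acc

def matmul(A, B):
    """Naive matrix multiply built from row-by-column dot products; this is
    the operation underpinning convolutional and classical ML kernels."""
    cols = list(zip(*B))          # columns of B as tuples
    return [[dot_mac(row, col) for col in cols] for row in A]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

A convolution layer can be lowered to exactly this kind of matrix multiply, which is why speeding up the MAC loop accelerates CNN inference as a whole.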
 
  • Like
  • Fire
Reactions: 6 users
Top Bottom