TheDrooben
Pretty Pretty Pretty Pretty Good
Bit of Sunday reading......
Akida gets a nod in section 3.8 and comparison table (below)
@Facit certainly does look like AKIDA IP

Yes, it will be nice, but just not sexy. Can you see any of the WANCAs from MF and the AFR reading a Qualcomm announcement by torchlight under the covers so their mum does not catch them?
The unsophisticated WANCAs need super trucks and exotic European tourers to understand what value Brainchip offers.
Global semiconductor giants like ARM just don’t rate.
We know that AKIDA is being offered with the ARM Cortex-M4; the ARM website screams this fact from every corner.
We know that the ARM Cortex-M4 is presently used across the drone industry for CNN processing, and we know that AKIDA revolutionises Cortex-M4 CNN processing by converting CNNs to SNNs.
Going out on a limb here, but even Blind Freddie would likely drag himself away from drinking cocktails and emailing Mercedes to be put on the list to purchase an EQXX long enough to switch his drone fleet over to the upgraded ARM Cortex-M4 with AKIDA and dramatically reduce power drain while increasing performance and range. In fact he seems to be nodding at the idea as I type.
Quite frankly, even this old technophobe knows to the point of conviction that, apart from AKIDA itself, the biggest growth story of the next ten years is going to be the proliferation of drone technology. Drones just make sense.
So sexy is as sexy does, and we need all the WANCAs under their covers pleasuring themselves over pictures of AKIDA-powered Ford super trucks before they will get the bleeding obvious and attribute the correct value to Brainchip.
My opinion only DYOR
FF
AKIDA BALLISTA
And at any moment the next version of Akida could be announced.

Anyone know what Edge Impulse, Prophesee, Intelligent Systems Laboratory (ISL), Intellisense, NUMEN, NASA, DARPA, US Air Force Research Labs, MOSCHIP and Nviso do? I seem to recall they have some sort of connection with BrainChip as well.
(Note to self: sarcasm is fine when it is funny and does not aim to ridicule an individual. This post so far is not funny according to Blind Freddie, so I had better make the point clearly because, according to him, no one is laughing.)
OK, to clarify. The odds are highly in favour of macroeconomic circumstances moderating market demand on Monday (in other words, the US market falling is likely to spook the ASX), and Brainchip, being on the ASX, is likely to follow the market down.
The thing is, in my opinion, if it does go down it will be on low volumes, because we all know that Brainchip's value is the sum of all its parts and there are just so many parts.
My posts today together with those of others have highlighted just how many opportunities Brainchip has squirrelled away that are publicly known about either because of ASX releases, press releases or other sources such as the 1,000 Eyes.
As well as all of these, we know there are up to a further 10 EAPs under NDAs. We know there were well north of 100 other NDAs. We know that by late 2021 there were over 500 commercial opportunities in play. We know that Carnegie Mellon, the most prestigious technology university in the USA, has trialled AKIDA technology and approved its study as part of its degree course, along with five other universities.
We know that DELL Technologies and Tata Consulting Services have been deeply engaged with Brainchip.
If you took the time to read @uiux ‘s post today on the NASA thread it is obvious that AKIDA technology is now the worst kept secret in US Military history.
We know that Vorago successfully completed a Hardsil design proof of concept for the AKD1000 for NASA and predicted it would allow NASA to achieve the Rover target speed of 20 kph essential for successful Moon and Mars exploration.
Watching the share price go up and down, whether by itself or through manipulation, is not fun as an investor, but in Brainchip's case we can turn that old saying on its head and say "What goes down will go up & up & up & up", and it's only 9 months to the 2023 AGM.
My opinion only DYOR
FF
AKIDA BALLISTA
Nice

For me he is still our James Bond. I also still wonder about the vacation spot; the stones in the water and the shells give clues. But we have a clear rule that is good and should last. It would be easier if he came to the board. A different dynamic.
View attachment 15179
@Fact Finder I'm glad that you are well and that your community could help you too (Diogenese). Nevertheless, I would have liked it if you steered our darling. But you do that in your own way. Thank you for that!
See, I had forgotten all about this. It was so much easier back in the day, when Brainchip was teetering on the brink of financial failure, to list all the positives; now I struggle. THERE ARE JUST TOO MANY.

And at any moment the next version of Akida could be announced.
Not sure if there is any point "hiding" the cane.

Stop laughing, Blind Freddie. If you knew, why didn't you say something? OK, you're in for it. I am going to rearrange all the furniture and hide your cane.
FF
AKIDA BALLISTA
PE might be a lot higher by then too - the market will start pricing in exponential earning potential by that stage imo. I'd be happy with $16 for starters tho.

25 billion chips per year
1 cent per chip
= 0.25 billion ($250m) revenue
Say 85% GP
≈ $212m profit
PE 30
= $6.4b market cap
SP about 4 bucks...
Absolutely super conservative; it's gonna be way more than 1c per chip.
5c means an SP of 16 bucks ish!
Now let's just get to the 25b starting point.
I can't wait for the next two 4Cs; that's gonna really point the direction and confirm the trend.
Have a great rest of the weekend all. I'm gonna go blow a couple hundred bucks at the cas... mmm, but that's say 220 BRN shares, × $16, so it's gonna potentially cost me $3.5k, ouch... ah well, gotta have fun along the way, right!
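For anyone wanting to play with the numbers, the back-of-envelope valuation above can be sketched in a few lines. All inputs are the poster's assumptions, not company guidance, and the share count is my own assumption (roughly 1.7 billion shares on issue, chosen to reproduce the "about 4 bucks" figure):

```python
# Back-of-envelope royalty valuation sketch.
# Every input here is an assumption for illustration only - DYOR.

def implied_share_price(chips_per_year, royalty_per_chip,
                        gross_margin, pe_ratio, shares_outstanding):
    revenue = chips_per_year * royalty_per_chip   # royalty revenue
    profit = revenue * gross_margin               # gross profit used as an earnings proxy
    market_cap = profit * pe_ratio                # simple earnings multiple
    return market_cap / shares_outstanding

# 25 billion chips, 1 cent each, 85% GP, PE of 30,
# ~1.7 billion shares on issue (my assumption).
print(round(implied_share_price(25e9, 0.01, 0.85, 30, 1.7e9), 2))  # 3.75

# The 5c-per-chip case the post mentions:
print(round(implied_share_price(25e9, 0.05, 0.85, 30, 1.7e9), 2))  # 18.75
```

Note the sketch treats gross profit as if it were net earnings, so it flatters the result; the 5c case lands a little above the post's "16 bucks ish" for the same reason.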
What does it take to disrupt an existing paradigm? Perhaps MegaChip and its public disclosures can give us some guidance.
On the MegaChips thread @stuart888 posted the following:
“In a much more aggressive move, in mid-2020, founder and Chairman, Masahiro Shindo, identified AI/ML technology to be critical to Megachips’ future and asked the US operation to take a leadership position in moving the company in that direction.
MegaChips began an internal training program to allow a group of dedicated engineers to become experts in this important technology. The company made significant investments in the US to identify key partners, build relationships with local universities, and acquire key talent in this space. In 2021, the company made multi-million-dollar investments in two key AI/IP partners, Brainchip and Quadric, to bolster its offerings in the Edge AI market. The company is now positioned to make an aggressive move into the US ASIC market, using its skills in Edge AI as a key component of that move”
So in mid-2020 MegaChips commenced training a group of dedicated engineers, at the same time that Brainchip was announcing its first EAP customers, Ford Motor Company and Valeo.
I think we can now say with confidence that MegaChips was an EAP in 2020.
Before buying a full IP licence, MegaChips invested close to two years in identifying the paradigm shift and training its staff, noting that "Chairman, Masahiro Shindo, identified AI/ML technology to be critical to Megachips' future", and determined how this was to be embraced and implemented. This decision would not have been taken lightly or overnight, so it was most likely in contemplation back in 2019.
Remember Brainchip announced the release of AKIDA IP to select customers in June, 2019.
Nintendo was 70% of MegaChips' business in 2019, and it is hard to believe that it did not figure in MegaChips' thinking where AI/ML was concerned.
Then in October 2021 MegaChips announced to its customers, and the market generally, that it was able to address customer requirements for the entire suite of AI/ML products from design to implementation.
Then in Brainchip's 2022 half-yearly report there appears about 2 million dollars in unexpected revenue, which Brainchip advises primarily relates to IP licence fees.
If these licence fees have arrived via MegaChips, then they relate to MegaChips' customers at the very beginning of their product development cycles.
As Nintendo accounts for 70% of MegaChips' business, I thought about the statement by the former CEO Mr. Dinardo in 2020 that there was an opportunity for AKIDA to appear in controllers.
If 70% of what you do every day as an engineer at MegaChips relates to Nintendo products, does it not make perfect sense that, when tasked with learning all about AKIDA technology and how it can be applied to and designed into customer technology, the technology that occupies 70% of your work day would be foremost in your contemplation?
It just makes sense; in fact, were it otherwise it would actually be irrational.
So I think, based on what I have read, that we will see a new product offering from Nintendo early in 2023 that will walk like our famous duck, and it will be quacking so loud you will not be able to ignore it.
This new product from Nintendo was rumoured for release in March of 2022 but mysteriously did not appear and now is rumoured for release in March, 2023:
A New Nintendo Switch Model Isn't Releasing Before March 2023 (press-start.com.au)
My opinion only DYOR
FF
AKIDA BALLISTA
Can't believe I have to wait two and a half more years to buy my Dreamliner private jet.

Where's 2025? Hurry up.
I probably might be perceived as over the top with my obvious confirmation bias where the future of Brainchip is concerned so I thought what the heck go for it and ask the really big question:
WILL AKIDA SAVE PLANET EARTH FROM CATASTROPHIC DESTRUCTION BY A ROGUE ASTEROID?
“NASA is ready to test tech that could save the Earth from devastating asteroid impact
The space agency's DART spacecraft will attempt to knock a passing asteroid off its path.
By Alex Hughes
3 days ago
In an effort to develop defences against incoming asteroids in space, NASA has announced a planned test of its Double Asteroid Redirection Test (DART) spacecraft.
The technology will be used to target an asteroid on Monday, September 26 at 12:14am BST. This asteroid poses no threat to Earth, making it a safe way to test DART.
The planned test will show whether a spacecraft is able to autonomously navigate to a target asteroid and intentionally collide with it.
By doing this, DART could change the asteroid’s direction while being measured by ground-based telescopes.”
If Professor Iliadis, Rob Telson, Anil Mankar and Vorago know what they are talking about AKIDA technology could be part of the autonomous solution navigating DART on this mission.
If so, how cool will it be at BBQs, when people are boasting about their EVs and no-plastic lifestyles, to be able to say, "Really? I am part of a company that saved the planet last Friday."
My opinion and speculative hope so DYOR
FF
AKIDA BALLISTA
Anyone looked into Ubotica Technologies?
“Transforming Satellite Services through On-Board AI”
Might not be the same tech, but they're also working on a project with NASA JPL.
CogniSat™ Platform
The most comprehensive, adaptable and power efficient solution for AI on-board satellites
Why AI in Space
Bandwidth
Massively more efficient usage of valuable download communications bandwidth from satellite to Earth.
Latency
Dramatically reduce time to obtain actionable intelligence from images captured by satellites.
Versatility
Enhance mission flexibility - run multiple AI applications at source and in parallel on the same satellite, even on the same data.
Security
Extract needed intelligence at source, discard other potentially sensitive data.
Autonomy
Enable self-contained applications on-board satellite with AI - no post processing of image data on earth required.
Why CogniSat™
Bandwidth
Inference at source to extract only valuable data to optimise bandwidth.
Push to launch efficient upload of new applications and model updates.
Latency
High frame rate processing at source.
Actionable intelligence extracted in seconds.
Close to sensor integration reduces processing and data.
Versatility
Pre- and post-processing and multiple inferences possible on the same image.
Multi-camera support for EO, debris tracking, star tracking, telemetry.
Security
Reliability: Neural Network Supervisor gives a confidence factor for AI models.
On-board storage to archive millions of acquired images for later retrieval.
Learning at the edge to enhance model accuracy with real in-flight data.
Autonomy
Improved decision making and collaboration between satellites.
Dynamic Mission Retargeting.
Low power consumption increases satellite duty cycle.
Specifications
View attachment 15248
"It is built around the Intel® Movidius™ Myriad™ 2 Computer Vision (CV) and Artificial Intelligence (AI) COTS VPU whose 12 vector cores provide high-performance parallel and hardware accelerated compute within a low power envelope."
When I was getting into science I was reading a good book about helium... couldn't put it down though!
Who will snap up a piece of the multi-QUADRILLION-pound moon pie?
Firms will all be hoping for a piece of the pie, which includes an estimated £168 billion ($206 billion) worth of water, billions of pounds of gold and £1.2 quadrillion ($1.5 quadrillion) of helium. (www.dailymail.co.uk)
View attachment 15253
10 x Microsofts? LOL
Machine learning can give computers the ability to "learn" a specific task without expressly programming the computer for that task. One type of machine learning system is the convolutional neural network (CNN), a class of deep learning neural network. Such networks (and other forms of machine learning) can be used to, for example, help with automatically recognizing whether a cat is in a photograph. The learning takes place by using thousands or millions of photos to "train" the model to recognize when a cat is in a photograph. While this can be a powerful tool, the resulting processing of using a trained model (and training the model) can still be computationally expensive when deployed in a real-time environment.
Image up-conversion is a technique that allows for conversion of images produced in a first resolution (e.g., 540p resolution, or 960×540 with 0.5 megapixels) to a higher resolution (e.g., 1080p resolution, or 1920×1080 with 2.1 megapixels). This process can be used to show images of the first resolution on a higher-resolution display. Thus, for example, a 540p image can be displayed on a 1080p television and (depending on the nature of the up-conversion process) may be shown with increased graphical fidelity compared to if the 540p image were displayed directly with traditional (e.g., linear) upscaling on a 540p television. Different techniques for image up-conversion present a trade-off between speed (e.g., how long the process takes to convert a given image) and the quality of the up-converted image. For example, if up-conversion is performed in real time (e.g., during a video game), the image quality of the resulting up-converted image may suffer.
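To make the resolution figures concrete, here is a toy sketch (not from the patent) of the pixel-count arithmetic and the kind of traditional nearest-neighbour upscaling the patent contrasts the learned approach with:

```python
# Toy illustration of the 540p -> 1080p up-conversion discussed above.
# Nearest-neighbour upscaling is the simplest "traditional" method;
# the patent's learned approach aims for better quality at similar speed.

def megapixels(width, height):
    """Pixel count in megapixels."""
    return width * height / 1e6

# 540p -> 1080p doubles each dimension, so 4x the pixels.
print(megapixels(960, 540))    # 0.5184  (~0.5 MP)
print(megapixels(1920, 1080))  # 2.0736  (~2.1 MP)

def upscale_nearest(img, factor):
    """Upscale a 2D grid of pixel values by an integer factor,
    repeating each pixel horizontally and each row vertically."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

tiny = [[1, 2],
        [3, 4]]
print(upscale_nearest(tiny, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Each source pixel simply becomes a 2×2 block, which is fast but blocky; that is the quality/speed trade-off a learned up-converter tries to improve on.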
Accordingly, it will be appreciated that new and improved techniques, systems, and processes are continually sought after in these areas of technology.
The processing discussed above generally relates to data (e.g., signals) in two dimensions (e.g., images). The techniques herein (e.g., the use of SBTs) may also be applied to data or signals of other dimensions, for example 1D (e.g., speech recognition, anomaly detection on time series, etc.) and 3D (e.g., video, 3D textures) signals. The techniques may also be applied in other types of 2D domains (such as, for example, image classification, object detection and image segmentation, face tracking, style transfer, posture estimation, etc.).
The processing discussed in connection with FIGS. 2 and 9 relates to upconverting images from 540p to 1080p. However, the techniques discussed herein may be used in other scenarios, including: 1) converting to different resolutions than those discussed (e.g., from 480p to 720p or 1080p and variations thereof, etc.); 2) downconverting images to a different resolution; 3) converting images without changes in resolution; 4) images with other values for how the image is represented (e.g., grayscale).
In certain example embodiments, the techniques herein may be applied to processing images (e.g., in real-time and/or during runtime of an application/video game) to provide anti-aliasing capability. In such an example, the size of the image before and after remains the same, but with anti-aliasing applied to the final image. Training for such a process may proceed by taking relatively low-quality images (e.g., those rendered without anti-aliasing) and those rendered with high-quality anti-aliasing (or a level of anti-aliasing that is desirable for a given application or use) and training a neural network (e.g., L&R as discussed above).
Other examples of fixed-resolution applications (e.g., converting images from x resolution to x resolution) may include denoising (e.g., in conjunction with a ray-tracing process that is used by a rendering engine in a game engine). Another application of the techniques herein may include deconvolution, for example in the context of deblurring images and the like.