BRN Discussion Ongoing

MDhere

Regular
For me he is still our James Bond. I also still wonder about the vacation spot; the stones in the water and the shells give clues. But we have a clear rule, which is good and should last. It would be easier if he came to the board. Another dynamic.
View attachment 15179

@Fact Finder I'm glad that you are well and that your community could help you too (Diogenese). Still, I would have liked it if you steered our darling; but you do that in your own way. Thank you for that!
Nice
 
  • Like
Reactions: 3 users
And at any moment the next version of Akida could be announced.
See, I had forgotten all about this. It was so much easier back in the day, when Brainchip was teetering on the brink of financial failure, to list all the positives. Now I struggle. THERE ARE JUST TOO MANY.

Stop laughing, Blind Freddie. If you knew, why didn't you say something? OK, you're in for it. I am going to rearrange all the furniture and hide your cane.

FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Love
Reactions: 41 users

MDhere

Regular
See, I had forgotten all about this. It was so much easier back in the day, when Brainchip was teetering on the brink of financial failure, to list all the positives. Now I struggle. THERE ARE JUST TOO MANY.

Stop laughing, Blind Freddie. If you knew, why didn't you say something? OK, you're in for it. I am going to rearrange all the furniture and hide your cane.

FF

AKIDA BALLISTA
Not sure if there is any point "hiding" the cane 🤣
Also, soon, if you did "hide" the cane, all Blind Freddie would need to do is say, "OK Akida, tell me who moved the cane, then locate where that person put it!" Akida would say, "OK Blind Freddie, a distinguished gentleman took it and placed it under the dining table; I can see that it's still there." Hide and seek will be no fun soon, lol.
 
Last edited:
  • Haha
  • Like
Reactions: 9 users

Foxdog

Regular
25 billion chips per year
1 cent per chip
= 0.25 billion (250m) revenue
Say 85% GP

212m profit
PE 30
6.4b mkt cap

SP about 4 bucks.....

Absolutely super conservative; it's gonna be way more than 1c per chip.
5c means SP 16 bucks ish!

Now let's just get to the 25b starting point.

I can't wait for the next two 4Cs; that's gonna really point the direction and confirm the trend.

Have a great rest of the weekend all. I'm gonna go blow a couple hundred bucks at the cas.... mmm, but that's say 220 BRN shares, x16, so it's gonna potentially cost me 3.5k, ouch.... ah well, gotta have fun along the way, right!
PE might be a lot higher by then too - the market will start pricing in exponential earnings potential by that stage, IMO. I'd be happy with $16 for starters tho 😆
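
For anyone who wants to poke at those numbers, here is the same back-of-the-envelope arithmetic as a quick Python sketch. Every input is the speculative assumption from the post above, not guidance, and the ~1.6 billion shares on issue is an added assumption used only to turn market cap into a share price:

```python
# Back-of-the-envelope royalty valuation. Every input is a speculative
# assumption from the post above, not company guidance.
def implied_share_price(chips_per_year, royalty_per_chip, gross_margin, pe, shares_on_issue):
    revenue = chips_per_year * royalty_per_chip   # total royalty revenue
    profit = revenue * gross_margin               # treat GP as profit, as the post does
    market_cap = profit * pe                      # simple earnings multiple
    return market_cap / shares_on_issue

SHARES = 1.6e9  # assumed shares on issue; an illustration, not an official figure

print(f"${implied_share_price(25e9, 0.01, 0.85, 30, SHARES):.2f}")  # ~$3.98 at 1c/chip
print(f"${implied_share_price(25e9, 0.05, 0.85, 30, SHARES):.2f}")  # ~$19.92 at 5c/chip
```

(At 5c per chip the same arithmetic actually lands nearer $20 than $16, so the post's figure is, if anything, the more conservative one.)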
 
  • Like
Reactions: 5 users

Pappagallo

Regular
What does it take to disrupt an existing paradigm? Perhaps MegaChips and its public disclosures can give us some guidance.

On the MegaChips thread @stuart888 posted the following:

“In a much more aggressive move, in mid-2020, founder and Chairman, Masahiro Shindo, identified AI/ML technology to be critical to Megachips’ future and asked the US operation to take a leadership position in moving the company in that direction.

MegaChips began an internal training program to allow a group of dedicated engineers to become experts in this important technology. The company made significant investments in the US to identify key partners, build relationships with local universities, and acquire key talent in this space. In 2021, the company made multi-million-dollar investments in two key AI/IP partners, Brainchip and Quadric, to bolster its offerings in the Edge AI market. The company is now positioned to make an aggressive move into the US ASIC market, using its skills in Edge AI as a key component of that move”

So in mid-2020 MegaChips commenced training a group of dedicated engineers, at the same time that Brainchip was announcing its first EAP customers, Ford Motor Company and Valeo.

I think we can now say with confidence that MegaChips was an EAP in 2020.

Before buying a full IP licence, MegaChips invested close to two years in identifying the paradigm shift and training its staff, noting that "Chairman, Masahiro Shindo, identified AI/ML technology to be critical to Megachips' future", and determined how this was to be embraced and implemented. This decision would not have been taken lightly or overnight, so it was most likely in contemplation back in 2019.

Remember, Brainchip announced the release of AKIDA IP to select customers in June 2019.

NINTENDO was 70% of MegaChips' FUTURE in 2019, and it is hard to believe that it did not figure in MegaChips' thinking where AI/ML was concerned.

Then in October 2021 MegaChips announced to its customers, and the market generally, that it was able to address customer requirements for the entire suite of AI/ML products, from design to implementation.

Then, in Brainchip's 2022 half-yearly report, there appears about 2 million dollars in unexpected revenue, which Brainchip advises primarily relates to IP licence fees.

If these licence fees have arrived via MegaChips, then they relate to MegaChips customers at the very beginning of product development cycles.

As Nintendo accounts for 70% of MegaChips' business, I thought about the statement by the former CEO Mr. Dinardo in 2020 that there was an opportunity for AKIDA to appear in controllers.

If 70% of what you do every day as an engineer at MegaChips relates to Nintendo products, does it not make perfect sense that, when tasked with learning all about AKIDA technology and how it can be applied to and designed into customer technology, the products that occupy 70% of your work day would be foremost in your contemplation?

It just makes sense; in fact, were it otherwise, it would actually be irrational.

So I think, based on what I have read, that we will see a new product offering from Nintendo early in 2023 that will walk like our famous duck, and it will be quacking so loudly you will not be able to ignore it.

This new product from Nintendo was rumoured for release in March 2022 but mysteriously did not appear, and is now rumoured for release in March 2023:


My opinion only DYOR
FF

AKIDA BALLISTA

I can’t find it now, but I remember seeing a presentation recently where the graphic used to represent the gaming applications for Akida was quite obviously a Nintendo Switch controller. The split design is unique, and if they were just referring to gaming in general they would not use it, IMO, but rather a generic traditional controller a la PlayStation/Xbox/almost every other console that has ever existed, because that's much more recognisable as a hand controller.

Using the Switch controller instead is a choice, a very specific one, and in the context of the MegaChips licensing and their relationship with Nintendo, I think the motivations behind this choice are blindingly obvious.
 
  • Like
  • Fire
  • Love
Reactions: 34 users
Where 2025, hurry up.
 
  • Like
  • Fire
Reactions: 8 users
I might be perceived as over the top, with my obvious confirmation bias where the future of Brainchip is concerned, so I thought: what the heck, go for it and ask the really big question:

WILL AKIDA SAVE PLANET EARTH FROM CATASTROPHIC DESTRUCTION BY A ROGUE ASTEROID?

“NASA is ready to test tech that could save the Earth from devastating asteroid impact

The space agency's DART spacecraft will attempt to knock a passing asteroid off its path.
By Alex Hughes
3 days ago
In an effort to develop defences against incoming asteroids in space, NASA has announced a planned test of its Double Asteroid Redirection Test (DART) spacecraft.

The technology will be used to target an asteroid on Monday, September 26 at 12:14am BST. This is an asteroid that poses no threat to Earth, making it a safe way to test DART.

The planned test will show whether a spacecraft is able to autonomously navigate to a target asteroid and intentionally collide with it.

By doing this, DART could change the asteroid’s direction while being measured by ground-based telescopes.”

If Professor Iliadis, Rob Telson, Anil Mankar and Vorago know what they are talking about, AKIDA technology could be part of the autonomous solution navigating DART on this mission.

If so, how cool will it be at BBQs, when people are boasting about their EVs and no-plastic lifestyles, to be able to say, "Really? I am part of a company that saved the planet last Friday." 😂🤣😂🥳😎

My opinion and speculative hope so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 51 users
Anyone looked into Ubotica Technologies?
“Transforming Satellite Services through On-Board AI”

Might not be the same tech, but they’re also working on a project with NASA JPL.


Ubotica Technologies

CogniSat™ Platform

The most comprehensive, adaptable and power-efficient solution for AI on-board satellites


Why AI in Space

Bandwidth
Massively more efficient usage of valuable download communications bandwidth from satellite to Earth.

Latency
Dramatically reduce time to obtain actionable intelligence from images captured by satellites.

Versatility
Enhance mission flexibility - run multiple AI applications at source and in parallel on the same satellite, even on the same data.

Security
Extract needed intelligence at source, discard other potentially sensitive data.

Autonomy
Enable self-contained applications on-board satellite with AI - no post-processing of image data on Earth required.

Why CogniSat™

Bandwidth
Inference at source to extract only valuable data to optimise bandwidth.
Push-to-launch efficient upload of new applications and model updates.

Latency
High frame rate processing at source.
Actionable intelligence extracted in seconds.
Close-to-sensor integration reduces processing and data.

Versatility
Pre- and post-processing and multiple inferences possible on the same image.
Multi-camera support for EO, debris tracking, star tracking, telemetry.

Security
Reliability - Neural Network Supervisor gives confidence factor for AI models.
On-board storage to archive millions of acquired images for later retrieval.
Learning at the edge to enhance model accuracy with real in-flight data.

Autonomy
Improved decision making and collaboration between satellites.
Dynamic Mission Retargeting.
Low power consumption increases satellite duty cycle.

Specifications
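
To make the Bandwidth and Autonomy bullets concrete, here's a toy sketch of what on-board inference buys you: classify frames at source and only queue the useful ones for downlink. This is illustrative Python only, nothing to do with Ubotica's actual software, and classify() is a made-up stand-in for whatever model runs on the satellite:

```python
# Toy sketch of the "Bandwidth" claim: run inference on-board and only
# downlink frames worth sending. Purely illustrative, not Ubotica code.
import random

def classify(frame):
    """Pretend on-board model: return estimated cloud-cover fraction (0-1)."""
    return random.random()

frames = [f"frame_{i:04d}" for i in range(1000)]   # one pass worth of captures
CLOUD_THRESHOLD = 0.3                               # keep mostly-clear frames only

downlink_queue = [f for f in frames if classify(f) < CLOUD_THRESHOLD]
saved = 1 - len(downlink_queue) / len(frames)
print(f"Downlinking {len(downlink_queue)}/{len(frames)} frames (~{saved:.0%} bandwidth saved)")
```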
 
Last edited:
  • Like
  • Thinking
Reactions: 22 users

equanimous

Norse clairvoyant shapeshifter goddess
[quoting @Fact Finder's asteroid post in full, above]
akida rocket.png
 
  • Like
  • Haha
  • Love
Reactions: 25 users

uiux

Regular
[quoting the Ubotica Technologies post in full, above]

"It is built around the Intel® Movidius™ Myriad™ 2 Computer Vision (CV) and Artificial Intelligence (AI) COTS VPU whose 12 vector cores provide high-performance parallel and hardware accelerated compute within a low power envelope."
 
  • Like
  • Love
Reactions: 8 users
"It is built around the Intel® Movidius™ Myriad™ 2 Computer Vision (CV) and Artificial Intelligence (AI) COTS VPU whose 12 vector cores provide high-performance parallel and hardware accelerated compute within a low power envelope."

Thanks @uiux

I see a lot of words there that could be the names of Elon Musk’s other children

I’ll take them as the equivalent of a @Diogenese ogre
 
  • Haha
  • Like
  • Fire
Reactions: 17 users

equanimous

Norse clairvoyant shapeshifter goddess
 
  • Like
  • Fire
Reactions: 10 users

uiux

Regular
  • Like
  • Wow
  • Fire
Reactions: 13 users

equanimous

Norse clairvoyant shapeshifter goddess

View attachment 15253


10 x Microsofts? LOL
When I was getting into science I was reading a good book about helium... couldn't put it down, though.
 
  • Haha
  • Like
  • Love
Reactions: 23 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 11 users
Reading some of the comments around Nintendo & controllers.

I mentioned in a previous post that I've been doing some digging; no major success yet, however I have come across something that shows some potential.

As has been mentioned earlier by @Fact Finder, Nintendo was to be around 70% or so of MegaChips' future.

FF also expanded the thinking on the timing of a possible EAP to mid-2020 or so.

This patent application from Nintendo is from March 2020 and was granted just recently.

BRN obviously has its original MegaChips agreement plus the latest partnership for the ASIC drive in the US.

I'll preface this post by saying there is no mention of Akida, BRN, neuromorphic or SNN, though it does relate to a neural process and implies that CNN as it stands doesn't really make the grade for what the patent is about.

Probs need @Diogenese to run an eye over it whenever he has a chance, as I think it is a variation on CNN only, but it could be a precursor to something that Akida could assist with?

It does discuss cloud data, latency, having to train neural networks, real-time on-device learning/processing needs, imaging needs and neurons.

It includes examples of Nvidia as the GPU and Intel/ARM as the CPU, but more importantly describes accelerators being ASIC/FPGA... now... where do they normally come from again? ;)

The crux is around image up-conversion, rendering within a game, and pixel blocks, but obviously they expand on it to cover many other applications.

It reminds me a little of the event camera / neuromorphic processing that Prophesee is developing & has now partnered on with BRN.

Prophesee is working heavily with Sony & the IMX, as we know, and I get the feeling Sony may have some influence over who Prophesee could also work with in the gaming world, given Sony is the home of the PS.

Nintendo would def want to have their own process(or) & patent.

1. US20210304356 - SYSTEMS AND METHODS FOR MACHINE LEARNED IMAGE CONVERSION






Patent History
Patent number: 11379951
Type: Grant
Filed: Mar 25, 2020
Date of Patent: Jul 5, 2022
Patent Publication Number: 20210304356
Assignee: NINTENDO CO., LTD. (Kyoto)
Inventors: Alexandre Delattre (Viroflay), Théo Charvet (Paris), Raphaël Poncet (Paris)
Primary Examiner: Joni Hsu
Application Number: 16/830,032

INTRODUCTION

Machine learning can give computers the ability to “learn” a specific task without expressly programming the computer for that task. One type of machine learning system is called a convolutional neural network (CNN)—a class of deep learning neural networks. Such networks (and other forms of machine learning) can be used to, for example, help with automatically recognizing whether a cat is in a photograph. The learning takes place by using thousands or millions of photos to “train” the model to recognize when a cat is in a photograph. While this can be a powerful tool, the resulting processing of using a trained model (and training the model) can still be computationally expensive when deployed in a real-time environment.
Image up-conversion is a technique that allows for conversion of images produced in a first resolution (e.g., 540p resolution or 960×540 with 0.5 megapixels) to a higher resolution (e.g., 1080p resolution, 1920×1080, with 2.1 megapixels). This process can be used to show images of the first resolution on a higher resolution display. Thus, for example, a 540p image can be displayed on a 1080p television and (depending on the nature of the up-conversion process) may be shown with increased graphical fidelity as compared to if the 540p image were displayed directly with traditional (e.g., linear) upscaling on a 540p television. Different techniques for image up-conversion can present a trade-off between speed (e.g., how long the process takes for converting a given image) and the quality of the up-converted image. For example, if a process for up-converting is performed in real-time (e.g., such as during a video game), then the image quality of the resulting up-converted image may suffer.
Accordingly, it will be appreciated that new and improved techniques, systems, and processes are continually sought after in these areas of technology.



Additional Example Embodiments

The processing discussed above generally relates to data (e.g., signals) in two dimensions (e.g., images). The techniques herein (e.g., the use of SBTs) may also be applied to data or signals of other dimensions, for example 1D (e.g., speech recognition, anomaly detection on time series, etc.) and 3D (e.g., video, 3D textures) signals. The techniques may also be applied in other types of 2D domains such as, for example, image classification, object detection and image segmentation, face tracking, style transfer, posture estimation, etc.
The processing discussed in connection with FIGS. 2 and 9 relates to upconverting images from 540p to 1080p. However, the techniques discussed herein may be used in other scenarios including: 1) converting to different resolutions than those discussed (e.g., from 480p to 720p or 1080p, and variations thereof, etc.); 2) downconverting images to a different resolution; 3) converting images without changes in resolution; 4) images with other values for how the image is represented (e.g., grayscale).
In certain example embodiments, the techniques herein may be applied to processing images (e.g., in real-time and/or during runtime of an application/video game) to provide anti-aliasing capability. In such an example, the size of the image before and after remains the same—but with anti-aliasing applied to the final image. Training for such a process may proceed by taking relatively low-quality images (e.g., those rendered without anti-aliasing) and those rendered with high quality anti-aliasing (or a level of anti-aliasing that is desirable for a given application or use) and training a neural network (e.g. L&R as discussed above).
Other examples of fixed resolution applications (e.g., converting images from x resolution to x resolution) may include denoising (e.g., in conjunction with a ray-tracing process that is used by a rendering engine in a game engine). Another application of the techniques herein may include deconvolution, for example in the context of deblurring images and the like.
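
For anyone wondering what "machine learned image conversion" looks like in code, below is a minimal, generic learned 2x upscaler in PyTorch using the standard sub-pixel convolution pattern. To be clear, this is not the method claimed in the patent (which revolves around its SBTs); it is only a sketch of the 540p-to-1080p up-conversion task the introduction describes:

```python
# Minimal, generic learned 2x upscaler (sub-pixel convolution).
# Illustrative only - NOT the method claimed in US 11,379,951.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            # predict scale^2 sub-pixels per output channel...
            nn.Conv2d(32, 3 * scale ** 2, kernel_size=3, padding=1),
            # ...then rearrange them into a scale-times-larger image
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

net = TinyUpscaler(scale=2)
frame_540p = torch.randn(1, 3, 540, 960)   # batch of one 960x540 RGB frame
frame_1080p = net(frame_540p)
print(frame_1080p.shape)                   # torch.Size([1, 3, 1080, 1920])
```

In a real-time setting (the patent's concern) the whole trade-off is making a network like this small enough to run per-frame inside a console's power budget.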
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 46 users

Slymeat

Move on, nothing to see.
  • Like
  • Love
  • Fire
Reactions: 8 users

Jefwilto

Regular
  • Haha
  • Like
Reactions: 13 users

uiux

Regular
[quoting the Nintendo patent post (US 11,379,951) in full, above]

I am thinking of just basic use cases:

A controller that can recognise who is holding it and adjust the in-game profile accordingly.
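
A hand-wavy sketch of how that use case might work on-device: enrol a grip/motion signature per player, then match live sensor features against the enrolled profiles. Everything here (the feature vectors, threshold and names) is invented purely for illustration:

```python
# Hand-wavy sketch of controller user recognition. The feature vectors,
# threshold and names are all invented purely for illustration.
import numpy as np

profiles = {                       # enrolled players -> stored grip/IMU signatures
    "alice": np.array([0.82, 0.31, 0.55]),
    "bob":   np.array([0.40, 0.75, 0.20]),
}

def identify(sample: np.ndarray, threshold: float = 0.25):
    """Return the closest enrolled player, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, signature in profiles.items():
        dist = float(np.linalg.norm(sample - signature))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

reading = np.array([0.80, 0.33, 0.52])   # features from the controller's sensors
print(identify(reading))                 # -> "alice": load Alice's in-game profile
```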
 
  • Like
  • Love
  • Fire
Reactions: 20 users