BRN Discussion Ongoing

Slymeat

Move on, nothing to see.
Reactions: 8 users

Jefwilto

Regular
Reactions: 13 users

uiux

Regular
Reading some of the comments around Nintendo & controllers.

I mentioned in a previous post that I've been doing some digging; no major success yet, but I have come across something that shows some potential.

As @Fact Finder mentioned earlier, Nintendo was to be around 70% or so of MegaChips' future.

FF also expanded the thinking on the timing of a possible EAP to mid 2020 or so.

This patent application from Nintendo is from March 2020 and has only recently been granted.

BRN obviously has its original MegaChips agreement plus the latest partnership for the ASIC drive in the US.

I'll preface this post by saying there is no mention of Akida, BRN, neuromorphic or SNN, though it does relate to a neural process and implies that CNN as it stands doesn't really make the grade for what the patent is about.

Probs need @Diogenese to run an eye over it whenever he has a chance, as I think it is a variation on CNN only, but could it be a precursor to something that Akida could assist with?

It discusses cloud data, latency, having to train neural networks, real-time on-device learning/processing needs, imaging needs and neurons.

It includes examples of Nvidia as the GPU and Intel/ARM as the CPU but, more importantly, describes the accelerators as ASIC/FPGA.....now....where do they normally come from again? ;)

The crux is around image upconversion, rendering within a game, and pixel blocks, but obviously they expand on it to cover many other applications.

It reminds me a little of the event camera / neuromorphic processing that Prophesee is developing and has now partnered on with BRN.

Prophesee is working heavily with Sony and the IMX as we know, and I get the feeling Sony may have some influence over who Prophesee can also work with in the gaming world, given Sony is the home of the PS.

Nintendo would definitely want to have their own process(or) & patent.

US20210304356 - SYSTEMS AND METHODS FOR MACHINE LEARNED IMAGE CONVERSION

Patent History
Patent number
: 11379951
Type: Grant
Filed: Mar 25, 2020
Date of Patent: Jul 5, 2022
Patent Publication Number: 20210304356
Assignee: NINTENDO CO., LTD. (Kyoto)
Inventors: Alexandre Delattre (Viroflay), Théo Charvet (Paris), Raphaël Poncet (Paris)
Primary Examiner: Joni Hsu
Application Number: 16/830,032

INTRODUCTION​

Machine learning can give computers the ability to “learn” a specific task without expressly programming the computer for that task. One type of machine learning system is called convolutional neural networks (CNNs)—a class of deep learning neural networks. Such networks (and other forms of machine learning) can be used to, for example, help with automatically recognizing whether a cat is in a photograph. The learning takes place by using thousands or millions of photos to “train” the model to recognize when a cat is in a photograph. While this can be a powerful tool, the resulting processing of using a trained model (and training the model) can still be computationally expensive when deployed in a real-time environment.
Image up-conversion is a technique that allows for conversion of images produced in a first resolution (e.g., 540p resolution or 960×540 with 0.5 megapixels) to a higher resolution (e.g., 1080p resolution, 1920×1080, with 2.1 megapixels). This process can be used to show images of the first resolution on a higher resolution display. Thus, for example, a 540p image can be displayed on a 1080p television and (depending on the nature of the up-conversion process) may be shown with increased graphical fidelity as compared to if the 540p image were displayed directly with traditional (e.g., linear) upscaling on a 540p television. Different techniques for image up-conversion can present a trade-off between speed (e.g., how long the process takes for converting a given image) and the quality of the up-converted image. For example, if a process for up-converting is performed in real-time (e.g., such as during a video game), then the image quality of the resulting up-converted image may suffer.
Accordingly, it will be appreciated that new and improved techniques, systems, and processes are continually sought after in these areas of technology.
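To put the "traditional (e.g., linear) upscaling" baseline the patent contrasts itself against in concrete terms: the simplest non-learned approach just replicates pixels. This is a toy nearest-neighbour sketch of my own (not code from the patent); the tiny 2×2 "image" and the factor are purely illustrative:

```python
def upscale_nearest(image, factor):
    """Nearest-neighbour upscaling: each source pixel becomes a factor x factor block."""
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

# A 2x2 'image' upscaled 2x to 4x4 (540p -> 1080p is the same idea at scale).
src = [[1, 2],
       [3, 4]]
print(upscale_nearest(src, 2))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The whole point of a machine-learned up-converter is to do better than this kind of blocky replication, at the cost of far more compute per frame.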



Additional Example Embodiments​

The processing discussed above generally relates to data (e.g., signals) in two dimensions (e.g., images). The techniques herein (e.g., the use of SBTs) may also be applied to data or signals of other dimensions, for example, 1D (e.g., speech recognition, anomaly detection on time series, etc.) and 3D (e.g., video, 3D textures) signals. The techniques may also be applied in other types of 2D domains such as, for example, image classification, object detection and image segmentation, face tracking, style transfer, posture estimation, etc.
The processing discussed in connection with FIGS. 2 and 9 relates to upconverting images from 540p to 1080p. However, the techniques discussed herein may be used in other scenarios, including: 1) converting to different resolutions than those discussed (e.g., from 480p to 720p or 1080p and variations thereof, etc.); 2) downconverting images to a different resolution; 3) converting images without changes in resolution; 4) images with other values for how the image is represented (e.g., grayscale).
In certain example embodiments, the techniques herein may be applied to processing images (e.g., in real-time and/or during runtime of an application/video game) to provide anti-aliasing capability. In such an example, the size of the image before and after remains the same—but with anti-aliasing applied to the final image. Training for such a process may proceed by taking relatively low-quality images (e.g., those rendered without anti-aliasing) and those rendered with high quality anti-aliasing (or a level of anti-aliasing that is desirable for a given application or use) and training a neural network (e.g. L&R as discussed above).
Other examples of fixed resolution applications (e.g., converting images from x resolution to x resolution) may include denoising (e.g., in conjunction with a ray-tracing process that is used by a rendering engine in a game engine). Another application of the techniques herein may include deconvolution, for example in the context of deblurring images and the like.
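For context on the denoising application mentioned above: the simplest non-learned baseline a trained network would aim to beat is a fixed smoothing kernel, e.g. a 3×3 box filter. A toy sketch of that fixed approach (mine, not the patent's; values are illustrative):

```python
def mean_filter(img):
    """3x3 box-filter denoise: replace each pixel with the mean of its
    3x3 neighbourhood, clamping indices at the image edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

# A single bright 'noise' pixel gets smeared down: the centre 9 is averaged to 1.0.
noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(mean_filter(noisy)[1][1])   # -> 1.0
```

A fixed kernel like this blurs edges along with the noise, which is exactly why learned denoisers (e.g. alongside ray tracing in a game engine) are attractive.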

I am thinking of just basic use cases:

A controller that can recognise who is holding it to adjust the in game profile accordingly
 
Reactions: 20 users

MDhere

Top 20
AKIDA got a free plug today. Bridge to Brisbane was on, and one of the bosses asked us if we wanted any random words added into his speech because it spruces up people's attention, so I said yes, sure, can you throw the word AKIDA into your speech, and he did! 🤣 15,000 people heard the word AKIDA. So in their sleep they may be like, hmmm, AKIDA, what is this wonderful mysterious word randomly placed in the middle of a speech, and they will be hypnotised. 🤣
 
Reactions: 47 users

TasTroy77

Founding Member
Back in the Nintendo camp: looking at the announcement on the MegaChips website, it clearly states that MegaChips will "solve customers' problems by integrating Akida's innovative technology into our ASIC solution services in markets such as automotive, IoT, cameras, gaming and industrial robotics". So, by mentioning gaming, and given that Nintendo is a large-scale customer of MegaChips, there has to be a better than average chance of Nintendo being integrated with Akida IP.

Screenshot_20220828-205225_Drive.jpg
 
Reactions: 42 users

Deadpool

Did someone say KFC
I might be perceived as over the top with my obvious confirmation bias where the future of Brainchip is concerned, so I thought, what the heck, go for it and ask the really big question:

WILL AKIDA SAVE PLANET EARTH FROM CATASTROPHIC DESTRUCTION BY A ROGUE ASTEROID?

“NASA is ready to test tech that could save the Earth from devastating asteroid impact​

The space agency's DART spacecraft will attempt to knock a passing asteroid off its path.
By Alex Hughes
3 days ago
In an effort to develop defences against incoming asteroids in space, NASA has announced a planned test of its Double Asteroid Redirection Test (DART) spacecraft.

The technology will be used to target an asteroid on Monday, September 26 at 12:14am BST. This asteroid poses no threat to Earth, making it a safe way to test DART.

The planned test will show whether a spacecraft is able to autonomously navigate to a target asteroid and intentionally collide with it.

By doing this, DART could change the asteroid’s direction while being measured by ground-based telescopes.”

If Professor Iliadis, Rob Telson, Anil Mankar and Vorago know what they are talking about, AKIDA technology could be part of the autonomous solution navigating DART on this mission.

If so, how cool will it be at BBQs when people are boasting about their EVs and no-plastic lifestyles to be able to say, “Really, I am part of a company that saved the planet last Friday.” 😂🤣😂🥳😎

My opinion and speculative hope so DYOR
FF

AKIDA BALLISTA
Hey mate, the light-hearted tone of your above post is bang on the knocker, I reckon.
I said in an earlier post, in a roundabout way, that it is an extremely humbling experience to be involved (albeit as a minor shareholder) in what is undoubtedly at play here. Beneficial AI at its finest, and we're only getting started.
 
Reactions: 16 users
I am thinking of just basic use cases:

A controller that can recognise who is holding it to adjust the in game profile accordingly
Could you maybe develop that with a fingerprint sensor in the controller, like the unlock on a phone?

Once the print is registered, whoever holds the controller, it picks up their print on one of the buttons acting as a sensor?

Or are you thinking of a cam to image the holder's face for ID?
 
Reactions: 6 users

uiux

Regular
Could you develop that maybe with fingerprint sensor in the controller like unlock on phone.

Once print registered, whoever holds the controller it picks up their print on one of the buttons as a sensor?

Or you thinking cam to image the holders face for ID?

yea just some cheap low power cam

example use would be when you create the profile on the switch, it gets you to 'enrol' your face


think of how useful this could be, just passing the controller around and having your settings load automatically, e.g. sensitivity, inverted controls, keybindings etc
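A toy sketch of that enrol-then-match flow (all names, vectors and the threshold here are hypothetical; a real system, Akida or otherwise, would produce face embeddings from the camera rather than hand-written vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ProfileStore:
    """Toy enrol-and-match: store one face embedding per profile, then
    load the closest profile when the controller's camera sees a face."""
    def __init__(self, threshold=0.8):
        self.profiles = {}          # profile name -> stored embedding
        self.threshold = threshold  # minimum similarity to count as a match

    def enrol(self, name, embedding):
        self.profiles[name] = embedding

    def match(self, embedding):
        best_name, best_sim = None, self.threshold
        for name, stored in self.profiles.items():
            sim = cosine(embedding, stored)
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name            # None if nobody is close enough

store = ProfileStore()
store.enrol("player1", [0.9, 0.1, 0.2])
store.enrol("player2", [0.1, 0.9, 0.3])
print(store.match([0.88, 0.12, 0.21]))   # -> player1
```

Once a profile is matched, loading sensitivity, inverted controls, keybindings etc. is just a dictionary lookup on top of this.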
 
Reactions: 14 users
yea just some cheap low power cam

example use would be when you create the profile on the switch, it gets you to 'enrol' your face
Shouldn't be too hard. Front-facing cheap cam, as you say.
 
Reactions: 6 users
I am thinking of just basic use cases:

A controller that can recognise who is holding it to adjust the in game profile accordingly
A Fact Finder weekend crazy idea.

Tony Dawe is alleged to have said something along the lines that a couple of EAPs had completely thrown out their ideas for AKD1000 after realising how powerful it was, and gone back to the drawing board, thus delaying the anticipated outcome of the EAP.

What if one of these was Nintendo, and that is why the March 2021 release date of their new product was abandoned and is now estimated to be March 2023?

If AKIDA is capable of increasing the Rover's speed to 20 kph on the Moon, what can it do for Nintendo?

As I said a weekend crazy idea.

FF

AKIDA BALLISTA
 
Reactions: 51 users

uiux

Regular
Reactions: 20 users
I can’t find it now, but I remember seeing a presentation recently where the graphic used to represent the gaming applications for Akida was quite obviously a Nintendo Switch controller. The split design is unique, and if they were just referring to gaming in general they would not use it, IMO, but rather a generic traditional controller a la PlayStation/Xbox/almost every other console that has ever existed, because that is much more recognisable as a hand controller.

Using the Switch controller instead is a choice, a very specific one, and in the context of the MegaChips licensing and their relationship with Nintendo I think the motivations behind this choice are blindingly obvious.
IMG_20220828_203246.jpg
 
Reactions: 38 users
Reactions: 7 users

uiux

Regular
Reactions: 11 users

clip

Regular
I can’t find it now but I remember seeing a presentation recently where the graphic used to represent the gaming applications for Akida was quite obviously a Nintendo Switch controller. The split design is very unique and if they were just referring to gaming in general they would not use it IMO, but rather a generic traditional controller a la PlayStation/XBox/almost every other console that has ever existed because that’s much more recognisable as a hand controller.

Using the Switch controller instead is a choice, a very specific one, and in the context of the MegaChips licensing and their relationship with Nintendo I think the motivations behind this choice are blindingly obvious.
The presentation is on the MegaChips front page :)
_____________________________________________________________________________
who-is-megachips.png
 
Reactions: 30 users
Reactions: 3 users

butcherano

Regular
Don’t recall this podcast with Doug Fairbairn being posted before.

Was recorded in June this year (after the Brainchip podcast) and gives a really good insight into MegaChips' priorities for the US market - wearables (headphones and earbuds), the security market and the industrial space are the three they're pursuing actively.

Definitely not detracting from Nintendo though…this podcast was focussed purely on their push into the US market.


Edit - sorry @DingoBorat, I just realised that you posted this a couple of months ago….but was definitely worth a fresh listen!…(y)
 
Reactions: 23 users

uiux

Regular
Don’t recall this podcast with Doug Fairbairn being posted before.

Was recorded in June this year (after the Brainchip podcast) and gives a really good insight into the Megachips priorities for the US market - wearables (headphones and earbuds), the security market and the industrial space are the three that they’re pursuing actively.

Definitely not detracting from Nintendo though…this podcast was focussed purely on their push into the US market.


Always nice to be grounded
 
Reactions: 4 users

Sirod69

bavarian girl ;-)
I've just discovered the following on LinkedIn. It's great to see which speakers are participating here; it certainly has a lot to do with Brainchip.
Just read it through and have fun!

Ricky Hudi, Founder & CEO bei FMT - Future Mobility Technologies GmbH, wrote:

We are only one month away from The Autonomous MainEvent on September 27th 2022 at the Hofburg Imperial Palace in Vienna.
I am very much looking forward to welcoming so many of you – not only in Vienna but worldwide through our virtual platform.

Our slogan for this year's main event will be:
"Act to Impact"

As Chairman of The Autonomous, it’s an honor to welcome an incredible lineup of speakers - leading experts, executives as well as prestigious academics and thought leaders - who are shaping the future of #autonomousmobility and working on the biggest challenges for the industry.

Our participants will have the chance to listen to 6 panels, 4 keynotes, and join one of our 5 workshops.
Seats are filling up fast, so if you want to be part of this unique global gathering, be quick and book your ticket:
https://lnkd.in/eBiefAyY

Register now to hear from and network with:
Georges Massing (Vice President MB.OS Automated Driving, Powernet & E/E Integration, Mercedes-Benz)
Nakul Duggal (Senior Vice President and General Manager Automotive, Qualcomm)
Markus Heyn (Member of the Board of Management, Bosch)
Maria Anhalt (CEO, Elektrobit)
Lars Reger (CTO & Executive Vice President, NXP Semiconductors)
Dr. Riccardo Mariani (VP of Industry Safety, NVIDIA)
Essa Al-Saleh (CEO and Board Member, Volta Trucks)
Johann Jungwirth (Senior Vice President of Autonomous Vehicles, Mobileye)
Andreas Urschitz (CMO, Infineon)
Glen De Vos (Senior Vice President and CTO, Aptiv)
Philip Koopman (Associate Professor, Carnegie Mellon University)
Indu Vijayan (Director of Product Management, AEye)
Peter S. Schiefer (President, Automotive Division, Infineon)
Mike Potts (CEO, StreetDrone)
Georg Kopetz (CEO, TTTech Auto)
Robert E. Siegel (Lecturer in Management at the Stanford Graduate School of Business)
Stefan Poledna (CTO, TTTech Auto)
Richard Damm (President of Kraftfahrt-Bundesamt, KBA and Chairman of the UNECE Working Party on Automated, Autonomous and Connected Vehicles)
Christoph Hartung (CEO, ETAS GmbH)
Dr. Benedikt Wolfers (Founding Partner, PSWP)
Bryant Walker Smith (Professor at the University of South Carolina)
Jens Kötz (Head of Development Architecture, Energy, Security and Diagnostic Functions, AUDI AG)
Karsten Michels (Head of Product Line HPC at Continental Automotive)
Bernhard Mueller-Bessler (Head of Autonomous Solutions, Hexagon)
Frank Han (Chief Software Architect, Changan Automobile)
Karl Håkan Schildt (Senior Vice President, Traton Group)
Andreas Tschiesner (Senior Partner, McKinsey & Company)
Michael Nikowitz (Coordinator for automated driving at the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology)

We have a fully packed agenda covering hot topics and the most pressing challenges for the future of safe autonomous mobility.
Make sure to secure your ticket!
 
Reactions: 28 users

clip

Regular
yea just some cheap low power cam

example use would be when you create the profile on the switch, it gets you to 'enrol' your face


think of how useful this could be, just passing the controller around and having your settings load automatically, eg. sensitivity, inverted controls, keybindings etc
I am thinking about an integrated camera too, not only for face detection but also for object detection/tracking and gestures.
Then literally everything in front of the camera can become game-relevant.
For example: you need to open this door? Just hold up the yellow key card in front of the camera.
Objects in front of the camera can trigger game events.

The only problem I see is that a camera in a gaming console with an internet connection might raise concerns regarding privacy....
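A toy sketch of the key-card idea (the thresholds and the naive RGB colour test are made up for illustration; a real system would run a trained detector on the camera frame rather than raw pixel ranges):

```python
def sees_yellow_card(frame, min_fraction=0.1):
    """Fire a game event if enough of the frame is 'yellow':
    high red and green, low blue. Thresholds are illustrative."""
    yellow = sum(
        1 for row in frame for (r, g, b) in row
        if r > 200 and g > 200 and b < 100
    )
    total = len(frame) * len(frame[0])
    return yellow / total >= min_fraction

# A tiny 2x2 'frame' that is one-quarter yellow triggers the event.
Y, K = (255, 220, 30), (0, 0, 0)
frame = [[Y, K],
         [K, K]]
print(sees_yellow_card(frame))   # -> True (0.25 >= 0.1)
```

The game loop would call something like this per frame and fire the "door opens" event when it returns True.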
 
Reactions: 2 users