Slymeat
Move on, nothing to see.
Absolutely mind blowing. Spikes do indeed rule.
"So does the man in the moon talk like Donald Duck?"

He did, so he went to see the Quack.
Reading some of the comments around Nintendo & controllers.
I mentioned in a previous post that I've been doing some digging; no major success yet, however I have come across something that shows some potential.
As @Fact Finder mentioned earlier, Nintendo was to be around 70% or so of MegaChips' future.
FF also expanded the thinking on the timing of a possible EAP to mid-2020 or so.
This patent application from Nintendo is from March 2020 and has recently been granted.
BRN obviously has its original MegaChips agreement plus the latest partnership for the ASIC drive in the US.
I'll preface this post by saying there is no mention of Akida, BRN, neuromorphic or SNN, though it does relate to a neural process and implies that CNN as it stands doesn't really make the grade for what the patent is about.
Probs need @Diogenese to run an eye over it whenever he has a chance, as I think it is a variation on CNN only, but it could be a precursor to something that Akida could assist with?
It discusses cloud data, latency, having to train neural networks, real-time on-device learning/processing needs, imaging needs and neurons.
It includes examples of Nvidia as the GPU and Intel/ARM as the CPU, but more importantly describes the accelerators as ASIC/FPGA... now... where do they normally come from again?
The crux is around image up-conversion, rendering within a game, and pixel blocks, but obviously they expand on it to cover many other applications.
It reminds me a little of the event camera / neuromorphic processing that Prophesee is developing and has now partnered on with BRN.
Prophesee is working heavily with Sony and the IMX, as we know, and I get the feeling Sony may have some influence over who Prophesee can work with in the gaming world, given Sony is the home of the PS.
Nintendo would def want to have their own process(or) & patent.
1. US20210304356 - SYSTEMS AND METHODS FOR MACHINE LEARNED IMAGE CONVERSION
U.S. Patent for Systems and methods for machine learned image conversion Patent (Patent # 11,379,951 issued July 5, 2022) - Justia Patents Search
A computer system is provided for converting images through use of a trained neural network. A source image is divided into blocks and context data is added to each pixel block. The context blocks are split into channels and each channel from the same context block is added to the same...
Patent History
Patent number: 11379951
Type: Grant
Filed: Mar 25, 2020
Date of Patent: Jul 5, 2022
Patent Publication Number: 20210304356
Assignee: NINTENDO CO., LTD. (Kyoto)
Inventors: Alexandre Delattre (Viroflay), Théo Charvet (Paris), Raphaël Poncet (Paris)
Primary Examiner: Joni Hsu
Application Number: 16/830,032
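The abstract's block-and-context idea (split the source image into pixel blocks, pad each with surrounding context, then treat the context blocks as channels) can be sketched in a few lines. This is purely my own toy illustration of the splitting step; the block and context sizes are invented, not taken from the patent:

```python
import numpy as np

def blocks_with_context(img, block=4, ctx=2):
    """Split a 2D image into block x block tiles, each carrying ctx extra
    pixels of surrounding context (zero-padded at the image borders).
    Assumes the image dimensions are multiples of the block size."""
    h, w = img.shape
    padded = np.pad(img, ctx)  # zero context outside the image
    tiles = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            # each context block covers the tile plus ctx pixels on every side
            tiles.append(padded[y:y + block + 2 * ctx,
                                x:x + block + 2 * ctx])
    return np.stack(tiles)

img = np.arange(64, dtype=np.float32).reshape(8, 8)
tiles = blocks_with_context(img)
print(tiles.shape)  # (4, 8, 8): four 4x4 tiles, each grown to 8x8 by context
```

A real implementation would feed these per-block context windows to the trained network; the point here is just that each block sees beyond its own borders.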
INTRODUCTION
Machine learning can give computers the ability to “learn” a specific task without expressly programming the computer for that task. One type of machine learning system is the convolutional neural network (CNN), a class of deep learning neural networks. Such networks (and other forms of machine learning) can be used to, for example, help with automatically recognizing whether a cat is in a photograph. The learning takes place by using thousands or millions of photos to “train” the model to recognize when a cat is in a photograph. While this can be a powerful tool, running a trained model (and training the model) can still be computationally expensive when deployed in a real-time environment.
Image up-conversion is a technique that allows for conversion of images produced in a first resolution (e.g., 540p resolution, or 960×540 with 0.5 megapixels) to a higher resolution (e.g., 1080p resolution, 1920×1080, with 2.1 megapixels). This process can be used to show images of the first resolution on a higher-resolution display. Thus, for example, a 540p image can be displayed on a 1080p television and (depending on the nature of the up-conversion process) may be shown with increased graphical fidelity as compared to if the 540p image were displayed directly with traditional (e.g., linear) upscaling on a 540p television. Different techniques for image up-conversion can present a trade-off between speed (e.g., how long the process takes for converting a given image) and the quality of the up-converted image. For example, if a process for up-converting is performed in real-time (e.g., such as during a video game), then the image quality of the resulting up-converted image may suffer.
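For scale, the "traditional (e.g., linear) upscaling" baseline the patent contrasts itself with is plain resampling. A minimal nearest-neighbour 2x upscale (my own illustration, not code from the patent) shows the 540p-to-1080p pixel arithmetic:

```python
import numpy as np

def upscale_2x_nearest(img):
    """Naive 2x nearest-neighbour upscale: every source pixel is
    duplicated into a 2x2 block, so 960x540 becomes 1920x1080."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

lowres = np.zeros((540, 960), dtype=np.uint8)   # 540p greyscale, ~0.5 MP
hires = upscale_2x_nearest(lowres)
print(hires.shape)  # (1080, 1920), ~2.1 MP
```

A learned up-converter aims to beat this baseline in quality at a comparable real-time cost.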
Accordingly, it will be appreciated that new and improved techniques, systems, and processes are continually sought after in these areas of technology.
Additional Example Embodiments
The processing discussed above generally relates to data (e.g., signals) in two dimensions (e.g., images). The techniques herein (e.g., the use of SBTs) may also be applied to data or signals of other dimensions, for example, 1D (e.g., speech recognition, anomaly detection on time series, etc.) and 3D (e.g., video, 3D textures) signals. The techniques may also be applied in other types of 2D domains such as, for example, image classification, object detection and image segmentation, face tracking, style transfer, posture estimation, etc.
The processing discussed in connection with FIGS. 2 and 9 relates to upconverting images from 540p to 1080p. However, the techniques discussed herein may be used in other scenarios including: 1) converting to different resolutions than those discussed (e.g., from 480p to 720p or 1080p and variations thereof, etc.), 2) downconverting images to a different resolution, 3) converting images without changes in resolution, 4) images with other values for how the image is represented (e.g., grayscale).
In certain example embodiments, the techniques herein may be applied to processing images (e.g., in real-time and/or during runtime of an application/video game) to provide anti-aliasing capability. In such an example, the size of the image before and after remains the same—but with anti-aliasing applied to the final image. Training for such a process may proceed by taking relatively low-quality images (e.g., those rendered without anti-aliasing) and those rendered with high quality anti-aliasing (or a level of anti-aliasing that is desirable for a given application or use) and training a neural network (e.g. L&R as discussed above).
Other examples of fixed resolution applications (e.g., converting images from x resolution to x resolution) may include denoising (e.g., in conjunction with a ray-tracing process that is used by a rendering engine in a game engine). Another application of the techniques herein may include deconvolution, for example in the context of deblurring images and the like.
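To make the fixed-resolution ("x resolution to x resolution") case concrete, here is a classical non-neural stand-in for the denoising example: a 3x3 box blur whose output shape matches its input. This is my own sketch of the shape-preserving idea only, not the patent's learned denoiser:

```python
import numpy as np

def box_denoise(img, k=3):
    """Shape-preserving k x k box blur: a classical baseline for the
    fixed-resolution denoising application. Output shape == input shape."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate edges, no shrinkage
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):  # average the k*k shifted copies
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
noisy = rng.normal(0.5, 0.1, (64, 64)).astype(np.float32)
smooth = box_denoise(noisy)
print(smooth.shape == noisy.shape)  # True: resolution is unchanged
```

A learned denoiser replaces the fixed averaging kernel with trained weights, but the input/output geometry is the same.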
Hey mate, your light-hearted tone in your above post is bang on the knocker, I reckon. I may well be perceived as over the top with my obvious confirmation bias where the future of BrainChip is concerned, so I thought what the heck, go for it and ask the really big question:
WILL AKIDA SAVE PLANET EARTH FROM CATASTROPHIC DESTRUCTION BY A ROGUE ASTEROID?
“NASA is ready to test tech that could save the Earth from devastating asteroid impact
The space agency's DART spacecraft will attempt to knock a passing asteroid off its path.
By Alex Hughes
3 days ago
In an effort to develop defences against incoming asteroids in space, NASA has announced a planned test of its Double Asteroid Redirection Test (DART) spacecraft.
The technology will be used to target an asteroid on Monday, September 26 at 12:14am BST. This asteroid poses no threat to Earth, making it a safe way to test DART.
The planned test will show whether a spacecraft is able to autonomously navigate to a target asteroid and intentionally collide with it.
By doing this, DART could change the asteroid’s direction while being measured by ground-based telescopes.”
If Professor Iliadis, Rob Telson, Anil Mankar and Vorago know what they are talking about, AKIDA technology could be part of the autonomous solution navigating DART on this mission.
If so, how cool will it be at BBQs, when people are boasting about their EVs and no-plastic lifestyles, to be able to say, "Really, I am part of a company that saved the planet last Friday."
My opinion and speculative hope, so DYOR.
FF
AKIDA BALLISTA
"I am thinking of just basic use cases:
A controller that can recognise who is holding it to adjust the in game profile accordingly"

Could you develop that maybe with a fingerprint sensor in the controller, like unlock on a phone?
Once the print is registered, whoever holds the controller, it picks up their print on one of the buttons as a sensor?
Or are you thinking of a cam to image the holder's face for ID?
"yea just some cheap low power cam
example use would be when you create the profile on the switch, it gets you to 'enrol' your face"

"A Fact Finder weekend crazy idea. I am thinking of just basic use cases:
A controller that can recognise who is holding it to adjust the in game profile accordingly"

Shouldn't be too hard. Front-facing cheap cam, as you say.
I can’t find it now but I remember seeing a presentation recently where the graphic used to represent the gaming applications for Akida was quite obviously a Nintendo Switch controller. The split design is very unique and if they were just referring to gaming in general they would not use it IMO, but rather a generic traditional controller a la PlayStation/XBox/almost every other console that has ever existed because that’s much more recognisable as a hand controller.
Using the Switch controller instead is a choice, a very specific one, and in the context of the MegaChips licensing and their relationship with Nintendo I think the motivations behind this choice are blindingly obvious.
"I am going to make a prototype right now."

Make sure you patent it first haha
"I can't find it now but I remember seeing a presentation recently where the graphic used to represent the gaming applications for Akida was quite obviously a Nintendo Switch controller. [...]"

The presentation is on the MegaChips front page.
"I wouldn't want to piss Super Mario off"

Maybe you need to change your avatar to the turtle.
Don’t recall this podcast with Doug Fairbairn being posted before.
Was recorded in June this year (after the BrainChip podcast) and gives a really good insight into MegaChips' priorities for the US market: wearables (headphones and earbuds), the security market and the industrial space are the three they're actively pursuing.
Definitely not detracting from Nintendo though… this podcast was focussed purely on their push into the US market.
Podcast EP84: MegaChips and Their Launch in the US with Doug Fairbairn - Semiwiki
Dan is joined by semiconductor and EDA industry veteran Douglas Fairbairn. Doug provides details about MegaChips, where he currently heads business development. MegaChips is a large, successful 30-year-old semiconductor company based in Japan. Doug is helping MegaChips launch in the US with a...
"yea just some cheap low power cam
example use would be when you create the profile on the switch, it gets you to 'enrol' your face
think of how useful this could be, just passing the controller around and having your settings load automatically, eg. sensitivity, inverted controls, keybindings etc"

I am thinking about an integrated camera too, not only for face detection but also for object detection/tracking and gestures.