BRN Discussion Ongoing


Diogenese

Top 20
Jensen Huang on generative AI and stuff:

 

Frangipani

Regular
Thankfully, it is no longer only short-sighted fools who write about Brainchip these days, the kind who heavily short BRN at the same time and thus profit from the very negative sentiment they reinforce. It proves to me that boasting a PhD in Quantitative Finance from Cambridge, for example, is not everything and obviously does not prevent the holder from misjudging and failing to spot a ridiculously undervalued company.

Foresighted business journalists such as Eddy Sunarto, on the other hand, are looking past the dismal share price performance to see (or at least glimpse) the company's true potential. In the following article, which will also please quite a number of BRN shareholders invested in Weebit Nano, Brainchip gets a mention among several ASX-listed companies likely to profit from the AI boom:



“Thanks to the artificial intelligence (AI) boom, Nvidia Corp has now become one of the most valuable companies in the world after crossing the trillion-dollar (USD) market cap mark a couple of weeks ago.

Other companies belonging to that coveted trillion dollar club include Apple (US$2.9T), Microsoft (US$2.5T), Alphabet (US$1.6T), and Amazon (US$1.3T).

Nvidia’s shares surged as tech companies scrambled to buy the company’s advanced GPU chips to power the supercomputers needed to crunch massive amounts of data. ChatGPT, for example, required around 20,000 Nvidia chips to process its AI training data.

Like previous tech booms, Nvidia’s rise is a reminder that it’s the ‘picks and shovels’ investing strategy which could profit the most from the next wave of development.

(…)


BRAINCHIP (ASX:BRN)

Brainchip has basically developed an ultra-low power, AI neural processor that is capable of continuous learning – which allows customers to create ultra-low power chips and systems with the ability to incrementally learn on-chip without the need to retrain in the cloud.

So, it can think with ultra-low power consumption because it eliminates the power usage caused by interaction and communication between separate elements, and avoids dependency on network connections to powerful remote computer infrastructures.”

(…)


Good on ya, Eddy!
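To unpack the quoted claim about incremental on-chip learning: one common way to get cloud-free learning at the edge is to freeze a feature extractor and learn only lightweight class prototypes on the device. The Python sketch below is purely illustrative and is not BrainChip's actual implementation or API; `extract_features` and `PrototypeClassifier` are hypothetical stand-ins.

```python
# Illustrative sketch only: on-device incremental learning via class prototypes.
# NOT BrainChip's API. extract_features() stands in for any frozen, low-power
# feature extractor (e.g. a pre-trained spiking/CNN backbone).
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x: np.ndarray) -> np.ndarray:
    """Hypothetical frozen backbone: maps a raw sample to a 16-d embedding."""
    w = np.random.default_rng(42).normal(size=(x.size, 16)) / np.sqrt(x.size)
    return x.flatten() @ w  # fixed weights: never retrained on-device

class PrototypeClassifier:
    """Nearest-prototype head: the only part that 'learns' on the edge."""
    def __init__(self):
        self.prototypes = {}  # label -> embedding

    def learn(self, label, x):
        emb = extract_features(x)
        if label in self.prototypes:   # incremental refinement of a known class
            self.prototypes[label] = 0.9 * self.prototypes[label] + 0.1 * emb
        else:                          # one-shot addition of a brand-new class
            self.prototypes[label] = emb

    def predict(self, x):
        emb = extract_features(x)
        return min(self.prototypes,
                   key=lambda k: np.linalg.norm(self.prototypes[k] - emb))

clf = PrototypeClassifier()
clf.learn("cat", rng.normal(+1.0, 0.1, (8, 8)))  # learned locally, one shot
clf.learn("dog", rng.normal(-1.0, 0.1, (8, 8)))  # no cloud round-trip needed
print(clf.predict(rng.normal(+1.0, 0.1, (8, 8))))  # -> cat
```

The point of this design, under the stated assumptions, is that adding or refining a class touches only a few vectors in local memory: no network round-trip and no gradient-based retraining in the cloud.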
 

TopCat

Regular
I’ve never noticed anyone from Brainchip “liking” an iniVation post before, especially an Aeveon post.


buena suerte :-)

BOB Bank of Brainchip
Thanks for posting @chapman89!

Nintendo is looking good IMO! 🥳🤞

MegaChips Forms Strategic Partnership with BrainChip​

(Attachments: excerpts from the MegaChips Report 2022, including page 13. Forward-looking statements in that section represent the judgment of MegaChips as of March 31, 2022.)
What a team... nice work Jesse and hairy toes (spa party co-ordinator) :love:... love ya work!

Great news! Finally we are getting some traction... Nintendo... yes please 🙏🙏🙏

Would be great to hear some news regarding Mercedes... just a gut feeling (warm & fuzzy) :) but I 100% believe we ARE in with these guys BIG TIME, and something will appear very soon!!!

Time will tell!?

Good night from a very wet Perth ☔☔
 



Sirod69

bavarian girl ;-)
BrainChip's advanced neuromorphic IP in MegaChips!



Provision of Edge AI Subsystem

We co-design subsystems incorporating characteristic Edge AI IP according to the customer’s system. (We can also support a specified AI IP.)

  • Example 1: Intuitive UI subsystem (BrainChip IP)
  • Example 2: Image AI subsystem (Quadric IP)

 

alwaysgreen

Top 20


AMD's latest AI-heavy presentation.

No mention of Brainchip.



goodvibes

Regular
Rob Telson likes it… me too.

Introducing the Mercedes-Benz Vision One-Eleven: Redefining automotive excellence.


We are thrilled to introduce the world to the Mercedes-Benz Vision One-Eleven, a paradigm-shifting sports car concept that epitomizes the fusion of design, technology and sustainability. As a progressive interpretation of the 1970s icon, the Vision One-Eleven pays homage to the legendary C 111 experimental vehicles, while propelling us into a new era of automotive excellence. With the inclusion of YASA's axial flux motors, the Vision One-Eleven achieves remarkable power density and efficiency, outperforming radial motors of similar nature.

This visionary masterpiece showcases a highly dynamic design language and innovative all-electric drive technology. Experience the pinnacle of ICONIC LUXURY as we redefine the boundaries of what's possible in the automotive industry.

Experience more on mb4.me/MBVision111
#mercedesbenz #VisionOneEleven #C111
 

Deleted member 118

Guest
Only 2 patents I can find with any relevance to Nintendo, but I guess these have been looked at before.


 



Tuliptrader

Regular
Only 2 patents I can find with any relevance to Nintendo, but I guess these have been looked at before.


Nice find @Rocket577

It definitely shows that Nintendo are playing in our "sandbox". Still talking backpropagation though.

I highlighted a paragraph I found interesting.


Systems and methods of neural network training​

Abstract​

A computer system is provided for training a neural network that converts images. Input images are applied to the neural network and a difference in image values is determined between predicted image data and target image data. A Fast Fourier Transform is taken of the difference. The neural network is trained based on the L1 norm of the resulting frequency data.


  • TECHNICAL OVERVIEW
  • [0002]
    The technology described herein relates to machine learning and training machine learned models or systems. More particularly, the technology described includes subject matter that relates to training neural networks to convert or upscale images by using, for example, Fourier Transforms (FTs).
  • INTRODUCTION
  • [0003]
    Machine learning can give computers the ability to “learn” a specific task without expressly programming the computer for that task. An example of machine learning systems includes deep learning neural networks. Such networks (and other forms of machine learning) can be used to, for example, help with automatically recognizing whether a cat is in a photograph. The learning takes place by using thousands or millions of photos to “train” the network (also called a model) to recognize when a cat is in a photograph. The training process can include, for example, determining weights for the model that achieve the indicated goal (e.g., identifying cats within a photo). The training process may include using a loss function in a way (e.g., via backpropagation) that seeks to train the model or neural network that will minimize the loss represented by the function. Different loss functions include L1 (Least Absolute Deviations) and L2 (Least Square Errors) loss functions.
  • [0004]
    It will be appreciated that new and improved techniques, systems, and processes are continually sought after in these areas of technology, such as technology that is used to train machine learned models or neural networks.
  • SUMMARY
  • [0005]
    In some examples, a computer system for training a neural network that processes images is provided. In some examples, the system is used to train neural networks to upscale images from one resolution to another resolution. The system may include computer storage that stores image data for a plurality of images. The system may be configured to generate, from the plurality of images, input image data and then apply the input image data to a neural network to generate predicted output image data. A difference between the predicted output image data and target image data is calculated and that difference may then be transformed into frequency domain data. In some examples, the L1 loss is then used on the frequency domain data calculated from the difference, which is then used to train the neural network using backpropagation (e.g., stochastic gradient descent). Using the L1 loss may encourage sparsity of the frequency domain data, which may also be referred to as, or be part of, Compressed Sensing. In contrast, using the L2 loss on the same frequency domain data may generally not produce good results due to, for example, Parseval's Theorem. In other words, the L2 loss of frequency transformed data, using a Fourier Transform, is the same as the L2 loss of the data—i.e., L2(FFT(data))=L2(data). Thus, frequency transforming the data when using an L2 loss may not produce different results.

TT
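To make paragraph [0005] concrete, here is a minimal NumPy sketch (my own illustration, not Nintendo's code) of the FFT-based L1 loss it describes, together with a check of the Parseval point: with an orthonormal FFT the L2 norm of the difference is unchanged by the transform, so only the L1 norm gives the frequency domain any effect.

```python
# Minimal sketch of the FFT-based L1 loss from paragraph [0005] (not Nintendo's
# actual code). Uses the orthonormal FFT so Parseval's theorem holds exactly.
import numpy as np

rng = np.random.default_rng(0)
predicted = rng.normal(size=(32, 32))   # stand-in for network output image
target    = rng.normal(size=(32, 32))   # stand-in for ground-truth image

diff = predicted - target
freq = np.fft.fft2(diff, norm="ortho")  # frequency-domain difference

l1_freq_loss = np.abs(freq).sum()       # the L1 loss used for training

# Parseval's theorem: an orthonormal FFT preserves the L2 norm, so an
# L2 loss gains nothing from the transform...
assert np.isclose(np.linalg.norm(freq), np.linalg.norm(diff))

# ...whereas the L1 norm genuinely differs, which is why the patent's
# L1-on-frequency-data loss encourages sparsity in the frequency domain.
print(l1_freq_loss, np.abs(diff).sum())  # two different numbers
```

In a real training loop this loss would be computed inside an autograd framework so the gradient can flow back through the FFT to the network weights; NumPy is used here only to show the arithmetic.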
 

Tuliptrader

Regular
A question for @Diogenese

Could the Fast Fourier Transform (FFT) step in this patent be effectively replaced and improved by Akida 2.0?

TT
 

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

Quick reminder,

Today is Quadruple Witching Day on our market.

Should see elevated volume transacting today as funds, indexes & hedge funds rebalance their portfolios.

Evident in pre-market trades on a lot of Australian shares already.

Regards,
Esq.
 