BRN Discussion Ongoing

Silly question, but when exactly is it? I'm happy anyway!


I honestly can't remember what time zone they release in.
@cosors
One for you
 
Reactions: 2 users

Dhm

Regular
Hi @Steve10

A few of us have latched onto that idea with great excitement, and I hope Qualcomm does look to include Brainchip in the future for its science-fiction qualities. However, the idea was raised by @Bravo and debunked by @Diogenese as NOT being the case currently.

Qualcomm were working with iCatch and another company, whose name escapes me, to provide its AI needs.

EDIT: the links below discuss Qualcomm's tech; although they look the same, they have different content:





It is, however, highly likely Brainchip could improve performance, but given there has been no announcement re a partnership/agreement, I don't think it's the case at the moment!


On the flip side, if another phone company wants to match or beat Qualcomm's current technology, then a Prophesee/Brainchip event camera in their phone would be a good fix, so fingers crossed Apple, Pixel, Nokia etc. are developing that.

:)
Yes, we have tossed this around over the last month or so. @chapman89 published a chat he had with Luca Verre, who said this:
Screen Shot 2023-02-19 at 1.21.01 pm.png



I have no doubt we will be more to Prophesee than our current status of 'technology demonstrator', but for the time being that is what we are.
 
Reactions: 18 users

Diogenese

Top 20
OpenAI produced its own chip technology for ChatGPT based on the RISC-V open-source architecture, I believe. The main keywords they use to describe it are similar to Akida. You can speak to the bot in a certain way and it will disclose some information about its creation.

Needless to say, Akida could be used in conjunction with its in-house design for GPT-4, but I'm confident it's not part of previous versions.

Happy for others to chip in here.

In a previous post I mentioned I found it interesting that Intel ditched a billion dollars of investment into RISC-V at the same time it incorporated Brainchip into its programs, diverting that billion dollars of funding to this program.

I'd generally say that it's a very good sign, but RISC-V is competition to x86, which is licensed by Intel. So it's probable Intel may have other forces twisting its arm to switch.
Interesting thoughts on Intel and RISC-V. Maybe Intel are applying the "if you can't beat 'em, join 'em" principle.

Do you have a link for the OpenAi/RISC-V SoC? I couldn't find any patent applications, but there is an 18 month NDA on patent applications.

Ilya Sutskever, OpenAI's Chief Scientist, is named as inventor in 19 Google patents pre-2017.

https://worldwide.espacenet.com/patent/search/family/057135560/publication/US2018032863A1?q=in = "Sutskever" AND nftxt = "google"

When you look at OpenAI's mission statement about beneficial AI,

https://openai.com/about/

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

I wonder if he was a tad disillusioned at Google.
 
Reactions: 6 users

Diogenese

Top 20
Today I did a Google search that Stuart888 has put up a few times, looking at the last week only.


If you go to tools and choose only the last week, it will give you a link to a NASA document dated 13th March 2023.
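As a side note, the "past week" restriction can also be baked straight into the search URL via Google's `tbs` parameter (`qdr:w` = past week). That parameter is undocumented, so treat its name as an assumption rather than a stable API. A minimal Python sketch:

```python
from urllib.parse import urlencode

# Build a Google search URL restricted to the past week.
# tbs=qdr:w is Google's time-range filter (h/d/w/m/y); it is undocumented,
# so treat the parameter name as an assumption rather than a stable API.
query = "NASA neuromorphic Akida"
url = "https://www.google.com/search?" + urlencode({"q": query, "tbs": "qdr:w"})
print(url)
```

Handy if you want to bookmark a recurring "last week only" search rather than clicking through Tools each time.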

There are numerous scopes in here that relate to Akida, such as:

Neuromorphic Software for Cognition and Learning for Space Missions

Extreme Radiation Hard Neuromorphic Hardware

Radiation Tolerant Neuromorphic Learning Hardware


Akida is mentioned in the document.

Someone else might be able to shed more light on this document as it has many pages.

Apologies if already posted, I can't keep up with all the posts and gifs sometimes!
I knew those NASA rockets were fast, but 13 March???
 
Reactions: 8 users

BigDonger101

Founding Member
I think there would be many of you here who have followed Brainchip extremely closely for the past 3-4 years, and maybe longer.

If you're like me - you don't even look at the cash in your portfolio.

For full transparency, my average is 33 cents. I'm that excited for the company that I do not feel the need to do as much research as I used to, and can finally start focusing on different sectors without any worry.

I went through the first ATH to 97 cents. I didn't sell.

I went through the next ATH to 2.34. I didn't sell.

I will go through the next ATH & I won't sell. But luckily on this next run, BRN will solidify themselves as the main players. Remember Novonix? Yeh something like that :)

I'm ambivalent regarding dealings with China. But it is a "kill or be killed" game. I'd rather we be the predator than prey.

Good luck all! Patience is always rewarded, even when it feels like it's not! That's all part of the game :)
 
Reactions: 59 users

Crestman

Regular
I knew those NASA rockets were fast, but 13 March???
My bad Diogenese, it is actually the completion date:

Completed Proposal Package Due Date and Time:

March 13, 2023 - 5:00 p.m. ET

I have put the image below linked to the document so it will download.

1676776205708.png
 
Reactions: 11 users

Tothemoon24

Top 20
Reactions: 25 users

Jasonk

Regular
Link to an earlier post regarding RISC-V

Hope the link works. I was only reading about Intel and RISC-V, and out of interest I asked ChatGPT about its hardware and partners, and was given the attached response.
 
Reactions: 4 users
Seems like OpenAI has changed tack for ChatGPT-4.
I know they talk of their partnership with Cerebras, ................. BUT


Is it just me, or does anybody else find themselves saying,

"Hey, that's what Akida can do / make better" ................... :sneaky:

It's the newest version of ChatGPT (4).

IE ............ SPARSITY = less computational power consumption
............. MULTIMODAL LANGUAGE MODEL = RT's continual emphasis on multi-modalities
.............. ADVANTAGE OF FASTER CHIPS OR HARDWARE = have they found a better way to reduce the computational power issues?
............... SELF-HEALING SENSORS = Brainchip's catchcry ........... making sensors smart
................ ARTIFICIAL INTELLIGENCE ON EDGE DEVICES = eliminating the need for large cloud servers
................. RESOURCE-CONSTRAINED ENVIRONMENTS = low power consumption, ? 6 months on a "AAA" battery
................. BIOLOGICAL BRAINS ABLE TO LEARN = one-shot learning
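On the SPARSITY point above, here's a minimal Python sketch (illustrative numbers only, not Akida's actual figures) of why event-driven hardware that skips zero activations does far less work than dense hardware:

```python
import numpy as np

# Assume ~90% of activations are zero -- a made-up but plausible level of
# activation sparsity for event-based/spiking networks.
rng = np.random.default_rng(0)
activations = rng.random(10_000)
activations[activations < 0.9] = 0.0

dense_macs = activations.size                     # dense hardware touches every value
sparse_macs = int(np.count_nonzero(activations))  # event-driven: non-zeros only

print(dense_macs, sparse_macs)  # roughly a 10x reduction in multiply-accumulates
```

The power saving follows directly: every multiply-accumulate that never fires is energy never spent.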

Check out this video if interested




Just wondering whether Elon has taken onboard my numerous emails to him ...................:unsure:

AKIDA ( Give it Patience Time ) BALLISTA

Recent Forbes article on ChatGPT.

There is some commentary with Rain AI, but you've just gotta look at the behind-the-scenes issues that users don't think about, like power etc.

Kinda where neuromorphic can assist, obviously, as Rain says, but as usual journos forget to actually do their research... Akida anyone.


ChatGPT Burns Millions Every Day. Can Computer Scientists Make AI One Million Times More Efficient?
John Koetsier
Senior Contributor
John Koetsier is a journalist, analyst, author, and speaker.

Running ChatGPT costs millions of dollars a day, which is why OpenAI, the company behind the viral natural-language processing artificial intelligence, has started ChatGPT Plus, a $20/month subscription plan. But our brains are a million times more efficient than the GPUs, CPUs, and memory that make up ChatGPT’s cloud hardware. And neuromorphic computing researchers are working hard to make the miracles that big server farms in the clouds can do today much simpler and cheaper, bringing them down to the small devices in our hands, our homes, our hospitals, and our workplaces.

One of the keys: modeling computing hardware after the computing wetware in human brains.

Including — surprisingly — modeling a characteristic about our own wetware that we really don’t like: death.

“We have to give up immortality,” the CEO of Rain AI, Gordon Wilson, told me in a recent TechFirst podcast. “We have to give up the idea that, you know, we can save software, we can save the memory of the system after the hardware dies.”

Wilson is quoting Geoff Hinton, a cognitive psychologist and computer scientist, author or co-author of over 200 peer-reviewed publications, current Google employee working on Google Brain, and one of the “godfathers” of deep learning. At a recent NeurIPS machine learning conference, he talked about the need for a different kind of hardware substrate to form the foundation of AI that is both smarter and more efficient. It’s analog and neuromorphic — built with artificial neurons in a very human style — and it’s co-designed with software to form a tight blend of hardware and software that is massively more efficient than current AI hardware.

Achieving this is not just a nice-to-have, or a vague theoretical dream.

Building a next-generation foundation for artificial intelligence is literally a multi-billion-dollar concern in the coming age of generative AI and search. One reason is that when training large language models (LLM) in the real world, there are two sets of costs to consider.

Training a large language model like that used by ChatGPT is expensive — likely in the tens of millions of dollars — but running it is the true expense. Running the model, responding to people’s questions and queries, uses what AI experts call “inference.”

That’s precisely what runs ChatGPT compute costs into the millions regularly. But it will cost Microsoft’s AI-enhanced Bing much more.

And the costs for Google to respond to the competitive threat and duplicate this capability could be literally astronomical.

“Inference costs far exceed training costs when deploying a model at any reasonable scale,” say Dylan Patel and Afzal Ahmad in SemiAnalysis. “In fact, the costs to inference ChatGPT exceed the training costs on a weekly basis. If ChatGPT-like LLMs are deployed into search, that represents a direct transfer of $30 billion of Google’s profit into the hands of the picks and shovels of the computing industry.”

If you run the numbers like they have, the implications are staggering.

“Deploying current ChatGPT into every search done by Google would require 512,820 A100 HGX servers with a total of 4,102,568 A100 GPUs,” they write. “The total cost of these servers and networking exceeds $100 billion of Capex alone, of which Nvidia would receive a large portion.”
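Their GPU count checks out with quick arithmetic, assuming the standard 8 GPUs per A100 HGX server (the per-server cost line is my own back-of-envelope, not from the article):

```python
# Sanity-check of the SemiAnalysis figures quoted above.
servers = 512_820
gpus_per_server = 8                 # an A100 HGX server carries 8 GPUs
total_gpus = servers * gpus_per_server
print(total_gpus)                   # 4,102,560 -- in line with the ~4.1M quoted

capex = 100e9                       # the >$100B servers-plus-networking figure
per_server = capex / servers        # implied cost per server
print(round(per_server))            # about $195,000 each, incl. networking share
```

That implied ~$195k per 8-GPU box is in the right ballpark for A100 HGX systems, which is some reassurance the quoted figures hang together.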

Assuming that’s not going to happen (likely a good assumption), Google has to find another way to approach similar capability. In fact, Microsoft, which has only released its new ChatGPT-enhanced Bing in very limited availability for very good reasons probably including hardware and cost, needs another way.

Perhaps that other way is analogous to something we already have a lot of familiarity with.

According to Rain AI’s Wilson, we have to learn from the most efficient computing platform we currently know of: the human brain. Our brain is “a million times” more efficient than the AI technology that ChatGPT and large language models use, Wilson says. And it happens to come in a very flexible, convenient, and portable package.

“I always like to talk about scale and efficiency, right? The brain has achieved both,” Wilson says. “Typically, when we’re looking at compute platforms, we have to choose.”

That means you can get the creativity that is obvious in ChatGPT or Stable Diffusion, which relies on data center compute to build AI-generated answers or art (trained, yes, on copyrighted images), or you can get something small and efficient enough to deploy and run on a mobile phone, but doesn’t have much intelligence.

That, Wilson says, is a trade-off that we don’t want to keep having to make.

Which is why, he says, an artificial brain built with memristors that can “ultimately enable 100 billion-parameter models in a chip the size of a thumbnail,” is critical.

For reference, ChatGPT’s large language model is built on 175 billion parameters, and it’s one of the largest and most powerful yet built. ChatGPT 4, which rumors say is as big a leap from ChatGPT 3 as the third version was from its predecessors — will likely be much larger. But even the current version used 10,000 Nvidia GPUs just for training, with likely more to support actual queries, and costs about a penny an answer.
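To see why a 175-billion-parameter model needs a rack rather than a thumbnail today, here is a back-of-envelope footprint calculation, assuming 16-bit (2-byte) weights and 80 GB of memory per top-spec A100:

```python
import math

# Back-of-envelope memory footprint of a 175B-parameter model.
params = 175e9
bytes_per_param = 2                      # assuming 16-bit (fp16) weights
footprint_gb = params * bytes_per_param / 1e9
a100_mem_gb = 80                         # top-spec A100 memory
gpus_needed = math.ceil(footprint_gb / a100_mem_gb)

print(footprint_gb)   # 350.0 GB just to hold the weights
print(gpus_needed)    # at least 5 GPUs before any activations or batching
```

And that is the floor: serving real traffic needs headroom for activations, KV caches and batching, which is why the fleet numbers above balloon so quickly.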

Running something of roughly similar scale on your finger is going to be multiple orders of magnitude cheaper.

And if we can do that, it unlocks much smarter machines that generate that intelligence in much more local ways.

“How can we make training so cheap and so efficient that you can push that all the way to the edge?” Wilson asks. “Because if you can do that, then I think that’s what really encapsulates an artificial brain. It’s a device. It’s a piece of hardware and software that can exist, untethered, perhaps in a cell phone, or AirPods, or a robot, or a drone. And it importantly has the ability to learn on the fly. To adapt to a changing environment or a changing self.”

That’s a critical evolution in the development of artificial intelligence. Doing so enables smarts in machines we own and not just rent, which means intelligence that is not dependent on full-time access to the cloud. Also: intelligence that doesn’t upload everything known about us to systems owned by corporations we end up having no choice but to trust.

It also, potentially, enables machines that differentiate. Learn. Adapt. Maybe even grow.

My car should know me and my area better than a distant colleague's car. Your personal robot should know you and your routines, your likes and dislikes, better than mine. And those likes and dislikes, with your personal data, should stay local on that local machine.

There’s a lot more development, however, to be done on analog systems and neuromorphic computing: at least several years. Rain has been working on the problem for six years, and Wilson thinks shipping product in quantity — 10,000 units for Open AI, 100,000 units for Google — is at least “a few years away.” Other companies like chip giant Intel are also working on neuromorphic computing with the Loihi chip, but we haven’t seen that come to the market in scale yet.

If and when we do, however, the brain-emulation approach shows great promise. And the potential for great disruption.

“A brain is a platform that supports intelligence,” says Wilson. “And a brain, a biological brain, is hardware and software and algorithms all blended together in a very deeply intertwined way. An artificial brain, like what we’re building at Rain, is also hardware plus algorithms plus software, co-designed, intertwined, in a way that is really ... inseparable.”
 
Reactions: 14 users

Diogenese

Top 20
I was reading the other day about Intel terminating the Pathfinder RISC-V development kit program. It was planning on investing a billion dollars as of a month ago.

Is the chain of events attached below related to BRN joining Intel Foundry Services, or just coincidence?

Interestingly enough, I asked ChatGPT about this RISC-V.

Just another side note: it's been clear for some time via LinkedIn that Intel verification engineers were clearly interested in BRN... lo and behold, overnight Intel seem to have pivoted technologies.
Great research, sadly it looks like Intel has pulled the plug on Pathfinder:

https://www.theregister.com/2023/01/30/intel_ris_v_pathfinder_discontinued/


After less than half a year, Intel quietly kills RISC-V dev environment


Did Pathfinder get lost in a sea of red ink? Or is Chipzilla becoming RISC averse?

Simon Sharwood
Mon 30 Jan 2023 // 06:02 UTC

Intel has shut down its RISC-V Pathfinder – an initiative it launched less than six months ago to encourage use of the open source RISC-V CPU designs.
Pathfinder was launched in August 2022. A joint press release from the 30th of that month includes a canned quote from Vijay Krishnan, general manager for RISC-V Ventures at Intel Corporation, who at the time stated: "With Intel Pathfinder, users will be able to test drive pre-silicon concepts on Intel FPGAs or virtual simulators."
"There should be tremendous value for pre-silicon architects, software developers and product managers looking to prove out use cases upfront in the product development lifecycle," he added.
Intel billed the service as "scalable from individual users in academia and research, all the way to large-scale commercial projects."
On December 1, 2022, Intel emitted an announcement of impending enhancements to the Pathfinder.
That document again featured Krishnan, this time quoted as saying "Maintaining a torrid pace of execution and fostering ecosystem collaboration are key imperatives for Intel Pathfinder for RISC-V." Next came a quote from Sundari Mitra, chief incubation officer, corporate vice president, and general manager at Intel's Incubation & Disruptive Innovation (IDI) Group: "We are excited to see Intel Pathfinder for RISC-V grow rapidly while continuing to adapt to market needs."
But in recent days a visit to pathfinder.intel.com produces only the following announcement:
We regret to inform you that Intel is discontinuing the Intel® Pathfinder for RISC-V program effective immediately.
Since Intel will not be providing any additional releases or bug fixes, we encourage you to promptly transition to third-party RISC-V* software tools that best meet your development needs.

So the question is whether OpenAI is going it alone with Pathfinder.
 
Reactions: 7 users

Quiltman

Regular
Reactions: 12 users

Diogenese

Top 20

The last time someone said that, it was a decoy duck.

All that waddles is not Akida.

Just remember Ella ...

TDK use an analog neuromorphic element:
US2020210818A1 ARRAY DEVICE INCLUDING NEUROMORPHIC ELEMENT AND NEURAL NETWORK SYSTEM

[0327] The neuromorphic element 711 is controlled by a control signal which is input from a control unit (not shown) assigning weights, and the values of the weights change with change in characteristics (for example, conductance) of the neuromorphic element 711. The neuromorphic element 711 multiplies the weights (the values thereof) corresponding to the characteristics of the neuromorphic element 711 by an input signal and outputs a signal which is the result of multiplication.
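For anyone skimming paragraph [0327]: in an analog array like this, each element's conductance acts as the weight, and summing the per-element currents on a shared wire performs the multiply-accumulate physically (Ohm's and Kirchhoff's laws). A toy numeric model, with purely illustrative values not taken from the patent:

```python
import numpy as np

# Toy model of the analog multiply described in [0327]: each element's
# conductance G is a weight; driving it with voltage V yields current
# I = G * V, and the shared output wire sums the currents (a dot product).
voltages = np.array([0.2, 0.5, 0.1])          # input signals, volts
conductances = np.array([1e-3, 2e-3, 5e-4])   # element weights, siemens

column_current = float(np.dot(conductances, voltages))
print(column_current)  # the weighted sum, computed physically in hardware
```

That physical summation is the appeal of analog neuromorphic elements, and also the contrast with Akida's digital, event-based approach.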
 

Attachments

  • 1676781440597.png (65.1 KB)
Reactions: 10 users

wilzy123

Founding Member

Misleading junk decoy duck posts? Never.

 

hamilton66

Regular
It says bucket loads of profit is coming to those who have patience.
Rise, just an observation. U and I have been in a few identical shares over the yrs. Always saw u as a trader. I've never seen u so upbeat on any share u've ever owned. For what it's worth, I think ur judgement is spot on. I'm as frustrated as f34k. That said, hugely confident.
GLTA
 
Reactions: 15 users
More of a long-term holder in stocks that have solid fundamentals. Yes, I have traded some along the way, but I will always keep a core holding that will not get sold till my target, or a revised target, is hit; you can't just set a target price to sell at and hold to it if the stock will never hit that price for whatever reason. BRN is the only stock... hmmm, possibly Dre also now (very interesting developments occurring)... that will bring big bucks to my kitty, but for excitement and big change to an industry, that's BRN for sure. And yes, at times frustration has occurred with BRN, probably more so a year ago than now haha, but I can also feel it building again 🤣. My confidence here is high; my only downer is having gotten in a few people who have paid much higher prices, which kind of weighs on my mind. Otherwise I'm pretty darn happy here.
 
Reactions: 18 users
Was having a flick through a Qualcomm presso (attached) from Oct '21 I just found... not sure if it's been posted over the journey.

Gives a good insight into the track they've taken so far... but one thing we all agree on below from their presso... though I suspect Akida could help them along a bit ;)


1676791969870.png


1676794074288.png


 

Attachments

  • presentation_-_enabling_on-device_learning_at_scale.pdf (2.6 MB)
Reactions: 31 users
You mean this one ;)


Just had another look at this NASA solicitation.

Hadn't noticed it before, but they also stated what else the 22nm FDSOI is being used for... you know... the same one we're taping out at GF :unsure:;)

Fabrication techniques include silicon-on-insulator, which significantly reduces the possibility of destructive latch up, such as the 22-nm fully depleted silicon on insulator (FDSOI) technology node that is being used for automotive chip fabrication and shows promise for space processors.


H6.22 Deep Neural Net and Neuromorphic Processors for In-Space Autonomy and Cognition
Lead Center: GRC

Participating Center(s): ARC, GSFC

Solicitation Year: 2023

Subtopic Introduction:
 
Reactions: 41 users

Diogenese

Top 20
Suppose the computer crashes when a child is using GPT to do their assignment.

"Please, sir, the chatbot ate my homework."
 
Reactions: 33 users
Yuval Noah Harari speaks of a "Useless Class" of people, who are not only unemployed, but unemployable.


The mind is our last frontier; it's what gives us an advantage over machines.

Humans are now using them to cheat on "intellectual" endeavors?

Not long before it will be a pointless exercise.

"The machine can produce work and has knowledge that you can never surpass.
Sorry, they got the job"..
 
Reactions: 14 users

Diogenese

Top 20
Simon Thorpe has considered human obsolescence and, back in 2010, wrote a paper proposing a flat-rate transaction tax:

https://www.researchgate.net/public...inancial_Transaction_Tax_to_replace_all_taxes

This could be linked with the current discussion of a universal income.
 
Reactions: 9 users