BRN Discussion Ongoing

TopCat

Regular

Language generators such as ChatGPT are gaining attention for their ability to reshape how we use search engines and change the way we interact with artificial intelligence. But these algorithms are both computationally expensive to run and depend on maintenance from just a few companies to avoid outages.

But Eshraghian is powering a language model with an alternative algorithm called a spiking neural network (SNN). He and two students recently released the open-source code for SpikeGPT, the largest language-generating SNN yet, which uses 22 times less energy than a similar model built with typical deep learning. Using SNNs for language generation could have huge implications for accessibility, data security, and green, energy-efficient computing in this field.
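For the curious, the core building block of an SNN can be sketched in a few lines. This is a generic leaky integrate-and-fire neuron written from scratch for illustration, not SpikeGPT's actual code; the decay factor, threshold, and input values here are arbitrary choices:

```python
def lif_step(v, x, beta=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire (LIF) neuron.
    v: membrane potential carried between steps, x: input current."""
    v = beta * v + x           # leaky integration of the input
    spike = v >= threshold     # emit a binary spike on threshold crossing
    if spike:
        v -= threshold         # soft reset after firing
    return v, int(spike)

# Drive the neuron with a constant input and record its sparse spike train.
v, spikes = 0.0, []
for _ in range(10):
    v, s = lif_step(v, 0.4)
    spikes.append(s)
print(spikes)  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The sparsity of that output is where the energy saving comes from: downstream work only happens when a spike occurs.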
 
  • Like
  • Fire
  • Wow
Reactions: 20 users

Language generators such as ChatGPT are gaining attention for their ability to reshape how we use search engines and change the way we interact with artificial intelligence. But these algorithms are both computationally expensive to run and depend on maintenance from just a few companies to avoid outages.

But Eshraghian is powering a language model with an alternative algorithm called a spiking neural network (SNN). He and two students recently released the open-source code for SpikeGPT, the largest language-generating SNN yet, which uses 22 times less energy than a similar model built with typical deep learning. Using SNNs for language generation could have huge implications for accessibility, data security, and green, energy-efficient computing in this field.
We need to get them a development board.
 
  • Like
  • Fire
  • Haha
Reactions: 13 users

HopalongPetrovski

I'm Spartacus!
Hi Hoppy,

AKD1 can identify objects in individual frames of video with their position.

It passes this information to the CPU which can then track the movement of the object.

AKD2 has additional storage to store a sequence of frames with the tracked images and locations so it can track movement.

Similarly with words/sentences/paragraphs.

Doing it in silicon is much faster and less energy intensive.
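That detect-then-track division of labour can be sketched in plain Python. This is purely an illustration of the concept, not BrainChip's implementation; the detections, distance threshold, and buffer size are made-up values:

```python
from collections import deque

def track(prev, detections, max_dist=50.0):
    """Naive nearest-neighbour association between last known
    object positions and the current frame's detections."""
    assigned = {}
    for obj_id, (px, py) in prev.items():
        best, best_d = None, max_dist
        for i, (x, y) in enumerate(detections):
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < best_d and i not in assigned.values():
                best, best_d = i, d
        if best is not None:
            assigned[obj_id] = best
    return {obj_id: detections[i] for obj_id, i in assigned.items()}

# Hypothetical per-frame detections, e.g. as a chip might report to a host CPU.
frames = [[(10, 10)], [(14, 12)], [(19, 15)]]
history = deque(maxlen=8)          # short on-chip frame buffer, AKD2-style
positions = {0: frames[0][0]}
for dets in frames[1:]:
    positions = track(positions, dets)
    history.append(dict(positions))
print(positions)  # → {0: (19, 15)}
```

In the AKD1-style flow the `track` step runs on the host CPU; the point of extra on-chip storage is to keep that frame history local instead.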

Until I see the Gen 2 patent, I'm guessing how it does this, but I have a vague idea which I won't disclose here because:
A. It's probably wrong;
B. It may be patentable.
Thank you again.
Hope it's B. You manage to patent it and make a billion dollars. 🤣

 
  • Like
  • Fire
  • Haha
Reactions: 7 users

TopCat

Regular
  • Like
  • Love
Reactions: 6 users

Language generators such as ChatGPT are gaining attention for their ability to reshape how we use search engines and change the way we interact with artificial intelligence. But these algorithms are both computationally expensive to run and depend on maintenance from just a few companies to avoid outages.

But Eshraghian is powering a language model with an alternative algorithm called a spiking neural network (SNN). He and two students recently released the open-source code for SpikeGPT, the largest language-generating SNN yet, which uses 22 times less energy than a similar model built with typical deep learning. Using SNNs for language generation could have huge implications for accessibility, data security, and green, energy-efficient computing in this field.

Seems to be an increasing consensus.


BLOG POST​

TECH & SOURCING @ MORGAN LEWIS

TECHNOLOGY, OUTSOURCING, AND COMMERCIAL TRANSACTIONS
NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

Neuromorphic Computing: A More Efficient Technology for Activities That Require a Human Touch

March 01, 2023

ChatGPT and subsequent artificial intelligence (AI) programs have been in the headlines recently. Less common is discussion of the cost of developing and operating such AI tools, or of whether such AI is right for every job.

It is estimated that ChatGPT can cost millions of dollars per day to operate. Given the potentially large price tag, consumers may ask how users can harness the benefit of AI without the high operating cost and what the best technology is in applications where precise decision-making and outcomes are desired. Some believe that the answer to both of these questions is neuromorphic computing.

What Is Neuromorphic Computing?


Neuromorphic computing is designed to mimic the human brain, operating in a manner that allows the technology to solve problems in ways our brains would.

The chips that power neuromorphic computing are specially designed with the same structure as our brain’s neurons and synapses, allowing them to make decisions and judgments in a way that typical computer systems or algorithms cannot.

Neuromorphic computing is intended to be more efficient, powerful, and cost-effective than other AI technologies.

Although still in development and not widely deployed, it is being evaluated in various settings, including cloud computing, robotics, and autonomous vehicle technology.

The End of the Algorithm in AI?


Rather than processing all the data to follow an algorithm to an answer, the goal of neuromorphic computing is to decipher the necessary information to determine the correct solution. Leveraging this would allow companies and consumers to implement technology into everyday life wherever a human touch is required—rather than utilizing answers based solely on an algorithm.

AI is effective at providing large amounts of computing power, responding to queries that may take a human or even a standard computer a long time to answer.

Neuromorphic computing, on the other hand, takes a more active approach, giving the correct response or action to a scenario.

Key Takeaway


As technology and society integrate on a deeper level, there will be an increased demand on our computers and technology to interact with us as a human would with speech, movement, and reason. Neuromorphic computing’s deployment is no easy feat, and we will be on the lookout for how companies bring humanity into future computers and technologies.
 
  • Like
  • Love
  • Fire
Reactions: 23 users
Worrying about the SP every day must be truly exhausting for those who find it hard to handle the psychology of the 'red' days of recent times. It's only painful if you NEED to sell now or don't believe in Brainchip anymore. These SP fluctuations will surely seem trivial once Brainchip starts closing deals (whether those deals are announced or we see them via revenue in quarterlies). Getting excited about Brainchip's promising future seems a much wiser use of time, given the announcement on 6th March.

The company recently released a fantastic announcement that has strengthened and extended Brainchip's lead in neuromorphic edge AI. Some of our partners are quoted, gushing about us. Brainchip has implemented new features in its second-generation Akida platform that customers asked for to meet their needs. Game-changing new features. I don't know about you, but no major company I have ever worked for spends massive $$$ to improve functionality on a whim, without being confident that those changes will secure deals.

Lead adopters are engaging with the second-gen Akida NOW. We will be in a Renesas chip this year. How anyone could be anything except excited is beyond me.

DYOR, all in my excited opinion.
 
  • Like
  • Love
  • Fire
Reactions: 60 users
And for holders like me who have to think about how a lot of things connect, this article was right up my alley.

I especially liked the couple of points I highlighted near the end.



The Different Types of AI Accelerators​

Posted on February 15, 2023 by Kristoffer Bonheur

AI accelerators are specialized hardware accelerators and coprocessors that process artificial intelligence algorithms or AI-related tasks. The expansion of AI applications and use cases has made these hardware components advantageous in modern consumer electronic devices such as computers, smartphones, and other smart devices. These hardware components are also found in autonomous or self-driving vehicles, robotics systems, and automated machines used in different industrial applications.

Remember that AI accelerators are coprocessors. Their purpose is to offload AI-related workloads from the central processor to improve the efficiency of the overall system. Examples of these workloads include machine learning, deep learning, image processing, and natural language processing, among others. Their purpose and advantages also rest on those of hardware accelerators in general. To understand their purpose better, it is important to understand the different types of AI accelerators.

A GUIDE TO THE DIFFERENT TYPES OF AI ACCELERATORS

1. Graphics Processing Unit
A graphics accelerator or a graphics processing unit or GPU is a hardware accelerator or specialized coprocessor designed to handle image rendering. These components are essential in modern computer systems with a graphical user interface.

Most personal computers and portable devices such as smartphones and tablets have integrated graphics processors that form part of their respective system-on-chips. Use cases that require intensive image rendering such as in the case of video gaming and high-resolution graphics design and video editing require discrete graphics processors.

Beyond rendering images and processing graphics, however, GPUs are flexible. Note that these components have been used in blockchain applications that require proof-of-work validation to mine cryptocurrencies or validate blockchain entries.

GPUs were also the first coprocessors to be retrofitted as AI accelerators. They have been used for AI-related processing because the mathematical bases of image manipulation and of artificial neural networks and relevant deep learning models are similar. Data centers that train AI models on huge datasets use discrete GPUs.

2. Field-Programmable Gate Arrays
A field-programmable gate array or FPGA is an integrated circuit that can be programmed in the field after it has been manufactured. Traditional microprocessors have fixed architectures while FPGAs can be programmed to a particular architecture.

FPGAs can be described as a blank canvas. They can be configured to perform a wide range of digital logic functions. The configuration is generally specified using a hardware description language. They are often used in applications that require high-performance and low-latency processing. These include video processing, simulations, and cryptography.

There have been attempts to configure FPGAs as dedicated AI accelerators. They have been used for handling machine learning and deep learning tasks that require complex mathematical computations, for implementing neural networks, and for real-time video processing, and they can be employed in computer vision applications.

Remember that FPGAs are beneficial for use cases that require low latency and high performance. The search engine Bing uses these chips for its search algorithm while Microsoft uses them in its Project Catapult which is aimed at augmenting cloud computing.

3. Application-Specific Integrated Circuit
In contrast with FPGAs are application-specific integrated circuits, or ASICs. These are integrated chips designed and deployed for a particular use. Remember that FPGAs are fundamentally blank chips manufactured without a particular use case in mind; ASICs are manufactured for a specific application.

ASICs are essentially custom-built chips, designed and optimized for a specific task or set of tasks. They can be standalone hardware components or part of a larger integrated chip, as in the case of a system-on-a-chip composed of different processors and coprocessors.

In the field of artificial intelligence, ASICs are useful in AI-related tasks where high computational power is required. Examples include deep learning workloads such as running large language models, and real-time video and image processing such as the computational photography features of smartphones.

There are several examples of ASICs. Google began using its Tensor Processing Unit or TPU in 2015 and made it available to the public in 2018. A TPU is used for neural network machine learning to perform matrix computations.

Another example is the Neural Engine of Apple. It is an AI accelerator built within the A series and M series system-on-chips used in iPhones, iPads, and Mac computers. The AI Engine of Qualcomm is another example that is based on its proprietary Qualcomm Hexagon digital signal processor and a licensed Tensor accelerator.

4. Massively Multicore Scalar Processors
A massively multicore scalar processor is essentially a multi-core processor. Modern CPUs and GPUs are multi-core processors too. However, as its name suggests, a massively multicore scalar processor has a massive number of simple processing cores.

Scalar processing is at the heart of this hardware accelerator. A scalar processor processes a single data item at a time.

However, with its many cores, a massively multicore scalar processor can process multiple data items, or execute multiple instructions, in a single clock cycle by distributing them across those cores.

Massively multicore scalar processors are sometimes described as manycore processors. One of their advantages is that they use simple arithmetic units that can be combined in various ways to execute different types of algorithms.

Another advantage of massively multicore scalar processors is that they are highly scalable, which enables them to handle complex computations effectively and efficiently. This also makes them suitable for AI applications where large amounts of data need to be processed as quickly and as power-efficiently as possible.
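As a rough illustration of the one-item-per-core idea, the following Python sketch simulates dispatching one scalar operation per core per 'clock cycle'. It is a toy model, not real hardware behaviour; the core count and workload are arbitrary:

```python
def run_cycles(items, cores=4):
    """Simulate a massively multicore scalar design: each of `cores`
    simple units handles one data item per 'clock cycle'."""
    out, cycles = [], 0
    for start in range(0, len(items), cores):
        batch = items[start:start + cores]   # one item per core this cycle
        out.extend(x * 2 for x in batch)     # each core runs one scalar op
        cycles += 1
    return out, cycles

# Eight items on four cores finish in two cycles instead of eight.
results, cycles = run_cycles(list(range(8)), cores=4)
print(results, cycles)  # → [0, 2, 4, 6, 8, 10, 12, 14] 2
```

Adding cores shortens the cycle count linearly, which is the scalability argument in the paragraph above.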

5. Neuromorphic Hardware Components
The emerging field of neuromorphic computing is dedicated to developing approaches to computing patterned on the structure and function of the human brain and biological neural systems. Neuromorphic hardware is one of its applications.

Neuromorphic hardware is designed to mimic the features of the human brain. For example, taking after how neurons and synapses interact in a biological neural system, this hardware has structures and components that allow it to simulate neurological electrical activity. It can also be called a physical artificial neural network.

There are different benefits to using this hardware. It is designed to effectively and efficiently process and handle complex and high-dimensional data. This makes it suitable for use in AI applications such as natural language processing and computer vision.

Using neuromorphic hardware in a computer system makes it an AI accelerator and a coprocessor, because it is dedicated to handling AI-specific applications. The same hardware could also become the next-generation main processor, or central processing unit, of a computer system based on neuromorphic computing.
 
  • Like
  • Love
  • Fire
Reactions: 33 users

jtardif999

Regular
So cool that our partners feel the need to increase their workforce to target or maximise potential from our new enhanced Akida range.

Bit of a 'chicken and the egg' scenario here:

View attachment 31583

Was Edge Impulse involved in the feedback loop, with their customers already lined up for the change?
Or did this second-generation enhancement trigger the mass-market demand that requires the new sales FTE?

Either way, one of OUR ecosystem partners is preparing to manage the increase in demand, which will help Sean execute his last statement from the text above:
"we are focused on executing more IP licenced agreements and generating revenue growth over the coming years."

Our team has support to assist our ubiquitous ambitions.

I am a patient person, but I'd be so pleased to see ONE license land before May 23rd.
Will this last BRN announcement be the final key to unlocking the licensing charge? Was it the final proof that Akida can evolve and extend its reach into the future?
Don’t forget the 4 million dollars in licence fees they received via MegaChips in 2022. I don’t think that was all MegaChips directly paying up (it may not be any of theirs). You should factor this into the speed of BrainChip's commercialisation. Whilst this figure hides the details, it probably represents a further 2 licences, possibly 4 or even more, depending on the size of the deals and/or how much of it is actually MegaChips-based.
 
  • Like
  • Love
  • Fire
Reactions: 20 users

robsmark

Regular
Would it be fair to say that because WBT has fewer shares on offer, less manipulation would happen?
It wouldn’t make a shade of difference. As long as there’s liquidity, people/bots can buy them.
 
Last edited:
  • Like
Reactions: 7 users

buena suerte :-)

BOB Bank of Brainchip
It wouldn’t make a shade of difference. As long as there’s liquidity, people/bots can buy them.

Huge SOI difference but similar short % !?​

WBT Latest Reported Shorts (Daily)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT
8 March 2023 | 136,508 | 173,640,855 | 0.07%

BRN Latest Reported Shorts (Daily)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT
8 March 2023 | 1,634,839 | 1,767,058,145 | 0.09%
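As a sanity check on the daily figures above: the reported percentage appears to be the short count divided by issued shares, truncated (not rounded) to two decimals. A small sketch, assuming that truncation convention:

```python
def pct_short(reported_short, issued_shares, decimals=2):
    """Short interest as a percentage of issued shares,
    truncated (not rounded) to `decimals` decimal places."""
    pct = reported_short / issued_shares * 100
    scale = 10 ** decimals
    return int(pct * scale) / scale

# Daily figures from the tables above.
print(pct_short(136_508, 173_640_855))      # WBT → 0.07
print(pct_short(1_634_839, 1_767_058_145))  # BRN → 0.09
```

Note that WBT's raw ratio is about 0.0786%, so rounding would give 0.08%; the published 0.07% only matches if the figure is truncated.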
 
  • Like
  • Fire
  • Wow
Reactions: 13 users

robsmark

Regular

Huge SOI difference but similar short % !?​

WBT Latest Reported Shorts (Daily)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT
8 March 2023 | 136,508 | 173,640,855 | 0.07%

BRN Latest Reported Shorts (Daily)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT
8 March 2023 | 1,634,839 | 1,767,058,145 | 0.09%
What’s the total shorts? I don’t know how to find this data to check…
 
  • Like
Reactions: 2 users

TheFunkMachine

seeds have the potential to become trees.
That's why all our key staff have great hearing, especially for listening and co-operating when a company other than ours
makes an engineering suggestion that ultimately benefits both parties...that's my definition of "Beneficial AI".

The love affair continues. Anyone willing to loan me, say, $500,000 to 1 million AUD? In my opinion this current pattern is heading for a major shake-up beyond April. Is AKD1500 being baked for a client/s? I (sense) it is. If wrong, we shall see.

Texta :ROFLMAO:(y)
I love most of your content Chris, but Texta is not going to catch on. Hehe
 
  • Haha
  • Like
Reactions: 10 users

buena suerte :-)

BOB Bank of Brainchip
What’s the total shorts? I don’t know how to find this data to check…
Can only get up to a week ago!

WBT Latest Reported Shorts (Aggregate)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT | DAILY RANK
3 March 2023 | 855,620 | 173,640,855 | 0.4928% | 311th 13
2 March 2023 | 902,944 | 173,640,855 | 0.5200% | 298th 139
1 March 2023 | 229,824 | 173,640,855 | 0.1324% | 437th 21
28 February 2023 | 309,297 | 173,640,855 | 0.1781% | 416th 134
27 February 2023 | 34,438 | 173,640,855 | 0.0198% | 550th 112

BRN Latest Reported Shorts (Aggregate)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT | DAILY RANK
3 March 2023 | 121,892,096 | 1,767,058,145 | 6.8980% | 12th 1
2 March 2023 | 121,551,087 | 1,767,058,145 | 6.8787% | 11th 3
1 March 2023 | 119,070,776 | 1,767,058,145 | 6.7384% | 14th 1
28 February 2023 | 118,815,976 | 1,767,058,145 | 6.7239% | 15th 2
27 February 2023 | 118,114,124 | 1,767,058,145 | 6.6842% | 13th 1
 
  • Like
  • Wow
  • Sad
Reactions: 14 users

db1969oz

Regular
What’s the total shorts? I don’t know how to find this data to check…
The total is from 3/3. The daily figure is only 1 day behind, and the last few days have seen big numbers added to this. It’s all on the ASX website and the ASIC website. https://asic.gov.au/regulatory-resources/markets/short-selling/short-position-reports-table/. https://www.google.com.au/url?sa=t&...hortsell.txt&usg=AOvVaw1aMEx7ix9PlhUpnT-9JcJr
 

Attachments

  • F9ED1157-35BB-4534-8EB0-41426150D366.jpeg
  • Like
Reactions: 7 users

gex

Regular
Deadset. One thing I've learnt in the market game is how impatient people really are. Crazy.
 
  • Like
  • Fire
  • Thinking
Reactions: 13 users

Townyj

Ermahgerd
Can only get up to a week ago!

WBT Latest Reported Shorts (Aggregate)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT | DAILY RANK
3 March 2023 | 855,620 | 173,640,855 | 0.4928% | 311th 13
2 March 2023 | 902,944 | 173,640,855 | 0.5200% | 298th 139
1 March 2023 | 229,824 | 173,640,855 | 0.1324% | 437th 21
28 February 2023 | 309,297 | 173,640,855 | 0.1781% | 416th 134
27 February 2023 | 34,438 | 173,640,855 | 0.0198% | 550th 112

BRN Latest Reported Shorts (Aggregate)​

DATE | REPORTED SHORT | ISSUED SHARES | % SHORT | DAILY RANK
3 March 2023 | 121,892,096 | 1,767,058,145 | 6.8980% | 12th 1
2 March 2023 | 121,551,087 | 1,767,058,145 | 6.8787% | 11th 3
1 March 2023 | 119,070,776 | 1,767,058,145 | 6.7384% | 14th 1
28 February 2023 | 118,815,976 | 1,767,058,145 | 6.7239% | 15th 2
27 February 2023 | 118,114,124 | 1,767,058,145 | 6.6842% | 13th 1

I honestly thought they would have had their fill by now... Relentless.
 
  • Like
  • Sad
  • Fire
Reactions: 7 users

Schwale

Regular
Renesas & Socionext have both been partnered separately with Hailo for years yet both keep on keeping on with Brainchip and AKIDA.

Hailo is an accelerator and uses more power than AKIDA; the last time I looked, in 2021, it was running at multiple watts. In 2020 Socionext, Foxconn and Hailo had a product out. I cannot remember now exactly what it was, but it was not in any way competing with AKIDA 1000, and now, with AKIDA 2000 and its performance levels, well, it's a case of look out Hailo, AKIDA is coming through.

My opinion only DYOR
FF


AKIDA BALLISTA
Just noticed Anil's like on LinkedIn... could there be a link between Brainchip and Hailo?
 

Attachments

  • Screenshot_20230309_205912_LinkedIn~2.jpg
  • Like
  • Fire
  • Thinking
Reactions: 8 users
Here is another of my crazy, wild, fun-filled speculations. 😂🤣😂

We know neuromorphic won the Mercedes Benz popular vote.

What if one of the earliest-access AKIDA 1500 & 2000 IP customers was Mercedes Benz?

If that were the case, it would have been impossible for Mercedes Benz to talk about neuromorphic before Brainchip went public and filed its patents around these new IPs.

What if part of the 'more to come' in the next couple of weeks is a grand Mercedes Benz neuromorphic presentation?

Then again, who knows, but it would be fun.

My speculation and opinion only so DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 61 users

miaeffect

Oat latte lover
Here is another of my crazy, wild, fun-filled speculations. 😂🤣😂

We know neuromorphic won the Mercedes Benz popular vote.

What if one of the earliest-access AKIDA 1500 & 2000 IP customers was Mercedes Benz?

If that were the case, it would have been impossible for Mercedes Benz to talk about neuromorphic before Brainchip went public and filed its patents around these new IPs.

What if part of the 'more to come' in the next couple of weeks is a grand Mercedes Benz neuromorphic presentation?

Then again, who knows, but it would be fun.

My speculation and opinion only so DYOR
FF

AKIDA BALLISTA
Screenshot_20230309-211609_Gallery.jpg

Hi FF

Does Akida 2.0 have the same 1.2 million neurons and 10 billion synapses?
 
  • Like
  • Thinking
  • Fire
Reactions: 14 users