BRN Discussion Ongoing

JDelekto

Regular
I prefer to discuss it with my fellow shareholders on this forum.

That’s the only way change happens.

I am not against the company…. I have been a shareholder for a very long time.

Way before BRN name was ever mentioned anywhere on the internet.

I will always support the company but I will always call out BS when I see it.
@equanimous makes a good point that several people keep repeating that the company is making no revenue. I think what is closer to the truth is that the company has not yet established a revenue stream that both covers its operating costs and brings in profits for its shareholders.

Even if a company like Microsoft, Google, or Meta were to dump a billion dollars into BrainChip, that would not be considered revenue but instead an investment in the company with the expectation that it would be profitable one day. If such an event happened, we would see the stock price rise, not because of the influx of investment funds, but because of the perceived value such large companies would put on the technology. There is currently a debate about whether or not OpenAI (the company behind ChatGPT, which received funding from both Elon Musk and Microsoft) will be able to make enough revenue to keep from going bankrupt by the end of 2024.

Sean Hehir was very transparent and honest about this upfront. He said that our revenue stream would be lumpy. As evidenced by the financial statements the company has filed over the past two years, what Sean told everyone was accurate.

From a technology perspective, neuromorphic computing is not new, but its applications and marketability have been dubious to those in the AI community who have researched the technology. BrainChip and a couple of other competitors in the neuromorphic space, such as SynSense and Quadric, are marketing a technology that has not yet been accepted as mainstream.

AI accelerators such as Intel's Movidius and Nvidia's GPUs have been crunching numbers for CNNs for years. They are solutions that are "good enough," at least until a better technology surfaces. Neuromorphic processors seem to be a new and disruptive solution that provides several benefits over traditional CNNs, but they do require market adoption. BrainChip's inclusion of CNN-to-SNN conversion provides an 'adapter', easing users of the earlier technology over to the newer platform.
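For intuition on what such a conversion does: the usual trick is rate coding, where a trained network's ReLU activation is mapped onto the firing rate of an integrate-and-fire neuron, so the spiking network approximates the original CNN's numbers. Below is a minimal toy sketch of that idea in Python (my own illustration with invented function names, not BrainChip's actual conversion pipeline):

```python
def relu(x):
    # Standard CNN activation: negative inputs are clamped to zero.
    return max(x, 0.0)

def rate_coded_spikes(activation, timesteps=100, threshold=1.0):
    """Drive an integrate-and-fire neuron with a constant input.

    Each timestep adds `activation` to the membrane potential; when
    the potential reaches `threshold` the neuron spikes and the
    threshold is subtracted. Over many timesteps the spike rate
    approximates the original activation (for values in [0, 1]).
    """
    potential = 0.0
    spikes = 0
    for _ in range(timesteps):
        potential += activation
        if potential >= threshold:
            spikes += 1
            potential -= threshold
    return spikes / timesteps

# A ReLU activation of 0.3 converts to a neuron firing ~30% of the time.
rate = rate_coded_spikes(relu(0.3), timesteps=1000)
```

The same mapping, applied layer by layer, is roughly what lets a conventionally trained CNN run on spiking hardware with little accuracy loss.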

I have no expectation that BrainChip will earn the revenues it needs to keep the lights on in the first few years after its product's introduction to the market. I expect more IP sales to come as the company listens to its customers' needs and adapts its IP offerings accordingly. I prefer these to be "quality" IP engagements, where those who integrate BrainChip's Akida technology will sell their products to many customers across several different markets.

Even if this initial revenue stream is "lumpy" and still not enough to cover BrainChip's operating costs, as long as the stream continues and trends upwards, I will be confident that I have made a wise investment.

I attended a Developer's Conference early this year where presentations were given on Generative AI and Machine Learning. Presenters were using Nvidia GPUs for running machine learning scenarios. I asked two presenters if they had ever heard of BrainChip's Akida (they had not) and explained that it was a neuromorphic processor. Surprisingly, both stated there was ongoing research into neuromorphics.

Neither of these individuals knew of BrainChip, nor that it had already commercialized its IP. Even with Mercedes' announcement of the Vision EQXX at CES 2022, BrainChip and Akida are still relatively unknown in the mainstream.

I am considering crafting a presentation for DevCon in 2024, where I think Akida's one-shot learning and inferencing will elicit some wows from the audience, even though the technology has been commercially available for the past two years.
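For readers curious what a one-shot learning demo amounts to conceptually: one generic way to get it is nearest-prototype classification, where a class is learned from a single example by storing its normalized embedding, and new inputs are labeled by the closest stored prototype. A toy Python sketch of the concept (class names and vectors are invented; this says nothing about Akida's actual on-chip learning rule):

```python
import math

class OneShotClassifier:
    """Toy one-shot learner: one stored prototype per class,
    prediction by cosine similarity. For illustration only."""

    def __init__(self):
        self.prototypes = {}

    @staticmethod
    def _normalize(vec):
        norm = math.sqrt(sum(x * x for x in vec))
        return [x / norm for x in vec]

    def learn(self, label, embedding):
        # "One-shot": a single example is enough to add a new class.
        self.prototypes[label] = self._normalize(embedding)

    def predict(self, embedding):
        query = self._normalize(embedding)
        return max(
            self.prototypes,
            key=lambda lbl: sum(p * q for p, q in zip(self.prototypes[lbl], query)),
        )

clf = OneShotClassifier()
clf.learn("cat", [0.9, 0.1, 0.0])  # one example per class
clf.learn("dog", [0.1, 0.9, 0.2])
label = clf.predict([0.8, 0.2, 0.1])  # closest prototype is "cat"
```

The wow factor in a live demo comes from the fact that adding a brand-new class takes a single example and no retraining.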
 
  • Like
  • Love
  • Fire
Reactions: 63 users

Cardpro

Regular
Deleted
 
Last edited:

Cardpro

Regular
1. Investment from massive tech giants didn't happen - but yes, there are so many AI start-ups who were able to get funding from big tech giants.

2. Our revenue stream is not lumpy, it is just deteriorating every year. It's more like free-falling: 0 new IP contracts over 2 years and not much revenue... Sean misled us saying watch the financials... the vibe I got was that we are heading up, not down, but look where we are now (I would truly love to see high/surprising revenue in the coming half-yearlies and multiple posts telling me how stupid I am / told you so)...

3. Why do you have no expectation that we will see meaningful revenue..? I don't fking get it..

4. Yeah, not many people know about BrainChip... isn't it sad that not many people know about this revolutionary tech, or isn't it weird that we are seeing peanuts in our statements even after releasing the main product that we were all waiting on? Wtf happened with all these EAPs/NDAs? Is Akida 2.0 actually going to change the dynamics, or do we need Akida 3.0? Akida X?

Ahhhhhhhh plzzzz sign new IPs... or surprise me with massive revenue / good partnerships / MOUs with massive companies / public exposure, etc...
 
  • Like
  • Fire
  • Love
Reactions: 12 users

wilzy123

Founding Member

Yes

yeahhhp.gif
 
  • Fire
  • Like
  • Haha
Reactions: 8 users

4jct

Regular
I think one of the things that fucks me off the most about this SP is that many here have held for years, have performed significant research and have financially supported the company in its endeavours to this point in time, where it can finally release Akida 2. And how have we been rewarded for that loyalty? I for one am significantly in the red now, and to think that some random investor can swoop in now, buy this stock and instantly be in a much better position than I, and many others like me, is mentally exhausting. It's alright the company saying that the SP will do what the SP will do, but to me personally it feels like they're spitting in my face.

I fully understand that my investment decisions are my responsibility, but that doesn't make it any easier to swallow.
Hi Robsmark,
I think every BRN investor feels your frustration and what you are saying about the latest random investors is true but that is their journey so good on them and hope they kill it. Worse is the scenario of the friends and family we have introduced to BRN along the way who are now underwater and looking for someone to blame without trying to point the finger at you. As someone who has been invested since the Aziana days in 2015, I have experienced all of the above more than most along with the breath taking high following Mercedes and put my hand up as to not having sold a share along the way (therefore be careful taking any advice from me). That time has also given me more opportunity to rationalize things and I thought I'd share those thoughts here with you in the hope it may help.
If you had chosen not to invest when you did, sold at profit when you didn't and if you hadn't convinced friends to jump in I wonder where that would put you today. I'm hazarding a guess that you may not have any funds for BRN now or tomorrow when the SP reaches its final absolute low before takeoff. It's likely you wouldn't even be following them with money invested elsewhere, off a home loan or in a new car. Yes it would be ideal that we had all being saving up for the moment before the SP launches or invested in some other sure fire winner but thats not at all realistic. Only time will tell but it is my firm belief that we are all lucky to just be invested in BRN at whatever the buy in cost happened to be because our time is coming and soon. We are dead centre in the middle of a generational shift in IT and if you look at what has happened to those companies involved in the previous ones, its a pretty decent bet we are going to be in a good place. I went to the AGM as I do each year to judge the enthusiasm from the BOD and while no one will say anything about operations of course, the one piece of advice I got out of it was not to sell my shares.
Hang in there mate, our time is coming. When the dust settles, if we have realised only a 10% year on year profit we would have done well. Something tells me that wont be too difficult to achieve.
My last bit of advice is to seek out those that you have introduced to BRN and renegotiate your commission while they are feeling shite.
Regards 4JCT
 
  • Like
  • Love
  • Fire
Reactions: 55 users

Damo4

Regular

How can you expect meaningful revenue yet?
Where is it coming from?
If Renesas/MegaChips have secretly rushed products to market with Akida nodes inside, can you please provide links to those products?
Otherwise I must have missed the ASX announcement of a new IP license with an upfront fee; can you provide the link to that?

Another day, another set of unhinged posts from those over-invested in BRN
 
  • Like
  • Love
  • Fire
Reactions: 9 users

rgupta

Regular


While it's a valid point... imo Tony is a useless tit who has no interest in keeping shareholders well informed or appeasing concerns.

I for one would be happy to see the back of him. He cares not for shareholders.

Any attempts I have made to communicate with him have been met with a short fuse and a highbrow tone, no matter how courteous my enquiry is. And I am not the only one who has had this experience.
I do not agree with you. Rather, he seems very caring and tries to provide as much information as possible.
 
  • Like
  • Love
Reactions: 11 users

IloveLamp

Top 20


That's fair, but certainly hasn't been my experience on multiple interactions
 
S

Straw

Guest
Can I make a suggestion that general concern posts go in the general concern thread? Otherwise it will destroy most of the value many get from coming here for decent info.
I for one have given up on TSE, as it has definitely been targeted by mentally damaged trolls and/or psychos with an agenda.
I agree with one poster who stated something along the lines of: ethics has gone out the window in most spheres of life.
If you are one of those people, it WILL come back and bite you in the arse at some point. Enjoy.
Edit: Before I get labelled a complete bastard: yes, for those that are at a major paper loss, I get it, and this is not directed at you. The best thing I did when I was 95% down or thereabouts on a seriously life-affecting amount was to walk away and find something else to occupy my time. Constant negativity here on social media will do your head in, and there are nefarious individuals who WILL use it against you and everyone else here to further their agenda, whether that be for sick thrills or financial benefit. Talk to individuals you know and trust and share your story. The term 'functional stupidity' came up today in something I read, and maybe part of avoiding it is not getting drawn into the opinions of others without thinking through and researching things yourself.
 
Last edited by a moderator:
  • Like
  • Love
  • Fire
Reactions: 42 users

AusEire

Founding Member. It's ok to say No to Dot Joining
Anyone want a job at Mercedes?
Screenshot_20230820-220558.png
Screenshot_20230820-220636.png
Screenshot_20230820-220656.png
 
  • Like
  • Fire
  • Love
Reactions: 69 users

Cartagena

Regular
April article on the EW23 round-up and takeaways. Hadn't noticed it before.

Nothing major, just a good acknowledgement of BRN and where we fit into the developing edge and chipset trends... mentioned in the same section as a couple of friendlies: ARM & Renesas.

In short​

  • Some of the latest IoT chipset and edge trends were on full display at the 2023 Embedded World Exhibition & Conference (in March 2023).
  • As part of the Embedded World 2023 conference report, our team identified 19 industry trends related to IoT chipsets and edge computing, 10 of which are highlighted in this article.

Why it matters​

  • Embedded World is one of the world’s most important fairs for embedded systems. Technologies showcased in the fair are widely applicable to any company dealing with computerized hardware or the Internet of Things.

2. A new AI design cycle for embedded devices is emerging​

The embedded community is getting ready for hardware and devices supporting AI/ML execution at the edge. This means massive hardware design changes and increased software-stack complexity. AI hardware development company BrainChip showcased its new Akida AI processor IP, which integrates with Arm's new Cortex-M85 to handle advanced machine learning workloads at the edge. Chipmaker Renesas, one of the clients of the Akida AI IP, showcased AI running on the Arm Cortex-M85. AI-based machine vision applications are one of the driving forces of AI adoption at this point. Adlink Technologies and Vision Components, for example, showcased their respective new AI camera solutions, which are capable of deploying large AI algorithms on their equipment.

Here's one more....

"We are excited to partner with BrainChip and leverage their state-of-the-art neuromorphic technology," said Frank T. Willis, President and CEO of Intellisense.
"By integrating BrainChip's Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability."
 

Attachments

  • Screenshot_20230820-225114.png
  • Like
  • Love
Reactions: 34 users

Cartagena

Regular
Recent Renesas acquisition... could spell a major opportunity for BrainChip.

Founded in 2003, Sequans is a fabless semiconductor company that designs and develops chipsets and modules for Internet of Things (IoT) devices. Offering products with extensive 5G/4G cellular categories, including 5G NR, Cat 4, Cat 1 and LTE-M/NB-IoT, Sequans provides reliable IoT wireless connectivity without the need for a gateway. The company also has proven expertise in low-power wireless devices, which is crucial in supporting massive IoT applications operating at low data rates. Its certified solutions are designed to work with all major radio frequency regulatory specifications by leading carriers in North America, Asia-Pacific and Europe.
 

Attachments

  • Screenshot_20230820-230755.png
  • Like
  • Love
  • Fire
Reactions: 25 users

Cartagena

Regular

Attachments

  • Screenshot_20230820-231743.png
  • Screenshot_20230820-231758.png
Last edited:
  • Like
  • Love
  • Fire
Reactions: 58 users

Cartagena

Regular

Attachments

  • Screenshot_20230820-232714.png
Last edited:
  • Like
  • Love
  • Fire
Reactions: 29 users

Frangipani

Regular
Akida can improve the Tachyum design by orders of magnitude in speed and power usage.

🤔
Just came across this press release by Tachyum dated August 15. I had to google EDA tools (which they seem to focus on here) and found out they have to do with software, but the following sentence obviously refers to changes in hardware, doesn't it? Please correct me if I am wrong.

“After the Prodigy design team had to replace IPs, it also had to replace RTL simulation and physical design tools.”

I should, however, add that I also found two articles in German whose authors are both very sceptical about Tachyum's claims and describe the second generation of the supposed Prodigy wonder chip, which (just like the first generation, never taped out despite several announcements) so far only exists on paper, as a castle in the sky and too good to be true. One of the authors remarks that there is one thing Tachyum is even better at than raising money and developing processors, and that is writing press releases on a weekly basis.

So while I am not sure what to make of this press release (and have never looked into the company before), I wanted to share it nevertheless, especially since Tachyum liked BrainChip on LinkedIn a couple of months ago.

If their processor upgrade is indeed about switching to Akida IP (I suppose that would have to be through Renesas or MegaChips then, since there has been no new signing of an IP license?), that would explain both the postponement of their universal processor's tape-out and the claims of "industry leading performance" and a "potential breakthrough for satisfying the world's appetite for computing at a lower environmental cost" that a lot of tech experts have been questioning. Tachyum states that "Delivery of the first Prodigy high-performance processors remains on track by the end of the year." We will see. In various ways.

04005AD7-9DEB-4714-B660-558EB189ED32.jpeg

  • Aug 15, 2023 4 minutes to read

Tachyum Achieves 192-Core Chip After Switch to New EDA Tools​

LAS VEGAS, August 15, 2023 – Tachyum® today announced that new EDA tools, utilized during the physical design phase of the Prodigy® Universal Processor, have allowed the company to achieve significantly better results with chip specifications than previously anticipated, after the successful change in physical design tools – including an increase in the number of Prodigy cores to 192.
64 Cores Added

After RTL design coding, Tachyum began work on completing the physical design (the actual placement of transistors and wires) for Prodigy. After the Prodigy design team had to replace IPs, it also had to replace RTL simulation and physical design tools. Armed with a new set of EDA tools, Tachyum was able to optimize settings and options that increased the number of cores by 50 percent, and SERDES from 64 to 96 on each chip. Die size grew minimally, from 500mm² to 600mm², to accommodate improved physical capabilities. While Tachyum could add more of its very efficient cores and still fit into the 858mm² reticle limit, these cores would be memory bandwidth limited, even with 16 DDR5 controllers running in excess of 7200MT/s. Tachyum cores have much higher performance than any other processor cores.

Other improvements realized during the physical design stage are:
  • Increase of the chip L2/L3 cache from 128MB to 192MB
  • Support of DDR5 7200 memory in addition to DDR5 6400
  • More speed with 1 DIMM per channel
  • Larger package accommodates additional 32 serial links and as many as 32 DIMMs connected to a single Prodigy chip
“At every step of the process in bringing Prodigy to market, our innovation allows us to push beyond the limits of traditional design and continue to exceed even our lofty design goals,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “We have achieved better results and timing with our new EDA PD tools. They are so effective that we wish we had used them from the beginning of the process but as the saying goes, ‘Better now than never.’ While we did not have any choice but to change EDA tools, our physical design (PD) team worked hard to redo physical design and optimizations with the new set of PD tools, as we approach volume-level production.”

As a universal processor, the patented Prodigy architecture enables it to switch seamlessly and dynamically from normal CPU tasks to AI/ML workloads, so it delivers high AI/ML performance in both training and inference. AI/ML is increasingly important in the banking industry, and used to identify fraud and cyberattacks before serious financial damage can be done.

Prodigy delivers unprecedented data center performance, power, and economics, reducing CAPEX and OPEX significantly. Because of its utility for both high-performance and line-of-business applications, Prodigy-powered data center servers can seamlessly and dynamically switch between workloads, eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization. Tachyum’s Prodigy delivers performance up to 4x that of the highest performing x86 processors (for cloud workloads) and up to 3x that of the highest performing GPU for HPC and 6x for AI applications.

With the achievement of this latest Prodigy milestone, Tachyum’s next steps are to complete the substrate package and socket design to accommodate more SERDES lines. Delivery of the first Prodigy high-performance processors remains on track by the end of the year.

Follow Tachyum​

https://twitter.com/tachyum
https://www.linkedin.com/company/tachyum
https://www.facebook.com/Tachyum/

About Tachyum​

Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPGPU, and a TPU in a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When hyperscale data centers are provisioned with Prodigy, all AI, HPC, and general-purpose applications can run on the same infrastructure, saving companies billions of dollars in hardware, footprint, and operational expenses. As global data center emissions contribute to a changing climate, and consume more than four percent of the world’s electricity—projected to be 10 percent by 2030—the ultra-low power Prodigy Universal Processor is a potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost. Prodigy, now in its final stages of testing and integration before volume manufacturing, is being adopted in prototype form by a rapidly growing customer base, and robust purchase orders signal a likely IPO in late 2024. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.
 
  • Like
  • Love
  • Thinking
Reactions: 20 users

Cartagena

Regular


IoT World Today

The Edge vs. Cloud Debate: Unleashing the Potential of on-Machine Computing​

Propelling the shift to the edge is the potential of AI and machine learning
Krishna Rangasayee
August 1, 2023
6 Min Read
Edge computing is at a tipping point; it's time to unleash the potential of the edge. The cloud is dead; long live the era of the edge.
Ten years ago, technology transformation meant embracing a move to the cloud for decentralized computing and processing of the massive amounts of data enterprises generate and manage to run their businesses. Since then, computing has evolved dramatically, swinging back and forth between more centralized, on-premise approaches and hybrid models in the cloud. All that's changing.

The most exciting developments in compute architecture are happening at the edge as software innovation moves away from the cloud. By 2028, a majority of enterprises will be using edge computing, according to the Linux Foundation. In addition, Gartner predicts that by 2025 more than 50% of enterprise-generated data will be processed outside a traditional centralized data center or cloud. This prediction is already playing out as the cloud begins to lose its hold and relevance, evidenced by the slowing of its previously sustained growth. Most recently in the first half of 2023, the largest hyperscalers reported for the first time a slowdown in their cloud businesses.
With the recent rapid rise of AI services and systems creating a demand for more optimized and efficient computing, this makes sense. It is estimated that up to 20% of the world’s total power will go to computing by the end of the decade unless new compute paradigms are created. We can’t afford the power consumption data centers require and we are reaching a limitation of what they can effectively process. Ultimately, we need to move closer to where the data is created for real-time insight, review, security and feedback.

And this need is only accelerating. According to Gartner, in 2018, 25% of enterprise-generated data was created and processed at the edge. In 2025, Gartner expects that 75% of enterprise-generated data will be created and processed at the edge.
If the future of software innovation is AI, then AI and the advancement of machine intelligence will only be accelerated by the edge. This opportunity is massive: enabling AI and ML for these edge devices is expected to be a $76 billion market by 2031. This is the foundation of the biggest technological shift we will see this decade, modernizing industries into the 21st century with endless, transformative applications that touch every aspect of our lives.

Why Is the Tide Turning Back to “on-Machine Computing”?​

There are three core benefits in returning to on-machine computing at the edge: the instantaneous speed to process and analyze data, the additional control and security over an enterprise's information, and the ability to manage costs more efficiently.
Enterprises can no longer afford the latency of the cloud. We have to bring the compute closer to the data. More than ever before, enterprises run on access to real-time insights. Any business process that requires seamless speed of information must bring the compute to the data at the point of its creation and consumption. This is even more evident when leveraging AI and ML, as seconds and minutes matter most in the moment of analysis and decision-making. The cloud is limited when it comes to immediate feedback. Most processes cannot wait the time it takes to send the information to the cloud and have it return a result. Imagine an autonomous vehicle's reaction time to avoid running into an unexpected pedestrian. The AV does not have the luxury of waiting for the cloud to return an insight on the unforeseen obstacle in its path.
In addition, the edge offers companies more opportunity to keep data within their own walls. As concerns about software supply-chain hacks and security threats continue to grow, the edge removes the need to constantly move large amounts of data. This also increases reliability, privacy, and compliance with ever-evolving regulatory requirements. Ultimately, the edge gives enterprises an additional layer of self-governance and autonomy over their information.
Finally, if an enterprise owns its compute, it can be more cost-efficient than managing it in the cloud. The edge requires less bandwidth and fewer network resources: devices do not have to continually connect to the internet for data transfers, which reduces the cost of network usage. The ongoing movement of data within cloud hosting services is one of the largest operating costs for an organization, and the more data that’s moved, the more it costs. The edge reduces the need to move data, since it's processed where it's generated, and cuts the bandwidth required to manage the data load.
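The cost argument can be sketched the same way. The per-GB egress price and the data volumes below are assumptions for illustration only; real cloud pricing varies by provider and tier:

```python
# Back-of-the-envelope egress-cost sketch. The per-GB price is an assumption
# (cloud providers typically charge on the order of $0.05-$0.12 per GB of
# egress); substitute your provider's real rates.

def monthly_egress_cost(gb_per_day: float, price_per_gb: float = 0.09,
                        days: int = 30) -> float:
    return gb_per_day * days * price_per_gb

raw_gb_per_day = 500.0            # assumed raw sensor data per site
edge_filtered_gb_per_day = 5.0    # assumed: edge forwards only insights/alerts

print(f"All-to-cloud:  ${monthly_egress_cost(raw_gb_per_day):,.2f}/month")
print(f"Edge-filtered: ${monthly_egress_cost(edge_filtered_gb_per_day):,.2f}/month")
```

Under these assumed figures, filtering at the edge cuts the egress bill by two orders of magnitude, which is the "more data moved, more it costs" point in miniature.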
As more enterprises and industries embrace the edge for compute, and apply AI and ML to all types of devices that exist between a data center and a smartphone, these benefits will only compound, opening up new, vast opportunities.
This move to the edge is already happening and nowhere is this more evident than in the resurgence of smart manufacturing.

The Edge’s Revival of Industry 4.0

The promise of technology, automation and applied intelligence within manufacturing has yet to be realized. The call to modernize the industry has been a widely recognized and long-debated topic over the last 10 years, though little has propelled it into reality.
As the industry continues to rebuild from the pandemic, facing shortages of labor, materials and all-time high demands, the shift to edge computing will deliver on the promise of innovation. Industry 4.0 is primed to realize the benefits of on-machine computing and it will be one of the first that demonstrates the edge’s potential.
According to 451 Research, nearly 70% of manufacturing IT is set to be deployed at the edge within the next two years. In a move to the edge, manufacturers will be able to:
  • efficiently process data, with shorter response times and more actionable results;
  • deploy AI and ML models to complement human-driven tasks like production line supervision and quality control; and
  • strengthen security and protection over their data and products.
 

Reactions: 22 users

Cartagena

Regular
Hey thanks for your contributions to this forum. I'm not a hundred percent sure but pretty sure you're a relatively new member to this forum.
Just wanted to say thanks.
And that thanks also goes out to the regulars of course also.
Appreciate your kind words...yes I'm quite new. I spend time between Oz and Europe. I'm passionate about tech and a little research goes a long way. That's what keeps me and I'm sure many here positive. Go Brainchip
 
Reactions: 32 users

Diogenese

Top 20
🤔
Just came across this press release by Tachyum dated August 15 - I had to google EDA tools (which they seem to focus on here) and found out they have to do with software, but then the following sentence obviously refers to changes in hardware, doesn’t it? Please correct me, if I am wrong.

“After the Prodigy design team had to replace IPs, it also had to replace RTL simulation and physical design tools.”

I should, however, add that I also found two articles in German whose authors are both very sceptical about Tachyum’s claims and describe the second generation of the supposed Prodigy wonder chip that - just like the first generation (which was never taped out despite several announcements) - so far only exists on paper, as a castle in the sky and too good to be true. One of the authors remarks that there is one thing Tachyum is even better in than raising money and developing processors, and that is writing press releases on a weekly basis.





So while I am not sure what to make of this press release (and have never looked into the company before), I wanted to share it nevertheless, especially since Tachyum liked Brainchip on LinkedIn a couple of months ago.

If their processor upgrade is indeed about switching to Akida IP (I suppose that would have to be through Renesas or MegaChips then, since there has been no new signing of an IP license?), that would explain both the postponement of their universal processor’s tape-out as well as their claims of “industry leading performance” and “potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost” that a lot of tech experts have been questioning. Tachyum states that “Delivery of the first Prodigy high-performance processors remains on track by the end of the year.” We will see. In various ways.








Aug 15, 2023 · 4 minutes to read

Tachyum Achieves 192-Core Chip After Switch to New EDA Tools​

LAS VEGAS, August 15, 2023 – Tachyum® today announced that new EDA tools, utilized during the physical design phase of the Prodigy® Universal Processor, have allowed the company to achieve significantly better results with chip specifications than previously anticipated, after the successful change in physical design tools – including an increase in the number of Prodigy cores to 192.
64 Cores Added

After RTL design coding, Tachyum began work on completing the physical design (the actual placement of transistors and wires) for Prodigy. After the Prodigy design team had to replace IPs, it also had to replace RTL simulation and physical design tools. Armed with a new set of EDA tools, Tachyum was able to optimize settings and options that increased the number of cores by 50 percent, and SERDES from 64 to 96 on each chip. Die size grew minimally, from 500mm² to 600mm², to accommodate improved physical capabilities. While Tachyum could add more of its very efficient cores and still fit into the 858mm² reticle limit, these cores would be memory-bandwidth limited, even with 16 DDR5 controllers running in excess of 7200MT/s. Tachyum cores have much higher performance than any other processor cores.
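As a sanity check on the press release's memory-bandwidth argument, a quick calculation. The assumption that each DDR5 controller drives a standard 64-bit (8-byte) channel is mine, not stated in the release:

```python
# Sanity-check of the memory-bandwidth claim in the press release.
# Assumption (not stated by Tachyum): each DDR5 controller drives a
# conventional 64-bit (8-byte) channel.

controllers = 16
transfer_rate_mts = 7200          # mega-transfers per second (DDR5-7200)
bytes_per_transfer = 8            # 64-bit channel width
cores = 192

total_gbps = controllers * transfer_rate_mts * bytes_per_transfer / 1000
per_core_gbps = total_gbps / cores

print(f"Aggregate DRAM bandwidth: ~{total_gbps:.0f} GB/s")
print(f"Per core at {cores} cores: ~{per_core_gbps:.1f} GB/s")
```

Under that assumption the aggregate is roughly 920 GB/s, under 5 GB/s per core, which is consistent with the release's own admission that additional cores would be memory-bandwidth limited.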

Other improvements realized during the physical design stage are:
  • Increase of the chip L2/L3 cache from 128MB to 192MB
  • Support of DDR5 7200 memory in addition to DDR5 6400
  • More speed with 1 DIMM per channel
  • Larger package accommodates additional 32 serial links and as many as 32 DIMMs connected to a single Prodigy chip
“At every step of the process in bringing Prodigy to market, our innovation allows us to push beyond the limits of traditional design and continue to exceed even our lofty design goals,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “We have achieved better results and timing with our new EDA PD tools. They are so effective that we wish we had used them from the beginning of the process but as the saying goes, ‘Better now than never.’ While we did not have any choice but to change EDA tools, our physical design (PD) team worked hard to redo physical design and optimizations with the new set of PD tools, as we approach volume-level production.”

As a universal processor, the patented Prodigy architecture enables it to switch seamlessly and dynamically from normal CPU tasks to AI/ML workloads, so it delivers high AI/ML performance in both training and inference. AI/ML is increasingly important in the banking industry, and used to identify fraud and cyberattacks before serious financial damage can be done.

Prodigy delivers unprecedented data center performance, power, and economics, reducing CAPEX and OPEX significantly. Because of its utility for both high-performance and line-of-business applications, Prodigy-powered data center servers can seamlessly and dynamically switch between workloads, eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization. Tachyum’s Prodigy delivers performance up to 4x that of the highest performing x86 processors (for cloud workloads) and up to 3x that of the highest performing GPU for HPC and 6x for AI applications.

With the achievement of this latest Prodigy milestone, Tachyum’s next steps are to complete the substrate package and socket design to accommodate more SERDES lines. Delivery of the first Prodigy high-performance processors remains on track by the end of the year.

Follow Tachyum​

https://twitter.com/tachyum
https://www.linkedin.com/company/tachyum
https://www.facebook.com/Tachyum/

About Tachyum​

Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPGPU, and a TPU in a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When hyperscale data centers are provisioned with Prodigy, all AI, HPC, and general-purpose applications can run on the same infrastructure, saving companies billions of dollars in hardware, footprint, and operational expenses. As global data center emissions contribute to a changing climate, and consume more than four percent of the world’s electricity—projected to be 10 percent by 2030—the ultra-low power Prodigy Universal Processor is a potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost. Prodigy, now in its final stages of testing and integration before volume manufacturing, is being adopted in prototype form by a rapidly growing customer base, and robust purchase orders signal a likely IPO in late 2024. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.
Hi Frangipani,

Here's a peek at a couple of Tachyum patents:

US10915324B2 System and method for creating and executing an instruction word for simultaneous execution of instruction operations 20180816 DANILAK RADOSLAV






a processing architecture and related methodology that utilizes location-aware processing that assigns Arithmetic Logic Units (ALU) in a processor to instruction operations based on prior allocations of ALUs to prior instruction operations. Such embodiments minimize the influence of internal transmission delay on wires between ALUs in a processor, with a corresponding significant increase in clock speed, reduction in power consumption and reduction in size.

A methodology for creating and executing instruction words for simultaneous execution of instruction operations is provided. The methodology includes creating a dependency graph of nodes with instruction operations, the graph including at least a first node having a first instruction operation and a second node having a second instruction operation being directly dependent upon the outcome of the first instruction operation; first assigning the first instruction operation to a first instruction word; second assigning a second instruction operation: to the first instruction word upon satisfaction of a first at least one predetermined criteria; and to a second instruction word, that is scheduled to be executed during a later clock cycle than the first instruction word, upon satisfaction of a second at least one predetermined criteria; and executing, in parallel by the plurality of ALUs and during a common clock cycle, any instruction operations within the first instruction word.
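The packing procedure this abstract describes resembles classic list scheduling for wide-issue machines: walk a dependency graph and group independent operations into one "instruction word" per clock cycle. A minimal generic sketch of that idea (an illustration only, not Tachyum's actual algorithm):

```python
# Minimal list-scheduling sketch: pack operations from a dependency graph
# into "instruction words" so independent ops can issue in the same cycle.
# Generic illustration of the technique, NOT Tachyum's actual algorithm.

def schedule(ops, deps, width=4):
    """ops: iterable of op names; deps: dict op -> set of ops it depends on.
    Returns a list of instruction words (lists of ops issued together)."""
    remaining = set(ops)
    done = set()
    words = []
    while remaining:
        # Ops whose dependencies were all completed in *previous* words.
        ready = sorted(o for o in remaining if deps.get(o, set()) <= done)
        if not ready:
            raise ValueError("cyclic dependency graph")
        word = ready[:width]          # respect the machine's issue width
        words.append(word)
        done |= set(word)
        remaining -= set(word)
    return words

deps = {"c": {"a", "b"}, "d": {"c"}}
print(schedule(["a", "b", "c", "d"], deps))
# -> [['a', 'b'], ['c'], ['d']]
# a and b are independent, so they share a word; c waits for both; d for c.
```

The patent's refinements (location-aware ALU assignment, "global" nodes excluded from a word) are constraints layered on top of this basic packing loop.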


As you can see, Tachyum are big on ALUs.

It also seems to be synchronous (clocked) rather than asynchronous.

This one is more recent:


EP3979070A1 SYSTEM AND METHOD OF POPULATING AN INSTRUCTION WORD 20190815




A methodology for populating an instruction word for simultaneous execution of instruction operations by a plurality of arithmetic logic units (ALUs) in a data path includes creating a dependency graph of instruction nodes, and initially designating any nodes in the dependency graph as global, whereby the corresponding instruction node is expected to require inputs from outside of a predefined limited physical range of ALUs smaller than the full extent of the data path. A first available instruction node is selected from the dependency graph and assigned to the instruction word. Also selected are any available instruction nodes that are dependent upon a result of the first available instruction node and do not violate any predetermined rule, including that the instruction word may not include an available dependent instruction node designated as global. Available dependent instruction nodes are assigned to the instruction word, and the dependency graph updated to remove any assigned nodes from further assignment consideration.


In their blurb, they claim to have designed an all-in-one CPU-GPU-TPU, which they claim performs better than CPUs and GPUs. They need the ALUs to do the CPU/GPU work, but ALUs work on multi-bit numbers, not spikes.

Tachyum’s Prodigy delivers performance up to 4x that of the highest performing x86 processors (for cloud workloads) and up to 3x that of the highest performing GPU for HPC and 6x for AI applications.

Taking Tachyum at their word, this is very commendable, but doing AI/ML on even the best organized CPU/GPU/ALU arrangement will always be inferior to Akida.

(I think that, to accommodate 8-bit weights and activations, Akida does include a couple of ALUs in the input-layer NPUs, but the internal layers only process up to 4-bit weights & activations)
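For readers unfamiliar with why bit width matters here, this sketch shows the precision trade-off between 8-bit and 4-bit values using generic symmetric linear quantization. It is a textbook illustration, not BrainChip's actual quantization scheme:

```python
# Generic symmetric linear quantization to n bits, to illustrate the
# 8-bit vs 4-bit precision trade-off. NOT BrainChip's actual scheme.

def quantize(values, bits):
    levels = 2 ** bits - 1                # e.g. 15 levels for 4-bit unsigned
    vmax = max(abs(v) for v in values) or 1.0
    scale = vmax / levels
    return [round(v / scale) * scale for v in values]

acts = [0.03, 0.41, 0.77, 1.0]
print("8-bit:", [f"{q:.4f}" for q in quantize(acts, 8)])
print("4-bit:", [f"{q:.4f}" for q in quantize(acts, 4)])
```

At 4 bits the smallest activation collapses to zero while the larger ones survive with coarse precision; the design bet is that, for inference, this loss is acceptable in exchange for much smaller multipliers and memory.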
 

Reactions: 19 users

Cardpro

Regular
How can you expect meaningful revenue yet?
Where is it coming from?
If Renesas/Megachips have secretly rushed the products to market with Akida nodes inside can you please provide the links to products?
Otherwise I must have missed the ASX ann of a new IP license up front fee, can you provide the link to that?

Another day, another set of unhinged posts from those over-invested in BRN
Sorry, to clarify, I agree that we don't have (or ever had) any meaningful revenue.

I thought we would have it within a couple of years after releasing Akida Gen 1, which we had waited a long time for after multiple failed products.

Yes, Sean said this and that, but our revenue is still close to nothing and our market cap keeps falling, approaching 2015 levels day by day. I know you like to blame shorters, but the truth is that we haven't had anything to prove our worth for at least the past two years (yes, we formed partnerships and joined ecosystems, but we haven't had any price-sensitive announcements relating to revenue or future revenue).

Yes, I get that we have amazing tech and it takes time to be adopted. But at the same time, we see multiple AI start-ups specialising in edge AI (some using SNNs as well) landing deals, partnering with big tech companies that list them on their websites, receiving funding from big tech companies, etc...

I truly hope Gen 2 is successful and can actually land some IP deals...
 
Reactions: 13 users

Frangipani

Regular
Hi Frangipani,

Here's a peek at a couple of Tachyum patents:

US10915324B2 System and method for creating and executing an instruction word for simultaneous execution of instruction operations 20180816 DANILAK RADOSLAV

EP3979070A1 SYSTEM AND METHOD OF POPULATING AN INSTRUCTION WORD 20190815

[…]

Thanks for looking up those Tachyum patents and for your detailed explanation, @Diogenese. While semiconductor tech in general is very much over my head, I do understand you are practically ruling out that Akida is involved. 😃

Now that your ogre has long since retired, may I suggest an alternative in the form of an illustration by German humorist, poet, illustrator, and painter Wilhelm Busch (1832-1908), whose most famous work, Max und Moritz – Eine Bubengeschichte in sieben Streichen (Max and Moritz: A Story of Seven Boyish Pranks), was published in 1865. Busch’s black humorous tale, written entirely in rhymed couplets, was illustrated by the author himself and remains a beloved classic of German literature to this day.

A lesser-known one of his tales is Diogenes and the Bad Boys of Corinth (1864), about another terrible duo harassing your namesake, the ancient Greek philosopher, who was quietly lying in his barrel, thinking, when the mischief-makers arrived on the scene.
Well, those bad boys ultimately meet their untimely demise, just like Max and Moritz do (https://en.m.wikipedia.org/wiki/Max_and_Moritz), and the old cynic crawls back into his barrel, chuckling contentedly.

The story’s penultimate illustration would serve well as a replacement for the ogre, whenever you feel pestered by us non-techies... 😂


 
Reactions: 13 users
Top Bottom