@equanimous makes a good point: several people keep repeating that the company is making no revenue. What is closer to the truth is that the company has not yet established a revenue stream that covers its operating costs, let alone brings in profits for its shareholders.
Even if a company like Microsoft, Google, or Meta were to pour a billion dollars into BrainChip, that would not count as revenue but as an investment in the company, made in the expectation that it would one day be profitable. If that happened, we would see the stock price rise, not because of the influx of funds but because of the value such large companies would be signalling they place on the technology. There is currently a debate about whether OpenAI (the company behind ChatGPT, which received funding from both Elon Musk and Microsoft) will be able to generate enough revenue to avoid going bankrupt by the end of 2024.
Sean Hehir was very transparent and honest about this upfront. He said that our revenue stream would be lumpy. As evidenced by the financial statements the company has filed over the past two years, what Sean told everyone was accurate.
From a technology perspective, neuromorphic computing is not new, but its applications and marketability have long seemed dubious to those in the AI community who have researched it. BrainChip, and a couple of other competitors in the neuromorphic space such as SynSense and Quadric, are marketing a technology that has not yet been accepted as mainstream.
AI accelerators such as Intel's Movidius and Nvidia's GPUs have been crunching numbers for CNNs for years. They are solutions that are "good enough," at least until a better technology surfaces. Neuromorphic processors seem to be a new and disruptive solution that provides several benefits over traditional CNNs, but they do require market adoption. BrainChip's inclusion of CNN-to-SNN conversion provides the 'adapter' that eases users of the earlier technology over to the newer platform.
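For readers unfamiliar with the idea, the general principle behind CNN-to-SNN conversion (as described in the research literature; BrainChip's actual tooling is proprietary) can be sketched in a few lines: a ReLU activation is approximated by the firing rate of an integrate-and-fire neuron driven by the same input.

```python
# Illustrative sketch of rate-based CNN-to-SNN conversion (a generic
# published technique, NOT BrainChip's implementation): a ReLU output
# is approximated by the firing rate of an integrate-and-fire neuron.

def relu(x):
    return max(0.0, x)

def spike_rate(x, threshold=1.0, timesteps=1000):
    """Firing rate of an integrate-and-fire neuron with constant input x."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += x                  # integrate the input current
        if v >= threshold:      # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / timesteps   # spikes per timestep

for x in (-0.5, 0.0, 0.3, 0.7):
    print(f"input={x:+.1f}  relu={relu(x):.2f}  snn_rate={spike_rate(x):.2f}")
```

Over enough timesteps the firing rate converges to the ReLU value, which is why a trained CNN can be mapped onto spiking hardware with modest accuracy loss.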
I have no expectation that BrainChip will earn the revenue it needs to keep the lights on during the first few years its product is on the market. I expect more IP sales to come as the company listens to customers' needs and adapts its IP offerings accordingly. I would prefer these to be "quality" IP engagements, where those who integrate BrainChip's Akida technology sell their products to many customers across several different markets.
Even if this initial revenue stream is "lumpy" and still not enough to cover BrainChip's operating costs, as long as the stream continues and trends upwards, I will be confident that I have made a wise investment.
I attended a Developer's Conference early this year where presentations were given on generative AI and machine learning. Presenters were using Nvidia GPUs to run machine learning scenarios. I asked two presenters if they had ever heard of BrainChip's Akida (they had not) and explained that it is a neuromorphic processor. Surprisingly, both stated there was ongoing research into neuromorphics.
Neither of these individuals knew of BrainChip, nor that it had already commercialized its IP. Even with Mercedes' announcement of the Vision EQXX at CES 2022, BrainChip and Akida are still relatively unknown in the mainstream.
I am considering crafting a presentation for DevCon in 2024, where I think Akida's one-shot learning and inferencing will elicit some wows from the audience, even though the technology has been commercially available for the past two years.
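To give a flavour of what one-shot learning means in practice, here is a toy analogy only - a generic nearest-prototype scheme, not Akida's actual edge-learning mechanism: a single labelled example is stored as a class prototype, and inference is a similarity lookup.

```python
# Toy illustration of one-shot learning as nearest-prototype classification.
# This mimics the user-facing behaviour (one example adds a new class,
# no retraining), not Akida's internal mechanics.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}             # label -> feature vector

    def learn(self, label, features):    # a single example is enough
        self.prototypes[label] = features

    def predict(self, features):
        return max(self.prototypes,
                   key=lambda l: cosine(self.prototypes[l], features))

clf = OneShotClassifier()
clf.learn("cat", [0.9, 0.1, 0.0])        # one example per class
clf.learn("dog", [0.1, 0.9, 0.2])
print(clf.predict([0.8, 0.2, 0.1]))      # → cat
```

The "wow" factor in a live demo comes from the same property shown here: adding a new class is a single assignment, not a training run.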
1. Investment from massive tech giants didn't happen - but yes, there are so many AI startups that were able to get funding from big tech giants.
2. Our revenue stream is not lumpy, it is just deteriorating every year. It's more like free-falling: 0 new IP contracts over 2 years and not much revenue... Sean misled us saying watch the financials... the vibe I got was that we are heading up, not down, but look where we are now (I would truly love to see high/surprising revenue in the coming half-yearlies and multiple posts telling me how stupid I am / told you so)...
3. Why do you have no expectation that we will see meaningful revenue? I don't fking get it..
4. Yeah, not many people know about BrainChip... isn't it sad that not many people know about this revolutionary tech, or isn't it weird that we are seeing peanuts in our statements even after releasing the main product we were all waiting on? Wtf happened to all these EAPs/NDAs? Is Akida 2.0 actually going to change the dynamics, or do we need Akida 3.0? Akida X?
Ahhhhhh plzzzz sign new IPs... or surprise me with massive revenue / good partnerships / MOUs with massive companies / public exposure, etc...
Hi Robsmark, I think one of the things that fucks me off the most about this SP is that many here have held for years, have done significant research, and have financially supported the company in its endeavours to this point in time, where it can finally release Akida 2. And how have we been rewarded for that loyalty? I for one am significantly in the red now, and the thought that some random investor can swoop in, buy this stock today, and instantly be in a much better position than I, and many others like me, is mentally exhausting. It's all right for the company to say that the SP will do what the SP will do, but to me personally it feels like they're spitting in my face.
I fully understand that my investment decisions are my responsibility, but that doesn’t make it any easier to swallow.
I agree entirely.
I do not agree with you. Rather, he seems very caring and tries to provide as much information as possible. While it's a valid point... imo Tony is a useless tit who has no interest in keeping shareholders well informed or appeasing concerns.
I for one would be happy to see the back of him. He cares not for shareholders.
Any attempts I have made to communicate with him have been met with a short fuse and a highbrow tone, no matter how courteous my enquiry is. And I am not the only one who has had this experience.
April article with an EW23 round-up and takeaways. Hadn't noticed it before.
Nothing major, just a good acknowledgement of BRN and where we fit in the developing edge and chipset trends... mentioned in the same section as a couple of friendlies: ARM & Renesas.
The top 10 IoT chipset and edge trends—as showcased at Embedded World 2023
iot-analytics.com
In short
- Some of the latest IoT chipset and edge trends were on full display at the 2023 Embedded World Exhibition & Conference (in March 2023).
- As part of the Embedded World 2023 conference report, our team identified 19 industry trends related to IoT chipsets and edge computing, 10 of which are highlighted in this article.
Why it matters
- Embedded World is one of the world’s most important fairs for embedded systems. Technologies showcased in the fair are widely applicable to any company dealing with computerized hardware or the Internet of Things.
2. A new AI design cycle for embedded devices is emerging
The embedded community is getting ready for hardware and devices supporting AI/ML execution at the edge. This means major hardware design changes and increased complexity in both hardware and the software stack. AI hardware development company BrainChip showcased its new Akida AI processor IP, which integrates with Arm's new Cortex-M85 to handle advanced machine learning workloads at the edge. Chipmaker Renesas, one of BrainChip's customers, showcased Akida AI running on the Arm Cortex-M85. AI-based machine vision applications are one of the driving forces of AI adoption at this point. Adlink Technologies and Vision Components, for example, showcased their respective new AI camera solutions, which are capable of deploying large AI algorithms on their equipment.
Recent Renesas acquisition... could spell major opportunity for BrainChip. Here's one more...
"We are excited to partner with BrainChip and leverage their state-of-the-art neuromorphic technology," said Frank T. Willis, President and CEO of Intellisense.
"By integrating BrainChip's Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability."
Research and Development Engineer at Mercedes-Benz liking Brainchip AKIDA 2nd Generation page
Courtesy of Linkedin.
Akida can improve the Tachyum design by orders of magnitude in speed and power usage.
Appreciate your kind words... yes, I'm quite new. I spend time between Oz and Europe. I'm passionate about tech, and a little research goes a long way. That's what keeps me, and I'm sure many here, positive. Go Brainchip.
Hey, thanks for your contributions to this forum. I'm not a hundred percent sure, but pretty sure you're a relatively new member of this forum.
Just wanted to say thanks.
And that thanks also goes out to the regulars of course also.
Hi Frangipani,
Just came across this press release by Tachyum dated August 15. I had to google EDA tools (which they seem to focus on here) and found out they have to do with software, but the following sentence obviously refers to changes in hardware, doesn't it? Please correct me if I am wrong.
“After the Prodigy design team had to replace IPs, it also had to replace RTL simulation and physical design tools.”
I should, however, add that I also found two articles in German whose authors are both very sceptical about Tachyum's claims. They describe the second generation of the supposed Prodigy wonder chip, which - just like the first generation (never taped out despite several announcements) - so far only exists on paper, as a castle in the air and too good to be true. One of the authors remarks that there is one thing Tachyum is even better at than raising money and developing processors, and that is writing press releases on a weekly basis.
"50 Prozent mehr Kerne und Cache: Tachyum baut neue Luftschlösser in EDA-Tools" ("50 percent more cores and cache: Tachyum builds new castles in the air with EDA tools")
192 cores instead of the planned 128, plus 50 percent more cache: with new EDA tools, Tachyum is also building new castles in the air. www.computerbase.de
Golem.de: IT news for professionals
www.golem.de
So while I am not sure what to make of this press release (and have never looked into the company before), I wanted to share it nevertheless, especially since Tachyum liked Brainchip on LinkedIn a couple of months ago.
If their processor upgrade is indeed about switching to Akida IP (I suppose that would have to be through Renesas or MegaChips then, since there has been no new signing of an IP license?), that would explain both the postponement of their universal processor’s tape-out as well as their claims of “industry leading performance” and “potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost” that a lot of tech experts have been questioning. Tachyum states that “Delivery of the first Prodigy high-performance processors remains on track by the end of the year.” We will see. In various ways.
Tachyum Achieves 192-Core Chip After Switch to New EDA Tools | Tachyum
Tachyum achieved significantly better results with chip specifications than previously anticipated – including an increase in the number of Prodigy cores to 192. www.tachyum.com
Aug 15, 2023
LAS VEGAS, August 15, 2023 – Tachyum® today announced that new EDA tools, utilized during the physical design phase of the Prodigy® Universal Processor, have allowed the company to achieve significantly better results with chip specifications than previously anticipated, after the successful change in physical design tools – including an increase in the number of Prodigy cores to 192.
After RTL design coding, Tachyum began work on completing the physical design (the actual placement of transistors and wires) for Prodigy. After the Prodigy design team had to replace IPs, it also had to replace RTL simulation and physical design tools. Armed with a new set of EDA tools, Tachyum was able to optimize settings and options that increased the number of cores by 50 percent, and SERDES from 64 to 96 on each chip. Die size grew minimally, from 500mm² to 600mm², to accommodate improved physical capabilities. While Tachyum could add more of its very efficient cores and still fit into the 858mm² reticle limit, these cores would be memory-bandwidth limited, even with 16 DDR5 controllers running in excess of 7200MT/s. Tachyum cores have much higher performance than any other processor cores.
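A quick back-of-envelope check of the bandwidth figure above, assuming each DDR5 controller drives a standard 64-bit (8-byte) channel - an assumption on my part, since the release does not state the channel width:

```python
# Back-of-envelope check of the memory-bandwidth claim in the press release.
# ASSUMPTION: each DDR5 controller drives one 64-bit (8-byte) channel;
# the release does not specify channel width.
controllers = 16
transfer_rate = 7200e6        # transfers per second (7200 MT/s)
bytes_per_transfer = 8        # 64-bit channel
cores = 192

total_bw = controllers * transfer_rate * bytes_per_transfer   # bytes/s
print(f"aggregate bandwidth: {total_bw / 1e9:.1f} GB/s")      # → 921.6 GB/s
print(f"per core:            {total_bw / cores / 1e9:.2f} GB/s")
```

Under that assumption, 192 cores would share under 5 GB/s each, which makes the release's own point about why more cores would be bandwidth-limited.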
Other improvements realized during the physical design stage are:
- Increase of the chip L2/L3 cache from 128MB to 192MB
- Support of DDR5 7200 memory in addition to DDR5 6400
- More speed with 1 DIMM per channel
- Larger package accommodates additional 32 serial links and as many as 32 DIMMs connected to a single Prodigy chip
“At every step of the process in bringing Prodigy to market, our innovation allows us to push beyond the limits of traditional design and continue to exceed even our lofty design goals,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “We have achieved better results and timing with our new EDA PD tools. They are so effective that we wish we had used them from the beginning of the process but as the saying goes, ‘Better now than never.’ While we did not have any choice but to change EDA tools, our physical design (PD) team worked hard to redo physical design and optimizations with the new set of PD tools, as we approach volume-level production.”
As a universal processor, the patented Prodigy architecture enables it to switch seamlessly and dynamically from normal CPU tasks to AI/ML workloads, so it delivers high AI/ML performance in both training and inference. AI/ML is increasingly important in the banking industry, and used to identify fraud and cyberattacks before serious financial damage can be done.
Prodigy delivers unprecedented data center performance, power, and economics, reducing CAPEX and OPEX significantly. Because of its utility for both high-performance and line-of-business applications, Prodigy-powered data center servers can seamlessly and dynamically switch between workloads, eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization. Tachyum’s Prodigy delivers performance up to 4x that of the highest performing x86 processors (for cloud workloads) and up to 3x that of the highest performing GPU for HPC and 6x for AI applications.
With the achievement of this latest Prodigy milestone, Tachyum’s next steps are to complete the substrate package and socket design to accommodate more SERDES lines. Delivery of the first Prodigy high-performance processors remains on track by the end of the year.
About Tachyum
Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPGPU, and a TPU in a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When hyperscale data centers are provisioned with Prodigy, all AI, HPC, and general-purpose applications can run on the same infrastructure, saving companies billions of dollars in hardware, footprint, and operational expenses. As global data center emissions contribute to a changing climate, and consume more than four percent of the world’s electricity—projected to be 10 percent by 2030—the ultra-low power Prodigy Universal Processor is a potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost. Prodigy, now in its final stages of testing and integration before volume manufacturing, is being adopted in prototype form by a rapidly growing customer base, and robust purchase orders signal a likely IPO in late 2024. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.
Sorry, to clarify, I agree that we don't have (and never had) any meaningful revenue.
How can you expect meaningful revenue yet?
Where is it coming from?
If Renesas/MegaChips have secretly rushed products to market with Akida nodes inside, can you please provide links to those products?
Otherwise, I must have missed the ASX announcement of a new IP licence upfront fee. Can you provide the link to that?
Another day, another set of unhinged posts from those over-invested in BRN
Hi Frangipani,
Here's a peek at a couple of Tachyum patents:
US10915324B2: "System and method for creating and executing an instruction word for simultaneous execution of instruction operations" (filed 2018-08-16, DANILAK, Radoslav)
The patent describes a processing architecture and related methodology that utilizes location-aware processing, assigning arithmetic logic units (ALUs) in a processor to instruction operations based on prior allocations of ALUs to prior instruction operations. Such embodiments minimize the influence of internal transmission delay on wires between ALUs in a processor, with a corresponding significant increase in clock speed, reduction in power consumption and reduction in size.
A methodology for creating and executing instruction words for simultaneous execution of instruction operations is provided. The methodology includes creating a dependency graph of nodes with instruction operations, the graph including at least a first node having a first instruction operation and a second node having a second instruction operation being directly dependent upon the outcome of the first instruction operation; first assigning the first instruction operation to a first instruction word; second assigning a second instruction operation: to the first instruction word upon satisfaction of a first at least one predetermined criteria; and to a second instruction word, that is scheduled to be executed during a later clock cycle than the first instruction word, upon satisfaction of a second at least one predetermined criteria; and executing, in parallel by the plurality of ALUs and during a common clock cycle, any instruction operations within the first instruction word.
As you can see, Tachyum are big on ALUs.
It also seems to be synchronous (clock-driven) rather than asynchronous.
This one is more recent:
EP3979070A1 SYSTEM AND METHOD OF POPULATING AN INSTRUCTION WORD 20190815
A methodology for populating an instruction word for simultaneous execution of instruction operations by a plurality of arithmetic logic units (ALUs) in a data path includes creating a dependency graph of instruction nodes, and initially designating any nodes in the dependency graph as global, whereby the corresponding instruction node is expected to require inputs from outside of a predefined limited physical range of ALUs smaller than the full extent of the data path. A first available instruction node is selected from the dependency graph and assigned to the instruction word. Also selected are any available instruction nodes that are dependent upon a result of the first available instruction node and do not violate any predetermined rule, including that the instruction word may not include an available dependent instruction node designated as global. Available dependent instruction nodes are assigned to the instruction word, and the dependency graph updated to remove any assigned nodes from further assignment consideration.
In their blurb, they claim to have designed an all-in-one CPU-GPU-TPU that performs better than CPUs and GPUs. They need the ALUs to do the CPU/GPU work, but ALUs work on multi-bit numbers, not spikes.
Tachyum’s Prodigy delivers performance up to 4x that of the highest performing x86 processors (for cloud workloads) and up to 3x that of the highest performing GPU for HPC and 6x for AI applications.
Taking Tachyum at their word, this is very commendable, but doing AI/ML on even the best-organized CPU/GPU/ALU arrangement will always be inferior to Akida.
(I think that, to accommodate 8-bit weights and activations, Akida does include a couple of ALUs in the input-layer NPUs, but the internal layers only process up to 4-bit weights and activations.)
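The spikes-versus-ALUs point can be made concrete with hypothetical numbers: a dense ALU engine performs every multiply-accumulate regardless of the data, while an event-driven design only does work for the activations that actually spike.

```python
# Why event-driven beats dense arithmetic when activations are sparse:
# a dense engine performs inputs*outputs MACs regardless of the data,
# while an event-driven core only processes non-zero (spiking) inputs.
# All numbers here are hypothetical, for illustration only.
inputs = 1024        # activations feeding a fully connected layer
outputs = 256        # neurons in the layer
density = 0.10       # fraction of inputs that actually spike

dense_macs = inputs * outputs                  # work a CPU/GPU ALU array does
event_ops = int(inputs * density) * outputs    # work an event-driven core does

print(f"dense MACs: {dense_macs}")             # → 262144
print(f"event ops:  {event_ops}")              # → 26112
print(f"ratio:      {dense_macs / event_ops:.1f}x less work")
```

At 10% activation density the event-driven core does roughly a tenth of the arithmetic, and real spiking workloads are often far sparser than that.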
Anyone online who recalls the name of the space-industry thingamabob company we recently collaborated with? Ant-something?