BrainShit
They did... in Sep 2023. And the Edge Box is at least a quarter late. This should be announced via the ASX as it’s a commercial product with a $$ amount.
I can't say anything about their tech, but their aggressive claims mean nothing.

"@Diogenese - Jem Davis has turned up as lead NED at Literal Labs AI - https://www.literal-labs.ai - they are making some pretty aggressive claims on power saving. If you have time, would you care to comment on this?"
"Literal Labs applies a streamlined and efficient approach to AI that is faster, explainable, and up to 10,000X more energy efficient than today’s neural networks. Similar to NNs in that customers train a dataset, Literal Labs trains Tsetlin machine models specific to customer datasets. Our approach results in an optimised machine model that is then deployed onto the target hardware. The Tsetlin machine model can be deployed as software only or can be accelerated using Literal Labs accelerators. Our benchmarking shows we can achieve 250X faster inferencing than XG Boost using software only, and up to 1,000X faster and up to 10,000X less energy consumption when using hardware acceleration. The company was spun out of Newcastle University by world leaders in Tsetlin machine Dr. Alex Yakovlev and Dr. Rishad Shafik, and led by former Arm CPU division VP and semiconductor startup founder Noel Hurley. "
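For anyone wondering what a Tsetlin machine actually is: it is built from large numbers of simple "Tsetlin automata", tiny finite-state machines that learn an include/exclude decision from reward and penalty feedback, which is part of why the approach maps so cheaply onto hardware. Below is a minimal sketch of a single two-action automaton; it is purely illustrative and has nothing to do with Literal Labs' actual implementation.

```python
class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2*n states.
    States 1..n select action 0 (exclude); states n+1..2n select
    action 1 (include). Rewards push the state deeper into the
    current action's half, penalties push it toward the boundary
    and, eventually, the opposite action."""

    def __init__(self, n=3):
        self.n = n
        self.state = n  # start at the boundary, on the "exclude" side

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reinforce: move deeper into the current action's half.
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken: crossing the midpoint flips the decision.
        self.state += 1 if self.action() == 0 else -1


# Feedback loop: the environment consistently favours action 1,
# so the automaton is penalized for choosing 0 and rewarded for 1.
ta = TsetlinAutomaton()
for _ in range(10):
    if ta.action() == 1:
        ta.reward()
    else:
        ta.penalize()
print(ta.action())  # 1: the automaton has learned to "include"
```

In a full Tsetlin machine, many such automata jointly learn clauses (AND-expressions over the input bits and their negations), and a class decision is made by summing clause votes; the "training on customer datasets" the blurb describes is this reward/penalty game run over the data.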
Wait a minute @Dio. ...any ARM CPU ... architecture. Am I supposed to take ARM's statement seriously, that it doesn't depend on the 22nm (GF) node or any node at all, and that it's just down to the qualification process?

Yes, I also like these superlatives when they come from others. Repeating it because it reads so nicely just before the IFS news:
ARM:
Arm’s Cortex-M coupled with BrainChip’s Akida delivers unparalleled system performance.
https://armkeil.blob.core.windows.net/developer/Files/pdf/ai-ecosystem-catalogue/BrainChip_Arm-AI-Partner-Solution-Brief-Template-1.pdf
U - nrivaled
U - nparalleled
U - biquitous
Akida is so uuu!
______________________
I would like to add this one
__________________________________________________________________

"Akida neuron fabric integrates into any Arm CPU."
It has been said and written many times since Anil Mankar made this claim after the ARM partnership was announced.
BrainChip Begins Accepting Pre-Orders of the Akida Edge AI Box
Laguna Hills, Calif. – February 20, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today re-opened its product shopping portal and will be accepting pre-orders of its recently announced Akida Edge AI Box. The Akida™ powered Edge AI box is expected to help customers accelerate high-performance AI applications at the Edge in challenging environments where energy-efficiency is essential and cloud connectivity is not guaranteed.
Built in collaboration with VVDN Technologies, the Akida Edge AI Box is designed to meet the demanding needs of retail and security, Smart City, automotive, transportation and industrial applications. The device combines a powerful quad-core CPU platform with Akida AI accelerators to provide a huge boost in AI performance. The compact, light Edge box is cost-effective and versatile with built-in Ethernet and Wi-Fi connectivity, HD display support, extensible storage, and USB interfaces. BrainChip and VVDN are finalizing the set of AI applications that will run out of the box. With the ability to personalize and learn on device, the box can be customized per application and per user without the need for cloud support, enhancing privacy and security.
“The Akida Edge AI Box is ideally suited to provide the low latency and high throughput processing with ultra-low power consumption – a necessity for the next generation of smart Edge devices,” said Sean Hehir, CEO of BrainChip. “We are excited to officially launch pre-orders of the Akida Edge AI Box and bring this groundbreaking technology to market to empower customers in developing and deploying intelligent, secure and customized devices and services for multi-sensor environments in real time.”
The Akida Edge AI Box starts at $799 and can be pre-ordered by visiting https://shop.brainchipinc.com/.
I guess it wasn't sold out as advertised on the website last week. It just wasn't for sale yet.

All looking extremely positive moving forward
“The Akida Edge AI Box is ideally suited to provide the low latency and high throughput processing with ultra-low power consumption – a necessity for the next generation of smart Edge devices,” said Sean Hehir, CEO of BrainChip.
Looks that way.. I'm thinking that TEAM BRN knew that they were going to be available very soon, and "Sold out" looked much more positive than:

"I guess it wasn't sold out as advertised on the website last week. It just wasn't for sale yet."
I said recently... COCKROACHES!!!!!!

Pre-market looking like the sellers are going to try to flood the market again
Need more buyers
Look behind your couch for some change and buy more shares
Hi Cosors, that is interesting! I'm off to bed now to get my thoughts straight. See you tomorrow and have a good start to the day! Thanks to you both!
____
...Akida at Intel's 1.8nm announced the next day... snoring
Hi DB,

Obviously I'm not Diogenese..
But my recollection is that Peter said AKIDA could go down to a 7nm process node.
But I'm guessing that's because that was the smallest viable process node at the time?
I'm also guessing that the "neuron fabric", whatever that actually is, would be the limiting factor..
But since it's a "digital" design, there should be no limits?...
The new smaller process nodes are completely different "processes" too..
Hi Slade,

"Thanks for posting this @Learning. Why would BrainChip bother to create a post for another company's PMU on their LinkedIn account? A performance monitoring unit, of all things. Is that something that Akida would be used for? Or is there more to it?"
Has BrainChip been helping Tachyum with the soon-to-be-released beta version of its Prodigy processor? I haven't the foggiest, but a look at the article below shows that Tachyum have been busy advancing their Prodigy offering.
Tachyum Upgrades Software Package in Advance of Beta Release of the Prodigy Universal Processor
LAS VEGAS, February 14, 2024 – Tachyum® today announced that it has upgraded the software stack for the Prodigy® Universal Processor before the anticipated launch of its beta version around the end of the quarter. Quality completion of Prodigy’s software stack is a key component as the company continues to advance towards chip production and distribution.
Tachyum software engineers have worked hard to enable the full potential of Prodigy with the development of an ecosystem of applications, system software, frameworks and libraries that are ported to run natively on Prodigy hardware. Once the software package completes its testing and runs cleanly on the FPGA, the company can fully transition to advancing the Universal Processor into production.
The Prodigy software distribution is a completely integrated software stack and package that is ready for deployment “as is.” It is available as a single pre-installed image for Tachyum’s early adopters and customers. Applications have been tested to work out of the box so that customers can immediately start using the reference design. If users encounter any issues during deployment, the software can be quickly and easily restored to its original image.
Included in the software distribution package as part of alpha testing are:
- Latest version of the QEMU emulator 8.2
- GCC 13.2 (GNU Compiler Collection) and glibc 2.39 (GNU C Library)
- Linux 6.6 LTS (Long Term Support), which contains a large number of changes, updates and improvements

The company also announced plans to switch to the LLVM 18 release once it is available to download. LLVM plays a significant role in every major AI framework. Additionally, it is in the process of adding RAS (Reliability, Availability, Serviceability) support in the form of an EDAC (Error Detection and Correction) driver in the next few weeks. Based on customer requests for server applications, Tachyum agreed to add bootable SSD RAID support to its UEFI next quarter.
As a Universal Processor offering industry-leading performance for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) with a single homogeneous architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.
“Having a robust software stack tested and ready to go upon the launch of the Prodigy Universal Processor chip is key to rapid adoption by data centers around the world looking to leverage their existing applications while achieving industry-leading performance for hyperscale, high-performance computing and artificial intelligence workloads,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “With each new enrichment we are able to incorporate into Prodigy’s software stack, we magnify the ability of a Prodigy platform ready to revolutionize the world.”
Hi WH,

Aren't Socionext already using our IP in 7nm?
I think there was a lot of press about it around CES 2023.
Hi alby,

"@Diogenese - Jem Davis has turned up as lead NED at Literal Labs AI - https://www.literal-labs.ai - they are making some pretty aggressive claims on power saving. If you have time, would you care to comment on this?"
Hi cosors,

"Wait a minute @Dio. ...any ARM CPU ... architecture. Am I supposed to take ARM's statement seriously ..."