BRN Discussion Ongoing

Earlyrelease

Regular
Damn, about time Perth Akidaholics had another meeting to counsel each other. I couldn't help myself and breached my own rule about not buying any more. At these prices it's worse than going past, well actually not going past, the 50%-off Magnum ice creams at Coles and not stocking up. Damn shorters, but I will thank you one day in the future.
 
  • Like
  • Love
  • Fire
Reactions: 27 users

Cardpro

Regular
Meh... below 20c again...
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Tenstorrent licenses IP from Baya Systems for AI/RISC-V chiplets​

Business news | June 24, 2024
By Jean-Pierre Joosting
SOFTWARE RISC-V AI CHIP FABRIC CHIPLETS


Tenstorrent has licensed Baya Systems’ customizable WeaveIP™ fabric to scale its AI and RISC-V chiplet products.​

Baya’s IP and software flow enables Tenstorrent and its partners to analyze, customize and deploy its intelligent compute platform for current and future workloads and deliver highly scalable chiplet products to meet the emerging demand.


“Baya makes great, comprehensive fabric tools. Their tools start with top level architecture then allow us to plan at a detail level including performance modeling, transport, quality of service and cache coherency,” said Tenstorrent CEO Jim Keller. “This, coupled with their visualization tools, enables designers to build next generation chips, chiplets and IP. This data-driven, correct by construction fabric IP delivers the performance and scale needed for Tenstorrent’s chiplet-based solutions.”

Baya Systems’ WeaveIP portfolio optimizes standard protocols, distributed caching, advanced coherent and non-coherent fabric while allowing customizable protocols for AI and other applications over a unique transport architecture. The WeaverPro™ software provides a data-driven platform that enables designers to architect cache and memory architecture followed by algorithmically optimized unified fabric design from concept to post-silicon tuning, accelerating the development and deployment of a chiplet-ready system architecture that is globally and locally optimized.

“Tenstorrent is reputed for highly customized, high-performance AI and RISC-V solutions tailored to specific workloads and applications, which need to be future-proof,” said Sailesh Kumar, CEO of Baya Systems. “We believe Baya’s high-performance, reliable chiplet-ready fabric, and advanced analysis capability, design-time, and post-silicon runtime tuning, will be an essential component of Tenstorrent’s ability to deliver high-performance, cost-effective multi-chip designs that deliver next-level energy efficiency and are future-proofed for fast-evolving applications.”

 
  • Like
  • Fire
  • Love
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Fire
  • Love
Reactions: 39 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Competing for Supremacy: Strategies to Dominate the AI Accelerator Market​

June 24, 2024 Lauro Rizzatti
The AI accelerator market is expected to continue its rapid growth in the coming years, propelled by more complex AI applications that require even more processing power.


Mastering AI is becoming increasingly vital in shaping economic, social, energy, military and geopolitical landscapes. Enabling the extensive implementation of advanced AI technologies across businesses, government entities and individuals is not only strategic but also imperative.
Despite seven decades of mostly unsuccessful investigation, AI has experienced significant growth over the last 10 years, expanding at an exponential rate. This escalating adoption has been propelled by a shift toward highly parallel computing architectures, a departure from conventional CPU-based systems. Traditional CPUs, with their sequential processing nature that handles one instruction at a time, are increasingly unable to meet the demands of advanced, highly parallel AI algorithms—case in point, large language models (LLMs). This challenge has driven the widespread development of AI accelerators, specialized hardware engineered to dramatically enhance the performance of AI applications.
AI applications involve complex algorithms that include billions to trillions of parameters and require integer and floating-point multidimensional matrix mathematics at mixed precision ranging from 4 bits to 64 bits. Although the underlying mathematics consists of simple multipliers and adders, they are replicated millions of times in AI applications, posing a sizable challenge for computing engines.
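To put those figures in perspective, here is a back-of-the-envelope sketch in Python. All of the numbers (the 4096-wide layer, the hypothetical 7B-parameter model) are illustrative assumptions, not measurements of any real chip or model:

```python
# Rough MAC-count sketch for a dense layer and for LLM token generation.
# All figures below are illustrative assumptions, not real measurements.

def layer_macs(batch: int, in_features: int, out_features: int) -> int:
    """A dense layer y = xW needs in_features * out_features MACs per sample."""
    return batch * in_features * out_features

params = 7_000_000_000               # hypothetical 7B-parameter LLM
macs_per_token = params              # ~1 multiply-accumulate per parameter per token
flops_per_token = 2 * macs_per_token # each MAC counts as 1 multiply + 1 add

print(f"Dense 4096x4096 layer, batch 1: {layer_macs(1, 4096, 4096):,} MACs")
print(f"~{flops_per_token / 1e9:.0f} GFLOPs per generated token")
```

A single modest layer already implies tens of millions of multiply-adds, and a full model forward pass runs to tens of GFLOPs per token, which is why these simple operations become a sizable challenge at scale.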
AI accelerators come in various forms, including GPUs, FPGAs and custom-designed ASICs. They offer dramatic performance enhancements over CPUs, resulting in faster execution times, more efficient model deployment and scalability to handle increasingly complex AI applications.
The booming AI accelerator market is fueled by the widespread adoption of AI across a variety of industries. From facial/image recognition and natural language processing all the way to self-driving vehicles and generative AI elaboration, AI is transforming how we live and work. This revolution has spurred a massive demand for faster, more efficient AI processing, making AI accelerators a crucial component of the AI infrastructure.
Notwithstanding the tremendous market growth, all existing commercial AI processing products have limitations, some more significant than others.
AI accelerator (Source: Vsora)

Current limitations and needs​

AI processing can occur in two primary locations: in the cloud (data centers) or at the edge, each with distinct requirements and challenges.

AI processing in the cloud

The AI accelerator market within data center applications is highly polarized, with one dominant player controlling approximately 95% of the market. To foster greater diversification, a few key issues must be addressed:
  • Massive processing power: Accelerators must deliver multiple petaFLOPS consistently under real-world workloads.
  • High cost of AI hardware: The steep price of AI hardware restricts access for smaller enterprises, limiting adoption to the largest corporations.
  • Massive power consumption: AI accelerators consume significant power, necessitating expensive installation facilities. These facilities contribute to substantial operational costs, making scalability difficult.
  • Market monopoly: By controlling the market, the dominant player stifles competition and prevents innovation. More energy-efficient and cost-effective solutions than existing offerings are needed.
It’s worth mentioning that there has been a recent shift in data center focus, from training to inference. This shift amplifies the need to reduce the cost per query and to lower acquisition and operational expenditures.
All the above improvements would not only make advanced AI capabilities more accessible to everyone but also promote more sustainable technological growth, enabling broader adoption across various industries.

AI processing at the edge

In contrast to the AI processing market in data centers, the market for AI processing at the edge is highly fragmented. Numerous commercial products from many startups target niche applications across various industries. From a competitive perspective, this scenario is healthy and encouraging. However, there remains a need for a more comprehensive solution.
Edge AI processing faces a different set of challenges, where low power consumption and cost are key criteria, while compute power is less critical.

Processing efficiency and latency: the Cinderellas of AI attributes

While state-of-the-art AI processors are advertised with impressive processing power, sometimes reaching multiple petaFLOPS, their real-world performance frequently falls short. These specifications typically highlight theoretical maximums and overlook the critical factor of processing efficiency—the percentage of the theoretical power achievable in practical applications. When executing leading-edge LLMs, most AI accelerators experience significant drops in efficiency, often to as low as 1% to 5%.
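The gap between datasheet and delivered performance is easy to quantify. A minimal sketch, with hypothetical numbers chosen to fall in the 1% to 5% range cited above:

```python
# Processing efficiency = achieved throughput / theoretical peak.
# Both figures below are hypothetical, for illustration only.

def efficiency(achieved_tflops: float, peak_tflops: float) -> float:
    """Fraction of the advertised peak actually delivered on a workload."""
    return achieved_tflops / peak_tflops

peak = 2000.0     # a chip advertised at 2 petaFLOPS (2000 TFLOPS)
achieved = 40.0   # hypothetical measured throughput on a large LLM workload

print(f"Efficiency: {efficiency(achieved, peak):.1%}")  # → 2.0%
```

A chip advertised at multiple petaFLOPS that sustains only tens of TFLOPS on a real workload is, by this measure, running at a few percent efficiency.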
Latency, another crucial metric, is typically missing from AI processor specifications.
This omission arises not only because latency is highly algorithm-dependent but also due to the generally low efficiency of most processors.
Consider two real-world demands:
  • Autonomous vehicles: These systems require response times under 20 ms to interpret environmental data collected from a diverse set of sensors. Subsequently, they must decide on a course of action and execute it within 30 ms. These are challenging targets to reach.
  • Generative AI: To maintain user engagement, generative AI must produce the first response within a few seconds. To date, this can be achieved by expanding the number of processor accelerators working in parallel. This approach results in significant acquisition costs and operational expenses, with energy consumption becoming a dominant factor.
These scenarios underscore the limitations of commercial processors primarily due to the memory bottleneck that prevents data from being fed to the processing elements fast enough to keep them busy all the time.
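The memory bottleneck can be made concrete with a simple roofline-style estimate: if a workload's arithmetic intensity (FLOPs per byte moved from memory) is low, attainable throughput is capped by bandwidth rather than by peak compute. The hardware figures below are hypothetical, for illustration only:

```python
# Roofline-style check: is a workload compute-bound or memory-bound?
# Hardware figures are hypothetical, chosen only to illustrate the effect.

def attainable_tflops(peak_tflops: float, bandwidth_tb_s: float,
                      flops_per_byte: float) -> float:
    """Attainable throughput = min(peak, bandwidth * arithmetic intensity)."""
    return min(peak_tflops, bandwidth_tb_s * flops_per_byte)

peak = 1000.0     # 1 petaFLOPS peak compute
bandwidth = 3.0   # 3 TB/s memory bandwidth
# Batch-1 LLM inference streams every weight once per token: with 16-bit
# weights (2 bytes each) and ~2 FLOPs per weight, that is ~1 FLOP per byte.
intensity = 1.0

got = attainable_tflops(peak, bandwidth, intensity)
print(f"Attainable: {got:.0f} TFLOPS ({got / peak:.1%} of peak)")
```

At 1 FLOP/byte the processing elements can only be fed 3 TFLOPS of work, a fraction of a percent of peak, which is exactly the starvation effect the article describes.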

A workable solution​

To address these challenges and secure a leading position in the market, companies should develop next-generation AI accelerators with a focus on three primary areas:
  • Technology innovation: A viable solution should be based on a novel AI-specific architecture that defeats the memory bottleneck, that is, the inability of memory to deliver data to the multipliers/adders quickly enough. The benefits of higher usable throughput, lower latency, and reduced power consumption and cost would be dramatic, leading to leaps in efficiency and a broad expansion of appeal.
  • Scalability and flexibility: Developing a scalable, modular and programmable AI accelerator that can handle diverse AI workloads, not just specific tasks, and easily integrate with a variety of platforms and systems could widen the market. This would open the vast area of edge applications from small startups to large enterprises.
  • Ease of deployment: A supporting software stack would allow algorithmic developers to seamlessly map their algorithms under development onto the AI accelerator without requiring them to understand the complexities of the hardware accelerator—specifically, RTL design and debugging. This would encourage them to fully embrace the solution.
A winning strategy would also establish strategic alliances with software developers, educational institutions and other hardware manufacturers to lead to better integration and adoption rates.

The future of the AI accelerator market​

The AI accelerator market is expected to continue its rapid growth in the coming years, propelled by more complex AI applications that require even more processing power. In this scenario, the demand for high performance with high-efficiency accelerators will only intensify.
Expect to see innovation in AI acceleration architectures, with vendors focused on creating more flexible and energy-efficient solutions. As the race to dominate the AI accelerator market heats up, the ultimate winners will be those who can innovate in efficiency and scalability but also excel in making their technologies accessible and sustainable.
Ultimately, we can anticipate a solution that can perform the expected task optimally—energy-efficient, cost-efficient and with high implementation efficiency. This is not necessarily the same as the lowest power, lowest cost and the highest efficiency.

 
  • Like
  • Love
  • Fire
Reactions: 16 users

GazDix

Regular
Latest Pitt Street Report on Brainchip.

Link here:




BrainChip

@BrainChip_inc
·
5m

Pitt Street Research re-initiates coverage on BrainChip which is commercializing a revolutionary neuromorphic technology. Read Full Report: https://bit.ly/4eBNmBI
 
  • Like
  • Love
Reactions: 18 users

GazDix

Regular
Latest Pitt Street Report on Brainchip.
Really good report summarising lots of challenges and opportunities.
But:

TLDR for most of us:

The report thinks Brainchip is undervalued, with a fair value of $1.59 AUD per share.
 
  • Like
  • Love
Reactions: 9 users
Tenstorrent licenses IP from Baya Systems for AI/RISC-V chiplets

Fingers crossed
 
  • Like
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This platform is designed for the upcoming R-Car Gen 5 MCU/SoC Family and future devices.

To demonstrate where we might fit into the scheme of things here, I've included below a screenshot of an article from 7 November 2023 outlining Renesas' R-Car roadmap.






Renesas launches R-Car Open Access platform for SDV development​

By Izzy Wood | June 24, 2024 | 3 min read

Japanese semiconductor manufacturer Renesas Electronics has introduced R-Car Open Access (RoX), a development platform for software-defined vehicles (SDVs).
The platform integrates essential hardware, operating systems, software and tools to facilitate the rapid creation of next-generation vehicles with secure and continuous software updates. Designed for the Renesas R-Car family of system on chips (SoCs) and microcontrollers (MCUs), the RoX platform includes tools for the simple deployment of AI applications, to reduce development complexities for car OEMs and Tier 1 suppliers.
RoX is available in two versions. The RoX Whitebox version provides an open, accessible software package that includes royalty-free operating systems and hypervisor software such as Android Automotive OS, FreeRTOS, Linux, Xen and Zephyr RTOS.
RoX Licensed is based on industry-proven commercial software solutions, including QNX and Red Hat In-Vehicle Operating System, as well as Autosar-compliant software and SafeRTOS. This version includes pre-validated software stacks from partners like Stradvision for ADAS and Candera CGI Studio for in-vehicle infotainment (IVI).
Modern electrical/electronics (E/E) architecture now relies heavily on software to control vehicle functions and manage real-time data networks. This shift has increased the complexity of maintaining and upgrading software stacks while ensuring the highest levels of safety.
Renesas says the RoX platform aims to address these challenges by providing a cloud-native development environment and a simulation platform, supporting a software-first approach, and parallel hardware and software development.
The platform is designed for the current generation of R-Car SoCs, the upcoming R-Car Gen 5 MCU/SoC Family and future devices. The R-Car Gen 5 family provides a unified hardware architecture based on Arm CPU cores, enabling customers to reuse the same software and tools across different car models and generations.
[Image: RoX SDV Platform diagram]

RoX also includes the Renesas Fast Simulator (RFS) and partner solutions like ASTC VLAB VDM and Synopsys Virtualizer Development Kit (VDK), which aim to let developers design, debug and verify software in simulation before deploying it on live SoCs and MCUs.
For AI development, the RoX platform has an AI Workbench, enabling developers to validate and optimize models and test AI applications in the cloud. This integration also supports rapid AI deployment on the R-Car heterogeneous compute platform.
The RoX platform also supports Amazon Web Services (AWS) cloud computing services.
Andrea Ketzer, director of technology strategy, automotive and manufacturing at AWS, said, “With Renesas’s R-Car Gen 5 devices supported by the AI Workbench on AWS, customers will achieve faster and more validated simulations and the ability to develop independently of hardware. This step change in development will drive the industry forward and place software innovation at the forefront of mobility.”
The R-Car Open Access Platform is available now with options for licensing. It includes open-source OS, commercial OS, full application software stacks, virtual development, cloud infrastructure and debugging tools.
Vivek Bhan, senior VP and GM of high performance computing at Renesas, said, “Today, car OEMs and Tier 1 suppliers are heavily investing in software development and maintenance. The RoX platform empowers our customers to design vehicles that deliver new value and bring improved safety and delightful comfort experiences to drivers and passengers.”





Article from 7 Nov 2023


[Screenshot: article from 7 November 2023 outlining Renesas' R-Car roadmap]
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 37 users

Boab

I wish I could paint like Vincent
Tenstorrent licenses IP from Baya Systems for AI/RISC-V chiplets

Nandan has runs on the board already.
 
  • Like
Reactions: 3 users

AusEire

Founding Member. It's ok to say No to Dot Joining
  • Like
  • Love
  • Fire
Reactions: 19 users

MegaportX

Regular
Today was yet another day of shenanigans on the stock market, with shorters employing their usual tactics to shake the confidence of longs and test their patience. These fluctuations and maneuvers are part and parcel of the trading landscape, designed to create uncertainty and doubt. However, it is crucial to remain steadfast and committed to your investment strategy. Market volatility is inevitable, but those who stay the course and maintain a long-term perspective often find themselves rewarded. Remember, it's the underlying value and potential of your investments that matter most, not the daily ups and downs. Stay focused, stay patient, and trust in your research and investment decisions. Your perseverance will pay off in the end.
 
  • Like
  • Love
  • Fire
Reactions: 35 users

Diogenese

Top 20
Is it a coincidence that, just when 8M shorts are taken out, there is another unfavorable article from MF?

With MF, it's usual that many-a-mickle-makes-a-muckle.

However this is a more temperate, though still negative, review of BRN.

https://www.msn.com/en-au/lifestyle...&cvid=2de0ef4bba8f41b39d5155aab5784e0f&ei=166

You will probably be disappointed by the absence of coffee shop comparisons.

However, I think this passage reflects poorly on the accounting principles used for tech company assets.

Revenues were down an eye-watering 95% year over year, which took many by surprise. The company produced a net loss of around $29 million on these sales, with reasonably flat growth in accounts receivable. During the year, it also released its second generation Akida technology.

The bulk of the $29M "loss" was R&D investment in producing the second generation Akida, yet there is no acknowledgement of the intrinsic value of the Akida 2 IP. It's not like we spent all this money and have nothing to show for it.

In JORC terms, Akida 2 IP is a proven resource, as is Akida 1 IP. You only need to look at the EAP comments on the Akida Generations page: https://brainchip.com/akida-generations/

At present, Akida IP is an off-the-books asset. As far as most investors are concerned, they cannot see it.

It's better than money in the bank, because of its earning potential ... and it is also the Magic Pudding - no matter how much you use, there's always more where that came from.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 56 users
  • Like
  • Love
  • Thinking
Reactions: 24 users

CHIPS

Regular
  • Like
  • Love
  • Thinking
Reactions: 12 users

CHIPS

Regular
Damn, about time Perth Akidaholics had another meeting to counsel each other. I couldn't help myself and breached my own rule about not buying any more. At these prices it's worse than going past, well actually not going past, the 50%-off Magnum ice creams at Coles and not stocking up. Damn shorters, but I will thank you one day in the future.

I confess: I am a BrainChip addict too and keep buying at this price. And immediately after I keep telling myself that I now have more than enough 🤭

Beauty Reaction GIF by Salon Line
 
  • Like
  • Haha
  • Fire
Reactions: 20 users

TheDrooben

Pretty Pretty Pretty Pretty Good
I confess: I am a BrainChip addict too and keep buying at this price. And immediately after I keep telling myself that I now have more than enough 🤭

Beauty Reaction GIF by Salon Line
Same here CHIPS... I have been waiting for this EOFY sell-down to buy more. Larry is now a bee's phallus from 200k





Or is it????


Happy as Larry
 
  • Like
  • Haha
  • Fire
Reactions: 18 users

FJ-215

Regular
Is it a coincidence that, just when 8M shorts are taken out, there is another unfavorable article from MF? …
I tend to skim over anything put out by MF but... is this a new disclaimer? Almost like they want to start hedging their bets..

[Screenshot of the disclaimer]
 
  • Haha
  • Like
Reactions: 7 users
Renesas launches R-Car Open Access platform for SDV development
The fuel has been put into the drag cars… Here we go
 
  • Like
  • Fire
Reactions: 9 users

TECH

Regular
  • Market monopoly: By controlling the market, the dominant player stifles competition and prevents innovation. More energy-efficient and cost-effective solutions than existing offerings are needed.
Take note Jensen.....

I wrote you an internal message via LinkedIn in 2019... do you remember? I'm sure you don't!

The bridge across the chasm is almost complete, the valley of death far below is now a thing of the past, a new era is dawning,
where Akida will shine majestically in the glory of the sun, a trailing breeze will continue to strengthen behind Akida's back making
our journey to the forbidden lands that much more glorious.

Feeling proud to own Brainchip shares isn't about the current share price it's about sharing the experience with some wonderful individuals
who have never given up when things got tough, an announcement will come, and you won't be expecting it.

No matter what you believe, Peter et al's technology isn't going away; it's only becoming stronger by the month. I personally believe that the dominos are all going to fall in our favour very soon.

My views only....watch this space...Tech 😊
 
  • Like
  • Love
  • Fire
Reactions: 41 users
Top Bottom