Thanks Buddy

> I believe you're just embarrassing yourself
Please explain.
Me too, MD.

> There are only 1480 people in the world with 100k shares in this company? Thank you, Lord, for allowing me to be one of them. That leaves how many people? Kudos to the others who jumped on board below 100k too, but I bet the top 20 will be happy soon.
> Now is a good time to be "stardust", who happens to be top 20. Who amongst us is stardust?
Are you and Wilzy the gatekeepers?

> I believe you're just embarrassing yourself
It's embarrassing for BRN to go from the ASX200 back to the ASX300.
Are we still waiting for 2023, or 2024, or 2025, or 2030?
Part 2

Apologies in advance for the multiple posts; it was the only way of dropping the whole article/review in full via screen grabs. I could have posted just the link, but thought some may appreciate being able to read it via TSE when scrolling through.

While we had the Tata Elxsi announcement and it covered a couple of areas of interest, there is far more opportunity with this partnership IMO. This info covers quite a bit and is definitely worth a read... a couple of interesting things in there, for mine.
Tata Elxsi And Happiest Minds, Will Stocks Of These 2 Successful Digital Disruptors Make A Comeback In FY (2023-24)? - Jyadareturn.com
Both Tata Elxsi and Happiest Minds are in the Niche Business of providing Solutions & Services, for the Enterprise Digital Transformations and Next Gen Product & Platform Engineering through IoT (Internet of Things), Big Data Analytics, Cloud, Mobility, Virtual Reality, Cognitive Computing, and...
jyadareturn.com
View attachment 44682 View attachment 44683 View attachment 44684 View attachment 44685
I do get somewhere there, but the humble thing is I met someone at the AGM who belongs in the top 20, and he is so humble he doesn't show any sign of how much he holds and keeps quiet.

> There are only 1480 people in the world with 100k shares in this company? Thank you, Lord, for allowing me to be one of them. That leaves how many people? Kudos to the others who jumped on board below 100k too, but I bet the top 20 will be happy soon.
> Now is a good time to be "stardust", who happens to be top 20. Who amongst us is stardust?
Yeah yeah
Thank you FMF,

> [quoting the Tata Elxsi / Happiest Minds post above in full]
No.
We are just waiting for some sustained revenue.
IMO, and with the benefit of hindsight, the MB announcement, which popped and caught most of us by surprise, skyrocketed our share price to what turned out to be an unsustainable height.
Whilst we all enjoyed the ride, I'm sure, without follow-up 'in kind' news or confirmation from other similarly influential sources, our valuation decayed.
Then, with the arrival of subsequent, unanticipated but significant global shocks, and smelling potential carrion, manipulators, shorts and other assorted bottom feeders began their feasting on our carcass, which has been our unpleasant experience for the last 18 months or so.
The company has used this time to reshape itself: putting new muscle on bone, honing and expanding both our skill sets and our alliances with promising contemporary players and with significant but ageing marque holders who have amassed, and exert, gravitic influence.
Enduring these setbacks and coming to terms with the realities of our environment has made us stronger, leaner and hungrier.
Whilst our revised strategy is longer in the making and requires more initial resilience, it is also liable to reward us with both broader and deeper revenue streams once sufficient momentum is generated.
Up until now we have only heard our motor cough a few times and emit some blue smoke, but we here are aware of a powerful rumbling in the distance, and soon, soon, we hope to hear that mighty Merlin roar into life.
Hope the health issue gets resolved soon, mate.

> I do get somewhere there, but the humble thing is I met someone at the AGM who belongs in the top 20, and he is so humble he doesn't show any sign of how much he holds and keeps quiet.
> BTW, I am going through some health issues and it's hard for me to type, so forgive me for any typos.
Bravo had some big connections outlined with Cadence, I reckon. Sounds promising!

> Could Akida be used here?
> I do know we have a partnership with Cadence.
> Main parts highlighted in orange!
Cadence Accelerates On-Device and Edge AI Performance and Efficiency with New Neo NPU IP and NeuroWeave SDK for Silicon Design
SAN JOSE, Calif.— September 14, 2023 -- Cadence Design Systems, Inc. (Nasdaq: CDNS) today unveiled its next-generation AI IP and software tools to address the escalating demand for on-device and edge AI processing. The new highly scalable Cadence® Neo™ Neural Processing Units (NPUs) deliver a wide range of AI performance in a low-energy footprint, bringing new levels of performance and efficiency to AI SoCs. Delivering up to 80 TOPS performance in a single core, the Neo NPUs support both classic and new generative AI models and can offload AI/ML execution from any host processor—including application processors, general-purpose microcontrollers and DSPs—with a simple and scalable AMBA® AXI interconnect. Complementing the AI hardware, the new NeuroWeave™ Software Development Kit (SDK) provides developers with a “one-tool” AI software solution across Cadence AI and Tensilica® IP products for no-code AI development.
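As a rough sanity check on that headline figure (my own back-of-envelope arithmetic, not from the release): peak MAC-array throughput is roughly MACs per cycle × 2 ops per MAC × clock frequency, so the 32K-MACs-per-cycle configuration listed in the feature bullets below would need a clock of about 80 × 10^12 / (32,768 × 2) ≈ 1.2 GHz to hit 80 TOPS. The clock speed itself is not stated anywhere in the release.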
“While most of the recent attention on AI has been cloud-focused, there are an incredible range of new possibilities that both classic and generative AI can enable on the edge and within devices,” said Bob O’Donnell, president and chief analyst at TECHnalysis Research. “From consumer to mobile and automotive to enterprise, we’re embarking on a new era of naturally intuitive intelligent devices. For these to come to fruition, both chip designers and device makers need a flexible, scalable combination of hardware and software solutions that allow them to bring the magic of AI to a wide range of power requirements and compute performance, all while leveraging familiar tools. New chip architectures that are optimized to accelerate ML models and software tools with seamless links to popular AI development frameworks are going to be incredibly important parts of this process.”
The flexible Neo NPUs are well suited for ultra-power-sensitive devices as well as high-performance systems with a configurable architecture, enabling SoC architects to integrate an optimal AI inferencing solution in a broad range of products, including intelligent sensors, IoT and mobile devices, cameras, hearables/wearables, PCs, AR/VR headsets and advanced driver-assistance systems (ADAS). New hardware and performance enhancements and key features/capabilities include:
- Scalability: Single-core solution is scalable from 8 GOPS to 80 TOPS, with further extension to hundreds of TOPS with multicore
- Broad configuration range: supports 256 to 32K MACs per cycle, allowing SoC architects to optimize their embedded AI solution to meet power, performance and area (PPA) tradeoffs
- Integrated support for a myriad of network topologies and operators: enables efficient offloading of inferencing tasks from any host processor—including DSPs, general-purpose microcontrollers or application processors—significantly improving system performance and power
- Ease of deployment: shortens the time to market to meet rapidly evolving next-generation vision, audio, radar, natural language processing (NLP) and generative AI pipelines
- Flexibility: Support for Int4, Int8, Int16, and FP16 data types across a wide set of operations that form the basis of CNN, RNN and transformer-based networks allows flexibility in neural network performance and accuracy tradeoffs
- High performance and efficiency: Up to 20X higher performance than the first-generation Cadence AI IP, with 2-5X the inferences per second per area (IPS/mm2) and 5-10X the inferences per second per Watt (IPS/W)
Since software is a critical part of any AI solution, Cadence also upgraded its common software toolchain with the introduction of the NeuroWeave SDK. Providing customers with a uniform, scalable and configurable software stack across Tensilica DSPs, controllers and Neo NPUs to address all target applications, the NeuroWeave SDK streamlines product development and enables an easy migration as design requirements evolve. It supports many industry-standard domain-specific ML frameworks, including TensorFlow, ONNX, PyTorch, Caffe2, TensorFlow Lite, MXNet, JAX and others for automated end-to-end code generation; Android Neural Network Compiler; TF Lite Delegates for real-time execution; and TensorFlow Lite Micro for microcontroller-class devices.
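To make the framework support above a little more concrete, here is a minimal sketch (my own illustration, not Cadence code) of the kind of artefact such a toolchain typically consumes: a post-training int8-quantized TensorFlow Lite model exported with standard TensorFlow APIs. The model and calibration data are placeholders, and none of the NeuroWeave-specific import/compile API is shown, since it isn't public in this post.

```python
import numpy as np
import tensorflow as tf

# Placeholder Keras model standing in for a real vision/audio network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_data():
    # Calibration samples set the int8 quantization ranges;
    # random data here, real sensor data in practice.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # full-integer I/O, matching the
converter.inference_output_type = tf.int8  # Int8 support noted above

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

An NPU vendor SDK would then typically take a flatbuffer like model_int8.tflite, map the supported operators onto the accelerator, and fall back to the host processor for anything it cannot offload.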
“For two decades and with more than 60 billion processors shipped, industry-leading SoC customers have relied on Cadence processor IP for their edge and on-device SoCs. Our Neo NPUs capitalize on this expertise, delivering a leap forward in AI processing and performance,” said David Glasco, vice president of research and development for Tensilica IP at Cadence. “In today’s rapidly evolving landscape, it’s critical that our customers are able to design and deliver AI solutions based on their unique requirements and KPIs without concern about whether future neural networks are supported. Toward this end, we’ve made significant investments in our new AI hardware platform and software toolchain to enable AI at every performance, power and cost point and to drive the rapid deployment of AI-enabled systems.”
“At Labforge, we use a cluster of Cadence Tensilica DSPs in our Bottlenose smart camera product line to enable best-in-class AI processing for power-sensitive edge applications,” said Yassir Rizwan, CEO of Labforge, Inc. “Cadence’s AI software is an integral part of our embedded low power AI solution, and we’re looking forward to leveraging the new capabilities and higher performance offered by Cadence’s new NeuroWeave SDK. With an end-to-end compiler toolchain flow, we can better solve challenging AI problems in automation and robotics—accelerating our time to market to capitalize on generative AI-based application demand and opening new market streams that may not have been possible otherwise.”
The Neo NPUs and the NeuroWeave SDK support Cadence’s Intelligent System Design™ strategy by enabling pervasive intelligence through SoC design excellence.
Availability
The Neo NPUs and the NeuroWeave SDK are expected to be in general availability beginning in December 2023. Early engagements have already started for lead customers. For more information, please visit www.cadence.com/go/NPU.
About Cadence
Cadence is a pivotal leader in electronic design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality. Cadence customers are the world’s most innovative companies, delivering extraordinary electronic products from chips to boards to systems for the most dynamic market applications, including consumer, hyperscale computing, 5G communications, automotive, mobile, aerospace, industrial and healthcare. For nine years in a row, Fortune magazine has named Cadence one of the 100 Best Companies to Work For. Learn more at cadence.com.
Hi Cartagena,

> Could Akida be used here?
> I do know we have a partnership with Cadence.
> Main parts highlighted in orange!
> [the same Cadence press release as quoted in full above]
Marc Kennis, September 15, 2023