BRN Discussion Ongoing

  • Fire
  • Thinking
Reactions: 2 users

FJ-215

Regular
  • Fire
  • Like
Reactions: 2 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice!




EXTRACT ONLY

Arm’s AI Chip Bet​

While most people associate AI data center workloads with GPU computing, there is a growing demand for more energy-efficient CPU-based solutions. This presents an opportunity for CPU players like Arm to cut into a market that is currently dominated by Nvidia’s GPUs. (Conversely, Nvidia’s plans to develop its own ARM-based CPUs will provide additional revenue to Arm, which holds the intellectual property rights to ARM designs.)

Of course, GPUs are expected to remain critical in AI training for the foreseeable future. But solutions like Arm Neoverse have proven that less computationally intense AI inference (i.e. the process of running live data through a trained AI model) can be done much more efficiently with a CPU architecture.
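For readers unfamiliar with the distinction, the following minimal sketch in plain C (with made-up weights, purely illustrative and not from the article) shows what inference amounts to: a single forward pass through already-trained parameters, with none of the gradient computation and weight updates that training requires.

/* Illustrative only: inference as a single forward pass through a trained
 * dense layer with ReLU, y = max(0, W*x + b). Training would additionally
 * need the backward pass and weight updates, which is why it is far more
 * compute- and memory-hungry than inference. */
#include <stdio.h>

#define IN  4
#define OUT 3

/* Hypothetical pre-trained weights and biases (normally loaded from a model file). */
static const float W[OUT][IN] = {
    { 0.2f, -0.5f,  0.1f,  0.7f },
    {-0.3f,  0.8f,  0.4f, -0.1f },
    { 0.6f,  0.2f, -0.7f,  0.3f },
};
static const float b[OUT] = { 0.1f, -0.2f, 0.05f };

static void dense_relu(const float x[IN], float y[OUT]) {
    for (int o = 0; o < OUT; o++) {
        float acc = b[o];
        for (int i = 0; i < IN; i++)
            acc += W[o][i] * x[i];          /* multiply-accumulate */
        y[o] = acc > 0.0f ? acc : 0.0f;     /* ReLU activation */
    }
}

int main(void) {
    const float x[IN] = { 1.0f, 0.5f, -1.0f, 2.0f };  /* one "live" input sample */
    float y[OUT];
    dense_relu(x, y);
    for (int o = 0; o < OUT; o++)
        printf("y[%d] = %.3f\n", o, y[o]);
    return 0;
}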

As the AI market evolves, rising adoption could play directly into Arm’s hands as inference supplants training as the primary growth driver.

Just as the AI training boom has fueled a surge in demand for Nvidia’s GPUs in recent years, Arm’s AI-optimized CPUs will become more sought-after as data centers look to ramp up their inference capacity.




PS: Arm is expected to unveil a new data centre strategy at the end of the month that could bring it into more direct competition with Nvidia, so it’ll be worth keeping an eye out for more information on this as it emerges.
 
  • Like
  • Love
  • Fire
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Everyone is talking about efficiency these days, which is quite lucky since that just happens to be AKIDA’s middle name.🥳

And since Masayoshi San is prepared to spend $9 billion a year on AI, a five-year license with BrainChip would be chicken feed.


SoftBank buys Graphcore, targets further AI investments​

Posted by Harry Baldock | Jul 12, 2024 | TECHNOLOGY, Investment, AI, Products & Services, COMPANY NEWS, People, Governance, Asia-Pacific, Europe, News

News​

SoftBank founder Masayoshi Son said earlier this year that AI will be SoftBank’s ‘next big bet’ when it comes to technology​

This week, Japanese conglomerate SoftBank has announced the acquisition of struggling UK-based AI chipmaker Graphcore.
Official financial details have not been disclosed, but anonymous sources speaking to the Financial Times valued the deal at $600 million.
Graphcore creates specialised AI chips, known as intelligence processing units, which can be used to train and run large AI language models.
This is the same type of chip technology that has seen rival chip company Nvidia soar to a valuation of around $3 trillion earlier this year.
Unlike Nvidia, however, Graphcore has struggled significantly to commercialise its technology. Valued at $2.8 billion back in 2020, Graphcore has since failed to sell its products at scale, noting “lower hardware sales to key strategic customers”. In 2022, the company recorded just $2.7 million in sales, 46% lower than in 2021, and booked a pre-tax loss for the year of $205 million.
As a result, 2023 saw Graphcore undertake cost-cutting measures, cutting 20% of its workforce and closing its operations in Norway, Japan, and South Korea. At the time, the company said there was ‘material uncertainty’ over its survival and called for fresh funding.
Now, as part of SoftBank, Graphcore will reportedly have all the resources it needs to return to full force.
“Demand for AI compute is vast and continues to grow,” said Graphcore’s co-founder and chief executive, Nigel Toon. “There remains much to do to improve efficiency, resilience, and computational power to unlock the full potential of AI. In SoftBank, we have a partner that can enable the Graphcore team to redefine the landscape for AI technology.”
SoftBank itself has been stepping up its focus on AI for over a year now, with Son saying earlier this year that “realising ASI (Artificial Superintelligence)” was “his only focus”. He has also said the company is ready to invest roughly $9 billion a year in AI and is prepared for large-scale dealmaking in the future.


 
  • Like
  • Love
  • Fire
Reactions: 27 users

Wags

Regular
Please stop both of you. You both do a fantastic job investigating BrainChip's development for us and should respect each other for it. I can say for myself that I do love both of your work and I am sure all others do too. O.K. I do not like hairy toes 🥴and I am allergic to cats (but only in real life!), but I love your running BRAVO, your ideas, and your findings. 💐

I can hardly imagine how much time you FRANGIPANI spend (for us!) on finding all those connecting dots and remembering all those of the past. I very much appreciate that. We should have said that much earlier already. 💐

Both of you are important to us and this forum and you are best when working together! So please give each other at least respect if you cannot be friends.

Thank you and have a good weekend!
Well said @CHIPS, couldn't agree more. Cheers to @Bravo and @Frangipani

But this also applies to anyone who puts in the time and generously shares the outcomes of their efforts, for us all to benefit from. Thank you.

I'm ok with alternative views or theories, allowing discussion, investigation and/or debunking.

I don't see much point with the relentless negative bagging of BRN that some posters here provide, or indeed the character attacks; it seems like a waste of time to me.

Personally, I'm feeling pretty anxious with BRN at the moment. I'm not technical enough to appreciate the full benefits of our tech, and rely on those skilled enough here. Seems any company with the slightest AI enhancement tool is kicking goals or getting snapped up one way or the other. Some days we struggle to hold 20c a share. WTF??

I guess the edge boxes will show $$ this qtly report, but I hope to see some more upturn in revenue from other, possibly unknown sources.

I know I have said this before, so apologies in advance for being boring, but I'm a bit of a contractual / literal sort of guy.
BRN is openly and well documented as an IP-focused company, following the ARM model.
BRN has stated publicly in writing "We’re embedding our IP in everything, everywhere." (It's on our website for f%cks sake)
Unless we are giving our IP away for free, is it not reasonable for an investor to assume this statement would suggest revenue $?

When I asked management about this publicly at the AGM, they blew it off as just marketing words?

I understand this may be hidden or buried amongst partnerships and/or enablers, taking lots of time to bear fruit. I'm just hoping that fruit is soon.
This is without discussing the 'imminent' or 'explosion of sales' comments.

I'm the upbeat BRN supporter, this just shows my mood at the mo.
Anyways, on with the weekend, Rant over. Apologies all.
 
  • Like
  • Love
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Wow
  • Like
  • Fire
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

An interesting take on what has hampered our progress. Says here that companies such as ours were "founded with the idea that there would be an explosion in the use of specialized chips to train AI systems and to perform inference using trained models." But instead of buying our systems, the hyperscalers just went out and bought oodles of NVIDIA GPUs and started developing their own accelerators.

Anyway, I'm still optimistic that an Arm, Graphcore and BrainChip collaboration would be too good a proposition for SoftBank to ignore.

CAN SOFTBANK BE AN AI SUPERCOMPUTER PLAYER? WILL ARM LEND A HAND?​

July 12, 2024 Timothy Prickett Morgan

There have been rumors for quite some time that either Arm Ltd or parent company and Japanese conglomerate SoftBank would buy British AI chip and system upstart – it is no longer a startup if it is eight years old – Graphcore. And yesterday the scuttlebutt was confirmed as Masayoshi Son got out the corporate checkbook for SoftBank Group and cut a check to acquire the company for less money than its investors put in.
It has not been an easy time for the various first-round AI chip startups that were founded in the wake of the initial success that Nvidia found with its GPU compute engines for training neural networks. Or, at least, it has not been as easy as it could have been to get customers and amass both funding and revenues on the road to profits.
Most of these startups, in one form or another, emulate the clean slate approach that Google took with its AI-specific Tensor Processing Units, which were first deployed in early 2015 and which are now in their sixth generation with the “Trillium” family of TPUs, the first of which were revealed a month ago. The idea is to strip all of the graphics processing that comes from the GPU heritage and the high-precision floating point math required by HPC simulation and modeling out of the design and, in many cases, build in fast and wide networking pipes to interconnect these resulting matrix math engines into clusters to take on very large jobs.

BrainChip, Cerebras Systems, Graphcore, Groq, Habana Labs (now part of Intel), Nervana Systems (supplanted by Habana at Intel), and SambaNova Systems all were founded with the idea that there would be an explosion in the use of specialized chips to train AI systems and to perform inference using trained models. But the ideal customers to buy these devices – or to acquire these companies – were the hyperscalers and cloud builders, and instead of buying any of these compute engines or systems based on them, they decided to use a different two-pronged approach. They bought Nvidia GPUs (and now sometimes AMD GPUs) for the masses (which they can rent at incredible premiums even after buying them at incredible premiums) and they started creating their own AI accelerators so they could have a second source, a backup architecture, and a cheaper option.
Even the shortages of Nvidia GPUs, which have been propping up prices for the past three years, have not really helped the cause of the AI chip and system upstarts all that much. Which is odd, and is a testament to the fact that people have learned to be wary of and leery of software stacks that are not fully there yet. So beware Tenstorrent and Etched (both of whom we just talked to and will write about shortly) and anyone else who thinks they have a better matrix math engine and a magic compiler.
It is not just a crowded market, it is a very expensive one to start up in and to be an upstart within. The money is just not there with the hyperscalers and cloud builders doing their own thing and enterprises being very risk averse when it comes to AI infrastructure.

Which is why Graphcore was seeking a buyer instead of another round of investment, which presumably is hard to come by. With former Prime Minister Rishi Sunak pledging £1.5 billion for AI supercomputing in the United Kingdom, with the first machine funded being the Isambard-AI cluster at the University of Bristol, there was always a chance that Graphcore would get a big chunk of money to build its Good AI supercomputer, a hypothetical and hopeful machine that the company said back in March 2022 it would build with 3D wafer stacking techniques on its “Bow” series Intelligence Processing Units. But for whatever reason, despite Graphcore being a darling of the British tech industry, the UK government did not fund the $120 million required to build the proposed Good machine, which would have 8,192 of the Bow IPUs lashed together to deliver 10 exaflops of compute at 16-bit precision and 4 PB of aggregate memory with over 10 PB/sec of aggregate memory bandwidth across those Bow IPUs.
We would have loved to see such a machine built, and so would have Graphcore, we presume. But even that would not have been enough to save Graphcore from its current fate.
Governments can fund one-off supercomputers and often do. The “K” and “Fugaku” supercomputers built by Fujitsu for RIKEN Lab in Japan are perfect examples. K and Fugaku are the most efficient supercomputers for true HPC workloads ever created on the planet – K actually was more efficient than the more recent Fugaku – but both are very expensive compared to alternatives that are nonetheless efficient. And they do not have software stacks that translate across industries as the Nvidia CUDA platform does after nearly two decades of immense work. K and Fugaku, despite their excellence, did not really cultivate a widening and deepening of indigenous compute on the isle of Japan, despite the very best efforts of Fujitsu with its Sparc64fx and A64FX processors and Tofu mesh/torus interconnects. Which is why Fujitsu is working on a more cloud-friendly and less HPC-specific fork of its Arm server chips called “Monaka,” which we detailed here back in March 2023.
Japan ponied up $1.2 billion to build the 10 petaflops K machine, which became operational in 2011, and $910 million for the 513.9 petaflops Fugaku machine, which became operational in 2021. If history is any guide, Japan will shell out somewhere around $1 billion for a “Fugaku-Next” machine, which will become operational in 2031. Heaven only knows how many exaflops it will have at what precisions.
For the United Kingdom, the University of Edinburgh is where the flagship supercomputer goes, not down the road from where Graphcore is located in Bristol. Of the £900 million ($1.12 billion) in funding from the British government to build an exascale supercomputer in the United Kingdom by 2026, £225 million ($281 million) of that was allocated to the Isambard-AI machine and most of the rest is going to be used to build a successor to the Archer2 system at the Edinburgh Parallel Computing Centre (EPCC) lab.
Graphcore was never going to get a piece of that action because it builds AI-only machinery, not hybrid HPC/AI systems, no matter how much the British government and the British people love to have an indigenous supplier. If it wanted full government support, Graphcore needed to create a real HPC/AI machine, something that could compete head to head with CPU-only and hybrid CPU-GPU machines. Governments are interested in weather forecasting, climate modeling, and nuclear weapons. This is why they build big supercomputers.
Because of the lack of interest by hyperscalers and cloud builders and the risk aversion of enterprises, Graphcore found itself in a very tight spot financially. The company raised around $682 million in six rounds of funding between 2016, when it was founded, and 2021, in the belly of the coronavirus pandemic when transformers and large language models were coiling to spring on the world. That is not as much money as it seems given the enormous hardware and software development required to create an exaflops AI system.
The last year for which we have financials for Graphcore is 2020, which according to a report in the Financial Times saw the company only generate $2.7 million in revenues but post $205 million in pre-tax losses. Last fall, Graphcore said it would need to raise new capital this year to continue operating, and presumably the revenue picture and loss picture did not improve by much. It is not clear how much money was left in the Graphcore kitty, but what we hear is that SoftBank paid a little more than $600 million to acquire the company. Assuming that all the money is gone, then Microsoft, Molten Ventures, Atomico, Baillie Gifford, and OpenAI’s co-founder Ilya Sutskever have lost money on their investment, which is a harsh reality given that only four years ago Graphcore had a valuation of $2.5 billion.
None of that harsh economics means that a Bow IPU, or a piece of one, could not make an excellent accelerator for an Arm-based processor. SoftBank’s Son, who is just a little too enamored of the cacophonous void of the singularity for our tastes (we like the chatter and laughter of individuals and the willing collaboration and independence of people), has made his aspirations in the AI sector clear.
But so what?
All of the upstarts mentioned above had and have aspirations in AI, and so do the hyperscalers and cloud builders who are actually trying to make a business out of this. And they all have some of the smartest people on Earth working on it – and still Nvidia owns this space: compute engine, network, and software stack, which is analogous to lock, stock, and barrel.
Son spent $32 billion to acquire Arm Ltd back in 2016. If Son is so smart, he should have not even bothered. At the time, Nvidia had a market capitalization of $57.5 billion, which was up by a factor of 3.2X compared to 2015 as the acceleration waves were taking off in both HPC and AI. At $32 billion, Son could have acquired a 55.7 percent stake in Nvidia. With Nvidia’s market cap standing at $3,210 billion as we go to press, that hypothetical massive investment by Son in Nvidia would be worth just shy of $1,786 billion today.
To put that into perspective, the gross domestic product of the entire country of Japan was $4,210 billion in 2023.

We had a burr under our saddle earlier this year, talking about how Arm should have had a GPU or at least some kind of datacenter-class accelerator in its portfolio to compete against Nvidia. We took a certain amount of grief about this, but we stand by our statement that Arm left a lot of money on the Neoverse table by not having a big, fat XPU, and here we are with Son holding Arm in one hand and Graphcore in the other. But with the hyperscalers and cloud builders already building their own accelerators, the time might have passed when Arm can sell IP blocks for accelerators.
But maybe not.

There may be a way to create a more general purpose Graphcore architecture and take on Fujitsu for Fugaku-Next, too. Or to collaborate with Fujitsu, which would be more consistent with how the Project Keisoku effort to make the K supercomputer started out in 2006 with a collaboration between Fujitsu, NEC, and Hitachi.
There is only one sure way to predict the future, and that is to live it. We shall see.

 
  • Like
  • Thinking
  • Sad
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Hitachi Vantara Federal’s Pragyansmita Nayak Discusses Edge Computing Challenges & Solutions​

by Jerry Petersen | July 12, 2024, 11:55 am


Pragyansmita Nayak, chief data scientist at Hitachi Vantara Federal, believes that edge computing will become a necessary component of the technological infrastructure by 2030.

Nayak predicts in an opinion column published on the Hitachi Vantara Federal website that the next decade will see a proliferation of various technologies, such as autonomous vehicles, smart cities and Internet-of-Things devices, which will require speedy and efficient local data processing that only edge computing can deliver.


Alongside the benefits, edge computing also comes with a number of challenges, including a dramatic increase in the volume of data generated by devices; security and privacy concerns; interoperability problems due to the use of proprietary technologies; and the need for greater energy efficiency.

Nayak nevertheless considers edge computing “a critical area of focus for future technological development” and so looks to other innovations to address inherent challenges.

These innovations include artificial intelligence, which could enable smarter data processing; quantum computing, which could bolster data encryption and increase processing speeds; and advances in networking, which could deliver the low-latency connectivity that edge computing requires.




 
  • Like
  • Love
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
  • Thinking
Reactions: 11 users

Diogenese

Top 20
  • Like
Reactions: 1 user

Bravo

If ARM was an arm, BRN would be its biceps💪!

In pursuit of energy efficiency: How Fujitsu and Arm are shaping AI’s tomorrow​

July 12, 2024
Japanese

We are witnessing the fastest industrial revolution of all time, with Artificial Intelligence (AI) and high-performance computing (HPC) being the biggest enablers. Without a doubt, AI and HPC have become an unstoppable force permeating all aspects of our society. But, despite the unprecedented pace at which AI transforms every facet of human lives, its accessibility for sustainable digital transformation remains a massive challenge, with the environmental costs and impact of AI systems too often being ignored.
Accommodating the demands of this new AI-powered world brings with it a raft of challenges. Take this for instance: whilst big large language models (LLMs) are top of mind for the industry today, they are also the biggest contributors to carbon emissions, on account of the large number of parameters in the models increasing the power usage of data centers, as borne out by a Stanford University report.

Fujitsu, a global digital services company, and Arm, a global leader in semiconductor design and silicon IP development, are focused on open collaboration with the international community for human-centered technological development and are working towards building easy-to-deploy solutions for diverse domains across the globe.

With the escalating need for power-efficient systems, FUJITSU-MONAKA, developed in collaboration with Arm, is aimed at enabling the next-generation AI application development ecosystem through high-end, energy-efficient compute. The R&D efforts are focused on enhancing various AI/machine learning (ML) and deep learning (DL) frameworks for the Armv9-A architecture and Scalable Vector Extension 2 (SVE2) platforms. SVE2 emerges as a formidable solution for handling AI and HPC workloads by bringing significant speed improvements.
By using the widely adopted Armv9-A architecture, Fujitsu and Arm aim to enable developers to easily port and optimize their applications, expanding the AI ecosystem and making it more accessible and affordable for various users and industries.

Architecting next generation data centers with sustainability at the core​

Training AI models and systems places considerable demands on the underlying hardware, thus increasing energy consumption. Inefficient hardware support for running complex AI workloads not only impacts energy efficiency but also performance. One solution to this is building an underlying architecture and technology stack for data centers, enabling organizations to achieve the best performance with low energy consumption. This allows companies to sustainably meet the current and future demands of AI applications and reduce the toll on the environment.
With data centers being a vital infrastructure that underpins our AI ecosystem, the industry needs to architect a new approach to supercharge data center efficiency by amalgamating power-efficient hardware and software ecosystems.
Fujitsu Ltd. is developing FUJITSU-MONAKA, a 2-nanometer Armv9-A architecture-based CPU slated to launch in the 2027 financial year, focused mainly on providing an energy-efficient solution to meet carbon-neutrality goals for green data center supercomputing facilities. The FUJITSU-MONAKA processor is set to provide an energy-efficient solution for Japan’s New Energy and Industrial Technology Development Organization (NEDO) program, which has launched an ambitious initiative to achieve energy savings of 40% or more in data centers in Japan by 2030.

Unlocking energy efficiency & performance begins with a collaborative approach​

Arm and Fujitsu have a long history of collaboration on the design of the Scalable Vector Extension (SVE) architecture for the Armv8-A architecture. The SVE design supports a vector-length agnostic (VLA) programming model, allowing programs to take advantage of wider and faster machines without the need for recompilation.
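To make the vector-length agnostic idea concrete, here is a minimal sketch (illustrative only, not taken from the article) using the Arm C Language Extensions (ACLE) SVE intrinsics. It assumes a compiler with SVE support (for example GCC or Clang with -march=armv8-a+sve); the same binary runs unchanged whether the hardware vector is 128 bits or 2,048 bits wide.

/* Vector-length agnostic (VLA) element-wise add using ACLE SVE intrinsics.
 * Nothing here hard-codes the vector width: svcntw() reports how many
 * 32-bit lanes this particular CPU provides, and the svwhilelt predicate
 * masks off the loop tail, so no cleanup loop or recompilation is needed. */
#include <arm_sve.h>
#include <stdint.h>

void vla_add_f32(float *dst, const float *a, const float *b, int64_t n) {
    for (int64_t i = 0; i < n; i += (int64_t)svcntw()) {
        svbool_t pg = svwhilelt_b32_s64(i, n);   /* active-lane predicate */
        svfloat32_t va = svld1_f32(pg, a + i);   /* predicated loads */
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, dst + i, svadd_f32_x(pg, va, vb));  /* add and store */
    }
}

On A64FX this loop processes 16 floats per iteration with its 512-bit vectors; on a 128-bit SVE2 implementation it processes 4 per iteration, from the same machine code.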
Fujitsu was the first silicon partner of Arm to implement the SVE architecture on the Fujitsu A64FX CPU that powered the Supercomputer Fugaku, jointly developed by RIKEN and Fujitsu. This laid the foundation for Arm to develop the SVE2 architecture to accelerate DSP-like and AI/ML workloads in data center and edge compute.
Fujitsu designs its own microarchitecture, a formidable factor for CPU performance and power efficiency. This technology made it possible for Fugaku, developed using the Fujitsu A64FX Arm-based CPU, to achieve the world's highest levels of performance and energy efficiency.
While talking about Fujitsu’s close collaboration with the Arm team, Dr. Priyanka Sharma, Director of Software Engineering at Fujitsu Research of India Private Limited (FRIPL), who is leading the MONAKA Software R&D Unit, said: “Fujitsu has a strong legacy of contributing towards the Arm ecosystem and we have extensively contributed towards building the Arm software stack. Through FUJITSU-MONAKA, we are committed to taking our association further to push our developments in the high-performance computing domain to the open source community and work towards building a unified development ecosystem that plays a vital role in advancing the creation of cross-platform software and accelerators. The MONAKA HPC R&D Unit in India is actively collaborating with the Arm team towards co-development of various software level enablement/tuning efforts to enable various ML/DL stack for Arm. The co-development with the Arm team has been a great working association and gives quite the feel of working towards the global community in building an open ecosystem for democratizing the use of AI.”

Shaping a better tomorrow with a ‘Software just works’ mantra​

Arm actively collaborates with technology companies to develop various open system standards to ensure that system software ‘just works on Arm.’ This avoids software fragmentation, ensures interoperability of system IP, and reduces time-to-market for the Arm ecosystem.
Aparajita Bhattacharya, Senior Director, Engineering Architecture and Technology at Arm shares, “My role in Arm is leading an engineering & technology organization that enables the software to ‘just work’. We worked closely with Fujitsu to enable them to achieve compliance for their Arm-based Fujitsu-A64FX CPU. During this collaboration, Arm teams worked hand-in-hand with Fujitsu’s engineering and leadership teams to better understand Fujitsu’s validation environments and collaborate with them to achieve architecture compliance on their systems. The deep dives into technical details and requests for capabilities, has led to enhancements in Arm’s compliance products. Our team has experienced first-hand the detail oriented, quality focused, and deeply courteous Japanese culture.”

Cross-industry collaboration is the key to innovation​

By working collectively towards an energy-efficient future, companies can foster innovation and growth. To deliver on this shared vision of open, flexible, and interoperable AI and HPC systems for an increasingly digitized business environment, technology leaders such as Fujitsu and Arm are driving innovation by maximizing the usefulness of backend compute that delivers on the promise of sustainable digital transformation. The next-generation Data Centre CPU FUJITSU-MONAKA can handle diverse and demanding workloads, enabling businesses to meet performance needs while lowering energy usage, and supporting the goal of a sustainable future.
We are also witnessing improved collaboration within the ecosystem. In a bid to drive cross-industry collaboration, the Linux Foundation announced in September 2023, during the OSS Summit, the formation of the Unified Acceleration (UXL) Foundation, a cross-industry group committed to delivering an open standard accelerator programming model that simplifies development of performant, cross-platform applications. Both Arm and Fujitsu are members of the UXL Foundation, which is focused on building a unified development ecosystem and plays a vital role in advancing the creation of cross-platform software and accelerators.
Additionally, Arm and Fujitsu are also key members of Linaro, which fosters the goal of spreading and evangelizing the Arm ecosystem across industries by leveraging Arm open source software. Fujitsu’s collaboration with Linaro began around 2019 with device drivers, and the partnership has continued over the years, with Fujitsu contributing to the CI/CD pipeline and compiler toolchain.

Acknowledgements​

This article is based on results obtained from a project subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

Research team leaders:​


Priyanka Sharma, PhD
Director - Software Engineering and Head MONAKA Software R&D Unit,
Fujitsu Research of India Private Limited (FRIPL)
LinkedIn Profile:


Aparajita Bhattacharya
Senior Director, Engineering Architecture and Technology
Arm Embedded Technologies Pvt. Ltd, Bangalore
LinkedIn Profile:

 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Maybe something worth looking into here...

This article, dated March 2023, describes the Fujitsu/Arm MONAKA CPU due in 2027. But it also discusses the "Fugaku-Next" processor, which is supposed to have energy-efficient and high-performance accelerators. It also shows "Neuromorphic Computing" under the heading New Computing Paradigm.

Here's a radical thought - SoftBank + Arm + Graphcore + Fujitsu + BrainChip = world domination!


 
  • Like
  • Love
  • Fire
Reactions: 21 users

7für7

Top 20
Can you imagine? And before that they kick us out with a takeover for 25 cents 🤡
 
  • Haha
  • Like
  • Thinking
Reactions: 3 users
This is a good watch (2 months old) by an animator on how the animation industry is being disrupted, with A.I. being a major factor.

He raises some good points (like coders thinking they are safe) which I think are relevant to everyone.

Creativity is, and has been, seen by some as humanity's last stronghold against technological advancement.

If the Dreamers are threatened, everyone is.



I wish him luck, but I don't think strike action will provide the same safeguards as it did in the '40s.



Things will likely get more messy, if anything.



Place your bets.
 
  • Like
Reactions: 1 user
  • Like
Reactions: 2 users
Nice 🔥




Doc... HERE
 
  • Like
  • Fire
  • Love
Reactions: 73 users

Quiltman

Regular
I reckon there is a very, very good chance that Tata is one of the organisations getting close to signing a deal with BrainChip, referred to by Sean at the AGM. Our history of collaboration has been a long one.

Use of our technology in Tata Medical Devices seems to be the catalyst.

Sounak Dey is as keen as ever …
 

  • Like
  • Fire
  • Love
Reactions: 66 users