BRN Discussion Ongoing

What chance is there of BRN going into the Nintendo Switch 2?
Even money. It either is or it isn't.

SC
 
Most forums are.

SC
What are you talking about?..
Obviously we're excluded from that, right! 😉👍
 
What are you talking about?..
Obviously we're excluded from that, right! 😉👍
Well, most of us on here. There are a couple that are sus. 😂🤣😀

SC
 

itsol4605

Regular
That’s exactly the correct answer, fingers crossed. Lots of speculation and it’s very difficult to guess.
I think it’s more of a ‘yeah, nah’.
 

7für7

Top 20
What are you talking about?..
Obviously we're excluded from that, right! 😉👍
 
Thinking out loud: knowing BRN IP can be upgraded over time, it would make sense for Nintendo to be involved now somehow, as each model seems to be years apart when launched. Guessing only, though.
 
Plenty going on with Intel at the moment, good read:

 

MegaportX

Regular
This is a thread I found discussing the Nintendo Switch 2. Interesting that the last post claims it has 48 Tensor Cores.

https://www.neogaf.com/threads/repo...ight-not-be-as-powerful-as-it-sounds.1663729/

SC
48 Tensor Cores is a feature of some NVIDIA graphics processing units (GPUs), specifically those based on the Ampere and later architectures. Tensor Cores are specialized cores designed to accelerate matrix operations, which are a fundamental component of deep learning and artificial intelligence (AI) workloads.

Tensor Cores are optimized for performing mixed-precision matrix operations, which are commonly used in deep learning models. They can perform operations such as matrix multiplication, convolution, and batch normalization, among others.

The presence of 48 Tensor Cores in a GPU indicates that it is capable of handling demanding AI and deep learning workloads, such as:

  • Training and inference of large neural networks
  • Accelerating scientific simulations and data analysis
  • Enhancing graphics rendering and physics simulations in games and professional applications
GPUs with 48 Tensor Cores are typically found in high-end consumer and professional-grade graphics cards, as well as in datacenter and cloud computing environments.
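
For anyone wondering what those mixed-precision matrix operations actually look like in code, here's a minimal PyTorch sketch (my own illustration, not from the Switch thread; it assumes a CUDA GPU, which is where Tensor Cores come into play, and falls back to CPU otherwise):

```python
import torch

# Minimal mixed-precision matmul sketch. On NVIDIA GPUs with Tensor Cores,
# reduced-precision matrix multiplies like this are exactly the kind of
# operation those cores accelerate.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

# autocast runs eligible ops (matmul, convolution, etc.) in reduced precision
# while keeping numerically sensitive ops in float32.
dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.autocast(device_type=device, dtype=dtype):
    c = a @ b  # candidate for Tensor Core execution on supported GPUs

print(c.dtype, c.shape)
```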
 

Tezza

Regular
Surely we are due something soon. Feels like the AI revolution is passing us by.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Come on TATA!

See section highlighted in orange.🍊





India in the Thick of the AI Race – How Neuromorphic Computing Could Push the Nation to the Front of the Pack


Published September 16, 2024, by Gaurav Roy

The artificial intelligence (AI) industry is changing the world in a dramatic fashion as it becomes an integral part of businesses across sectors.
From healthcare, retail, finance, manufacturing, and supply chain to education, energy, and entertainment industries, there has been a growing demand for AI-powered tools.
According to PWC, AI can transform the productivity and GDP potential of the global economy with the greatest gains to be seen in China and North America, equivalent to a total of $10.7 trillion.
Regions to Benefit Most From AI

Driven by this growing usage, the AI market is expected to grow from $50 bln in 2023 to well above $800 bln by the end of this decade, as per Statista.
This AI boom has led to competition between large companies to develop the most powerful AI models worldwide, and countries are keen to foster their own competing AI systems.
Amidst this, India has emerged as a key player, with 92% of knowledge workers utilizing generative AI compared to the much lower global average of 75%.
Just late last month, Asia's richest man, Mukesh Ambani, chairman of Reliance Industries, unveiled “JioBrain,” a suite of AI tools and applications to transform businesses in energy, textiles, and more. Reliance's telecommunications business is currently working with the Indian Institute of Technology (IIT) to launch “Bharat GPT” for Indian users.
“We need to be at the forefront of using data, with AI as an enabler for achieving a quantum jump in productivity and efficiency. “
– Mukesh Ambani
After building a high-powered IT industry worth $250 bln, India is now setting its sights on AI services, which, according to a report by Nasscom and BCG, could be worth $17 bln in the next three years.
With over 900 million internet users, India has emerged as “the data capital of the world.” The fact that so much data is publicly available is extremely beneficial for companies, as they can write their own AI algorithms.
However, computing power and shared resources are needed to accelerate the country's AI industry. For this, the Indian government has procured a thousand GPUs to offer computing capacity to AI makers.
Earlier this year, the first shipment of Nvidia chips arrived in Indian data centers after the CEO of the world's largest chipmaker, Jensen Huang, visited India and had a discussion with Prime Minister Narendra Modi and tech executives.
“You have the data, you have the talent. This is going to be one of the largest AI markets in the world.”
– Huang told the PM at the time

A Breakthrough: Mimicking the Brain for Smarter Computing


Amidst all this, scientists at the Centre for Nano Science and Engineering (CeNSe), Indian Institute of Science (IISc), Bangalore, India, made a major breakthrough in neuromorphic computing technology. This technology mimics the human brain's structure and function to create more efficient and intelligent computing systems.
This momentous progress can help India become a major player in the global AI race and make AI computing accessible to everyone and integrated into their personal devices.
This is certainly a great feat, given that the conventional ‘cloud computing model' requires large data centers that consume a lot of energy. Using resource-intensive data centers limits their use to a small community of developers.
Neuromorphic hardware promises enhanced energy and space efficiency for AI. At present, however, it can only handle low-accuracy operations. Tasks like NLP, neural network training, and signal processing require substantially more computing resolution and are currently beyond the scope of individual neuromorphic circuit elements.
The latest advancement by IISc scientists can help with this and move the field towards ‘edge computing,' which brings data processing and storage closer to the devices that create and use the data. This reduces latency, improves application performance, and saves network costs.
Edge computing further enables real-time applications, as well as AI and machine learning applications, to process large volumes of data with greater speed and reliability.
In the latest research, published in Nature, Professor Sreetosh Goswami at CeNSE, IISc, led a group of scientists and students who developed a type of semiconductor device called a memristor. Instead of using traditional silicon-based technology, the memristor was created using a metal-organic film.
The use of molecular films allowed researchers to track free ionic movements, which widened the memory pathways.
To establish kinetic controls over the molecular transition that enabled the neuromorphic traits in a single circuit element, scientists applied voltage pulses and then mapped molecular movements to a distinct electrical signal. This created an extensive ‘molecular diary' of different states.
“Due to this free ionic movement, countless unique memory states and pathways were generated. Such intermediary states had remained inaccessible so far, as most digital devices are only able to access two states, either high or low conductance.”
– Professor Sreebrata Goswami, father of Prof Sreetosh and a visiting scientist at CeNSE
This platform, by combining the creativity of chemistry and the precision of electrical engineering, has “allowed us to control molecular kinetics very precisely inside an electronic circuit powered by nanosecond voltage pulses,” he added.
The molecular change was tiny, but it was exactly what allowed for the highly precise and efficient neuromorphic accelerator. This first-of-its-kind accelerator can store and process data within the same location.
So, the material basically allowed the semiconductor device to copy the way our brain processes information using synapses and neurons. When integrated with a regular digital computer, the Memristor boosted its energy and speed performance hundreds of times, hence becoming an energy-efficient ‘AI accelerator.'
The eventual scaling of the technology is expected to enable the most complex and large-scale AI tasks, such as LLM training, to be performed on a simple consumer laptop or smartphone instead of needing data centers.

Boosting AI Capabilities with Memristor Technology

With the rising demand for faster and more efficient smartphones, digital computing based on silicon transistor technology has already been facing limitations. In addition to the growth of AI, there's now a critical need for innovations that can significantly improve the processing speed and energy efficiency of today's computing systems.
To create such a device, IISc scientists used the metal-organic film method for their memristors, which work like the brain's neuron-synapse circuit. In-memory computing is an approach chipmakers have already taken to overcome these problems, but so far without much success.
Through the 2010s, companies tried to mimic the brain, but they continued using silicon transistors and didn't make significant gains.
“In the 2020s, the research investments are moving back to academia because there is a realization that we need much more fundamental discoveries to actually achieve brain-inspired computing. If you just take a brute-force approach to use transistors and enforce certain algorithms, that's not going to work.”
– Sreetosh said in an interview
In our brain, memory and processing aren't anything like in a digital computer. There are no 0s and 1s used for information processing, and no breaking of the data into small pieces. What our brain does is swallow big chunks of data to process information, which drastically reduces the number of steps required to get answers, making it extremely energy-efficient.
The combination of analog computing and Memristor technology makes neuromorphic computing fast and highly efficient.
So, the team stored and processed data in 16,520 states at a time instead of just two (zeros and ones). This reduced the number of steps required for the fundamental math behind AI algorithms, multiplying 64×64 matrices, to just 64 steps, which would otherwise have taken 262,144 operations.
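
As a rough sanity check on those figures (my own back-of-the-envelope sketch, not from the paper): multiplying two 64×64 matrices digitally takes 64 × 64 × 64 = 262,144 multiply-accumulate operations, while an analog in-memory array that produces a whole matrix-vector product in one shot needs only one step per input column, i.e. 64 steps.

```python
# Back-of-the-envelope arithmetic behind the 262,144-vs-64 comparison.
# Illustrative only; the mapping onto the IISc hardware is an assumption.
n = 64

digital_macs = n * n * n   # one multiply-accumulate per (row, column, k) triple
analog_steps = n           # one analog matrix-vector product per input column

print(digital_macs)                    # 262144
print(analog_steps)                    # 64
print(digital_macs // analog_steps)    # 4096x fewer sequential steps
```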
To test the device, the team plugged it into a conventional desktop computer and then used it to do the complex task of regenerating NASA's ‘Pillars of Creation' image using the data from the James Webb space telescope. The task was achieved much faster and with significantly less energy than a regular computer.
An energy efficiency of 4.1 tera-operations per second per watt (TOPS/W) was reported by the team, which is 220 times more efficient than an NVIDIA K80 GPU, with considerable room for further improvement.
“Neuromorphic computing has had its fair share of unsolved challenges for over a decade,” noted Sreetosh, adding that during their decade-long research and “rare” discovery, the team was able to solve all six challenges that they started with and, in the process, “almost nailed the perfect system.”
At the current stage, it is just a proof of principle. But despite being in its early stages, the fact that the team achieved a four-orders-of-magnitude improvement over advanced methods while consuming 460 times less energy than digital computers leads scientists to believe this advancement can help India take a significant leap in AI hardware.
According to Navakanta Bhat, a silicon electronics expert who's a professor at CeNSE (who also led the project's design):
“What stands out is how we have transformed complex physics and chemistry understanding into groundbreaking technology for AI hardware. In the context of the India Semiconductor Mission, this development could be a game-changer, revolutionizing industrial, consumer, and strategic applications. The national importance of such research cannot be overstated.”
In the next stage, the team will build larger arrays that go as far as 256×256. Funded by the Ministry of Electronics and Information Technology (MeitY), the research further aims to develop and demonstrate a System-on-Chip solution and eventually incubate a startup to make it commercial, which is expected to happen over the next three years.

Positioning India as a Global Leader in AI

A combination of research institutions, startups, and large tech companies is driving the significant growth that the AI sector in India is currently experiencing. It's because of these participants that theoretical AI is being successfully translated into real-world applications.

But this is not all. India's rapidly growing AI industry has also received considerable support from its government, which has introduced several strategic policies and initiatives, such as the National AI Strategy, the AI Task Force, and the AI Mission, to fast-track its advancement.

By ramping up investment worth $1.25 bln towards the “IndiaAI Mission,” the government is acting as the main driver behind India's AI transformation.

The Ministry of Electronics and Information Technology (MeitY), which funded the latest computing breakthrough at IISc, has been looking for AI solutions under the IndiaAI Mission.

MeitY has been inviting applications for AI solutions to address primary challenges in the areas of climate change, agriculture, healthcare, and assistive technologies. The applicants will go through several rounds of evaluation and, once chosen, will be awarded funds to further develop and deploy their solution for the use of the government and the associated entities for four years.

The initiative comes under the IndiaAI Application Development Initiative (IADI), one of IndiaAI's seven pillars. The ministry has specified up to fifteen problem statements in the application document.

Elsewhere, AWS selected seven Indian startups for its global Generative AI accelerator program. These startups include Zocket, Unscripted AI, Phot.ai, Orbo.ai, Neural Garage, House of Models, and Converse. The startups were chosen from among 80 companies selected by AWS worldwide for their innovative use of AI and their global growth ambitions.

All the chosen companies will have access to AWS's AI chips and computing and storage tech. They will also be provided with AWS credits, education, and mentorship to further their use of AI and machine learning technologies.

While the country is making considerable advancements, there are still many hurdles in the path to achieving AI supremacy. India needs to address the lack of skills and adequate infrastructure in order to fully realize its potential to become a global leader in AI.

Companies Helping Advance India's AI Market

The growing AI market in India has resulted in several companies working to advance AI. For instance, AI startup Krutrim became India's first AI unicorn this year when it secured $50 mln in funding from Lightspeed Venture Partners and billionaire Vinod Khosla. Meanwhile, Asia's second-richest person, Gautam Adani, has announced a joint venture with the UAE to explore AI and diversify into digital services.

Microsoft is also powering initiatives in India that aim to equip 2 million people with AI skills. Infosys, Wipro, Tech Mahindra, and Larsen & Toubro (L&T), among others, are utilizing AI in their businesses. HCL Technologies, meanwhile, has signed a deal with Taiwan's Foxconn to establish a chip testing facility.

Now, let's take a look at a prominent name in India's AI industry:

Tata Consultancy Services (TCS)

The largest software company in India, TCS, invested $1.5 bln in a generative AI project pipeline in July this year. This comes after TCS established a dedicated AI Cloud business unit last year. Then, there's the AI experience zone that allows employees to develop and experiment with AI solutions. So far, over 300,000 TCS employees have received foundational training in AI, while the company is currently handling more than 270 AI and generative AI engagements.

TCS is also working on boosting the semiconductor industry in India and is working with Tata Electronics to develop chips by 2026. Tata Electronics, meanwhile, is leading the manufacturing with two sanctioned semiconductor plants. The first USD 11 billion facility will be set up with Taiwan's Powerchip Semiconductor Manufacturing Corporation (PSMC) with the aim of manufacturing 50,000 wafers per month. The USD 3.26 billion second facility will be a chip assembly and testing plant that is expected to start operations in the coming years.

When it comes to financials, TCS reported a 3.9% YoY increase in revenue to US$7.51 billion for Q2 2024. Net income came in at US$1.44 billion, while net cash from operations was US$1.34 billion. The company also reported “very strong” growth in emerging markets, led by India.


Conclusion

India, with its young and digitally savvy population, is well-positioned to drive the growth of AI not just locally but on a global scale. The AI market in India is actually projected to reach $8 billion by 2025.

Through ongoing research and development, as demonstrated by the IISc, the country can become a key player in the AI race. But for India to really take the lead, all the stakeholders need to work together to help the country's AI journey, unleash its enormous potential, and completely transform AI.


 
48 Tensor Cores is a feature of some NVIDIA graphics processing units (GPUs), specifically those based on the Ampere and later architectures. Tensor Cores are specialized cores designed to accelerate matrix operations, which are a fundamental component of deep learning and artificial intelligence (AI) workloads.

Tensor Cores are optimized for performing mixed-precision matrix operations, which are commonly used in deep learning models. They can perform operations such as matrix multiplication, convolution, and batch normalization, among others.

The presence of 48 Tensor Cores in a GPU indicates that it is capable of handling demanding AI and deep learning workloads, such as:

  • Training and inference of large neural networks
  • Accelerating scientific simulations and data analysis
  • Enhancing graphics rendering and physics simulations in games and professional applications
GPUs with 48 Tensor Cores are typically found in high-end consumer and professional-grade graphics cards, as well as in datacenter and cloud computing environments.
Thanks mate. Great explanation.
Appreciated. There has been talk of DLSS being used in the Switch 2, which I guess would help with improving graphics. My layman's view.

SC

 

schuey

Regular
Surely we are due something soon. Feels like the AI revolution is passing us by.
Hasn't started yet, in the big picture.
 
The market does not understand the significance of the TCS/Intel collaborations with BrainChip. Throw in an Arm collaboration for a helping hand 🤪. Going to be huge. Those Indians know their tech!
 

IloveLamp

Top 20

Attachment: 1000018375.jpg (image)

Diogenese

Top 20
Plenty going on with Intel at the moment, good read:


Now this is interesting:

But in a win for the foundry business, Gelsinger revealed that Intel has signed a deal with AWS to co-develop an AI chip using Intel's 18A chip fabrication process. Intel has also agreed to produce a custom Xeon 6 processor for AWS, building on an existing partnership between the two firms.

"We have tripled our deal pipeline since the beginning of the year," Gelsinger said of Intel Foundry's business, describing the AWS deal as a "multi-year, multi-billion-dollar framework" that could potentially involve additional chip designs. He added that it "demonstrates the continued progress we are making to build a world-class foundry business."

Intel's cost-cutting and dealmaking -- along with a newly-awarded $3.5 billion contract to build chips for the Pentagon -- sent the company's stock soaring over 6% at market close. It's a bright spot in Intel's otherwise grim fiscal year.

I wonder how fast Akida 2/TeNNs would be at 18A (1.8 nm). I don't suppose it's a purely linear thing - I mean, what's the inertia of an electron?

So was it a coincidence that Anil's tape out announcement was quashed?
 

schuey

Regular
The market does not understand the significance of the TCS/Intel collaborations with BrainChip. Throw in an Arm collaboration for a helping hand 🤪. Going to be huge. Those Indians know their tech!
And their curries. Oh, the naans...
 