BRN Discussion Ongoing

Frangipani

Regular
Germany closed green! 🤔 I’m scared


Actually, [attached charts].


At least not on Tradegate, which is Germany’s most important stock exchange regarding BRN. Low volume on Friday, though.

[Tradegate chart screenshots attached]


Now that you’ve addressed your own Angst after mentioning “German angst” three times last month in the context of share price volatility, will you by any chance be covering “Australian angst” the next time BRN closes red on the ASX?
 
  • Haha
  • Like
Reactions: 4 users

MrNick

Regular
NaNose has been added to the Google AI Startup Fund. And we all know who has links to them…👍🏻
 
  • Like
  • Love
  • Thinking
Reactions: 24 users
Wonder if we get a look-in or have had any input here at all?

This paper has just been released. While there is no mention of us, and the work benefited from input from researchers at Numenta (they work with CPUs) following a guest lecture at CMU, there is some history of Akida and cortical-column work at CMU with John Shen et al., as per a previous post of mine below for reference.

From memory, I think @Diogenese or someone mentioned that PVDM had looked at or started on something similar, but I could be mistaken.

[Submitted on 20 May 2024]

NeRTCAM: CAM-Based CMOS Implementation of Reference Frames for Neuromorphic Processors​







Some info in the previous post on C3S cortical columns.



Sri Lakshmi Vemulapalli​

Research Assistant at Neuromorphic Computer Architecture Lab | ECE Graduate Student at Carnegie Mellon University | Seeking full time positions starting immediately.​

Carnegie Mellon University


Carnegie Mellon University

1 year 1 month

  • Teaching Assistant​

    Jan 2023 - May 2023 · 5 months
    Pittsburgh, Pennsylvania, United States
    Working as a Teaching Assistant for 18698 - Neural Signal Processing.
  • Research Assistant​

    Aug 2022 - May 2023 · 10 months
    Pittsburgh, Pennsylvania, United States
    Working as a Research Assistant under Prof. John Shen on a Neural Processor, “Akida” by Brainchip.
    - Developing C3S designs in MetaTF and mapping them to the Akida chip.
    - Developing a technique for the conversion of CNNs to Spiking Neural Networks (SNN) and designing a framework for Native TNNs.
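The CNN-to-SNN conversion flow mentioned in the profile above is roughly what BrainChip's MetaTF tooling (the cnn2snn Python package) is for. Below is a minimal, hedged sketch of that flow; the exact API has shifted between MetaTF releases (newer versions move quantization into the quantizeml package), and the layer choices and bit-widths here are illustrative assumptions rather than a verified recipe.

```python
# Hedged sketch of a CNN -> Akida SNN conversion using MetaTF's cnn2snn package.
# API names follow the legacy cnn2snn documentation; details vary by release.
import tensorflow as tf
from cnn2snn import quantize, convert

# A small stand-in Keras CNN (hypothetical architecture, not the C3S work itself).
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 1. Quantize weights and activations so the model fits Akida's integer domain
#    (4-bit here; the bit-widths are an assumption).
quantized_cnn = quantize(cnn, weight_quantization=4, activ_quantization=4)

# 2. Convert the quantized CNN into an Akida-compatible spiking model.
akida_model = convert(quantized_cnn)

# 3. Inspect the result; mapping onto Akida silicon (or the software simulator)
#    is a separate step, e.g. via akida.devices().
akida_model.summary()
```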
 
  • Like
  • Fire
  • Love
Reactions: 31 users

JDelekto

Regular
From the information I read, I believe this chip will be built using Intel's Foundry Services. Tom's Hardware reported in January this year that Nvidia had chosen Intel to produce H100 GPUs. This is similar to how BrainChip partnered with Intel Foundry Services to be able to manufacture chips with BrainChip's IP, as an alternative to using TSMC.

I think it can get confusing because Intel not only designs chips but now also fabricates chips for itself and for others. The article mentioned in that post refers to using Intel's Foundry Services, potentially as an alternative to TSMC.
 
  • Like
  • Fire
  • Love
Reactions: 15 users

Justchilln

Regular
How about Numem as our mystery customer for the highlighted Customer SoC?!

View attachment 64246



Ultra-low-power MRAM-based SoC for sensors/AI

Technology News | May 30, 2024
By Jean-Pierre Joosting



Numem, a leader in high-performance memory IP cores and memory chips/chiplets based on its patented NuRAM (MRAM) and SmartMem technologies, and IC’ALPS, a leader in ASIC/SoC design and supply chain management, have pooled their expertise to meet the challenge of developing an ambitious integrated circuit with RISC-V processors, 2MBytes of NuRAM and a DSP/AI Custom Datapath Accelerator.​


The Custom SoC was developed in an advanced technology node. It has been designed and implemented to highlight Numem's high-performance, low-power memory subsystem with a RISC-V processor and an AI accelerator for ultra-low-power applications, and was developed through a close collaboration between Numem and IC’ALPS.

The physical implementation of this integrated circuit was carried out in a secure environment (isolated location, network, and servers, with encrypted exchanges) to meet the stringent protection of sensitive data required by this program.

“We were pleased with the collaboration and quality of service provided by IC’ALPS, which made this on-time tape-out and first-time functional silicon possible,” said Jack Guedj, CEO of Numem. “NuRAM with SmartMem is a high-performance memory subsystem which is 2-3x smaller and boasts a significant power reduction over SRAM,” he added.

“The challenges were numerous, including architecture, power domains, protection of the sensitive data, run times pushing improvements to the EDA flow, and the pressure of the tape-out deadline.”

Numem and IC’Alps intend to extend their partnership to serve new SoC projects for customers.

www.numem.com
www.icalps.com
I think it’s going to be MegaChips; when we signed the license, they said that in the future they might start to make some of their own chips…
 
  • Like
  • Fire
  • Love
Reactions: 8 users

itsol4605

Regular
NaNose has been added to the Google AI Startup Fund. And we all know who has links to them…👍🏻
As we all know:
No Brainchip Akida inside NaNose products🤷‍♂️
 
  • Haha
Reactions: 1 users

IloveLamp

Top 20
  • Haha
  • Like
Reactions: 12 users
  • Like
Reactions: 5 users

wilzy123

Founding Member
As we all know:
No Brainchip Akida inside NaNose products🤷‍♂️
You lost me at "No Brain"
 
  • Haha
  • Like
  • Fire
Reactions: 15 users
  • Haha
  • Fire
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Chinese Academy of Sciences + Synsense




Chinese scientists unveil low-power neuromorphic ‘brain-like’ chips
By GT staff reporters

Published: Jun 02, 2024 06:53 PM

An employee inspects a cellphone chip at an electronic product research and development company in Ningbo, East China's Zhejiang Province on February 22, 2024. The company's products are exported to more than 80 countries in Europe and Latin America, and its overseas order book is full through the second quarter of 2024. Photo: VCG
A Chinese scientific team has developed a new 'brain-like' chip that operates with reduced energy consumption, marking a significant advance in China's chip manufacturing technology.

Researchers from the Chinese Academy of Sciences, in collaboration with other scholars, have developed Speck, a low-power neuromorphic chip capable of dynamic computing. This system-level chip, integrating algorithm, software, and hardware design, demonstrates the inherent advantages of 'brain-like' computation in incorporating high-level brain mechanisms. The study was recently published online in the international journal 'Nature'.

"The human brain is an incredibly complex neural network, consuming only 20 watts, far less than current AI systems," said Li Guoqi, a researcher at the Institute of Automation, Chinese Academy of Sciences reported by Xinhua.

He emphasized that as computational demands and energy consumption rise, mimicking the neurons and synapses of the human brain to develop new intelligent computing systems is a promising direction.

Human brains can dynamically allocate attention based on stimulus, a process known as the attention mechanism. This research proposes 'neuromorphic dynamic computing,' applying this principle to enhance neuromorphic chip designs, thereby unlocking greater performance and energy efficiency.

Speck combines a dynamic vision sensor and a neuromorphic processor on a single chip, achieving remarkably low power use at rest. It can handle visual tasks with just 0.7 milliwatts, providing an energy-efficient, responsive, and low-power solution for AI applications, according to Li.

"The development of the neuromorphic chip concept is both a breakthrough in existing technology and a strategic response to US pressures, marking our pursuit of alternative development paths," Ma Jihua, a veteran telecom industry observer, told the Global Times on Sunday.

"China is leading the market in the brain-inspired chip sector," Ma told the Global Times, "Although this approach has been studied for a long time, transitioning from mathematical theory to mass manufacturing is challenging and requires extensive work," he added.

This kind of chip may help address fundamental challenges across the chip manufacturing industry, which is currently facing a bottleneck. The concept of neuromorphic computing presents a promising and viable research direction, according to Ma.






 
Last edited:
  • Like
  • Wow
  • Thinking
Reactions: 19 users
Something to look out for over the next few days:


Computex Taipei is Taiwan's biggest tech event, with many of the world's largest tech companies attending. It'll run from 4-7 June.

AI is going to be the main focus.
 
  • Like
  • Love
Reactions: 15 users

MrNick

Regular
👍🏻
Thanks.
Onto ignore that fella goes. At no point have I stated Akida is inside anything, let alone NaNose Medical, despite early indications our sensors could provide incredible assistance. Back in 2021 this report was fascinating, as every little nugget is for LTHs.
Night night.
RIP Rob Burrow.

 
  • Like
  • Haha
  • Love
Reactions: 12 users

7für7

Top 20
Actually, View attachment 64257 , View attachment 64256 für View attachment 64256 .


At least not on Tradegate, which is Germany’s most important stock exchange regarding BRN. Low volume on Friday, though.

View attachment 64258
View attachment 64259

Now that you’ve addressed your own Angst after mentioning “German angst” three times last month in the context of share price volatility, will you by any chance be covering “Australian angst” the next time BRN closes red on the ASX?
Actually, I wanted to respond appropriately, but I decided to delete it. It seems you don't understand irony or satire. Wallow in your self-glorification and pseudo wannabe fact-finder posts. It's too exhausting for me to deal with. Have a nice day, Mr. or Mrs. "I insult a forum member because I have nothing better to do, Karen."
 
  • Like
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!


Why Intel is making big bets on Edge AI​

The chipmaker's corporate vice president, Pallavi Mahajan, talks about the growing need for Edge AI
May 29, 2024 | By Charlotte Trueman

As is the case with all things AI in recent history, Edge AI deployments have not been immune to exponential growth.
As the pendulum has swung from centralized to distributed deployments, AI has driven the majority of growth in Edge computing, with organizations increasingly looking to deploy AI algorithms and models onto local Edge devices, removing the need to constantly rely on cloud infrastructure.
As a result, research from Gartner shows that at least 50 percent of Edge deployments by the year 2026 will incorporate machine learning, a figure that sat at around five percent in the year 2022.
Pallavi Mahajan, corporate vice president of Intel's Edge group software (Image: Intel)

Edge is not the cloud​

Businesses want the Edge to bring in the same agility and flexibility as the cloud, said Pallavi Mahajan, corporate vice president of Intel’s network and Edge group software. But, she notes, it’s important to differentiate between Edge AI and cloud AI.
“Edge is not the cloud, it is very different from the cloud because it is heterogeneous,” she says. “You have different hardware, you have different servers, and you have different operating systems.”
Such devices can include anything from sensors and IoT devices to routers, integrated access devices (IAD), and wide area network (WAN) access devices.
One of the benefits of Edge AI is that storing data in an Edge environment rather than a data center, even when large data sets are involved, speeds up decision-making and data analysis, both of which are vital for AI applications designed to provide real-time insights to organizations.
Another benefit, borne out of the proliferation of generative AI, is that even though model training takes place in a centralized data center far away from users, inferencing (where the model applies its learned knowledge) can happen in an Edge environment, reducing the time required to send data to a centralized server and receive a response.
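As a rough illustration of the split described above (training happens centrally, inference runs on the Edge device), here is a minimal sketch using TensorFlow and TensorFlow Lite. The choice of toolchain is purely an assumption for illustration; the article does not name any specific framework.

```python
# Minimal sketch of "train centrally, infer at the Edge" using TensorFlow /
# TensorFlow Lite. The toolchain is an assumption for illustration only.
import numpy as np
import tensorflow as tf

# --- In the data center: train a small model on centrally collected data. ---
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
x = np.random.rand(256, 8).astype("float32")
y = np.random.randint(0, 2, 256)
model.fit(x, y, epochs=1, verbose=0)

# --- Export a compact artifact to ship to Edge devices. ---
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# --- On the Edge device: load the artifact and run inference locally, so raw
#     sensor data never has to leave the device. ---
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 8).astype("float32"))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

The point is simply that the compact exported artifact, rather than the raw sensor data, is what travels between the data center and the device.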
Meanwhile, talent shortages, the growing need for efficiency, and the desire to improve time to market through the delivery of new services have all caused businesses to double down on automation.
Alluding to the aforementioned benefits of Edge computing, Mahajan said there are three things driving its growth right now: businesses looking for new and different ways to automate and innovate, which will in turn improve their profit margins; the growing need for real-time insights, which means data has to stay at the Edge; and new regulations around data privacy, which means companies have to be more mindful about where customer data is being stored.
Add to that the fact that AI has now become a ubiquitous workload, and it's no surprise that organizations across all sectors are looking for ways to deploy AI at the Edge.
Almost every organization deploys smart devices to support their day-to-day business operations, be that MRI machines in hospitals, sensors in factories, or cameras in shops, all of which generate a lot of data that can deliver valuable real-time insights.
GE Healthcare is one Intel customer that uses Edge AI to support the real-time insights generated by its medical devices.
The American healthcare company wanted to use AI in advanced medical imaging to improve patient outcomes, so partnered with Intel to develop a set of AI algorithms that can detect critical findings on a chest X-ray.
Mahajan explains that, in real time, GE's X-ray machines scan the images being taken and, using machine learning, automatically detect whether something is wrong with a scan or there is an anomaly that needs further investigation.
While the patient is still at the hospital, the machine can also advise the physician to take more images, perhaps from different angles, to make sure nothing is being missed. The AI algorithm is embedded in the imaging device, instead of being on the cloud or a centralized server, meaning any potentially critical conditions can be identified and prioritized almost immediately.
“Experiences are changing,” Mahajan says. “How quickly you can consume the data and how quickly you can use the data to get real-time insights, that’s what Edge AI is all about.”

Intel brings AI to the Edge​

Mahajan joined Intel in 2022, having previously held software engineering roles at Juniper Networks and HPE. She explains she was hired specifically to help build Intel’s new Edge AI platform.
Unveiled at Mobile World Congress (MWC) in February 2024, the platform is an evolution of the solution codenamed Project Strata that Intel first announced at its Intel Innovation event last year.
“[Intel] has been working at the Edge for many, many years… and we felt there was a need for a platform for the Edge,” she explains. Intel says it has over 90,000 Edge deployments across 200 million processors sold in the last ten years.
Traditionally, businesses looking to deploy automation have had to do so in a very siloed way. In contrast, Mahajan explains that Intel’s new platform will enable customers to have one server that can host multiple solutions simultaneously.
The company has described its Edge AI offering as a “modular and open software platform that enables enterprises to build, deploy, run, manage and scale Edge and AI solutions on standard hardware.” The new platform has been designed to help customers take advantage of Edge AI opportunities and will include support for heterogeneous components in addition to providing lower total cost of ownership and zero-touch, policy-based management of infrastructure and applications, and AI across a fleet of Edge nodes with a single pane of glass.
The platform consists of three key components: the infrastructure layer and the AI application layer, with the industry solutions layer sitting on top. Intel provides the software, the infrastructure, and its silicon, and Intel’s customers then deploy their solutions directly on top of it.
“The infrastructure layer enables you to go out and securely onboard all of your devices,” Mahajan says. “It enables you to remotely manage these devices and abstracts the heterogeneity of the hardware that exists at the Edge. Then, on top of it, we have the AI application layer.”
This layer consists of a number of capabilities and tools, including application orchestration, low-code and high-code AI model and application development, and horizontal and industry-specific Edge services such as data thinning and annotation.
The final layer consists of the industry solutions and, to demonstrate the wide range of use cases the platform can support, it has been launched alongside an ecosystem of partners, including Amazon Web Services, Capgemini, Lenovo, L&T Technology Services, Red Hat, SAP, Vericast, Verizon Business, and Wipro.
Mahajan also lists some of the specific solutions Intel’s customers have already deployed on the platform, citing one manufacturer that is automatically detecting welding defects by training its AI tool on photos of good and bad welding jobs.
“What this platform enables you to do is build and deploy these Edge native applications which have AI in them, and then you can go out and manage, operate, and scale all these Edge devices in a very secure manner,” Mahajan says.
At the time of writing, a release date had not been confirmed for Intel’s Edge AI platform. However, during MWC, the company said it would be “later this quarter.”

AI 'everywhere'​

Although Gartner predicted in 2023 that Edge AI had two years before it hit its plateau, Intel is confident this is not the case, and has made the Edge AI platform a central part of its ‘AI Everywhere’ vision.
Alongside its Edge AI platform, Intel also previewed its Granite Rapids-D processor at MWC. Designed for Edge solutions, it has built-in AI acceleration and will feature the latest generation of Performance-cores (P-cores).
Writing on X, the social media platform previously known as Twitter, in October 2023, Intel’s CEO Pat Gelsinger said: “Our focus at Intel is to bring AI everywhere – making it more accessible to all, and easier to integrate at scale across the continuum of workloads, from client and Edge to the network and cloud.”
As demonstrated by the recent slew of announcements, Intel clearly believes that Edge AI has just reached its peak, with Mahajan stating that all industries go through what she described as “the S Curve of maturity.” Within this curve, the bottom of the ‘S’ represents those tentative first forays into exploring a new technology, where organizations run pilot programs and proof-of-concepts, while the top of the curve is the point at which the market has fully matured.
“This is where I think we are now,” she says, adding that she believes Intel was “the first to read the need for [an Edge AI] platform.” She continues: “This is the feedback we got after the launch at MWC, that everybody was saying, ‘Yes, this market needs a platform.’
“I’m sure there will be more platforms to come but I'm glad that Intel has been a leader here.”

 
  • Like
  • Love
  • Fire
Reactions: 52 users
Pallavi Mahajan

The future of AI is bright, and I am excited to see our solutions empowering businesses to unlock the full potential of their data. A special thanks to Mauro Capo for walking on and sharing Accenture expertise in helping enterprises leverage the power of GenAI. Such collaborations fuel innovation and drive transformative change to shape the AI landscape.
 
  • Like
  • Fire
Reactions: 7 users

miaeffect

Oat latte lover

View attachment 64277

Why Intel is making big bets on Edge AI

Loved it

If Intel can trigger the edge AI market quickly, BRN is one of the bullets for sure
 
  • Like
  • Fire
  • Love
Reactions: 39 users
Ping
 
  • Haha
Reactions: 3 users