BRN Discussion Ongoing

Trouble is, we keep trumping our own aces.
Well, whoever stands still is going to lose at the moment.

And that's only going to get worse.
 
  • Like
  • Thinking
Reactions: 7 users

Tothemoon24

Top 20

BrainChip’s aTENNuate: A Revolution in AI Edge Processing​

By Victoria Reed / September 19, 2024


New innovations arrive daily, each promising to reshape the landscape. But some technologies truly stand out. Enter aTENNuate, the latest breakthrough introduced by BrainChip. This cutting-edge technology represents a monumental leap forward in AI efficiency, specifically designed for edge processing. What makes aTENNuate so special? It tackles one of the most pressing challenges in the AI industry: how to make AI smarter, faster, and more efficient while operating on smaller devices.

The Power of Edge Processing​

Before diving deeper into the benefits of aTENNuate, let’s break down edge processing. In simple terms, edge processing refers to the ability of devices—whether it’s a smartphone, a wearable, or even a car—to process data locally, without needing to rely on cloud computing. It means AI can make real-time decisions on the device itself. No lag, no delays—just fast, seamless AI performance. This is exactly where BrainChip’s aTENNuate comes in.
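
To make the distinction concrete, here is a toy Python sketch contrasting where the model runs in each case. The round-trip time and the stand-in model are illustrative assumptions, not anything from BrainChip:

```python
import time

# Toy sketch (illustrative assumptions only): the practical difference between
# cloud and edge inference is where the model runs relative to the sensor.
def cloud_infer(sample):
    time.sleep(0.08)              # assumed ~80 ms network round trip
    return "label"                # result comes back from a remote server

def edge_infer(sample, model):
    return model(sample)          # model runs on the device: no round trip

model = lambda x: "label"         # stand-in for an on-device neural network

start = time.perf_counter()
edge_infer([0.1, 0.2], model)
print(f"edge latency:  {time.perf_counter() - start:.4f} s")

start = time.perf_counter()
cloud_infer([0.1, 0.2])
print(f"cloud latency: {time.perf_counter() - start:.4f} s")
```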

Reducing Power Consumption Without Sacrificing Performance​

One of the major pain points in AI at the edge has always been the balance between power consumption and performance. Processing large amounts of data requires energy, and that energy can drain battery life or stress small hardware. aTENNuate solves this by cleverly managing and optimizing neural network workloads. In layman’s terms, it’s like giving your AI a smart workout—more efficient, more effective, and with less strain on your device.

Smarter AI Models for Smarter Decisions​

aTENNuate doesn’t just reduce energy consumption; it makes AI models smarter. By enhancing the way neural networks learn and process information, this innovation ensures AI is more adaptive and quicker in decision-making. Imagine your smartphone knowing exactly when to reduce background processes or your wearable device anticipating your next move before you even act. This level of predictive intelligence is a hallmark of aTENNuate.

Real-Time Performance in Critical Applications​

Think of applications where milliseconds matter. Autonomous vehicles, healthcare devices, or even security systems—all rely on split-second decisions. With aTENNuate, AI at the edge is faster, leading to improved safety and efficiency. The ability to make decisions instantly, without a cloud connection, is a game-changer for industries where time is literally of the essence.

A Boost to Sustainability​

Beyond performance, there’s another vital aspect that aTENNuate addresses: sustainability. As the world grows more conscious of energy consumption, technologies that can reduce carbon footprints are essential. BrainChip’s aTENNuate does just that by allowing devices to do more with less power. This opens the door to greener technologies in everything from consumer electronics to industrial applications.

Expanding the AI Horizon with aTENNuate​


While AI has made remarkable strides over the past decade, it’s often limited by the very devices we use daily. Edge devices, such as smartphones, drones, and IoT gadgets, traditionally lack the raw computing power of a full-scale cloud server. But with aTENNuate, BrainChip is extending the capabilities of these devices, allowing them to run complex AI models that were once thought impossible outside of a data center. This expansion means more intelligent interactions across more devices in more places.

AI That Learns On the Fly​

Another standout feature of aTENNuate is its ability to support on-device learning. While most AI systems need to send data back to a cloud server for analysis and updates, aTENNuate enables devices to learn and evolve in real-time. For instance, your smart home devices can adapt to your routines without needing constant updates from the cloud. This not only improves performance but also enhances privacy, as sensitive data doesn’t need to be transmitted.
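
As a loose illustration of the concept (this mimics the idea of local adaptation, not BrainChip’s actual on-chip learning rule), a device could adapt without the cloud by keeping per-class prototype vectors and updating them from the examples it sees:

```python
import numpy as np

# Illustrative sketch only: one simple way a device can "learn on the fly"
# without cloud round trips is to keep a running-mean prototype per class
# and classify new inputs by nearest prototype.
class PrototypeLearner:
    def __init__(self):
        self.prototypes = {}   # class label -> running mean feature vector
        self.counts = {}

    def learn(self, label, feature):
        f = np.asarray(feature, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = f.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.prototypes[label] += (f - self.prototypes[label]) / self.counts[label]

    def predict(self, feature):
        f = np.asarray(feature, dtype=float)
        return min(self.prototypes, key=lambda c: np.linalg.norm(self.prototypes[c] - f))

learner = PrototypeLearner()
learner.learn("wake_word", [1.0, 0.1])     # hypothetical local examples
learner.learn("background", [0.0, 1.0])
print(learner.predict([0.9, 0.2]))         # -> "wake_word"
```

Nothing here ever leaves the device, which is the privacy point the article is making.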

Enhancing Security Through Local Processing​

Security is a top concern for any AI system, especially when it comes to edge devices. With aTENNuate’s focus on local processing, data remains on the device, reducing the risk of breaches during transmission. This makes the technology ideal for sensitive applications, such as healthcare devices or financial systems. Plus, with the constant evolution of cyber threats, having AI that can quickly adapt and bolster security protocols in real-time is invaluable.

Versatility Across Industries​

The applications of aTENNuate extend across various industries, from automotive to healthcare, and even retail. In the automotive sector, edge AI systems powered by aTENNuate can enhance autonomous driving by allowing vehicles to process sensor data instantly, making real-time decisions critical for safety. In healthcare, wearable devices equipped with aTENNuate can monitor vital signs and alert users or physicians about potential health risks immediately, without needing cloud access.

Paving the Way for the Future of AI​

As AI becomes more integrated into our everyday lives, the importance of efficient, real-time processing will only grow. aTENNuate is poised to lead this shift, enabling AI to become not only smarter but also more sustainable. With its ability to reduce power consumption, enhance security, and expand AI’s potential across industries, BrainChip’s aTENNuate is much more than just another AI innovation—it’s the future of edge computing.

The Bottom Line​

The introduction of aTENNuate marks a significant step forward in the world of AI. By addressing the critical challenges of power consumption, performance, and adaptability, BrainChip is setting a new standard for edge AI solutions. Whether you’re looking at smarter homes, safer cars, or more personalized devices, aTENNuate ensures that the next generation of AI will be faster, greener, and smarter than ever before.
As AI continues to evolve, it’s exciting to see where groundbreaking technologies like aTENNuate will take us next. With real-time processing and local learning capabilities, we’re looking at a future where AI is more responsive, secure, and efficient—all without needing a cloud connection.

Resources​

  1. BrainChip Official Website
    The official BrainChip site provides detailed information on aTENNuate, including technical specifications, use cases, and news on future developments.
    BrainChip Official Website
  2. Whitepapers on Edge AI
    These technical documents cover the principles of edge AI and how innovations like aTENNuate enhance efficiency and performance.
    Example: Edge AI: Optimizing for Tomorrow’s Devices
  3. Research Articles on Edge Computing and Neural Networks
    Websites like IEEE Xplore offer in-depth academic research on neural networks, edge processing, and energy-efficient AI models.
    IEEE Xplore Digital Library
  4. AI News Outlets
    Stay updated with the latest AI advancements through reputable AI news portals like VentureBeat or TechCrunch, which frequently cover innovations in edge AI.
    VentureBeat – AI
  5. YouTube Channels and Webinars
    For visual learners, YouTube channels such as BrainChip’s official page offer insightful webinars and product demos, where they explain their cutting-edge technologies like aTENNuate in greater detail.
  6. Podcasts on AI and Edge Computing
    AI-focused podcasts often interview industry experts discussing topics like edge AI and innovations in neural network efficiency, making them a good way to digest complex information on the go. Some recommendations include “AI in Business” and “The AI Alignment Podcast”.

IMG_9587.jpeg




BrainChip has developed the 2nd generation of its Akida platform, which is based on neuromorphic computing.
Neuromorphic computing allows complex neural networks to operate more efficiently and with lower energy consumption.

A key feature of the new Akida platform is the Temporal Event-Based Neural Networks (TENNs), which enable a significant reduction in model size and computational effort without compromising accuracy.
These networks are particularly useful for applications that process temporal data, such as audio and speech processing, video object recognition, and predictive maintenance.
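
TENNs themselves are BrainChip’s proprietary design, but the flavour of a lightweight temporal layer can be sketched with a causal 1-D convolution that only ever looks at past samples, which is what lets it run on streaming data; the kernel below is an assumed 8-tap average, not a trained TENN kernel:

```python
import numpy as np

# Hedged sketch: a causal temporal filter never peeks at future samples,
# so it can process a live stream one sample at a time.
def causal_conv1d(signal, kernel):
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), signal])   # pad the past only
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(signal))])

audio = np.sin(np.linspace(0, 20, 100)) + 0.3 * np.random.randn(100)
smoothing_kernel = np.ones(8) / 8.0      # assumed 8-tap averaging kernel
denoised = causal_conv1d(audio, smoothing_kernel)
print(denoised[:5])
```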

The Akida platform also supports Vision Transformers (ViT) and Long Range Skip Connections, which accelerate the processing of visual and multisensory data.
These features are crucial for applications in healthcare, the automotive industry, and smart cities.

Another advantage of the Akida technology is its on-chip learning capability, which allows sensitive data to be processed locally, thereby enhancing security and privacy.
The platform is flexible and scalable, making it suitable for a wide range of Edge AI applications.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 71 users

Tothemoon24

Top 20

Occupant Safety Redefined: Socionext’s Innovations​

ASEUR24-Matthias-Neumann.png

Socionext is at the forefront of automotive interior sensing technology, known for their cutting-edge radar sensors and advanced signal processing solutions. This year, they introduce the revolutionary 60 GHz Radar Sensor, SC1260, which sets new benchmarks in vehicle safety. The SC1260 is designed to enhance in-cabin experiences, offering critical features such as child presence detection and seat occupancy monitoring, all while being highly efficient with minimal power consumption.​

At InCabin, Socionext presents a dynamic cockpit demonstrator, showcasing the SC1260’s ability to detect vital signs like breathing, ensuring a safer and smarter automotive environment.​

Watch the video below to find out more:​


Q&A with Socionext

What exciting innovations is Socionext bringing to the table this year?
We’re thrilled to showcase our latest innovation—the 60 GHz Radar Sensor, SC1260. At InCabin, we’ll present a cockpit demonstrator that highlights the groundbreaking capabilities of this chip, setting new standards for automotive interior safety.
What makes the SC1260 radar sensor stand out from existing solutions?
The SC1260 is more than just a radar sensor – it’s a highly advanced solution with integrated antennas in a compact 6×9 mm package. It includes two transmitter and four receiver antennas, simplifying vehicle integration. The chip also features advanced signal processing that outputs point cloud data directly, reducing BOM costs by enabling the use of smaller CPUs and memory. With an average power consumption of just 0.7 milliwatts, it’s incredibly efficient.
Signal processing is a critical component. Can you explain the advantages of the SC1260’s signal processing capabilities?
Absolutely. The SC1260 features integrated blocks for advanced signal processing, including range FFT and a CFAR engine for filtering. It measures distance and angle, clusters data, and outputs point cloud data in XYZ coordinates. All of this is built into the hardware, eliminating the need for an external CPU or firmware, which makes it a highly efficient solution.
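To give readers a feel for that chain, here is a rough Python sketch of a range FFT followed by a simple cell-averaging CFAR detector. The guard, training, and threshold parameters are assumptions for illustration; the SC1260 implements this pipeline in hardware:

```python
import numpy as np

# Illustrative sketch of the processing chain described above: a range FFT
# over one radar chirp, then cell-averaging CFAR to pick out range bins
# that stand above the local noise floor.
def range_fft(chirp_samples):
    return np.abs(np.fft.rfft(chirp_samples))

def ca_cfar(magnitudes, guard=2, train=8, scale=4.0):
    detections = []
    n = len(magnitudes)
    for i in range(n):
        lo, hi = max(0, i - guard - train), min(n, i + guard + train + 1)
        window = np.r_[magnitudes[lo:max(0, i - guard)],
                       magnitudes[min(n, i + guard + 1):hi]]
        if window.size and magnitudes[i] > scale * window.mean():
            detections.append(i)   # bin exceeds the local-noise threshold
    return detections

t = np.arange(256)
chirp = 0.2 * np.random.randn(256) + np.cos(2 * np.pi * 0.15 * t)  # one target
spectrum = range_fft(chirp)
print(ca_cfar(spectrum))   # expected: detections around bin 0.15 * 256 ≈ 38
```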
What are the primary use cases for this radar sensor in automotive applications?
The SC1260 is designed for a variety of automotive interior sensing applications. One key use case is child presence detection, where the radar sensor can detect vital signs like breathing or heartbeat, even through materials like blankets. It’s also highly effective for seat occupancy detection, accurately identifying multiple passengers. Additionally, it’s perfect for security monitoring, such as intrusion detection or proximity alerts.
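As a hedged illustration of how breathing falls out of radar data: vital-sign detectors typically track the slow phase change of the occupied range bin from chirp to chirp, since millimetre-scale chest motion appears as a low-frequency phase oscillation. A toy version, with all rates and amplitudes assumed:

```python
import numpy as np

# Toy sketch: recover a breathing rate from the slow-time phase of a
# radar range bin. Signal parameters below are assumptions.
fs_slow = 20.0                       # assumed 20 chirps per second
t = np.arange(0, 30, 1 / fs_slow)    # 30 s observation window
breath_hz = 0.25                     # ground truth: 15 breaths/min
phase = 0.8 * np.sin(2 * np.pi * breath_hz * t) + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs_slow)
band = (freqs > 0.1) & (freqs < 0.6)          # plausible breathing band
est = freqs[band][np.argmax(spectrum[band])]
print(f"estimated breathing rate: {est * 60:.1f} breaths/min")   # ~15
```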
Are there any other potential applications for the SC1260 that you’d like to highlight?
Definitely. The SC1260 can be used in kick-sensors for tailgate operations, distinguishing between gestures like kicks or swipes. It’s also ideal for gesture control in HMI applications, allowing drivers to control functions with simple hand movements, reducing the need for physical buttons and enhancing the user experience.
You mentioned presenting a cockpit demonstrator at the show. What does the demonstrator consist of?
Our cockpit demonstrator immerses attendees in a real-world automotive experience. It features a replica of a classic car interior with both front and rear seats. Inside, we’ve installed our state-of-the-art radar sensor, connected to a front panel display that visualizes live radar data. It’s a dynamic way to showcase the future of in-cabin sensing.
Will the cockpit show how passengers can be seen in the car as well as other critical information?
Exactly! The radar sensor does more than just detect seat occupancy—it can identify passengers in real time and even detect vital signs like breathing. It’s designed to elevate vehicle safety and comfort, demonstrating how advanced radar technology can monitor passengers with precision and reliability.
Will attendees be able to experience the cockpit demonstrator and test it firsthand?
Absolutely! We invite everyone to sit in the cockpit and experience the technology firsthand. It’s an interactive demonstration where you can see the radar sensor in action. We’re excited to guide attendees through the possibilities and discuss how this technology can transform the future of automotive interiors.
How can our readers learn more?
You can visit our stand and explore our exciting products firsthand. Our experts will be on-site to answer any questions.
For those interested in learning more, Socionext encourages visits to their exhibition stand and engagement on their social media platforms:

Don’t miss key conversations at InCabin Europe this May. Get your pass here.

 
  • Like
  • Fire
  • Love
Reactions: 22 users

Tothemoon24

Top 20
IMG_9588.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 38 users

Frangipani

Top 20
Encouraging words by our CTO:

D5A2BACA-ABF1-4911-81CC-025F641FE559.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 65 users

Frangipani

Top 20


B26B2E31-9E7D-479A-851F-0A12CF79B74D.jpeg




7647F273-5232-439A-8B59-B5E5681A079B.jpeg


By the way, there is at least one confirmed connection as well as two potential dots between our CTO Tony Lewis / BrainChip and members of the NeuroMed 2024 Organizing Committee:

75181FBC-72BD-4794-AD65-D7DB929D01E2.jpeg


Ralph Etienne-Cummings, Professor of Electrical and Computer Engineering at Johns Hopkins University in Baltimore: “Yeah, Tony and I have published together quite a bit as well, so we have a long history together.”
(From the Brains & Machines podcast episode with Elisa Donati)



1B06F366-784D-4B52-959C-3901F3A485FF.jpeg

(…) The USC Brain-Body Dynamics Lab is led by Francisco Valero-Cuevas, Professor of Biomedical Engineering, Aerospace and Mechanical Engineering, Electrical and Computer Engineering, Computer Science, and Biokinesiology and Physical Therapy (try to fit all this onto one business card!), who has been fascinated with the biomechanics of the human hand for years (see the 2008 article below) and was already talking about utilising neuromorphic chips in robots three years ago, in a ‘research update’ video recorded on June 17, 2021:



“But at the same time we are building physical robots that have what are called ‘neuromorphic circuits’. These are computer chips that are emulating populations of spiking neurons that then feed motors and amplifiers and the like that actually produce manipulation and locomotion.” (from 2:56 min)


View attachment 65930


Given that a number of USC Viterbi School of Engineering faculty members are evidently favourably aware of BrainChip (see below) - plus our CTO happens to be a USC School of Engineering alumnus and possibly still has some ties to his alma mater - I wouldn’t be surprised if Valero Lab researchers were also experimenting with Akida.

View attachment 65856

View attachment 65858
 
  • Like
  • Love
  • Fire
Reactions: 49 users

Slade

Top 20
Great posts by @Frangipani today that I genuinely appreciate.
 
  • Like
Reactions: 27 users

IloveLamp

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Definitely been some interactions between T-Mobile staff and BrainChip on LinkedIn recently

Imo dyor

View attachment 69604 View attachment 69605

Looking good @IloveLamp!

bugsbunny-love.gif



Intel were helping Ericsson on this whole RAN thingamajig with Loihi 2 (see the LinkedIn post below the article, from approx. 4 months ago), but, as we all know, Loihi 2 is Intel's latest research chip and it isn't commercialised yet, so that leaves... umm... let me see... Oh, that's right, that leaves us... I hope! 🤞

IMO. DYOR.




Ericpm.png


T-Mobile Announces Technology Partnership with NVIDIA, Ericsson and Nokia to Advance the Future of Mobile Networking with AI at the Center​

September 18, 2024
New Cutting-Edge AI-RAN Innovation Center to Tap into Combination of 5G Advanced and AI to Revolutionize Customer Experiences and Unlock New Economic Opportunities
NVIDIA_Release-NR-500x250.jpg




SAN FRANCISCO, Sept. 18, 2024 — To further advance its 5G leadership position that is already years ahead of its closest competitor, T-Mobile (NASDAQ: TMUS) today announced a collaboration with NVIDIA, Ericsson and Nokia to design and drive the future of mobile networks with AI at the center, revolutionizing the capabilities of radio access networks (RAN) to serve customers in unprecedented ways. Leveraging T-Mobile’s 5G leadership, the NVIDIA AI Aerial platform and Ericsson’s and Nokia’s global leadership in telecommunications solutions, the consortium of companies, all founding members of the AI-RAN Alliance, are investing in an industry-first AI-RAN Innovation Center based in Bellevue, Washington, focused on bringing RAN and AI innovation closer together to deliver transformational network experiences for customers through the development of AI-RAN.
AI-RAN will dramatically improve customers’ real-world network experiences and meet the ever-growing demand for the increased speeds, reduced latency, and increased reliability needed for the latest gaming, video, social media and augmented reality applications that customers enjoy on their mobile and fixed wireless devices. AI-RAN will do this by leveraging billions of data points to devise algorithms that determine optimal network adjustments for maximum performance and to predict real-time capacity where customers need it.

AI will not only power RAN performance and automate operations but will supercharge mobile network infrastructure to simultaneously run third-party AI application workloads at the network edge. AI-RAN comes in conjunction with other 5G Advanced features being rapidly developed with T-Mobile’s partners. AI-RAN concepts will be built in an open and containerized manner like Open RAN, with virtualized RAN and Core components managed from a central cloud, but AI-RAN is a game-changing technology because it will enhance the current Open RAN architecture with the addition of the accelerated computing that GPUs can bring to the intense network processing workloads of the future. In other words, this partnership aims to show that AI-RAN will make the promises of Open RAN more viable, while also going beyond.
“Just like T-Mobile led in 5G, we intend to lead in the next wave of network technology, for the benefit of our customers,” said Mike Sievert, CEO of T-Mobile. “AI-RAN at T-Mobile will be all about unlocking the massive capacity and performance that customers increasingly demand from mobile networks. AI-RAN has tremendous potential to completely transform the future of mobile networks, but it will be difficult to get right. That’s why T-Mobile is jumping in now to help lead the way with our partners. This collaboration between T-Mobile, NVIDIA, Nokia and Ericsson will truly define what’s next in mobile networks in the 5G Advanced era and beyond, and drive real progress where it’s needed. This group of visionaries will work together at our new Bellevue AI-RAN Innovation Center, and the partnership will not only propel the mobile network industry forward, but also has the potential to eventually advance many others as well.”
“AI will reinvent the wireless communication network and industry — going beyond voice, data, and video to support a wide range of new applications like generative AI and robotics,” said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA AI Aerial is a platform that unifies communications, computing and AI. Working closely with the industry’s leaders, we will extend AI traffic to wireless networks and use AI to reinvent wireless communications.”
“Ericsson is excited to contribute to the 'Joint AI-RAN Innovation Center', which is set to drive standardization, industry alignment, and accelerate the adoption of AI-RAN technologies. This paves the way for potentially limitless innovations in network performance, reliability, and efficiency,” said Börje Ekholm, President and CEO of Ericsson. “As a founding member of the AI-RAN Alliance, we are not only committed to positioning the United States as a leader in the commercialization of AI-RAN solutions but also to exploring and harnessing future opportunities in multi-purpose cellular and AI-optimized networks.”
“AI is a game-changer for every industry, but particularly in telecoms, where it will revolutionize networks and enable a variety of new applications,” said Pekka Lundmark, President and CEO, Nokia. “Our U.S.-headquartered Nokia Bell Labs is leading our global AI research, so it is a natural fit to extend our partnership with T-Mobile on the development of their AI-RAN Innovation Center in Bellevue, Washington. We look forward to collaborating on new AI-RAN innovations to transform network security, performance and efficiency with the aim of yielding savings in network operations and increasing monetization opportunities for operators.”
A first-of-its-kind AI-RAN cloud-based multipurpose network will have the potential to support not only traditional telecommunications workloads (core network and radio access network: RAN) but also AI workloads (internal and external AI as a Service or AIaaS, a cloud-based paradigm that provides access to AI capabilities in T-Mobile’s network without the need for dedicated, in-house infrastructure). With increased capacity, energy efficiencies and improved resiliency, the same platform will carry voice, video, data, and also new generative AI applications, and have the ability to make contextual AI-powered decisions around network performance and traffic routing for different applications and circumstances. Customers will benefit from better contextual, predictive and frictionless experiences on their devices. AI-RAN will also create significant enterprise cost savings and revenue growth that could also be applied to a variety of other businesses and industries.
The new AI-RAN Innovation Center will further accelerate the mission of the AI-RAN Alliance, which was announced in February 2024 at GSMA Mobile World Congress in Barcelona with a mission to enhance mobile network efficiency, reduce power consumption and retrofit existing infrastructure to unlock new economic opportunities for telecommunications companies with AI, facilitated by 5G and setting the stage for global leadership on 6G. T-Mobile, NVIDIA, Ericsson and Nokia were all founding members, together with other technology and industry leaders.
More on NVIDIA’s AI Aerial platform here:
About T-Mobile
T-Mobile US, Inc. (NASDAQ: TMUS) is America’s supercharged Un-carrier, delivering a transformative nationwide 5G network that offers reliable connectivity for all. T-Mobile’s customers benefit from its unmatched combination of value and quality, unwavering obsession with offering them the best possible service experience and undisputable drive for disruption that creates competition and innovation in wireless and beyond. Based in Bellevue, Washington, T-Mobile provides services through its subsidiaries and operates its flagship brands, T-Mobile, Metro by T-Mobile and Mint Mobile. For more information please visit: https://www.t-mobile.com




Screenshot 2024-09-21 at 3.35.56 pm.png
 
Last edited:
  • Like
  • Thinking
  • Wow
Reactions: 28 users

GStocks123

Regular
  • Like
  • Fire
  • Love
Reactions: 27 users

JB49

Regular
Not sure if shared yet...
1726907937973.png
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Quantum Corporation as opposed to Quantum Ventura.

1.)
Screenshot 2024-09-21 at 6.58.24 pm.png







2)

Screenshot 2024-09-21 at 6.57.02 pm.png





3)

Screenshot 2024-09-21 at 6.56.22 pm.png


 
  • Like
  • Fire
  • Love
Reactions: 45 users

Justchilln

Regular
Quantum Corporation as opposed to Quantum Ventura.

1.)
View attachment 69622






2)

View attachment 69621




3)

View attachment 69620

That was for one of BrainChip’s earliest failures, “Studio”.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
That was for one of BrainChip’s earliest failures, “Studio”.

Then why is BrainChip listed on their website CURRENTLY as a partner?
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Diogenese

Top 20
Quantum Corporation as opposed to Quantum Ventura.

1.)
View attachment 69622






2)

View attachment 69621




3)

View attachment 69620

Christmas Past or Marley's ghost?
 
  • Like
  • Thinking
Reactions: 4 users
The AI Innovator

Why the Future of AI Depends on Compound Semiconductors​

BY RODNEY PELZEL ON SEPTEMBER 18, 2024
AI’s on-screen iterations, including the use of generative AI in chatbots, have been grabbing the majority of the world’s attention since the debut of ChatGPT in late 2022. For its next act, AI will move out of the cloud to devices at the edge − with the future of AI being off-screen and in the physical world.
This move will supercharge robotics and IoT, and the technology driving it all will be compound semiconductors, including the material gallium nitride (GaN). Compound semiconductors, or computer chips made from two or more elements as opposed to single-element silicon, are becoming increasingly important in areas where higher performance is required.
AI’s future at the edge cannot happen without compound semiconductor materials that outperform silicon in three key ways. First, they operate very efficiently at RF frequencies. Second, they are effective emitters and detectors of light over a broad spectrum. Third, they handle power conversion very efficiently, particularly in harsh environments.

Increased functionality of devices at the edge

The transition of AI off-screen into the realm of IoT means that the devices at the edge will need to get more capable. For example, they will be required to ‘sense’ their environment in ultra-high resolution − meaning that 3D recognition and LiDAR systems like those employed in mobile handsets will become essential for all sorts of devices where this capability is not needed today.
This is a realm pioneered by compound semiconductor lasers and detectors that employ gallium arsenide (GaAs) and indium phosphide (InP), and improved versions of these devices will be necessary for AI progression. Furthermore, it is not only physically sensing things in three dimensions that will be important; wearable health monitors use compound semiconductor materials for biological sensing, an area poised to be revolutionized by off-screen AI.
Finally, the next version of AI will require ultra-fast, ultra-reliable, low latency connectivity even with things at the edge, and the excellent RF properties of compound semiconductors, particularly GaAs and GaN, will be exploited to enable this capability.

Increasing efficiency, reducing power demands

When it comes to AI, much of the world’s attention is currently focused on the energy needs of data centers, and rightly so. If all data centers around the world were converted from silicon to GaN by the end of the decade, it is estimated that energy loss would be reduced by 30% to 40%. To put that into perspective, the conversion would save more than 100 terawatt hours and avoid 125 megatons of carbon dioxide emissions.
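A quick back-of-envelope check of those numbers (pure arithmetic on the article's own figures, not an independent estimate):

```python
# The 30-40% reduction and 100 TWh savings are quoted in the article above;
# everything derived here is simple arithmetic, not new data.
savings_twh = 100.0
reduction_low, reduction_high = 0.30, 0.40

# If 100 TWh corresponds to a 30-40% cut, the implied total conversion
# loss in today's silicon-based power stages is savings / reduction:
implied_loss_low = savings_twh / reduction_high    # 250 TWh
implied_loss_high = savings_twh / reduction_low    # ~333 TWh
print(f"implied baseline conversion loss: "
      f"{implied_loss_low:.0f}-{implied_loss_high:.0f} TWh")
```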
While much of the compute is expected to move to the edge for the next iteration of AI, the energy requirements will remain high; energy usage will scale regardless of whether the compute is occurring in the cloud or at the edge. This makes compound semiconductor materials a necessity for AI to progress.
In addition, the compound semiconductor material GaN is very robust, making it ideal for edge devices such as robotics that operate in harsh environments with elevated temperatures, higher humidities, and the like.
It is easy to see how this will benefit industrial robotics at the edge, but GaN’s capability could take things further, even as far as outer space. MIT researchers recently demonstrated that GaN was able to tolerate exposure to more than 900°F for 48 hours, making it a promising candidate for space exploration.

What’s next for AI

While the conversation around AI has certainly intensified over the last year and a half, one important thing to remember about this technology is that we are really at the beginning. Right now, generative AI applications are operating on massive amounts of data, pulling information from LLMs and incorporating clever algorithms to deliver a response.
In this next ‘off-screen’ phase of AI, different bottlenecks for the technology will be encountered. The limitations will move to the devices at the edge. The sense and connect demands on edge devices will become extreme, as the next version of AI will require devices to operate on real time data in ways never seen before.
This, in turn, will make these devices more power hungry. To address these new requirements, compound semiconductor materials will be required, and materials such as GaN will become indispensable.

Author​

  • Rodney Pelzel

    Rodney Pelzel
    Rodney Pelzel is the CTO of IQE plc, a global supplier of advanced wafer products and material solutions to the semiconductor industry. He has deep expertise in semiconductor materials engineering and the epitaxial growth of compound semiconductors. His work has been widely published and he is the co-inventor of over 30 patents.
 
  • Like
  • Fire
Reactions: 7 users
Encouraging words by our CTO:

View attachment 69586
I remember reading somewhere (thought it might be in the original CTO announcement, but it's not) that Tony has a strong commercialisation focus (especially for a CTO; CTOs are known to like to "tinker").

Whether it was actually said or is just a figment of my imagination, his statement clearly shows that he is definitely commercialisation-focused!
 
  • Like
  • Fire
Reactions: 14 users

rgupta

Regular
I remember reading somewhere (thought it might be in the original CTO announcement, but it's not) that Tony has a strong commercialisation focus (especially for a CTO, who are known to like to "tinker")..

Whether it was actually said or it's just a figment of my imagination, it's clearly evidenced, by his statement, that he definitely is commercialisation focused!
Not sure whether he is commercialisation-focused or not, but he is definitely outspoken and wants to do something rather than defend himself. To me he is an aggressor and will not shy away from challenging himself and his competitors.
He is definitely on a mission.
 
  • Like
Reactions: 10 users

Tezza

Regular
2024 is quickly coming to an end. It's probably time for Sean to come out and say 2025 is our year! 🙃
 
  • Haha
  • Like
  • Sad
Reactions: 18 users