Smoothsailing
Regular
Will do. For heaven's sake, Smoothsailing, do what I did months ago and put 7fur7 on ignore.
The BRN community right now:
Unsolicitation alert ... but if I can't ask the BRN fam for help, who can I ask?
I have launched a new product today on a special website dedicated to tech product launches. It may even interest you personally (feel free to visit the website and sign up to the free beta if it does!), but if all of you could wander on over to https://www.producthunt.com/posts/hellocaller-ai and vote for my launch, I would be most appreciative.
Sooo...
UPDATED 17:48 EDT / MAY 25 2024
How Nvidia, TSMC, Broadcom and Qualcomm will lead a trillion-dollar silicon boom - SiliconANGLE
EXTRACT
Has anyone else stumbled upon this 3-year EU-funded research project called Nimble AI, kick-started in November 2022, that “aims to unlock the potential of neuromorphic vision”? Couldn’t find anything here on TSE with the help of the search function except a reference to US-based company Nimble Robotics, but they seem totally unrelated.
The 19 project partners include imec in Leuven (Belgium) as well as Paris-based GrAI Matter Labs, highly likely Brainchip’s most serious competitor, according to other posters.
An article about Nimble AI’s ambitious project was published today:
What do you make of the consortium’s claim that their 3D neuromorphic vision chip will have more than an edge over Akida once it is ready to hit the market?
NimbleAI: Ultra-Energy Efficient and Secure Neuromorphic Sensing and Processing at the Endpoint
“Today only very light AI processing tasks are executed in ubiquitous IoT endpoint devices, where sensor data are generated and access to energy is usually constrained. However, this approach is not scalable and results in high penalties in terms of security, privacy, cost, energy consumption, and latency as data need to travel from endpoint devices to remote processing systems such as data centres. Inefficiencies are especially evident in energy consumption.
To keep pace with the exponentially growing amount of data (e.g. video) and allow more advanced, accurate, safe and timely interactions with the surrounding environment, next-generation endpoint devices will need to run AI algorithms (e.g. computer vision) and other compute-intense tasks with very low latency (i.e. units of ms or less) and energy envelopes (i.e. tens of mW or less).
NimbleAI will harness the latest advances in microelectronics and integrated circuit technology to create an integral neuromorphic sensing-processing solution to efficiently run accurate and diverse computer vision algorithms in resource- and area-constrained chips destined to endpoint devices. Biology will be a major source of inspiration in NimbleAI, especially with a focus to reproduce adaptivity and experience-induced plasticity that allow biological structures to continuously become more efficient in processing dynamic visual stimuli.
NimbleAI is expected to allow significant improvements compared to state-of-the-art (e.g. commercially available neuromorphic chips), and at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs processing frame-based video). NimbleAI will also take a holistic approach for ensuring safety and security at different architecture levels, including silicon level.”
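To put the quoted latency and power targets in perspective, here is a back-of-the-envelope energy-per-inference estimate. The 20 mW and 5 ms figures below are my own illustrative assumptions within the quoted "tens of mW" / "units of ms" envelopes, not numbers from the project:

```python
# Back-of-the-envelope energy per inference: E = P * t.
# 20 mW and 5 ms are illustrative assumptions consistent with the
# "tens of mW or less" and "units of ms or less" targets quoted above.

def energy_per_inference_uj(power_mw: float, latency_ms: float) -> float:
    """Energy per inference in microjoules, given power in mW and latency in ms."""
    watts = power_mw / 1e3      # mW -> W
    seconds = latency_ms / 1e3  # ms -> s
    return watts * seconds * 1e6  # J -> uJ

print(energy_per_inference_uj(20, 5))  # about 100 uJ per inference
```

Halving either the power envelope or the latency scales the per-inference energy linearly, which is why the two targets are usually quoted together.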
What I find a little odd, though, is that this claim re expected superiority over “state-of-the-art (e.g. commercially available neuromorphic chips)“ doesn’t get any mention on the official Nimble AI website (https://www.nimbleai.eu/), in contrast to the expectation of “at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs Processing Frame-based Video).”
Found a Brainchip dot and 1500 reference! There are a lot of leads to delve into with this one.
https://www.linkedin.com/feed/update/urn:li:activity:7153439471911190528/
A few months back, I shared an article featuring Brainchip in a project called Nimble AI. This isn't a small PhD project; Nimble AI has 19 project partners across Europe with €10 million funding from both the EU and UK governments.
While keeping tabs on Nimble, I haven't found any further mention of Brainchip or any more media releases. However, I did notice the project coordinator of Nimble AI liking a Brainchip-related post on LinkedIn.
Have a good dig into their website. It's interesting stuff. https://www.nimbleai.eu/
For the tech heads, there's a scientific paper discussing how SNNs integrate and operate within the chip stack. While it doesn't explicitly mention Brainchip, it predates the article referencing Brainchip, suggesting that Brainchip might have been incorporated later on. DATE23_nimbleai.pdf
Take a look at the project partners and their respective roles. There are some heavyweight companies and contributors involved, hopefully providing exposure to Brainchip. https://www.nimbleai.eu/consortium/
Also worth noting: Xabier Iturbe has taken on a second role, as coordinator of the Spanish Association of the Semiconductor Industry's (AESEMI) newly formed working group for neuromorphic tech.
The AESEMI working group on Neuromorphic Technologies and AI is established
On 1 February 2024, the AESEMI working group on Neuromorphic Technologies and AI began its activities, coordinated by the Basque technology centre IKERLAN. (aesemi.org)
Hi FJ-215,
the article you linked to refers to a different Fraunhofer Institute, Fraunhofer IPMS in Dresden, whereas the Fraunhofer Institute shown in the video is Fraunhofer HHI (Heinrich-Hertz-Institut) in Berlin. (There are 76 Fraunhofer Institutes in total.)
At the very end of the video, there is a reference to a research paper that I posted about a few weeks ago:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417987
And thanks to the video we now know what neuromorphic hardware the researchers used, even though they didn’t reveal it in their paper!
SynSense now have a chip with learning capabilities.
But are GPUs "AI accelerators"?
Anyway, what would TSMC know about AI accelerators?
We are not alone:
https://www.synsense.ai/synsense-ad...ic-audio-processing-with-xyloaudio-3-tapeout/
SynSense advances ultra-low-power neuromorphic audio processing with Xylo™Audio 3 tapeout
2023-07-07
By SynSense
SynSense, the world’s leading commercial supplier of ultra-low-power neuromorphic hardware and application solutions, has completed the tapeout of Xylo™Audio 3, their advanced ultra-low-power audio processing platform built on the neuromorphic inference core Xylo™. Xylo™Audio 3 is based on the TSMC 40nm CMOS LOGIC Low Power process, delivering real-time, ultra-low-power audio signal processing capabilities while reducing chip costs. This tapeout marks a milestone for the commercialization of SynSense’s neuromorphic audio processing technology.
The reason I asked the earlier question is mainly because, when it comes to public or workplace safety, any delays are unacceptable from an OH&S perspective, as they could lead to injuries. Near-cloud capabilities will not be good enough.
Will it block her indoors calls?
her what now?
| Report Metrics | Details |
| --- | --- |
| Historic Data | 2019 - 2022 |
| CAGR | The global market for AI chips at the edge will reach US$22.0 billion by 2034. This represents a CAGR of 7.63% over the forecast period (2024 to 2034). |
| Forecast Period | 2024 - 2034 |
| Forecast Units | USD$ Billions |
| Regions Covered | Worldwide, All Asia-Pacific, North America (USA + Canada), Europe |
| Segments Covered | Geography (North America, APAC, Europe, Rest of World), architecture (FPGA, CPU, GPU, DSP, ASIC), packaging (SoC, MCM, 2.5D+), end-user (consumer, enterprise), application (computer vision, language, predictive), and industry vertical (consumer electronics, industrial, automotive, healthcare, retail, media & advertising, other). |
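As a quick sanity check on the table, the CAGR relation (end value = base × (1 + r)^n) can be inverted to back out the implied 2024 base-year market size, which the extract doesn't state. A minimal sketch; the 2024 base below is my own derived number, not IDTechEx's:

```python
# Invert the CAGR formula to estimate the implied 2024 market size:
# US$22.0bn in 2034 at 7.63% CAGR over 2024-2034.
end_value_bn = 22.0   # forecast 2034 revenue, US$ billions (from the table)
cagr = 0.0763         # 7.63% over the forecast period (from the table)
years = 10            # 2024 -> 2034

implied_2024_base_bn = end_value_bn / (1 + cagr) ** years
print(round(implied_2024_base_bn, 2))  # roughly 10.5 (US$ billions)
```

In other words, the headline figures imply the market roughly doubles over the forecast period from a base of around US$10.5 billion.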
1. EXECUTIVE SUMMARY
1.1. Edge AI
1.2. IDTechEx definition of Edge AI
1.3. Edge vs Cloud characteristics
1.4. Advantages and disadvantages of edge AI
1.5. Edge devices that employ AI chips
1.6. The edge AI chip landscape - overview
1.7. The edge AI chip landscape - key hardware players
1.8. The edge AI chip landscape - hardware start-ups
1.9. The AI chip landscape - other than hardware
1.10. Edge AI landscape - geographic split: China
1.11. Edge AI landscape - geographic split: North America
1.12. Edge AI landscape - geographic split: Rest of World
1.13. Inference at the edge
1.14. Deep learning: How an AI algorithm is implemented
1.15. AI chip capabilities
2. FORECASTS
2.1. Total revenue forecast
2.2. Methodology and analysis
2.3. Estimating annual revenue from smartphone chipsets
2.4. Smartphone chipset costs
2.5. Costs garnered by AI in smartphone chipsets
2.6. Revenue forecast by geography
2.7. Percentage shares of market by geography
2.8. Chip types: architecture
2.9. Forecast by chip type
2.10. Semiconductor packaging timeline
2.11. From 1D to 3D semiconductor packaging
2.12. 2D packaging - System-on-Chip
2.13. 2D packaging - Multi-Chip Modules
2.14. 2.5D and 3D packaging - System-in-Package
2.15. 3D packaging - System-on-Package
2.16. Forecast by packaging
2.17. Consumer vs Enterprise forecast
2.18. Forecast by application
2.19. Forecast by industry vertical
2.20. Forecast by industry vertical - full
3. TECHNOLOGY: FROM SEMICONDUCTOR WAFERS TO AI CHIPS
3.1. Wafer and chip manufacture processes
3.1.1. Raw material to wafer: process flow
3.1.2. Wafer to chip: process flow
3.1.3. Wafer to chip: process flow
3.1.4. The initial deposition stage
3.1.5. Thermal oxidation
3.1.6. Oxidation by vapor deposition
3.1.7. Photoresist coating
3.1.8. How a photoresist coating is applied
3.1.9. Lithography
3.1.10. Lithography: DUV
3.1.11. Lithography: Enabling higher resolution
3.1.12. Lithography: EUV
3.1.13. Etching
3.1.14. Deposition and ion implantation
3.1.15. Deposition of thin films
3.1.16. Silicon Vapor Phase Epitaxy
3.1.17. Atmospheric Pressure CVD
3.1.18. Low Pressure CVD and Plasma-Enhanced CVD
3.1.19. Atomic Layer Deposition
3.1.20. Molecular Beam Epitaxy
3.1.21. Evaporation and Sputtering
3.1.22. Ion Implantation: Generation
3.1.23. Ion Implantation: Penetration
3.1.24. Metallization
3.1.25. Wafer: The final form
3.1.26. Semiconductor supply chain players
3.2. Transistor technology
3.2.1. How transistors operate: p-n junctions
3.2.2. How transistors operate: electron shells
3.2.3. How transistors operate: valence electrons
3.2.4. How transistors work: back to p-n junctions
3.2.5. How transistors work: connecting a battery
3.2.6. How transistors work: PNP operation
3.2.7. How transistors work: PNP
3.2.8. How transistors switch
3.2.9. From p-n junctions to FETs
3.2.10. How FETs work
3.2.11. Moore's law
3.2.12. Gate length reductions
3.2.13. FinFET
3.2.14. GAAFET, MBCFET, RibbonFET
3.2.15. Process nodes
3.2.16. Device architecture roadmap
3.2.17. Evolution of transistor device architectures
3.2.18. Carbon nanotubes for transistors
3.2.19. CNTFET designs
3.2.20. Semiconductor foundry node roadmap
3.2.21. Roadmap for advanced nodes
4. EDGE INFERENCE AND KEY APPLICATIONS
4.1. Inference at the edge and benchmarking
4.1.1. Edge AI
4.1.2. Edge vs Cloud characteristics
4.1.3. Advantages and disadvantages of edge AI
4.1.4. Edge devices that employ AI chips
4.1.5. AI in smartphones and tablets
4.1.6. Recent history: Siri
4.1.7. Text-to-speech
4.1.8. AI in personal computers
4.1.9. AI chip basics
4.1.10. Parallel computing
4.1.11. Low-precision computing
4.1.12. AI in speakers
4.1.13. AI in smart appliances
4.1.14. AI in automotive vehicles
4.1.15. AI in sensors and structural health monitoring
4.1.16. AI in security cameras
4.1.17. AI in robotics
4.1.18. AI in wearables and hearables
4.1.19. The edge AI chip landscape
4.1.20. Inference at the edge
4.1.21. Deep learning: How an AI algorithm is implemented
4.1.22. AI chip capabilities
4.1.23. AI chip capabilities
4.1.24. MLPerf - Inference
4.1.25. MLPerf Edge
4.1.26. Inference: Edge, Nvidia vs Nvidia
4.1.27. MLPerf Mobile - Qualcomm HTP
4.1.28. The battle for domination: Qualcomm vs MediaTek
4.1.29. MLPerf Tiny
4.2. AI in smartphones
4.2.1. Mobile device competitive landscape
4.2.2. Samsung and Oppo chipsets
4.2.3. US restrictions on China
4.2.4. Smartphone chipset landscape 2022 - Present
4.2.5. MediaTek and Qualcomm 2020 - Present
4.2.6. AI processing in smartphones: 2020 - Present
4.2.7. Node concentrations 2020 - Present
4.2.8. Chipset concentrations 2020 - Present
4.2.9. Chipset designer concentrations 2020 - Present
4.2.10. Node concentrations for each chipset designer
4.2.11. AI-capable versus non AI-capable smartphones
4.2.12. Chipset volume: 2021 and 2022
4.3. AI in tablets
4.3.1. Tablet competitive landscape
4.3.2. Tablet chipset landscape 2020 - Present
4.3.3. AI processing in tablets: 2020 - Present
4.3.4. Node concentrations 2020 - Present
4.3.5. Chipset designer concentrations 2021 - Present
4.3.6. Node concentrations for each chipset designer
4.3.7. AI-capable versus non AI-capable tablets
4.4. AI in automotive
4.4.1. AI in automobiles: Competitive landscape
4.4.2. Levels of driving automation
4.4.3. Computational efficiencies
4.4.4. AI chips for automotive vehicles
4.4.5. Performance and node trends
4.4.6. Rising power consumption
5. SUPPLY CHAIN PLAYERS
5.1. Smartphone chipset case studies
5.1.1. MediaTek: Dimensity and APU
5.1.2. Qualcomm: MLPerf results - Inference Mobile and Inference Tiny
5.1.3. Qualcomm: Mobile AI
5.1.4. Apple: Neural Engine
5.1.5. Apple: The ANE's capabilities and shortcomings
5.1.6. Google: Pixel Neural Core and Pixel Tensor
5.1.7. Google: Edge TPU
5.1.8. Samsung: Exynos
5.1.9. Huawei: Kirin chipsets
5.1.10. Unisoc: T618 and T710
5.2. Automotive case studies
5.2.1. Nvidia: DRIVE AGX Orin and Thor
5.2.2. Qualcomm: Snapdragon Ride Flex
5.2.3. Ambarella: CV3-AD685 for automotive applications
5.2.4. Ambarella: CVflow architecture
5.2.5. Hailo
5.2.6. Blaize
5.2.7. Tesla: FSD
5.2.8. Horizon Robotics: Journey 5
5.2.9. Horizon Robotics: Journey 5 Architecture
5.2.10. Renesas: R-Car 4VH
5.2.11. Mobileye
5.2.12. Mobileye: EyeQ Ultra
5.2.13. Texas Instruments: TDA4VM
5.3. Embedded device case studies
5.3.1. Nvidia: Jetson AGX Orin
5.3.2. NXP Semiconductors: Introduction
5.3.3. NXP Semiconductors: MCX N
5.3.4. NXP Semiconductors: i.MX 95 and NPU
5.3.5. Intel: AI hardware portfolio
5.3.6. Intel: Core
5.3.7. Perceive
5.3.8. Perceive: Ergo 2 architecture
5.3.9. GreenWaves Technologies
5.3.10. GreenWaves Technologies: GAP9 architecture
5.3.11. AMD Xilinx: ACAP
5.3.12. AMD: Versal AI
5.3.13. NationalChip: GX series
5.3.14. NationalChip: GX8002 and gxNPU
5.3.15. Efinix: Quantum architecture
5.3.16. Efinix: Titanium and Trion FPGAs
6. APPENDICES
6.1. List of smartphones surveyed
6.1.1. Appendix: List of smartphones surveyed - Apple and Asus
6.1.2. Appendix: List of smartphones surveyed - Google and Honor
6.1.3. Appendix: List of smartphones surveyed - Huawei, HTC and Motorola
6.1.4. Appendix: List of smartphones surveyed - Nokia, OnePlus, Oppo
6.1.5. Appendix: List of smartphones surveyed - realme
6.1.6. Appendix: List of smartphones surveyed - Samsung and Sony
6.1.7. Appendix: List of smartphones surveyed - Tecno Mobile
6.1.8. Appendix: List of smartphones surveyed - Xiaomi
6.1.9. Appendix: List of smartphones surveyed - Vivo and ZTE
6.2. List of tablets surveyed
6.2.1. Appendix: List of tablets surveyed - Acer, Amazon and Apple
6.2.2. Appendix: List of tablets surveyed - Barnes & Noble, Google, Huawei, Lenovo
6.2.3. Appendix: List of tablets surveyed - Microsoft, OnePlus, Samsung, Xiaomi
US$11,000 for a full report? Jesus Christ… better invest it into BRN. What the…
Here's a sample of the report "AI Chips for Edge Applications 2024-2034". The full report costs about $11,000. What a bargain! Who wants to "chip" in for a copy?
BTW, we are listed in the Hardware Start-Up and New Players diagram.
Annual revenue generated by AI Chips for edge devices is set to exceed US$22 billion by 2034.
AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge
Technology analyses and market forecasts for the global sale of AI chips for edge applications by geography, architecture, packaging, end-user, application, and industry vertical.
The global AI chips market for edge devices will grow to US$22.0 billion by 2034, with the three largest industry verticals at that time being Consumer Electronics, Industrial, and Automotive. Artificial Intelligence (AI) is already displaying significant transformative potential across a number of different applications, from fraud detection in high-frequency trading to the use of generative AI (such as ChatGPT) as a significant time-saver for the preparation of written documentation, as well as a creative prompt. While the use of semiconductor chips with neural network architectures (architectures especially well-equipped to handle machine learning workloads, an integral facet of functioning AI) is prevalent within data centers, it is at the edge where significant opportunity for adoption of AI lies. The benefits to end-users of providing a greater array of functionalities to edge devices, as well as - in certain applications - being able to fully outsource human-hours to intelligent systems, are significant. AI has already found its way into the flagship smartphones of the world's leading designers, and is set to be rolled out across a number of different devices, from automotive vehicles to smart appliances in the home.
Following a period of dedicated research by expert analysts, IDTechEx has published a report that offers unique insights into the global edge AI chip technology landscape and corresponding markets. The report contains a comprehensive analysis of 23 players involved with AI chip design for edge devices, as well as a detailed assessment of technology innovations and market dynamics. The market analysis and forecasts focus on total revenue (where this corresponds to the revenue that can be attributed to the specific neural network architecture included in sold chips/chipsets that is responsible for handling machine learning workloads), with granular forecasts that are segmented by geography (APAC, Europe, North America, and Rest of World), type of buyer (consumer and enterprise), chip architecture (GPU, CPU, ASIC, DSP, and FPGA), packaging type (System-on-Chip, Multi-Chip Module, and 2.5D+), application (language, computer vision, and predictive), and industry vertical (industrial, healthcare, automotive, retail, media & advertising, consumer electronics, and others).
The report presents an unbiased analysis of primary data gathered via our interviews with key players, and it builds on our expertise in the semiconductor, computing and electronics sectors.
This research delivers valuable insights for:
- Companies that require AI-capable hardware.
- Companies that design/manufacture AI chips and/or AI-capable embedded systems.
- Companies that supply components used in AI-capable embedded systems.
- Companies that invest in AI and/or semiconductor design, manufacture, and packaging.
- Companies that develop devices that may require AI functionality.
Computing can be segmented with regards to the different environments, designated by where computation takes place within the network (i.e. within the cloud or at the edge of the network). This report covers the consumer edge and enterprise edge environments. Source: IDTechEx
Artificial Intelligence at the Edge
The differentiation between edge and cloud computing environments is not a trivial one, as each environment has its own requirements and capabilities. An edge computing environment is one in which computations are performed on a device - usually the same device on which the data is created - at the edge of the network (and, therefore, close to the user). This contrasts with cloud or data center computing, which sits at the center of the network. Such edge devices include cars, cameras, laptops, mobile phones, autonomous vehicles, etc. In all of these instances, computation is carried out close to the user, at the edge of the network where the data is located. Given this definition of edge computing, edge AI is the deployment of AI applications at the edge of the network, in the types of devices listed above. The benefits of running AI applications on edge devices include not having to send data back and forth between the cloud and the edge device to carry out the computation; as such, edge devices running AI algorithms can make decisions quickly without needing a connection to the internet or the cloud. Given that many edge devices run on a power cell, AI chips used in such devices need lower power consumption than those in data centers in order to run effectively. This typically results in simpler algorithms being deployed, which don't require as much power.
Edge devices can be split into two categories depending on who they are intended for: consumer devices are sold directly to end-users and so are developed with end-user requirements in mind, while enterprise devices are purchased by businesses or institutions, which may have different requirements to the end-user. Both types of edge devices are considered in the report.
The consumer electronics, industrial, and automotive industry verticals are expected to generate the most revenue for AI chips at the edge by 2034. Source: IDTechEx
AI: A crucial technology for an Internet of Things
AI's capabilities in natural language processing (understanding of textual data, not just from a linguistic perspective but also a contextual one), speech recognition (being able to decipher a spoken language and convert it to text in the same language, or convert to another language), recommendation (being able to send personalized adverts/suggestions to consumers based on their interactions with service items), reinforcement learning (being able to make predictions based on observations/exploration, such as is used when training agents to play a game), object detection, and image classification (being able to distinguish objects from an environment and decide what those objects are) mean that AI can be applied to a number of different devices across industry verticals, thoroughly transforming the ways in which human users interact with those devices. This ranges from additional functionality that enhances user experience (such as in smartphones, smart televisions, personal computers, and tablets) to functionality that is inherently crucial to the technology (such as for autonomous vehicles and industrial robots, which would simply not be able to function in the desired manner without AI).
The Smart Home in particular, which primarily comprises consumer electronics products, is a growing avenue for AI, given that artificial intelligence (allowing for automation and hands-free access) and Wi-Fi connectivity are two key technologies for realizing an Internet of Things (IoT), where appliances can communicate directly with one another. Smart televisions, mirrors, virtual reality headsets, sensors, kitchen appliances, cleaning appliances, and safety systems are all devices that can be brought into a state of interconnectivity through the deployment of artificial intelligence and Wi-Fi, where AI allows for hands-free access and voice command over smart home devices. The opportunity afforded by bringing AI into the home is reflected in the growth of the consumer electronics vertical over the forecast period, with it being the industry vertical that generates the most revenue for edge AI chips in 2034.
The Edge AI chip landscape. Source: IDTechEx
The growth of AI at the edge
While the forecast presented in this report does predict substantial growth of AI at the edge over the next ten years - with global revenue in excess of US$22 billion by 2034 - this growth is anything but steady. This is due both to the saturation of certain markets that have already employed AI architectures in their incumbent chipsets, and to the stop-start nature of markets where rigorous testing is necessary prior to high-volume rollout. For example, the smartphone market has already begun to saturate; premiumization of smartphones continues (the percentage share of total smartphones sold that are premium is increasing year-on-year), and AI revenue grows as more premium smartphones are sold, given that these smartphones incorporate AI coprocessing in their chipsets, but this too is expected to begin to saturate over the next ten years.
In contrast to this, two notable jumps in revenue on the forecast presented in the report are from 2024 to 2025, and from 2026 to 2027. The first of these jumps can be largely attributed to the most cutting-edge ADAS (Advanced Driver-Assistance Systems) finding their way into car manufacturers' 2025 production lines. The second jump is due in part to increased adoption of ADAS, as well as to the relative maturation of start-ups presently targeting embedded devices, especially for smart home appliances. These applications are discussed in greater detail in the report, with a particular focus on the smartphone and automotive markets.
Smartphone price as compared to the process node that incumbent chipsets have been manufactured on. This plot was created from a survey - carried out specifically for this report - of 196 smartphones released since 2020, 91 of which incorporate neural network architectures to allow for AI acceleration. Source: IDTechEx
Market developments and roadmaps
IDTechEx's model of the edge AI chips market considers architectural trends, developments in packaging, the dispersion/concentration of funding and investments, historical financial data, individual industry vertical market saturation, and geographically-localized ecosystems to give an accurate representation of the evolving market value over the next ten years.
Our report answers important questions such as:
- Which industry verticals will AI chips for edge devices be used most prominently in?
- What opportunities are there for growth within the edge computing environments?
- How has the adoption of AI within more mature markets been received, and what are the obstacles to adoption in more emergent applications?
- How will each AI chip application and industry vertical grow in the short and long-term?
- What are the trends associated with the design and manufacture of chips that incorporate neural network architectures?
Summary
This report provides critical market intelligence concerning AI hardware at the edge, particularly chips used for accelerating machine learning workloads. This includes:
Market forecasts and analysis
- Market forecasts from 2024-2034, segmented in six different ways: by geography, architecture, packaging, end-user, application and industry vertical.
- Analysis of market forecasts, including assumptions, methodologies, limitations, and explanations for the characteristics of each forecast.
A review of the technology behind AI chips
- History and context for AI chip design and manufacture.
- Overview of different architectures.
- General capabilities of AI chips.
- Review of semiconductor manufacture processes, from raw material to wafer to chip.
- Review of the physics behind transistor technology.
- Review of transistor technology development, and industry/company roadmaps in this area.
- Analysis of the benchmarking used in the industry for AI chips.
Surveys and analysis of key edge AI applications
- Analysis of the chipsets included in almost 200 smartphones released since 2020, along with pricing estimations and key trends.
- Analysis of the chipsets included in almost 50 tablets released since 2020, along with pricing estimations and key trends.
- Performance comparisons for automotive chipsets, along with key trends with regard to performance, power consumption, and efficiency.
Full market characterization for each major edge AI chip product
- Review of the edge AI chip landscape, including key players across edge applications.
- Profiles of 23 of the most prominent companies designing AI chips for edge applications today, with a focus on their latest and in-development chip technologies.
- Reviews of promising start-up companies developing AI chips for edge applications.
Analyst access from IDTechEx
All report purchases include up to 30 minutes telephone time with an expert analyst who will help you link key findings in the report to the business issues you're addressing. This needs to be used within three months of purchasing the report.
Table of Contents
1. EXECUTIVE SUMMARY
1.1. Edge AI
1.2. IDTechEx definition of Edge AI
1.3. Edge vs Cloud characteristics
1.4. Advantages and disadvantages of edge AI
1.5. Edge devices that employ AI chips
1.6. The edge AI chip landscape - overview
1.7. The edge AI chip landscape - key hardware players
1.8. The edge AI chip landscape - hardware start-ups
1.9. The AI chip landscape - other than hardware
1.10. Edge AI landscape - geographic split: China
1.11. Edge AI landscape - geographic split: North America
1.12. Edge AI landscape - geographic split: Rest of World
1.13. Inference at the edge
1.14. Deep learning: How an AI algorithm is implemented
1.15. AI chip capabilities
2. FORECASTS
2.1. Total revenue forecast
2.2. Methodology and analysis
2.3. Estimating annual revenue from smartphone chipsets
2.4. Smartphone chipset costs
2.5. Costs garnered by AI in smartphone chipsets
2.6. Revenue forecast by geography
2.7. Percentage shares of market by geography
2.8. Chip types: architecture
2.9. Forecast by chip type
2.10. Semiconductor packaging timeline
2.11. From 1D to 3D semiconductor packaging
2.12. 2D packaging - System-on-Chip
2.13. 2D packaging - Multi-Chip Modules
2.14. 2.5D and 3D packaging - System-in-Package
2.15. 3D packaging - System-on-Package
2.16. Forecast by packaging
2.17. Consumer vs Enterprise forecast
2.18. Forecast by application
2.19. Forecast by industry vertical
2.20. Forecast by industry vertical - full
3. TECHNOLOGY: FROM SEMICONDUCTOR WAFERS TO AI CHIPS
3.1. Wafer and chip manufacture processes
3.1.1. Raw material to wafer: process flow
3.1.2. Wafer to chip: process flow
3.1.3. Wafer to chip: process flow
3.1.4. The initial deposition stage
3.1.5. Thermal oxidation
3.1.6. Oxidation by vapor deposition
3.1.7. Photoresist coating
3.1.8. How a photoresist coating is applied
3.1.9. Lithography
3.1.10. Lithography: DUV
3.1.11. Lithography: Enabling higher resolution
3.1.12. Lithography: EUV
3.1.13. Etching
3.1.14. Deposition and ion implantation
3.1.15. Deposition of thin films
3.1.16. Silicon Vapor Phase Epitaxy
3.1.17. Atmospheric Pressure CVD
3.1.18. Low Pressure CVD and Plasma-Enhanced CVD
3.1.19. Atomic Layer Deposition
3.1.20. Molecular Beam Epitaxy
3.1.21. Evaporation and Sputtering
3.1.22. Ion Implantation: Generation
3.1.23. Ion Implantation: Penetration
3.1.24. Metallization
3.1.25. Wafer: The final form
3.1.26. Semiconductor supply chain players
3.2. Transistor technology
3.2.1. How transistors operate: p-n junctions
3.2.2. How transistors operate: electron shells
3.2.3. How transistors operate: valence electrons
3.2.4. How transistors work: back to p-n junctions
3.2.5. How transistors work: connecting a battery
3.2.6. How transistors work: PNP operation
3.2.7. How transistors work: PNP
3.2.8. How transistors switch
3.2.9. From p-n junctions to FETs
3.2.10. How FETs work
3.2.11. Moore's law
3.2.12. Gate length reductions
3.2.13. FinFET
3.2.14. GAAFET, MBCFET, RibbonFET
3.2.15. Process nodes
3.2.16. Device architecture roadmap
3.2.17. Evolution of transistor device architectures
3.2.18. Carbon nanotubes for transistors
3.2.19. CNTFET designs
3.2.20. Semiconductor foundry node roadmap
3.2.21. Roadmap for advanced nodes
4. EDGE INFERENCE AND KEY APPLICATIONS
4.1. Inference at the edge and benchmarking
4.1.1. Edge AI
4.1.2. Edge vs Cloud characteristics
4.1.3. Advantages and disadvantages of edge AI
4.1.4. Edge devices that employ AI chips
4.1.5. AI in smartphones and tablets
4.1.6. Recent history: Siri
4.1.7. Text-to-speech
4.1.8. AI in personal computers
4.1.9. AI chip basics
4.1.10. Parallel computing
4.1.11. Low-precision computing
4.1.12. AI in speakers
4.1.13. AI in smart appliances
4.1.14. AI in automotive vehicles
4.1.15. AI in sensors and structural health monitoring
4.1.16. AI in security cameras
4.1.17. AI in robotics
4.1.18. AI in wearables and hearables
4.1.19. The edge AI chip landscape
4.1.20. Inference at the edge
4.1.21. Deep learning: How an AI algorithm is implemented
4.1.22. AI chip capabilities
4.1.23. AI chip capabilities
4.1.24. MLPerf - Inference
4.1.25. MLPerf Edge
4.1.26. Inference: Edge, Nvidia vs Nvidia
4.1.27. MLPerf Mobile - Qualcomm HTP
4.1.28. The battle for domination: Qualcomm vs MediaTek
4.1.29. MLPerf Tiny
4.2. AI in smartphones
4.2.1. Mobile device competitive landscape
4.2.2. Samsung and Oppo chipsets
4.2.3. US restrictions on China
4.2.4. Smartphone chipset landscape 2022 - Present
4.2.5. MediaTek and Qualcomm 2020 - Present
4.2.6. AI processing in smartphones: 2020 - Present
4.2.7. Node concentrations 2020 - Present
4.2.8. Chipset concentrations 2020 - Present
4.2.9. Chipset designer concentrations 2020 - Present
4.2.10. Node concentrations for each chipset designer
4.2.11. AI-capable versus non AI-capable smartphones
4.2.12. Chipset volume: 2021 and 2022
4.3. AI in tablets
4.3.1. Tablet competitive landscape
4.3.2. Tablet chipset landscape 2020 - Present
4.3.3. AI processing in tablets: 2020 - Present
4.3.4. Node concentrations 2020 - Present
4.3.5. Chipset designer concentrations 2021 - Present
4.3.6. Node concentrations for each chipset designer
4.3.7. AI-capable versus non AI-capable tablets
4.4. AI in automotive
4.4.1. AI in automobiles: Competitive landscape
4.4.2. Levels of driving automation
4.4.3. Computational efficiencies
4.4.4. AI chips for automotive vehicles
4.4.5. Performance and node trends
4.4.6. Rising power consumption
5. SUPPLY CHAIN PLAYERS
5.1. Smartphone chipset case studies
5.1.1. MediaTek: Dimensity and APU
5.1.2. Qualcomm: MLPerf results - Inference Mobile and Inference Tiny
5.1.3. Qualcomm: Mobile AI
5.1.4. Apple: Neural Engine
5.1.5. Apple: The ANE's capabilities and shortcomings
5.1.6. Google: Pixel Neural Core and Pixel Tensor
5.1.7. Google: Edge TPU
5.1.8. Samsung: Exynos
5.1.9. Huawei: Kirin chipsets
5.1.10. Unisoc: T618 and T710
5.2. Automotive case studies
5.2.1. Nvidia: DRIVE AGX Orin and Thor
5.2.2. Qualcomm: Snapdragon Ride Flex
5.2.3. Ambarella: CV3-AD685 for automotive applications
5.2.4. Ambarella: CVflow architecture
5.2.5. Hailo
5.2.6. Blaize
5.2.7. Tesla: FSD
5.2.8. Horizon Robotics: Journey 5
5.2.9. Horizon Robotics: Journey 5 Architecture
5.2.10. Renesas: R-Car 4VH
5.2.11. Mobileye
5.2.12. Mobileye: EyeQ Ultra
5.2.13. Texas Instruments: TDA4VM
5.3. Embedded device case studies
5.3.1. Nvidia: Jetson AGX Orin
5.3.2. NXP Semiconductors: Introduction
5.3.3. NXP Semiconductors: MCX N
5.3.4. NXP Semiconductors: i.MX 95 and NPU
5.3.5. Intel: AI hardware portfolio
5.3.6. Intel: Core
5.3.7. Perceive
5.3.8. Perceive: Ergo 2 architecture
5.3.9. GreenWaves Technologies
5.3.10. GreenWaves Technologies: GAP9 architecture
5.3.11. AMD Xilinx: ACAP
5.3.12. AMD: Versal AI
5.3.13. NationalChip: GX series
5.3.14. NationalChip: GX8002 and gxNPU
5.3.15. Efinix: Quantum architecture
5.3.16. Efinix: Titanium and Trion FPGAs
6. APPENDICES
6.1. List of smartphones surveyed
6.1.1. Appendix: List of smartphones surveyed - Apple and Asus
6.1.2. Appendix: List of smartphones surveyed - Google and Honor
6.1.3. Appendix: List of smartphones surveyed - Huawei, HTC and Motorola
6.1.4. Appendix: List of smartphones surveyed - Nokia, OnePlus, Oppo
6.1.5. Appendix: List of smartphones surveyed - realme
6.1.6. Appendix: List of smartphones surveyed - Samsung and Sony
6.1.7. Appendix: List of smartphones surveyed - Tecno Mobile
6.1.8. Appendix: List of smartphones surveyed - Xiaomi
6.1.9. Appendix: List of smartphones surveyed - Vivo and ZTE
6.2. List of tablets surveyed
6.2.1. Appendix: List of tablets surveyed - Acer, Amazon and Apple
6.2.2. Appendix: List of tablets surveyed - Barnes & Noble, Google, Huawei, Lenovo
6.2.3. Appendix: List of tablets surveyed - Microsoft, OnePlus, Samsung, Xiaomi
Ordering Information
AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge
Electronic (1-5 users)
$7,000.00
Electronic (6-10 users)
$10,000.00
Electronic and 1 Hardcopy (1-5 users)
$7,975.00
Electronic and 1 Hardcopy (6-10 users)
$10,975.00
AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge
This report characterizes the edge AI chip markets, technologies, and players. Granular forecasts over a 10-year period (up to and including 2034) across 6 different areas (therein 3 primary regional geographies, consumer and enterprise use, 5 device architectures (GPUs, CPUs, ASICs, DSPs and...
www.idtechex.com
Sony Semiconductor Brings Inference Close To The Edge
Steve McDowell
Contributor
Chief Analyst & CEO, NAND Research.
https://www.forbes.com/sites/stevem...close-to-the-edge/?sh=660e80bb34f9#open-web-0
Mar 27, 2024, 04:12 pm EDT
Sony Semiconductor Solutions Group
NURPHOTO VIA GETTY IMAGES
AI only matters to businesses if the technology enables competitive differentiation or drives increased efficiencies. The past year has seen technology companies focus on training models that promise to change enterprises across industries. While training has been the focus, the recent NVIDIA GTC event showcased a rapid transition towards inference, where the actual business value lies.
AI at the Retail Edge
Retail is one of the industries that promises to benefit most from AI. Generative AI and large language models aside, retail organizations are already deploying image recognition systems for diverse tasks, from inventory control and loss prevention to customer service.
Earlier this year, Nvidia published its 2024 State of AI in Retail and CPG report, which takes a survey-based approach to understanding the use of AI in the retail sector. Nvidia found that 42% of retailers already use AI, with an additional 34% assessing or piloting AI programs. Narrowing the aperture to large retailers with revenues of more than $500 million, AI adoption stretches to 64%. That's a massive market.
The challenge for retailers and the array of ecosystem partners catering to them is that AI can be complex. Large language models and generative AI require infrastructure that scales beyond the capabilities of many retail locations. Using the cloud to solve those problems isn't always practical, either, as applications like vision processing need to be done at the edge, where the data lives.
Sony’s Platform Approach to On-Device Inference
Sony Semiconductor Solutions Corporation took on the challenge of simplifying vision processing and inference, resulting in the introduction of its AITRIOS edge AI sensing platform. AITRIOS addresses six significant challenges of cloud-based IoT systems: handling large data volumes, enhancing data privacy, reducing latency, conserving energy, ensuring service continuity, and securing data.
AITRIOS accelerates the deployment of edge AI-powered sensing solutions across industries, enabling a comprehensive ecosystem for creating solutions that blend edge computing and cloud technologies.
View attachment 60127
AITRIOS | Sony Semiconductor Solutions Group
www.aitrios.sony-semicon.com
April 24, 2024
Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness
Sony Semiconductor Solutions Corporation
Atsugi, Japan, April 24, 2024 —
Today, Sony Semiconductor Solutions Corporation (SSS) announced that it has introduced and begun operating an edge AI-driven vision detection solution at 500 convenience store locations in Japan to improve the benefits of in-store advertising.
Edge AI technology automatically detects the number of digital signage viewers and how long they viewed it.
SSS has been providing 7-Eleven and other retail outlets in Japan with vision-based technology to improve the implementation of digital signage systems and in-store advertising at their brick-and-mortar locations as part of their retail media*1 strategy. To help ensure that effective content is shown for brands and stores, this solution gives partners sophisticated tools to evaluate the effectiveness of advertising on their customers.
As part of this effort, SSS has recently introduced a solution that uses edge devices with on-sensor AI processing to automatically detect when customers see digital signage, count how many people paused to view it, and measure the percentage of viewers. The sensor's AI capabilities collect data points such as the number of shoppers who enter the detection area, whether they saw the signage, how many stopped to view it, and how long they watched. Because the system does not output image data capable of identifying individuals, it can provide insightful measurements while helping to preserve privacy.
Click here for an overview video of the solution and interview with 7-Eleven Japan.
Solution features:
- IMX500 intelligent vision sensor delivers optimal data collection, while helping to preserve privacy.
SSS’s IMX500 intelligent vision sensor with AI-processing capabilities automatically detects the number of customers who enter the detection area, the number who stopped to view the signage, and how long they viewed it. The acquired metadata (semantic information) is then sent to a back-end system where it’s combined with content streaming information and purchasing data to conduct a sophisticated analysis of advertising effectiveness. Because the system does not output image data that could be used to identify individuals, it helps to preserve customer privacy.
- Edge devices equipped with the IMX500 save space in store.
The IMX500 is made using SSS’s proprietary structure with the pixel chip and logic chip stacked, enabling the entire process, from imaging to AI inference, to be done on a single sensor. Compact, IMX500-equipped edge devices (approx. 55 x 40 x 35 mm) are unobtrusive in shops, and compared to other solutions that require an AI box or other additional devices for AI inference, can be installed more flexibly in convenience stores and shops with limited space.
- The AITRIOS™ platform contributes to operational stability and system expandability.
Only light metadata is output from IMX500 edge devices, minimizing the amount of data transmitted to the cloud. This helps lessen network load, even when adding more devices in multiple stores, compared to solutions that send full image data to the cloud. This curtails communication, cloud storage, and computing costs.
The IMX500 also handles AI computing, eliminating the need for additional devices such as an AI box. The result is a simple device configuration that streamlines maintenance and reduces installation costs. AITRIOS*2, SSS's edge AI sensing platform, which is used to build and operate the in-store solution, delivers a complete service without the need for third-party tools, enabling simple, sustainable operations. This solution was developed with Console Enterprise Edition, one of the services offered by AITRIOS, and is installed on the partner's Microsoft Azure cloud infrastructure. It connects easily with the partner's existing systems and also offers customizability and security benefits, since no data needs to leave the company.
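To make the "light metadata" idea concrete, here is a hypothetical sketch of the kind of per-interval payload such an edge device might emit instead of raw video. All field names and values here are invented for illustration and are not taken from Sony's AITRIOS documentation.

```python
import json

# Hypothetical per-interval viewer metadata; the field names and the
# device-id scheme are assumptions, not part of any Sony specification.
viewer_metadata = {
    "device_id": "store-0042-cam-01",
    "window_start": "2024-04-24T09:00:00+09:00",
    "window_end": "2024-04-24T09:05:00+09:00",
    "entered_detection_area": 37,   # shoppers who entered the area
    "stopped_to_view": 12,          # shoppers who paused at the signage
    "view_rate": round(12 / 37, 3), # fraction of entrants who viewed
    "mean_view_seconds": 4.8,       # average dwell time
}

payload = json.dumps(viewer_metadata)
# A payload like this is a few hundred bytes per interval, versus
# megabytes for the image frames it summarizes -- which is why network,
# storage, and compute costs stay low as stores are added.
```

The privacy property described in the release follows from the same design choice: only these aggregate counts leave the device, never images.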
*1 A new form of advertising media that provides advertising space for retailers and e-commerce sites using their own platforms
*2 AITRIOS is an AI sensing platform for streamlined device management, AI development, and operation. It offers the development environment, tools, features, etc., which are necessary for deploying AI-driven solutions, and it contributes to shorter roll-out times when launching operations, while ensuring privacy, reducing introductory cost, and minimizing complications. For more information on AITRIOS, visit: https://www.aitrios.sony-semicon.com/en
About Sony Semiconductor Solutions Corporation
Sony Semiconductor Solutions Corporation is a wholly owned subsidiary of Sony Group Corporation and the global leader in image sensors. It operates in the semiconductor business, which includes image sensors and other products. The company strives to provide advanced imaging technologies that bring greater convenience and fun. In addition, it also works to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both human and machines to greater heights. For more information, please visit https://www.sony-semicon.com/en/index.html.
AITRIOS and AITRIOS logos are the registered trademarks or trademarks of Sony Group Corporation or its affiliated companies.
Microsoft and Azure are registered trademarks of Microsoft Corporation in the United States and other countries.
All other company and product names herein are trademarks or registered trademarks of their respective owners.
Here is some wild speculation: could this possibly be a candidate for the mysterious Custom Customer SoC featured in the recent Investor Roadshow presentation (provided the licensing of Akida IP was done via MegaChips)?
Post in thread 'AITRIOS'
https://thestockexchange.com.au/threads/aitrios.18971/post-31633
View attachment 63835
On Dec 29, Chinese researchers from Zhejiang University Hangzhou published a paper on arXiv titled Darwin3: A large-scale neuromorphic chip with a Novel ISA and On-Chip Learning. (Take note that submissions on arXiv must be from registered authors and are moderated but not peer-reviewed, although some authors posting preprints on arXiv - and thus benefitting from immediate feedback in the open-access community and extending their potential citation readership - go on to publish them in peer-reviewed journals).
Not for the first time, however, Akida is missing from the comparison with other state-of-the-art neuromorphic chips (plus the table still lists IBM's TrueNorth instead of the recently unveiled NorthPole). This of course raises the question "Why?!" And the two likeliest answers IMO are: a) the authors did not know about Akida or b) they did not want Akida to outshine their baby.
I’ll leave it to our resident hardware experts to comment on the question whether Darwin3, which constitutes the third generation of the Darwin family of neuromorphic chips and is claimed to have up to 2.35 million neurons and on-chip learning, could be serious future competition.
A quick search here on TSE did not yield any reference to either its predecessors Darwin (2015) or Darwin2 (2019).
View attachment 53414
View attachment 53415
Here is a good example of what Brainchip is missing: good PR. I thought the last question at the AGM was the best one there, as it alluded to our poor PR. Brainchip has only about 13K followers when we should have hundreds of thousands, given we are supposed to be a world leader in our field. The same goes for the small, amateur-looking stands at trade shows. I believe our management needs to urgently address this and get a PR agency on the job so we appear on business channels and in business journals. The CEO having a yearly interview with a small Australian stock analyst firm is not going to cut it; he should be interviewed by the likes of the Bloomberg channel. We need impressive, professional trade-show stands, not a table with a creased tabletop and a few PC-like items running demos, and we should be promoting our leading technology to the masses and showing how it can help improve the world. Maybe management has this in mind and is waiting until we have a couple more contracts. If we want to be a big professional company, then we need to think and act like one. I sure hope we will very soon.

Article published 15 hours ago.
Chinese Chip Ignites Global Neuromorphic Computing Competition
by SLG Syndication
May 27, 2024
[Illustration: The China Academy]
A typical computer chip, such as one found in a personal desktop for non-professional use, consumes around 100 watts of power. AI, on the other hand, requires significantly more energy: ChatGPT is estimated to draw on the order of 300 watts while answering a single question. In contrast, the human brain is far more energy-efficient, running on only around 10 watts, comparable to a lightbulb. This exceptional energy efficiency is one of the reasons why scientists are interested in modeling the next generation of microchips after the human brain.
In the bustling tech landscape of Hangzhou, China, a team of researchers at Zhejiang University has made a significant leap in the world of neuromorphic computing with the development of their latest innovation, the Darwin3 chip. This groundbreaking piece of technology promises to transform how we simulate brain activity, paving the way for advancements in artificial intelligence, robotics, and beyond.
Neuromorphic chips are designed to emulate the architecture and functioning of the human brain. Unlike traditional computers that process information in a linear, step-by-step manner, these chips operate more like our brains, processing multiple streams of information simultaneously and adapting to new data in real-time.
The Darwin3 chip is a marvel of modern engineering, specifically designed to work with Spiking Neural Networks (SNNs). SNNs are a type of artificial neural network that mimics the way neurons and synapses in the human brain communicate. While conventional neural networks use continuous signals to process information, SNNs use discrete spikes, much like the bursts of electrical impulses that our neurons emit.
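The discrete-spike behavior described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron simulation. This is a generic textbook model, not Darwin3's implementation, and all parameter values below are illustrative.

```python
def simulate_lif(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    The membrane potential v leaks toward rest while integrating input;
    when v crosses the threshold, the neuron emits a discrete spike (1)
    and resets -- the 'burst of electrical impulse' analogy in the text.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Euler step of dv/dt = (-v + i) / tau
        v += dt * (-v + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)  # spike emitted
            v = v_reset       # membrane resets after firing
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input yields a regular spike train:
train = simulate_lif([1.5] * 100)
```

Note the contrast with a conventional artificial neuron, which would output a continuous activation value at every step rather than this sparse 0/1 train.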
Test environment. (a) The test chip and system board. (b) Application development process.
One of the standout features of Darwin3 is its flexibility in simulating various types of neurons. Just as an orchestra can produce a wide range of sounds by utilizing different instruments, Darwin3 can emulate different neuron models to suit a variety of tasks, from basic pattern recognition to complex decision-making processes.
To achieve this, one of Darwin3’s key innovations is its domain-specific instruction set architecture (ISA). This custom-designed set of instructions allows the chip to efficiently describe diverse neuron models and learning rules, including the leaky integrate-and-fire (LIF) model, the Izhikevich model, and Spike-Timing-Dependent Plasticity (STDP). This versatility enables Darwin3 to tackle a wide range of computational tasks, making it a highly adaptable tool for AI development.
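For readers unfamiliar with STDP, here is a minimal pair-based weight update: a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise, with an exponential dependence on the spike-time gap. The constants are illustrative and do not reflect Darwin3's actual learning rule.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate causal spike pairs, depress acausal ones."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (LTP)
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fired before pre -> weaken (LTD)
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)  # clamp weight to its allowed range

w_ltp = stdp_update(0.5, t_pre=10.0, t_post=15.0)  # causal pair: weight grows
w_ltd = stdp_update(0.5, t_pre=15.0, t_post=10.0)  # acausal pair: weight shrinks
```

On-chip learning in the text's sense means exactly this kind of update running locally at each synapse, rather than gradients computed off-chip.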
Another significant breakthrough is Darwin3’s efficient memory usage. Neuromorphic computing faces the challenge of managing vast amounts of data involved in simulating neuronal connections. Darwin3 overcomes this hurdle with an innovative compression mechanism that dramatically reduces memory usage. Imagine shrinking a massive library of books into a single, compact e-reader without losing any content—this is akin to what Darwin3 achieves with synaptic connections.
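The memory-saving idea can be illustrated with a standard sparse-connectivity layout, compressed sparse row (CSR): since most neuron pairs are unconnected, storing only the nonzero weights plus index arrays shrinks the footprint dramatically. This is a generic sketch of the principle, not Darwin3's actual compression mechanism.

```python
def to_csr(dense):
    """Compress a dense synaptic weight matrix into CSR arrays.

    Only nonzero weights (data) and their column indices (indices) are
    kept, plus one row-pointer entry per neuron (indptr) marking where
    each neuron's outgoing synapses start.
    """
    indptr, indices, data = [0], [], []
    for row in dense:
        for j, w in enumerate(row):
            if w != 0:
                indices.append(j)
                data.append(w)
        indptr.append(len(data))
    return indptr, indices, data

# 4 neurons, mostly unconnected: 3 weights stored instead of 16.
dense = [[0, 0.2, 0, 0],
         [0, 0, 0, 0],
         [0.1, 0, 0, 0.4],
         [0, 0, 0, 0]]
indptr, indices, data = to_csr(dense)
```

At brain-like sparsity (each neuron connected to a tiny fraction of the others), this kind of representation is what makes millions of neurons fit in on-chip memory at all.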
Perhaps the most exciting feature of Darwin3 is its on-chip learning capability. This allows the chip to learn and adapt in real-time, much like how humans learn from experience. Darwin3 can modify its behavior based on new information, leading to smarter and more autonomous systems.
The implications of Darwin3’s technology are far-reaching and transformative. In healthcare, prosthetic limbs powered by Darwin3 could learn and adapt to a user’s movements, offering a more intuitive and natural experience. This could significantly enhance the quality of life for amputees.
In robotics, robots equipped with Darwin3 could navigate complex environments with greater ease and efficiency, similar to how humans learn to maneuver through crowded spaces. This capability could revolutionize industries from manufacturing to space exploration.
Environmental monitoring could also benefit from Darwin3. Smart sensors using Darwin3 could analyze environmental data in real-time, providing immediate insights into climate conditions and helping us better manage natural resources.
The Darwin3 chip represents a monumental step forward in neuromorphic computing, bringing us closer to creating machines that can think and learn in ways previously thought impossible. As this technology continues to evolve, we anticipate a future where intelligent systems seamlessly integrate into our daily lives, enhancing everything from medical care to environmental conservation. The research was recently published in the journal National Science Review.
Source: China Academy