BRN Discussion Ongoing

zeeb0t

Administrator
Staff member
Unsolicitation alert ... but if I can't ask the BRN fam for help, who can I ask?

I have launched a new product today on a special website dedicated to tech product launches. It may even interest you personally (feel free to visit the website and sign up to the free beta if it does!) but if all of you could wander on over to https://www.producthunt.com/posts/hellocaller-ai and vote for my launch, I would be most appreciative.

Season 3 Smiling GIF by The Simpsons
 
Reactions: 32 users

7für7

Top 20
Unsolicitation alert ... but if I can't ask the BRN fam for help, who can I ask?

I have launched a new product today on a special website dedicated to tech product launches. It may even interest you personally (feel free to visit the website and sign up to the free beta if it does!) but if all of you could wander on over to https://www.producthunt.com/posts/hellocaller-ai and vote for my launch, I would be most appreciative.

Season 3 Smiling GIF by The Simpsons
The BRN community right now:

For zeeb0t

[GIF]
 
Reactions: 9 users
Reactions: 8 users

Frangipani

Regular
Has anyone else stumbled upon this 3-year EU-funded research project called Nimble AI, kick-started in November 2022, that “aims to unlock the potential of neuromorphic vision”? Couldn’t find anything here on TSE with the help of the search function except a reference to US-based company Nimble Robotics, but they seem totally unrelated.

The 19 project partners include imec in Leuven (Belgium) as well as Paris-based GrAI Matter Labs, highly likely Brainchip’s most serious competitor, according to other posters.

An article about Nimble AI’s ambitious project was published today:

What do you make of the consortium’s claim that their 3D neuromorphic vision chip will have more than an edge over Akida once it is ready to hit the market? 🤔


NimbleAI: Ultra-Energy Efficient and Secure Neuromorphic Sensing and Processing at the Endpoint​

“Today only very light AI processing tasks are executed in ubiquitous IoT endpoint devices, where sensor data are generated and access to energy is usually constrained. However, this approach is not scalable and results in high penalties in terms of security, privacy, cost, energy consumption, and latency as data need to travel from endpoint devices to remote processing systems such as data centres. Inefficiencies are especially evident in energy consumption.
To keep pace with the exponentially growing amount of data (e.g. video) and allow more advanced, accurate, safe and timely interactions with the surrounding environment, next-generation endpoint devices will need to run AI algorithms (e.g. computer vision) and other compute-intense tasks with very low latency (i.e. units of ms or less) and energy envelopes (i.e. tens of mW or less).
NimbleAI will harness the latest advances in microelectronics and integrated circuit technology to create an integral neuromorphic sensing-processing solution to efficiently run accurate and diverse computer vision algorithms in resource- and area-constrained chips destined to endpoint devices. Biology will be a major source of inspiration in NimbleAI, especially with a focus to reproduce adaptivity and experience-induced plasticity that allow biological structures to continuously become more efficient in processing dynamic visual stimuli.
NimbleAI is expected to allow significant improvements compared to state-of-the-art (e.g. commercially available neuromorphic chips), and at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs processing frame-based video). NimbleAI will also take a holistic approach for ensuring safety and security at different architecture levels, including silicon level.”
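To put those targets into rough numbers, here is a back-of-envelope sketch using only the figures quoted above. Pairing the power envelope with the latency envelope per inference is my assumption, not the project's:

```python
# Back-of-envelope check of the NimbleAI envelopes quoted above.
# Assumed pairing of the quoted figures (my assumption):
#   latency "units of ms or less"  -> ~1 ms per inference
#   energy  "tens of mW or less"   -> ~20 mW power draw
power_w = 20e-3
latency_s = 1e-3

energy_per_inference = power_w * latency_s
print(f"~{energy_per_inference * 1e6:.0f} uJ per inference")  # ~20 uJ

# The claimed >=100x energy and >=50x latency gains over frame-based
# CPU/GPU/NPU/TPU processing would then imply a baseline of roughly:
print(f"implied frame-based baseline: ~{energy_per_inference * 100 * 1e3:.0f} mJ "
      f"and ~{latency_s * 50 * 1e3:.0f} ms per inference")
```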


What I find a little odd, though, is that this claim re expected superiority over “state-of-the-art (e.g. commercially available neuromorphic chips)“ doesn’t get any mention on the official Nimble AI website (https://www.nimbleai.eu/), in contrast to the expectation of “at least 100x improvement in energy efficiency and 50x shorter latency compared to state-of-the-practice (e.g. CPU/GPU/NPU/TPUs Processing Frame-based Video).”


A few months back, I shared an article featuring Brainchip in a project called Nimble AI. This isn't a small PhD project; Nimble AI has 19 project partners across Europe with €10 million funding from both the EU and UK governments.

While keeping tabs on Nimble, I haven't seen any further mention of Brainchip or any more media releases. However, I did notice the project coordinator of Nimble AI liking a Brainchip-related post on LinkedIn.

View attachment 59295

Have a good dig into their website. It's interesting stuff. https://www.nimbleai.eu/

For the tech heads, there's a scientific paper discussing how SNNs integrate and operate within the chip stack. While it doesn't explicitly mention Brainchip, it predates the article referencing Brainchip, suggesting that Brainchip might have been incorporated later on. DATE23_nimbleai.pdf
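For the non-tech heads wondering what an SNN actually computes, here is a minimal leaky integrate-and-fire (LIF) neuron in Python. This is the generic textbook model only, not the paper's design and not Akida's exact implementation:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the basic building block that
# SNN hardware implements (textbook form; parameters are illustrative).
def lif(spikes_in, tau=20.0, v_thresh=1.0, weight=0.3, dt=1.0):
    v, out = 0.0, []
    for s in spikes_in:
        v += dt * (-v / tau) + weight * s  # membrane leak + weighted input spike
        if v >= v_thresh:                  # threshold crossed: fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

rng = np.random.default_rng(1)
in_spikes = (rng.random(50) < 0.4).astype(int)  # random input spike train
print(sum(lif(in_spikes)), "output spikes from", in_spikes.sum(), "input spikes")
```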

Take a look at the project partners and their respective roles. There are some heavyweight companies and contributors involved, hopefully providing exposure to Brainchip. https://www.nimbleai.eu/consortium/

Also worth noting: Xabier Iturbe has taken on a second new role as the coordinator of the Spanish Association of the Semiconductor Industry's newly formed working group for neuromorphic tech.


Found a slightly updated illustration and project description of the Nimble AI neuromorphic 3D vision prototype, inspired by the operation of an insect’s brain, which will use the AKD1500 as a neuromorphic processor to perform 3D perception inference. This will be benchmarked against a non-neuromorphic Edge AI processor by Hailo.

The EU-funded project, kicked off in November 2022, is about halfway through its three-year duration.



[attached images]
 
Reactions: 45 users

Frangipani

Regular
Hi FJ-215,

the article you linked to refers to a different Fraunhofer Institute, Fraunhofer IPMS in Dresden, whereas the Fraunhofer Institute shown in the video is Fraunhofer HHI (Heinrich-Hertz-Institut) in Berlin. (There are 76 Fraunhofer Institutes in total.)

At the very end of the video, there is a reference to a research paper that I posted about a few weeks ago:

View attachment 63664



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417987

View attachment 63666
View attachment 63667



View attachment 63665

And thanks to the video we now know what neuromorphic hardware the researchers used, even though they didn’t reveal it in their paper! 😍

Forgot to mention:

That research paper’s future outlook…


[attached image]




… ties in nicely with this job ad the Fraunhofer Heinrich-Hertz-Institut had published in November, looking for “several student assistants to support research projects on neuromorphic signal processing in the field of (medical) sensory applications”:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-399275

[attached image]
 
Reactions: 32 users

goodvibes

Regular
Unsupervised Neuromorphic Motion Segmentation


Western Sydney University unveils a novel unsupervised event-based motion segmentation algorithm, employing the #Prophesee Gen4 HD event camera. Source code announced, not released yet.

Highlights:
✅ Unsupervised segmentation of moving objects (see the toy sketch below)
✅ Dynamic mask refinement, appearance from DINO
✅ Ev-Airborne: HD data w/ ground truth annotations
✅ Superior segmentation performance on major benchmarks

#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse

👉Discussion https://lnkd.in/dMgakzWm
👉Paper https://lnkd.in/dCxfDFFK
👉Project https://lnkd.in/d4dxcNMT
👉Repo (empty) https://lnkd.in/dbTBZArg
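Since the repo is still empty, here is a toy sketch of the basic intuition only. This is my own illustration, not the authors' algorithm, and it omits the DINO appearance refinement: events generated by objects moving differently trace different space-time trajectories, so unlabeled events can be separated by clustering in (x, y, t):

```python
# Toy illustration (NOT the paper's method) of unsupervised event-based
# motion segmentation: cluster events in space-time without any labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def moving_object(x0, y0, vx, vy, n=500, t_max=1.0):
    """Synthesize events (x, y, t) from a point object moving at (vx, vy)."""
    t = rng.uniform(0.0, t_max, n)
    x = x0 + vx * t + rng.normal(0, 0.5, n)
    y = y0 + vy * t + rng.normal(0, 0.5, n)
    return np.column_stack([x, y, t])

# Two hypothetical objects with distinct motions.
events = np.vstack([moving_object(10, 10, 40, 5),
                    moving_object(60, 40, -30, 20)])

# Scale time so it competes with the spatial coordinates, then cluster.
feats = events.copy()
feats[:, 2] *= 50.0
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print("events per segment:", np.bincount(labels))
```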
 
Reactions: 12 users

manny100

Regular
But are GPUs "AI accelerators"?

Anyway, what would TSMC know about AI accelerators?

We are not alone:

https://www.synsense.ai/synsense-ad...ic-audio-processing-with-xyloaudio-3-tapeout/

SynSense advances ultra-low-power neuromorphic audio processing with Xylo™Audio 3 tapeout​

2023-07-07
By SynSense

SynSense, the world’s leading commercial supplier of ultra-low-power neuromorphic hardware and application solutions, has completed the tapeout of Xylo™Audio 3, their advanced ultra-low-power audio processing platform built on the neuromorphic inference core Xylo™. Xylo™Audio 3 is based on the TSMC 40nm CMOS LOGIC Low Power process, delivering real-time, ultra-low-power audio signal processing capabilities while reducing chip costs. This tapeout marks a milestone for the commercialization of SynSense’s neuromorphic audio processing technology.
SynSense now has a chip with learning capabilities.
How does it compare to AKIDA? Is it full-on competition?
Is this the reason we are flogging the AKIDA Gen 2 + TENNs combination, because we now have real competition for AKIDA?
They appear to be analogue or CNN only.
Seems they have near-cloud capabilities but are not cloudless like AKIDA.
 
Reactions: 9 users

manny100

Regular
But are GPUs "AI accelerators"?

Anyway, what would TSMC know about AI accelerators?

[…]
The reason I asked the earlier question is mainly that, when it comes to public or workplace safety, any delays are unacceptable from an OH&S perspective, as they could lead to injuries. Near-cloud capabilities will not be good enough.
So anything from motor vehicle safety to industrial safety or health uses will require a real-time, uninterrupted service.
Can SynSense or other companies offer this?
 
Reactions: 5 users
Not watched either



 
Reactions: 5 users
Unsolicitation alert ... but if I can't ask the BRN fam for help, who can I ask?

I have launched a new product today on a special website dedicated to tech product launches. It may even interest you personally (feel free to visit the website and sign up to the free beta if it does!) but if all of you could wander on over to https://www.producthunt.com/posts/hellocaller-ai and vote for my launch, I would be most appreciative.

Season 3 Smiling GIF by The Simpsons
Will it block her indoors calls?
 
Reactions: 6 users

zeeb0t

Administrator
Staff member
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's a sample of the report "AI Chips for Edge Applications 2024-2034". The full report costs about $11,000. What a bargain! Who wants to "chip" in for a copy?

BTW, we are listed in the Hardware Start-Up and New Players diagram.🥳






Annual revenue generated by AI Chips for edge devices is set to exceed US$22 billion by 2034.
AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge

AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge​

Technology analyses and market forecasts for the global sale of AI chips for edge applications by geography, architecture, packaging, end-user, application, and industry vertical.​




The global AI chips market for edge devices will grow to US$22.0 billion by 2034, with the three largest industry verticals at that time being Consumer Electronics, Industrial, and Automotive. Artificial Intelligence (AI) is already displaying significant transformative potential across a number of different applications, from fraud detection in high-frequency trading to the use of generative AI (such as the likes of ChatGPT) as a significant time-saver for the preparation of written documentation, as well as a creative prompt. While the use of semiconductor chips with neural network architectures (these architectures being especially well-equipped to handle machine learning workloads, machine learning being an integral facet of functioning AI) is prevalent within data centers, it is at the edge where significant opportunity for the adoption of AI lies. The benefits to end-users of providing a greater array of functionalities to edge devices, as well as - in certain applications - being able to fully outsource human-hours to intelligent systems, are significant. AI has already found its way into the flagship smartphones of the world's leading designers, and is set to be rolled out across a number of different devices, from automotive vehicles to smart appliances in the home.

Following a period of dedicated research by expert analysts, IDTechEx has published a report that offers unique insights into the global edge AI chip technology landscape and corresponding markets. The report contains a comprehensive analysis of 23 players involved with AI chip design for edge devices, as well as a detailed assessment of technology innovations and market dynamics. The market analysis and forecasts focus on total revenue (where this corresponds to the revenue that can be attributed to the specific neural network architecture included in sold chips/chipsets that is responsible for handling machine learning workloads), with granular forecasts that are segmented by geography (APAC, Europe, North America, and Rest of World), type of buyer (consumer and enterprise), chip architecture (GPU, CPU, ASIC, DSP, and FPGA), packaging type (System-on-Chip, Multi-Chip Module, and 2.5D+), application (language, computer vision, and predictive), and industry vertical (industrial, healthcare, automotive, retail, media & advertising, consumer electronics, and others).

The report presents an unbiased analysis of primary data gathered via our interviews with key players, and it builds on our expertise in the semiconductor, computing and electronics sectors.

This research delivers valuable insights for:
  • Companies that require AI-capable hardware.
  • Companies that design/manufacture AI chips and/or AI-capable embedded systems.
  • Companies that supply components used in AI-capable embedded systems.
  • Companies that invest in AI and/or semiconductor design, manufacture, and packaging.
  • Companies that develop devices that may require AI functionality.


Computing can be segmented with regards to the different environments, designated by where computation takes place within the network (i.e. within the cloud or at the edge of the network). This report covers the consumer edge and enterprise edge environments. Source: IDTechEx

Artificial Intelligence at the Edge
The differentiation between edge and cloud computing environments is not a trivial one, as each environment has its own requirements and capabilities. An edge computing environment is one in which computations are performed on a device - usually the same device on which the data is created - that is at the edge of the network (and, therefore, close to the user). This contrasts with cloud or data center computing, which is at the center of the network. Such edge devices include cars, cameras, laptops, mobile phones, autonomous vehicles, etc. In all of these instances, computation is carried out close to the user, at the edge of the network where the data is located. Given this definition of edge computing, edge AI is therefore the deployment of AI applications at the edge of the network, in the types of devices listed above. The benefits of running AI applications on edge devices include not having to send data back and forth between the cloud and the edge device to carry out the computation; as such, edge devices running AI algorithms can make decisions quickly without needing a connection to the internet or the cloud. Given that many edge devices run on a power cell, AI chips used for such edge devices need to have lower power consumption than within data centers, in order to be able to run effectively on these devices. This typically results in simpler algorithms being deployed that don't require as much power.
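A toy calculation makes the latency argument concrete. The numbers below are illustrative assumptions of mine, not figures from the report:

```python
# Illustrative comparison (assumed numbers, not from the report) of why
# on-device inference avoids the cloud round-trip described above.
cloud_uplink_ms = 25.0    # send sensor data to a data centre
cloud_infer_ms = 5.0      # fast server-side inference
cloud_downlink_ms = 25.0  # return the result
edge_infer_ms = 15.0      # slower, low-power on-device inference

cloud_total = cloud_uplink_ms + cloud_infer_ms + cloud_downlink_ms
print(f"cloud round-trip: {cloud_total:.0f} ms, edge: {edge_infer_ms:.0f} ms")
# Edge wins whenever the network time exceeds the extra on-device compute,
# and it keeps working with no connectivity at all.
```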

Edge devices can be split into two categories depending on who they are intended for; consumer devices are sold directly to end-users, and so are developed with end-user requirements in mind. Enterprise devices, on the other hand, are purchased by businesses or institutions, who may have different requirements to the end-user. Both types of edge devices are considered in the report.


The consumer electronics, industrial, and automotive industry verticals are expected to generate the most revenue for AI chips at the edge by 2034. Source: IDTechEx

AI: A crucial technology for an Internet of Things
AI's capabilities in natural language processing (understanding of textual data, not just from a linguistic perspective but also a contextual one), speech recognition (being able to decipher a spoken language and convert it to text in the same language, or convert to another language), recommendation (being able to send personalized adverts/suggestions to consumers based on their interactions with service items), reinforcement learning (being able to make predictions based on observations/exploration, such as is used when training agents to play a game), object detection, and image classification (being able to distinguish objects from an environment, and decide on what that object is) are such that AI can be applied to a number of different devices across industry verticals and thoroughly transform the ways in which human users interact with these devices. This can range from additional functionality that enhances user experience (such as in smartphones, smart televisions, personal computers, and tablets), to functionality that is inherently crucial to the technology (such as is the case for autonomous vehicles and industrial robots, which would simply not be able to function in the desired manner without the inclusion of AI).

The Smart Home in particular (a segment that primarily comprises consumer electronics products) is a growing avenue for AI, given that artificial intelligence (allowing for automation and hands-free access) and Wi-Fi connectivity are two key technologies for realizing an Internet of Things (IoT), where appliances can communicate directly with one another. Smart televisions, mirrors, virtual reality headsets, sensors, kitchen appliances, cleaning appliances, and safety systems are all devices that can be brought into a state of interconnectivity through the deployment of artificial intelligence and Wi-Fi, where AI allows for hands-free access and voice command over smart home devices. The opportunity afforded by bringing AI into the home is reflected somewhat by the growth of the consumer electronics vertical over the forecast period, with it being the industry that generates the most revenue for edge AI chips in 2034.


The Edge AI chip landscape. Source: IDTechEx

The growth of AI at the edge
While the forecast presented in this report does predict substantial growth of AI at the edge over the next ten years - where global revenue is in excess of US$22 billion by 2034 - this growth is anything but steady. This is due, respectively, to the saturation of certain markets that have already employed AI architectures in their incumbent chipsets, and to the stop-start nature of markets where rigorous testing is necessary prior to high-volume rollout. For example, the smartphone market has already begun to saturate; premiumization of smartphones continues (the percentage share of total smartphones sold that are premium models increases year-on-year), and AI revenue rises as more premium smartphones with AI coprocessing in their chipsets are sold, but this too is expected to begin to saturate over the next ten years.

In contrast to this, two notable jumps in revenue on the forecast presented in the report are from 2024 to 2025, and 2026 to 2027. The first of these jumps can be largely attributed to the most cutting-edge ADAS (Advanced Driver-Assistance Systems) finding their way into car manufacturers' 2025 production line. The second jump is due in part to increased adoption of ADAS systems, as well as the relative maturation of start-ups presently targeting embedded devices, especially for smart home appliances. These applications are discussed in greater detail in the report, with a particular focus on the smartphone and automotive markets.


Smartphone price as compared to the node process that incumbent chipsets have been manufactured in. This plot has been created from a survey - carried out specifically for this report - of 196 smartphones released since 2020, 91 of which incorporate neural network architectures to allow for AI acceleration. Source: IDTechEx

Market developments and roadmaps
IDTechEx's model of the edge AI chips market considers architectural trends, developments in packaging, the dispersion/concentration of funding and investments, historical financial data, individual industry vertical market saturation, and geographically-localized ecosystems to give an accurate representation of the evolving market value over the next ten years.

Our report answers important questions such as:
  • Which industry verticals will AI chips for edge devices be used most prominently in?
  • What opportunities are there for growth within the edge computing environments?
  • How has the adoption of AI within more mature markets been received, and what are the obstacles to adoption in more emergent applications?
  • How will each AI chip application and industry vertical grow in the short and long-term?
  • What are the trends associated with the design and manufacture of chips that incorporate neural network architectures?

Summary
This report provides critical market intelligence concerning AI hardware at the edge, particularly chips used for accelerating machine learning workloads. This includes:

Market forecasts and analysis
  • Market forecasts from 2024-2034, segmented in six different ways: by geography, architecture, packaging, end-user, application and industry vertical.
  • Analysis of market forecasts, including assumptions, methodologies, limitations, and explanations for the characteristics of each forecast.

A review of the technology behind AI chips
  • History and context for AI chip design and manufacture.
  • Overview of different architectures.
  • General capabilities of AI chips.
  • Review of semiconductor manufacture processes, from raw material to wafer to chip.
  • Review of the physics behind transistor technology.
  • Review of transistor technology development, and industry/company roadmaps in this area.
  • Analysis of the benchmarking used in the industry for AI chips.

Surveys and analysis of key edge AI applications
  • Analysis of the chipsets included in almost 200 smartphones released since 2020, along with pricing estimations and key trends.
  • Analysis of the chipsets included in almost 50 tablets released since 2020, along with pricing estimations and key trends.
  • Performance comparisons for automotive chipsets, along with key trends with regard to performance, power consumption, and efficiency.

Full market characterization for each major edge AI chip product
  • Review of the edge AI chip landscape, including key players across edge applications.
  • Profiles of 23 of the most prominent companies designing AI chips for edge applications today, with a focus on their latest and in-development chip technologies.
  • Reviews of promising start-up companies developing AI chips for edge applications.

Report Metrics
Historic Data: 2019 - 2022
CAGR: The global market for AI chips at the edge will reach US$22.0 billion by 2034. This represents a CAGR of 7.63% over the forecast period (2024 to 2034).
Forecast Period: 2024 - 2034
Forecast Units: USD$ Billions
Regions Covered: Worldwide, All Asia-Pacific, North America (USA + Canada), Europe
Segments Covered: Geography (North America, APAC, Europe, Rest of World), architecture (FPGA, CPU, GPU, DSP, ASIC), packaging (SoC, MCM, 2.5D+), end-user (consumer, enterprise), application (computer vision, language, predictive), and industry vertical (consumer electronics, industrial, automotive, healthcare, retail, media & advertising, other).
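A quick sanity check of those figures (the 2024 baseline is implied by the CAGR, not stated in the sample):

```python
# A 7.63% CAGR reaching US$22.0B in 2034 implies the 2024 starting market.
final_usd_bn = 22.0
cagr = 0.0763
years = 10  # 2024 -> 2034

start_usd_bn = final_usd_bn / (1 + cagr) ** years
print(f"implied 2024 market: ~US${start_usd_bn:.1f}B")  # ~US$10.5B
```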

Analyst access from IDTechEx
All report purchases include up to 30 minutes telephone time with an expert analyst who will help you link key findings in the report to the business issues you're addressing. This needs to be used within three months of purchasing the report.




Table of Contents
1.EXECUTIVE SUMMARY
1.1.Edge AI
1.2.IDTechEx definition of Edge AI
1.3.Edge vs Cloud characteristics
1.4.Advantages and disadvantages of edge AI
1.5.Edge devices that employ AI chips
1.6.The edge AI chip landscape - overview
1.7.The edge AI chip landscape - key hardware players
1.8.The edge AI chip landscape - hardware start-ups
1.9.The AI chip landscape - other than hardware
1.10.Edge AI landscape - geographic split: China
1.11.Edge AI landscape - geographic split: North America
1.12.Edge AI landscape - geographic split: Rest of World
1.13.Inference at the edge
1.14.Deep learning: How an AI algorithm is implemented
1.15.AI chip capabilities
2.FORECASTS
2.1.Total revenue forecast
2.2.Methodology and analysis
2.3.Estimating annual revenue from smartphone chipsets
2.4.Smartphone chipset costs
2.5.Costs garnered by AI in smartphone chipsets
2.6.Revenue forecast by geography
2.7.Percentage shares of market by geography
2.8.Chip types: architecture
2.9.Forecast by chip type
2.10.Semiconductor packaging timeline
2.11.From 1D to 3D semiconductor packaging
2.12.2D packaging - System-on-Chip
2.13.2D packaging - Multi-Chip Modules
2.14.2.5D and 3D packaging - System-in-Package
2.15.3D packaging - System-on-Package
2.16.Forecast by packaging
2.17.Consumer vs Enterprise forecast
2.18.Forecast by application
2.19.Forecast by industry vertical
2.20.Forecast by industry vertical - full
3.TECHNOLOGY: FROM SEMICONDUCTOR WAFERS TO AI CHIPS
3.1.Wafer and chip manufacture processes
3.1.1.Raw material to wafer: process flow
3.1.2.Wafer to chip: process flow
3.1.3.Wafer to chip: process flow
3.1.4.The initial deposition stage
3.1.5.Thermal oxidation
3.1.6.Oxidation by vapor deposition
3.1.7.Photoresist coating
3.1.8.How a photoresist coating is applied
3.1.9.Lithography
3.1.10.Lithography: DUV
3.1.11.Lithography: Enabling higher resolution
3.1.12.Lithography: EUV
3.1.13.Etching
3.1.14.Deposition and ion implantation
3.1.15.Deposition of thin films
3.1.16.Silicon Vapor Phase Epitaxy
3.1.17.Atmospheric Pressure CVD
3.1.18.Low Pressure CVD and Plasma-Enhanced CVD
3.1.19.Atomic Layer Deposition
3.1.20.Molecular Beam Epitaxy
3.1.21.Evaporation and Sputtering
3.1.22.Ion Implantation: Generation
3.1.23.Ion Implantation: Penetration
3.1.24.Metallization
3.1.25.Wafer: The final form
3.1.26.Semiconductor supply chain players
3.2.Transistor technology
3.2.1.How transistors operate: p-n junctions
3.2.2.How transistors operate: electron shells
3.2.3.How transistors operate: valence electrons
3.2.4.How transistors work: back to p-n junctions
3.2.5.How transistors work: connecting a battery
3.2.6.How transistors work: PNP operation
3.2.7.How transistors work: PNP
3.2.8.How transistors switch
3.2.9.From p-n junctions to FETs
3.2.10.How FETs work
3.2.11.Moore's law
3.2.12.Gate length reductions
3.2.13.FinFET
3.2.14.GAAFET, MBCFET, RibbonFET
3.2.15.Process nodes
3.2.16.Device architecture roadmap
3.2.17.Evolution of transistor device architectures
3.2.18.Carbon nanotubes for transistors
3.2.19.CNTFET designs
3.2.20.Semiconductor foundry node roadmap
3.2.21.Roadmap for advanced nodes
4.EDGE INFERENCE AND KEY APPLICATIONS
4.1.Inference at the edge and benchmarking
4.1.1.Edge AI
4.1.2.Edge vs Cloud characteristics
4.1.3.Advantages and disadvantages of edge AI
4.1.4.Edge devices that employ AI chips
4.1.5.AI in smartphones and tablets
4.1.6.Recent history: Siri
4.1.7.Text-to-speech
4.1.8.AI in personal computers
4.1.9.AI chip basics
4.1.10.Parallel computing
4.1.11.Low-precision computing
4.1.12.AI in speakers
4.1.13.AI in smart appliances
4.1.14.AI in automotive vehicles
4.1.15.AI in sensors and structural health monitoring
4.1.16.AI in security cameras
4.1.17.AI in robotics
4.1.18.AI in wearables and hearables
4.1.19.The edge AI chip landscape
4.1.20.Inference at the edge
4.1.21.Deep learning: How an AI algorithm is implemented
4.1.22.AI chip capabilities
4.1.23.AI chip capabilities
4.1.24.MLPerf - Inference
4.1.25.MLPerf Edge
4.1.26.Inference: Edge, Nvidia vs Nvidia
4.1.27.MLPerf Mobile - Qualcomm HTP
4.1.28.The battle for domination: Qualcomm vs MediaTek
4.1.29.MLPerf Tiny
4.2.AI in smartphones
4.2.1.Mobile device competitive landscape
4.2.2.Samsung and Oppo chipsets
4.2.3.US restrictions on China
4.2.4.Smartphone chipset landscape 2022 - Present
4.2.5.MediaTek and Qualcomm 2020 - Present
4.2.6.AI processing in smartphones: 2020 - Present
4.2.7.Node concentrations 2020 - Present
4.2.8.Chipset concentrations 2020 - Present
4.2.9.Chipset designer concentrations 2020 - Present
4.2.10.Node concentrations for each chipset designer
4.2.11.AI-capable versus non AI-capable smartphones
4.2.12.Chipset volume: 2021 and 2022
4.3.AI in tablets
4.3.1.Tablet competitive landscape
4.3.2.Tablet chipset landscape 2020 - Present
4.3.3.AI processing in tablets: 2020 - Present
4.3.4.Node concentrations 2020 - Present
4.3.5.Chipset designer concentrations 2021 - Present
4.3.6.Node concentrations for each chipset designer
4.3.7.AI-capable versus non AI-capable tablets
4.4.AI in automotive
4.4.1.AI in automobiles: Competitive landscape
4.4.2.Levels of driving automation
4.4.3.Computational efficiencies
4.4.4.AI chips for automotive vehicles
4.4.5.Performance and node trends
4.4.6.Rising power consumption
5.SUPPLY CHAIN PLAYERS
5.1.Smartphone chipset case studies
5.1.1.MediaTek: Dimensity and APU
5.1.2.Qualcomm: MLPerf results - Inference Mobile and Inference Tiny
5.1.3.Qualcomm: Mobile AI
5.1.4.Apple: Neural Engine
5.1.5.Apple: The ANE's capabilities and shortcomings
5.1.6.Google: Pixel Neural Core and Pixel Tensor
5.1.7.Google: Edge TPU
5.1.8.Samsung: Exynos
5.1.9.Huawei: Kirin chipsets
5.1.10.Unisoc: T618 and T710
5.2.Automotive case studies
5.2.1.Nvidia: DRIVE AGX Orin and Thor
5.2.2.Qualcomm: Snapdragon Ride Flex
5.2.3.Ambarella: CV3-AD685 for automotive applications
5.2.4.Ambarella: CVflow architecture
5.2.5.Hailo
5.2.6.Blaize
5.2.7.Tesla: FSD
5.2.8.Horizon Robotics: Journey 5
5.2.9.Horizon Robotics: Journey 5 Architecture
5.2.10.Renesas: R-Car 4VH
5.2.11.Mobileye
5.2.12.Mobileye: EyeQ Ultra
5.2.13.Texas Instruments: TDA4VM
5.3.Embedded device case studies
5.3.1.Nvidia: Jetson AGX Orin
5.3.2.NXP Semiconductors: Introduction
5.3.3.NXP Semiconductors: MCX N
5.3.4.NXP Semiconductors: i.MX 95 and NPU
5.3.5.Intel: AI hardware portfolio
5.3.6.Intel: Core
5.3.7.Perceive
5.3.8.Perceive: Ergo 2 architecture
5.3.9.GreenWaves Technologies
5.3.10.GreenWaves Technologies: GAP9 architecture
5.3.11.AMD Xilinx: ACAP
5.3.12.AMD: Versal AI
5.3.13.NationalChip: GX series
5.3.14.NationalChip: GX8002 and gxNPU
5.3.15.Efinix: Quantum architecture
5.3.16.Efinix: Titanium and Trion FPGAs
6.APPENDICES
6.1.List of smartphones surveyed
6.1.1.Appendix: List of smartphones surveyed - Apple and Asus
6.1.2.Appendix: List of smartphones surveyed - Google and Honor
6.1.3.Appendix: List of smartphones surveyed - Huawei, HTC and Motorola
6.1.4.Appendix: List of smartphones surveyed - Nokia, OnePlus, Oppo
6.1.5.Appendix: List of smartphones surveyed - realme
6.1.6.Appendix: List of smartphones surveyed - Samsung and Sony
6.1.7.Appendix: List of smartphones surveyed - Tecno Mobile
6.1.8.Appendix: List of smartphones surveyed - Xiaomi
6.1.9.Appendix: List of smartphones surveyed - Vivo and ZTE
6.2.List of tablets surveyed
6.2.1.Appendix: List of tablets surveyed - Acer, Amazon and Apple
6.2.2.Appendix: List of tablets surveyed - Barnes & Noble, Google, Huawei, Lenovo
6.2.3.Appendix: List of tablets surveyed - Microsoft, OnePlus, Samsung, Xiaomi

Ordering Information​

AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge​

Electronic (1-5 users): $7,000.00
Electronic (6-10 users): $10,000.00
Electronic and 1 Hardcopy (1-5 users): $7,975.00
Electronic and 1 Hardcopy (6-10 users): $10,975.00
 
Reactions: 24 users

7für7

Top 20
Here's a sample of the report "AI Chips for Edge Applications 2024-2034". The full report costs about $11,000. What a bargain! Who wants to "chip" in for a copy?

BTW, we are listed in the Hardware Start-Up and New Players diagram.🥳






[…]
US$11k for a full report? Jesus Christ… better invest it in BRN, what the… 😱👻
 
Reactions: 5 users

Frangipani

Regular

Sony Semiconductor Brings Inference Close To The Edge​

Steve McDowell
Contributor
Chief Analyst & CEO, NAND Research.


https://www.forbes.com/sites/stevem...close-to-the-edge/?sh=660e80bb34f9#open-web-0
Mar 27, 2024, 04:12pm EDT

[Image: Sony Semiconductor Solutions Group, via NurPhoto/Getty Images]

AI only matters to businesses if the technology enables competitive differentiation or drives increased efficiencies. The past year has seen technology companies focus on training models that promise to change enterprises across industries. While training has been the focus, the recent NVIDIA GTC event showcased a rapid transition towards inference, where the actual business value lies.


AI at the Retail Edge​

Retail is one of the industries that promises to benefit most from AI. Generative AI and large language models aside, retail organizations are already deploying image recognition systems for diverse tasks, from inventory control and loss prevention to customer service.

Earlier this year, Nvidia published its 2024 State of AI in Retail and CPG report, which takes a survey-based approach to understanding the use of AI in the retail sector. Nvidia found that 42% of retailers already use AI, with an additional 34% assessing or piloting AI programs. Narrowing the aperture, among large retailers with revenues of more than $500 million, the adoption of AI stretches to 64%. That's a massive market.


The challenge for retailers and the array of ecosystem partners catering to them is that AI can be complex. Large language models and generative AI require infrastructure that scales beyond the capabilities of many retail locations. Using the cloud to solve those problems isn't always practical, either, as applications like vision processing need to be done at the edge, where the data lives.


Sony’s Platform Approach to On-Device Inference​

Sony Semiconductor Solutions Corporation took on the challenge of simplifying vision processing and inference, resulting in the introduction of its AITRIOS edge AI sensing platform. AITRIOS addresses six significant challenges of cloud-based IoT systems: handling large data volumes, enhancing data privacy, reducing latency, conserving energy, ensuring service continuity, and securing data.


AITRIOS accelerates the deployment of edge AI-powered sensing solutions across industries, enabling a comprehensive ecosystem for creating solutions that blend edge computing and cloud technologies.




View attachment 60127



April 24, 2024

Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness​

Sony Semiconductor Solutions Corporation
Atsugi, Japan, April 24, 2024 —

Today, Sony Semiconductor Solutions Corporation (SSS) announced that it has introduced and begun operating an edge AI-driven vision detection solution at 500 convenience store locations in Japan to improve the benefits of in-store advertising.


Edge AI technology automatically detects the number of digital signage viewers and how long they viewed it.

SSS has been providing 7-Eleven and other retail outlets in Japan with vision-based technology to improve the implementation of digital signage systems and in-store advertising at their brick-and-mortar locations as part of their retail media*1 strategy. To help ensure that effective content is shown for brands and stores, this solution gives partners sophisticated tools to evaluate the effectiveness of advertising on their customers.

As part of this effort, SSS has recently introduced a solution that uses edge devices with on-sensor AI processing to automatically detect when customers see digital signage, count how many people paused to view it, and measure the percentage of viewers. The sensor’s AI capabilities collect data points such as the number of shoppers who enter the detection area, whether they saw the signage, the number who stopped to view the signage, and how long they watched. The system does not output image data capable of identifying individuals, making it possible to provide insightful measurements while helping to preserve privacy.


Click here for an overview video of the solution and interview with 7-Eleven Japan.


Solution features:

-IMX500 intelligent vision sensor delivers optimal data collection, while helping to preserve privacy.

SSS’s IMX500 intelligent vision sensor with AI-processing capabilities automatically detects the number of customers who enter the detection area, the number who stopped to view the signage, and how long they viewed it. The acquired metadata (semantic information) is then sent to a back-end system where it’s combined with content streaming information and purchasing data to conduct a sophisticated analysis of advertising effectiveness. Because the system does not output image data that could be used to identify individuals, it helps to preserve customer privacy.

-Edge devices equipped with the IMX500 save space in store.

The IMX500 is made using SSS’s proprietary structure with the pixel chip and logic chip stacked, enabling the entire process, from imaging to AI inference, to be done on a single sensor.
Compact, IMX500-equipped edge devices (approx. 55 x 40 x 35 mm) are unobtrusive in shops, and compared to other solutions that require an AI box or other additional devices for AI inference, can be installed more flexibly in convenience stores and shops with limited space.

-The AITRIOS™ platform contributes to operational stability and system expandability.
  • Only light metadata is output from IMX500 edge devices, minimizing the amount of data transmitted to the cloud. This helps lessen network load, even when adding more devices in multiple stores, compared to solutions that send full image data to the cloud. This curtails communication, cloud storage, and computing costs.
    The IMX500 also handles AI computing, eliminating the need for other devices such as an AI box, resulting in a simple device configuration, streamlining device maintenance and reducing costs of installation.
  • AITRIOS*2, SSS’s edge AI sensing platform, which is used to build and operate the in-store solution, delivers a complete service without the need for third-party tools, enabling simple, sustainable operations.​
  • This solution was developed with Console Enterprise Edition, one of the services offered by AITRIOS, and is installed on the partner’s Microsoft Azure cloud infrastructure. It not only connects easily and compatibly with their existing systems, but also offers system customizability and security benefits, since there is no need to output various data outside the company.
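As a rough illustration of why metadata-only output is so much lighter than streaming video, here is a small sketch of the kind of per-interval record an IMX500 edge device might emit and how a back-end could derive a viewer percentage. The field names and schema are assumptions made up for this example; SSS does not publish the actual format in this release.

```python
from dataclasses import dataclass

@dataclass
class SignageMetadata:
    """One reporting interval from an edge device (hypothetical schema)."""
    store_id: str
    entered_area: int       # shoppers who entered the detection area
    viewed_signage: int     # shoppers detected looking at the signage
    stopped_to_view: int    # shoppers who paused in front of it
    total_view_seconds: float

def viewer_percentage(records: list[SignageMetadata]) -> float:
    """Share of shoppers entering the area who actually viewed the signage."""
    entered = sum(r.entered_area for r in records)
    viewed = sum(r.viewed_signage for r in records)
    return 100.0 * viewed / entered if entered else 0.0

# Example: two intervals reported by one store's device
records = [
    SignageMetadata("store-001", entered_area=40, viewed_signage=12,
                    stopped_to_view=5, total_view_seconds=38.0),
    SignageMetadata("store-001", entered_area=55, viewed_signage=20,
                    stopped_to_view=9, total_view_seconds=61.5),
]
print(f"Viewer percentage: {viewer_percentage(records):.1f}%")  # 33.7%
```

A handful of integers and floats per interval is a few dozen bytes, versus megabits per second for video, which is why the release stresses reduced network load and cloud costs.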


*1 A new form of advertising media that provides advertising space for retailers and e-commerce sites using their own platforms
*2 AITRIOS is an AI sensing platform for streamlined device management, AI development, and operation. It offers the development environment, tools, features, etc., which are necessary for deploying AI-driven solutions, and it contributes to shorter roll-out times when launching operations, while ensuring privacy, reducing introductory cost, and minimizing complications. For more information on AITRIOS, visit: https://www.aitrios.sony-semicon.com/en


About Sony Semiconductor Solutions Corporation
Sony Semiconductor Solutions Corporation is a wholly owned subsidiary of Sony Group Corporation and the global leader in image sensors. It operates in the semiconductor business, which includes image sensors and other products. The company strives to provide advanced imaging technologies that bring greater convenience and fun. In addition, it also works to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both humans and machines to greater heights. For more information, please visit https://www.sony-semicon.com/en/index.html.

AITRIOS and AITRIOS logos are the registered trademarks or trademarks of Sony Group Corporation or its affiliated companies.
Microsoft and Azure are registered trademarks of Microsoft Corporation in the United States and other countries.
All other company and product names herein are trademarks or registered trademarks of their respective owners.








Here is some wild speculation: Could this 👆🏻possibly be a candidate for the mysterious Custom Customer SoC, featured in the recent Investor Roadshow presentation (provided the licensing of Akida IP was done via MegaChips)? 🤔


Post in thread 'AITRIOS'
https://thestockexchange.com.au/threads/aitrios.18971/post-31633

 
  • Like
  • Thinking
  • Love
Reactions: 17 users

toasty

Regular


Frangipani said:
Edge AI-Driven Vision Detection Solution Introduced at 500 Convenience Store Locations to Measure Advertising Effectiveness
(Sony press release and AITRIOS speculation quoted in full above.)

All very nice but what has it got to do with BRN??
 
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
On Dec 29, Chinese researchers from Zhejiang University in Hangzhou published a paper on arXiv titled Darwin3: A large-scale neuromorphic chip with a Novel ISA and On-Chip Learning. (Note that arXiv submissions must come from registered authors and are moderated but not peer-reviewed, although some authors posting preprints on arXiv - thus benefiting from immediate feedback in the open-access community and extending their potential citation readership - go on to publish them in peer-reviewed journals.)

Not for the first time, however, Akida is missing from the comparison with other state-of-the-art neuromorphic chips (plus the table still lists IBM’s TrueNorth instead of the recently unveiled NorthPole). This of course begs the question “Why?!” And the two likeliest answers IMO are: a) the authors did not know about Akida or b) they did not want Akida to outshine their baby.

I’ll leave it to our resident hardware experts to comment on the question whether Darwin3, which constitutes the third generation of the Darwin family of neuromorphic chips and is claimed to have up to 2.35 million neurons and on-chip learning, could be serious future competition.
A quick search here on TSE did not yield any reference to either its predecessors Darwin (2015) or Darwin2 (2019).



View attachment 53414

View attachment 53415

Article published 15 hours ago.

Chinese Chip Ignites Global Neuromorphic Computing Competition​

Environmental monitoring could also benefit from Darwin3. Smart sensors using Darwin3 could analyze environmental data in real-time, providing immediate insights into climate conditions and helping us better manage natural resources.
by SLG Syndication

May 27, 2024

2 mins read

[ Illustration: The China Academy]
A typical computer chip, such as one found in a personal desktop for non-professional use, consumes around 100 watts of power. AI, on the other hand, requires significantly more energy: by one estimate, ChatGPT consumes roughly 300 watt-seconds of energy to answer a single question. In contrast, the human brain is much more energy-efficient, requiring only around 10 watts of power, comparable to a lightbulb. This exceptional energy efficiency is one of the reasons why scientists are interested in modeling the next generation of microchips after the human brain.
In the bustling tech landscape of Hangzhou, China, a team of researchers at Zhejiang University has made a significant leap in the world of neuromorphic computing with the development of their latest innovation, the Darwin3 chip. This groundbreaking piece of technology promises to transform how we simulate brain activity, paving the way for advancements in artificial intelligence, robotics, and beyond.

Neuromorphic chips are designed to emulate the architecture and functioning of the human brain. Unlike traditional computers that process information in a linear, step-by-step manner, these chips operate more like our brains, processing multiple streams of information simultaneously and adapting to new data in real-time.
The Darwin3 chip is a marvel of modern engineering, specifically designed to work with Spiking Neural Networks (SNNs). SNNs are a type of artificial neural network that mimics the way neurons and synapses in the human brain communicate. While conventional neural networks use continuous signals to process information, SNNs use discrete spikes, much like the bursts of electrical impulses that our neurons emit.
Test environment. (a) The test chip and system board. (b) Application development process.
One of the standout features of Darwin3 is its flexibility in simulating various types of neurons. Just as an orchestra can produce a wide range of sounds by utilizing different instruments, Darwin3 can emulate different neuron models to suit a variety of tasks, from basic pattern recognition to complex decision-making processes.
One of Darwin3’s key innovations is its domain-specific instruction set architecture (ISA). This custom-designed set of instructions allows the chip to efficiently describe diverse neuron models and learning rules, including the leaky integrate-and-fire (LIF) model, the Izhikevich model, and Spike-Timing-Dependent Plasticity (STDP). This versatility enables Darwin3 to tackle a wide range of computational tasks, making it a highly adaptable tool for AI development.
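For readers unfamiliar with the models the article names, here is a minimal, textbook-style sketch of a leaky integrate-and-fire (LIF) neuron in Python. This is a generic Euler discretization for illustration only; it is not Darwin3's ISA or code from the paper, and the parameter values are arbitrary assumptions.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau=20.0, dt=1.0):
    """Textbook leaky integrate-and-fire neuron (illustrative only).

    Euler-discretized membrane update: v += (dt / tau) * (v_rest - v + i).
    The neuron emits a spike and resets whenever v crosses v_thresh.
    """
    v = v_rest
    spikes = []
    for i in input_current:
        v += (dt / tau) * (v_rest - v + i)  # leak toward rest, integrate input
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

# A constant drive above threshold makes the neuron fire periodically.
print(simulate_lif([1.2] * 100).count(1), "spikes in 100 steps")
```

The article's point is that on Darwin3 such neuron updates and STDP-style learning rules are expressed directly in the chip's instruction set and executed event by event, rather than looped over in software as above.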

Another significant breakthrough is Darwin3’s efficient memory usage. Neuromorphic computing faces the challenge of managing vast amounts of data involved in simulating neuronal connections. Darwin3 overcomes this hurdle with an innovative compression mechanism that dramatically reduces memory usage. Imagine shrinking a massive library of books into a single, compact e-reader without losing any content—this is akin to what Darwin3 achieves with synaptic connections.
Perhaps the most exciting feature of Darwin3 is its on-chip learning capability. This allows the chip to learn and adapt in real-time, much like how humans learn from experience. Darwin3 can modify its behavior based on new information, leading to smarter and more autonomous systems.

The implications of Darwin3’s technology are far-reaching and transformative. In healthcare, prosthetic limbs powered by Darwin3 could learn and adapt to a user’s movements, offering a more intuitive and natural experience. This could significantly enhance the quality of life for amputees.
In robotics, robots equipped with Darwin3 could navigate complex environments with greater ease and efficiency, similar to how humans learn to maneuver through crowded spaces. This capability could revolutionize industries from manufacturing to space exploration.
Environmental monitoring could also benefit from Darwin3. Smart sensors using Darwin3 could analyze environmental data in real-time, providing immediate insights into climate conditions and helping us better manage natural resources.
The Darwin3 chip represents a monumental step forward in neuromorphic computing, bringing us closer to creating machines that can think and learn in ways previously thought impossible. As this technology continues to evolve, we can anticipate a future where intelligent systems seamlessly integrate into our daily lives, enhancing everything from medical care to environmental conservation. The research was recently published in the journal National Science Review.
Source: China Academy

 
  • Like
  • Wow
  • Sad
Reactions: 20 users

7für7

Top 20
So, you all sell now because zeeb0t told you to buy his product yeah? Nice move fellas… nice move! HOOOOLD
 

Gazzafish

Regular
https://www.wevolver.com/article/en...tion-using-neuromorphic-computing-and-edge-ai

Extract only below:-

Enhancing Smart Homes with Pose Detection Using Neuromorphic Computing and Edge AI​

Ravi Rao
22 May, 2024


Pose detection technology leverages advanced machine learning algorithms to interpret human movements in real-time, enabling seamless, intuitive device control through simple gestures.​

Tags: Artificial Intelligence, Electronics, Embedded Machine Learning, Embedded Systems, Microcontroller

Introduction​

Edge AI transforms smart home technology by enabling real-time data processing directly on devices, reducing latency, and enhancing privacy. In home automation, this leads to more responsive and efficient control systems. One notable application is gesture recognition through pose detection, which allows users to control devices with simple movements.
This article features a project on developing a gesture-based appliance control system using the BrainChip Akida Neural Processor AKD1000 SoC and the Edge Impulse platform. We'll discuss hardware and software requirements, the setup process, data collection, model training, deployment, and practical demonstrations. Additionally, we'll explore integrating the system with Google Assistant for enhanced functionality.

Edge AI in Home Automation​

In home automation, Edge AI enables smart devices to respond quickly to user inputs and environmental changes. This local processing power is crucial for applications requiring immediate feedback, such as security systems, smart lighting, and environmental controls.
By processing data at the edge, smart home devices can operate independently of an internet connection, ensuring continuous functionality. This also reduces the risk of data breaches as sensitive information remains within the local network.

Pose Detection with Edge AI​

Pose detection is a technology that captures and analyzes human body movements and postures in real time. Using machine learning algorithms, pose detection systems identify key points on the human body, such as joints and limbs, and track their positions and movements. This data can then be used to recognize specific gestures and postures, enabling intuitive, hands-free interaction with various devices.
Pose detection typically involves several steps:
  1. Image Capture: A camera or other sensor captures images or video of the user.
  2. Preprocessing: The captured images are processed to enhance quality and remove noise.
  3. Key Point Detection: Machine learning models identify and track key points on the body, such as elbows, knees, and shoulders.
  4. Pose Estimation: The system estimates the user's pose by analyzing the positions and movements of the detected key points.
  5. Gesture Recognition: Specific gestures are identified based on predefined patterns in the user's movements.
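As a concrete illustration of steps 3-5, here is a minimal rule-based sketch that decides whether a detected pose is a "pointing" gesture from three key points. The key-point format and the thresholds are assumptions made up for this example; the project discussed below instead trains a classifier on Edge Impulse rather than hand-coding rules.

```python
import math

def angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_pointing(keypoints):
    """Crude 'arm extended roughly horizontally' heuristic (assumed, illustrative)."""
    shoulder, elbow, wrist = (keypoints[k] for k in ("shoulder", "elbow", "wrist"))
    arm_straight = angle(shoulder, elbow, wrist) > 150   # nearly straight arm
    arm_level = abs(wrist[1] - shoulder[1]) < 0.15       # wrist near shoulder height
    return arm_straight and arm_level

# Normalized (x, y) key points with the right arm extended sideways
pose = {"shoulder": (0.50, 0.40), "elbow": (0.65, 0.41), "wrist": (0.80, 0.42)}
print(is_pointing(pose))  # True
```

A learned model replaces these hand-tuned rules with patterns extracted from labeled examples, which is what makes the approach robust across users and camera angles.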
Pose detection has a wide range of applications beyond home automation, including:
  • Gaming: Enhancing user experience with motion-controlled games.
  • Healthcare: Monitoring patients' movements and posture for rehabilitation and physical therapy.
  • Fitness: Providing real-time feedback on exercise form and performance.
  • Security: Recognizing suspicious behavior in surveillance systems.
In home automation, pose detection can be particularly powerful, turning everyday tasks into seamless, interactive experiences, and enhancing the overall functionality and appeal of smart homes. In this context, the project "Gesture Appliances Control with Pose Detection" stands out as a great example of how pose detection can be used for home automation. Developed by Christopher Mendez, this innovative idea leverages the BrainChip AKD1000 to enable users to control household appliances with simple finger-pointing gestures.
Further reading: Gesture Recognition and Classification Using Infineon PSoC 6 and Edge AI
By combining neuromorphic processing with machine learning, the system achieves high accuracy and low power consumption, making it a practical and efficient solution for modern smart homes.

Gesture Appliances Control with Pose Detection - BrainChip AKD1000​

Control your TV, Air Conditioner or Lightbulb by just pointing your finger at them, using the BrainChip AKD1000 achieving great accuracy and low power consumption.
Created By: Christopher Mendez
Public Project Link:
Edge Impulse Experts / Brainchip-Appliances-Control-Full-Body
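To sketch how a classifier's output might drive appliances in a system like this, here is a minimal dispatch routine. The class names, confidence threshold, and device registry are hypothetical stand-ins; the linked project wires the Akida model's predictions to real integrations (e.g. a Google Assistant routine) rather than this toy printout.

```python
# Hypothetical registry: pose class -> (device, command).
COMMANDS = {
    "point_tv": ("tv", "power_toggle"),
    "point_lightbulb": ("lightbulb", "power_toggle"),
    "point_ac": ("air_conditioner", "power_toggle"),
}

def dispatch(pose_class: str, confidence: float, threshold: float = 0.8):
    """Forward a sufficiently confident classification to the target appliance."""
    if confidence < threshold or pose_class not in COMMANDS:
        return None  # ignore low-confidence or unknown predictions
    device, command = COMMANDS[pose_class]
    # A real deployment would call the smart-home API here; we just report it.
    print(f"{device}: {command}")
    return device, command

# Example: the model is 93% sure the user is pointing at the lightbulb
dispatch("point_lightbulb", 0.93)
```

The confidence gate matters in practice: without it, every transient misclassification would toggle an appliance.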
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Dugnal

Member
Bravo said:
Chinese Chip Ignites Global Neuromorphic Computing Competition
(Darwin3 article quoted in full above.)

Here is a good example of what Brainchip is missing: good PR. I thought the last question at the AGM was the best one there, as it alluded to our poor PR. Brainchip only has about 13K followers when we should have hundreds of thousands, given we are supposed to be a world leader in our field. We also have small, amateur-looking stands at trade shows. I believe our management need to urgently address this situation and get a PR agency on the job so we get onto business channels and into business journals. The CEO having a yearly interview with a small Australian stock analyst company is not going to cut it; he should be having interviews with the likes of Bloomberg. We need more impressive, professional trade show stands, not a table with a creased tabletop and a few PC-like items demoing. We should be promoting our leading technology to the masses and showing how we can help improve the world and humanity. Maybe management have it in mind and are waiting until we have a couple more contracts. If we want to be a big, professional company then we need to think and act like one... I sure hope we will very soon.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

wilzy123

Founding Member
7für7 said:
Mate… I know people whose grandparents also run a business in which they specialized. That doesn't mean they (the grandchildren) automatically know everything. The media landscape is constantly changing. Camera work, which is meant to evoke a certain drama, is also evolving. Imagine if films were still shot the same way they were in your grandparents' time… I come from this field as well, so relax… my great-great-great-grandparents were already doing theater, and my ancestors invented drama and comedy… so!? 🤷🏻‍♂️🤦🏻‍♂️

Reply,

Respect is earned, not given.

I have taken a few days off, as it's in my blood to get very angry at disrespectful people like yourself, 7für7.

As you started all this... I'll finish it with this note:

I have spent my whole life in TV and film production, being 63 years now with my first job at 16; you do the maths. My comment of "it's in my blood" you took out of context, and you proceeded to be a smart arse and try to insult me with your rambling comments above, accusing me of being the grandkid trading off his grandparents' name without knowing what I am talking about.

You don't have a clue, but you chose to take the path of insult.
I have attached the standard video presentation formats professional producers use, and the reasons why they are the standard requirements for engaging with the audience in a professional manner. Your cut-and-paste of documentaries / non-fiction films, which you quickly put up without any reason but, IMO, to be a smart arse, and which you think is correct, is just plain wrong and irrelevant for this format.

By the way, I have sent this on to the stockbroker video team so they take more care next time around when Sean is trying to find which camera to look at; it was very unprofessional, and as a shareholder it is relevant to me that it gets attention.

My apologies to everyone for this continued discussion; however, this is relevant and needs to be cleared up.

I won't accept disrespect from someone who thinks he knows everything and tries to belittle people to make himself look good, IMO.

I have said my piece, and that's all I have to say moving forward.

See below.

1000023411.png
 
  • Haha
Reactions: 4 users
Top Bottom