BRN Discussion Ongoing

CHIPS

Regular
  • Like
Reactions: 4 users

Mt09

Regular
But in my opinion, he is referring to ESA and not to Samsung ... though ... there might be more or else why would he mention BrainChip?
Fair chance he’s a shareholder..
 
  • Like
  • Fire
Reactions: 9 users
But in my opinion, he is referring to ESA and not to Samsung ... though ... there might be more or else why would he mention BrainChip?
His last three posts on his LinkedIn page are on BrainChip.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Yeah, I heard that.

I think what he meant was that Akida can use inputs from any sensor for which there is an Akida-compatible model library.

They would have needed to configure their data into a format suitable for Akida.

ISL has been working with Akida for more than 15 months.

https://brainchip.com/information-systems-labs-early-access-program/

Laguna Hills, Calif. – January 9, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), a leading provider of ultra-low power, high performance artificial intelligence technology and the world’s first commercial producer of neuromorphic AI chips and IP, today announced that Information Systems Laboratories, Inc. (ISL) is developing an AI-based radar research solution for the Air Force Research Laboratory (AFRL) based on its Akida™ neural networking processor.

ISL has won several SBIRs.

This site has a list of ISL SBIRs:

https://www.sbir.gov/sbc/information-systems-laboratories-inc

This one looks particularly germane:

https://www.sbir.gov/sbirsearch/detail/2575607

A New Approach to the Testing & Evaluation of Advanced RF Applications of Deep Learning AI

Start 2023-09-20, end 2025-09-19 (docked a day for the leap year?).

Under this effort ISL will provide the Air Force a comprehensive solution that addresses both the explainable AI (XAI) problem (what's a neuromorphic computer anyway?) and the test and evaluation (T&E) acceptance problem for a broad class of sensor applications. ISL has developed a novel method

https://www.sbir.gov/sbirsearch/detail/2565041

Virtual RF Target (VRT)
202211
https://www.islinc.com/isl-awarded-a-sbir-phase-iii-contract-option

San Diego, CA: ISL is excited to continue our partnership with the Simulator Division to provide Mission Critical Electronic Warfare (EW) and Electronic Counter Measure (ECM) Procedural Training using RFView® Training, a Physics-Based RF/EW/Radar Simulation System. ISL is working on option year 1 of a follow-on contract with the Air Force Life Cycle Management Center (AFLCMC) Simulators Division Innovation Program, located at Wright-Patterson Air Force Base, to develop a B-1B Part-Task Trainer for the Defensive Operator Station (DSO) to train new B-1B Weapon Systems Officers (WSOs).

“ISL has ushered in a training innovation to perform radar and EW simulation in real-time,” says Margaret Merkle, Innovation Program Manager. “The Innovation team has been pleased with all the benefits that have come from this project. This project contributes to increases in readiness and mission success, savings in flight hour costs, and an improved instructor retention rate.”

“The combined team of the AFLCMC Simulator Division, 7th Bomb Wing Rapid Capabilities Office, ISL and subcontractor ZedaSoft, revolutionized aircrew training critical to future missions in contested and denied environments,” says Dr. Joe Guerci, ISL CEO.

The contract continues the development of the B-1B Part-Task Trainer, which started with radar scope interpretation and procedural training for the Offensive Systems Operator (OSO), and an AFWERX “Accelerating Pilot to Combat-ready Aviators” task to demonstrate an F-15E application.


About USAF/AFMC/AFLCMC Innovation Cell:

The Simulators Division’s Innovation Cell has the mission to devise avenues to marry new technologies with old problems. The Innovation Cell also recognizes the benefits offered by combining real and virtual assets during training. The Simulators Division as a whole is responsible for delivering training and simulation capabilities to warfighters. They currently manage 2,300 devices across the world that train pilots, aircrews, maintainers, etc. Having the ability to integrate all these systems would offer huge benefits to the readiness of the Air Force and the other US Services.


About Information Systems Laboratories, Inc.:

ISL was founded to tackle the toughest problems facing society, from national security to energy independence and climate change, and find sustainable and effective solutions that work in the real world. ISL’s approach is to begin with a “fresh eye,” questioning any and all preconceived notions and assumptions, then use a rigorous scientific, physics-based, and engineering approach to develop entirely new and often disruptive approaches that have a major industry impact, not just an incremental improvement. We are also a fully vertically integrated “one-stop” solutions provider, from nascent ideas through to full-rate production (ISO-9001, AS-9100, etc.).

ISL is an industry leader in advanced radio frequency (RF) Digital Engineering (DE): from high-fidelity modeling and simulation via our industry-leading RFView® software, to its family of products that support the whole DE chain: RFView®-ModSim, RFView®-HWIL (hardware-in-the-loop), and RFView®-Training. ISL is also a pioneer in advanced sensor signal processing and has published numerous peer-reviewed publications including the first book on Cognitive Radar (Artech House, 2010, 2nd Ed. 2020). ISL has a successful record of commercialization (recipient of three (3) Phase III SBIR contracts) and was the subject of an Air Force Research Laboratory (AFRL) SBIR Success story (see https://www.sbir.gov/node/1526807).

You can understand Joe's frustration with the valley of death when you see they have been working on this project since 2018, but the tech has changed markedly in the interim.

20181115
https://www.sbir.gov/node/1526807

The technology meets the needs of all military commands and services that rely on advanced radar modeling, testing and training, so it has the potential for widespread adoption across the Department of Defense. Information Systems Laboratories has already earned Phase III contracts, which denote funding from outside the Air Force SBIR/ STTR Program and indicate a critical commercialization benchmark.

Additionally, commercial variations of the system are now available and beginning to be sold.

BEHIND THE TECHNOLOGY

Advanced radar flight tests that mimic conditions in highly contested environments are some of the most expensive flight tests to conduct, costing the Air Force millions of dollars annually. This SBIR/STTR project was intended to model complex radar systems operating in those conditions.

Throughout the development process, the company took a new approach to high-fidelity radar/RF modeling which includes high-performance embedded computing. The idea is that design engineers and operators would be able to realistically conduct countless virtual sorties from a secure location while capturing all of the real-world effects a radar is likely to encounter.

Working on the Air Force solution – in collaboration with Muralidhar Rangaswamy, senior advisor for radar research at the Air Force Research Laboratory Sensors Directorate – also allowed the company to launch a commercial version called RFView.

That product allows a user to fly their radar/RF system anywhere in the world from their office or laboratory, according to Joseph Guerci, president and CEO of Information Systems Laboratories.

Through RFView, users enter the simulation parameters then submit a job that runs remotely on a high-performance computer cluster to ensure timely simulation results. No special computing software or hardware is required. A sister product, RTEMES, allows for real-time hardware-in-the-loop operation of RFView with the actual flight hardware.
 
  • Like
  • Love
  • Fire
Reactions: 21 users

CHIPS

Regular
Fair chance he’s a shareholder..

Yes, it looks very much like it 👍. He reposts every BrainChip post but is also very interested in hydrogen.
 
  • Like
Reactions: 4 users

CHIPS

Regular

How much electricity does AI consume?

It’s not easy to calculate the watts and joules that go into a single Balenciaga pope. But we’re not completely in the dark about the true energy cost of AI.

It’s common knowledge that machine learning consumes a lot of energy. All those AI models powering email summaries, regicidal chatbots, and videos of Homer Simpson singing nu-metal are racking up a hefty server bill measured in megawatt-hours.

Estimates do exist, but experts say those figures are partial and contingent, offering only a glimpse of AI’s total energy usage. This is because machine learning models are incredibly variable, able to be configured in ways that dramatically alter their power consumption. Moreover, the organizations best placed to produce a bill — companies like Meta, Microsoft, and OpenAI — simply aren’t sharing the relevant information. (Judy Priest, CTO for cloud operations and innovations at Microsoft, said in an e-mail that the company is currently “investing in developing methodologies to quantify the energy use and carbon impact of AI while working on ways to make large systems more efficient, in both training and application.” OpenAI and Meta did not respond to requests for comment.)

One important factor we can identify is the difference between training a model for the first time and deploying it to users. Training, in particular, is extremely energy intensive, consuming much more electricity than traditional data center activities. Training a large language model like GPT-3, for example, is estimated to use just under 1,300 megawatt hours (MWh) of electricity, about as much as 130 US homes consume in a year. To put that in context, streaming an hour of Netflix requires around 0.8 kWh (0.0008 MWh) of electricity. That means you’d have to watch 1,625,000 hours to consume the same amount of power it takes to train GPT-3.
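Those conversions are easy to sanity-check. Here is the arithmetic in a few lines of Python; all inputs are the article's estimates, not measured values, and the ~10 MWh/year per US home figure is implied by its own numbers (1,300 / 130):

```python
# Sanity check of the training-energy comparisons quoted above.
gpt3_training_mwh = 1300      # estimated energy to train GPT-3
us_home_annual_mwh = 10       # implied: 1300 MWh / 130 homes
netflix_hour_mwh = 0.0008     # 0.8 kWh per hour of streaming

homes = gpt3_training_mwh / us_home_annual_mwh
netflix_hours = gpt3_training_mwh / netflix_hour_mwh

print(f"{homes:.0f} US homes for a year")        # 130
print(f"{netflix_hours:,.0f} hours of Netflix")  # 1,625,000
```

So the 1,625,000-hour figure checks out against the article's own inputs.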

But it’s difficult to say how a figure like this applies to current state-of-the-art systems. The energy consumption could be bigger, because AI models have been steadily trending upward in size for years and bigger models require more energy. On the other hand, companies might be using some of the proven methods to make these systems more energy efficient — which would dampen the upward trend of energy costs.
The challenge of making up-to-date estimates, says Sasha Luccioni, a researcher at French-American AI firm Hugging Face, is that companies have become more secretive as AI has become profitable. Go back just a few years and firms like OpenAI would publish details of their training regimes — what hardware and for how long. But the same information simply doesn’t exist for the latest models, like ChatGPT and GPT-4, says Luccioni.
“With ChatGPT we don’t know how big it is, we don’t know how many parameters the underlying model has, we don’t know where it’s running … It could be three raccoons in a trench coat because you just don’t know what’s under the hood.”


Luccioni, who’s authored several papers examining AI energy usage, suggests this secrecy is partly due to competition between companies but is also an attempt to divert criticism. Energy use statistics for AI — especially its most frivolous use cases — naturally invite comparisons to the wastefulness of cryptocurrency. “There’s a growing awareness that all this doesn’t come for free,” she says.
Training a model is only part of the picture. After a system is created, it’s rolled out to consumers who use it to generate output, a process known as “inference.” Last December, Luccioni and colleagues from Hugging Face and Carnegie Mellon University published a paper (currently awaiting peer review) that contained the first estimates of inference energy usage of various AI models.

Luccioni and her colleagues ran tests on 88 different models spanning a range of use cases, from answering questions to identifying objects and generating images. In each case, they ran the task 1,000 times and estimated the energy cost. Most tasks they tested use a small amount of energy, like 0.002 kWh to classify written samples and 0.047 kWh to generate text. If we use our hour of Netflix streaming as a comparison, these are equivalent to the energy consumed watching nine seconds or 3.5 minutes, respectively. (Remember: that’s the cost to perform each task 1,000 times.) The figures were notably larger for image-generation models, which used on average 2.907 kWh per 1,000 inferences. As the paper notes, the average smartphone uses 0.012 kWh to charge — so generating one image using AI can use almost as much energy as charging your smartphone.

The emphasis, though, is on “can,” as these figures do not necessarily generalize across all use cases. Luccioni and her colleagues tested ten different systems, from small models producing tiny 64 x 64 pixel pictures to larger ones generating 4K images, and this resulted in a huge spread of values. The researchers also standardized the hardware used in order to better compare different AI models. This doesn’t necessarily reflect real-world deployment, where software and hardware are often optimized for energy efficiency.
“Definitely this is not representative of everyone’s use case, but now at least we have some numbers,” says Luccioni. “I wanted to put a flag in the ground, saying ‘Let’s start from here.’”
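The streaming equivalences in that paragraph can be reproduced directly. A quick sketch, using the per-1,000-inference figures exactly as quoted from the paper and the article's 0.8 kWh/hour Netflix estimate:

```python
NETFLIX_KWH_PER_HOUR = 0.8  # the article's streaming estimate

# kWh per 1,000 inferences, as quoted from the Hugging Face / CMU paper
text_classification_kwh = 0.002
text_generation_kwh = 0.047

def as_streaming_seconds(kwh: float) -> float:
    """Convert an energy figure to equivalent seconds of Netflix streaming."""
    return kwh / NETFLIX_KWH_PER_HOUR * 3600

print(f"classification: {as_streaming_seconds(text_classification_kwh):.0f} s")      # 9 s
print(f"generation: {as_streaming_seconds(text_generation_kwh) / 60:.1f} min")       # 3.5 min
```

That matches the article's "nine seconds" and "3.5 minutes" comparisons.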

The study provides useful relative data, then, though not absolute figures. It shows, for example, that AI models require more power to generate output than they do when classifying input. It also shows that anything involving imagery is more energy intensive than text. Luccioni says that although the contingent nature of this data can be frustrating, this tells a story in itself. “The generative AI revolution comes with a planetary cost that is completely unknown to us and the spread for me is particularly indicative,” she says. “The tl;dr is we just don’t know.”
So trying to nail down the energy cost of generating a single Balenciaga pope is tricky because of the morass of variables. But if we want to better understand the planetary cost, there are other tacks to take. What if, instead of focusing on model inference, we zoom out?
This is the approach of Alex de Vries, a PhD candidate at VU Amsterdam who cut his teeth calculating the energy expenditure of Bitcoin for his blog Digiconomist, and who has used Nvidia GPUs — the gold standard of AI hardware — to estimate the sector’s global energy usage. As de Vries explains in commentary published in Joule last year, Nvidia accounts for roughly 95 percent of sales in the AI market. The company also releases energy specs for its hardware and sales projections.

By combining this data, de Vries calculates that by 2027 the AI sector could consume between 85 to 134 terawatt hours each year. That’s about the same as the annual energy demand of de Vries’ home country, the Netherlands.
“You’re talking about AI electricity consumption potentially being half a percent of global electricity consumption by 2027,” de Vries tells The Verge. “I think that’s a pretty significant number.”
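De Vries' "half a percent" remark is also easy to reproduce. Note the ~25,000 TWh figure for total global electricity consumption below is my assumption (roughly the 2022 level), not a number from the article:

```python
# Rough check of de Vries' "half a percent of global consumption" figure.
GLOBAL_TWH = 25_000                 # assumed global electricity use, TWh/year
ai_low_twh, ai_high_twh = 85, 134   # de Vries' 2027 projection for AI

low_share = ai_low_twh / GLOBAL_TWH * 100
high_share = ai_high_twh / GLOBAL_TWH * 100
print(f"AI share: {low_share:.2f}% to {high_share:.2f}%")  # 0.34% to 0.54%
```

The upper end of the range lands at roughly half a percent, consistent with the quote.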

A recent report by the International Energy Agency offered similar estimates, suggesting that electricity usage by data centers will increase significantly in the near future thanks to the demands of AI and cryptocurrency. The agency says data center energy usage stood at around 460 terawatt hours in 2022 and could increase to between 620 and 1,050 TWh in 2026 — equivalent to the energy demands of Sweden or Germany, respectively.

But de Vries says putting these figures in context is important. He notes that between 2010 and 2018, data center energy usage was fairly stable, accounting for around 1 to 2 percent of global consumption. (And when we say “data centers” here we mean everything that makes up “the internet”: from the internal servers of corporations to all the apps you can’t use offline on your smartphone.) Demand certainly went up over this period, says de Vries, but the hardware got more efficient, thus offsetting the increase.

His fear is that things might be different for AI precisely because of the trend for companies to simply throw bigger models and more data at any task. “That is a really deadly dynamic for efficiency,” says de Vries. “Because it creates a natural incentive for people to just keep adding more computational resources, and as soon as models or hardware becomes more efficient, people will make those models even bigger than before.”
The question of whether efficiency gains will offset rising demand and usage is impossible to answer. Like Luccioni, de Vries bemoans the lack of available data but says the world can’t just ignore the situation. “It’s been a bit of a hack to work out which direction this is going and it’s certainly not a perfect number,” he says. “But it’s enough foundation to give a bit of a warning.”

Some companies involved in AI claim the technology itself could help with these problems. Priest, speaking for Microsoft, said AI “will be a powerful tool for advancing sustainability solutions,” and emphasized that Microsoft was working to reach “sustainability goals of being carbon negative, water positive and zero waste by 2030.”

But the goals of one company can never encompass the full industry-wide demand. Other approaches may be needed.
Luccioni says that she’d like to see companies introduce energy star ratings for AI models, allowing consumers to compare energy efficiency the same way they might for appliances. For de Vries, our approach should be more fundamental: do we even need to use AI for particular tasks at all? “Because considering all the limitations AI has, it’s probably not going to be the right solution in a lot of places, and we’re going to be wasting a lot of time and resources figuring that out the hard way,” he says.
 
  • Like
  • Love
Reactions: 11 users

Cgc516

Regular
Still NDA?

April Fools’ Day!

 
  • Like
Reactions: 2 users

TheFunkMachine

seeds have the potential to become trees.
Just in case anyone like me was wondering what the valley of death means, here is a basic explanation of the term.

In this graph of the valley of death, BrainChip is in the final stage of coming out of the valley.

This is, however, only the first of five valleys of death for a tech start-up 😅

I personally believe that if we clear this first valley, it will be because of expanding IP adoption and royalties. I predict this will start occurring mid-2024 and increase exponentially from 2025 onward.

The reason I believe this is BrainChip’s established ecosystem of partners, its technological position in the edge market, and the timing of its relevance in the current and evolving market.

In other words… BrainChip has a lot of friends, and they have a revolutionary technology that solves an ever-pressing problem in moving AI forward.

So because of this, I believe that if BrainChip can survive this first valley, the market will move it straight past the other four typical valleys explained in this article.

This is of course not financial advice, and I will forever be an optimist ☺️

Happy Easter everyone, and God’s favour on all of us.
 

Attachments

  • 026470D0-35B6-4309-B3B9-6311E0EAA4B4.png (1.5 MB · Views: 137)
  • Like
  • Love
  • Fire
Reactions: 28 users
Wishing you all a very blessed week; as short as it will be, it’s one week closer than the last.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
In this graph of the Vally of death Brainchip is in its final stage to come out of this Vally. […]

Please for the love of goshkins could everyone who has questions surrounding the “valley of death” please read this post.


And, what’s more, if you don’t believe Dodgy-knee’s post, you can listen to the actual podcast and decide for yourself!



Screenshot 2024-04-01 at 11.57.40 pm.png
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 28 users

Frangipani

Regular
New interview with our CMO Nandan Nayampally:


IMG_2177_head.jpg

INTERVIEWSTECH
·APRIL 1, 2024·6 MIN READ

BRAINCHIP, MAKING AI UBIQUITOUS​

BrainChip is the worldwide leader in on-chip edge AI processing and learning technology that enables faster, efficient, secure, and customizable intelligent devices untethered from the cloud. The company’s first-to-market neuromorphic processor, Akida™, mimics the human brain, the most efficient inference and learning engine known, to analyze only essential sensor inputs at the point of acquisition, executing only necessary operations and therefore processing data with unparalleled efficiency and precision. This supports a distributed intelligence approach that keeps machine learning local to the chip, independent of the cloud, dramatically reducing latency while simultaneously improving privacy and data security.

The Akida neural processor is designed to provide a complete ultra-low power Edge AI network processor for vision, audio, smart transducers, vital signs and, broadly, any sensor application.

BrainChip’s scalable solutions can be used standalone or integrated into systems-on-chip to execute today’s models and future networks directly in hardware, empowering the market to create much more intelligent, cost-effective devices and services universally deployable across real-world applications in connected cars, healthcare, consumer electronics, industrial IoT, smart agriculture and more, including use in a space mission and under the most stringent conditions.

BrainChip is the foundation for cost-effective, fan-less, portable, real-time Edge AI systems that can offload the cloud, reducing the rapid growth in carbon footprint of datacenters. In addition, Akida’s unique capability to learn locally on device also reduces retraining of models in the cloud whose skyrocketing cost is a barrier to the growth of AIoT.


Interview with Nandan Nayampally, CMO at BrainChip.

Easy Engineering: What are the main areas of activity of the company?

Nandan Nayampally:
BrainChip is focused on AI at the Edge. The vision of the company is to make AI ubiquitous. Therefore, the mission for the company is to enable every device to have on-board AI acceleration, the key to which is extremely energy-efficient yet performant neural network processing. The company has been inspired by the human brain – the most efficient inference and learning engine – to build neuromorphic AI acceleration solutions. The company delivers this as IP which can be integrated into customers’ systems-on-chip (SoCs). To achieve this, BrainChip has built a very configurable, event-based neural processor unit that is extremely energy-efficient and has a small footprint. It is complemented by BrainChip’s model compilation tools in MetaTF™ and its silicon reference platforms, which customers can use to develop initial prototypes and then take to market.

BrainChip continues to invest heavily in next-generation neuromorphic architecture to stay ahead of the current AI landscape, to democratize GenAI, and to pave the path to Artificial General Intelligence (AGI).

E.E: What’s the news about new products/services?

N.N:
Built in collaboration with VVDN Technologies, the Akida Edge Box is designed to meet the demanding needs of retail and security, Smart City, automotive, transportation and industrial applications. The device combines a powerful quad-core CPU platform with Akida AI accelerators to provide a huge boost in AI performance. The compact, light Edge box is cost-effective and versatile with built in ethernet and Wi-Fi connectivity, HD display support, extensible storage and USB interfaces. BrainChip and VVDN are finalizing the set of AI applications that will run out of the box. With the ability to personalize and learn on device, the box can be customized per application and per user without need of Cloud support, enhancing the privacy and security.

From an IP perspective, the 2nd generation of the Akida IP adds some big differentiators including a mechanism that can radically improve the performance and efficiency of processing multi-dimensional streaming data (video, audio, sensor) by orders of magnitude without compromising on accuracy. It also accelerates the most common use-case in AI – vision – in hardware much more effectively.

E.E: What are the ranges of products/services?

N.N:
BrainChip offers a range of products and services centered around its Akida neural processor technology. This includes:

Akida IP: These are BrainChip’s core offerings, representing the neuromorphic computing technology that powers edge AI applications. It has substantial benefits in multi-dimensional streaming data, accelerating structured state space models and vision.

MetaTF: A machine learning toolchain that integrates with TensorFlow and PyTorch, designed to facilitate the transition to neuromorphic computing for developers working in the convolutional neural network space.

AKD1000, AKD1500 Ref SoCs: Reference systems-on-chip that showcase the capabilities of the Akida technology and enable prototyping and small-volume production.

Akida Enablement Platforms/Dev Kits: Tools and platforms designed to support the development, training, and testing of neural networks on the Akida event domain neural processor.

IMG_2231_text-768x1024.jpg


E.E: What is the state of the market where you are currently active?

N.N:
We see three different groups of customers in the edge AI industry. The early adopters have already integrated AI acceleration into their edge application and are seeing the benefits of improved efficiency and the ability to run more complex models and use cases.

The second group are currently running AI models on the edge, but they are doing it without dedicated hardware acceleration. They are running on their MCU/MPU. It works but is not as efficient as it could be.

The last group we’re seeing have not yet integrated AI into their edge application. They are trying to understand the use cases, the unique value proposition that AI can unlock for them, and how to manage their data and train models.

We are actively engaged with customers at all three stages and understand the unique challenges and opportunities at each stage.

E.E: What can you tell us about market trends?

N.N:
As was evidenced at CES 2024, we’re seeing growth of AI everywhere. For this to be scalable and successful, the growth is happening not just in the data center, but increasingly at the Edge Network and growing to most IoT end devices. We’re at a point where the growth in energy-efficient compute capacity can now run complex use cases like object detection and segmentation on the Edge – not just at the Network Edge, but given technologies like BrainChip’s Akida, even on portable, fanless end-point devices.

By doing more compute at the end point, you can substantially reduce the bandwidth congestion to cloud, improve real-time response and most importantly improve privacy by minimizing or eliminating the transmission of sensitive data to cloud.

Models are becoming larger and more capable but storage and compute capacity at the Edge are constrained, so we see the need for efficient performance, massive compression and innovative solutions take precedence in hardware and software.


Generative AI has a great deal of momentum and it will only become monetizable if there is more done on the Edge. Even on smartphones, there are already thousands of generative AI applications.

There is a clear need to do more with less, which is fundamental to making AI economically viable. The costs include memory, compute, thermal management, bandwidth, and battery capacity, to name a few. Customers, therefore, are demanding more power-efficient, storage-efficient, and cost-effective solutions. They want to unlock use cases like object detection in the wild. In addition to limited or no connectivity, their use case might require running on battery for months. Traditional MPU/MCU-based solutions won’t allow this. BrainChip’s neuromorphic architecture is well positioned for these ultra-low-power scenarios.

E.E: What are the most innovative products/services marketed?

N.N:
We are seeing great progress in intuitive Human Machine Interface (HMI), where voice-based and vision-based communication with devices is on the rise – in consumer devices, smart home, automotive, remote healthcare and more. In automotive, for example, driver monitoring for emotion, focus and fatigue could help save lives and reduce losses. Remote ECG and predictive vital-signs monitoring can also reduce fatalities and improve quality of life. AI-driven fitness training is beginning to help individuals stay healthy.

There are lots more.

E.E: What estimations do you have for the beginning of 2024?

N.N:
We expect AI to truly go mainstream in 2024, but it’s still the tip of the iceberg.

The big transition you will see is the more mainstream adoption of Edge AI – without it, pure cloud-based solutions, especially with Generative AI, would be cost-prohibitive. We therefore see the move towards Small Language Models (SLMs) that draw from Large Language Models (LLMs) to fit better into Edge devices while still providing the accuracy and response time that is expected.

In short, the AI innovation is moving to the Edge, and in 2024, you will see this coming together clearly.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 95 users

IloveLamp

Top 20
1000014703.jpg
 
  • Like
  • Fire
  • Love
Reactions: 37 users

IloveLamp

Top 20
New interview with our CMO Nandan Nayampally:


IMG_2177_head.jpg

INTERVIEWSTECH
·APRIL 1, 2024·6 MIN READ

BRAINCHIP, MAKING AI UBIQUITOUS​

BrainChip is the worldwide leader in on-chip edge AI processing and learning technology, that enables faster, efficient, secure, and customizable intelligent devices untethered from the cloud. The company’s first-to-market neuromorphic processor, AkidaTM, mimics the human brain, the most efficient inference and learning engine known, to analyze only essential sensor inputs at the point of acquisition, executing only necessary operations and therefore, processing data with unparalleled efficiency and precision. This supports a distributed intelligence approach keeping machine learning local to the chip, independent of the cloud, dramatically reducing latency, while simultaneously improving privacy and data security.

The Akida neural processor is designed to provide a complete ultra-low power Edge AI network processor for vision, audio, smart transducers, vital signs and, broadly, any sensor application.

BrainChip’s scalable solutions can be used standalone or integrated into systems-on-chip to execute today’s models and future networks directly in hardware, empowering the market to create much more intelligent, cost-effective devices and services universally deployable across real-world applications in connected cars, healthcare, consumer electronics, industrial IoT, smart agriculture and more, including use in a space mission and under the most stringent conditions.

BrainChip is the foundation for cost-effective, fanless, portable, real-time Edge AI systems that can offload the cloud, helping curb the rapidly growing carbon footprint of data centers. In addition, Akida’s unique capability to learn locally on device reduces the need to retrain models in the cloud, whose skyrocketing cost is a barrier to the growth of AIoT.


Interview with Nandan Nayampally, CMO at BrainChip.

Easy Engineering: What are the main areas of activity of the company?

Nandan Nayampally:
BrainChip is focused on AI at the Edge. The vision of the company is to make AI ubiquitous; its mission is therefore to enable every device to have on-board AI acceleration, the key to which is extremely energy-efficient yet performant neural network processing. The company has been inspired by the human brain, the most efficient inference and learning engine, to build neuromorphic AI acceleration solutions. It delivers this as IP that can be integrated into customers’ Systems on Chip (SoCs). To achieve this, BrainChip has built a highly configurable, event-based neural processor unit that is extremely energy-efficient and has a small footprint. It is complemented by BrainChip’s model compilation tools in MetaTF™ and its silicon reference platforms, which customers can use to develop initial prototypes and then take to market.

BrainChip continues to invest heavily in next-generation neuromorphic architecture to stay ahead of the current AI landscape, to democratize GenAI, and to pave the path to Artificial General Intelligence (AGI).

E.E: What’s the news about new products/services?

N.N:
Built in collaboration with VVDN Technologies, the Akida Edge Box is designed to meet the demanding needs of retail and security, Smart City, automotive, transportation and industrial applications. The device combines a powerful quad-core CPU platform with Akida AI accelerators to provide a huge boost in AI performance. The compact, light Edge Box is cost-effective and versatile, with built-in Ethernet and Wi-Fi connectivity, HD display support, extensible storage and USB interfaces. BrainChip and VVDN are finalizing the set of AI applications that will run out of the box. With the ability to personalize and learn on device, the box can be customized per application and per user without the need for cloud support, enhancing privacy and security.

From an IP perspective, the 2nd generation of the Akida IP adds some big differentiators including a mechanism that can radically improve the performance and efficiency of processing multi-dimensional streaming data (video, audio, sensor) by orders of magnitude without compromising on accuracy. It also accelerates the most common use-case in AI – vision – in hardware much more effectively.

E.E: What are the ranges of products/services?

N.N:
BrainChip offers a range of products and services centered around its Akida neural processor technology. This includes:

Akida IP: This is BrainChip’s core offering, the neuromorphic computing technology that powers edge AI applications. It has substantial benefits for multi-dimensional streaming data, accelerating structured state-space models and vision.

MetaTF: A machine learning toolchain that integrates with TensorFlow and PyTorch, designed to facilitate the transition to neuromorphic computing for developers working in the convolutional neural network space.

AKD1000, AKD1500 reference SoCs: Reference systems-on-chip that showcase the capabilities of the Akida technology and enable prototyping and small-volume production.

Akida Enablement Platforms/Dev Kits: Tools and platforms designed to support the development, training, and testing of neural networks on the Akida event domain neural processor.

IMG_2231_text-768x1024.jpg


E.E: What is the state of the market where you are currently active?

N.N:
We see three different groups of customers in the edge AI industry. The early adopters have already integrated AI acceleration into their edge application and are seeing the benefits of improved efficiency and the ability to run more complex models and use cases.

The second group are currently running AI models on the edge, but they are doing it without dedicated hardware acceleration. They are running on their MCU/MPU. It works but is not as efficient as it could be.

The last group we’re seeing have not yet integrated AI into their edge application. They are trying to understand the use cases, the unique value proposition that AI can unlock for them, and how to manage their data and train models.

We are actively engaged with customers at all three stages and understand the unique challenges and opportunities at each stage.

E.E: What can you tell us about market trends?

N.N:
As was evidenced at CES 2024, we’re seeing growth of AI everywhere. For this to be scalable and successful, the growth is happening not just in the data center, but increasingly at the Network Edge, and it is extending to most IoT end devices. We’re at a point where the growth in energy-efficient compute capacity lets complex use cases like object detection and segmentation run on the Edge – not just at the Network Edge but, given technologies like BrainChip’s Akida, even on portable, fanless end-point devices.

By doing more compute at the end point, you can substantially reduce the bandwidth congestion to cloud, improve real-time response and most importantly improve privacy by minimizing or eliminating the transmission of sensitive data to cloud.
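The bandwidth argument is easy to make concrete with a back-of-envelope calculation. All numbers below are illustrative assumptions, not measured figures from any product:

```python
# Back-of-envelope comparison (illustrative numbers only): streaming raw
# video frames to the cloud vs. sending only edge-computed detections.

# Assumed camera: 640x480 grayscale at 10 fps, 1 byte per pixel.
raw_bytes_per_sec = 640 * 480 * 10            # ~3 MB/s uncompressed

# Assumed edge output: ~5 detections/s, ~64 bytes each (label, box, time).
edge_bytes_per_sec = 5 * 64                   # 320 B/s

reduction = raw_bytes_per_sec / edge_bytes_per_sec
print(f"{reduction:.0f}x less data sent upstream")
```

Even with video compression on the raw stream, the gap remains orders of magnitude, and the sensitive raw footage never leaves the device.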

Models are becoming larger and more capable, but storage and compute capacity at the Edge are constrained, so we see the need for efficient performance, massive compression and innovative solutions taking precedence in hardware and software.


Generative AI has a great deal of momentum, and it will only become monetizable if more is done on the Edge. Even on smartphones, there are already thousands of generative AI applications.

There is a clear need to do more with less, which is fundamental to making AI economically viable. The costs include memory, compute, thermal management, bandwidth, and battery capacity, to name a few. Customers, therefore, are demanding more power-efficient, storage-efficient and cost-effective solutions. They want to unlock use cases like object detection in the wild, where, in addition to limited or no connectivity, the use case might require running on battery for months. Traditional MPU/MCU-based solutions won’t allow this. BrainChip’s neuromorphic architecture is well positioned for these ultra-low-power scenarios.

E.E: What are the most innovative products/services marketed?

N.N:
We are seeing great progress in intuitive Human Machine Interfaces (HMI), where voice-based and vision-based communication with devices is on the rise – in consumer devices, smart home, automotive, remote healthcare and more. For example, in automotive, driver monitoring for emotion, focus and fatigue could help save lives and reduce losses. Remote ECG and predictive vital-signs monitoring can improve not just survival rates but quality of life. AI-driven fitness training is beginning to help individuals stay healthy.

There are lots more.

E.E: What estimations do you have for the beginning of 2024?

N.N:
We expect AI to truly go mainstream in 2024, but it’s still the tip of the iceberg.

The big transition you will see is the more mainstream adoption of Edge AI – without it, pure cloud-based solutions, especially with Generative AI, would be cost-prohibitive. We therefore see a move towards Small Language Models (SLMs) that draw from Large Language Models (LLMs) to fit better into Edge devices while still providing the accuracy and response time that is expected.
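One common way an SLM "draws from" an LLM is knowledge distillation: the small model is trained to match the large model's temperature-softened output distribution. The sketch below uses made-up logits and pure Python to show the core computation; it is illustrative only, not any specific vendor's pipeline:

```python
import math

# Toy knowledge-distillation signal: KL divergence between a large
# "teacher" model's softened outputs and a small "student" model's.
# All logit values below are made-up illustrations.

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [4.0, 1.0, 0.5]   # large LLM (assumed values)
student_logits = [3.0, 1.5, 0.2]   # small SLM (assumed values)

T = 2.0  # higher temperature exposes the teacher's relative preferences
loss = kl_divergence(softmax(teacher_logits, T), softmax(student_logits, T))
```

Minimizing this loss over many examples pushes the student toward the teacher's behavior at a fraction of the parameter count, which is what makes Edge deployment of language models tractable.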

In short, the AI innovation is moving to the Edge, and in 2024, you will see this coming together clearly.
Great interview, possibly the best one. I like this bit

E.E: What estimations do you have for the beginning of 2024?

N.N: We expect AI to truly go mainstream in 2024, but it’s still the tip of the iceberg.
 

AARONASX

Holding onto what I've got
Great interview, possibly the best one. I like this bit

E.E: What estimations do you have for the beginning of 2024?

N.N: We expect AI to truly go mainstream in 2024, but it’s still the tip of the iceberg.

Hi ILL,

Just below that "We therefore see the move towards Small Language Models (SLMs) that draw from (Large Language Models)"

Is this possibly part of the reason why we're drawing on available funds with LDA?

From announcement:
1712005744656.png
 

stuart888

Regular
Lev Selector: sharp guy, does a great AI Weekly Update.



1712006333723.png
 

IloveLamp

Top 20

stuart888

Regular
Advanced RAG! "Just RAG It". Thumbs up for everything neuromorphic-based, like BrainChip.

All sorts of cutting-edge technology explained here. Like Small-to-Big Retrieval.
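The small-to-big idea can be sketched in a few lines: match the query against small child chunks for precision, then hand the generator the larger parent passage for context. All documents, names and the naive word-overlap scoring below are illustrative:

```python
# Toy "small-to-big" retrieval: score small child chunks against the
# query, then return the larger parent passage they belong to.
# Data and scoring are simplified illustrations.

parents = {
    "doc1": "Neuromorphic processors are event-driven. They compute only "
            "when input changes, which saves power at the edge.",
    "doc2": "Cloud inference offers large models but adds latency and "
            "bandwidth cost for streaming sensor data.",
}

# Small chunks, each linked back to its parent passage.
children = [
    ("doc1", "neuromorphic processors are event-driven"),
    ("doc1", "compute only when input changes"),
    ("doc2", "cloud inference offers large models"),
    ("doc2", "adds latency and bandwidth cost"),
]

def retrieve(query):
    q = set(query.lower().split())
    # Score each small chunk by word overlap, keep the best chunk's parent.
    best_parent, _ = max(
        ((pid, len(q & set(text.split()))) for pid, text in children),
        key=lambda item: item[1],
    )
    return parents[best_parent]   # return the big parent chunk

context = retrieve("why are event-driven processors power efficient")
```

Real systems swap the word-overlap score for embedding similarity, but the retrieve-small, return-big structure is the same.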



1712008701269.png
 

stuart888

Regular
Lots of discussion lately on quantizing to 1, 2 or 4 bits. The video starts right at the quantization focus.

Via Lev Selector. He covers so much, all quickly, with lots of references to more detail on every subject. Just look at what he covers in 30 minutes.
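For anyone new to the topic, here is a minimal sketch of what 4-bit symmetric quantization does to a weight vector: floats are mapped to signed integers in -8..7 and reconstructed with a single scale factor. This is an illustrative scheme, not any particular library's method:

```python
# Minimal symmetric 4-bit quantization sketch (illustrative only):
# map floats to signed integers in [-8, 7] with one shared scale.

def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7.0   # map max magnitude to 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.81, -0.35, 0.07, -0.70]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)   # close to the originals, 8x smaller storage
```

Each weight now needs 4 bits instead of 32, at the cost of small rounding errors; that trade-off is exactly why low-bit quantization matters so much for Edge deployment.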

1712009600871.png

 

stuart888

Regular
If you write code, watch David Ondrej. He shows you exactly how to do everything LLM, from very lengthy prompt text-to-code to everything you need so you can make money or improve your business.

He understands the Edge Neuromorphic focus of the industry and Brainchip.


 

stuart888

Regular
Great overview. Bubbly and knowledgeable. Some videos are pure code writing, hands-on.

LLMs make coding in English available to all who want to move ahead of those who don't keep learning and upgrading.



Brainchip is on the correct path.
 