BRN Discussion Ongoing

Kachoo

Regular
So what is people's take on how Socionext does not already have an IP licence? They have literally designed products that rely on Akida.

I ask because, if they have somehow avoided the need, what sort of deal has been struck?

Why would they not sign it now rather than later, as they are not hiding the relationship or the product?

Because if they can get away with not having one, is this the same situation that will apply to Prophesee?

They had some agreement in 2019; it could have been part of the deal.
 
  • Like
Reactions: 7 users

Violin1

Regular
@Sirod69
Hey Sirod - hope you are doing ok. Sending you good, healthy recovery vibes.
1000 eyes.
 
  • Like
  • Love
Reactions: 33 users
Define need? And define AGI? 😄

In regards to competition with other companies, somebody is going to develop something that we can debate is AGI, and Brainchip had better be ahead of it. I think we would all love Brainchip to convince the world that they have the first AGI, because then we would see something unprecedented with the Brainchip stock price.

We're probably going to discuss whether it's really AGI forever, with the flat earthers claiming AGI is flat and politicians denying it's more intelligent than them.

Do we need it as humans? No, not as of now, but it's debatable whether it's nice to have, and we may become addicted to it.
Someone else will develop A.G.I. before us..

But it would be an energy-sapping monster, with servers etc. and dependent on constant "connection".

When BrainChip talks of AGI, they are talking about true independent AGI, with no connections, running at extremely low wattage.

As to my definition, it's the full monty: the "capability" for human-like learning, reasoning, perception etc..

But BrainChip certainly doesn't need that to be hugely successful, as we will see in time..

That's just a justifiable measure of my confidence in the future of our Company.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 12 users
I'm going to share something with you that I find immensely valuable, and it's something other than Brainchip, which is of course also immensely valuable. I have two primary tools that I use when I do stock research: Inoreader and Gurufocus.

Here I'm going to focus on what I think could strengthen the already very strong 1000 eyes here on this forum and not least help you in other kinds of research.

Inoreader is at first glance just a news aggregator with a relatively simple interface. It comes as a web interface and an app, which have almost the same functionality. I use both, for different purposes.

Inoreader lets you subscribe to virtually anything, given it's publicly available on the Internet... and sometimes it accidentally lets me read what's behind paywalls, without me being aware of it at all (when I'm clicking the "coffee cup" icon)! So, I'm very sorry if I ever accidentally posted something in this forum from behind a paywall.

You can subscribe to RSS, YouTubers, Twitter channels, Telegram channels, Facebook pages, Reddit feeds and virtually any webpage.

In the web interface, click "Add new" -> "Feed" and search amongst millions of feeds, or paste a webpage link and make it a feed with their AI.

If you have the paid version, click the deceptively anonymous "Search" box and search for something, let's say "Brainchip". Now, don't stop there: click the small arrow down beside "All Articles", then click "All Public Articles" and then "All sites". Now you have just searched all the feeds Inoreader knows of for "Brainchip".

And you found a new article about Brainchip:

Now, don't stop there: click "+Monitor Keyword" and you have just created a small channel only about Brainchip. You can even make it filter out duplicates, set up notification rules, etc.

Is it in Chinese? Click the translate icon.

In the app, click the read-aloud icon and have it read articles in an almost perfect synthetic voice.

This was just a small primer; there's more to Inoreader than that. I hope you find it useful.
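For anyone curious what a keyword monitor does under the hood, it is essentially a filter over feed items. A minimal, stdlib-only sketch (the sample feed below is made up, not a real Inoreader feed, and `monitor_keyword` is just an illustrative name):

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed used only to demonstrate the idea.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>Brainchip signs new partner</title><link>http://example.com/1</link></item>
  <item><title>Unrelated market news</title><link>http://example.com/2</link></item>
</channel></rss>"""

def monitor_keyword(rss_xml: str, keyword: str) -> list:
    """Return feed items whose title contains the keyword (case-insensitive)."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        if keyword.lower() in title.lower():
            hits.append({"title": title, "link": item.findtext("link", default="")})
    return hits

print(monitor_keyword(SAMPLE_RSS, "Brainchip"))
```

A real monitor would also fetch the feed over HTTP on a schedule and deduplicate, but the filtering core is this simple.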
 
  • Like
  • Fire
  • Wow
Reactions: 38 users
Someone else will develop A.G.I. before us..

[…]

That's just a justifiable measure of my confidence in the future of our Company.
Still, what is AGI?

I suppose that a human is GI, but what about a dolphin? A Pig? A Dog? A Cat? At what level does General Intelligence stop?

Where does AGI start?

Could Akida-P maybe compete with a cat in regards to intelligence, although it's a significantly different kind of intelligence?

I understand that wet-ware neural networks operate at very low frequencies, something like 50–70 Hz, but have many more neurons. Supposedly a cat has a bit more than 500 million neurons. The previous Akida 1.0 could be connected to have up to around 70 million neurons, but operating at hundreds of MHz. How about Akida 2.0 P, performing 50 TFLOPS (the Akida 2.0 kind)?

I think the debate about whether AGI is here starts now and will last until long after AGI has surpassed GI.
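A rough back-of-the-envelope version of the comparison above (all numbers are the loose estimates quoted in the post, and the 300 MHz clock is my own stand-in for "hundreds of MHz"; this is a naive upper bound, not a benchmark):

```python
# Order-of-magnitude comparison using the figures quoted above.
cat_neurons = 500e6       # ~500 million neurons (quoted estimate)
cat_rate_hz = 60          # wetware firing rate, middle of the 50-70 Hz range

akida_neurons = 70e6      # Akida 1.0, quoted "up to ~70 million" (multi-chip)
akida_clock_hz = 300e6    # assumed value for "hundreds of MHz"

# Potential spike events per second in the cat's brain:
cat_events = cat_neurons * cat_rate_hz
# Naive upper bound on neuron-updates per second for the silicon side:
akida_cycles = akida_neurons * akida_clock_hz

print(f"cat:   {cat_events:.1e} events/s")
print(f"akida: {akida_cycles:.1e} neuron-cycles/s (upper bound)")
```

The silicon number is far larger, but it counts clock cycles, not meaningful spikes; real throughput depends on how neurons are time-multiplexed, which is exactly why the two kinds of intelligence are hard to compare.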
 
  • Like
  • Love
  • Fire
Reactions: 12 users

TopCat

Regular
Just out of interest, has anyone here ever raced against GT Sophy? I haven't, as I'm not into gaming and so on, but I believe it's free to try until the end of March.


While GT Sophy shares much with these previous milestones, especially the use of neural networks and reinforcement learning, it differs from them in important ways.

First, Gran Turismo is a physically realistic simulation of motorsports. Racing is a very unforgiving sport; to be competitive a driver must be pushing the car to its absolute traction limits, and one small mistake leads to disaster. So the agent must learn not only how to drive fast, but how to master tactical driving skills like slipstream passing, all while making its decisions in real time.

At the same time, because competitors can physically interact with each other, there is an element of sportsmanship that is not present in other video games, and that is very hard to teach to an agent. The agent had to learn how to play in the same physical space as the humans without being able to practice against actual humans (there are too few humans at these elite levels). Unlike other games, playing against yourself does not fully prepare you to play against humans; slight errors or differences by the human can lead to penalties for the agent if it isn't prepared.
 
  • Like
Reactions: 3 users

IloveLamp

Top 20
Screenshot_20230321_055004_LinkedIn.jpg

Screenshot_20230321_055623_LinkedIn.jpg

"In the last two years at Tenstorrent, Keller has brought on board new chief customer officer David Bennett, formerly president of Lenovo Japan and CEO of NEC Personal Computers, who is also an AMD veteran. Keller has also brought in a new operations team and has effectively become the face of the company," Grim said.

Screenshot_20230321_055919_LinkedIn.jpg
 
Last edited:
  • Like
  • Fire
Reactions: 27 users
Has anyone come across Acusensus? Just saw them in the news. Australian company using AI to monitor drunk driving and phone usage.
Any links or has BRN courted them?
FC129E85-9186-4C52-B479-F38B5997BC2B.jpeg
 
  • Like
  • Thinking
  • Fire
Reactions: 8 users

Baisyet

Regular
  • Like
  • Fire
Reactions: 7 users

chapman89

Founding Member
So what's the consensus on whether Mercedes-Benz is going to have vehicles containing Akida in 2024? I've read a few reports now saying it "may" contain the tech that was used in the EQXX (Akida).



“Mercedes-Benz have also been eager to reveal that this new electric saloon will be the pioneer of the company’s proprietary MB Operating System, which will be used instead of the Google connectivity options integrated into the vehicles of competitors such as Volvo. This homemade infotainment system may even feature the experimental processor being developed by Mercedes that executes tasks in “neuromorphic spikes”, which essentially translates to the processor steadily accumulating tasks until a certain quantity has been reached, when they are all carried out simultaneously”
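The behaviour that quote describes, steadily accumulating incoming events until a threshold is reached and then firing, is essentially the integrate-and-fire model used in spiking neural networks. A toy sketch of the idea (threshold and input values are arbitrary, purely for illustration):

```python
def integrate_and_fire(inputs, threshold=3.0):
    """Accumulate incoming event magnitudes; emit a spike (1) whenever the
    accumulated potential crosses the threshold, then reset to zero."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential += x
        if potential >= threshold:
            spikes.append(1)   # threshold reached: fire
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # keep accumulating
    return spikes

print(integrate_and_fire([1, 1, 1, 0, 2, 2]))  # → [0, 0, 1, 0, 0, 1]
```

The energy win comes from only doing work when a spike actually occurs, rather than on every clock tick.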
 
  • Like
  • Fire
  • Love
Reactions: 66 users
So what's the consensus on whether Mercedes-Benz is going to have vehicles containing Akida in 2024? […]
I decided that I would hold my breath when the neuromorphic presentation that won the competition is released.

Until then 2024 seems to be the best guess.

My speculation that the presentation would be released in March is fast running out of days.😂🤣😂

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 26 users

cosors

👀
For me it's 2024
 
  • Like
Reactions: 7 users

stockduck

Regular
So what's the consensus on whether Mercedes-Benz is going to have vehicles containing Akida in 2024? […]
...that's a nice one. Well, maybe I don't get it right, but someone could read that as a direct IP licence coming up, if these developments by Mercedes are a success and if Akida IP is involved, right? So, too many "if" statements at the moment.
 
  • Like
  • Thinking
Reactions: 9 users

Steve10

Regular
I am expecting BRN licence agreements with the top 5 MCU players.

Renesas = tick
NXP
Microchip
STM
Infineon

ChatGPT has mentioned BRN partnerships with NXP, STM & Infineon.

Today I asked chatGPT about Microchip.

As far as I know, Microchip Technology Inc. and BrainChip Holdings Ltd. announced a strategic partnership in 2019. The partnership aimed to develop and market improved, cost-effective hardware solutions for accelerating the deployment of artificial intelligence (AI) applications in a variety of industries.

Under the partnership, Microchip and BrainChip combined their expertise in hardware design, system-level integration, and machine learning to create an integrated solution. Microchip contributed its microcontrollers and field-programmable gate array (FPGA) devices, while BrainChip provided its Akida Neuromorphic System-on-Chip (NSoC) device, which is designed for low-power AI processing.

The resulting solution is intended to offer superior performance in applications such as object detection, facial recognition, and machine vision. The joint solution is aimed at addressing a range of applications in the industrial, automotive, and consumer markets.

Overall, the Microchip and BrainChip partnership aims to create a solution that can help accelerate the deployment of AI applications in a range of industries while minimizing cost and complexity.

NXP, Microchip, STM & Infineon will not compromise their MCU market share by allowing Renesas to dominate the AI MCU market alone.

They will not be buying via their competitor Renesas so they will have to manufacture their own chips with Akida IP.

The FPGA market also appears a good fit for BRN tech.

The FPGA market is projected to reach US$23.1B by 2030 at a 14.2% CAGR.

Some of the big players in the FPGA market are:
Intel
Lattice Semiconductor
Microchip
Xilinx (AMD)
Cypress Semiconductor (Infineon)
Texas Instruments
Achronix Semiconductor
Quicklogic

FPGAs for AI and Machine Learning​

01/01/2023


Written by Al Mahmud Al Mamun

FPGA chips come with millions of logic gates and a reconfigurable architecture that can deliver strong solutions for artificial intelligence (AI) and machine learning (ML), enable optimization of entire processing pipelines, and fit neural network infrastructures.


Field-programmable gate array (FPGA) chips allow their logic gates to be reprogrammed, so configurations can be overwritten and custom circuits built. FPGAs are helpful for artificial intelligence (AI) and machine learning (ML) and, thanks to this reconfigurability, suit a wide range of applications. The chips accelerate development and data processing and are flexible, scalable, and reusable in embedded systems.

The global AI market size was estimated at $136.6 billion in 2022 (the base year for estimation was 2021, with historical data from 2017 to 2020) and is expected to reach $1.8 trillion by 2030, at a 38.1% compound annual growth rate (CAGR) (forecast period 2022 to 2030) [1]. The global market for FPGA chips is growing due to the increasing adoption of AI and ML technologies for edge computing and in data centers. The global FPGA market size was estimated at $6 billion in 2021 and is expected to reach $14 billion by 2028, at a 12% CAGR (forecast period 2022 to 2028) [2].
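As a sanity check on those projections, compound growth follows future = present × (1 + CAGR)^years. At 38.1% over eight years the AI base figure multiplies by roughly 13×, which puts it in the trillions (so the commonly misprinted "$1.8 billion by 2030" can only mean $1.8 trillion), and the FPGA figure lands close to the quoted 2028 number:

```python
def project(present, cagr, years):
    """Compound-growth projection: present * (1 + cagr) ** years."""
    return present * (1 + cagr) ** years

ai_2030   = project(136.6, 0.381, 8)   # $B, 2022 -> 2030 at 38.1% CAGR
fpga_2028 = project(6.0,   0.12,  7)   # $B, 2021 -> 2028 at 12% CAGR

print(f"AI market 2030:   ${ai_2030:,.0f}B (~$1.8 trillion)")
print(f"FPGA market 2028: ${fpga_2028:.1f}B (quoted as ~$14B)")
```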

BRIEF REVIEW OF FPGA​

FPGA systems combine an array of logic blocks, ways to program those blocks, and the interconnections among them. They can be customized for multiple uses by changing the configuration using hardware description languages (HDLs) such as Verilog and VHDL. Proper reconfiguration enables them to perform nearly as well as application-specific integrated circuits (ASICs), and the chips can outperform central processing units (CPUs) and graphics processing units (GPUs) at data-processing acceleration.

Xilinx invented FPGAs in 1985; their first FPGA was the XC2064 (Figure 1), which offered 800 gates and was produced on a 2.0µ process [3]. In the late 1980s, the Naval Surface Warfare Center initiated a research project proposed by Steve Casselman to build an FPGA with more than 600,000 reprogrammable gates, which was successful, and the design was patented in 1992 [4]. In the early 2000s, FPGAs reached millions of reprogrammable gates, and Xilinx introduced the Virtex XCVU440 All Programmable 3D IC in 2013, which offers 50 million (equivalent) ASIC logic gates.


Figure 1
The XC2064 FPGA with 800 gates and a 2.0µ process

The FPGA chip manufacturers have their own architecture for their products that generally consist of configurable logic blocks, configurable I/O blocks, and programmable interconnect. The FPGAs have three basic types of programmable elements including static RAM, anti-fuses, and flash EPROM. The chips are available with several system gates, including shift registers, logic cells, and look-up tables.

To select an FPGA, you should analyze three things: memory, performance, and interface requirements. FPGA chips are available with several types of memory, including CAM, Flash, RAM, dual-port RAM, ROM, EEPROM, FIFO, and LIFO. The logic families of FPGA chips include crossbar switch technology (CBT), gallium arsenide (GaAs), integrated injection logic (I2L), and silicon on sapphire (SOS). There are four basic IC package types for FPGA chips: the ball grid array (BGA), the quad flat package (QFP), the single in-line package (SIP), and the dual in-line package (DIP).
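The look-up tables mentioned above are the basic logic element of an FPGA: an N-input LUT is nothing more than a 2^N-entry truth table indexed by its inputs. A small software illustration of that idea (the helper name `make_lut` is made up; here a 4-input LUT is "programmed" to act as an AND gate):

```python
def make_lut(truth_table):
    """An N-input LUT is a 2**N-entry table; the input bits form the index."""
    def lut(*bits):
        index = 0
        for b in bits:
            index = (index << 1) | b   # pack bits into a table index
        return truth_table[index]
    return lut

# Program a 4-input LUT as a 4-input AND gate:
# only index 0b1111 (all inputs high) maps to 1.
and4 = make_lut([0] * 15 + [1])

print(and4(1, 1, 1, 1))  # → 1
print(and4(1, 0, 1, 1))  # → 0
```

Reprogramming the FPGA amounts to loading different truth tables (and routing), which is why the same silicon can implement arbitrary logic.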

FPGA CHIPS FOR AI AND MACHINE LEARNING

AI and ML play key roles in the modern technology revolution in every prospective application area that involves large amounts of data and real-time data processing. 5G technology has already arrived, bringing high speeds and vast data-transfer capability that create new opportunities for AI and ML. For this rapidly growing technology, general computing systems are not enough, and parallel computing becomes necessary. Technology environments are updating and changing rapidly, so using ASICs becomes difficult and very costly. FPGA chips, with their reconfigurable architecture, are well suited to such purposes, and developers are deploying FPGA solutions (Figure 2).


Figure 2
The FPGA chips with the ability to re-configurable architecture are the best solutions for AI and ML.

Automotive and industrial control systems need to collect data from sensors or measuring devices, process the collected data, and take action by deriving command functions through several elements. The control system is associated with the instrumentation used for real-time data processing, manufacturing, and process control, and varies in size from a few modular panel-mounted controllers to large interconnected distributed control systems. FPGAs enable optimization of entire process-controller applications in the industrial and automotive fields. The latest FPGA chips open up opportunities to build controller systems designed for any specific application.

ML algorithms and artificial neural networks (ANNs) have become more sophisticated and require huge amounts of data for training and validation to gain higher accuracy and minimize error rates. The systems are generally task-specific and span almost every type of application, so they need reprogrammable solutions that fit each purpose, and using an FPGA to build a neural network infrastructure delivers higher performance.

AUTOMOTIVE SOLUTIONS​

With the revolution of AI and ML, the automotive industry is also growing rapidly, and it is mostly application-specific across its different areas. To support the modern automotive industry, we need application-specific chip configurations to accelerate data processing. Because the industry increasingly requires a wide variety of applications, it is difficult to find an application-specific chip configuration for every area. In this situation, an FPGA with an application-specific configuration can be the best solution and can deliver high performance with scalability.

Many manufacturers offer FPGA chips that can be reconfigured for application-specific automotive solutions. The Xilinx XA Artix-7 (Figure 3) is an automotive-grade FPGA optimized for the lowest cost and power, with small form-factor packaging for automotive applications, allowing designers to leverage more logic per watt. Xilinx's Zynq UltraScale+ MPSoC devices integrate a 64-bit quad-core Arm Cortex-A53 and dual-core Arm Cortex-R5 processing system. These flexible and scalable FPGA solutions are well suited to automotive platforms, including driver assistance and automated driving systems, and help accelerate design productivity through a strong network of automotive-specific third-party ecosystems.


Figure 3
Xilinx XA Artix-7 is an automotive-grade FPGA optimized for low cost and power, offering up to about 100,000 logic cells.

NLP AND REAL-TIME VIDEO SOLUTION​

Natural language processing (NLP) requires processing and analyzing large amounts of data, involving speech recognition, understanding, and generation. Real-time video analytics is a powerful technology that allows monitoring and identification of violations, troubling behaviors, and unusual actions. Video analytics spans several areas of video processing such as object detection, facial recognition, and anomaly detection. FPGA chips with an efficient configuration are very effective for NLP and real-time video-analytics systems that utilize machine learning algorithms and ANNs. Many manufacturers offer FPGAs that provide high-level processing performance for natural language and real-time video.

The Intel Stratix 10 NX 2100 (Figure 4) is Intel's first AI-optimized FPGA for high-bandwidth, low-latency AI acceleration applications. The chips suit several NLP applications, such as speech recognition and speech synthesis, and real-time video-analytics applications such as content recognition and video pre- or post-processing. You can use these FPGAs for AI-based security applications including fraud detection, deep packet inspection, and congestion-control identification. They support extending large AI models across multi-node solutions.


Figure 4
Intel Stratix 10 NX 2100 FPGA embeds AI Tensor Blocks and supports extending AI+ large models across the multi-node solution.

The Stratix 10 NX FPGA embeds AI Tensor Blocks tuned for common matrix-matrix or vector-matrix multiplications, with capabilities designed to work efficiently in AI computations. Its integrated memory stacks allow large, persistent AI models to be stored on-chip, ensuring lower latency, with large memory bandwidth to prevent memory-bound performance challenges in large models. The chip provides up to 96 non-return-to-zero (NRZ) and 36 pulse-amplitude-modulation (PAM4) transceivers, with maximum data rates of 28.9Gbps for NRZ and 57.8Gbps for PAM4.

The PAM4 transceivers implement multi-node AI inference solutions, reducing or eliminating bandwidth connectivity as a limiting factor in multi-node designs and providing scalable connectivity and flexible adaptability to your requirements. The transceivers incorporate hard IPs such as PCIe Gen3, 50/100G Ethernet, and 10/25/100G Ethernet.
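Those NRZ and PAM4 figures follow directly from bits per symbol: NRZ carries one bit per symbol, while PAM4's four amplitude levels carry two, so the same 28.9 GBaud symbol rate yields exactly double the data rate. A quick check:

```python
import math

def data_rate_gbps(symbol_rate_gbaud, levels):
    """Data rate = symbol rate * log2(number of amplitude levels)."""
    return symbol_rate_gbaud * math.log2(levels)

symbol_rate = 28.9  # GBaud, matching the quoted NRZ line rate

nrz  = data_rate_gbps(symbol_rate, 2)   # NRZ: 2 levels, 1 bit/symbol
pam4 = data_rate_gbps(symbol_rate, 4)   # PAM4: 4 levels, 2 bits/symbol

print(f"NRZ:  {nrz:.1f} Gbps")   # 28.9
print(f"PAM4: {pam4:.1f} Gbps")  # 57.8
```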

INTELLIGENT EDGE SOLUTION​

In intelligent edges, data is generated, analyzed, interpreted, and addressed. Its major categories include operational technology edges, information technology edges, and IoT edges. It is a set of connected devices and systems that gather and analyze data and develop solutions related to data, users, or both.

An intelligent edge makes a business more efficient by reducing unexpected delays, costs, and risks. Deploying FPGAs provides good solutions for data load, tasks, and real-time operation. An AI engine on an FPGA resolves the compromise between performance and latency. The applicability of FPGAs to intelligent edges is proven by their configurability, low latency, parallel computing capability, and high flexibility.

You can use the iCE40 UltraPlus FPGA chip (Figure 5) from Lattice Semiconductor for your intelligent edge solutions. With 5,000 lookup tables, the chip is capable of implementing the neural networks for pattern matching necessary to bring always-on intelligence to the edge.


Figure 5
The iCE40 UltraPlus FPGA chip is designed with 5,000 lookup tables and delivers high performance in signal processing using DSP blocks and the soft neural network IPs and compiler for flexible AI/ML implementation.

The UltraPlus delivers the lowest-power AI and ML solutions with flexible interfaces and allows designers to eliminate the latency associated with cloud intelligence, at a lower cost. It can solve connectivity issues with a variety of interfaces and protocols for the rapidly growing system complexity of powering smart homes, factories, and cities. The FPGA chip provides low-power computation for higher levels of intelligence, and multiple packages are available to fit a wide range of application needs.

Lattice Semiconductor is expanding its mobile FPGA product family with the iCE40 UltraPlus, delivering 1.1Mbit RAM, twice the digital signal processing blocks, and improved I/O over previous generations. The family offers a flexible logic architecture in UP3K and UP5K densities, with 2,800 and 5,280 four-input LUTs respectively, customizable I/O, and up to 80Kbits of dual-port and 1Mbit of single-port embedded memory. The chip delivers high performance in signal processing using DSP blocks, plus soft neural network IPs and a compiler for flexible AI/ML implementation.

EMBEDDED VISION SOLUTION​

Embedded vision integrates a camera and a processing board, which opens up several new possibilities (Figure 6). These systems have a wide range of applications including autonomous vehicles, digital dermoscopy, medical vision, and other cutting-edge uses. Embedded vision systems can be deployed for special-purpose applications that require application-specific chips for processing and operation. An FPGA with an application-specific configuration can deliver high performance and efficiency for embedded vision solutions. With rapidly growing vision technology, FPGA chips enable powerful processing in a wide range of applications and deliver maximum processing capability through their flexible reconfiguration and performance.

You can select the PolarFire FPGA chip (Figure 7) from Microchip for your embedded vision system. The FPGA chips offer a variety of solutions for smart embedded vision such as video, imaging, and machine learning IP and tools for accelerating system designs. They come with cost-optimized architecture and power optimization capability. PolarFire FPGA chips support process optimizations for 100K/500K LE devices, transceiver performance optimized for 12.7Gbps, and 1.6Gbps I/Os supporting DDR4/DDR3/LPDDR3, LVDS-hardened I/O gearing logic with CDR. The PolarFire integrated hard IP includes DDR PHY, PCIe endpoint/root port, and crypto processor.


Figure 6
FPGA chips enable powerful processing that can deliver high performance and efficiency for embedded vision solutions.

The PolarFire FPGA family has five product models including MPF050 (logic elements 48K and total I/O 176), MPF100 (logic elements 109K and total I/O 296), MPF200 (logic elements 192K and total I/O 364), MPF300 (logic elements 300K and total I/O 512), and MPF500 (logic elements 481K and total I/O 584). The solutions deliver high performance in low-power and small form factors across the industrial, medical, broadcast, automotive, aerospace, and defense solutions.

CONCLUSIONS​

FPGA chips are lightweight, come in small form factors, consume very little power, and can process huge amounts of data faster than CPUs and GPUs. The chips are easy to deploy in the rapidly growing AI and ML fields. AI is everywhere, and hardware upgrades of a satellite are very expensive, whereas FPGAs provide long-term solutions with flexibility.

FPGA chips are a complete ecosystem solution, and System-on-Chip (SoC) FPGAs will expand their applicability with real-time compilation and automatic FPGA program generation for next-generation technology demands.

 
  • Like
  • Fire
  • Love
Reactions: 32 users

Steve10

Regular
Some interesting stock market analysis with chart showing average S&P500 trajectory following CPI peaks.

1679351021040.png


Clearly, the next few weeks will be the real test. It’s pretty clear after what happened to Credit Suisse (NYSE:CS) this week, with their share price plunging over 30% intraday on Wednesday and their 5-year CDS exploding to 700+ bps, that potential contagion fears over a dollar funding squeeze on EU banks and other big users of the Eurodollar markets are growing.

The important takeaway from all of this is that, while contagion risks are real and deflation risks are rising, the worse things get now, the more the Fed will do – bad news = good news – and more cowbell is on its way.

US Fed liquidity is rising.

1679351141285.png


 
  • Like
  • Fire
  • Thinking
Reactions: 31 users

Euks

Regular
I am expecting BRN licence agreements with the top 5 MCU players.

Renesas = tick
NXP
Microchip
STM
Infineon

ChatGPT has mentioned BRN partnerships with NXP, STM & Infineon.

Today I asked chatGPT about Microchip.

As far as I know, Microchip Technology Inc. and BrainChip Holdings Ltd. announced a strategic partnership in 2019. The partnership aimed to develop and market improved, cost-effective hardware solutions for accelerating the deployment of artificial intelligence (AI) applications in a variety of industries.

Under the partnership, Microchip and BrainChip combined their expertise in hardware design, system-level integration, and machine learning to create an integrated solution. Microchip contributed its microcontrollers and field-programmable gate array (FPGA) devices, while BrainChip provided its Akida Neuromorphic System-on-Chip (NSoC) device, which is designed for low-power AI processing.

The resulting solution is intended to offer superior performance in applications such as object detection, facial recognition, and machine vision. The joint solution is aimed at addressing a range of applications in the industrial, automotive, and consumer markets.

Overall, the Microchip and BrainChip partnership aims to create a solution that can help accelerate the deployment of AI applications in a range of industries while minimizing cost and complexity.

NXP, Microchip, STM & Infineon will not compromise their MCU market share by allowing Renesas to dominate the AI MCU market alone.

They will not be buying via their competitor Renesas so they will have to manufacture their own chips with Akida IP.

The FPGA market also appears a good fit for BRN tech.

FPGA market is projected to reach USD $23.1B in 2030 @ 14.2% CAGR.

Some of the big players in the FPGA market are:
Intel
Lattice Semiconductor
Microchip
Xilinx (AMD)
Cypress Semiconductor (Infineon)
Microchip
Texas Instruments
Achronix Semiconductor
Quicklogic

FPGAs for AI and Machine Learning​

01/01/2023

View attachment 32699
Written by Al Mahmud Al Mamun

FPGA chips come with a million logic gates and reconfigurable architecture that can deliver the best solutions for artificial intelligence (AI) and machine learning (ML), enable entire processing optimization, and fit for neural network infrastructures.​


Field-programmable gate array (FPGA) chips enable the reprogramming of logic gates that enable overwrite configurations and custom circuits. FPGAs are helpful for artificial intelligence (AI) and machine learning (ML) and are suitable for a wide area of applications through the re-configurable capability. The chips support accelerating development and data processing and are flexible, scalable, and reusable for the embedded systems.

The global AI market size was estimated at $136.6 billion in 2022 (the base year for estimation was 2021 and historical data from 2017 to 2020) and is expected to reach $1.8 billion by 2030, at a 38.1% compound annual growth rate (CAGR), (forecast period 2022 to 2030) [1]. The global market of FPGA chips is growing due to the increasing adoption of AI and ML technologies for edge computing in data centers. The global FPGA market size was estimated at $6 billion in 2021 and is expected to reach $14 billion by 2028, at a 12% CAGR, (forecast period 2022 to 2028) [2].

BRIEF REVIEW OF FPGA​

The FPGA chip systems combine an array of logic blocks, ways to program the logic blocks, and relationships among them. They can be customized for multiple uses by changing the specification of the configuration using hardware description languages (HDLs), such as Verilog and VHDL. Proper re-configuration enables them to perform nearly similar to application-specific integrated circuits (ASICs) and the chips can perform better than the central processing unit (CPU) and graphics processing unit (GPU) for data processing acceleration.

Xilinx invented the FPGA in 1985; its first device, the XC2064 (Figure 1), offered 800 gates and was produced on a 2.0µ process [3]. In the late 1980s, the Naval Surface Warfare Center funded a research project proposed by Steve Casselman to build an FPGA with more than 600,000 reprogrammable gates; the effort succeeded and the design was patented in 1992 [4]. By the early 2000s, FPGAs had reached millions of reprogrammable gates, and in 2013 Xilinx introduced the Virtex XCVU440 All Programmable 3D IC, which offers 50 million (equivalent) ASIC logic gates.

View attachment 32700
Figure 1
The XC2064 FPGA with 800 gates and a 2.0µ process

FPGA manufacturers each have their own product architectures, which generally consist of configurable logic blocks, configurable I/O blocks, and programmable interconnect. FPGAs use three basic types of programmable elements: static RAM, anti-fuses, and flash EPROM. The chips ship with various system resources, including shift registers, logic cells, and look-up tables.
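To make the look-up-table idea concrete, here is a minimal sketch in Python (purely illustrative; real LUTs are configured in silicon, not software) of how a 4-input LUT realizes any Boolean function as 16 stored output bits:

```python
class LUT4:
    """A 4-input look-up table: any Boolean function of 4 inputs
    is just 16 stored output bits, indexed by the input pattern."""
    def __init__(self, truth_table):
        assert len(truth_table) == 16
        self.table = truth_table

    def __call__(self, a, b, c, d):
        # Pack the four input bits into an index 0..15.
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.table[index]

# "Program" the LUT as a 4-input AND gate: output 1 only for 0b1111.
and4 = LUT4([0] * 15 + [1])
print(and4(1, 1, 1, 1))  # 1
print(and4(1, 0, 1, 1))  # 0

# Reconfiguring is just loading a different table, e.g. 4-input XOR (odd parity).
xor4 = LUT4([bin(i).count("1") % 2 for i in range(16)])
print(xor4(1, 1, 0, 0))  # 0
```

Reprogramming an FPGA amounts to rewriting thousands of such tables plus the routing between them.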

To select an FPGA, you should analyze three things: memory, performance, and interface requirements. FPGA chips are available with several types of memory, including CAM, Flash, RAM, dual-port RAM, ROM, EEPROM, FIFO, and LIFO. FPGA logic families include crossbar switch technology (CBT), gallium arsenide (GaAs), integrated injection logic (I2L), and silicon on sapphire (SOS). There are four basic IC package types for FPGA chips: the ball grid array (BGA), the quad flat package (QFP), the single in-line package (SIP), and the dual in-line package (DIP).

FPGA CHIPS FOR AI AND MACHINE LEARNING​

AI and ML play key roles in the modern technology revolution across application areas that involve large amounts of data and real-time processing. 5G has already arrived, bringing high speeds and vast data-transfer capacity that create new opportunities for AI and ML. For this rapidly growing technology, general-purpose computing systems are not enough, and parallel computing becomes necessary. Because technology environments update and change rapidly, relying on ASICs becomes difficult and very costly. FPGA chips, with their reconfigurable architecture, are well suited for such purposes, and developers are deploying FPGA solutions (Figure 2).

View attachment 32701
Figure 2
FPGA chips, with their reconfigurable architecture, are well suited for AI and ML.

Automotive and industrial control systems need to collect data from sensors or measuring devices, process it, and act on it by issuing commands through several elements. The control system is associated with the instrumentation used for real-time data processing, manufacturing, and process control, and varies in size from a few modular panel-mounted controllers to large interconnected distributed control systems. FPGAs enable optimization of entire process-controller applications in the industrial and automotive fields, and the latest FPGA chips open up opportunities to build controller systems tailored to specific applications.

ML algorithms and artificial neural networks (ANNs) have become more sophisticated and require huge amounts of data for training and validation to gain higher accuracy and minimize error rates. The systems are generally task-specific and touch almost every type of application, so they need reprogrammable solutions that fit each application's purpose; using an FPGA to build a neural network infrastructure delivers higher performance.

AUTOMOTIVE SOLUTIONS​

With the AI and ML revolution, the automotive industry is also growing rapidly. Automotive work is largely application-specific, and supporting a modern automotive use case requires an application-specific chip configuration to accelerate data processing. Because the industry increasingly spans a variety of applications, it is difficult to find an application-specific chip for every area. Here an FPGA, with its application-specific configuration, can be the best solution, delivering high performance with scalability.

Many manufacturers offer FPGA chips that can be reconfigured for application-specific automotive solutions. The Xilinx XA Artix-7 (Figure 3) is an automotive-grade FPGA optimized for the lowest cost and power, with small form-factor packaging for automotive applications, allowing designers to leverage more logic per watt. Xilinx's automotive-grade Zynq UltraScale+ MPSoC devices, by contrast, integrate a 64-bit quad-core ARM Cortex-A53 and dual-core ARM Cortex-R5-based processing system. These flexible and scalable solutions are well suited to automotive platforms, including driver assistance and automated driving systems, and help accelerate design productivity through a strong network of automotive-specific third-party ecosystems.

View attachment 32702
Figure 3
The Xilinx XA Artix-7 is an automotive-grade FPGA offering increased system performance through roughly 100,000 logic cells.

NLP AND REAL-TIME VIDEO SOLUTION​

Natural language processing (NLP) requires processing and analyzing large amounts of data for speech recognition, understanding, and generation. Real-time video analytics is a powerful technology that allows monitoring and identification of violations, troubling behaviors, and unusual actions, spanning video-processing tasks such as object detection, facial recognition, and anomaly detection. FPGA chips with an efficient configuration are very effective for NLP and real-time video-analytics systems that run machine learning algorithms and ANNs, and many manufacturers offer FPGAs that deliver high-level processing performance for natural language and real-time video.

The Intel Stratix 10 NX 2100 (Figure 4) is Intel’s first AI-optimized FPGA, targeting high-bandwidth, low-latency AI acceleration. The chips suit NLP applications such as speech recognition and speech synthesis, as well as real-time video-analytics applications such as content recognition and video pre- or post-processing. You can also use them for AI-based security applications, including fraud detection, deep packet inspection, and congestion identification, and they support extending large AI models across multi-node solutions.

View attachment 32703
Figure 4
Intel Stratix 10 NX 2100 FPGA embeds AI Tensor Blocks and supports extending AI+ large models across the multi-node solution.

The Stratix 10 NX FPGA embeds AI Tensor Blocks tuned for the common matrix-matrix and vector-matrix multiplications used in AI computation. Its integrated memory stacks allow large, persistent AI models to be stored on-chip, ensuring low latency and enough memory bandwidth to avoid memory-bound performance limits in large models. The device provides up to 96 non-return-to-zero (NRZ) transceivers at up to 28.9Gbps and 36 pulse-amplitude-modulation (PAM4) transceivers at up to 57.8Gbps.
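The tensor-block workloads mentioned above boil down to vector-matrix and matrix-matrix multiplication. A minimal pure-Python sketch of the vector-matrix case (illustrative only; the NX hardware performs this in parallel, fixed-point arithmetic):

```python
def vec_mat_mul(v, M):
    """Multiply a row vector v (length n) by an n x m matrix M.
    This multiply-accumulate pattern is the core operation a
    hardware tensor block accelerates."""
    n, m = len(M), len(M[0])
    assert len(v) == n
    return [sum(v[i] * M[i][j] for i in range(n)) for j in range(m)]

# A tiny dense neural-network layer: 3 inputs -> 2 outputs.
weights = [[1, -1],
           [2,  0],
           [0,  3]]
activations = [1, 2, 3]
print(vec_mat_mul(activations, weights))  # [5, 8]
```

Inference on a neural network is essentially this operation repeated layer by layer, which is why on-chip model storage and memory bandwidth dominate performance.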

The PAM4 transceivers enable multi-node AI inference solutions, reducing or eliminating connectivity bandwidth as a limiting factor in multi-node designs and providing scalable connectivity that adapts flexibly to your requirements. The transceivers incorporate hard IP such as PCIe Gen3, 50/100G Ethernet, and 10/25/100G Ethernet.

INTELLIGENT EDGE SOLUTION​

At the intelligent edge, data is generated, analyzed, interpreted, and acted upon. Its major categories include operational-technology, information-technology, and IoT edges. It is a set of connected devices and systems that gather and analyze data and develop solutions related to the data, its users, or both.

An intelligent edge makes a business more efficient by reducing unexpected delays, costs, and risks. Deploying FPGAs provides good solutions for data load, task execution, and real-time operation, and an AI engine on an FPGA helps resolve the compromise between performance and latency. The suitability of FPGAs for the intelligent edge comes from their configurability, low latency, parallel computing capability, and high flexibility.

You can use the iCE40 UltraPlus FPGA (Figure 5) from Lattice Semiconductor for intelligent edge solutions. With 5,000 lookup tables, the chip can implement the neural networks for pattern matching needed to bring always-on intelligence to the edge.

View attachment 32704
Figure 5
The iCE40 UltraPlus FPGA chip is designed with 5,000 lookup tables and delivers high performance in signal processing using DSP blocks and the soft neural network IPs and compiler for flexible AI/ML implementation.

The UltraPlus delivers very low-power AI and ML solutions with flexible interfaces and allows designers to eliminate the latency associated with cloud intelligence at lower cost. It addresses connectivity with a variety of interfaces and protocols for the rapidly growing system complexity of powering smart homes, factories, and cities. The chip provides low-power computation for higher levels of intelligence, and multiple packages are available to fit a wide range of application needs.

Lattice Semiconductor expanded its mobile FPGA product family with the iCE40 UltraPlus, delivering 1.1Mbit RAM, twice the digital signal processing (DSP) blocks, and improved I/O over previous generations. The family offers a flexible logic architecture in UP3K and UP5K variants with 2,800 and 5,280 four-input LUTs, respectively, customizable I/O, and up to 80Kbit dual-port plus 1Mbit single-port embedded memory. It delivers high signal-processing performance using its DSP blocks, with soft neural-network IP and a compiler for flexible AI/ML implementation.

EMBEDDED VISION SOLUTION​

Embedded vision integrates a camera and a processing board, opening up many new possibilities (Figure 6). Such systems have a wide range of applications, including autonomous vehicles, digital dermoscopy, medical vision, and other cutting-edge uses. Embedded vision systems deployed for a specific purpose require application-specific chips for processing and operation, and an FPGA with an application-specific configuration can deliver high performance and efficiency. As vision technology grows rapidly, FPGA chips enable powerful processing across a wide range of applications, offering maximum processing capability through flexible reconfiguration.

You can select a PolarFire FPGA (Figure 7) from Microchip for an embedded vision system. These chips offer a variety of solutions for smart embedded vision, including video, imaging, and machine-learning IP, plus tools for accelerating system design, with a cost-optimized architecture and power-optimization capability. PolarFire FPGAs support process optimizations for 100K-500K LE devices, transceiver performance optimized for 12.7Gbps, 1.6Gbps I/Os supporting DDR4/DDR3/LPDDR3, and LVDS-hardened I/O gearing logic with CDR. Integrated hard IP includes a DDR PHY, PCIe endpoint/root port, and a crypto processor.

View attachment 32705
Figure 6
FPGA chips enable powerful processing that can deliver high performance and efficiency for embedded vision solutions.

The PolarFire family has five models: MPF050 (48K logic elements, 176 total I/O), MPF100 (109K, 296), MPF200 (192K, 364), MPF300 (300K, 512), and MPF500 (481K, 584). These solutions deliver high performance in low-power, small form factors across industrial, medical, broadcast, automotive, aerospace, and defense markets.

CONCLUSIONS​

FPGA chips are designed to be lightweight, small in form factor, and very low in power consumption, and they can process huge amounts of data faster than CPUs and GPUs. They are easy to deploy in the rapidly growing AI and ML fields. AI is everywhere, and where hardware upgrades are prohibitively expensive, as on a satellite, reprogrammable FPGAs provide a flexible long-term solution.

FPGA chips offer a complete ecosystem solution, and System-on-Chip (SoC) FPGAs will expand their applicability with real-time compilation and automatic FPGA program generation for next-generation technology demands.

Don’t get sucked into the ChatGPT vortex Steve.

She/he is a dirty little lying bastard 🤥 most of the time 😂😂
 
  • Like
  • Haha
  • Love
Reactions: 35 users
If you add the following known facts together in my opinion you get Microchip already working with Brainchip:

1. Brainchip partnered with SiFive with announced compatibility with the x280 Intelligence Series,

2. Brainchip partnered with NASA,

3. Brainchip partnered with GlobalFoundries, and

4. Brainchip taping out AKD1500 minus the ARM Cortex-M4, plus

5. The following article:


January 30, 2023

NASA Recruits Microchip, SiFive, and RISC-V to Develop 12-Core Processor SoC for Autonomous Space Missions​


by Steven Leibson
NASA’s JPL (Jet Propulsion Lab) has selected Microchip to design and manufacture the multi-core High Performance Spaceflight Computer (HPSC) microprocessor SoC based on eight RISC-V X280 cores from SiFive with vector-processing instruction extensions organized into two clusters, with four additional RISC-V cores added for general-purpose computing. The project’s operational goal is to develop “flight computing technology that will provide at least 100 times the computational capacity compared to current spaceflight computers.” During a talk at the recent RISC-V Summit, Pete Fiacco, a member of the HPSC Leadership Team and JPL Consultant, explained the overall HPSC program goals.
Despite the name, the HPSC is not strictly a processor SoC for space. It’s designed to be a reliable computer for a variety of applications on the Earth – such as defense, commercial aviation, industrial robotics, and medical equipment – as well as being a good candidate for use in government and commercial spacecraft. Three characteristics that the HPSC needs beyond computing capability are fault tolerance, radiation tolerance, and overall platform security. The project will result in the development of the HPSC chip, boards, a software stack, and reference designs with initial availability in 2024 and space-qualified hardware available in 2025. Fiacco said that everything NASA JPL does in the future will be based on the HPSC.
NASA JPL set the goals for the HPSC based on its mission requirements to put autonomy into future spacecraft. Simply put, the tasks associated with autonomy are sensing, perceiving, deciding, and actuating. Sensing involves remote imaging using multi-spectral sensors and image processing. Perception instills meaning into the sensed data using additional image processing. Decision making includes mission planning that incorporates the vehicle’s current and future orientation. Actuation involves orbital and surface maneuvering and experiment activation and management.
Correlating these tasks with NASA’s overall objectives for its missions, Fiacco explained that the HPSC is designed to allow space-bound equipment to go, land, live, and explore extraterrestrial environments. Spacecraft also need to report back to earth, which is why Fiacco also included communications in all four major tasks. All of this will require a huge leap in computing power. Simulations suggest that the HPSC increases computing performance by 1000X compared to the processors currently flying in space, and Fiacco expects that number to improve with further optimization of the HPSC’s software stack.


It’s hard to describe how much of an upgrade the HPSC represents for NASA JPL’s computing platform without contrasting the new machine with computers currently operating off planet. For example, the essentially similar, nuclear-powered Curiosity and Perseverance rovers currently trundling around Mars with semi-autonomy are based on RAD750 microprocessors from BAE Systems. (See “Baby You Can Drive My Rover.”) The RAD750 employs the 32-bit PowerPC 750 architecture and is manufactured with a radiation-tolerant semiconductor process. This chip has a maximum clock rate of 200 MHz and represents the best of computer architecture circa 2001. Reportedly, more than 150 RAD750 processors have been launched into space. Remember, NASA likes to fly hardware that’s flown before. One of the latest space artifacts to carry a RAD750 into space is the James Webb Space Telescope (JWST), which is now imaging the universe in the infrared spectrum and is collecting massive amounts of new astronomical data while sitting in a Lagrange orbit one million miles from Earth. (That’s four times greater than the moon’s orbit.) The JWST’s RAD750 processor lopes along at 118 MHz.
Our other great space observatory, the solar-powered Hubble Space Telescope (HST), sports an even older processor. The HST payload computer is an 18-bit NASA Standard Spacecraft Computer-1 (NSSC-1) system built in the 1980s but designed even earlier. This payload computer controls and coordinates data streams from the HST’s various scientific instruments and monitors their condition. (See “Losing Hubble – Saving Hubble.”)
The original NSSC-1 computer was developed by the NASA Goddard Space Flight Center and Westinghouse Electric in the early 1970s. The design is so old that it’s not based on a microprocessor. The initial version of this computer incorporated 1700 DTL flat-pack ICs from Fairchild Semiconductor and used magnetic core memory. Long before the HST launched in 1990, the NSSC-1 processor design was “upgraded” to fit into some very early MSI TTL gate arrays, each incorporating approximately 130 gates of logic.
I’m not an expert in space-based computing, so I asked an expert for his opinion. The person I know who is most versed in space-based computing with microprocessors and FPGAs is my friend Adam Taylor, the founder and president of Adiuvo Engineering in the UK. I asked Taylor what he thought of the HPSC and he wrote:
“The HPSC is actually quite exciting for me. We do a lot in space and computation is a challenge. Many of the current computing platforms are based on older architectures like the SPARC (LEON series) or Power PC (RAD750 / RAD5545). Not only do these [processors] have less computing power, they also have ecosystems which are limited. Limited ecosystems mean longer development times (less reuse, more “fighting” with the tools as they are generally less polished) and they also limit attraction of new talent, people who want to work with modern frameworks, processors, and tools. This also limits the pool of experienced talent (which is an increasing issue like it is in many industries).
“The creation of a high-performance multicore processor based around RISC-V will open up a wide ecosystem of tools and frameworks while also providing attraction to new talent and widening the pool of experienced talent. The processors themselves look very interesting as they are designed with high performance in mind, so they have SIMD / Vector processing and AI (urgh such an overstated buzz word). It also appears they have considered power management well, which is critical for different applications, especially in space.
“It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA. Ultimately HPSC gives engineers another tool to choose from, and it is designed to prevent the all-too-common, start-from-scratch approach, which engineers love. Sadly, that approach always increases costs and technical risk on these projects, and we have enough of that already.”
One final note: During my research for this article, I discovered that NASA’s HPSC has not always been based on the RISC-V architecture. A presentation made at the Radiation Hardened Electronics Technology (RHET) Conference in 2018 by Wesley Powell, Assistant Chief for Technology at NASA Goddard Space Flight Center’s Electrical Engineering Division, includes a block diagram of the HPSC, which shows an earlier conceptual design based on eight Arm Cortex-A53 microprocessor cores with NEON SIMD vector engines and floating-point units. Powell continues to be the Principal Technologist on the HPSC program. At some point in the HPSC’s evolution over the past four years, at least by late 2020 when NASA published a Small Business Innovation Research (SBIR) project Phase I solicitation for the HPSC, the Arm processor cores had been replaced by a requirement for RISC-V processor cores. That change was formally cast in stone last September with the announcement of the project awards to Microchip and SiFive. A sign of the times, perhaps?

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 53 users

Dhm

Regular
Since last week I have had my meteorites
View attachment 32709
I'm good at palm reading amongst other skills I have acquired.

You are going to come into a generous amount of money within the next 12 months. If you are wise you won't attempt to 'take the money and run'. Rather, let your good luck/skill ride out and enjoy a life of contentment.
 
  • Like
  • Haha
  • Fire
Reactions: 31 users
A thought retrospective:

“Teksun focuses on end to end IoT product development and enabling intelligent solutions, such as predictive and preventative maintenance devices, analytics and diagnostics for portable healthcare, and vision based devices for security and surveillance. The partnership between BrainChip and Teksun proliferates intelligence through the Teksun product development channels.”

The above quote is from the Partners page on the Brainchip website. Teksun is categorised under the same heading as MegaChips, of which Peter van der Made, as Acting CEO, said the market did not understand the significance to Brainchip’s commercial success.

In the website quote above Brainchip claims that with Teksun they will proliferate intelligence through Teksun development channels.

What does proliferate mean:

“proliferate \pruh-LIF-uh-rayt\ verb. 1 : to grow or cause to grow by rapid production of new parts, cells, buds, or offspring. 2 : to increase or cause to increase in number as if by proliferating : multiply.3 Mar 2023”

So what are Teksun’s development channels:

Too HUGE to fit here so enjoy its YouTube presentations:


If you have less time than the rest of your life (even avoiding sleep) to watch all of the above, then this is a brief summary:

Our Domains

Teksun helps businesses, technology providers, and start-ups build products in the domains of:

Home Automation,
Wearable,
Consumer Electronics,
Industrial Automation, Semiconductor,
Aerospace,
Automotive,
Healthcare,
Agritech, and
more - to be found here:

We provide consulting, development, testing, support, and maintenance to a wide range of mentioned domains.”

So, when you consider that Teksun was so keen to get the word out about Brainchip that it put up an unauthorised statement on its website revealing not-yet-announced Brainchip customers Cisco and Toshiba (quickly removed once Brainchip discovered the breach), how do you think Peter van der Made, if he were still Acting CEO, would describe the Teksun partnership, given how he rated the MegaChips partnership?

Is that the sound of fireworks I can hear in the background?

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 56 users