WBT Discussion 2022

SiDEvans

Regular
The following report contains a decent in-depth analysis of WeebitNano’s ReRAM technology, as well as a quick look at the history of memory and several other memory technologies. It alludes to the demise of standard FLASH and its seemingly inevitable replacement by ReRAM. It culminates with a buy recommendation and an indicative 50% share price appreciation for WeebitNano in the next 12-18 months.

https://www.weebit-nano.com/wp-cont...oming-Non-volatile-Memory-Technology-RRAM.pdf
I may be off course here, but wouldn't a 50% increase over that period simply take it back to its share price before the recent overall market pullback?
Feb '22 saw prices at $3.95. $2.50-ish + 50% = $3.75.
Whilst I'd be more than happy, and it's not something to complain about, it is hardly a significant achievement.
I would hope what they are saying is that, all things being equal, the rise would be 50%, with hopefully additional support from a general improvement in market conditions over that time.
 
  • Like
Reactions: 1 users

cosors

👀
A demonstration of the real-world silicon-based NVM capability of Weebit’s ReRAM. Demo developed by CEA-Leti and Weebit Nano.

Weebit ReRAM NVM Demo Video: RRAM Technology​



"Weebit at Flash Memory Summit​

Weebit ReRAM in action​

AUGUST 19, 2022

The Flash Memory Summit (FMS) 2022 is in full swing, so we thought we’d take a moment to share some of what’s happening in Santa Clara this week.
First, we’re showing two demos of our ReRAM technology in our booth. The first shows the real-world capability of Weebit ReRAM as a non-volatile memory (NVM) integrated into an actual subsystem, and also highlights the faster write speed of the Weebit ReRAM module compared to typical flash memory. The second demo shows how using neuromorphic techniques based on ReRAM greatly increases parallel connectivity and significantly improves energy efficiency compared to traditional computing approaches.
As you can see from the picture, we’re seeing a great deal of interest in the demos.
In addition, our VP of Tech Development, Amir Regev, just presented some new test results during his session, “ReRAM’s Development Path Towards Commercialization.”

These test results are part of the qualification process, a requirement for products like NVM. As you may know from our previous blog on the topic of qualification, it is a long and intensive process by which we ensure a design is ready for production, confirming it meets commercial specifications and will continue to do so over the expected lifetime of the product. The idea is to test many instances of the product from different manufacturing lots, and do so in an accelerated manner. In this way we can simulate the possible effects of environmental factors over a product’s expected lifetime.
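For readers unfamiliar with accelerated testing, the usual temperature-acceleration math behind such qualification programs is the Arrhenius model. A minimal sketch, using an assumed activation energy for illustration (not a Weebit figure):

```python
import math

def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Acceleration factor between use and stress temperatures.

    ea_ev is an assumed activation energy in eV; 0.7 eV is a common
    illustrative value, not a figure from Weebit's qualification.
    """
    k = 8.617e-5  # Boltzmann constant in eV/K
    t_use = t_use_c + 273.15      # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / k) * (1.0 / t_use - 1.0 / t_stress))

# Baking parts at 125 degC to emulate operation at 55 degC: every hour of
# stress stands in for roughly this many hours of normal use.
af = arrhenius_af(55, 125)
```

With these assumed numbers, each stress hour represents on the order of tens of use hours, which is how a months-long test can simulate a product's expected lifetime.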
This qualification process is based on our demo chip which includes our embedded ReRAM module, unlike previous results that have been based on R&D tests of our memory array. The demo chip – which we recently demonstrated publicly for the first time (and are now showing at FMS) – includes the ReRAM array as well as control logic, decoders, IOs (Input/Output communication elements) and error correcting code (ECC) as well as patent-pending analog and digital smart circuitry – implemented in actual silicon in 130nm.
Qualification is a gradual process, whereby test conditions are continuously intensified, pushing a product’s boundaries to spec and beyond. For example, we begin testing the ReRAM module at room temperature, and steadily increase the temperature over time.
We are putting the module through tests that follow industry standards such as NVM tests developed by industry standards body JEDEC. Gathering in-depth statistics with a standards-based approach is critical to showing the maturity of the technology as we approach productization.
The test results Mr. Regev shared at FMS are based on early qualification test results of the demo chips we recently received from Leti, which are better than normally expected at such an early phase. We are delighted with the impressive data retention, endurance and high-temperature stability we see in these initial tests. Qualification of the module is an ongoing process, and we expect to have final qualification results before the end of 2022."
https://www.yolegroup.com/industry-...y-summitweebit-reram-in-action/?cn-reloaded=1
 
Last edited:
  • Love
  • Like
Reactions: 2 users

cosors

👀

"The Importance of Character Development
Semiconductor Characterization Explained​

Amir Regev

Amir Regev​

VP Technology Development

22 Sep, 22

“He’s the guy who’s been all around the world. … he is an archeologist and an anthropologist. A Ph.D. … he’s also a sort of rough and tumble guy … a sort of expert in the occult… he not only is not afraid to stand up against any man, but he’s also not afraid to stand up against the unknown. … He should be able to talk his way out of things. … The guy should be a great gambler too. … The doctor with the bullwhip. … A soldier of fortune in the thirties…”

One of the most important parts of a good screenplay, novel or other type of story is character development. If you’ve seen the Indiana Jones movies, you’ll probably recognize the character described in the quote excerpts above, which are from a long discussion between George Lucas, Steven Spielberg and Lawrence Kasdan as they initially developed the Indiana Jones character in January 1978. A transcript of that discussion is available (registration is required) that shows just how much thought went into creating the personality, back story, motivations, and other traits of the character.

This description was not part of the screenplay itself (which is also available at the above link), but it represents some of the critical preliminary work done to make the character come to life. Such a description is critical in development of a realistic screenplay (and ultimately a movie) because it provides guidelines for the hero’s behavior in any possible situation, ensuring consistency and resulting in a believable character.



Indiana Jones image courtesy of http://www.theraider.net

Similarly (although using a much more rigorous and scientific process), semiconductor devices must be characterized. For a semiconductor device, ‘characterization’ is the testing process which assesses exactly what the chip looks like and how it functions under any given condition.

Semiconductor Device Characterization

The semiconductor characterization process is used to develop the nominal and maximal boundaries of a device’s behavior across a range of conditions. It tests the assumptions of the initial device definition and specification while making various tweaks and optimizations. Specifically for NVM devices, the characterization process also involves employing smart algorithms to ensure the spec boundaries. Similar to the specification we receive when we buy a car or other complex product, the end result is a final specification that enables customers to understand the device’s expected behavior and limitations so they know what they can and can’t do with it when they design their application. The customer will use the boundaries set through the characterization process when designing their own product or application.

Characterization must be done on every new semiconductor design when its first silicon units arrive after fabrication. The process focuses on performing very accurate electrical measurements to gather as much data as possible.

In a previous article, we discussed qualification, a process focused on stress testing samples from multiple production lots to ensure the technology is robust over a product’s expected life span. In contrast, the characterization of a device is used to:

  • Assess functionality for parameters such as yield, performance and stability
  • Determine optimal operating conditions – looking at the immunity to process, voltage and temperature variations
  • Discover potential problems and sensitivities – to correct any process errors
  • Verify the final specification limits and production test program – used for further testing and reliability qualification
While characterization and qualification are different processes, they happen in tandem and data feeds between the two. For example, when we find optimal or boundary conditions through characterization, we apply those conditions using various stresses during qualification.

Answering Key Questions

As we go through the characterization process with our ReRAM module, we are aiming to address a range of key questions. While a technology like ReRAM starts with a few single cells, we want to know how those cells will behave if we put millions, tens of millions or more of them together on the same chip. What does their distribution look like? What is their cell-to-cell variability? For example, in the same array, do we have cells that can be written faster than the rest of the cells, while other cells are slower?

Beyond the chip, we need to understand how the on-chip ReRAM cells behave across an entire wafer. What is the die-to-die variability? Is the performance uniform? What is the die yield (percentage of good dies per wafer)? Then we look at the variance between different wafers and different production lots. Does each lot have different results? What is the lot-to-lot variability? Is there any sensitivity to process variations?
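For the curious, the kind of statistics being gathered can be illustrated in a few lines of Python. The numbers below are made up; only the metrics, yield and coefficient of variation, are standard ways of answering these questions:

```python
import statistics

def die_yield(pass_fail: list) -> float:
    """Percentage of good dies per wafer."""
    return 100.0 * sum(pass_fail) / len(pass_fail)

def variability(values: list) -> float:
    """Coefficient of variation (sigma / mu): a common single number
    for quantifying cell-to-cell or die-to-die spread."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical per-die write times (in microseconds) measured on one wafer,
# plus a pass/fail verdict per die:
write_times = [1.02, 0.98, 1.10, 0.95, 1.05]
dies_ok = [True, True, False, True, True]

yield_pct = die_yield(dies_ok)       # 80.0
spread = variability(write_times)    # a few percent spread
```

The same two metrics, computed per array, per die, per wafer and per lot, are exactly the cell-to-cell, die-to-die and lot-to-lot variability figures the post describes.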




The Characterization Process


To answer these critical questions, we begin by building a system to cover all operating conditions. The main challenge with characterizing a chip based on a new technology like ReRAM is to create the specific methodologies that suit that technology. We start with methodologies originally developed for other NVM technologies such as flash memory, and then we tune those for our technology. For the tests, we use an evaluation board designed specifically to test and operate the device, as well as a production tester (Automated Testing Equipment or ATE).

Before jumping in, the first thing we must do is verify that there aren’t any process issues that impact the wafers. This is especially important for new technologies. Once the technology is verified, we then move on to testing the basic functionality of the chip. In the case of ReRAM, this means measuring the initial characteristics of the memory cells such as initial resistance. We must also make sure we can access each of the memory cells in every chip, and that we can write, read and erase each cell multiple times. We also confirm that the cells behave according to expectations in terms of electric current and voltage levels as well as timing.

We check all these parameters using code and test programs that automatically perform operations on the ReRAM cells/arrays. The tests use various patterns of zeros and ones to ensure the device can correctly store and retain the information, and we vary the data patterns to make sure each cell can handle any combinations of data types.

After testing several units, we move on to testing a larger amount of dies and collect statistics based on a broad range of electrical and physical conditions. This includes environmental conditions such as temperature, since we want to make sure customers can use the device reliably in products that are used not only at room temperatures but also in extremes – whether it be outdoors in the desert or in the arctic tundra.

Characterizing the Weebit ReRAM Module

Weebit is currently characterizing our embedded ReRAM module developed with our partner CEA-Leti. The module includes the ReRAM array as well as control logic, decoders, IOs (Input/Output communication elements) and error correcting code (ECC) as well as patent-pending analog and digital smart circuitry, implemented in 130nm. This process is happening concurrently with the module’s qualification (you can read about our initial qualification results here).

Our team of product and test engineers are working alongside engineers from CEA-Leti’s LIST design team in a new lab in Israel that we established for this purpose. We expect to have final results before the end of 2022."
https://www.weebit-nano.com/semiconductor-development-characterization-rram-memory/
 
  • Like
Reactions: 1 users

cosors

👀
Be patient with the reading; it leads to ReRAM.

"SEMICONDUCTOR MANUFACTURING​

Of Frankenstein chips and computing memory​

Chip development is moving away from the monolithic all-rounder - at least in part. For AI, on the other hand, memory should no longer just store.

A report from Johannes Hiltscher, published on July 13, 2022

Many individual silicon dies on a wafer: This almost certainly results in a working giant chip.

The term highly integrated circuit - also known as Very Large Scale Integration (VLSI) - is getting a bit old. It dates back to the early days of semiconductor development, when 10,000 transistors on a chip was a revolution. The VLSI Symposium has had this name since 1981 - and it is still about the further development of semiconductor production. The 2022 Symposium was held in Honolulu from June 12-17.

The range of submissions spans from chip housing and packaging, through developments in production technology such as silicon photonics and semiconductors for quantum computers, to new architectures for components such as memory. We have picked out some interesting topics from this year's VLSI Symposium and present them here. The first is so-called heterogeneous integration - it is found in everyday devices, is gaining importance and is becoming more and more complex.

AMD has shown the way with Ryzen and Epyc: powerful processors can be assembled from several parts and are therefore cheaper to manufacture than a single, large die. The larger such a piece of silicon is, the more likely it is to be defective somewhere. In addition, the individual dies can be manufactured in different processes - hence the "heterogeneous" in the name. With the current Ryzen 5000, for example, the compute dies are manufactured with 7 nm, while the I/O die is made with 12 nm, which is cheaper.
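The yield argument can be made concrete with the standard Poisson die-yield model. The defect density below is an assumed, illustrative value, not a real fab figure:

```python
import math

def poisson_die_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """Poisson yield model: probability that a die of the given area
    contains no killer defect."""
    return math.exp(-area_mm2 * d0_per_mm2)

d0 = 0.001  # assumed killer defects per mm^2, purely illustrative

big = poisson_die_yield(600, d0)    # one large 600 mm^2 die: ~55% survive
small = poisson_die_yield(75, d0)   # a 75 mm^2 chiplet: ~93% survive

# Small dies are tested individually before assembly, so only the good
# ones are packaged; the silicon scrapped per working product is far less.
```

With these numbers, nearly half the big monolithic dies would be scrap, while most of the small chiplets survive, which is exactly the economics the article describes.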

Circuit boards have too few conductors that are too slow​

Simply soldering the dies together on a circuit board, however, will foreseeably come up against limits for a number of reasons. For particularly powerful chips - GPUs and some particularly large FPGAs - special dies, so-called silicon interposers, are used. The dies to be connected are mounted on them.

Since the interposers are also manufactured using semiconductor technology, particularly thin conductors and closely spaced contacts are possible. In this way, significantly more connections can be established between the individual dies - this means higher data rates. However, silicon interposers have a disadvantage: they are expensive. And in the end they have to be mounted on a circuit board, if only for the power supply. The interposer is completely useless for that.

Micrometer sized spring contacts​

Muhannad Bakir from the Georgia Institute of Technology spoke about an alternative in which the die-to-die contacts are grouped and only those are connected using silicon interposers. Contacts that leave the package are soldered directly to its circuit board. The silicon interposers - or other chips that are mounted under the large dies - are mounted contacts-up on the package. Since the distance to the dies is smaller than to the package, solder balls of different sizes would be required when soldering.

  • Inexpensive 3D chips with a large number of contacts and a wide variety of dies are conceivable with micro spring contacts.  (Image: Georgia Institute of Technology)

Besides, soldering brings problems when the chip gets warm. Package and silicon expand at different rates, and the tiny connections between the dies can break. So Bakir's research group developed tiny spring contacts. They are mounted on the silicon interposer, similar to the way bonding wires connect to a chip package. When the dies to be connected are soldered to the package, their contact surfaces press onto the spring contacts.

The resulting connection is as good as a soldered one. The flexibility of the contacts also compensates for differences in height; if the length of the spring contacts is adjusted, dies of different heights can even be contacted. This allows chips to be assembled from a wide variety of semiconductors - Frankenstein's monster made of silicon.

For the time being, however, silicon interposers remain state-of-the-art - and why not use an entire wafer as an interposer?

Wafer-Scale Integration​


Puneet Gupta of the University of California, Los Angeles (UCLA) spoke about wafer-sized chips. Cerebras currently uses this so-called wafer-scale integration for its AI processors. They consist of a complete wafer with hundreds of thousands of individual computing cores. A connection network (interconnect) is also integrated, which enables communication between the cores.

Manufacturing all processors in the same piece of silicon has a number of advantages. There are no transitions to other materials as with soldering on a circuit board (substrate). This allows higher signal frequencies. In addition, with semiconductor manufacturing - as with interposers - conductors can be packed much more closely. In this way, significantly more connections can be implemented between the processors.

With many lines, high data rates can be transmitted without serial interfaces, which saves chip area and energy and reduces latency. There's only one catch: Some of the individual processors will be defective. In normal chip production, they would be sorted out, but if the entire wafer is used as a huge chip, that doesn't work. Then logic must be built in to deal with the defects.
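One simple form such defect-handling logic can take is address remapping around known-bad cores. A toy Python sketch of the idea (the skip-faulty mapping scheme here is illustrative, not Cerebras's actual mechanism):

```python
def map_logical_to_physical(defect_map, n_logical):
    """Skip-faulty mapping: route each logical core to the next good
    physical core, the simplest built-in defect tolerance scheme."""
    good = [i for i, defective in enumerate(defect_map) if not defective]
    if len(good) < n_logical:
        raise ValueError("not enough working cores on this wafer")
    return good[:n_logical]

# 8 physical cores, two defective (indices 1 and 5), expose 6 logical cores:
mapping = map_logical_to_physical([0, 1, 0, 0, 0, 1, 0, 0], 6)
# logical core k is served by physical core mapping[k]
```

The wafer is shipped with spare capacity, and the routing fabric steers traffic around the dead cores so software sees a defect-free grid.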

A huge interposer​

The problem can be avoided by manufacturing the logic and interconnect on different wafers. The logic wafers are regularly tested, sawed into dies and faulty ones sorted out. They are then mounted on the interconnect wafer. This has the additional advantage that dies from different manufacturing processes can be combined. Although fewer lines can be integrated than in a monolithic chip, the approach is still far superior to a printed circuit board.

  • Structure of a waferscale interposer (Image: University of California)

With the interposer approach, only simple conductors and small copper columns are produced on the interconnect wafer. There are hardly any defects, since the structures are huge compared to the transistors and smallest conductors of current manufacturing processes. The copper pillars are 10 μm apart; they make contact with the logic chips, which are attached using thermocompression bonding. The process was originally used in flip-chip assembly, but is also used for HBM.

A wafer full of problems​

A whole wafer full of computing units, however, causes further problems even if it has been successfully manufactured. The many dies also require a lot of energy, and it first has to get to them in the form of electricity - and then away again in the form of heat. Gupta illustrated this on a waferscale chip with GPUs. Theoretically, 72 GPU dies, each with two associated HBM stacks, would fit on a 300 mm wafer.

However, the practical maximum is 40 GPUs, and even that only with two-stage regulation of the supply voltage. Since each GPU together with its HBM stacks consumes 270 W, at least 10.8 kW must be supplied in the form of electrical power and dissipated again as heat. Conversion losses are not yet taken into account. With Cerebras's Wafer Scale Engine 2 it is even 20 kW; such chips can only be cooled with water.
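The power figure can be sanity-checked directly:

```python
gpus = 40                 # practical maximum per 300 mm wafer, per the talk
watts_per_gpu = 270       # each GPU together with its HBM stacks
total_kw = gpus * watts_per_gpu / 1000

# 40 x 270 W = 10.8 kW, the figure quoted above (before conversion losses)
```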

  • With waferscale integration, UCLA has realized a huge GPU. The design of the power supply (VRMs, Voltage Regulator Modules) had to be adapted for this. (Image: University of California)
Waferscale integration aims to increase computer performance through faster connections. In some cases, however, it can make sense to rethink the architecture itself.

Computing memory​


AI applications in particular have quite an efficiency problem: neural networks like Megatron have hundreds of billions of parameters - even larger AIs are only a matter of time. Even if only one byte is used per parameter (e.g. Int8), that's hundreds of gigabytes - and they have to be moved from memory to processors on a regular basis.

However, they are only used for a few arithmetic operations there. This not only means latency due to memory access, but also requires a lot of energy. In the RAM chip, the data is read from the memory array into a buffer and then transferred to the processor via the mainboard (or an interposer). There they are buffered several times until they end up in a register and the calculation takes place. The result must then return on the same path.

One possible solution to this efficiency disaster is called compute in memory (CIM, not to be confused with in-memory computing for databases). The memory itself becomes the computer, which saves electrical power, since only the results leave the memory. In addition, the calculations take less time due to the direct access to the storage array.

The idea has been around for a while, but...​

The idea isn't new: one of the most famous projects, UC Berkeley's Intelligent RAM (IRAM), started in 1998 (led by David Patterson, one of the fathers of RISC design). So far, the concept has not caught on - the niche was perhaps too small. But neural networks could give it a new chance.

  • This is how ReRAM calculates: The individual memory cells, implemented with adjustable resistors, record the weight coefficients of a neuron, the digital-to-analog converters (DACs) enter the activations.  The columns add up the individual currents, and an analog-to-digital converter (ADC) generates a digital output.  (Image: University of Michigan)
Instead of integrating a CPU into the memory chip, as is the case with IRAM, the memory chip itself becomes a computer - an analog computer at that. This is made possible by resistive RAM, in which the memory cells do not store an electrical charge but rather a resistance, and different resistance values can even be set. The secret of the analog computer lies in the resistances: if a voltage is applied, the current is the quotient of voltage and resistance - i.e. a division. If several resistors are connected in parallel, the currents add up.
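The analog arithmetic the article describes can be sketched in a few lines: Ohm's law gives the per-cell multiply, and the summing of parallel currents gives the add. The input and weight values below are arbitrary illustrations:

```python
def crossbar_column_current(voltages, conductances):
    """One crossbar column: each cell contributes current I = V * G
    (Ohm's law, with conductance G = 1/R), and the currents of all
    cells on the column simply add - an analog multiply-accumulate."""
    return sum(v * g for v, g in zip(voltages, conductances))

# Hypothetical neuron: activations applied as voltages by the DACs,
# weights stored as programmed cell conductances, result read by the ADC.
inputs = [0.2, 0.5, 0.1]
weights = [1.0, 2.0, 4.0]
out = crossbar_column_current(inputs, weights)   # approx. 1.6
```

That single column computes a full weighted sum in one step, which is why the approach is attractive for the inner loop of neural-network inference.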

These two operations are sufficient for a neural network, at least for inferencing: in each neuron, input values are multiplied by a weight (the inverse of division) and the results are summed. Justin Correll presented an implementation from the University of Michigan at the VLSI Symposium. It clearly sets itself apart from older, also ReRAM-based publications through higher resolution for weighting and input values as well as a larger number of weighting coefficients. In terms of efficiency, however, at 20.7 TOPS/W (20.7 trillion operations per second per watt) it remains far behind the 2,900 TOPS/W of an SRAM-based chip presented in 2020.

There is also room for improvement in terms of size: the experimental memory can only hold 64 neurons, each with 256 4-bit weight coefficients. So for Megatron it has to be written to often, but it is also not intended as a replacement for normal DRAM. Rather, the ReRAM exists as a computing unit parallel to the normal DRAM. In the test chip, which was manufactured in cooperation with Applied Materials, a CIM block with 8 Kbytes of capacity occupies about a third of the area of a 256 Kbyte DRAM. One reason for this is the required digital-to-analog and analog-to-digital converters, which serve as an interface between the analog CIM module and a digital processor.

  • Four CIM blocks with associated DRAM are implemented in the ReRAM test chip.  In the CIM modules, DACs and ADCs take up a lot of space.  (Image: University of Michigan)


And when is all this coming?​

For the CIM chip presented last, it is not yet foreseeable whether it will end up in products. However, interest in this and other analog computers has increased again with the growing importance of AI. Here they are superior to a classic processor in terms of efficiency, especially in mobile, battery-powered devices.

When it comes to packaging, on the other hand, there is considerable movement: the trend is clearly towards more dies per package. In addition - as with Frankenstein's monster - more and more different semiconductors are combined. The stronger integration is used both in high-performance chips such as GPUs and in efficiency-optimized SoCs such as smartphones. TSMC, for example, has built a new factory for increasingly complex packaging methods. This increases the possibilities for chip designers. The VLSI Symposium showed that there is no shortage of ideas."
https://www.golem.de/news/halbleite...ps-und-rechnendem-speicher-2207-166713-3.html
 
Last edited:
  • Fire
  • Like
  • Thinking
Reactions: 5 users

Slymeat

Move on, nothing to see.
The following article explores an interesting use case for WeebitNano ReRAM

The power of ReRAM to PMIC

If nothing else it will introduce you to an acronym that is increasing in importance (PMIC)—power management integrated circuits.

”According to a 2021 report from Yole, the PMIC market will grow to more than US$25.6 billion by 2026”

As well as explaining how to pronounce it—so you can impress people with your knowledge.

The article gives a brief, but educational, look into the history of PMICs and explains the advantages of using ReRAM. One huge advantage is the ability of ReRAM to run in high-temperature environments, as many PMICs must operate at temperatures of around 150 °C.

WeebitNano’s ReRAM, being a back-end-of-line (BEOL) integration technology, aids the production of the more complex PMICs that require a system-on-a-chip (SOC) implementation.
 
Last edited:
  • Fire
  • Like
Reactions: 2 users

Slymeat

Move on, nothing to see.
Here’s an interesting article talking of the future of ReRAM. Well worth the read IMHO.
Spoiler: it thinks it is a grand future

The future of memory—the time for reram

The limitations of flash are increasingly clear, especially the fact that it’s just not economically feasible to embed flash memories into SoCs beyond 28nm for most applications

The technology that offers the best balance is ReRAM, which is emerging as a leading candidate to replace flash memory for a broad range of applications.

To me, it seems that WeebitNano is entering the market at EXACTLY the right time.

ReRAM technologies will start to enter the market in the next 12 months. This technology is an ideal NVM for a broad range of applications

As soon as testing has completed, and commercial production started, there should be a plethora of industries eager to take Weebit’s ReRAM on board.
 
  • Fire
  • Like
Reactions: 4 users

Slymeat

Move on, nothing to see.
A huge announcement from Weebit Nano today.

Weebit Nano advances its ReRAM selector development to support embedded and discrete applications.

This is absolutely huge and opens the door for system on chip (SOC) development including ReRAM.

This is particularly important when combined with the knowledge that this can all be built on CMOS technology, using back-end processing, and at 28nm and below.

SoC integration is yet another thing that cannot be achieved with FLASH. At least as far as I know, and even if it can be, only at GREAT expense in both dollars and power consumption.

Hello BrainChip—I trust you are taking notice!

And of potentially overlooked importance is that this selector works on individual bytes of data, while consuming basically no power. That is a huge advantage over discrete FLASH, which does extremely wasteful block-wise writing—meaning even unused bytes go through a write cycle when included in a block. FLASH is extremely inefficient at the byte level.
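The write-amplification difference is easy to illustrate. The block size below is an assumed, typical value, not a figure from the announcement:

```python
def flash_bytes_written(change_bytes: int, block_size: int = 4096) -> int:
    """Block-based flash: even a one-byte change rewrites whole blocks."""
    blocks = -(-change_bytes // block_size)   # ceiling division
    return blocks * block_size

def byte_addressable_bytes_written(change_bytes: int) -> int:
    """Byte-addressable NVM (as claimed for ReRAM): write only what changed."""
    return change_bytes

# Updating a single byte:
flash_cost = flash_bytes_written(1)              # 4096 bytes rewritten
reram_cost = byte_addressable_bytes_written(1)   # 1 byte written
```

Every unnecessary byte rewritten costs both energy and a share of the device's finite write endurance, which is the inefficiency being pointed out.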

Even if it was possible, I would not like FLASH embedded in a chip I designed or even just used. FLASH has a limited lifetime (in read/write cycles) and is prone to failure. FLASH would definitely be the weak link. Whereas, ReRAM is robust and uses the same technology as every other part of the chip, with no need for any introduced exotic materials—it cannot be the weak link!

Weebit’s ReRAM will soon be ubiquitous.
 
  • Like
  • Love
Reactions: 4 users

Slymeat

Move on, nothing to see.
Be patient with reading it leads to ReRAM.

"SEMICONDUCTOR MANUFACTURING​

Of Frankenstein chips and computing memory​

Chip development is moving away from the monolithic all-rounder - at least in part . For AI , on the other hand, memory should no longer just store.

A reportfrom Johannes Hiltscher published on July 13, 2022
View attachment 17339
Many individual silicon dies on a wafer: This almost certainly results in a working giant chip.

The term highly integrated circuit - also known as Very Large Scale Integration (VLSI) - is getting a bit old. It dates back to the early days of semiconductor development, when 10,000 transistors on a chip was a revolution. The VLSI Symposium has had this name since 1981 - and it is still about the further development of semiconductor production. The 2022 Symposium was held in Honolulu from June 12-17.

The bandwidth of the submissions goes from the chip housing and the package to developments in production technology such as silicon photonics and semiconductors for quantum computers to new architectures for components such as memory. We have picked out some interesting topics from this year's VLSI Symposium and present them. The first is the so-called heterogeneous integration - it is found in everyday devices, is gaining importance and is becoming more and more complex.

AMD has shown the way with Ryzen and Epyc: Powerful processors can be assembled from several parts and are therefore cheaper to manufacture than a single, large die. Because the larger such a silicon plate is, the more likely it is defective somewhere. In addition, the individual dies can be manufactured in different processes - hence the name component "heterogeneous". With the current Ryzen 5000, for example, the compute dies are manufactured with 7 nm, while the I/O die is made with 12 nm , which is cheaper.

Circuit boards have too few conductors that are too slow​

Simply soldering the dies together on a circuit board, however, will foreseeably come up against limits for a number of reasons. For particularly powerful chips - GPUs and some particularly large FPGAs - special dies, so-called silicon interposers , are used. The dies to be connected are mounted on them.

Since the interposers are also manufactured using semiconductor technology, particularly thin conductors and closely spaced contacts are possible. In this way, significantly more connections can be established between the individual dies - this means higher data rates. However, silicon interposers have a disadvantage: they are expensive. And in the end they have to be mounted on a circuit board, if only for the power supply. The interposer is completely useless for that.

Micrometer-sized spring contacts​

Muhannad Bakir from the Georgia Institute of Technology spoke about an alternative in which only grouped die-to-die contacts are routed through silicon interposers, while contacts that leave the package are soldered directly to its circuit board. The silicon interposers - or other chips mounted under the large dies - sit contacts-up on the package. Since the distance to the dies sitting on top of them is smaller than the distance to the package, soldering would require solder balls of different sizes.

  • Inexpensive 3D chips with a large number of contacts and a wide variety of dies are conceivable with micro spring contacts. (Image: Georgia Institute of Technology)
Soldering also causes problems when the chip warms up: package and silicon expand at different rates, and the tiny connections between the dies can break. Bakir's research group therefore developed tiny spring contacts. They are mounted on the silicon interposer, similar to the way bonding wires connect to a chip package. When the dies to be connected are soldered to the package, their contact surfaces press onto the spring contacts.

The resulting connection is as good as a soldered one. The flexibility of the contacts also compensates for differences in height; if the length of the spring contacts is adjusted, dies of different heights can even be contacted. This allows chips to be assembled from a wide variety of semiconductors - Frankenstein's monster made of silicon.

For the time being, however, silicon interposers remain state-of-the-art - and why not use an entire wafer as an interposer?

Wafer-Scale Integration​


Puneet Gupta of the University of California, Los Angeles (UCLA) spoke about wafer-sized chips. Cerebras currently uses this so-called wafer-scale integration for its AI processors, which consist of a complete wafer with hundreds of thousands of individual computing cores. A connection network (interconnect) is also integrated, which enables communication between the cores.

Manufacturing all processors in the same piece of silicon has a number of advantages. There are no transitions to other materials as with soldering on a circuit board (substrate). This allows higher signal frequencies. In addition, with semiconductor manufacturing - as with interposers - conductors can be packed much more closely. In this way, significantly more connections can be implemented between the processors.

With many lines, high data rates can be transmitted without serial interfaces, which saves chip area and energy and reduces latency. There's only one catch: Some of the individual processors will be defective. In normal chip production, they would be sorted out, but if the entire wafer is used as a huge chip, that doesn't work. Then logic must be built in to deal with the defects.

A huge interposer​

The problem can be avoided by manufacturing the logic and interconnect on different wafers. The logic wafers are regularly tested, sawed into dies and faulty ones sorted out. They are then mounted on the interconnect wafer. This has the additional advantage that dies from different manufacturing processes can be combined. Although fewer lines can be integrated than in a monolithic chip, the approach is still far superior to a printed circuit board.

  • Structure of a waferscale interposer (Image: University of California)

With the interposer approach, only simple conductors and small copper pillars are produced on the interconnect wafer. There are hardly any defects, since these structures are huge compared to the transistors and finest conductors of current manufacturing processes. The copper pillars, spaced 10 μm apart, make contact with the logic chips, which are attached using thermocompression bonding. The process was originally used in flip-chip assembly and is also used for HBM.

A wafer full of problems​

A whole wafer full of computing units, however, causes further problems even if it has been successfully manufactured. The many dies also require a lot of energy, and it first has to get to them in the form of electricity - and then away again in the form of heat. Gupta illustrated this on a waferscale chip with GPUs. Theoretically, 72 GPU dies, each with two associated HBM stacks, would fit on a 300 mm wafer.

However, the practical maximum is 40 GPUs, and even that only with two-stage regulation of the supply voltage. Since each GPU together with its HBM stacks consumes 270 W, at least 10.8 kW must be supplied as electrical power and dissipated again as heat - conversion losses not yet included. Cerebras' Wafer Scale Engine 2 even draws 20 kW and can only be cooled with water.
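The power figure follows directly from the per-GPU number - a quick sanity check of the article's arithmetic:

```python
gpus = 40            # practical maximum per 300 mm wafer, per the talk
watts_per_gpu = 270  # one GPU die plus its HBM stacks

total_kw = gpus * watts_per_gpu / 1000
print(f"{total_kw} kW")  # 10.8 kW, before any conversion losses
```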

  • With waferscale integration, UCLA has realized a huge GPU. The design of the power supply (VRMs, Voltage Regulator Modules) had to be adapted for this. (Image: University of California)
Waferscale integration aims to increase computer performance through faster connections. In some cases, however, it can make sense to rethink the architecture itself.

Computing memory​


AI applications in particular have quite an efficiency problem: neural networks like Megatron have hundreds of billions of parameters - even larger AIs are only a matter of time. Even if only one byte is used per parameter (e.g. Int8), that's hundreds of gigabytes - and they have to be moved from memory to processors on a regular basis.
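The "hundreds of gigabytes" claim checks out even at one byte per parameter. The 530-billion figure below refers to Megatron-Turing NLG and is used here purely as a ballpark:

```python
params = 530e9       # approximate parameter count of Megatron-Turing NLG
bytes_per_param = 1  # Int8, the most compact common format

gigabytes = params * bytes_per_param / 1e9
print(f"{gigabytes:.0f} GB of weights to shuttle between memory and processors")
```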

However, each parameter is used there for only a few arithmetic operations. This not only adds latency due to memory access but also costs a lot of energy: in the RAM chip, the data is read from the memory array into a buffer and then transferred to the processor via the mainboard (or an interposer). There it is buffered several times until it ends up in a register and the calculation takes place. The result must then travel back along the same path.

One possible solution to this efficiency disaster is called compute in memory (CIM, not to be confused with in-memory computing for databases). The memory itself becomes the computer, which saves electrical power, since only the results leave the memory. In addition, the calculations take less time due to the direct access to the storage array.

The idea has been around for a while, but...​

The idea isn't new: one of the most famous projects, UC Berkeley's Intelligent RAM (IRAM), started in 1998, led by David Patterson, one of the fathers of RISC design. So far the concept has not caught on - the niche was perhaps too small - but neural networks could give it a new chance.

  • This is how ReRAM calculates: The individual memory cells, implemented with adjustable resistors, record the weight coefficients of a neuron, the digital-to-analog converters (DACs) enter the activations. The columns add up the individual currents, and an analog-to-digital converter (ADC) generates a digital output. (Image: University of Michigan)
Instead of integrating a CPU into the memory chip, as with IRAM, the memory chip itself becomes a computer - an analog computer, in fact. This is made possible by resistive RAM, in which the memory cells store not an electrical charge but a resistance, which can even be set to different values. The trick of the analog computer lies in these resistances: if a voltage is applied, the current is the quotient of voltage and resistance - i.e. a division. If several resistors are connected in parallel, their currents add up.

These two operations are sufficient for a neural network, at least for inference: in each neuron, input values are multiplied by a weight (multiplication being the inverse of division) and the results are summed. Justin Correll presented an implementation from the University of Michigan at the VLSI Symposium. It clearly sets itself apart from older, also ReRAM-based publications through higher resolution for weights and input values as well as a larger number of weight coefficients. In terms of efficiency, however, its 20.7 TOPS/W (20.7 trillion arithmetic operations per second and watt) remains far behind the 2,900 TOPS/W of an SRAM-based chip presented in 2020.
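The division-and-summation principle can be sketched numerically: conductances (1/resistance) stand in for the stored weights, input voltages for the activations, and each column current is the sum of per-cell currents G·V - exactly one matrix-vector product per read. A minimal, idealized sketch that ignores DAC/ADC quantization and device noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small crossbar: rows are inputs, columns are neurons. 4-bit levels,
# as in the Michigan chip (which holds 64 neurons x 256 weights each).
conductances = rng.integers(0, 16, size=(8, 4)).astype(float)  # stored weights G
voltages = rng.integers(0, 16, size=8).astype(float)           # DAC-driven inputs V

# Ohm's law gives each cell's current G*V; Kirchhoff's current law sums
# the currents on each column wire -- the array computes a matrix-vector
# product in a single analog step.
column_currents = voltages @ conductances

# Digital reference: the same multiply-accumulate done explicitly.
reference = np.array([sum(v * g for v, g in zip(voltages, col))
                      for col in conductances.T])
assert np.allclose(column_currents, reference)
print(column_currents)
```

In the real chip the DACs set the voltages and the ADCs digitize the column currents, which is why those converters dominate the area, as noted below.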

There is also room for improvement in terms of size: the experimental memory holds only 64 neurons, each with 256 4-bit weight coefficients. For Megatron it would therefore have to be rewritten often - but it is also not intended as a replacement for normal DRAM. Rather, the ReRAM exists as a computing unit alongside the normal DRAM. In the test chip, manufactured in cooperation with Applied Materials, a CIM block with 8 KByte of capacity occupies about a third of the area of a 256 KByte DRAM. One reason is the required digital-to-analog and analog-to-digital converters, which serve as the interface between the analog CIM module and a digital processor.

  • Four CIM blocks with associated DRAM are implemented in the ReRAM test chip. In the CIM modules, DACs and ADCs take up a lot of space. (Image: University of Michigan)

And when is all this coming?​

It is not yet foreseeable that the CIM design presented last will end up in products. However, interest in this and other analog computers has increased again with the growing importance of AI. Here they are superior to a classic processor in terms of efficiency, especially in mobile, battery-powered devices.

When it comes to packaging, on the other hand, there is considerable movement: the trend is clearly towards more dies per package. In addition - as with Frankenstein's monster - more and more different semiconductors are being combined. The stronger integration is used both in high-performance chips such as GPUs and in efficiency-optimized SoCs such as those in smartphones. TSMC, for example, has built a new factory for increasingly complex packaging methods. This increases the possibilities for chip designers. The VLSI Symposium showed that there is no shortage of ideas."
https://www.golem.de/news/halbleite...ps-und-rechnendem-speicher-2207-166713-3.html
I have finally found the time to sit down and digest this article. Thanks for sharing it @cosors. Much of it was a trip down memory lane, but so much has changed since I was last involved in designing chips and working on circuit boards that could be manually soldered. The things designers need to think about these days, brought about by the minute size of everything, are mind-boggling.

To put size in perspective: silicon's atomic spacing is about 0.2 nanometers, and today's transistors are about 70 silicon atoms wide, so the scope for making them even smaller is itself shrinking.
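The arithmetic behind that, with both figures being rough approximations:

```python
atom_spacing_nm = 0.2  # approximate silicon atomic spacing
atoms_wide = 70

width_nm = atoms_wide * atom_spacing_nm
print(f"~{width_nm:.0f} nm")  # a transistor feature roughly 14 nm wide
```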

When you get close to the size of atoms, quantum effects come into play and cause quantum paradoxes. Traditional chips need 1’s to always be 1’s and 0’s to always be 0’s. Quantum paradoxes and traditional chips don‘t mix well.

The wafer-sized chip of GPUs absolutely astounds me. Here is a single chip that consumes 20 kW of electrical power - my ducted air conditioner for the whole house consumes less than that. That is truly getting a bit beyond absurd, no matter how powerful the chip may be.

A lot of the struggle seems to relate to connections and the limitations of size and distance. Weebit's recent announcement of their selector working embedded in 28 nm CMOS may be a huge part of the solution.

Now developers can build SoC solutions that contain efficient and robust embedded non-volatile memory - all the connections required to pass parameters around will be contained within the chip, and weights, results etc. can be stored internally. Hence none of those springs or troublesome solder joints.

Plus there would be no need to shuffle data outside the chip: only the relatively few inputs and outputs actually need to be transmitted to and from the chip, hence very minimal physical connections.

On initially reading the announcement yesterday, I thought it could be huge. This affirms that!
 
  • Fire
Reactions: 2 users

cosors

👀
We are mentioned directly on SkyWater's page. I hadn't seen that yet.
Screenshot_2022-10-17-11-26-43-52_40deb401b9ffe8e1df2f1cc5ba480b12.jpg

https://www.skywatertechnology.com/cmos/

And even in the German press they are now writing about ReRAM:

1666003491136.png

"Market analysis of emerging storage technologies: PCM, ReRAM, FRAM and MRAM on the rise​

10/11/2022 By Michael Eckstein

44 billion US dollars by 2032: Young memory technologies such as PCM, ReRAM, FRAM and MRAM will conquer a significant share of the overall market over the next decade. The market analysts Objective Analysis and Coughlin Associates are convinced of this. Investments in the required production facilities are also increasing accordingly.

Slowly but surely, other promising memory technologies such as PCM, ReRAM, FRAM and MRAM are establishing themselves alongside the top dogs Flash, SRAM and DRAM. Analysts say their market volume is set to increase rapidly in the coming years.
(Image: Coughlin Associates)
Emerging non-volatile memory technologies are on the rise: by 2032 they will generate a market volume of around 44 billion US dollars. That's according to a recent joint report by market analysts Objective Analysis and Coughlin Associates, "Emerging Memories Enter the Next Phase." Dr. Thomas Coughlin, President of Coughlin Associates, states: "This is the semiconductor market to watch over the next decade."

Companies active in this market could expect significant growth. As such, memory manufacturers and foundries should consider investing in this area if they want to benefit from this development, says memory expert Coughlin.

Established storage technologies are reaching their limits​

Current memory technologies, including flash memory (NAND and NOR), DRAM and SRAM, are reaching technological limits as they continue to improve. Flash cells, for example, cannot be arbitrarily reduced in size because below a minimum size they simply do not contain enough charge carriers for stable operation.


For this reason, intensive work is being done worldwide on the development of new memory technologies such as PCM (Phase Change Memory), ReRAM (Resistive RAM), FRAM (Ferroelectric RAM) and MRAM (Magnetoresistive RAM) as well as on a number of less common technologies such as carbon nanotubes. According to memory expert Coughlin, most of these are non-volatile and can be used for long-term storage or as memory that doesn't lose its information when power is off. This offers advantages for battery-powered or energy-harvested devices, but also enables energy savings in data centers.

Different storage technologies for different use cases​

Based on the current sophistication and characteristics of these technologies, resistive RAM (RRAM) appears to be a potential replacement for flash memory, Coughlin said. However, the change will not take place abruptly, but gradually over the next decade. Until then, Flash will still be developed for a few technology generations.

The introduction of 3D XPoint memory by Micron and Intel will have an increased impact on the need for DRAM, said Jim Handy, general manager of Objective Analysis. 3D XPoint is a type of phase change memory (PCM) and has proven to be highly durable, has a higher density than DRAM, and achieves performance between that of NAND flash and DRAM. Intel introduced NVMe SSDs with its Optane technology (using 3D XPoint) in 2017 and started shipping Optane DIMM modules in 2019 - but has since dropped out of the joint venture with development partner Micron, which now develops and manufactures the memory in-house.



Magnetic RAM (MRAM) and spin-transfer torque RAM (STT-MRAM) are beginning to replace NOR, SRAM, and possibly DRAM, according to the analysis. The ability to replace volatile memory with high-speed, long-endurance non-volatile memory makes these techniques very attractive. As production volumes increase, manufacturing costs and ultimately sales prices will fall - and Coughlin believes that this will make MRAM technologies more competitive.

Ferroelectric RAM (FRAM) and some ReRAM technologies have already found niche applications, according to the analysts, and with the deployment of HfO FRAM, the number of niche markets available for FRAM could increase.

Spin-based logic for modern processors​

The transition to non-volatile solid-state storage and cache storage will help reduce power consumption, as well as enable new power-saving modes and faster system-state recovery after power-off. According to Coughlin and Handy, this can be used to build more stable computer architectures that retain their operating state even when switched off. Finally, spintronics technology, which uses electron spin rather than electricity for logic processing, could be used to make future microprocessors. Spin-based logic could enable very efficient in-memory processing.

The use of non-volatile memory in combination with CMOS logic is of great importance in the electronics industry - for example in microcontrollers. As a replacement for multi-transistor SRAM, STT MRAM could reduce the transistor count, making it a cost-effective, higher-density solution. A number of enterprise and consumer devices already use MRAM as embedded cache memory. Coughlin points out that all major foundry companies are already offering processes for MRAM as embedded memory in SoC products.

The availability of STT MRAM has accelerated this trend, according to the analysts. Due to the compatibility of MRAM and STT-RAM processes with conventional CMOS processes, these memories can be built directly on CMOS logic wafers or even integrated directly into the CMOS fabrication. "This is an advantage over flash memory, which doesn't have the same compatibility with conventional CMOS processes," says Coughlin.


The potential energy savings of non-volatile and simpler MRAM and STT-MRAM compared to SRAM are significant, according to the study authors. As the cost of MRAM in US dollars per GB approaches that of SRAM, this replacement could lead to significant market expansion.

Emerging storage technologies as separate ICs or embedded IP​

Already today, according to the report, developers and users of system-on-chips (SoCs) are integrating the new non-volatile memories into designs to exploit their advantages, for example to reduce power consumption and achieve better system responsiveness. In some areas, the new memory designs could displace previously dominant technologies such as NOR flash, SRAM and DRAM. They could replace both standalone memory chips and embedded memory in microcontrollers, ASICs, and even computing processors, and potentially create new markets of their own.

The Japanese microcontroller manufacturer Renesas presented its optimized embedded STT MRAM technology at the "2022 Symposium on VLSI" in June and demonstrated a chip manufactured in the 22 nm process.

And Taiwan's Industrial Technology Research Institute (ITRI) has announced two new MRAM collaborations. Together with TSMC, the largest chip contract manufacturer, ITRI wants to develop SOT-MRAM array chips characterized by high write efficiency and low write voltage. ITRI states that its SOT-MRAM achieves a write speed of 0.4 nanoseconds and a high endurance of 7 trillion reads and writes. The memory should also be able to retain data for more than ten years. In addition, ITRI wants to work with the National Yang Ming Chiao Tung University (NYCU) to develop a magnetic storage technology that works across an operating temperature range spanning almost 400 degrees Celsius.
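For scale, the quoted 7-trillion-cycle endurance can be turned into a lifetime figure. The 1 MHz write rate below is an illustrative assumption for a single, continuously hammered cell, not a figure from ITRI:

```python
endurance_cycles = 7e12   # ITRI's stated SOT-MRAM endurance
writes_per_second = 1e6   # assumed: continuous 1 MHz writes to one cell

days = endurance_cycles / writes_per_second / 86_400
print(f"~{days:.0f} days")  # ~81 days of non-stop worst-case hammering
```

In practice writes are spread across many cells (wear leveling), so device-level lifetime is far longer than this single-cell worst case.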

A few months ago, Everspin Technologies also launched a new family of MRAM products with SPI/QSPI/xSPI interface. The persistent memory achieves a read and write bandwidth of 400 MB/s via the new JEDEC standard interface Extended Serial Peripheral Interface (xSPI). The EMxxLX family is currently available with memory densities from 8 MBit to 64 MBit.

The German semiconductor group Infineon has also had fast FRAM chips in its range since taking over Cypress. The non-volatile memory is able to instantly capture and store critical data in the event of a power failure. This makes it suitable, for example, for mission-critical data acquisition applications such as high-performance programmable logic controllers (PLCs) that require highly reliable control and high throughput, for life-support patient monitoring devices or accident data recorders.

Advantages over previous storage technologies​

"Designers of all types of systems are finding that new memories offer advantages over existing technologies," said Jim Handy, general manager of Objective Analysis. "The Internet of Things will be revolutionized as new types of embedded memory reduce power consumption." Even larger systems are already changing architectures and using persistent memory to improve latency and data integrity.

The report explains how standalone MRAM and STT RAM sales will grow to approximately $1.4 billion, more than thirty times 2021 standalone MRAM sales. At the same time, according to Handy, embedded ReRAM and MRAM are increasingly competing with embedded NOR and SRAM memory in SoCs, which should also lead to strong sales growth.

New technologies are capturing a significant share of the storage market​

According to Coughlin and Handy, emerging technologies will capture a significant portion of the overall storage market over the next decade. However, this will continue to be dominated by (3D) NAND flash and DRAM in the future.

The already heavily consolidated market for NAND flash memory alone - dominated by Samsung, Kioxia, Micron and SK Hynix (which took over Intel's NAND memory division for 9 billion US dollars in 2021) - had a volume of around 66.5 billion US dollars in 2021 and, according to analyses by Mordor Intelligence, will grow to around 94 billion US dollars by 2027. Micron estimates that DRAM and NAND flash combined will reach a total volume of around US$330 billion in 2030 (2021: US$161 billion).

In all cases, the driving force is the explosively growing data volume across all application areas, with increasing digitization, artificial intelligence/machine learning, mobility and connectivity as the macro trends. IDC estimates that the worldwide volume of newly generated data will increase from 81 zettabytes per year in 2021 to 180 zettabytes in 2025.

Manufacturers of production systems and plants also benefit​

“Many of these emerging storage types require new tooling and manufacturing processes to integrate disparate materials and processes. This will also boost the growth of the capital goods market,” adds Coughlin. Total sales of MRAM fabrication equipment will increase to more than forty-nine times 2021 levels and reach approximately $1.5 billion in 2032.

The 241-page publication, Emerging Memories Enter the Next Phase , examines not only PCM, ReRAM, FRAM and MRAM, but also a number of less common technologies."
https://www.ip-insider.de/pcm-reram-fram-und-mram-im-aufwind-a-bf67c5c02db812ba2bfb1ca11be1be29/
 
  • Like
  • Love
Reactions: 4 users

Slymeat

Move on, nothing to see.
In addition to @cosors' find above, Coby Hanoch has introduced the SkyWater-Weebit IP partner page via LinkedIn.

What absolutely brilliant exposure for Weebit Nano.

SkyWater Technology Foundry now have a page about Weebit Nano Ltd and their ReRAM on their web site so all SkyWater customers can learn about Weebit Nano’s offering. The sales teams are working together to approach SkyWater customers and offer them Weebit's ReRAM.

IP Partner Weebit Nano



And the associated press release

Weebit and SkyWater Announce Agreement to Take ReRAM Technology to Volume Production



Weebit Nano Limited (ASX: WBT), a leading developer of next-generation memory technologies for the global semiconductor industry, and SkyWater Technology (NASDAQ: SKYT), the trusted technology realization partner, announced an agreement to take Weebit’s innovative Resistive RAM (ReRAM) technology to volume production. In addition, SkyWater has licensed the technology for use with customer designs”

. . .

”Commercialization of ReRAM technology will provide enhancements to a range of new electronics in industries such as automotive which require high-temperature performance. Weebit’s ReRAM allows semiconductor memory elements to be significantly faster, less expensive, more reliable and more energy efficient than those using existing embedded Flash memory solutions.”




The following is from the SkyWater IP Partner Page

5541a5d5b58e19a1cc7c1e007f35c002fadd71.png
 
  • Like
  • Love
Reactions: 3 users

Slymeat

Move on, nothing to see.
We are mentioned directly in the page of skywater. I didn't saw that yet.
View attachment 19162
https://www.skywatertechnology.com/cmos/

And even in the German press they are now writing about ReRAM:

View attachment 19163

"Market analysis of emerging storage technologiesPCM, ReRAM, FRAM and MRAM on the rise​

10/11/2022 By Michael Eckstein

44 billion US dollars by 2032: Young memory technologies such as PCM, ReRAM, FRAM and MRAM will conquer a significant share of the overall market over the next decade. The market analysts Objective Analysis and Coughlin Associates are convinced of this. Investments in the required production facilities are also increasing accordingly.

Slowly but surely, other promising memory technologies such as PCM, ReRAM, FRAM and MRAM are establishing themselves alongside the top dogs Flash, SRAM and DRAM.  Analysts say their market volume is set to increase rapidly in the coming years. Slowly but surely, other promising memory technologies such as PCM, ReRAM, FRAM and MRAM are establishing themselves alongside the top dogs Flash, SRAM and DRAM. Analysts say their market volume is set to increase rapidly in the coming years.
(Image: Coughlin Associates)
Emerging non-volatile memory technologies are on the rise: by 2032 they will generate a market volume of around 44 billion US dollars. That's according to a recent joint report by market analysts Objective Analysis and Coughlin Associates, "Emerging Memories Enter the Next Phase." dr Thomas Coughlin, President of Coughlin Associates, states, "This is the semiconductor market to watch over the next decade."

Companies active in this market could expect significant growth. As such, memory manufacturers and foundries should consider investing in this area if they want to benefit from this development, says memory expert Coughlin.

Established storage technologies are reaching their limits​

Current memory technologies, including flash memory (NAND and NOR), DRAM and SRAM, are reaching technological limits as they continue to improve. Flash cells, for example, cannot be arbitrarily reduced in size because below a minimum size they simply do not contain enough charge carriers for stable operation.


For this reason, intensive work is being done worldwide on the development of new memory technologies such as PCM (Phase Change Memory), ReRAM (Resistive RAM), FRAM (Ferroelectric RAM) and MRAM (Magnetoresistive RAM) as well as on a number of less common technologies such as carbon nanotubes. According to memory expert Coughlin, most of these are non-volatile and can be used for long-term storage or as memory that doesn't lose its information when power is off. This offers advantages for battery-powered or energy-harvested devices, but also enables energy savings in data centers.

Different storage technologies for different use cases​

Based on the current sophistication and characteristics of these technologies, resistive RAM (RRAM) appears to be a potential replacement for flash memory, Coughlin said. However, the change will not take place abruptly, but gradually over the next decade. Until then, Flash will still be developed for a few technology generations.

The introduction of 3D XPoint memory by Micron and Intel will have an increased impact on the need for DRAM, said Jim Handy, general manager of Objective Analysis. 3D XPoint is a type of phase change memory (PCM) and has proven to be highly durable, has a higher density than DRAM and achieves a performance that lies between NAND flash and DRAM. Intel introduced NVMe SSDs with its Optane technology (using 3D XPoint) in 2017 and started shipping DIMM Optane modules in 2019 - but has since dropped out of the joint venture with development partner Micron, which is now developing the memory in-house developed and manufactured.



Magnetic RAM (MRAM) and Spin Tunnel Torque RAM (STT MRAM) are beginning to replace NOR, SRAM, and possibly DRAM, according to the analysis. The ability to replace volatile memory with high-speed, long-endurance non-volatile memory makes these techniques very attractive. As production volumes increase, manufacturing costs and ultimately sales prices will fall – and Coughlin believes that this will make MRAM technologies more competitive.

Ferroelectric RAM (FRAM) and some RRAM technologies have already found some niche applications, according to the augurs, and with the deployment of HfO FRAM, the number of niche markets available for FRAM could increase.

Spin-based logic for modern processors​

The transition to non-volatile solid-state storage and cache storage will help reduce power consumption, as well as enable new power-saving modes and faster system-state recovery after power-off. According to Coughlin and Handy, this can be used to build more stable computer architectures that retain their operating state even when switched off. Finally, spintronics technology, which uses electron spin rather than electricity for logic processing, could be used to make future microprocessors. Spin-based logic could enable very efficient in-memory processing.

The use of non-volatile memory in combination with CMOS logic is of great importance in the electronics industry - for example in microcontrollers. As a replacement for multi-transistor SRAM, STT MRAM could reduce the transistor count, making it a cost-effective, higher-density solution. A number of enterprise and consumer devices already use MRAM as embedded cache memory. Coughlin points out that all major foundry companies are already offering processes for MRAM as embedded memory in SoC products.

The availability of STT MRAM has accelerated this trend, according to the analysts. Due to the compatibility of MRAM and STT-RAM processes with conventional CMOS processes, these memories can be built directly on CMOS logic wafers or even integrated directly into the CMOS fabrication. "This is an advantage over flash memory, which doesn't have the same compatibility with conventional CMOS processes," says Coughlin.


The potential energy savings of non-volatile and simpler MRAM and STT-MRAM compared to SRAM are significant, according to the study authors. As the cost of MRAM in US dollars per GB approaches that of SRAM, this replacement could lead to significant market expansion.

Emerging storage technologies as separate ICs or embedded IP​

Already today, developers and users of system-on-chips (SoCs) are integrating the new non-volatile memories into designs to exploit their advantages, for example reduced power consumption and better system responsiveness. In some areas, the new memory types could displace previously dominant technologies such as NOR flash, SRAM and DRAM. They could replace both standalone memory chips and embedded memory in microcontrollers, ASICs and even computing processors, and potentially create new markets of their own.

The Japanese microcontroller manufacturer Renesas presented its optimized embedded STT-MRAM technology at the "2022 Symposium on VLSI" in June and demonstrated a chip manufactured in a 22 nm process.

And Taiwan's Industrial Technology Research Institute (ITRI) has announced two new MRAM collaborations. Together with TSMC, the world's largest chip contract manufacturer, it plans to develop SOT-MRAM array chips characterized by high write efficiency and low write voltage. ITRI states that its SOT-MRAM achieves a write speed of 0.4 nanoseconds and a high endurance of 7 trillion read and write cycles. The memory should also be able to retain data for more than ten years. In addition, ITRI wants to work with the National Yang Ming Chiao Tung University (NYCU) to develop a magnetic memory technology that can operate across a temperature range of almost 400 degrees Celsius.

A few months ago, Everspin Technologies also launched a new family of MRAM products with SPI/QSPI/xSPI interfaces. The persistent memory achieves a read and write bandwidth of 400 MB/s via the new JEDEC-standard Extended Serial Peripheral Interface (xSPI). The EMxxLX family is currently available in densities from 8 Mbit to 64 Mbit.

The German semiconductor group Infineon has also offered fast FRAM chips since its takeover of Cypress. The non-volatile memory can instantly capture and store critical data in the event of a power failure. This makes it suitable, for example, for mission-critical data-acquisition applications such as high-performance programmable logic controllers (PLCs) that require highly reliable control and high throughput, for life-supporting patient monitoring devices, or for accident data recorders.

Advantages over previous storage technologies​

"Designers of all types of systems are finding that new memories offer advantages over existing technologies," said Jim Handy, general manager of Objective Analysis. "The Internet of Things will be revolutionized as new types of embedded memory reduce power consumption." Larger systems, too, are already changing architectures and using persistent memory to improve latency and data integrity.

The report projects that standalone MRAM and STT-MRAM sales will grow to approximately $1.4 billion, more than thirty times 2021 standalone MRAM sales. At the same time, according to Handy, embedded ReRAM and MRAM are increasingly competing with embedded NOR and SRAM in SoCs, which should also lead to strong sales growth.
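As a sanity check on that growth claim, the 2021 base implied by the two quoted figures can be computed directly. A minimal Python sketch; the figures are those cited above, and the arithmetic is illustrative only, not data from the report itself:

```python
# Back-of-the-envelope check: projected standalone MRAM/STT-MRAM sales of
# ~$1.4B at "more than thirty times" 2021 levels implies the 2021 base below.
# Illustrative arithmetic on the quoted figures only.

projected_sales_usd = 1.4e9   # projected standalone MRAM/STT-MRAM sales
growth_multiple = 30          # "more than thirty times 2021 sales"

implied_2021_base = projected_sales_usd / growth_multiple
print(f"Implied 2021 standalone MRAM sales: ${implied_2021_base / 1e6:.0f}M")
# prints "Implied 2021 standalone MRAM sales: $47M"
```

A base of roughly $47 million is consistent with standalone MRAM still being a niche market in 2021, which is what makes the projected multiple so striking.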

New technologies are capturing a significant share of the storage market​

According to Coughlin and Handy, emerging technologies will capture a significant portion of the overall memory market over the next decade. However, that market will continue to be dominated by (3D) NAND flash and DRAM.

The already heavily consolidated market for NAND flash memory alone, dominated by Samsung, Kioxia, Micron and SK Hynix (which took over Intel's NAND memory division for 9 billion US dollars in 2021), had a volume of around 66.5 billion US dollars in 2021 and, according to analyses by Mordor Intelligence, will grow to around 94 billion US dollars by 2027. Micron estimates that DRAM and NAND flash combined will reach a total volume of around US$330 billion in 2030 (2021: US$161 billion).

In all cases, the driving force is the explosive growth of data volumes across all application areas, with increasing digitization, artificial intelligence/machine learning, mobility and connectivity identifiable as the macro trends. IDC estimates that the volume of newly generated data worldwide will increase from 81 zettabytes per year in 2021 to 180 zettabytes in 2025.
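For context, the compound annual growth rates implied by those projections can be worked out from the quoted figures. A small Python sketch; the numbers are the ones cited above, and the arithmetic is my own, illustrative only:

```python
# Implied CAGRs from the market and data-volume figures quoted above.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

# NAND flash market: ~$66.5B (2021) -> ~$94B (2027), per Mordor Intelligence
nand_cagr = cagr(66.5, 94.0, 2027 - 2021)

# Newly generated data: 81 ZB (2021) -> 180 ZB (2025), per IDC
data_cagr = cagr(81.0, 180.0, 2025 - 2021)

print(f"NAND market CAGR 2021-2027: {nand_cagr:.1%}")   # about 5.9% per year
print(f"Data volume CAGR 2021-2025: {data_cagr:.1%}")   # about 22.1% per year
```

The contrast is the interesting part: the data being generated is growing almost four times faster than the NAND market that has to store it, which is exactly the gap the emerging memory technologies are chasing.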

Manufacturers of production systems and plants also benefit​

“Many of these emerging memory types require new tooling and manufacturing processes to integrate disparate materials and processes. This will also boost the growth of the capital-equipment market,” adds Coughlin. Total sales of MRAM fabrication equipment will increase to more than forty-nine times 2021 levels, reaching approximately $1.5 billion in 2032.

The 241-page publication, Emerging Memories Enter the Next Phase, examines not only PCM, ReRAM, FRAM and MRAM, but also a number of less common technologies."
https://www.ip-insider.de/pcm-reram-fram-und-mram-im-aufwind-a-bf67c5c02db812ba2bfb1ca11be1be29/
Thanks for the translation @cosors, a truly riveting read.

It is wonderful to see esteemed others talking of the concept of future computers having a combination of fast NVM and faster volatile cache RAM, so as to remember state when switched off and make for much more efficient restarts. Modern, over-complicated operating systems have caused such delays on start-up, and I would love to see that done away with.

I am still hoping the cost efficiencies of ReRAM manufacture help that technology become predominant. I am certain it will at least replace FLASH.

I expect the complexities of MRAM manufacture will be its major hurdle. I also don't believe economies of scale will be sufficient to counter that argument for decades, especially considering that MRAM requires exotic materials to be introduced into wafer production. I can't accept MRAM's cost approaching that of SRAM, but I do accept that it will, one day, replace it. I do recognise that MRAM is far superior in speed to ReRAM, and hence will have uses that ReRAM can't be used for. I see no reason why both technologies can't exist in harmony.

I also believe that, once proven (and that will be this year), Weebit Nano's ReRAM will start replacing FLASH sooner rather than later, as FLASH has real limitations and is already failing catastrophically. Users (such as Tesla) are already looking for alternatives. And technologies are begging for NVM that can be embedded, as Weebit Nano has just proven can be done with its ReRAM.

I didn't previously know of the existence of Spin-Transfer Torque RAM (STT-MRAM); what a mind-boggling concept that is.

I studied electron spin during my university days, back in the '80s; it is wonderful to see what was then expressed as a mathematical concept with physical properties being put to use in real-world situations. I was already aware that it forms part of the basis of the operation of quantum computers, and now I see it mentioned in NVM discussions. I find that neat.

I am reminded of the only intelligent question a colleague (Michael) once asked during a physics lecture. He was drunk at the time, and even attended the lecture with a bottle of beer that he proudly displayed on his desk (a longneck of course), as also did I.

It’s a memory from long ago, from 1983 in fact, but I’ll try to do it justice:
The theatrics of the question asking were quite memorable also.

The lecturer notably ignored Michael‘s frantically waved hand and only after he stood up, bottle of beer at hand, did the lecturer concede to giving him an audience.

Michael asked:
”If an electron is merely a probability distribution of charge in space, how can it spin?”

Then he fell back into his chair.

The audience was stunned. Michael had never before asked a question, and the one he had just asked was on the lips of many of them.

The lecturer responded, in his difficult to understand broken English (he was from Poland—which actually seems appropriate for a modern physics lecturer considering a lot of the theory emanated from that neck-o-the-woods), “In Poland we have no word for spin.”

The auditorium was initially stunned into silence, but then burst out laughing. Even in my inebriated state I accepted his answer and understood what he meant: the English word “spin” was an unfortunately loose interpretation of the Polish word used to describe the concept.

People who now conceive ways to utilise this phenomenon really do think on a different level to the rest of us.
 
  • Love
  • Like
Reactions: 3 users

Slymeat

Move on, nothing to see.
Here’s a report from Pitt Street Research that values Weebit at $4.75 per share. The report is quite thorough, at 19 pages long.

$4.75 will be cheap when qualification of the latest wafers, from SkyWater, completes and Weebit goes into full scale production (expected mid 2023).

And look, this report even talks about one of my other pet companies, Brainchip. Maybe that seed I planted about marrying their two technologies will one day come to fruition. ReRAM and Akida living side by side on the same chip. Now that would be grand.

WBT research report_101122.pdf
 
  • Like
  • Love
Reactions: 4 users

alwaysgreen

Top 20
Here’s a report from Pitt Street Research that values Weebit at $4.75 per share. The report is quite thorough, at 19 pages long.

$4.75 will be cheap when qualification of the latest wafers, from SkyWater, completes and Weebit goes into full scale production (expected mid 2023).

And look, this report even talks about one of my other pet companies, Brainchip. Maybe that seed I planted about marrying their two technologies will one day come to fruition. ReRAM and Akida living side by side on the same chip. Now that would be grand.

WBT research report_101122.pdf
Pitt St research is paid for by the company, so don't get too excited haha. But I agree, I think potentially we'll run to $5 when we sign with a tier 1 fab, and $7-$10 by late 2023 once we are selling to the market.
 
  • Like
  • Fire
Reactions: 3 users

Wags

Regular
Hi all, full disclosure here, I'm being lazy. Can anyone tell me if WBT, SLX or AXE are likely to enter the ASX 300 this rebalance?
cheers in advance
 

Slymeat

Move on, nothing to see.
Weebit’s VP of Marketing and Business Development Eran Briman will present how #ReRAM IP can differentiate your next design at IP-SOC 2022. It is fantastic to see Weebit rubbing shoulders with the right people and getting their message out to the world.

This conference is in Grenoble, France, so the Weebit presentation will be at 8:25 pm Dec 1 AEDT.

IP-SoC 2022 will be the 25th edition of the working conference fully dedicated to IP (Silicon Intellectual Property) and IP based electronic systems. Seems a perfect fit for Weebit!

The event is the annual opportunity for IP providers and IP consumers to share information about technology trends, innovative IP SoC products, Breaking IP/SoC News, Market evolution and more.

The Grenoble event is special as it also hosts the annual IP Think Tank meeting, where high-level executives, market analysts and technical experts from across the design chain (foundry, technology, design methodology and EDA tools) share their vision of the future of the IP concept. It will be the right time to analyse the rapid evolution and consolidation in the IP market and the IP business.





1669494273852.jpeg
 
  • Like
Reactions: 2 users

cosors

👀

Gaining exposure to the booming semiconductor sector through 3 ASX chip stocks​

Stocks Down Under
Our own semiconductor analyst Marc Kennis was part of a panel of semiconductor specialists, including senior executives at BluGlass (ASX:BLG), Weebit Nano (ASX:WBT) and Revasum (ASX:RVS), and spoke about investing in ASX-listed chip stocks and the future of the nascent Australian semiconductor industry.
 
  • Like
Reactions: 3 users

cosors

👀
Not necessarily. I have quite a few BRN and before today, I feel like I had about 55,000 "spare" BRN shares. I was waiting for a day where BRN had a good green day to sell and buy more WBT.

I am equally bullish on both. I guess I am starting to feel there is more upside potential in WBT but who knows. 🤷🏽 I'd love for both to end up with $30 billion market caps one day.
How right you are. Slow and steady. At the moment you can really rely on the SP!
What could be the next big milestone or big thing? Is it an offtake agreement or a contract, or will it be some final analysis report or certificate or something like that? What should I keep my eyes on the horizon for? Sorry if the question is naive.
 

alwaysgreen

Top 20
How right you are. Slow and steady. At the moment you can really rely on the SP!
What could be the next big milestone or big thing? Is it an offtake agreement or a contract, or will it be some final analysis report or certificate or something like that? What should I keep my eyes on the horizon for? Sorry if the question is naive.
The recent share price increase is likely due to management advising that we will be signing on with a tier 1 fab. Price could explode on signing, but be careful, because last time we ran high on good news and then came crashing down.

This rise seems better though. Slowly rising every day, no major leaps.
 
  • Like
Reactions: 1 users

cosors

👀
The recent share price increase is likely due to management advising that we will be signing on with a tier 1 fab. Price could explode on signing, but be careful, because last time we ran high on good news and then came crashing down.

This rise seems better though. Slowly rising every day, no major leaps.
Yes, I am used to these peaks from the ASX by now. The main thing is that it finds a new level this time. But I'm sure it will. The world is waiting for this next logical step in development I think.
Do you think the qualification process plays an important role in the negotiations? I think I heard that they want to be through the entire qualification programme by the end of the first half of 2023 at the latest.
 

alwaysgreen

Top 20
Yes, I am used to these peaks from the ASX by now. The main thing is that it finds a new level this time. But I'm sure it will. The world is waiting for this next logical step in development I think.
Do you think the qualification process plays an important role in the negotiations? I think I heard that they want to be through the entire qualification programme by the end of the first half of 2023 at the latest.
I think it's important, but my understanding is that the tier 1s are basically chomping at the bit for Weebit's ReRAM solution. I think we will sign before qualification is complete, but there would be a clause that it is pending qualification.
 
  • Like
Reactions: 3 users