WBT Discussion 2022

Bravo

If ARM was an arm, BRN would be its biceps💪!
Holy shirts and pants! This is not cool bro.
 
  • Like
  • Thinking
  • Sad
Reactions: 3 users

alwaysgreen

Top 20
  • Like
Reactions: 3 users

Slymeat

Move on, nothing to see.
I wonder if being removed from the Emerging Companies index had anything to do with this share price fall?

I assume fundies tracking that index needed to pull their investments, which, given the recent growth, may have been substantial. Although I’m just clutching at straws here in the absence of anything else to explain the drop.

Here’s hoping that at least an equal value of ASX 300 and Small Ordinaries based funds and investors will soon be topping up.

Luckily I wasn’t planning on selling anytime soon! In for the long haul—and that’s at least 5 more years.

The ride was much more enjoyable up until the 10th of March.

Validation of 22nm should give us another boost.🤞🏻
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Looks like it was a leaky ship ahead of the announcement "in connection with a proposed capital raising comprising an institutional placement and a share purchase plan". It’ll be interesting to see if any of the institutions Coby did the roadshow with are involved, in which case it shows confidence in the technology and the direction the company is heading, which is a plus. On the flip side, though, it’s not good for us, at least not in the short term.
 
  • Like
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Announcement just released - $40mil @ $5 placement.
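For anyone wanting to sanity-check the numbers, here is a minimal sketch of the arithmetic implied by the figures above ($40m raised at $5 per share). The pre-placement share count used for the dilution figure is a hypothetical placeholder, not Weebit’s actual register.

```python
# Back-of-the-envelope placement maths (illustrative only).
# raise_amount and placement_price come from the announcement as quoted above;
# shares_outstanding is a HYPOTHETICAL placeholder, not the company's real figure.

raise_amount = 40_000_000        # A$ raised in the institutional placement
placement_price = 5.00           # A$ per new share

new_shares = raise_amount / placement_price
print(f"New shares issued: {new_shares:,.0f}")            # 8,000,000

shares_outstanding = 150_000_000  # hypothetical pre-placement share count
dilution = new_shares / (shares_outstanding + new_shares)
print(f"Dilution for existing holders: {dilution:.1%}")   # ~5.1% under this assumption
```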
 
  • Like
  • Sad
Reactions: 3 users

Slymeat

Move on, nothing to see.
Assuming negotiations for this capital raise DID NOT happen during the last week alone—what company, in their right mind, with a share price trading around $9, negotiates a capital raise at $5?

I accept that this has happened, but something simply doesn’t smell right here. And it most certainly DOES NOT explain the drop in share price that happened BEFORE the capital raise announcement.

We have an annoyingly leaky boat indeed.

So when the halt is lifted, we all know where the share price will head—toward $5 and the value the company has placed on itself.

This also goes some way toward explaining the increase in short interest over the last couple of weeks. Weebit had no shorts for a very long time, but at the start of March short interest jumped to 2%. Those shorters have just made themselves a very good return.

The short interest and share price drop this week cannot be a coincidence!

[attached image]
 
  • Like
Reactions: 3 users

alwaysgreen

Top 20
(Quoting Slymeat’s post above.)

Agreed, but also: if the share price is hopefully $15 in December because the cash injection allows us to progress faster, this will be a distant memory.

We are mere pawns in the market. At the end of the day, retail always gets stuffed around while the big boys make money day to day. In the long run, if Weebit can reach the potential that management has alluded to, we will all be rolling in it.
 
  • Like
Reactions: 4 users

cosors

👀

"Neuromorphic Computing: Self-Adapting HW With ReRAMs​


APRIL 3RD, 2023 - BY: TECHNICAL PAPER LINK

A new technical paper titled “A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing” was published by researchers at Infineon Technologies, Politecnico di Milano and IUNET, Weebit Nano, and CEA Leti.

Abstract
“Neurobiological systems continually interact with the surrounding environment to refine their behaviour toward the best possible reward. Achieving such learning by experience is one of the main challenges of artificial intelligence, but currently it is hindered by the lack of hardware capable of plastic adaptation. Here, we propose a bio-inspired recurrent neural network, mastered by a digital system on chip with resistive-switching synaptic arrays of memory devices, which exploits homeostatic Hebbian learning for improved efficiency. All the results are discussed experimentally and theoretically, proposing a conceptual framework for benchmarking the main outcomes in terms of accuracy and resilience. To test the proposed architecture for reinforcement learning tasks, we study the autonomous exploration of continually evolving environments and verify the results for the Mars rover navigation. We also show that, compared to conventional deep learning techniques, our in-memory hardware has the potential to achieve a significant boost in speed and power-saving.”
Find the technical paper here. Published March 2023.
Bianchi, S., Muñoz-Martin, I., Covi, E. et al. A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing. Nat Commun 14, 1565 (2023). https://doi.org/10.1038/s41467-023-37097-5."
https://semiengineering.com/neuromorphic-computing-self-adapting-hw-with-rerams/
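For intuition only, here is a tiny sketch of what the "homeostatic Hebbian learning" mentioned in the abstract can look like in code: co-active connections are strengthened (Hebbian), then each neuron’s total input weight is rescaled back toward a fixed budget (homeostasis). This is not the paper’s actual algorithm; the array sizes, learning rate and weight budget below are invented for illustration, and the weights simply stand in for synaptic conductances.

```python
# Conceptual sketch of a Hebbian update with a homeostatic constraint.
# NOT the algorithm from the Nature Communications paper; all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 16, 8
weights = rng.uniform(0.1, 0.9, size=(n_post, n_pre))   # synaptic "conductances"

def hebbian_homeostatic_step(weights, pre, post, lr=0.05, target_sum=4.0):
    """Strengthen co-active connections (Hebbian), then rescale each
    postsynaptic neuron's total input weight to a fixed budget (homeostasis)."""
    weights = weights + lr * np.outer(post, pre)         # Hebbian potentiation
    row_sums = weights.sum(axis=1, keepdims=True)
    weights = weights * (target_sum / row_sums)          # homeostatic scaling
    return np.clip(weights, 0.0, 1.0)                    # keep within conductance bounds

pre = (rng.random(n_pre) > 0.5).astype(float)            # presynaptic activity (0/1)
post = (rng.random(n_post) > 0.5).astype(float)          # postsynaptic activity (0/1)
weights = hebbian_homeostatic_step(weights, pre, post)
```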


A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing
 
  • Like
  • Love
Reactions: 4 users

Slymeat

Move on, nothing to see.
We all already knew about the commercial milestone of Weebit’s 130nm technology, but it is reassuring to see others talking about it now. It seems to be kicking some goals of late.

 
  • Love
Reactions: 1 user

Slymeat

Move on, nothing to see.
In a members-only interview I attended (virtually) on Anzac Day, which I unfortunately cannot share, Coby mentioned some very interesting tidbits that I believe I can pass on, as they have been said publicly before and/or are generally known concepts. I feel that Coby restating them is significant.

- He mentioned the technology getting into the teens of nanometers. That will be an achievement.

- He also mentioned advances with their selector and how important that is to the discrete market, with some customers extremely interested in it. My take on the discussion is that their patented selector may be saleable by itself.

- Concentrating on simplifying the fab process, using no new materials and adding only about 7% cost to include Weebit ReRAM IP, is their massive advantage over other ReRAM producers (of which there are a few, and about whom Coby is not concerned). Other ReRAM technologies that others attempted to commercialise (some of them years ago) failed because the fabs wouldn’t touch them due to the difficulty of incorporating the technology. So this talk of signing on a tier-1 fab is about as critical as it can get!

- MRAM is an often-mentioned competing NVM technology, and Coby brought up something I was not previously aware of on this front. It is obvious that MRAM requires magnetic materials to be incorporated into the silicon wafer (that’s what the first ‘M’ stands for); what I didn’t realise is that anything magnetic is “toxic” to the fab environment, so that part of the line has to be isolated from everything else at great expense, even requiring specially built equipment. I knew MRAM required exotic materials and added about 40% to the fabrication cost, but this mention of toxicity was a first for me. Such a strong word. (A quick, purely hypothetical cost comparison is sketched after this list.)

- It looks like 2024-2025 might be about the time significant revenue starts to roll in for IP incorporated into SoC solutions.

- Weebit’s board is full of some amazingly experienced people, including the No. 2 from ARM and someone who brought the Pentium processor to the world. I can’t remember them all, but Coby was clearly pleased to be surrounded by people with decades of experience (average age 55) who know how to commercialise a good product. I got reassurances that IT WILL HAPPEN.
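As referenced in the MRAM point above, here is that quick cost comparison. The roughly 7% (ReRAM) and roughly 40% (MRAM) adders come from the post itself; the base wafer cost is a hypothetical placeholder purely to make the comparison concrete.

```python
# Hypothetical wafer-cost comparison using the cost adders quoted above.
# base_wafer_cost is an invented placeholder, not a real foundry price.
base_wafer_cost = 3_000.0             # $ per processed wafer (hypothetical)
reram_adder, mram_adder = 0.07, 0.40  # cost adders mentioned in the post

reram_wafer = base_wafer_cost * (1 + reram_adder)
mram_wafer = base_wafer_cost * (1 + mram_adder)

print(f"ReRAM wafer: ${reram_wafer:,.0f} (+{reram_adder:.0%})")
print(f"MRAM wafer:  ${mram_wafer:,.0f} (+{mram_adder:.0%})")
print(f"MRAM premium over ReRAM: {mram_wafer / reram_wafer - 1:.0%}")
```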
 
  • Love
Reactions: 2 users

alwaysgreen

Top 20
What a close to end a phenomenal day Weebiters!!
 
  • Fire
Reactions: 1 user

cosors

👀
(Quoting Slymeat’s interview summary above.)

Look, nice to see the two faces on one page.

[attached image: wbtbrn.png]
 
  • Like
  • Love
Reactions: 2 users

Slymeat

Move on, nothing to see.
Catchy little YouTube video just released. It sums up the technology VERY well.

 
  • Like
Reactions: 5 users

Slymeat

Move on, nothing to see.
The following publication is an interesting, short (4-page) read that discusses NVM from an unbiased viewpoint. Basically it states that RRAM has a long way to go before it becomes price-competitive with DRAM and flash for discrete memory, but wins hands down in embedded solutions.

The article mentions the massive advantage of embedded SoC and back-end-of-line manufacturing that ReRAM offers:

[attached excerpt from the article]


The biggest hurdles, according to the author, are packing density on the wafer and the fact that academia tests with individual devices. The author concludes that testing needs to be performed on complete wafers, which, interestingly, is exactly what Weebit is currently doing, along with researching sub-22nm nodes to increase density.

 
  • Like
  • Fire
Reactions: 3 users

cosors

👀

"AI Reinforcement Learning with Weebit ReRAM

05 June, 23

Alessandro Bricalli​



A paper from Weebit and our partners at CEA-Leti and the Nano-Electronic Device Lab (NEDL) at Politecnico di Milano was recently published in the prestigious journal Nature Communications. It details how bio-inspired systems can learn using ReRAM (RRAM) technology in a way that is much closer to how our own brains learn to solve problems compared to traditional deep learning techniques.

The teams demonstrated this by implementing a bio-inspired neural network using ReRAM arrays in conjunction with an FPGA system and testing whether the network could learn from its experiences and adapt to its environment. The experiments showed that our in-memory hardware not only does this better than conventional deep learning techniques, but it has the potential to achieve a significant boost in speed and power-saving.

Learning by experience


Humans and other animals continuously interact with each other and the surrounding environment to refine their behavior towards the best possible reward. Through a continuous stream of trial-and-error events, we are constantly evolving, learning, improving the efficiency of routine tasks and increasing our resilience to daily life.

The acquisition of experience-based knowledge is an interdisciplinary subject of biology, computer science and neuroscience known as “reinforcement learning,” and it is at the heart of a major objective of the AI community: to build machines that can learn by experience. The goal is machines that can infer concepts and make autonomous decisions in the context of constantly evolving situations.

In reinforcement learning, an agent (the neural network) interacts with its environment and receives feedback based on that interaction in the form of penalties or rewards. Through this feedback, it learns from its experiences and constructs a set of rules that will enable it to reach the best possible outcomes.

In developing such resilient bio-inspired systems, what’s needed is hardware with plasticity, i.e., the ability to adjust its state based on specific inputs and rules, as in the case of biological synapses. The lack of such commercial hardware is one of the current main limitations in implementing systems capable of learning from experience in an efficient way.

NVMs for in-memory computing

Researchers are now looking at non-volatile memories (NVMs) like ReRAM to enable hardware plasticity for neuromorphic computing. ReRAM is particularly well-suited for use in hardware capable of plastic adaptation, as its conductance can be easily modulated by controlling a few electrical parameters. We’ve talked about this previously in several papers and a recent demonstration.

When voltage pulses are applied, the conductance of ReRAM can be increased or decreased by set and reset processes. This is how ReRAM stores information. In the brain, synapses provide the connections between neurons, and they can change their strength and connectivity over time in response to patterns of neural activity. Because of this similarity, ReRAM (RRAM) arrays can be used to create artificial synapses in a neural network which change their strength and connectivity over time in response to patterns of input. This allows them to learn and adapt to new information, just like biological synapses.

In addition to their ability to mimic the plasticity of biological synapses, memristors like ReRAM have several other advantages for these systems. ReRAM is small, low-power, and can be fabricated using standard semiconductor manufacturing techniques in the backend-of-the-line (BEOL), making it easy to integrate into electronic systems.

Power and bandwidth

Deep learning is extremely computationally intensive, involving large numbers of computations which can be very power-hungry, particularly when training large models on large datasets. A great deal of power is also consumed through the high number of iterative optimizations needed to adjust the weights of the network.

Deep learning models also require a lot of memory to store the weights and activations of the neurons in the network, and since they rely on traditional computing architectures, they are impacted by communication delays between the processing unit and the memory elements. This can be a bottleneck that not only slows down computations but also consumes a lot of power.

In the brain, there are no such bottlenecks. Processing and storage are inextricably intertwined, leading to fast and efficient learning. This is where in-memory computing with ReRAM can make a huge difference for neural networks. With ReRAM, fast computation can be done in-situ, with computing and storage in the same place.

The maze runner

While memristor-based networks are not always as accurate as standard deep learning approaches, they are very well-suited to implementing systems capable of adapting to changing situations. In our joint paper with CEA-Leti and NEDL we propose a bio-inspired recurrent neural network (RNN) using arrays of ReRAM devices as synaptic elements, that achieves plasticity as well as state-of-the-art accuracy.

To test our proposed architecture for reinforcement learning tasks, we studied the autonomous exploration of continually evolving environments including a two-dimensional dynamic maze showing environmental changes over time. The maze was experimentally implemented using a microcontroller and a field-programmable gate array (FPGA), which ran the main program, enabled learning rules and kept track of the position of the agent. Weebit’s ReRAM devices were used to store information and adjust the strength of connections between neurons, and also to map the internal state of each neuron.

[Image: a scanning electron microscope image of the SiOx RRAM devices and a sample photo of the packaged RRAM arrays used in this work]

Our experiments followed the same procedure used in the case of the Morris Water Maze in biology: the agent has a limited time to explore the environment under successive trials, and once a trial starts, the sequence of firing neurons maps the movement of the agent.

[Image: representation of high-level reinforcement learning for autonomous navigation considering eight main directions of movement]

The maze exploration is configured as successive random walks which progressively develop a model of the environment. Here is how it generally progressed:

  • At the beginning, the network cannot find the solution and spends the maximum amount of time available in the maze.
  • As the network progressively maps the configuration of its environment, it becomes a master of the problem trial after trial, and it finally finds the optimum path towards the objective.
  • Once the solution is found, the network decreases the computing time with each successive attempt at solving the same maze configuration, because it remembers the solution.
  • Next, the maze changes shape and a different escape path must be found. As it attempts to find the solution, the network receives a penalty in unexpected positions. After an exploration period, it successfully gets to the target again.
  • Finally, the system comes back to the original configuration and the network easily retrieves the first solution – faster than before. This is thanks to the residual memory of the internal states and to the intrinsic recurrent structure.
[Image: (left) the system re-learns quickly when presented with “maze 1” the second time; (right) ReRAM resistance can be easily modulated by using different programming currents, enabling some memory of the original maze configuration due to gradual adaptation of the internal voltage of the neurons]

You can see a short video here showing the experimental setup and the hardware demonstration of the exploration of the dynamic environment via reinforcement learning.

In our paper, we go into much more detail on the experiments, including testing the hardware for complex cases such as the Mars rover navigation to investigate the scalability and reconfigurability properties of the system.

Saving space with fewer neurons

One of the key features that makes our implementation so effective is that it uses an optimized design based on only eight CMOS neurons, representing the eight possible directions of movement inside the maze. CMOS neurons are generally integrated in the front-end-of-line (FEOL) and require a large amount of circuitry, so an increase in the number of neurons is associated with an increase in area/cost.

In our system, the ReRAM, acting as the threshold modulator, is the only thing that changes for each explored position in the maze, while the remaining hardware of the neurons remains the same. For this reason, the size of the network can be increased with very small costs in terms of circuit area by increasing the amount of ReRAM – which is dense and easily integrated in the back-end-of-line (BEOL).

Our bio-inspired approach shows far better management of computing resources compared to standard solutions. In fact, to carry out an exploration at a certain average accuracy (99%), our solution turns out to be 10 times less expensive, as it requires 10 times fewer synaptic elements (the number of computing elements is directly proportional to the area/power consumption).

[Image: thanks to the reinforcement learning, the energy consumed by each neuron drastically decreases as more and more trials are allowed]

Key Takeaways

Deep learning techniques using standard Von Neumann processors can enable accurate autonomous navigation but require a great deal of power and a long time to make training algorithms effective. This is because the environmental information is often sparse, noisy and delayed, while training procedures are supervised and require direct association between inputs and targets during the backpropagation. This means that complex models of convolutional neural networks are needed to numerically find the best combination of parameters for the deep reinforcement computation.

Our proposed solution overcomes the standard approaches used for autonomous navigation using ReRAM based synapses and algorithms inspired by the human brain. The framework highlights the benefits of the ReRAM-based in-situ computation including high efficiency, resilience, low power consumption and accuracy.

Since biological organisms draw their capability from the inherent parallelism, stochasticity, and resilience of neuronal and synaptic computation, introducing bio-inspired dynamics into neural networks would improve robustness and reliability of artificial intelligent systems.

Read the entire paper here: A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing."
https://www.weebit-nano.com/ai-reinforcement-learningwith-weebit-reram/
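To make the set/reset-style plasticity and maze exploration described in the article above a little more concrete, here is a toy sketch. It is emphatically not the paper’s architecture: it is plain tabular Q-learning on an invented grid maze, where each table entry stands in for a synaptic conductance that gets nudged up or down after every move, the way a ReRAM cell would be pulsed. Grid size, rewards and learning constants are all made up.

```python
# Toy maze-exploration sketch (not the paper's architecture): tabular Q-learning
# where each Q entry plays the role of a ReRAM-like conductance, nudged up (set)
# or down (reset) by reward after each move. All constants are illustrative.
import random

GRID, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # 4 moves here; the paper uses 8
Q = {(r, c): [0.0] * 4 for r in range(GRID) for c in range(GRID)}
alpha, gamma, eps = 0.3, 0.9, 0.1                      # learning rate, discount, exploration

def step(state, a):
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < GRID and 0 <= c < GRID):
        return state, -0.5                             # bumped a wall: penalty
    return (r, c), (1.0 if (r, c) == GOAL else -0.01)  # small cost per move, reward at goal

for trial in range(300):
    state, steps = (0, 0), 0
    while state != GOAL and steps < 200:
        a = random.randrange(4) if random.random() < eps \
            else max(range(4), key=lambda i: Q[state][i])
        nxt, reward = step(state, a)
        # reward-driven update: analogous to a set/reset pulse on a conductance
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state, steps = nxt, steps + 1

best = max(range(4), key=lambda i: Q[(0, 0)][i])
print("Greedy first move from the start cell:", ACTIONS[best])
```

After enough trials the greedy policy heads toward the goal corner, and re-running the loop after changing GOAL illustrates the “re-learning a changed maze” idea in miniature.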
 
  • Love
Reactions: 1 user

cosors

👀

"Design Considerations for Embedded NVM in High-Radiation Applications​


June 05, 2023 WHITEPAPER

Radiation can impact the operation of non-volatile memory (NVM) technologies, with the potential to cause permanent damage to semiconductor devices used in high-radiation environments. Selecting the right embedded NVM is critical for devices in these environments, including aerospace and medical devices.​


Embedded NVM in High-Radiation Applications


Embedded floating gate memories such as flash are particularly sensitive to even relatively low radiation doses, so using flash for applications in high-radiation environments adds complexity to the design process, potentially increasing die size, cost and latency. Research shows that this problem only increases with the move to smaller process geometries. Until recently, there hasn’t been an alternative solution, but new NVM technologies like Resistive Random Access Memory (ReRAM) provide an alternative.

In this whitepaper, we describe initial results of research conducted by Weebit Nano and the Nino Research Group (NRG) in the University of Florida’s Department of Materials Science and Engineering, who are studying the effects of radiation on Weebit ReRAM technology under high doses of gamma irradiation. Learn why this technology is inherently tolerant to radiation, and how it can simplify your designs for rad-hard applications."

See attachment
 

Attachments

  • WP_Design-Considerations-for-Embedded-NVM-in-High-Radiation-Applications_Weebit-ReRAM-RRAM-IP-...pdf
    747.7 KB · Views: 76
  • Like
Reactions: 1 user

cosors

👀
""ISI Highly Cited" is a database of "highly cited researchers"—scientific researchers whose publications are most often cited in academic journals over the past decade, published by the Institute for Scientific Information. Inclusion in this list is taken as a measure of the esteem of these academics and is used, for example, by the Academic Ranking of World Universities. It was founded under ISI and as of 2018 continues under the same name at Clarivate.[4]"
https://en.wikipedia.org/wiki/Institute_for_Scientific_Information#ISI_Highly_Cited

"James Mitchell Tour (born 1959) is an American chemist and nanotechnologist. He is a Professor of Chemistry, Professor of Materials Science and Nanoengineering, and Professor of Computer Science at Rice University in Houston, Texas. Tour is a top researcher in his field, having an h-index of 165 with total citations index over 125,000 and was listed as an ISI highly cited researcher."

"Tour is on the board and working with companies including Weebit (silicon oxide electronic memory), ..."
https://en.wikipedia.org/wiki/James_Tour

[attached image: tour.png]

https://www.weebit-nano.com/company/leadership/

Perhaps he has long been known to you, but I didn’t know of him yet. Very good to have him on board!
 
  • Fire
Reactions: 1 user

Bravo

If ARM was an arm, BRN would be its biceps💪!
[GIF attachments: wee-ride.gif, giphy.gif]
 
  • Haha
  • Fire
Reactions: 3 users