BRN Discussion Ongoing


TheDrooben

Pretty Pretty Pretty Pretty Good
Shorters must be crapping themselves........TA traders will be getting strong buy signals on their platforms........

[screenshot attached]



Happy as Larry
 
  • Like
  • Fire
  • Love
Reactions: 40 users

Learning

Learning to the Top 🕵‍♂️
  • Like
  • Fire
  • Love
Reactions: 42 users

HopalongPetrovski

I'm Spartacus!
BrainChip Investors when we get a little taste of the green. 🤣🤣🤣......

 
  • Love
  • Haha
  • Like
Reactions: 13 users

Damo4

Regular

[two GIF attachments]
  • Haha
  • Fire
Reactions: 10 users

Diogenese

Top 20
Wow Dio. Do you think Weebit Nano ReRAM memory fits into this somewhere? This question is not to imply that I could understand all of your post.
Hi wasMADX,

I haven't studied Weebit's tech.

This is what they say about Edge AI:

Edge Artificial Intelligence​

Regardless of the specific application, storing weights for artificial Neural Networks (NNs) requires significant on-chip memory.​

Depending on the network size, requirements typically range between 10Mb – 100Mb. For AI edge products where low power consumption is so important, what’s needed is small, fast, on-chip (embedded) NVM.
Although it is common and simple for near-memory computation, SRAM won’t work for these applications because it is extremely large and volatile. This volatility means it must stay connected to power, consuming a great deal of power and also risking data loss in the event that power is unexpectedly cut off. Given its size, it would also require additional off-chip NVM, leading to memory bottlenecks and power waste. On-chip flash memory is also far from ideal. As NVM, it can persistently hold weights even during power-off, but it can’t scale below 28nm as embedded on-chip memory. This means a separate chip is needed – leading to memory bottlenecks.
ReRAM (RRAM) is 4x smaller than SRAM so more data can reside locally. It scales well below 28nm, it is non-volatile, and it enables quick memory access. Weebit ReRAM is ideal for advanced edge AI chips.


https://www.weebit-nano.com/market/applications/#edge

They talk about storing weights for ANNs.

They do not propose their ReRAM for in-memory compute, which would be analog.

I would guess that there is the pervasive analog problem of manufacturing variations which would result in unreliable calculations.

One of the advantages of their ReRAM is its robustness in hostile environments:

https://www.weebit-nano.com/market/applications/#aerospace

Aerospace and Defense​

ICs for aerospace and defense have unique requirements for robustness, reliability at high temperatures, and tolerance to radiation (rad-hard) and electromagnetic fields. As these products are often required to last for years – mostly without maintenance – longevity is another key trait. Memory must be reliable for the lifetime of the product.

Weebit ReRAM has significantly better endurance than flash, ensuring it can support products with long lifetimes. It is also able to maintain its reliability over a broad range of temperatures, from -55 °C up to 175 °C. ReRAM (RRAM) cells are inherently immune to various types of radiation and electromagnetic fields. In fact, Weebit ReRAM can withstand 350x more radiation than flash. These features make Weebit ReRAM ideal for aerospace and defense applications.


It could be used as a backup memory for Akida's configuration data (weights, connexions ...) in remote/inaccessible applications.
Hi Romper

Interesting read, particularly the reviewer comments and corrections. They are quite long, but the third reviewer actually points out the lack of clarity where the work of Maass and Simon Thorpe is concerned. In answering this comment the authors add to the Maass references but completely ignore Thorpe, and in so doing ignore integrate-and-fire SNNs.

Not that I can say with absolute certainty that an AKIDA-type integrate-and-fire SNN is not covered by any of the references cited, but from memory none jumps out at me as having looked at the same.

It is almost as if this paper was trying desperately not to in any way even hint at the existence of integrate-and-fire SNNs or BrainChip's AKIDA solution.

My opinion only DYOR
Fact Finder
Hi FF,

This sounds like the pre-Thorpe rate coding.

"An enhanced version of the integrate and fire model is the leaky integrate and fire (LIF) model which also takes the membrane voltage leak into account. SFA, i.e. increase in the inter-spike interval (ISI) over time for a regular spike train, is an intrinsic feature of biological neurons. In this paper, we will focus on SFA as an important feature to explore in SNNs."

Thorpe deduced from Adrian's research of 70 years before that the information in the spike train repetition rate was largely redundant, the relevant information being conveyed by the amplitude of the initial spike, and that the larger spikes (those conveying the most significant information) arrived before weaker spikes (possibly because they reached the firing threshold earlier?).

This then led to N-of-M coding in which only the first N incoming spikes were passed on for processing. This is quite similar to DVS cameras where pixels whose output does not exceed a threshold are ignored.
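For anyone who likes to see the idea in code, here is a toy sketch (the spike format and function name are my own, purely for illustration): out of M incoming spikes, only the first N to arrive are passed on, and the rest are simply dropped.

```python
# Toy illustration of N-of-M coding: spikes are (neuron_id, arrival_time)
# pairs, and only the first n to arrive are propagated for processing.
# The data format and function name are assumptions for illustration only.
def n_of_m_filter(spikes, n):
    """Keep only the first n spikes by arrival time; drop the rest."""
    ordered = sorted(spikes, key=lambda s: s[1])   # sort by arrival time
    return ordered[:n]

# Example: M = 8 input neurons, keep only the N = 3 earliest spikes.
incoming = [(0, 4.1), (1, 0.7), (2, 2.3), (3, 9.0),
            (4, 1.2), (5, 6.5), (6, 0.9), (7, 3.4)]
print(n_of_m_filter(incoming, 3))   # -> [(1, 0.7), (6, 0.9), (4, 1.2)]
```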



The paper postulates a number of reasons for SFA.

The biological phenomenon of spike frequency adaptation​

In biology, if a neuron is stimulated in a repeated and prolonged fashion, for example by constant sensory stimulation or artificially by applying an electric current, it first shows a strong onset response, followed by an increase in the time between spikes.

Hence the spike rate attenuates and the so-called spike frequency adaptation takes place.

Experimental data from the Allen Institute [17] show that a substantial fraction of excitatory neurons of the neocortex, ranging from 20% in the mouse visual cortex to 40% in the human frontal lobe, exhibit SFA, as shown in Fig. 2a, b.

There can be different causes for SFA:

First, short-term depression of the synapse through depletion of the synaptic vesicle pool. This means that at the connection site between neurons, the signal from the pre-synaptic neuron cannot be transmitted to the next neuron.

Second, by an increase in the spiking threshold of the post-synaptic neuron due to the activation of potassium channels by calcium, which has a subtractive effect on the input current. Hence, the same input current that previously caused a spike does not lead to a spike anymore.

Third, lateral and feedback inhibition in the local network reduces the effect of excitatory inputs in a delayed fashion [20]. Therefore, like in the second case, spike generation is hampered.

Advantages of spike frequency adaptation​

From a biological standpoint, multiple advantages of the SFA mechanism have been observed. First, it lowers the metabolic costs by facilitating sparse coding [21]: when there is no significant information in the presented inputs, as the input is either being repeated or there is a high-intensity constant stimulant, the firing rate is decreased, leading to a reduction in metabolic cost and hence power consumption. Moreover, the separation of high-frequency signals from noisy environments is facilitated by SFA [22]. In addition, SFA can be seen as a simple form of short-term memory on the single-cell level [23].

In other words, SFA improves the efficiency [24] and accuracy of the neural code and hence optimizes information transmission [25]. SFA can be seen as an adaptation of the spike output range to the statistical range of the environment, meaning that it contrasts fluctuations of the input rather than its absolute intensity [26]. Thereby noise is reduced and, as mentioned above, repetitive information is suppressed, which leads to an increase in entropy. Consequently, the detection of a salient stimulus can be enhanced [27]. These biological advantages of SFA can also be exploited for low-power and high-entropy computations in artificial neural networks.

To introduce SFA in spiking neural networks, a neuron model can be used which includes an adaptive threshold property [28]. SNNs with these kinds of neurons learn quickly, even without synaptic plasticity [29]. Moreover, SFA helps in attaining higher computational efficiency in SNNs [17]. For example, to achieve a store-and-recall cycle (working memory) of duration 1200 ms, a single exponential adaptive model requires a decay constant of τa = 1200 ms in ref. 17, while a double exponential adaptive threshold model requires decay constants of τa1 = 30 ms and τa2 = 300 ms [19], the latter being more efficient and sophisticated with four adaptation parameters compared to two parameters in ref. 17.
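To make the adaptive-threshold idea concrete, here is a toy discrete-time sketch of a leaky integrate-and-fire neuron with a single-exponential adaptive threshold (parameter names and values are mine for illustration, not taken from the paper). With a constant input, the interval between spikes grows over time, which is exactly the SFA behaviour described above.

```python
import math

# Toy discrete-time leaky integrate-and-fire (LIF) neuron with an adaptive
# threshold, i.e. a simple spike frequency adaptation (SFA) mechanism.
# All parameter names and values are illustrative assumptions only.
def lif_sfa(input_current, dt=1.0, tau_m=20.0, tau_a=200.0,
            v_th0=1.0, beta=0.5):
    v, a = 0.0, 0.0                   # membrane potential, adaptation variable
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau_m + i_in)      # leaky integration of the input
        a *= math.exp(-dt / tau_a)         # adaptation decays exponentially
        if v >= v_th0 + beta * a:          # effective threshold rises with a
            spike_times.append(t)
            v = 0.0                        # reset the membrane potential
            a += 1.0                       # each spike pushes the threshold up
    return spike_times

# With a constant input, the inter-spike interval grows over time (SFA).
print(lif_sfa([0.12] * 500))
```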


However, it is not clear that attempting to mimic biological neurons too closely is beneficial in an electronic context. This is where Rain came unstuck.

Does it make the process faster/more power efficient/more accurate/improve ML?

The claim that using the rate change is more efficient needs to take into account the cost of monitoring the rate.

N-of-M coding is highly efficient in weeding out the also-rans. On that front, it is notable that a couple of Steve Furber's papers are cited, but Steve came up with N-of-M coding independently of Thorpe, yet there is no mention of this.
 
  • Like
  • Fire
  • Love
Reactions: 30 users
Thanks Diogenese, but I just cannot help thinking that Rain AI is based on Sweet F… All. 🤡🤣😂🤣🤡

My opinion only DYOR
Fact Finder
 
  • Haha
  • Wow
  • Like
Reactions: 14 users
Hi All

This link takes you to a new website run by three well-known neuromorphic researchers, and it does cover BrainChip.

Interestingly, they have a section devoted to failed or now-unsupported neuromorphic technology attempts, and prominent among them is Loihi 1:


Quite a lot of interesting information and links.

My opinion only DYOR
Fact Finder
 
  • Like
  • Fire
  • Love
Reactions: 36 users

BEISHA

Top 20
Hi All

Thought I would provide an updated chart...

[chart attached]


I think the last time I posted, sub-wave 4 was close to complete and my expectation was that sub-wave 5 up would top out around 35c. Unfortunately that didn't happen: a double top occurred at 24.5c, then the SP crashed back to 14.8c support, a clever fake-out essentially. Since then there has been good accumulation at 14.8c, and now the SP has risen nicely with above-average volume, which is a good sign, no doubt coinciding with the encouraging 4C announcement. ;)

So we have a situation now where the SP has reached 24.5c resistance once again, the previous candle rejected by the wick, with RSI 14 now at 76 (overbought). It's debatable whether it can push through that 24.5/27.5 resistance zone short term; a triple top pattern is usually bearish, so a minor pullback could be on the cards. Just have to wait and see.

Overall, BRN's TA and FA are on the improve. The 14/18c zone looks like a strong support base going forward, and a break of the strong 24.5/27.5 resistance zone would confirm a bullish upswing.
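For anyone wondering how that RSI 14 figure is arrived at, here is a rough sketch of the calculation using Wilder-style exponential smoothing (the closing prices below are made-up sample numbers, not actual BRN data). Readings above about 70 are conventionally read as overbought.

```python
import pandas as pd

# Sketch of a 14-period RSI calculation (Wilder-style smoothing via pandas).
def rsi(closes, period=14):
    delta = pd.Series(closes).diff()
    gain = delta.clip(lower=0)                  # positive moves only
    loss = -delta.clip(upper=0)                 # negative moves, as positives
    avg_gain = gain.ewm(alpha=1 / period, min_periods=period).mean()
    avg_loss = loss.ewm(alpha=1 / period, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# Illustrative closing prices (AUD), not real BRN data.
closes = [0.148, 0.150, 0.155, 0.160, 0.158, 0.165, 0.172, 0.180,
          0.178, 0.190, 0.200, 0.210, 0.215, 0.225, 0.230, 0.245]
print(rsi(closes).iloc[-1])    # > ~70 would be read as overbought
```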

imo
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 37 users

Diogenese

Top 20
Not new, but worth a revisit. This is what the company says about Automotive:

https://brainchip.com/markets/

BrainChip Automotive enables the next generation of smarter cars.

In-cabin experience is improved with on-chip learning for keyword spotting, “hey car,” face recognition, driver authentication, gesture recognition, and the unique ability to combine sensory modalities, creating a roadmap for the in-cabin experience of the future. Advanced Driver Assistance Systems (ADAS) focus on the automobile industry, as embedded sensors provide surrounding data that radically improve safety and pave the way for fully autonomous vehicles.

However, the amount of sensor data processed “in-car” can require significant compute that can be power-hungry, which is a drain on an Electric Vehicle’s range. BrainChip’s solution has hardware at the sensor to analyze the data in real-time and forward “inference data” to the car’s central processor. This architecture substantially improves real-time performance and radically reduces system-level power consumption.

BrainChip’s use cases for automotive include:

  • In-Cabin Experience
  • Real-time Sensing
  • ECU Control
  • Intuitive HMI


1. Keyword spotting (eg, EQXX)
2. Face recognition
3. Driver authentication
4. Gesture recognition
5. Combine sensory modalities
6. In-sensor processing

A single Akida P could handle items 1 to 5, but the in-sensor processing requires an Akida at each sensor:
lidar(s),
camera(s),
ultrasound,
radar,
...

The auto makers cannot afford to use slow, inefficient, power-hungry CPU/GPU software-based solutions for the heavy lifting of CNN sensor processing.
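As a rough sketch of that "inference at the sensor" pattern (the event format, node names and confidence cut-off are all assumptions on my part): each sensor node classifies locally and forwards only a small event record to the central processor, rather than streaming raw sensor data.

```python
from dataclasses import dataclass

# Illustrative only: a compact record forwarded from a sensor node to the
# central processor, instead of the raw frame.
@dataclass
class InferenceEvent:
    sensor_id: str
    label: str
    confidence: float

def sensor_node(sensor_id, raw_frame, classify):
    """Run inference locally; forward an event only if confidence is high."""
    label, confidence = classify(raw_frame)
    if confidence >= 0.8:              # suppress low-confidence detections
        return InferenceEvent(sensor_id, label, confidence)
    return None                        # nothing worth sending upstream

def central_processor(events):
    """Fuse the compact events from all sensors (placeholder logic)."""
    return [e for e in events if e is not None]

# Dummy classifier standing in for an on-sensor neural network.
fake_nn = lambda frame: ("pedestrian", 0.93)
events = [sensor_node(s, object(), fake_nn) for s in ("radar", "camera", "lidar")]
print(central_processor(events))
```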

ARM is adopting a new licensing principle, attempting to capture some of the value its IP contributes to the final product. No doubt BRN management is aware of this business model. Indeed, it may well be that the partnerships we already have capture more of this value than a conventional licence. Akida will contribute greatly to automotive value: increased driving range (a huge selling point) through reduced processing power consumption, reduced latency in processing voice and sensor signals, ... not to mention improved safety and driver satisfaction.

We can speculate about any fanciful figure on a per-car basis, but there will be millions of cars fitted with several Akida-based NNs. Valeo has forward contracts with Stellantis and Toyota, and we have good reason to hope we will be incorporated in Scala 3.

And that is just one sector ...

There's also In-home, Industrial, Health, Defence, ...
 
  • Like
  • Love
  • Fire
Reactions: 62 users

Learning

Learning to the Top 🕵‍♂️
[LinkedIn screenshots attached]


"Abstract
The human brain’s unparalleled efficiency in executing complex cognitive tasks stems from neurons communicating via short, intermittent bursts or spikes. This has inspired Spiking Neural Networks (SNNs), now incorporating neuron models with spike frequency adaptation (SFA). SFA adjusts these spikes’ frequency based on recent neuronal activity, much like an athlete’s varying sprint speed. SNNs with SFA demonstrate improved computational performance and energy efficiency. This review examines various adaptive neuron models in computational neuroscience, highlighting their relevance in artificial intelligence and hardware integration. It also discusses the challenges and potential of these models in driving the development of energy-efficient neuromorphic systems "


Learning 🪴
 
  • Like
  • Love
  • Fire
Reactions: 41 users

Tothemoon24

Top 20


Wearable Technology with Environmental Sensors​

Along with the emergency response equipment first responders bring to a rescue situation, there are emerging technologies that can equip them for safety in a number of ways. Wearable tech can inform supervisors if team members are experiencing any spikes in heart rate or blood pressure, as well as other biometric data, while environmental sensors can determine if any toxins or dangerous chemicals are present in the surrounding environment. Measuring blood oxygen levels via pulse oximetry sensors can tell firefighters when they’ve been overexposed to smoke-filled air, and body positioning sensors like the kind used in some step counters and other fitness trackers can sound the alarm when a first responder is lying prone or in any awkward position that might indicate potential distress. Something as simple as monitoring body temperature can let firefighters know when to pull back from the front lines and rehydrate.

Environmental sensors capable of measuring the presence of airborne pollutants or particulate matter are commonly implemented in industrial manufacturing and processing facilities for employee safety. First responders can utilize similar technology in a more mobile application to provide them with important safety information about environments they’re encountering with limited prior knowledge.

Concentrations of potentially poisonous and invisible gases in the air, like carbon monoxide or dioxide or volatile organic compounds, can be detected through chromatography and light refraction. Particulate matter created by combustion, like the kind made by forest fires, can also be detected and measured through light reflection. Larger pieces of particulate reflect more light than smaller ones and pose a greater health risk, so measuring the size of particulate fragments as precisely as possible is essential. Environmental data collected from scenes of disasters has value for medical personnel as well. Having prior knowledge of the types of airborne toxins or pollutants victims and evacuees have been exposed to before they’ve even been examined can help develop treatment plans more quickly.

Real-time Data Collection​

Mobile sensors collecting real-time data on first responders’ persons feed the information into an intelligent processing layer and then display data on a “dashboard” of sorts, presenting a digital readout of the various vital signs and environmental factors being monitored. The dashboard can be monitored remotely by first responders on the scene or supervisors offsite to ensure that any responders in distress can be helped as quickly as possible. Real-time data on the surrounding air quality can tell firefighters precisely when they have to employ oxygen tanks in the field in order to breathe safely or when toxic fumes from a chemical spill have become too dangerous to be exposed to without a special breathing apparatus. Vital sign monitoring lets supervisors know when individual firefighters on the front lines of a blaze need a break or medical intervention, similar to the technology being implemented in sports to monitor athletes’ body temperature and blood oxygen levels.

Data collected in real time can also be saved and fed into algorithms that recognize patterns and make predictions to help optimize future emergency response plans. Knowing that personnel can only safely fight fires burning at certain temperatures from specific distances helps spare future hospitalizations, or worse, heatstroke or smoke inhalation. Furthermore, knowing how first responders’ bodies have reacted to the presence of certain gases or volatile organic compounds in the environment can help design emergency treatment options if first responders or victims are exposed in the future and require immediate medical attention in the field. In large-scale personnel deployments, like forest fires or natural disaster relief, historical data on employee health and wellness can help supervisors determine the optimal length and frequency of shifts to maximize overall efficiency and help responders get the appropriate amount of sleep and nutrition.
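As a rough illustration of the sort of threshold logic such a dashboard layer might apply (the metric names and limits below are assumptions for illustration, not taken from any particular product):

```python
# Illustrative vital-sign and air-quality limits; values are assumptions only.
THRESHOLDS = {
    "heart_rate_bpm": (50, 180),    # (low, high)
    "spo2_percent":   (90, 100),
    "body_temp_c":    (35.0, 39.0),
    "co_ppm":         (0, 35),
    "pm25_ug_m3":     (0, 150),
}

def check_readings(responder_id, readings):
    """Return an alert string for any reading outside its allowed band."""
    alerts = []
    for metric, value in readings.items():
        low, high = THRESHOLDS[metric]
        if not (low <= value <= high):
            alerts.append(f"{responder_id}: {metric}={value} outside {low}-{high}")
    return alerts

print(check_readings("unit-07", {"heart_rate_bpm": 192, "spo2_percent": 88,
                                 "body_temp_c": 38.2, "co_ppm": 12,
                                 "pm25_ug_m3": 60}))
```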

Conclusion​

Working in potentially hazardous environments is something asked of first responders every day, so monitoring their vital signs as well as key environmental factors is critical to ensuring safety. Using the data collected from first responders on the front lines to optimize future emergency response plans may also save lives and ensure first responders live longer, healthier lives post-retirement. The short- and long-term benefits of wearable technology and environmental sensors are so self-evident that you may see firefighters, EMTs, and even police officers wearing bio- and environmental-metric sensors on a daily basis in the near future.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Sirod69

bavarian girl ;-)

SIA Applauds Launch of Over $5 Billion in CHIPS R&D Investments, Workforce Initiatives​

Friday, Feb 09, 2024, 1:00pm
by Semiconductor Industry Association


Administration announces launch of National Semiconductor Technology Center as well as funding for chip workforce, other programs

WASHINGTON—Feb. 9, 2024—The Semiconductor Industry Association (SIA) today released the following statement from SIA President and CEO John Neuffer commending the administration’s launch of over $5 billion in semiconductor R&D investments through the National Semiconductor Technology Center (NSTC), as well as funding for vital semiconductor workforce initiatives and other programs. The NSTC is a critically important entity established by the CHIPS and Science Act of 2022 to promote U.S. semiconductor R&D. SIA represents 99% of the U.S. semiconductor industry by revenue and nearly two-thirds of non-U.S. chip firms.
[READ THE WHITE HOUSE FACT SHEET]
“Today’s announcement ushers in the next phase of implementing the landmark semiconductor R&D and workforce initiatives in the CHIPS and Science Act and fulfilling its tremendous promise to reinforce America’s economy, national security, and technological leadership. We applaud leaders in Washington for advancing funding for the NSTC and other vital semiconductor programs. I was honored to attend today’s announcement at the White House, and we look forward to continuing to work with the administration to ensure effective and expeditious implementation of these initiatives, which will strengthen U.S. semiconductor innovation, production, and the domestic chip workforce for many years to come.”
Semiconductor R&D fuels America’s economic growth, national security, and technological competitiveness. The NSTC was established to invigorate semiconductor innovation in the U.S. and drive workforce development opportunities to meet the needs of our rapidly growing industry.
SIA and the Boston Consulting Group (BCG) in October 2022 released a report identifying five key areas of the semiconductor R&D ecosystem that should be strengthened by the CHIPS Act’s R&D funding. The report, titled “American Semiconductor Research: Leadership Through Innovation,” highlights the importance of government-industry collaboration on the NSTC and the National Advanced Packaging Manufacturing Program (NAPMP). The study also calls for CHIPS funding to be used to bridge key gaps in the current semiconductor R&D ecosystem to help pave the way for sustained U.S. chip innovation leadership.
The CHIPS Act’s manufacturing incentives have already sparked substantial investments in the U.S. In fact, companies in the semiconductor ecosystem have announced dozens of new projects across America—totaling more than $220 billion in private investments—since the CHIPS Act was introduced. These announced projects will create more than 40,000 jobs in the semiconductor ecosystem and support hundreds of thousands of additional U.S. jobs throughout this economy.

 
  • Like
  • Fire
  • Love
Reactions: 25 users

hotty4040

Regular

Could this be a somewhat spooky find of facts, Fact Finder, or am I barking up the wrong tree, possibly?

Well done, I think, but then again!!! Curious indeed, nonetheless.

Akida Ballista


hotty...
 
  • Love
Reactions: 1 users

charles2

Regular
Too many shares issued. Almost 2 billion.

This seems to worry some...but not me....with caveats to follow

If a 50:1 reverse split were undertaken we would have <40 million shares at $7.50 USD, or $11.25 AUD.
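For reference, the arithmetic behind that 50:1 example (the share count, price and AUD/USD rate are round approximations, not exact figures):

```python
# Back-of-the-envelope reverse-split arithmetic; all inputs are approximations.
shares_on_issue = 2_000_000_000      # ~2 billion shares
price_usd       = 0.15               # ~15 US cents
split_ratio     = 50
aud_per_usd     = 1.5                # assumed AUD/USD conversion

post_split_shares = shares_on_issue / split_ratio
post_split_usd    = price_usd * split_ratio
post_split_aud    = post_split_usd * aud_per_usd
print(post_split_shares, post_split_usd, post_split_aud)
# -> 40000000.0 7.5 11.25
```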

These numbers are commonplace on NASDAQ....likely our ultimate home.

The issue for me is that stocks priced at <$1 USD are viewed with suspicion or derision in the US... and trust me, that is a massive understatement. Most prospective buyers will dismiss them without even taking a look.

Easy to buy/sell a $7.50 equity, almost impossible to buy/sell a $0.15 equity.

My suggestion: BrainChip, the company, should wake up to the real world. Stock markets are conservative, appearance matters, and they don't reward science fiction in fantasyland.
 
  • Like
  • Fire
  • Sad
Reactions: 11 users

cosors

👀
Volumes are still low but are increasing, as is the percentage.
American investors may have a problem trying to buy large quantities of shares in the USA. Possibly, as more Americans become aware of BrainChip's Akida, they will have to find new places to buy BrainChip stock. Could be interesting to see if they turn up to buy on the ASX next week... just a thought bubble.
Volumes are 'ok' here
[screenshot attached]
 
  • Like
  • Fire
  • Love
Reactions: 13 users