BRN Discussion Ongoing

D

Deleted member 118

Guest
looking like Battle of the Bots this arvo ......
A few more goes at the $1.20 should see us heading to the next resistance at $1.25
 
  • Like
  • Fire
Reactions: 4 users

buena suerte :-)

BOB Bank of Brainchip
  • Like
Reactions: 3 users

gex

Regular
A few more goes at the $1.20 should see us heading to the next resistance at $1.25
I have a suspicion the large 1.250 order will disappear.
 
  • Like
  • Fire
Reactions: 5 users

buena suerte :-)

BOB Bank of Brainchip
Yep Esq has been keeping an eye on that one!! nice work :)

Morning Chippers,

Well, well, well,

That mysterious sell order has appeared again.

700,515 shares @ $1.25

LET'S WATCH IT BOUNCE AROUND FOR THE DAY, CAPPING THE SHARE PRICE.

Esq.
 
  • Like
  • Fire
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess



Researchers Use IoT for Cancer Diagnosis

Artificial neurons and an AI system are being used to assess benign or malignant tumors
  • Written by Scarlett Evans
  • 14th July 2022

Researchers from the Korea Institute of Science and Technology (KIST) have developed a novel cancer diagnosis technology: a simple but accurate method that uses tactile neuron devices combined with AI technology.
Typically, a non-invasive method of diagnosis is ultrasound elastography; however, interpretation of the results can vary. The new method identifies and measures the stiffness and distribution of a tumor, allowing for accurate cancer diagnosis.

The KIST team developed this alternative method to improve accuracy and shorten the time to a prognosis. For their experiments, the team combined tactile neuron devices with artificial neural network learning methods, applying pressure to a potentially cancerous site, with the pressing force generating electrical spikes that increase or decrease depending on the stiffness of the object encountered.

The method falls under the category of “neuromorphic technology,” a data processing technology that has become increasingly popular given its compatibility with AI, IoT and autonomous technologies. It seeks to emulate the human brain’s method of processing vast amounts of information using minimal energy, with neurons receiving external stimuli through sensory receptors which are then converted into electrical spike signals.
Deploying this for disease diagnosis, the team used elastography images of malignant and benign breast tumors in combination with a spiking neural network learning method. The pixels from the color-coded ultrasound elastography image correlated to the stiffness of the object encountered and were converted to a frequency value to train the AI.
Following this process, the team reported a breast tumor diagnosis accuracy of 95.8%, saying the developed artificial tactile neuron technology is capable of “detecting and learning mechanical properties with a simple structure and method.”
The team also anticipated the device could be used in low-power and high-accuracy disease diagnosis and applications such as robotic surgery, where a surgical site needs to be quickly determined with minimal to no human interaction.
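The article gives no implementation details, but the encoding it describes (stiffness-coded pixels converted to spike frequencies) is easy to sketch. A minimal Python illustration, where the toy image, f_max, and the time window are my own assumptions rather than KIST's:

```python
# Hypothetical sketch of the pixel-to-spike-frequency encoding described
# above: stiffer tissue maps to a higher firing rate, producing the spike
# trains a spiking neural network would then learn from.
import numpy as np

rng = np.random.default_rng(0)

def stiffness_to_spike_trains(stiffness_map, t_window=0.1, dt=1e-3, f_max=200.0):
    """Rate-code each pixel: stiffness in [0, 1] -> spike frequency in Hz."""
    rates = stiffness_map * f_max
    n_steps = int(t_window / dt)
    # Poisson spiking: a spike occurs when a uniform draw falls below rate*dt
    return rng.random((n_steps, *stiffness_map.shape)) < rates * dt

# Toy 4x4 "elastography" patch; 1.0 = stiffest (suspicious) region
patch = np.array([[0.1, 0.2, 0.1, 0.0],
                  [0.2, 0.9, 0.8, 0.1],
                  [0.1, 0.8, 0.9, 0.2],
                  [0.0, 0.1, 0.2, 0.1]])

trains = stiffness_to_spike_trains(patch)
print("spike counts per pixel:\n", trains.sum(axis=0))  # stiff pixels fire more
```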

 
  • Like
  • Love
  • Fire
Reactions: 40 users

equanimous

Norse clairvoyant shapeshifter goddess


[image attachment]
 
  • Like
  • Fire
Reactions: 14 users
D

Deleted member 118

Guest
Spoke too soon lol

 
  • Haha
  • Like
Reactions: 12 users

Newk R

Regular
  • Like
  • Haha
Reactions: 5 users

equanimous

Norse clairvoyant shapeshifter goddess


(Quoting the "Researchers Use IoT for Cancer Diagnosis" article posted in full above.)


The article states that the Korea Advanced Institute of Science and Technology is using a spiking neural network.

Having a quick look, they hold patents for the following:

LOW-POWER, HIGH-PERFORMANCE ARTIFICIAL NEURAL NETWORK TRAINING ACCELERATOR AND ACCELERATION METHOD​

APPARATUS AND METHOD FOR TRAINING LOW BIT-PRECISION DEEP NEURAL NETWORK​

 
  • Like
  • Fire
Reactions: 14 users

mcm

Regular


(Quoting the "Researchers Use IoT for Cancer Diagnosis" article posted in full above.)

Using Akida?
 

equanimous

Norse clairvoyant shapeshifter goddess


I found an interesting article/interview published a few days ago with Jean-Rene Leuepeys, who is the deputy director and CTO at CEA-Leti. He says they are preparing the next generation of their own neuromorphic chip, which will have "more than 100,000 neurons on a chip and more than 75 million synapses". 🤏I'm thinking Jean-Rene must have missed the memo about AKIDA and its 1.2 million neurons and 10 billion synapses.🤭





Brainchip and WBT?


The present study demonstrates that combining visual sensors such as DVS cameras with the suggested pMUT–based hearing sensor should be investigated to create future consumer robots.

Neuromorphic Device with Low Power Consumption​

By Maurizio Di Paolo Emilio, 08.01.2022



Compact, low–latency, and low–power computer systems are required for real–world sensory–processing applications. Hybrid memristive CMOS neuromorphic architectures, with their in–memory event–driven computing capabilities, present an appropriate hardware substrate for such tasks.
To demonstrate the full potential of such systems and drawing inspiration from the barn owl’s neuroanatomy, CEA–Leti has developed an event–driven, object–localization system that couples state–of–the–art piezoelectric, ultrasound transducer sensors with a neuromorphic computational map based on resistive random–access memory (RRAM).
CEA–Leti built and tested this object tracking system with the help of researchers from CEA–List, the University of Zurich, the University of Tours, and the University of Udine.

The researchers collected measurement findings from a system built out of RRAM–based coincidence detectors, delay–line circuits, and a fully customized ultrasonic sensor. This experimental data has been used to calibrate the system–level models. These simulations have then been used to determine the object localization model’s angular resolution and energy efficiency. In a paper published recently in Nature Communications, the research team describes the development of an auditory–processing system that increases energy efficiency by up to five orders of magnitude compared with conventional localization systems based on microcontrollers.
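For intuition, here is a toy model of the barn-owl principle the paper builds on: delay lines plus coincidence detectors find the inter-sensor time difference, which maps to an angle. This is only an illustration of the concept, not CEA-Leti's RRAM circuit; the sample rate, delay, and event density are invented.

```python
# Toy delay-line / coincidence-detector localization: two sensors receive the
# same sparse event train, one delayed. One "coincidence detector" per
# candidate delay counts how often the channels line up after compensation;
# the winning delay encodes the object's angle.
import numpy as np

fs = 100_000                                     # 100 kHz sample rate
true_delay = 12                                  # right sensor lags by 12 samples

rng = np.random.default_rng(1)
echo = (rng.random(2000) < 0.01).astype(int)     # sparse event (spike) train
left, right = echo, np.roll(echo, true_delay)

candidate_delays = np.arange(-20, 21)
coincidences = [np.sum(np.roll(left, d) & right) for d in candidate_delays]
best = candidate_delays[int(np.argmax(coincidences))]

print(f"estimated delay: {best} samples ({best / fs * 1e6:.0f} us), "
      f"true delay: {true_delay} samples")
```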
“Our proposed solution represents a first step in demonstrating the concept of a biologically inspired system to improve efficiency in computation,” said Elisa Vianello, senior scientist and edge AI program coordinator and senior author of the paper. “It paves the way toward more complex systems that perform even more sophisticated tasks to solve real–world problems by combining information extracted from different sensors. We envision that such an approach to conceive a bio–inspired system will be key to build the next generation of edge AI devices, in which information is processed locally and with minimal resources. In particular, we believe that small animals and insects are a great source of inspiration for an efficient combination of sensory information processing and computation. Thanks to the latest advancements in technology, we can couple innovative sensors with advanced RRAM–based computation to build ultra–low–power systems.”

BIO–INSPIRED ANALOG RRAM–BASED CIRCUIT

Two essential ideas underpin biological signal processing: event–driven sensing and in–memory analog processing.
“The goal is, as always, to get the best power efficiency for the level of performance needed by a specific application,” Vianello said. “Further improvements in energy efficiency are certainly possible with our system. For example, one could optimize our design and implement it in a more advanced technological node or with a specific low–power technology such as FD–SOI for the same level of performance. Concerning accuracy, our limiting factor is SNR. We have a clear performance/consumption tradeoff with the amplitude of the emitted pulse or the number of TX membranes, but technological advancement resulting in increased piezoelectric micromachined ultrasonic transducer [pMUT] sensitivity would also help improve the SNR for no extra power consumption. The use of pulses with good autocorrelation properties would be an interesting development in that sense if the matched filtering could be done with a small overhead.”
The team leveraged CEA–Leti’s successes in building pMUTs and its developments in RRAM–based spiking neural networks. The initial difficulty for the researchers was to create a pre–processing pipeline that pulls critical information from pMUTs, which encode information using brief events or spikes. This temporal encoding of the signal saves energy over standard continuous analog or digital data because only relevant data is handled.
pMUTs are becoming one of the most in–demand ultrasonic systems due to their ability to create and detect ultrasound signals at the microscale in a highly efficient and well–controlled manner. The high–yield MEMS production technique, combined with thin–film piezoelectric materials (AlN, AlScN, PZT, etc.), enhances pMUT systems. Furthermore, the ability to install thin–film piezoelectric materials in a CMOS–compatible manner opens the door to innovative, extremely small systems that use the same substrate for the sensor and the conditioning electronics.
In this scenario, pMUT transducers are pushing the applicability of ultrasound as a physical quantity in a variety of systems where size, power, sensitivity, and cost are important. These include intravascular medical imaging, biometric identification, gesture recognition, rangefinders, proximity sensors, acoustic wireless communication systems, acoustophoresis, photoacoustic systems, and so on.
[Photo: Elisa Vianello]
According to Vianello, pMUT devices are mature for industrialization. “One of the main restrictions to the development of pMUT devices is the competition of bulk PZT transducer and cMUT MEMs transducers. Bulk PZT transducers are easy to prototype and relatively cheap for low–volume production. cMUT MEMS transducers are more appropriate for biomedical applications due to their higher bandwidth and higher output pressure. One of the physical limitations of pMUT is the relatively low Q factor that results in transient regime that is detrimental to the spatial resolution and may impede short–distance measurements. Industrially matured piezoelectric materials for pMUT are PZT and AlN. PZT is more appropriate for actuating and AlN for sensing. For this application, we need both actuation and sensing, and our approach would have been valid with either of these materials. Yet we choose AlN because the four–electrode–pair scheme, which is not possible with PZT material, partially balances the relatively low output pressure per volt. Moreover, output pressure may be easily increased by the use of higher actuation voltage, at the price of higher consumption.”
Another difficulty was developing and building an analog circuit based on biologically inspired RRAM to analyze extracted events and estimate an object’s location. RRAM is a non–volatile technology that suits the asynchronous nature of events in the team’s proposed system, resulting in negligible power usage while the system is idle.
RRAM stores information in its non–volatile conductive state. The primary operational assumption of this technology is that altering the atomic state via precise programming operations controls the conductance of the device.
The researchers used an oxide–based RRAM with a 5–nm hafnium–dioxide layer sandwiched between top and bottom electrodes made of titanium and titanium nitride. By applying current/voltage waveforms that construct or break a conductive filament made up of oxygen vacancies between the electrodes, the conductivity of an RRAM device may be changed. They co–integrated these devices in a standard 130–nm CMOS process to build a reconfigurable neuromorphic circuit that included coincidence detectors and delay–line circuits (Figure 1). The non–volatile and analog nature of these devices perfectly match the event–driven nature of the neuromorphic circuits, resulting in low power consumption.
The circuit has an instant on/off feature: It begins operating immediately after being turned on, allowing the power supply to be entirely shut off as soon as the circuit is idle. Figure 1 displays the basic building block of the proposed circuit. It is composed of N parallel one–resistor–one–transistor (1T1R) structures that contain synaptic weights and is used to extract a weighted current that is then injected into a common differential pair integrator (DPI) synapse and subsequently into a leaky integrate–and–fire (LIF) neuron.
The input spikes are applied to the gates of the 1T1R structures as trains of voltage pulses with pulse lengths in the range of hundreds of nanoseconds. RRAM may be set into a high–conductance state (HCS) and reset into a low–conductance state (LCS) by providing an external positive voltage reference on Vtop and grounding Vbottom. The mean value of the HCS may be controlled by limiting the set programming (compliance) current (ICC) through the gate–source voltage of the series transistor. In the circuit, RRAMs perform two functions: They route and weight input pulses.
Figure 1: The role of RRAM devices in neuromorphic circuits: (a) scanning electron microscopy (SEM) image of an HfO2 1T1R RRAM device, in blue, integrated on 130–nm CMOS technology, with its selector transistor (width of 650 nm) in green; (b) basic building block of the proposed neuromorphic circuit; (c) cumulative density function of the conductance of a population of 16–Kb RRAM devices, as a function of the compliance current ICC, which effectively controls the conductance level; (d) measurement of the circuit in (a); (e) measurement of the circuit in (b). (Source: “Neuromorphic object localization using resistive memories and ultrasonic transducers,” in Nature Communications)
“The op amp in Figure 1, along with transistors M1, M2, and M3, form the front–end circuit, which reads the current from the RRAM array and injects the current into the DPI synapse,” Vianello said. “The RRAM bottom electrode has a constant DC voltage Vbot applied to it, and the common top electrode is pinned to the voltage Vx by a rail–to–rail operational–amplifier circuit. The op–amp output is connected in negative feedback to its non–inverting input and has the constant DC bias voltage Vtop applied to its inverting input. As a result, the output of the op amp will modulate the gate voltage of transistor M1 such that the current it sources onto the node Vx will maintain its voltage as close as possible to the DC bias Vtop. Whenever an input pulse Vin arrives, a current equal to (Vx − Vbot)·Gn will flow out of the bottom electrode. The negative feedback of the op amp will then act to ensure that Vx = Vtop by sourcing an equal current from transistor M1. By connecting the op–amp output to the gate of transistor M2, a current equal to it will therefore also be buffered into the branch composed of transistors M2 and M3 in series. This current is injected into a CMOS differential–pair integrator synapse circuit model, which generates an exponentially decaying waveform from the onset of the pulse with an amplitude proportional to the injected current.”
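Vianello's description boils down to three behavioral pieces: weighted input pulses (the 1T1R conductances), an exponentially decaying synaptic current (the DPI synapse), and a leaky integrate-and-fire neuron. A minimal behavioral sketch, with illustrative parameters of my own choosing rather than values from the paper:

```python
# Behavioral sketch (not the RRAM circuit itself): weighted pulses feed an
# exponentially decaying synaptic current, which drives a leaky
# integrate-and-fire neuron. All constants here are illustrative.
import numpy as np

dt = 1e-6                                   # 1 us time step
tau_syn, tau_mem = 50e-6, 200e-6            # synaptic / membrane time constants
gain, v_thresh = 10.0, 1.0

weights = np.array([0.8, 0.3, 0.5])         # stand-ins for 1T1R conductances
n_steps = 2000
spikes_in = np.zeros((n_steps, 3))
spikes_in[100, 0] = spikes_in[110, 1] = spikes_in[120, 2] = 1.0  # near-coincident
spikes_in[1500, 1] = 1.0                                         # isolated pulse

i_syn, v_mem, out_spikes = 0.0, 0.0, []
for t in range(n_steps):
    i_syn = i_syn * (1 - dt / tau_syn) + weights @ spikes_in[t]  # DPI-like decay
    v_mem = v_mem * (1 - dt / tau_mem) + gain * i_syn * dt / tau_mem
    if v_mem > v_thresh:                    # integrate-and-fire with reset
        out_spikes.append(t)
        v_mem = 0.0

print("output spike times (steps):", out_spikes)
```

With these numbers, only the three near-coincident pulses push the membrane over threshold; the isolated pulse decays away, which is the coincidence-detection behavior the localization map relies on.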
While traditional processing techniques sample the detected signal continuously and perform calculations to extract useful information, the proposed neuromorphic solution calculates asynchronously when useful information arrives, increasing the system’s energy efficiency by up to five orders of magnitude.
CEA–Leti has made significant developments in pMUT sensors and spiking neural networks based on RRAM technology during the last decade. “We would like to thank the H2020 MeM–Scales project [871371] that partially funded the work,” Vianello said.
 
  • Like
  • Fire
Reactions: 14 users

Sirod69

bavarian girl ;-)
It wouldn't surprise me if we were involved in this in some way, shape or form. 🧐


Ericsson, Thales, a global leader in Aerospace, Defence, Security & Digital Identity, and wireless technology innovator Qualcomm Technologies, Inc. are planning to take 5G out of this world and across a network of Earth-orbiting satellites.










Since Qualcomm is a Brainchip customer, I don't find it hard to assume Brainchip is involved 🥰😘
 
  • Like
Reactions: 1 user

uiux

Regular
Since Qualcomm is a Brainchip customer, I don't find it hard to assume Brainchip is involved 🥰😘

Qualcomm isn't a brainchip customer though?
 
  • Like
Reactions: 10 users
Fresh tweet




Sensor Fusion with Deep Learning​

Suad Jusuf, Senior Manager



Sensors are increasingly being used in our everyday lives to help collect meaningful data across a wide range of applications, such as building HVAC systems, industrial automation, healthcare, access control, and security systems, just to name a few. A sensor fusion network retrieves data from multiple sensors to provide a more holistic view of the environment around a smart endpoint device. In other words, sensor fusion provides techniques for combining data from multiple physical sensors to generate an accurate ground truth, even though each individual sensor might be unreliable on its own. This reduces the uncertainty involved in overall task performance.
To increase intelligence and reliability, the application of deep learning for sensor fusion is becoming progressively important across a wide range of industrial and consumer segments.
From a data science perspective, this paradigm shift allows extracting relevant knowledge from monitored assets through the adoption of intelligent monitoring and sensor fusion strategies, as well as by the application of machine learning and optimization methods. One of the main goals of data science in this context is to effectively predict abnormal behaviour in industrial machinery, tools, and processes to anticipate critical events and damage, eventually preventing important economic losses and safety issues.
Renesas Electronics provides intelligent endpoint sensing devices as well as a wide range of analog-rich microcontrollers that can become the heart of smart sensors, enabling more accurate sensor fusion solutions across different applications. In this context, combining sensor data in a typical sensor fusion network may be achieved as follows:
  • Redundant sensors: All sensors provide the same information about the world.
  • Complementary sensors: The sensors provide independent (disjointed) types of information about the world.
  • Coordinated sensors: The sensors collect information about the world sequentially.

The communication in a sensor network is the backbone of the entire solution and could be in any of the schemes mentioned below:
  • Decentralized: No communication exists between the sensor nodes.
  • Centralized: All sensors provide measurements to a central node.
  • Distributed: The nodes interchange information at a given communication rate (e.g., every five scans, i.e., one-fifth communication rate).
The centralized scheme can be regarded as a special case of the distributed scheme where the sensors communicate every scan to each other. A pictorial representation of the fusion process is given in the figure below.
[Figure: A pictorial representation of the fusion process]
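As a concrete example of the redundant-sensor, centralized case above, here is a minimal sketch of inverse-variance fusion; the readings and noise variances are invented:

```python
# Centralized fusion of redundant sensors: every sensor measures the same
# quantity with its own noise, and the central node combines them with
# inverse-variance weighting (the stationary Kalman update).
import numpy as np

readings = np.array([20.3, 19.8, 21.1])       # e.g., temperature in deg C
variances = np.array([0.25, 0.10, 0.90])      # per-sensor noise variance

weights = 1.0 / variances
fused = np.sum(weights * readings) / np.sum(weights)
fused_var = 1.0 / np.sum(weights)

print(f"fused estimate: {fused:.2f} degC, variance {fused_var:.3f}")
```

The fused variance is smaller than any single sensor's, which is exactly the uncertainty reduction described earlier.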

From an Industry 4.0 perspective, feedback from one sensor is typically not enough, particularly for the implementation of control algorithms.

Deep Learning​

Precisely calibrated and synchronized sensors are a precondition for effective sensor fusion. Renesas provides a range of solutions to enable informed decision-making by executing advanced sensor fusion at the endpoint on a centralized processing platform.
Performing late fusion allows for interoperable solutions, while early fusion gives the AI rich data for predictions; leveraging the complementary strengths of the different strategies is the key advantage. The modern approach involves time and space synchronization of all onboard sensors before feeding the synchronized data to the neural network for predictions. This data is then used for AI training or Software-In-the-Loop (SIL) testing of a real-time algorithm that receives just a limited piece of information.
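A minimal sketch of that time-synchronization step; the rates and signals are invented, and np.interp stands in for whatever resampling a production pipeline would use:

```python
# Two sensors sample at different rates, so both streams are interpolated
# onto a common timebase before being stacked into a fused frame.
import numpy as np

t_imu = np.arange(0.0, 1.0, 1 / 100)           # 100 Hz IMU timestamps
t_mic = np.arange(0.0, 1.0, 1 / 400)           # 400 Hz microphone timestamps
imu = np.sin(2 * np.pi * 3 * t_imu)            # stand-in signals
mic = np.sin(2 * np.pi * 40 * t_mic)

t_common = np.arange(0.0, 1.0, 1 / 200)        # fuse on a 200 Hz grid
frame = np.stack([np.interp(t_common, t_imu, imu),
                  np.interp(t_common, t_mic, mic)])
print("synchronized frame shape:", frame.shape)   # (2, 200)
```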
Deep learning uses neural networks to implement advanced machine learning techniques, leveraging high-performance computational platforms such as the Renesas RA MCU and RZ MPU for enhanced training and execution. These deep neural networks consist of many processing layers arranged to learn data representations, with varying levels of abstraction, from sensor fusion. The more layers in the deep neural network, the more abstract the learned representations become.
Deep learning offers a form of representation learning that aims to express complicated data representations by using other simpler representations. Deep learning techniques can understand features using a composite of several layers, each with unique mathematical transforms, to generate abstract representations that better distinguish high-level features in the data for enhanced separation and understanding of true form.
Multi-stream neural networks are useful in generating predictions from multi-modal data, where each data stream is important to the overall joint inference generated by the network. Multi-stream approaches have been shown to be successful for multi-modal data fusion, and deep neural networks have been applied successfully in multiple applications such as neural machine translation and time-series sensor data fusion.
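A minimal two-stream forward pass in plain numpy, with arbitrary layer sizes and random weights, just to show the structure: one branch per modality, concatenated into a joint head:

```python
# Illustrative two-stream fusion network: each modality gets its own feature
# extractor, and the joint head sees the concatenated representations.
import numpy as np

rng = np.random.default_rng(42)
relu = lambda x: np.maximum(x, 0.0)

x_imu = rng.standard_normal(6)      # e.g., a 6-axis IMU sample
x_env = rng.standard_normal(3)      # e.g., temperature / humidity / light

W_imu = rng.standard_normal((8, 6))
W_env = rng.standard_normal((8, 3))
W_head = rng.standard_normal((2, 16))

h_imu = relu(W_imu @ x_imu)          # stream 1 representation
h_env = relu(W_env @ x_env)          # stream 2 representation
logits = W_head @ np.concatenate([h_imu, h_env])   # fused joint prediction

print("class scores:", logits)
```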
The ability to run such networks on constrained hardware is a tremendous breakthrough, allowing deep neural networks to be trained and deployed in MCU-based endpoint applications and helping to accelerate industrial adoption. The Renesas RA MCU platform and associated Flexible SW Package, combined with AI modeling tools, offer the ability to apply many of the neural network layers in a multi-layer structure. Typically, more layers lead to more abstract features learned by the network, and it has been shown that stacking multiple types of layers in a heterogeneous mixture can outperform a homogeneous mixture of layers. Renesas sensing solutions can compensate for deficiencies in information by utilizing feedback from multiple sensors: the deficiencies of individual sensors in calculating certain types of information can be compensated for by combining data from multiple sensors.
The flexible Renesas Advanced (RA) Microcontrollers (MCUs) are industry-leading 32-bit MCUs and are a great choice for building smart sensors. With a wide range of Renesas RA family MCUs, you can choose the best one as per your application needs. The Renesas RA MCU platform, combined with strong support & SW ecosystem, will help accelerate the development of Industry 4.0 applications with sensor fusion and deep learning modules.
As part of Renesas' extensive solution and design support, Renesas provides a reference design for a versatile Artificial Internet of Things (AIoT) sensor board solution. It targets applications in industrial predictive maintenance, smart home/IoT appliances with gesture recognition, wearables (activity tracking), and mobile devices for innovative human-machine interface (HMI) solutions such as FingerSense. As part of this solution, Renesas can provide a complete range of devices, including an IoT-specified RA microcontroller, air quality sensor, light sensor, temperature and humidity sensor, and a 6-axis inertial measurement unit, as well as cellular and Bluetooth communication support.
With the increasing number of sensors in Industry 4.0 systems comes a growing demand for sensor fusion to make sense of the mountains of data that those sensors produce. Suppliers are responding with integrated sensor fusion devices. For example, an intelligent condition-monitoring box is available that is designed for machine condition monitoring based on fusing data from vibration, sound, temperature, and magnetic field sensors. Additional sensor modalities for monitoring acceleration, rotational speed, and shock and vibration can optionally be included.
The system implements sensor fusion through AI algorithms to classify abnormal operating conditions with better granularity, resulting in high-probability decision making. This edge AI architecture can simplify handling the big data produced by sensor fusion, ensuring that only the most relevant data is sent to the edge AI processor or to the cloud for further analysis and possible use in training ML algorithms.
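A toy version of that "send only the relevant data" pattern; the baseline window, threshold, and injected fault are invented:

```python
# Edge-side filtering: compute a z-score against a learned baseline on-device
# and forward samples only when they look anomalous.
import numpy as np

rng = np.random.default_rng(3)
vibration = rng.standard_normal(1000) * 0.5
vibration[700:720] += 4.0                       # injected fault signature

baseline_mu = vibration[:500].mean()            # baseline from healthy data
baseline_sd = vibration[:500].std()
z = np.abs((vibration - baseline_mu) / baseline_sd)

anomalous = np.flatnonzero(z > 3.0)
print(f"forwarding {anomalous.size} of {vibration.size} samples to the cloud")
```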
The use of AI-based Deep Learning has several benefits:
  • The AI algorithm can employ sensor fusion to utilize the data from one sensor to compensate for weaknesses in the data from other sensors.
  • The AI algorithm can classify the relevance of each sensor to specific tasks and minimize or ignore data from sensors determined to be less important.
  • Through continuous training at the edge or in the cloud, AI/ML algorithms can learn to identify changes in system behaviour that were previously unrecognized.
  • The AI algorithm can predict possible sources of failures, enabling preventative maintenance and improving overall productivity.
Sensor fusion combined with AI deep learning produces a powerful tool to maximize the benefits when using a variety of sensor modalities. AI/ML-based enhanced sensor fusion can be employed at several levels in a system, including at the data level, the fusion level, and the decision level. Basic functions in sensor fusion implementations include smoothing and filtering sensor data and predicting sensor and system states.
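As an example of the smoothing and prediction building blocks just mentioned, a minimal exponential moving average with a naive one-step prediction (alpha and the synthetic signal are arbitrary):

```python
# One-line-per-sample exponential moving average, cheap enough for an MCU,
# plus a naive next-state prediction from the smoothed trend.
import numpy as np

rng = np.random.default_rng(7)
raw = np.cumsum(rng.standard_normal(200) * 0.1) + 25.0   # drifting noisy signal

alpha = 0.2                  # smoothing factor: higher = trust new samples more
smooth = np.empty_like(raw)
smooth[0] = raw[0]
for t in range(1, len(raw)):
    smooth[t] = alpha * raw[t] + (1 - alpha) * smooth[t - 1]

trend = smooth[-1] - smooth[-2]
print(f"last raw {raw[-1]:.2f}, smoothed {smooth[-1]:.2f}, "
      f"predicted next {smooth[-1] + trend:.2f}")
```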
At Renesas Electronics, we invite you to take advantage of our high-performance MCUs and A&P portfolio combined with a complete SW platform providing targeted deep learning models and tools to build next generation sensor fusion solutions.
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers,

Our mystery manipulator has dropped their sell order.

700,515 units for $1.195

Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 14 users

gex

Regular
Afternoon Chippers,

Our mystery manipulator has dropped their sell order.

700,515 units for $1.195

Esq.
ASX need to do their fucking job, goes for both buy and sell
 
  • Like
  • Fire
Reactions: 8 users

equanimous

Norse clairvoyant shapeshifter goddess

Must watch​

BrainCog: 9 years ongoing effort to develop a spiking neural network platform for brain inspired AI​


 
  • Like
  • Thinking
  • Fire
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
ASX need to do their fucking job, goes for both buy and sell
Afternoon Gex,

I think you would find this would require them growing a pair & then getting off their a$%#'s.

Esq.
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

Sirod69

bavarian girl ;-)
Qualcomm isn't a brainchip customer though?
I strongly assume it is. I regularly look at Rob Telson's likes on LinkedIn and wonder why he would like something if it had nothing to do with Brainchip; he must have better things to do. I know it's just a guess.
Today I see posts from iRobot, LG and Innoviz that he likes.
I also find the contacts between individuals very interesting.
I know, @uiux, this is all just a guess, but it's conceivable, right?
 
  • Like
  • Love
  • Fire
Reactions: 12 users