BRN Discussion Ongoing

Lex555

Regular
When's the next $1.00 party? 🥳 🤣
I still have good memories from last week's $0.60 party hosted by Bravo. Maybe a $0.50 party soon 😬
 
  • Haha
  • Sad
  • Like
Reactions: 9 users

Diogenese

Top 20
Some more research indicates that STMicroelectronics are attempting to do neuromorphic computing with RRAM.

In an article in February 2022, STM mentioned that neuromorphic chips are not mature enough. They want to move computation to the memory.

In the plenary session of the ISSCC conference, Marco Cassis, president of ST’s Analog, MEMS and Sensors Group, looked at the various AI technologies for sensors. He rules out spiking neural network chips, also called neuromorphic, as not mature, saying current convolutional neural networks can tap into reduced precision and semiconductor scaling to get more performance. However these CNN devices struggle with power consumption and memory bandwidth challenges that get in the way of scalability.

“To overcome this limitation is to partially or completely move the computation to the memory,” he said. “In Memory Computing can bring big benefits, 100x densities and efficiencies compared to current state of the art solutions. Here an especially promising avenue is the use of non volatile resistive memory devices to perform computations in the memory itself.”



From May 2019 to January 2023, a consortium of companies including STM, funded by the European Commission, worked on the TEMPO project.

Technology and hardware for neuromorphic computing​

Project description​

New ways to integrate emerging memories to enable neuromorphic computing systems​

Artificial intelligence (AI) and machine learning are used today for computing all kinds of data, making predictions and solving problems. These are processes based increasingly on deep neuronal network (DNN) models. As the volume of produced data slows down machines and consumes greater amounts of energy, there is a new generation of neural units. The spiking neural networks (SNNs) incorporate biologically-feasible spiking neurons with their temporal dynamics. The EU-funded TEMPO project will leverage emerging memory technology to design new innovative technological solutions that make data integration simpler and easier via new neuronal DNN and SNN computing engines. Reduced core computational operational systems’ neuromorphic algorithms will serve as demonstrators.

Objective​

Massive adoption of computing in all aspects of human activity has led to unprecedented growth in the amount of data generated. Machine learning has been employed to classify and infer patterns from this abundance of raw data, at various levels of abstraction. Among the algorithms used, brain-inspired, or “neuromorphic”, computation provides a wide range of classification and/or prediction tools. Additionally, certain implementations come about with a significant promise of energy efficiency: highly optimized Deep Neural Network (DNN) engines, ranging up to the efficiency promise of exploratory Spiking Neural Networks (SNN). Given the slowdown of silicon-only scaling, it is important to extend the roadmap of neuromorphic implementations by leveraging fitting technology innovations. Along these lines, the current project aims to sweep technology options, covering emerging memories and 3D integration, and attempt to pair them with contemporary (DNN) and exploratory (SNN) neuromorphic computing paradigms. The process- and design-compatibility of each technology option will be assessed with respect to established integration practices. Core computational kernels of such DNN/SNN algorithms (e.g. dot-product/integrate-and-fire engines) will be reduced to practice in representative demonstrators.
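
For anyone wondering what those "dot-product/integrate-and-fire engines" boil down to, here is a toy Python sketch of the two kernels (purely illustrative, nothing to do with the actual TEMPO hardware): a DNN layer is essentially a big dot product, while a spiking neuron accumulates weighted input spikes and fires once a threshold is crossed.

```python
import numpy as np

# Toy versions of the two kernels mentioned above; illustrative only.

def dnn_dot_product(inputs, weights):
    """DNN-style kernel: a plain weighted sum (dot product)."""
    return float(np.dot(inputs, weights))

def integrate_and_fire(spike_trains, weights, threshold=1.0, leak=0.9):
    """SNN-style kernel: a leaky integrate-and-fire neuron.

    spike_trains: array of shape (timesteps, n_inputs) holding 0/1 spikes.
    Returns the timesteps at which the neuron fired.
    """
    potential = 0.0
    fired_at = []
    for t, spikes in enumerate(spike_trains):
        potential = leak * potential + np.dot(spikes, weights)  # integrate
        if potential >= threshold:                              # fire
            fired_at.append(t)
            potential = 0.0                                     # reset
    return fired_at

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.uniform(0.0, 0.2, size=8)
    print(dnn_dot_product(rng.uniform(size=8), weights))
    print(integrate_and_fire(rng.integers(0, 2, size=(20, 8)), weights))
```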

Some other well-known companies involved are Valeo, Philips, Thales, Bosch, Infineon & SynSense.



In June 2022, STM released new inertial sensors containing an intelligent sensor processing unit (ISPU) for on-device processing.

The ISM330IS embeds a new ST category of processing, ISPU (intelligent sensor processing unit) to support real-time applications that rely on sensor data. The ISPU is an ultra-low-power, high-performance programmable core which can execute signal processing and AI algorithms in the edge. The main benefits of the ISPU are C programming and an enhanced ecosystem with libraries and 3rd party tools/IDE.

The ISM330ISN is scheduled to enter production in H2 2022 and will be available from st.com or distributors for $3.48 for orders of 1000 pieces. NanoEdge AI Studio, enabling the creation of libraries designed for specific ISPU part numbers, is available at no charge on ST.com.

Due to the release date, the ISPU is unlikely to have Akida IP. As neuromorphic hardware becomes available and matures, they may be interested.



STM has a big ecosystem & 200,000 customers.

An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of the Internet of Things and connectivity.

I haven't researched ReRAM NNs in depth, but I think Marco Cassis, president of ST’s Analog, MEMS and Sensors Group was not talking about Akida when he ruled out "spiking neural network chips, also called neuromorphic, as not mature, saying current convolutional neural networks can tap into reduced precision and semiconductor scaling to get more performance. However these CNN devices struggle with power consumption and memory bandwidth challenges that get in the way of scalability."

As we have discussed repeatedly, ReRAMs have their own problems. It is true that, in theory, they provide a much closer synaptic analogy with wetware, but the lack of precision of IC manufacturing at a micro-scale means that they lack accuracy due to resistance variations between individual ReRAMs. The currents from a few hundred (or more) need to be added together to reach a synaptic threshold voltage, so while some errors may cancel out, there is the possibility of cumulative errors.
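
To put the variability point in concrete terms, here is a rough Python sketch with invented numbers (not based on any particular ReRAM process): each "synapse" is a conductance with a few percent random spread, and the column current is their sum, so the absolute error in that sum tends to grow as more devices are added even though some errors cancel.

```python
import numpy as np

# Toy model only: a column of ReRAM "synapses" summing read currents toward a
# threshold, with a few percent device-to-device conductance spread (numbers invented).
rng = np.random.default_rng(42)

def column_current(n_cells, v_read=0.2, g_nominal=10e-6, sigma=0.05):
    """Sum the read currents of n_cells, each with conductance drawn around
    g_nominal (siemens) with relative spread sigma."""
    g = g_nominal * (1.0 + sigma * rng.standard_normal(n_cells))
    return float((g * v_read).sum())

for n in (16, 64, 256, 1024):
    ideal = n * 10e-6 * 0.2
    actual = column_current(n)
    # Absolute error tends to grow (roughly with sqrt(n)) even as relative error shrinks.
    print(f"{n:5d} cells: ideal {ideal * 1e6:8.2f} uA, "
          f"summed {actual * 1e6:8.2f} uA, error {abs(actual - ideal) * 1e6:5.2f} uA")
```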

There are techniques to compensate for the inherent variability, but they immediately reduce a major advantage of ReRAM, the small footprint of each ReRAM cell on the silicon wafer ... this from Weebit:

https://www.weebit-nano.com/technology/reram-memory-module-technology/

"An efficient ReRAM module must be designed and developed in close relation with the memory bitcell so it can optimize the functionality of the memory array. Due to the inherent variability of ReRAM (RRAM) cells, specially developed algorithms are key to the process of programming and erasing cells. These algorithms must be delicately balanced between programming time (the quicker, the better), current (the lower, the better), and cell endurance (allowing each individual cell to operate for as many program/erase [P/E] cycles as possible). Voltage levels, P/E pulse widths and the number of such pulses must be optimized to work with a given bitcell technology.

When reading any given bit, the data must be verified against other assistive information to make sure there are no read errors that could impair overall system performance.

Voltage and current levels must be carefully examined throughout the memory module for any operation – including read, program and erase – to keep power consumption to a minimum and ensure the robustness and reliability of the memory array
."

In addition, they need larger operating voltages than digital CMOS because they need to divide the voltage into a number of voltage steps corresponding to the number of synaptic inputs which are added to reach the synaptic threshold. The size of the operating voltage limits how small the manufacturing process node can be, e.g. 22 nm, before the voltage can jump between conductors.
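
To put rough numbers on that, here is a back-of-envelope Python snippet (figures invented purely for illustration): dividing a low supply rail into one step per summed input leaves only a few millivolts per step, which is why analogue summation pushes towards higher operating voltages than digital CMOS needs.

```python
# Back-of-envelope only: squeezing many analogue summation levels into one
# supply rail leaves very little voltage per level (all numbers invented).
def mv_per_level(supply_v, n_levels):
    """Voltage per distinguishable level, in millivolts."""
    return 1000.0 * supply_v / n_levels

for supply in (1.2, 3.3):
    for n_inputs in (64, 256, 1024):
        print(f"{supply:3.1f} V rail, {n_inputs:4d} summed inputs -> "
              f"{mv_per_level(supply, n_inputs):6.2f} mV per step")
```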

Our friend Weebit has planted their flag at 12 nm, but I don't know whether this is achievable or aspirational.

https://www.weebit-nano.com/technology/overview/



https://www.weebit-nano.com/technology/reram-bitcell/

Weebit scaling down its ReRAM technology to 22nm​

Weebit is scaling its embedded ReRAM technology down to 22nm – one of the industry’s most common process nodes.


To be useful in a CPU or GPU, ReRAM output must be converted to digital in an ADC (analog-to-digital converter).

It sounds like a lot of fluster, but the Weebit ReRAM hybrid analog/digital neuromorphic circuit (something I have previously dubbed Frankenstein) is well received in the market, even though it is not spruiked as having the capabilities of Akida, but rather for its memory.

How many more bells and whistles are needed to develop a ReRAM NN?
 
  • Like
  • Fire
  • Love
Reactions: 20 users

Diogenese

Top 20
I still have good memories from last week's $0.60 party hosted by Bravo. Maybe a $0.50 party soon 😬
Speaking of that, did anyone find a set of dentures in a glass by the side of the hot tub?

... asking for a friend.
 
  • Haha
  • Like
Reactions: 27 users

Diogenese

Top 20
Some more research indicates that STMicroelectronics are attempting to do neuromorphic computing with RRAM.

In an article in February 2022, STM mentioned that neuromorphic chips are not mature enough. They want to move computation to the memory.

In the plenary session of the ISSCC conference, Marco Cassis, president of ST’s Analog, MEMS and Sensors Group, looked at the various AI technologies for sensors. He rules out spiking neural network chips, also called neuromorphic, as not mature, saying current convolutional neural networks can tap into reduced precision and semiconductor scaling to get more performance. However these CNN devices struggle with power consumption and memory bandwidth challenges that get in the way of scalability.

“To overcome this limitation is to partially or completely move the computation to the memory,” he said. “In Memory Computing can bring big benefits, 100x densities and efficiencies compared to current state of the art solutions. Here an especially promising avenue is the use of non volatile resistive memory devices to perform computations in the memory itself.”



From May 2019 to January 2023, a consortium of companies including STM, funded by the European Commission, worked on the TEMPO project.

Technology and hardware for neuromorphic computing​

Project description​

New ways to integrate emerging memories to enable neuromorphic computing systems​

Artificial intelligence (AI) and machine learning are used today for computing all kinds of data, making predictions and solving problems. These are processes based increasingly on deep neuronal network (DNN) models. As the volume of produced data slows down machines and consumes greater amounts of energy, there is a new generation of neural units. The spiking neural networks (SNNs) incorporate biologically-feasible spiking neurons with their temporal dynamics. The EU-funded TEMPO project will leverage emerging memory technology to design new innovative technological solutions that make data integration simpler and easier via new neuronal DNN and SNN computing engines. Reduced core computational operational systems’ neuromorphic algorithms will serve as demonstrators.

Objective​

Massive adoption of computing in all aspects of human activity has led to unprecedented growth in the amount of data generated. Machine learning has been employed to classify and infer patterns from this abundance of raw data, at various levels of abstraction. Among the algorithms used, brain-inspired, or “neuromorphic”, computation provides a wide range of classification and/or prediction tools. Additionally, certain implementations come about with a significant promise of energy efficiency: highly optimized Deep Neural Network (DNN) engines, ranging up to the efficiency promise of exploratory Spiking Neural Networks (SNN). Given the slowdown of silicon-only scaling, it is important to extend the roadmap of neuromorphic implementations by leveraging fitting technology innovations. Along these lines, the current project aims to sweep technology options, covering emerging memories and 3D integration, and attempt to pair them with contemporary (DNN) and exploratory (SNN) neuromorphic computing paradigms. The process- and design-compatibility of each technology option will be assessed with respect to established integration practices. Core computational kernels of such DNN/SNN algorithms (e.g. dot-product/integrate-and-fire engines) will be reduced to practice in representative demonstrators.

Some other well-known companies involved are Valeo, Philips, Thales, Bosch, Infineon & SynSense.



In June 2022, STM released new inertial sensors containing an intelligent sensor processing unit (ISPU) for on-device processing.

The ISM330IS embeds a new ST category of processing, ISPU (intelligent sensor processing unit) to support real-time applications that rely on sensor data. The ISPU is an ultra-low-power, high-performance programmable core which can execute signal processing and AI algorithms in the edge. The main benefits of the ISPU are C programming and an enhanced ecosystem with libraries and 3rd party tools/IDE.

The ISM330ISN is scheduled to enter production in H2 2022 and will be available from st.com or distributors for $3.48 for orders of 1000 pieces. NanoEdge AI Studio, enabling the creation of libraries designed for specific ISPU part numbers, is available at no charge on ST.com.

Due to the release date, the ISPU is unlikely to have Akida IP. As neuromorphic hardware becomes available and matures, they may be interested.



STM has a big ecosystem & 200,000 customers.

An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of the Internet of Things and connectivity.

I think Marco Cassis, president of ST’s Analog, MEMS and Sensors Group was not talking about Akida when he ruled out "spiking neural network chips, also called neuromorphic, as not mature, saying current convolutional neural networks can tap into reduced precision and semiconductor scaling to get more performance. However these CNN devices struggle with power consumption and memory bandwidth challenges that get in the way of scalability."

As we have discussed repeatedly, ReRAMs have their own problems. It is true that, in theory, they provide a much closer synaptic analogy with wetware, but the lack of precision of IC manufacturing at a micro-scale means that they lack accuracy due to resistance variations between ReRAMs when the currents from a few hundred (or more) need to be added together to reach a synaptic threshold voltage.

In addition, they need larger operating voltages than digital CMOS because they need to divide the voltage into a number of voltage steps corresponding to the number of synaptic inputs. The size of the operating voltage limits how small the manufacturing process node can be, e.g. 22 nm, before the voltage can jump between conductors.
 
  • Like
  • Fire
Reactions: 8 users

TECH

Regular
I am not saying you are definitely wrong, but Anil Mankar's likes are problematic because he is so well known and respected in the industry. He is just a genuinely nice person, so his likes and interest in others are not always about Brainchip.

My opinion only DYOR
FF

AKIDA BALLISTA

I agree wholeheartedly. On a number of occasions over the years, in brief exchanges with Anil, he has come across as a very humble, conservative, polite person, cut from the same cloth as Peter. They have been great business partners, bouncing ideas off each other years before where we find ourselves today, but more importantly, great friends for many years.

To see others achieve in this industry and acknowledge that, shows a true sense of character, and Anil and Peter have that in spades.

I also personally think Anil encourages and acknowledges young engineers from India making a mark in this industry for themselves and their country...why do I love our company? It's obvious: it's the people involved from the earliest days.....I politely salute them.

Tech :)
 
  • Like
  • Love
  • Fire
Reactions: 38 users

FJ-215

Regular
Thanks for posting this @Doz,

Watching Bloomberg TV atm, Credit Suisse hitting new lows. More pain coming for the bank sector by the looks. Raising capital is going to become more difficult/expensive.

BRN are in a very good position financially, money in the bank and an early call on LDA for more.

LDA still has around 10M shares left, plus potentially another 10M at the board's discretion. Hopefully BRN have a rabbit or two they can pull out of the hat and raise the whole $27.9M under the POA.

Going to be some sleepless nights for Ken and the BoD over the coming weeks I think.
 
  • Like
  • Fire
Reactions: 13 users

Diogenese

Top 20
Interesting.

ST to launch its first neural microcontroller with NPU​

Business news | May 12, 2022
By Nick Flaherty

STMicroelectronics is set to launch its first microcontroller with a full neural processing unit (NPU).


The STM32N6 includes a proprietary NPU alongside an ARM Cortex core. This gives the same AI performance as a quad-core processor with an AI accelerator but at one tenth the cost and one twelfth the power consumption, says Remi El-Ouazzane, President, Microcontrollers & Digital ICs Group at ST.

The chip will sample at the end of 2022, he said. While there is no mention of the ARM core that will be used, the performance and power figures point to ARM’s M55 or even its latest core, the M85, which has recently been announced. The M85 is ARM’s highest performance M core, issuing up to three instructions per cycle, with internal accelerators for AI that help to boost performance.

ST is a key lead developer for ARM’s microcontroller cores and uses the ARM M7 core in the dual core M4/M7 STM32H7 and in the STM32F7 family alongside an ART AI accelerator. A new family, N6, points to the use of a new core.


Supporting AI in industrial and embedded designs was a key driver for the recent acquisition of French software tool developer Cartesiam and is part of the strategy to achieve $20bn of revenue between 2025 and 2027.

“The new STM32N6 Neural MCU is dramatically lowering the AI technology implementation price point. This breakthrough supports our roadmap of new generation intelligent sensors allowing rapidly growing adoption in Smart Cities,” said Vincent SABOT, Executive Managing Director of developer Lacroix.

The choice of architecture is key for adding security support and integration with cloud services, and the M55 and M85 support the ARMv8.1-M architecture. Yesterday ST announced deals with Microsoft and Amazon to connect the microcontrollers to the cloud.

ST is integrating its STM32U5 microcontrollers (MCUs), based on the 160MHz ARM Cortex-M33 core, with Microsoft Azure real time operating system (RTOS) and IoT Middleware and a certified secure implementation of ARM Trusted Firmware -M (TF-M) secure services for embedded systems.

The integration uses the hardened security features of the STM32U5 complemented with the hardened key store of an STSAFE-A110 secure element.

The integration with Amazon Web Services (AWS) also uses the STM32U5, this time with Amazon’s FreeRTOS real time operating system and the ARM trusted-firmware for embedded systems (TF-M).

The reference implementation is built on ST’s B-U585I-IOT02A discovery kit for IoT nodes with support for USB, WiFi, and Bluetooth Low Energy connectivity, as well as multiple sensors. The STSAFE-A110 secure element is pre-loaded with IoT object credentials to help secure and simplify attachment between the connected objects and the AWS cloud.

FreeRTOS comprises a kernel optimized for resource-constrained embedded systems and software libraries for connecting various types of IoT endpoints to the AWS cloud or other edge devices. AWS’s long-term support (LTS) is maintained on FreeRTOS releases for two years, which provides developers with a stable platform for deploying and maintaining their IoT devices.

Hardware cryptographic accelerators, secure firmware installation and update, and enhanced resistance to physical attacks provide PSA Certified Level-3 and SESIP 3 certifications.

ST will release an STM32Cube-based integration of reference implementation for both the Azure and AWS integrations in Q3 2022 that will further simplify IoT-device design leveraging tight integration with the wider STM32 ecosystem.


The NPU of the STM32N6 is internally developed by STM.


Remi El-Ouazzane 10mo

Earlier today during STMicroelectronics Capital Markets Day (https://cmd.st.com), I gave a presentation on MDG's contribution to our ambition of reaching $20B revenue by 2025-27.

During the event, I was proud to pre-announce the #STM32N6: a high performance #STM32 #MCU with our new internally developed Neural Processing Unit (#NPU) providing an order of magnitude benefit in both inference/w and inference/$ against alternative MPU solutions

The #STM32N6 will deliver #MPU #AI workloads at the cost and the power consumption of #MCU. This is a complete game changer that will open new ranges of applications for our customers and allow them to democratise #AI at the edge.

I am excited to say we are on track to deliver first samples of the #STM32N6 by the end of 2022.

I am even more excited to announce that LACROIX will leverage this technology in their next generation smart city products.

Stay tuned for more news on the #STM32N6 in the coming months
:=)

########################################################

Bells and whistles:

This is how STM does in-memory compute (not a pretty sight):

EP3761236A2 ELEMENTS FOR IN-MEMORY COMPUTE



A memory array arranged in multiple columns and rows. Computation circuits that each calculate a computation value from cell values in a corresponding column. A column multiplexer cycles through multiple data lines that each corresponds to a computation circuit. Cluster cycle management circuitry determines a number of multiplexer cycles based on a number of columns storing data of a compute cluster. A sensing circuit obtains the computation values from the computation circuits via the column multiplexer as the column multiplexer cycles through the data lines. The sensing circuit combines the obtained computation values over the determined number of multiplexer cycles. A first clock may initiate the multiplexer to cycle through its data lines for the determined number of multiplexer cycles, and a second clock may initiate each individual cycle. The multiplexer or additional circuitry may be utilized to modify the order in which data is written to the columns.
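
As best I can decode the abstract, the dataflow is: each column's computation circuit produces a partial value, and the sensing circuit combines those values over N multiplexer cycles, where N is however many columns the compute cluster occupies. A toy Python model of that (my reading only, not ST's actual circuit):

```python
import numpy as np

# Toy reading of the abstract's dataflow: per-column computation, then a
# sensing circuit that combines the column results over N multiplexer cycles,
# where N is set by how many columns the compute cluster occupies.
def cluster_compute(memory_array, row_inputs, cluster_columns):
    """memory_array: (rows, cols) stored weights; row_inputs: length-rows
    activations; cluster_columns: the columns holding one cluster's data."""
    per_column = memory_array.T @ row_inputs          # per-column computation
    total = 0.0
    for col in cluster_columns:                       # column multiplexer cycles
        total += per_column[col]                      # sensing circuit combines
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = rng.integers(-2, 3, size=(8, 16)).astype(float)
    x = rng.integers(0, 2, size=8).astype(float)
    print(cluster_compute(weights, x, cluster_columns=[0, 1, 2, 3]))
```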

So add that to Weebit's hybrid analog/digital ReRAM ...

Well, you did ask!
 
  • Like
  • Fire
  • Haha
Reactions: 13 users

stockduck

Regular
Back home on PC.

Much criticism has been levelled at Sean Hehir for not communicating with shareholders over the last several months and to some extent this type of criticism has been made since shortly after he took the reins at Brainchip.

My intention is not to debate the merits of the criticism; suffice to say I have on two occasions made clear a concern I had about his communication style. But I am just an old technophobe with no experience of running, for the sake of this discussion, a Silicon Valley-type technology start-up that originated from Australia, is currently listed on the ASX, and intends to eventually launch on Nasdaq during a time of global turbulence, with the major shareholders and founders looking over my shoulder, so I expressed those concerns directly to the company.

In expressing those concerns, it was emphasised that the company was in the process of reinventing its corporate branding as a commercial entity and that, while it appeared to be taking backward steps, these measures were essential if the end goals were to be realised.

I probably do not need to write that Tony Dawe has been under almost constant attack for the best part of six months from shareholders concerned about the direction of the company, and that over this time the company has, at one level, only poorly communicated the importance of what was taking place to set the foundations for success.

These foundations can be summed up from the company's point of view in the one word "ECOSYSTEMS".

So if we look at the avalanche of news over the last couple of weeks, today's promise by Sean Hehir of more to come, the focus of today's podcast fronted by Sean Hehir being "ECOSYSTEMS", and the qualifications of the guest to speak precisely on this point, my impression is that this podcast was Sean Hehir trying to explain to shareholders why the pain of the last 18 months since his appointment had been necessary.

I posted earlier a quote relating to competitors but the overall paper is about whether the world is ready for neuromorphic computing and in essence it does nothing if not make clear that in the absence of what Sean Hehir and his guest describe as "Ecosystems" an AKIDA technology revolution cannot take place.

Building these "ECOSYSTEMS" is not sexy, it does not bring in immediate income but without technology "ECOSYSTEMS" AKIDA will have no value to anyone.

At the AGM last year Sean Hehir used his three-legged chair to describe what he was doing and how, without one of the legs, a three-legged chair cannot stand or be of any use.

"ECOSYSTEMS" were one of the legs of this chair and today was Sean Hehir attempting to explain with the assistance of someone who has done it for Nvidia why taking the time to build "ECOSYSTEMS" to support AKIDA technology just had to be accomplished otherwise failure would be the outcome.

I hope that this podcast has drawn a line in the sand and that going forward we will see a change in the communication that takes place with shareholders. In this regard I am not talking about lots of IP licence agreements being announced; what I am talking about is a less introspective style, one where Sean Hehir fully takes on the role of visionary CEO leading from the front, confident that all the foundations needed, including "ECOSYSTEMS", are firmly in place.

If this all sounds a bit Tesla-ish, having a vision that shareholders, customers and "ECOSYSTEM" partners can embrace is, in my opinion, absolutely essential if Brainchip is to lead the Artificial Intelligence revolution and have the world follow AKIDA technology science fiction forward into the 4th Industrial Revolution.

Well you did ask.😁😂🤣😂

My opinion only DYOR
FF

AKIDA BALLISTA
A very nice explanation....and I think you "nailed" it.

That's how I feel too, but for a "mom and dad investor" there is also the pain and the high financial risk, even if you do not "need" the money you have invested. I hope Mr. CEO knows about the confidence those shareholders have put in him.

It will be very exciting to see how any new financial investment will be made, if that turns out to be the case in future.

Do we have to endure a further fall in the share price, or can a new investment also result in added value in terms of the share price, so that although patience is required on the part of the shareholders as an ingredient in the cake, the shareholders are not thrown out of the kitchen and can continue to watch the preparation of the coffee coronation, remaining welcome co-owners as tasters?

Are "mom and dad Investors" a part of the ecosystem?

;)😇
I hope I got the message right.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

overpup

Regular
Well said, SD... I have been trying to find the words for this growing feeling for some time now. You have done it better than I could - thank you!
 
  • Like
Reactions: 8 users

stockduck

Regular
Some news from Nürnberg....


".......o realize continuous data collection, system monitoring, and remote debugging for the charging equipment across the world. The solution can restart the power during system failures or while offline to optimize edge device management and reduce downtime.

....."

There seems to be a business relationship with Aetina.



There seems to be a connection to Hailo....

".....
Aetina AI-MXM-H84A modules feature four Hailo-8™ AI processors, providing up to 104 Tera-Operations Per Second (TOPS) AI performance with best-in-class power efficiency to speed up deployment of neural network (NN) and deep learning (DL) processes on edge devices by AI developers. The Hailo-8™ AI accelerator allows edge devices to run DL applications at full scale with superb efficiency, effectiveness, and sustainability.
......"
😲
 
  • Like
  • Fire
Reactions: 9 users

Evermont

Stealth Mode
Nice to see we cracked a mention in GFT Ventures Q1 Update.


 
  • Like
  • Fire
Reactions: 23 users

stockduck

Regular
I took a look at their technology partners.....

I think.....here we go.:D🙃😉
Let's be patient.....patient....patient......:love:


....but maybe 2.5-5 W power consumption is too much...?

"....
  • Industry-leading power efficiency with a typical power consumption of 5W ...."


but there is a nice parallel in the naming in this news from 8 March:


".....
The Hailo-15 VPU family includes three variants — the Hailo-15H, Hailo-15M, and Hailo-15L — to meet the varying processing needs and price points of smart camera makers and AI application providers. Ranging from 7 TOPS (Tera Operation per Second) up to an astounding 20 TOPS, this VPU family enables over 5x higher performance than currently available solutions in the market, at a comparable price point. All Hailo-15 VPUs support multiple input streams at 4K resolution and combine a powerful CPU and DSP subsystems with Hailo’s field-proven AI core.
...."
just like Akida E / Akida S / Akida P..?



".....
With this family of high-performance AI vision processors, Hailo is also pioneering the use of vision-based transformers in cameras for real-time object detection. The added AI capacity can also be utilized for video enhancement and much better video quality in low-light environments, for video stabilization, and high dynamic range performance.
...."
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 14 users

Serengeti

Regular

Attachments

  • 60600EAB-62E5-43C4-8C83-839EAF9B9870.jpeg
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 19 users

dippY22

Regular
I have said it many times: when this place is used for its proper purpose, to share thoughts (not anxieties) and research, this is as close to insider trading as you will get without breaking the law. Many thanks @stuart888 . You see Americans are nice people too.😂🤣😂

However, let's look at what Teksun replaced Cisco, Toshiba and Lenaro with:


"BrainChip is a technology company making AI everywhere easier to deploy and scale. It’s neuromorphic processor called, Akida™, is pure digital and is an event-based technology that is inherently lower power when compared to conventional neural network accelerators. BrainChip IP supports incremental learning and high-speed inference in a wide variety of use cases, making it cost effective and simple to implement. Designed for sustainable AI, Akida™, performs complex functions at the device level, delivering a real-time, instant response.

BrainChip's solutions have applications in various industries, including automotive, smart homes, security and surveillance. The company has partnered with industry leaders such as Arm, Mercedes Benz, and Renesas driving intelligence into next generation device
s."

So,

I knew
Brainchip had partnered with ARM, it's on their website.

I DID NOT know that Brainchip had partnered with Renesas to drive intelligence into next generation devices. What was announced on the ASX was that Renesas had purchased two nodes of AKIDA IP and, more recently, that Renesas had taped out its own chip which it would be bringing to market later in 2023. Even Chippy said in his magazine interview that they had purchased what they needed from Brainchip, full stop. No ongoing partnership was implied in any way, shape or form.

I MOST CERTAINLY DID NOT KNOW THAT MERCEDES BENZ HAD PARTNERED WITH BRAINCHIP TO DRIVE INTELLIGENCE INTO NEXT GENERATION DEVICES, because all I was told by Brainchip was that they could not talk about Mercedes Benz other than to say they continued to work with Mercedes Benz, and Mercedes Benz had said they were working with artificial intelligence experts Brainchip. Just because you are working with someone, it does not mean it is a partnership, nor does it signal anything about your future intentions. Brainchip has on its website that they are trusted by Mercedes Benz. I trust my brother-in-law but I am not in partnership with him. This is a significant statement of intent.

Given this is a revised statement, I think it is reasonable to conclude that both Teksun and Brainchip have agreed on this wording. Indeed, Tony Dawe has said Brainchip approved this new wording. As such I am just as excited, if not more so, by what it now publicly reveals. The significance, in my opinion, dictates the conclusion that Teksun had leverage and wanted something of equal or greater significance to replace Cisco, Toshiba and Lenaro with, for the purpose of driving their customers' interest in Teksun Brainchip product offerings. In other words: if you don't want us to use Cisco, Toshiba and Lenaro, what can we use?

I hope that our German investor friends take comfort from the fact that this partnership between Mercedes Benz and Brainchip to drive intelligence into next generation devices has now been publicly confirmed.

My opinion only DYOR
FF

AKIDA BALLISTA

Really nice FF.

There is a saying I like, regardless of who has said it in the past and to whom it may recently be attributed: "it takes a village".

The phrase is applicable in very general terms: to succeed is often beyond just individual abilities. Think of the rural underdeveloped village and the necessary communal support required for it to survive. Or the family member needing a helping hand in a period of distress or hardship. Churches are great examples of the village, as are many helping-hand non-profits.

But the 1,000 eyes that comprise this forum are that metaphorical village, too.

And in times of a shrinking stock price when many here (me included) get agitated and distressed fretting about it alone with their thoughts at 3 a.m., it is an immense relief to have input from some who have the ability to cull out the wheat from the chaff, and FF is a prominent example of that. He is a superb reader with laser-focused attention to detail. Perhaps that comes from being an attorney, I'd surmise. I read the same updated post from Teksun and flew over the words without another glance, or a thought such as, ...."Whoa, wait...., that is different than what I previously knew, isn't it?"

There are many others in the TSE village who do the same thing but in a different way, ....hence the powerful heterogeneous nature of the 1,000 eyes and what makes it so daunting and relentless in the pursuit of, not just dot connections, but the truth. The TSE village, like any "village", has a wonderful and necessary diverse mix of life and work experience, ...from engineering to management, entrepreneurship to the lifelong investor class, an internationally located fan base, enthusiasm levels, and research acumen. From the passive non-contributing reader to the energizer bunny types who have insatiable drive and impressive detective skills like - stuart888 and of course, the irrepressible Bravo (meow meow). Many, many others I will neglect to mention but they are well known and valued.

With age comes a mellowing of sorts and a knack for not getting "ones undies in a bundle" when challenges confront you.

The culmination of lifelong experiential learning results in what a common villager might refer to as the elder, who responds to a challenge in a way most of the village is incapable of: calmly and wisely. I'm not calling Fact Finder an elder, per se (well, actually I guess I am), but he is really good when it comes to such nuggets as the example post I replied to.

I suppose in investing it is like a Warren Buffett, who has been at the game a long time, buying when the rest of his investing brethren village is selling. Or vice versa. Reading the tea leaves has always had some cachet for a reason, but being a discerning reader like FF (or Buffett) is priceless to this forum and to our investing success.

The TSE is a great forum (and village) when it chooses to be and I again applaud those here who make it so. Regards, dippY
 
  • Like
  • Love
  • Fire
Reactions: 75 users

Beebo

Regular
Ok, so if QCOM is not the “communications” company, could CISCO be it? Could we be involved in Industrial IoT with CISCO?

Well, CISCO is another household name I approve of 😀 👍
 
  • Like
Reactions: 11 users

Makeme 2020

Regular
 
  • Like
  • Fire
Reactions: 29 users

Makeme 2020

Regular
 
  • Like
  • Fire
  • Wow
Reactions: 22 users
D

Deleted member 118

Guest
A complete stranger at embedded world day 2

 
  • Fire
  • Like
Reactions: 3 users

rgupta

Regular
Latest tweet from Edge Impulse.

It is an Nvidia embedded product which can survive 2 years on a 19 Ah battery.
If Brainchip is not involved here, then what does that mean???
 
  • Like
Reactions: 6 users

rgupta

Regular
Looks like training on the edge: camera, noise, vibration samples, etc.
Another part of the puzzle: MegaChips sells two of our licences; any guesses who those two could be?
I assume more likely than not Edge Impulse is one of them.
DYOR
 
  • Like
Reactions: 2 users
Top Bottom