BRN Discussion Ongoing

Learning

Learning to the Top 🕵‍♂️
The only rule on a roller coaster is to stay seated. It's those who stand up who lose their heads. 😂🤣😞

FF

AKIDA BALLISTA
Very true FF,

A roller coaster ride is fun and exciting, and sometimes dangerous. But one must stay seated and buckle up, and it will arrive at the destination!
The moment you unbuckle and get off halfway, that is when it gets dangerous (financially).

So stay seated and enjoy the ride everyone 🎢🚃🚀

It's great to be a shareholder.
 
  • Like
  • Fire
  • Love
Reactions: 28 users
I highly recommend that anyone who's interested in the workings of market manipulation watch this video.



It is mainly aimed at informing people who like to trade, but it provides valuable insight into the actions of "Market Movers", "Smart Money" and "Professional Shorters".

Particularly interesting were his statements that these individuals/algorithms can see all orders in the system, including "stop losses", which are basically sell orders.

And the penny-drop for me: they are responsible for the big moves, in both directions, as traps.

These measures apply to all markets and will be played out, on a larger scale, in the US tonight.

Incidentally, I've only watched 2 or 3 of this guy's videos, a while back, but he's responsible for my adoption of Heikin-Ashi candles when I do check the charts.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 35 users
Maybe comments on the recent podcast have already been posted, but I have to say that listening to Doug Fairbairn was magical.

MegaChips has certainly picked a top man to head their operations in the US; he has clearly stated the early product pipeline, that being Wearables, Security and IIoT (Industrial Internet of Things), so expect our IP embedded in these fields first.

The billion-dollar company that nobody had/has ever heard of, except within the Japanese market space, until now!

I had a quiet laugh to myself earlier today while chatting to my younger sister back in New Zealand. Back in 2020, while I was over at my property in NZ, we had a conversation about buying into Brainchip at 5c. Today she was considering buying a small parcel, and I said the price was currently 0.852 and that it was very cheap, having myself bought more at 0.95, $1.07 and $1.16 recently. I'm not concerned with those prices whatsoever, just as I wasn't when buying at 0.039... but my advice to her was: if you weren't prepared to listen to me then, at the 5c mark, why listen to me now at 0.852?

I just listen to our founder and all our brilliant staff; if they say it's all on track, well, that's good enough for me.

Always trust your own gut feel. Not everyone shares the same risk tolerance as I and many others do, and that's OK; some investors are more than happy working on the yield as their return, and I accept that.

Cheers and goodnight from Perth.....Tech x
Hey Tech, where do you have a property in NZ?
I have a little place on Waiheke Island. I love the place but never get to spend enough time there; hopefully things will change and I will have the funds to support myself soon, if all things go the way I think they will.
Go you good thing BRN…
 
  • Like
  • Love
Reactions: 12 users
Excellent post on LinkedIn @chapman89 just had a read and a like.
 
  • Like
  • Fire
Reactions: 8 users
  • Like
Reactions: 3 users
Not sure if you got a LinkedIn account.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

cosors

👀
Hi Cosors,

They're using a CNN-IP dedicated deep learning module for their cameras, not an SNN, so I don't think they're using Akida. It looks like they are targeting cheaper models (a niche market), not looking to go fully autonomous or past ADAS Level 3. Very basic hands-off stuff and auto parking at this stage.
Those who are well versed in the subject can ignore this post.

I'm responding to you because I read this and it was new to me, I've only been around a short time.

May 29, 2020
"The startup Brainchip therefore set itself the goal of developing an SNN-based chip with the Akida that is suitable for use in edge devices. It was actually supposed to be on the market last year. But then co-founder and CTO Peter van der Made realized that potential users would not be able to do much with a pure SNN chip simply because they are used to CNNs. So Brainchip decided to also integrate CNN functions on the Akida. "That's why we accepted the additional development time of one year and integrated MAC arrays on the chip," explained Peter van der Made to Markt&Technik."

I guess that's still the case: one can choose between SNN and CNN, just that CNN can't be processed as fast and energy-efficiently? I hope I understand this correctly. If that's the case, then I'll have to revise my original answer a bit, because I would have been wrong. Please bear with me on my newbie post.

Here is a slightly older (2020) article, short and easy to understand, for those who sometimes cannot follow the content and wonder what Diogenese talks about.
I don't think it's been shared here yet. It starts with Akida and ends with ReRAM from Weebit, in two parts:
https://www.elektroniknet.de/halbleiter/dem-gehirn-immer-aehnlicher.176820.html

I'm not trying to add the two together; Diogenese has already made it clear that this would be difficult, if I have understood correctly. So the two are not readily compatible.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 12 users
Further to the earlier info on Siemens, I just found a recent blog post (June 2) by them, as below.

No reference to BRN or Nviso, obviously, however they definitely appear to be fans of neuromorphic AI, though the author thinks space may be where it first takes hold :unsure:

Maybe the author didn't know of the Nviso collaboration :)




Neuromorphic systems and evolutionary AI
By Spencer Acain • June 2, 2022 • 3 MIN READ

In the field of artificial intelligence, the idea of evolution is a critical one. The very concept of an AI algorithm is one that can grow, change, adapt, and in essence evolve to fit its design requirements. AI research has progressed to such a point that an AI can be trained to achieve a level of performance far surpassing what a human could attain under specific conditions, such as a smart robotics station, offering never before seen precision and speed in smart factories.

However, this high degree of speed and precision does introduce some downsides as well. Training AI to work well in a specific location and task requires a vast amount of data from that location. What’s more, if conditions change, the algorithm may suffer reduced accuracy or cease to function altogether. Evolutionary research and neuromorphic systems is a promising approach for helping AI evolve in a more natural way, through both software and hardware, allowing it to tackle a wider range of conditions without loss of performance.

Creating a robust solution is one of the most important aspects of designing an AI system, especially for industrial applications. AI algorithms are trained to accommodate a wide expected range of conditions based on their expected operation environment, but when incoming data falls outside of the range of training data or becomes too noisy, the system will more than likely fail. Unlike AI, humans can continue to perform tasks even under widely varying conditions, so the goal is to design AI to be more human-like. This is exactly what neuromorphic systems propose to do by seeking to emulate not just the neuron structure of the brain, but the function of the synapses as well through memristors. Synapses and memristors have the property of changing resistance based on how much current has flowed through them in the past, essentially creating a memory of past values. Thanks to this memory-like functionality the AI decision-making process becomes more robust and less susceptible to failing from less-than-ideal input data.

Going hand in hand with neuromorphic systems is evolutionary research, which is fundamentally similar to reinforcement learning but does have some key differences. Reinforcement learning is a common way for AI to be trained by governing a system with a reward function that reinforces correct behaviors and punishes poor ones. This allows the software, essentially on its own, to evolve a solution that fits the requirements. Where an evolutionary system differs is in the scope of what the reward function manages. Called genetic code, its scope goes far beyond training just software; instead it governs everything from robot design to software to control chip layout. While in a factory the options for robot design are limited, the ability to train robotics control hardware through programmable FPGAs along with the software presents a unique path to improving AI robustness when combined with neuromorphic systems.

Allowing the hardware to be trained alongside the software under these new principles paves the way for an entirely new level of adaptability and robustness in AI. Compared to a traditional machine learning training schema, a neuromorphic system does not need to be trained with only “correct” data. Rather its training process can include non-ideal and highly variable data or even outright failure states. Although it might seem counter-productive to include undesirable outcomes in the training process, this approach allows the system to learn and evolve so it can function accurately even when contending with poor quality data or faulty hardware. More than accounting for failures, a neuromorphic system might also help mitigate the heavy reliance AI has on deployment-specific training data. Rather than resilience against poor quality data, the system would be accounting for environmental differences between the training location and the final deployment site.

The quest for truly human-like AI and robotics still has a long way to go but evolutionary design and neuromorphic systems combining to allow hardware and software to truly evolve side-by-side bring us a step closer to true natural evolution. While this technology still requires substantial development before it is economically viable to replace existing AI implementations in phones and factories, it may see its first applications in the cold reaches of space. The extreme conditions found even in low earth orbit make it difficult for existing AI systems to function accurately, something of paramount importance when operating a million miles from earth. In years to come, neuromorphic systems may become the living mind of the next generation of satellites and deep space exploration crafts pushing the boundary of human understanding.

Siemens Digital Industries Software is driving transformation to enable a digital enterprise where engineering, manufacturing and electronics design meet tomorrow. Xcelerator, the comprehensive and integrated portfolio of software and services from Siemens Digital Industries Software, helps companies of all sizes create and leverage a comprehensive digital twin that provides organizations with new insights, opportunities and levels of automation to drive innovation.

For more information on Siemens Digital Industries Software products and services, visit siemens.com/software or follow us on LinkedIn, Twitter, Facebook and Instagram.

Siemens Digital Industries Software – Where today meets tomorrow.
 
  • Like
  • Fire
Reactions: 26 users
Those who are well versed in the subject can ignore this post.

I'm responding to you because I read this and it was new to me, I've only been around a short time.

May 29, 2020
"The startup Brainchip therefore set itself the goal of developing an SNN-based chip with the Akida that is suitable for use in edge devices. It was actually supposed to be on the market last year. But then co-founder and CTO Peter van der Made realized that potential users would not be able to do much with a pure SNN chip simply because they are used to CNNs. So Brainchip decided to also integrate CNN functions on the Akida. "That's why we accepted the additional development time of one year and integrated MAC arrays on the chip," explained Peter van der Made to Markt&Technik."

I guess that's still the case: one can choose between SNN and CNN, just that CNN can't be processed as fast and energy-efficiently? I hope I understand this correctly. If that's the case, then I'll have to revise my original answer a bit, because I would have been wrong. Please bear with me on my newbie post.

Here is a slightly older (2020) article, short and easy to understand, for those who sometimes cannot follow the content and wonder what Diogenese talks about.
I don't think it's been shared here yet. It starts with Akida and ends with ReRAM from Weebit, in two parts:
https://www.elektroniknet.de/halbleiter/dem-gehirn-immer-aehnlicher.176820.html

I'm not trying to add the two together; Diogenese has already made it clear that this would be difficult, if I have understood correctly. So the two are not readily compatible.
"I guess that's still the case, it can be chosen between SNN and CNN, just that it can't be processed as fast and energy efficient?"

My understanding, Cosors, is that AKIDA takes the Convolutional Neural Network and "spike-ises" it, and in so doing retains the lion's share of the Spiking NN's benefits 😉
 
  • Like
  • Love
  • Fire
Reactions: 17 users
Morning Rise from the ashes,

That was a pretty intense read, on my mobile.

Skimming through, all interesting.

Try these pages, if limited for time.

Page 79, Neuromorphic chips.

Page 91, Artificial Synapses/ Brain.

Page 129, Molecular Recognition.

Page 282, interesting chart.

Page 292, interesting

Page 328, Fast Emerging Tech.

Definitely worth a read.

Regards,
Esq.
Good morning @Esq.111
Yes I found it a very insightful read🤓 cheers for taking the time to have a read. Glad you found it worth your while.
 
Last edited:
  • Like
  • Fire
Reactions: 7 users

Diogenese

Top 20
Those who are well versed in the subject can ignore this post.

I'm responding to you because I read this and it was new to me, I've only been around a short time.

May 29, 2020
"The startup Brainchip therefore set itself the goal of developing an SNN-based chip with the Akida that is suitable for use in edge devices. It was actually supposed to be on the market last year. But then co-founder and CTO Peter van der Made realized that potential users would not be able to do much with a pure SNN chip simply because they are used to CNNs. So Brainchip decided to also integrate CNN functions on the Akida. "That's why we accepted the additional development time of one year and integrated MAC arrays on the chip," explained Peter van der Made to Markt&Technik."

I guess that's still the case: one can choose between SNN and CNN, just that CNN can't be processed as fast and energy-efficiently? I hope I understand this correctly. If that's the case, then I'll have to revise my original answer a bit, because I would have been wrong. Please bear with me on my newbie post.

Here is a slightly older (2020) article, short and easy to understand, for those who sometimes cannot follow the content and wonder what Diogenese talks about.
I don't think it's been shared here yet. It starts with Akida and ends with ReRAM from Weebit, in two parts:
https://www.elektroniknet.de/halbleiter/dem-gehirn-immer-aehnlicher.176820.html

I'm not trying to add the two together; Diogenese has already made it clear that this would be difficult, if I have understood correctly. So the two are not readily compatible.
Thanks for this cosors,

As you say, it explains the concepts in readily-understandable language.

Although I had deduced that Akida with 2-bit and 4-bit weights/activations would need to use MAC (Multiply Accumulate) calculation circuits, this is one of the first publications I have seen which attributes the presence of MACs to a statement from the company.

There is a trade-off between speed/power efficiency and accuracy as the number of bits in the weights/activations increases.

A 4*4 MAC is, roughly speaking, 4 times faster/less power hungry than an 8*8 MAC.

A 1-bit (pure spiking) Akida is about 16 times faster/more power efficient than the 4*4 MAC Akida embodiment. [edit: However, different layers in Akida can have different numbers of bits, so for example, a first layer may use 4 bits, the next 2 layers may use 1 bit, and the output may (possibly?) use 4 bits.]
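To make that scaling concrete, here is a back-of-envelope sketch (my own illustration, assuming MAC cost scales roughly with the product of the operand bit widths; the post above only states the ratios):

```python
# Crude cost model (an assumption for illustration, not BrainChip's figures):
# the cost of a multiply-accumulate scales roughly with the product of its
# operand bit widths.
def relative_mac_cost(weight_bits: int, activation_bits: int) -> int:
    return weight_bits * activation_bits

print(relative_mac_cost(8, 8) / relative_mac_cost(4, 4))  # 4.0  -> a 4*4 MAC is ~4x cheaper than 8*8
print(relative_mac_cost(4, 4) / relative_mac_cost(1, 1))  # 16.0 -> 1-bit spiking is ~16x cheaper than 4*4
```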

As the article you attached says, Weebit are still trying to get a consistently reproducible analog MemRistor, as the manufacturing process is difficult to control precisely. Manufacturing variations are a problem for analog neurons/synapses because there are hundreds or thousands of input signals (currents) for each synapse, in which the input currents are added to produce an output voltage whose amplitude is determined by the sum of the input currents flowing in a resistor. That is, for each input current, the output voltage increases by a fixed amount. Hence the operating voltage of the circuit must be divided into very small increments to accommodate the number of potential input currents. Thus variations in the resistance of the MemRistor/ReRAM elements can produce errors which accumulate.

The reason this is not a problem with digital neurons/synapses is that the digital voltage has a much greater margin for error because the operating voltage only needs to be divided in half to indicate either a 1 or a zero, and the accumulation of signals is a digital number composed of 1s or zeros.
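As a toy illustration of that accumulation argument, here is a minimal simulation (my own sketch, assuming 1000 unit input currents, a 5% spread in element resistance, and a half-rail digital threshold):

```python
import random

random.seed(0)

N = 1000        # input currents summed by one analog neuron
SPREAD = 0.05   # assumed 5% manufacturing spread in memristor resistance

# Analog: the output voltage must resolve N distinct levels within the
# supply rail, so each "level" is tiny; every element's resistance error
# nudges the sum by a fraction of a level, and the nudges accumulate.
analog_sum = sum(random.gauss(1.0, SPREAD) for _ in range(N))
print(f"analog sum is off by {abs(analog_sum - N):.2f} levels")

# Digital: an input only has to clear a half-rail threshold to count as a 1,
# so the same 5% spread never flips a bit and the accumulated count is exact.
digital_sum = sum(1 if random.gauss(1.0, SPREAD) > 0.5 else 0 for _ in range(N))
print(f"digital count is off by {abs(digital_sum - N)}")
```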
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users

Diogenese

Top 20
Thanks for this cosors,

As you say, it explains the concepts in readily-understandable language.

Although I had deduced that Akida with 2-bit and 4-bit weights/activations would need to use MAC (Multiply Accumulate) calculation circuits, this is one of the first publications I have seen which attributes the presence of MACs to a statement from the company.

There is a trade-off between speed/power efficiency and accuracy as the number of bits in the weights/activations increases.

A 4*4 MAC is, roughly speaking, 4 times faster/less power hungry than an 8*8 MAC.

A 1-bit (pure spiking) Akida is about 16 times faster/more power efficient than the 4*4 MAC Akida embodiment.

As the article you attached says, Weebit are still trying to get a consistently reproducible analog MemRistor, as the manufacturing process is difficult to control precisely. Manufacturing variations are a problem for analog neurons/synapses because there are hundreds or thousands of input signals (currents) for each synapse, in which the input currents are added to produce an output voltage whose amplitude is determined by the sum of the input currents flowing in a resistor. That is, for each input current, the output voltage increases by a fixed amount. Hence the operating voltage of the circuit must be divided into very small increments to accommodate the number of potential input currents. Thus variations in the resistance of the MemRistor/ReRAM elements can produce errors which accumulate.

The reason this is not a problem with digital neurons/synapses is that the digital voltage has a much greater margin for error because the operating voltage only needs to be divided in half to indicate either a 1 or a zero, and the accumulation of signals is a digital number composed of 1s or zeros.
On the SNN/CNN issue, CNN is a software technique running MACs on CPU/GPU, hence slow and power hungry.

CNN can be converted to run on Akida SoC using the Akida CNN2SNN toolkit, but it isn't quite plug-and-play.

https://doc.brainchipinc.com/user_guide/cnn2snn.html

The Brainchip CNN2SNN toolkit provides means to convert Convolutional Neural Networks (CNN) that were trained using Deep Learning methods to a low-latency and low-power Spiking Neural Network (SNN) for use with the Akida runtime. This document is a guide to that process.

The Akida neuromorphic IP provides Spiking Neural Networks (SNN) in which communications between neurons take the form of “spikes” or impulses that are generated when a neuron exceeds a threshold level of activation. Neurons that do not cross the threshold generate no output and contribute no further computational cost downstream. This feature is key to Akida hardware efficiency. The Akida neuromorphic IP further extends this efficiency by operating with low bitwidth “synapses” or weights of connections between neurons.

Despite the apparent fundamental differences between SNNs and CNNs, the underlying mathematical operations performed by each may be rendered identical. Consequently, the trained parameters of a CNN can be converted to be Akida-compatible, given only a small number of constraints. By careful attention to specifics in the architecture and training of the CNN, an overly complex conversion step from CNN to SNN can be avoided. The CNN2SNN toolkit comprises a set of functions designed for the popular TensorFlow Keras framework, making it easy to train an SNN-compatible network.
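For anyone who wants to see what that workflow looks like in practice, here is a minimal sketch based on the toolkit documentation linked above (the model file name is a placeholder, and exact parameter names may differ between cnn2snn versions):

```python
from tensorflow import keras
from cnn2snn import quantize, convert

# 1. Start from an ordinary trained Keras CNN
#    ("my_trained_cnn.h5" is a placeholder, not a real file).
keras_model = keras.models.load_model("my_trained_cnn.h5")

# 2. Quantize weights and activations down to the low bitwidths Akida
#    supports (4-bit here); in practice you would fine-tune afterwards
#    to recover any accuracy lost to quantization.
quantized_model = quantize(keras_model,
                           weight_quantization=4,
                           activ_quantization=4)

# 3. Convert the quantized network into an Akida-compatible SNN model.
akida_model = convert(quantized_model)
akida_model.summary()
```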

 
  • Like
  • Fire
  • Love
Reactions: 21 users

cosors

👀
Thanks for this cosors,

As you say, it explains the concepts in readily-understandable language.

Although I had deduced that Akida with 2-bit and 4-bit weights/activations would need to use MAC (Multiply Accumulate) calculation circuits, this is one of the first publications I have seen which attributes the presence of MACs to a statement from the company.

There is a trade-off between speed/power efficiency and accuracy as the number of bits in the weights/activations increases.

A 4*4 MAC is, roughly speaking, 4 times faster/less power hungry than an 8*8 MAC.

A 1-bit (pure spiking) Akida is about 16 times faster/more power efficient than the 4*4 MAC Akida embodiment.

As the article you attached says, Weebit are still trying to get a consistently reproducible analog MemRistor, as the manufacturing process is difficult to control precisely. Manufacturing variations are a problem for analog neurons/synapses because there are hundreds or thousands of input signals (currents) for each synapse, in which the input currents are added to produce an output voltage whose amplitude is determined by the sum of the input currents flowing in a resistor. That is, for each input current, the output voltage increases by a fixed amount. Hence the operating voltage of the circuit must be divided into very small increments to accommodate the number of potential input currents. Thus variations in the resistance of the MemRistor/ReRAM elements can produce errors which accumulate.

The reason this is not a problem with digital neurons/synapses is that the digital voltage has a much greater margin for error because the operating voltage only needs to be divided in half to indicate either a 1 or a zero, and the accumulation of signals is a digital number composed of 1s or zeros.
Thank you very much for both contributions!
Even I understand that.
I'm glad you were able to find something in the article as well (the hint regarding MACs). Here is another report that I have decided to read, which was mentioned here some time ago. It just doesn't resolve the matter as precisely as you did with your other post and the CNN2SNN toolkit. This is probably one of the reports that initially cost a lot of money, and it perhaps focuses more on research regarding Brainchip as an investment.

"Akida can perform event-based convolution as well, enabling the chip to run both Convolutional Neural Networks (CNN, see Appendix I) and Spiking Neural Networks (SNN). BrainChip is the only company to have combined neuromorphic processing with event-based convolution."
Welcome to the revolution

https://static1.squarespace.com/sta...s+research+initiation+report+-+20+08+2021.pdf
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Slymeat

Move on, nothing to see.
  • Like
  • Haha
Reactions: 3 users
I can now visualise how you saw emus.
By the looks of that drawing I think my mate was seeing pink elephants at the time 😂
 
  • Haha
  • Like
Reactions: 2 users
  • Like
  • Fire
Reactions: 4 users

cosors

👀
For those who may be feeling a little bit miffed about the last few weeks on the stock market, may I suggest you book a session (if you are in WA) to Van Gogh Alive, happening at the Supreme Court Gardens. He was a troubled man, but look what he's left us to enjoy.
That's me with my dear old mother.
I actually just wanted to take a quick look at Panasonic and social robotics and Akida. See what they are particularly proud of!



 
  • Like
  • Love
  • Fire
Reactions: 10 users

Slymeat

Move on, nothing to see.
I highly recommend that anyone who's interested in the workings of market manipulation watch this video.



It is mainly aimed at informing people who like to trade, but it provides valuable insight into the actions of "Market Movers", "Smart Money" and "Professional Shorters".

Particularly interesting were his statements that these individuals/algorithms can see all orders in the system, including "stop losses", which are basically sell orders.

And the penny-drop for me: they are responsible for the big moves, in both directions, as traps.

These measures apply to all markets and will be played out, on a larger scale, in the US tonight.

Incidentally, I've only watched 2 or 3 of this guy's videos, a while back, but he's responsible for my adoption of Heikin-Ashi candles when I do check the charts.

Thanks for introducing me to Heikin-Ashi @DingoBorat.

I'm not really all that into charting; I'm more a long-term-hold, ride-the-roller-coaster kind of investor. But I must admit that, on first visit, Heikin-Ashi does indeed appear to be a trend-spotting tool.

I also agree that charts can confirm market sentiment, which often drives prices and can help explain seeming insanity. But I tend to let the science do most of the convincing for me.

As an exercise, I have included the Heikin-Ashi chart for BrainChip. To my untrained eye, it appears to be demonstrating indecision at the moment. Brave traders may see this as a reversal, but it could be a false indicator, similar to those at $1.70, $1.20, and $1.10. The green confirmed uptrends and red confirmed downtrends of the past are quite easy to spot. And I take it the "tails" take some getting used to. For instance, in the uptrend from 70c to $2.30, all bars have decent tails, which, as I understand it, indicates trend-changing pressure. It would take a brave trader to ignore those and stay the path.
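For anyone wondering how those candles are built, the Heikin-Ashi formula is just a recursive averaging of ordinary OHLC bars. A minimal sketch (the standard textbook formula in my own code, assuming a pandas DataFrame with open/high/low/close columns):

```python
import pandas as pd

def heikin_ashi(ohlc: pd.DataFrame) -> pd.DataFrame:
    """Build Heikin-Ashi candles from a DataFrame with open/high/low/close."""
    ha = pd.DataFrame(index=ohlc.index)
    # HA close: average of the current bar's four prices.
    ha["close"] = (ohlc["open"] + ohlc["high"] + ohlc["low"] + ohlc["close"]) / 4
    # HA open: midpoint of the *previous* HA candle (seeded from bar 0),
    # which is what smooths the series compared with raw candles.
    opens = [(ohlc["open"].iloc[0] + ohlc["close"].iloc[0]) / 2]
    for i in range(1, len(ohlc)):
        opens.append((opens[i - 1] + ha["close"].iloc[i - 1]) / 2)
    ha["open"] = opens
    # HA high/low: extremes of the raw bar and the HA body; the range left
    # outside the body forms the "tails" discussed above.
    ha["high"] = pd.concat([ohlc["high"], ha["open"], ha["close"]], axis=1).max(axis=1)
    ha["low"] = pd.concat([ohlc["low"], ha["open"], ha["close"]], axis=1).min(axis=1)
    return ha[["open", "high", "low", "close"]]
```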

But therein also lies my aversion to all forms of charting: they all lag and are open to interpretation! Plus you MUST always consider the real-world picture, which can shape the chart and cause deception. Events such as demergers, splits, consolidations, and special distributions can throw the chart into disarray for extended periods, even months.

That said, this format is one of my new favourites. I also like MACD and a personalised modification of Hull, which mainly uses convergence and divergence of the multiple MAs.

[Heikin-Ashi chart for BRN]


Following is my modified Hull approach applied to BRN, to keep it relevant. This uses multiple fast- and slow-period MAs. The fast MAs I use are 3, 5, 7, 9, 11, and 13, and the slow MAs are 21, 24, 27, 30, 33, and 36. I have added Heikin-Ashi to show how the two work nicely together, with Heikin-Ashi surprisingly having far less lag but showing more false reversals.
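As a rough sketch of that multiple-MA setup (my own minimal version; the post doesn't spell out the Hull modification, so plain simple moving averages stand in for it here):

```python
import pandas as pd

FAST_PERIODS = [3, 5, 7, 9, 11, 13]      # fast group, periods as quoted above
SLOW_PERIODS = [21, 24, 27, 30, 33, 36]  # slow group, periods as quoted above

def ma_ribbon(close: pd.Series) -> pd.DataFrame:
    """One column per moving average; plotted together they form a ribbon."""
    return pd.DataFrame(
        {f"ma_{n}": close.rolling(n).mean() for n in FAST_PERIODS + SLOW_PERIODS}
    )

# Reading the ribbon: the fast group converging toward the slow group
# suggests a weakening trend; divergence suggests a strengthening one.
```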



[modified Hull multi-MA chart for BRN, with Heikin-Ashi overlaid]
 
  • Like
  • Love
  • Fire
Reactions: 25 users

SERA2g

Founding Member
Not sure what you are implying here.

The post reads "Stellantis has chosen our third-generation LIDAR.................."

Now in this instance Valeo is stating that Stellantis has chosen their tech. They have not revealed too many technical details, so they possibly have an NDA in place too.

Brainchip openly mentioned Valeo via an ASX announcement a couple of years ago; not sure how much more open the company can be when declaring a client relationship.

NDA is important.

A silly example: those who eat KFC, do you know what is in it? Apparently there are 18 ingredients, which not many people know, because that's their secret recipe. Don't think those who handle it are going to share it with every person just because they want to know.
It is their moat, which they won't give away.
A damn good moat, those 18 ingredients.
 
  • Haha
  • Like
Reactions: 7 users