BRN Discussion Ongoing

krugerrands

Regular
Between the 3rd and the 4th of October, the shorters covered more than 3 million shares, making very decent coin.

Very disappointed to see some soft serves here, gifting cash to these A-holes..

Short position has since increased.

BrainChip can, and will, weather and outperform the markets!

Shorters are not a charitable organisation!

But if fear fills your hearts about the future when you look at the present world macro stage..
Go ahead and gift them your shares 🙄...

Take those numbers with a grain of salt.

Date          Reported Short
05/10/2022    115,318,892
04/10/2022    113,784,258
03/10/2022    117,183,327
30/09/2022    117,471,870

Gross shorts for the 4th: 1,265,143.

Do you believe that on the 4th they managed to cover 4,664,212 shares (the 3,399,069 net reduction from the 3rd plus the 1,265,143 gross new shorts opened on the 4th) when the trading volume for the day was 8,002,430?

That would mean that trading volume on the 4th was
~58% short covering
~16% new shorts
~26% normal trade

or

~74% shorting trade ( new/covering )
~26% normal trade
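For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch of the same sums, using only the reported figures quoted above:

```python
# Back-of-the-envelope check of the 4 Oct figures quoted above.
net_short_3rd = 117_183_327   # reported net short position, 03/10/2022
net_short_4th = 113_784_258   # reported net short position, 04/10/2022
gross_new_shorts = 1_265_143  # gross shorts opened on the 4th
volume = 8_002_430            # total trading volume on the 4th

# Dropping 3,399,069 net while opening 1,265,143 new shorts means
# both amounts had to be covered on the day.
covered = (net_short_3rd - net_short_4th) + gross_new_shorts
normal = volume - covered - gross_new_shorts

print(f"covered:    {covered:>9,} ({covered / volume:.0%} of volume)")
print(f"new shorts: {gross_new_shorts:>9,} ({gross_new_shorts / volume:.0%} of volume)")
print(f"normal:     {normal:>9,} ({normal / volume:.0%} of volume)")
```

Run it and you get the ~58% / ~16% / ~26% split above.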

.....

A bird's-eye view of the activity.
View attachment 18571

View attachment 18572


The price has been roughly around the $1 mark since 03/2022, but the reported net short position keeps increasing, from 35M (about 2%) then to where we are now.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

jk6199

Regular
The great thing about this TSE site being set up (thank you again!) is for days like this.

Price down, but still positive and informative topics brought up.

I just have a look at all the minuscule trade transactions and smile. When you see what is behind those attempts to drop the share price a couple of cents without much success, I know I'm not the most stressed person involved in BRN.

The 4C is due this month, and a patent announcement or other news could drop anytime, hopefully soon!
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Shadow59

Regular
The great thing about this TSE site being set up (thank you again!) is for days like this.

Price down, but still positive and informative topics brought up.

I just have a look at all the minuscule trade transactions and smile. When you see what is behind those attempts to drop the share price a couple of cents without much success, I know I'm not the most stressed person involved in BRN.

The 4C is due this month, and a patent announcement or other news could drop anytime, hopefully soon!
Massive trade in collector shares though.
View attachment 18573
 
  • Haha
  • Like
Reactions: 13 users

Diogenese

Top 20
@Diogenese or anyone else more techy.....opinion or education pls :)

The BRN patent below was one filed in 2019, and whilst we all know of SNN, I see references to SCNN, a spiking convolutional neural network, which we've also heard of.

I noticed the acronym used within some NetApp documentation in relation to AI, NVDA examples etc. (some snips & links below) and mused whether their use of the acronym is similar. They call it a Spatial Convolutional Neural Network; it relates to autonomous driving and other uses, but I'll stick to AD for this purpose. They reference it across to GPUs as well, so is it something that is purely GPU-driven via NetApp Files, embedded, or something else?

My question is:

Within our patent I also see references to the spatial nature of the spikes, and I presume NetApp's use of the term is purely about the "spatial" effect within a CNN. But given we know we can convert a CNN to an SNN?

Are these two distinctly separate terms/uses, or could there be an overlap? We know the industry uses various acronyms to suit its own discussions and products, even when they are essentially one and the same, or similar.

Are we able to be integrated cloud-side?



But conventional SNNs can suffer from several technological problems. First, conventional SNNs are unable to switch between convolution and fully connected operation. For example, a conventional SNN may be configured at design time to use a fully-connected feedforward architecture to learn features and classify data. Embodiments herein (e.g., the neuromorphic integrated circuit) solve this technological problem by combining the features of a CNN and a SNN into a spiking convolutional neural network (SCNN) that can be configured to switch between a convolution operation or a fully-connected neural network function. The SCNN may also reduce the number of synapse weights for each neuron. This can also allow the SCNN to be deeper (e.g., have more layers) than a conventional SNN with fewer synapse weights for each neuron. Embodiments herein further improve the convolution operation by using a winner-take-all (WTA) approach for each neuron acting as a filter at particular position of the input space. This can improve the selectivity and invariance of the network. In other words, this can improve the accuracy of an inference operation.
  • In some embodiments, an input to a SCNN is derived from an audio stream. An Analog to Digital (A/D) converter can convert the audio stream to digital data. The A/D converter can output the digital data in the form of Pulse Code Modulation (PCM) data. A data to spike converter can convert the digital data to a series of spatially and temporally distributed spikes representing the spectrum of the audio stream.
  • In some embodiments, an input to a SCNN is derived from a video stream. The A/D converter can convert the video stream to digital data. For example, the A/D converter can convert the video stream to pixel information in which the intensity of each pixel is expressed as a digital value. A digital camera can provide such pixel information. For example, the digital camera can provide pixel information in the form of three 8-bit values for red, green and blue pixels. The pixel information can be captured and stored in memory. The data to spike converter can convert the pixel information to spatially and temporally distributed spikes by means of sensory neurons that simulate the actions of the human visual tract.
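To make the patent's "data to spike converter" idea concrete, here is a toy sketch, and only a sketch: the latency code below (brighter pixels fire earlier) is my assumption for illustration, not BrainChip's actual converter.

```python
import numpy as np

def pixels_to_spikes(image, n_steps=8):
    """Toy 'data to spike converter': map each 8-bit pixel to one spike,
    distributed spatially (pixel position) and temporally (spike time).
    Latency code assumed for illustration: brighter pixels fire earlier."""
    img = np.asarray(image, dtype=np.float32) / 255.0     # normalise to [0, 1]
    spike_times = np.round((1.0 - img) * (n_steps - 1))   # bright -> early step
    # One boolean frame per time step: True where a pixel spikes at that step.
    return np.stack([spike_times == t for t in range(n_steps)])

frame = np.random.randint(0, 256, size=(4, 4))            # fake 4x4 camera frame
spikes = pixels_to_spikes(frame)
print(spikes.shape)   # (8, 4, 4): time steps x height x width
print(spikes.sum())   # 16 spikes in total, one per pixel
```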


The elements used in this solution are:
  • Azure Kubernetes Service (AKS)
  • Azure Compute SKUs with NVIDIA GPUs
  • Azure NetApp Files
  • RUN: AI
  • NetApp Trident


In this architecture, the focus is on the most computationally intensive part of the AI or machine learning (ML) distributed training process of lane detection. Lane detection is one of the most important tasks in autonomous driving, which helps to guide vehicles by localization of the lane markings. Static components like lane markings guide the vehicle to drive on the highway interactively and safely.

Convolutional Neural Network (CNN)-based approaches have pushed scene understanding and segmentation to a new level. Although it doesn’t perform well for objects with long structures and regions that could be occluded (for example, poles, shade on the lane, and so on). Spatial Convolutional Neural Network (SCNN) generalizes the CNN to a rich spatial level. It allows information propagation between neurons in the same layer, which makes it best suited for structured objects such as lanes, poles, or truck with occlusions. This compatibility is because the spatial information can be reinforced, and it preserves smoothness and continuity.

Thousands of scene images need to be injected into the system to allow the model to learn and distinguish the various components in the dataset. These images include weather, daytime or nighttime, multilane highway roads, and other traffic conditions.

For training, there is a need for good quality and quantity of data. Single GPU or multiple GPUs can take days to weeks to complete the training. Data-distributed training can speed up the process by using multiple and multinode GPUs. Horovod is one such framework that grants distributed training but reading data across clusters of GPUs could act as a hindrance. Azure NetApp Files provides ultrafast, high throughput and sustained low latency to provide scale-out/scale-up capabilities so that GPUs are leveraged to the best of their computational capacity. Our experiments verified that all the GPUs across the cluster are used more than 96% on average for training the lane detection using SCNN.

View attachment 18567


NetApp, Inc. is an American hybrid cloud data services and data management company headquartered in San Jose, California. It ranked in the Fortune 500 from 2012 to 2021. Founded in 1992 with an IPO in 1995, NetApp offers cloud data services for management of applications and data both online and physically.




AI in the automotive industry: Innovation turbocharged

AI is driving the future—and present—of the automotive industry, leaving the past in the dust. With the right AI infrastructure, you can stay ahead of the competition and meet ever-evolving customer demands.
Chalk and cheese. As you say there is no acronym police, so when presented with acronyms in a new technology, it's important to know the context and what the author intends.

This is a software solution running on GPUs. It is for "distributed training", ie, those little blokes on the left of the diagram are collaboratively doing the training (image labelling).

The author explained that normal CNN is not so hot for long objects or occlusions.

Their SCNN is related to the interconnexion of neurons in the same layer (Normally, neurons in one layer do not talk to each other, only to neurons in the next layer).

Spatial Convolutional Neural Network (SCNN) generalizes the CNN to a rich spatial level. It allows information propagation between neurons in the same layer, which makes it best suited for structured objects such as lanes, poles, or truck with occlusions. This compatibility is because the spatial information can be reinforced, and it preserves smoothness and continuity.

I don't see why this could not be applied in Akida, but a definitive answer is above my pay grade.

Remember, CNN involves sliding a little 'window' across the image sensor pixels to take small samples of the image for analysis. Maybe the intra-layer connexions strengthen the ability of spatial CNN to associate pixel information from more distant parts of the sensor to overcome the problems with long/occluded objects?
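To picture that, here is a minimal sketch of the intra-layer propagation idea, loosely following the slice-by-slice scheme from the Spatial CNN lane-detection work; the scalar weight stands in for a learned convolution kernel and is purely my assumption for illustration:

```python
import numpy as np

def downward_pass(feature_map, w=0.5):
    """One 'spatial' pass: each row receives a ReLU-gated message from the
    row above, so activations can travel the full height of the map within
    a single layer. The scalar weight w stands in for a learned kernel."""
    fm = feature_map.copy()
    for row in range(1, fm.shape[0]):
        fm[row] += np.maximum(fm[row - 1] * w, 0.0)
    return fm

fm = np.zeros((6, 8))
fm[0, 3] = 1.0                   # a single activation at the top of a column...
print(downward_pass(fm)[:, 3])   # ...fades but propagates all the way down
```

That is the kind of long-range, same-layer association that ordinary layer-to-layer convolutions struggle to give you.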

Blue sky musings: If the window is 5*5 pixels, and the "stride" (the distance the window moves between samples) is 2 pixels, then, after 3 strides there are no common pixels. So maybe the intra-layer connexions provide some association with the more distant pixels?
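A quick count of pixel columns confirms that musing:

```python
# 5*5 window, stride 2: which columns does each window cover, and when
# does it stop overlapping the first window?
window, stride = 5, 2
first = set(range(window))
for step in range(4):
    start = step * stride
    cols = set(range(start, start + window))
    print(f"stride {step}: cols {start}-{start + window - 1}, "
          f"overlap with first: {sorted(first & cols)}")
```

The overlap shrinks 5, 3, 1, 0: after the third stride there are no common pixels, so any association with those pixels has to come from somewhere else, e.g. the intra-layer connexions.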

I wonder if the human classifiers define the bounding boxes?
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users
Massive trade in collector shares though. View attachment 18573
And they are so quick that by the time I check the serial number to see if it's one I am missing, it is already sold. So annoying.:mad:🤬😤🤡🤡🤡
 
  • Haha
  • Like
  • Sad
Reactions: 19 users
Chalk and cheese. As you say there is no acronym police, so when presented with acronyms in a new technology, it's important to know the context and what the author intends.

This is a software solution running on GPUs. It is for "distributed training", ie, those little blokes on the left of the diagram are collaboratively doing the training (image labelling).

The author explained that normal CNN is not so hot for long objects or occlusions.

Their SCNN is related to the interconnexion of neurons in the same layer (Normally, neurons in one layer do not talk to each other, only to neurons in the next layer).

Spatial Convolutional Neural Network (SCNN) generalizes the CNN to a rich spatial level. It allows information propagation between neurons in the same layer, which makes it best suited for structured objects such as lanes, poles, or truck with occlusions. This compatibility is because the spatial information can be reinforced, and it preserves smoothness and continuity.

I don't see why this could not be applied in Akida, but a definitive answer is above my pay grade.
Thanks D

Figured you'd be able to condense it into something easier.

Just thought I'd better post the source info to give context.

It'd be nice then if we were able to cross into those types of applications.

Maybe one day.
 
  • Like
Reactions: 6 users

robsmark

Regular
I'd rather be surprised to the upside than disappointed..

If royalties do get a mention as having started to flow, which I think is a very real possibility, I don't think we will ever see a break-up of incoming revenues..

Good Fortune to all Holders! 👍
Slava Ukraini!
If these deals were already penciled in between customers and BrainChip, then the surrounding economic factors are unlikely to affect them. It is highly unlikely that tech giants (Fortune 500, etc.) would change their strategic plans based on a war in Ukraine or high (as expected following COVID) inflation figures.

Tech is a race, and it always has been. To use Samsung and Apple as an example, Apple has a plan to release an iPhone in two years, as would Samsung with the Galaxy. Each of these companies is testing, signing, and making deals with whichever suppliers it needs for these future offerings. If they terminate a deal that is required to develop these products in an effort to reduce expenditure in the short term, they pay for it exponentially in the long term. Say Apple puts a deal on hold and Samsung decides to push ahead; when the two companies each release their respective products, the better one (Samsung's in this instance) will gain more market share. Besides, for the most part payment to these suppliers (BrainChip in this case) is made as the company releases its products into the wild (e.g., in two years' time), so by this logic these deals have already been signed (perhaps under the umbrella of an existing material customer), and the payment/revenue to BrainChip is due on X, Y, Z. This is especially true in a world where tech innovation has plateaued; I mean, how impressive are new phone releases (as an example) over their predecessors? Certainly not even close to what they were a decade ago.

Obviously this is a dream scenario (Apple and Samsung), but the principle remains consistent with tech companies of all valuations. It's a brutal industry - dog eat dog - survival of the fittest. Evident if we cast our eyes back over the years and consider companies like Nokia, Compaq, Blackberry, even Blockbuster... They failed to innovate, and they failed as companies. These lessons have already been learnt, and big tech knows this. This is why Intel is pumping $billions into SNN research - it failed to innovate over recent years, and now it needs to make up ground.

The other indicator for me is Sean Hehir, our own CEO, a seasoned Silicon Valley executive who obviously understands business and big tech. During an interview with Tom Piotrowski (CommSec) this May, he stated, and I quote, "I look forward to standing in front of you next year, and talking about results". I for one did not take that statement lightly, and I'm pretty sure that Sean wouldn't have said it unless he was absolutely certain that he (they) would be successful in delivering. Free markets have a nose for bullshit, and a failure to deliver would be career suicide, especially for a new CEO, regardless of surrounding economic factors (especially given he made that statement during COVID and an already fractured market).

Peter has said it, Sean has said it, the company has a market-ready product - explosive growth is what I expect.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 64 users
Take those numbers with a grain of salt.

Date          Reported Short
05/10/2022    115,318,892
04/10/2022    113,784,258
03/10/2022    117,183,327
30/09/2022    117,471,870

Gross shorts for the 4th: 1,265,143.

Do you believe that on the 4th they managed to cover 4,664,212 shares (the 3,399,069 net reduction from the 3rd plus the 1,265,143 gross new shorts opened on the 4th) when the trading volume for the day was 8,002,430?

That would mean that trading volume on the 4th was
~58% short covering
~16% new shorts
~26% normal trade

or

~74% shorting trade ( new/covering )
~26% normal trade

.....

A bird's-eye view of the activity.
View attachment 18571

View attachment 18572

The price has been roughly around the $1 mark since 03/2022, but the reported net short position keeps increasing, from 35M (about 2%) then to where we are now.
You cannot be suggesting that something suspicious is taking place on the ASX. Heaven forbid. I just cannot see that being allowed. We are a democracy with the rule of law. 😞☹️🤡🤡🤡
 
  • Haha
  • Like
  • Thinking
Reactions: 19 users
Take those numbers with a grain of salt.

Date          Reported Short
05/10/2022    115,318,892
04/10/2022    113,784,258
03/10/2022    117,183,327
30/09/2022    117,471,870

Gross shorts for the 4th: 1,265,143.

Do you believe that on the 4th they managed to cover 4,664,212 shares (the 3,399,069 net reduction from the 3rd plus the 1,265,143 gross new shorts opened on the 4th) when the trading volume for the day was 8,002,430?

That would mean that trading volume on the 4th was
~58% short covering
~16% new shorts
~26% normal trade

or

~74% shorting trade ( new/covering )
~26% normal trade

.....

A bird's-eye view of the activity.
View attachment 18571

View attachment 18572

The price has been roughly around the $1 mark since 03/2022, but the reported net short position keeps increasing, from 35M (about 2%) then to where we are now.
The problem is, all the "reported" figures need to be taken not only with a grain of salt, but with a generous shaking of the container..

But we need some kind of gauge..

They know we monitor the figures, and if any psychological edge can be attained by skewing them, would they?

They need only "report", "not report" or report fictional figures, as they see fit.
Who's going to stop them?
The Australian Securities and Investments Commission (ASIC)? 🤣🤣🤣

That bunch of gutless, hand-under-the-table pen pushers?

As long as some are doing the right thing and reporting their stupid bets against us, which is evidenced by the overall "strengthening" short position, I'm happy 😉
 
  • Like
  • Love
  • Haha
Reactions: 18 users

TopCat

Regular
Here is a list of cameras in Sony's product line which contain Sony's Event-Based Vision sensor (well, at least I assume they do). I can't download it as you need to register with a valid email address, not a free one… I only have a gmail account. Maybe someone would like to try to download it and see if it contains any juicy info.


Download

A list of cameras with Event-based Vision Sensor (EVS)
Download a list of cameras that incorporate Sony's Event-based Vision Sensor
Register to Download
 
  • Like
  • Fire
Reactions: 13 users

Vojnovic

Regular
Here is a list of cameras in Sony's product line which contain Sony's Event-Based Vision sensor (well, at least I assume they do). I can't download it as you need to register with a valid email address, not a free one… I only have a gmail account. Maybe someone would like to try to download it and see if it contains any juicy info.


Download

A list of cameras with Event-based Vision Sensor (EVS)
Download a list of cameras that incorporate Sony's Event-based Vision Sensor
Register to Download
 

Attachments

  • CameraList_EVS_2207.pdf
    328.2 KB · Views: 141
  • Like
  • Love
Reactions: 14 users

TopCat

Regular
Thanks but I can’t get it to open
 
Thanks but I can’t get it to open
There's only one camera Sony lists 😔..
And it's made by Prophesee..
Who are they 🤔..
 

Attachments

  • _20221011_153855.JPG
    56.8 KB · Views: 81
  • Like
  • Fire
  • Thinking
Reactions: 21 users

TopCat

Regular
There's only one camera Sony lists 😔..
And it's made by Prophesee..
Who are they 🤔..
The download said cameras…plural. Ripped off 😆
 
  • Haha
  • Like
  • Sad
Reactions: 6 users

Vojnovic

Regular
Thanks but I can’t get it to open
Strange. No problem, will screenshot it. This is all that's inside the document:
[screenshot: the camera list inside the PDF]
 
  • Like
  • Love
  • Fire
Reactions: 20 users
The extensive list of one contains:

“Prophesee
USB3.0
https://www.prophesee.ai/event-camera-evk4/
Prophesee's EVK(Evaluation Kit)can be used for evaluation purpouse only”

"Evaluation Kit only", for my money, leaves lots of room for a subtle upgrade to the production model before mass production begins.

My opinion only DYOR
FF


AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 16 users

krugerrands

Regular
You cannot be suggesting that something suspicious is taking place on the ASX. Heaven forbid. I just cannot see that being allowed. We are a democracy with the rule of law. 😞☹️🤡🤡🤡

The thing is, they could very well be complying with the reporting guidelines and we would still be getting these outcomes.
It is just that those provisions will not result in accurate reporting, and there is an allowance for naked short selling.

I mentioned that in this post:
Regulatory Guide 196, covering short selling and reporting compliance (RG 196).

So yes, a bit of a sad clown circus.

These reports are only to placate the majority.
There should be no reason these numbers are not reported accurately and in real time like other market data... as if.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 7 users
The EVK was released back in April.



PARIS, April 13, 2022 – Prophesee today announced the availability of an ultralight, compact HD evaluation kit (EVK4) for developers of computer vision systems who want to start evaluation of the new Sony IMX636ES HD stacked Event-Based Vision sensor, realized in collaboration between Sony and Prophesee. The full-featured EVK provides computer vision engineers with an extensively tested solution for efficient technology onboarding and rapid application prototyping and development. The kit is natively compatible with free award-winning software from Prophesee. It also includes premium-level technical support and knowledge center access to application notes, advanced documentation, community forums and more.
 
  • Like
  • Fire
Reactions: 18 users
Recent article from CFOTech Asia.

Wonder who we're working with; I've highlighted the last couple of paragraphs.



TinyML SaaS to become a billion-dollar market by 2030
Wed 21 Sep 2022 Gaurav Sharma

As TinyML vendors continue to democratize Machine Learning (ML) at a rapid pace, global technology intelligence firm ABI Research forecasts that TinyML Software-as-a-Service (SaaS) revenue will exceed US$220 million in 2022 and become an important component from 2025 onward.

While total revenue will be dominated by chipset sales, as TinyML device shipments continue to grow, the TinyML SaaS and professional service market have the potential to become a billion-dollar market by 2030, ABI Research adds.

ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision-makers worldwide. The above findings are from ABI Research’s TinyML: A Market Update application analysis report, which is a part of the company’s AI and ML research service.

The TinyML market has come a long way since ABI Research first analyzed this market back in 2020.

The TinyML Foundation, which gathers most of the prominent vendors in this space, has substantially expanded in recent years.

Similar expansion has been seen in the applications of TinyML, with forest fire detection, shape detection, and seizure detection among the most spectacular use cases.

Moreover, given how central environmental sensors are to TinyML, the possibilities are extensive.

Nonetheless, ambient sensing and audio processing remain the most common applications in TinyML, with sound architectures holding an almost 50% market share in 2022. Most of these applications employ either a microcontroller (MCU) or an Application-Specific Integrated Circuit (ASIC).

The personal and work devices sector will see the most significant increase in the near future.

“Any sensory data from an environment can probably have an ML model applied to that data. Some of the most common applications include word spotting, object recognition, object counting, and audio or voice detection,” explains David Lobina, Artificial Intelligence and Machine Learning Research Analyst at ABI Research.

With the myriad possibilities, there are also potential pitfalls, but for which ABI Research believes there are well-identified solutions.

“The physical constraints on TinyML devices are genuine. These devices favour small and compact ML models, which call for innovation at the software solutions level for specific use cases. And software providers will be the most active in the TinyML market,” says Lobina. Software providers in this space include Edge Impulse, SensiML, Neuton, Nota, and Deeplite.

In addition, considering the vast number of use cases, vendors must concentrate on those applications for which TinyML has a clear value proposition worked out before production.

“The role of software is crucial, and vendors must develop software tools to automate TinyML itself. Finally, new technology will be required to bring about ever more sophisticated TinyML models. Neuromorphic computing and chips, along with the corresponding technique of Spiking Neural Networks, would bode well for the future,” adds Lobina.
 
  • Like
  • Fire
  • Love
Reactions: 24 users
The extensive list of one contains:

“Prophesee
USB3.0
https://www.prophesee.ai/event-camera-evk4/
Prophesee's EVK(Evaluation Kit)can be used for evaluation purpouse only”

Evaluation Kit only for my money leaves lots of room for a subtle upgrade to the production model before mass production begins.

My opinion only DYOR
FF


AKIDA BALLISTA
From the Prophesee link you provided.


"Evaluation Kits are your perfect gateway to Event-Based Vision.

They feature the most technologically advanced sensors available to date with the only mass-produced mini-pbga package Event-Based Vision sensor (GEN3.1 – VGA) and the breakthrough stacked HD sensor realized in collaboration between Sony and Prophesee (IMX636ES)"


The party's already begun 😉
 
  • Like
  • Fire
  • Thinking
Reactions: 25 users