BRN Discussion Ongoing

Deleted member 118

Guest
Does anyone have any broker details of late? It would be interesting to see who has been selling recently, which seems to have subsided today.
 
  • Like
Reactions: 2 users

Rskiff

Regular
Gidday over in NZ... you raise a very interesting point. A number of shareholders hold Simon in very high regard, so this little bit of news was, in all honesty, for those concerned... Simon was on the SAB but things got a little complicated internally, so he moved on. Now, I am aware that he would still like to be involved or engaged with the company, but the truth is that Peter and Anil are running this ship from a technical standpoint. The reason I posted this information is that certain someones do read these posts from time to time, and they may wish to re-engage, or at least inquire into what Simon is working on. That is purely my own opinion.

We ALL KNOW that we wish to see most, if not ALL, "early access partners" sign up to our IP... they must be sold on the technology by now. I personally don't understand why, from the outside, they appear unwilling to fully commit just yet.

Are they waiting on the next version of Akida, or are they haggling over the contractual figures?

P.S. I did suggest that an update on AKD 2000 would be better received coming directly from the founder himself, rather than the CEO. Again, that's my personal preference, and I suspect a lot of shareholders would agree, in my honest opinion; no disrespect to Sean though.

Have a great afternoon........Cheers Tech
Thanks for the reply, Tech. How about getting on the "Bat phone" or "Bat email" to inform the individual you think may read your post? Then the info is guaranteed to be passed on. Thanks.
 
  • Like
  • Love
Reactions: 9 users

Slade

Top 20
I’ve joined the upcoming Zoom webinar on the 24th, at 11am US/Canadian Pacific time (a Wednesday). I am hopeful that they may say something during the webinar, or I will be asking why they bother to have it.
I will be tuning in as well. Really looking forward to it.





9 days to go.
 
  • Like
  • Fire
  • Love
Reactions: 11 users
Recent chip conference in China.

The article is predominantly about Intel, but it was the paragraph below that caught my eye.

Err...hello? :rolleyes:



In addition to heterogeneous computing and heterogeneous integration, the exploration of neuromorphic chips is also an important way to break the bottleneck of computing power.

Song Jiqiang pointed out that most current artificial intelligence still relies on GPUs, CPUs or accelerators, which at bottom are still built on multiplier-accumulators. The neuromorphic chip instead constructs its underlying computing unit in a way that simulates human neurons.

If such a chip is built, and artificial intelligence algorithms are programmed as spiking neural networks, there is the opportunity for a thousand-fold improvement in energy efficiency: under the same energy consumption, it could reach a thousand times the computing power of today's multiplier-accumulator chips.
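For anyone wanting a concrete feel for what "spiking" computation means here, below is a minimal leaky integrate-and-fire (LIF) neuron in Python. It is a generic textbook sketch, not Akida's (or anyone else's) implementation, and the parameter values are arbitrary; the point it illustrates is that a spiking unit only emits output, and only triggers downstream work, when its membrane potential crosses a threshold, rather than performing a multiply-accumulate on every input every cycle.

import numpy as np

# Generic leaky integrate-and-fire (LIF) neuron, shown purely as an
# illustration of event-driven spiking computation; parameters are
# arbitrary and this is not how any particular chip implements it.
def lif_run(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Leak toward rest, integrate the input current.
        v += dt * (-v / tau + i_t)
        if v >= v_thresh:      # Fire only when the threshold is crossed...
            spikes.append(1)
            v = v_reset        # ...then reset; otherwise stay silent.
        else:
            spikes.append(0)
    return np.array(spikes)

# Constant weak drive: the neuron stays mostly silent and only
# occasionally emits a spike, which is where the energy saving comes from.
rate = lif_run(np.full(200, 0.08)).mean()
print(f"spike rate: {rate:.2f} spikes per step")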
 
  • Like
  • Fire
  • Love
Reactions: 54 users
Looks like I’m getting another cheap top-up at the end of the month when another work bonus hits!
 
  • Like
  • Fire
Reactions: 18 users

Boab

I wish I could paint like Vincent
I’ve joined the upcoming Zoom webinar on the 24th, at 11am US/Canadian Pacific time (a Wednesday). I am hopeful that they may say something during the webinar, or I will be asking why they bother to have it.
A little more info for those that may have missed it. Should be a cracker.

 
  • Like
  • Fire
  • Love
Reactions: 14 users

Diogenese

Top 20
  • Like
  • Fire
  • Love
Reactions: 50 users

buena suerte :-)

BOB Bank of Brainchip
  • Like
Reactions: 5 users

robsmark

Regular
My earlier post regarding a conversation I had with Tony a couple of weeks ago was deleted after Tony refuted some of the comments which I made.

I can assure all that the post was made with no bad intentions, and that the conversation was relayed to the best of my ability. As I mentioned in the post - no new information was discussed/released by Tony.

Tony - my apologies if I misquoted or misrepresented you in any way.
 
  • Like
  • Love
  • Wow
Reactions: 39 users

alwaysgreen

Top 20
Well today sucks...

Hopefully onwards and upwards for the remainder of the week.
 
  • Like
Reactions: 4 users

SERA2g

Founding Member
My earlier post regarding a conversation I had with Tony a couple of weeks ago was deleted after Tony refuted some of the comments which I made.

I can assure all that the post was made with no bad intentions, and that the conversation was relayed to the best of my ability. As I mentioned in the post - no new information was discussed/released by Tony.

Tony - my apologies if I misquoted or misrepresented you in any way.
Lesson learned here for everyone. If you're going to quote a 'personal' conversation you've had with anyone from the company, it's probably best to check in with the person in question before sharing it here.

Appreciate your efforts though @robsmark, I'm positive your intentions were good!

Cheers mate
 
  • Like
  • Fire
  • Love
Reactions: 56 users

Vojnovic

Regular
Germans continue their love affair with our company:
 
  • Like
  • Love
  • Fire
Reactions: 26 users

equanimous

Norse clairvoyant shapeshifter goddess
Convolutional Spiking Neural Networks for Detecting Anticipatory Brain Potentials Using Electroencephalogram

 
  • Like
Reactions: 6 users

TheFunkMachine

seeds have the potential to become trees.
Not just any neuromorphic sensor but an Australian neuromorphic sensor.

Not sure how many other Australian neuromorphic sensor companies there are.
But Akida is not a sensor; it is sensor-agnostic, a neuromorphic processor that processes sensor input/output.

I'm not saying it's not Akida, I'm just saying Akida is not a neuromorphic sensor.
 
  • Like
Reactions: 13 users

Sirod69

bavarian girl ;-)
This is an interesting paper. It does not name BRN or AKIDA, but when you read it you will see why partnering with the ARM Cortex-M4 is so exciting:

A Brief Review of Deep Neural Network Implementations for ARM Cortex-M Processor

....System-On-Chip (SoC) devices are an attractive solution that, in addition to high processing capabilities, includes multiple peripheral devices that can be very helpful for the sophisticated requirements of deep-learning applications. Examples of manufacturers that develop AI integrated circuits for edge computing are Samsung, Texas Instruments, Qualcomm, and STM. Some of their recent products are briefly presented below.......

5. Conclusions
Deep learning and deep neural networks are emerging as promising solutions for solving complex problems. Solving complex problems requires high computational capabilities and memory resources, so such systems are traditionally designed to run on large computer systems built around specialized hardware. However, recent research shows that simple applications can benefit from the deep learning paradigm and its edge computing implementation as well. Edge computing is the solution to many real-world problems that need to be solved soon. For instance, the automotive industry is using and developing prototypes with state-of-the-art hardware and software solutions for autonomous driving. Once these prototypes prove their ability to solve problems, the systems will have to run on real-world cars. At that stage, cost is necessary to be competitive in the market, and with high-performance computing solutions the cost is high. The edge computing paradigm must be prepared with efficient and low-cost solutions while meeting specific requirements such as functional safety.

In this work, we provide a summary of what edge computing means in the context of low-cost/low-power applications. Here, the ARM Cortex-M processor represents one of the best possible candidates. More specifically, we summarize deep neural network implementations using ARM Cortex-M core-based microcontrollers. From the software perspective, the STM32Cube.AI support package, made available by STMicroelectronics for its 32-bit microcontroller series, represents one of the best freely available tools.

Implementing deep neural networks on embedded devices, such as microcontrollers, is a difficult task. This is mainly due to computation and memory footprint constraints. For this reason, it is observed that developers are forced to customize existing architectures or even develop from scratch innovative models that better suit embedded processors. Optimization techniques such as quantization, pruning, and distillation are constantly evolving to achieve higher performance, and they are enabling developers to introduce state-of-the-art models of increasing complexity to the embedded domain. Ultimately, using optimized hardware combined with optimized deep neural network architectures leads to maximally energy-efficient systems. Future work proposes to extend the study to a wider family of ARM cores, including, for example, deep learning applications running on Cortex-A type processors or even the specialized Arm Ethos-N series processors for machine learning.

https://www.mdpi.com/2079-9292/11/16/2545/pdf?version=1660467458
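As a rough illustration of the quantization step the paper keeps coming back to, here is a minimal post-training integer-quantization sketch using TensorFlow Lite, which is one common workflow for squeezing a model onto Cortex-M class microcontrollers. The paper itself focuses on STM32Cube.AI rather than this exact flow, and the tiny model and calibration data below are stand-ins, so treat this as a sketch of the technique rather than the paper's method.

# Minimal sketch: post-training int8 quantization with TensorFlow Lite.
# The model and calibration data are placeholders; any trained Keras
# model and real sample data would be used in practice.
import numpy as np
import tensorflow as tf

# A small stand-in model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_data():
    # Calibration samples let the converter pick int8 scales/zero-points.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)  # int8 weights, roughly 4x smaller than float32

The resulting .tflite file is what would then be fed to a microcontroller runtime or vendor tool; the size and accuracy trade-off is exactly the kind of optimization the conclusion above describes.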
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Onboard21

Regular

projection

Member
^^ It was a 4ds reference
 
  • Like
  • Fire
Reactions: 5 users

langeo

Regular
  • Like
  • Fire
Reactions: 3 users

CeeMite

Member
  • Like
  • Fire
Reactions: 5 users