BRN Discussion Ongoing

I imagine you would have predated any gaming console; I'm an 80s child… I'm sure you would beat me at marbles 🤔😂
Now marbles, there was a real game. And when cracker night came around, in Dad's shed with a vice, a piece of gal water pipe, a double bunger, a chipped marble, a box of matches and, whammo, instant lethal weapon.

Those were the days.

Never heard of anyone blowing their finger off with a Game Boy. 🤣😂FF
 
Reactions: 22 users
I imagine you would have predated any gaming console; I'm an 80s child… I'm sure you would beat me at marbles 🤔😂
My first and last school fight was over a game of marbles. I decided that fighting was not my strong point. That aside, I just ventured back in and increased my holding by 6,500 shares.
 
Reactions: 17 users

Sam

Nothing changes if nothing changes
Now marbles, there was a real game. And when cracker night came around, in Dad's shed with a vice, a piece of gal water pipe, a double bunger, a chipped marble, a box of matches and, whammo, instant lethal weapon.

Those were the days.

Never heard of anyone blowing their finger off with a Game Boy. 🤣😂FF
I used to blow on my fingers if they were getting sweaty, though 😂😂 Couldn't let slippery fingers get in the way of a good game.
 
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Dear Dodgy-knees, do you feel like checking out the 5 prototype applications mentioned below to see if they're treading on our turf?

B x

(Extract 1)

From cloud to local

“Currently, in most approaches used to classify images or read identity documents from a smartphone, the device is only used to capture the image. The data is sent to cloud servers for analysis, a process that is often computationally intensive. The result is then sent back to the user’s telephone. The technique works well so long as network coverage is available, but not in areas with no coverage or where access is limited.

To solve this problem, the MobileAI research project aimed to incorporate artificial intelligence technology into the smartphone while maintaining its robustness and ability to operate in real time,” explains Montaser Awal, head of the artificial intelligence research team at ARIADNEXT, a company specializing in the remote verification of ID documents.

(Extract 2)

Mission accomplished. The project has advanced the state of the art and resulted in ten scientific publications and five prototype applications.


 
Last edited:
Reactions: 11 users

TechGirl

Founding Member
On behalf of @TechGirl I present to you, on the Awesome Brainchip Videos Youtube Channel, Sean speaking at the Q1 Virtual Summit:



(As always, remember to hit the like button, subscribe, and ring some bell thingy, LOL)


Thanks so much JK :)
 
Reactions: 9 users

Labsy

Regular
Yeah....I'm not buying the smartphone answer entirely. Louis DiNardo was promoting cell phones and laptops as a target market until the very beginning of 2020 when they suddenly dropped off the radar. If the power savings with Akida are beneficial for electric vehicles for voice recognition and always-on applications, then they're even more valuable for a smart phone or laptop. My hope is that they only dropped off the radar because these smartphone companies have it in their NDA for Brainchip to apply the cone of silence.
Well, most likely a middle-man IP customer in the semiconductor industry, like Qualcomm? Intel? Nvidia? This would explain his answer, as then technically Brainchip isn't directly working with a phone company. Just a thought.
 
Reactions: 18 users

Diogenese

Top 20
Morning Chippers,

Just screen recorded Sean speaking at the Q1 Virtual Summit: BrainChip Holdings

Investor Presentation video link below


Thanks TechGirl,

I was going to say that I would have liked to see a slide comparing the power and speed of the current dominant NN (CNN) with Akida's power and speed on similar tasks, at about the 13 minute mark where he does talk about CNN and power, but I guess Sean knows his audience better than I do.
 
Reactions: 10 users

hotty4040

Regular
Hi FF,

Most welcome. Haha, you slept in; I made sure I got up. I set 5 alarms just in case I hit snooze too many times.

Angry Daffy Duck GIF by Looney Tunes


Yes I thought he spoke more freely when talking about Mercedes which was nice to listen to, I believe every time Sean speaks he gets better & better at getting his message across. Sean has a great voice & he speaks at the perfect pace.

Yeah, I wasn't too worried when I heard the smartphone comment, as if in the very near future phone companies aren't going to be knocking our doors down, foaming at the mouth to get their hands on the best, cheapest, most awesome planet-saving, power-saving chips

This is Samsung, Nvidia, Apple, Google, Intel, LG, HP, Tesla trying to get their hands on our tech once the penny drops

guinea pigs running GIF
Couldn't agree more; Sean seems to be growing in confidence each time we meet him.

Slowly, slowly, catcha the monkey IMHO, his message is clear and concise.

Akida Ballista
 
Reactions: 8 users

Diogenese

Top 20
Hello JoMo.
I can't seem to find my way back into the Melbourne ketchup thread, but am confirming I will be there at 6.30.
An 8.30 finish is fine by me, as I need all the beauty sleep I can get.
Looking forward to it. :)
Sounds like it will be the sauce of some great friendships.
 
Reactions: 16 users

TechGirl

Founding Member
Thanks TechGirl,

I was going to say that I would have liked to see a slide comparing the power and speed of the current dominant NN (CNN) with Akida's power and speed on similar tasks, at about the 13 minute mark where he does talk about CNN and power, but I guess Sean knows his audience better than I do.

Most welcome Dio :)

Maybe one of us should ask Sean to include the power and speed of the current dominant NN (CNN) compared to Akida's power and speed in performing similar tasks in the next presentation.
 
Reactions: 7 users

JK200SX

Regular
Sounds like it will be the sauce of some great friendships.
Ketchup, Sauce & Mustard :) I think there's a hidden meaning in the posts and I'm going to have to order this on the night.....

1646875688754.png
 
Reactions: 5 users

JK200SX

Regular
Reactions: 11 users

HUSS

Regular
There is a FINAL point I want to make about this presentation. The CEO Sean Hehir was asked a question and he answered that question with one three letter WORD and that WORD was:

“YES.”


He did not say anything else just this one SINGLE WORD.

What was the question?

Why did this answer make me smile?

I am sure someone amongst the 1,000 Eyes heard the question and his answer and is smiling too but no one has mentioned it.

My opinion only DYOR
FF

AKIDA BALLISTA
Will there be revenue in 2022? He said YES. I smiled too and said to myself that this is my biggest takeaway from this presentation today. lol
 
Reactions: 14 users

JoMo68

Regular
Hello JoMo.
I can't seem to find my way back into the Melbourne ketchup thread, but am confirming I will be there at 6.30.
An 8.30 finish is fine by me, as I need all the beauty sleep I can get.
Looking forward to it. :)
Hi Hopalong, it's probably a couple of pages into the now 6-page list of BR threads. I called the venue, and although there are two-hour booking slots, we can stay longer if no one is booked after us. The woman said that we should be OK because it's not too busy on a Tuesday night. Otherwise we can just move to a different part of the venue at 8.30.
 
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Drawing on some further background information that may interest you and others: last year, not long after a certain small engineering firm presented on a Rob Telson Brainchip podcast, it was disclosed that, in furtherance of the ARM business model, he was setting up Brainchip's own list of engineering firms.

If you go to ARM’s website they have a list of these small engineering firms which the last time I looked numbered about 34.

For those who do not know what these guys do, it is twofold.

Customers come to them with a problem, say, my underpants keep riding up, but there is nothing on the market presently that will overcome the problem. These firms then go about trying to come up with a solution, sourcing products from their partner firms and modifying them to solve the customer's problem.

The second way they make a buck is by going out into the world trying to identify potential customers suffering from a similar or different problem with their underpants, then coming up with a solution with their partners and selling it to these potential customers.

I think it was the former CEO Mr. Dinardo who described these relationships as great because these guys “have to hunt and kill their own”. In other words, it does not cost Brainchip anything to have these relationships.

The thing is, though, that getting your name on their lists is not easy, for this very same reason. They need to believe that what you have will add value and be something they can sell; otherwise they do not eat.

Eastronics and Saleslink are a big deal for this reason and where Eastronics is concerned it was suggested they may have been the link between Brainchip and NaNose.

Following the ARM model there will be more announcements in this area and probably quite soon. I know for a fact that Rob Telson was run off his feet covering both sales and marketing and was finding it difficult to get the time to close out all of these sales relationships with his target small engineering firms.

My opinion only DYOR
FF

AKIDA BALLISTA


Hi FF,

Arm's engineers had better find other projects to work on aside from inventing wedgie-proof undies, because it looks like Amazon have already cornered this market. What's more, I understand that Kim Kardashian has just released her new line of crotchless underwear, which also means that she has this area covered (or not covered, so to speak).

Screen Shot 2022-03-10 at 12.56.34 pm.png
 
Reactions: 9 users

Diogenese

Top 20
Dear Dodgy-knees, do you feel like checking out the 5 prototype applications mentioned below to see if they're treading on our turf?

B x

(Extract 1)

From cloud to local

“Currently, in most approaches used to classify images or read identity documents from a smartphone, the device is only used to capture the image. The data is sent to cloud servers for analysis, a process that is often computationally intensive. The result is then sent back to the user’s telephone. The technique works well so long as network coverage is available, but not in areas with no coverage or where access is limited.

To solve this problem, the MobileAI research project aimed to incorporate artificial intelligence technology into the smartphone while maintaining its robustness and ability to operate in real time,” explains Montaser Awal, head of the artificial intelligence research team at ARIADNEXT, a company specializing in the remote verification of ID documents.

(Extract 2)

Mission accomplished. The project has advanced the state of the art and resulted in ten scientific publications and five prototype applications.


Hi Bravo77,

I can't find any relevant patents, but there is the 8-month non-publication lacuna.

The article mentions CNN and modifying the CNN algorithms to make them executable on mobile devices, apparently by improving the descriptors of the images.

https://innovationorigins.com/en/se...to-improve-local-image-recognition-on-phones/

At the heart of the matter is a family of particularly powerful deep-learning algorithms, called convolutional neural networks (CNN). “These are excellent candidates for mobile image recognition”, explains Montaser Awal. “But for our purposes, we had to modify their architecture and optimize them to make them executable on mobile devices while maintaining a similar performance level to cloud-based server systems.
...
By switching image recognition to mobile and improving the descriptors of these images, the company made a game-changing move and can now handle databases of 100,000 images. These capacities will allow QUAI DES APPS to meet the needs of the retail industry, whether for images of products on the shelf or in a catalogue.

None of this suggests a CNN2SNN conversion.
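For readers unfamiliar with the distinction being drawn here, a minimal sketch of what a generic CNN-to-SNN conversion does (plain NumPy; a single toy ReLU layer stands in for a CNN layer; this is the textbook rate-coding idea, not BrainChip's actual CNN2SNN implementation, and every name and number in it is illustrative): a trained layer's ReLU activations are reproduced as the spike rates of integrate-and-fire neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ANN layer standing in for one CNN layer: fully connected + ReLU.
W = rng.normal(scale=0.5, size=(4, 8))
x = rng.random(4)
ann_out = np.maximum(0.0, x @ W)        # ReLU activations

# Rate-coded SNN equivalent: integrate-and-fire neurons with soft reset.
# Activations are normalised so no neuron needs more than one spike per
# timestep, the usual "weight/activation normalisation" conversion step.
T = 1000                                # simulation length in timesteps
scale = max(ann_out.max(), 1e-9)        # guard against all-negative drive
current = (x @ W) / scale               # constant per-step input current
v = np.zeros(8)                         # membrane potentials
spike_count = np.zeros(8)
for _ in range(T):
    v += current                        # integrate
    fired = v >= 1.0                    # fire at threshold 1
    spike_count += fired
    v[fired] -= 1.0                     # soft reset keeps residual charge

snn_out = spike_count / T * scale       # spike rate, rescaled
# Negative drive never spikes, so the rate code reproduces the ReLU;
# the approximation error shrinks as 1/T.
```

The longer the simulation window T, the closer the spike rates match the original activations, which is exactly the accuracy/latency trade-off that conversion approaches face.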

1646878017270.png
 
Reactions: 12 users

Jefwilto

Regular
Reactions: 9 users
Dear Dodgy-knees, do you feel like checking out the 5 prototype applications mentioned below to see if they're treading on our turf?

B x

(Extract 1)

From cloud to local

“Currently, in most approaches used to classify images or read identity documents from a smartphone, the device is only used to capture the image. The data is sent to cloud servers for analysis, a process that is often computationally intensive. The result is then sent back to the user’s telephone. The technique works well so long as network coverage is available, but not in areas with no coverage or where access is limited.

To solve this problem, the MobileAI research project aimed to incorporate artificial intelligence technology into the smartphone while maintaining its robustness and ability to operate in real time,” explains Montaser Awal, head of the artificial intelligence research team at ARIADNEXT, a company specializing in the remote verification of ID documents.

(Extract 2)

Mission accomplished. The project has advanced the state of the art and resulted in ten scientific publications and five prototype applications.


My quick look found the following, which makes clear that what they have developed is an algorithm that allows them to use a convolutional neural network. They need the CNN to be pretrained with a data set of up to 100,000 images for on-phone processing without going back to the cloud. In every document I read, they are very careful never to touch on power consumption, but it is a scientific fact that a CNN uses far more power than an SNN. So based on just these two facts alone, their product is vastly inferior to AKIDA technology.

The example they give in the following extract, of a customer taking a photo of the product on the shelf and receiving additional information, sounds interesting. However, if you are buying the same tomato sauce you have bought for 50 years, you are unlikely to need extra information. (I am keeping with the sauce theme.) If, however, there is a brand-new display with a revolutionary tomato sauce made from special tomatoes from Iceland, you may well want extra information, but how did the store owner get this brand-new promotional product into the 100,000 images stored on the person's phone? Methinks there is a flaw here. If they had AKIDA, it would have been very simple, as the whole network does not need to be retrained, as we know. Would a store owner retrain the whole network every time someone comes to him with a new product? Unlikely, in my opinion, as he would incur an expense for a product he may in the end not put on the shelf.

Then there is the question of power consumption: what if someone reads about many products because of the novelty factor, then flattens their phone battery and cannot ring for an Uber/taxi to take them and their purchases home?

At the heart of the matter is a family of particularly powerful deep-learning algorithms, called convolutional neural networks (CNN). “These are excellent candidates for mobile image recognition”, explains Montaser Awal. “But for our purposes, we had to modify their architecture and optimise them to make them executable on mobile devices while maintaining a similar performance level to cloud-based server systems”.
Mission accomplished. The project has advanced the state of the art and resulted in ten scientific publications and five prototype applications. The new algorithms for image classification and text recognition from a photograph of an ID document were immediately integrated into IDcheck.io, ARIADNEXT's flagship product for ID document authentication. “The acquirement of cutting-edge expertise in deep learning for image recognition is also an important factor for future developments”, the company explains.

The project has allowed QUAI DES APPS to improve Blinkl, its augmented narration web app. Its service allows clients in shops to photograph products on the shelves and obtain more information about them. Until now, the image recognition process was executed on remote servers. The disadvantages of this were the computational load on these machines and latencies during peak periods, such as during sales or product launches. In addition, there was a bottleneck in the image search which limited the size of the database to 1000 products. By switching image recognition to the mobile and improving the descriptors of these images, the company made a game-changing move and can now handle databases of 100,000 images. These capacities will allow QUAI DES APPS to meet the needs of the retail industry, whether for images of products on the shelf or in a catalogue.



These people really need to investigate a licence to add a couple of AKIDA nodes, so they can include the AKIDA technology advantage and convert the CNN to an SNN, reducing power consumption and permitting on-device training with one or a few shots.
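The retraining point above can be illustrated with a toy few-shot classifier (a generic nearest-centroid sketch in Python, not Akida's actual mechanism; every product name and number here is made up): with a fixed feature extractor, learning a new product on-device is just storing one more averaged embedding, whereas a conventional CNN classifier head would need retraining.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in embeddings: in a real system these would come from a frozen
# feature extractor; here each product is a random 8-d cluster centre.
def make_samples(centre, n=5):
    return centre + rng.normal(scale=0.1, size=(n, 8))

centres = {"tomato_sauce": rng.random(8), "mustard": rng.random(8)}

# Few-shot classifier: one mean embedding (centroid) per product class.
centroids = {name: make_samples(c).mean(axis=0) for name, c in centres.items()}

def classify(embedding):
    return min(centroids, key=lambda n: np.linalg.norm(embedding - centroids[n]))

# Adding the brand-new Icelandic sauce needs no retraining at all:
# average a handful of fresh embeddings and store one more centroid.
centres["iceland_sauce"] = rng.random(8)
centroids["iceland_sauce"] = make_samples(centres["iceland_sauce"]).mean(axis=0)
```

The contrast is the point: the centroid table grows by one row per new product, while a fixed-output CNN classifier would need its final layer retrained over the whole catalogue.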

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 19 users
My quick look found the following, which makes clear that what they have developed is an algorithm that allows them to use a convolutional neural network. They need the CNN to be pretrained with a data set of up to 100,000 images for on-phone processing without going back to the cloud. In every document I read, they are very careful never to touch on power consumption, but it is a scientific fact that a CNN uses far more power than an SNN. So based on just these two facts alone, their product is vastly inferior to AKIDA technology.

The example they give in the following extract, of a customer taking a photo of the product on the shelf and receiving additional information, sounds interesting. However, if you are buying the same tomato sauce you have bought for 50 years, you are unlikely to need extra information. (I am keeping with the sauce theme.) If, however, there is a brand-new display with a revolutionary tomato sauce made from special tomatoes from Iceland, you may well want extra information, but how did the store owner get this brand-new promotional product into the 100,000 images stored on the person's phone? Methinks there is a flaw here. If they had AKIDA, it would have been very simple, as the whole network does not need to be retrained, as we know. Would a store owner retrain the whole network every time someone comes to him with a new product? Unlikely, in my opinion, as he would incur an expense for a product he may in the end not put on the shelf.

Then there is the question of power consumption: what if someone reads about many products because of the novelty factor, then flattens their phone battery and cannot ring for an Uber/taxi to take them and their purchases home?

At the heart of the matter is a family of particularly powerful deep-learning algorithms, called convolutional neural networks (CNN). “These are excellent candidates for mobile image recognition”, explains Montaser Awal. “But for our purposes, we had to modify their architecture and optimise them to make them executable on mobile devices while maintaining a similar performance level to cloud-based server systems”.
Mission accomplished. The project has advanced the state of the art and resulted in ten scientific publications and five prototype applications. The new algorithms for image classification and text recognition from a photograph of an ID document were immediately integrated into IDcheck.io, ARIADNEXT's flagship product for ID document authentication. “The acquirement of cutting-edge expertise in deep learning for image recognition is also an important factor for future developments”, the company explains.

The project has allowed QUAI DES APPS to improve Blinkl, its augmented narration web app. Its service allows clients in shops to photograph products on the shelves and obtain more information about them. Until now, the image recognition process was executed on remote servers. The disadvantages of this were the computational load on these machines and latencies during peak periods, such as during sales or product launches. In addition, there was a bottleneck in the image search which limited the size of the database to 1000 products. By switching image recognition to the mobile and improving the descriptors of these images, the company made a game-changing move and can now handle databases of 100,000 images. These capacities will allow QUAI DES APPS to meet the needs of the retail industry, whether for images of products on the shelf or in a catalogue.



These people really need to investigate a licence to add a couple of AKIDA nodes, so they can include the AKIDA technology advantage and convert the CNN to an SNN, reducing power consumption and permitting on-device training with one or a few shots.

My opinion only DYOR
FF

AKIDA BALLISTA
You won this time, Dio, but this is not the final battle; we will meet again. FF
 
Reactions: 13 users

Boab

I wish I could paint like Vincent
Sounds like it will be the sauce of some great friendships.
There's just no stopping you is there?
Keep em coming😂
 
Reactions: 3 users