BRN Discussion Ongoing

I must say FF your mention of being a retired lawyer (which most of us happen to know) made me think - if you look anything like Jennifer Robinson I want to meet up 😀
If you look like Jennifer Lopez I could say I do.😁
 
Reactions: 12 users

FJ-215

Regular
I'm kinda thunderstruck..........




OK, not great, but give it a chance. It gets better around the 2 minute mark.

Make that the 4 minute mark.

Actually, this one is much better (and half as long)



 
Reactions: 8 users

Diogenese

Top 20
Was looking into Renesas again to see if there is any other info around.

I found their DRP-AI uses EdgeCortix in one component. Maybe @Diogenese can elaborate on any overlap with Akida functions, or on whether this shows the DRP is not using Akida?

Or is the TVM just a compiler/converter, as I read it?

TIA.



Features of DRP-AI
DRP-AI consists of AI-MAC (a multiply-accumulate processor) and DRP (a reconfigurable processor). AI processing can be executed at high speed by assigning AI-MAC to operations on the convolution and fully connected layers, and DRP to other complex processing such as preprocessing and the pooling layer.

Tool: DRP-AI Translator / DRP-AI TVM※1
DRP-AI Translator and DRP-AI TVM are tools that are available to convert trained AI models into a format that can run on DRP-AI. This section describes the features of these two tools.
The DRP-AI Translator is a tool that is tuned to maximize DRP-AI performance. DRP-AI achieves high-speed performance, low power consumption and reduced CPU load by enabling DRP-AI to perform all the operations of an AI model.
DRP-AI TVM applies the DRP-AI accelerator to the proven ML compiler framework Apache TVM※2. This enables support for multiple AI frameworks (ONNX, PyTorch, TensorFlow, etc.). In addition, it enables operation in conjunction with the CPU, allowing more AI models to be run.

These two tools can be selected according to the customer's product application.



※1 DRP-AI TVM is powered by the EdgeCortix MERA™ Compiler Framework.
※2 For more information on Apache TVM, please refer to https://tvm.apache.org
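The AI-MAC / DRP split described above rests on the fact that convolution and fully connected layers reduce to multiply-accumulate operations. A minimal pure-Python sketch (illustrative only, not Renesas code) of the MAC loop behind a 1-D convolution:

```python
def mac_conv1d(signal, kernel):
    """Valid-mode 1-D convolution written as explicit multiply-accumulate steps."""
    out = []
    for i in range(len(signal) - len(kernel) + 1):
        acc = 0  # the accumulator an AI-MAC unit keeps in hardware
        for k, w in enumerate(kernel):
            acc += signal[i + k] * w  # one multiply-accumulate per tap
        out.append(acc)
    return out

print(mac_conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

On DRP-AI the inner accumulate loop is what the AI-MAC array parallelises in hardware; here it is just ordinary Python.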
No Akida -

R-Car V3U Block Diagram
[block diagram images attached]

Renesas have been working on this for over 10 years. You can't expect them to discard it at the drop of a SNN.
 

Reactions: 20 users

Foxdog

Regular
If you look like Jennifer Lopez I could say I do.😁
😀 Oh and of course there's Jennifer Aniston - a magnificent trio, what - must be something in the name 🤔
 
Reactions: 2 users

JK200SX

Regular
You musta know'd you waz in trubbl when you seen dat BUTT!




This live version is great (.... and who is the lady in black going off.... Bravo? :) )



And if you want something a bit more laid back with a country and western flair, then have a listen:

 
Reactions: 7 users

TasTroy77

Founding Member
[LinkedIn screenshot attached]
Chris Stevens likes this
 
Reactions: 32 users

Mugen74

Regular
Reactions: 9 users
I like the driver identification and driver authentication side of this technology.

As well as supplying input to all the personalised adjustments: seat, steering wheel and mirror positions to name a few, plus air conditioning settings, music choice, driver assist preferences, etc.

This could potentially do away with auto theft and car-jacking. I.e. if the car doesn't recognise you, then you can't use it!

I don't see a use for my car telling me what emotions it thinks I am expressing. I don't even like it checking my attention, which should become less important anyway once automation levels mature to useful levels.
Probably would be useful if it could detect if you are having a seizure, heart attack or stroke, so that the car can automatically stop safely and call emergency services.
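The recognition-gated ignition idea above can be sketched in a few lines. This is a toy illustration, not any vendor's API; the profile names and settings are invented:

```python
# Hypothetical sketch: refuse to start unless the recognised driver has a
# stored profile, otherwise return that driver's personalisation settings.
PROFILES = {
    "driver_a": {"seat_mm": 320, "mirror_deg": 12.0, "climate_c": 21.5},
    "driver_b": {"seat_mm": 355, "mirror_deg": 9.5, "climate_c": 23.0},
}

def start_vehicle(recognised_id):
    """Return the settings to apply, or refuse if recognition failed."""
    profile = PROFILES.get(recognised_id)
    if profile is None:
        raise PermissionError("driver not recognised: ignition stays locked")
    return profile

print(start_vehicle("driver_a")["climate_c"])  # 21.5
```

In a real system the `recognised_id` would come from an on-device face or voice classifier; the point of the sketch is simply that authentication and personalisation share the same recognition step.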
 
Reactions: 18 users

Deadpool

Did someone say KFC
Probably would be useful if it could detect if you are having a seizure, heart attack or stroke, so that the car can automatically stop safely and call emergency services.
Or another option, it can detect when there’s "movement at the station" (so to speak) and it texts your wife / partner that she should open a bottle of wine, draw the shades and put on some Barry White.:love:


Barry White Flirting GIF
 
Reactions: 14 users

Foxdog

Regular
Deadpool said: (post quoted above)
Or run for the hills 😀
 
Reactions: 11 users

Bobbygant

Regular
Not sure if this has been shared as yet?
 
Reactions: 16 users

Gman

Member

“What they built is a robot that not only learns intellectually as a human would, but can pull from all five “senses.” The robot chef can touch, smell, see, hear, and even taste, thanks to a tasting bar that mimics the human tongue. The senses send feedback to the OS, creating a learning loop similar to a human’s, which logs all information for future use. It’s as if the robot is constantly in a culinary school class, but remembers every detail from the homework”

“Multiple senses all provide data in a way that we can make an intellectual decision,” Sunkara says. “That’s the A.I. process as well.”

Sounds familiar or I could just be hungry 🤔
 
Reactions: 23 users

White Horse

Regular
Insto's steady climb in accumulation.


Latest Insto ownership. Vanguard are getting more aggressive.


https://www.msn.com/en-au/money/wat...b97c2e357e031c7f6&duration=1D&l3=L3_Ownership


BrainChip Holdings Ltd
0.94 at close, +0.02 (+1.62%)

Institutional Ownership: 6.97%
Top 10 Institutions: 5.99%
Mutual Fund Ownership: 6.22%
Float: 82.64%

Institution Name | Shares Held (% Change) | % Outstanding
Vanguard Investments Australia Ltd. | 22,140,950 (+0.17%) | 1.29
BlackRock Institutional Trust Company, N.A. | 18,225,041 (+0.03%) | 1.06
BlackRock Advisors (UK) Limited | 13,016,223 (-0.00%) | 0.76
The Vanguard Group, Inc. | 11,749,260 (+0.64%) | 0.68
LDA Capital Limited | 10,000,000 (-0.07%) | 0.58
Irish Life Investment Managers Ltd. | 9,187,138 (+0.04%) | 0.53
FV Frankfurter Vermögen AG | 7,200,000 (-0.01%) | 0.42
BetaShares Capital Ltd. | 5,270,349 (+0.03%) | 0.31
BlackRock Investment Management (Australia) Ltd. | 3,077,154 (-0.02%) | 0.18
State Street Global Advisors Australia Ltd. | 3,065,409 (+0.00%) | 0.18
State Street Global Advisors (US) | 2,475,052 (+0.01%) | 0.14
First Trust Advisors L.P. | 2,086,443 (+0.02%) | 0.12
Nuveen LLC | 1,773,407 (+0.00%) | 0.10
Charles Schwab Investment Management, Inc. | 1,546,196 (+0.09%) | 0.09
California State Teachers Retirement System | 1,324,606 (+0.08%) | 0.08
 
Reactions: 36 users
Deadpool said: (post quoted above)
Classic 👍

 
Reactions: 10 users
This post is a good reminder from the half-yearlies, which showed $4.8m revenue and receivables at $3.4m, so my money is on around $3m cash receipts for the 4C.
Look at the note to the receivables. Realistically it is closer to $2-2.5m.
 

cassip

Regular
Shareholder about Akida:

 
Reactions: 7 users

I must say that in the next few years the major leaps in technology are really going to be mind-blowing.
 
Reactions: 13 users
Not sure if this has been shared as yet?

Hi @Bobbygant
I am pretty sure everyone would have seen this interview before but that does not diminish its value.

Those who have read my posts over a few years know that I am fond of doing retrospectives as it is always the case that when I revisit past interviews, presentations and releases hindsight comes into play and the significance of things that I previously missed now stand out like neon signs.

In this interview towards the end the CEO Sean Hehir speaks about understanding customer requirements and how when power is not an issue they can tailor their offering and go toe to toe with other solutions taking advantage of AKIDA’s flexibility to scale.

This statement put me in mind of Rob Lincourt of DELL Technologies' statement in the Brainchip podcast that what they were doing with AKIDA was seeing just how far it could scale.

Which put me in mind of Anil Mankar's statement that they were being benchmarked against a GPU and it was coming up favourably for AKIDA.

Which put me in mind of Tim Llewellyn from Nviso who said that with Anil Mankar’s advice they have tweaked AKIDA to run at up to 1670 fps.

Which put me in mind of Peter van der Made’s statements going back to 2015 that AKIDA technology could do all the compute from data centres to the far edge.

Which finally brings me to former CEO Mr. Dinardo's comment, made while hosing down shareholder excitement around Peter van der Made's statement that 100 AKD1000 could do all the compute for full autonomy: that Peter gets carried away and, while what he said is true, the focus of Brainchip was the Edge and Far Edge.

So going back to CEO Sean Hehir’s statement regarding where power is not an issue I think I can now safely say that Brainchip has been widening its horizon and that when we go looking for AKIDA technology our focus should no longer simply be on the energy constrained Edge application.

Ubiquitous now means just that. Anywhere you need Ai semiconductors, from the cloud data centre to the box on a remote heavy rail line in the Kimberley where only battery power exists, AKIDA will be found, as it is Essential Ai.

Why? Because just like the song “AKIDA can do anything better than you” ever imagined was possible with a hugely reduced power footprint and in real-time and with one shot, few shot, incremental learning.

It can do maths and regression analysis better than you. Yes it can, yes it can, yes it cannnnnn.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 81 users
FF said: (post quoted in full above)
For those not born in the Stone Age:

 
Reactions: 19 users

Gman said: (post quoted in full above)

I have posted in the past that Sony AI continue to advertise positions for AI/machine learning qualified candidates on their Gastronomy team.
Akida does all the senses, and Sony are working with Prophesee.

Are these the ingredients for BrainChip involvement?

Joke Drums GIF by Bax Music



Roles and Responsibilities​

  • Conduct fundamental and innovative research in machine learning within the Gastronomy domain, including but not limited to, multisensory (vision, speech, olfactory, gustatory, and haptic) perception learning, human-in-the-loop learning, etc.
  • Research and develop innovative machine learning strategies for novel recipe creation that includes important aspects such as health and sustainability.
  • Construct multisensory datasets (related to food) from scratch.
  • Collaborate with a diverse team to integrate your research into products.
  • Write code to support research, usually in Python.
  • Write reports and give presentations for internal audiences.
  • Publish influential research outcomes at top-tier CV conferences.

Required Qualifications and Skills​

  • Pursuing PhD, Masters, or Bachelors degree in Machine Learning, Computer Science, Data Science or related field.
  • In-depth expertise in one or more of machine learning, deep learning, computer vision, optimization and/or NLP.
  • Experience with and/or enthusiasm for building datasets from scratch.
  • Excellent communication and presentation skills.
  • Proven ability to implement software in Python.
  • Familiarity with deep learning frameworks, in particular PyTorch.
  • (Preferred) Experience with cloud computing, in particular AWS.
  • Excellent oral and written English. Japanese language skills are a plus but not a necessity.

Preferred Qualifications​

  • Experience or interest in modeling/understanding multisensory (image, speech, olfactory, haptic, and gustatory) perception.
  • Familiarity with user research and constructing datasets.
 
Reactions: 24 users