BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
Just listened to the podcast on ABC Background Briefing.
Very interesting how quickly this is all developing and on how many fronts simultaneously.

 
  • Like
  • Fire
Reactions: 8 users

zeeb0t

Administrator
Staff member
  • Like
  • Haha
  • Fire
Reactions: 8 users

MDhere

Regular
This would make everyone so happy. Right before the next AGM too! 🥳

Feeling hopeful on this one, Bravo.
This is the other side of my computer sticky note, which by the looks of it was written around May 2023, so Metroid Prime 4 may be due for release soon too.
We are in good company with MegaChips, and MegaChips is in good company with Nintendo.
Will my sticky note come alive? 🙏
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Earlyrelease

Regular
This looks like something related to Discord, not this forum.
Cheers Zeebot, they obviously got my TSE name and are hoping I'm a member of Discord, which I'm not. Cheers, maybe others should be aware.
 
  • Like
  • Fire
Reactions: 8 users

Diogenese

Top 20
View attachment 57136 View attachment 57138


Hi ILL,

I see several points here:

  • Spike rate or spike time
  • Existing algorithms only partly take advantage of neural technology (MB working on algorithms to take advantage of neuromorphic computing)
  • Still need automotive-grade neuromorphic chips before this tech is common in cars.
  • Extend vehicle range
  • Increase the number of AI functions
  • 2020 Joined Intel Neuromorphic Research Community
  • Expanding collaborations with research partners and unis

Although we have seen a couple of articles on spike rate recently, spike rate was shown by Thorpe, referring to Adrian's 1920s paper, to be redundant. Thorpe invented spike rank coding (spike time coding, aka N-of-M coding), enabling insignificant spikes to be ignored. Maybe the reference to spike rate is a red herring for competitors, but their DD would be deficient if they don't know about MB + Akida and that Akida uses rank coding. So it's really lip service to the NDAs, or a nod to a couple of decades of wasted research.
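
For anyone who wants to picture the difference, here's a rough Python sketch of why rank (N-of-M) coding can stop early while rate coding has to count spikes over a whole window. This is purely illustrative, not how Akida, MB or Thorpe's work actually implements it, and every name and parameter in it is made up for the example.

import numpy as np

rng = np.random.default_rng(0)

def rate_code(stimulus, window=100):
    # Rate coding: each input's intensity sets a spike probability per time
    # step, so the decoder has to count spikes over the full window before
    # it can estimate the intensities.
    spikes = rng.random((window, stimulus.size)) < stimulus[None, :]
    return spikes.sum(axis=0) / window

def rank_order_code(stimulus, n_of_m=4):
    # Rank-order (N-of-M) coding: stronger inputs fire earlier, only the
    # order of the first N spikes is kept, and the later "insignificant"
    # spikes never need to be processed at all.
    firing_order = np.argsort(-stimulus)   # earliest spike = strongest input
    return firing_order[:n_of_m]

stimulus = rng.random(16)  # 16 input channels with made-up intensities
print("rate estimate (needs the full 100-step window):", np.round(rate_code(stimulus), 2))
print("first-4 spike ranks (decision after just 4 spikes):", rank_order_code(stimulus))

The only point of the sketch is the asymmetry: the rate decoder's answer isn't available until the window closes, whereas the rank decoder has everything it needs as soon as the first few spikes arrive.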

New neuromorphic-aware algorithms may produce further time/energy savings.

We know MB is developing an MB.OS processor with Nvidia and some friends, so this will be automotive grade (probably similar to RadHard for NASA, and Vorago has already done that for Akida).

Increasing the number of AI functions can only be good for Akida.

Extending vehicle range ties in with increasing the number of AI functions, plus the advantages from neuromorphic-aware algorithms.

... and then there's the ecosystem - we're in with MB, Nvidia, and probably everyone else on MB's dance card.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 48 users

IloveLamp

Top 20
Thanks for the insight dio ☺️
 
  • Haha
  • Like
Reactions: 12 users

Diogenese

Top 20
  • Haha
  • Like
  • Love
Reactions: 10 users

IloveLamp

Top 20
  • Haha
  • Like
Reactions: 8 users

Diogenese

Top 20
Somebody say somethink!
 
  • Haha
  • Like
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.
Boom... ...

Regards.
Esq
 
  • Haha
  • Fire
  • Like
Reactions: 13 users
This is the body of the email I just sent Tony Dawe.
I will let the forum know if there is a response or information I can post regarding the subject 👍

"You may have noticed, there has been some discussion on TSEx about Dr Tony Lewis's LinkedIn comment, concerning small LMs and neuromorphic hardware.

His comments, in my opinion, were ambiguous.
He said that both neuromorphic hardware using them and the VSLMs themselves "hold promise".
He also stated that, to his knowledge, BrainChip would be the first to implement this at the Edge.

To me, he said they are still working on it, but it could also be that he, being commercially minded, was saying:
"We've done it, but AKD2000 doesn't exist yet, so it's still just in simulation".


In FactFinder's post, about the November 6th private shareholder meeting, he stated the following.

"It has been confirmed by the CEO Sean Hehir in person to myself and others that Brainchip has succeeded in developing and running a number of Large Language Models, LLMs, on AKIDA 2.0 and that AKIDA 3.0 is on track and that its primary purpose will be to focus on running LLM's as its point of technology difference".

Straight off the bat, FactFinder refers to LLMs, not the "very small" or "tiny" LMs that Dr Tony refers to as holding promise.
So either he misquoted Sean, or Sean referenced LLMs.

From the above meeting, it appears that, at the very least, very small LMs are successfully running, in simulation, on AKIDA 2.0 and this is now public knowledge?

My questions are..

Are VSLMs running successfully on AKIDA 2.0, or are they still ironing out the bugs, making this more of an AKIDA 3.0 focused game?

If they are running successfully, this would be considered quite a huge achievement (being a world first, and considering the current hunger for such technology).
Why hasn't the Company made a proper statement/Tweet, or something, when the information is apparently public knowledge?

It seems to me that it would be in the Company's best interest (as well as ours as shareholders, of course) to at least "tap" the drum and not have to rely on Chinese whispers etc.?"
 
  • Like
  • Love
  • Fire
Reactions: 48 users

hotty4040

Regular
Anybody else got this email? Thinking scam

Hello Earlyrelease,
We’re asking everyone to choose a unique username instead of using discriminators in their username (username#0000). Starting March 4, 2024, Discord will begin assigning usernames to users who have not chosen one themselves. You are receiving this email because you have not chosen a new username.
This is a notice that if you do not update your username by March 4, 2024, Discord will assign a new, unique username to your account. We will try to assign you a unique username that is similar to your current username.
You can update your username now by following these instructions:
On Desktop/Browser
  1. Select the cog wheel in the bottom left to open User Settings
  2. Click Get Started
On Mobile
  1. Open the You tab by tapping your profile in the bottom right of the screen
  2. Tap Get Started
You can always change your username later in your User Settings. Learn more about our recent changes to usernames in our support article.
Thank you,
Discord
Need help? Contact our support team or hit us up on Twitter @discord.
Want to give us feedback? Let us know what you think on our feedback site.

Been there, done that, Early. I'm suggesting disregard, I hope............

hotty...
 
  • Like
Reactions: 1 user

RobjHunt

Regular
  • Haha
Reactions: 8 users

manny100

Regular
Interesting read FF,
That led me to another term I had not been aware of but suspected was a tactic used by shorters in collusion.
A Bear Raid.
In a typical bear raid, short sellers may conspire beforehand to quietly establish large short positions in the target stock. Since the short interest in the stock increases the risk of a short squeeze that can inflict substantial losses on the shorts, the short sellers cannot afford to wait patiently for months until their short strategy works out.


The next step in the bear raid is akin to a smear campaign, with whispers and rumors about the company spread by unknown sources. These rumors can be anything that portrays the target company in a negative light, such as allegations of accounting fraud, an SEC investigation, an earnings miss, financial difficulties, and so on. The rumors may cause nervous investors to exit the stock in droves, driving the price down further and giving the short sellers the profit they are looking for.
That is more common than we think.
Huge shorters will use both social and established media to put out negative stories about a company.
 
  • Like
Reactions: 5 users

Boab

I wish I could paint like Vincent
It’ll soon be Monday
 
  • Haha
  • Like
  • Love
Reactions: 25 users