BRN Discussion Ongoing

chapman89

Founding Member
Just touching on that lecture video again, at the 1 hour 30 minute mark Nikunj from BrainChip is talking about some use cases, such as:

“RF signalling or maybe infrared, or we have Lidar coming in. We have customers who have reached out for all these use cases, we are working with them, it’ll come soon, you will see these integrations & products VERY SOON”
 
  • Like
  • Fire
  • Love
Reactions: 68 users

Sam

Nothing changes if nothing changes
  • Haha
  • Like
  • Love
Reactions: 11 users
Are you calling ChatGPT a liar 🤥 😂
It should just stick to writing children's stories and complaints to Sesame Street's producers for never disclosing how to get to Sesame Street, even though it was asked plenty of times in the theme song 😉
 
  • Haha
  • Like
Reactions: 5 users

I'd like you to write a formal complaint to the producers of Sesame Street, regarding why they never provided directions to Sesame Street even though it was asked numerous times, over and over, in the Sesame Street theme song. This remains unanswered to this day, many decades later.


Dear Sesame Street Producers,
I am writing to express my frustration and disappointment that the television show "Sesame Street" has never provided directions to the eponymous location despite numerous requests for them in the show's theme song.
As a long-time fan of the show, I have always been curious about where Sesame Street is located and how to get there. The fact that the show has never provided this information is a significant oversight, in my opinion.
I understand that Sesame Street is a fictional place, but that does not excuse the lack of transparency on the part of the producers. It is unacceptable that the show has neglected to provide such a basic and important piece of information to its viewers.
I urge the producers of Sesame Street to reconsider their decision and provide the requested directions to Sesame Street as soon as possible. This will go a long way in improving the overall quality and enjoyment of the show for its devoted fans.
Sincerely, [Your name]
 
  • Haha
Reactions: 9 users
We must be getting close to a 4C release.
More disappointment to come, tell me I'm wrong.
 
  • Like
Reactions: 1 user

Sam

Nothing changes if nothing changes
It is a great little app but unsure if I will see it through to the end of the free trial😂
 
  • Like
Reactions: 2 users

VictorG

Member
Are you calling ChatGPT a liar 🤥 😂
Let's just call it colourful with a penchant to wax lyrical.
As a matter of interest, shares in ChatGPT's parent company are currently offered by tender at a valuation of US$27 billion. Not bad for a tech company that is yet to generate a single dollar in revenue and sources its answers from Pinocchio's book on ethical alchemy.
 
  • Haha
  • Like
  • Love
Reactions: 10 users

TasTroy77

Founding Member
So the 4C should be in within a fortnight, and it's hard to expect any significant business development before then, as only Renesas and MegaChips have licensed our IP.

I don't think we are doing enough to force Renesas' hand to use our IP. It would be hard to see Renesas not producing a publicly available chip with Akida IP if they have a client asking for one. They have only ever doubled down on their own DRP-AI solution, and have even bought out whole software companies to get it to work (Reality AI).

I don't see that we have an infinite amount of time to land a deal. What we need is very simple: a three-way deal between a sensor maker, MegaChips and us to produce a product that can be purchased. In the meantime Renesas are screwing us just as Edison stuffed Tesla, and it's high time we sent them their invoice, collected the money and phoned MegaChips. If Renesas cannot show BRN a timeline for the sale of chips with Akida, it's time to take them on.
Are you for real? You obviously don't understand the basics of the business model of IP licence agreements and how it all ties in with upcoming commercialisation.
We are a business-to-business company, not consumer-facing, and Renesas is not obliged to show us every hop, skip and jump they make with the implementation of AKIDA IP. All we have to do is wait for the future royalty payments to arrive, and it won't happen overnight.
IMO I am not expecting royalty payments until maybe the end of the year.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Straw

Guest

I'd like you to write a formal complaint to the producers of Sesame Street ... (ChatGPT prompt and letter quoted in full above)
I'll get Elmo to come around and give directions
Sesame Street Idk GIF
 
  • Love
  • Haha
  • Like
Reactions: 6 users
I'll get Elmo to come around and give directions
Sesame Street Idk GIF
elmo-cocaine.gif

Last time I saw Elmo.
 
  • Haha
  • Like
Reactions: 17 users

Straw

Guest
I'll get Elmo to come around and give directions
Sesame Street Idk GIF
Otherwise there is the more intimidating option of Big Bird and a mentally unstable imaginary (poll toothed) Mammoth
 
  • Haha
  • Love
  • Like
Reactions: 3 users

Getupthere

Regular

Four types of bias in medical AI are running under the FDA's radar


Although artificial intelligence is entering health care with great promise, clinical AI tools are prone to bias and real-world underperformance from inception to deployment, including the stages of dataset acquisition, labeling or annotating, algorithm training, and validation. These biases can reinforce existing disparities in diagnosis and treatment.


To explore how well bias is being identified in the FDA review process, we looked at virtually every health care AI product approved between 1997 and October 2022. Our audit of data submitted to the FDA to clear clinical AI products for the market reveals major flaws in how this technology is being regulated.


Our analysis


The FDA has approved 521 AI products between 1997 and October 2022: 500 under the 510(k) pathway, meaning the new algorithm mimics an existing technology; 18 under the de novo pathway, meaning the algorithm does not mimic existing models but comes packaged with controls that make it safe; three were submitted with premarket approval. Since the FDA only includes summaries for the first two, we analyzed the rigor of the submission data underlying 518 approvals to understand how well the submissions were considering how bias can enter the equation.


In submissions to the FDA, companies are asked generally to share performance data that demonstrates the effectiveness of their AI product. One of the major challenges for the industry is that the 510(k) process is far from formulaic, and one must decipher the FDA’s ambiguous stance on a case-by-case basis. The agency has not historically asked for any buckets of supporting data explicitly; in fact, there are products with 510(k) approval for which no data were offered about potential sources of bias.


We see four areas in which bias can enter an algorithm used in medicine. This is based on best practices in computer science for training any sort of algorithm and the awareness that it’s important to consider what degree of medical training is possessed by the people who are creating or translating the raw data into something that can train an algorithm (the data annotators, in AI parlance). These four areas that can skew the performance of any clinical algorithm — patient cohorts, medical devices, clinical sites, and the annotators themselves — are not being systematically accounted for (see the table below).


[Table: Percentages of 518 FDA-approved AI products that submitted data covering sources of bias]


Aggregate performance is when a vendor reports it tested different variables but only offers performance as an aggregate, not performance by each variable. Stratified performance offers more insight and means a vendor gives performance for each variable (cohort, device, or other variable).
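To make that distinction concrete, here is a minimal Python sketch with invented example data (the field names "cohort", "device" and "correct" are purely illustrative, not from any FDA submission):

```python
# Invented example data; "cohort", "device" and "correct" are hypothetical fields.
from collections import defaultdict

results = [
    {"cohort": "18-40", "device": "scanner_a", "correct": True},
    {"cohort": "18-40", "device": "scanner_b", "correct": True},
    {"cohort": "65+",   "device": "scanner_a", "correct": False},
    {"cohort": "65+",   "device": "scanner_b", "correct": True},
]

def accuracy(rows):
    return sum(r["correct"] for r in rows) / len(rows)

# Aggregate performance: a single number across every test case.
print(f"aggregate accuracy: {accuracy(results):.2f}")

# Stratified performance: one number per value of each variable.
for key in ("cohort", "device"):
    groups = defaultdict(list)
    for row in results:
        groups[row[key]].append(row)
    for value, rows in sorted(groups.items()):
        print(f"accuracy for {key}={value}: {accuracy(rows):.2f}")
```

The aggregate figure can look healthy while a single cohort or device performs poorly, which is exactly the information the stratified view exposes.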


It’s actually the extreme exception to the rule if a clinical AI product has been submitted with data that backs up its effectiveness.


A proposal for baseline submission criteria


We propose new mandatory transparency minimums that must be included for the FDA to review an algorithm. These span performance across dataset sites and patient populations; performance metrics across patient cohorts, including ethnicity, age, gender, and comorbidities; and the different devices the AI will run in. This granularity should be provided both for the training and the validation datasets. Results about the reproducibility of an algorithm in conceptually identical conditions using external validation patient cohorts should also be provided.


It also matters who is doing the data labeling and with what tools. Basic qualification and demographic information on the annotators — are they board-certified physicians, medical students, foreign board-certified physicians, or non-medical professionals employed by a private data labeling company? — should also be included as part of a submission.


Proposing a baseline performance standard is a profoundly complex undertaking. The intended use of each algorithm drives the necessary performance threshold level — higher-risk situations need a higher standard for performance — and is therefore hard to generalize. While the industry works toward a better understanding of performance standards, developers of AI must be transparent about the assumptions being made in the data.


Beyond recommendations: tech platforms and whole-industry conversations


It takes as much as 15 years to develop a drug, five years to develop a medical device, and, in our experience, six months to develop an algorithm, which is designed to go through numerous iterations not only for those six months but also for its entire life cycle. In other words, algorithms don’t get anywhere near the rigorous traceability and auditability that go into developing drugs and medical devices.


If an AI tool is going to be used in decision-making processes, it should be held to similar standards as physicians who not only undergo initial training and certification but also lifelong education, recertification, and quality assurance processes during the time they are practicing medicine.


Recommendations from the Coalition for Health AI (CHAI) raise awareness about the problem of bias and effectiveness in clinical AI, but technology is needed to actually enforce them. Identifying and overcoming the four buckets of bias requires a platform approach with visibility and rigor at scale — thousands of algorithms are piling up at the FDA for review — that can compare and contrast submissions against predicates as well as evaluate de novo applications. Binders of reports won’t help version control of data, models, and annotation.


What can this approach look like? Consider the progression of software design. In the 1980s, it took considerable expertise to create a graphical user interface (the visual representation of software), and it was a solitary, siloed experience. Today, platforms like Figma abstract the expertise needed to code an interface and, equally important, connect the ecosystem of stakeholders so everyone sees and understands what’s happening.


Clinicians and regulators should not be expected to learn to code, but rather be given a platform that makes it easy to open up, inspect and test the different ingredients that make up an algorithm. It should be easy to evaluate algorithmic performance using local data and retrain on-site if need be.


CHAI calls out the need to look into the black box that is AI through a sort of metadata nutrition label that lists essential facts so clinicians can make informed decisions about the use of a particular algorithm without being machine learning experts. That can make it easy to know what to look at, but it doesn’t account for the inherent evolution — or devolution — of an algorithm. Doctors need more than a snapshot of how it worked when it was first developed: They need continual human interventions augmented by automated check-ins even after a product is on the market. A Figma-like platform should make it easy for humans to manually review performance. The platform could automate part of this, too, by comparing physicians’ diagnoses against what the algorithm predicts they will be.
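As a toy sketch of that kind of automated check-in (the record shapes and threshold below are invented, not from any real monitoring platform), agreement between the algorithm and physicians could be tracked per review window:

```python
# Toy post-market monitoring sketch; thresholds and record shapes are invented.

def agreement_rate(records):
    """records: iterable of (algorithm_prediction, physician_diagnosis) pairs."""
    records = list(records)
    if not records:
        return None
    return sum(pred == diagnosis for pred, diagnosis in records) / len(records)

def needs_review(window, baseline_rate, tolerance=0.05):
    """Flag the model for manual review if agreement drifts below baseline."""
    rate = agreement_rate(window)
    return rate is not None and rate < baseline_rate - tolerance

baseline = 0.92  # agreement measured at deployment (hypothetical)
january = [("pneumonia", "pneumonia"), ("normal", "normal"), ("normal", "pneumonia")]

print(round(agreement_rate(january), 2))   # 0.67
print(needs_review(january, baseline))     # True -> trigger human review / retraining
```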


In technical terms, what we’re describing is called a machine learning operations (MLOps) platform. Platforms in other fields, such as Snowflake, have shown the power of this approach and how it works in practice.


Finally, this discussion about bias in clinical AI tools must encompass not only big tech companies and elite academic medical centers, but community and rural hospitals, Veterans Affairs hospitals, startups, groups advocating for under-represented communities, medical professional associations, as well as the FDA’s international counterparts.


No one voice is more important than others. All stakeholders must work together to forge equity, safety, and efficacy into clinical AI. The first step toward this goal is to improve transparency and approval standards.


Enes Hosgor is the founder and CEO of Gesund, a company driving equity, safety, and transparency in clinical AI. Oguz Akin is a radiologist and director of Body MRI at Memorial Sloan Kettering in New York City and a professor of radiology at Weill Cornell Medical College.


 
  • Fire
  • Like
  • Thinking
Reactions: 9 users

Sam

Nothing changes if nothing changes
1673262924588.png
 
  • Haha
  • Like
  • Thinking
Reactions: 10 users

Sam

Nothing changes if nothing changes
  • Haha
  • Like
  • Fire
Reactions: 4 users
OK, put your hand up if you're twaflyer1.
Screenshot_20230109-215827.png

I'm the 🦵 guy. 😂
 
  • Haha
  • Like
  • Fire
Reactions: 15 users

VictorG

Member
Short memories. Never say never.
https://www.ex3.simula.no/resources
I'm surprised to say the least. Possibly supplied by ARM but still truly surprised that this is true.

AKIDA NEURAL PROCESSOR
The KunPeng CPU nodes (see above) hosts four Akida Neural Processors from BrainChip. These processors are designed specifically for neuromorphic computing.
 
  • Like
  • Fire
  • Thinking
Reactions: 20 users
I'm surprised to say the least. Possibly supplied by ARM but still truly surprised that this is true.

AKIDA NEURAL PROCESSOR
The KunPeng CPU nodes (see above) hosts four Akida Neural Processors from BrainChip. These processors are designed specifically for neuromorphic computing.
Our good man @Fullmoonfever discovered this a while back.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Diogenese

Top 20

Four types of bias in medical AI are running under the FDA's radar

(Getupthere's article, quoted in full above.)


" It should be easy to evaluate algorithmic performance using local data and retrain on-site if need be.
...
In technical terms, what we’re describing is called a machine learning operations (MLOps) platform. Platforms in other fields, such as Snowflake, have shown the power of this approach and how it works in practice
."


This patent, which claims "federated learning", is based on a priority back to PvdM's 2008 application:

US10410117B2 Method and a system for creating dynamic neural function libraries

[0001] This application is a continuation-in-part of U.S. patent application Ser. No. 13/461,800, filed on May 2, 2012, which is a continuation-in-part of U.S. patent application Ser. No. 12/234,697, filed on Sep. 21, 2008, now U.S. Pat. No. 8,250,011, the disclosures of each of which are hereby incorporated by reference in their entirety.


1673264395810.png



[0073] FIG. 11, labeled “Method of Reading and Writing Dynamic Neuron Training Models”, represents a preferred embodiment of the function model library creation and uploading method. The communication module reads registers and provides an access means to an external computer system. The communication module is typically a microcontroller or microprocessor or equivalent programmable device. Its databus comprises a method of communicating with the hardware of the dynamic neuron array to receive or send data to binary registers.



Claim 1: A method of creating a reusable dynamic neural function library for use in artificial intelligence, the method comprising the steps of:

sending a plurality of input pulses in form of stimuli to a first artificial intelligent device, where the first artificial intelligent device includes a hardware network of reconfigurable artificial neurons and synapses;

learning at least one task or a function autonomously from the plurality of input pulses, by the first artificial intelligent device;

generating and storing a set of control values, representing one learned function, in synaptic registers of the first artificial intelligent device;

altering and updating the control values in synaptic registers, based on a time interval and an intensity of the plurality of input pulses for autonomous learning of the functions, thereby creating the function that stores sets of control values, at the first artificial intelligent device; and

transferring and storing the function in the reusable dynamic neural function library, together with other functions derived from a plurality of artificial intelligent devices, allowing a second artificial intelligent device to reuse one or more of the functions learned by the first artificial intelligent device.
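Purely as a conceptual illustration of the flow described in Claim 1 (all class and function names below are hypothetical; this is not BrainChip's Akida or MetaTF API), the learn/export/reuse cycle might be sketched like this:

```python
import copy
from dataclasses import dataclass

@dataclass
class NeuralFunction:
    name: str
    control_values: list          # snapshot of synaptic-register contents

class FunctionLibrary:
    """Reusable dynamic neural function library (conceptual stand-in)."""
    def __init__(self):
        self._functions = {}

    def store(self, fn):
        self._functions[fn.name] = copy.deepcopy(fn)

    def load(self, name):
        return copy.deepcopy(self._functions[name])

class Device:
    """Stand-in for an 'artificial intelligent device' with synaptic registers."""
    def __init__(self, n_synapses=8):
        self.registers = [0] * n_synapses

    def learn(self, pulses):
        # Toy update: each (interval, intensity) input pulse nudges one register,
        # echoing the claim's "time interval and intensity" of the input pulses.
        for i, (interval, intensity) in enumerate(pulses):
            self.registers[i % len(self.registers)] += intensity // max(interval, 1)

    def export_function(self, name):
        return NeuralFunction(name, list(self.registers))

    def import_function(self, fn):
        self.registers = list(fn.control_values)

library = FunctionLibrary()
device_a = Device()
device_a.learn([(2, 10), (1, 4), (3, 9)])            # learn a function from input pulses
library.store(device_a.export_function("gesture"))    # store its control values

device_b = Device()
device_b.import_function(library.load("gesture"))     # second device reuses the function
assert device_b.registers == device_a.registers
```

The point of the sketch is only the shape of the claim: control values learned on one device are captured in a library and loaded onto a second device without retraining, which is the "federated" flavour Diogenese highlights.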
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Diogenese

Top 20
I'm surprised to say the least. Possibly supplied by ARM but still truly surprised that this is true.

AKIDA NEURAL PROCESSOR
The KunPeng CPU nodes (see above) hosts four Akida Neural Processors from BrainChip. These processors are designed specifically for neuromorphic computing.
Hi Victor,

I wonder what the date of the KunPeng announcement was. In particular, was it before the end of 2020?

Some time before LdN said "We don't need China", we had received US approval to export to China, and had planned to set up in Shanghai, but NASA et al spiked that idea.

So was the KunPeng deal nullified when we no longer needed China?

https://brainchip.com/brainchip-rec...and to non-restricted customers and use cases.

BrainChip receives Akida export approval from US government​

via Small Caps
Artificial intelligence device company BrainChip (ASX: BRN) has unveiled a new export classification issued by the US Government’s Bureau of Industry and Security (BIS).
The ruling authorises the export of its AI technologies without the company having to apply for additional licences and, most importantly, paves the way for BrainChip to target non-restricted customers in Japan, Korea, China and Taiwan.
BrainChip obtained a formal classification for EAR99 under the Export Administration Regulations which removes barriers for exporting Akida to non-US countries and to non-restricted customers and use cases.
According to BrainChip, its technology is suitable for numerous edge applications including surveillance, advanced driver assistance systems, vision-guided robotics, drones, internet of things, acoustic analysis and cybersecurity. The Akida chip includes BrainChip’s entire AI edge network and has multiple learning modes.
BrainChip also stated that it continues with Akida product development and is engaging with early access manufacturers to bring a “first-in-kind product” to market. The Akida NSoC enables AI Edge solutions for high-growth, high-volume applications that have been difficult to achieve with existing AI architectures.


I recall being a bit surprised when this was announced.



 
  • Like
  • Fire
  • Love
Reactions: 20 users