BRN Discussion Ongoing

This upcoming conference could be interesting.

Wonder if the workshop on Cortex M etc might get us a mention. Be nice.



Join us on the 24th of November in Warwickshire for the 18th Arm MCU Conference by Hitex. This year we are pleased to welcome keynote speakers from Arm, Keil and Linaro. The conference is accompanied by a tabletop exhibition and training workshops to help you get the best from the day.

1669433496380.png



Hitex Arm Microcontroller Conference

The 2022 conference will bring together experts from ARM and their partners to present new and emerging technologies for Cortex-M microcontrollers - with some exciting new speakers as well.

18 years of partnership, collaboration, and innovation

Since the first Cortex-M based microcontroller was launched in 2006, the family has gone from strength to strength to become today’s standard microcontroller processor. Now, with the Armv8.x architectural revision, silicon vendors can release the next generation of Cortex-M processors with enhanced hardware extensions for today's critical applications such as IoT, machine learning and functional safety.

On the 24th of November 2022, we are pleased to present a diverse range of experts from the UK embedded community gathered under one roof to discuss all the latest developments in microcontroller silicon, software and design techniques.

The day will include:

Technical conference
Industry-leading keynote speakers
Latest technology round-up
Exhibition
Full conference content to take home
URL: https://www.hitexarmconference.co.uk

Training courses

In the run-up to the conference, we will be running our most popular training courses. If you are starting with Cortex-M processors, these courses are a great springboard for your first project.

Cortex-M MCU Workshop 22 November 2022

A one-day introduction to the Cortex-M processor family, development tools, software standards and key programming techniques

Using a Real-Time Operating System 23 November 2022

This course provides a complete introduction to starting development with an RTOS for current ‘bare metal’ developers. It covers RTOS concepts, an introduction to the CMSIS-RTOSv2 API, how to design application code with an RTOS, and adopting a layered software architecture for productivity and code reuse.
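For anyone wondering what the "layered software architecture" idea above looks like in practice, here is a toy sketch. It is in Python threads rather than CMSIS-RTOSv2 C, and has nothing to do with the actual course material — all names are invented. The pattern it illustrates is the RTOS one: an upper application layer talks to a lower driver layer only through a message queue, never by calling into it directly.

```python
# Illustrative RTOS-style layering: the application layer receives sensor
# readings from a "driver" layer only via a message queue, never directly.
import queue
import threading

sensor_q = queue.Queue()

def driver_thread(n_samples):
    """Lower layer: produces sensor readings (stubbed here)."""
    for i in range(n_samples):
        sensor_q.put(20.0 + i)   # pretend temperature samples: 20, 21, ...
    sensor_q.put(None)           # sentinel: no more data

def app_thread(results):
    """Upper layer: consumes readings via the queue, applies logic."""
    while True:
        sample = sensor_q.get()
        if sample is None:
            break
        results.append(sample > 22.0)  # e.g. an over-temperature flag

results = []
t1 = threading.Thread(target=driver_thread, args=(4,))
t2 = threading.Thread(target=app_thread, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [False, False, False, True]
```

The point of the layering is that the driver can later be swapped (real hardware, simulation, test harness) without touching application code — the queue is the only contract between the two.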

URL: https://www.hitexarmconference.co.uk/training-courses

Not forgetting the important bits...

The conference opens at 8.30 on 24th November at the Delta Hotel Warwick. With parking, breakfast, lunch and the infamous goody bag all included free of charge, make sure you secure your seat today.


1669433444726.png



 
  • Like
  • Fire
Reactions: 7 users
Cracking night catching up with the Perth Brainchip crew at the Hilton. Thanks to @Earlyrelease for organising it, much appreciated, thanks to Tony Dawes for taking to time to come and meet with us all, and it was great to (re)meet everyone else.

This really is a great little company with massive potential, and I look forward to watching it grow with the fellow investors who I’ve met through this forum (thanks to @zeeb0t). Here’s to 2023 being OUR year!
So you go and have a beer with a top-line horse trainer; three drinks in, you're asking him about things in his stable and which horses to back. But a meeting with Tony, and what are you talking about? The weather.
 

buena suerte :-)

BOB Bank of Brainchip
View attachment 22915
Ok so a typical Friday SP retrace. Nevermind I shall shelve the celebratory Heineken Asahi candle combo as the reversal is yet to be confirmed. Instead I will partake in a quiet Panhead or several and not lament but quietly contemplate the exciting future that awaits all of us stoic holders. Happy weekend Chippers😎👍
Cheers Foxdog :cool:🍻
 
  • Like
  • Haha
Reactions: 4 users

buena suerte :-)

BOB Bank of Brainchip
Tony is an astute gentleman and a true professional. We talked about what had already been announced, and I asked him, as a retail investor in the company himself, whether he was excited about where we are heading, to which he responded yes. I like to think I’m a fairly good judge of character and he seemed very genuine in his response.
It was great catching up robsmark ...(y)🍻🍻 cheers
 
  • Like
  • Haha
Reactions: 6 users

TopCat

Regular


I’m really liking the sound of the M55. During the week I posted a few things about Cambridge Consultants, including their work with ARM and Prophesee. I just came across this Cortex-M55 page, which includes a video with Cambridge Consultants describing the M55 and their keyword detection trials. It's worth watching the whole video of about 7 minutes, but if not, go straight to about 6 minutes and listen to how incredible they think it is that it’s also possible to add vision with it, because it’s so good.

 
  • Like
  • Fire
  • Love
Reactions: 10 users

Slade

Top 20
I would like to make two basic suggestions for the BRN website.

1. I think it's time to take some authentic photos of BrainChip offices, tech and people instead of using standard stock photos like these:

1669438307281.png
1669438364721.png


2. Sort this photo out, as it's blurred and should be clear and sharp

1669438435806.png
 
  • Like
  • Fire
Reactions: 18 users

Moneytalks

Member
Yep, great getting together again over a few bevvies in Perth and meeting a few more of the long time holders🎉.
I'll add that in my conversation with Tony he mentioned the tenacity Chris Stevens has displayed in his role heading up worldwide sales.
He sounds pretty relentless in getting the ledger pumping👏👏.
Great to hear as a shareholder 👍
Enjoy your weekend Chippers!!
 
  • Like
  • Love
  • Fire
Reactions: 69 users

Shezza

Emerged
Exactly.

An additional undisclosed responsibility of Rob Telson's role is to give the 1000 eyes a hint of potential dots so we can continue drawing our Mona Lisa-esque-mother-of-all-dot-paintings. We are getting closer and closer to completing the dot painting. It's simply a matter of connecting each dot, one after the other, and Rob Telson is assisting us with this...

Of course... under ASX obligations, Rob can't simply give us these hints for free, and so must also throw a pile of shit on top of the REAL dots by liking industry-related but not necessarily Akida-related content, to create the image of fairness and of complying with ASX obligations.

We all know... Rob is the real MVP.

I wonder what his AI super power is.
@robsmark was Tony super excited for the company and its progress?
 

Newk R

Regular
Patience is the thing really. I remember 1966 vividly. Many years of patience and finally in 2010 I tasted sweet revenge. I'm hoping BRN at $5.00 can produce in me the same euphoria, but I doubt it.
Floreat Pica
;)😊
I Just noticed that my post above was on page 1892. The mighty 'pies played their first game in 1892. I'm now selling my BRN shares at $18.92.
 
  • Like
  • Haha
  • Love
Reactions: 30 users
I do not recall this being posted before, and the date of release suggests not, so guess which space program is using a COTS anomaly detection SNN on space missions:

Small Business Innovation Research/Small Business Tech Transfer
Neuromorphic Spacecraft Fault Monitor, Phase II
Completed Technology Project (2020 - 2022)
Project Introduction
The goal of this work is to develop a low power machine learning anomaly detector. The low power comes from the type of machine learning (Spiking Neural Network (SNN)) and the hardware the neuromorphic anomaly detector runs on. The ability to detect and react to anomalies in sensor readings on board resource constrained spacecraft is essential, now more than ever, as enormous satellite constellations are launched and humans push out again beyond low Earth orbit to the Moon and beyond.

Spacecraft are autonomous systems operating in dynamic environments. When monitored parameters exceed limits or watchdog timers are not reset, spacecraft can automatically enter a 'safe' mode where primary functionality is reduced or stopped completely. During safe mode the primary mission is put on hold while teams on the ground examine dozens to hundreds of parameters and compare them to archived historical data and the spacecraft design to determine the root cause and what corrective action to take. This is a difficult and time consuming task for humans, but can be accomplished faster, in real-time, by machine learning. As humans travel away from Earth, light travel time delays increase, lengthening the time it takes for ground crews to respond to a safe mode event. The few astronauts onboard will have a hard time replacing the brain power and experience of a team of experts on the ground. Therefore, a new approach is needed that augments existing capabilities to help the astronauts in key decision moments.

We provide a new machine learning approach that recognizes nominal and faulty behavior, by learning during integration, test, and on-orbit checkout. This knowledge is stored and used for anomaly detection in a low power neuromorphic chip and continuously updated through regular operations. Anomalies are detected and context is provided in real-time, enabling both astronauts onboard, and ground crews on Earth, to take action and avoid potential faults or safe mode events.
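Purely to illustrate the "learn nominal behaviour during checkout, then monitor in real time" idea described above — this is a toy Python sketch, not the NSFM software, and not an SNN; every name in it is invented — the shape of such a detector is roughly:

```python
# Toy "learn-then-monitor" telemetry anomaly detector.
# Learns per-sensor nominal bounds (mean +/- 3 sigma) from known-good
# checkout data, then flags readings outside those bounds in operations.
import statistics

class TelemetryMonitor:
    def __init__(self, sigma_band=3.0):
        self.sigma_band = sigma_band
        self.bounds = {}  # sensor name -> (low, high)

    def learn_nominal(self, sensor, readings):
        """Fit nominal bounds from known-good checkout readings."""
        mean = statistics.fmean(readings)
        sigma = statistics.pstdev(readings)
        self.bounds[sensor] = (mean - self.sigma_band * sigma,
                               mean + self.sigma_band * sigma)

    def check(self, sensor, value):
        """Return True if the reading is anomalous for this sensor."""
        low, high = self.bounds[sensor]
        return not (low <= value <= high)

mon = TelemetryMonitor()
mon.learn_nominal("bus_voltage", [28.0, 28.1, 27.9, 28.05, 27.95])
print(mon.check("bus_voltage", 28.02))  # within band -> False
print(mon.check("bus_voltage", 31.5))   # out of band -> True
```

The real system obviously does far more (an SNN learning temporal patterns across hundreds of correlated parameters, on neuromorphic hardware), but the learn-on-good-data / flag-deviations structure is the same.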
Anticipated Benefits
The software developed in Phase II can potentially be used by NASA for anomaly detection onboard the ISS, the planned Lunar Gateway, and future missions to Mars. The NSFM software can also be used by ground crews to augment their ability to monitor spacecraft and astronaut health telemetry once it reaches the ground. The NSFM software can furthermore be used during integration and test to better inform test operators of the functionality of the system during tests in real time.
The software developed in Phase II can potentially be used for anomaly detection onboard any of the new large constellations planned by private companies. It can also be applied to crewed space missions, deep space probes, UUVs, UAVs, and many industrial applications on Earth. The NSFM software developed in Phase II can also be used during Integration and Test of any commercial satellite.


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 46 users
The following paper is a comprehensive argument in support of the adoption of spiking neural network neuromorphic computing if robotics is to attain its potential.

It is a MUST READ but not an urgent read.

It does become technical at times but I have just skim read it while watching TV in about 20 minutes and can say the AKIDA technology is at least addressing 90% of the issues where Von Neumann compute is presently failing.

I suspect it might address 100% but my technophobia comes into play when they discuss some of the issues so have to concede I have an absence of requisite knowledge.

The final paragraphs are extracted here and are a useful taste of how the paper argues the SNN case:

“Outlook

Embodied neuromorphic intelligent agents are on their way. They promise to interact more smoothly with the environment and with humans by incorporating brain-inspired computing methods. They are being designed to take autonomous decisions and execute corresponding actions in a way that takes into account many different sources of information, reducing uncertainty and ambiguity from perception, and continuously learning and adapting to changing conditions.
In general, the overall system design of traditional robotics and even current neuromorphic approaches is still far from any biological inspiration. A real breakthrough in the field will happen if the whole system design is based on biological computational principles, with a tight interplay between the estimation of the surroundings and the robot’s own state, and decision making, planning and action. Scaling to more complex tasks is still an open challenge and requires further development of perception and behaviour, and further co-design of computational primitives that can be naturally mapped onto neuromorphic computing platforms and supported by the physics of its electronic components. At the system level, there is still a lack of understanding on how to integrate all sensing and computing components in a coherent system that forms a stable perception useful for behaviour. Additionally, the field is lacking a notion of how to exploit the intricate non-linear properties of biological neural processing systems, for example to integrate adaptation and learning at different temporal scales. This is both on the theory/algorithmic level and on the hardware level, where novel technologies could be exploited, for such requirements.
The roadmap towards the success of neuromorphic intelligent agents encompasses the growth of the neuromorphic community with a cross-fertilisation with other research communities, as discussed in Box 5, Box 6.
The characteristics of neuromorphic computing technology so far have been demonstrated by proof of concept applications. It nevertheless holds the promise to enable the construction of power-efficient and compact intelligent robotic systems, capable of perceiving, acting, and learning in challenging real-world environments. A number of issues need to be addressed before this technology is mature to solve complex robotic tasks and can enter mainstream robotics. In the short term, it will be imperative to develop user-friendly tools for the integration and programming of neuromorphic devices to enable a large community of users and the adoption of the neuromorphic approach by roboticists. The path to follow can be similar to the one adopted by robotics, with open source platforms and development of user-friendly middleware. Similarly, the community should rely on a common set of guiding principles for the development of intelligence using neural primitives. New information and signal processing theories should be developed following these principles also for the design of asynchronous, event-based processing in neuromorphic hardware and neuronal encoding circuits. This should be done with the cross-fertilisation of the neuromorphic community with computational neuroscience and information theory; furthermore interaction with materials and (soft-)robotics communities will better define the application domain and the specific problems for which neuromorphic approaches can make a difference. Eventually, the application of a neuromorphic approach to robotics will find solutions that are applicable in other domains, such as smart spaces, automotive, prosthetics, rehabilitation, and brain-machine interfaces, where different types of signals may need to be interpreted, to make behavioural decisions and generate actions in real-time”


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 43 users
If anyone like Blind Freddie is excited by Prophesee and Brainchip, here is a paper from 2017 proving the advantage to be had in combining an event-based sensor with, effectively, a software version of an SNN processor using a huge 16 neurons for object avoidance in robotics:


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 16 users
And does anyone remember the recent Edge Impulse presentation where the presenter described AKIDA as science fiction? Well, do you also remember he said that they had hooked up Nvidia Jetson with an Indian client to count traffic at an intersection?

Well, the following paper makes clear that Brainchip and Prophesee would have been a far better choice:


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Deadpool

Did someone say KFC
Hey @factfinder, reminds me of Arthur C. Clarke's 1968 2001: A Space Odyssey, which described this very proposal:
HAL 9000, the Heuristically programmed ALgorithmic computer.


Science fiction becoming actuality
 
  • Like
  • Love
Reactions: 8 users
This is part of what Carnegie Mellon submitted in response to the following RFI:

The White House Office of Science and Technology Policy – on behalf of the National Science and Technology Council's Select Committee on Artificial Intelligence and Machine Learning and AI Subcommittee, the National AI Initiative Office, and the Networking and Information Technology Research and Development National Coordination Office – released a Request for Information (RFI) on February 2, 2022, to request input on updating the National Artificial Intelligence Research and Development Strategic Plan. The RFI was published in the Federal Register and the comment period was open from February 2, 2022, through March 4, 2022.
This document contains the 63 responses received from interested parties. In accordance with the RFI instructions, only the first 10 pages of content were considered for each response.

“Recommendations of AI Research Focus Areas to Create Solutions to Major Societal Challenges
The National AI Research and Development Strategic plan can catalyze innovations in both fundamental discoveries and applications that address specific societal challenges. Progress towards realizing this potential can be realized by collaborative efforts in the following areas.
77

Foster Interagency Collaboration to Ensure America Leads in Enabling Distributed Artificial Intelligence
The U.S. should lead a bold transformative agenda over the next five years to enable AI to evolve from highly structured and controlled, centralized architectures to more adaptive and pervasively distributed ones that autonomously fuse AI capability among the enterprise, the edge, and across AI systems and sensors embedded on-platform. CMU terms this revolutionary architectural advance as AI Fusion. The vision is built upon plans for a cohesive research advancing capabilities in microelectronics, AI frameworks and algorithms and innovations in federated learning in the AI fabric and abstraction layers.
Building a community research roadmap for distributed AI will address several critical challenges for the growth of AI, challenges that cut across agency-specific missions. The ability to enable distributed AI at the edge will minimize the dependence on aggregating and engineering massive data sets and reduce the need to “move the data to the algorithms” as well as the inherent challenges associated with the need for continuous high-bandwidth connectivity. Research in this area will also greatly enhance the capacity to address privacy and security challenges. It is dependent on, will contribute to and will benefit from the national computing infrastructure initiatives launched by the NAIIO.
Most critically, an AI Fusion research agenda will contribute to the network of AI institutes by enabling a host of applications emerging from increased convergence across AI-enabled cyber and physical systems. This convergence is vital to the viability of applications in commercial, military and national security domains. AI Fusion, for example, will be a critical contribution to the Department of Defense’s (DOD) focus on Multi-Domain Operations. It will also enhance the potential for advances in smart city applications and AI breakthroughs aiding manufacturing, energy, health care, education and agricultural innovations. A focus on AI Fusion should operate synergistically with national initiatives in microelectronics and tie directly with research and innovation efforts aimed at enhancing, protecting and hardening critical U.S. supply chains.
Initiate Research to Engineer AI into Societal Systems
While fundamental advances are needed in AI science, advances in engineering AI into systems of societal importance are vital to realize the full impact on major national missions. Engineering AI into such systems will be essential to transform U.S. manufacturing and enhance infrastructure and energy systems to meet critical national economic and societal goals.
Engineering AI will require the design, development and deployment of new use-inspired AI algorithms and methodologies, targeted to real-world applications and possessing enhanced scalability, robustness, fairness, security, privacy and policy impact. Advancing Engineering AI will also require new hardware and software systems, including cloud, edge and device computing infrastructures that sense and store the vast amounts of data collected in the real world and that enable devices to access and transmit this data from anywhere, to anywhere, in secure and private ways. Foundational research for Engineering AI is needed to enable the deployment of the highest performing and most energy-efficient AI systems. Such systems will
78

require architecting new hardware and computing frameworks; designing faster, more powerful and efficient integrated circuits; and developing sensing modalities to support data collection, storage and processing of the data deluge.
In addition, Carnegie Mellon recognizes that research on Engineering AI must include a focus on creating trust not only from a technical standpoint but from the system of stakeholders interacting with the AI system — be it in education, infrastructure or climate. Users and communities have to trust the system that is allocating resources and making decisions.
Potential applications and use cases for Engineering AI include autonomous infrastructure systems (AIS) that can help create equitable, innovative and economically sustainable communities. AIS technology could, for example, include initiatives integrating food delivery, the tracking of goods while preserving privacy and tools to improve mobility. Engineering AI will be key to the digital transformation of manufacturing in the U.S., including robotics for manufacturing, development of a timely and trustworthy supply chain and additive manufacturing. Engineering AI also has the potential to revolutionize how electricity is produced, distributed and consumed. It can provide insights to improve electricity distribution through demand forecasting, load management and community governance, as well as to innovate new energy storage solutions, control pollutants and advance wind, solar and nuclear energies.”


If you have read the above, you too may think that there is a lot more to the relationship between BrainChip and Carnegie Mellon than we first thought.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 43 users

dippY22

Regular
Was browsing the Computer Science area at the Carnegie Mellon U website. Not sure I've seen this posted before... doesn't mean it hasn't, I just don't recall.

The class agenda in one of the Computer Science / Machine Learning courses ....
 

Attachments

  • Screenshot 2022-11-26 3.22.05 PM.png
    Screenshot 2022-11-26 3.22.05 PM.png
    202.3 KB · Views: 224
  • Like
  • Fire
  • Love
Reactions: 34 users

Dallas

Regular
 
  • Like
  • Fire
Reactions: 15 users

Dallas

Regular
😉🤠🤔
 
  • Like
  • Fire
  • Love
Reactions: 10 users
This journey we are on is becoming very, very positive, exciting, thrilling and downright amazing (did someone say KFC).
 
  • Like
  • Haha
  • Love
Reactions: 12 users