BRN Discussion Ongoing

BRN's Akida seems to be in space, so are we just giving the technology away?
Where's the money, honey?
No REVENUE and no IP license.
Can't wait for 2030.
They haven't even turned it on yet to see if it works. If it is a partnership, I am thinking we get a percentage of services rendered. Still a way to go yet. From memory it will be turned on about 4 weeks after the payload is put into orbit, then testing, then hopefully they have some jobs lined up. IMO.

SC
 
  • Like
  • Love
Reactions: 5 users

Frangipani

Top 20
A new Brains & Machines podcast is out - the latest episode’s guest was Amirreza Yousefzadeh, who co-developed the digital neuromorphic processor SENECA at imec in Eindhoven, the Netherlands, before joining the University of Twente as an assistant professor in February 2024.




Here is the link to the podcast and its transcript:



BrainChip is mentioned a couple of times.

Throughout 2017, towards the end of his PhD, Yousefzadeh collaborated with our company as an external consultant. His PhD supervisor in Sevilla was Bernabé Linares-Barranco, who co-founded both GrAI Matter Labs and Prophesee. Yousefzadeh was one of the co-inventors (alongside Simon Thorpe) of the two JAST patents that were licensed to BrainChip (Bernabé Linares-Barranco was a co-inventor of one of them, too).

https://thestockexchange.com.au/threads/all-roads-lead-to-jast.1098/

30FFF075-2A07-4AA0-BCF5-8EBAC8D0C035.jpeg



Here are some interesting excerpts from the podcast transcript, some of them mentioning BrainChip:

5B683626-A2D8-4E8A-A900-8BAD606FFC36.jpeg

BBBDFD0C-E9FA-4CBD-B1C1-7A5EB89DB7FD.jpeg


E86D70C0-6B20-46B8-8803-F0A5005869AC.jpeg



(Please note that the original interview took place some time ago, when Amirreza Yousefzadeh was still at imec full-time, i.e. before February 2024 - podcast host Sunny Bains mentions this at the beginning of the discussion with Giulia D’Angelo and Ralph Etienne-Cummings, and again towards the end of the podcast, when there is another short interview update with him.)


4167AF2B-C3EE-471B-8C21-9B299EC9B16C.jpeg



Sounds like we can finally expect a future podcast episode featuring someone from BrainChip? 👍🏻
(As for the comment that “they also work on car tasks with Mercedes, I think”, I wouldn’t take that as confirmation of any present collaboration, though…)


E1D11422-2621-442A-85C2-BB9AE42F3BA3.jpeg



Interesting comment on prospective customers not enamoured of the term “neuromorphic”… 👆🏻



DB5EBAA9-1B3A-493C-96F2-721BC523B4C6.jpeg

C5B7ABDD-D6C9-44DC-A36C-E35603D34F69.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 40 users

Mt09

Regular
Hopefully ESA/Frontgrade will share/sell their hardware (due to tape out later this year) for class 4/5 missions - it should fit the bill for ANT61’s requirements.

Plus we look like being included on some future hardware for class 1 (safety-critical) missions, according to Laurent Hili.

From 47 minutes in:



Well worth a read on defining safety-critical systems.

 
  • Like
  • Love
Reactions: 16 users

Makeme 2020

Regular
They haven't even turned it on yet to see if it works. If it is a partnership, I am thinking we get a percentage of services rendered. Still a way to go yet. From memory it will be turned on about 4 weeks after the payload is put into orbit, then testing, then hopefully they have some jobs lined up. IMO.

SC
So BRN MANAGEMENT is saying to customers: if it works you pay, but if it doesn't, you don't have to pay.
 
  • Like
Reactions: 1 users

Frangipani

Top 20
They haven't even turned it on yet to see if it works. If it is a partnership, I am thinking we get a percentage of services rendered. Still a way to go yet. From memory it will be turned on about 4 weeks after the payload is put into orbit, then testing, then hopefully they have some jobs lined up. IMO.

SC



A684F2BE-C53B-45D5-9440-57A65767A8A7.jpeg



Teaching us the virtue of patience is definitely something space and the stock market have in common… 😉
 
  • Like
  • Fire
  • Love
Reactions: 18 users
Doesn’t matter how old I get. I will not sell before we reach a price that is acceptable! If I don’t reach that goal, the shares will go to my family, and so on. I’m not in this to sell for pennies.
That's the spirit 7! Cheers! 👍

20240408_192419.jpg
 
  • Haha
  • Like
Reactions: 8 users

Frangipani

Top 20
A new Brains & Machines podcast is out - the latest episode’s guest was Amirreza Yousefzadeh, who co-developed the digital neuromorphic processor SENECA at imec in Eindhoven, the Netherlands, before joining the University of Twente as an assistant professor in February 2024.




Here is the link to the podcast and its transcript:



BrainChip is mentioned a couple of times.

Throughout 2017, towards the end of his PhD, Yousefzadeh collaborated with our company as an external consultant. His PhD supervisor in Sevilla was Bernabé Linares-Barranco, who co-founded both GrAI Matter Labs and Prophesee. Yousefzadeh was one of the co-inventors (alongside Simon Thorpe) of the two JAST patents that were licensed to BrainChip (Bernabé Linares-Barranco was a co-inventor of one of them, too).

https://thestockexchange.com.au/threads/all-roads-lead-to-jast.1098/

View attachment 60489


Here are some interesting excerpts from the podcast transcript, some of them mentioning BrainChip:

View attachment 60487
View attachment 60490

View attachment 60491


(Please note that the original interview took place some time ago, when Amirreza Yousefzadeh was still at imec full-time, i.e. before February 2024 - podcast host Sunny Bains mentions this at the beginning of the discussion with Giulia D’Angelo and Ralph Etienne-Cummings, and again towards the end of the podcast, when there is another short interview update with him.)


View attachment 60492


Sounds like we can finally expect a future podcast episode featuring someone from BrainChip? 👍🏻
(As for the comment that “they also work on car tasks with Mercedes, I think”, I wouldn’t take that as confirmation of any present collaboration, though…)


View attachment 60493


Interesting comment on prospective customers not enamoured of the term “neuromorphic”… 👆🏻



View attachment 60494
View attachment 60495

And as for patience required regarding product timelines:

CDDDCFD1-5A84-46BA-B6B6-D18BD3D3053C.jpeg


B777DC7E-C825-4598-B345-49E97C2F33CD.jpeg
 
  • Like
  • Fire
Reactions: 16 users
Interesting group and workshop.

Haven't checked if this has been posted already. Apols if it has.

We're being utilised in the workshop as well, as are Prophesee, iniVation and some others.

Equipment is apparently being provided by Neurobus & WSU as organisers... they obviously have access to Akida... wonder what else they've been doing with it outside of the workshop :unsure:



SPA24: Neuromorphic systems for space applications​



Topic leaders​

  • Gregory Cohen | WSU
  • Gregor Lenz | Neurobus

Co-organizers​

  • Alexandre Marcireau | WSU
  • Giulia D’Angelo | fortiss
  • Jens Egholm | KTH

Invited Speakers​

  • Prof Matthew McHarg - US Air Force Academy
  • Dr Paul Kirkland - University of Strathclyde
  • Dr Andrew Wabnitz - DSTG

Goals​

The use of neuromorphic engineering for applications in space is one of the most promising avenues to successful commercialisation of this technology. Our topic area focuses on the use of neuromorphic technologies to acquire and process space data captured from the ground and from space, and the exploration and exploitation of neuromorphic algorithms for space situational awareness and navigation. The project combines the academic expertise of Western Sydney University in Australia (Misha Mahowald Prize 2019) and industry knowledge of Neurobus, a European company specialising in neuromorphic space applications. Neuromorphic computing is a particularly good fit for this domain due to its novel sensor capabilities, low energy consumption, its potential for online adaptation, and algorithmic resilience. Successful demonstrations and projects will substantially boost the visibility of the neuromorphic community as the domain is connected to prestigious projects around satellites, off-earth rovers, and space stations. Our goal is to expose the participants to a range of challenging real-world applications and provide them with the tools and knowledge to apply their techniques where neuromorphic solutions can shine.

Projects​

  • Algorithms for processing space-based data: The organisers will make a real-world space dataset available that was recorded from the ISS, specifically for the purpose of this workshop. In addition, data can be recorded with WSU's remote-controlled observatory network. There are exciting discoveries to be made using this unexplored data, especially when combined with neuromorphic algorithmic approaches.
  • Processing space-based data using neuromorphic computing hardware: Using real-world data, from both space and terrestrial sensors, we will explore algorithms for star tracking, stabilisation, feature detection, and motion compensation on neuromorphic platforms such as Loihi, SpiNNaker, BrainChip, and Dynap. Given that the organisers are involved in multiple future space missions, the outcomes of this project may have a significant impact on a future space mission (and could even be flown to space!)
  • Orbital collision simulator: The growing number of satellites in the Earth’s orbit (around 25,000 large objects) makes it increasingly likely that objects will collide, despite our growing Space Situational Awareness capabilities to monitor artificial satellites. The fast and chaotic cloud of flying debris created during such collisions can threaten more satellites and is very hard to track with conventional sensors. By smashing Lego “cubesats” in front of a neuromorphic camera, we can emulate satellite collisions and assess the ability of the sensor to track the pieces and predict where they will land. We will bring Lego kits to the workshop. Participants will need to design a “collider”, build cubesats out of Lego, record data, measure the position of fallen pieces, and write algorithms to process the data.
  • High-speed closed-loop tracking with neuromorphic sensors: Our motorised telescope can move at up to 50° per second and can be controlled by a simple API. The low latency of event cameras makes it possible to dynamically control the motors using visual feedback to keep a moving object (bird, drone, plane, satellite...) in the centre of the field of view. The algorithm can be tested remotely with the Astrosite observatory (located in South Australia) or with the telescope that we will bring to the workshop.
  • Navigation and landing: Prophesee’s GenX320 can be attached to a portable ST board and powered with a small battery. To simulate a landing of a probe on an extraterrestrial body, we attach the camera to an off-the-shelf drone for the exploration of ventral landing, optical flow and feature tracking scenarios, as well as predicting the distance to the ground to avoid dangerous landmarks.
  • High-speed object avoidance: The goal is to work on an ultra-low-latency vision pipeline to avoid incoming objects in real-time, simulating threats in the form of orbital debris. This will involve a robotic element added to the orbital collision simulator.


Materials, Equipment, and Tutorials:

We're going to make available several pieces of equipment, including telescopes to record the night sky, different event cameras from Prophesee and iniVation, a Phantom high-frame-rate camera for comparison, and neuromorphic hardware such as BrainChip's Akida and SynSense's Speck. ICNS will also provide access to their Astrosite network of remote telescopes, as well as their new DeepSouth cluster.

We will run hands-on sessions on neuromorphic sensing and processing in space, building on successful tutorials from space conferences, providing code and examples for projects, and training with neuromorphic hardware. Experts in space imaging, lightning, and high-speed phenomena detection will give talks, focusing on neuromorphic hardware's potential to address current shortcomings. The workshop will feature unique data from the International Space Station, provided by WSU and USAFA, marking its first public release, allowing participants to develop new algorithms for space applications and explore neuromorphic hardware's effectiveness in processing this data for future space missions. Additionally, various data collection systems will be available, including telescope observation equipment, long-range lenses, tripods, a Phantom high-speed camera, and WSU's Astrosite system for space observations. Neurobus will make neuromorphic hardware available on site. This setup facilitates experiments, specific data collection for applications, experimenting with closed-loop neuromorphic systems, and online participation in space observation topics due to the time difference.
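
As an aside on the closed-loop tracking project above: the control idea is essentially an event-driven visual-servo loop. Here is a minimal sketch of my own of how such a loop could look (the `telescope` object, its `set_slew_rate()` method and the event-stream format are hypothetical assumptions for illustration, not the organisers' actual API):

```python
import numpy as np

# Hypothetical telescope interface and event stream, purely for illustration.
IMAGE_CENTRE = np.array([640.0, 360.0])   # pixel coordinates of the optical centre
MAX_SLEW_DEG_PER_S = 50.0                 # the mount is quoted at up to 50° per second
GAIN = 0.05                               # proportional gain: deg/s per pixel of error

def track_step(events_xy, telescope):
    """One iteration of a simple proportional visual-servo loop.

    events_xy: (N, 2) array of pixel coordinates of recent camera events.
    telescope: assumed object exposing set_slew_rate(az_rate, alt_rate).
    """
    if len(events_xy) == 0:
        return  # no events in this time window; keep the current slew rates

    # Crude target estimate: centroid of recent events.
    target = events_xy.mean(axis=0)

    # Pixel error between the target and the centre of the field of view.
    error = target - IMAGE_CENTRE

    # Proportional control, clipped to the mount's maximum slew rate.
    rates = np.clip(GAIN * error, -MAX_SLEW_DEG_PER_S, MAX_SLEW_DEG_PER_S)
    telescope.set_slew_rate(az_rate=rates[0], alt_rate=rates[1])
```

In practice you would cluster or filter the events (hot pixels, background stars) before taking a centroid, but the feedback loop itself really is that simple - the event camera's low latency is what makes it viable at these slew rates.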
 
  • Like
  • Fire
  • Love
Reactions: 45 users

Frangipani

Top 20
Interesting video buddy.
They mention in the video.
BRAINSHIP AND AKITA.
Love your work.

You forgot Brainshift! 🤣

88CF5075-3797-4020-ADF7-A68F5B3550A8.jpeg



That Japanese dog Akita

Speaking of Akita: While the image of that puppy (or is it an adult dog?) featured on BrainChip’s website is most likely supposed to be a picture of an Akita Inu for obvious reasons…

EC2C3592-5057-49ED-92E4-6E77475A2FB3.jpeg


… I reckon they may have mixed up the dog breeds - looks more like a Shiba Inu (right) to me - the smaller cousin of the Akita Inu (left).

D6B19184-EA9D-4931-8C58-8B62812ADFB8.jpeg


Then again, I hardly know anything about canines, let alone Japanese breeds, unlike the curvaceous host of Animal Watch, “YouTube’s most popular Wolf and Alpha Dog Breed Channel” (I wonder how many of her 1.18 million followers are genuinely interested in the four-legged animals featured… 🤣):





HI (human Intelligence) created this video😂🙈

This is why the world needs AI 😂

On the other hand, the world still needs human intelligence to fact-check and proofread the AI’s creations:

8D01BCBB-68BB-4561-9455-400EB0FBB45D.jpeg



I first came across the above LinkedIn post in mid-March and incredibly this blunder has still not been corrected by the GenAI-enamoured authors one month to the day after they uploaded their case report on March 8! 😱
How utterly embarrassing for BOTH the authors AND those entrusted with the scientific peer review…

Go check it out for yourselves, it is still there for the world to see:


8BA7B64D-7D82-4107-92BF-1B24CD24950F.jpeg

B20DEE56-9667-4D70-8365-E82CBC1CCD0C.jpeg

Is that so?! 🤭
 
  • Haha
  • Wow
  • Like
Reactions: 8 users

Frangipani

Top 20
Interesting group and workshop.

Haven't checked if this has been posted already. Apols if it has.

We're being utilised in the workshop as well, as are Prophesee, iniVation and some others.

Equipment is apparently being provided by Neurobus & WSU as organisers... they obviously have access to Akida... wonder what else they've been doing with it outside of the workshop :unsure:



SPA24: Neuromorphic systems for space applications​



Topic leaders​

  • Gregory Cohen | WSU
  • Gregor Lenz | Neurobus

Co-organizers​

  • Alexandre Marcireau | WSU
  • Giulia D’Angelo | fortiss
  • Jens Egholm | KTH

Invited Speakers​

  • Prof Matthew McHarg - US Air Force Academy
  • Dr Paul Kirkland - University of Strathclyde
  • Dr Andrew Wabnitz - DSTG

Goals​

The use of neuromorphic engineering for applications in space is one of the most promising avenues to successful commercialisation of this technology. Our topic area focuses on the use of neuromorphic technologies to acquire and process space data captured from the ground and from space, and the exploration and exploitation of neuromorphic algorithms for space situational awareness and navigation. The project combines the academic expertise of Western Sydney University in Australia (Misha Mahowald Prize 2019) and industry knowledge of Neurobus, a European company specialising in neuromorphic space applications. Neuromorphic computing is a particularly good fit for this domain due to its novel sensor capabilities, low energy consumption, its potential for online adaptation, and algorithmic resilience. Successful demonstrations and projects will substantially boost the visibility of the neuromorphic community as the domain is connected to prestigious projects around satellites, off-earth rovers, and space stations. Our goal is to expose the participants to a range of challenging real-world applications and provide them with the tools and knowledge to apply their techniques where neuromorphic solutions can shine.

Projects​

  • Algorithms for processing space-based data: The organisers will make a real-world space dataset available that was recorded from the ISS, specifically for the purpose of this workshop. In addition, data can be recorded with WSU's remote-controlled observatory network. There are exciting discoveries to be made using this unexplored data, especially when combined with neuromorphic algorithmic approaches.
  • Processing space-based data using neuromorphic computing hardware: Using real-world data, from both space and terrestrial sensors, we will explore algorithms for star tracking, stabilisation, feature detection, and motion compensation on neuromorphic platforms such as Loihi, SpiNNaker, BrainChip, and Dynap. Given that the organisers are involved in multiple future space missions, the outcomes of this project may have a significant impact on a future space mission (and could even be flown to space!)
  • Orbital collision simulator: The growing number of satellites in the Earth’s orbit (around 25,000 large objects) makes it increasingly likely that objects will collide, despite our growing Space Situational Awareness capabilities to monitor artificial satellites. The fast and chaotic cloud of flying debris created during such collisions can threaten more satellites and is very hard to track with conventional sensors. By smashing Lego “cubesats” in front of a neuromorphic camera, we can emulate satellite collisions and assess the ability of the sensor to track the pieces and predict where they will land. We will bring Lego kits to the workshop. Participants will need to design a “collider”, build cubesats out of Lego, record data, measure the position of fallen pieces, and write algorithms to process the data.
  • High-speed closed-loop tracking with neuromorphic sensors: Our motorised telescope can move at up to 50° per second and can be controlled by a simple API. The low latency of event cameras makes it possible to dynamically control the motors using visual feedback to keep a moving object (bird, drone, plane, satellite...) in the centre of the field of view. The algorithm can be tested remotely with the Astrosite observatory (located in South Australia) or with the telescope that we will bring to the workshop.
  • Navigation and landing: Prophesee’s GenX320 can be attached to a portable ST board and powered with a small battery. To simulate a landing of a probe on an extraterrestrial body, we attach the camera to an off-the-shelf drone for the exploration of ventral landing, optical flow and feature tracking scenarios, as well as predicting the distance to the ground to avoid dangerous landmarks.
  • High-speed object avoidance: The goal is to work on an ultra-low-latency vision pipeline to avoid incoming objects in real-time, simulating threats in the form of orbital debris. This will involve a robotic element added to the orbital collision simulator.


Materials, Equipment, and Tutorials:

We're going to make available several pieces of equipment, including telescopes to record the night sky, different event cameras from Prophesee and iniVation, a Phantom high-frame-rate camera for comparison, and neuromorphic hardware such as BrainChip's Akida and SynSense's Speck. ICNS will also provide access to their Astrosite network of remote telescopes, as well as their new DeepSouth cluster.

We will run hands-on sessions on neuromorphic sensing and processing in space, building on successful tutorials from space conferences, providing code and examples for projects, and training with neuromorphic hardware. Experts in space imaging, lightning, and high-speed phenomena detection will give talks, focusing on neuromorphic hardware's potential to address current shortcomings. The workshop will feature unique data from the International Space Station, provided by WSU and USAFA, marking its first public release, allowing participants to develop new algorithms for space applications and explore neuromorphic hardware's effectiveness in processing this data for future space missions. Additionally, various data collection systems will be available, including telescope observation equipment, long-range lenses, tripods, a Phantom high-speed camera, and WSU's Astrosite system for space observations. Neurobus will make neuromorphic hardware available on site. This setup facilitates experiments, specific data collection for applications, experimenting with closed-loop neuromorphic systems, and online participation in space observation topics due to the time difference.


Apology accepted… 😉

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-415663
 
  • Haha
  • Like
Reactions: 4 users
A new Brains & Machines podcast is out - the latest episode’s guest was Amirreza Yousefzadeh, who co-developed the digital neuromorphic processor SENECA at imec in Eindhoven, the Netherlands, before joining the University of Twente as an assistant professor in February 2024.




Here is the link to the podcast and its transcript:



BrainChip is mentioned a couple of times.

Throughout 2017, towards the end of his PhD, Yousefzadeh collaborated with our company as an external consultant. His PhD supervisor in Sevilla was Bernabé Linares-Barranco, who co-founded both GrAI Matter Labs and Prophesee. Yousefzadeh was one of the co-inventors (alongside Simon Thorpe) of the two JAST patents that were licensed to BrainChip (Bernabé Linares-Barranco was a co-inventor of one of them, too).

https://thestockexchange.com.au/threads/all-roads-lead-to-jast.1098/

View attachment 60489


Here are some interesting excerpts from the podcast transcript, some of them mentioning BrainChip:

View attachment 60487
View attachment 60490

View attachment 60491


(Please note that the original interview took place some time ago, when Amirreza Yousefzadeh was still at imec full-time, i.e. before February 2024 - podcast host Sunny Bains mentions this at the beginning of the discussion with Giulia D’Angelo and Ralph Etienne-Cummings, and again towards the end of the podcast, when there is another short interview update with him.)


View attachment 60492


Sounds like we can finally expect a future podcast episode featuring someone from BrainChip? 👍🏻
(As for the comment that “they also work on car tasks with Mercedes, I think”, I wouldn’t take that as confirmation of any present collaboration, though…)


View attachment 60493


Interesting comment on prospective customers not enamoured of the term “neuromorphic”… 👆🏻



View attachment 60494
View attachment 60495


Really a great episode. I listened to it yesterday but still haven't had enough time to copy my favourite quotes from the transcript.

One aspect that has become more pronounced over the last few episodes is that the three hosts (Sunny Bains, Giulia D'Angelo & Ralph Etienne-Cummings) seem to enjoy how the interviews are starting to shift from a more scientific, historical view of (mostly strictly analog) neuromorphic computing towards a more hands-on, commercial view of the current state of neuromorphic hardware.

In November '23 I emailed Sunny Bains to ask whether she had plans for, or might consider doing, an episode about the current fields/contexts in which neuromorphic or event-based computing has a specific advantage, and also about what their view is on things this technology might make possible that weren't possible before.

In her response she couldn't share further details about what potential future episodes of "Brains & Machines" might cover, but she mentioned that she was currently writing a book about "neuromorphic engineering, but unfortunately that will not come out until the end of next year or even 2025".
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Frangipani

Top 20
One aspect that has become more pronounced over the last few episodes is that the three hosts (Sunny Bains, Giulia D'Angelo & Ralph Etienne-Cummings) seem to enjoy how the interviews are starting to shift from a more scientific, historical view of (mostly strictly analog) neuromorphic computing towards a more hands-on, commercial view of the current state of neuromorphic hardware.

It’s a good thing that Giulia D’Angelo will soon get to witness AKD1000 in action (if she hasn’t done so already), as she has been added as one of the co-organisers for the Telluride 2024 Topic Area Neuromorphic Systems for Space Applications:


B1421045-F61C-4BAC-B418-B77078BC8226.jpeg


She recently moved from Genoa to Munich to work as Senior Researcher at fortiss.

953BD308-503B-464B-8FF8-A0D66A0BBC4A.jpeg




The fortiss Neuromorphic Lab was one of more than 100 partner institutions in the EU-funded Human Brain Project (> SpiNNaker) that ran from 2013 to 2023. It has also been doing lots of projects based on Loihi in recent years (as well as collaborating with IBM on at least one project). While I have yet to come across any proof of fortiss researchers utilising Akida, I noticed they have at least been aware of BrainChip’s tech since 2020, as evidenced by their 2020 Annual Report:



A01DE0E3-3F0C-4819-93AC-81FAA0B52516.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Tothemoon24

Top 20

IMG_8759.jpeg

Revolutionizing Space Infrastructure: ANT61 Smart Robots​


April 8, 2024 – Mikhail Asavkin
Space infrastructure plays a key role in maintaining civilization on Earth. From moving goods around the globe and monitoring the effects of global warming, such as forest fires and floods, to everyday transactions, such as buying a donut at your favorite cafe, our everyday lives depend on satellites in orbit.
At ANT61, we want to make sure these satellites are still there when you need them. As humanity evolves, the importance of space infrastructure will only grow, and it’s about time we started using the same approach to maintaining it. On Earth, if something breaks down in a factory or a power plant, we don’t build a replacement. Instead, we send a repair team with spare parts, and better yet, we monitor component wear and replace parts before they break down.
At ANT61, we create technology that enables us to use the same common-sense approach for space infrastructure.
Historically, there have been two obstacles to applying this approach in space. First, when something breaks down in orbit, it’s extremely difficult to send a crew out to understand what went wrong, and dead satellites can’t call home to explain what happened. Second, it’s way too expensive to send humans to repair something in space. One notable exception to this rule was the series of multi-billion-dollar Space Shuttle missions to refurbish the Hubble Space Telescope.
What we do
ANT61 is providing solutions for both: our Beacon product allows satellite operators to understand what went wrong with their satellite and restore it to operation. For larger and more expensive satellites, we are building robots that will dock, refuel and, in the future, refurbish satellites, prolonging their useful life. At the core of these robots lies the ANT61 Brain™, an innovative device that combines machine vision and decision-making technology, enabling the autonomy of these maintenance robots. Autonomy is very important because, due to speed-of-light limitations, it won’t be possible to control every movement of these robots remotely from Earth.
The first generation of the ANT61 Brain uses the BrainChip Akida™ chip for power-efficient AI and is currently on board the Optimus-1 satellite, which was deployed recently by the SpaceX Transporter-10 mission. We will test the ANT61 Brain later this year and perform training and evaluation of various neural networks that we will use for future in-orbit servicing technology demonstrations.
We chose to partner with BrainChip because we believe that neuromorphic technology will bring the same exponential improvement in AI as the shift from CPU to GPU did 20 years ago, which opened the door to the deep neural network applications at the core of all AI technologies today. Neuromorphic technology is also perfect for space: the lower power consumption means less heat dissipation, and we can get up to five times more computational power for the same electrical power budget.
Our vision for the future
Humanity’s expansion to the cosmos requires infrastructure that can only be built by robots. With the in-orbit servicing experience, ANT61 will become the main supplier of the robotic workforce for Moon and Mars installations, enabling companies from Earth to project their ambition to space, providing valuable services and resources for future off-world factories and cities.
We believe that in 20 years, 10 billion people on Earth will be able to look up and see the city lights on the night side of the lunar crescent. The space industry will transform from a place for the elite few to one open to everyone.
If you are coming out of college or pondering a career change, now is a great time to join a space company.
 

  • Like
  • Fire
  • Love
Reactions: 60 users

Tothemoon24

Top 20
This is a great listen; a little snip of the discussion is below.

Click on link for full transcript

ML on the Edge with Zach Shelby Ep. 7 — EE Times' Sally Ward-Foxton on the Evolution of AI

ARTIFICIAL INTELLIGENCE
By Mike Senese, Apr 8, 2024

In this episode of Machine Learning on the Edge, host Zach Shelby is joined by EE Times senior reporter Sally Ward-Foxton, an electrical engineer with a rich background in technology journalism. Live from GTC 2024, they cover the ways that NVIDIA is impacting the AI space with its GPU innovation, and explore the latest technologies that are shaping the future of AI at the edge and beyond.

Watch or listen to the interview here or on Spotify. A full transcript can be read below.




Sally Ward-Foxton
Yeah, so let's say we have an accelerator from BrainChip or Syntiant or somebody, where it's a separate chip and it's alongside your CPU or your microcontroller. I think eventually a lot of that will go down the same route that you're talking about for cryptography: we'll go onto the SoC. It will be a block on the SoC, because that makes the most sense for embedded use cases, for power efficiency and integration and so on. We will get there eventually. Not today, but eventually.

Zach Shelby
Very interesting. We have a good example of what's happened in the industry now with this little box - this is just an AI-powered camera reference design that we helped build, with a bunch of different partner and customer cases. One of those being nature conservation. So it turns out that a lot of endangered species, poaching, human conflict, nature conservation cases like elephants in Africa, really can make use of computer vision, with AI, in the field. Deep in the forest, in the jungle, it needs to be left there for months at a time - very hard to go collect data. This has no acceleration. This is an STM32H7. So a high-end Cortex-M7, a lot of external memory, so we can put models that range from 10 to 20 megabytes in size, even, into this. And with software techniques - quantization, compression, re-architecting some of the ways that we do object detection - we can do fairly real-time object detection on this, and because it's a microcontroller architecture, we can do that for a year of battery life, with a certain number of PIR events per day where we capture images. And that's with no acceleration. So it's really interesting: as we get acceleration into these types of existing SoCs, say the next generation of ST microcontrollers has an accelerator, where's that going to get us and bring us, right? What's the kind of optimization we should be thinking about from a device manufacturer's point of view? Like, all right, I've got a camera. Yep. We don't have acceleration, we're going to do a little bit of AI. Now we're going to want to add acceleration for the next generation.

Sally Ward-Foxton
Yeah, I mean, if you're looking at reducing power, doing more efficient ML, you definitely need acceleration today, I guess it's a balance of can your application handle the extra cost that you're going to face?
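
For anyone wondering what the quantization and compression Zach mentions typically look like in practice, here's a minimal sketch using TensorFlow Lite's post-training full-integer quantization (my own illustration, not Edge Impulse's actual pipeline; the model path, input size and calibration data are placeholders):

```python
import numpy as np
import tensorflow as tf

# Load a trained detector (path is a placeholder for illustration).
converter = tf.lite.TFLiteConverter.from_saved_model("object_detector_savedmodel")

# Request full-integer (int8) quantization so the model fits MCU flash/RAM
# and runs in integer arithmetic on a Cortex-M class core.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # A few hundred typical inputs let the converter calibrate activation
    # ranges; random data is used here only as a stand-in.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("object_detector_int8.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```

The resulting int8 model is then usually compiled into a C array and executed with TensorFlow Lite Micro (or a vendor runtime) on the microcontroller itself.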
 
  • Like
  • Love
  • Thinking
Reactions: 25 users

Frangipani

Top 20

Contextual Computing Requires an AI-First Approach​

OPINION​

By Vikram Gupta 04.08.2024

The infusion of AI into internet of things (IoT) edge devices has the potential to realize the vision of intelligent, contextually aware computing that intuitively acts on our behalf based on seemingly a priori knowledge. As we move through this world, our engagement with technology is frictionless, fluid, productive and trusted. Without prompt, home automation systems know users’ habits and needs; factories know when maintenance is needed; emergency services deliver medical care promptly; agricultural lands have optimal yields; and ecologies are sustained—a pathway to a better world.

This is what’s possible when IoT islands are knit together intelligently, securely, reliably and cost effectively. We’re not there yet, but with an edge computing market set to grow rapidly in the next two years, the industry is accelerating in that direction.

AI-infused IoT

The promise of the IoT as a transformative force remains intact, but it has been slowed by several challenges: fragmented hardware and software ecosystems, user privacy concerns, a cloud-centric data processing model with slow response times, and unreliable connectivity. The infusion of AI at the edge addresses two of these issues by allowing decisions to be made quickly in situ, with no need to upload user data. This tackles the latency and privacy issues while making better use of available bandwidth and lowering power consumption by reducing the number of transmissions.

Given this, solutions that skillfully handle edge IoT data while ensuring the seamless integration of AI for enhanced contextual awareness and improved IoT features will surely gain popularity. This is why so many companies have been seeking to incorporate the value of AI as they deploy smarter IoT products across consumer, industrial and automotive markets.

Interest in doing so has only spiked with the rapid emergence of large language models (LLMs). While AI and ML have been evolving quickly within the context of the IoT, an exciting overlap between LLMs and edge AI is the emergence of small language models (SLMs) that will aid in the hyper-personalization of the end user and drive more compute to the edge.

Meanwhile, practical applications of AI-infused edge IoT are already gaining traction in areas like feature enhancement for home video devices. Edge AI can also optimize crowd flow, ensure security, and enhance user experiences in retail, public transportation, and entertainment venues through people counting and behavioral analysis.

AI paralysis

While the opportunities are compelling, companies’ ability to capitalize on them varies. Some product companies have data but don’t have AI models or even know where to begin developing them. Others are more sophisticated and have models but can’t deploy them effectively across unproven hardware and ever-shifting, incompatible tool suites. Others remain paralyzed in the face of a technological revolution that will up-end their business if they don’t act quickly.

While it makes devices more useful, edge AI adds complexity and further frustrates developers, customers and users. They all recognize the potential of intelligent, connected nodes to enhance the user experience but lack the know-how, tools, and infrastructure to capitalize upon this relatively new and exciting technology. The spike in edge AI interest has resulted in sporadic ad hoc solutions hitched to legacy hardware and software tools with development environments that don’t efficiently capitalize upon AI’s potential to address customer demand for AI enablement for applications they’ve yet to clarify.

This situation is untenable for developers and end users, and the issue comes into stark relief against a backdrop of AI compute being increasingly pushed from the data center to the edge in applications like healthcare and finance, where security and response time are paramount. Clearly, more needs to be done by the industry to improve the customer journey to enable intelligent edge products.

Close the AI gap: Concept to deployment

While the edge AI train has already left the station, will different nodes have different shades of intelligence? Logic would dictate that everything would be intelligent, yet the degree of intelligence depends on the application. Some intelligence might be externally visible as a product feature, but others may not.

Regardless, if everything is going to be intelligent, it would follow that AI shouldn’t be a bolt-on “feature,” but inherent in every IoT product. Getting customers from ideation to real-world deployment requires shifting from the currently fragmented ecosystem to a cohesive, AI-first approach to IoT edge device design. This will require several elements: scalable AI-native hardware, unified software, more adaptive frameworks, a partnership-based ecosystem and fully optimized connectivity. This is the only way developers can deploy AI at the edge at the requisite speed, power, performance, reliability, security and cost point required to take part in a future that is coming…fast.

Many necessary elements are already available thanks to work done over the years on applying AI to vision, audio, voice and time series data. However, processor scalability and multi-modal capability need more attention to enable cost-effective, contextually aware sensing across increasingly diverse applications. While current microcontrollers and microprocessors are each highly capable in their own right, a gap still exists for the right IoT processors with the right mix of power, performance and processing flexibility to ensure the right compute at each edge AI node.

These suitable IoT processors, combined with compatible wireless connectivity and supported by a cohesive infrastructure, software, development tools and a truly “AI first” approach to ease the customer journey, will unlock a number of intelligent IoT products to help improve our world.

—Vikram Gupta is SVP and GM of IoT processors and Chief Product Officer at Synaptics.
 
  • Like
  • Fire
  • Love
Reactions: 7 users

RobjHunt

Regular
Yep, got close to an ounce in nuggets over the weekend. A BRN nugget to end this week would be great. Just gotta clean all the ironstone off the specimens.
Great bloody darts mate!
 
  • Like
  • Haha
Reactions: 3 users

Fredsnugget

Regular
Great bloody darts mate!
Yep, that little lot will get me another 8800 BRN shares. I always knew gold would pay off as a great investment :)
 
  • Like
  • Fire
  • Love
Reactions: 17 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 6 users

IloveLamp

Top 20
1000014867.jpg
 
  • Like
  • Love
Reactions: 5 users