BRN Discussion Ongoing

Arayat

Emerged
Today a mini report was released by Cambrian AI covering the release of AKIDA Second Generation. The significance of this report cannot be overstated. Prior to reading it, I had held the opinion that AKIDA Third Generation would be the IP that allowed Brainchip and its shareholders to capitalise on the ChatGPT-obsessed technology world, but I was clearly wrong.

Thank you Cambrian AI for disclosing just what makes AKIDA Second Generation a potential love child of ChatGPT or, more generally, GenAI.

To explain why I hold the opinion that today's reveal by Cambrian AI is of such significance, I have set out here a series of FACTS which I believe justify my opinion.


From CNN —
The crushing demand for AI has also revealed the limits of the global supply chain for powerful chips used to develop and field AI models.
The continuing chip crunch has affected businesses large and small, including some of the AI industry’s leading platforms and may not meaningfully improve for at least a year or more, according to industry analysts.
The latest sign of a potentially extended shortage in AI chips came in Microsoft's annual report recently. The report identifies, for the first time, the availability of graphics processing units (GPUs) as a possible risk factor for investors.
GPUs are a critical type of hardware that helps run the countless calculations involved in training and deploying artificial intelligence algorithms.
“We continue to identify and evaluate opportunities to expand our datacenter locations and increase our server capacity to meet the evolving needs of our customers, particularly given the growing demand for AI services,” Microsoft wrote. “Our datacenters depend on the availability of permitted and buildable land, predictable energy, networking supplies, and servers, including graphics processing units (‘GPUs’) and other components.”
Microsoft’s nod to GPUs highlights how access to computing power serves as a critical bottleneck for AI. The issue directly affects companies that are building AI tools and products, and indirectly affects businesses and end-users who hope to apply the technology for their own purposes.
OpenAI CEO Sam Altman, testifying before the US Senate in May, suggested that the company’s chatbot tool was struggling to keep up with the number of requests users were throwing at it.
“We’re so short on GPUs, the less people that use the tool, the better,” Altman said. An OpenAI spokesperson later told CNN the company is committed to ensuring enough capacity for users.
The problem may sound reminiscent of the pandemic-era shortages in popular consumer electronics that saw gaming enthusiasts paying substantially inflated prices for game consoles and PC graphics cards. At the time, manufacturing delays, a lack of labor, disruptions to global shipping and persistent competing demand from cryptocurrency miners contributed to the scarce supply of GPUs, spurring a cottage industry of deal-tracking tech to help ordinary consumers find what they needed.


Cambrian AI's mini paper on Brainchip and Second Gen AKIDA can be found via Cambrian AI's homepage at:
https://cambrian-ai.com/
On their homepage they state that they work with the following companies: Cerebras, Esperanto Technologies, Graphcore, IBM, Intel, NVIDIA, Qualcomm, Tenstorrent, SiFive, Simple Machines, Synopsys, Cadence and others.

In their mini paper they say many nice things, but the significant parts, in my opinion, are found in the concluding paragraph, which I have broken up into three points:


1. The second-generation Akida processor IP is available now from BrainChip for inclusion in any SoC and comes complete with a software stack tuned for this unique architecture.

2. We encourage companies to investigate this technology, especially those implementing time series or sequential data applications.

3. Given that GenAI and LLMs generally involve sequence prediction, and advances made for pre-trained language models for event-based architectures with SpikeGPT, the compactness and capabilities of BrainChip’s TENNs and the availability of Vision Transformer …second-generation Akida could facilitate more GenAI capabilities at the Edge.


In Summary:

There is a massive shortage of GPU chips to run GenAI, and this shortage is going to extend for at least the next year and a half.

Cambrian AI has told the world that it should be looking at AKIDA Second Generation to facilitate GenAI capabilities at the Edge.

Even if Cambrian AI only passes on this opinion to its existing customer base, that is still a very large portion of the technology market in which AKIDA Second Generation could profitably be adopted.


As we all should know, AKIDA technology reduces bandwidth by processing data collected at the EDGE into relevant metadata, and in so doing reduces processing demand in the Cloud, be it public or private.

Peter van der Made is on record stating that AKIDA technology at the EDGE doing its thing can reduce the use of power in the Cloud by up to 97%.


It just makes sense: if you cannot get enough GPUs to handle your GenAI workloads in the Cloud, then one solution is to reduce the workload.

Cambrian AI is only stating the obvious when it asks companies to investigate what AKIDA Second Generation can do for them to facilitate their GenAI capabilities at the EDGE.

In my opinion there is no reason to doubt that Brainchip is finally on the cusp of publicly realising what we have all known for what seems like a lifetime: an EDGE technology revolution.

Validation of AKIDA technology "science fiction" has been coming thick and fast from diverse sources, but of late the most impressive was from TATA researchers, who found that AKIDA technology was 35 times more power efficient and 3.4 times faster than the Nvidia GPU they were also trialling. I doubt that such findings by a company with the global presence of TATA will have been missed by those that matter in the technology space.

However this is my opinion only so be sure to DYOR.

Regards
Fact Finder

Been a holder since 2019, still in the black and still hopeful. My first post here.

The Cambrian AI research helps to get the message out but doesn't come across as independent, rigorous analysis, so I wonder about its impact. Indeed, a disclosure at the bottom says: "This document was developed with BrainChip Inc. funding and support." I'm sure further independent support, of which we've had plenty in various ways, will keep coming. It is so much more persuasive. I work in investor relations/public relations and rarely seek paid-for validation.
 
  • Love
Reactions: 1 users

cosors

👀
Saw an interesting like on one of the recent Brainchip posts on LinkedIn so decided to chase it up.
Check out:

I will look into it more tomorrow sometime, but reading through the above page, I strongly believe that Akida is the secret sauce here.
Thanks, I'll take a look at it too!
Wiki: Viavi Solutions
Best regards to France ;)
 
  • Like
  • Fire
Reactions: 5 users

Cartagena

Regular
In addition, his Wikipedia page (German only) states that he was already associated with Daimler AG as the first ever KIT (Karlsruhe Institute of Technology) Industry Fellow in 2013 and 2014 and that he was not only Head of AI Research at Mercedes, but in fact its founder.
So yes, I’d definitely say the fact he “likes” Brainchip carries a lot of weight. 😊


Excellent research. He is very well regarded indeed in the Mercedes engineering project team. Love it 👍
 
  • Like
  • Fire
Reactions: 5 users

Frangipani

Regular
Excellent research. He is very well regarded indeed in the Mercedes engineering project team. Love it 👍
Thank you! 🙏🏻
However, @thelittleshort also did some research on this gentleman in the past, so I don’t deserve sole credit for joining those dots…

 
  • Like
  • Fire
  • Love
Reactions: 15 users

jtardif999

Regular
I prefer to discuss it with my fellow shareholders on this forum.

That’s the only way change happens.

I am not against the company…. I have been a shareholder for a very long time.

Way before the BRN name was ever mentioned anywhere on the internet.

I will always support the company but I will always call out BS when I see it.
Getupthere, I think you need to get down here. If the share price had been at or around $1 at the time of the AGM, do you think many here would have voted for a strike? I don't think so… In any case, a strike has happened previously, several years back, and when it came to the crunch not many voted for the second strike, as that would have required management to be re-elected. It seemed that when push came to shove shareholders didn't really want that kind of disruption or upheaval - it was more about venting frustration at the time.
 
  • Like
  • Love
Reactions: 6 users

stockduck

Regular
Just for info from Uni Heidelberg/Germany and research with SpiNNaker 2, I think…
only for technical nerds :LOL:

https://flagship.kip.uni-heidelberg.de/jss/HBPm?m=displayPresentation&mI=234&mEID=9002
 
  • Like
Reactions: 1 users

stockduck

Regular

...a doctoral thesis from the TUM School of Computation, Information and Technology,
Technische Universität München

...could be AWS related...?
Another one....

Prophesee is mentioned here under footnote 2: "PROPHESEE. Metavision® packaged sensor. 2021. Accessed: 2023-01-24."



OPTICAL FLOW ESTIMATION FROM EVENT-BASED CAMERAS AND SPIKING NEURAL NETWORKS
 
  • Like
Reactions: 10 users

Frangipani

Regular

This article was posted today.

It says below the text that it is “based on an interview with Nandan Nayampally, Chief Marketing Officer, BrainChip” and “has been transcribed and curated by Jay Soni, an electronics enthusiast at EFY”. As it is tagged with “May 2023”, I reckon the interview was done about three months ago.

A recurring topic is the huge annual losses in the manufacturing industry because of downtime due to preventable maintenance issues, as well as
“the loss of productivity in the US due to people not coming to work, because the cost of preventable chronic diseases is $1.1 trillion. That is just the productivity loss, not the cost of healthcare. This could have been substantially reduced by more capable and cost-effective monitoring through digital health. Therefore, a need of real-time AI close to the device to help learn, predict, and correct issues and proactively schedule maintenance is duly necessary.”


“The solution to latency lies in a distributed AI computing model, which is a strong component with the ability to embed edge AI devices, to have the performance to run necessary AI processing. More importantly, we need the ability to learn on the device to allow for secure customisation, which can be achieved by making these systems event based.

This will reduce the amount of data and eliminate sensitive data being sent to the cloud, thereby reducing network congestion, cloud computation, and improve security. It also provides real-time responses, which makes timely actions possible. Such devices become compelling enough for people to buy as it’s not only about the performance but how you make it cost-effective as well.”




Accelerating Edge AI With Neural Systems​


By Nandan Nayampally
August 23, 2023


Artificial intelligence (AI) and machine learning (ML) are now considered a must to enhance solutions in every market segment with their ability to continuously learn and improve the experience and the output. As these solutions become popular, there is an increasing need for secure learning and customisation. However, the cost of training the AI for application-specific tasks is the primary challenge across the market.

The challenge for AI to scale further across the market is simply the cost of doing AI. If you look at GPT-3, a single training session takes weeks and costs $6 million! Every time a new model or a change is introduced, the count starts again from zero and the current model needs to be retrained, which is quite expensive. A variety of things affect the training of these models, such as drift, but reducing the need to retrain for customisation would certainly help in managing retraining cost.
Fig. 1: Various market sectors and how neuromorphic systems and AI can be utilised in the respective industry (Credit: BrainChip)

Consider the manufacturing industry, which is projecting losses of about $50 billion per year because of preventable maintenance issues. There is downtime cost because the maintenance wasn’t done in time. Another staggering statistic shows the loss of productivity in the US due to people not coming to work because the cost of preventable chronic diseases is $1.1 trillion. That is just the productivity loss, not the cost of healthcare.

This could have been substantially reduced by more capable and cost-effective monitoring through digital health. Therefore, a need of real-time AI close to the device to help learn, predict, and correct issues and proactively schedule maintenance is duly necessary.


AI is fueled by data, so one needs to consider the amount of data being generated by all these devices. Let us consider just one car. A single car generates over a terabyte of data per day. Imagine the cost of actually learning from it. Imagine how many cars there are generating that kind of data. What does it do to the network? While not all of that data is going to the cloud, a lot of it is being stored somewhere.

These are a few of the opportunities and problems that AI can solve, but also the challenges that are preventing AI from scaling to deliver those solutions. The real questions can be framed as follows:

• How do we reduce the latency and therefore improve the responsiveness of the system?
• How do we actually make these devices scale? How can we make them cost effective?
• How do we ensure privacy and security?
• How do we tackle network congestion due to the large amount of data generated?


The solution​

A variety of factors affect the latency of a system: for example, hardware capacity, network latency, and the use of large amounts of data. These devices embed AI and also have self-learning abilities based on innovative time-series data handling for predictive analysis. Predictive maintenance is an important factor in any manufacturing industry, as most industries are now turning to robotic assembly lines.
In-cabin experience, for example, utilises a similar AI-enabled environment, even for automated operation of autonomous vehicles (AVs).
Sensing vital signs for prediction and analysis of medical data is an important health and wellness application example.
Security and surveillance, too, are now turning towards AI-enabled security cameras for continuous surveillance.

“The more intelligent devices you make, the greater the growth of the overall intelligence.”

The solution to latency lies in a distributed AI computing model, which is a strong component with the ability to embed edge AI devices, to have the performance to run necessary AI processing. More importantly, we need the ability to learn on the device to allow for secure customisation, which can be achieved by making these systems event based.

This will reduce the amount of data and eliminate sensitive data being sent to the cloud, thereby reducing network congestion, cloud computation, and improve security. It also provides real-time responses, which makes timely actions possible. Such devices become compelling enough for people to buy as it’s not only about the performance but how you make it cost-effective as well.
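To make the event-based idea concrete, here is a minimal sketch of my own (purely illustrative, not BrainChip's implementation): an edge device forwards a reading to the cloud only when it changes by more than a threshold, so a continuous stream becomes a sparse set of events and far less data leaves the device.

```python
import random

THRESHOLD = 0.5  # minimum change that counts as an "event" (arbitrary for this sketch)

def event_filter(readings, threshold=THRESHOLD):
    """Yield only the readings that differ from the last forwarded value
    by more than `threshold`; everything else stays on the device."""
    last_sent = None
    for t, value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield t, value  # this is the "event" that would go to the cloud

if __name__ == "__main__":
    # Simulate a slowly drifting sensor sampled 1,000 times.
    stream = [(t, 20.0 + 0.01 * t + random.gauss(0, 0.05)) for t in range(1000)]
    events = list(event_filter(stream))
    print(f"raw samples: {len(stream)}, events forwarded: {len(events)}")
    # Typically only a small fraction of samples exceed the threshold,
    # which is the bandwidth and cloud-compute saving described above.
```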


Neural devices are generally trained in two ways: either using machine-fed data or using spikes that mimic the functionality of spiking neural networks. This places a heavy load on the system.

A neural system, on the other hand, requires an architecture that can accelerate all types of neural networks, be it a convolutional neural network (CNN), deep neural network (DNN), spiking neural network (SNN), or even vision transformers or sequence prediction.

Therefore, using a stateless architecture in distributed systems can help reduce the load on servers and the time it takes for the system to store and retrieve data. Stateless architecture enables applications to remain responsive and scalable with a minimal number of servers, as it does not require applications to keep a record of sessions.
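To illustrate the stateless point, here is a toy sketch of my own (not tied to any particular vendor): the handler below keeps no session state on the server, because everything it needs travels with the request, so any server in a pool can answer it without storing or retrieving session data.

```python
import hashlib
import hmac

SECRET = b"demo-key"  # shared signing key; purely illustrative

def make_token(user_id: str) -> str:
    """Issue a self-contained token the client sends back with every request."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def handle_request(token: str, payload: dict) -> dict:
    """Stateless handler: no session lookup, no server-side storage.
    Any server holding SECRET can process this request."""
    user_id, sig = token.split(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return {"status": 403}
    # All context needed to answer is carried in the request itself.
    return {"status": 200, "user": user_id, "echo": payload}

if __name__ == "__main__":
    token = make_token("alice")
    print(handle_request(token, {"q": "inference result please"}))
```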

Using direct memory access (DMA) controllers can boost responsiveness as well. This hardware allows input-output devices to access memory directly with less involvement of the processor. It's like dividing the work among people to boost the overall speed, and that is exactly what happens here.
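As a software analogy for why offloading data movement helps (this only illustrates the overlap that DMA makes possible; it is not how a real DMA controller is programmed), the sketch below double-buffers: while the "processor" computes on one block, a separate worker fetches the next block, so transfer and compute overlap instead of alternating.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(block_id):
    """Stand-in for a DMA transfer: move the next block of data into memory."""
    time.sleep(0.05)          # pretend this is the copy time
    return [block_id] * 1000  # the "transferred" data

def compute(data):
    """Stand-in for the processor working on a block already in memory."""
    time.sleep(0.05)
    return sum(data)

def run_overlapped(num_blocks):
    """Double buffering: fetch block i+1 while computing on block i."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as dma:
        pending = dma.submit(fetch, 0)
        for i in range(num_blocks):
            data = pending.result()
            if i + 1 < num_blocks:
                pending = dma.submit(fetch, i + 1)  # transfer overlaps the compute below
            results.append(compute(data))
    return results

if __name__ == "__main__":
    start = time.time()
    run_overlapped(10)
    print(f"overlapped: {time.time() - start:.2f}s")  # roughly 0.55s vs about 1.0s if done serially
```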

Intel Movidius Myriad X, for example, is specifically a vision processing unit that has a dedicated neural compute engine that directly interfaces with a high-throughput intelligent memory fabric to avoid any memory bottleneck when transferring data. The Akida IP Platform by BrainChip also utilises DMA in a similar manner. It also has a runtime manager that manages all operations of the neural processor with complete transparency and can also be accessed through a simple API.

Other features of these platforms include multi-pass processing, which is processing multiple layers at a time, making the processing very efficient. One ability of the device is smaller-footprint integration: using fewer nodes in a configuration and going from a parallel type of execution to a sequential type of execution. This would normally mean added latency.
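My reading of the multi-pass idea, as a toy scheduler (an assumption about the general technique, not BrainChip's actual implementation): when a network needs more nodes than the configuration provides, its layers are grouped into passes that each fit the available nodes, and the passes run one after another, trading some latency for a smaller hardware footprint.

```python
def plan_passes(layer_costs, nodes_available):
    """Group consecutive layers into passes so that each pass fits on the
    available nodes (cost measured in 'nodes needed' per layer)."""
    passes, current, used = [], [], 0
    for layer, cost in enumerate(layer_costs):
        if used + cost > nodes_available and current:
            passes.append(current)      # close the pass and start a new one
            current, used = [], 0
        current.append(layer)
        used += cost
    if current:
        passes.append(current)
    return passes

if __name__ == "__main__":
    # Hypothetical 8-layer network; each entry is how many nodes a layer needs.
    layer_costs = [2, 3, 1, 4, 2, 2, 3, 1]
    print(plan_passes(layer_costs, nodes_available=4))
    # -> [[0], [1, 2], [3], [4, 5], [6, 7]]: five sequential passes on 4 nodes
    # instead of one parallel pass on 18 nodes, i.e. smaller footprint, more latency.
```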


Fig. 2: Edge AI spectrum with the Akida IP Platform products. The low end done by the microcontroller unit (MCU) can be done on a small CPU, but it's not quite efficient. The mid range is addressed by the MCU with its own deep learning accelerators. The higher-end workloads need a complete MCU with graphics processing unit (GPU) and ML (Credit: BrainChip)

But a huge amount of that latency comes from the fact that the CPU gets involved every time a layer comes in. As the device processes multiple layers at a time, and the DMA manages all activity itself, latencies are significantly reduced.
Consider an AV, which requires data from various sensors to be processed at the same time. The TDA4VM from Texas Instruments Inc. comes with a dedicated deep-learning accelerator on-chip that allows the vehicle to perceive its environment by collecting data from around four to six cameras, a radar, lidar, and even an ultrasonic sensor. Similarly, the Akida IP Platform can run larger networks on a smaller configuration simultaneously.

Glimpse into the market​

These devices have a huge variety of scope in the market. As these are referred to as event-based devices, they each have their own application sector. For example, Google's Tensor Processing Units (TPUs), which are application-specific integrated circuits (ASICs), are designed to accelerate deep learning workloads in its cloud platform. TPUs deliver up to 180 teraflops of performance and have been used to train large-scale machine learning models like Google's AlphaGo AI.

Similarly, Intel’s Nervana neural network processor is an AI accelerator designed to deliver high performance for deep learning workloads. The processor features a custom ASIC architecture optimised for neural networks, and has been adopted by companies like Facebook, Alibaba, and Tencent.

Qualcomm's Snapdragon neural processing engine AI accelerator is designed for use in mobile devices and other edge computing applications. It features a custom hardware design optimised for machine learning workloads and can deliver up to 3 teraflops of performance, making it suitable for on-device AI inference tasks.

Several other companies have already invested heavily in designing and producing neural processors that are being adapted into the market as well, and that too in a wide range of industries. As AI, neural networks, and machine learning are becoming more mainstream, it is expected that the market for neural processors will continue to grow.

In conclusion, the future of neural processors in the market is promising, although many factors may affect their growth and evolution, including new technological developments, government regulations, and even customer preferences.


This article, based on an interview with Nandan Nayampally, Chief Marketing Officer, BrainChip, has been transcribed and curated by Jay Soni, an electronics enthusiast at EFY

Nandan Nayampally is Chief Marketing Officer at BrainChip
 
  • Like
  • Love
  • Fire
Reactions: 56 users

Esq.111

Fascinatingly Intuitive.
Morning Rise from the ashes,

According to ABC News just now…
They are about 160 metres from touchdown.

🤞

In the amount of time it took me to type this post… SUCCESSFULLY TOUCHED DOWN.

Regards,
Esq.
 
  • Like
  • Haha
  • Love
Reactions: 20 users

Frangipani

Regular
  • Haha
Reactions: 6 users

Esq.111

Fascinatingly Intuitive.
You, too, it seems - Chandrayaan-3’s touchdown on the lunar surface was actually almost 6 hours ago… 😂
Morning Frangipani ,

Well, that means they should have extracted the first core sample of cheese by now then… time to break out the crackers.

😃.

Regards,
Esq.
 
  • Haha
  • Like
  • Love
Reactions: 15 users

Frangipani

Regular
Anyone keeping an eye on this or done any digging?


No concrete evidence that Akida has landed on the moon today (like Aquila 🦅 did in 1969 😉), but I sure like the sound of this:

What sets this mission apart is the pivotal role of artificial intelligence (AI) in guiding the spacecraft during its critical descent to the moon's surface

(…)

A High-Stakes Game of AI and Sensors

As the descent phase commences on August 23 at 17:47 hours, mission control's role shifts from active intervention to vigilant observation. The lander's autonomous systems, propelled by advanced AI, take centre stage. Steering the spacecraft safely to its designated landing site becomes the AI's paramount task. Chandrayaan-3's success hinges on its AI-driven sensors, which operate collectively to understand the lander's position, speed, and orientation.


ISRO Chairman S Somnath, shedding light on the technological marvel, explained the sensor array's composition, including velocimeters and altimeters. These devices furnish essential data about the lander's speed and altitude. Additionally, an array of cameras, including a hazard avoidance camera and inertia-based cameras, play a pivotal role in capturing crucial visual information. These disparate data streams are harmonised using a sophisticated computer algorithm, creating a composite image of the lander's location.

Moreover, Chandrayaan-3 boasts an intelligent navigation, guidance, and control system, seamlessly integrated into the lander's infrastructure. This intricate network of computer logic takes charge of the spacecraft's movements, orchestrating the trajectory for a safe touchdown. Somnath explained that this AI system is designed to manage diverse scenarios, meticulously planning for every conceivable outcome.

Defying Gravity with Grace

The heart of Chandrayaan-3's AI prowess lies in its ability to adapt and respond in real-time, even when confronted with unforeseen challenges.
A series of exhaustive simulations, overhauled guidance designs, and meticulously constructed algorithms have been employed to ensure precision during each phase of descent. The craft's capacity to withstand deviations from nominal parameters is a testament to its resilience. In a remarkable revelation, Somnath shared that even in the face of sensor failures or other setbacks, the lander's propulsion system alone could ensure a successful landing.

The Climb to Success: Calculations and Choices

The lander's journey from a lofty height of 30 km above the lunar surface to a more delicate altitude of 7.42 km unfolds in multiple phases, lasting approximately 15 minutes. Throughout this period, the onboard sensors tirelessly compute and recalibrate the spacecraft's path. At key junctures, such as 800 or 1300 meters above the surface, the sensors perform verification checks to ascertain their accuracy. At a mere 150 meters from the lunar terrain, a decisive moment arises—hazard verification. Here, the lander's AI navigates a crucial choice: should it proceed with a vertical landing or opt for lateral movement of up to 150 meters to circumvent potential obstacles? We will all know in the next few hours.



While Indians around the world are celebrating, Vladimir P. is not exactly over the moon, given that Russia's first moon mission in 47 years ended in disaster on August 20, when its Luna-25 lander crashed into the moon's surface after an incident during pre-landing manoeuvres…

 
  • Like
  • Fire
  • Love
Reactions: 22 users

Frangipani

Regular
"No concrete evidence that Akida has landed on the moon today (like Aquila 🦅 did in 1969 😉)"
Ahhh, are you one of those conspiracy theorists that actually believe there is a moon? 😂😉

I was actually gonna joke that you’d probably now believe that some of the Northern hemisphere’s news agencies had mistakenly released the prerecorded Luna Studio-generated material hours too early, while the Australian broadcasters did it at the correct scheduled time! 🤣
 
  • Haha
Reactions: 5 users

Frangipani

Regular
Although I'm a bit lost on your Aquila reference. Was that a typo? (Apollo)
🤔😂
July 20 or 21, 1969 (depending on where in the world you live):
“The Eagle has landed…”


I simply liked the play on words: Aquila - Akida.
Please excuse my convoluted thinking… 🤣
 
  • Haha
  • Like
Reactions: 4 users

FJ-215

Regular
Green for go in the US overnight. NVIDIA results about to drop. Fingers crossed they are good and we get some updraft into our own half-year financials and the pending news of both Akida 2.0 and Akida 1500.
 
  • Like
  • Fire
Reactions: 13 users

charles2

Regular
NVDA

Big beat and optimistic projections

Stock soaring after hours >$500USD

Waiting for the trickle down
 
  • Like
  • Love
  • Wow
Reactions: 14 users

Frangipani

Regular
While we are on the space & ornithology topic - meet Huginn and Muninn.




SCIENCE & EXPLORATION

The Huginn mission – an overview​

22/05/2023
ESA / Science & Exploration / Human and Robotic Exploration

ESA Astronaut Andreas Mogensen will fly to the International Space Station for his second mission called Huginn, in late summer of 2023. It will be a mission of firsts for both Andreas and ESA.

Naming a mission​





SpaceX Crew-7, Huginn mission patch, 2023

The Huginn mission name, chosen by Andreas, originates in Norse mythology with Huginn and Muninn – the two raven accomplices of the god Odin. The two ravens symbolise the human mind, with Huginn representing thought and Muninn, memory. Norse mythology tells the tale of two ravens who flew into the world every morning and gathered information from the farthest corners of the world to bring back news to Odin.

“Huginn is a great name that represents my mission, going to the International Space Station to gather knowledge through science and talk about what I find,” says Andreas.

The mission patch shows the raven Huginn with the silhouette of Denmark on its wing and a white line leading from Copenhagen, Denmark, the birthplace of Andreas, to the International Space Station. The red and white colours of the mission patch are inspired by the Danish flag.


Piloting the Dragon​

Andreas will be the pilot in the SpaceX Crew Dragon and become the first European to take that role. He will be sitting next to Crew-7's commander and NASA astronaut Jasmin Moghbeli. Andreas will be responsible for the spacecraft’s performance and systems, like a co-pilot on an airplane. The Crew Dragon launches and docks automatically with the Space Station, but Andreas and Jasmin can take over control of the spacecraft if necessary.

Andreas will be the fourth ESA astronaut to fly under NASA’s commercial crew programme, following Thomas Pesquet, Matthias Maurer and Samantha Cristoforetti. The Crew Dragon will dock to the International Space Station, marking the start of Andreas’s half-year stay on the Space Station.

A long-duration mission filled with science​

Andreas Mogensen, robotic arm training

On his first mission ‘iriss’ in 2015, Andreas stayed on the Space Station for 10 days. He slept inside the Columbus module and worked close to 10 hours each day to maximise his time in orbit.

Huginn will be Andreas’s first long-duration mission to the International Space Station, where he will work and live for half a year as part of Expeditions 69 and 70. He will conduct over 30 European experiments during the Huginn mission. The experiments are divided into three pillars: climate, health, and space for Earth.

The climate experiments vary from water filtration to understanding thunder phenomena, where Andreas will take images of thunder clouds from the Cupola module on the Space Station.


Maintaining a healthy lifestyle during a long-duration mission to space is vital. Andreas will run several experiments to better understand how astronauts sleep in space and how to support astronauts' mental health with virtual reality videos of calming environments.

Science in space delivers solutions for problems in our daily lives on Earth. During the Huginn mission, Andreas will monitor 3D-printing metal objects on the Space Station with an ESA printer and will control a group of robots on Earth from the European Columbus module.

For more details on the Huginn mission, follow Andreas on Twitter, Instagram, Facebook and on ESA social media channels.



The launch will be livestreamed on Aug 25 at 09:49 CEST, for those of you who are interested:


Watch Huginn launch​

18/08/2023
ESA / Science & Exploration / Human and Robotic Exploration

In brief​

ESA astronaut Andreas Mogensen will be launched as part of Crew-7 to the International Space Station for his second mission, called Huginn, on 25 August at 08:49 BST (09:49 CEST). ESA WebTV 2 will start the coverage of the launch at 04:45 BST (05:45 CEST), about four hours before liftoff.

In-depth​

Launch​

Crew-7 consisting of ESA astronaut Andreas Mogensen, NASA astronaut Jasmin Moghbeli, JAXA astronaut Satoshi Furukawa and Roscosmos astronaut Konstantin Borisov will take off in the SpaceX Crew Dragon Endurance on top of a Falcon 9 rocket from NASA’s Kennedy Space Center in Florida, USA. Liftoff is planned at 08:49 BST (09:49 CEST, 03:49 local time) and will be streamed on ESA WebTV 2.

First European Dragon pilot​

Andreas will be the first European to take the role of pilot on the Crew Dragon, where he will sit next to Jasmin, the Crew-7 commander. Andreas will monitor that the spacecraft’s performance and systems are working as expected during the flight to the Space Station, like a copilot in an aircraft.
“It is an honour to be the pilot of Crew Dragon, with our international partners showing their trust in ESA and my work,” says Andreas.

Quarantine and traditions​

Andreas Mogensen in quarantine for Huginn launch
Before launch, the astronauts enter quarantine to ensure no unwanted bacteria or viruses make their way to the Space Station.

The astronauts will head to Endurance three hours before liftoff, around 05:30 BST (06:30 CEST). Before walkout, the astronauts go through a series of traditions, such as playing a card game with the head of NASA’s Astronaut Office until the astronauts win a round. They will also sign their names on the wall of the last room before getting into the Dragon capsule.

Journey to space​

The Dragon from liftoff to orbit. Note that for the launch of Andreas and Crew-7, the first stage will land on the ground, not on a sea platform.

Just two and a half minutes after liftoff, the Falcon 9 first-stage booster will separate from the rocket to land back on Earth. The second stage continues to bring the crew to orbit around nine minutes after liftoff. Once the second stage cuts its engines, a zero-g indicator will start to float in the Endurance spacecraft, letting the crew know they have reached orbit.

The trip to the International Space Station, where they will dock, will take around 24 hours. The Huginn mission will officially begin as soon as Andreas passes the hatch to Earth’s orbiting laboratory.
Tune into ESA’s WebTV 2 to watch the launch on the morning of 25 August and follow Andreas’s mission on the Huginn page and his social media.

Crew-7 launch schedule

Event | Local time in Florida (ET) | BST | CEST
Astronauts walk to the cars | 00:26 | 05:26 | 06:26
Crew-7 drives to rocket | 00:32 | 05:32 | 06:32
Arrival at pad 39a | 00:47 | 05:47 | 06:47
Crew-7 enters Crew Dragon Endurance | 01:06 | 06:06 | 07:06
Hatch closes | 01:50 | 06:50 | 07:50
Launch | 03:49 | 08:49 | 09:49
First stage separation | 03:51 | 08:51 | 09:51
Second stage separation | 04:01 | 09:01 | 10:01



And the following is the ESA mission’s most interesting detail for us - if only we knew what exactly has been done to that DAVIS 346 camera… 🤔
It says the technical purpose is to “test a new camera concept” - well, it is not the first time for one of iniVation’s event-based cameras to fly into space…




38321A63-EF85-44A4-8789-9081CBE86937.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 14 users

MrRomper

Regular

No concrete evidence that Akida has landed on the moon today (like Aquila 🦅 did in 1969 😉), but I sure like the sound of this:

(…)

https://www.linkedin.com/posts/shak...r5?utm_source=share&utm_medium=member_desktop

Rob likes ????
 
  • Haha
  • Like
  • Fire
Reactions: 7 users

FJ-215

Regular
NVDA

Big beat and optimistic projections

Stock soaring after hours >$500USD

Waiting for the trickle down
Just watching on Bloomberg now.

Great result, forecasting $16B for the third quarter!!

Shows the market that AI has got legs and is here to stay. Hopefully the market will take another look at the sector and this time notice the shiny jewel that is BRN.
 
  • Like
  • Wow
  • Love
Reactions: 25 users