BRN Discussion Ongoing

SiDEvans

Regular
How I’m feeling about BRN today!

 
Reactions: 38 users

Deleted member 118

Guest
 
Reactions: 4 users

Steve10

Regular
I’ve been reading some more again from the end of 2019 company progress update and I can’t work out what ADE stands for. Anyone know?

Talk about intellectual property licensing a bit more. There were a lot of questions about it, and I think in part that's because we've voiced a strong opinion, coming in advance of actual device sales. There's no manufacturing process involved. There's no inventory. There's no loan package qualification by the customer. We released that in 2019. We have received strong response from prospective customers. The ADE, in the hands of one major South Korean company, is being exercised almost as much as we exercise it. They really have dug in, validated some of the benchmark results that we've provided, and now they're moving onto some of their own proprietary networks to do validation.
Akida™ Development Environment (ADE).

BrainChip’s Akida Development Environment Now Freely Available for Use​

Develop and Deploy on Akida Deeply Learned Neural Networks in a standard TensorFlow/Keras Environment

SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that access to its Akida™ Development Environment (ADE) no longer requires pre-approval, now allowing designers to freely develop systems for edge and enterprise products on the company’s Akida Neural Processing technology.

ADE is a complete, industry-standard machine learning framework for creating, training and testing deeply learned neural networks. The platform leverages TensorFlow and Keras for neural network development, optimization and training. Once the network model is fully trained, the ADE includes a simple-to-use compiler to map the network to the Akida fabric and run hardware-accurate simulations on the Akida Execution Engine. The framework uses the Python scripting language and its associated tools and libraries, including Jupyter notebooks, NumPy and Matplotlib. With just a few lines of code, developers can easily run the Akida simulator on industry-standard datasets, benchmarks and models in the Akida model zoo, such as Imagenet1000, Google Speech Commands and MobileNet, among others. Users can easily create, modify, train and test their own models within a simple-to-use development environment.

ADE comprises three main Python packages:

  • The Akida Execution Engine, including the Akida Simulator, is an interface to the BrainChip Akida neural processing hardware. To allow the development, optimization and testing of Akida models, it includes a software backend that simulates the Akida NSoC. The Akida Execution Engine also generates all files necessary to run the network on Akida neural processor hardware.
  • The CNN development tool utilizes TensorFlow/Keras to develop, optimize and train deeply learned neural networks such as CNNs.
  • The Akida model zoo contains pre-created neural network models built with the Akida sequential API and the CNN development tool using quantized Keras models.
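The "quantized Keras models" mentioned in the model-zoo bullet depend on reducing weight precision to a few bits. As a rough, self-contained illustration of the idea only (this is not BrainChip's actual tooling; the real quantization flow lives in the ADE's TensorFlow/Keras packages documented at https://doc.brainchipinc.com/), here is a minimal sketch of uniform symmetric weight quantization in pure Python:

```python
def quantize_weights(weights, bits=4):
    """Uniformly quantize a list of float weights to signed n-bit integers."""
    max_abs = max(abs(w) for w in weights) or 1.0   # guard against all-zero weights
    levels = 2 ** (bits - 1) - 1                    # 7 representable magnitudes for 4-bit signed
    scale = max_abs / levels                        # step size between integer levels
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Map quantized integers back to approximate float weights."""
    return [q * scale for q in quantized]

weights = [0.8, -0.31, 0.05, -0.77]
q, scale = quantize_weights(weights, bits=4)        # q holds values in [-7, 7]
restored = dequantize(q, scale)                      # each within half a step of the original
```

The point of the sketch: even with only a handful of integer levels (4-bit here), each trained float weight is recoverable to within half a quantization step, which is why aggressively quantized models can still run accurately on low-power fabric.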
“The enormous success of our early-adopters program allowed us to make ADE available to developers looking to use an Akida-based environment for their deep machine learning needs,” said Louis DiNardo, CEO of BrainChip. “This is an important milestone for BrainChip as we continue to deliver our technology to a marketplace in search of a solution to overcome the power- and training-intense needs that deep learning networks currently require. With the ADE, designers can access the tools and resources needed to develop and deploy Edge application neural networks on the Akida neural processing technology.”

Akida is available as a licensable IP technology that can be integrated into ASIC devices and will be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and Industrial Internet-of-Things (IoT). Akida is a complete neural processing engine for edge applications, which eliminates CPU and memory overhead while delivering unprecedented efficiency and faster results at minimum cost. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.

Access to ADE is currently available online at https://doc.brainchipinc.com/. Among the resources are installation instructions, a user guide, an API reference, Akida examples, and support and license documentation. ADE requires TensorFlow 2.0.0; any previously created virtual environment must be updated as described in the installation steps.
 
Reactions: 41 users

M_C

Founding Member
Innoviz works with VW and BMW.

(LinkedIn screenshot attachment)
 
Reactions: 24 users

TopCat

Regular
[Quoting @Steve10's post above on the Akida™ Development Environment (ADE) press release]
Thank you @Steve10 and appreciate all the research you’ve provided lately 👍
 
Reactions: 20 users

gex

Regular
I can say one thing: in the shitty last few days, BRN has been my best performer.
Is that one thing?
 
Reactions: 19 users

Steve10

Regular
Ford's autonomous driving systems have been re-launched as Latitude AI, after Argo AI, its joint venture with VW, ceased operations last year.

Ford establishes Latitude AI to develop autonomous driving technology​

6 March 2023

Automotive giant Ford has established a new subsidiary dedicated to autonomous driving systems for passenger vehicles.

The new firm, Latitude AI, comprises around 550 employees with expertise across machine learning, robotics, cloud platforms, mapping, sensors, compute systems, test operations, systems and safety engineering.

The majority of employees are formerly of Argo AI, a previous joint venture between Ford and Volkswagen whose wind-down was announced in October.

The decision to conclude Argo AI was made due to growing losses – amounting to billions – hitting both Ford and Volkswagen, as well as ongoing uncertainty surrounding when Level 4 autonomous driving technology would become commercially available.

Through its new wholly-owned subsidiary, Ford seeks to completely automate driving – hands-free, eyes-off-the-road – during particularly tedious and unpleasant situations, such as sitting in bumper-to-bumper traffic, or driving on long stretches of highway.

Ford’s current hands-free driving technology, BlueCruise, enables drivers to take their hands off the wheel on over 130,000 miles of prequalified North American roads. The technology – currently available in Ford’s Mustang Mach-E SUV, F-150 Truck, F-150 Lightning Truck and Expedition SUV models – has already accumulated more than 50 million miles of hands-free driving. It does however require the driver to keep their eyes on the road, which is ensured via a driver-facing camera.

“We see automated driving technology as an opportunity to redefine the relationship between people and their vehicles,” said Doug Field, chief advanced product development and technology officer at Ford. “Customers using BlueCruise are already experiencing the benefits of hands-off driving. The deep experience and talent in our Latitude team will help us accelerate the development of all-new automated driving technology – with the goal of not only making travel safer, less stressful and more enjoyable, but ultimately over time giving our customers some of their day back."

Sammy Omari, executive director of ADAS Technologies at Ford, will also serve as the CEO of Latitude. “We believe automated driving technology will help improve safety while unlocking all-new customer experiences that reduce stress and in the future will help free up a driver’s time to focus on what they choose," he said. “The expertise of the Latitude team will further complement and enhance Ford’s in-house global ADAS team in developing future driver assist technologies, ultimately delivering on the many benefits of automation.”

Latitude is headquartered in Pittsburgh, Pennsylvania, with additional engineering hubs in Dearborn, Michigan and Palo Alto, California. The company will also operate a highway-speed test track facility in Greenville, South Carolina.
 
Reactions: 11 users

Steve10

Regular

How embedded vision and AI will revolutionise industries in the coming future​

15 March 2023

As we stand on the cusp of the Fourth Industrial Revolution, the integration of embedded vision systems and artificial intelligence (AI) is poised to unleash a wave of disruption that will revolutionise industries as diverse as healthcare, manufacturing, transportation, and retail.

With the ability to process massive amounts of data in real time and make complex decisions with astonishing speed and accuracy, these technologies have the potential to transform the way businesses operate, optimise supply chains, enhance product quality, and deliver unparalleled customer experiences. As we look to the future, it is clear that those companies that are able to harness the power of embedded vision systems and AI will be the ones that thrive in an increasingly competitive and dynamic marketplace.

The application of embedded vision systems and AI has extended beyond their traditional use cases, spurring new and innovative solutions across industries. For instance, the use of AI-powered chatbots has significantly improved customer service, providing 24/7 support and reducing response times. Additionally, AI algorithms have been used to predict and prevent equipment failure in manufacturing, reducing downtime and improving overall efficiency. In the healthcare industry, embedded vision systems and AI have enabled the development of precision medicine, allowing for accurate diagnoses and targeted treatment plans.

Furthermore, the convergence of these technologies has also facilitated the development of new forms of human-machine interaction, such as gesture recognition and voice-controlled interfaces. These innovations have led to the creation of more intuitive and seamless user experiences in areas such as gaming, virtual reality, and augmented reality. The widespread adoption of embedded vision systems and AI has also resulted in a growing demand for skilled professionals capable of designing, developing, and implementing these solutions. As such, academic institutions and training programmes are increasingly offering courses and degrees focused on these technologies, providing the necessary skills for individuals seeking careers in these areas.

Together, embedded vision systems and AI have the potential to revolutionise industries in the coming future. Some of the industries that are set to benefit from this technological convergence are discussed here.

Autonomous vehicles​

The automotive industry is a hotbed of innovation, and the integration of embedded vision systems and AI represents a significant leap forward in this industry's technological capabilities. One of the most compelling examples of this integration is in the development of autonomous vehicles, which are poised to revolutionise transportation as we know it. Embedded vision systems, comprising cameras, lidar, and other sensors, capture visual data in real-time, which is then transmitted to AI algorithms that analyse the information and make decisions based on predefined parameters.

AI-powered embedded vision systems comprising cameras, lidar, and other sensors have enabled the development of autonomous vehicles (Image: Shutterstock/Scharfsinn)

In addition to revolutionising consumer transportation, autonomous vehicles will undoubtedly also be a game-changer for the logistics industry, where they are already being deployed to pick and transport packages in warehouses.

Healthcare​

The healthcare industry also stands to benefit greatly from the integration of embedded vision systems and AI, as it promises to transform the field of medical imaging and diagnosis. The ability of embedded vision systems to capture high-resolution images of internal organs and tissues has already shown promise in detecting and diagnosing health issues. However, the true potential of this technology lies in AI's ability to analyse these images and identify patterns that may not be visible to the human eye. By leveraging machine learning algorithms, medical professionals can achieve faster and more accurate diagnoses, leading to better patient outcomes. This technology could also lead to reduced costs by minimising the need for additional tests and consultations.

With the integration of embedded vision systems and AI, the healthcare industry has the potential to revolutionise the way medical diagnoses are made, paving the way for more efficient and effective healthcare solutions.

Sport broadcasting​

Sports broadcasting is another industry poised for significant transformation with the integration of embedded vision systems and AI. With the help of AI, sports broadcasters can enhance the viewing experience of their audiences by providing them with real-time statistics, player tracking, and other vital information during live matches. Embedded vision systems can capture and process high-definition images of sports events, and AI algorithms can analyse this data to provide viewers with unique insights into the game, such as player heat maps, ball speed and trajectory, and other performance metrics.

This data can be used by coaches and analysts to make critical decisions, while sports broadcasters can use it to create more engaging and informative content for their viewers. With the integration of embedded vision systems and AI, the sports broadcasting industry is set to undergo a significant transformation, providing viewers with a more immersive and insightful experience.

Retail​

The retail industry holds immense potential for significant advancement through the integration of embedded vision systems and AI, which could revolutionise the shopping experience for customers and improve business outcomes for retailers.

One of the most revolutionary and innovative applications of these technologies is the concept of autonomous shopping, which entails a comprehensive network of high-tech cameras, state-of-the-art sensors, and AI-powered algorithms automating the entire shopping experience. This cutting-edge approach provides customers with the ability to simply walk into a store, select the items they desire, and exit without the need for interaction with a cashier or checkout system. Essential to this process, embedded vision systems enable instantaneous object detection and recognition, facilitating the AI algorithms' identification of products while simultaneously tracking their placement into the customer's cart. Furthermore, data gathered through these systems is instrumental in supporting retailers to optimise their inventory management, streamline store layout, and enhance overall customer experience.

In addition, AI-powered recommendation engines, which analyse customer data to provide personalised product recommendations, represent a major advancement in the field of customer engagement. The amalgamation of embedded vision systems and AI therefore has great potential to revolutionise the retail industry, dramatically transforming the way we approach the shopping experience as a whole.

Conclusion​

The convergence of embedded vision systems and AI has tremendous implications for a wide range of industries, and these technologies hold great transformative promise. The progress made thus far in the automotive, healthcare, sports broadcasting and retail industries underscores the potential of this technology to improve efficiency, reduce costs, and enhance productivity. Nevertheless, it is important to address challenges such as privacy concerns and the ethical development of AI to ensure that these technologies are used responsibly and sustainably.

As industry experts continue to explore the possibilities of embedded vision systems and AI, it is clear that this technology will continue to shape and transform industries in the years to come.
 
Reactions: 25 users

buena suerte :-)

BOB Bank of Brainchip

[Quoting @Steve10's post above, "How embedded vision and AI will revolutionise industries in the coming future"]

Great posting @Steve10 (y)


How embedded vision and AI will revolutionise industries in the coming future​

As we stand on the cusp of the FOURTH Industrial Revolution "Those words alone are only possible because of BRN AKIDA" WOW!!!
the integration of embedded vision systems and artificial intelligence (AI) is poised to unleash a wave of disruption that will revolutionise industries as diverse as healthcare, manufacturing, transportation, and retail.

Brainchip Revolutionising Neuromorphic Ai technology
 
Last edited:
Reactions: 22 users

GStocks123

Regular

Reactions: 9 users

Deleted member 118

Guest

  • Experience in neuromorphic computing (e.g., Intel Loihi or BrainChip Akida) or cybersecurity is a plus.
 
Reactions: 22 users

Deadpool

hyper-efficient Ai
In the interest of potentially answering some initial questions others might have about Spiral Blue...

From @Diogenese (with permission) earlier this month. I ran Spiral Blue's space edge computer past him for his opinion.



"Hi SeRA2,

No published patent docs, but they are not published until 18 moths after filing.

They use Nvidia, so they are probably software.

https://spiralblue.space/space-edge-computing

Space Edge Computers use the NVIDIA Jetson series, maximising processing power while keeping power draw manageable. They carry polymer shielding for single event effects, as well as additional software and hardware mitigations. We provide onboard infrastructure software to manage resources and ensure security, as well as onboard apps such as preprocessing, GPU based compression, cloud detection, and cropping. We can also provide AI based apps for object detection and segmentation, such as Vessel Detect and Canopy Mapper.

Note our friend "segmentation" is along for the ride."
Hi @SERA2g. An observation: I haven't seen much of you the last couple of months; your always educational and entertaining posts are very much appreciated.
Good to see you back on board, and I know I don't have to inform you, but Akida is about to become the cornerstone and pinnacle of the 4th industrial revolution.
 
Reactions: 23 users

Jamjet

Emerged
Couldn’t help myself, topped up with more shares yesterday; I am all in.
Love reading through all the posts here, thanks for the input, all.
 
Reactions: 33 users

TheFunkMachine

seeds have the potential to become trees.
How many amputees are there from land mine explosions.

Space will not be a volume buyer for decades but robots are in factories around the world already.

Underground mines are a robotic target market.

If AKIDA proves itself in space on global television carrying out on the fly repairs just consider the marketing value.

My opinion only DYOR
Yes, I agree. But in terms of this news, some people are expecting an ASX announcement. That won’t happen, imo, because it is not a revenue-producing contract; or if it is, then it should already have been an ASX announcement before we heard wind of it through the other party. It’s not material news, is what I was referring to.

If you re-read my post, I did say that it will showcase its ability, which was maybe a poor choice of wording. What I meant is that through this collaboration with the robot arm, it will show the world what’s possible with Akida as the brains, and it is pretty cool stuff, and that will then spark confidence and maybe even open up new use cases by proving its capabilities in space. I didn’t mean to sound negative; just an opinion in regards to direct revenue and ASX announcements.

I’m still very much positive regarding this space mission!
 
Reactions: 15 users

Schwale

Regular
[Quoting @Jamjet above]
Same here, I just topped up my SMSF with another 10,000. These low prices are like buying from Coles on half-price specials.

Irresistible!
 
Reactions: 26 users

goodvibes

Regular
Reactions: 6 users
This Valeo AI promotion was posted earlier in the day; however, I’ve only just had time to watch it, and there hasn’t been much comment on it, so I thought I’d post it again.

Given the Ant61 robotics spectacular news today it’s easy to miss some info with all that’s going on.

Given Valeo are a trusted partner of ours I can only hope the AI they are talking about is Akida.

Enjoy:


 
Reactions: 30 users

raybot

Member
For tomorrow I’ve ordered another 5,000 at 0.28 (Europe); if that doesn’t work, I’ll have to sell a sofa as well :). I’m also grateful if the order doesn’t work. The share price is really awful! Of course we are all long here. I found the Teksun thing very exciting; word has gotten around the world. I’ll be operated on Friday, so I’ll be out of here again for a few days. I hope I can read everything here again afterwards. Nice to be in this forum!
All the best Sirod !
 
Reactions: 15 users

Quatrojos

Regular
Reactions: 66 users