BRN Discussion Ongoing

7für7

Top 20
Love lamps
I am looking at your post, and can somebody correct me? We haven’t committed to making Pico a chip, or released anything to market about making this chip, so has this guy let something out of the bag, or is this person stating what could occur going forward based on publicly known information and some assumptions? The line

Akida Pico: a very low power NPU to bring AI
to any device with a battery or batteries. This chip is manufactured by GlobalFoundries on a 22 nm FDSOI (22FDX) manufacturing process.

This is in the past tense and suggests this has already occurred???
Or maybe it’s just a misunderstanding… maybe he doesn’t know that BrainChip’s Akida is an IP licensing product. 🤷🏻‍♂️
 
  • Like
  • Thinking
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Ericsson and SoftBank Corp. step up collaboration on AI-RAN integration to enhance network efficiency and performance​

Ericsson Media Release | Oct 7, 2024
Ericsson (NASDAQ: ERIC) and SoftBank Corp. (“SoftBank”) today announced a collaboration to explore converged architectures for AI-RAN. By combining their strengths, the two companies aim to explore the potential for developing new AI-integrated RAN solutions that
boost network efficiency and performance.

A woman looking up with an AI illustration in the background

Ericsson and SoftBank will explore common network and compute infrastructure solutions that allow both Artificial Intelligence (AI) and Radio Access Network (RAN) to operate on the same network infrastructure. The goal is to unlock new use cases for communications service providers by using the power of AI to enhance network efficiency.
The two companies will jointly conduct techno-economic analyses, develop prototypes and run lab demos to optimize RAN and AI convergence at the edge. The initiative will also focus on hardware partitioning, workload distribution, and software portability across different hardware platforms.
Ericsson and SoftBank will jointly explore the following key areas:
  • Optimizing AI and RAN convergence: optimizing architectures to support AI and RAN working together at the edge, with a focus on Centralized RAN (C-RAN), including assessing the pros and cons of shared hardware for AI and RAN processing.
  • AI and RAN co-existence: managing hardware and workload sharing between AI and RAN applications.
  • Engineering demos: testing the possibility of running RAN applications and AI engines on the same hardware infrastructure to see how they can share resources.
Fredrik Jejdling, Executive Vice President and Head of Business Area Networks, Ericsson says: “We look forward to this collaboration with SoftBank in exploring the potential convergence of AI and RAN infrastructures. This reaffirms our commitment to innovation and excellence, and we believe it will lead to new solutions that empower communications service providers to build more open, efficient and versatile networks.”
Hideyuki Tsukuda, Executive Vice President & CTO (Chief Technology Officer) at SoftBank Corp., says: “SoftBank welcomes this new collaboration with Ericsson, which aligns with our strategy to invest in AI infrastructure that enables the overlay and optimization of RAN. This partnership reflects our vision of leveraging AI to enhance communication networks and opens up opportunities for collaboration with key industry players.”
Ericsson and SoftBank are among the prominent industry leaders who formed the AI-RAN Alliance, unveiled at the Mobile World Congress Barcelona 2024. This alliance is a new collaborative effort to integrate AI into cellular technology, with the goal of progressing RAN technology and mobile networks.


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 29 users
Some Mad action at the moment..
 
  • Like
  • Fire
  • Thinking
Reactions: 7 users

Justchilln

Regular
Nobody has mentioned the half-yearly that came out yesterday.


BrainChip Holdings Ltd (ASX: BRN), the world’s first commercial producer of neuromorphic artificial intelligence IP, today released its Half
Year results for the period ended 30 June 2024.
During the first half of 2024, the Company focused its efforts on expanding the capabilities of the
Akida 2.0 technology through continued development of our TENNs platform. These initiatives are
a direct response to customer inquiries and feedback which reflect the market’s rapidly growing
interest for ultra-low power edge AI. The Company has seen growing interest from customers
seeking audio, video, space and military application solutions. Development efforts will continue
throughout the remainder of the year, with anticipated offerings and updates in Q1 of 2025.
The Company continued to implement cost reduction initiatives throughout the period in an effort to
preserve cash. Management continually assesses opportunities to reduce expenses without
hampering critical sales and development activities, or hindering recruiting and retention efforts
that are extremely challenging in the AI and tech industry. This is an ongoing effort and additional
cost reduction initiatives are anticipated in the second half of the year.
Significant Events After 30 June 2024
Subsequent to the balance sheet date, the Company received US$3,645,104 (A$5,465,128) upon
the closure of the capital call notice with LDA Capital on 28 June 2024.
On 25 July 2024, the Company announced:
• an equity capital raise of A$25 million comprising a fully underwritten share placement to
professional and sophisticated investors raising A$20 million before costs (“Placement”),
• the sale of A$2 million (before costs) of existing securities from LDA Capital (“Existing
Share Sale”), and
• a non-underwritten share purchase plan (“SPP”) to be offered to eligible Australian and
New Zealand shareholders to raise a further A$3 million.
The Company received US$13,606,629 (A$20,941,007) (net cash after costs) on 31 July 2024 to
close out the Placement and the Existing Share Sale transactions, with 103,245,355 shares issued
on 1 August 2024.
The SPP offer closed on 15 August 2024 resulting in the issue of 3,274,604 shares on 22 August
2024 and cash received by BrainChip of US$425,838 (A$632,013).
Business progress reflected in key appointments
In August, the Company announced the addition of Steven Brightfield to the executive team as the
Chief Marketing Officer.
Also in late August, the Company formally appointed Jonathan Tapson as the permanent Vice
President of Engineering, replacing Anil Mankar who is currently serving as a technical advisor
until his retirement at the end of this year.
Engagement with Frontgrade Gaisler subsequent to half year end
Also in August, the Company signed two agreements totalling €190k for projects with Frontgrade
Gaisler and Airbus Defense and Space to provide customers with AI capabilities for space
applications using Akida 1.0 technology.
These two projects were initiated in response to a European Space Agency request for ultra-low
power, neuromorphic, edge AI computing in space to enable future missions to the moon and
beyond. Due to the nature, timing and risk associated with these projects, it is not possible for the
Company to estimate future royalty revenues associated with these agreements.
Financial Summary
The Group made a net loss after income tax for the half-year ended 30 June 2024 of $11,517,767
(30 June 2023: $17,146,781). Revenue for the half-year ended 30 June 2024 of $106,693
decreased 8% from $115,606 in the same period a year ago. Total expenses for the half-year
ended 30 June 2024 of $11,690,959 decreased 31% from $16,851,241 reported in the half-year
ended 30 June 2023.
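As a quick sanity check of the percentage moves quoted in the release (an illustrative calculation, not part of the official filing), the figures reconcile as follows:

```python
# Verify the percentage changes quoted in the half-year release
# from the raw figures it reports.
def pct_change(current: float, prior: float) -> float:
    """Percentage change from the prior period to the current period."""
    return (current - prior) / prior * 100

# Revenue: $106,693 vs $115,606 a year earlier (reported as an 8% decrease).
revenue_change = pct_change(106_693, 115_606)
# Total expenses: $11,690,959 vs $16,851,241 (reported as a 31% decrease).
expense_change = pct_change(11_690_959, 16_851_241)

print(f"Revenue change:  {revenue_change:.1f}%")   # ~-7.7%, rounds to the reported 8%
print(f"Expense change:  {expense_change:.1f}%")   # ~-30.6%, rounds to the reported 31%
```

Both reported percentages check out once rounded to the nearest whole number.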
As a result of the Half Year financial close process, the Company completed an assessment of the
recoverability of the Group’s assets after considering impairment indicators such as high global
interest rates and the challenge of inconsistent revenue streams during the start-up phase.
Management completed a value-in-use discounted cashflow model based on a 5-year projection
period with a terminal value. Management determined that it would be prudent to impair the
carrying value of the intangible assets of $576,037, despite the Company’s confidence that the
technology remains on track. It should be noted that the impairment is a non-cash charge and has
no impact on the Company’s liquidity or ability to continue as a going concern.
This announcement is authorised for release by the BRN Board of Directors.
____________________________________________________________________________
About BrainChip Holdings Ltd (ASX: BRN)
BrainChip is the worldwide leader in edge AI on-chip processing and learning. The Company’s
first-to-market neuromorphic processor, Akida™, mimics the human brain to analyse only essential
sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision,
and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also
dramatically reduces latency while improving privacy and data security. In enabling effective edge
compute to be universally deployable across real world applications such as connected cars,
consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor,
is the future for its customers’ products as well as the planet. Explore the benefits of Essential AI at
www.brainchip.com.
______________________________________________________________________________
Additional information is available at:
Investor Relations Contact: IR@brainchip.com
Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006
That was released Aug 26
 
  • Like
Reactions: 12 users

Yoda

Regular
Strange that the SP is climbing so much at the moment with no news. I'm not complaining but I do find it odd.
 
  • Like
  • Thinking
  • Love
Reactions: 7 users

7für7

Top 20
Strange that the SP is climbing so much at the moment with no news. I'm not complaining but I do find it odd.


Enjoy the ride 🤙🤙

 
  • Haha
  • Like
Reactions: 6 users

7für7

Top 20
I find it even more remarkable that Nintendo has delayed the release of the new Switch. Only now are we hearing about Pico from BrainChip, which makes its use in the Switch more plausible. I’m not saying the two are related, but the timing is somewhat noteworthy… We’ll see.
 
  • Thinking
  • Like
  • Fire
Reactions: 11 users

HopalongPetrovski

I'm Spartacus!
Some Mad action at the moment..
Nice to see it maintaining some decent volumes through too.
Seems to me that it got started when the data on Pico was released to the writers and contributors who would have extensive networks, and then consolidated a few days later when the articles started appearing and Joe Average became aware. Since then, steady demand, with some cashing out thinking the pump has reached its peak, and others buying in with the expectation or hope of some official announcement of something concrete and substantial dropping to turbocharge this rise further.
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Dr E Brown

Regular
Nice to see it maintaining some decent volumes through too.
Seems to me that it got started when the data on Pico was released to the writers and contributors who would have extensive networks, and then consolidated a few days later when the articles started appearing and Joe Average became aware. Since then, steady demand, with some cashing out thinking the pump has reached its peak, and others buying in with the expectation or hope of some official announcement of something concrete and substantial dropping to turbocharge this rise further.
I really hope you are right and we have turned a corner, but I am cautious after seeing this happen before: a run-up just before a quarterly and then a dump on the announcement.
My sincere hope is that this quarterly shows some revenue with positive forward-looking commentary; then we can maintain this slow accretion in the SP.
 
  • Like
  • Love
  • Fire
Reactions: 34 users

7für7

Top 20
I really hope you are right and we have turned a corner, but I am cautious after seeing this happen before: a run-up just before a quarterly and then a dump on the announcement.
My sincere hope is that this quarterly shows some revenue with positive forward-looking commentary; then we can maintain this slow accretion in the SP.
Says the bro with a time machine in the garage, but without the 1000 gigawatts of power to run it… Don’t you have storms with lightning in Australia? Or another question… Haven’t you converted your time machine to run on garbage yet? I’m asking for Marty.

 
  • Like
  • Haha
  • Sad
Reactions: 4 users
As pointed out on HC, looks like the sudden drop in the SP just now may have something to do with today's actions on the Hang Seng. Currently down 6%.
 
  • Like
  • Wow
  • Thinking
Reactions: 9 users

7für7

Top 20
As pointed out on HC, looks like the sudden drop in the SP just now may have something to do with today's actions on the Hang Seng. Currently down 6%.

As I walk through the charts of the valley of tech,
I take a look at BrainChip, and it’s looking like wrecked…

Been spending most our lives, living in a Shorter’s Paradise,
Keep betting on the drop, like we’re rolling loaded dice,
Been spending most our lives, living in a Shorter’s Paradise,
Watch the price tumble down, it’s the bears who pay the price.

 
  • Haha
  • Sad
  • Wow
Reactions: 7 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 3 users

Luppo71

Founding Member
I'm not sure if this has been posted already, but is it Sean that is in the photo below, with Mikhail Asavkin?


View attachment 70040
If that is ten years in the future, then yes it is.
 
  • Haha
Reactions: 3 users

Diogenese

Top 20
Today, 33 million shares have been sold at or under 28c.

Was this wise?
 
  • Like
  • Haha
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Just noticed this Tata Elxsi publication dated 3 September 2024 on multimodal AI.

I've also included a slide from Tata Elxsi's Q2 FY24 Report as a reminder of how serious they are about our partnership in terms of driving our technology into medical and industrial applications.








Publication Name: TimesTech.in
Date: September 03, 2024

Multimodal AI to Enhance Media & Communication Experiences​


Multimodal AI is transforming media and communication by integrating various data types like text, images, and videos to enhance content creation, audience engagement, and advanced search capabilities. In an interview with TimesTech, Deewakar Thakyal, Senior Technology Lead at Tata Elxsi, explains how this groundbreaking technology is shaping the future of personalized and immersive content experiences across industries.

TimesTech: What is multimodal AI? How is it different from AI and GenAI?​

Deewakar: Multimodal AI is a type of artificial intelligence that can process and understand multiple types of data simultaneously, such as text, images, audio, and video using various AI techniques, such as Natural Language Processing (NLP), Computer Vision, Speech Recognition, Machine Learning, and Large Language Models (LLMs). Unlike traditional AI, which is often limited to a single modality, multimodal AI can integrate information from different sources to provide a more comprehensive understanding of the world.
GenAI, or generative AI, is a subset of AI that can create new content, such as text, images, or code, based on patterns it learns from existing data. While GenAI can be multimodal, it’s primarily focused on generating new content. In this context, multimodal AI is focused on understanding the context, while GenAI is about creating. Multimodal AI can analyse a complex scene, such as a street intersection, and understand the interactions between pedestrians, vehicles, and traffic signals. On the other hand, GenAI can create a realistic image of a person based on a textual description.
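As a toy illustration of the idea described above (my sketch, not Tata Elxsi code), a late-fusion multimodal system can be thought of as separate per-modality encoders whose feature vectors are concatenated before a shared decision layer; the stand-in encoders below are deliberately trivial, where a real system would use NLP, vision, and audio models:

```python
# Minimal late-fusion sketch: each modality is encoded separately,
# the features are concatenated, and one linear head scores the result.
# The encoders are crude stand-ins for real NLP/vision models.
from typing import List

def encode_text(text: str) -> List[float]:
    # Stand-in text encoder: word count and mean word length as "features".
    words = text.split()
    return [len(words), sum(len(w) for w in words) / max(len(words), 1)]

def encode_image(pixels: List[float]) -> List[float]:
    # Stand-in image encoder: mean and max intensity as "features".
    return [sum(pixels) / len(pixels), max(pixels)]

def late_fusion(text: str, pixels: List[float], weights: List[float]) -> float:
    # Concatenate per-modality features, then apply one linear scoring head.
    features = encode_text(text) + encode_image(pixels)
    return sum(f * w for f, w in zip(features, weights))

score = late_fusion("a busy street intersection",
                    [0.1, 0.8, 0.5, 0.2],
                    [0.5, 0.1, 1.0, 0.2])
print(f"fused score: {score:.3f}")
```

The point of the sketch is only the shape of the pipeline: modality-specific encoding, then joint reasoning over the combined representation, which is what lets multimodal AI relate pedestrians, vehicles, and traffic signals in a single scene.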

TimesTech: How is multimodal AI enhancing content creation?​

Deewakar: Multimodal AI is revolutionizing content creation by allowing for more dynamic, engaging, and personalized experiences. It enhances understanding by processing various forms of content simultaneously, tailors content to individual needs, assists human creators, enables new content formats, and improves accessibility. For example, multimodal AI can analyse user preferences and behaviour to create personalized recommendations, suggesting products or articles that align with their interests. It can also assist human creators by generating ideas, suggesting different angles, or providing feedback on drafts.
Multimodal AI can transform content production, advertising, and creative industries. By generating cohesive and contextually relevant content across different formats, such as text, images, and audio, these models can cater to diverse needs and preferences, enhancing both reach and impact.
Additionally, multimodal AI can enable the creation of novel content formats, such as interactive storytelling or personalized product recommendations, making content more engaging and immersive. By incorporating features like speech-to-text and text-to-speech, multimodal AI can make content more accessible to a wider audience, including those with disabilities. This helps to create a more inclusive and equitable content ecosystem.

TimesTech: What is the role of Multimodal AI in improving audience engagement?​

Deewakar: With multimodal AI comes the integration of various types of data such as text, images, videos etc. With such varied content, the use of AI makes it easy to ascertain user preferences by processing multiple sensory inputs simultaneously. With consumers looking for more personalization in content and more digital platforms struggling to keep up with the demand, employing multimodal AI helps enhance audience engagement by directly making use of audience insights.
Tata Elxsi’s AIVA platform, for example, utilizes AI to create highlights of video content based on user preferences, which enables consumers to get more insights into specific parts of the video content. The use of AI-powered chatbots provides an interactive avenue for users to receive content recommendations based on their interests. Chatbots are also important support systems that answer user queries and provide content support. Keeping in mind audience demographics, multimodal AI also helps in content localisation by helping with translation and subtitling, giving a more nuanced understanding of the content to specific consumers.

TimesTech: Does multimodal AI help in advanced search and analysis? How?​

Deewakar: Multimodal AI can be extended to provide video insights like facial expressions, and situational sentiments as well as identify actions and objects by integrating and analysing data from multiple sources such as images, audio, text etc., which becomes helpful for consumers to get a better understanding of the content. Multimodal AI is extensively utilized by advertisers and media companies to deliver personalized ads that fit user behaviour and are optimized for different platforms like websites, mobile apps, and social media.
This can be seen through Tata Elxsi’s content discovery and recommendation engine named uLike, which is powered by multimodal AI. The program helps users search for videos based on tags, keywords and text within videos, which helps make the content more visible. Through such mechanisms, it becomes easier to curate content that fits consumer preferences while also detecting and removing harmful or inappropriate content from platforms, which is a result of analysing user behaviour and feedback.
At the same time, it opens up scope for monetisation and ethical use through licensing agreements, and multimodal AI becomes important in driving innovation in this regard.

TimesTech: What is the futuristic scope of multimodal AI?​

Deewakar: With major digital transformation firms inching toward multimodal AI, it only goes to show that this will be a major breakthrough in content generation and personalisation across the media and entertainment industry. However, this technology can be extended to other industries as well, such as e-commerce, healthcare, education etc. Due to the significance of technologies like NLP, which can better analyse context and sentiment, there is a higher scope for multimodal AI to enhance the human-machine experience. However, it also becomes necessary to pay attention to ethical concerns and privacy issues with its use, as this involves analysing user data to provide insights. With the proper measures, multimodal AI will be transformational for the industry and can bring in the much-needed innovation, as promised.





 
  • Like
  • Fire
  • Love
Reactions: 42 users

Diogenese

Top 20
I don't recall the exact date Valeo and BRN got together, but I think this Valeo patent application pre-dates that.

The bit that interests me is that, while they propose using a CNN, they have modified an algorithm so that it only works on the lane markings. The purpose of this is to reduce the processing load.
WO2023046617A1, "Road Lane Delimiter Classification", filed 2021-09-21

In general, the environmental sensor data may be evaluated by means of a known algorithm to identify the plurality of points, which represent the road lane delimiter, for example by edge recognition and/or pattern recognition algorithms.

The classification algorithm is, in particular, a classification algorithm with an architecture, which is trainable by means of machine learning and, in particular, has been trained by means of machine learning before the computer-implemented method for road lane delimiter classification is carried out. The classification algorithm may for example be based on a support vector machine or an artificial neural network, in particular a convolutional neural network, CNN. Due to the two-dimensional nature of the two- dimensional histogram, it may essentially be considered as an image and therefore be particularly suitable to be classified by means of a CNN.

On the other hand, the method allows for an individual classification of each road lane delimiter in a scene and, consequently, to an accurate classification of the road lane delimiters. In particular, the two-dimensional histogram is formed for a particular road lane delimiter such that each road lane delimiter represented by the corresponding two- dimensional histogram may be classified individually by means of the classification algorithm. This also means that the classification algorithm does not have to handle the complete point cloud of the lidar system or a complete camera image, but only the relevant fraction corresponding to the road lane delimiter. This also reduces the complexity of the classification algorithm and the respective memory requirements.
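A rough sketch of the idea as I read it from the claims (illustrative only, not code from the patent): bin the points belonging to one delimiter into a small 2-D histogram, which can then be treated as a tiny image and handed to any image classifier such as a CNN. Grid size and normalisation below are my own choices:

```python
# Sketch: turn the points of ONE road-lane delimiter into a 2-D histogram
# that a CNN (or any image classifier) can consume. The classifier then
# sees only this small grid, not the full point cloud or camera image,
# which is where the reduced processing load comes from.
from typing import List, Tuple

def delimiter_histogram(points: List[Tuple[float, float]],
                        bins: int = 8) -> List[List[int]]:
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    hist = [[0] * bins for _ in range(bins)]
    for x, y in points:
        # Map each point into a bin; clamp the upper edge into the last bin.
        i = min(int((x - x_min) / (x_max - x_min + 1e-9) * bins), bins - 1)
        j = min(int((y - y_min) / (y_max - y_min + 1e-9) * bins), bins - 1)
        hist[i][j] += 1
    return hist

# Points sampled along one delimiter; every point lands in exactly one bin.
pts = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (3.0, 0.3), (4.0, 0.4)]
h = delimiter_histogram(pts, bins=4)
print(sum(sum(row) for row in h))
```

Because each delimiter gets its own fixed-size histogram, each one can be classified individually, exactly the property the application highlights for keeping memory and compute requirements down.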

This was filed in September 2021, so they were working on it for some time before that. In other words, this was developed before their introduction to Akida, when they were struggling to compress the conventional processing load of a CNN so it would operate on the available processors in a timely fashion (real time) and without draining the battery.

We can all remember Luca's excitement at the processing capability of Akida 1, so I imagine Akida 2/TENNs blew his socks off.
 
  • Like
  • Fire
  • Love
Reactions: 38 users

jtardif999

Regular
Some prodigious research there Frangipani, as always 👍

What I like about what you brought to the surface, is the fact that like TENNs, AKIDA Pico has "already" been known to the "customers" we are dealing with, for some Time, which shortens the "lead time" for product developments.

Some of which, will hopefully break the surface soon.
I think Akida Pico is simply Akida-E packaged with TENNs.
 
  • Thinking
  • Like
Reactions: 5 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 30 users

IloveLamp

Top 20
(Translated from Arabic; it talks about combining Neuralink tech with Akida Pico)

1000018863.jpg


1000018857.jpg
1000018860.jpg
 
  • Like
  • Fire
  • Love
Reactions: 36 users