BRN Discussion Ongoing

7für7

Top 20
Strange that the SP is climbing so much at the moment with no news. I'm not complaining but I do find it odd.


Enjoy the ride 🤙🤙

 
  • Haha
  • Like
Reactions: 6 users

7für7

Top 20
I find it even more remarkable that Nintendo has delayed the release of the new Switch. Only now are we hearing about Pico from BrainChip, which makes its use in the Switch more plausible. I'm not saying the two are related, but the timing is somewhat noteworthy… We'll see.
 
  • Thinking
  • Like
  • Fire
Reactions: 11 users

HopalongPetrovski

I'm Spartacus!
Some mad action at the moment..
Nice to see it maintaining some decent volumes throughout, too.
Seems to me that it got started when the data on Pico was released to the writers and contributors who would have extensive networks, and then consolidated a few days later when the articles started appearing and Joe Average became aware. Since then there has been steady demand, with some cashing out thinking the pump has reached its peak, and others buying in with the expectation or hope of some official announcement of something concrete and substantial dropping to turbocharge this rise further.
 
  • Like
  • Love
  • Fire
Reactions: 17 users

Dr E Brown

Regular
Nice to see it maintaining some decent volumes throughout, too.
Seems to me that it got started when the data on Pico was released to the writers and contributors who would have extensive networks, and then consolidated a few days later when the articles started appearing and Joe Average became aware. Since then there has been steady demand, with some cashing out thinking the pump has reached its peak, and others buying in with the expectation or hope of some official announcement of something concrete and substantial dropping to turbocharge this rise further.
I really hope you are right and we have turned a corner, but I am cautious after seeing this happen before, just before a quarterly, and then a dump on the announcement.
My sincere hope is that this quarterly shows some revenue with positive forward-looking commentary; then we can maintain this slow accretion in the SP.
 
  • Like
  • Love
  • Fire
Reactions: 34 users

7für7

Top 20
I really hope you are right and we have turned a corner, but I am cautious after seeing this happen before, just before a quarterly, and then a dump on the announcement.
My sincere hope is that this quarterly shows some revenue with positive forward-looking commentary; then we can maintain this slow accretion in the SP.
Says the bro with a time machine in the garage, but without the 1000 gigawatts of power to run it… Don’t you have storms with lightning in Australia? Or another question… Haven’t you converted your time machine to run on garbage yet? I’m asking for Marty.

 
  • Like
  • Haha
  • Sad
Reactions: 4 users
As pointed out on HC, looks like the sudden drop in the SP just now may have something to do with today's actions on the Hang Seng. Currently down 6%.
 
  • Like
  • Wow
  • Thinking
Reactions: 9 users

7für7

Top 20
As pointed out on HC, looks like the sudden drop in the SP just now may have something to do with today's actions on the Hang Seng. Currently down 6%.

As I walk through the charts of the valley of tech,
I take a look at BrainChip, and it’s looking like wrecked…

Been spending most our lives, living in a Shorter’s Paradise,
Keep betting on the drop, like we’re rolling loaded dice,
Been spending most our lives, living in a Shorter’s Paradise,
Watch the price tumble down, it’s the bears who pay the price.

 
  • Haha
  • Sad
  • Wow
Reactions: 7 users

IloveLamp

Top 20
[Screenshots attached]
 
  • Like
  • Fire
Reactions: 3 users

Luppo71

Founding Member
I'm not sure if this has been posted already, but is it Sean that is in the photo below, with Mikhail Asavkin?


[Photo attachment 70040]
If that is ten years in the future, then yes it is.
 
  • Haha
Reactions: 3 users

Diogenese

Top 20
Today, 33 million shares have been sold at or under 28c.

Was this wise?
 
  • Like
  • Haha
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Just noticed this Tata Elxsi publication dated 3 September 2024 on multimodal AI.

I've also included a slide from Tata Elxsi's Q2 FY24 Report as a reminder of how serious they are about our partnership in terms of driving our technology into medical and industrial applications.


[Screenshots of the publication and the Tata Elxsi Q2 FY24 slide attached]

Publication Name: Times tech.in
Date: September 03, 2024

Multimodal AI to Enhance Media & Communication Experiences

Multimodal AI is transforming media and communication by integrating various data types like text, images, and videos to enhance content creation, audience engagement, and advanced search capabilities. In an interview with TimesTech, Deewakar Thakyal, Senior Technology Lead at Tata Elxsi, explains how this groundbreaking technology is shaping the future of personalized and immersive content experiences across industries.

TimesTech: What is multimodal AI? How is it different from AI and GenAI?

Deewakar: Multimodal AI is a type of artificial intelligence that can process and understand multiple types of data simultaneously, such as text, images, audio, and video using various AI techniques, such as Natural Language Processing (NLP), Computer Vision, Speech Recognition, Machine Learning, and Large Language Models (LLMs). Unlike traditional AI, which is often limited to a single modality, multimodal AI can integrate information from different sources to provide a more comprehensive understanding of the world.
GenAI, or generative AI, is a subset of AI that can create new content, such as text, images, or code, based on patterns it learns from existing data. While GenAI can be multimodal, it’s primarily focused on generating new content. In this context, multimodal AI is focused on understanding the context, while GenAI is about creating. Multimodal AI can analyse a complex scene, such as a street intersection, and understand the interactions between pedestrians, vehicles, and traffic signals. On the other hand, GenAI can create a realistic image of a person based on a textual description.
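To make the "integrating information from different sources" idea a bit more concrete, here is a minimal late-fusion sketch in PyTorch. It assumes you already have per-modality embeddings (stubbed out with random tensors below) and simply concatenates them before a small classifier head; the encoder stand-ins, dimensions and class count are purely illustrative assumptions, not anything from Tata Elxsi.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy multimodal classifier: encode each modality separately, then fuse."""
    def __init__(self, text_dim=768, image_dim=512, hidden=256, n_classes=3):
        super().__init__()
        # Stand-ins for real encoders (e.g. a language model and a vision backbone).
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        # Fusion head operates on the concatenated per-modality embeddings.
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, n_classes),
        )

    def forward(self, text_emb, image_emb):
        fused = torch.cat([self.text_proj(text_emb), self.image_proj(image_emb)], dim=-1)
        return self.head(fused)

# Dummy batch: 4 samples with precomputed text and image embeddings.
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 3])
```

Real multimodal systems use far richer fusion (cross-attention, shared token spaces), but the basic pattern of separate encoders feeding a joint head is the same.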

TimesTech: How is multimodal AI enhancing content creation?

Deewakar: Multimodal AI is revolutionizing content creation by allowing for more dynamic, engaging, and personalized experiences. It enhances understanding by processing various forms of content simultaneously, tailors content to individual needs, assists human creators, enables new content formats, and improves accessibility. For example, multimodal AI can analyse user preferences and behaviour to create personalized recommendations, suggesting products or articles that align with their interests. It can also assist human creators by generating ideas, suggesting different angles, or providing feedback on drafts.
Multimodal AI can transform content production, advertising, and creative industries. By generating cohesive and contextually relevant content across different formats, such as text, images, and audio, these models can cater to diverse needs and preferences, enhancing both reach and impact.
Additionally, multimodal AI can enable the creation of novel content formats, such as interactive storytelling or personalized product recommendations, making content more engaging and immersive. By incorporating features like speech-to-text and text-to-speech, multimodal AI can make content more accessible to a wider audience, including those with disabilities. This helps to create a more inclusive and equitable content ecosystem.

TimesTech: What is the role of Multimodal AI in improving audience engagement?

Deewakar: With multimodal AI comes the integration of various types of data such as text, images, videos etc. With such varied content, the use of AI makes it easy to ascertain user preferences by processing multiple sensory inputs simultaneously. With consumers looking for more personalization in content and more digital platforms struggling to keep up with the demand, employing multimodal AI helps enhance audience engagement by directly making use of audience insights.
Tata Elxsi’s AIVA platform, for example, utilizes AI to create highlights of video content based on user preferences, which enables consumers to get more insights into specific parts of the video content. The use of AI-powered chatbots provides an interactive avenue for users to receive content recommendations based on their interests. Chatbots are also important support systems that answer user queries and provide content support. Keeping in mind audience demographics, multimodal AI also helps in content localisation by helping with translation and subtitling, giving a more nuanced understanding of the content to specific consumers.

TimesTech: Does multimodal AI help in advanced search and analysis? How?

Deewakar: Multimodal AI can be extended to provide video insights like facial expressions, and situational sentiments as well as identify actions and objects by integrating and analysing data from multiple sources such as images, audio, text etc., which becomes helpful for consumers to get a better understanding of the content. Multimodal AI is extensively utilized by advertisers and media companies to deliver personalized ads that fit user behaviour and are optimized for different platforms like websites, mobile apps, and social media.
This can be seen through Tata Elxsi’s content discovery and recommendation engine named uLike, which is powered by multimodal AI. The program helps users search for videos based on tags, keywords and text within videos, which helps make the content more visible. Through such mechanisms, it becomes easier to curate content that fits consumer preferences while also detecting and removing harmful or inappropriate content from platforms, which is a result of analysing user behaviour and feedback.
At the same time, it opens up scope for monetization and ethical use through licensing agreements, and multimodal AI becomes important in driving innovation in this regard.

TimesTech: What is the futuristic scope of multimodal AI?

Deewakar: With major digital transformation firms inching toward multimodal AI, it only goes to show that this will be a major breakthrough in content generation and personalisation across the media and entertainment industry. However, this technology can be extended to other industries as well, such as e-commerce, healthcare, education etc. Due to the significance of technologies like NLP, which can better analyse context and sentiment, there is a higher scope for multimodal AI to enhance the human-machine experience. However, it also becomes necessary to pay attention to ethical concerns and privacy issues with its use, as this involves analysing user data to provide insights. With the proper measures, multimodal AI will be transformational for the industry and can bring in the much-needed innovation, as promised.





 
  • Like
  • Fire
  • Love
Reactions: 42 users

Diogenese

Top 20
I don't recall the exact date Valeo and BRN got together, but I think this Valeo patent application pre-dates that.

The bit that interests me is that, while they propose using a CNN, they have modified an algorithm so that it only works on the lane markings. The purpose of this is to reduce the processing load.
WO2023046617A1 ROAD LANE DELIMITER CLASSIFICATION 20210921

In general, the environmental sensor data may be evaluated by means of a known algorithm to identify the plurality of points, which represent the road lane delimiter, for example by edge recognition and/or pattern recognition algorithms.

The classification algorithm is, in particular, a classification algorithm with an architecture, which is trainable by means of machine learning and, in particular, has been trained by means of machine learning before the computer-implemented method for road lane delimiter classification is carried out. The classification algorithm may for example be based on a support vector machine or an artificial neural network, in particular a convolutional neural network, CNN. Due to the two-dimensional nature of the two- dimensional histogram, it may essentially be considered as an image and therefore be particularly suitable to be classified by means of a CNN.

On the other hand, the method allows for an individual classification of each road lane delimiter in a scene and, consequently, to an accurate classification of the road lane delimiters. In particular, the two-dimensional histogram is formed for a particular road lane delimiter such that each road lane delimiter represented by the corresponding two- dimensional histogram may be classified individually by means of the classification algorithm. This also means that the classification algorithm does not have to handle the complete point cloud of the lidar system or a complete camera image, but only the relevant fraction corresponding to the road lane delimiter. This also reduces the complexity of the classification algorithm and the respective memory requirements.

This was filed in September 2021, so they had been working on it for some time before that. In other words, this was developed before their introduction to Akida, when they were struggling to reduce the conventional processing load of a CNN so it would run on the available processors in a timely fashion (real time) and without draining the battery.
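For anyone who wants to picture why that reduces the load, here is a rough sketch (my own illustration, not Valeo's code): bin the points already attributed to a single lane delimiter into a small 2D histogram and hand that tiny "image" to a little CNN classifier, instead of pushing the whole camera frame or lidar point cloud through the network. All sizes, ranges and class labels below are made-up assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def delimiter_histogram(points_xy, bins=(32, 32), extent=((0, 60), (-3, 3))):
    """Bin the (x, y) points of ONE detected lane delimiter into a 2D histogram.

    points_xy: (N, 2) array of longitudinal/lateral positions in metres.
    The histogram is the only thing the classifier ever sees, so its size
    (here 32x32) bounds the compute, independent of point-cloud density.
    """
    hist, _, _ = np.histogram2d(
        points_xy[:, 0], points_xy[:, 1], bins=bins, range=extent
    )
    return torch.from_numpy(hist).float().unsqueeze(0)  # (1, 32, 32) "image"

# A deliberately tiny CNN: classifying, say, solid vs dashed vs double lines.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 3),
)

# Fake delimiter: a roughly straight line running away from the vehicle.
pts = np.stack([np.linspace(2, 55, 200), 1.8 + 0.02 * np.random.randn(200)], axis=1)
logits = classifier(delimiter_histogram(pts).unsqueeze(0))  # add batch dim
print(logits.shape)  # torch.Size([1, 3])
```

The point is that the classifier's input stays fixed at 32x32 regardless of how dense the point cloud or camera image is, which is where the memory and compute saving described in the application comes from.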

We can all remember Luca's excitement at the processing capability of Akida 1, so I imagine Akida 2/TENNs blew his socks off.
 
  • Like
  • Fire
  • Love
Reactions: 38 users

jtardif999

Regular
Some prodigious research there Frangipani, as always 👍

What I like about what you brought to the surface is the fact that, like TENNs, AKIDA Pico has "already" been known to the "customers" we are dealing with for some time, which shortens the "lead time" for product developments.

Some of which, will hopefully break the surface soon.
I think Akida Pico is simply just Akida-E packaged with TENNs.
 
  • Thinking
  • Like
Reactions: 5 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 30 users

IloveLamp

Top 20
(Translated from Arabic; talks about combining Neuralink tech with Akida Pico)

[Screenshots attached]
 
  • Like
  • Fire
  • Love
Reactions: 36 users

7für7

Top 20
Call me a fanboy… but the “rob likes” evening show would be a huge success

 
  • Fire
Reactions: 1 users

7für7

Top 20
And something from the German forum .. thanks @Dallas

 
  • Like
  • Love
Reactions: 5 users

Diogenese

Top 20
I think Akida Pico is simply just Akida-E packaged with TENNs.
Hi jt,

The brochure for Pico says it has a single neural processing engine.

[Akida Pico brochure excerpt attached]


Now, terms like NPE and NPU have been used inconsistently, but they are often used interchangeably.

In any case, it gives me the opportunity to post an image of the Mona Lisa of ICs - again:

US11468299B2 Spiking neural network 20181101

[Patent figure attached]
 
  • Like
  • Love
  • Fire
Reactions: 26 users
Last edited:
I just stumbled across some interesting news about researchers at a fruit-related company who have published an ML model for creating a depth map from a two-dimensional image (without relying on the availability of metadata such as camera intrinsics).

Most obvious use case: blurring image regions depending on (calculated) depth (e.g. for small image sensors in smartphones)
Depth mapping is handy for everything from robotic vision to blurring the background of images post-capture. Typically, it relies on being able to capture the scene from two slightly different angles — as with smartphones that have multiple rear-facing cameras, where the differences between the images on two sensors are used to calculate depth and separate the foreground from the background — or the use of a distance-measuring technology such as lidar. Depth Pro, though, requires neither of these, yet Apple claims it can turn a single two-dimensional image into an accurate depth map in well under a second.
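As a hedged sketch of the blur-by-depth use case above: given an RGB image and a depth map from some monocular depth model, blend in a blurred copy wherever the depth exceeds a focus threshold. OpenCV/NumPy only, with a synthetic image and depth map, since I'm not actually running Depth Pro here; the threshold and falloff values are arbitrary.

```python
import cv2
import numpy as np

def depth_blur(image, depth, focus_dist=1.5, falloff=1.0, ksize=21):
    """Blur an image progressively behind a focus distance.

    image: HxWx3 uint8, depth: HxW float (metres, larger = farther).
    The blend weight ramps from 0 (in focus) to 1 over `falloff` metres.
    """
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    weight = np.clip((depth - focus_dist) / falloff, 0.0, 1.0)[..., None]
    return (image * (1 - weight) + blurred * weight).astype(np.uint8)

# Synthetic demo: a gradient image and a depth map that increases towards the top.
h, w = 240, 320
img = np.tile(np.linspace(0, 255, w, dtype=np.uint8), (h, 1))
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
depth = np.tile(np.linspace(5.0, 0.5, h)[:, None], (1, w))  # far at top, near at bottom
out = depth_blur(img, depth)
print(out.shape, out.dtype)  # (240, 320, 3) uint8
```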

Now the interesting parts:
"The key idea of our architecture," the researchers explain, "is to apply plain ViT [Vision Transformer] encoders on patches extracted at multiple scales and fuse the patch predictions into a single high-resolution dense prediction in an end-to-end trainable model. For predicting depth, we employ two ViT encoders, a patch encoder and an image encoder."
Ok, so it's about vision transformers ... hmm ...
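My loose reading of that description, sketched below purely for illustration (this is not the Depth Pro code): run the image through one shared encoder at several scales, bring the resulting features back to a common resolution, and fuse them into a single dense prediction. The small conv stack standing in for the ViT encoder, the scales and the feature sizes are all my own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDepthSketch(nn.Module):
    """Encode the image at several scales with one shared encoder, then fuse."""
    def __init__(self, feat=16, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        # Stand-in for a shared ViT patch encoder: a tiny conv stack.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Fusion head maps the stacked multi-scale features to one depth channel.
        self.fuse = nn.Conv2d(feat * len(scales), 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            f = self.encoder(xs)
            # Bring every scale back to a common (quarter) resolution before fusing.
            feats.append(F.interpolate(f, size=(h // 4, w // 4), mode="bilinear",
                                       align_corners=False))
        depth = self.fuse(torch.cat(feats, dim=1))
        return F.interpolate(depth, size=(h, w), mode="bilinear", align_corners=False)

model = MultiScaleDepthSketch()
print(model(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```

The appeal of this shape of model for edge hardware is obvious: one shared encoder reused across scales, with all the heavy lifting in feed-forward layers.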

Ah, and it's fast also:
It's also fast: in testing, Depth Pro delivers its results in just 0.3 seconds per image — though this, admittedly, is based on running the model on one of NVIDIA's high-end Tesla V100 GPUs.
What the ...?


Dear Brainchip, please
Thanks for your attention 😉


Edit - some addition

P.S.:
If this actually also works reliably for video, please consider adding companies/solutions related to the film and VFX hardware/software industry to your list of potential use cases. Just for inspiration, imagine filming and keying (by depth data) in real time without green screens (even if only used for pre-visualization).
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 14 users