BRN Discussion Ongoing

This looks interesting:


Join us for Intel's 'AI Everywhere' event one week from today! You don't want to miss this. Register now and tune in on Dec. 14 @ 15:45 CET: https://intel.ly/3NefZZO

I wonder what AI Intel are marketing?

Further to my previous post, at around the 5:50 mark in this video there is talk of an NPU being part of Intel's new offerings. I'd have to look at their partners page to see who is offering Intel an NPU 😂

 
  • Like
  • Fire
  • Haha
Reactions: 20 users

Home101

Regular
This looks interesting:


Join us for Intel's 'AI Everywhere' event one week from today! You don't want to miss this. Register now and tune in on Dec. 14 @ 15:45 CET: https://intel.ly/3NefZZO

I wonder what AI Intel are marketing?
And given the timing of the new hire in our sales team, I would not be surprised if a relationship with Intel is announced. Imagine the possibilities.
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

Home101

Regular
And given the timing of the new hire in our sales team, I would not be surprised if a relationship with Intel is announced. Imagine the possibilities.
And with the shorts being aggressively covered now, what do they know?
 
  • Like
  • Thinking
Reactions: 4 users
And given the timing of the new hire in our sales team, I would not be surprised if a relationship with Intel is announced. Imagine the possibilities.
We've had an official relationship with Intel (Intel Foundry Services) for smack on a year now (since December 11, 2022).


"A new generation of devices that demand independent learning and inference capabilities, faster response times and limited power consumption has created opportunities for new products with smarter sensors, devices, and systems. Integrating AI into the SoC delivers efficient compute and the unique learning and performance requirements of Edge AI. BrainChip’s AkidaTM, enables low-latency and ultra-low power AI inference and on-chip learning"


So indeed, imagine the possibilities 😉
 
  • Like
  • Fire
Reactions: 25 users

IloveLamp

Top 20
  • Like
Reactions: 6 users

Tothemoon24

Top 20



"A new next-generation sensor, the VALEO SCALA 3 LiDAR, which will equip the autonomous car of tomorrow": Valeo's LiDAR in the spotlight on French television BFMTV.

"This technology exceeds human capabilities" stresses journalist David Bouteiller in this report made on our Milovice test track, a few kilometers from Prague in the Czech Republic.

François Marion, Valeo's EVP Corporate Communications and Investor Relations, points out: "It has such high resolution that it is able to see a black tire on a road in the middle of the night at 150m, which neither the human eye nor cameras can see. What's more, it never tires or gets distracted."

Valeo has already booked more than a billion euros worth of orders for this technology, from Stellantis, an Asian automaker and an American robotaxi company.

There is a news report & footage attached to the link, spoken in French 🇫🇷.


Happy Friday Chippers


 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 46 users

FuzM

Emerged
  • Like
Reactions: 1 users

Tothemoon24

Top 20
December 7, 2023

Using machine learning to monitor driver 'workload' could help improve road safety

by Sarah Collins
Credit: Unsplash/CC0 Public Domain
Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.



The researchers, from the University of Cambridge, working in partnership with Jaguar Land Rover (JLR), used a combination of on-road experiments and machine learning, as well as Bayesian filtering techniques, to reliably and continuously measure driver "workload." Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.

The resulting algorithm is highly adaptable and can respond in near real-time to changes in the driver's behavior and status, road conditions, road type, or driver characteristics.

This information could then be incorporated into in-vehicle systems such as infotainment and navigation, displays, advanced driver assistance systems (ADAS) and others.

Any driver-vehicle interaction can then be customized to prioritize safety and enhance the user experience, delivering adaptive human-machine interactions. For example, drivers are only alerted at times of low workload, so that the driver can keep their full concentration on the road in more stressful driving scenarios. The results are reported in the journal IEEE Transactions on Intelligent Vehicles.

"More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety," said co-first author Dr. Bashar Ahmad from Cambridge's Department of Engineering. "There is a lot of information that a vehicle can make available to the driver, but it's not safe or practical to do so unless you know the status of the driver."

A driver's status—or workload—can change frequently. Driving in a new area, in heavy traffic or poor road conditions, for example, is usually more demanding than a daily commute.

"If you're in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display," said Ahmad. "The issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them."

There are algorithms for measuring the levels of driver demand using eye gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted to develop an approach that could do the same thing using information that's available in any car, specifically driving performance signals such as steering, acceleration and braking data. It should also be able to consume and fuse different unsynchronized data streams that have different update rates, including from biometric sensors if available.

To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect, in an automated way, subjective workload information during driving. For the experiment, a phone showing a route on a navigation app was mounted to the car's central air vent, next to a small LED ring light that would blink at regular intervals.

Participants all followed the same route through a mix of rural, urban and main roads. They were asked to push a finger-worn button whenever the LED light lit up in red and the driver perceived they were in a low workload scenario.

Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high workload situations, such as busy junctions or a vehicle in front or behind the driver behaving unusually.

The on-road data was then used to develop and validate a supervised machine learning framework to profile drivers based on the average workload they experience, and an adaptable Bayesian filtering approach for sequentially estimating, in real-time, the driver's instantaneous workload, using several driving performance signals including steering and braking. The framework combines macro and micro measures of workload where the former is the driver's average workload profile and the latter is the instantaneous one.

"For most machine learning applications like this, you would have to train it on a particular driver, but we've been able to adapt the models on the go using simple Bayesian filtering techniques," said Ahmad. "It can easily adapt to different road types and conditions, or different drivers using the same car."

The research was conducted in collaboration with JLR, which carried out the experimental design and the data collection. It was part of a project sponsored by JLR under the CAPE agreement with the University of Cambridge.

"This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients," said JLR's Senior Technical Specialist of Human Machine Interface Dr. Lee Skrypchuk.

"These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys."
 
  • Like
  • Fire
Reactions: 14 users

Tothemoon24

Top 20




📅👀 #CES2024 is just around the corner! If you’re making the annual journey to Vegas and want to see the event-based future of IoT, AR/VR/XR, next-generation mobile photography and much more - make sure to book a meeting with us.

🔋⚡ Take your consumer applications to the next level with extreme energy efficiency, high dynamic range, and processing performance.
We'll have our CES Award-Winning GenX320 Metavision® sensor on display, showcasing use cases in low-power Edge IoT such as eye tracking, gesture tracking, fall detection, and driver monitoring systems.
Plus, experience the future of mobile photography through a live demo of our event-based Metavision® sensor and AI running on the latest Qualcomm Snapdragon 8 Gen 3.

🏁 Stay ahead with Prophesee – book a meeting at our suite 29-312 in the Venetian now 👉 https://lnkd.in/dfe3jhZy
 
  • Like
  • Fire
Reactions: 19 users
He looks like he's had a few burgers since he left BrainChip, eh.......? Probably explains why he is a growth executive..... and given him so much energy to continually post on LinkedIn. At least he looks happy now.

"....he didn't sell any chips because he ate all the chips......" :)

 
Last edited:
  • Haha
  • Like
Reactions: 26 users

Labsy

Regular
God I hope so, but I get less than positive vibes from ol' Pattie boy at Intel... not holding my breath on this one, even though I'm ever the optimist. I always said they should just adopt Akida under licence and call it Loihi 3 or Loihi Edge or Loihi Science Fiction or whatever...
But he doesn't seem the type, given their obsession with x86 architecture and his recent comments on Arm and how they stack up against it... like he's in fairy land.
 
  • Like
  • Fire
Reactions: 10 users

MDhere

Regular


"A new next-generation sensor, the VALEO SCALA 3 LiDAR, which will equip the autonomous car of tomorrow": Valeo's LiDAR in the spotlight on French television BFMTV.

"This technology exceeds human capabilities" stresses journalist David Bouteiller in this report made on our Milovice test track, a few kilometers from Prague in the Czech Republic.

François Marion, Valeo's EVP Corporate Communications and Investor Relations, points out: "It has such high resolution that it is able to see a black tire on a road in the middle of the night at 150m, which neither the human eye nor cameras can see. What's more, it never tires or gets distracted."

Valeo has already booked more than a billion euros worth of orders for this technology, from Stellantis, an Asian automaker and an American robotaxi company.

There is a news report & footage attached to the link, spoken in French 🇫🇷.


Happy Friday Chippers


Thanks tothemoon

Very interesting. I watched the video and understood most of it, but it was hard to interpret. He talks about AI detection and mentions Tesla. He also mentions that France is a lot more regulated than the USA; France has to jump through more hurdles and be more prudent as far as real-world testing goes. But he is an extremely passionate man. Love Valeo ❤
 
  • Like
  • Love
  • Fire
Reactions: 14 users

IloveLamp

Top 20
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 15 users
There is a lot of misunderstanding in regards to the short cover.

The shorters covered the shares by closing out the contract with the institution. They could have covered the short a month earlier but held the shares to keep pressure on the stock. They did not have to buy back that day, and actually, if they buy that day there is a T+2 settlement before they even own the shares to give back to the institutions (e.g. shares bought on a Thursday don't settle until the following Monday).

There is no magic relationship between the daily volume and the shorts closed that day; they are two different things. There were plenty of days when the total shorted exceeded the volume traded, and vice versa.

We are at the end of the short campaign; they clearly dropped the price below what many thought possible. From this stage on there will be less liquidity on the sell side and a tighter float. The LDA shares will likely get absorbed by the institution that shorted the stock, to cover maybe the rest of the shares and maybe take some long positions, IMO.

There really is a limit to how far the price can drop for BRN. I'm sure if BRN walked up to Nvidia today and said "take over my company for 500 million USD", they would likely do it hands down, IMO; that's about 42 cents AUD per share. Would you all be happy with that outcome if it came to it? Likely not. So despite what the fearmongers are saying about solvency, value and lack of sales, you need to realise there is more value in Akida than we can likely accurately measure.
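
For what it's worth, a quick back-of-envelope check of that 42-cent figure; the exchange rate and share count here are my assumptions, not official numbers:

```python
usd_offer = 500e6          # hypothetical takeover price in USD
usd_per_aud = 0.66         # assumed AUD/USD exchange rate
shares_on_issue = 1.79e9   # assumed BRN shares on issue

aud_per_share = usd_offer / usd_per_aud / shares_on_issue
print(f"~{aud_per_share:.2f} AUD per share")  # ~0.42
```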

Two IP sales raised 5.5 million dollars; they did not spend that money just for kicks, and I'm sure MegaChips had plenty of time to play with the technology before they even committed to the product.

The lack of buying really comes down to three things:

1. Investor confidence: no new money coming in to invest.
2. Fear-driven tactics, poor communication and weak value identification.
3. No tangible reason yet for the flow of money to be bid-only.

The SP will likely be well north of these prices before we see revenue in ink. The parties that drove the price down will drive it up. But again, some are very cautious and want to see revenue before they buy shares or invest more; that's fair, and that comes at a premium.

You're basically buying a bucket of dirt here that may or may not have gems in it. If you knew there was a gem in it, then once the first 5% was removed, the price of the rest of the bucket would go up and you would have missed out on that gem. It is a risk; the biggest risk was the technology and its adaptability.

There has been plenty of positive feedback from many, Tata included, so in my mind it's only a matter of time before you see some ink.
Plus, with AI so hyped overnight off the back of AMD, you'd be surprised if BRN, the only pure hardware neuromorphic play in the world, doesn't get some buying interest today.
 
  • Like
  • Fire
  • Haha
Reactions: 9 users
  • Like
Reactions: 3 users

Ian

Founding Member
 
  • Like
  • Love
  • Fire
Reactions: 32 users

TheDrooben

Pretty Pretty Pretty Pretty Good
VVDN moving into smartphone manufacturing.....



“We are getting into smartphone manufacturing, plans of which are in the pipeline right now. It’s not a five-year plan—it’s for the near term. We are already speaking with companies and original equipment manufacturers for it. This is a part of our strategic relationship with our clients for whom we are already building many other products," Bansal said.





Happy as Larry
 
  • Like
  • Fire
  • Love
Reactions: 48 users

skutza

Regular
CES is coming up again early next year. I wonder if, like last year (seems like yesterday), we'll hope that someone comes out with a product using AKIDA and then all be disappointed as the lack of news continues. (How can that be a year ago already?)
 
  • Like
  • Fire
  • Sad
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Further to my previous post, at around the 5:50 mark in this video there is talk of an NPU being part of Intel's new offerings. I'd have to look at their partners page to see who is offering Intel an NPU 😂



Hi @Stable Genius, I'll be retrieving my prayer helmet and rosary beads from the back of the wardrobe for the 14th December!


  • Like
  • Love
  • Fire
Reactions: 30 users

wilzy123

Founding Member



Key discussion points

  1. Evolution of AI Models:
    Discussion on how AI models have evolved, focusing on optimisation for edge devices.

    Ian Bratt mentions, "There will be a breakthrough... and then you see kind of a phase of optimisation and that makes the model much smaller... more optimised and more amenable for deployment on the edge".

  2. Balance of Cloud and Edge Computing:
    Insights on the shifting dynamics between cloud computing and edge computing in the AI landscape.

    Nandan Nayampally states, "While most of the buzz and investment seems to have been on the cloud... smaller more compact versions [of AI models] will emerge rapidly that are more amenable to the edge".

  3. Demand for Edge AI:
    Exploration of factors driving the increasing demand for AI capabilities at the edge, such as latency and privacy concerns.

    Ian Bratt explains, "If you can do it on the edge then it will be done on the edge, so there's huge demand to enable significant AI workloads on the edge".

  4. Challenges in AI Development:
    Addressing the current challenges faced in the field of AI, including those related to large language models.

    Ian Bratt discusses the complexities in AI development, noting the ongoing process of model breakthroughs and subsequent optimisation phases.

  5. Future of AI and AGI:
    Predictions and expectations for the future of AI, including the concept and potential realization of Artificial General Intelligence (AGI).

    In the closing discussion, Ian Bratt expresses optimism about reaching AGI (Artificial General Intelligence), aligning with a positive future vision.
 
  • Like
  • Fire
  • Love
Reactions: 55 users