BRN Discussion Ongoing

cassip

Regular
Thank you very much @TopCat and @Sirod69
Regards
cassip
 
  • Like
Reactions: 5 users

FJ-215

Regular
I think it's nice to hear from him today, it's good! I love it!!! 😘🥰
Edge Impulse

Edge Impulse · 29,113 followers · 42 min

Hear what makes BrainChip excited about teaming up with us to build out an ecosystem of ML solutions and deliver value to customers via the company's neuromorphic technology and Edge Impulse tools.

cc: Rob Telson

AND on Twitter:


Hi Sirod69,

I've always been impressed by Rob, he is a born salesman and a very good communicator. He always displays enthusiasm and confidence, but to me there is something a little "next level" about him in the clip you posted. I get the sense you can add belief to that list. Saying you have the best product is one thing; knowing it is something else.
 
  • Like
  • Love
Reactions: 19 users

Learning

Learning to the Top 🕵‍♂️
Not sure if this has been posted.
Screenshot_20221208_082609_LinkedIn.jpg

Tired or distracted? These sensors are watching

12/01/2022
How do interior surveillance systems work? See also the video · Image: © Bosch, Video: © ADAC e.V.
A technology that could save lives: Interior sensors and cameras recognize whether drivers are tired, alert or distracted and warn in good time. Still a vision of the future or soon to be reality? The ADAC has tested four prototypes of interior detection systems, so-called "In-Cabin Sensing Systems".
  • Common causes of accidents: tiredness or distraction
  • Cameras analyze the driver and warn
  • Data protection must be guaranteed
Already five hours on the road without a break and dog-tired. But the one hour home – I can still manage that. I'll quickly text my wife that I'll be late. And until then, a cup of coffee from the thermos will keep me awake. Where was it again? Oh, in the backpack on the back seat. No problem, I'll get to it somehow...
Tired, distracted, busy with something else while driving: this is dangerous not only for the occupants of the vehicle but also for all other road users. Even checking an email for three seconds at 100 km/h means driving blind for almost 100 meters, during which it is impossible to react to other vehicles, pedestrians or obstacles on the road.
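As a quick sanity check of that number (my own back-of-the-envelope sketch, not part of the ADAC article), converting 100 km/h to metres per second and multiplying by three seconds gives roughly 83 metres:

```python
# Rough check of the "almost 100 meters" figure, assuming constant speed
# and no reaction during the glance.
speed_kmh = 100
glance_duration_s = 3

distance_m = speed_kmh / 3.6 * glance_duration_s  # km/h -> m/s, then multiply by seconds
print(f"Distance driven blind: {distance_m:.0f} m")  # ~83 m
```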
Fatigue warning: Mandatory since July 2022

Fatigue warning systems are already warning of this © Daimler
In fact, according to the ADAC accident database (2009–2019), around every tenth serious traffic accident outside built-up areas is due to a distracted, tired or physically impaired driver. If you add the so-called extended effective range, i.e. accidents in which distraction or tiredness was at least one of the contributing factors, the figure rises to 25 percent of non-urban accidents.
These accidents on motorways, federal roads and country roads usually end tragically because they often result in serious or fatal injuries: in 2021, 71 percent of road deaths and 48 percent of serious injuries in Germany occurred on non-urban roads.

Therefore, the European legislator reacted at the beginning of 2020 with the General Safety Regulation 2 (GSR 2), which mandates the vehicle safety systems required for type approval.

To reduce the high number of accidents caused by distracted and tired drivers, the GSR prescribes a drowsiness warning as a first step: since July 6, 2022, all new vehicle models in classes M and N, i.e. cars and trucks, must have a warning system that assesses driver fatigue in order to obtain type approval. From July 2024 this will apply to all newly registered vehicles.

In a second step, from July 2024 or 2026, vehicles must also be equipped with a system that can detect a distracted driver.

In addition to the legal requirements, the Euro NCAP consumer protection program will also include a test catalogue for in-cabin sensing systems from 2023. To score two points, the manufacturer must demonstrate that the cabin sensors can detect a distracted, tired and unresponsive driver in various test scenarios.

As a reaction to the detection, it is also required that the sensitivity of the front collision warning and lane departure warning be increased, slight braking interventions take place and, if necessary, a "minimum risk manoeuvre" be carried out.

Difference: Indirect and direct measuring systems

Already in series production: The directly measuring infrared camera in the Wey Coffee 01 © wey
Many automobile manufacturers have been equipping their cars with drowsiness warning systems for over ten years. These systems, already available in many vehicles, are indirect systems: based on steering behavior, speed, time and other parameters, they estimate how tired the driver is. Very simply designed drowsiness warnings are based solely on a timer that warns the driver after a certain time has elapsed.
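Just to illustrate what such an indirect, timer-style warning might boil down to, here is a minimal sketch; the two-hour limit and the steering-correction threshold are my own illustrative assumptions, not any manufacturer's actual parameters:

```python
# Minimal sketch of an indirect drowsiness warning: no camera, only driving-time
# and steering-behaviour heuristics. Thresholds are illustrative assumptions.
def should_suggest_break(driving_time_h: float, steering_corrections_per_min: float) -> bool:
    if driving_time_h >= 2.0:                 # simple timer: suggest a break after 2 h
        return True
    if steering_corrections_per_min > 12:     # frequent micro-corrections as a fatigue proxy
        return True
    return False

print(should_suggest_break(2.5, 5))   # True (timer elapsed)
print(should_suggest_break(1.0, 20))  # True (erratic steering)
```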

But recognizing a distracted driver is much more complicated. Directly measuring systems are necessary here, which use sensors in the vehicle compartment to detect and assess head movement, line of sight or hand movement.

This also requires an infrared camera that can be mounted at different locations in the vehicle interior and also works in the dark. If the focus is solely on the driver (driver monitoring system), it is often installed on the A-pillar or above the steering column.

In order to cover not only the driver's line of sight but also their posture and other occupants (occupant status monitoring), the sensor is mounted in the area of the roof module/lighting module, on the rear-view mirror or on the dashboard.

To determine the source of the distraction, some systems can also detect objects as such. These objects include mobile phones, for example, but also coffee mugs, drinking bottles and muesli bars. In combination with a specific movement (e.g. raising a coffee cup to the mouth), the system can recognize the type of distraction, evaluate how critical it is for driving safety and decide whether the activity is relevant to the driving task or not.

The occupant camera was the focus of the ADAC investigation. To detect a distracted driver, the vehicle interior is divided into zones. If the gaze rests on a zone that is not relevant to the driving task for a certain period of time (e.g. the front passenger footwell), the driver is considered distracted. The Bosch system can also classify objects (mobile phone, coffee mug) as such and is therefore able to recognize which distraction or activity is involved.
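To make the zone idea concrete, here is a minimal sketch of how gaze-zone dwell time could be turned into a distraction flag; the zone names and the 1.5-second threshold are my own assumptions, not Bosch's actual implementation:

```python
# Minimal sketch of zone-based distraction detection: if the gaze dwells in a
# non-driving-relevant zone for too long, flag the driver as distracted.
DRIVING_RELEVANT_ZONES = {"windshield", "instrument_cluster", "mirrors"}
MAX_OFF_ROAD_GLANCE_S = 1.5  # assumed threshold

def is_distracted(gaze_samples):
    """gaze_samples: list of (zone_name, duration_s) tuples from the cabin camera."""
    off_road_time = 0.0
    for zone, duration in gaze_samples:
        if zone in DRIVING_RELEVANT_ZONES:
            off_road_time = 0.0                    # gaze back on the road resets the counter
        else:
            off_road_time += duration
            if off_road_time > MAX_OFF_ROAD_GLANCE_S:
                return True
    return False

# Two consecutive one-second glances into the passenger footwell trip the flag.
print(is_distracted([("windshield", 2.0), ("passenger_footwell", 1.0), ("passenger_footwell", 1.0)]))  # True
```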

With the radar sensors, people or children can be recognized as such based on their breathing. In this way, further areas of application, particularly with regard to health, can be realized in the future. Even children forgotten in parked vehicles can be detected and the parents warned. With the help of the camera information, comfort functions such as presetting the seating position and the appropriate playlist can also be enabled.

Result: The prototypes are already working well
The three "active" systems from Ford, DTS/XPERI and Bosch already meet large parts of the Euro NCAP protocol, which will apply from 2023. Some systems only showed weaknesses when there was a specific occlusion of the face (e.g. long facial hair) or the object causing the distraction was outside the sensor's coverage area .

The "fatigue" test scenario could not be represented as representative and was therefore not checked for any of the systems. However, since increasing tiredness manifests itself very differently in people, it is quite difficult to clearly detect it. In order to improve the detection of a tired driver, manufacturers often take into account other information such as driving time, time of day or steering behavior over the duration of the journey in addition to eyelid opening and body posture.

But Sony's "passive" system also makes sense. With its depth image sensor, it can generate information on the volume and angle of the torso, the distance between the headrest and the head, or an "out of position" seating posture. The car manufacturer can then use this information to adapt the restraint systems to the specific characteristics of the occupants and their seating position.

The system from Bosch also has a lot of potential. By using the radar sensor, children left behind in a parked vehicle can be detected (a Euro NCAP requirement from 2025). In addition, further functions in the field of health can be realized through the fusion of radar sensor and camera.

Conclusion: What else needs to be considered
In-cabin sensing systems can address a large number of traffic accidents, especially on non-urban roads (freeways, federal highways, country roads), which often result in serious or fatal injuries.

Three of the four in-cabin sensing systems assessed as part of the demonstrations can meet a large part of the Euro NCAP protocol that will apply from 2023.

Optimal utilization of the potential of in-cabin sensing systems can be achieved if these systems can address all areas of vehicle safety - before, during and after the crash.

The in-cabin sensing systems represent an important building block towards automated driving at SAE level 3. They can detect whether the driver is ready to take over the driving task again as soon as the automated driving function has reached its limits.

In-cabin sensing systems only warn the driver when he or she is tired, distracted or physically incapacitated; on their own they cannot prevent an accident if the driver does not intervene in time. For this reason, it is recommended to link the ICS systems with driver assistance systems.

In order to increase the driver's acceptance and confidence in the systems, it is important to keep the rate of false alarms as low as possible. In particular, the functionality of the systems must not be limited by the physical characteristics of the occupants (wearing a face mask, skin color, seating position in the vehicle, etc.).

Comfort functions (pre-setting the seat position, automatic opening of the garage door) can increase the acceptance of the interior sensors.

User data should not be stored in the vehicle without the user's consent and should only be used to implement safety-related system functions. If data is nevertheless processed and stored, consumers should be informed about it. More on this: This vehicle data is collected by a modern car.

Full article here, my apologies for the cut and paste.


Learning.
 
  • Like
  • Fire
  • Love
Reactions: 64 users

mrgds

Regular
I don't agree with the criticism of the SW-F podcast.

One bit which seems to have drawn a lot of angst starts just before 13 minutes, and my Pitmans is non-existent, but this is the gist:

"There is a debate about how close should we copy the brain – limits of Silicon

Should we directly copy what happens in the brain?

I think there are several companies including Brainchip that are making a good go of it

Products that work and advantages to be had by using spiking architecture

Not everyone is as far along as Brainchip; there are a few others in the spiking space

Different architectures ASIC? Like BrainChip or Analog even more power can be saved if you can do it that way. Some of these are unanswered questions.

Training challenges to be ironed out

Event based processors
..."

This seems like a fair summation to me. Analog can be more power efficient because it only needs a couple of components per synapse. However, the tech has other problems, such as repeatability of manufacture, which causes variations in spike amplitude (not a problem with digital), the need for an ADC (analog-to-digital converter) and possibly a DAC, and analog lacks the versatility of Akida.

Another angst-generating bit was the reference to "niche" when discussing the edge, which S W-F characterized as a spectrum and fragmented.

Sally was responding to Rod's lead-in:
"Let’s talk about what you believe to be some of the key drivers in this space (the edge) and some of the problems you see as needing to be addressed over the net few years", so Sally's remarks need to be considered in this context.

This podcast is a conversation and Rod Telson said "You're spot on there .. very few are flexible enough to handle voice, vision .. " Rod asked Sally to discuss the key drivers and problems to be overcome. He did not ask her to endorse Akida.
Rod / Rob ........................ Tomato / Tomatoe ..........................:rolleyes:
 
  • Haha
  • Like
  • Love
Reactions: 13 users
Rod / Rob ........................ Tomato / Tomatoe ..........................:rolleyes:
Omg, there are identical twins working at Brainchip, haha. Now how good would that be having a Rod and Rob Telson working there.
 
  • Haha
  • Like
Reactions: 6 users

mrgds

Regular
Omg, there are identical twins working at Brainchip, haha. Now how good would that be having a Rod and Rob Telson working there.
"Now how good would that be having a Rod and Rob Telson working there "

To quote RT himself, ( either one ) .............................. "oh, great question " ...................................:rolleyes:
 
  • Haha
  • Like
Reactions: 9 users
An interesting insight into the future of neuroscience and its intersection with spiking neuromorphic processing:


The future industries being opened up by these advances are hopefully only beneficial.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 7 users
If you have no idea about Transformer networks for natural language then take ten minutes and be bedazzled by this explanation:


Left me wondering what Peter van der Made was up to over the last ten years; Transformers seem like a snap to design and implement in 28 or 22 nm. 🤡🤣🤡😂🤡😵‍💫🤓

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 23 users
@Rocket577

Did you sign up?

What did you find?

Regards
FF

AKIDA BALLISTA
 

HopalongPetrovski

I'm Spartacus!
Agreed, Funky. I found the podcast very underwhelming unfortunately. I thought it was confusing because Sally wasn't enthusiastic at all, even going so far as to suggest that analogue is more power efficient. Sorry to say this but I found it very disappointing.
NO HOT TUB FOR HER!!! 🤣

 
  • Haha
  • Like
Reactions: 10 users

equanimous

Norse clairvoyant shapeshifter goddess
If you have no idea about Transformer networks for natural language then take ten minutes and be bedazzled by this explanation:


Left me wondering what Peter van der Made was up to over the last ten years; Transformers seem like a snap to design and implement in 28 or 22 nm. 🤡🤣🤡😂🤡😵‍💫🤓

My opinion only DYOR
FF

AKIDA BALLISTA
Fascinating
 
  • Like
  • Fire
Reactions: 3 users
I don't agree with the criticism of the SW-F podcast.

One bit which seems to have drawn a lot of angst starts just before 13 minutes, and my Pitmans is non-existent, but this is the gist:

"There is a debate about how close should we copy the brain – limits of Silicon

Should we directly copy what happens in the brain?

I think there are several companies including Brainchip that are making a good go of it

Products that work and advantages to be had by using spiking architecture

Not everyone is as far along as Brainchip; there are a few others in the spiking space

Different architectures ASIC? Like BrainChip or Analog even more power can be saved if you can do it that way. Some of these are unanswered questions.

Training challenges to be ironed out

Event based processors
..."

This seems like a fair summation to me. Analog can be more power efficient because it only needs a couple of components per synapse. However, the tech has other problems, such as repeatability of manufacture, which causes variations in spike amplitude (not a problem with digital), the need for an ADC (analog-to-digital converter) and possibly a DAC, and analog lacks the versatility of Akida.

Another angst-generating bit was the reference to "niche" when discussing the edge, which S W-F characterized as a spectrum and fragmented.

Sally was responding to Rod's lead-in:
"Let’s talk about what you believe to be some of the key drivers in this space (the edge) and some of the problems you see as needing to be addressed over the net few years", so Sally's remarks need to be considered in this context.

This podcast is a conversation and Rod Telson said "You're spot on there .. very few are flexible enough to handle voice, vision .. " Rod asked Sally to discuss the key drivers and problems to be overcome. He did not ask her to endorse Akida.
If anyone wants to feel underwhelmed by a Brainchip podcast, revisit the ARM Jem Davies interview and look where that landed Brainchip, despite the initial desire to find a razor blade and a dark cupboard:


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Haha
Reactions: 10 users
On the subject of the regulatory authority's inability to get off their fat arses and do something about, well, we'll call it price manipulation in this context. Here is a company just after open this morning. It doesn't take a genius to work out there is no client that would require these buys or sells. Three 1-share orders on each side.

1670454671633.png
 
  • Like
  • Fire
  • Sad
Reactions: 14 users

Cardpro

Regular
Hmm I can't believe we are back at 60c level (below 1 Bill (USD) Market Cap) even after positioning really well to shoot to the mooooooon..!
I hope we get some positive price sensitive announcements soon haha
 
  • Like
  • Haha
  • Fire
Reactions: 10 users

cosors

👀
Here is a thought experiment. Read carefully!

"The end of irrelevant artificial intelligence
...
An interesting anecdote about the use of AI chatbots is the story of Xiaoice, a chatbot developed by Microsoft in China. Xiaoice was designed to have natural, human-like conversations with people, and she quickly became popular with users who enjoyed talking to her. Many users took Xiaoice so much to their hearts that they didn't even realise she was a chatbot, and some even claimed to be in love with her.

In the near future, AI chatbots will be an integral part of our daily lives. These intelligent, conversational agents will be able to assist us with a variety of tasks, from the mundane to the complex. The future of chatbots, and ChatGPT in particular, is likely to be one where they are an integral part of our daily lives. ChatGPT, a large language model trained by OpenAI, has already demonstrated the ability to have natural, human-like conversations across a wide range of topics. This capability, combined with the convenience and accessibility of chatbots, makes them a promising technology for everyday use.

One of the most interesting use cases for AI chatbots is customer service. Chatbots can handle a large number of customer queries, allowing human customer service agents to focus on more complex issues. Chatbots can also provide quick and accurate answers to common questions, improving the customer experience. Another interesting use case for AI chatbots is healthcare. Chatbots can be used to provide patients with information and help them manage their health and make informed decisions. For example, a chatbot could provide information about symptoms and treatment options or remind patients to take their medication. This can be particularly useful for people with chronic conditions who need ongoing support.

A third interesting use case for AI chatbots is education. Chatbots can be used to provide students with personalised learning experiences, tailoring the content and pace of lessons to each individual's needs. For example, a chatbot could help a student study for an exam by providing practice questions and feedback. This could be a valuable tool to help students learn more effectively. They could be used in finance, providing personalised investment advice and helping people manage their finances more effectively. And they could even be used in entertainment to provide engaging and personalised experiences for users.

However, there are also potential dangers associated with the widespread use of ChatGPT and other chatbots. One concern is the potential for abuse and manipulation. Chatbots, like any technology, can be used for nefarious purposes. For example, they could be used to spread false information or to harass and intimidate others. This is particularly worrying because chatbots are capable of having persuasive conversations with people. Another potential danger is the possibility of chatbots replacing human interaction. Although ChatGPT and other chatbots can provide valuable help and convenience, they should not be seen as a substitute for human contact. In some cases, people might rely too much on chatbots and lose their ability to communicate effectively with others. This could have negative consequences for both individuals and society as a whole.

Furthermore, the use of ChatGPT and other chatbots raises ethical questions. As these technologies become more advanced, they may be able to perform tasks that were previously only possible for humans. This could lead to employment issues and displacement of workers. It is important for society to consider and address these ethical concerns as the use of chatbots becomes more widespread.

Overall, the future use of ChatGPT and other chatbots has the potential to significantly improve our daily lives. However, it is important to be aware of the potential dangers and take measures to mitigate them. This may include careful regulation of the use of chatbots and ongoing dialogue about their ethical implications. By addressing these issues, we can ensure that ChatGPT and other chatbots are used in a responsible and useful way."
 
Last edited:
  • Like
  • Thinking
Reactions: 15 users

cosors

👀
Here is a thought experiment. Read carefully!

"The end of irrelevant artificial intelligence
...
In the near future, AI chatbots will be an integral part of our daily lives. These intelligent, interoperable agents will be able to assist us in a variety of tasks, from the mundane to the complex. The future of chatbots, especially ChatGPT, will likely see them as an integral part of our daily lives. ChatGPT, a large language model trained by OpenAI, has already proven its ability to have natural, human-like conversations about a wide range of topics. This ability, combined with the convenience and accessibility of chatbots, makes them a promising technology for everyday use.

One of the most interesting use cases for AI chatbots is customer service. Chatbots can handle large numbers of customer queries, freeing human customer service agents to focus on more complex issues. Also, chatbots can provide quick and accurate answers to common questions, improving the customer experience. Another interesting use case for AI chatbots is healthcare. Chatbots can be used to provide patients with information and help them manage their health and make informed decisions. For example, a chatbot could provide information about symptoms and treatment options, or remind patients to take their medication. This can be especially useful for people with chronic diseases who need ongoing support.

A third interesting use case for AI chatbots is education. Chatbots can be used to provide students with personalized learning experiences, tailoring the content and pace of classes to each individual's needs. For example, a chatbot could help a student study for an exam by providing practice questions and feedback. This could be a valuable tool in helping students learn more effectively. They could be used in finance, providing personalized investment advice and helping people manage their finances more effectively. And they could even be used in entertainment to provide users with engaging and personalized experiences.

However, there are also potential dangers associated with the widespread use of ChatGPT and other chatbots. One concern is the potential for abuse and manipulation. Chatbots, like any technology, can be used for nefarious purposes. For example, they could be used to spread false information or to harass and intimidate others. This is of particular concern given the ability of chatbots to have persuasive conversations with humans. Another potential danger is the possibility of chatbots replacing human interaction. While ChatGPT and other chatbots can provide valuable help and convenience, they should not be viewed as a substitute for human contacts. In some cases, people might over-rely on chatbots and lose their ability to communicate effectively with other people. This could have negative consequences for both individuals and society as a whole.


In addition, the use of ChatGPT and other chatbots raises ethical questions. As these technologies become more advanced, they may be able to perform tasks previously only possible for humans. This could lead to employment problems and the displacement of workers. It is important for society to consider and address these ethical concerns as chatbot use becomes more widespread.

Overall, the future use of ChatGPT and other chatbots has the potential to significantly improve our daily lives. However, it is important to be aware of the potential dangers and take steps to mitigate them. This may involve careful regulation of the use of chatbots, as well as ongoing dialogue about their ethical implications. By addressing these questions, we can ensure that ChatGPT and other chatbots are used in a responsible and useful way."
Sascha Lobo: "I didn't write a single word of the italics above in this column. It comes 100 percent from an artificial intelligence called ChatGPT and that, to quote a popular German chancellor, is a turning point.
...

However, because this has only worked in English so far, I had the above column translated into German by Deepl.com, currently the best AI translation service."

https://www.spiegel.de/netzwelt/web...olumne-a-b2afeb69-083d-4e69-8920-da5cad549d5f

That's interesting, isn't it?
 
  • Wow
  • Like
  • Fire
Reactions: 10 users

FJ-215

Regular
Hmm I can't believe we are back at 60c level (below 1 Bill (USD) Market Cap) even after positioning really well to shoot to the mooooooon..!
I hope we get some positive price sensitive announcements soon haha
Hi Cardpro,

Just my way of looking at things, but I believe our news flow is tied to the put option agreement with LDA Capital. For 2022 we had a minimum drawdown of around $20M; $5M of that was a carry-over from last year. BRN fulfilled that obligation at a canter off the back of Mercedes disclosing their relationship with us. On Jan 1st the agreement resets for the final 12 months, with a minimum drawdown of $15M up to a maximum of $30M. We may get a few crumbs before Xmas to keep us interested, but nothing earth-shattering is my guess.

Regardless of what the SP does over the next month, I think 2022 has been a breakout year for BRN.
 
  • Like
  • Love
  • Fire
Reactions: 20 users

HopalongPetrovski

I'm Spartacus!
  • Haha
Reactions: 5 users