BRN Discussion Ongoing

Damo4

Regular
My theory is that the broker handling the stop loss can see the stop loss, and should be obliged to keep that information confidential. Ideally, the information should be kept secret from the ASX, with the broker's trading computer programmed to place the sell order when the price falls to the stop loss value.

That would not prevent the stop loss being hit if the SP were to be manipulated down, but it would mean that the manipulators were hunting in the dark.
Yeah, agreed. However, "in the dark" is more like a moonlit paddock haha, they know where to look for them.
Also, I'm not sure why, but the 57-63c mark seems to have so many people either selling, heading into negative territory, or both.
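To picture that broker-side setup, here's a rough Python sketch. Note that get_last_price and place_market_sell are made-up stand-ins for a broker API; the key point is that the trigger price never leaves the broker's machine:

```python
# Hypothetical sketch of a broker-held stop loss: the trigger lives only in
# the broker's system, so no resting stop order ever appears on the ASX book
# for manipulators to hunt. The two callables are invented for illustration.
import time

def watch_stop_loss(get_last_price, place_market_sell, stop_price, quantity):
    """Poll the broker's own price feed; sell only once the stop is breached."""
    while True:
        last = get_last_price()          # broker's private market-data feed
        if last <= stop_price:
            # Only at this moment does any order reach the exchange.
            place_market_sell(quantity)
            return last
        time.sleep(0.5)                  # polling interval, illustrative only
```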
 
  • Like
Reactions: 2 users

Quiltman

Regular
Embedded World Germany. Maybe some of our German friends can go! I wonder what the list of "Partner Booths" is? And I would love to learn about Akida 1500!
View attachment 29772

Our partners' booths .... that sounds enticing .... time to look at the exhibitor list.

Perhaps there is a clue in the wording. Could there be four partners exhibiting the following solutions, explicitly mentioned in the advert?

1) Energy-harvesting energy solutions
2) Medical devices with extended battery life
3) High end video object detection
4) ADAS/autonomous systems
 
  • Like
  • Fire
Reactions: 18 users

wilzy123

Founding Member
I would invest in THEZEEBOTHERALD IPO.....!!!


View attachment 29774
yeahhhp.gif
 
  • Like
  • Haha
  • Sad
Reactions: 3 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
In addition to this Bitsensing thingy, there's got to be something juicier happening with Infineon. I mean, they've worked with Cerence on the Level 3 Mercedes Drive Pilot technology, and they're a top player in the automotive semiconductor space. The other thing is that there are some slides from previous Infineon investor presentations that I think are very revealing. Sorry, I got them all muddled up; some are from a recent presentation and some are from last year's. So maybe they're the ones supplying Akida to Valeo?

[Attached slides: Screen Shot 2023-02-16 at 3.37.20 pm.png, Screen Shot 2022-06-14 at 6.01.54 pm.png, Screen Shot 2023-02-16 at 3.38.58 pm.png, Screen Shot 2023-02-16 at 3.36.32 pm.png, Screen Shot 2023-02-16 at 3.36.14 pm.png, Screen Shot 2022-06-14 at 7.01.46 pm.png, Screen Shot 2022-06-14 at 7.01.23 pm.png]
Infineon and Cerence Ready AI Emergency Vehicle Detection Technology


Image caption (courtesy of Cerence, Inc.): Cars need microphones at the front and the rear to listen for emergency vehicles.
Autonomous vehicles must be able to yield to emergency vehicles.
Dan Carney | Oct 24, 2022




Driving the Mercedes-Benz EQS EV at that company's proving grounds, I was able to experience how the car's SAE Level 3 driver assistance system recognized an emergency vehicle approaching from behind by the sound of its siren.
Now Infineon Automotive Technologies and Massachusetts artificial intelligence specialist Cerence Inc. are partnering on these systems, using Infineon’s microelectromechanical systems (MEMS) microphone technology and Cerence’s AI.

These Infineon mics are the same type that automakers can use inside cars’ cabins for features such as voice recognition, noted Infineon Automotive Technologies division president Peter Schiefer in a presentation for the company’s OctoberTech conference. “The microphone is likewise suited for exterior applications, such as siren or road condition detection,” he said.
Reflecting on the arrival of Mercedes’ SAE Level 3 automation, Cerence’s director of product management, Stefan Hamerich, observed in a post on the company’s site that “Cerence Emergency Vehicle Detection (EVD) is one of the most critical examples of how the co-pilot can understand the world around it and make roads – and drivers and passengers – safer as a result.”

A challenge in identifying sirens is the incredible variety of sounds employed worldwide. According to Cerence, there are more than 1,500 different siren sounds that the company’s AI is trained to identify. The Level 3 Mercedes Drive Pilot technology automatically stops the car on the right side of the lane when a siren is detected, making room for the vehicle to pass on the left side.

Cabin-mounted microphones listen along with vehicle occupants as the entertainment system blasts music. Knowing what that sounds like lets the AI filter the music out of the sounds detected by the outside microphones, so it can tell what is a siren and what is Axl Rose.
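For the curious, that "filter the music out" step can in principle be done with a garden-variety adaptive canceller: feed it the known cabin signal as a reference and subtract its best estimate from the exterior mic. A minimal NLMS sketch for illustration, not Cerence's actual algorithm:

```python
import numpy as np

def nlms_cancel(exterior, music_ref, taps=64, mu=0.1, eps=1e-8):
    """Adaptively remove the known music reference from the exterior mic feed,
    leaving residual audio (ambient sound plus any siren) for the detector."""
    w = np.zeros(taps)                      # adaptive FIR weights
    residual = np.zeros_like(exterior, dtype=float)
    for n in range(taps, len(exterior)):
        x = music_ref[n - taps:n][::-1]     # latest reference samples, newest first
        y = w @ x                           # estimate of music leaking outside
        e = exterior[n] - y                 # what's left after cancelling music
        w += mu * e * x / (x @ x + eps)     # NLMS weight update
        residual[n] = e
    return residual
```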

Hamerich also points out the importance of systems that employ sound to detect emergency vehicles rather than simply relying on cameras to watch for their flashing lights. “Sensors that detect incoming emergency vehicles using images and lights are limited in situations when they are blocked or impeded by other cars or bad weather,” he wrote. “Using multiple sensors, Cerence EVD helps ensure early detection that can enable countermeasures to save critical seconds in an emergency.”
The Infineon MEMS microphone is hardened for service on the outside of the car, and is qualified according to AEC-Q103-003. That means it can tolerate an operating temperature range from -40°C to +105°C. It does so with a total harmonic distortion (THD) of less than 0.5 percent at a sound pressure level of 94 dB and a high acoustic overload point of 130 dBSPL, so the mic captures distortion-free audio signals in noisy environments, says Infineon.
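To put those spec figures in physical terms, dB SPL converts to pressure against the standard 20 µPa reference: 94 dB SPL is about 1 Pa RMS, and the 130 dB SPL overload point is about 63 Pa, which is why the mic can sit next to a siren without clipping.

```python
P_REF = 20e-6  # standard reference pressure, 20 micropascals

def spl_to_pascals(db_spl: float) -> float:
    """Convert a sound pressure level in dB SPL to pascals (RMS)."""
    return P_REF * 10 ** (db_spl / 20)

print(spl_to_pascals(94))   # ~1.0 Pa  (the THD < 0.5% spec point)
print(spl_to_pascals(130))  # ~63.2 Pa (acoustic overload point)
```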
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 35 users

Diogenese

Top 20
Yeah, agreed. However, "in the dark" is more like a moonlit paddock haha, they know where to look for them.
Also, I'm not sure why, but the 57-63c mark seems to have so many people either selling, heading into negative territory, or both.
Reminds me of the Scotsman coming home late from the pub - takes a shortcut through the cow pasture - tam o'shanter falls off and he tries on ten before he finds the right one.

Sorry Damo - I'll take second place for that - it was all above my head ...
 
Last edited:
  • Haha
Reactions: 3 users

Diogenese

Top 20
Embedded World Germany. Maybe some of our German friends can go! I wonder what the list of "Partner Booths" is? And I would love to learn about Akida 1500!
View attachment 29772
Lots to like there!

"Meet new partners"

"We will share more details soon!" in reference to "other partners booths".

Akida 1500 jumpstarting development for at-sensor applications ...

High-end video-object detection ...

ADAS
...


Very upbeat.
 
  • Like
  • Fire
  • Love
Reactions: 46 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
So from Infineon's Doctoral Thesis ad we know that they're looking for someone to do a thesis investigating how to combine neuromorphic computing with the latest Infineon Aurix® µC. Which means we might be in the next generation Aurix if I'm not mistaken or completely crazy or both! I mean, where else are they going to get a neuromorphic event-based AI accelerator from because it's not as if they're in the habit of falling off the back of a passing truck, at least not where I live?

Please DYOR because I can be prone to getting the odd thing or two wrong.
 
Last edited:
  • Like
  • Fire
Reactions: 12 users

Pappagallo

Regular
Our partners' booths .... that sounds enticing .... time to look at the exhibitor list.

Perhaps there is a clue in the wording. Could there be four partners exhibiting the following solutions, explicitly mentioned in the advert?

1) Energy-harvesting energy solutions
2) Medical devices with extended battery life
3) High end video object detection
4) ADAS/autonomous systems

Perhaps some more German companies? Surely Mercedes has a few mates.
 
  • Like
  • Haha
Reactions: 6 users

Diogenese

Top 20
So from Infineon's Doctoral Thesis ad we know that they're looking for someone to do a thesis investigating how to combine neuromorphic computing with the latest Infineon Aurix® µC. Which means we might be in the next generation Aurix if I'm not mistaken or completely crazy or both! I mean, where else are they going to get a neuromorphic event-based AI accelerator from, because it's not like they're in the habit of falling off the back of trucks.

Please DYOR because I can be prone to getting the odd thing or two wrong.
But I just bought one down at the pub that fell off the back of a truck - I believe it's called a gas pedal in the US.
 
  • Like
  • Haha
Reactions: 5 users

Diogenese

Top 20
Well most of my portfolio was very flat today - becalmed - like a painted ship upon a painted ocean - except for BRN and TLG.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Diogenese

Top 20
Bit of an arm wrestle over half a cent at the death ...

1676524814729.png
 
  • Like
  • Haha
Reactions: 9 users

Xhosa12345

Regular
Bit of an arm wrestle over half a cent at the death ...

View attachment 29784

f*k me, there's literally gifs on everything lol

sorry, I've been bored this week

I'll put myself on a gif ban until the next company announcement, which should be the half yearly.... hopefully it comes tomorrow haha.

arm-wrestling-broken.gif
 
  • Wow
  • Like
  • Haha
Reactions: 10 users
f*k me, there's literally gifs on everything lol

sorry, I've been bored this week

I'll put myself on a gif ban until the next company announcement, which should be the half yearly.... hopefully it comes tomorrow haha.

View attachment 29796
Did you realise that there are twenty-seven bones in your hand?
 
  • Haha
  • Like
Reactions: 4 users

Calsco

Regular
Looks like after the drop yesterday we tested support at the monthly 50c level, then this morning bounced to 57c. We have weekly support at 54.5c and monthly resistance at 59.5c. Unfortunately, we are still in a downward trend.
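For anyone wanting to sanity-check levels like these themselves, here's a crude sketch of pulling swing lows/highs out of a closing-price series. Illustrative only, not Calsco's charting method:

```python
def swing_levels(closes, window=5):
    """Flag local minima as candidate support and local maxima as resistance."""
    supports, resistances = [], []
    for i in range(window, len(closes) - window):
        neighbourhood = closes[i - window:i + window + 1]
        if closes[i] == min(neighbourhood):
            supports.append(closes[i])
        elif closes[i] == max(neighbourhood):
            resistances.append(closes[i])
    return supports, resistances

# e.g. swing_levels(daily_closes_in_cents) might surface levels near 50, 54.5, 59.5
```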
 

Attachments

  • 3E2C9D0F-36F4-4863-BBF5-A87D23E3B0E7.jpeg (7.2 MB · Views: 73)
  • Like
  • Sad
Reactions: 9 users

Xhosa12345

Regular
Did you realise that there are twenty-seven bones in your hand?

No, did not realise...

and 28 for the WANCAs, I'm assuming?!
 
  • Haha
  • Like
Reactions: 14 users

Xhosa12345

Regular
  • Haha
  • Like
Reactions: 5 users

Bombersfan

Regular
Just reading through the previous 2 annual reports. I don't think we'll see any new info in the chair's/directors' reports being released; they'll touch on the partnerships, the tape-out, the increased sales activity/interest, commercialisation progress, new key personnel, etc. The 2 sections I'm keen to see, though, will be receipts from customers and total employees.

Receipts from customers end of 2020 $55k
Receipts from customers end of 2021 $2.5m
Receipts end of 2022… $$??

As working out the licence fee/revenue model is like trying to split the atom, this is the simplest figure to show some activity and progress. We know the half yearly to June somehow showed $4.8m in revenue; how that's reported in receipts will be interesting, but hopefully the annual receipts figure shows good progression.
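As a trivial check on that trajectory (FY2022 left as the unknown it is), the 2020-to-2021 jump alone works out to roughly 45x:

```python
receipts = {"FY2020": 55_000, "FY2021": 2_500_000, "FY2022": None}  # AUD

growth = receipts["FY2021"] / receipts["FY2020"]
print(f"FY2020 -> FY2021 receipts growth: ~{growth:.0f}x")  # ~45x
```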



Total employees Dec 2020 - 42
Total employees Dec 2021 - 63
Total employees Dec 2022 - ??

If it's heading towards 100, as the company has mentioned, then this suggests things are still pretty well on track.

As most material info is under lock and key, these are hopefully 2 clear indicators that show things are still moving north.

Hopefully the fireside chat can be a little more revealing on the “where we are going” topic.
 
  • Like
  • Fire
  • Love
Reactions: 57 users
Late 2021 interview with Oculi CEO fwiw


CEO Interview: Charbel Rizk of Oculi
by Daniel Nenni on 11-19-2021 at 6:00 am
Categories: CEO Interviews

Charbel Rizk is CEO of Oculi®, a spinout from Johns Hopkins University, a fabless semiconductor startup commercializing technology to address the high power and latency challenges of vision technology. Dr. Rizk recognized these as barriers to effective AI in his years of experience as a Principal Systems Engineer, Lead Innovator, and Professor at Rockwell Aerospace, McDonnell Douglas, Boeing, JHUAPL and Johns Hopkins University. The Oculi vision solution reduces latency, bandwidth, and/or power consumption by up to 30x.
Why did you decide to create this technology?
Our original motivation was simply to enable more effective autonomy. Our perspective is that the planet needs the “human eye” in AI for energy efficiency and safety. Machines outperform humans in most tasks but human vision remains far superior despite technology advances. Cameras, being the predominant sensors for machine vision, have mega-pixels of resolution. Advanced processors can perform trillions of operations per second. With this combination, one would expect vision architecture (camera + computer) today to be on par with human vision. However, current technology is as much as ~40,000x behind, when looking at the combination of time and energy wasted in extracting the required information. There is a fundamental tradeoff between time and energy, and most solutions optimize one at the expense of the other. Just like biology, machine vision must generate the “best” actionable information very efficiently (in time and power consumption) from the available signal (photons).
What are the major problems with the current technology available in the market?
Cameras and processors operate very differently compared to the eye+brain combination, largely because they have been historically developed for different purposes. Cameras are for accurate communication and reproduction of a scene. Processors have evolved over time with certain applications in mind, with the primary performance measure being operations per second. The latest trend is domain specific architectures (i.e. custom chips), driven by demand from applications such as image processing.
Another important disconnect, albeit less obvious, is the architecture itself. When a solution is developed from existing components (i.e. off-the-shelf cameras and processors), it becomes difficult to integrate into a flexible solution and, more importantly, to dynamically optimize in real time, which is a key aspect of human vision.
As the world of automation grows exponentially and the demand for imaging sensors skyrockets, efficient (time and resources) vision technology becomes even more critical to safety (reducing latency) and to conserving energy.
What are the solutions proposed by Oculi?
Oculi has developed an integrated sensing and processing architecture for imaging and vision applications. Oculi's patented technology is agnostic to both the sensing modality on the front end (linear, Geiger, DVS, infrared, depth, or TOF) and the post-processing (CPU, GPU, AI processors…) that follows. We have also demonstrated key IP in silicon that can materialize this architecture into commercial products within 12-18 months.
A processing platform that equals the brain is an important step in matching human perception, but it will not be sufficient to achieve human vision without "eye-like" sensors. In the world of vision technology, the eye represents the power and effectiveness of parallel edge processing and dynamic sensor optimization. The eye not only senses the light, it also performs a good bit of parallel processing and only transfers relevant information to the brain. It also receives feedback signals from the brain to dynamically adjust to changing conditions and/or objectives. Oculi has developed a novel vision architecture that deploys parallel processing and in-memory compute in the pixel (zero distance between sensing and processing), delivering up to 30x improvements in efficiency (time and/or energy).
The OCULI SPU™ (Sensing & Processing Unit) is a single-chip complete vision solution delivering real-time Vision Intelligence (VI) at the edge, with software-defined features and an output compatible with most computer-vision ecosystems of tools and algorithms. Fitted with IntelliPixel™ technology, the OCULI SPU reduces bandwidth and external post-processing down to ~1% with zero loss of relevant information. The OCULI SPU S12, our GEN 1 go-to-market product, is the industry's first integrated neuromorphic (eye+brain) silicon deploying sparse sensing, parallel processing + memory, and dynamic optimization.
It offers the efficient Vision Intelligence (VI) that is a prerequisite for effective Artificial Intelligence (AI) in edge applications. OCULI SPU is the first single-chip vision solution on a standard CMOS process that delivers unparalleled selectivity, efficiency, and speed.
There is significant room for improvement in today’s products by simply optimizing the architecture, in particular the signal processing chain from capture to action, and human vision is a perfect example of what’s possible. At Oculi, we have developed a new architecture for computer and machine vision that promises efficiency on par with human vision but outperforms in speed.
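As an aside, to make that "down to ~1%" bandwidth claim concrete, here's a toy DVS-style sketch of per-pixel change events: in a mostly static scene the event count is a tiny fraction of the pixel count. This is a generic illustration, not Oculi's actual IntelliPixel logic:

```python
import numpy as np

def pixel_events(prev_frame, new_frame, threshold=10):
    """Emit (row, col, delta) only for pixels whose change crosses threshold,
    so output bandwidth scales with scene activity, not sensor resolution."""
    delta = new_frame.astype(int) - prev_frame.astype(int)
    rows, cols = np.nonzero(np.abs(delta) >= threshold)
    return list(zip(rows, cols, delta[rows, cols]))

prev = np.zeros((480, 640), dtype=np.uint8)   # static background
new = prev.copy()
new[100:110, 200:210] = 200                   # one small moving object
events = pixel_events(prev, new)
print(len(events), "events out of", prev.size, "pixels")  # 100 of 307200
```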
Do you want to talk about the potential markets? R&D?
We have developed a healthy pipeline of customer/partner engagements across a variety of markets, from industrial and intelligent transportation to consumer to automotive. Our initial focus is on edge applications for eye, gesture, and face tracking in the interactive/smart display and AR/VR markets. These are near-term market opportunities with high volume where Oculi technology offers a clear competitive edge. As biology and nature have been the inspiration for much of technology innovation, developing imaging technology that mimics human vision in efficiency but outperforms it in speed is a logical path. It is low-hanging fruit (performance versus price), as Oculi has successfully demonstrated in multiple paid pilot projects with large international customers. Also, unlike the photos and videos we collect for personal consumption, machine vision is not about pretty images and the highest number of pixels.
 
  • Like
  • Fire
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
So from Infineon's Doctoral Thesis ad we know that they're looking for someone to do a thesis investigating how to combine neuromorphic computing with the latest Infineon Aurix® µC. Which means we might be in the next generation Aurix if I'm not mistaken or completely crazy or both! I mean, where else are they going to get a neuromorphic event-based AI accelerator from because it's not as if they're in the habit of falling off the back of a passing truck, at least not where I live?

Please DYOR because I can be prone to getting the odd thing or two wrong.

I also found this from 5 Monate (which I'm pretty sure is German for months) ago.


SS.PNG

 
Last edited:
  • Like
  • Fire
Reactions: 15 users