BRN Discussion Ongoing

Baisyet

Regular
At first I was confused by the name of this handle, but I don't think our resident wordsmith here has a Twitter account.


thought the same
 

Deadpool

hyper-efficient Ai
thought the same
Oscar Wilde quote: "Imitation is the sincerest form of flattery that mediocrity can pay to greatness."
 
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Talk about the Minotaur's cave! Nothing for Xperi, nothing for DTS, so I tried CTO Petronel Bigioi and found a few for FOTONATION LTD:
FotoNation is a wholly owned subsidiary of Xperi.

US11046327B2 System for performing eye detection and/or tracking

View attachment 22571

[0034] As further illustrated in FIG. 3, the system 302 may include a face detector component 316, a control component 318, and an eye tracking component 320. The face detector component 316 may be configured to analyze the first image data 310 in order to determine a location of a face of a user. For example, the face detector component 316 may analyze the first image data 310 using one or more algorithms associated with face detection. The one or more algorithms may include, but are not limited to, neural network algorithm(s), Principal Component Analysis algorithm(s), Independent Component Analysis algorithm(s), Linear Discriminant Analysis algorithm(s), Evolutionary Pursuit algorithm(s), Elastic Bunch Graph Matching algorithm(s), and/or any other type of algorithm(s) that the face detector component 316 may utilize to perform face detection on the first image data 310.

[0041] The eye tracking component 320 may be configured to analyze the second image data 312 in order to determine eye position and/or a gaze direction of the user. For example, the eye tracking component 320 may analyze the second image data 312 using one or more algorithms associated with eye tracking. The one or more algorithms may include, but are not limited to, neural network algorithm(s) and/or any other types of algorithm(s) associated with eye tracking.
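To picture how the two-stage arrangement in [0034] and [0041] hangs together, here's a minimal Python sketch of my own. It uses OpenCV's stock Haar cascades purely as stand-ins for the face detector and eye tracking components, and the input file name is hypothetical; the patent itself contemplates neural networks and several other algorithm families:

```python
# Minimal two-stage pipeline loosely mirroring the patent's face detector
# component (316) and eye tracking component (320). OpenCV Haar cascades are
# used purely as illustrative stand-ins for the algorithms the patent lists.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_faces_and_eyes(image_path):
    """Return a list of (face_box, [eye_boxes]) tuples for one image."""
    frame = cv2.imread(image_path)
    if frame is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    results = []
    # Stage 1: locate faces in the "first image data".
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        face_roi = gray[y:y + h, x:x + w]
        # Stage 2: look for eyes only inside the detected face region,
        # analogous to handing off to the eye tracking component.
        eyes = [(x + ex, y + ey, ew, eh)
                for (ex, ey, ew, eh) in eye_detector.detectMultiScale(face_roi)]
        results.append(((x, y, w, h), eyes))
    return results

if __name__ == "__main__":
    # "driver.jpg" is a hypothetical input image.
    for face, eyes in detect_faces_and_eyes("driver.jpg"):
        print("face:", face, "eyes:", eyes)
```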

Missed it by that much:

[0050] As described herein, a machine-learned model which may include, but is not limited to a neural network (e.g., You Only Look Once (YOLO) neural network, VGG, DenseNet, PointNet, convolutional neural network (CNN), stacked auto-encoders, deep Boltzmann machine (DBM), deep belief networks (DBN),), regression algorithm (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), Bayesian algorithms (e.g., naïve Bayes, Gaussian naïve Bayes, multinomial naïve Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), association rule learning algorithms (e.g., perceptron, back-propagation, Hopfield network, Radial Basis Function Network (RBFN)), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional or alternative examples of neural network architectures may include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like. Although discussed in the context of neural networks, any type of machine-learning may be used consistent with this disclosure. For example, machine-learning algorithms may include, but are not limited to, regression algorithms, instance-based algorithms, Bayesian algorithms, association rule learning algorithms, deep learning algorithms, etc.

... no suggestion of a digital SNN SoC, or even an analog one.

However, they did make a CNN SoC:

WO2017129325A1 A CONVOLUTIONAL NEURAL NETWORK

Von Neumann rools!


View attachment 22576




A convolutional neural network (CNN) for an image processing system comprises an image cache responsive to a request to read a block of NxM pixels extending from a specified location within an input map to provide a block of NxM pixels at an output port. A convolution engine reads blocks of pixels from the output port, combines blocks of pixels with a corresponding set of weights to provide a product, and subjects the product to an activation function to provide an output pixel value. The image cache comprises a plurality of interleaved memories capable of simultaneously providing the NxM pixels at the output port in a single clock cycle. A controller provides a set of weights to the convolution engine before processing an input map, causes the convolution engine to scan across the input map by incrementing a specified location for successive blocks of pixels and generates an output map within the image cache by writing output pixel values to successive locations within the image cache.
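As a rough mental model of that abstract, here's a plain NumPy sketch of the scan loop it describes: the controller steps the read location across the input map, the image cache serves an NxM block, the convolution engine combines the block with the weights and applies an activation, and the result is written to the output map. The block size, the ReLU-style activation and the array-based "cache" are my own simplifications, not the patented hardware:

```python
# Software sketch of the patent's scan loop: for each output location the
# "image cache" serves an NxM block, the "convolution engine" multiplies it
# by a weight kernel, sums, applies an activation, and the result is written
# to an output map. Real hardware fetches the block in one clock cycle via
# interleaved memories; here it is just array slicing.
import numpy as np

def convolve_map(input_map, weights, activation=lambda v: max(v, 0.0)):
    n, m = weights.shape                      # kernel (block) size, e.g. 3x3
    rows, cols = input_map.shape
    output_map = np.zeros((rows - n + 1, cols - m + 1))
    for r in range(output_map.shape[0]):      # controller increments the
        for c in range(output_map.shape[1]):  # "specified location"
            block = input_map[r:r + n, c:c + m]       # image cache read
            product = float(np.sum(block * weights))  # convolution engine
            output_map[r, c] = activation(product)    # activation function
    return output_map

if __name__ == "__main__":
    img = np.arange(36, dtype=float).reshape(6, 6)
    kernel = np.ones((3, 3)) / 9.0            # simple averaging kernel
    print(convolve_map(img, kernel))
```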
Xperi DTS
Mercedes
Prophesee

[Four screenshots attached, 24 January 2023]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 22 users

Tothemoon24

Regular
This is pure speculation on my part, but they recently did a cash raise to pay for, amongst other things, a “core technology upgrade”.

View attachment 27843
Hi TC
I once held PCK a couple of years ago, and I've kept a close eye on their progress.

I sent an email to Tony Dawe last year.

This is the reply:
 

Attachments

  • 2E10A667-96C7-456A-A191-2E8A87D636D0.png (2.3 MB)
  • Like
  • Fire
Reactions: 18 users

TopCat

Regular
Hi TC
I once held PCK a couple of years ago, and I've kept a close eye on their progress.

I sent an email to Tony Dawe last year.

This is the reply:
ok, maybe one day. Thanks @Tothemoon24
 
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

The other thing is that it says DTS can be seen in Garmin's new Unified Cabin Experience. Garmin also has links with Cerence. And look, it says embedded gaming: Atari for gaming without an internet connection!


[Screenshot attached: Cer pm.png]
AUTOMOTIVE
Wednesday, 4 January 2023, 6:00 am CST

Garmin unveils its Unified Cabin Experience at CES 2023

In-vehicle solution with personalized entertainment zones intuitively connects wireless passenger devices to real-time game streams, videos, music and more​

January 4, 2023/PR Newswire—Garmin (NYSE: GRMN) will demonstrate its latest in-cabin solutions for the Automotive OEM (original equipment manufacturer) market at CES 2023 with an emphasis on technologies that unify multiple domains, touchscreens and wireless devices on a single SoC (system on chip). Featuring four infotainment touchscreens, instrument cluster, cabin monitoring system, wireless headphones, wireless gaming controllers, smartphones, and numerous entertainment options—all powered by a single Garmin multi-domain computing module—the demonstration addresses several key technical and user experience challenges for next generation multi-screen systems running on the Android Automotive operating system.
Preview the Garmin Unified Cabin demo in action here.
“Garmin’s CES demo highlights technological innovations that advance the capabilities of Android Automotive OS for delivering a multi-zone personalized experience that consumers are expecting,” said Matt Munn, Garmin Automotive OEM managing director. “There are enormous opportunities to integrate many types of features and technologies into a single, unified system, but our connected in-cabin experience goes beyond system integration with innovative new features and an improved user experience.”
The system is customizable for each passenger and easy to operate, utilizing UWB (Ultra-Wideband) positioning technology to automatically connect wireless devices to the appropriate display. A cabin monitoring camera identifies and unlocks each passenger’s personal user interface profile, enabling occupants to enjoy multiple personalized entertainment options including cloud-based blockbuster console/PC games that can be played over 5G connectivity, on-board games, and multiple streaming video platforms—all from some of the most popular names. With the new Cabin App feature, passengers can locate connected devices, control other displays and share video and audio content across multiple passenger zones.
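Just to illustrate the UWB idea in that paragraph, here's a toy Python sketch of mapping a device to the nearest display zone by position; the zone coordinates, device position and function names are all invented for illustration and have nothing to do with Garmin's actual implementation:

```python
# Toy illustration of UWB-based zone assignment: each passenger display has a
# known anchor position, each wireless device reports an estimated position
# from UWB ranging, and the device is connected to the nearest display zone.
# All coordinates and names are invented for illustration only.
import math

DISPLAY_ZONES = {
    "front_left": (0.5, 0.6),
    "front_right": (1.3, 0.6),
    "rear_left": (0.5, 1.8),
    "rear_right": (1.3, 1.8),
}

def assign_zone(device_xy):
    """Return the display zone whose anchor is closest to the device."""
    return min(DISPLAY_ZONES,
               key=lambda zone: math.dist(device_xy, DISPLAY_ZONES[zone]))

if __name__ == "__main__":
    print(assign_zone((1.2, 1.7)))   # -> "rear_right"
```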
Garmin would like to recognize the following OEM providers for their contributions to the Unified Cabin Experience:
  • In-cabin sensing capabilities: Xperi’s DTS AutoSense™ platform uses advanced machine learning and a single camera to enable safety and experience features such as seat occupancy (including body pose), hands-on-wheel, activity and seatbelt detection, driver attention zones, driver distraction or occupant recognition.
  • Audio streaming: Xperi’s DTS AutoStage™ hybrid radio solution provides a richer, more immersive radio listening experience. This feature automatically switches between a station’s over-the-air radio signal and its IP stream when traveling in and out of range, as well as providing station metadata, album art and more.
  • Navigation: Mapbox enabled Garmin to accelerate the delivery of this demo in a fraction of the time. Mapbox Dash combines the company’s strengths in navigation, search and visualization, including AI and traffic data. This integration provides beautifully rendered and easily customizable interactive maps across all five displays of the system. The Garmin Unified Cabin Experience is also interoperable with other navigation systems.
  • Embedded gaming: Atari integration transforms the entire vehicle into an arcade, allowing for family-friendly gaming without a data connection.
  • Embedded software solutions: BlackBerry QNX Hypervisor and Neutrino RTOS incorporate best-in-class BlackBerry security technologies that safeguard users against system malfunctions, malware and cybersecurity breaches. These provide the necessary technology to power the industry’s next generation of products, while also supporting 64-bit ARMv8 computing platforms and Intel x86-64 architecture.
[Screenshot attached, 24 January 2023]





 
  • Like
  • Fire
  • Wow
Reactions: 34 users

TechGirl

Founding Member
Great article on us about our Benchmarking.


BrainChip says new standards needed for edge AI benchmarking​


Jan 23, 2023 | Abhishek Jadhav



BrainChip, a provider of neuromorphic processors for edge AI on-chip processing, has published a white paper that examines the limitations of conventional AI performance benchmarks. The white paper also suggests additional metrics to consider when evaluating AI applications’ overall performance and efficiency in multi-modal edge environments.

The white paper, “Benchmarking AI inference at the edge: Measuring performance and efficiency for real-world deployments”, examines how neuromorphic technology can help reduce latency and power consumption while amplifying throughput. According to research cited by BrainChip, the benchmarking used to measure AI performance in today’s industry tends to focus heavily on TOPS metrics, which do not accurately depict real-world applications.

“While there’s been a good start, current methods of benchmarking for edge AI don’t accurately account for the factors that affect devices in industries such as automotive, smart homes and Industry 4.0,” said Anil Mankar, the chief development officer of BrainChip.

Recommended reading: Edge Impulse, BrainChip partner to accelerate edge AI development

Limitations of traditional edge AI benchmarking techniques​

MLPerf is recognized as the benchmark system for measuring the performance and capabilities of AI workloads and inferences. While other organizations seek to add new standards for AI evaluations, they still use TOPS metrics. Unfortunately, these metrics fail to capture real-world power consumption and performance.

BrainChip proposes that future benchmarking of AI edge performance should include application-based parameters. Additionally, it should emulate sensor inputs to provide a more realistic and complete view of performance and power efficiency.

“We believe that as a community, we should evolve benchmarks to continuously incorporate factors such as on-chip, in-memory computation, and model sizes to complement the latency and power metrics that are measured today,” Mankar added.

Recommended reading: BrainChip, Prophesee to deliver “neuromorphic” event-based vision systems for OEMs

Benchmarks in action: Measuring throughput and power consumption​

BrainChip promotes a shift towards using application-specific parameters to measure AI inference capabilities. The new standard should use open-loop and closed-loop datasets to measure raw performance in real-world applications, such as throughput and power consumption.

BrainChip believes businesses can leverage this data to optimize AI algorithms with performance and efficiency for various industries, including automotive, smart homes and Industry 4.0.

Evaluating AI performance for automotive applications can be difficult due to the complexity of dynamic situations. One can create more responsive in-cabin systems by incorporating keyword spotting and image detection into benchmarking measures. On the other hand, when evaluating AI in smart home devices, one should prioritize measuring performance and accuracy for keyword spotting, object detection and visual wake words.

“Targeted Industry 4.0 inference benchmarks focused on balancing efficiency and power will enable system designers to architect a new generation of energy-efficient robots that optimally process data-heavy input from multiple sensors,” BrainChip explained.

BrainChip emphasizes the need for more effort to incorporate additional parameters in a comprehensive benchmarking system. The company suggests creating new benchmarks for AI inference performance that measure efficiency by evaluating factors such as latency, power, and in-memory (on-chip) computation.
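As a back-of-the-envelope illustration of the kind of application-level benchmark the article argues for, here's a minimal Python sketch that reports latency percentiles, throughput and energy per inference rather than TOPS; the placeholder model, the assumed average power draw and the emulated sensor input are invented for illustration and are not taken from the white paper:

```python
# Minimal sketch of an application-level edge benchmark: run a model over
# emulated sensor samples and report latency percentiles, throughput and
# energy per inference instead of peak TOPS. The model, the power figure and
# the sample source below are placeholders for illustration only.
import statistics
import time

def benchmark(model_fn, sample_source, num_samples=1000, avg_power_watts=0.5):
    latencies = []
    start = time.perf_counter()
    for _ in range(num_samples):
        sample = sample_source()            # emulated sensor input
        t0 = time.perf_counter()
        model_fn(sample)                    # one inference
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    throughput = num_samples / elapsed                        # inferences/s
    energy_per_inf = avg_power_watts * elapsed / num_samples  # joules
    return {
        "p50_latency_ms": statistics.median(latencies) * 1e3,
        "p99_latency_ms": sorted(latencies)[int(0.99 * num_samples)] * 1e3,
        "throughput_ips": throughput,
        "energy_mJ_per_inference": energy_per_inf * 1e3,
    }

if __name__ == "__main__":
    # Placeholder "model": sum a list, standing in for e.g. keyword spotting.
    print(benchmark(lambda x: sum(x), lambda: list(range(256))))
```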
 
  • Like
  • Love
  • Fire
Reactions: 78 users

SERA2g

Founding Member
Hi TC
I once held PCK a couple of years ago, and I've kept a close eye on their progress.

I sent an email to Tony Dawe last year.

This is the reply:
Nice one mate!

In August 2022 I emailed Adam Osseiran about a business I thought could benefit from akida technology. From my brief email, Adam and Tony were interested in meeting with the CEO, who I knew albeit not very well.

I had already spoken to the CEO about Brainchip and so made a 'warm' introduction by email and then left it to Tony and Adam to run with.

I did later get confirmation that they had met with the CEO but did not ask whether anything had come of it for obvious reasons. The CEO sent me an email afterwards thanking me for the introduction and said "we had a long call and are following up. Definitely tech we need to know about"

I suppose the point I'm making is the Brainchip team are more than happy to meet with prospective clients at the suggestion of shareholders, so if you have someone in your network you think would be valuable for Tony and the team to meet with, then have a chat with Tony and the client separately and provided they are both interested, send an introduction email to both parties.

You never know what may come of it!

Cheers
 
  • Like
  • Love
  • Fire
Reactions: 66 users

MrNick

Regular
[Screenshot attached, 24 January 2023]
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users

TECH

Regular
Here's a question to no one in particular.

Who benefits by having the share price tightly controlled at this level or even sub 60c?

Tech x
 
  • Like
  • Thinking
Reactions: 8 users

Colorado23

Regular
G'day legends.
Listened to an interesting podcast with an NZ lad who is a co-founder of Neuro. I seem to remember one or more of you discussing Neuro, but I have reached out to Mr Ferguson to gauge his response in relation to BrainChip. He also, coincidentally, completed a PhD at CMU.
 
  • Like
  • Fire
Reactions: 7 users

Cyw

Regular
Here's a question to no one in particular.

Who benefits by having the share price tightly controlled at this level or even sub 60c?

Tech x
I don't think it is controlled as such. Don't forget we have at least 30M shares to be dumped into the market. Buyers won't rush in to buy.
 
  • Like
  • Haha
  • Thinking
Reactions: 10 users

db1969oz

Regular
I don't think it is controlled as such. Don't forget we have at least 30M shares to be dumped into the market. Buyers won't rush in to buy.
Surely a fair bit of today’s trading was fuelled by this? Relentless selling all afternoon!
 
  • Like
  • Fire
Reactions: 6 users

Getupthere

Regular
Here's a question to no one in particular.

Who benefits by having the share price tightly controlled at this level or even sub 60c?

Tech x
Someone is loaning out shares to the shorters.

They make money loaning out their shares, while at the same time accumulating more shares as the price gets pushed down into the 60s.

It looks like the lenders don't want the price to fall below the range that would kick BRN out of the ASX 200.

Something tells me it might be someone that voted against Peter at the last AGM.

IMO

DYOR.
 
  • Like
  • Love
  • Thinking
Reactions: 17 users

FJ-215

Regular
Great article on us about our Benchmarking.


BrainChip says new standards needed for edge AI benchmarking
Jan 23, 2023 | Abhishek Jadhav
[article quoted in full above]
Hi TechGirl,

I expect that we will see a few more of these articles in the next week. Love all the dot joining here but we seem to have lost sight of the fact that we are in the midst of a capital raise. If you are selling your car you need to advertise. Giving it a wash and pumping up the tyres is a good plan too.
 
  • Like
  • Fire
Reactions: 11 users

BaconLover

Founding Member
Here's a question to no one in particular.

Who benefits by having the share price tightly controlled at this level or even sub 60c?

Tech x

I'm happy as long as LDA is happy ☺
 
  • Like
  • Haha
  • Thinking
Reactions: 11 users

hotty4040

Regular
I don't think it is controlled as such. Don't forget we have at least 30M shares to be dumped into the market. Buyers won't rush in to buy.

By whom, Cyw? Care to elaborate? Unless you mean "our guarantors"?

Akida Ballista >>>>> "Breaking News Imminent" I expect, quite soon <<<<<

hotty...
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Quatrojos

Regular
  • Like
  • Fire
Reactions: 11 users

Esq.111

Fascinatingly Intuitive.
Here's a question to no one in particular.

Who benefits by having the share price tightly controlled at this level or even sub 60c?

Tech x
Afternoon TECH,

Good question & the only scenario I can think of is a large entity in our field of business screwing the price down.

Just one of many ways it could be done...

Large entity goes out & buys on market, say, the equivalent of 100,000,000 shares for a combined value of, say, $140,000,000.00.

Next step, they then proceed to use these shares: lend them out to a separate entity, although still the same parent entity, all linked through various offshore accounts, brokers etc., to short the crap out of our share price.

In doing so they...

1. Pick up a tax loss of up to $140,000,000.00 for the books.

2. The short side of the equation buys back and makes a profit of $X...

3. Most importantly, by playing with a relative pittance, $140,000,000.00, and screwing the price down, said company then mysteriously appears & offers a buyout price of, say, $2.50 per share, the white knight, which, after the enduring torment they have inflicted on the average retail shareholder for over a year, I dare say a lot would accept.

4. The large acquiring company has literally saved itself billions of dollars off the purchase price simply by playing games with only $140,000,000.00.

* As I have said earlier, I personally believe the fair value of BrainChip shares presently should be well north of A$4.50, even pricing in world events.

The above is purely what I think is playing out before our eyes.

Regards,
Esq.
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 57 users

mrgds

Regular
Here's a question to no one in particular.

Who benefits by having the share price tightly controlled at this level or even sub 60c?

Tech x
Sure as hell ain't the retail shareholders.

I'd enjoy "Blind Freddy's" reply to your question.

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 20 users
Top Bottom