Deadpool
hyper-efficient AI
Oscar Wilde quote - "Imitation is the sincerest form of flattery that mediocrity can pay to greatness."
Thought the same.
Xperi DTS
Talk about the minotaur's cave! Nothing for Xperi, nothing for DTS, so I tried CTO Petronel Bigioi and found a few for FOTONATION LTD:
FotoNation is a wholly owned subsidiary of Xperi.
US11046327B2 System for performing eye detection and/or tracking
[0034] As further illustrated in FIG. 3, the system 302 may include a face detector component 316, a control component 318, and an eye tracking component 320. The face detector component 316 may be configured to analyze the first image data 310 in order to determine a location of a face of a user. For example, the face detector component 316 may analyze the first image data 310 using one or more algorithms associated with face detection. The one or more algorithms may include, but are not limited to, neural network algorithm(s), Principal Component Analysis algorithm(s), Independent Component Analysis algorithm(s), Linear Discriminant Analysis algorithm(s), Evolutionary Pursuit algorithm(s), Elastic Bunch Graph Matching algorithm(s), and/or any other type of algorithm(s) that the face detector component 316 may utilize to perform face detection on the first image data 310.
[0041] The eye tracking component 320 may be configured to analyze the second image data 312 in order to determine eye position and/or a gaze direction of the user. For example, the eye tracking component 320 may analyze the second image data 312 using one or more algorithms associated with eye tracking. The one or more algorithms may include, but are not limited to, neural network algorithm(s) and/or any other types of algorithm(s) associated with eye tracking.
Missed it by that much:
[0050] As described herein, a machine-learned model may include, but is not limited to, a neural network (e.g., You Only Look Once (YOLO) neural network, VGG, DenseNet, PointNet, convolutional neural network (CNN), stacked auto-encoders, deep Boltzmann machine (DBM), deep belief networks (DBN)), regression algorithm (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), Bayesian algorithms (e.g., naïve Bayes, Gaussian naïve Bayes, multinomial naïve Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), association rule learning algorithms (e.g., perceptron, back-propagation, Hopfield network, Radial Basis Function Network (RBFN)), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional or alternative examples of neural network architectures may include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like. Although discussed in the context of neural networks, any type of machine-learning may be used consistent with this disclosure. For example, machine-learning algorithms may include, but are not limited to, regression algorithms, instance-based algorithms, Bayesian algorithms, association rule learning algorithms, deep learning algorithms, etc.
... no suggestion of a digital SNN SoC, or even an analog one.
However, they did make a CNN SoC:
WO2017129325A1 A CONVOLUTIONAL NEURAL NETWORK
Von Neumann rools!
A convolutional neural network (CNN) for an image processing system comprises an image cache responsive to a request to read a block of NxM pixels extending from a specified location within an input map to provide a block of NxM pixels at an output port. A convolution engine reads blocks of pixels from the output port, combines blocks of pixels with a corresponding set of weights to provide a product, and subjects the product to an activation function to provide an output pixel value. The image cache comprises a plurality of interleaved memories capable of simultaneously providing the NxM pixels at the output port in a single clock cycle. A controller provides a set of weights to the convolution engine before processing an input map, causes the convolution engine to scan across the input map by incrementing a specified location for successive blocks of pixels and generates an output map within the image cache by writing output pixel values to successive locations within the image cache.
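The abstract boils down to: fetch an NxM block of pixels at a specified location, combine it with a matching set of weights, apply an activation function, and scan the window across the input map. A hedged software sketch of that loop — the hardware-side detail (interleaved memories delivering the whole block in one clock cycle) is not modelled, and all function names are my own:

```python
# Minimal software sketch of the convolution engine described in
# WO2017129325A1's abstract. The image-cache interleaving that makes
# the block read single-cycle in hardware is deliberately omitted.

def read_block(image, row, col, n, m):
    """Stand-in for the image cache's read port: the NxM block whose
    top-left corner is (row, col), flattened row-major."""
    return [image[row + i][col + j] for i in range(n) for j in range(m)]


def relu(x):
    """One common choice of activation function."""
    return max(0.0, x)


def convolve(image, weights, n, m, activation=relu):
    """Scan the window across the input map, as the patent's controller
    does by incrementing the specified location for successive blocks,
    and write activated dot products into an output map."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows - n + 1):
        out_row = []
        for c in range(cols - m + 1):
            block = read_block(image, r, c, n, m)
            product = sum(p * w for p, w in zip(block, weights))
            out_row.append(activation(product))
        out.append(out_row)
    return out
```

Note this is the conventional (von Neumann style) fetch-multiply-accumulate loop — which is exactly the contrast being drawn with a spiking approach.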
Hi TC
This is pure speculation on my part, but they recently did a cash raise to pay for, amongst other things, a “core technology upgrade”.
OK, maybe one day. Thanks @Tothemoon24
Hi TC
I once held Pck a couple of years ago, and I've kept a close eye on their progress.
I sent an email to Tony Dawe last year.
This is the reply:
Xperi DTS
Mercedes
Prophesee
Nice one mate!
Here's a question to no one in particular.
Who benefits by having the share price tightly controlled at this level or even sub 60c?
Tech x

I don't think it is controlled as such. Don't forget we have at least 30M shares to be dumped into the market. Buyers won't rush in to buy.
Surely a fair bit of today’s trading was fuelled by this? Relentless selling all afternoon!
Someone is loaning out shares to the shorters.
Hi TechGirl,
Great article on us about our benchmarking.
BrainChip says new standards needed for edge AI benchmarking | Edge Industry Review
Benchmarking used to measure AI performance in today's industry tends to focus heavily on TOPS metrics, which do not accurately depict real-world applications. (www.edgeir.com)
BrainChip says new standards needed for edge AI benchmarking
Jan 23, 2023 | Abhishek Jadhav
BrainChip, a provider of neuromorphic processors for edge AI on-chip processing, has published a white paper that examines the limitations of conventional AI performance benchmarks. The white paper also suggests additional metrics to consider when evaluating AI applications’ overall performance and efficiency in multi-modal edge environments.
The white paper, “Benchmarking AI inference at the edge: Measuring performance and efficiency for real-world deployments”, examines how neuromorphic technology can help reduce latency and power consumption while amplifying throughput. According to research cited by BrainChip, the benchmarking used to measure AI performance in today’s industry tends to focus heavily on TOPS metrics, which do not accurately depict real-world applications.
“While there’s been a good start, current methods of benchmarking for edge AI don’t accurately account for the factors that affect devices in industries such as automotive, smart homes and Industry 4.0,” said Anil Mankar, the chief development officer of BrainChip.
Recommended reading: Edge Impulse, BrainChip partner to accelerate edge AI development
Limitations of traditional edge AI benchmarking techniques
MLPerf is recognized as the benchmark system for measuring the performance and capabilities of AI workloads and inferences. While other organizations seek to add new standards for AI evaluations, they still use TOPS metrics. Unfortunately, these metrics fail to reflect actual power consumption and performance in real-world settings.
BrainChip proposes that future benchmarking of AI edge performance should include application-based parameters. Additionally, it should emulate sensor inputs to provide a more realistic and complete view of performance and power efficiency.
“We believe that as a community, we should evolve benchmarks to continuously incorporate factors such as on-chip, in-memory computation, and model sizes to complement the latency and power metrics that are measured today,” Mankar added.
Recommended reading: BrainChip, Prophesee to deliver “neuromorphic” event-based vision systems for OEMs
Benchmarks in action: Measuring throughput and power consumption
BrainChip promotes a shift towards using application-specific parameters to measure AI inference capabilities. The new standard should use open-loop and closed-loop datasets to measure raw performance in real-world applications, such as throughput and power consumption.
BrainChip believes businesses can leverage this data to optimize AI algorithms with performance and efficiency for various industries, including automotive, smart homes and Industry 4.0.
Evaluating AI performance for automotive applications can be difficult due to the complexity of dynamic situations. One can create more responsive in-cabin systems by incorporating keyword spotting and image detection into benchmarking measures. On the other hand, when evaluating AI in smart home devices, one should prioritize measuring performance and accuracy for keyword spotting, object detection and visual wake words.
“Targeted Industry 4.0 inference benchmarks focused on balancing efficiency and power will enable system designers to architect a new generation of energy-efficient robots that optimally process data-heavy input from multiple sensors,” BrainChip explained.
BrainChip emphasizes the need for more effort to incorporate additional parameters in a comprehensive benchmarking system. The company suggests creating new benchmarks for AI inference performance that measure efficiency by evaluating factors such as latency, power, and in-memory (on-chip) computation.
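The article's argument is that a single TOPS figure hides what matters in deployment. A hedged sketch of the kind of application-level metrics it advocates, derived from measured run data rather than peak arithmetic throughput — the function and field names here are illustrative, not from the white paper:

```python
# Illustrative application-level benchmark summary: instead of quoting
# peak TOPS, report latency, throughput, and energy per inference from
# an actual measured run on the target workload.

def summarize_run(num_inferences: int, elapsed_s: float, avg_power_w: float) -> dict:
    """Turn one measured benchmark run into deployment-relevant metrics."""
    energy_j = avg_power_w * elapsed_s  # total joules consumed
    return {
        "latency_ms": elapsed_s / num_inferences * 1000.0,
        "inferences_per_s": num_inferences / elapsed_s,
        "inferences_per_joule": num_inferences / energy_j,
    }
```

For example, 1,000 keyword-spotting inferences completed in 2 s at an average draw of 0.5 W works out to 2 ms latency, 500 inferences/s, and 1,000 inferences per joule — numbers a system designer can act on, whereas a TOPS rating says nothing about any of the three.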
Sure as hell ain't the retail shareholders.
$4.50? We are still essentially pre revenue.

Afternoon TECH,
Good question & the only scenario I can think of is a large entity in our field of business screwing the price down.
But one of many ways...
Large entity goes out & on market buys, say, the equivalent of 100,000,000 shares for a combined average value of say $140,000,000.00.
Next step, they then proceed to lend these shares out to a separate entity, although still the same parent entity, all linked through various offshore accounts, brokers etc., to short the crap out of our share price.
In doing so they...
1, pick up a tax loss of up to $140,000,000.00 for the books.
2, the short side of the equation buys back and makes a profit of $X...
3, most importantly, by playing with a relative pittance, $140,000,000.00, and screwing the price down, said company then mysteriously appears & offers a buyout price of say $2.50 per share, White Knight, which after the enduring torment they have inflicted on the average retail shareholder for over a year, I dare say a lot would accept.
4, the large acquiring company has literally saved itself billions of $ off the purchase price simply by playing games with only $140,000,000.00.
* As I have said earlier, I personally believe the fair value of Brainchip shares presently should be well north of $4.50 AU, even pricing in world events.
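Purely for illustration of the arithmetic in point 4 above, using only the figures quoted in the post — the $140M outlay, a $2.50 offer against a claimed $4.50 fair value — and a made-up placeholder share count (BrainChip's actual shares on issue would need to be looked up):

```python
# Illustrative only: how a modest shorting war chest could dwarf
# itself in buyout savings. total_shares is a hypothetical placeholder.

def buyout_saving(total_shares: int, fair_price: float,
                  offer_price: float, war_chest: float) -> float:
    """Difference between acquiring at claimed fair value versus at the
    depressed offer price, net of the money spent playing the game."""
    saving = total_shares * (fair_price - offer_price)
    return saving - war_chest

# With a hypothetical 1.5 billion shares on issue:
# buyout_saving(1_500_000_000, 4.50, 2.50, 140_000_000)
# -> $2.86 billion net saving against a $140M outlay
```

On those assumed numbers, the net saving is roughly twenty times the outlay, which is the whole point the post is making.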
The above is purely what I think is playing out before our eyes.
Regards,
Esq.