BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Might need to look a bit further into these guys at Opteran to ensure there are no patent breaches??? 😱

UK AI chip startup Opteran has raised $12m to further develop its optical flow AI technology into decision making.

Extract
“By mimicking Nature’s genius to enable machines to move like natural creatures we are redefining the global market for machine autonomy,” said David Rajan, CEO, Opteran. “We expect Opteran Natural Intelligence to become the standard solution for autonomy anywhere on the ground or in the air, on any machine, large or ultra small, because it combines such sophisticated natural brain capabilities in a light-weight, efficient package.”

From Opteran Website

 
Might need to look a bit further into these guys at Opteran to ensure there are no patent breaches??? 😱

Absolutely.
 

Tuliptrader

Regular
Another day of shorter frisbee.

TT
 

hotty4040

Regular
Speaking of delivery parcels, I just received my copy of "Higher Intelligence" by Peter AJ Van der Made, and I've already come to the conclusion that I'm a lot stupider than I thought I was.

View attachment 10321

That's ok Bravo, don't worry, be happy. I used to be very undecided about things, but now I'm not so sure.

;)

A B

hotty...
 

Shadow59

Regular
Another day of shorter frisbee.

TT
End of financial year games😟
Hopefully some recovery from Friday onwards.
 

Diogenese

Top 20
Hey D

Appreciate your thoughts on the following recent release. A couple of things caught my eye, particularly the weights, but I'm not tech enough to be comfortable judging whether this is similar to Akida.

Edit: Not saying Akida is involved, but curious about any similarities.





View attachment 10319
While Maxim don't refer to spikes, they do refer to 1-bit weights. They also refer to 8-bit weights in the same breath, so they have gone for additional accuracy as an option, cf. Akida's optional 4-bit weights/activations.
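To make the bit-width comparison concrete, here is a rough sketch (my own illustration in NumPy; the function and numbers are mine, not from Maxim's or BrainChip's implementations) of how weight bit width trades accuracy for memory:

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniform, sign-symmetric weight quantization (illustrative only).

    bits=1 keeps only each weight's sign (binary weights); wider bit
    widths keep progressively more amplitude information.
    """
    if bits == 1:
        return np.sign(w)                    # 1-bit: sign only
    levels = 2 ** (bits - 1) - 1             # 7 levels for 4-bit, 127 for 8-bit
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                    # stand-in for trained weights
for bits in (1, 4, 8):
    err = np.mean((w - quantize_weights(w, bits)) ** 2)
    print(f"{bits}-bit weights: mean squared quantization error {err:.4f}")
```

Each halving of bit width halves the weight memory and narrows the MAC hardware, at the cost of quantization error; that is the dial Maxim (1-bit/8-bit) and Akida (optional 4-bit) are each turning.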

Maxim have several NN patents, mainly directed to CNN using the now fashionable in-memory-compute, eg:

US2020110604A1 ENERGY-EFFICIENT MEMORY SYSTEMS AND METHODS
Priority: 20181003.
1. A high-throughput compute system for performing arithmetic calculations, the compute system comprising:
a source memory that stores source data for an arithmetic operation;
a compute cache to cache some of the source data;
a compute memory coupled to the compute cache, the compute memory being used in one or more cycles of the arithmetic operation, the compute cache and the compute memory forming a computing structure;
a weight memory coupled to the compute memory, the weight memory stores weight data for use in the arithmetic operation; and
a controller coupled to the computing structure, the controller performs steps comprising:
in response to data in a first row located at a first end of the computing structure having undergone a full rotation cycle in a direction of a second end of the computing structure, discarding data in a second row located at a third end of the computing structure;
shifting data elements in the computing structure towards the third end;
at the first end, loading data from a third row into the computing structure to replace the data in the first row;
shifting the data elements in the computing structure towards the second end, such that a new data element is loaded into the computing structure at a fourth end; and
using two or more data elements in the computing structure to perform the arithmetic operation.

It sounds like the thimble-and-pea trick, so here's a few pictures to complete the illusion:


[0003] Some of the most exciting applications of machine learning use Convolutional Neural Networks (CNNs). CNNs apply a number of hierarchical network layers and sub-layers to, for example, an input image to determine whether to categorize an image as containing a person or some other object. CNNs use neural network-based image classifiers that can automatically learn complex features for classification and object recognition tasks. Arithmetic operations for convolutions are typically performed in software that operates on a general-purpose computing device, such as a conventional microprocessor. This approach is very costly in terms of both power and time, and for many computationally intensive applications (e.g., real-time applications) general hardware is unable to perform the necessary operations in a timely manner as the rate of calculations is limited by the computational resources and capabilities of existing hardware designs.
...


As the amount of data subject to convolution operations increases and the complexity of operations continues to grow, the inability to reuse much of the data coupled with the added steps of storing and retrieving intermediate results from memory to complete an arithmetic operation present only some of the shortcomings of existing designs.

[### Our old friend the von Neumann bottleneck ###]




[0008] FIG. 1 illustrates an exemplary cache and compute structure according to various embodiments of the present disclosure.
[0034] In operation, memory 100 may serve to store source data, e.g., input image data, video data, audio data, etc., arranged in a matrix format that has a certain height and a width.
[0035] The dimensions of cache and compute structure 120 may be designed such that, in embodiments, its minimum width is equal to the width of memory 100 holding the image data (without padding) plus any width that may account for columns of padding 144 , 146 ,
...
Any number of kernels 150 may be used by a convolution layer to apply a set of weights 152 to data in a convolution window of an image. In embodiments, weights 152 may have been learned by a CNN during a training phase, e.g., to generate an activation value associated with the convolution window. For each kernel 150 , the convolution layer may have, for each data point, one network node, i.e., neuron, that outputs an activation value that may be calculated based on the set of weights 152 . The activation value for the convolution window may identify a feature or a characteristic, such as an edge that then may be used to identify the same feature at other locations within the image. In embodiments, weights 152 in kernel 150 are applied to elements in compute section 140 . The data in compute section 140 may be used, e.g., in each cycle of a convolution operation, as will be discussed in greater detail with reference to FIG. 15.

[0038] The mathematical concepts underlying convolutional neural networks are well-known in the art. In brief, a set of filters in the form of a limited-size kernel or weight data is applied to a set of larger input channel data (e.g., passed across an area of an image) or image data to produce output channel data (e.g., an output matrix) for a particular layer of the CNN. Each element in each output channel represents the sum of the products of the individual weights in the kernel multiplied by the individual data values of the input channels, passed through a nonlinear activation function, such as a ReLU or Sigmoid function.
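Paragraph [0038] can be condensed into a few lines of NumPy, assuming a "valid" convolution with a ReLU activation (variable names are mine, not the patent's):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D convolution layer step: each output element is the sum
    of the kernel weights times the input window, passed through ReLU."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i:i + kh, j:j + kw]                    # convolution window
            out[i, j] = np.maximum(np.sum(window * kernel), 0.0)  # ReLU
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])    # crude horizontal-gradient detector
# Every 2x2 window of this image steps by +1 left to right, so each
# output element is 2.0 (a 3x3 array of 2.0).
print(conv2d_valid(image, edge_kernel))
```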




[0011] FIG. 4- FIG. 6 illustrate data shifting and rotation by a cache and compute structure prior to discarding data, according to various embodiments of the present disclosure.

[0044] Then, the contents of memory 120 may be rotated left by one element, such that the bottom right element is replaced by the previously read first data item 306 , such that one or more mathematical operations may be performed on compute structure 140 and the next data item 407 may be read, and so on, as shown in FIG. 4 through FIG. 6, which illustrate data shifting and rotation by memory 120 , according to various embodiments of the present disclosure.

[0045] Once the data items, including any padding that has not been ignored, in row 240 have been read and processed, row 132 in memory 120 may be discarded and a new row, e.g., comprising zeros, may be loaded from the bottom of memory 120 , as shown in FIG. 7.
[0046] In embodiments, once source data is loaded from memory 100 into memory 120 , memory 120 may perform the ordered sequence of rotating and shifting operations shown in FIG. 8 through FIG. 9 to allow compute section 140 to use the loaded data many times, i.e., over many cycles, to ultimately obtain an arithmetic result without having to reload the same source data numerous times.

[0047] As a result, the systems and methods for memory 120 allow for efficient reuse of once-loaded data for a number of operations without having to re-fetch or reload the data over and over again from addresses in the standard memory. This advantageously avoids re-duplication of read operations and the need to perform computationally expensive data movement operations.

[### This is how Maxim avoid the von Neumann bottleneck by using a rotating buffer memory to retain data in the working memory once it has been processed a first time ###]
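A toy model of that rotate-and-shift reuse (my own simplification of the claim language, not Maxim's actual circuit): a row is fetched from source memory once, and the buffer is then rotated to expose each successive convolution window.

```python
from collections import deque

def rotating_windows(row, width, stride=1):
    """Emit each convolution window of `row` by rotating a buffer,
    instead of re-fetching overlapping data from main memory."""
    buf = deque(row)                 # loaded from source memory exactly once
    windows = []
    for _ in range(0, len(row) - width + 1, stride):
        windows.append(list(buf)[:width])   # the compute section sees this window
        buf.rotate(-stride)          # shift left by the stride; no re-fetch
    return windows

print(rotating_windows([1, 2, 3, 4, 5], width=3))
# -> [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

Passing stride=2 skips every other window, which corresponds to the multi-column shifts (strides) the patent goes on to describe for higher throughput.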







[0015] FIG. 11 illustrates data shifting and rotation by a cache and compute structure on a padded memory, according to various embodiments of the present disclosure.

[### Figure 11 shows the columns of data being progressively shifted (rotated) to the left, with the leftmost column reappearing in the rightmost column. They go on to describe shifting by two or more columns at a time (strides), which would be associated with greater processing speed. ###]





[0022] FIG. 20 is a flowchart of an illustrative process for using a compute structure to perform calculations according to various embodiments of the present disclosure.

In another patent, (@Slymeat take note) Maxim also talk of analog matrix multiplication:

US2020167636A1 SYSTEMS AND METHODS FOR ENERGY-EFFICIENT ANALOG MATRIX MULTIPLICATION FOR MACHINE LEARNING PROCESSES




[0012] FIG. 6 illustrates an exemplary matrix multiplication system that uses column weights according to various embodiments of the present disclosure.

[### Note the analog-to-digital converters (ADC). ###]

[0056] In embodiments, weights may be moved from the analog domain into the digital domain. FIG. 6 illustrates an exemplary matrix multiplication system that uses column weights according to various embodiments of the present disclosure. For clarity, components similar to those shown in FIG. 2 are labeled in the same manner. For purposes of brevity, a description or their function is not repeated here. Matrix multiplication system 600 comprises ADC 670 , digital multiplier 672 , column weight 674 , column adder 676 , and activation function unit 678 .



[0013] FIG. 7 illustrates a simplified system utilizing matrix multiplication according to various embodiments of the present disclosure.


A novel energy-efficient multiplication circuit using analog multipliers and adders reduces the distance data has to move and the number of times the data has to be moved when performing matrix multiplications in the analog domain. The multiplication circuit is tailored to bitwise multiply the innermost product of a rearranged matrix formula to generate a matrix multiplication result in the form of a current that is then digitized for further processing.

Analog NN is probably more energy efficient than digital NN, if not as accurate bitwise.
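One way to see that trade-off is to model the analog MAC as an exact dot product plus noise, with the summed current then digitized through an ADC (the numbers here are purely illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=256)                 # activations
w = rng.normal(size=256)                 # weights

exact = float(np.dot(w, x))              # digital MAC: bit-exact

# Analog MAC: the partial products sum "for free" as currents, but the
# summed current picks up noise before it reaches the ADC.
noise = rng.normal(scale=0.05 * np.std(w * x) * np.sqrt(x.size))
analog_current = exact + noise

# 8-bit ADC over an assumed full-scale range.
lo, hi = -4 * np.sqrt(x.size), 4 * np.sqrt(x.size)
step = (hi - lo) / 2 ** 8
digitized = lo + step * np.round((np.clip(analog_current, lo, hi) - lo) / step)

print(f"digital MAC: {exact:.4f}   analog MAC + ADC: {digitized:.4f}")
```

The analog result lands close to, but not exactly on, the digital one; the noise floor and ADC resolution bound the achievable bitwise accuracy.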

Fig 6 shows them doing the maths (MAC) in digital, so it's another Frankenstein.

Fig 7 shows analog MAC, so 5 bob each way? Overall, the specification reads on an analog configuration, so I guess they have just thrown in a hybrid version to cover their embarrassment.

If Maxim are using analog in-memory-compute, they aren't using Akida.
 

Diogenese

Top 20
Speaking of delivery parcels, I just received my copy of "Higher Intelligence" by Peter AJ Van der Made, and I've already come to the conclusion that I'm a lot stupider than I thought I was.

View attachment 10321
I've had it for a few months, but haven't dared to open it yet. I think I'll just put it on the shelf next to Dostoyevsky and Joyce.
 
Appreciated D

Figured you'd be able to break it down better.

Didn't think Akida was involved; it was more about the comparison of some of the figures, e.g. the weights, that caught my eye.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
This is a fantastic read, albeit a trifle long, but it really reinforces why we're onto an absolute winner IMO. I give it a 9 out of 10 simply because it doesn't mention the word "ubiquitous".

It refers to Honeywell's SMART EDGE SENSORS 👀, and we were only discussing Honeywell a few days ago.🧐 It says here Honeywell's "smart edge sensors monitor temperature, humidity, and CO2 levels, helping to create an intelligent building system that can automatically adjust energy and lighting use to keep costs down while optimizing for carbon neutrality and maintaining building comfort".



The promise of edge computing comes down to data

By Beth Stackpole

Jun 27, 2022


Edge computing is unlocking the real-time insights and intelligent automation that separate business leaders from the laggards.

Cloud adoption has rocketed as companies seek computing and storage resources that can be scaled up and down in response to changing business needs. But even given the cost and agility upsides to cloud, there’s rising interest in yet another deployment model — edge computing, which is computing that’s done at or near the source of the data. It can empower new use cases, especially the innovative artificial intelligence and machine learning applications that are critical to modern business success.


The promise of the edge comes down to data, according to three industrial technologists who spoke at the recent Future Compute conference hosted by MIT Technology Review. Specifically, there is a need to gather, process, and analyze data closest to where it’s being generated, whether that’s on the factory floor, in an autonomous vehicle, or in a smart building system.

The ability to run artificial intelligence models directly on data at the edge without the extra step of moving workloads to the cloud reduces latency and costs. Most important, it is the key to unlocking the real-time insights that separate the leaders from the laggards, the panelists agreed.

Companies are starting to recognize the role edge computing can play in driving successful data-driven business transformation. Gartner estimates that while only 10% of enterprise data was created and processed outside the data center and cloud in 2018, this number will be 75% by 2025.

George Small, the chief technology officer of Moog Inc., a $3 billion motion control solutions company, said he’s seen measurable progress from edge applications.

“There's real use cases. We're now seeing where value's being created,” he said. “It's actually making significant improvements in … productivity.”

Where the edge meets the cloud

As companies move ahead with data-driven business, they need to create an IT landscape that includes both edge and cloud computing. Data collected and analyzed at the edge can initiate a real-time response to troubleshoot a piece of industrial equipment to prevent machinery downtime or to redirect a self-driving car out of harm’s way.

At the same time, device data from that machine or vehicle can be sent to the cloud and aggregated with other data for more in-depth analysis that can drive smarter decision making and future business strategy.


“Connectivity has gotten to the point that it’s a baseline, which is feeding this idea of an intelligent edge,” Small said. “Intelligence starts at a sensing level at the edge and spans to a networked system of systems that ultimately gets to cloud. We look at it as a continuum.”

Applications where edge makes a difference

Moog is experimenting with edge computing for a variety of applications, Small said. In the agricultural space, the company is using edge capabilities and machine learning recognition for almond and apple farming, helping harvesting equipment autonomously navigate terrain and improve crop yields. In construction, Moog’s edge and AI-based automation efforts are focused on material movement — for example, turning a piece of an excavator into a robotic platform to enable automation, Small said.

Ongoing labor and productivity challenges drove Moog to experiment with edge-based automation in the agriculture sector, Small said.

“There are opportunities where you don’t have as much of a structured environment or people need to interact with the actual work site,” he said. “That was our introduction to this definition of edge. We came at it from the point of view of automating a vehicle.”

Another potential use case combines edge computing, 3D printing, and blockchain to orchestrate on-demand, on-location output of spare parts. Moog customers in sectors like aerospace and defense could create spare parts for critical equipment on-site, using blockchain as a means to verify the provenance and integrity of the part, Small said.

At Honeywell Building Technologies, edge computing is a key part of transforming building operations to improve quality of life, said Manish Sharma, vice president and general manager of Honeywell’s sustainable building technologies. Smart edge sensors monitor temperature, humidity, and CO2 levels, helping to create an intelligent building system that can automatically adjust energy and lighting use to keep costs down while optimizing for carbon neutrality and maintaining building comfort.

Connecting heating, cooling, and air filtering systems to edge devices creates an intelligent network that facilitates data sharing and makes smarter decisions closer to where they have the most impact.

“You’re building a system of systems and to do the right computation, you need to have a common network where data can be shared and decisions can be made at the edge level” in a matter of milliseconds, Sharma said.

Best practices for edge deployments

The panelists outlined some best practices that can help companies identify the right candidates for edge deployments while avoiding some of the more common deployment challenges.

Move computing power to where the data is. Determining whether edge or cloud is optimal for a particular workflow or use case can cause analysis paralysis. Yet the truth is the models are complementary, not competing.

“The general rule of thumb is that you’re far better moving compute to the data than vice versa,” said Robert Blumofe, executive vice president and chief technology officer at Akamai. “By doing so, you avoid back hauling, which hurts performance and is expensive.”

Consider an e-commerce application that orchestrates actions like searching a product catalog, making recommendations based on history, or tracking and updating orders.

“It makes sense to do the compute where that data is stored, in a cloud data warehouse or data lake,” Blumofe said. The edge, on the other hand, lends itself to computing on data that’s in motion — analyzing traffic flow to initiate a security action, for example.

Go heavy on experimentation. It’s still early days in edge computing, and most companies are at the beginning of the maturity curve, evaluating how and where the model can have the most impact. Yet capabilities are improving rapidly and companies can’t afford to remain on the sidelines.

“You really need to start pushing because there is value to be created,” Small said. “You have to be out there looking for new opportunities — you’re not just going to think them up, you have to find them.”

Don’t skip over ROI. Edge-enabled automation can help companies do more with less labor and free up people to do higher value-added work, noted Moog’s Small. But in addition to those obvious first-order productivity gains, there are other, harder-to-quantify benefits from automation at the edge.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
I've had it for a few months, but haven't dared to open it yet. I think I'll just put it on the shelf next to Dostoyevsky and Joyce.

Wise choice. I've popped mine here in my cabinet and I'm just going to read it a little bit at a time so that I don't do anything untoward to my basal ganglia (see page 118).

 
Might need to look a bit further into these guys at Opteran to ensure there are no patent breaches??? 😱

Absolutely nothing of concern going on here; it is all entirely software:

  • "Pioneer the third wave of AI, reverse-engineering nature’s solutions for perception, cognition and action into Opteran’s software brain that delivers true autonomy for machines (ground robots/drones/vehicles)"

They are currently hiring software engineers to create and build the cognition software, so they are very optimistic in claiming they will bring it to market in 2023. But then, they are a venture-capital-backed start-up, so they can say what they like; no rules.


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
What?!!!


Screen Shot 2022-06-28 at 3.23.02 pm.png



Screen Shot 2022-06-28 at 3.27.47 pm.png
 
  • Haha
  • Like
Reactions: 11 users
ANALYZATION - is this a real word? FF

Google claims it is - 'Analyzation: an uncommon variant of analysis.'

That being the case, why not just 'Video analysis'? I suppose, just like the product, they don't care about getting it right before they put it out there.

FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Fire
Reactions: 12 users

Diogenese

Top 20
Wise choice. I've popped mine here in my cabinet and I'm just going to read it a little bit at a time so that I don't do anything untoward to my basal ganglia (see page 118).

View attachment 10334
"basal ganglia" - I have no truck with those religions.
 
  • Haha
  • Like
Reactions: 5 users

Filobeddo

Guest
Wasn't aware of the Advanced Robotics For Manufacturing (ARM - The other ARM 😁) Hub in Queensland, and its potential for crossover with Brainchip so thought I'd post

This ARM Hub is 'a technology centre focused on robotics, artificial intelligence and design-led manufacturing with the aim of accelerating industry’s digital transformation' drawing together 'scientists, technical specialists, designers and engineers working side by side to develop commercial advanced manufacturing solutions – transforming high potential ideas into new products and services and accelerating adoption to generate high-value jobs and economic growth'

This HUB has had a few interesting AI projects - NOTE: no known Brainchip link (I think), but I would think they are right in its wheelhouse.

AI for identifying disease in crops, (AI and computer vision and advanced algorithms to identify disease)

World-first building facade inspection system. Using Artificial Intelligence (AI), this inspection system (AutoBat) captures dangers and defects invisible to the human eye.

Artificial Intelligence enabling remote monitoring for construction loads ( including application in windturbine installation & maintenance (Verton Windmaster)



 
  • Like
  • Fire
  • Love
Reactions: 25 users
Just had a look on Commsec, and it has taken until 3.39 pm to manufacture the movement of approximately 9 million shares, which is just shy of half of one percent of shares on issue. If they keep this up, Commsec and ChiX will have to lift their fees just to keep the lights on. What a pathetic effort; it just shows what a great tool for investors complete apathy to the actions of shorters and traders is. Nothing annoys them more than being ignored.

Blind Freddie claims that being completely blind to this sort of antics is one of the reasons he is so successful: he does his research and then buys to hold, and unless fundamentals change he does not concern himself with that which he cannot see.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
ANALYZATION - is this a real word? FF

Google claims it is - 'Analyzation: an uncommon variant of analysis.'

That being the case, why not just 'Video analysis'? I suppose, just like the product, they don't care about getting it right before they put it out there.

FF

AKIDA BALLISTA
He-he-he! I'm still cracking up about this! 😂:LOL::ROFLMAO:

John comes home after a fun night at a costume party only to find out that Alexa refuses to let him into his own house.


Screen Shot 2022-06-28 at 3.40.12 pm.png
 
  • Haha
  • Like
  • Fire
Reactions: 26 users

Dr E Brown

Regular
I have done a little digging around presenters at the Detroit Autosens event where Anil was a presenter. I am not sure if anyone has spoken of Xperi as a potential customer.
They claim to have developed the first neuromorphic in-cabin sensing solution in June 2021. YouTube video -
At CES 2022 they launched the DTS Autosense driver and passenger monitoring solution.
A report of this includes the lines -
  • uses proprietary AI/ML technology to ensure the quality and reliability of drowsiness or attentiveness analytics
  • understands driver state based on overall activity, rather than just on face and eyes analytics
  • is deployed using edge computing, without a need for cloud connectivity, meaning it is designed to enable all data to remain within the vehicle
This is the article link - https://aithority.com/vision/xperi-...solution-for-driver-and-occupancy-monitoring/

Am I late to the party and you all have discussed this to death previously? I just hadn't noticed it so sorry if it is a repeat.
 
  • Like
  • Love
  • Fire
Reactions: 25 users

jk6199

Regular
Just had a look on Commsec, and it has taken until 3.39 pm to manufacture the movement of approximately 9 million shares, which is just shy of half of one percent of shares on issue. If they keep this up, Commsec and ChiX will have to lift their fees just to keep the lights on. What a pathetic effort; it just shows what a great tool for investors complete apathy to the actions of shorters and traders is. Nothing annoys them more than being ignored.

Blind Freddie claims that being completely blind to this sort of antics is one of the reasons he is so successful: he does his research and then buys to hold, and unless fundamentals change he does not concern himself with that which he cannot see.

My opinion only DYOR
FF

AKIDA BALLISTA
Did you outbid me for that one share sold at 3.59 pm???
 
  • Haha
  • Like
  • Love
Reactions: 10 users
An oldie but a goodie.....

I remember this; it was back in the Aziana days!! Seems like a long time ago, but how far things have come since then!
 
  • Like
  • Fire
  • Love
Reactions: 10 users