Hi alby,
There isn't much technical detail available other than Wiki's discussion on Tsetlin machines.
https://en.wikipedia.org/wiki/Tsetlin_machine
View attachment 57363
A Tsetlin machine is a form of learning automaton collective for learning patterns using propositional logic. Ole-Christoffer Granmo created the method[1] and named it after Michael Lvovitch Tsetlin, who invented the Tsetlin automaton[2] and worked on Tsetlin automata collectives and games.[3] Collectives of Tsetlin automata were originally constructed, implemented, and studied theoretically by Vadim Stefanuk in 1962.
The Tsetlin machine uses computationally simpler and more efficient primitives compared to more conventional artificial neural networks.[4]
As of April 2018 it has shown promising results on a number of test sets.[5][6]
...
The Tsetlin automaton is the fundamental learning unit of the Tsetlin machine. It tackles the multi-armed bandit problem, learning the optimal action in an environment from penalties and rewards. Computationally, it can be seen as a finite-state machine (FSM) that changes its states based on the inputs. The FSM will generate its outputs based on the current states.
... which is above my pay grade.
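For anyone who does want to poke at it, here is a minimal Python sketch of a single two-action Tsetlin automaton — a toy of my own, not anything from Literal Labs. It tracks confidence through its state and flips action when penalties push it across the boundary, which is the bandit-style learning the Wikipedia excerpt describes:

```python
import random

class TsetlinAutomaton:
    """Toy two-action Tsetlin automaton with 2n states."""

    def __init__(self, n):
        self.n = n                               # states per action (2n total)
        self.state = random.choice([n, n + 1])   # start near the boundary

    def action(self):
        # States 1..n vote for action 0, states n+1..2n for action 1.
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reward: move deeper into the current action's half (more confident).
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Penalty: move toward (and eventually across) the boundary.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1

# Toy environment: action 1 is rewarded 90% of the time, action 0 only 20%.
ta = TsetlinAutomaton(n=10)
for _ in range(1000):
    p_reward = 0.9 if ta.action() == 1 else 0.2
    ta.reward() if random.random() < p_reward else ta.penalize()
print("learned action:", ta.action())  # almost always 1
```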
There are no published patents from Literal Labs. The Literal Labs people did publish a paper in Norway in 2022, and the abstract is public:
Resultat #2068852 - A Convolutional Tsetlin Machine-based Field Programmable Gate Array Accelerator for Image Classification - Cristin
Scientific chapter / article / conference paper
2022
A Convolutional Tsetlin Machine-based Field Programmable Gate Array Accelerator for Image Classification
- Svein Anders Tunheim
- Jiao Lei
- Rishad Ahmed Shafik
- Alexandre Yakovlev and
- Ole-Christoffer Granmo
ABSTRACT
This paper presents a Field Programmable Gate Array (FPGA) implementation of an image classification accelerator based on the Convolutional Tsetlin Machine (CTM). The work is a concept design, and the solution demonstrates recognition of two classes in 4 × 4 images with a 2 × 2 convolution window. More specifically, there are two sub-Tsetlin Machines (TMs), one per class. A single sub-TM employs 40 clauses, each controlled by 20 Tsetlin Automata. The accelerator features random patch selection, in parallel for all clauses, based on reservoir sampling. The design is implemented in a Xilinx Zynq XC7Z020 FPGA. With an operating clock speed of 30 MHz, the accelerator is capable of inferring at the rate of 3.3 million images per second with an additional power consumption of 20 mW from idle mode. The average test accuracy is 96.7% when trained on data with 10% noise. A training session with 100 epochs and 8192 examples takes 1.5 seconds. Due to the limited hardware resources required, the CTM accelerator represents a promising concept for online learning in energy-frugal systems. The solution can be scaled to multi-class systems and larger images.
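To make the abstract a bit more concrete, here is a rough Python sketch of two of the ingredients it mentions: a conjunctive clause evaluated over a 2 × 2 patch, and reservoir sampling to pick a patch position uniformly in a single pass. The function names and the literal layout are my own guesses for illustration, not the paper's actual FPGA design:

```python
import random

def patch_literals(img, r, c):
    # 2x2 patch flattened, plus negated copies: 8 literals per patch.
    bits = [img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1]]
    return bits + [1 - b for b in bits]

def clause_output(literals, include):
    # A clause is an AND over the literals its Tsetlin automata chose to include.
    return all(lit == 1 for lit, inc in zip(literals, include) if inc)

def sample_patch(img):
    # Reservoir sampling (k = 1): keep the i-th candidate with probability 1/i,
    # yielding a uniformly random patch position without storing them all.
    chosen, i = None, 0
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            i += 1
            if random.randrange(i) == 0:
                chosen = (r, c)
    return chosen

img = [[0, 1, 0, 1],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [1, 0, 1, 0]]                     # a 4x4 binary image, as in the paper
r, c = sample_patch(img)
include = [random.random() < 0.3 for _ in range(8)]  # stand-in for TA decisions
print(clause_output(patch_literals(img, r, c), include))
```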
From their website:
Literal Labs - Cambridge Future Tech (camfuturetech.com)
A new world of AI-enabled devices that are 10,000 times more energy efficient.
Artificial Intelligence Redefined
Similar to NNs, where customers train a model on a dataset, Literal Labs trains Tsetlin machine models specific to customer datasets. This approach results in an optimised model that is then deployed onto the target hardware.
The Tsetlin machine model can be deployed as software only or can be accelerated using Literal Labs accelerators. Literal Labs' benchmarking shows it can achieve 250X faster inferencing than XGBoost using software only, and up to 1,000X faster inferencing with up to 10,000X less energy consumption when using hardware acceleration.
Value Proposition
One of the major challenges with traditional neural network-based models is their resource-intensive nature. As models become more complex, so too does the resource requirement. Literal Labs’ architecture, based on propositional logic, requires significantly fewer resources to solve AI problems, meaning that Literal Labs can deliver intelligent compute on devices with minimal energy usage and with little or no internet coverage.
The company was spun out of Newcastle University by world leaders in Tsetlin machine research, Dr. Alex Yakovlev and Dr. Rishad Shafik, and is led by former Arm CPU division VP and semiconductor startup founder Noel Hurley.
There is a video which shows that the processor is involved in the calculations:
View attachment 57364
Routing the calculations through the host processor would add latency, so it is probably slower than Akida.
No wonder Jem was a bit cryptic on the podcast - buyer's remorse. I think he's Tsetlin for second best.