Machine Learning At Breakneck Speed
What We Do
We are a research lab that has developed new algorithmic, mathematical, and architectural foundations for deep neural networks.
Mathematical innovations
Newly discovered mathematical properties underpin the algorithmic foundations.
New types of neural networks
We have developed several new types of neural networks, including a Monte Carlo Tree Search embedded as a neural network and an AI architecture for setting goals at multiple levels of abstraction.
Algorithmic innovations
Innovations in the algorithmic foundations for function approximators, classifiers, auto-encoders, and CNNs enable much more efficient from-scratch neural network training and, in some cases, more efficient inference.
Architectural innovations
Novel architectures for LLMs unlock new capabilities such as Turing completeness, accurate context approximation, and more efficient training.
We have developed parametric refactoring methods to reduce the number of trainable parameters. This increases training speed whilst reducing memory requirements during inference.
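The details of these refactoring methods are not given here, but the general idea of trading trainable parameters for structure can be illustrated with a standard low-rank factorisation, sketched below in PyTorch. The class name LowRankLinear and the choice of rank 32 are illustrative assumptions, not our actual implementation.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """A dense layer refactored into two low-rank factors.

    A dense m x n weight matrix has m * n trainable parameters;
    factoring it as B @ A with rank r reduces that to r * (m + n),
    a large saving whenever r << min(m, n).
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)  # factor A
        self.up = nn.Linear(rank, out_features, bias=True)    # factor B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

# A 1024 -> 1024 dense layer has ~1.05M parameters;
# at rank 32 the factored version has ~66K.
layer = LowRankLinear(1024, 1024, rank=32)
print(sum(p.numel() for p in layer.parameters()))
```

Fewer trainable parameters shrink both the optimiser state during training and the weight footprint at inference time, which is where the speed and memory gains come from.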
Results
Classifiers
Our product Antares significantly reduces training time and cost for classifiers: it is several times, and in some cases a few hundred times, faster than other state-of-the-art methods, with comparable or better accuracy.
CNNs
CNN support is still a work in progress, but early results look promising.
LLMs
Antares can train LLMs up to ten times faster, and adds capabilities that enable richer modelling with more compact models, resulting in a lower final validation loss.
Function Approximators
Antares can train function approximators to comparable or better accuracy with an average speedup of 25x. This improved accuracy can result in more reliable prediction and forecasting in areas such as risk management, fraud detection, and financial market analysis.
Auto-Encoders
Antares can train deep auto-encoder networks several times faster than other state-of-the-art gradient descent methods. Combined with our other capabilities, this allows us to train large neural networks such as CNNs efficiently from scratch.
Tree Search and Goal Setting
Our approach incorporates Monte Carlo Tree Search into neural networks, embedding the exploration and optimization phases of the search into the layers of the network, so that the search is fully learnable from training data. This approach beats the state-of-the-art implementation AlphaZero at playing Go.
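The embedding itself is not spelled out here. As a rough intuition for how a discrete search can live inside learnable layers, the sketch below relaxes the hard argmax of the classic MCTS selection step into a softmax, so the selection weights become differentiable. The UCT-style scoring and the function soft_select are illustrative assumptions, not our actual architecture.

```python
import torch
import torch.nn.functional as F

def soft_select(child_values: torch.Tensor,
                visit_counts: torch.Tensor,
                c_puct: float = 1.0,
                temperature: float = 1.0) -> torch.Tensor:
    """Differentiable relaxation of the MCTS selection step.

    Classic MCTS picks the child with the highest UCT score via a
    hard argmax; swapping the argmax for a softmax yields selection
    weights that gradients can flow through, letting the search
    phases sit inside trainable network layers.
    """
    total_visits = visit_counts.sum()
    # UCT-style score: exploitation term plus exploration bonus.
    scores = child_values + c_puct * torch.sqrt(total_visits + 1.0) / (visit_counts + 1.0)
    return F.softmax(scores / temperature, dim=-1)

values = torch.tensor([0.2, 0.5, 0.1], requires_grad=True)
counts = torch.tensor([3.0, 1.0, 5.0])
weights = soft_select(values, counts)
(weights * values).sum().backward()  # gradients reach the value estimates
print(values.grad)
```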
Our architecture for goal setting allows goals at multiple levels of abstraction, which can be useful for alignment and ethical AI.
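As a minimal sketch of what goals at multiple levels of abstraction can look like, the hypothetical GoalHierarchy below stacks a high-level module that proposes an abstract sub-goal on top of a low-level module that conditions its action on that goal. Module names and sizes are assumptions for illustration only, not our actual design.

```python
import torch
import torch.nn as nn

class GoalHierarchy(nn.Module):
    """Two-level goal setter: a high-level module proposes an abstract
    sub-goal; a low-level module conditions its action on that goal.
    Names and sizes here are illustrative assumptions only.
    """

    def __init__(self, obs_dim: int, goal_dim: int, act_dim: int):
        super().__init__()
        self.high_level = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, goal_dim))
        self.low_level = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        goal = self.high_level(obs)                        # abstract sub-goal
        return self.low_level(torch.cat([obs, goal], -1))  # goal-conditioned action

policy = GoalHierarchy(obs_dim=8, goal_dim=4, act_dim=2)
action = policy(torch.randn(1, 8))
```

Because the abstract goal is an explicit, inspectable quantity, constraints can in principle be imposed at the goal level rather than on raw actions, which is one reason such hierarchies are of interest for alignment.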