Powering the Next Wave of AI: Noel Hurley’s Logic-Driven Approach at Literal Labs
Noel Hurley is the CEO of Literal Labs, recognized for combining the relentless focus on power efficiency and industry partnerships that marked his leadership at Arm with a bold vision for logic-based artificial intelligence. Drawing on deep experience in the semiconductor industry, Noel is leading Literal Labs to pioneer AI models that are not only faster, smaller, and significantly more energy-efficient than traditional neural networks, but also well suited to real-world edge environments and resource-constrained sectors.
In this interview with Eqvista, Noel elaborates on the transition from academic roots to scalable industrial impact, emphasizing Literal Labs' commitment to building explainable, low-power AI solutions, particularly those leveraging Tsetlin machines and other logic-based architectures. His pragmatic approach centers on market needs, ensuring the technology delivers tangible benefits and usability for customers across industries such as manufacturing, logistics, and utilities. Noel also stresses the importance of democratizing AI through user-friendly training tools that enable non-experts to harness efficient models for their own applications.

Noel, your background at Arm was characterized by a relentless focus on power efficiency and industry partnerships. How are those experiences shaping Literal Labs’ strategy today?
Achieving more with less is a constant challenge across businesses large and small, and it underpins a great deal of innovation in semiconductors. Literal Labs’ mission is no different; we are building AI models with logic-based techniques that are smaller, faster, cheaper and more energy efficient than traditional neural networks. Particularly in edge environments, there is a pressing need to run systems with low compute and energy requirements, which presents a huge challenge for traditional AI models.
The transition from academic innovation to a scalable industrial product is notoriously difficult. What have been the critical challenges in industrializing Literal Labs’ technology, and how are you overcoming them?
The transition from research to real-world is a challenge that many businesses struggle with, and particularly AI businesses, where the focus is too often on the technology itself, rather than how it can be implemented. When we spun Literal Labs out of our university incarnation, we did so with very clear go-to-market and end applications in mind; logic-based AI training tools for resource-constrained environments at the edge. That has helped us stay focused on achieving impact for our customers and partners.
Literal Labs is positioning logic-based AI as an alternative to neural networks. What major capability gaps are you addressing, and where do you see the most urgent demand for your approach?
We know there is a huge opportunity for AI to improve projections in industrial processes and close control loops in the physical world. Edge AI, embedded in industries like manufacturing, logistics and utilities, can turn raw sensor data into local decisions under tight latency, bandwidth and safety constraints. None of this is possible with network-heavy and power-hungry neural networks.
For our audience, can you explain what Tsetlin machines are and how they differ from traditional neural networks? What makes this approach fundamentally different from the current AI paradigm?
The development of neural networks has been a significant step forward in ‘artificial intelligence’; by simulating the behaviour of the human neuron, the field’s capabilities have leapt forward. However, this approach is very computationally heavy: it relies on vast amounts of multiplication, which is expensive in silicon and results in large circuits that burn power.
We are taking the learnings from neural networks and applying them to logic-based approaches to minimise computational load and create AI models that are faster, lower power and explainable. We started with Tsetlin machines, which now make up one part of our architecture. Through a calculated and benchmarked blend of techniques, each of our models is fine-tuned to maximise speed, efficiency, explainability, and accuracy, while stripping away computational waste.
Tsetlin machines are based on the principles drawn up by the visionary mathematician Mikhail Tsetlin. Instead of mimicking biological neurons, Tsetlin’s approach was rooted in learning automata and game theory. He recognised that logic could classify data more efficiently, forging a new direction in AI. This powerful yet elegantly simple machine learning architecture enables our models to deliver state-of-the-art speed and ultra-low power consumption, all while being naturally interpretable. If you’re interested in reading more, we have an explainer on our website.
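For readers who want a concrete picture of what “logic-based” means here, the sketch below is a minimal, hypothetical illustration of Tsetlin-machine-style inference, not Literal Labs’ actual architecture or code. Each clause is a conjunction (AND) of boolean literals, clauses vote for or against a class, and the prediction is the sign of the vote count. Notice that there are no multiplications anywhere, only AND, NOT and integer addition, which is the efficiency property Noel describes. The clause names and XOR example are invented purely for illustration; in a real Tsetlin machine, the clauses are learned by teams of Tsetlin automata rather than written by hand.

```python
# Illustrative sketch only: Tsetlin-machine-style inference with hand-written clauses.
# Inference uses only boolean logic and integer addition -- no multiplication.

from typing import List, Tuple

# A literal is (feature_index, negated?). A clause pairs a polarity
# (+1 votes for the class, -1 votes against it) with a list of literals.
Literal = Tuple[int, bool]
Clause = Tuple[int, List[Literal]]


def clause_output(literals: List[Literal], x: List[bool]) -> bool:
    """A clause fires only if every one of its literals is satisfied by the input."""
    return all((not x[i]) if negated else x[i] for i, negated in literals)


def classify(clauses: List[Clause], x: List[bool]) -> int:
    """Sum the votes of all firing clauses; the sign of the total gives the class."""
    votes = sum(polarity for polarity, literals in clauses if clause_output(literals, x))
    return 1 if votes > 0 else 0


if __name__ == "__main__":
    # Toy, hand-written clauses that compute XOR of two boolean features.
    # A real Tsetlin machine would learn which literals to include in each clause.
    xor_clauses: List[Clause] = [
        (+1, [(0, False), (1, True)]),   # x0 AND NOT x1      -> vote for class 1
        (+1, [(0, True), (1, False)]),   # NOT x0 AND x1      -> vote for class 1
        (-1, [(0, False), (1, False)]),  # x0 AND x1          -> vote against
        (-1, [(0, True), (1, True)]),    # NOT x0 AND NOT x1  -> vote against
    ]
    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", classify(xor_clauses, [a, b]))
```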
How do you plan to educate the market about Tsetlin machines given most people are unfamiliar with the technology?
Educating the market and creating an ecosystem of developers is going to be important, but at the end of the day, most customers care more about the beneficial results our approaches offer, i.e. how they can get fast, efficient models built on their own data for their own application. That’s why we’re focusing our engineering not only on the core architectures, but also on creating a set of training tools that will allow our customers to create and train specific, targeted, efficient models. We use a large amount of automation within the tools so that users do not need to be AI or data-science experts.
With AI adoption accelerating and sustainability demands rising, where do you see Literal Labs contributing to the broader AI ecosystem in three to five years?
We are still in the early days of AI, and I believe we are going to see many more technologies, strategies and techniques coming to market. Environmental and cost concerns will accelerate the demand for innovation and refinement. We are at the heart of that demand, bringing computational efficiency and explainable AI technology to bear.
In light of AI’s impact on the workplace, what opportunities do you foresee for democratizing AI development and deployment using your technology?
AI has the potential to substantially impact the workplace, either through increases in productivity or cuts to cost. At Literal Labs, we are firmly focused on AI that can deliver the former, targeting industries and applications that are underserved by existing AI innovations.
Literal Labs is an IP-heavy company with a focus on logic-based AI. What does your innovation pipeline look like, and how do you ensure ongoing differentiation in a rapidly evolving field?
We are, and will remain, an IP- and science-heavy business. We have a deep vision of developing AI that is good for all and treads lightly on the environment. We want to be the number one company when it comes to compute- and energy-efficient AI. To do that, we’re building a team of AI algorithm and training-tooling experts who are constantly searching for excellence and elegance in design. This means we’re really thinking about the entire solution, always trying to put ourselves in the customer’s position and truly understand everything from their perspective.
Literal Labs recently secured a significant pre-seed funding round of £4.6M. What are your specific plans for this capital, and what milestones are you targeting?
We’re currently in the process of delivering our product toolkits on several projects across manufacturing, logistics and utilities. The capital is helping us build out our team, and get our product to market later this year.
What advice would you give to other executives considering the transition from established tech companies to AI startups?
I was very lucky to join Arm when it was still in the late stages of its startup period and then to be part of its growth. In doing so, I got to experience what good looks like, as well as the effects of pitfalls and mistakes. But I would say to anyone considering a move from an established company to an early-stage startup that you need to be comfortable with ambiguity, with making decisions based on a judgement call when the facts are unclear, and with accepting you were wrong when new information comes to light. It is part of creating an agile company.
