Redefining AI Governance: The MAISA Path to Compliant Digital Teams
In this exclusive interview for Eqvista, Manuel Romero, Co-Founder and Chief Science Officer of MAISA, shares his insights on building trustworthy AI for enterprise automation. A leading Hugging Face contributor with over 700 open-source models garnering 15 million monthly downloads, Romero draws from his extensive AI expertise to discuss MAISA’s innovative KPU framework.
Founded in 2024 alongside CEO David Villalón, MAISA raised $25 million to pioneer agentic process automation, enabling “Digital Workers” that overcome LLM limitations like hallucinations through auditable, step-by-step execution. Manuel explores MAISA’s mission to deliver reliable AI for regulated industries such as banking and energy, emphasizing governance, compliance, and continuous learning via feedback loops.
This interview highlights how MAISA redefines enterprise AI by making it transparent and scalable, themes that speak directly to Eqvista’s audience focused on startup innovation and equity management.

Manuel, you’ve built over 700 open-source models with 15 million monthly downloads. How did that experience shape your understanding of what enterprises actually need from AI?
My experience working with AI at the foundational level — from pre-training and fine-tuning to retrieval-augmented generation (RAG) — helped me realize that the very architecture powering LLMs, namely Transformers, comes with inherent limitations. These limitations make it difficult for both users and enterprises to simply adopt them and “trust” them out of the box.
Some of the most critical challenges include hallucinations, lack of explainability, limited context windows, and the inability to stay up-to-date. At Maisa, we address these issues through our framework, which was specifically designed to overcome these architectural constraints.
Maisa positions itself as a pioneer in agentic automation, empowering enterprises to build trustworthy Digital Workers that go beyond traditional RPA. Could you walk us through the company’s core mission and how Maisa is redefining enterprise automation across industries?
Our mission — something we outline in our manifesto — is to ensure that no matter how advanced AI becomes, and no matter how fast it operates (soon reaching millions of tokens per second), the user should always have a reliable trace of the reasoning and actions that led to a final solution.
If we don’t safeguard this, and simply “trust” the outputs blindly, two problems emerge:
- AI can be wrong.
- We won’t learn. Without visibility into the reasoning process, we cannot understand or trust it.
From this conviction, at Maisa we designed a framework called KPU, which redefines how AI is perceived and used. With KPU, AI is no longer a black box — instead, it can be audited, given feedback, and continuously improved.
Can you share insights into how Maisa’s AI agents learn and improve over time while ensuring auditability and compliance?
First of all, I wouldn’t call what we build at Maisa “agents” in the conventional sense. For me, an agent is essentially an LLM given a role, some tools, and an objective — and then you flip a coin hoping it lands the right way.
At Maisa, our framework is different. It acts as a reasoning + execution engine. Instead of making one big leap, it takes small, incremental steps — each one backed by code, auditable, and deterministic. This means every action can be traced, verified, and trusted.
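The incremental, code-backed execution Romero describes can be illustrated with a minimal sketch. All names here (`AuditedRun`, `run_step`) are hypothetical for illustration only; Maisa’s KPU is proprietary and this is not its API. The idea is simply that each step is ordinary deterministic code whose inputs and outputs are logged to a tamper-evident trace, rather than one opaque model call:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class AuditedRun:
    """Runs a workflow as small, logged steps instead of one opaque LLM call."""
    trace: list = field(default_factory=list)

    def run_step(self, name, func, **inputs):
        output = func(**inputs)  # deterministic code, not a free-form generation
        record = {"step": name, "inputs": inputs, "output": output}
        # Hash each record so the trace is tamper-evident for auditors.
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True, default=str).encode()
        ).hexdigest()
        self.trace.append(record)
        return output

run = AuditedRun()
subtotal = run.run_step("sum_invoices", lambda amounts: sum(amounts),
                        amounts=[120.0, 80.0])
total = run.run_step("apply_vat", lambda value, rate: round(value * (1 + rate), 2),
                     value=subtotal, rate=0.21)
print(total)           # 242.0
print(len(run.trace))  # 2
```

Because every record is hashed, an auditor can later verify that a stored trace matches what actually executed, step by step.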
When it comes to learning and improving over time, the system is designed around feedback and recovery:
- If a Digital Worker executes a step incorrectly, the user can provide feedback either at the step level or for the entire execution. That feedback is stored and reused in future runs, guiding the worker toward the correct path.
- Additionally, our engine includes a process recovery mode. This allows it to retrieve similar successful executions from the past whenever needed. Over time, this builds a dynamic knowledge silo — a repository of proven solutions — that continuously updates and remains available for the execution engine to leverage.
This combination ensures that Maisa’s Digital Workers don’t just execute tasks reliably but also learn from mistakes, accumulate organizational know-how, and remain fully auditable and compliant.
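The feedback-and-recovery loop described above can be sketched in a few lines. This is a toy stand-in, not Maisa’s implementation: the class name, the step-level feedback store, and the use of simple string similarity to find “similar successful executions” are all illustrative assumptions.

```python
from difflib import SequenceMatcher

class FeedbackStore:
    """Toy repository of user corrections and past successful runs,
    standing in for the feedback / process-recovery ideas in the interview."""
    def __init__(self):
        self.step_feedback = {}    # step name -> latest user correction
        self.successful_runs = []  # (task description, ordered step plan)

    def record_feedback(self, step, correction):
        self.step_feedback[step] = correction

    def record_success(self, task, steps):
        self.successful_runs.append((task, steps))

    def recover(self, task, threshold=0.6):
        """Return the step plan of the most similar past successful run, if any."""
        best, best_score = None, threshold
        for past_task, steps in self.successful_runs:
            score = SequenceMatcher(None, task.lower(), past_task.lower()).ratio()
            if score >= best_score:
                best, best_score = steps, score
        return best

store = FeedbackStore()
store.record_feedback("apply_vat", "use the 10% reduced rate for utilities")
store.record_success("reconcile March invoices", ["load", "sum", "apply_vat", "post"])
plan = store.recover("reconcile April invoices")
print(plan)  # ['load', 'sum', 'apply_vat', 'post']
```

A production system would use semantic retrieval rather than character-level similarity, but the shape is the same: corrections are stored per step and reused, and new runs can bootstrap from the closest previously proven plan.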
What industries and use cases have shown the most immediate benefit from Maisa’s platform?
We see the most immediate benefits in highly regulated industries — especially banking, energy, and telecommunications. Banking makes me especially proud, because deploying AI in a sector that handles such sensitive, mission-critical information, and earning the trust of clients there, is a major milestone.
It fully validates both our vision and our framework, showing that trustworthy, auditable AI can bring real value where reliability and compliance matter most.
There are numerous AI companies claiming to solve enterprise reliability issues. What makes Maisa’s approach fundamentally different from what we’re seeing with other enterprise AI solutions?
What truly sets Maisa apart is our framework, the KPU. Many companies take a conventional agent-style approach that can sometimes work, but it isn’t traceable, auditable, or reliable. Most importantly, it places too much responsibility on the LLM itself, with all the risks that entails.
Our engine, by contrast, uses AI in a highly controlled way. It’s specifically designed to overcome the inherent weaknesses of LLMs — things like hallucinations, lack of transparency, and non-deterministic outputs. By doing so, we provide enterprises with a system that is not only powerful but also trustworthy, auditable, and compliant by design.
That fundamental difference — putting control, auditability, and feedback loops at the heart of AI automation — is what gives us a significant advantage over other enterprise AI solutions.
What is your approach to onboarding enterprises new to AI automation, particularly those wary due to prior AI failures or “black-box” experiences?
When onboarding enterprises that are new to AI automation — especially those that have experienced failures or “black-box” frustrations in the past — our first step is always education and transparency. We clearly explain why Maisa’s approach is different, and how our framework works in a way that ensures traceability, auditability, and control from day one.
But beyond just explanation, we strongly believe in show, don’t tell. After understanding the principles, enterprises can immediately experience the framework in action — seeing for themselves how every step is deterministic, auditable, and open to feedback. This combination of clarity and hands-on experience quickly builds trust and confidence, even for the most cautious adopters.
Maisa has been recognized by Gartner alongside tech giants like Google and Amazon. What does this recognition mean for your team and vision?
For us, this recognition is a very important endorsement. Being mentioned by Gartner alongside companies like Google and Amazon is both an honor and a validation of our vision. It’s encouraging, but more than anything, it fuels our determination to keep pushing forward — staying true to our mission and executing on our ambitious roadmap.
What are the biggest challenges you see for enterprise AI adoption going forward, and how is Maisa positioned to address them?
I see three major challenges for enterprise AI adoption going forward:
- Governance and Compliance – Enterprises need AI systems that are fully auditable, explainable, and compliant with strict regulations. Without governance baked in, adoption in sensitive industries becomes impossible.
- Privacy and Security – Many companies deal with highly sensitive data. They can’t afford solutions that expose data to “black-box” models. Ensuring privacy-preserving AI and secure execution is critical.
- Tool and Integration Gaps – Enterprises operate across a wide array of services, APIs, and legacy systems. Most AI solutions today are not built to integrate seamlessly into that ecosystem, creating friction and slowing adoption.
Looking ahead, where do you see the industrial AI market heading? What should enterprise leaders be preparing for in terms of AI integration over the next 2-3 years?
I believe AI will become increasingly present across industries, much like other transformative technologies before it. In the next 2–3 years, we’ll see not just isolated Digital Workers in production, but entire teams of Digital Workers collaborating to execute complex workflows.
At first, these teams may be supervised by a human acting as an “admin” or “lead,” but over time, even those supervisory roles could be handled by AI. The critical point, however, is that this evolution must happen in a way that is fully transparent, auditable, and trustworthy. That’s where Maisa plays a vital role — ensuring enterprises can adopt this future with confidence, without sacrificing governance or control.
Final question – given your unique position bridging open-source AI development and enterprise deployment, what advice would you give to industrial leaders who are still hesitant about AI adoption?
My advice to industrial leaders who are still hesitant about AI adoption is simple: don’t let past “black-box” or otherwise failed experiences define your future with AI. The technology has evolved, but what matters is choosing solutions that are transparent, auditable, and adaptable to your organization’s needs.
Start small, in areas where reliability and compliance are non-negotiable. That’s where you’ll see the most immediate value and build trust internally. From there, scale gradually — with Digital Workers that can be supervised, audited, and improved step by step.
The key is not just adopting AI, but adopting it in a way that your teams can trust, understand, and continuously learn from. If you achieve that, AI stops being a risky experiment and becomes a sustainable competitive advantage.
