From Klarna to AI Dominance: Daniel Espejo on the Rise of Omnia
As AI assistants rapidly become the new gateway to information, search, and discovery, entire industries are being reshaped once again. Standing at this inflection point is Daniel Espejo, Founder and CEO of Omnia, a company pioneering the emerging field of AI engine optimization—helping brands understand, measure, and improve how they appear inside AI-driven answers.
After seven years leading product innovation at Klarna and other fintech leaders, Daniel recognized a new shift underway: users moving from typed searches to conversational prompts, and from pages of results to singular, trusted recommendations. In that transition, brand visibility is being rewritten. With Omnia, he’s building the tools that allow companies to compete—and win—in an AI-first world.
In this interview with Eqvista, Daniel discusses what the next evolution of “search” really looks like, how Omnia’s platform bridges the gap between traditional SEO and AI intent optimization, and what brands can do now to remain visible as intelligent agents become the default interface to the internet.

Daniel, before launching Omnia, you held pivotal roles at Klarna and other leading technology companies, driving product innovation and growth in the fintech sector. What inspired the transition, and how have your experiences scaling tech businesses influenced your vision for Omnia so far?
I spent seven years at Klarna helping to build and scale products used by millions of people, especially around payments and cards. That gave me a very clear view of how quickly distribution can change. First we had the shift to mobile, then open banking, then the normalization of BNPL. Every time the user entry point changed, entire categories were reshaped.
Over the last couple of years I started to see another shift: people beginning to ask AI assistants instead of typing into search boxes. When I realized how invisible most brands were inside tools like ChatGPT or Perplexity, it felt similar to watching the early days of SEO from the outside. That was the trigger for Omnia.
How do you see consumer discovery and research evolving as AI assistants and search agents become the primary interface for information retrieval?
Discovery is shifting from clicking through a list of links to having a conversation with a single assistant that does the work for you. Instead of “best tools for X” followed by 10 open tabs, people will increasingly say “find the best option for me, explain the trade-offs, and help me decide.”
Research journeys will look more like multi-step chats than isolated queries. Assistants will remember context, preferences, constraints and previous decisions. That means brands are no longer just competing for clicks; they are competing to be the example, the vendor or the step that appears inside an assistant’s final plan.
At the same time, assistants sit on top of many channels, from websites and docs to reviews and communities. Being strong on a single channel will not be enough. Brands will need to be consistently legible across this whole ecosystem so that AI systems can confidently understand what they do, who they serve and when they are a good fit.
How does Omnia’s AI engine optimization differ from traditional SEO and digital PR methods? Could you describe some core technical innovations behind your monitoring and insights engine?
Traditional SEO and digital PR revolve around keywords, backlinks and authority in web search. AI engines work differently. They read many sources, compress knowledge into model parameters and respond based on intent and context, not just exact keyword matches.
Omnia is built specifically for that world. We optimize for prompts and intents instead of just keywords. We track how often a brand is mentioned, recommended or excluded across realistic AI queries in a category, and we do this across multiple engines rather than just one search index.
Under the hood there are a few key pieces.
First, our Trends engine estimates what people are actually asking AI about in your category. No one except OpenAI, Google or Perplexity sees full query logs, so we combine several data sets: real prompts collected through a Chrome extension, SEO data, and social listening across different platforms. From there we build topics and estimate volume and difficulty for each. Difficulty is relative to your brand and your direct competitors in that topic, so it guides effort rather than acting as an absolute score. It is a helpful signal, not the thing to optimize at all costs. This allows you to choose which prompts and topics to optimize for based on demand and opportunity, not guesswork, and we pay special attention to long-tail, natural-language prompts that reflect how people actually talk to AI assistants.
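As a rough illustration of how demand and relative difficulty might be combined to rank topics, here is a minimal Python sketch. The `Topic` fields and the `volume × (1 − difficulty)` scoring rule are my assumptions for illustration, not Omnia’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    est_volume: float   # estimated demand for prompts in this topic
    difficulty: float   # 0..1, relative to this brand vs. its competitors

def prioritize(topics: list[Topic], top_n: int = 3) -> list[Topic]:
    """Rank topics by expected opportunity: demand discounted by how
    hard the topic is for this specific brand to win."""
    scored = sorted(topics,
                    key=lambda t: t.est_volume * (1.0 - t.difficulty),
                    reverse=True)
    return scored[:top_n]

topics = [
    Topic("best b2b payment api europe", 4200, 0.8),
    Topic("migrate billing provider checklist", 900, 0.3),
    Topic("payment provider for marketplaces", 2600, 0.5),
]
# Highest expected opportunity first
print([t.name for t in prioritize(topics)])
```

The point of scoring relative difficulty rather than an absolute one is visible here: a high-volume topic can still rank below a smaller one if it is much harder for this particular brand to win.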
Second, our Monitoring layer runs those high-value prompts against the AI surfaces that matter most today. At the moment we focus on ChatGPT and Google AI Overviews, and optionally Perplexity, because those three cover the vast majority of practical volume and impact. We monitor at least daily and, because LLMs are non-deterministic, we increase sampling until the data converges and we can see the statistical truth of who really shows up for a given intent. After about one to two weeks you get a stable baseline. In Trends you see aggregated topic volume. Once you start monitoring, you see performance broken down at prompt level, including which engine produced each answer and which sources it cited. We also separate branded from non-branded prompts so that you can see the real competitive fight rather than inflating your metrics with your own name.
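The sampling-until-convergence idea can be sketched in a few lines of Python. This is a hypothetical illustration, assuming a `run_prompt` callable that queries an engine once and reports whether the brand was mentioned; the 95% normal-approximation interval and the thresholds are my choices, not Omnia’s:

```python
import math

def mention_rate_with_ci(samples: list[bool]) -> tuple[float, float]:
    """Point estimate and half-width of a 95% normal-approximation CI
    for the probability that the brand appears in an answer."""
    n = len(samples)
    p = sum(samples) / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, half

def sample_until_stable(run_prompt, batch: int = 10, max_runs: int = 200,
                        tolerance: float = 0.10) -> float:
    """Re-run the same prompt until the CI half-width falls below
    `tolerance`, i.e. the observed mention rate has converged."""
    samples: list[bool] = []
    while len(samples) < max_runs:
        samples.extend(run_prompt() for _ in range(batch))
        p, half = mention_rate_with_ci(samples)
        if half < tolerance:
            break
    return p
```

The design point is that a single run of a non-deterministic engine tells you almost nothing; only the converged rate across many runs is a stable measure of who shows up for an intent.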
Third, our insights engine is built around citations, not just visibility scores. We compute share of voice as a clear scoreboard of how often you appear, but we treat citations as the real lever, because AI engines tend to reuse brands that appear frequently and prominently on the pages they cite. For each monitored prompt we inspect the answer, read its citations and map the pages and domains that are doing the heavy lifting. From there we generate concrete actions: appear in top citations through PR, partnerships or listings, find similar publications that follow the same content patterns and publish there, and structure your own site content in the same way that top-cited pages are structured so it can become a trusted source itself.
Finally, all recommendations are grounded in that evidence rather than being generic AI copy. We analyze real answers and their citations over days, look for repeatable patterns, and then propose specific changes and outreach moves tied back to particular prompts and URLs. In other words, share of voice tells you what happened, but our monitoring and citation-driven engine tells you what to do next.

Can you elaborate on how Omnia identifies and parses AI engine prompts and connects them back to actionable recommendations for B2B clients?
We think about this in three layers: what to listen to, how to read it, and what to do with it.
First, we decide what to listen to through Trends. Because nobody except OpenAI, Google or Perplexity sees full query logs, we estimate what people are actually asking AI about in your category by combining three data sets: real user prompts collected (with consent) via our Chrome extension, SEO data, and social listening across different platforms. From that we build topics and prompts, and estimate demand and relative difficulty for each. Difficulty is relative to your brand and your competitors inside that topic, so it is a guide to effort, not an absolute truth. This helps B2B clients pick the right intents to monitor and optimize for, including long-tail prompts that sound like full sentences, which is how people really talk to AI assistants.
Second, we monitor how AI engines answer those prompts. Today we focus on ChatGPT and Google AI Overviews, and optionally Perplexity, because they represent the vast majority of practical volume. We run high-value prompts at least daily. Since LLMs are non-deterministic, we increase frequency when results are noisy until the data converges and we reach a stable picture of reality. After one to two weeks you usually have a reliable baseline of how often you show up, how you are framed, and who else appears. At this level we parse each prompt and answer pair into a structured schema: intent, funnel stage, entities like brands and products, platform, and outcome, for example whether you were recommended, mentioned as an alternative, or ignored. We also separate branded from non-branded prompts so you can use non-branded prompts to see the real competitive battle.
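The structured schema described here might look something like the following sketch. The field names, the `Outcome` values and the sample data are illustrative assumptions, not Omnia’s real data model:

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    RECOMMENDED = "recommended"
    ALTERNATIVE = "mentioned_as_alternative"
    IGNORED = "ignored"

@dataclass
class AnswerRecord:
    """One monitored prompt/answer pair, parsed into structured fields."""
    prompt: str
    platform: str        # e.g. "chatgpt", "google_ai_overviews"
    intent: str          # e.g. "vendor comparison"
    funnel_stage: str    # e.g. "evaluation"
    entities: list[str]  # brands/products found in the answer
    outcome: Outcome     # how the answer treated this brand
    branded: bool        # did the prompt contain the brand name?

record = AnswerRecord(
    prompt="best payment solution for B2B SaaS in Europe",
    platform="chatgpt",
    intent="vendor comparison",
    funnel_stage="evaluation",
    entities=["Stripe", "Adyen", "Mollie"],
    outcome=Outcome.IGNORED,
    branded=False,
)
```

Once every answer is a record like this, separating branded from non-branded prompts is just a filter on the `branded` flag, and competitive analysis becomes a query over the `entities` and `outcome` fields.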
Third, we translate that monitoring into concrete actions using citations and patterns. For each answer we inspect the pages and domains the AI engine cites. Brands that appear more often and more prominently across those top citations tend to win the answer. So we group and score citations at page and domain level, look for patterns in content structure and depth, and then generate recommendations in three main tracks:
- Appear in top citations: identify the third-party sites that AI relies on the most in your category, and suggest PR, partnerships, listings or contributed content to get you included there.
- Leverage similar publications: if a certain type of site is frequently cited, we surface similar domains where you could publish content that follows the same successful patterns.
- Use your own channels as citations: we analyze how top-cited pages are structured and help you mirror those formats on your own site so it becomes a trusted source in its own right.
Because AI engines often use retrieval-augmented generation, improvements on those cited pages can have impact as soon as they are crawled or updated. Third-party fixes can move the needle almost immediately. New content on your own site typically becomes visible once it is indexed, usually within days.
All of this is packaged into an action layer for B2B teams. You do not just see share of voice as a vanity metric. You see which prompts matter, how you compare, which pages and domains are driving answers, and a ranked list of tasks for marketing and product marketing to execute. In short, share of voice tells you what happened, and our prompt, monitoring and citation engine tells you what to do next.
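At a basic level, the citation grouping and scoring described in this answer can be sketched like this, assuming each monitored answer yields a list of cited URLs (the helper and the sample data are hypothetical):

```python
from collections import Counter
from urllib.parse import urlparse

def score_citations(answers: list[list[str]]) -> tuple[Counter, Counter]:
    """Count how often each cited page and each domain appears across
    the answers to a set of monitored prompts. The most frequent
    domains become candidate outreach targets."""
    pages, domains = Counter(), Counter()
    for cited_urls in answers:
        for url in cited_urls:
            pages[url] += 1
            domains[urlparse(url).netloc] += 1
    return pages, domains

# Citations extracted from two answers to the same category of prompts
answers = [
    ["https://g2.com/categories/payments",
     "https://example-blog.com/best-apis"],
    ["https://g2.com/categories/payments",
     "https://reviews.io/payments"],
]
pages, domains = score_citations(answers)
print(domains.most_common(1))  # → [('g2.com', 2)]
```

Scoring at both levels matters: page-level counts tell you which exact listings to get onto, while domain-level counts tell you which publishers and review platforms shape the category as a whole.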
What types of brands are seeing the most immediate ROI from adopting Omnia? Are there common challenges or misconceptions your team has to address with new clients?
The fastest ROI comes from categories where purchases involve research and comparison. Think B2B SaaS and infrastructure, fintech and payments, security, data platforms, and complex consumer services like healthcare or education. In those spaces, buyers naturally ask many questions, and a small increase in how often you are recommended by AI engines can have a noticeable impact on pipeline and revenue.
We are also seeing very strong uplift with startups and scale-ups. They tend to move fast, and Omnia gives them a way to punch above their weight by focusing on the long-tail prompts where AI engines really shine. When they implement our recommendations quickly, especially around long-tail, high-intent queries, we often see a sharp jump in AI visibility compared to much larger incumbents who are slower to react.
We see a few recurring misconceptions. One is “we just need more traditional SEO.” SEO still matters, but AI engines do not simply mirror search rankings. They use documentation, community threads, review sites and long-tail content that might never appear on page one. Another is “these models are black boxes, there is nothing we can do.” You cannot buy your way into an answer with ads, but you can systematically improve the signals and narratives that models rely on.
Could you share an example where Omnia’s insights led to a measurable improvement in a client’s AI-driven visibility or sales?
One good example is a B2B payments company operating across Europe. When we started working together, they were rarely mentioned in AI assistants for high-intent prompts like “best payment solution for B2B SaaS in Europe” or “alternatives to [incumbent] for marketplace payments.”
Omnia showed them three things: the specific prompts where they were invisible but clearly competitive, the third party sources AI engines leaned on most in their category, and the gaps in their own content and messaging compared to how engines were describing the space.
They focused on a small set of actions, such as refreshing key docs, co-creating content with strategic partners, and improving presence on two review and comparison platforms that our attribution layer highlighted as highly influential.
Within a few months, their inclusion rate in high-intent prompts for their category roughly doubled, and they started to appear alongside the main incumbents as a recommended option.
Omnia recently raised pre-seed funding of €3.5 million. How is this capital being deployed to stay ahead of larger incumbents entering the AI optimization space?
Most of the capital is going into product depth and into making Omnia the best possible platform for content that wins in AI.
First, we are investing heavily in our monitoring and insights engine. That means deeper coverage of the key AI surfaces, richer understanding of prompts and long-tail intent, and stronger insights generation so that clients do not just see dashboards but get clear, evidence-based recommendations.
Second, we are putting a lot of effort into content creation workflows. Our goal is that startups and scale-ups can come into Omnia, understand what AI engines are looking for in their category, and then create or adapt content that is truly optimized for AI in one place. We want Omnia to be the platform where fast-moving teams can go from “we see the opportunity” to “we have shipped the right content” without friction.
What is your vision for Omnia’s platform in the context of agentic AI frameworks? Where do you see the greatest opportunity for both brands and Omnia as AI engines get smarter and more autonomous?
If you look at what Omnia does today, you can already see the agentic future in very practical terms.
Agents still have to do three things: understand what the user wants, research options and then choose a plan. Omnia is already focused on those three layers.
Trends tells you what people are actually asking AI about in your category, especially the long-tail prompts that look like real sentences and real tasks. In an agentic world those long-tail prompts become things like “migrate my billing to a new provider in Europe” rather than “billing provider Europe”. Knowing which of those intents exist, and how big they are, is the first step to being chosen.
Monitoring shows how AI engines answer those prompts today: who they recommend, who they ignore, and which pages and domains they rely on as citations. For an agent, those same citations are the evidence it will use to justify picking vendor A over vendor B. So when we show you “you never appear in the citations behind these high-intent prompts”, we are effectively saying “an agent has no reason to pick you yet”.
Actions and content creation are where it becomes very concrete. Based on the citations we see, Omnia tells you exactly what to do: which third-party pages to get onto, which types of publications to replicate, and how to structure your own content so that AI engines and future agents can understand and reuse it. We are actively building workflows so that startups and scale-ups can go from “we see the opportunity in this prompt” to “we have shipped content that looks like what AI already trusts” inside the product.
What advice would you give to founders and brand managers looking to future-proof their online presence as generative AI engines reshape how customers discover and engage with businesses?
I would focus on a few principles.
First, think in prompts rather than only keywords. Map the real questions, worries and jobs to be done your customers have, and make sure there is clear, high quality evidence for each of them across your content and proof points.
Second, make your brand consistently legible across the web. Your website, docs, blog, partners, communities and review sites all now feed into AI systems. Align your facts and your core positioning everywhere, not just on your homepage.
Third, prioritize depth over volume. Models are getting better at ignoring shallow content. Invest in material that genuinely helps someone make a decision or solve a problem, with concrete detail and proof.
Fourth, start measuring your AI visibility early. Track how often you appear in AI answers for your category, who you are compared with, what narratives are attached to your name and where there are gaps.
Finally, give this topic a real owner. AI engine presence should not be an informal side project. Assign responsibility, even part time at first, define a simple set of KPIs and treat it as a new distribution channel that sits alongside search, social and partnerships.
If you do that, you are not just reacting to generative AI. You are actively shaping how it talks about you and the role you play in your market.
