The Promise of Sovereign Cloud Computing: Insights from Lyceum’s Magnus Grünewald
Magnus Grünewald is the CEO and co-founder of Lyceum, a Berlin- and Zurich-based startup building a sovereign European GPU cloud to democratize access to AI compute amid Europe’s infrastructure lag. In this Eqvista interview, he shares how his background in energy and infrastructure at Enpal and Arbio fueled Lyceum’s launch, and lays out his vision of AI as the next industrial revolution – served through a “power socket for compute” with automatic optimization and EU data sovereignty.
Magnus also highlights Lyceum’s €10.3M pre-seed raise, the 2-3x efficiency gains he sees over hyperscalers, and his bold advice for European founders: be aggressive enough to lead in distributed, sustainable AI infrastructure.

Magnus, thank you for joining us. Could you please share what inspired you to found Lyceum?
Thank you for having me! The inspiration for Lyceum really came from recognizing that we’re witnessing something transformative – AI isn’t just another tech trend, it’s genuinely the next industrial revolution. We’re at this incredible inflection point where AI has the potential to reshape how we work, learn, and solve problems.
But what really drove me to action was seeing a disconnect between this revolutionary potential and who actually has access to it. Too often, cutting-edge AI remains locked behind technical barriers or concentrated in the hands of a few large players. At Lyceum, we fundamentally believe that everyone – regardless of their technical background – should be able to harness this technology.
That’s why we’re focused on two core missions: First, making AI incredibly accessible. We want to remove the complexity and create tools that feel as natural to use as any everyday application. Second, and this is particularly important given the current geopolitical landscape, we’re committed to building AI sovereignty within the EU. Europe needs its own robust AI ecosystem – not just for economic competitiveness, but to ensure our values and regulations are reflected in the technology that will shape our future.
Essentially, Lyceum exists to democratize AI while ensuring Europe has a strong, independent voice in this new era.
How do you see Europe’s position in the global AI infrastructure race, and what gap did you see in the European market?
Europe’s position in the AI infrastructure race is stark when you look at the hardware layer. The reality is, we don’t have a single major GPU manufacturer. NVIDIA, AMD, Intel – they’re all American, and their supply chains run mainly through Asia.
The cloud infrastructure gap is equally concerning. AWS, Azure, Google Cloud – yes, they have European regions, but the control, the innovation, the strategic decisions all happen in Seattle and Mountain View. When you look at pure AI compute capacity, Europe has maybe 10-15% of what’s available in the US. And it’s not just quantity – it’s about having the latest hardware.
The gap we saw wasn’t just about having data centers in Europe – it’s about having AI-native infrastructure. Traditional European hosting providers have data centers, but they’re built for web hosting, not for AI workloads. They lack the high-bandwidth interconnects, the specialized cooling for GPU clusters, and the infrastructure to handle massive parallel computing jobs.
That’s where Lyceum comes in. We’re building cloud infrastructure specifically optimized for AI workloads, with European data sovereignty guaranteed. We’re aggregating GPU capacity across European providers, creating economies of scale that individual companies can’t achieve. Think of us as creating a European alternative to the hyperscaler AI clouds – where the infrastructure, the data, and the control all stay in Europe.
What do you think will define the next chapter for European AI infrastructure?
The next chapter will be defined by two major shifts.
First, distributed compute as a competitive advantage. Europe probably won’t win by building single massive data centers like in the US – and we don’t need to. Our strength is in our distributed nature. I see a future where we’re orchestrating AI workloads across a federated network of smaller, specialized compute centers spread throughout the continent. This isn’t a limitation – it’s actually perfect for edge AI, for data residency requirements, for resilience. The technology to efficiently distribute training and inference is finally catching up to make this viable.
Second, and this is crucial – hardware sovereignty through specialization. We’re not going to out-NVIDIA NVIDIA. But Europe is already leading in specialized chips – look at what’s happening with neuromorphic computing, quantum-classical hybrid systems, and other novel concepts. The next chapter isn’t about catching up on general-purpose GPUs; it’s about defining new compute paradigms where Europe can lead. Where Europe has always struggled is moving from scientific discovery to commercial adoption, and that is what we need to focus on.
At Lyceum, we’re building for this future. We’re creating the orchestration layer that can seamlessly distribute workloads across this emerging European compute fabric. We’re partnering with renewable energy providers to lock in sustainable compute. And we’re designing our platform to be hardware-agnostic, ready to leverage whatever specialized European silicon emerges.
The winners in the next chapter won’t be those with the most GPUs, but those who can most efficiently orchestrate diverse compute resources while guaranteeing sovereignty and sustainability. That’s Europe’s opportunity.
What makes Lyceum unique compared to global cloud providers?
What makes Lyceum fundamentally different is that we’ve reimagined what cloud infrastructure should be. Global providers give you a toolbox – hundreds of services, instance types, configuration options. They expect you to be the architect. We think that’s backwards.
Our vision is the power socket for compute. When you plug something into an electrical outlet, you don’t think about which power plant the electricity comes from, how the grid is balanced, or what voltage transformations are happening. It just works. That’s what we’ve built for AI compute.
When a customer sends a workload to Lyceum, they don’t choose instance types. They don’t configure load balancers. They don’t worry about GPU availability. Our platform automatically handles everything – we analyze your workload in real-time, select the optimal hardware, distribute it across our infrastructure, handle all the orchestration. You just plug in and compute.
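A minimal sketch of what such a “plug in and compute” contract could look like is shown below. Lyceum has not published a public SDK, so the client class, method names, endpoints, and parameters here are purely illustrative assumptions, not the actual API – the point is the shape of the interaction: describe the workload, submit it, get a result, with no instance types or schedulers in sight.

```python
# Hypothetical sketch only: the client, method names, and parameters are
# illustrative assumptions, not Lyceum's actual API.

from dataclasses import dataclass


@dataclass
class Workload:
    """A self-describing job; the platform is meant to infer everything else."""
    image: str      # container with the user's training or inference code
    data_uri: str   # where the input data lives (stays in-region)
    region: str     # the only user-facing constraint: where data may reside


class ComputeSocket:
    """Toy stand-in for a 'power socket' style compute client."""

    def submit(self, workload: Workload) -> str:
        # A real platform would profile the workload here, pick hardware,
        # schedule it, and return a job handle. This stub just pretends.
        print(f"Submitted {workload.image} to {workload.region} capacity")
        return "job-0001"

    def result(self, job_id: str) -> dict:
        # Placeholder: a real client would poll the orchestrator.
        return {"job_id": job_id, "status": "succeeded"}


if __name__ == "__main__":
    socket = ComputeSocket()
    job = socket.submit(Workload(image="registry.example.eu/train:latest",
                                 data_uri="https://data.example.eu/dataset.tar",
                                 region="eu-de"))
    print(socket.result(job))
```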
The key is that we’ve built this on our own datacenter infrastructure. Unlike global providers who’ve retrofitted AI onto general-purpose clouds and datacenters, every decision we’ve made – from our network topology to our cooling systems to our hardware selection – is optimized for AI workloads. This vertical integration lets us deliver performance that matches or beats the hyperscalers, but with radical simplicity.
The global providers are building Swiss Army knives – incredibly powerful but complex. We’re building the power grid for AI – invisible, reliable, and so simple that using it requires no thought at all. That’s not just an incremental improvement; it’s a fundamental rethink of how AI infrastructure should work.
What challenges did you face building Lyceum, and how did you overcome them?
The two biggest challenges we faced were actually interconnected – capital and talent.
On the capital side, we were trying to raise money for AI infrastructure in Europe, which is a tough sell. VCs would ask, ‘Why not just resell AWS?’ or ‘How can you compete with hyperscalers?’ Traditional European investors wanted to see a lot of proof points and revenue before investing in infrastructure, while US investors questioned why we’d build in Europe at all. Infrastructure is capital intensive – you need serious money before you can generate your first euro of real revenue.
What changed everything was finding investors who understood the strategic importance of what we’re building. Our investors aren’t just writing checks – they’re true partners thinking creatively about how to fund European AI sovereignty.
The software challenge was equally daunting. We’re not building a simple SaaS product – we’re creating an orchestration layer that needs to be incredibly sophisticated yet completely invisible to users. That requires world-class distributed systems engineers, ML infrastructure experts, people who’ve built hyperscale systems before.
The breakthrough was realizing we didn’t need to compete with Silicon Valley on compensation alone. The engineers who joined Lyceum came because they wanted to solve problems that matter.
The key to overcoming both challenges was the same: finding people who saw beyond the immediate obstacles to the massive opportunity. Whether investors or engineers, the ones who joined Lyceum understood we’re not just building another cloud provider – we’re building critical infrastructure for Europe’s AI future.
How does Lyceum ensure compliance with the constantly evolving European data privacy regulations beyond GDPR?
This is actually one of our core advantages – we’re not retrofitting American infrastructure for European compliance. We’re European-first, which means compliance is architected into every layer of our stack.
Beyond GDPR, we’re seeing a wave of new regulations – the AI Act, sector-specific rules for healthcare and finance, national interpretations like France’s CNIL guidelines. Most global providers treat these as a compliance checklist. For us, it’s our operating reality.
The foundation is compliance by architecture. Data residency isn’t just a configuration option – it’s physically enforced. When you run a workload in Germany, it’s impossible for that data to leave German infrastructure. The networking layer literally won’t allow it. We’ve built what we call ‘regulatory boundaries’ into our infrastructure fabric.
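One way to picture a “regulatory boundary” is a placement check that refuses to schedule a workload on any node outside its declared residency region, rather than treating residency as a configurable preference. The sketch below is only an illustration of that idea: the region names, node list, and scheduling logic are hypothetical and are not Lyceum’s implementation.

```python
# Illustrative sketch of residency-enforced placement; regions, nodes, and
# logic are made up for the example and are not Lyceum's implementation.

ALLOWED_REGIONS = {
    "de": {"de-berlin-1", "de-frankfurt-1"},
    "ch": {"ch-zurich-1"},
}


def select_nodes(workload_region: str, candidate_nodes: dict[str, str]) -> list[str]:
    """Return only nodes whose datacenter lies inside the residency boundary.

    candidate_nodes maps node name -> datacenter region.
    Raises instead of silently falling back to out-of-region capacity.
    """
    allowed = ALLOWED_REGIONS.get(workload_region)
    if allowed is None:
        raise ValueError(f"Unknown residency region: {workload_region}")

    eligible = [node for node, dc in candidate_nodes.items() if dc in allowed]
    if not eligible:
        raise RuntimeError(
            f"No capacity inside residency boundary '{workload_region}'; "
            "refusing to place the workload elsewhere."
        )
    return eligible


if __name__ == "__main__":
    nodes = {"gpu-a": "de-berlin-1", "gpu-b": "us-east-1", "gpu-c": "de-frankfurt-1"}
    print(select_nodes("de", nodes))  # ['gpu-a', 'gpu-c']
```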
But honestly, technical architecture is just the start. What really matters is that we’re deeply embedded in the European regulatory context. We work closely with our customers’ compliance teams to understand their specific requirements. When a financial services company needs to ensure their AI models meet ECB guidelines, or a healthcare company needs to navigate both GDPR and medical device regulations, we’re not learning about these requirements for the first time.
The key difference is that global providers are trying to serve a hundred markets with one platform. They’ll always optimize for their largest customers – usually American enterprises. When European regulations get stricter or more complex, you’re often left figuring out workarounds on your own.
We’re different because Europe isn’t an edge case for us – it’s our entire focus. Our infrastructure decisions, our roadmap, our partnerships – everything is built around succeeding in the European regulatory environment. That means when new regulations like the AI Act come into force, we’re not scrambling to adapt. We’ve been preparing for it since day one.
This focus might seem limiting, but it’s actually liberating for our customers. They can innovate with AI knowing their infrastructure partner truly understands the European context – not just technically, but culturally and regulatorily. That peace of mind is invaluable when you’re building AI systems that need to be trusted by European users and regulators alike.
Can you share any success stories from clients who have built or scaled innovative AI solutions on Lyceum?
We’re seeing two clear patterns in our early customer engagements that validate our vision.
First, European companies across sectors – from AI startups to research institutions – are hitting the same wall. They need advanced compute infrastructure but face an impossible choice: use US hyperscalers and compromise on sovereignty, or accept inferior performance. We’re proving there’s a third option – European infrastructure that actually delivers better results.
Second, and this is what excites me most, we’re consistently finding 2-3x efficiency improvements when companies migrate their workloads to us. It’s not magic – it’s focus. The hyperscalers built general-purpose clouds and bolted on AI capabilities. We built everything from the ground up specifically for AI workloads. That specialization translates directly into better performance and lower costs.
What’s particularly telling is the range of interest we’re seeing – from startups doing inference at scale to enterprises training proprietary models. They all share the same frustrations with current options and the same surprise when they see what purpose-built AI infrastructure can actually deliver.
These early engagements prove that Lyceum isn’t just filling a gap in the market. We’re demonstrating that European infrastructure can be both sovereign AND superior. That combination is why I’m confident we’ll win.
Congratulations on recently closing a €10.3M pre-seed round. What are your priorities for Lyceum’s next phase?
Thank you! It’s an exciting milestone, but really just the beginning. We have three immediate priorities.
First, building out our core platform. We’ve proven the concept, but now we need to turn it into production-ready infrastructure. This means developing our orchestration layer, the automated workload optimization engine, and the APIs that make using Lyceum as simple as plugging into a power socket. The technical architecture is clear – now it’s about flawless execution.
Second, assembling a world-class team. Infrastructure at this scale requires exceptional talent. We’re hiring distributed systems engineers who’ve built hyperscale platforms, ML infrastructure experts who understand AI workloads at a fundamental level, and critically, people who share our vision for European AI sovereignty. The capital allows us to attract engineers who might otherwise default to big tech.
Third, laying foundations for rapid scaling. This means securing datacenter partnerships, locking in GPU allocations for the next 18 months, and building relationships with the enterprises that will become our early adopters. We’re also establishing our presence in key European tech hubs – you can’t build European infrastructure from a single city.
What’s crucial is that these aren’t sequential – they’re happening in parallel. Every engineer we hire accelerates our platform development. Every datacenter partnership increases our value to customers. Every customer conversation shapes our product roadmap.

What trends in AI infrastructure or the broader tech landscape excite you most as you look ahead?
Three trends have me particularly excited about what’s coming.
First, the shift from training to inference. Everyone’s focused on who can train the biggest model, but the real infrastructure demand is shifting to inference at scale. We’re seeing ratios of 1:100 – for every dollar spent on training, companies need a hundred dollars of inference compute. This plays perfectly to our distributed European model. You don’t need one massive cluster for inference; you need efficient, distributed compute close to your users.
Second, hardware diversity exploding. Every year NVIDIA releases three more chips, each one more expensive than last year’s version, while competitors race to catch up with competitive hardware of their own.
But here’s the thing – the vast majority of customers don’t actually know whether they need the newest, best GPUs. Sure, Jensen says “twice the price, three times the performance,” but can that really be generalized?
This massive asymmetry of information in the hardware space means millions of euros wasted for companies. Someone needs to abstract away this complexity. Developers shouldn’t need to rewrite their code for each chip. Our platform is built to be hardware-agnostic from day one, automatically routing workloads to whatever hardware runs them best.
There is, of course, the fairy tale of simply reducing compute down to FLOPS, VRAM, and so on, and letting everybody use any chip. But even just telling users, “You don’t need a B300 – your model runs just fine on an A100,” adds a lot of value.
And as the hardware landscape fragments, our orchestration layer becomes even more valuable.
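The “you don’t need a B300” point can be made concrete with a back-of-envelope memory check. The sketch below assumes FP16 weights (2 bytes per parameter) plus a rough overhead factor for KV cache and activations; the overhead factor and the decision rule are illustrative simplifications, not Lyceum’s routing logic.

```python
# Back-of-envelope check: do a model's FP16 weights (plus a rough overhead
# allowance) fit in a single GPU's memory? Overhead factor is an assumption.

GPU_MEMORY_GB = {
    "A100-80GB": 80,
    "H100-80GB": 80,
}

BYTES_PER_PARAM_FP16 = 2
OVERHEAD_FACTOR = 1.3  # crude allowance for KV cache, activations, runtime


def fits_on(gpu: str, num_params_billion: float) -> bool:
    """Rough check: FP16 weights plus overhead vs. a single GPU's memory."""
    required_gb = num_params_billion * BYTES_PER_PARAM_FP16 * OVERHEAD_FACTOR
    return required_gb <= GPU_MEMORY_GB[gpu]


if __name__ == "__main__":
    for size in (7, 13, 30, 70):
        verdict = "fits" if fits_on("A100-80GB", size) else "needs sharding or a bigger GPU"
        print(f"{size}B-parameter model on A100-80GB: {verdict}")
```

Under these assumptions, models up to roughly 30B parameters fit comfortably on a single 80GB card for inference, which is the kind of answer that saves a buyer from overprovisioning.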
Third, and this is the big one – AI becoming mundane infrastructure. Right now, AI is special. Companies have AI strategies, AI teams, AI budgets. But it’s rapidly becoming just another part of the stack, like databases or networking. When that happens, the winners won’t be those with the most exotic technology, but those who make AI infrastructure boring, reliable, and invisible. Just like electricity.
What excites me most is how these trends intersect with Europe’s moment. The inference shift means our distributed infrastructure is an advantage, not a compromise. Hardware diversity means we can leapfrog by adopting the best new chips without legacy lock-in. And as AI becomes critical infrastructure, European sovereignty becomes non-negotiable.
We’re building Lyceum at exactly the right moment. The market is shifting from ‘how do we build AI?’ to ‘how do we run AI efficiently, sustainably, and sovereignly?’ That’s our sweet spot.
What advice would you give entrepreneurs launching technical infrastructure startups in Europe?
My biggest advice? Be bold. Be aggressive. Too many European entrepreneurs apologize for building in Europe or limit their ambitions to being a ‘regional player.’ That’s nonsense.
Look, infrastructure is hard. It’s capital intensive, it takes time, and yes, you’re competing with companies that have seemingly unlimited resources. But that’s exactly why you need audacious goals. If you’re going to build infrastructure, don’t build for today’s market – build for the market in five years. Don’t aim to be 80% as good as the incumbents. Aim to be 10x better at something specific.
Being aggressive means moving fast on partnerships, talent, and capital. The best datacenter locations? They’re being locked up now. The top engineers? They’re getting offers from five companies. Strategic GPU allocations? You need to secure them 18 months in advance. In infrastructure, hesitation is death.
But here’s what being bold really means – reject the narrative that Europe can’t lead in technical infrastructure. We have advantages that Silicon Valley doesn’t – energy infrastructure, regulatory clarity, and honestly, a more sustainable approach to building technology. Use them.
If you’re not getting told your vision is impossible at least once a week, you’re not thinking big enough. Europe doesn’t need more conservative infrastructure companies. It needs founders willing to be as aggressive as our American counterparts – but building for European values. That’s how we win.
