Decoding the Communication AI Ignores: Frederik Sally on Interhuman AI’s Social Intelligence Layer
In an exclusive Eqvista interview, Frederik Sally, Co-Founder and COO of Interhuman AI, reveals how his Copenhagen-based startup is pioneering a social intelligence layer for AI to decode non-verbal cues like tone, body language, and pauses – elements comprising up to 93% of human communication. Frederik credits co-founder Paula Petcu’s decade in brain health tech for spotting AI’s blind spot for subtle signals, paired with CSO Line Clemmensen’s multimodal research and his own product expertise from prior startups. Their vision, rooted in the company’s DTU origins, shifts AI from transactional to relational interactions.
Interhuman’s models blend computer vision, audio analysis, and behavioral science to detect real-time signals like confusion, engagement, or disagreement, rather than simplistic emotion labels. A smile paired with a forward lean can signal engagement, while the same smile with a backward lean can mean the opposite; reading that context is what makes the AI nuanced and context-aware.
Following the €2M pre-seed round, the funds will go toward model training, an API built for easy integration, team expansion to about 10 people (mostly AI engineers), and pilots in training and healthcare. As COO, he prioritizes alignment on a few key goals amid AI’s rapid pace, emphasizing ethical guardrails like EU AI Act compliance and a firm line against surveillance use.

Frederik, Interhuman AI aims to build a social intelligence layer that enables AI to understand non-verbal human communication. What inspired you and your co-founders to focus on this particular aspect of human-AI interaction?
AI is getting really good at language, but it still misses almost everything that makes human communication… human. The most important part of what we say isn’t in the words themselves – it’s in the tone, our body language, the pauses – all the non-verbal signals.
Paula, my co-founder, has spent more than a decade working at the intersection of technology and brain health. She led the development of a digital therapeutic for brain health, and before that she worked on bringing a traditional pharma company into the digital health era. So she’s seen up close how much communication depends on subtle signals beyond words and how easy it is to miss them if you’re only looking at text or data. That experience shaped the idea that AI needs to get better at picking up those cues if it’s going to be trusted in sensitive areas like healthcare. Line, our CSO, has been researching multimodal behavior analysis for years. And then I came in with a product and business background, helping to shape the vision, define where this technology could have the biggest impact, and turn it into a business case investors and customers could believe in. We came together around the idea that the future of AI shouldn’t just be transactional – it should be relational.
Could you explain the core technology behind the social intelligence layer; how does your AI simultaneously process body language, facial expressions, and voice tone to detect emotional states in real time?
Emotion recognition has been around for a while – you’ve seen it in plenty of applications where software says you’re happy, sad, or neutral. The problem is that this approach oversimplifies what’s really going on, so we’re actually doing something different. What we do is look at observable behaviors – the stuff people naturally notice in each other without thinking about it. Emotions are internal and often messy to label – happy, sad, neutral doesn’t really help in real-world interactions. Instead, we train our models to detect external signals: confusion, hesitation, curiosity, disagreement. We do this by combining computer vision and audio analysis with behavioral science. So we’re capturing facial expressions, micro-movements, posture, and tone of voice, and then coding them into “social signals” that can be interpreted in context. For example, a smile while leaning forward might signal engagement, while the same smile leaning back could mean the opposite. That’s the nuance we’re teaching AI to read in real time.
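To make that idea concrete, here is a minimal illustrative sketch of how observable cues from vision and audio might be mapped to context-dependent social signals. The data structure and function names (FrameCues, interpret_signal), the cue fields, and the thresholds are assumptions invented for this example; they do not reflect Interhuman AI’s actual models or API.

```python
# Illustrative sketch only: the cue names, thresholds, and rules below are
# invented for this example and do not reflect Interhuman AI's actual system.
from dataclasses import dataclass


@dataclass
class FrameCues:
    """Observable cues extracted for one short window of a conversation."""
    smiling: bool           # from facial-expression analysis (vision)
    lean: str               # "forward", "backward", or "neutral" (posture)
    speech_rate_wpm: float  # words per minute (audio)
    pause_ratio: float      # fraction of the window spent silent (audio)


def interpret_signal(cues: FrameCues) -> str:
    """Map raw cues to a coarse social signal, taking context into account.

    The same cue (a smile) reads differently depending on posture, which is
    the kind of nuance described above.
    """
    if cues.smiling and cues.lean == "forward":
        return "engagement"
    if cues.smiling and cues.lean == "backward":
        return "disengagement"
    if cues.pause_ratio > 0.4 and cues.speech_rate_wpm < 90:
        return "hesitation"
    return "neutral"


print(interpret_signal(FrameCues(smiling=True, lean="forward",
                                 speech_rate_wpm=140, pause_ratio=0.1)))
# -> engagement
```

In practice a mapping like this would presumably be learned from annotated data rather than written as hand-coded rules; the sketch only shows the shape of the problem: combining cues from several modalities and interpreting them jointly.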
What industries or market segments do you see as the most promising for adopting social-aware AI technology in the near term, and why?
In the near term, we see the biggest opportunities in areas where communication really is the product. So things like training and coaching – sales, leadership, healthcare – where how you come across is just as important as what you say. Digital health is another big one, because if a tool can pick up on stress or disengagement in real time, it can actually adjust and be more supportive. And then there’s customer service, which is probably the most obvious one – nobody likes talking to a bot that doesn’t get you. Adding social intelligence there could turn a frustrating experience into something that actually feels human.
How does Interhuman AI plan to stay competitive and innovative, given the rapid advancements in AI by large tech players and startups alike?
Yeah, it’s a fair question because the AI space moves insanely fast these days. For us, the way to stay competitive isn’t to try and outscale the giants, it’s to go deep in social intelligence. The big players are focused on scale and general intelligence. We’re focused on depth in a very specific domain. We’re building the specialist layer that can plug into any of those systems and make them socially aware. And with our easy-to-use API, customers will be able to control that layer at a very granular level – tuning it to their own context and use cases, rather than getting a one-size-fits-all solution.
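As a purely hypothetical illustration of what that granular, per-customer control could look like, here is a small configuration sketch; the keys and values are assumptions for the example, not the product’s real API.

```python
# Hypothetical configuration for a social-intelligence layer. The keys below
# are invented for illustration and are not Interhuman AI's actual API.
session_config = {
    "signals": ["confusion", "engagement", "hesitation"],  # which signals to surface
    "modalities": {"video": True, "audio": True},          # which inputs to analyze
    "context": "sales_training",    # domain hint so cues are read in context
    "sensitivity": 0.7,             # how readily a signal is reported (0.0-1.0)
    "reporting_interval_s": 5,      # how often aggregated signals are emitted
}
```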
How can startups identify the right market fit for emerging AI technologies like social intelligence, where the use cases and customer education may be nascent?
With emerging tech like this, you can’t just throw it out there and expect people to get it. You have to find the moments where existing solutions clearly fall short – where people already feel the pain. For us, that’s in soft skill training, where words alone don’t capture the full picture. The trick is to run small, focused pilots that prove real impact quickly, and then build from there. I think that market fit in a new category isn’t about selling the big vision right away but rather about showing in very concrete terms that your tech solves a problem better than anything else.

How do you balance privacy concerns with the need to analyze sensitive behavioral and emotional data? What steps have you taken to ensure ethical AI use?
That’s been a priority from day one. We’re dealing with sensitive data, so we’re building the guardrails before scaling the tech. First, we only look at observable behaviors – the same signals people naturally notice in a conversation – we’re not trying to read minds. Second, we comply with the EU AI Act and make transparency part of how the product works. And third, we’ve drawn some clear lines on where we won’t go, like surveillance or military use. At the end of the day, we’re building this to help people understand themselves and each other better.
What were some of the biggest operational challenges you faced building a deep-tech startup at the intersection of AI and behavioral science, and how did you overcome them?
Honestly, we’ve been fortunate to bring together some very competent people from the start, so alignment has come pretty naturally. It’s been incredibly valuable having a behavioral scientist on the founding team. He makes sure our interpretations are rooted in actual science, not just in intuition or in existing AI models that might be biased. That combination of behavioral science and top-notch AI engineering has made things much easier and smoother than I ever could’ve hoped for.
As COO, how do you prioritize activities and align your teams across product, engineering, and go-to-market strategies in such a rapidly evolving AI market?
For me it comes down to focus and alignment. There’s so much happening in AI right now that it’s easy to get distracted. So I try to set a few really clear priorities – the things that matter most for moving us forward – and then make sure product, engineering, and go-to-market are all pointed at those same goals. Making sure everyone knows what the north star is. And because we’re still a small team, we can keep communication tight and move quickly without losing that alignment.
Your company recently raised a €2 million pre-seed round. How do you plan to deploy these funds towards product development, team expansion, and market penetration?
This round is really about taking us from research and prototype into a real product. Most of the funding goes into developing and training our models, and into building a clean API with a great developer experience so companies can plug social intelligence into their products with just a few lines of code. We’re also expanding the team, going from just the founders to about 10 people, mainly AI engineers and other technical roles. On the commercial side, we’re putting resources into pilots with customers in areas like training and healthcare, so we can prove the value quickly and build our first wave of adoption. The plan is straightforward: build the core technology, prove its value in pilots where communication really matters, and make it effortless for developers to adopt.
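To give a sense of what “just a few lines of code” could look like, here is a hypothetical integration sketch; the endpoint URL, request parameters, and response fields are placeholders, since the API itself is still being built and no public SDK is assumed.

```python
# Hypothetical integration sketch: the endpoint, parameters, and response
# fields are placeholders, not a published Interhuman AI SDK.
import requests

API_URL = "https://api.example.com/v1/analyze"  # placeholder endpoint


def analyze_clip(video_path: str, api_key: str) -> dict:
    """Upload a short recorded clip and return the detected social signals."""
    with open(video_path, "rb") as clip:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": clip},
            data={"signals": "confusion,engagement,hesitation"},
        )
    response.raise_for_status()
    return response.json()  # e.g. {"engagement": 0.82, "confusion": 0.11}
```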
For companies looking to raise early-stage funding in deep tech and AI, what approaches or messaging have you found most effective in attracting investors?
It was not easy, to be honest. And I think that’s partly because we’re in Europe. Deep tech here doesn’t always get the same immediate hype as in the US, so you have to work harder to show both the vision and the practical business case. What worked for us was being very clear on the gap we’re solving – that up to 93% of human communication is non-verbal and current AI ignores it – and why now is the right time to fix it. Pairing that with a team that combines scientific depth with tech and product experience made the story credible. So my takeaway is that in Europe, you can’t just sell the dream; you need to prove the wedge and the execution path. That’s what convinced our investors at least.
What key advice would you give to founders looking to build AI startups focused on human-centered technology?
I don’t know if I’m in a position to give big advice yet – we’re still early in our own journey. But one thing I’ve seen work is how much it matters to build a really clear vision and story around what you’re doing. In a crowded space like AI, people need to instantly get why you exist and why it matters. Having a strong visual brand and narrative has helped us a lot in attracting both investors and talent, because they can see themselves in the mission.
