
The Founder Loop: Breaking down the buy versus build debate

Read why the decision to build an internal customer intelligence tool isn't as strategic as it might seem.

Unwrap
August 11, 2025



To build or to buy—that is the question.

It’s one that comes up for teams with the technical resources available to consider building their own customer intelligence platform from scratch. In this edition of The Founder Loop, we sit down with Ryan (Unwrap’s CEO and co-founder) to explore the tradeoffs behind that consideration.

From the hidden costs of building and maintaining internal tools to the challenge of surfacing insights you didn’t know to look for, this conversation is a candid look at why building in-house isn’t always the strategic advantage it appears to be—and how Unwrap is purpose-built to deliver value out of the box. 

When companies think about building their own customer intelligence platform, what do they usually underestimate?

Ryan Millner: “It’s pretty simple—it's the engineering hours required to build and maintain such a platform. 

There are a lot of things under the hood that, if you’re not familiar with this space, you’ll naturally underestimate. People see ChatGPT, and other tools like it, as a place where they can upload data and get pretty straightforward outputs. So they think, ‘Let's just use that approach’ for understanding their customer feedback.

But in reality, there's a ton of other things that need to get built in order to get value. Things like: building a UI that's easy to interact with, upgrading underlying technology, simply maintaining the system over time, and being able to integrate with a variety of different sources besides just the one you may have uploaded to GPT.”
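
To make the integration point concrete, here's a minimal sketch of why pulling from many live sources is a very different engineering problem than a one-off file upload. The interface and class names below are hypothetical, not Unwrap's actual architecture:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterator

@dataclass
class FeedbackRecord:
    """One piece of customer feedback, normalized across sources."""
    source: str
    text: str
    timestamp: str

class FeedbackSource(ABC):
    """Every integration (support tickets, app reviews, surveys, ...)
    implements this interface so the rest of the pipeline can stay
    source-agnostic."""
    @abstractmethod
    def fetch(self, since: str) -> Iterator[FeedbackRecord]: ...

class SupportTicketSource(FeedbackSource):
    def fetch(self, since: str) -> Iterator[FeedbackRecord]:
        # A real connector would page through a vendor API and handle
        # auth, rate limits, retries, and schema drift over time:
        # exactly the maintenance work a one-off upload never surfaces.
        yield FeedbackRecord("support", "App crashes on login", since)
```

Each new source means another connector to build, and every connector has to keep working as the upstream API changes.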

Can you explain how costs add up when teams try to build this internally?

Millner: “People are expensive. Building a great product requires someone's time—and a lot of it. The engineering resources piece is important, but so is teaching others how to use the platform correctly, training them on it, and maintaining it over time.

If the tool is effective, the engineering demand only increases: adding new data sources, refining the product as feedback volumes grow, and incorporating new improvements to the underlying AI models.

Plus, there are huge areas of opportunity that you continue to unlock the more you build—more feature requests, more people integrating it into different workflows. Those never end, and you need a fully staffed team to constantly maintain and improve the tool.”

The Unwrap team talks about the difficulty in building a system that can uncover things you didn’t think to look for. Why is that such a key difference? 

Millner: “This gets to why an off-the-shelf LLM is not well suited for this task—because it can’t detect the right granularity of insights to make them actionable.

When an insight is actionable, that means a few different things. First, it's specific enough that you don't need to do further research—you can act on it right away, because you understand the quantifiable impact of doing so. LLMs, in general, are not great at quantifying themes.

Second is traceability. You need to be able to, for a given customer feedback theme, go deep and read all the customer entries that led to that grouping. This allows you to understand how accurate the platform you’re using is, so you can build trust in it.”
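
As a rough illustration of what quantified and traceable could mean in data terms (an illustrative shape, not Unwrap's actual schema), an actionable insight might carry both a count and pointers back to its raw entries:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """An actionable theme: it carries a quantified impact and keeps
    pointers back to every raw entry that produced it."""
    theme: str
    entry_ids: list[str] = field(default_factory=list)

    @property
    def volume(self) -> int:
        # Quantification: how many pieces of feedback hit this theme?
        return len(self.entry_ids)

checkout_bug = Insight("Checkout fails on saved cards",
                       entry_ids=["t-812", "t-907", "r-1133"])
print(checkout_bug.volume)     # 3: the quantified impact
print(checkout_bug.entry_ids)  # drill into the raw entries to build trust
```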

Can you speak to the technology? What makes it harder to find net-new trends?

Millner: “There are two discrete tasks there. One is, someone has a sense of what the top 10 issues are for their customers and wants to use AI to quantify them. 

Let's say I'm a PM, working on a particular part of a product. I know what the biggest product issues are, but I don't know, specifically, which one's most impactful. That's an easier task because you have a predefined set of labels—the system takes each piece of feedback and matches it to a label. The hard work of coming up with the labels is already done. It’s just a matching game—which is an easy task for both machines and humans, and therefore less valuable.

What's really hard (and more valuable) is finding new themes that you didn't know existed. Let's say you asked the system to rank those top 10 priorities, but then find you're actually missing 2 or 3 items that you weren’t aware of. That's an entirely different task, called clustering.

It’s where a model has to take unlabeled sets of feedback and cluster them, mapping them to different issues. It's hard to get right because there's no ground truth, but so valuable to solve—because you're showing people entirely new friction points or issues that they didn't know existed or didn’t think to look for.”
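
The two tasks Ryan contrasts map onto two standard machine-learning setups: matching against known labels versus unsupervised clustering. Here's a minimal sketch using scikit-learn, with TF-IDF vectors as a toy stand-in for the embeddings a production system would use:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

feedback = ["checkout button does nothing", "card declined at checkout",
            "please add dark mode", "dark mode would be great"]
labels = ["checkout issues", "login issues", "feature requests"]

vec = TfidfVectorizer().fit(feedback + labels)

# Task 1: matching. The labels are known; each entry is simply
# assigned to the closest one. The hard work of naming the themes
# was already done by a human.
sims = cosine_similarity(vec.transform(feedback), vec.transform(labels))
matched = [labels[row.argmax()] for row in sims]

# Task 2: clustering. No labels and no ground truth: the model must
# discover the groupings itself, and those groupings still need to
# be named and verified. This is what surfaces unknown themes.
clusters = KMeans(n_clusters=2, n_init="auto").fit_predict(
    vec.transform(feedback))
```

With real embeddings, semantically similar text would match even without shared words; the point is that the first task starts from human-made labels, while the second has to invent the groupings, and names for them, from scratch.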

How does Unwrap approach usability and why is it such a critical piece of the build vs. buy equation?

Millner: “When people start going down the ‘build’ route, typically they will take a dataset, upload it to an LLM, and then share out the results. They think, ‘Great, my work here is done!’ But there are limitations—it’s not a recurring process, it’s a one-off exercise that requires manual work.

Also, it's isolating. You are interacting with this system and seeing the results, but if you want to share the results, if you want others to verify the insights, if you want people to perform deep dives of their own on the same system—that's nearly impossible without a centralized platform. 

Even integrating with data visualization platforms, like Tableau, has its limitations. You have to build and maintain all the visualizations, update them over time—and frankly, they're fairly difficult to use.

Having a really simple-to-use yet powerful UI, where anyone on your team can see the same insights or build their own charts with live-updating data, is a big difference. It’s often why internal tools that are cobbled together don't see wider adoption and don't drive as much impact.”

What does it take to maintain and evolve a customer intelligence platform over time?

Millner: “There are the basics of platform maintenance—you want to have a system that is getting better over time: faster, more proficient, more capable.

What's harder is that technology in the AI space—particularly around natural language processing (NLP)—is evolving so quickly. To understand the latest capabilities, to constantly test and upgrade the underlying system when there are improvements, is a full-time job. 

You have to be really plugged into the ecosystem. You have to have a robust testing framework that allows you to test and swap in new models as improvements come up—without disrupting existing workflows. Because the speed of innovation in this space is so fast, if you’re not on top of it, what you've built today won’t be effective in 6 months.” 
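
One way to picture the kind of testing framework he's describing is a regression gate: score every candidate model against a frozen, human-labeled set, and only promote it if it clearly beats the incumbent. This is an illustrative sketch, not Unwrap's internal tooling:

```python
def accuracy(model, golden_set) -> float:
    """Score a model on a frozen, human-labeled regression set so
    every candidate is measured the same way."""
    hits = sum(model(text) == label for text, label in golden_set)
    return hits / len(golden_set)

def maybe_swap(current, candidate, golden_set, margin=0.02):
    """Promote the candidate only if it clearly beats the incumbent;
    otherwise existing workflows keep running on the proven model."""
    if accuracy(candidate, golden_set) >= accuracy(current, golden_set) + margin:
        return candidate
    return current

# Toy usage: each "model" is just a callable from text to a label.
golden_set = [("app crashes on login", "login issues"),
              ("card declined at checkout", "checkout issues")]
old_model = lambda t: "login issues" if "login" in t else "checkout issues"
new_model = lambda t: "login issues"
assert maybe_swap(old_model, new_model, golden_set) is old_model
```

The frozen golden set is what makes swaps safe: a new model that looks impressive in a demo still has to prove itself against the same yardstick as everything that came before it.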

How does Unwrap stay ahead, in terms of platform quality and innovation?

Millner: “First, through our industry knowledge: we understand what's evolving and what's new. Second, through our ability to quickly test, verify, and launch new models.

We are constantly measuring the performance of the 20+ AI tasks our system performs, and when we notice an improvement, we have the tooling to seamlessly swap it in without disrupting existing trends and insights.

Our team is incredibly passionate about NLP, so everyone is constantly working to understand the latest models—reading blogs about what's coming up, combing through the intricacies of why it's performing better, and understanding the cost implications of bringing such a model to production.”

For a team that's still weighing the decision to build or buy, what's the most important takeaway you'd want them to hear?

Millner: “This is all we do—all we think about every day, every night. To try and build in-house is going to yield a more expensive product that's less capable, and at the end of the day, no one wants that. 

Leave it to the experts.”

Discover what matters most.

Book a demo to unwrap what matters to your customers, so you can build what they'll love.
