
Many hackathons end the same way. Engineers spend several days building something clever, present it to applause, then watch it fade into a line item in a spreadsheet nobody opens again. I’ve seen it happen. I’ve been guilty of it myself.
When we decided to run our AI hackathon last October, I was determined we’d have a different outcome.
We’ve been using machine learning to power TV measurement and performance since day one, so when the gen AI wave hit, we were ready. Our company-wide OKR is to become an AI-native organization, not simply a company that plasters AI onto its website, but one where AI is embedded in how we build, how we hire, and how our product helps clients.
That meant our AI hackathon had to be structured differently from the start. Every project had to clear two bars: meaningfully use gen AI, and connect directly to TV advertising. We trained the team beforehand through an internal AI guild that built a shared toolkit.
About 50 ideas came in. Sixteen teams pitched Shark Tank–style with live demos. Engineers we surveyed afterward called it the most productive hackathon they’d been part of because they knew their work would actually ship. It did. All 16 projects are now on our product roadmap, with several already live.
One project moved especially fast, and it’s a great example of why we structured the hackathon the way we did. The problem it solved wasn’t hypothetical: our self-serve advertisers were running into it every time they set up a programmatic CTV campaign.
When you build a campaign on our platform, you have access to over 1,000 audience segments — interests, behaviors, demographics — that determine which viewers see your ads. The breadth is intentional: the right audience targeting is the difference between a campaign that performs and one that doesn’t. But navigating a thousand options to find the segments most relevant to your brand is a different problem. For a first-time advertiser, it’s overwhelming before you’ve even started.
We knew this was friction worth eliminating. The hackathon took it on, and what the team built became three features now reshaping how advertisers interact with our platform.
The first is Search-Intent. Previously, a keyword search returned only exact matches, which was useful, but limited. Now, when an advertiser searches for “yoga,” the platform also surfaces contextually related segments like “Fitness,” tagged “May be relevant.” The platform understands what you’re trying to accomplish, not just what you typed.
Figure: Example of the AI-powered search intent dashboard advertisers can use to search audience segments while building their TV ad campaigns.
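The mechanics behind this kind of search can be sketched simply: embed the query and the segment names in a shared vector space, return exact keyword hits as usual, and surface anything else above a similarity threshold with a softer tag. The sketch below is a minimal illustration of that idea, not our production code; the segment names, toy vectors, and threshold are all hypothetical stand-ins for a real embedding model's output.

```python
from math import sqrt

# Toy embeddings standing in for a real embedding model's output.
# All segment names and vectors here are illustrative.
SEGMENT_EMBEDDINGS = {
    "Yoga Enthusiasts": [0.9, 0.4, 0.1],
    "Fitness": [0.8, 0.5, 0.2],
    "Luxury Autos": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search_segments(query_vec, exact_matches, threshold=0.95):
    """Return exact keyword hits plus semantically similar segments,
    the latter tagged 'May be relevant' rather than 'Exact match'."""
    results = [(name, "Exact match") for name in exact_matches]
    for name, vec in SEGMENT_EMBEDDINGS.items():
        if name in exact_matches:
            continue
        if cosine(query_vec, vec) >= threshold:
            results.append((name, "May be relevant"))
    return results
```

With a query vector close to the "yoga" region of the space, "Fitness" clears the threshold and is surfaced as related, while an unrelated segment like "Luxury Autos" is not.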
The second is Segment Recommendations. This is where the AI does the heavier lift. The platform analyzes a client’s website, understands their brand and what they sell, and generates a list of audience segment categories most likely to perform. These segments surface with a “Recommended” tag. For a brand launching its first TV campaign with us, there’s no longer a blank page to stare at.
Figure: Example of the AI-powered segment recommendation feature built into our platform that returns relevant audience categories based on our client’s core brand attributes.
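At a high level, a feature like this has two halves: a prompt that asks a language model to map brand attributes to segment categories, and a validation step that keeps only suggestions that actually exist in the segment catalog. This is a hedged sketch of that shape, not our implementation; the prompt wording, catalog, and stubbed model reply are all assumptions for illustration.

```python
import json

def build_prompt(site_text):
    """Build a prompt asking the model for segment categories.
    The wording here is illustrative, not the production prompt."""
    return (
        "Given this brand website copy, return a JSON list of audience "
        "segment categories most likely to perform for a TV campaign:\n\n"
        + site_text
    )

def parse_recommendations(model_reply, catalog):
    """Validate the model's reply: keep only categories present in our
    segment catalog, each tagged 'Recommended'."""
    try:
        suggested = json.loads(model_reply)
    except json.JSONDecodeError:
        return []  # malformed reply: recommend nothing rather than guess
    return [(c, "Recommended") for c in suggested if c in catalog]

# Example with a stubbed model reply (no live API call):
catalog = {"Fitness", "Wellness", "Luxury Autos"}
stub_reply = '["Fitness", "Wellness", "Pet Owners"]'
recommendations = parse_recommendations(stub_reply, catalog)
```

The validation step matters: a model can hallucinate a plausible-sounding category, so anything outside the real catalog is silently dropped before it reaches the advertiser.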
The third is the S&P (Standards & Practices) Approvals Tool. Anyone who has worked in Media Ops knows the grind: a constant stream of network emails, each requiring someone to read, interpret, and translate the response into a standardized status. The team built a smarter way. Now, when a network reply comes in, our AI solution reads it, proposes a standardized approval status, and surfaces a short rationale in a clean review UI. Media Ops still makes the final call; the model recommends, never decides. AI handles the translation layer so the people who know this business best can focus on the judgment calls that actually require them.
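The "recommends, never decides" pattern can be made concrete with a small data model: the AI may only fill in a proposed status and rationale, while the final status lives in a separate field that only a human reviewer sets. The sketch below is a minimal illustration of that separation; the class, field names, and status strings are hypothetical, not our actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Status vocabulary is illustrative; real standardized statuses may differ.
APPROVED = "Approved"
REVISIONS = "Revisions Requested"

@dataclass
class SPReview:
    """One network reply under review. The model proposes; a human decides."""
    network_email: str
    proposed_status: Optional[str] = None
    rationale: str = ""
    final_status: Optional[str] = None  # written only by Media Ops

    def propose(self, status: str, rationale: str) -> None:
        # Model output: a suggested status plus a short rationale
        # surfaced in the review UI.
        self.proposed_status = status
        self.rationale = rationale

    def confirm(self, reviewer_status: str) -> None:
        # Media Ops makes the final call and may override the suggestion.
        self.final_status = reviewer_status
```

Because the two fields are distinct, an override leaves an audit trail: you can always see what the model suggested and what the human actually decided.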
The hackathon changed something beyond the features it produced. The shipped features were the visible output; the more durable win happened inside the team.
Every engineer left knowing how to build AI features, not from a tutorial but from shipping one. They wired agentic solutions to real data, debugged prompts under pressure, judged outputs against business needs, and learned which tools held up. Fluency comes from working on a real problem with a deadline. The team isn't waiting to feel "ready" anymore; having shipped against a real use case, they can take on the next AI problem with confidence. That fluency is accelerating a shift in how our engineering team works: when the problem is clearly defined, AI can increasingly handle the build, and the leverage shifts to clarity, judgment, and speed.
I think about it the way I think about calculators. Mathematicians didn’t stop doing math when calculators arrived; they started doing better math. That’s where we’re headed, and the hackathon was our proof of concept.
We’re planning a second hackathon later this year. Same bar, same model: real problems, real shipping, and more engineers leveling up by doing.

I lead engineering at Tatari with a focus on AI-driven solutions that make a real difference for customers. Off the clock, life revolves around family, travel, scuba diving, and snowboarding.