Context
This is a project for a multi-billion-dollar, industry-leading AI martech company. Marketers want to target the right audience based on nuanced behaviors and preferences. However, analyzing individual user attributes across millions of site visitors is nearly impossible. While existing AI-powered tools attempt to solve this, they often fall short:

Opaque: AI outputs are hard to explain
Unverifiable: no way to test or override
Risky: costly to apply at scale without certainty in accuracy

My Role:
Product Strategist & Design Lead, working alongside a team of Junior UX Designers, UI Designers, a PM, an AI Researcher, and Developers
My Impact:
• Enabled marketers to segment audiences 5x faster compared to manual workflows
• Increased adoption of AI-assisted workflows by 40% among pilot marketing teams
• Reduced reliance on data science teams, freeing technical resources for strategic initiatives
• Enhanced marketing campaign precision, boosting targeting outcomes by 75%
Goals
Create an intuitive, AI-powered experience that enables marketers to understand how predictions are made, validate and annotate LLM-generated user attributes, and confidently apply insights across massive audiences, saving time, improving segmentation accuracy, and building long-term trust in AI predictions.
Problem
A feature-first project. Before I joined, the team had built a simulation tool that generated real-time attribute predictions (demographics, hobbies, brand preferences, etc.) based on the testers’ (marketers’) own browsing behaviors.
However, it lacked real-world utility:
❌ Insights were based on biased simulations, not real user behavior
❌ There was no practical use case beyond showcasing technical capabilities. It was a "look what we can do!" prototype, not a product solving real marketer needs
Approach
Redefining the Problem Space
Challenged initial assumptions about the need for “simulation”, driving a team-wide mindset shift
Led workshops to uncover practical, profitable applications for the underlying AI technology
Redirected the project to focus on generating user attributes derived from real behavioral data
Creating Strategic Value
Transformed a stagnant, sandbox feature into a scalable, high-value marketing intelligence tool
Mapped clear workflows from data definition through prediction to audience segmentation
Building Cross-Functional Alignment & Mentorship
Bridged gaps between PM, Engineering, and UX to align on a shared vision centered on real user needs
Advocated for human-centered design during technical discussions, ensuring UX had a seat at the decision table
Mentored junior designers by guiding them through information architecture decisions, interaction design patterns, and stakeholder communication
— How it Works —
Define Attribute
Marketers start by defining the insight they want to learn about their customers, such as a user’s preferred TV display technology.
Review Sample Predictions
They’re shown a curated set of real user profiles with predicted values, confidence scores, and plain-language explanations. This allows them to test accuracy, provide feedback, and refine the prediction algorithm in a human-in-the-loop manner.
View Value Distribution & Apply
After refining the prediction logic, the system projects how the attribute will be distributed across the entire user base. This gives marketers a clear, scalable view of segment potential before deploying it. Once confident in the accuracy and reach, they can apply the attribute to launch targeted campaigns (e.g., promoting OLED TVs to users most likely to prefer them).
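To ground the three steps above, here is a minimal illustrative sketch in Python. All names and shapes are hypothetical, not the product’s actual schema: an attribute definition, a sample prediction carrying a confidence score and a plain-language explanation, the human-in-the-loop feedback on that sample, and a simple projection of how values distribute across a user base.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Illustrative, hypothetical shapes only -- not the actual product schema.

@dataclass
class AttributeDefinition:
    """Step 1: the insight a marketer wants to learn about users."""
    name: str                   # e.g. "preferred_tv_display_technology"
    possible_values: List[str]  # e.g. ["OLED", "QLED", "LCD"]

@dataclass
class SamplePrediction:
    """Step 2: one real user profile shown for review."""
    user_id: str
    predicted_value: str
    confidence: float           # 0.0-1.0, surfaced as a confidence score
    explanation: str            # plain-language reasoning shown in the preview

@dataclass
class MarketerFeedback:
    """Human-in-the-loop: accept or correct a sample to refine prediction logic."""
    user_id: str
    accepted: bool
    corrected_value: Optional[str] = None

def project_distribution(predictions: List[SamplePrediction]) -> Dict[str, float]:
    """Step 3 (sketch): estimate how the attribute spreads across the user base."""
    counts: Dict[str, int] = {}
    for p in predictions:
        counts[p.predicted_value] = counts.get(p.predicted_value, 0) + 1
    total = len(predictions) or 1
    return {value: count / total for value, count in counts.items()}

if __name__ == "__main__":
    attribute = AttributeDefinition(
        name="preferred_tv_display_technology",
        possible_values=["OLED", "QLED", "LCD"],
    )
    samples = [
        SamplePrediction("u1", "OLED", 0.87, "Frequently views OLED product pages"),
        SamplePrediction("u2", "QLED", 0.62, "Compared QLED models twice last week"),
        SamplePrediction("u3", "OLED", 0.91, "Added an OLED TV to the cart"),
    ]
    feedback = MarketerFeedback(user_id="u2", accepted=False, corrected_value="OLED")
    print(attribute.name, project_distribution(samples), feedback)
```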
Results
Established trust in AI tagging workflows
Reduced time-to-segmentation from days to minutes
Introduced a flexible system that’s scalable and auditable
UX Challenges
1. How do we simplify the AI workflow and make it intuitive for users?
Streamlined the attribute creation process, automated value generation, added CSV upload support, and combined the steps with real-time prediction examples
2. How do we instill trust in AI prediction?
Offered a "sandbox" live preview, allowing marketers to understand how predictions are generated in real time. Highlighted model prediction confidence scores and explanations in the preview sample stage
3. How do we encourage users to provide feedback on the samples to improve prediction accuracy?
Reduced information overload by showing only one user card at a time, and prioritized clear CTAs for feedback, making it feel “expected” but not demanding
Assumptions Tested (with Clients)
• What helps marketers trust AI predictions at first glance?
• Which data points are most useful for verifying accuracy?
• Are random or filtered demographic previews more insightful for marketers?
Takeaway
Building trust in AI requires more than “good” predictions; it’s about transparency and control. Keeping humans in the loop in an automated AI workflow not only ensures accuracy, but also increases willingness to adopt it at scale.