Product strategist working on live mobile games with millions of daily users. I build the systems that turn experiments into insights, and insights into decisions that stick.
Anyone can ship a feature. I build systems—experimentation frameworks, decision playbooks, guardrails—so good decisions become the default, not the exception.
Every choice opens doors and closes others. I ask: "If this works, what does it enable? If it fails, what does it break?" before committing.
Vague problems get vague solutions. I frame every problem with explicit constraints—budget, timeline, risk tolerance—so the right path becomes obvious.
Real problems I've solved—focused on how I framed them, not just what I shipped.
How I improved ad yield without adding more ads—by targeting the right moments, not the most moments.
Ad revenue had plateaued. The obvious move—show more ads—would hurt the experience. We needed more yield without more volume.
The Constraint: Any solution must be retention-neutral. Short-term revenue that hurts long-term trust was off the table.
I ran a cohort-level CPM variance analysis using BigQuery to identify which users were seeing underpriced ads. Then I built session-depth behavioral models to find "safe windows"—moments where users could tolerate longer, higher-paying ad formats without churning.
Key Techniques: User-level CPM modeling · Behavioral segmentation · Session-depth analysis · Retention guardrail monitoring
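To make the cohort analysis concrete, here is a minimal sketch of the underpriced-cohort pass in pandas. The schema, column names, and threshold are assumptions for illustration; the production analysis ran as BigQuery SQL over impression logs.

```python
# Minimal sketch of the cohort-level CPM variance pass (assumed schema;
# the production version ran as BigQuery SQL over raw impression logs).
import pandas as pd

def find_underpriced_cohorts(impressions: pd.DataFrame,
                             gap_threshold: float = 0.8) -> pd.DataFrame:
    """Flag cohorts whose realized CPM sits well below the portfolio median.

    Expects one row per ad impression with hypothetical columns:
    'cohort' (e.g. geo x tenure bucket) and 'revenue' (USD per impression).
    """
    by_cohort = (
        impressions.groupby("cohort", as_index=False)
        .agg(revenue=("revenue", "sum"), n_impressions=("revenue", "size"))
    )
    by_cohort["cpm"] = 1000 * by_cohort["revenue"] / by_cohort["n_impressions"]
    median_cpm = by_cohort["cpm"].median()
    # A cohort is "underpriced" when its CPM trails the portfolio median badly.
    by_cohort["underpriced"] = by_cohort["cpm"] < gap_threshold * median_cpm
    return by_cohort.sort_values("cpm")
```

Cohorts flagged this way became the candidates for higher-paying formats, gated by the retention guardrails.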
The Path Chosen: Serve higher-value ads to high-value users during low-risk moments. More complex to build, but sustainable.
The Path Rejected: Simply increasing ad volume. Easier to implement, but it would compound UX debt and hurt retention over time.
The Result: Yield improved meaningfully with no measurable retention impact. More importantly, we created a repeatable playbook for "sustainable monetization"—improvements that don't borrow against future engagement.
How I used adaptive challenge to create natural monetization moments that don't feel manipulative.
Puzzle games were too easy. Players never needed hints, so rewarded video engagement was low. But making games harder across the board would frustrate casual users.
The Constraint: Difficulty must feel fair. Players should want help, not feel forced into it.
I analyzed completion times, error rates, and retry patterns across hundreds of thousands of users to build a difficulty scoring model. The insight: engaged players can handle more challenge mid-session. By tuning difficulty dynamically based on engagement depth, we created natural friction—not artificial pain points.
Key Techniques: Behavioral funnel analysis · Difficulty curve optimization · A/B testing with retention guardrails · Player psychology mapping
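A minimal sketch of the dynamic-difficulty idea described above. The weights, feature names, and clamps here are hypothetical, standing in for the model fit on completion times, error rates, and retry patterns.

```python
# Minimal sketch of engagement-aware difficulty tuning (hypothetical
# weights and features; the real model was fit on completion times,
# error rates, and retry patterns across large player cohorts).
from dataclasses import dataclass

@dataclass
class SessionState:
    puzzles_completed: int      # engagement depth within this session
    recent_error_rate: float    # 0.0-1.0 over the last few puzzles
    recent_retry_rate: float    # retries per puzzle, recent window

def target_difficulty(state: SessionState, base: float = 0.5) -> float:
    """Return a 0-1 difficulty target: ramp up with engagement depth,
    back off when the player is visibly struggling."""
    # Engaged players can absorb more challenge mid-session (the key insight).
    depth_bonus = min(state.puzzles_completed * 0.03, 0.25)
    # Struggle signals pull difficulty back toward "fair".
    struggle_penalty = (0.4 * state.recent_error_rate
                        + 0.2 * min(state.recent_retry_rate, 1.0))
    return max(0.1, min(0.9, base + depth_bonus - struggle_penalty))
```

The clamps are the guardrail in miniature: difficulty never spikes past what a player's own recent behavior says they can handle.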
The Path Chosen: Personalize challenge based on engagement depth. Harder to tune, but feels organic to players.
The Path Rejected: Raising difficulty across the board. It would improve short-term revenue but destroy trust and long-term retention.
The Result: Hint usage increased significantly with no drop in retention. Key learning: monetization works best when it emerges from genuine gameplay design, not overlay mechanics.
How I evaluated ad networks on what matters—retention impact—not just what they promised.
As we expanded into new markets (EU, LATAM, APAC), existing ad partners underperformed. The easy fix—add more networks—would fragment our stack and add latency.
The Constraint: Evaluate partners on holistic impact (retention + fill + yield), not just the headline CPMs they pitch.
I built a multi-factor partner scoring framework that weighted retention impact equally with yield metrics. Then I ran structured negotiations—treating each partnership as a strategic commitment, not a plug-and-play integration. I was the single point of contact across legal, finance, ad ops, and external partners.
Key Techniques: Partner scorecard development · Geo-specific demand analysis · Contract negotiation · Cross-functional stakeholder alignment
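The scorecard logic reduces to a weighted sum. Below is a minimal sketch with illustrative weights and metric names (the real framework's inputs were more detailed). The point it demonstrates: when retention impact carries the same weight as yield, a partner pitching a high headline CPM can still lose to a balanced regional specialist.

```python
# Minimal sketch of the multi-factor partner scorecard. Weights and
# metric names are illustrative; the design constraint is that retention
# impact carries the same weight as realized yield.
from typing import Dict

WEIGHTS: Dict[str, float] = {
    "retention_impact": 0.35,  # e.g. normalized D7 retention delta vs. control
    "effective_cpm":    0.35,  # realized yield, not the pitched headline CPM
    "fill_rate":        0.20,
    "latency":          0.10,  # pass in as 1 - normalized latency (higher = better)
}

def score_partner(metrics: Dict[str, float]) -> float:
    """Score a partner from metrics normalized to 0-1, higher is better."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

# A flashy high-CPM pitch with weak retention loses to a balanced specialist.
flashy = score_partner({"retention_impact": 0.2, "effective_cpm": 0.9,
                        "fill_rate": 0.8, "latency": 0.6})
balanced = score_partner({"retention_impact": 0.8, "effective_cpm": 0.6,
                          "fill_rate": 0.7, "latency": 0.8})
assert balanced > flashy
```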
The Path Chosen: Focus on regional specialists with strong demand. Slower expansion, but a cleaner stack and better performance.
The Path Rejected: Adding more networks wholesale. It would increase fill, but also latency, complexity, and maintenance overhead.
The Result: Reduced the monetization gap in new markets while keeping the stack simple. Established a repeatable evaluation framework for future expansion decisions.
Structured proposals for products I use obsessively—written like PM interviews, not feature requests.
A ranked progression mode for puzzle training
Puzzles feel like grinding. Elo changes are abstract. There's no sense of climbing toward something meaningful.
Strong engagement, weak monetization. No natural entry points for optional spend beyond premium gating.
Elo was designed for competitive skill measurement, not engagement. It's reactive—it adjusts after each result, not in anticipation of one (see the update rule below). There's no progression identity, no milestones, no "one more game" hook.
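For reference, the textbook Elo update makes "reactive" concrete: the rating only moves after a result arrives, and nothing in the formula gives the player a forward-looking goal. This is the standard formula, not necessarily the product's exact puzzle-rating implementation.

```python
# Textbook Elo update: purely backward-looking. K and the 400-point scale
# are the conventional values, not any specific product's tuning.
def elo_update(rating: float, opponent: float, score: float,
               k: float = 20.0) -> float:
    """score is 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
    expected = 1.0 / (1.0 + 10.0 ** ((opponent - rating) / 400.0))
    return rating + k * (score - expected)
```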
Puzzle Climb — A tiered ladder layered on existing puzzles.
Capture and share your best moments
Epic shots vanish instantly. No way to relive a hole-in-one or share a physics-defying bounce with friends.
No viral loop. Organic acquisition relies on word-of-mouth with no content to share.
Golf Battle treats each round as ephemeral. No persistent artifacts, no social currency generation. This limits both organic growth and emotional stickiness.
Perfect Shot Replay — Auto-capture and surface exceptional moments.
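A minimal sketch of what "auto-capture" could key on, using hypothetical telemetry fields; a production detector would read Golf Battle's actual physics events.

```python
# Minimal sketch of flagging replay-worthy shots (hypothetical signals;
# a real detector would consume the game's physics telemetry).
from dataclasses import dataclass

@dataclass
class Shot:
    holed: bool
    strokes_on_hole: int
    bounces: int          # wall/obstacle bounces before settling
    distance: float       # normalized 0-1, where 1 = full course length

def is_replay_worthy(shot: Shot) -> bool:
    """Flag shots worth saving as a shareable replay clip."""
    hole_in_one = shot.holed and shot.strokes_on_hole == 1
    trick_shot = shot.holed and shot.bounces >= 3   # physics-defying bounce
    long_bomb = shot.holed and shot.distance > 0.8
    return hole_in_one or trick_shot or long_bomb
```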
Help players help themselves
Loss streaks trigger tilt. No in-app friction to pause and recalibrate. Players churn after preventable spirals.
Preventable churn erodes LTV. Responsible gaming tools exist but feel punitive, not helpful.
Responsible gaming tools are compliance-oriented—hard limits, self-exclusion. There's no "soft middle" that helps users make better decisions while still playing.
Bankroll Coach — Real-time session awareness with supportive nudges.
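A minimal sketch of the "soft middle": a session monitor that offers a supportive nudge after a loss streak instead of imposing a hard limit. The threshold, fields, and message copy are hypothetical.

```python
# Minimal sketch of a loss-streak nudge: supportive friction, not a hard
# limit. Threshold and copy are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SessionMonitor:
    streak_threshold: int = 4          # consecutive losses before nudging
    results: List[bool] = field(default_factory=list)  # True = won the round

    def record(self, won: bool) -> Optional[str]:
        """Record a result; return nudge copy if a loss streak is detected."""
        self.results.append(won)
        streak = 0
        for won_round in reversed(self.results):
            if won_round:
                break
            streak += 1
        if streak >= self.streak_threshold:
            self.results.clear()  # reset so we don't re-nudge every round
            return ("Rough stretch. That's variance, not skill. "
                    "Want a short breather or a lower-stakes table?")
        return None
```

The design choice: the nudge interrupts the spiral at the moment of tilt, while leaving the decision with the player.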
Ad Monetization team · Portfolio serving ~5 million daily active users, primarily in the US and other Tier-1 markets
I'm looking for roles where I can shape how products grow—not just optimize what already exists.