I'm Going to Experiment with AI-Powered Market Research (And I'll Show You What Happens)
Can AI truly map out your ideal customer profile and deconstruct your competitors' funnels? I’m putting Claude Code to the test in a "zero-hype" experiment to see if the results are actually actionable or just expensive noise.
Hey, it's G.
I came across this interesting approach to market research using Claude Code, and honestly? I'm curious enough to try it myself.
Full disclosure: I have no idea if this will work. But that's kind of the point. I want to test it, document what happens, and share the results with you - the good, the bad, and the "what was I thinking" moments.
The Experiment: Finding Marketing Insights with Claude Code
The basic idea: Use Claude Code to analyze customer data, understand your ideal customer profile (ICP), research competitors, and build better marketing funnels.
Why I'm trying this: Right now, I'm working on positioning for AI For Pinoys and a few side projects. Instead of guessing what resonates, I want to see if AI can help me understand my audience better and find angles I'm missing.
The Process I'll Be Testing
Here's the step-by-step I'll be following:
Step 0: Start with a lead/customer list
- Export my existing community data (anonymized, of course)
- Get an enrichment API to fill in details (company info, roles, etc.)
- Feed this to Claude Code
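To make Step 0 concrete, here's a minimal sketch of what the data prep could look like. Everything here is an assumption: the `enrich` function is a stand-in for whatever enrichment provider I end up choosing, and the column names (`email`, `segment`) are just a guess at what a community export might contain.

```python
import csv
import io
import json

def enrich(email: str) -> dict:
    """Placeholder for a real enrichment API call (company lookup,
    role, size). Swap in your provider's client here."""
    domain = email.split("@")[-1]
    return {"company_domain": domain, "role": None, "company_size": None}

def build_dataset(csv_text: str) -> list[dict]:
    """Read an exported lead list, merge in enrichment fields, and
    keep only non-identifying columns for the analysis step."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = {"segment": row.get("segment", "unknown")}
        record.update(enrich(row["email"]))
        record.pop("email", None)  # make sure raw emails never leak through
        rows.append(record)
    return rows

# Tiny fake export to show the shape of the output
leads = "email,segment\nana@acme.ph,founder\nben@beta.io,marketer\n"
dataset = build_dataset(leads)
print(json.dumps(dataset, indent=2))
```

The point of the anonymize-then-enrich order is that Claude Code only ever sees aggregate-friendly fields, never the raw contact list.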
Step 1: Understand my ICPs
Prompt Claude to grab:
- Company descriptions
- Job roles
- Industries
- Company sizes
- Any other relevant data
Goal: Paint a clear picture of who's actually in my community
Step 2: Create a detailed ICP report
Ask Claude to analyze:
- Voice of customer (how they talk about their problems)
- Pain points (what they're struggling with)
- Potential angles (what messaging might resonate)
- Company categories and sizes
Goal: Move beyond demographics to actual insights
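One way I might set up the Step 2 prompt: pre-aggregate the enriched fields in Python and hand Claude the counts instead of raw rows, so the prompt stays small and no individual record sneaks in. This is a sketch under my own assumptions; the field names and prompt wording are just a starting point, not the final version.

```python
from collections import Counter

def icp_summary(records: list[dict]) -> dict:
    """Tally the enriched fields so the prompt to Claude Code can
    start from counts rather than raw rows."""
    roles = Counter(r.get("role") or "unknown" for r in records)
    sizes = Counter(r.get("company_size") or "unknown" for r in records)
    return {"roles": dict(roles), "company_sizes": dict(sizes)}

def build_prompt(summary: dict) -> str:
    """Turn the summary into a focused question, not a vague 'analyze
    my customers'."""
    return (
        "Here is a summary of my community's enriched lead data:\n"
        f"{summary}\n"
        "Identify the 2-3 strongest ICP clusters, their likely pain "
        "points, and messaging angles. Cite which counts support each."
    )

# Example with made-up records
sample = [{"role": "founder", "company_size": "1-10"}, {"role": "founder"}]
summary = icp_summary(sample)
prompt = build_prompt(summary)
print(prompt)
```

The prompt could then go straight into a Claude Code session, or be saved to a file the session reads.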
Step 3: Competitive analysis
Have Claude:
- Find competitors serving the same audience
- Analyze their positioning and messaging
- Identify gaps in their approach
- Suggest unique angles based on my actual offering
Goal: Figure out how to stand out, not just sound different
Step 4: Deep dive into competitor funnels
Use Claude Code + Claude for Chrome to:
- Scrape competitor landing pages
- Analyze their funnel structure
- Study their copy and positioning
- Document what's working (and what's not)
Goal: Learn from what's already working in the market
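For Step 4, Claude for Chrome would handle the actual browsing, but a quick parser shows the kind of structure I'd want pulled out of each landing page: the hero headline and the CTA text. This is a rough sketch using Python's built-in `html.parser`, fed a made-up page; a real competitor page would need a fetch step and much more robust parsing.

```python
from html.parser import HTMLParser

class FunnelPageParser(HTMLParser):
    """Pull the hero headline (first <h1>) and button/link CTA text
    out of a landing page so positioning can be compared across
    competitors."""

    def __init__(self):
        super().__init__()
        self._stack = []       # open tags, so we know where text lives
        self.headline = None
        self.ctas = []

    def handle_starttag(self, tag, attrs):
        self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if not text or not self._stack:
            return
        if self._stack[-1] == "h1" and self.headline is None:
            self.headline = text
        elif self._stack[-1] in ("button", "a"):
            self.ctas.append(text)

# Made-up landing page standing in for a scraped competitor page
page = ("<html><body><h1>Grow faster with AI</h1>"
        "<a href='/signup'>Start free trial</a></body></html>")
parser = FunnelPageParser()
parser.feed(page)
print(parser.headline, parser.ctas)
```

Structured extracts like this are easier to feed back into Claude for the "what's working and what's not" comparison than raw HTML dumps.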
Step 5: Build recommendations
Based on everything learned:
- Recommend funnel structure for my specific goal
- Use subagents to review and refine the approach
- Include reasoning and references for each recommendation
Goal: Create an actual plan, not just interesting observations
Why I Think This Might Work (Or Fail Spectacularly)
What could go right:
- Faster than manual research
- Uncovers patterns I'd miss looking at data manually
- Gives me a structured process instead of random guessing
- Creates documentation I can reference and improve over time
What could go wrong:
- AI might miss nuance that only humans catch
- Data quality issues (garbage in, garbage out)
- Enrichment APIs might be expensive or inaccurate
- The insights might be too generic to be useful
- I might not have enough data to make this worthwhile
Honestly? I'm expecting it to be somewhere in the middle. Some useful insights, some nonsense, and a lot of learning along the way.
The Real Point: Process Over Prompts
Here's what I'm actually testing: Does having a structured process produce better results than just throwing prompts at AI and hoping something sticks?
Because that's the real difference, right?
Anyone can ask Claude "analyze my customers." But:
- What data are you giving it?
- What specific questions are you asking?
- How are you validating the output?
- What are you doing with the insights?
The process matters more than the prompts.
That's what separates "that looks like AI built it" from "this actually works."
What I'll Document
If you're interested in following along, here's what I'll share:
The Setup:
- What data I'm starting with
- Which enrichment API I choose (and why)
- How I prepare the data for Claude Code
- What the initial prompts look like
The Process:
- Screenshots of the actual Claude Code sessions
- What insights come back (with commentary on what's useful vs. garbage)
- Where I get stuck or confused
- What I have to redo or refine
The Results:
- The final ICP report
- Competitive analysis findings
- Funnel recommendations
- Whether any of this was actually worth the time
The Honest Takeaways:
- What worked better than expected
- What was a complete waste of time
- What I'd do differently next time
- Whether I'd recommend this approach to others
Want to Try This Yourself?
I'm not saying "do this." I'm saying "I'm doing this, and you can watch what happens."
If it works, you'll see the exact process. If it fails, you'll learn from my mistakes without wasting your own time.
Either way, we all learn something.
Following Along
I'll document this experiment across:
- Newsletter updates (probably 2-3 emails as I go through this)
- YouTube videos (screen recordings of the actual process)
- Dashboard (final reports and templates if this actually works)
First update coming in about a week once I have the data ready and start the analysis.
No hype. No promises. Just an honest experiment with whatever results we get.
Let's see what happens.
G
P.S. - If you've tried something similar, reply and let me know. I'm going into this semi-blind and would love to hear what worked (or didn't) for you.
P.P.S. - If this ends up being a disaster, at least we'll have a good story about what NOT to do. That's valuable too, right?
Questions I'm Asking Myself (and You Can Too):
- Is manual market research actually better than AI-assisted research?
- Can AI find patterns in customer data that humans miss?
- How much time does this actually save vs. doing it manually?
- Are the insights actionable or just interesting observations?
- Does this work for small communities or only at scale?
I genuinely don't know the answers yet. Let's find out together.