How to actually know your competitors, not just stalk their website
The 13-dimension framework that turns competitive research from guesswork into a strategic weapon
Most competitive research ends at the homepage. You skim the features page, check if they have a pricing page (they don’t), maybe Google their name and skim a G2 review or two. Then you write a positioning doc based on vibes.
The problem isn’t effort — it’s method. Without a systematic framework, you miss the signals that actually matter: how they’re acquiring customers, what their reviews reveal about product gaps, whether their job postings signal a pivot, what ads they’re running at the bottom of the funnel.
Structured competitive intelligence changes what you can build on top of it. When positioning, messaging, battlecards, and sales decks are grounded in verified competitor data — not assumptions — every downstream deliverable gets sharper.
This is the skill I run before any positioning engagement. And now it’s systematized in Claude Code.
That’s what I’ll be sharing here: one Claude skill every week, for free, to help you 3X your output and velocity as a GTM operator and get one step closer to owning your outcomes.
How it works — step by step
The skill runs in four phases. The core is Phase 2, where you research 13 dimensions systematically. Each dimension has a prescribed set of sources, a fallback chain if primary sources fail, and explicit confidence levels on every finding.
Phase 1: Confirm before you research
Before anything, lock down the competitor URL. This sounds obvious until you realize “Bolt” could mean ride-sharing, fintech, or Bolt.new (StackBlitz’s AI coding product). The skill prompts you to verify the exact company, confirm the website URL, and choose your research mode — single deep dive (all 13 dimensions) or comparison matrix (3-6 competitors, core dimensions only, tabular output).
Phase 2: Research all 13 dimensions
This is where the real work happens. Each dimension has a primary source chain and a fallback sequence:
Dimension 1 — Company: Crunchbase and LinkedIn for funding, headcount, HQ. You’re looking for signals like runway, growth rate, burn posture.
Dimensions 2-3 — Product + ICP: Homepage, features page, docs, and case studies. The ICP work goes one layer deeper — searching Reddit for who actually uses the product surfaces segments the company doesn’t publicly claim.
Dimension 4 — Pricing: The skill has a five-step fallback chain. Start with their `/pricing` page. If it’s hidden or “contact us”, check G2 snippets, then Vendr (which publishes negotiated contract data), then Reddit threads for user-reported pricing. Only after exhausting all four sources do you mark pricing as unavailable.
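In pseudocode terms, the chain is just ordered sources with a stop-on-first-hit rule. A minimal Python sketch (the source names mirror the chain above; the `resolve_pricing` helper and its inputs are illustrative, not the skill’s actual code):

```python
# Ordered pricing sources, most authoritative first, mirroring the chain above:
# /pricing page -> G2 snippets -> Vendr -> Reddit threads.
PRICING_SOURCES = ["/pricing page", "G2 snippets", "Vendr", "Reddit threads"]

def resolve_pricing(findings: dict) -> tuple:
    """Walk the chain in order; return the first source that yielded data.

    `findings` maps source name -> extracted pricing text (or None if the
    source came up empty).
    """
    for source in PRICING_SOURCES:
        value = findings.get(source)
        if value:
            return source, value
    # Only after exhausting every source do we mark pricing as unavailable.
    return None, "not available"

# Example: /pricing is "contact us" (no data), but a G2 snippet has a range.
source, price = resolve_pricing({
    "/pricing page": None,
    "G2 snippets": "$15-25/user/month (mid-market)",
})
```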
Dimension 5 — Reviews: G2, Capterra, and Reddit for unfiltered sentiment. The reviews dimension surfaces the product gaps and support frustrations that competitors won’t mention in their own marketing — and that become your positioning ammunition.
Dimensions 6-7 — Content + Launches: Blog, resource library, event pages, and Product Hunt. You’re extracting topic themes, format mix, cadence, and any recent announcements in the past 90 days.
Dimension 8 — SEO/AEO: If Ahrefs MCP is available, pull domain rating, top organic keywords, and referring domains directly. If not, fall back to Serper.dev for SERP position checks on 5-10 category keywords. You want to know where they rank, what they rank for, and whether they’re showing up in AI-generated answers.
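The Serper.dev fallback is a single POST per keyword. A hedged sketch: the endpoint and `X-API-KEY` header follow Serper’s public API, but treat the response field names (`organic`, `position`, `link`) as assumptions to verify against their docs:

```python
import json
import urllib.request

def serp_rank(api_key: str, keyword: str, domain: str):
    """Query Serper.dev for one keyword and return the competitor's organic position."""
    req = urllib.request.Request(
        "https://google.serper.dev/search",
        data=json.dumps({"q": keyword}).encode(),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)
    return rank_in_results(results, domain)

def rank_in_results(results: dict, domain: str):
    """Pure helper: first organic result whose link contains the competitor domain."""
    for hit in results.get("organic", []):
        if domain in hit.get("link", ""):
            return hit.get("position")
    return None  # not ranking in the results returned
```

Loop `serp_rank` over your 5-10 category keywords and you have a position table without needing Ahrefs.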
Dimensions 9-10 — Technographics + Openings: Integrations page for tech stack signals; careers page and LinkedIn Jobs for hiring patterns. A sudden cluster of SDR/BDR postings is a GTM signal. A slate of enterprise CS roles signals a segment shift.
Dimension 11 — GTM: This one requires reading between the lines. Analyze CTAs, check job descriptions for mentions of outbound sequencing tools (Outreach, Salesloft, Apollo), and search Reddit for buyer-reported experiences of their sales process. PLG or sales-led? Inbound-heavy or outbound-aggressive? This shapes how you position against them in deals.
Dimension 12 — LinkedIn/Social: Company page follower count and growth, post frequency and content types, founder activity. The question: are they building an organic content moat or is their brand presence thin?
Dimension 13 — Paid advertising: All three ad libraries (Google Ads Transparency Center, LinkedIn Ads Library, Meta Ads Library) are JavaScript-rendered — direct HTML fetches return nothing. The skill uses two APIs running in the background to get around this:
Apify is the primary method. It’s a marketplace of cloud-hosted scraping “Actors” — serverless apps that someone else has already built and maintained for scraping specific sites. The skill auto-discovers the right Actor for each ad library, checks its input schema, then calls it with the competitor’s domain. Three tool calls in sequence:
`search-actors` (find the right scraper) → `fetch-actor-details` (check what inputs it needs) → `call-actor` (run it). You get structured data back: active campaign count, ad copy themes, CTA patterns, geographic targeting.

Firecrawl is the fallback. When no suitable Apify Actor exists for a particular ad library, Firecrawl spins up a headless browser session. It navigates to the ad library URL, renders the JavaScript, and extracts what’s on the page. Two tool calls:

`browser_create` (launch the session) → `browser_execute` (interact with the page and extract data). If both fail — which is rare — the skill logs it as a data gap with a manual check URL. But here’s the thing: no active ads found is itself a high-confidence data point. It tells you the competitor is sales-led or organic-first, not that the research failed.
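The try-primary-then-fallback logic reduces to a short chain. A sketch in Python, where `apify_scrape` and `firecrawl_scrape` are hypothetical callables standing in for the MCP tool sequences above:

```python
def check_ad_library(url: str, apify_scrape, firecrawl_scrape) -> dict:
    """Try the primary scraper, then the fallback; record a gap if both fail.

    `apify_scrape` / `firecrawl_scrape` are hypothetical stand-ins for the
    skill's MCP tool chains; each takes the ad library URL and returns a
    list of ads, or raises if the scrape fails.
    """
    for method, scrape in (("apify", apify_scrape), ("firecrawl", firecrawl_scrape)):
        try:
            ads = scrape(url)
        except Exception:
            continue  # fall through to the next method
        # An empty result is still a finding: no active ads is a real signal.
        return {
            "method": method,
            "ads": ads,
            "signal": "active paid program" if ads
                      else "no active ads (sales-led or organic-first?)",
        }
    # Both failed: log a data gap with a manual-check URL, never invented data.
    return {"method": None, "ads": None, "manual_check": url}
```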
The APIs running in the background — and what they cost
Dimension 13 is the one that trips people up when they try to replicate this manually. The ad libraries are JavaScript walls — you can’t just fetch the HTML. But with two MCP servers connected to Claude Code, the skill handles it automatically. Here’s how to set them up.
Apify — the primary scraping engine
Apify is a marketplace of 3,000+ pre-built web scrapers called “Actors.” You don’t write scraping code — you find an Actor someone already built (e.g., “Google Ads Transparency Scraper”), pass it the competitor domain, and get structured JSON back.
For the competitor research skill, Apify handles:
→ Google Ads Transparency Center scraping (highest priority for B2B SaaS)
→ LinkedIn Ads Library scraping
→ Meta Ads Library scraping
→ G2/Capterra review scraping (Dimension 5)
→ Deep site crawling via the RAG Web Browser Actor
Setup (5 minutes):
1. Create an account at apify.com
2. Go to Settings → Integrations → API Token → copy it
3. In Claude Code, ask it to add Apify as a remote MCP server
That’s it — the skill auto-discovers the right Actors at runtime via `search-actors`.
Cost: Free tier gives $5/month in platform credits. Lightweight scraping (like ad library checks) costs fractions of a cent per run. A full competitor research pass — 3-4 competitors, 3 ad libraries each — runs about 12 Actor calls. The free tier handles that comfortably.
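If you want to see what the skill is doing under the hood, an Actor run is one authenticated POST against Apify’s REST API. A sketch (the Actor name in the usage comment is hypothetical; verify the `run-sync-get-dataset-items` endpoint against Apify’s API docs):

```python
import json
import urllib.parse
import urllib.request

APIFY_BASE = "https://api.apify.com/v2"

def actor_endpoint(actor_id: str, token: str) -> str:
    """Build the synchronous-run endpoint; Apify uses `~` in place of `/` in Actor IDs."""
    return (f"{APIFY_BASE}/acts/{actor_id.replace('/', '~')}"
            f"/run-sync-get-dataset-items?token={urllib.parse.quote(token)}")

def run_actor(actor_id: str, token: str, run_input: dict):
    """Run an Actor synchronously and return its dataset items as parsed JSON."""
    req = urllib.request.Request(
        actor_endpoint(actor_id, token),
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage: the Actor name and its input schema vary per scraper.
# ads = run_actor("some-user/google-ads-transparency-scraper", "YOUR_APIFY_TOKEN",
#                 {"domains": ["brex.com"]})
```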
Firecrawl — browser rendering fallback + site crawling
Firecrawl is an API for crawling and scraping websites with full browser rendering. Where Apify uses pre-built Actors, Firecrawl gives you a raw headless browser that can navigate and interact with any page — including JavaScript-rendered ones that return empty HTML on direct fetch.
For the competitor research skill, Firecrawl handles:
→ Fallback for Dimension 13 when no Apify Actor exists for a specific ad library
→ Full competitor website crawling for deeper product and content analysis
→ Any JS-heavy page that needs browser rendering
Setup (5 minutes):
1. Create an account at firecrawl.dev
2. Copy your API key from the dashboard
3. In Claude Code, ask it to add Firecrawl as a remote MCP server
Cost: Free tier gives 500 credits. A single page scrape costs 1 credit. A full competitor site crawl uses 50-200 credits depending on site size. For a typical 3-competitor research run, budget 200-600 credits. The Hobby plan ($16/month) gives 3,000 credits if you need more.
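For reference, a single Firecrawl page scrape is also just one POST. A sketch assuming Firecrawl’s v1 scrape endpoint and a `data.markdown` response field; check both against their current docs:

```python
import json
import urllib.request

def build_payload(url: str) -> dict:
    """Pure helper: one page, markdown output (1 credit per scrape)."""
    return {"url": url, "formats": ["markdown"]}

def firecrawl_scrape(api_key: str, url: str) -> str:
    """Scrape one JS-rendered page via Firecrawl and return it as markdown.

    Endpoint and payload follow Firecrawl's v1 scrape API; verify the
    response shape ("data" -> "markdown") against their documentation.
    """
    req = urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape",
        data=json.dumps(build_payload(url)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body.get("data", {}).get("markdown", "")
```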
I’ve been running this skill across 6 active client engagements per month. I’ve never needed to go beyond the free tiers.
Both services connect as remote MCP servers — Claude Code talks to them automatically when the skill needs them. You don’t manage API calls or write scraping code. The skill handles the tool chain, the fallback logic, and the error handling.
Phase 3: Synthesis
After all 13 dimensions, you assign confidence levels to every claim (High / Medium / Low), write a 2-3 paragraph executive summary, and document data gaps with explicit follow-up actions. The Iron Law here: no data point without a source.
“Widely known” doesn’t count. Estimates need their reasoning shown. “Not available” is always better than invented data.
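The Iron Law is easy to enforce mechanically if findings are stored as records that refuse to exist without a source. A sketch, not the skill’s actual schema:

```python
from dataclasses import dataclass

CONFIDENCE_LEVELS = ("High", "Medium", "Low")

@dataclass
class Finding:
    dimension: str   # e.g. "Pricing"
    claim: str       # the data point itself
    source: str      # URL or named source; "widely known" doesn't count
    confidence: str  # High / Medium / Low

    def __post_init__(self):
        # The Iron Law: every claim carries a real source and a confidence level.
        if not self.source or self.source.lower() == "widely known":
            raise ValueError("no data point without a source")
        if self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError(f"confidence must be one of {CONFIDENCE_LEVELS}")
```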
Phase 4: Aggregate analysis (multi-competitor)
Once you’ve run two or more deep dives for the same client, Phase 4 kicks in. You cross-reference the individual profiles and ask Claude Code to generate aggregate insights across all competitors. You get:
A threat matrix ranking competitors: PRIMARY / ENTERPRISE TIER / DIRECT ICP / STEALTH WATCH / LOW / DEFUNCT — each justified with evidence, not gut feel
A feature parity matrix classifying capabilities as Commoditized / Differentiator / Emerging
A credibility signal audit comparing funding, press, enterprise logos, and founder visibility
Strategic recommendations tied to exploitable vulnerabilities — each with rationale, evidence, and effort/impact scoring
The aggregate output is one core ingredient that feeds positioning. Individual deep dives give you facts. The aggregate gives you strategy.
See it in action
Company: Ramp (corporate spend management)
Scenario: Ramp’s PMM team needs competitive intelligence on Brex before refreshing their positioning.
Competitor inputs:
Competitor: Brex
Website: brex.com
Client context (optional): Ramp competes head-to-head in mid-market; Brex recently shifted upmarket to enterprise
Phase 2 excerpt — what the research surfaces:
Pricing (Dimension 4): Brex’s pricing page shows a free tier and an “Enterprise” tier at “Contact us.” G2 snippets reveal negotiated contracts typically $15-25/user/month for mid-market. Vendr data confirms 12-month contracts with 15-20% negotiation room. Confidence: Medium.
Reviews (Dimension 5): G2 shows 4.7/5 (1,200+ reviews). Reddit surfaces a recurring complaint: “Brex’s support response time degraded significantly after their pivot to enterprise.” This becomes a positioning wedge — Ramp’s mid-market responsiveness vs. Brex’s enterprise-first resource allocation.
Openings (Dimension 10): Brex LinkedIn Jobs shows 8 open Enterprise AE roles and 3 Enterprise CSM roles, zero SMB-focused positions. Confirms the upmarket pivot is structural, not messaging-level.
Paid (Dimension 13): Apify scrapes the Google Ads Transparency Center and returns 47 active search campaigns. Primary copy themes: “scale your business spend” and “enterprise finance platform.” No mid-market language in any creative — they’ve written that segment out of their messaging entirely.
Output: Executive summary with 22 verified data points, 9 inferred, 4 estimated. Data gaps flagged: Brex revenue estimate unavailable (Sacra and PitchBook checked, no public figure). Four follow-up actions suggested. Ready to feed directly into Ramp’s positioning refresh.
When to use it
Before a positioning sprint — run this on 3-4 competitors before writing a single positioning statement
When a sales rep keeps losing deals to the same competitor — the GTM and reviews dimensions can surface why
When a new competitor enters your market — a single deep dive takes a few minutes and gives you the full picture before anyone on your team starts panicking
Get the Skill
Want the full 13-dimension methodology, source chains, and output templates?
This is one of 50+ GTM skills I’ve built in Claude Code to run positioning, content, and launches for Series A-B SaaS companies. If you need the whole system, consider working with me.


