About This Playbook
This playbook was generated using the Blueprint GTM Intelligence System, created by Jordan Crawford. The methodology combines public data sources with non-obvious synthesis to create hyper-specific outreach messages that mirror exact prospect situations. Each message is validated through 5-gate testing and buyer critique to ensure ≥7.0/10 quality scores.
Company Context
Company: Vetsource
Core Offering: Prescription management, payment processing, client engagement tools, and business intelligence platform for veterinary practices and groups.
ICP (Ideal Customer Profile): Multi-location veterinary groups (5+ practices) and high-volume independent companion animal practices seeking operational efficiency, revenue optimization, and consolidated reporting.
Target Persona: Veterinary Practice Managers, Group CFOs/COOs, and Practice Owners responsible for revenue cycle management, operational efficiency, client retention, and financial reporting.
The Old Way (Generic SDR Outreach)
Most sales teams send messages like this:
Subject: Quick Question about Vetsource
Hi [First Name],
I noticed on LinkedIn that your veterinary practice recently expanded. Congrats on the growth!
I wanted to reach out because we work with practices like VCA and Banfield to help with prescription management and payment processing.
Our platform streamlines prescription routing, improves client engagement, and provides business intelligence. We've helped practices increase prescription capture rates by 15-20%.
Would you have 15 minutes next week to explore how we might be able to help your practice?
Best,
Generic SDR
Why This Fails:
- Generic triggers: "recently expanded" could mean anything (or nothing)
- Soft signals: No specific data about their actual situation
- Feature dumping: Lists what the product does, not what problem it solves
- High friction: Requires 15-minute meeting commitment before any value
- No credibility: Recipient can't verify any claims
The New Way (Blueprint GTM Methodology)
Blueprint messages are built on three principles:
Traditional Approach
- Generic pain points
- Soft signals (funding, hiring, growth)
- Feature-focused
- Meeting required
- Unverifiable claims
Blueprint Approach
- Hyper-specific situations
- Hard data (government, competitive, velocity)
- Outcome-focused
- Low-friction reply
- Verifiable data sources
Message Types:
Strong PQS (Pain-Qualified Segment): Uses observable data to identify prospects in a specific painful situation. Score threshold: 7.0-8.4/10. Goal: Earn a reply by mirroring their exact situation with verifiable data.
Validated Plays (4 Strong PQS Messages)
Play 1: High-Volume Practices Without Digital Ordering - Strong PQS (7.4/10)
Target Segment
Independent veterinary practices with high patient volume (80+ Google reviews/month) but no online prescription ordering system detected on their website. These practices are likely losing prescription revenue to online pharmacies (Chewy, 1-800-PetMeds) because clients must call or visit for refills.
Why This Works
Buyer Critique Score: 7.4/10
- Situation Recognition (8/10): Exact review count is verifiable on Google Business Profile
- Data Credibility (7/10): Review velocity is observable, visit estimate disclosed as calculation
- Insight Value (7/10): They know they lack online ordering, but don't know how their volume translates into significant revenue leakage
- Effort to Reply (9/10): Simple yes/no question, minimal friction
- Emotional Resonance (6/10): Triggers curiosity about revenue loss
The Message
Subject: 142 monthly reviews
Your practice averaged 142 Google reviews per month over the past 90 days, but I don't see online prescription ordering on your website.
At the industry-standard 3% review rate, that suggests ~4,700 monthly patient visits—many likely filling prescriptions at Chewy or 1-800-PetMeds instead of through you.
Does this match what you're seeing?
DATA SOURCES:
• Google Maps Places API - reviews[].time field for review velocity
• Website inspection (manual or BuiltWith) - online ordering presence detection
• Industry benchmark: 3% review rate (AVMA veterinary practice studies)
CALCULATION WORKSHEET:
Claim 1: "142 Google reviews per month"
- Source: Google Maps API reviews[].time field
- Calculation: Fetch last 300 reviews, filter to 90 days, average by month
- Confidence: 85% (API data reliable, monthly variance exists)
Claim 2: "~4,700 monthly patient visits"
- Source: Industry benchmark (3% review rate)
- Calculation: 142 reviews ÷ 0.03 = 4,733 visits/month
- Confidence: 50-60% (assumes industry average; actual review rate could range from 2-5%)
- Disclosure: "suggests" signals this is estimated
Overall Message Confidence: 60-65% (observable data + benchmark estimation)
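The two worksheet calculations above can be sketched in Python. Note that the standard Places API Place Details response returns only a handful of reviews, so this sketch assumes you have already collected a list of epoch-second review timestamps (the `reviews[].time` field) by some other means; `review_rate` is the benchmark assumption, not a measured value.

```python
from datetime import datetime, timedelta, timezone

def estimate_monthly_visits(review_timestamps, review_rate=0.03, window_days=90):
    """Estimate monthly patient visits from Google review velocity.

    review_timestamps: Unix epoch seconds (the Places API reviews[].time field).
    review_rate: assumed share of visits that leave a review (industry benchmark).
    Returns (reviews_per_month, estimated_visits_per_month), both rounded.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    recent = [t for t in review_timestamps
              if datetime.fromtimestamp(t, timezone.utc) >= cutoff]
    reviews_per_month = len(recent) / (window_days / 30)
    visits_per_month = reviews_per_month / review_rate
    return round(reviews_per_month), round(visits_per_month)
```

With 426 reviews inside the 90-day window this yields 142 reviews/month and ~4,733 estimated visits, matching the worksheet. The visit figure inherits the full uncertainty of the 3% assumption, which is why the message hedges with "suggests."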
Play 2: High-Volume Growth Practices (Alternative) - Strong PQS (7.8/10)
Target Segment
Same segment as Play 1, but focuses on growth trend and operational burden rather than revenue leakage. Shows increasing review velocity over time, highlighting staff time spent fielding prescription phone calls.
Why This Works
Buyer Critique Score: 7.8/10
- Situation Recognition (9/10): Growth trend is specific and verifiable
- Data Credibility (8/10): Month-by-month review data is observable
- Insight Value (7/10): Growth trend + operational burden creates clearer picture
- Effort to Reply (8/10): Call volume question is easy ballpark answer
- Emotional Resonance (7/10): Staff burden resonates with practice managers
The Message
Subject: prescription revenue gap
I pulled 90 days of your Google review data—142 monthly average with steady growth (128 in Jan → 156 in March).
That volume suggests significant prescription opportunities, but without online ordering, clients default to Chewy's one-click refills instead of calling your office.
How many prescription calls are your staff fielding daily?
DATA SOURCES:
• Google Maps Places API - reviews[].time field for monthly breakdown
• Website inspection - online ordering detection
CALCULATION WORKSHEET:
Claim 1: "142 monthly average with growth (128 → 156)"
- Source: Google Maps API reviews[].time
- Calculation: Group reviews by month (Jan, Feb, Mar), show trend
- Confidence: 85% (observable month-by-month data)
Claim 2: "significant prescription opportunities"
- Source: Volume inference from review rate
- Confidence: 60% (directional, not quantified in this version)
Overall Message Confidence: 70% (weaker claims from the Play 1 version removed)
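The month-by-month trend behind Claim 1 is a simple grouping operation. A minimal sketch, again assuming you already hold a list of epoch-second `reviews[].time` values:

```python
from collections import Counter
from datetime import datetime, timezone

def monthly_review_counts(review_timestamps):
    """Bucket review epoch-second timestamps by calendar month, oldest first,
    so a growth trend (e.g. 128 -> 142 -> 156) is visible at a glance."""
    counts = Counter(
        datetime.fromtimestamp(t, timezone.utc).strftime("%Y-%m")
        for t in review_timestamps
    )
    return sorted(counts.items())
```

Because the output is raw observable counts rather than an inferred visit estimate, this is the part of the message with the highest (85%) confidence.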
Play 3: Multi-Location Expansion Groups - Strong PQS (7.6/10)
Target Segment
Veterinary groups operating 5+ locations that have opened new practices in the past 12 months. These groups face operational complexity: fragmented payment processing, inconsistent workflows, and lack of consolidated reporting during the critical first-year post-expansion window.
Why This Works
Buyer Critique Score: 7.6/10
- Situation Recognition (9/10): Exact locations and opening dates are verifiable
- Data Credibility (7/10): Location data observable, fee claim is industry benchmark
- Insight Value (7/10): Payment fragmentation cost insight is non-obvious
- Effort to Reply (8/10): Straightforward question about reporting
- Emotional Resonance (7/10): Hits real pain point during chaotic growth phase
The Message
Subject: 3 locations, 12 months
You've opened 3 new veterinary locations in the past 12 months—Scottsdale in March, Tempe in July, Mesa in November.
Most multi-location groups don't realize fragmented payment processors cost 2-3% more in fees than consolidated platforms, especially in that first year when each location is using different systems.
How are you handling reporting across all 7 locations?
DATA SOURCES:
• Google Maps - business listings with opening dates
• LinkedIn - company page location updates
• Industry data: payment processing fee benchmarks (fragmented vs consolidated)
CALCULATION WORKSHEET:
Claim 1: "3 new locations with specific dates/cities"
- Source: Google Maps Place.opening_date + LinkedIn company updates
- Confidence: 75% (dates can be approximate, cross-verify multiple sources)
Claim 2: "2-3% more in fees for fragmented processing"
- Source: Industry payment processing benchmarks
- Calculation: Fragmented (2.9% + $0.30) vs consolidated (2.4% + $0.20)
- Confidence: 60% (industry average, not practice-specific)
- Disclosure: "Most multi-location groups" signals a pattern, not an exact figure
Claim 3: "7 locations total"
- Source: LinkedIn company page OR Google Maps count
- Confidence: 85% (observable count)
Overall Message Confidence: 65% (observable expansion + industry benchmarks)
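The fee comparison in Claim 2 is straightforward arithmetic: percentage of card volume plus a flat per-transaction fee. The volume and transaction counts below are hypothetical illustrations, not practice data; the rates are the worksheet's benchmark figures.

```python
def processing_cost(volume, txn_count, pct_rate, per_txn_fee):
    """Monthly card-processing cost: percentage of volume plus per-transaction fee."""
    return volume * pct_rate + txn_count * per_txn_fee

# Hypothetical example: $150k monthly card volume, 2,000 transactions per location.
volume, txns = 150_000, 2_000
fragmented = processing_cost(volume, txns, 0.029, 0.30)    # 2.9% + $0.30
consolidated = processing_cost(volume, txns, 0.024, 0.20)  # 2.4% + $0.20
savings = fragmented - consolidated
```

Under these assumed numbers the fragmented setup costs $4,950/month vs $4,000 consolidated per location, making the gap concrete for a discovery conversation; actual savings depend entirely on each location's negotiated rates.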
Play 4: Multi-Location Reporting Gaps - Solid PQS (7.0/10)
Target Segment
Same multi-location groups as Play 3, but focuses on data visibility and reporting lag. Targets CFOs/COOs who need real-time consolidated financial visibility across locations but are stuck with fragmented, outdated reporting.
Why This Works
Buyer Critique Score: 7.0/10
- Situation Recognition (8/10): Location count and hiring signal are accurate
- Data Credibility (5/10): Reporting lag is assumption, not proven
- Insight Value (6/10): IF lag claim is accurate, insight is valuable
- Effort to Reply (9/10): Very easy yes/no question
- Emotional Resonance (7/10): IF true, hits CFO pain point directly
This play may benefit from additional data refinement to strengthen the reporting lag claim.
The Message
Subject: 7-location data gap
Your group operates 7 locations across Arizona, and you're actively hiring (I see 4 open positions posted this month).
But I'd bet you don't have real-time consolidated revenue visibility—most groups this size are looking at 30-45 day old data stitched together from different systems.
Is that the case here?
DATA SOURCES:
• LinkedIn - company locations count
• Indeed / LinkedIn Jobs - job posting dates and counts
• Industry inference: multi-location reporting lag patterns
CALCULATION WORKSHEET:
Claim 1: "7 locations across Arizona"
- Source: LinkedIn company page Locations section
- Confidence: 85% (company-reported data)
Claim 2: "4 open positions this month"
- Source: Indeed, LinkedIn Jobs, or company careers page
- Calculation: Filter jobs posted in last 30 days, count unique positions
- Confidence: 80% (job boards are current, but postings may be stale or already filled)
Claim 3: "30-45 day old data" lag
- Source: Industry reports on multi-location vet group challenges
- Confidence: 40-50% (WEAK - inference, not practice-specific)
- Disclosure: "I'd bet" + "most groups" clearly signals assumption
Overall Message Confidence: 55-60% (weaker due to unproven lag claim)
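Claim 2's "filter jobs posted in the last 30 days, count unique positions" step can be sketched as below. The `(title, posted_date)` tuple shape is a hypothetical stand-in for whatever your job-board scrape or export actually produces.

```python
from datetime import date, timedelta

def recent_open_positions(postings, window_days=30, today=None):
    """Count unique job titles posted within the last window_days.

    postings: iterable of (title, posted_date) tuples from a careers page
    or job-board export (hypothetical shape). Deduplicates by title so a
    reposted role is not double-counted.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    titles = {title for title, posted in postings if posted >= cutoff}
    return len(titles)
```

Deduplicating by title is a judgment call: it avoids inflating the count with reposts, at the cost of undercounting genuinely distinct openings that share a title across locations.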
The Transformation
The difference between generic outreach and Blueprint methodology is simple:
- Generic SDR: "Your practice is growing" (unverifiable, low trust)
- Blueprint: "You opened 3 locations in 12 months—Scottsdale (March), Tempe (July), Mesa (November)" (exact, verifiable, high trust)
When prospects can verify every claim in your message, they don't delete it—they reply.
Key Principles:
- Hyper-specific: Use exact numbers, dates, locations, field values
- Factually grounded: Every claim traces to a documented data source
- Non-obvious synthesis: Connect data points they don't have access to
- Low friction: Questions should be answerable in 1-2 sentences
- Curious tone: Sound like a helpful colleague, not an auditor
Implementation Notes
For Vetsource Sales Team:
- Data Limitation: This playbook uses situation-based plays (not pain-proven) because prescription revenue leakage and payment inefficiencies are not externally visible in public data
- Confidence Levels: All messages are 55-70% confidence (hybrid approach using observable signals + industry benchmarks)
- Message Classification: All 4 plays are Strong PQS (7.0-7.8/10). No TRUE PVPs were possible due to lack of complete actionable data in public sources
- Expected Performance: These are TIMING plays, not traditional PAIN plays. Expect 2-5% response rates vs 8-15% for true pain-proven PVPs
- Best Use Case: SDR-driven campaigns with quick follow-up. Messages create curiosity and earn discovery calls rather than proving urgent pain
Disclosure Strategy:
Because these messages use hybrid data (observable signals + inferred impact), always disclose estimation:
- Use "suggests," "likely," "estimated" language
- Frame as opportunity exploration, not proven crisis
- Position as "Does this match what you're seeing?" rather than "You have this problem"