Founder of Blueprint. I help companies stop sending emails nobody wants to read.
The problem with outbound isn't the message. It's the list. When you know WHO to target and WHY they need you right now, the message writes itself.
I built this system using government databases, public records, and 25 million job posts to find pain signals most companies miss. Predictable Revenue is dead. Data-driven intelligence is what works now.
Your GTM team is buying lists from ZoomInfo, adding "personalization" like mentioning a LinkedIn post, then blasting generic messages about features. Here's what it actually looks like:
The Typical ImagineSoftware SDR Email:
Why this fails: The prospect is an expert. They've seen this template 1,000 times. There's zero indication you understand their specific situation. Delete.
Blueprint flips the approach. Instead of interrupting prospects with pitches, you deliver insights so valuable they'd pay consulting fees to receive them.
Stop: "I see you're hiring compliance people" (job postings - everyone sees this)
Start: "Your ER claim volume increased 41% between March and November based on CMS data" (government utilization database with specific months)
PQS (Pain-Qualified Segment): Reflect their exact situation with such specificity they think "how did you know?" Use government data with dates, record numbers, facility addresses.
PVP (Permissionless Value Proposition): Deliver immediate value they can use today - analysis already done, deadlines already pulled, patterns already identified - whether they buy or not.
These messages are ordered by quality score - the strongest plays come first, regardless of data source type. Each play demonstrates precision understanding or delivers immediate value.
Combine public CMS fee schedule changes with internal data on the customer's specific CPT code distribution and billing volumes to calculate exact quarterly impact before the changes hit their revenue.
You're surfacing financial impact the prospect won't see until February, when payments post. The specificity of using THEIR actual volumes against the new rates proves you did custom analysis. Most groups won't model this until they see revenue drop - you're giving them a six-week advance warning.
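A minimal sketch of the impact math, with hypothetical CPT volumes and rates standing in for the customer's real distribution:

```python
# Sketch: quarterly revenue impact of a fee schedule change across a
# customer's CPT mix. All volumes and rates below are hypothetical.

def quarterly_impact(monthly_volumes, old_rates, new_rates):
    """Return the total quarterly dollar impact across CPT codes.

    monthly_volumes: {cpt_code: average monthly claim count}
    old_rates / new_rates: {cpt_code: allowed amount in dollars}
    A negative result means revenue loss under the new schedule.
    """
    impact = 0.0
    for code, volume in monthly_volumes.items():
        delta = new_rates[code] - old_rates[code]
        impact += delta * volume * 3  # three months per quarter
    return round(impact, 2)

# Hypothetical ER group: 99285 is their highest-volume code.
volumes = {"99285": 1200, "99284": 900}
old = {"99285": 176.10, "99284": 98.50}
new = {"99285": 170.30, "99284": 97.20}
print(quarterly_impact(volumes, old, new))  # a ~$24K quarterly hit
```

The point of the exercise is the output number: one dollar figure, tied to their own codes, before the change lands.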
This play requires the recipient's CPT code distribution and monthly claim volumes from your platform (aggregated over 3-6 months to identify their most-billed codes).
Combined with public fee schedule changes to calculate exact dollar impact. This synthesis is unique to your business.

Identify the recipient's highest-volume CPT code from internal data, apply the upcoming CMS fee schedule reduction to that specific code, and calculate monthly revenue loss starting January 1st.
Single-code impact is easier to verify and act on than multi-code analysis. The prospect can check their own volumes immediately. By focusing on their #1 code, you demonstrate precision research while offering a concrete optimization strategy before the change hits.
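The single-code version can be sketched like this, again with made-up volumes, rates, and an assumed percentage cut:

```python
# Sketch: isolate the recipient's most-billed CPT code and quantify the
# monthly loss from a percentage fee cut. Numbers are hypothetical.

def top_code_monthly_loss(monthly_volumes, current_rates, cut_pct):
    """Return (top_code, monthly dollar loss) for the most-billed code."""
    top_code = max(monthly_volumes, key=monthly_volumes.get)
    loss = monthly_volumes[top_code] * current_rates[top_code] * cut_pct / 100
    return top_code, round(loss, 2)

volumes = {"88305": 2400, "88342": 600}    # hypothetical pathology lab
rates = {"88305": 71.80, "88342": 110.25}
code, loss = top_code_monthly_loss(volumes, rates, 2.8)  # assumed 2.8% cut
print(code, loss)
```

One verifiable number on one code keeps the claim checkable in minutes.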
This play requires the recipient's monthly claim volumes by CPT code from your platform.
Combined with public CMS fee schedule data to calculate exact monthly revenue impact. Only you have their specific billing patterns.

Track payment velocity trends across your customer base by payer, region, and quarter. When a major payer's payment timeline increases significantly (e.g., Blue Cross Illinois going from 28 to 39 days), alert practices in that state with their specific claim volume to quantify float impact.
The prospect cannot see this pattern from their single practice. They'll notice slower payments eventually, but by then they've already lost weeks of float. You're giving them forward-looking intelligence they can act on immediately - adjust cash flow forecasts, contact the payer, or shift volume to faster-paying plans.
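A rough sketch of the alert logic, using invented days-to-payment samples in place of real cross-customer data:

```python
# Sketch: flag a payer whose median days-to-payment slipped quarter-over-
# quarter, and estimate the extra dollars tied up in float. All inputs
# are hypothetical.
import statistics

def velocity_alert(days_by_quarter, monthly_claims, avg_claim_value,
                   threshold_days=5):
    """Return float impact if the payer's median timeline slipped past
    the threshold, else None."""
    prev = statistics.median(days_by_quarter[-2])
    curr = statistics.median(days_by_quarter[-1])
    slip = curr - prev
    if slip < threshold_days:
        return None
    # Extra float = average daily billed amount * additional days outstanding
    daily_billed = monthly_claims * avg_claim_value / 30
    return {"days_slipped": slip, "extra_float": round(daily_billed * slip, 2)}

# Hypothetical Blue Cross Illinois pattern: 28 -> 39 median days
q3 = [27, 28, 29, 28, 28]
q4 = [38, 39, 40, 39, 38]
print(velocity_alert([q3, q4], monthly_claims=1500, avg_claim_value=210.0))
```

The threshold keeps noise out: a one- or two-day wobble never triggers an alert.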
This play requires aggregated payment velocity data across 50+ customers by payer, state, and quarter (median days-to-payment, with trend analysis over time).
This is proprietary data only you have - competitors cannot replicate this play. Requires multi-year payment posting data from thousands of practices nationwide.

Track payer mix changes quarter-over-quarter from internal claim submissions. When a practice's payer distribution shifts toward lower-reimbursing plans (e.g., Medicaid increasing 8 percentage points), calculate margin impact based on their top procedures and reimbursement differentials.
Strategic payer mix shifts are invisible until quarterly financials close. The prospect sees individual claims but not the macro trend. By surfacing this 30-60 days before they'd notice organically, you're providing executive-level strategic insight that helps them course-correct patient acquisition or renegotiate contracts.
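The margin math can be sketched as below; the mix percentages and average allowed amounts are hypothetical placeholders:

```python
# Sketch: monthly margin impact of a payer mix shift toward lower-
# reimbursing plans. Mix shares are fractions summing to 1; all numbers
# are hypothetical.

def payer_mix_margin_impact(prev_mix, curr_mix, monthly_claims, rates):
    """Return the estimated monthly dollar impact of the mix shift."""
    impact = 0.0
    for payer, curr_share in curr_mix.items():
        shift = curr_share - prev_mix.get(payer, 0.0)
        impact += shift * monthly_claims * rates[payer]
    return round(impact, 2)

prev = {"commercial": 0.55, "medicare": 0.30, "medicaid": 0.15}
curr = {"commercial": 0.48, "medicare": 0.29, "medicaid": 0.23}
rates = {"commercial": 180.0, "medicare": 120.0, "medicaid": 85.0}  # assumed avg allowed
print(payer_mix_margin_impact(prev, curr, monthly_claims=2000, rates=rates))
```

A negative result is the monthly margin erosion the practice hasn't noticed yet.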
This play requires the recipient's claim submissions by payer over 2+ quarters, with reimbursement rate differentials by procedure code.
Combined with payer mix percentage calculations to quantify margin impact. This is strategic insight they cannot get elsewhere.

Use CMS Physician Utilization Files to identify emergency medicine groups with 30%+ claim volume increase over 6-12 months, then cross-reference LinkedIn to verify their billing staff count stayed flat. This proves operational stress: more work, same headcount.
The prospect knows their volume increased. They probably think "we're doing more with less" is sustainable. By showing them the exact percentage increase alongside their flat staffing, you're reflecting operational reality back to them in a way that validates their stress and makes automation an obvious solution.
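The screening rule itself is simple; a sketch with invented claim counts and headcounts:

```python
# Sketch: flag groups whose claim volume grew 30%+ while billing headcount
# stayed flat or shrank. Inputs are hypothetical stand-ins for CMS
# utilization counts and LinkedIn headcount checks.

def operational_stress(claims_prev, claims_curr, staff_prev, staff_curr,
                       min_growth=0.30):
    """Return growth stats if the group qualifies as stressed, else None."""
    growth = (claims_curr - claims_prev) / claims_prev
    if growth >= min_growth and staff_curr <= staff_prev:
        return {"growth_pct": round(growth * 100, 1),
                "claims_per_biller": round(claims_curr / staff_curr)}
    return None

# Hypothetical ER group: ~41% more claims, same six billers
print(operational_stress(claims_prev=41000, claims_curr=57800,
                         staff_prev=6, staff_curr=6))
```

The claims-per-biller figure is the line that lands in the email: it states their workload back to them.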
Use aggregated payment velocity data from your multi-state customer base to show how the same payer processes claims at different speeds in different states. Alert practices in slower states that they're experiencing worse cash flow than peers in other regions.
Geographic payment velocity differences are completely invisible to single-state practices. They have no comparison point. By revealing that Anthem pays Texas groups 9 days faster than California groups for the same specialty, you're exposing an unfair pattern they can escalate to payer reps or use to justify operational changes.
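A sketch of the cross-state comparison, with invented per-state day counts in place of real aggregated data:

```python
# Sketch: compare a practice's state against the fastest-paying state for
# the same payer and specialty. All day counts are hypothetical.
import statistics

def state_velocity_gap(days_by_state, practice_state):
    """Return (fastest_state, extra_days) the practice waits vs. the
    fastest state."""
    medians = {s: statistics.median(d) for s, d in days_by_state.items()}
    fastest = min(medians, key=medians.get)
    return fastest, medians[practice_state] - medians[fastest]

# Hypothetical payer data aggregated across a multi-state customer base
days = {"TX": [22, 24, 23], "CA": [31, 33, 32], "FL": [27, 26, 28]}
print(state_velocity_gap(days, "CA"))
```

The gap in days, for the same payer and specialty, is what makes the pattern feel unfair and escalatable.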
This play requires payment velocity data across multiple states, aggregated by payer and specialty (median days-to-payment with state-level breakdowns).
This is proprietary data only you have - competitors without multi-state customer bases cannot replicate this play.

Use CMS Hospital Price Transparency Enforcement Data to identify hospitals that received warnings or corrective action requests, then cross-reference with NPPES to find emergency medicine groups affiliated with those facilities. Compliance failures at the hospital level often indicate revenue cycle documentation problems affecting the ER group's billing.
The ER group may not know their affiliated hospital is under CMS scrutiny. By connecting the hospital's compliance problem to the ER group's potential billing risk (enhanced oversight often leads to stricter claim audits), you're surfacing a threat they can verify immediately and may need to address urgently.
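One simple way to sketch the cross-reference is matching on practice location; the hospital and group records below are invented, and a real version would match on more than city/state:

```python
# Sketch: match ER groups (NPPES-style records) to hospitals under CMS
# price transparency enforcement by city and state. All records are
# hypothetical examples.

def flag_affiliated_groups(flagged_hospitals, er_groups):
    """Return names of ER groups located in the same city/state as a
    flagged hospital."""
    flagged_locs = {(h["city"].lower(), h["state"]) for h in flagged_hospitals}
    return [g["organization_name"] for g in er_groups
            if (g["city"].lower(), g["state"]) in flagged_locs]

hospitals = [{"hospital_name": "Mercy General", "city": "Tulsa", "state": "OK"}]
groups = [
    {"organization_name": "Tulsa Emergency Physicians", "city": "Tulsa", "state": "OK"},
    {"organization_name": "Austin EM Group", "city": "Austin", "state": "TX"},
]
print(flag_affiliated_groups(hospitals, groups))
```

City/state matching will over-match in large metros; tightening to address or affiliation fields is the obvious next step.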
Monitor payer policy updates for pre-authorization requirement changes (typically announced via provider bulletins). When a major payer like Cigna changes pre-auth rules for molecular pathology, identify labs with high Cigna claim volumes and calculate likely denial impact based on their recent submission patterns.
Pathology labs process hundreds of claims weekly - policy bulletins get buried in email. By surfacing a specific policy change with the exact effective date and their actual denial count, you're proving they missed the update and quantifying the damage. This creates urgency to prevent future denials.
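A back-of-envelope sketch of the denial impact estimate; claim counts, the affected-code share, and claim value are all assumed:

```python
# Sketch: estimate dollars denied since a payer's pre-auth rule change,
# assuming affected claims submitted without auth were denied. All
# numbers are hypothetical.

def preauth_denial_impact(weekly_claims, affected_cpt_share,
                          avg_claim_value, weeks_since_change):
    """Return estimated denied dollars since the rule took effect."""
    denied_claims = weekly_claims * affected_cpt_share * weeks_since_change
    return round(denied_claims * avg_claim_value, 2)

# Hypothetical lab: 800 Cigna claims/week, 12% molecular pathology codes,
# six weeks since the effective date
print(preauth_denial_impact(800, 0.12, 340.0, weeks_since_change=6))
```

Pairing the effective date with a dollar estimate turns a buried bulletin into a quantified problem.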
Use CMS Physician Utilization Files to identify practices with high claim volumes across multiple payers, then model average payment timelines by payer using industry benchmarks. Compare payer velocity to highlight slow-paying payers as cash flow problems.
Multi-payer practices often don't track payment velocity by payer - they just see aggregate A/R. By isolating one slow payer and quantifying the delay in dollars, you make the problem actionable. The 18-day gap is specific enough to feel researched.
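The slow-payer isolation can be sketched as below; the payer names, modeled day counts, and billed amounts are hypothetical, with the benchmark producing the kind of 18-day gap described above:

```python
# Sketch: rank payers by dollars tied up beyond a benchmark payment
# timeline. Day counts are modeled from benchmarks; all numbers are
# hypothetical.

def slow_payer_float(payer_days, monthly_billed_by_payer, benchmark_days=28):
    """Return {payer: extra float dollars}, slowest payer first."""
    results = {}
    for payer, days in payer_days.items():
        extra_days = max(0, days - benchmark_days)
        daily_billed = monthly_billed_by_payer[payer] / 30
        results[payer] = round(extra_days * daily_billed, 2)
    return dict(sorted(results.items(), key=lambda kv: -kv[1]))

days = {"PayerA": 46, "PayerB": 31, "PayerC": 25}      # modeled timelines
billed = {"PayerA": 300000, "PayerB": 450000, "PayerC": 200000}
print(slow_payer_float(days, billed))
```

PayerA's 18 days past benchmark, translated to dollars, is the single figure the email leads with.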
Old way: Spray generic messages at job titles. Hope someone replies.
New way: Use public data to find companies in specific painful situations. Then mirror that situation back to them with evidence.
Why this works: When you lead with "Your ER claim volume increased 41% between March and November" instead of "I see you're hiring for billing roles," you're not another sales email. You're the person who did the homework.
The messages above aren't templates. They're examples of what happens when you combine real data sources with specific situations. Your team can replicate this using the data recipes in each play.
Every play traces back to verifiable public data. Here are the sources used in this playbook:
| Source | Key Fields | Used For |
|---|---|---|
| NPPES NPI Registry API | npi_number, organization_name, taxonomy_code, practice_location, state | Identifying group practices by specialty and location |
| CMS Physician Utilization Files | claim_count, allowed_amount, group_pac_id, specialty_code, procedure_code | Measuring billing volume and reimbursement patterns |
| CMS National Downloadable File - Physician Compare | group_practice_pac_id, specialty, accepting_new_patients, address | Mapping individual providers to group organizations |
| CMS Hospital Price Transparency Enforcement Data | hospital_name, warnings_issued, corrective_action_requested, compliance_finding_date | Identifying hospitals with compliance gaps |
| CMS Physician Fee Schedule API | procedure_code, conversion_factor, facility_value, non_facility_value | Mapping procedures to reimbursement rates |
| CMS Clinical Laboratory Fee Schedule | test_code, payment_rate, clinical_lab, provider_id | Identifying clinical labs and test reimbursement rates |
| Medicare Data on Provider Practice and Specialty (MD-PPAS) | practice_id, organization_name, practice_size, primary_specialty | Tracking practice consolidation and size |
| CMS MIPS Group Public Reporting | group_pac_id, quality_score, patient_count, performance_year | Identifying group practices with quality score trends |