Blueprint Playbook for Reveal

Who the Hell is Jordan Crawford?

Founder of Blueprint. I help companies stop sending emails nobody wants to read.

The problem with outbound isn't the message. It's the list. When you know WHO to target and WHY they need you right now, the message writes itself.

I built this system using government databases, public records, and 25 million job posts to find pain signals most companies miss. Predictable Revenue is dead. Data-driven intelligence is what works now.

The Old Way (What Everyone Does)

Your GTM team is buying lists from ZoomInfo, adding "personalization" like mentioning a LinkedIn post, then blasting generic messages about features. Here's what it actually looks like:

The Typical Reveal SDR Email:

Subject: Transforming eDiscovery with Transparent AI

Hi [First Name],

I saw your firm recently expanded its litigation practice—congrats! I wanted to reach out because Reveal is revolutionizing eDiscovery with our explainable AI platform. Unlike traditional black-box TAR solutions, our platform provides complete transparency into AI decision-making, helping legal teams:

• Reduce discovery costs by up to 60%
• Accelerate document review timelines
• Maintain defensible methodologies

We work with AmLaw 100 firms and Fortune 500 companies to streamline their discovery workflows. Would you be open to a 15-minute call to explore how Reveal could benefit your practice?

Best,
[SDR Name]

Why this fails: The prospect is an expert. They've seen this template 1,000 times. There's zero indication you understand their specific situation. Delete.

The New Way: Intelligence-Driven GTM

Blueprint flips the approach. Instead of interrupting prospects with pitches, you deliver insights so valuable they'd pay consulting fees to receive them.

1. Hard Data Over Soft Signals

Stop: "I see you're hiring compliance people" (job postings - everyone sees this)

Start: "Your Johnson v. State Farm case (Case 2:24-cv-01847) has a TAR protocol challenge with an April 12th court deadline" (PACER docket with case number and date)

2. Mirror Situations, Don't Pitch Solutions

PQS (Pain-Qualified Segment): Reflect their exact situation with such specificity they think "how did you know?" Use government data with dates, record numbers, case citations.

PVP (Permissionless Value Proposition): Deliver immediate value they can use today - analysis already done, deadlines already pulled, patterns already identified - whether they buy or not.
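
Put together, a PQS opener and a PVP offer are just structured fields dropped into two sentences. A minimal sketch; the record below is hand-written for illustration, but in practice it comes out of the data recipes in the plays that follow.

```python
# Hypothetical record assembled from a data recipe (fields are illustrative).
record = {
    "case_name": "Johnson v. State Farm",
    "docket_number": "2:24-cv-01847",
    "issue": "TAR protocol challenge",
    "deadline": "April 12th",
    "asset": "defense playbook of winning briefs from 14 similar challenges",
}

# PQS: mirror their exact situation with verifiable specifics.
pqs = (f"Your {record['case_name']} case (Case {record['docket_number']}) has a "
       f"{record['issue']} with an {record['deadline']} court deadline.")

# PVP: hand over value they can use today, whether they buy or not.
pvp = f"I pulled a {record['asset']}. Want it for your {record['deadline']} response?"

print(f"{pqs} {pvp}")
```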

Reveal's Top Plays: Best Messages First

These plays are ordered by quality score—the highest-scoring messages appear first, regardless of whether they use public data, internal data, or a combination. Each message has been validated to pass Blueprint's 6-gate quality framework.
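
The six gates themselves aren't enumerated in this section, so treat the checks below as hypothetical placeholders; the point is simply that every message gets screened programmatically before it ships. A minimal sketch:

```python
import re

def quality_gates(msg: str) -> dict:
    """Hypothetical screening checks -- substitute your framework's real gates."""
    return {
        "has_record_number": bool(re.search(r"\d:\d{2}-cv-\d+", msg)),   # e.g. 2:24-cv-01847
        "has_dated_deadline": bool(re.search(r"(January|February|March|April|May|June|July|"
                                             r"August|September|October|November|December) \d{1,2}", msg)),
        "ends_with_question": msg.strip().endswith("?"),
        "under_100_words": len(msg.split()) < 100,
    }

def passes(msg: str) -> bool:
    return all(quality_gates(msg).values())
```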

PVP Public Data Strong (9.3/10)

Defense Playbook for Your Johnson v. State Farm TAR Challenge

What's the play?

Law firms facing e-discovery methodology challenges get a curated defense playbook with winning briefs, expert declarations, and judicial orders from similar TAR challenges. This targets firms with active motions challenging their AI-driven document review, providing case-specific precedents they can adapt immediately.

Why this works

You're addressing an active fire with immediate deadline pressure. The specificity (knowing their exact case name, motion date, and deadline) passes the "how did they know that?" test. Delivering winning precedents from similar cases provides instant value they'd otherwise spend billable hours researching. This helps them win the case whether they buy or not.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, motion_filings, filing_date, judge
  2. E-Discovery Case Database - discovery_issue, ruling_date, attorney_names, law_firm
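
A minimal sketch of how this recipe might come together: join PACER docket rows to e-discovery case records on case name, then keep active TAR disputes. The loading step and exact column names depend on your export; the names below follow the recipe above.

```python
def tar_defense_targets(pacer_rows, ediscovery_rows):
    """pacer_rows / ediscovery_rows: lists of dicts exported from each source."""
    cases_by_name = {r["case_name"]: r for r in ediscovery_rows}
    targets = []
    for docket in pacer_rows:
        case = cases_by_name.get(docket["case_name"])
        if not case or "tar" not in case.get("discovery_issue", "").lower():
            continue
        targets.append({
            "law_firm": case["law_firm"],
            "attorneys": case["attorney_names"],
            "case_name": docket["case_name"],
            "docket_number": docket["docket_number"],
            "motion_date": docket["filing_date"],
            "judge": docket["judge"],
        })
    return targets
```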

The message:

Subject: Defense playbook for your Johnson v. State Farm TAR challenge

I mapped your Johnson case TAR motion to 14 similar challenges where firms successfully defended AI methodology. Pulled the winning briefs, expert declarations, and judicial orders that worked. Want the defense playbook for your April 12th response?

PVP Public Data Strong (9.1/10)

Privilege Log Template for AI-Assisted Review

What's the play?

Law firms ordered to resubmit privilege logs due to AI methodology challenges receive a court-tested template with explanations judges have accepted in similar cases. This targets firms facing specific privilege log rejections where the court questioned AI-driven document classification.

Why this works

The specificity (Judge Martinez, 47 challenged entries, March 28th deadline) proves real research. Providing a ready-to-use template with court-accepted language solves an immediate problem on their calendar. The value is tangible—saves hours of attorney time and reduces risk of further court challenges—whether they engage further or not.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, court_orders, judge, deadline_dates
  2. E-Discovery Case Database - discovery_issue, document_dispute, ruling_date

The message:

Subject: Privilege log template for AI-assisted review

Built a privilege log template that addresses Judge Martinez's 47-entry challenge in cases like yours. Includes AI methodology explanations courts have accepted in 9 recent antitrust cases. Want the template for your March 28th resubmission?

PQS Public Data Strong (9.1/10)

Your TAR Methodology Challenged in Johnson v. State Farm

What's the play?

Target law firms managing active litigation where opposing counsel has filed motions challenging their TAR (Technology-Assisted Review) protocols. PACER dockets reveal specific cases where discovery methodology is under judicial scrutiny, with court-ordered deadlines to justify AI coding decisions.

Why this works

Extreme specificity—you found their actual case name, docket number, motion date, and court deadline. This is a current fire with immediate consequences. The routing question makes it easy to respond. They're wondering "how did you know this?", which is exactly the reaction Blueprint aims for.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, filing_date, motion_type, judge
  2. E-Discovery Case Database - discovery_issue, law_firm, attorney_names

The message:

Subject: Your TAR methodology challenged in Johnson v. State Farm

Court docket shows opposing counsel filed a motion to compel on March 15th challenging your TAR protocol in Johnson v. State Farm (Case 2:24-cv-01847). Judge ordered you to provide written justification of AI coding decisions by April 12th. Who's drafting the AI methodology defense?

PVP Public Data Strong (9.0/10)

AI Methodology Defense for Your TechCorp Sanction

What's the play?

Alternative Legal Service Providers (ALSPs) whose clients received sanctions for discovery failures get a methodology defense framework based on ALSPs that successfully defended similar challenges. This targets cases where court orders specifically question the ALSP's TAR transparency.

Why this works

The sanction ($125K, specific client, specific judge, specific date) is public and embarrassing—threatens the ALSP's client relationship and reputation. Providing defenses from 7 similar situations offers immediate value. The transparency gap is precisely what Reveal solves, making this a perfect bridge to the product conversation.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, sanction_amount, judge, order_date
  2. Lex Machina Litigation Analytics - party_litigation_history, case_outcomes, attorney_profiles
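
A minimal sketch of this recipe: flag sanction orders that call out TAR transparency, then attach the party's litigation history. The order_text, party, and sanctioned_party fields are assumptions; map them to however your docket and analytics exports label those columns.

```python
def sanctioned_alsp_targets(docket_rows, history_rows):
    """Flag sanction orders questioning TAR transparency, enriched with history."""
    # history_rows assumed to carry one outcome per (party, case) pair.
    history = {}
    for row in history_rows:
        history.setdefault(row["party"], []).append(row["case_outcome"])

    targets = []
    for row in docket_rows:
        order = row.get("order_text", "").lower()          # assumed full-text field
        if row.get("sanction_amount", 0) and "technology-assisted review" in order:
            targets.append({
                "case_name": row["case_name"],
                "docket_number": row["docket_number"],
                "sanction_amount": row["sanction_amount"],
                "judge": row["judge"],
                "order_date": row["order_date"],
                "prior_outcomes": history.get(row.get("sanctioned_party"), []),
            })
    return targets
```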

The message:

Subject: AI methodology defense for your TechCorp sanction

Judge Williams' $125K sanction order specifically questions your TAR transparency. Mapped 7 ALSPs that successfully defended similar challenges with detailed AI workflow documentation. Want the methodology defense framework?

PQS Public Data Strong (9.0/10)

$125K Sanctions Against Your Client TechCorp

What's the play?

Target ALSPs whose clients received court sanctions for deficient ESI (Electronically Stored Information) production, where judicial orders cite lack of TAR transparency as the core problem. This identifies ALSPs with urgent methodology credibility issues that threaten client retention.

Why this works

The specificity (client name, sanction amount, judge, date, case number) proves you did real research. The court order directly implicating ALSP methodology makes this existential—threatens the client relationship. The routing question about protocol review is urgent and practical.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, sanction_amount, judge, order_date, parties
  2. E-Discovery Case Database - discovery_issue, ruling_date, attorney_names

The message:

Subject: Your client TechCorp sanctioned for discovery failures

Judge Williams sanctioned your client TechCorp $125,000 on March 3rd for producing deficient ESI in the patent infringement case (Case 3:24-cv-02156). Order specifically questions the ALSP's TAR methodology and lack of transparency. Who's reviewing your AI coding protocols?

PVP Public Data Strong (8.9/10)

Dual-Track Production Protocol for SEC + DOJ Requests

What's the play?

Public companies managing parallel SEC civil investigations and DOJ criminal inquiries receive a dual-track protocol showing how other companies navigated this with defensible AI workflows. This targets companies with 8-K disclosures of concurrent regulatory investigations requiring document production to multiple agencies.

Why this works

Parallel investigations create unique pressure—civil and criminal standards differ, but you're using the same document set. Providing precedents from 6 companies in similar situations offers immediate risk mitigation. The value (reducing regulatory/criminal exposure) is tangible whether they buy or not.

Data Sources
  1. SEC EDGAR Filings (8-K) - company_name, cik, filing_date, litigation_summary, case_description
  2. PACER Federal Dockets - case_name, docket_number, case_type, parties
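
A minimal sketch of the trigger: scan 8-K text already pulled from EDGAR for language indicating both an SEC matter and a DOJ matter. The keyword lists are illustrative; real filings phrase these disclosures many different ways.

```python
SEC_TERMS = ("securities and exchange commission", "sec investigation")
DOJ_TERMS = ("department of justice", "criminal inquiry", "grand jury subpoena")

def parallel_investigation_flags(filings):
    """filings: iterable of dicts with company_name, cik, filing_date, text."""
    flagged = []
    for f in filings:
        text = f["text"].lower()
        if any(t in text for t in SEC_TERMS) and any(t in text for t in DOJ_TERMS):
            flagged.append({
                "company_name": f["company_name"],
                "cik": f["cik"],
                "filing_date": f["filing_date"],
            })
    return flagged
```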

The message:

Subject: Dual-track production protocol for SEC + DOJ requests

Your parallel SEC/DOJ investigation requires defensible methodology for both civil and criminal standards. Mapped 6 companies that successfully navigated this with transparent AI workflows. Want the dual-track protocol and case examples?

PQS Public Data Strong (8.9/10)

Your Privilege Log in the Sherman Act Case

What's the play?

Target public companies managing antitrust litigation where courts ordered privilege log resubmission due to insufficient AI methodology explanations. PACER dockets reveal specific cases where opposing counsel successfully challenged privilege claims based on lack of TAR transparency.

Why this works

The specificity (case name, judge, deadline, 47 challenged entries) demonstrates real research. AI methodology challenges in privilege contexts are high-stakes—risks waiving privilege entirely. The preparedness question acknowledges urgency without being pushy.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, court_orders, judge, deadline_dates
  2. SEC EDGAR Filings (8-K) - company_name, litigation_summary, case_description

The message:

Subject: Your privilege log in the Sherman Act case

Court docket shows Judge Martinez ordered you to re-submit your privilege log by March 28th in US v. Acme Corp (antitrust case). Opposing counsel challenged 47 entries as insufficiently specific about AI-assisted review. Is your team prepared to defend how AI flagged those documents?

PVP Public Data Strong (8.8/10)

Defense Brief for Your 12 FOIA Lawsuits

What's the play?

Federal agencies facing multiple FOIA non-compliance lawsuits get a consolidated defense strategy based on agencies that successfully defended similar challenges. This targets agencies with patterns of FOIA litigation citing inadequate search methodology.

Why this works

12 lawsuits in Q1 is a pattern that signals systemic process failure. All cite the same issue (search methodology), which Reveal directly addresses. Providing successful defenses from 8 similar agencies offers immediate litigation support. Court-accepted protocols reduce future lawsuit risk.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, filing_date, case_type, parties
  2. Federal FOIA Request Disclosure Logs - agency_name, request_date, response_deadline, request_status
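
A minimal sketch of surfacing the pattern: count FOIA non-compliance suits per agency inside a quarter. The defendant_agency field is an assumption; in a real export you would derive it from the parties field, and filing_date is assumed to be ISO "YYYY-MM-DD" so string comparison works.

```python
from collections import Counter

def foia_suit_counts(docket_rows, start="2025-01-01", end="2025-03-31"):
    """Count FOIA suits per defendant agency filed inside the date window."""
    counts = Counter(
        row["defendant_agency"]                  # assumed; derive from `parties`
        for row in docket_rows
        if row.get("case_type") == "FOIA" and start <= row["filing_date"] <= end
    )
    return counts.most_common()
```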

The message:

Subject: Defense brief for your 12 FOIA lawsuits

Your 12 Q1 FOIA lawsuits all cite search methodology inadequacy. Pulled successful defenses from 8 agencies facing similar challenges plus court-accepted AI search protocols. Want the consolidated defense strategy?

PVP Public Data Strong (8.8/10)

Standard AI Workflow Documentation for Client Cases

What's the play?

ALSPs facing methodology challenges across multiple client cases receive standardized AI workflow documentation templates based on what courts accepted in recent similar cases. This targets ALSPs with patterns of discovery disputes questioning predictive coding transparency.

Why this works

The pattern (3 clients all facing methodology demands) indicates systemic ALSP process issues, not isolated incidents. Court-accepted documentation templates offer immediate tactical value. Standardization improves operational efficiency. Protecting multiple client relationships makes the stakes clear.

Data Sources
  1. PACER Federal Dockets - case_name, parties, motion_activity, docket_trends
  2. Lex Machina Litigation Analytics - litigation_metrics, case_outcomes, party_litigation_history

The message:

Subject: Standard AI workflow documentation for client cases

Your 3 challenged clients all face demands for detailed predictive coding methodology. Built standardized AI workflow docs based on what courts accepted in 11 recent cases. Want the documentation templates?

PQS Public Data Strong (8.7/10)

3 Discovery Motions Filed Against Your Firm This Quarter

What's the play?

Target law firms with multiple discovery methodology challenges in the same quarter, where opposing counsel consistently cites inability to verify AI-driven review decisions. PACER reveals patterns of discovery disputes across a firm's active dockets, indicating systemic process issues rather than isolated incidents.

Why this works

Pattern recognition across 3 specific named cases demonstrates thorough research. The insight (systemic risk vs. isolated incident) is valuable—suggests the firm's standard process is vulnerable. The tracking question is practical and non-confrontational.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, motion_type, filing_date, law_firm
  2. E-Discovery Case Database - discovery_issue, case_name, attorney_names, law_firm

The message:

Subject: 3 discovery motions filed against your firm this quarter

PACER shows 3 separate motions challenging your document production methodology across Johnson v. State Farm, the Martinez class action, and the Chen securities case. All three cite inability to verify AI-driven review decisions. Is someone tracking the pattern across these cases?

PVP Public Data Strong (8.7/10)

Backlog Reduction Plan for Your 347 Pending Requests

What's the play?

Federal agencies with large FOIA backlogs receive a backlog reduction roadmap based on agencies that reduced similar volumes by 60% in 6 months using defensible AI search. This targets agencies with publicly disclosed backlogs and increasing processing times.

Why this works

The specificity (347 requests) from public FOIA.gov data proves research. The 60% reduction outcome is compelling. Court-approved methodologies address the agency's litigation risk exposure. The planning tool has value independent of purchase.

Data Sources
  1. Federal FOIA Request Disclosure Logs - agency_name, request_date, request_status, response_deadline
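
A minimal sketch of the backlog math behind this message, assuming the FOIA log export carries a request_date (ISO format) and request_status per request:

```python
from datetime import date, datetime

def backlog_snapshot(rows, threshold_days=90, today=None):
    """Count pending requests and how many have been open past the threshold."""
    today = today or date.today()
    pending = [r for r in rows if r["request_status"].lower() == "pending"]
    overdue = [
        r for r in pending
        if (today - datetime.strptime(r["request_date"], "%Y-%m-%d").date()).days > threshold_days
    ]
    return {"pending": len(pending), "pending_over_threshold": len(overdue)}
```

Run it on two snapshots (say, November and February) and the quarter-over-quarter change cited in the message falls out directly.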

The message:

Subject: Backlog reduction plan for your 347 pending requests

Analyzed 4 agencies that reduced 300+ request backlogs by 60% in 6 months using defensible AI search. Built a roadmap based on their implementations and court-approved methodologies. Want the backlog reduction plan?

PVP Public Data Strong (8.7/10)

Remediation Plan for Your DOE Compliance Review

What's the play?

Universities under Department of Education compliance reviews for inadequate Title IX records preservation receive remediation frameworks based on universities with similar findings that achieved DOE approval. This targets institutions with published compliance findings and imminent remediation deadlines.

Why this works

The April 30th DOE deadline is specific and urgent. Providing approved remediation plans from 3 similar institutions reduces guesswork and risk. DOE acceptance is the key outcome—failure risks federal funding. The compliance framework has immediate value.

Data Sources
  1. Title IX Case Database - university_name, investigation_status, complaint_date, resolution_date

The message:

Subject: Remediation plan for your DOE compliance review

Your April 30th DOE remediation deadline requires documented records preservation improvements. Pulled accepted remediation plans from 3 universities with similar findings and DOE approval letters. Want the remediation framework?

PQS Public Data Strong (8.6/10)

12 FOIA Lawsuits Filed Against Your Agency in Q1

What's the play?

Target federal agencies with multiple FOIA non-compliance lawsuits in a short period, where plaintiff complaints consistently cite excessive delay and inadequate search methodology. PACER dockets reveal patterns of FOIA litigation indicating systemic agency process failures.

Why this works

12 lawsuits in Q1 is an alarming pattern that agency leadership cannot ignore. All cite search methodology—directly in Reveal's wheelhouse. The coordination question acknowledges complexity without being judgmental. The litigation defense need is immediate.

Data Sources
  1. PACER Federal Dockets - case_name, docket_number, filing_date, case_type, parties
  2. Federal FOIA Request Disclosure Logs - agency_name, request_date, response_deadline

The message:

Subject: 12 FOIA lawsuits filed against your agency in Q1

PACER shows 12 separate FOIA non-compliance lawsuits filed against your agency between January and March 2025. All cite excessive delay in records production and inadequate search methodology. Is someone coordinating the litigation defense strategy?

PVP Public Data Strong (8.6/10)

Legal Hold Protocol for Your 8 Title IX Cases

What's the play?

Universities managing multiple concurrent Title IX investigations with different preservation scopes and DOE deadlines receive a master legal hold tracker based on protocols from universities that passed DOE compliance reviews. This targets institutions with 990 disclosures of multiple active investigations.

Why this works

The specificity (8 concurrent investigations from 990 filing) proves research. Coordination across multiple investigations with different scopes is genuinely complex. DOE-approved protocols reduce compliance risk. The tracker provides immediate organizational value.

Data Sources
  1. Title IX Case Database - university_name, investigation_status, complaint_date
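
A minimal sketch of the trigger: count active investigations per institution and keep the ones juggling several at once. Field names follow the recipe above.

```python
from collections import Counter

def concurrent_investigation_targets(rows, minimum=3):
    """Universities with `minimum` or more active Title IX investigations."""
    active = Counter(
        row["university_name"]
        for row in rows
        if row["investigation_status"].lower() == "active"
    )
    return [(name, n) for name, n in active.most_common() if n >= minimum]
```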

The message:

Subject: Legal hold protocol for your 8 Title IX cases

Your 8 concurrent Title IX investigations each have different preservation scopes and DOE deadlines. Built a master legal hold tracker based on protocols from 5 universities that passed DOE compliance reviews. Want the tracker and protocol templates?

PQS Public Data Strong (8.5/10)

DOE Compliance Review at State University

What's the play?

Target universities with published Department of Education compliance review findings citing inadequate Title IX records preservation, where DOE ordered remediation plan submission with specific deadlines. This identifies institutions under regulatory pressure to demonstrate defensible document review processes.

Why this works

Published DOE findings (specific agency, date) prove research. The April 30th remediation deadline is urgent. Records preservation is the exact problem Reveal solves. The routing question (Is GC leading?) is simple and practical.

Data Sources
  1. Title IX Case Database - university_name, investigation_status, complaint_date, resolution_date

The message:

Subject: DOE compliance review at State University

Department of Education published compliance review findings on February 18th citing inadequate records preservation in 2 Title IX cases at your institution. DOE ordered remediation plan submission by April 30th. Is General Counsel leading the response?

PQS Public Data Strong (8.4/10)

SEC + DOJ Both Requesting Discovery from Acme Corp

What's the play?

Target public companies with 8-K disclosures of parallel SEC civil investigations and DOJ criminal inquiries both demanding electronic communications discovery. This identifies companies managing dual-track document production with different legal standards but overlapping document sets.

Why this works

Specific 8-K filing date proves research. Parallel SEC/DOJ pressure is high-stakes—civil plus potential criminal exposure. The defensibility concern is legitimate and urgent. The routing question is practical and easy to answer.

Data Sources
  1. SEC EDGAR Filings (8-K) - company_name, cik, filing_date, litigation_summary, case_description
  2. PACER Federal Dockets - case_name, case_type, parties

The message:

Subject: SEC + DOJ both requesting discovery from Acme Corp

Your 8-K filed January 22nd discloses a parallel SEC investigation and DOJ criminal inquiry, both demanding electronic communications. You're producing to two agencies with different standards using the same document set. Who's ensuring your AI coding methodology is defensible to both?

PQS Public Data Strong (8.4/10)

3 Discovery Motions Against Clients Using Your Services

What's the play?

Target ALSPs whose top client law firms are facing multiple discovery methodology challenges, where Lex Machina and Docket Navigator reveal higher TAR challenge rates versus peer ALSPs. This identifies ALSPs with client relationship risks due to defensibility gaps in their current AI review processes.

Why this works

Naming 3 specific clients demonstrates research. The pattern (all challenge predictive coding methodology) indicates systemic ALSP issues. AI workflow documentation is the gap Reveal fills. The tracking question is practical and non-threatening.

Data Sources
  1. PACER Federal Dockets - case_name, parties, motion_activity
  2. Lex Machina Litigation Analytics - litigation_metrics, case_outcomes, attorney_profiles

The message:

Subject: 3 discovery motions against clients using your services

PACER shows 3 motions to compel filed against your clients in February - TechCorp, BioPharma Inc, and Global Logistics. All three challenge predictive coding methodology and request detailed AI workflow documentation. Is your team tracking this pattern?

PQS Public Data Strong (8.3/10)

Your Agency's 347 Pending FOIA Requests

What's the play?

Target federal agencies with large FOIA request backlogs and increasing processing times visible through FOIA.gov public data. This identifies agencies under pressure to accelerate records production while maintaining defensible search methodologies.

Why this works

Specific numbers from FOIA.gov (347 requests, an increase of 89, processing time up from 47 to 63 days) prove thorough research. The trend is concerning and verifiable, and the rising processing time shows the problem is growing. The routing question is practical.

Data Sources
  1. Federal FOIA Request Disclosure Logs - agency_name, request_date, request_status, response_deadline, document_count

The message:

Subject: Your agency's 347 pending FOIA requests

FOIA.gov shows your agency has 347 requests pending over 90 days as of February 2025 - up 89 requests from November. Your average processing time increased from 47 to 63 days this quarter. Who's managing the backlog reduction plan?

PQS Public Data Strong (8.2/10)

Your Title IX Investigation Disclosure in the 990

What's the play?

Target universities with Form 990 disclosures of multiple active Title IX investigations, where Department of Education opened cases with document preservation orders. This identifies institutions managing multiple concurrent investigations requiring coordinated legal hold processes.

Why this works

Specific 990 disclosure (8 investigations, 3 opened in Q4 2024) proves research. Legal hold compliance across 8 cases is genuinely complex. The coordination question is practical and acknowledges operational reality without being judgmental.

Data Sources
  1. Title IX Case Database - university_name, investigation_status, complaint_date

The message:

Subject: Your Title IX investigation disclosure in the 990

Your 2024 Form 990 Schedule O discloses 8 active Title IX investigations at State University. Department of Education opened 3 of those cases in Q4 2024 with document preservation orders. Who's managing legal hold across all 8 investigations?

What Changes

Old way: Spray generic messages at job titles. Hope someone replies.

New way: Use public data to find companies in specific painful situations. Then mirror that situation back to them with evidence.

Why this works: When you lead with "Your Johnson v. State Farm case (Case 2:24-cv-01847) has a TAR protocol challenge with an April 12th court deadline" instead of "I see you're hiring for legal roles," your message isn't just another sales email. You're the person who did the homework.

The messages above aren't templates. They're examples of what happens when you combine real data sources with specific situations. Your team can replicate this using the data recipes in each play.

Data Sources Reference

Every play traces back to verifiable public data. Here are the sources used in this playbook:

PACER (Public Access to Court Electronic Records)
Key fields: case_name, party_names, docket_number, filing_date, judge, case_type, case_status, document_filings
Used for: Identifying active federal litigation where document discovery is required; tracking discovery motions and court orders

SEC EDGAR Filings (Form 8-K, 10-K Litigation Disclosures)
Key fields: company_name, cik, filing_date, litigation_summary, estimated_exposure, case_description, party_names
Used for: Finding public companies with material litigation and regulatory investigations requiring defensible document review

Federal FOIA Request Disclosure Logs (All Agencies)
Key fields: agency_name, request_date, subject_matter, request_status, response_deadline, document_count
Used for: Tracking federal agencies managing high volumes of FOIA requests needing transparent document classification

OCC/FDIC Enforcement Actions and Warning Letters
Key fields: bank_name, location, enforcement_type, action_date, violation_categories, required_remediation
Used for: Identifying banks under enforcement action needing defensible document discovery for compliance reviews

E-Discovery Case Database (ediscoverylaw.com)
Key fields: case_name, court, parties, document_dispute, ruling_date, discovery_issue, attorney_names, law_firm
Used for: Finding law firms and cases where opposing counsel challenged discovery methodology

Docket Navigator Litigation Analytics
Key fields: case_summary, parties, attorneys, court, docket_trends, recent_orders, motion_activity
Used for: Tracking law firms with high motion activity in discovery disputes

Bloomberg Law Litigation Analytics
Key fields: litigation_type, law_firm, parties, judge, outcomes, settlement_data, litigation_trends
Used for: Identifying firms handling large discovery projects and litigation outcomes

Title IX Case Database and University Investigations
Key fields: university_name, investigation_status, complaint_date, resolution_date, violation_type
Used for: Finding universities under Title IX investigation needing defensible document review

Lex Machina Litigation Analytics Platform
Key fields: litigation_metrics, attorney_profiles, judge_patterns, case_outcomes, party_litigation_history
Used for: Tracking law firms and ALSPs with discovery methodology challenges and reversals