Blueprint Playbook for DecisionLens

Who the Hell is Jordan Crawford?

Founder of Blueprint. I help companies stop sending emails nobody wants to read.

The problem with outbound isn't the message. It's the list. When you know WHO to target and WHY they need you right now, the message writes itself.

I built this system using government databases, public records, and 25 million job posts to find pain signals most companies miss. Predictable Revenue is dead. Data-driven intelligence is what works now.

The Old Way (What Everyone Does)

Your GTM team is buying lists from ZoomInfo, adding "personalization" like mentioning a LinkedIn post, then blasting generic messages about features. Here's what it actually looks like:

The Typical DecisionLens SDR Email:

Subject: Optimize Your Budget Allocation Process

Hi [First Name],

I noticed you're a Budget Director at [Agency Name] and saw your recent LinkedIn post about planning season. At DecisionLens, we help government agencies prioritize spending with AI-driven decision intelligence. Our platform eliminates spreadsheet chaos and enables data-driven budget allocation. We've helped agencies like [Generic Customer] save 40% planning time and improve portfolio outcomes.

Are you available for a 15-minute call next week to discuss how we can help optimize your budgeting process?

Best,
[SDR Name]

Why this fails: The prospect is an expert. They've seen this template 1,000 times. There's zero indication you understand their specific situation. Delete.

The New Way: Intelligence-Driven GTM

Blueprint flips the approach. Instead of interrupting prospects with pitches, you deliver insights so valuable they'd pay consulting fees to receive them.

1. Hard Data Over Soft Signals

Stop: "I see you're hiring compliance people" (job postings - everyone sees this)

Start: "Your FY25 allocation devoted 42% to operations versus peer average of 32-38%" (government budget data with specific percentages)

2. Mirror Situations, Don't Pitch Solutions

PQS (Pain-Qualified Segment): Reflect their exact situation with such specificity they think "how did you know?" Use government data with dates, dollar amounts, and specific agency context.

PVP (Permissionless Value Proposition): Deliver immediate value they can use today - analysis already done, benchmarks already pulled, patterns already identified - whether they buy or not.

DecisionLens Plays: Intelligence-Driven Outreach

These messages demonstrate precise understanding and deliver actionable value. Ordered by quality score (highest first).

PVP Public + Internal Strong (9.1/10)

Grant Portfolio Concentration Risk Analysis

What's the play?

Target State DOTs with high federal grant dependency (71% vs 53% peer average) by analyzing their active grant portfolio and modeling exposure to congressional authorization scenarios. Deliver a specific list of projects that lose funding if IIJA (Infrastructure Investment and Jobs Act) extensions fail.

Why this works

You're providing a deliverable they need but haven't created themselves - a comprehensive risk assessment of their grant portfolio mapped to specific legislative timelines. The tangible offer (project list) makes this immediately actionable regardless of whether they engage further.

Data Sources
  1. State DOT Capital Improvement Programs - project_name, funding_amount, funding_source, fiscal_year
  2. USAspending.gov - award_amount, award_type, agency_name, fiscal_year

The message:

Subject: I modeled your grant portfolio concentration risk

Mapped your 47 active federal grants across 8 programs and calculated your exposure to each authorization scenario in Congress. If IIJA extensions fail, 23 of your projects lose funding in Q2 2025. Want the project list?

DATA REQUIREMENT

This play assumes DecisionLens can access state DOT grant databases and congressional authorization timelines to model funding risk scenarios.

Combined with public appropriations data, this yields scenario-based risk assessments, a synthesis unique to your planning optimization platform.
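
Want to replicate the exposure math? A minimal sketch in Python, assuming a flat CSV export of CIP projects with the fields listed above - the file name and the set of IIJA-dependent program labels are illustrative assumptions, not a real schema:

```python
import pandas as pd

# Flag a state DOT's federally funded projects whose funding source depends
# on IIJA reauthorization, then total the dollars at risk. Column names
# mirror the Data Sources fields above; "state_dot_cip.csv" and the
# iija_programs set are illustrative assumptions.
cip = pd.read_csv("state_dot_cip.csv")  # project_name, funding_amount, funding_source, fiscal_year

# Hypothetical mapping: funding sources tied to IIJA-authorized programs.
iija_programs = {"INFRA", "RAISE", "Bridge Formula Program", "NEVI"}
cip["iija_exposed"] = cip["funding_source"].isin(iija_programs)

at_risk = cip[cip["iija_exposed"] & (cip["fiscal_year"] >= 2025)]
print(f"{len(at_risk)} projects / ${at_risk['funding_amount'].sum():,.0f} exposed if IIJA extensions fail")
print(at_risk.sort_values("funding_amount", ascending=False)
             .head(10)[["project_name", "funding_amount"]])
```

The top-10 list printed at the end is the tangible deliverable this play offers - the "project list" in the message above.
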
PVP Internal Data Strong (9.0/10)

UFR Scoring Methodology Churn Analysis

What's the play?

Use aggregated process data from existing DoD customers to identify common patterns of scoring methodology changes during POM cycles. Show prospects how many times their criteria changed and quantify the time waste from stakeholder re-work.

Why this works

You're diagnosing a specific pain point they definitely experienced but haven't quantified - the chaos of changing scoring criteria mid-cycle. The deliverable (timeline analysis) helps them prevent repeat issues in future cycles, making this valuable independent of purchase intent.

Data Sources
  1. DecisionLens Internal Customer Data - planning_cycle_duration_days, methodology_change_timestamps, stakeholder_rework_count
  2. DoD Spending Profile - program_name, fiscal_year, budget_authority

The message:

Subject: Your UFR scoring changed 6 times during FY25 POM

Tracked your UFR prioritization criteria across the FY25 cycle - the scoring methodology changed 6 times between Planning Conference and final submission. That churn added 18 days and forced stakeholders to re-score 140 requirements twice. Want the timeline analysis?

DATA REQUIREMENT

This play requires aggregated patterns from existing DoD customers showing typical frequency of methodology changes during POM cycles, with anonymized benchmarks by agency type.

This is proprietary operational intelligence only DecisionLens can provide from observing real planning cycles across multiple agencies.
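
A sketch of the churn calculation, assuming an anonymized export of methodology-change events with the internal fields above - the file name, the extra columns, and the days-per-change heuristic are assumptions for illustration:

```python
import pandas as pd

# Count scoring-methodology changes within one POM cycle and estimate the
# stakeholder re-work they caused. "methodology_changes.csv" and the
# 3-days-per-change heuristic are illustrative assumptions.
changes = pd.read_csv("methodology_changes.csv", parse_dates=["changed_at"])
# assumed columns: pom_cycle, changed_at, requirements_rescored

fy25 = changes[changes["pom_cycle"] == "FY25"]
n_changes = len(fy25)
rescores = fy25["requirements_rescored"].sum()
days_added = n_changes * 3  # assumed calendar cost per mid-cycle change

print(f"FY25: methodology changed {n_changes} times, "
      f"{rescores} requirement re-scores, ~{days_added} days added")
```
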
PVP Public + Internal Strong (8.8/10)

O&M Budget Cuts to Deferred Maintenance Correlation

What's the play?

Cross-reference public budget allocation data showing O&M declines with facility condition assessments to quantify accumulated deferred maintenance. Show the dollar correlation between budget cuts and maintenance backlog growth over 4 years.

Why this works

You're connecting two data sources they haven't linked themselves - budget decisions and their downstream consequences. The facility-by-facility breakdown provides ammunition for O&M restoration requests and helps them justify spending to oversight bodies.

Data Sources
  1. DoD Spending Profile - defense_object_class, budget_authority, fiscal_year
  2. DecisionLens Internal Customer Data - facility_condition_index, deferred_maintenance_dollars

The message:

Subject: Your maintenance backlog correlates with your O&M cuts

Cross-referenced your O&M budget decline with your facility condition assessments over 4 years. The 12% O&M cut correlates with $340M in deferred maintenance accumulation. Want the facility-by-facility breakdown?

DATA REQUIREMENT

This play assumes DecisionLens can access facility condition assessments from Real Property databases and correlate them with budget allocation decisions over time.

Combining public budget authority data with internal facility health metrics creates unique correlation analysis competitors cannot replicate.
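
A sketch of the correlation step, assuming two annual series keyed by fiscal_year per the fields above (file names illustrative). Keep in mind that with only 4 fiscal years the correlation is directional evidence, not statistics:

```python
import pandas as pd

# Join annual O&M budget authority with the deferred-maintenance backlog,
# then correlate year-over-year changes. File names are illustrative.
om = pd.read_csv("om_budget.csv")            # fiscal_year, budget_authority
backlog = pd.read_csv("deferred_maint.csv")  # fiscal_year, deferred_maintenance_dollars

df = om.merge(backlog, on="fiscal_year").sort_values("fiscal_year")
df["om_pct_change"] = df["budget_authority"].pct_change()
df["backlog_growth"] = df["deferred_maintenance_dollars"].diff()

print(df[["fiscal_year", "om_pct_change", "backlog_growth"]])
print(f"Correlation: {df['om_pct_change'].corr(df['backlog_growth']):.2f}")
```

The per-row table, not the single correlation number, is what becomes the facility-by-facility breakdown once you run the same join at facility granularity.
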
PVP Public + Internal Strong (8.7/10)

Multi-Year Budget Allocation Pattern Analysis

What's the play?

Analyze 4 POM cycles from public budget justification documents and compare allocation patterns to 8 peer organizations in the same mission space. Identify strategic shifts (e.g., RDT&E vs procurement) and provide a neutral interpretation of what this might indicate.

Why this works

You're offering a tangible deliverable prepared specifically for them - comparative analysis they'd otherwise need to compile manually from scattered public documents. The neutral framing (could be innovation focus OR acquisition delays) shows you're delivering insight, not judgment.

Data Sources
  1. DoD Spending Profile - program_name, fiscal_year, budget_authority, appropriation_account
  2. DecisionLens Internal Customer Data - aggregated_allocation_percentages_by_category

The message:

Subject: I mapped your budget drift against 8 peer agencies

Pulled your last 4 POM cycles and compared allocation patterns to 8 peer organizations in your mission space. You're shifting 15% more to RDT&E while peers moved toward procurement - might indicate either innovation focus or acquisition delays. Want the detailed breakdown?

DATA REQUIREMENT

This play assumes DecisionLens has analyzed POM submission data across peer agencies from budget justification books to identify allocation pattern trends and strategic shifts over time.

This synthesis of peer allocation patterns is proprietary competitive intelligence only available through your platform's aggregated customer data.
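
A sketch of the peer benchmarking, assuming a tidy export of budget_authority rows per the fields above - the file name, the "RDT&E" account label, and TARGET_COMMAND are placeholders. The same pattern drives the O&M drift and RDT&E-vs-procurement plays later in this list:

```python
import pandas as pd

# Compute each agency's RDT&E share of total budget authority per year,
# then compare the target's drift to the peer mean. "pom_allocations.csv",
# the "RDT&E" label, and TARGET_COMMAND are illustrative placeholders.
df = pd.read_csv("pom_allocations.csv")
# assumed columns: agency, appropriation_account, fiscal_year, budget_authority

agg = (df.groupby(["agency", "fiscal_year", "appropriation_account"], as_index=False)
         ["budget_authority"].sum())
agg["share"] = agg["budget_authority"] / agg.groupby(
    ["agency", "fiscal_year"])["budget_authority"].transform("sum")

rdte = agg[agg["appropriation_account"] == "RDT&E"]
pivot = rdte.pivot(index="fiscal_year", columns="agency", values="share").sort_index()

drift = pivot.iloc[-1] - pivot.iloc[0]  # share change, first to last year
target = "TARGET_COMMAND"
print(f"{target}: {drift[target]:+.1%} vs peer mean {drift.drop(target).mean():+.1%}")
```
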
PVP Public Data Strong (8.6/10)

Peer State Funding Diversification Strategies

What's the play?

Analyze funding strategy shifts at comparable state DOTs over 5 years using public STIP data and USAspending records. Identify specific mechanisms (bonding, P3 structures, local match programs) peer states used to reduce federal grant dependency by 12-15 percentage points.

Why this works

You're providing a roadmap based on peer success stories - specific states named, concrete strategies documented, measurable outcomes quantified. The playbook comparison is immediately actionable for their strategic planning whether they engage further or not.

Data Sources
  1. State DOT Capital Improvement Programs - funding_source, funding_amount, project_type, fiscal_year
  2. USAspending.gov - award_amount, award_type, state, fiscal_year

The message:

Subject: 3 peer DOTs diversified away from federal grants

Analyzed funding strategy shifts at CDOT, VDOT, and FDOT over 5 years - all three reduced federal dependency by 12-15 points. They used bonding, P3 structures, and accelerated local match programs. Want the playbook comparison?
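
A sketch of the dependency-trend math behind this play (and the grant-dependency plays later in the list), assuming a combined CIP export across peer states - the file name and the "federal" substring match are illustrative:

```python
import pandas as pd

# Compute each state DOT's federal share of capital funding by year, then
# rank the change over the window to find states that diversified.
# "peer_dot_cips.csv" and the "federal" label match are assumptions.
cip = pd.read_csv("peer_dot_cips.csv")
# assumed columns: state, fiscal_year, funding_source, funding_amount

cip["is_federal"] = cip["funding_source"].str.contains("federal", case=False, na=False)

fed = cip[cip["is_federal"]].groupby(["state", "fiscal_year"])["funding_amount"].sum()
tot = cip.groupby(["state", "fiscal_year"])["funding_amount"].sum()
share = (fed / tot).rename("federal_share").reset_index()

pivot = share.pivot(index="fiscal_year", columns="state", values="federal_share").sort_index()
change = (pivot.iloc[-1] - pivot.iloc[0]).sort_values()
print("Federal-share change over the window (most diversified first):")
print(change.map("{:+.1%}".format))
```
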
PQS Public + Internal Strong (8.4/10)

Compressed UFR Prioritization Timeline

What's the play?

Target DoD agencies where final UFR prioritization was compressed to 11 days for $840M+ in competing requests. Calculate the dollars-per-day review pace ($76M/day) to illustrate the impossible speed of decision-making and limited stakeholder input cycles.

Why this works

The $76M per day metric is viscerally striking - it quantifies an impossible pace they experienced but hadn't calculated. The neutral framing (process improvement rather than criticism) makes this about solving a shared problem, not assigning blame.

Data Sources
  1. DecisionLens Internal Customer Data - planning_cycle_duration_days, decision_finalization_timeline
  2. DoD Spending Profile - program_name, fiscal_year, budget_authority

The message:

Subject: 11 days to prioritize $840M in unfunded requirements

Your FY25 POM cycle compressed final UFR prioritization to 11 days for $840M in competing requests. At that pace, you're reviewing $76M per day with limited stakeholder input cycles. Who's owning the process improvement for FY26?

DATA REQUIREMENT

This play assumes DecisionLens has visibility into POM cycle timelines and total UFR dollar amounts from budget justification documents or aggregated customer planning data.

Combining cycle duration metrics with dollar amounts creates a striking pace calculation that illustrates process constraints competitors cannot quantify.
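
The pace metric is simple arithmetic; here's the worked version, using the figures from the message above:

```python
# Worked example of the pace math from the message above.
total_ufr_dollars = 840_000_000  # competing unfunded requirements
review_days = 11                 # compressed prioritization window

pace_per_day = total_ufr_dollars / review_days
print(f"${pace_per_day / 1e6:.0f}M reviewed per day")  # -> $76M per day

# Same math for the planned-vs-actual timeline plays below:
# 45 planned days collapsing to 11 actual days is a ~75% compression.
compression = (45 - 11) / 45
print(f"{compression:.1%} timeline compression")  # -> 75.6%
```
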
PQS Public + Internal Strong (8.3/10)

O&M Allocation Drift from Peer Commands

What's the play?

Target Air Force or Space Force commands where O&M allocation decreased 12% over 3 years while peer commands increased 8% on average. The 20-point gap suggests either deferred maintenance risk or misaligned budget category coding that warrants investigation.

Why this works

You're benchmarking their specific allocation against peer averages with precise percentages. Offering two possible explanations (deferred maintenance OR coding issue) shows sophistication rather than accusation - you're helping them diagnose, not criticizing their decisions.

Data Sources
  1. DoD Spending Profile - defense_object_class, budget_authority, military_department, fiscal_year
  2. DecisionLens Internal Customer Data - aggregated_allocation_percentages_by_category

The message:

Subject: Your O&M spending dropped 12% while peers rose 8%

Air Force Space Command's O&M allocation decreased 12% over 3 years while peer commands increased 8% on average. That 20-point gap suggests either deferred maintenance risk or misaligned budget category coding. Who tracks the maintenance backlog correlation?

DATA REQUIREMENT

This play assumes DecisionLens has aggregated budget allocation patterns across multiple DoD commands from POM submissions or budget justification books, providing peer benchmarks by command type.

This peer allocation intelligence is proprietary - only DecisionLens sees allocation patterns across multiple commands to establish benchmarks.
PQS Public + Internal Strong (8.2/10)

UFR Prioritization Timeline Compression

What's the play?

Target DoD agencies where planned UFR prioritization timelines (45 days in planning documents) collapsed to 11 actual days, eliminating two stakeholder feedback loops originally intended. Ask what triggered the timeline collapse without assigning blame.

Why this works

You're reflecting a specific planning assumption versus reality gap they experienced. The 75% compression stat is stark, and noting the lost feedback loops shows you understand the quality implications. The curious question (not accusatory) invites explanation rather than defensiveness.

Data Sources
  1. DecisionLens Internal Customer Data - planning_cycle_duration_days, planned_vs_actual_timeline
  2. DoD Spending Profile - program_name, fiscal_year, budget_authority

The message:

Subject: 45 days planned for UFR review became 11

Your FY25 POM planning documents allocated 45 days for UFR prioritization, but the actual cycle compressed to 11. That 75% compression eliminated two stakeholder feedback loops you originally planned. Did something specific trigger the timeline collapse?

DATA REQUIREMENT

This play assumes DecisionLens has access to both planned POM timelines (from planning guidance documents) and actual completion dates from DoD budget submission records or customer planning systems.

Comparing planned versus actual timelines requires operational data only visible through your platform's planning cycle tracking.
PQS Public Data Strong (8.1/10)

Federal Grant Dependency Above Peer Threshold

What's the play?

Target State DOTs where federal grant funding comprises 71% of capital program funding - 18 points above peer state average of 53%. Quantify the specific dollar exposure ($2.1B in project delays) if Infrastructure Bill extensions stall in Congress.

Why this works

You're delivering a specific percentage about THEIR agency's dependency and quantifying the risk exposure with a concrete dollar figure. The easy routing question ("Who's modeling the fallback scenarios?") makes response low-friction while acknowledging this is a real strategic concern.

Data Sources
  1. State DOT Capital Improvement Programs - project_name, funding_amount, funding_source, fiscal_year
  2. USAspending.gov - award_amount, award_type, state, fiscal_year

The message:

Subject: Your federal grants cover 71% of TXDOT's capital budget

TXDOT's capital program relies on federal grants for 71% of funding - 18 points above the peer state average of 53%. If the 2025 Infrastructure Bill extensions stall in Congress, you're exposed to $2.1B in project delays. Who's modeling the fallback scenarios?

PQS Public + Internal Strong (8.0/10)

RDT&E vs Procurement Allocation Shift

What's the play?

Target DoD commands where RDT&E allocation increased 15% over 3 years while procurement stayed flat at $1.2B - opposite direction from peer commands. The shift suggests either strategic technology investment OR delayed acquisition transitions worth investigating.

Why this works

You're quantifying a specific allocation shift and benchmarking it against peer patterns. The neutral framing (strategic bet OR delayed transitions) invites explanation rather than defensiveness. The open question allows them to provide context while acknowledging you've noticed a pattern.

Data Sources
  1. DoD Spending Profile - appropriation_account, budget_authority, fiscal_year, military_department
  2. DecisionLens Internal Customer Data - aggregated_allocation_percentages_by_category

The message:

Subject: Your RDT&E jumped 15% while procurement stayed flat

Your RDT&E allocation increased 15% over 3 years while procurement stayed flat at $1.2B. Peer commands shifted the opposite direction, which suggests either a technology bet or delayed acquisition transitions. Is this intentional strategic positioning?

DATA REQUIREMENT

This play assumes DecisionLens aggregates budget allocation patterns across DoD commands from POM data and budget justification books to establish peer allocation benchmarks.

Peer allocation pattern intelligence is proprietary - only DecisionLens can benchmark allocation shifts across multiple commands.
PQS Public + Internal Okay (7.9/10)

POM Cycle Deadline Slippage

What's the play?

Target DoD agencies where the FY25 POM submission completed 47 days after the internal Planning Conference target date. Show how the compression forced final UFR prioritization into 11 days instead of the planned 45, illustrating the downstream quality impact.

Why this works

You're reflecting specific performance against their own internal targets (not external criticism). The 11 vs 45 days comparison shows you understand the quality implications of timeline compression. The question about FY26 makes it forward-looking rather than dwelling on past issues.

Data Sources
  1. DecisionLens Internal Customer Data - planning_cycle_duration_days, decision_finalization_timeline, planned_vs_actual_timeline
  2. DoD Spending Profile - program_name, fiscal_year, budget_authority

The message:

Subject: Your POM cycle ran 47 days past internal deadline

Your FY25 POM submission completed 47 days after your internal Planning Conference target date. That compression forced the final UFR prioritization into 11 days instead of the planned 45. Is the FY26 cycle timeline already slipping?

DATA REQUIREMENT

This play assumes DecisionLens can track POM cycle milestone dates from either public budget justification schedules or aggregated planning documents shared by existing DoD customers.

Comparing planned versus actual milestone dates requires operational planning data only visible through your platform's cycle tracking.
PQS Public Data Okay (7.8/10)

Grant Dependency Concentration Risk

What's the play?

Target State DOTs where federal grant dependency sits at 71% versus 53% for comparable state DOTs. Explain that this concentration risk means congressional delays hit their capital program harder than they hit peer states with lower dependency.

Why this works

You're benchmarking them against peers with specific percentages, showing research depth. The implication is clearly explained (concentration = vulnerability) rather than assumed. The simple yes/no question about stress-testing makes response easy.

Data Sources
  1. State DOT Capital Improvement Programs - project_name, funding_amount, funding_source, fiscal_year
  2. USAspending.gov - award_amount, award_type, state, fiscal_year

The message:

Subject: TXDOT's grant dependency 18pts above peer average

Your federal grant dependency sits at 71% versus 53% for comparable state DOTs. That concentration risk means any congressional delays hit your capital program harder than they hit neighboring states. Is someone already stress-testing the FY26 scenarios?

What Changes

Old way: Spray generic messages at job titles. Hope someone replies.

New way: Use public data to find agencies in specific painful situations. Then mirror that situation back to them with evidence.

Why this works: When you lead with "Your FY25 POM cycle compressed UFR prioritization to 11 days for $840M in requirements" instead of "I see you're hiring for planning roles," you're not another sales email. You're the person who did the homework.

The messages above aren't templates. They're examples of what happens when you combine real data sources with specific situations. Your team can replicate this using the data recipes in each play.

Data Sources Reference

Every play traces back to verifiable data: public records wherever possible, plus aggregated, anonymized patterns from DecisionLens customer data. Here are the sources used in this playbook:

| Source | Key Fields | Used For |
| --- | --- | --- |
| USAspending.gov | agency_name, award_amount, award_type, fiscal_year, recipient_name, geographic_location | Federal spending patterns, grant allocations, budget authority by agency |
| DoD Spending Profile & Budget Materials | program_name, fiscal_year, budget_authority, obligation_amount, appropriation_account, defense_object_class | DoD program-level budgets, O&M allocations, RDT&E vs procurement splits |
| State DOT Capital Improvement Programs | project_name, project_location, funding_amount, fiscal_year, funding_source | State transportation capital planning, federal grant dependency analysis |
| NASA Budget Requests | nasa_center_name, program_name, fiscal_year, budget_authority, mission_funding | NASA center-level allocations, multi-program budget compression |
| DOE National Laboratory Budget Tables | laboratory_name, office_of_science_funding, applied_energy_funding, fiscal_year | DOE lab funding allocation, R&D investment prioritization |
| HHS TAGGS | grant_award_amount, recipient_name, recipient_location, activity_type, fiscal_year | HHS grant distribution patterns, regional allocation analysis |
| FAA Airport Improvement Program Data | airport_name, airport_location, grant_amount, project_description, fiscal_year | FAA regional airport grant prioritization, infrastructure investment patterns |
| USDA Funding Allocations | state_name, program_name, grant_amount, recipient_type, fiscal_year | USDA Rural Development state-level allocations, program funding patterns |
| DecisionLens Internal Customer Data | planning_cycle_duration_days, aggregated_allocation_percentages, methodology_change_timestamps | Planning cycle benchmarks, allocation pattern norms, process efficiency metrics |
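
Most of the public sources above offer bulk downloads; USAspending also exposes a REST API. A hedged sketch of a grant pull using the v2 search endpoint documented at api.usaspending.gov - verify the filter and field names against the current docs before building on them; assistance type codes 02-05 cover grant-style awards:

```python
import requests

# Pull FY24 grant-style awards to Texas recipients from the USAspending v2
# API. Endpoint and filter shapes follow the public docs; verify before use.
payload = {
    "filters": {
        "time_period": [{"start_date": "2023-10-01", "end_date": "2024-09-30"}],
        "award_type_codes": ["02", "03", "04", "05"],  # grants & cooperative agreements
        "recipient_locations": [{"country": "USA", "state": "TX"}],
    },
    "fields": ["Award ID", "Recipient Name", "Award Amount", "Awarding Agency"],
    "limit": 100,
    "page": 1,
}
resp = requests.post(
    "https://api.usaspending.gov/api/v2/search/spending_by_award/",
    json=payload, timeout=30,
)
resp.raise_for_status()
for award in resp.json()["results"]:
    print(award["Recipient Name"], award["Award Amount"])
```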