Founder of Blueprint. I help companies stop sending emails nobody wants to read.
The problem with outbound isn't the message. It's the list. When you know WHO to target and WHY they need you right now, the message writes itself.
I built this system using government databases, public records, and 25 million job posts to find pain signals most companies miss. Predictable Revenue is dead. Data-driven intelligence is what works now.
Your GTM team is buying lists from ZoomInfo, adding "personalization" like mentioning a LinkedIn post, then blasting generic messages about features. Here's what it actually looks like:
The Typical Sesami SDR Email:
Why this fails: The prospect is an expert. They've seen this template 1,000 times. There's zero indication you understand their specific situation. Delete.
Blueprint flips the approach. Instead of interrupting prospects with pitches, you deliver insights so valuable they'd pay consulting fees to receive them.
Stop: "I see you're hiring compliance people" (job postings - everyone sees this)
Start: "Your branch count jumped from 47 to 52 locations between July and September per FDIC filings" (government database with exact numbers)
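The "Start" example above is just a count diff across quarterly snapshots. A minimal sketch, assuming you've already pulled per-branch rows from FDIC BankFind — the `quarter` field shape and the jump threshold are illustrative, not the API's actual schema:

```python
from collections import Counter

def branch_counts_by_quarter(locations):
    """Count branch rows per quarterly snapshot.

    `locations` is a list of dicts; each row is one branch observed
    in one snapshot. Field name `quarter` is a hypothetical shape.
    """
    return Counter(row["quarter"] for row in locations)

def branch_jumps(counts, min_delta=3):
    """Yield (prev_quarter, quarter, delta) where the branch count
    grew by at least `min_delta` between consecutive snapshots."""
    quarters = sorted(counts)
    for prev, cur in zip(quarters, quarters[1:]):
        delta = counts[cur] - counts[prev]
        if delta >= min_delta:
            yield prev, cur, delta

# Hypothetical snapshots: 47 branches in 2024Q2, 52 in 2024Q3.
snapshots = [{"quarter": "2024Q2"}] * 47 + [{"quarter": "2024Q3"}] * 52
counts = branch_counts_by_quarter(snapshots)
print(list(branch_jumps(counts)))  # [('2024Q2', '2024Q3', 5)]
```

The point isn't the code — it's that the "47 to 52" claim is mechanically reproducible from public data, which is what makes it safe to put in an email.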
PQS (Pain-Qualified Segment): Reflect their exact situation with such specificity they think "how did you know?" Use government data with dates, record numbers, facility addresses.
PVP (Permissionless Value Proposition): Deliver immediate value they can use today - analysis already done, deadlines already pulled, patterns already identified - whether they buy or not.
These messages demonstrate precise understanding of the prospect's situation (PQS) or deliver immediate actionable value (PVP). Every claim traces to verifiable data sources.
Track actual store closing times vs posted hours across multi-location retailers to identify cash reconciliation delays. Late closing patterns signal manual end-of-day cash handling bottlenecks.
This is creepy-specific operational intelligence the prospect can verify immediately. Showing them store-by-store variance reveals labor cost inefficiencies they didn't know existed. The specificity proves you've done real research, not just pulled LinkedIn data.
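The closing-time play reduces to a median-lag computation per store. A sketch under assumed inputs — the observation shape and the 30-minute threshold are hypothetical, and it assumes the actual close lands on the same calendar day as the posted close:

```python
from datetime import datetime
from statistics import median

def close_lag_minutes(posted, actual):
    """Minutes between posted closing time and last observed activity.

    Both arguments are "HH:MM" strings; same-day close is assumed.
    """
    fmt = "%H:%M"
    delta = datetime.strptime(actual, fmt) - datetime.strptime(posted, fmt)
    return delta.total_seconds() / 60

def late_closing_stores(observations, threshold_min=30):
    """Flag stores whose median close lag exceeds `threshold_min`.

    `observations` maps store_id -> list of (posted_close, actual_close).
    """
    flagged = {}
    for store, pairs in observations.items():
        lag = median(close_lag_minutes(p, a) for p, a in pairs)
        if lag > threshold_min:
            flagged[store] = lag
    return flagged

obs = {
    "store_114": [("21:00", "21:55"), ("21:00", "21:40")],
    "store_115": [("21:00", "21:05")],
}
print(late_closing_stores(obs))  # {'store_114': 47.5}
```

A store closing 47 minutes late, every night, is a labor-cost line item the prospect can verify against their own scheduling data.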
This play requires aggregated cash processing metrics (median processing time, error rates, labor hours) from 10+ comparable retail customers in the same region and business segment, segmented by location type and transaction volume.
Combined with location monitoring or posted hours tracking to identify late closing patterns. This synthesis is unique to your operational visibility.

Use aggregated cash reconciliation timing data to identify specific retail locations that are significantly slower than peers. Pinpoint exact stores where manual processes are creating labor cost drains.
Extremely specific to one location with immediately actionable intelligence. The prospect can verify this today and route internally. This isn't generic pain mirroring - it's surgical identification of a specific operational problem they likely didn't know about.
This play requires aggregated transaction timestamp data or POS close-out timing across retail locations to benchmark reconciliation speed by store.
This is proprietary operational data only you have - competitors cannot replicate location-specific performance benchmarking.

Monitor deposit timestamp patterns to identify retail locations completing cash reconciliation after midnight, signaling overtime labor waste and process inefficiencies.
Specific stores identified with immediately verifiable data. The overtime implication hits labor KPIs directly. Showing deposit data that isn't publicly exposed proves differentiated insight access.
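The after-midnight signal is simple to compute once you hold per-location deposit timestamps. A minimal sketch, assuming ISO-format timestamps; the location IDs and the 4 a.m. window are illustrative:

```python
from collections import defaultdict
from datetime import datetime

def after_midnight_rate(deposits, end_hour=4):
    """Share of each location's deposits timestamped between 00:00 and
    `end_hour` - a proxy for reconciliation finishing after hours.

    `deposits` is a list of (location_id, iso_timestamp) pairs.
    """
    total = defaultdict(int)
    late = defaultdict(int)
    for loc, ts in deposits:
        total[loc] += 1
        if datetime.fromisoformat(ts).hour < end_hour:
            late[loc] += 1
    return {loc: late[loc] / total[loc] for loc in total}

deposits = [
    ("store_A", "2024-09-03T00:45:00"),  # closed out after midnight
    ("store_A", "2024-09-04T19:10:00"),
    ("store_B", "2024-09-03T18:30:00"),
]
print(after_midnight_rate(deposits))  # {'store_A': 0.5, 'store_B': 0.0}
```

A location with half its close-outs landing after midnight is paying overtime for a process problem — that's the sentence that goes in the email.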
This play requires access to deposit timestamp data or late-night transaction monitoring across retailer locations to identify after-hours reconciliation patterns.
Combined with public retailer licensing data. This synthesis reveals operational inefficiencies invisible to competitors.

Track actual opening times vs posted hours to identify locations with consistent delays tied to manual cash drawer preparation and safe-to-register transfer bottlenecks.
Specific store and specific metric that's immediately verifiable. The operational implication (cash prep delays) is insightful and actionable. The creepy-but-valuable factor demonstrates serious research investment.
This play requires location monitoring or posted hours tracking to identify late opening patterns across retail locations.
Combined with public retailer data. This operational intelligence is invisible to competitors without your monitoring infrastructure.

Compare vault cash levels at newly opened branches vs established locations using FDIC Call Report data. New branches typically over-allocate working capital without historical velocity data to optimize float sizing.
Specific new branches with verifiable vault cash figures from FDIC filings. The Texas comparison is a smart synthesis that identifies working capital inefficiency. Offers actionable branch-by-branch analysis.
Analyze vault cash allocation between newly opened branches and mature locations to identify excess working capital tied up without velocity-based optimization.
Specific branch comparison with real FDIC vault cash numbers. The working capital insight is immediately valuable to CFOs and treasurers. Non-obvious synthesis of public quarterly filing data with actionable analysis offer.
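The cohort comparison behind this play is a two-bucket average. A sketch under assumed inputs — Call Reports are filed at the institution level, so treat the per-branch rows here as a hypothetical internal derivation, and the four-quarter maturity cutoff as illustrative:

```python
from statistics import mean

def vault_cash_gap(branches, maturity_quarters=4):
    """Compare average vault cash at new vs mature branches.

    `branches`: list of dicts with assumed fields `age_quarters` and
    `vault_cash` (dollars). Returns (new_avg, mature_avg, excess_ratio).
    """
    new = [b["vault_cash"] for b in branches
           if b["age_quarters"] < maturity_quarters]
    mature = [b["vault_cash"] for b in branches
              if b["age_quarters"] >= maturity_quarters]
    new_avg, mature_avg = mean(new), mean(mature)
    return new_avg, mature_avg, new_avg / mature_avg

branches = [
    {"age_quarters": 1, "vault_cash": 300_000},   # new
    {"age_quarters": 2, "vault_cash": 260_000},   # new
    {"age_quarters": 8, "vault_cash": 200_000},   # mature
    {"age_quarters": 12, "vault_cash": 200_000},  # mature
]
print(vault_cash_gap(branches))
```

In this made-up example, new branches carry 1.4x the float of mature ones — exactly the over-allocation pattern the play describes.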
Monitor quarter-over-quarter vault cash volatility in FDIC Call Reports to identify banks with forecasting variance that signals manual reconciliation gaps or location-specific cash flow blind spots.
Specific verifiable data point from FDIC filings. Identifies a problem pattern (variance) rather than just a deadline. The implication about manual gaps hits a real operational blind spot with a clear, actionable question.
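Volatility here just means the spread of quarter-over-quarter swings in the institution-level vault cash series. A sketch — the bank names, series, and 15% threshold are all hypothetical:

```python
from statistics import pstdev

def qoq_changes(series):
    """Quarter-over-quarter fractional changes for a vault cash series."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def high_volatility_banks(vault_cash_by_bank, threshold=0.15):
    """Flag banks whose QoQ vault cash swings have a standard deviation
    above `threshold` - a possible forecasting-gap signal, not proof."""
    flagged = {}
    for bank, series in vault_cash_by_bank.items():
        vol = pstdev(qoq_changes(series))
        if vol > threshold:
            flagged[bank] = round(vol, 3)
    return flagged

# Hypothetical quarterly vault cash (in $000s) from Call Reports.
data = {
    "Steady Natl": [100, 100, 100, 100],
    "Example Bank": [100, 140, 90, 130],
}
print(high_volatility_banks(data))
```

The bank with flat holdings never gets flagged; the one swinging 40% per quarter does — and that swing is the opener, not a pitch.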
Combine FDIC branch expansion data with quarterly cash holding volatility to identify banks struggling to integrate new locations into cash forecasting systems during high-volume periods.
Combines expansion data with cash holding patterns in non-obvious synthesis. The 23% spike is specific and verifiable from FDIC filings. Identifies a blind spot (velocity tracking) rather than just stating facts. Strong Gate 3 - not just listing expanding banks.
Track quarterly vault cash trends from FDIC Call Reports to identify banks reducing cash holdings before seasonal transaction peaks, signaling either intentional optimization or forecasting gaps.
Specific data from FDIC filings with accurate figures. Identifies an interesting counter-intuitive pattern (decrease before peak season). Good diagnostic question that makes the prospect think about intentionality vs variance.
Analyze Monday deposit timing patterns across retail locations to identify weekend cash sitting idle without overnight reconciliation, revealing working capital inefficiency.
Specific timing pattern that's verifiable from deposit data. The working capital implication is immediately relevant. Identifies a blind spot (weekend lag) that operations teams often overlook.
This play requires deposit timing data across retail locations to identify weekend reconciliation delays and Monday deposit patterns.
Combined with public retailer licensing data. The smart-safe mention provides solution context without being overtly salesy.

Map branch locations from FDIC data against estimated CIT routing costs to provide immediate visibility into weekly and annual cash-in-transit expenses by network segment.
Specific to branch network with immediately relevant P&L cost math. Low-commitment ask for full analysis. However, the CIT cost per stop is industry estimate rather than their actual contracted rate, which limits precision.
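The P&L math is deliberately back-of-envelope, and the play itself flags the main caveat: the per-stop figure is an industry estimate, not the prospect's contracted rate. A sketch with hypothetical inputs:

```python
def annual_cit_cost(branch_count, pickups_per_week, cost_per_stop):
    """Back-of-envelope cash-in-transit spend for a branch network.

    `cost_per_stop` is an industry estimate, not a contracted rate -
    which is exactly the precision caveat this play carries.
    Returns (weekly_cost, annual_cost) in dollars.
    """
    weekly = branch_count * pickups_per_week * cost_per_stop
    return weekly, weekly * 52

# Hypothetical network: 52 branches, 3 pickups/week, $75/stop estimate.
weekly, annual = annual_cit_cost(52, 3, 75)
print(weekly, annual)  # 11700 608400
```

Even with an estimated rate, "roughly $600K/year in CIT spend across your network" is concrete enough to earn a reply asking how you got the number.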
This play combines public FDIC branch data with internal CIT routing cost benchmarks based on metro routing complexity and typical carrier contracts.
The cost modeling provides immediate value even without access to actual CIT contracts.

Identify banks opening new branches in Q3 (before holiday transaction surge) using FDIC BankFind data. New locations without 90 days of historical cash velocity data struggle with November-December forecasting accuracy.
Specific FDIC filing data with expansion timing analysis. The holiday forecasting concern is a legitimate operational challenge. Practical routing question keeps it conversational.
Cross-reference FDIC branch locations with county population data to identify rural branches receiving the same CIT pickup frequency as metro locations despite lower transaction velocity.
Specific branch count and rural designation verifiable from FDIC and census data. The efficiency opportunity is clear, with a straightforward yes/no question. However, uses "typically" for velocity comparison rather than actual data.
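The cross-reference is a join plus two filters. A sketch — the 50,000-person rural cutoff, the three-pickup metro norm, and the field names are all illustrative assumptions, not census or FDIC schema:

```python
def overserved_rural_branches(branches, rural_pop_cutoff=50_000,
                              metro_frequency=3):
    """Flag branches in low-population counties that still get
    metro-level pickup frequency.

    `branches` rows carry assumed fields joined from FDIC location
    data (`branch_id`), census county population (`county_pop`),
    and estimated pickup schedules (`pickups_per_week`).
    """
    return [
        b["branch_id"]
        for b in branches
        if b["county_pop"] < rural_pop_cutoff
        and b["pickups_per_week"] >= metro_frequency
    ]

branches = [
    {"branch_id": "B1", "county_pop": 12_000, "pickups_per_week": 3},
    {"branch_id": "B2", "county_pop": 900_000, "pickups_per_week": 3},
    {"branch_id": "B3", "county_pop": 30_000, "pickups_per_week": 1},
]
print(overserved_rural_branches(branches))  # ['B1']
```

The yes/no question the play asks maps directly onto this list: "are these N rural branches really on the same pickup schedule as your metro locations?"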
This play combines FDIC branch location data with county population data and CIT optimization logic based on metro vs rural transaction patterns.
The velocity assumptions are industry benchmarks rather than customer-specific data.

Track FDIC Call Report filing deadlines and historical vault cash variance patterns to offer proactive cash position forecasting before quarterly reconciliation deadlines.
Specific timeline and verifiable historical variance data from FDIC filings. Proactive help before deadline crunch would be genuinely useful. However, offering a "forecast model" might feel too salesy for initial outreach.
Calculate regional CIT costs for specific metro branch clusters using FDIC location data and metro routing complexity modeling to provide annual cost visibility by market.
Specific to Tulsa network with annual cost math relevant to budgeting. Low-commitment ask for full breakdown. However, the cost per stop is industry estimate rather than their actual contracted rate.
This play combines FDIC branch location data with CIT routing cost modeling based on metro complexity and typical carrier contracts in secondary markets.
Provides immediate cost visibility even without access to actual CIT contracts.

Analyze branch locations to identify downtown metro cores receiving the same CIT frequency as suburban branches despite higher transaction velocity requiring dynamic pickup scheduling.
Identifies specific inefficiency pattern in their network. The downtown vs suburban insight is non-obvious. Offers specific next step (lag mapping). However, uses "typically" for velocity comparison rather than their actual data.
This play combines public branch location data with transaction velocity assumptions and CIT optimization logic to identify scheduling inefficiencies.
The velocity estimates are industry benchmarks rather than customer-specific data.

Monitor FDIC Call Report filing deadlines and use previous filing data to identify the 45-day pre-deadline window when cash forecasting errors appear in variance reports.
Specific to their institution with exact vault cash figure from FDIC filings. The 45-day timing trigger is relevant. Easy routing question. However, this is mostly telling them about a deadline they already know.
Identify banks opening branches in September (from FDIC filings) that will face November-December holiday forecasting challenges without 90 days of historical cash velocity data.
Specific timing analysis of branch openings from FDIC data. The holiday forecasting concern is legitimate. Practical routing question. However, it feels somewhat like a common-sense observation rather than a deep insight.
Old way: Spray generic messages at job titles. Hope someone replies.
New way: Use public data to find companies in specific painful situations. Then mirror that situation back to them with evidence.
Why this works: When you lead with "Your vault cash variance jumped $2.1M last quarter per FDIC Call Reports" instead of "I see you're expanding your branch network," you're not another sales email. You're the person who did the homework.
The messages above aren't templates. They're examples of what happens when you combine real data sources with specific situations. Your team can replicate this using the data recipes in each play.
Every play traces back to verifiable data. Here are the sources used in this playbook:
| Source | Key Fields | Used For |
|---|---|---|
| FDIC BankFind Suite | institution_name, branch_locations, address, city, state, zip_code | Branch expansion tracking, multi-state network analysis |
| FDIC Call Reports | institution_id, quarterly_date, total_assets, cash_holdings, deposit_accounts, branch_count | Quarterly cash variance, vault cash trends, working capital analysis |
| State Money Transmitter Licensing (NMLS) | licensee_name, license_status, license_expiration, state_jurisdiction | Multi-location money transmitter identification |
| Multi-State Lottery Association | retailer_name, retailer_location, state, license_status | Lottery retailer identification with cash handling needs |
| Internal Data: Cash Processing Metrics | aggregated_cash_processing_time, error_rates_by_location_type, labor_hours_per_location | Regional efficiency benchmarks, location performance comparison |
| Internal Data: CIT Optimization | aggregated_cit_pickup_frequency, cit_cost_reduction_percentage | Pickup frequency optimization, cost savings benchmarks |
| Internal Data: Deposit Timing | deposit_timestamp_data, late_night_transaction_monitoring | Weekend lag analysis, after-hours reconciliation detection |
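The FDIC sources in the table are reachable through the public BankFind Suite API. A sketch of building a query URL for the locations endpoint — the base URL and parameter names (`filters`, `fields`, `limit`) follow the public API docs as best recalled, and the field names should be verified against the current FDIC data dictionary before use:

```python
from urllib.parse import urlencode

FDIC_API = "https://banks.data.fdic.gov/api"  # public BankFind Suite API

def locations_query(cert, fields=("NAME", "CITY", "STALP", "ZIP"),
                    limit=500):
    """Build a BankFind locations query for one institution, keyed by
    FDIC certificate number. Returns the URL; fetching is left to the
    caller (e.g. any HTTP client)."""
    params = {
        "filters": f"CERT:{cert}",
        "fields": ",".join(fields),
        "limit": limit,
        "format": "json",
    }
    return f"{FDIC_API}/locations?{urlencode(params)}"

# Hypothetical cert number for illustration.
print(locations_query(3511))
```

Everything in the plays above that cites "FDIC filings" ultimately resolves to queries like this one — which is why the claims are verifiable by the prospect, not just by you.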