Should I double down on what’s working or experiment?
Google Ads is working. Should you pour more into it, or test something new? The answer depends on data most businesses do not have in one place. Here's how to find it.
The short answer
Scale or experiment? Double down on what is working until you see diminishing returns. Check whether your CAC on the winning channel is rising as you increase spend. If it is flat, keep scaling. If it is climbing, the channel is approaching saturation and it is time to test alternatives. You need 3+ months of channel data to see the trend.
The scale vs. experiment trap that burns marketing budgets
You found something that works. Google Ads is bringing in customers at $300 each. The instinct is to double the budget and double the customers. But paid channels do not work that way. Double the spend often means 1.4x the customers, not 2x. Your CAC jumps from $300 to roughly $430, and every marketing dollar buys about 30% fewer customers than before.
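To make that arithmetic concrete, here is a minimal sketch using the hypothetical figures above (the baseline of 10 customers per month is an assumption chosen to match the $300 CAC):

```python
def cac(spend, customers):
    """Customer acquisition cost: dollars spent per new customer."""
    return spend / customers

# Baseline: $3,000/month brings in 10 customers at $300 each.
baseline_cac = cac(3000, 10)

# Doubling spend rarely doubles customers; assume a 1.4x lift.
doubled_cac = cac(6000, 14)

print(f"Baseline CAC: ${baseline_cac:.0f}")  # $300
print(f"Doubled CAC:  ${doubled_cac:.0f}")   # ~$429, the "roughly $430" jump
```

The 1.4x lift is the assumption doing all the work here; your own number comes from the 3-month trend analysis below.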
On the other side, experimentation has its own trap. You spread $500 across five new channels, none of which get enough budget to produce statistically meaningful results. You learn nothing and waste the money.
The right answer is neither “always scale” nor “always experiment.” It depends on where your current channels are on the diminishing returns curve. And the only way to know that is to track the relationship between spend and customer acquisition across platforms over time.
Diminishing returns: how to tell if a channel is approaching saturation
Here are the signals that tell you when to scale and when to pivot:
- CAC is flat despite increased spend. You increased Google Ads budget from $3,000 to $4,200 and CAC stayed at $300. This channel has room to grow. Keep scaling.
- CAC is rising with increased spend. You increased Meta Ads from $1,200 to $1,800 and CAC jumped from $600 to $900. Diminishing returns are setting in. Time to hold or reduce.
- Impressions are up but conversions are flat. Your ad platform is showing your ads to more people, but the incremental audience is less qualified. The easy targets are already reached.
- Quality of customers is declining. New customers from a channel are generating less revenue per customer than earlier cohorts. The channel is reaching less-qualified buyers.
The general rule: scale when CAC is flat and customer quality is stable. Experiment when your best channels show rising CAC or declining customer quality.
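The general rule can be sketched as a simple decision function. This is an illustration, not a product feature: the function name and the 10% "flat" tolerance band are assumptions you would tune to your own month-to-month volatility.

```python
def next_move(cac_change_pct, value_change_pct, flat_band=10.0):
    """Rule of thumb: scale when CAC is flat and customer quality is
    stable; otherwise start experimenting.

    cac_change_pct:   month-over-month change in CAC, in percent.
    value_change_pct: month-over-month change in revenue per customer.
    flat_band:        tolerance for "flat" (an assumption; tune it).
    """
    cac_flat = abs(cac_change_pct) <= flat_band
    quality_stable = value_change_pct >= -flat_band
    return "scale" if cac_flat and quality_stable else "experiment"

print(next_move(0.0, 2.0))    # Google Ads example: CAC flat -> "scale"
print(next_move(50.0, 0.0))   # Meta example: CAC up 50% -> "experiment"
```

The Meta figure comes straight from the example above: a CAC jump from $600 to $900 is a 50% increase, well outside any reasonable flat band.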
How to decide between scaling and experimenting using cross-platform data (6 steps)
You need at least 3 months of channel-level spend, customer count, and revenue data. Plan for about 60 minutes.
1. Build a 3-month channel performance table
For each channel over the last 3 months, record: spend (from Google Ads, Meta Ads Manager, etc.), new customers (from HubSpot or Salesforce, verified in QuickBooks or Xero), and total revenue from those customers.
2. Calculate CAC and revenue per customer by month
For each channel-month, compute CAC (spend/customers) and revenue per customer (revenue/customers). You now have a 3-month trend for each metric.
3. Plot spend vs. CAC for each channel
Create a simple chart or table showing: as spend went up, did CAC stay flat (scalable), go up slightly (approaching limits), or go up sharply (saturated)?
4. Check customer quality trends
Is the average revenue per customer from each channel stable, growing, or declining? A channel where CAC is flat but customer value is dropping may still be losing efficiency.
5. Categorize each channel
Based on the trends, put each channel in one of three buckets: “Scale” (flat CAC, stable quality), “Maintain” (slightly rising CAC or slightly declining quality), or “Reduce” (sharply rising CAC or declining quality).
6. Allocate experiment budget from “Reduce” channels
Take budget from channels in the “Reduce” bucket and allocate it to testing 1-2 new channels. Give each test channel enough budget for at least 20-30 leads (enough to measure conversion rates). Typically $500-$2,000 per test.
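Steps 2 through 5 can be sketched in a few lines. The channel figures below are the examples from this article (with assumed customer counts and revenue to fill out the table), and the 10%/25% bands separating "Maintain" from "Reduce" are assumptions you would tune to your own volatility:

```python
# Step 1: channel-month records as (spend, new customers, revenue).
data = {
    "Google Ads": [(3000, 10, 9000), (3600, 12, 10800), (4200, 14, 12600)],
    "Meta Ads":   [(1200, 2, 3000),  (1500, 2, 2900),   (1800, 2, 2800)],
}

def monthly_metrics(rows):
    """Step 2: (CAC, revenue per customer) for each month."""
    return [(spend / cust, rev / cust) for spend, cust, rev in rows]

def bucket(rows, rise_band=0.10, spike_band=0.25):
    """Steps 3-5: compare first and last month, then categorize.
    The 10%/25% bands are assumptions, not industry standards."""
    metrics = monthly_metrics(rows)
    (cac0, val0), (cac1, val1) = metrics[0], metrics[-1]
    cac_rise = (cac1 - cac0) / cac0     # fraction CAC rose over 3 months
    val_drop = (val0 - val1) / val0     # fraction customer value fell
    if cac_rise > spike_band or val_drop > spike_band:
        return "Reduce"
    if cac_rise > rise_band or val_drop > rise_band:
        return "Maintain"
    return "Scale"

for channel, rows in data.items():
    print(channel, "->", bucket(rows))
# Google Ads -> Scale (CAC held at $300 as spend grew)
# Meta Ads   -> Reduce (CAC climbed from $600 to $900)
```

In a spreadsheet, the same logic is three columns per channel: CAC by month, revenue per customer by month, and the first-to-last percentage change in each.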
Total time: 60 minutes with data ready. The insight requires 3+ months of historical data, which means 3 months of cross-platform data gathering before you can even start this analysis.
Why scale vs. experiment decisions are usually reactive
Most businesses do not proactively decide when to scale and when to experiment. They react: a channel stops working, so they panic and try something new. Or a channel works great, so they pour money in until it stops working, and then they panic.
The proactive approach (tracking diminishing returns curves and making pre-emptive reallocations) requires consistent monthly data from multiple systems. Very few businesses have the discipline or infrastructure to do this manually. The result is a cycle of over-investment in saturated channels followed by scrambling when performance drops.
Or see your scale vs. experiment signals automatically
Bottomline tracks the relationship between spend and customer acquisition for each channel over time. It flags when channels are approaching diminishing returns and recommends where to reallocate budget.
The actionable insight in a scenario like this: Google Ads and Content/SEO are still scaling efficiently, LinkedIn is holding steady, and Meta Ads is deteriorating. Instead of waiting for Meta to crater, you can proactively shift budget to channels that are still in their growth phase. And with $600/month freed from Meta, you can test one new channel with enough budget to get meaningful data.