
Why we needed to have this conversation internally

Twelve months ago we had a difficult client situation. An e-commerce brand we were managing had given Meta's AI-driven Advantage+ campaigns almost complete autonomy over their account. The results looked good in the platform: ROAS was up 18% quarter-over-quarter, impression share was climbing, and the AI was clearly finding efficient pockets of spend.

Then their sales team flagged something. The quality of inbound customers had dropped. Refund rates were up. Support ticket volume was higher. The AI had found a large, cheap audience that clicked and bought, but that audience had significantly different expectations of the product than the brand's intended customer did. The platform metrics were healthy. The business metrics were quietly deteriorating. It took us two months to trace the root cause back to an AI audience expansion we had approved without enough scrutiny.

That experience forced us to articulate exactly where AI earns its place in our workflow and where it doesn't. Here's the framework we arrived at.

The 4 places AI earns its keep

Weekly performance pattern detection. Our analysts now use AI to scan campaign data before their weekly review. What would take two hours of manual analysis — spotting anomalies, surfacing correlations, flagging underperforming segments — takes 20 minutes. The analyst arrives at the review with a prioritised list of things worth examining. They still make every decision. AI just removes the data grunt work.
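The anomaly-flagging part of that pre-review scan can be sketched in a few lines. This is a simplified illustration, not our actual tooling: the segment names and CTR figures are invented, and the z-score threshold is deliberately low because small exports bound how extreme a z-score can get.

```python
import statistics

def flag_anomalies(segments, metric="ctr", z_threshold=1.4):
    """Flag segments whose metric deviates sharply from the account mean.

    `segments` is a list of dicts like {"name": ..., "ctr": ...}, a toy
    stand-in for a real campaign export. With only a handful of segments,
    z-scores are bounded, hence the modest default threshold.
    """
    values = [s[metric] for s in segments]
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    flagged = []
    for s in segments:
        if stdev and abs(s[metric] - mean) / stdev >= z_threshold:
            flagged.append(s["name"])
    return flagged

segments = [
    {"name": "prospecting-broad", "ctr": 0.021},
    {"name": "retargeting-30d",   "ctr": 0.034},
    {"name": "lookalike-1pct",    "ctr": 0.019},
    {"name": "interest-stack",    "ctr": 0.002},  # clear underperformer
]
print(flag_anomalies(segments))  # → ['interest-stack']
```

The point of the sketch is the workflow, not the statistics: the script only surfaces candidates, and the analyst still decides what each flag means.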

Ad copy and headline variation generation. For every major campaign, we brief an AI model on the brand voice, the audience's decision context, and the single message we want to land. It generates 30–40 headline and copy variations. We select 4–6. This process consistently produces options that no single copywriter would have generated alone, because the variation volume forces us to consider angles we'd have dismissed as too obvious or too unusual. Our selected options are always human-edited, never AI-final.
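The cut from 30-40 raw variations down to a reviewable shortlist is mechanical enough to automate. A minimal sketch, with invented headlines and an invented banned-phrase rule; real matching would use fuzzier deduplication, and a copywriter still makes the final selection and edits:

```python
def triage_variations(variations, banned_phrases=(), shortlist_size=6):
    """Reduce a raw batch of AI-generated headlines to a human-review shortlist.

    This only filters: it drops near-duplicates (after crude whitespace/case
    normalisation) and anything containing a banned phrase. It never picks
    the final copy.
    """
    seen = set()
    shortlist = []
    for text in variations:
        key = " ".join(text.lower().split())
        if key in seen:
            continue
        if any(p.lower() in key for p in banned_phrases):
            continue
        seen.add(key)
        shortlist.append(text)
    return shortlist[:shortlist_size]

raw = [
    "Ship faster without the chaos",
    "Ship faster  without the chaos",  # near-duplicate
    "The #1 tool for modern teams",    # superlative we avoid
    "Plan your week in five minutes",
]
print(triage_variations(raw, banned_phrases=["#1"]))
```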

Search term analysis at scale. For PPC accounts with thousands of search terms, AI review of search term reports catches irrelevant patterns faster than any human. We review AI-surfaced negative keyword recommendations weekly and accept roughly 70% of them. The 30% we reject are usually cases where the AI doesn't understand the strategic context — competitor terms we intentionally want to capture, or niche terms with low volume but high intent.
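That 70/30 split is essentially an allowlist check. A sketch of the review step, assuming a hypothetical competitor name ("acme") and invented search terms; the real strategic context lives in a person's head, not a list:

```python
def review_negatives(recommendations, strategic_allowlist):
    """Split AI-suggested negative keywords into auto-accept vs. keep-bidding.

    `strategic_allowlist` holds terms we intentionally capture (competitor
    names, low-volume high-intent niches) that the AI tends to misread as
    irrelevant.
    """
    accepted, rejected = [], []
    for term in recommendations:
        if any(keep in term.lower() for keep in strategic_allowlist):
            rejected.append(term)  # strategic context overrides the AI
        else:
            accepted.append(term)
    return accepted, rejected

recs = ["free template download", "acme alternative", "diy tutorial"]
accepted, rejected = review_negatives(recs, strategic_allowlist=["acme"])
print(accepted)  # → ['free template download', 'diy tutorial']
print(rejected)  # → ['acme alternative']
```

Even the "accepted" list gets a human glance in practice; the allowlist just stops the AI from pruning terms we bid on deliberately.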

Lead quality scoring from CRM data. We use AI to analyse historical CRM data and identify which lead attributes correlate with close rates and LTV. This gives sales teams a prioritised call list each week based on fit signals, not just recency. In one client's case, this surfaced a counterintuitive insight: leads who mentioned a specific competitor in their initial form submission had a 2.7x higher close rate than average. That signal was invisible in standard reporting.
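The kind of lift calculation behind that insight is straightforward once the CRM rows are exported. A toy sketch with fabricated rows (the 3.0x lift here is illustrative, not the client's 2.7x):

```python
def close_rate_lift(leads, attribute):
    """Compare the close rate of leads with an attribute to the base rate.

    `leads` is a list of dicts like {"mentioned_competitor": True,
    "closed": False}, a toy stand-in for exported CRM rows. Returns None
    when the lift is undefined.
    """
    base = sum(1 for l in leads if l["closed"]) / len(leads)
    with_attr = [l for l in leads if l[attribute]]
    if not with_attr or base == 0:
        return None
    rate = sum(1 for l in with_attr if l["closed"]) / len(with_attr)
    return rate / base

leads = (
      [{"mentioned_competitor": True,  "closed": True}]  * 6
    + [{"mentioned_competitor": True,  "closed": False}] * 4
    + [{"mentioned_competitor": False, "closed": True}]  * 4
    + [{"mentioned_competitor": False, "closed": False}] * 36
)
print(close_rate_lift(leads, "mentioned_competitor"))  # ≈ 3.0
```

In production this runs across every attribute with enough volume to matter; the human judgment is deciding which lifts are real signals and which are confounded.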

The 3 places we never let AI lead

Positioning and messaging strategy. AI can generate messaging. It cannot decide what your brand should stand for, what trade-offs to make in how you describe your value, or whether a particular message will age well in your competitive landscape. These decisions require context, values, and judgment that doesn't exist in training data.

Audience definition and expansion decisions. The Meta Advantage+ situation taught us this. AI will find efficient audiences. Efficient is not the same as right. The decision about which audience represents your actual target customer — and which represents a cheap approximation of them — requires human judgment about your business, your product, and your long-term brand positioning.

Any decision with irreversible consequences. Budget commitments above a defined threshold, campaign pauses that affect active revenue, creative direction changes that touch brand positioning — all require human sign-off. AI can surface the recommendation. A strategist makes the call. That boundary is non-negotiable in every engagement we run.
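That boundary is simple enough to encode as a routing rule. A minimal sketch: the threshold, action names, and categories below are invented placeholders for whatever a given engagement defines.

```python
BUDGET_SIGNOFF_THRESHOLD = 5_000  # hypothetical per-engagement threshold

def route_recommendation(action, amount=0, irreversible=False):
    """Decide whether an AI recommendation can auto-apply or needs a strategist.

    Mirrors the rule in the text: anything irreversible, revenue-affecting,
    or brand-level always routes to human sign-off.
    """
    needs_human = (
        irreversible
        or action in {"pause_campaign", "change_creative_direction"}
        or amount > BUDGET_SIGNOFF_THRESHOLD
    )
    return "human_signoff" if needs_human else "auto_apply"

print(route_recommendation("adjust_bid", amount=200))         # → auto_apply
print(route_recommendation("pause_campaign"))                 # → human_signoff
print(route_recommendation("increase_budget", amount=12_000)) # → human_signoff
```

The code is trivial by design: the value is that the gate exists in the workflow at all, so AI output can never reach an irreversible action without a person in the loop.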

AI improves velocity, not certainty. Results depend on data quality, strategic judgment, and execution.