What are Google Ads Demand Gen Asset Experiments and why do they matter now?
Google Ads Demand Gen is Google’s visual-first campaign type that reaches people on YouTube (including Shorts), Discover, Gmail, and, as of 2025, expanded image inventory on the Google Display Network. That surface mix turns social-style creative into measurable performance at scale. Google’s 2025 updates pushed Demand Gen from a mid-funnel curiosity to a full-funnel driver, which is why systematic testing is no longer optional: it is the fastest way to find the creative that actually sells. Recent coverage shows Google adding placement controls and creative testing features that make this possible in a clean A/B framework.
Demand Gen Asset Experiments are structured A/B tests that compare images and videos inside a Demand Gen campaign. You duplicate the campaign into two experiment arms, split traffic evenly, and change only the creative assets you want to test. The point is to isolate one variable (hook, length, orientation, or thumbnail) so the winner is clear and defensible. Search Engine Land documented this rollout in August with steps and caveats, such as “control edits sync to treatment,” which matters for test hygiene.
This testing muscle matters right now because Google is shipping monthly Demand Gen Drops and new placement controls. You can target YouTube only or open to Discover, Gmail, and GDN, which lets you tailor experiments to where your audience actually watches or shops. Google’s September update framed Demand Gen as delivering a 26% year-over-year increase in conversions per dollar, with a 33% uplift when product feeds are attached. That momentum puts creative and placement tests at the center of modern media planning rather than on the edges.
At a glance: where Demand Gen runs and what to test
Surface | Typical format | What an Asset Experiment can isolate |
---|---|---|
YouTube Shorts | Vertical video in feed | First 3-second hook, captions, overlays, CTA frames |
YouTube In-feed / In-stream | Horizontal or vertical video | Cut length, opening shot, VO vs on-screen text, end-card |
Discover | Image or video tiles | Image style, product collage vs single hero, headline variants |
Gmail | Image + text modules | Visual motif, promo line, offer framing |
Google Display Network (image inventory expansion) | Image tiles | Product grid vs lifestyle image, price badges, colorways |
Why this mapping helps: it guides one-variable tests that align with how people consume each surface. Shorts wants punchy vertical hooks. Discover leans on visuals and concise copy. Gmail rewards clarity and offers. Your test plan should mirror those realities.
What changed in 2025 that makes testing unavoidable
- Creative A/B testing for Demand Gen became available, which removes the guesswork around visuals. You can finally measure which video or image drives more clicks and conversions.
- Channel controls let you constrain experiments to YouTube only or include Discover, Gmail, and GDN, which keeps placement bias out of your creative read.
- Google launched Demand Gen Drops, a monthly cadence of updates across measurement, bidding, and creative, which means your test backlog should evolve every few weeks.
Quick wins to pursue first
- Run “Shorts-first cut vs horizontal cut” to validate vertical repurposing before scaling budgets on YouTube.
- Test “with product feed vs without feed” if you sell SKUs, since Google cites a sizable conversion lift when feeds render product tiles next to your creative.
- Use channel controls to compare YouTube-only results against a YouTube+Discover+Gmail mix, which reveals where your creative actually wins attention.
Bottom line: Demand Gen is no longer just discovery. It is a performance channel fueled by visual storytelling and rapid Asset Experiments. Treat creative like a product feature that ships weekly and you will compound learnings faster than your competitors.
Fast recap of 2025 changes that shape your Google Ads Demand Gen Asset Experiments strategy
You test smarter when you understand the ground that just shifted under your feet. In 2025 Google tightened the screws on Demand Gen with controls and measurements that finally make creative testing rigorous. Here is what changed and why it matters for your Google Ads Demand Gen Asset Experiments.
The 60-second summary
- Creative A/B testing for Demand Gen is live. You can now A/B test images and videos directly inside Demand Gen to see what actually moves clicks and conversions. This removes guesswork and speeds creative iteration.
- Channel Controls arrived. You can constrain delivery to YouTube or open it up to Discover, Gmail, and GDN, which means you can isolate placement bias in your experiments.
- Monthly “Demand Gen Drops.” Google now ships a regular bundle of Demand Gen updates so your test backlog should refresh every few weeks.
- Performance context to set expectations. Google says advertisers saw a 26% year-over-year increase in conversions per dollar and a 33% uplift when product feeds are attached. Set hypotheses that reflect that direction and then verify it in your own data.
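To ground that last step, here is a tiny Python sketch that computes conversions per dollar from your own spend and conversion totals; the figures shown are hypothetical placeholders, and Google’s +26% is only the directional benchmark to compare against.

```python
# Sanity-check the "conversions per dollar" trend in your own account
# before anchoring hypotheses to Google's headline figures.
# All numbers below are hypothetical placeholders.

def conversions_per_dollar(conversions: float, cost: float) -> float:
    """Conversions delivered per dollar of spend."""
    return conversions / cost

# Pull these totals from your own reports.
last_year = conversions_per_dollar(conversions=480, cost=12_000)
this_year = conversions_per_dollar(conversions=610, cost=11_500)

yoy_change = (this_year / last_year - 1) * 100
print(f"Conversions per dollar, YoY: {yoy_change:+.1f}%")  # -> +32.6%
```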
What changed, when it landed, and how it affects your experiments
Change (2025) | When | Why it matters for Demand Gen Asset Experiments |
---|---|---|
A/B testing for images & videos in Demand Gen | Aug 5, 2025 | You can run clean creative-only tests. Keep budgets and bids fixed. Read winners on CTR, CPA, and Conv. rate without placement noise. |
Channel Controls for YouTube, Discover, Gmail, GDN | Apr 23, 2025 | Split experiments by surface. For example run YouTube-only to judge hooks for Shorts then expand to Discover once a winner emerges. |
Demand Gen Drops (monthly update cadence) | Sept 18, 2025 | Treat testing like a product sprint. New features can invalidate old learnings so keep a rolling backlog. |
Performance proof-points announced | Sept 23, 2025 | Target hypotheses around product feeds and multi-format creative since those areas show outsized lifts. Validate in your account before scaling. |
What this means for your next 30 days of experiments
- Prioritize creative A/Bs over broad campaign restructures. Your first win comes from the opening three seconds, not from another audience shuffle.
- Use Channel Controls to run YouTube-only trials for Shorts hooks. Then run a second phase across YouTube+Discover+Gmail to test portability.
- Attach a product feed if you sell SKUs. Google cites 33% more conversions at a similar CPA on average when feeds render product tiles next to your creative. Make with-feed vs without-feed your baseline experiment.
- Refresh your backlog with each Drop. Skim the latest Demand Gen Drops post and translate each feature into a testable hypothesis. Ship one meaningful A/B each week.
Demand Gen experiment types (and when to use them)
You get two clean paths to test inside Google Ads Demand Gen Asset Experiments. Pick based on what you want to learn right now. If you need a single-variable read on creative, use Asset A/B Experiments. If you need to compare audiences, bids, or more than two variants, use Custom Experiments. Google’s Help docs outline both flows and the rules that keep results trustworthy.
Asset A/B Experiments – the creative workhorse for Google Ads Demand Gen Asset Experiments
What this is: A controlled A/B where you duplicate a Demand Gen campaign, split traffic, and change only the assets (images or videos). Google supports success metrics such as CTR, Conversion rate, Cost/conv, and Avg. CPC. Any change to the control mirrors to the treatment, so your test stays clean.
When to use it:
- You want to prove which video cut or image style wins before scaling.
- You want a sanctioned with-feed vs without-feed scenario for SKUs. Google explicitly supports this experiment format.
- You need a result that is easy to communicate to stakeholders with a confidence read inside the Experiments report.
How it works (high level):
- Go to Experiments → Demand Gen experiment → A/B test assets.
- Choose a primary success metric from the supported list.
- Select a control campaign then let Google duplicate it as the treatment.
- Add your new videos or images to the treatment arm only.
- Launch and monitor the confidence level card in reporting. You will see statuses like Collecting data, Similar performance, or One arm is better. Conversion-metric results start after roughly 100 data points.
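If you want to double-check a “One arm is better” read offline, the standard tool is a two-proportion z-test on the arm totals. The sketch below is that textbook test, not Google’s exact internal method, and the click and impression counts are hypothetical.

```python
# Two-proportion z-test on CTR between control and treatment arms.
# This is an offline sanity check, not Google's internal calculation.
from math import sqrt
from statistics import NormalDist

def ctr_z_test(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int):
    """Return (z, two-sided p-value) for the CTR difference between arms."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical arm totals pulled from the experiment report.
z, p = ctr_z_test(clicks_a=420, imps_a=50_000, clicks_b=505, imps_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 roughly matches the 95% "conclusive" level
```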
What to watch:
- Keep the test to one variable. Swap only the creative or only the feed state. Do not mix variables.
- Avoid budget testing in this format. Google does not recommend budget as a test variable for Demand Gen experiments.
Great first A/Bs to run (fast wins):
- Vertical Shorts cut vs horizontal recut for the same concept. Success metric: CTR or Conversion rate.
- Lifestyle image vs product grid for Discover tiles. Success metric: CTR.
- With product feed vs without feed for catalog brands. Success metric: Cost/conv and Conversion rate.
Tip: If your asset leans vertical and you want only YouTube inventory for a read on Shorts, use Channel controls to constrain delivery at the ad-group level. Then re-run the winner across All Google channels to test portability.
Custom Experiments – flexible multi-arm testing for Google Ads Demand Gen Asset Experiments
What this is: A flexible framework to test audiences, bidding strategies, formats, or creative when you need more than two variants. You can add up to 10 arms, assign one or more campaigns per arm, and set a traffic split. Google recommends 50% split when you compare two groups because it gives the cleanest read.
When to use it:
- You need to compare Max Conversions vs tCPA or try value-based bidding later once you have value data.
- You want to pit audience constructs against each other like broad vs lookalike-seeded.
- You must run three or more creative concepts at once and you accept slower reads for more breadth.
How it works (high level):
- Go to Experiments → Demand Gen experiment → Custom Experiments.
- Add arms and label them clearly.
- Split traffic across arms. Use 50/50 for two arms unless you have supply constraints.
- Assign campaigns to each arm and choose a primary success metric.
- Launch and read the confidence level dropdown (70% default, 80% balanced, 95% conclusive).
Volume and data guardrails:
- If you use conversion-based bidding, plan for a minimum of ~50 conversions per arm to surface results. This is Google’s published guidance for Demand Gen experiments.
- The experiment report starts showing conversion-metric results once the test collects ~100 data points. Be patient or lower the confidence level if you need a directional call.
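To turn those thresholds into a launch plan, you can estimate runtime from daily conversion volume. A rough planning sketch, assuming an even split across arms; the daily volume is a hypothetical input from your own account.

```python
# Estimate days until each arm clears ~50 conversions and the test clears
# ~100 total data points. Assumes conversions split evenly across arms.
import math

def estimate_runtime_days(daily_conversions: float, arms: int = 2,
                          per_arm_min: int = 50, total_min: int = 100) -> int:
    """Days needed to satisfy both the per-arm and total thresholds."""
    days_per_arm = per_arm_min / (daily_conversions / arms)
    days_total = total_min / daily_conversions
    return math.ceil(max(days_per_arm, days_total))

print(estimate_runtime_days(daily_conversions=12))  # ~9 days at 12 conversions/day
print(estimate_runtime_days(daily_conversions=4))   # ~25 days (consider a shallower conversion)
```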
Asset A/B vs Custom – which Google Ads Demand Gen Asset Experiments type should you pick?
Choose this | If you need | Typical runtime reality |
---|---|---|
Asset A/B | A crisp answer to “which creative wins” or “does the feed lift results” | Faster reads because traffic concentrates into two arms and the variable set stays tight. |
Custom Experiments | Broader answers like “which audience + bid pair works” or “which of three concepts scales” | Slower reads because split traffic dilutes signal across more arms so plan budget accordingly. |
Prerequisites & eligibility for Google Ads Demand Gen Asset Experiments (don’t skip)
[Unverified note]: Several official Google Ads Help pages do not display a “last updated” date. I cite them for accuracy, yet I cannot verify their recency. Where possible, I also link to dated articles from the last six months.
You run faster tests when setup is clean. You also avoid false wins. Before launching any Google Ads Demand Gen Asset Experiments, walk through this tight preflight so your reads are trustworthy and repeatable.
Non-negotiables before you launch
- Have at least two Demand Gen campaigns ready and not currently running. Use them as your experiment arms.
- Change only one variable between control and treatment. Keep budgets, bids, audiences, and settings identical so creative is the only lever that moves.
- Pick a single success metric for the experiment: CTR, Conversion rate, Cost/conv, or Avg. CPC. Use the same primary metric to judge both arms.
- Split traffic evenly (50/50) for clean comparisons unless you face inventory constraints.
- Do not test budget as your variable. Budget shifts confound reads during learning.
Data thresholds & confidence settings
You will see faster or slower reads depending on your chosen confidence level and metric type.
- Confidence levels: 70% (fast, directional), 80% (balanced), 95% (conclusive). Choose based on decision size.
- Conversion-metric visibility: results begin after ~100 conversion data points are collected across arms. Until then the report shows Collecting data.
- Volume guidance for conversion-bidding: plan for ~50 conversions per arm to surface results when you use conversion-based bidding. Consider optimizing to a shallow conversion (e.g., add-to-cart or lead step) if needed to reach volume.
Placement scope and inventory awareness
- Demand Gen experiments can test images and videos and run across Google’s visual surfaces. Pair this with Channel Controls when you want a YouTube-only read before expanding to Discover, Gmail, and GDN.
Conversion tracking & bidding readiness
- Verify that primary conversions are configured and firing. Poor tags pollute reads.
- If you plan to graduate to value-based bidding (VBB) later, audit that your conversion actions carry values and meet eligibility guidance for Maximize conversion value / tROAS in Demand Gen.
Creative & feed readiness
- Asset specs & quality: ensure your videos and images meet Demand Gen asset specs, and check Ad Strength to avoid low-quality outliers skewing results.
- Product feed (optional but powerful): if you sell SKUs, connect a Google Merchant Center feed so you can A/B with-feed vs without-feed. Feeds render browsable product tiles next to your creative.
Audience hygiene
- Lock audience targeting before launch. If you test audiences, use Custom Experiments rather than asset A/B.
- Lookalike segments exist only in Demand Gen, so ensure the right seed list and inclusion/exclusion rules are set before you flip the experiment live.
Step-by-step: How to set up Google Ads Demand Gen Asset Experiments
You can launch clean, defensible tests in minutes if you follow a strict flow. The steps below cover Asset A/B Experiments for creative-only reads and Custom Experiments for broader, multi-arm comparisons. Every step maps to Google’s official guidance so you can move with confidence.
Asset A/B Experiment walkthrough for Google Ads Demand Gen Asset Experiments
Goal: Isolate images or videos as a single variable and split traffic evenly across two arms.
- Open Experiments. In Google Ads, go to Experiments → Demand Gen experiment → A/B test assets.
- Choose one primary success metric. Pick CTR, Conversion rate, Cost/conv, or Avg. CPC from the metric dropdown. Do not swap metrics mid-test.
- Select your control campaign. Pick the Demand Gen campaign that will act as the control. It can be live or new.
- Create the treatment campaign. Google duplicates the control and keeps the same daily budget. You will add new assets only to the treatment arm.
- Add test assets. On the Ad card click + Add videos or add images from the Asset Library. Keep all other settings identical.
- Name and save. Use clear labels like “DG — Shorts hook v1 vs v2 — CTR primary.” Then Save to launch the experiment.
- Understand mirroring. Any changes you make to the control will mirror to the treatment automatically which protects test hygiene.
- Read results inside the experiment report. Use the confidence level dropdown (70% default, 80% balanced, 95% conclusive). Watch for statuses like Collecting data, Similar performance, or One arm is better. Conversion-metric results populate after ~100 conversion data points.
Pro tip: If your creative is vertical and you want a pure read on YouTube Shorts, set Channel controls to Let me choose, then pick YouTube → Shorts for that ad group. Run a follow-up test across All Google channels once you have a Shorts winner.
Custom Experiment walkthrough for Google Ads Demand Gen Asset Experiments
Goal: Compare audiences, bidding strategies, formats, or run more than two variants at once.
- Open Experiments. Go to Experiments → Demand Gen experiment → Custom Experiments.
- Add and label arms. You get two arms by default and can add up to 10. Label clearly, e.g., “Max Conversions,” “tROAS,” “Lookalike-Balanced.”
- Split traffic. Use 50/50 when you run two arms since that gives the cleanest comparison. Adjust splits only if you have supply constraints.
- Assign campaigns to each arm. A campaign can sit in only one arm at a time. Arms can contain multiple campaigns if needed.
- Choose the primary success metric. Pick the single metric you will use to call the winner.
- Launch and monitor. Use the same confidence level controls and result statuses described above. Plan volume accordingly if you test conversion-based bidding.
Minimums, thresholds, and guardrails you must respect
- Two Demand Gen campaigns must be ready and not running before you start an experiment. Keep only one variable different.
- With-feed vs without-feed is explicitly supported as an Asset A/B scenario. Use it if you sell SKUs.
- Budget is not recommended as a test variable for Demand Gen. Lock budgets before launch.
- For conversion-based bidding, plan for ~50 conversions per arm so results can surface. If needed optimize to a shallower conversion temporarily.
- Expect conversion-metric results to appear after ~100 conversion data points. Be patient or use a lower confidence level for directional reads.
What to test inside Google Ads Demand Gen Asset Experiments
Dial in what you test and you’ll ship wins faster. Demand Gen now supports true A/B testing for creative, plus broader Custom Experiments for audiences, bidding, product feeds, and placement. You pick a single success metric such as CTR, conversion rate, cost/conv., or CPC. Google duplicates your control into a treatment and splits traffic; 50/50 is the default recommendation. Experiments surface directional results at 70–80% confidence, and conclusive results at 95%, once you hit the data thresholds.
Creative variables to A/B in Google Ads Demand Gen Asset Experiments
Creative drives the biggest swings in performance. Start here.
What to test
- Video hooks & first 3 seconds: Pose a problem vs social proof vs offer-led. Track CTR and CVR.
- Orientation & length: Vertical vs square vs landscape. Short (<20s) vs mid (20–45s) vs long (60–90s).
- Thumbnails & overlays: Bold vs product-first. Price or promo overlays on vs off.
- UGC vs polished production styles for the same message.
- Text assets: Headlines, descriptions, CTA phrasing.
- Enhancements: Try with/without Google’s video enhancements when cloning the control.
Why it works
Demand Gen now lets you A/B test images and videos directly, so you can isolate creative and measure lift without muddying targeting or bidding. Pair this with the Asset report to see which images, videos, headlines, and logos carry their weight.
Product feed variables inside Demand Gen Asset Experiments
If you sell products, plug in your Google Merchant Center feed. Then test how the feed changes outcomes across YouTube, Discover, and Gmail.
What to test
- With feed vs without feed (native scenario supported by Demand Gen experiments).
- Product set selection: Full catalog vs curated sets by brand, price tier, or margin.
- Creative pairing: Video + products vs Image + products ads.
- Asset mix: Add fallback images and upload videos in three aspect ratios to unlock Shorts and feed placements.
- Filtering logic: Campaign-time filter vs post-construction filtering with subdivisions and exclusions.
Why it works
Google reports that adding product feeds drives ~33% more conversions at a similar CPA on average, and ~18% more clicks for shallow-conversion goals. Bring square imagery, ensure feed approval, and mind region alignment.
Audience variables to test in Demand Gen Asset Experiments
Build tests that separate prospecting vs re-engagement, then layer creative that matches intent.
What to test
- Lookalike segments (Narrow vs Balanced vs Broad). Seed with high-quality first-party lists or YouTube engagers.
- Your data vs Google segments: Customer Match vs In-market/Affinity vs Custom segments.
- Optimized targeting on/off to see how broadening affects scale and CPA.
- Exclusions: Existing customers excluded vs included for blended goals.
Why it works
Lookalikes are exclusive to Demand Gen. You choose the reach trade-off (narrow at 2.5% of the target location, balanced at 5%, broad at 10%), and the segment refreshes every 1–2 days when requirements are met.
Bidding variables to test in Demand Gen Asset Experiments
Pick one bidding lever per test. Keep budgets steady during ramp.
Where to start
- Maximize Conversions / tCPA for initial data build if you lack value signals.
- Value-based bidding once you pass eligibility thresholds. Choose Max Conversion Value for spend-led scale or tROAS for efficiency control.
- Set the initial tROAS ~20% below recent achieved ROAS to give the system room, then ratchet up.
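The “20% below” guidance is simple arithmetic. A quick sketch, with a hypothetical achieved ROAS:

```python
# Derive a starting tROAS target from recent achieved ROAS.
# achieved_roas is a hypothetical value from your own reporting.
achieved_roas = 4.2  # e.g., $4.20 of conversion value per $1 spent
initial_troas = achieved_roas * 0.80
print(f"Initial tROAS target: {initial_troas:.2f} ({initial_troas * 100:.0f}%)")
# -> Initial tROAS target: 3.36 (336%)
```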
Eligibility checkpoints
You can switch a Demand Gen campaign to value-based bidding after either of these is true:
- 50 conversions with value in the last 35 days and at least 10 in the last 7 days, or
- 100 conversions with value across all Demand Gen campaigns in the last 35 days.
Plan for ~3 weeks of steady running to evaluate VBB outcomes without knee-jerk edits.
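Those two eligibility paths reduce to a simple boolean check. The sketch below encodes them directly; the function name and inputs are our own convention, and the counts are hypothetical.

```python
# Encode the two value-based bidding eligibility paths described above.
def vbb_eligible(conv_with_value_35d: int, conv_with_value_7d: int,
                 conv_with_value_35d_all_dg: int) -> bool:
    """True when either eligibility path for value-based bidding is met."""
    path_a = conv_with_value_35d >= 50 and conv_with_value_7d >= 10
    path_b = conv_with_value_35d_all_dg >= 100
    return path_a or path_b

print(vbb_eligible(48, 12, 120))  # True via path B (100+ across all Demand Gen campaigns)
print(vbb_eligible(48, 12, 80))   # False: neither threshold met yet
```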
When to test which bidding strategy
Test focus | Strategy to pit | Success metric | When to try |
---|---|---|---|
Scale at stable CPA | Max Conversions vs tCPA | Cost/conv., Conversions | Early data build or lead gen |
Revenue efficiency | Max Conv. Value vs tROAS | Conv. value / cost, ROAS | After value eligibility |
Mixed micro + sales goals | Max Conv. Value vs Max Conversions | Both CVR and Value | When micro-conversions matter |
Channel & placement variables to test in Demand Gen Asset Experiments
Placement choice changes intent mix and cost structure. Use Channel controls to test coverage.
What to test
- All channels vs YouTube-only vs Google-owned-and-operated (Discover + YouTube + Gmail) vs Shorts-only.
- Shorts-only ad groups when your creative is vertical and punchy.
- Display on/off to compare reach quality across third-party inventory.
Why it works
You can now configure channel strategy at the ad group level, including YouTube Shorts only. That lets you isolate a surface without changing audiences or bids. Keep the same creative set when you compare channel mixes to avoid cross-contamination.
A ready-to-run test matrix for Google Ads Demand Gen Asset Experiments
Hypothesis | Variable | Arms | Success metric | Sample-size target | Notes |
---|---|---|---|---|---|
Vertical video lifts CTR on Shorts | Creative orientation | A: Square, B: Vertical | CTR | CTR reads quickly; conversion metrics begin after ~100 data points | Keep audio/copy identical. |
Product feed improves efficiency | Product feed | A: No feed, B: Feed on | Cost/conv. | 50 conversions/arm | Expect more clicks and conversions on average. |
Lookalike breadth scales at stable CPA | Audience | A: Narrow LAL, B: Balanced LAL | Cost/conv. | 50 conversions/arm | Seed with highest-quality list. |
tROAS beats Max Conv. Value for revenue efficiency | Bidding | A: MCV, B: tROAS | Conv. value / cost | Meet VBB eligibility first | Start tROAS ~20% below historical. |
YouTube-only tightens quality | Channel controls | A: All channels, B: YouTube-only | CVR | 50 conversions/arm | Configure at ad-group level. |
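If you keep this matrix as a living backlog, a lightweight data structure holds each hypothesis, its arms, and its status in one place. This is a sketch of our own convention, not a Google Ads feature; field names are arbitrary.

```python
# Track the experiment backlog as plain data, one record per hypothesis.
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    hypothesis: str
    variable: str
    arms: tuple
    success_metric: str
    min_sample: str
    status: str = "backlog"  # backlog -> running -> called

backlog = [
    ExperimentIdea("Vertical video lifts CTR on Shorts", "Creative orientation",
                   ("Square", "Vertical"), "CTR", "~100 data points"),
    ExperimentIdea("Product feed improves efficiency", "Product feed",
                   ("No feed", "Feed on"), "Cost/conv.", "50 conversions/arm"),
]

for idea in backlog:
    print(f"[{idea.status}] {idea.hypothesis} | judge on {idea.success_metric}")
```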
Guardrails, timelines, and “do not test” items for Google Ads Demand Gen Asset Experiments
Lock these rules in before you launch. They stop false positives and protect your data quality.
Non-negotiable guardrails
- Use two Demand Gen campaigns that are ready but not running. Build your control and treatment from these, and change only one variable between them.
- Keep budgets out of scope. Google states, “We don’t recommend testing the budget as a variable at this time.”
- Split traffic 50/50 for two-arm tests. Google’s recommendation gives the cleanest statistical read.
- Respect mirroring. Any change to the control (e.g., toggling video enhancements or optimized targeting) automatically reflects in the treatment to preserve test hygiene.
- Know your success metrics upfront. Pick one primary metric: CTR, Conversion rate, Cost/conv, or Avg. CPC, and hold it for the full run.
- End the experiment in the UI once you call a winner. If you don’t, paused arms can continue to serve on restricted traffic.
Data thresholds and confidence
- Confidence levels: 70% (fast, directional), 80% (balanced), 95% (conclusive). Results and tables respect your selection.
- When conversion metrics appear: the report starts calculating once you’ve collected ~100 conversion data points across arms.
- If you use conversion-based bidding: plan for ~50 conversions per arm so results can surface. Google suggests optimizing to shallower conversions (e.g., Add to Cart) if needed.
Learning-phase rules that protect your read
- Avoid major edits during learning. Google warns that changes to bids, budgets, or creatives reset calibration and delay optimal performance.
- Plan timelines by signal, not by calendar. Google notes the learning phase can take up to ~50 conversions or ~3 conversion cycles. Aim to reach those within ~2 weeks.
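Those rules reduce to a simple gate before you allow mid-flight edits. A minimal sketch, assuming an edit is safe once either guidance threshold above has passed:

```python
# Gate mid-flight edits on the learning-phase guidance cited above.
def safe_to_edit(conversions_since_change: int,
                 cycles_since_change: float) -> bool:
    """True once ~50 conversions or ~3 conversion cycles have passed."""
    return conversions_since_change >= 50 or cycles_since_change >= 3.0

print(safe_to_edit(conversions_since_change=38, cycles_since_change=3.2))  # True
print(safe_to_edit(conversions_since_change=22, cycles_since_change=1.5))  # False
```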
“Do not test” list for Google Ads Demand Gen Asset Experiments
- Budget size as the variable. Google explicitly does not recommend it.
- Multiple variables at once. Google’s setup guidance asks you to keep only one variable different.
- Mid-test edits to bids, budgets, or creatives. These changes disrupt learning and contaminate results.
- Forgetting to formally end the experiment. Restricted traffic can linger if you skip this.
How to structure asset groups for cleaner reads in Google Ads Demand Gen Asset Experiments
Think of each asset group as a single creative concept aimed at a single audience and surface. You will get faster answers when one idea meets one segment on one set of placements. Demand Gen supports rich mixes of video, image, and text assets, optional product feeds, and now channel controls at the ad group level, which lets you isolate YouTube (including Shorts) before you expand to Discover, Gmail, and the Google Display Network. Use that flexibility to keep variables tight so your Google Ads Demand Gen Asset Experiments stay honest.
The core blueprint
- One concept per asset group. Group shots, scripts, and visual style that tell the same story. Do not mix wildly different creatives in one group, or your read blurs. Start with the required formats so your group can serve everywhere you intend. Google lists minimum image and video specs by aspect ratio.
- One audience intent per asset group. Use your asset group to speak to a clear stage like new prospect or cart abandoner. Keep exclusions identical across experiment arms when creative is the only variable.
- One placement scope per test phase. Constrain to YouTube for a Shorts-first read, then re-run the winner across All Google channels. Channel controls sit at the ad group level and now cover YouTube, Discover, Gmail, and GDN.
- Right-size the asset count. Upload the required minimum so you serve broadly then add variants that test one idea at a time like new hooks or thumbnails. Asset quantity limits and required fields live in Google’s specs.
FAQ: your most common questions about Google Ads Demand Gen Asset Experiments
How long should I run a Google Ads Demand Gen Asset Experiment?
Run the test until your chosen confidence level (70% / 80% / 95%) declares a winner on your primary metric. Conversion-based reads start to populate only after meaningful volume. Google’s experiment UI shows Collecting data, Similar performance, or One arm is better once thresholds are met.
What sample size or volume do I need?
If you optimize to conversions then plan ~50 conversions per arm to surface results. The experiment card starts to show conversion-metric results after roughly 100 conversion data points across arms. Keep non-test settings fixed.
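If you want a pre-launch estimate instead of waiting on those thresholds, textbook power analysis gives a rough impressions-per-arm target for CTR tests. The sketch below uses the standard normal-approximation formula; it is not a Google-published method, and the baseline CTR and lift are hypothetical.

```python
# Impressions per arm needed to detect a relative CTR lift
# (two-sided test, normal approximation).
from statistics import NormalDist

def impressions_per_arm(base_ctr: float, rel_lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Impressions each arm needs to detect the given relative CTR lift."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p1, p2 = base_ctr, base_ctr * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar)) / (p2 - p1) ** 2
    return int(n) + 1

# Detecting a 15% relative lift on a 1% baseline CTR:
print(impressions_per_arm(base_ctr=0.01, rel_lift=0.15))  # ~74,000 per arm
```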
What traffic split should I choose?
Pick 50/50 for two-arm tests. That split gives the cleanest read.
Can I A/B test with product feed vs without feed?
Yes. Google documents with-feed vs without-feed as a supported scenario for Demand Gen experiments. Feeds add browsable product tiles next to your creative.
Where do I see asset-level winners for images and videos?
Open the Asset report. You can view performance at the channel, campaign, ad, and multi-campaign levels. The report covers images, videos, headlines, descriptions, and CTAs. It updates daily and shows assets that received impressions in the last 30 days.
Can I force YouTube-only or even Shorts-only for a clean read?
Yes. Use Channel controls at the ad-group level to choose YouTube, Discover, Gmail, or Google Display Network. You can further narrow to formats like Shorts when you want a vertical-video read. After you call a winner, test portability on All Google channels.
The card says “Similar performance.” What now?
End the test or extend runtime. Pick a sharper single variable for your next A/B. Do not stack variables. Keep budgets and bids unchanged during the run.
Should I test budget inside an experiment?
No. Google does not recommend budget as a test variable for Demand Gen experiments. Lock budgets before launch.
Will mid-test edits affect results?
Yes. Significant edits can extend or reset the learning period. Google recommends allowing about 50 conversions after a bid-strategy change before making more changes. Avoid mid-test churn.
I have low volume. How do I get a read?
Pick a shallower conversion for signal during the test, for example an add-to-cart or lead step. Widen placements with Channel controls only after your creative proves out on a constrained surface. Keep one variable different.
What metrics should I optimize for in creative A/Bs?
Use CTR to measure attention when hooks or thumbnails change. Use Conversion rate or Cost per conversion when offers and intent cues are constant. Select a single primary metric before launch and hold it.
As you run Google Ads Demand Gen asset experiments, round out your strategy with these deep dives: strengthen your fundamentals with our Google Ads interview questions, improve brand safety and inventory control using the Performance Max placement exclusions guide, and understand evolving SERP dynamics in Google Ads in AI Overviews: what it is, why it matters, and how to win. These resources complement your testing and help turn insights into scalable performance.