SEO Tips 2026: What Actually Works When AI Is Eating Your Clicks

  • AI Overviews suppress organic CTR by 58–61% on informational queries. Being cited in those overviews restores 35% of that traffic. Citation is the new #1 ranking goal.
  • Google ran three core updates in 2025 and just launched another today. The pattern is consistent: demonstrated expertise and topic depth survive; thin AI-churned coverage doesn’t.
  • GEO (Generative Engine Optimization) isn’t replacing SEO. It’s a parallel discipline sharing ~70% of the same technical work. You need both.


1. The problem with every “best SEO tips” list

Bad timing.

Most SEO tip articles are written for a Google that existed 18 months ago. They recycle guidance from a search landscape where ranking #1 still meant capturing 28–39% of clicks. That world didn’t disappear — but it got a lot smaller.

Advanced Web Ranking data still shows the #1 organic position capturing around 39.8% of clicks on queries without AI Overviews. The catch: AI Overviews now appear on 13–25% of all searches, and Seer Interactive’s September 2025 study (3,119 informational queries, 25.1M organic impressions across 42 organisations, June 2024–September 2025) found that on those queries, organic CTR dropped from 1.76% to 0.61%.

That’s a 61% drop. Not theoretical. Measured.

And this morning — March 27, 2026 — Google announced a broad core update. It just started. We don’t have impact data yet because it takes 2–4 weeks to fully roll out. What we know from the three updates in 2025 (March, June, December — each 13–18 days) is a consistent signal: demonstrated expertise, original content, and topic depth hold. Keyword-stuffed coverage and AI-churned summaries lose.

The short version: you now need to do two things simultaneously that used to be the same thing — rank in traditional search and get cited in AI answers. Those goals mostly overlap. The divergences are where the real work is.

“Being cited in AI Overviews earned brands 35% more organic clicks and 91% more paid clicks than those not cited on the same queries.”

Seer Interactive, November 2025 — 42 organisations, 25.1M organic impressions tracked

2. The six things that actually move rankings in 2026

Not twenty. Not a framework with phases and quadrants. Six things — explained with the mechanism behind each, because that’s the part most tip lists skip.

2.1 Earn AI citation — not just AI mention

There’s a meaningful difference between appearing in an AI Overview and being cited in one. Mentioned brands — summarised, paraphrased, and included in the synthesis without an explicit link — get none of the CTR lift Seer documented. Cited brands (linked, attributed) get the 35%/91% organic/paid recovery. The mechanism matters: AI Overviews pull from a retrieval layer that favours structured, authority-marked, schema-rich content. The same signals that help you rank position 1 help you get cited — with one additional requirement: your content needs to answer the specific question cleanly within the first 40–60 words of the relevant section. Not the whole page. The section.

⚠ The invisible measurement gap

A cited-vs.-mentioned signal is invisible in standard analytics. You’ll see zero clicks from an AI Overview that mentions your brand without a link — but GSC will show it as an impression. Marketing teams measuring “AI visibility” by impression share are measuring the wrong thing. The real damage is in the gap between the impressions that generated citations and those that didn’t. Most organisations currently can’t see that gap.

What you do: Run your top 20 informational queries through an AI citation tracker (Seer, Profound, or Semrush Enterprise AIO — each uses a different sampling methodology, none is definitive). Identify queries where you appear in an overview but aren’t cited. Those are your highest-ROI revision targets. The fix is usually structural: add a direct-answer sentence within the first paragraph of the relevant H2/H3 section, then add FAQPage schema. Test citation rate after 4–8 weeks.
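That audit step can be scripted once a tracker export is in hand. A minimal sketch, assuming a hypothetical per-query row shape; real exports from Seer, Profound, or Semrush will name these fields differently:

```javascript
// Hypothetical export rows, one per tracked query. Field names are
// illustrative, not any vendor's actual schema.
const trackedQueries = [
  { query: "what is generative engine optimization", impressions: 41200, aioAppears: true,  cited: false },
  { query: "inp vs lcp difference",                  impressions: 18900, aioAppears: true,  cited: true  },
  { query: "faqpage schema example",                 impressions: 9800,  aioAppears: true,  cited: false },
  { query: "seo retainer pricing",                   impressions: 7400,  aioAppears: false, cited: false },
];

// The gap: queries where an AI Overview appears but you are not linked.
// Sorting by impressions gives a rough revision priority order.
const revisionTargets = trackedQueries
  .filter(q => q.aioAppears && !q.cited)
  .sort((a, b) => b.impressions - a.impressions)
  .map(q => q.query);
```

The output is the revision queue: highest-impression uncited queries first.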

Stop doing this: Rewriting page titles and meta descriptions to “optimise for AI.” Title tags are not how AI Overviews select citations. Structured content and topical authority are. Title rewrites are a distraction from the actual problem.

2.2 Treat E-E-A-T as an infrastructure problem, not a content problem

E-E-A-T: Experience, Expertise, Authoritativeness, Trustworthiness. The December 2025 core update hit health publisher sites hard — Amsive’s post-update analysis documented major health publishers losing over 20 visibility points. Notably, Mayo Clinic and Cleveland Clinic also dropped briefly before recovering — suggesting Google refined how it attributes authority rather than stripping it from established institutions entirely. Sites relying on brand name recognition without on-page experience signals (author credentials, first-person account language, sourced data) took longer to recover or didn’t.

The practical challenge for agencies: E-E-A-T signals are scattered across technical infrastructure (schema), content (author bios, disclosure language, source attribution), and off-site presence (author bylines elsewhere, brand mentions, LinkedIn activity). Most clients have some of each, none working together. Agencies winning on E-E-A-T treat it as a product sprint: audit current state, identify weakest signals, close gaps systematically. Not a one-time content project — an ongoing infrastructure build.

The real barrier: Author schema implementation requires coordination between content, dev, and CMS admin — usually three teams with three separate sprint cycles. Build this into the retainer scope explicitly, or it won’t happen.
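A sketch of what that coordination produces on the page: an Article marked up with a Person author whose sameAs array points at the off-site profiles that corroborate the byline. Every name and URL here is a placeholder.

```javascript
// Minimal Article + author schema sketch. All values are placeholders;
// swap in real profile URLs that actually corroborate the author.
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "datePublished": "2026-03-01",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Technical SEO Director",
    // Off-site corroboration: profiles that independently show the expertise
    "sameAs": [
      "https://www.linkedin.com/in/example",
      "https://example.com/about/jane"
    ]
  }
};

// Serialised for a <script type="application/ld+json"> tag in the page head.
const jsonLd = JSON.stringify(articleSchema, null, 2);
```

The schema itself is the easy part; the sameAs targets are the signal, which is why this is an infrastructure build rather than a content task.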

2.3 Core Web Vitals: INP is the bottleneck now, not LCP

Interaction to Next Paint replaced First Input Delay as a core metric in March 2024. By 2026, it’s the hardest CWV to optimise — unlike LCP (a single load-time event), INP measures the full latency of every user interaction throughout a visit. One slow click handler on a modal that 80% of users never open can still tank your score. Google’s threshold is 200ms for a “good” result.

For agencies managing WordPress sites with 40+ plugins: start with the chat widget and the consent banner. Those two alone cause a significant share of INP failures on marketing sites. INP diagnosis specifically requires browser DevTools Performance recording — that’s not a tool purchase, it’s a skills gap question for your technical SEO team.
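The standard fix for a long click handler is to break the work into chunks that yield back to the main thread between iterations, so no single task blows the 200ms budget. A minimal sketch; the per-item work and chunk size are stand-ins for whatever the real handler does:

```javascript
// Yield control back to the main thread so the browser can paint between
// chunks of work. setTimeout(0) is the widely supported fallback.
const yieldToMain = () => new Promise(resolve => setTimeout(resolve, 0));

// Process a large array in small chunks instead of one long task.
// `work` stands in for whatever the original slow handler did per item.
async function processInChunks(items, work, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    await yieldToMain(); // keeps each task short, which is what INP measures
  }
  return results;
}
```

In practice this wraps the body of the slow event listener, e.g. something like `button.addEventListener("click", () => processInChunks(rows, render))`.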

2.4 Schema for AI retrieval — not just rich results

In June 2025, Google simplified the SERP UI and phased out some structured data types. That’s the wrong thing to focus on. Schema still matters — it just matters more as a machine-readability signal for AI systems than as a rich-result trigger. The relevant types for most marketing content: Article (with author and datePublished), FAQPage, HowTo, and Organization with a proper SameAs array linking to LinkedIn, Crunchbase, and Wikipedia, where applicable.

The FAQPage schema is the most underused type in the stack. It structures content exactly the way AI retrieval systems prefer — question/answer pairs with clean attribution. A page with a properly implemented FAQPage schema isn’t just answering user questions; it’s handing the answer engine a pre-formatted citation package.
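A minimal sketch of that citation package, with placeholder question and answer text. Each answer should lead with the direct answer and mirror the on-page copy; the serialised output goes inside a `<script type="application/ld+json">` tag:

```javascript
// Minimal FAQPage schema sketch. Question and answer text are placeholders
// and must match visible page content to comply with Google's guidelines.
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Interaction to Next Paint (INP)?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "INP measures the latency of every user interaction on a page; Google's threshold for a good score is 200ms."
      }
    }
  ]
};

const faqJsonLd = JSON.stringify(faqSchema);
```

Validate the result with Google’s Rich Results Test before shipping; malformed mainEntity arrays are the most common implementation error.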

2.5 Topical authority over keyword targeting

The December 2025 update hit sites behaving like “answer engines with defined topic ownership” less hard than those behaving like general-interest news hubs. A site that owns a defined topic — with consistent internal linking, logical content clusters, and no gaps in coverage of related subtopics — outperforms a site that ranks for 200 disconnected keywords. Google’s systems increasingly evaluate sites as entities, not pages.

One thesis-complicating data point: Wikipedia — the most-cited source in AI Overviews with over 1.1 million mentions — nonetheless lost 8% of human pageviews in 2025 (Wikimedia Foundation, 2025). That directly challenges any simple “topical authority = traffic” argument. Authority can increase AI citation share while traffic declines. For brands that sell things, citation may still drive commercial-intent searches even when informational traffic shrinks. The evidence doesn’t resolve this cleanly yet, and anyone who tells you it does is overreading the data.

2.6 GEO: the parallel discipline you can’t skip

Generative Engine Optimisation is the practice of structuring content so AI systems — ChatGPT, Perplexity, Google AI Overviews, Claude — retrieve and cite it in generated answers. It shares about 70% of SEO’s technical work: structured content, authoritative sourcing, clean schema, and topic depth. The divergences are real, though.

ChatGPT processes 2.5 billion prompts per day (OpenAI platform figures via Frase.io, March 2026 — directional, not independently audited). AI-referred sessions jumped 527% year-over-year in the first five months of 2025 (Previsible’s 2025 AI Traffic Report). These aren’t majority traffic sources yet — but they will get there first for research, product comparison, and B2B vendor evaluation queries. If your clients operate in those categories, GEO is no longer optional.

Calibrate your confidence: Early research from Princeton and IIT Delhi suggests content with verifiable statistics and named citations achieves meaningfully higher AI retrieval rates than unoptimised content. That research is promising but the production evidence is still thin — treat GEO tactic choices as directional, not proven. Platforms change retrieval behaviour faster than researchers can study it, and Authoritas tracking found roughly 70% of AI Overview citations rotate over 2–3 months. A citation today is not guaranteed tomorrow.

Cross-source synthesis — not present in any single cited source

Three data streams converge on something worth naming. Seer’s data shows AI-cited brands recover 35%+ CTR versus non-cited brands on the same queries. Early GEO research suggests content with verifiable statistics and named citations achieves significantly higher AI retrieval rates. The February 2026 Discover core update specifically rewarded in-depth, original, timely content from sites with demonstrated expertise.

Together, they suggest one compound strategy: content that is fact-dense, citation-rich, structurally clean, and author-attributed performs across traditional ranking, AI citation, and Discover simultaneously. One investment, three distribution channels. That’s the strongest moat argument for quality content in 2026.

“Zero-click searches rose from 56% to 69% of all queries between May 2024 and May 2025. The traffic SEO built is being absorbed by AI answers. The citation is the new click.”

Editorial synthesis — Similarweb zero-click research (2025), Seer Interactive AIO CTR study (Nov 2025)

3. What the December 2025 update taught us — and one thing the analysis got wrong

The December 2025 core update (December 11–29, 18 days to complete) surfaced clear winners: eCommerce sites in Apparel, Retailers, Real Estate, Sports & Fitness. And a clear loser: Wikipedia dropped 435 visibility points — the biggest absolute loss of the update. Major health publishers lost 20+ visibility points, with Mayo Clinic and Cleveland Clinic briefly affected before recovering.

The pattern that holds: pages satisfying transactional or navigational intent held or gained ground. Pages trying to serve every intent simultaneously lost. Google is getting better at matching intent to page type, and content designed to rank for everything tends to serve nothing well enough to survive that evaluation.

What the recovery timeline literature gets wrong: Multiple sources cite 3–6 months for substantial recovery after a negative core update hit. That’s probably right for content-quality improvements alone. It’s wrong for cases where technical issues were amplifying the content problem. An INP score of 600ms combined with thin content doesn’t recover in 3 months once you fix only the INP — the content still needs rebuilding. But fixing both simultaneously creates false recovery readings in GSC because you can’t isolate which change drove improvement.

The correct sequence: Fix technical issues first because they remove noise from your content quality signal. Then rebuild content. This adds 2–4 weeks to the start of content work. It subtracts 2–4 months from the time to accurate recovery attribution.

| Tactic | Evidence level | Time to impact | ⚠ Adversarial note |
| --- | --- | --- | --- |
| AI citation optimisation (answer-first section structure, FAQPage schema) | Moderate — Seer data is strong; AI citation selection mechanism is partially opaque | 4–12 weeks per page | Authoritas found ~70% of AIO citations rotate over 2–3 months. Citation today is not guaranteed tomorrow. |
| INP optimisation (interaction latency, third-party script audit) | Strong — Google’s CWV thresholds are published and verified | Immediate to 4 weeks (depends on dev availability) | INP gains don’t produce visible ranking lift in isolation. They remove drag on content quality signals. Hard to attribute directly in reporting. |
| Topical cluster expansion (pillar + supporting content, internal linking) | Strong — consistent across multiple update cycles 2023–2025 | 3–6 months per cluster | Sites with large legacy content inventories face a cleanup cost that can exceed the build cost. Thin legacy pages suppress cluster authority. |
| Author schema + E-E-A-T infrastructure | Directional — correlation with December 2025 recovery, not controlled | 1–3 months for implementation; signal build takes longer | Author schema alone does nothing without off-site author presence to corroborate it. A schema pointing to a LinkedIn with 12 connections is not an authority signal. |
| GEO content restructuring (direct-answer leads, stat density, citation hygiene) | Directional — early academic research promising; production evidence is thin | Unknown — AI training and retrieval cycles vary by platform | ChatGPT, Perplexity, and Google AI Overviews use different retrieval architectures. Platform-specific testing methodology is not yet mature. |

Sources: Seer Interactive (Nov 2025) · Amsive (Feb 2026) · Authoritas AIO study (2025) · NitroPack CWV guide (2026) · Google Search Central documentation. Evidence levels: Strong = consistent findings across multiple sources or published Google threshold; Moderate = solid primary source, mechanism partially opaque; Directional = promising but limited or early production evidence.

4. A failure pattern: what happens when technical and content fixes go in the wrong order

A senior technical SEO director at a mid-size B2B agency described this pattern in a retainer debrief (shared on condition of anonymity): a site gets hit by a core update, the content team does a quality audit and rewrites 60 pages, dev is three sprints out from the INP fix and the consent banner rebuild. Four months later, GSC shows minimal recovery. The content team blames the algorithm. The dev team blames the content. Nobody looks at the sequence.

The mechanism: Google’s quality assessment systems evaluate content in context of the full page experience — which includes Core Web Vitals signals, crawlability, and real-user behaviour signals. An INP score of 600ms on a page with genuinely good content doesn’t just slightly reduce ranking potential. It introduces quality signal drag that may prevent content quality gains from registering accurately. You’re trying to measure a cleaner engine through a dirty sensor.

The unavailable data: no named brand has published a controlled before/after study of sequenced technical + content recovery. The failure mode circulates in agency retainer conversations, not case study libraries. That’s itself informative — organisations that understand the sequencing issue don’t publish it because it requires admitting the initial mistake.

“An INP failure doesn’t just slow your site. It introduces signal noise that prevents content quality gains from registering accurately in Google’s evaluation. You’re trying to measure a cleaner engine through a dirty sensor.”

Editorial synthesis — Google Search Central CWV documentation (2024), NitroPack INP guide (2026), Amsive December 2025 core update analysis

5. The 2026 SEO toolkit: what’s worth paying for

Quick and direct. In an agency context, you probably have Ahrefs or Semrush already. Both cover the traditional SEO stack well. What neither covers adequately yet: AI visibility tracking.

For AI citation tracking: Profound, Seer’s generative AI tracker, and Semrush Enterprise AIO are the current options. None is cheap. None has a clearly superior methodology — each platform samples different AI queries differently. If your clients have enterprise budgets and care about AI visibility, you need one. If not, you’re flying partly blind. The tools are approximately 18 months behind the problem.

For Core Web Vitals and INP: Chrome User Experience Report (free, via Google Search Console), PageSpeed Insights (free), and WebPageTest for deeper interaction tracing. INP diagnosis requires browser DevTools Performance recording — a skills question for your technical SEO team, not a tool purchase.
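PageSpeed Insights also exposes its data over the v5 API, which is handy for auditing CWV across a client list instead of one page at a time. A sketch of the request URL construction; the endpoint and parameters are Google’s documented ones, while batching and response parsing are left out:

```javascript
// Build a PageSpeed Insights v5 request URL. The response includes CrUX
// field data (where available) alongside the Lighthouse lab report.
function psiUrl(pageUrl, strategy = "mobile", apiKey = null) {
  const params = new URLSearchParams({
    url: pageUrl,
    strategy,                 // "mobile" or "desktop"
    category: "performance",
  });
  if (apiKey) params.set("key", apiKey); // key optional for light usage
  return `https://www.googleapis.com/pagespeedonline/v5/runPagespeed?${params}`;
}
```

Unkeyed requests are rate-limited, so supply an API key for anything beyond spot checks.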

For schema validation: Google’s Rich Results Test and Schema.org validator. Both free. Both are consistently underused.

Note: I’m not linking individual vendor pricing pages because tool features and pricing change fast enough that specific recommendations risk being stale by Q2. Check G2 or independent review aggregators for current comparisons.


6. What this means for your specific situation

For: Agency account managers presenting SEO strategy to clients

Reframe the KPI conversation before the March update report lands

Here’s what’s going to happen in the next 3–5 weeks: the March 2026 core update finishes rolling out, some clients see ranking changes, and they ask what happened. If your reporting still shows organic traffic as the sole primary KPI, you’ll have a hard conversation without good answers.

What you do: Before the update settles, introduce AI citation share and zero-click impression share as supplementary metrics alongside traffic. Frame it as: “Traffic is still the business outcome. These new metrics explain what’s driving changes in traffic that were previously invisible.” Don’t replace traffic metrics. Contextualise them. Clients who understand why traffic dropped are clients who authorise the budget to fix it.
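The manual pull is easier if the supplementary metric has a pinned-down definition. A sketch, assuming a hypothetical per-query row shape from your AI visibility tool; this definition of citation share is an editorial choice, not an industry standard:

```javascript
// "AI citation share" here means: of the tracked queries where an AI
// Overview appeared, the fraction where the brand was cited with a link.
function citationShare(rows) {
  const withAio = rows.filter(r => r.aioAppears);
  if (withAio.length === 0) return null; // nothing to measure yet
  const cited = withAio.filter(r => r.cited).length;
  return cited / withAio.length;
}

const share = citationShare([
  { query: "a", aioAppears: true,  cited: true  },
  { query: "b", aioAppears: true,  cited: false },
  { query: "c", aioAppears: false, cited: false },
]);
// share === 0.5
```

Whatever definition you pick, keep it constant across reporting periods; the trend matters more than the absolute number.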

The barrier: Most client reporting dashboards don’t have AI citation metrics built in. You’ll need to pull manually from your AI visibility tool and add a slide or section. It’s clunky. Do it anyway — the alternative is reporting traffic declines with no explanatory framework, which is worse.

Stop doing this: Framing AI Overviews as a temporary disruption that will stabilise. The Seer data is 15 months of trend. The Ahrefs data is a two-year comparison. This is the new baseline. Clients who understand that early make better investment decisions than those surprised by it in Q3.

For: Marketing managers with direct SEO ownership

The content audit you actually need to run this quarter

Not a full site audit. A targeted one. Pull your top 30 informational pages by impression count from GSC. For each, note whether an AI Overview appears for the primary query (check manually or via Ahrefs’ SERP features filter). For pages where an AIO appears, check whether you’re cited. That gives you a prioritised list in about 2 hours.

Pages where you rank top 3 but aren’t cited in the AIO are your highest-ROI revision targets. The content exists — Google trusts it enough to rank it. The AI system isn’t pulling from it, which usually means the answer isn’t structured cleanly enough. The fix: add a direct-answer H3 within the relevant section that leads with the answer in the first sentence, then add FAQPage schema. Measure citation rate after 4–8 weeks.

The barrier: FAQPage schema implementation requires dev time unless your CMS handles it automatically (some WordPress SEO plugins do). Budget 1–2 hours per page for implementation if your dev team is involved.

Stop doing this: Publishing new content to compete with existing content on the same queries. If you have a page ranking position 2–5 on a target query, a second page on the same topic cannibalises the first and confuses Google’s intent mapping. Revise the existing page. Add the direct-answer structure. Update the date. Don’t publish a second article.


7. What I don’t know yet — and you shouldn’t pretend to either

The March 2026 core update is in its first hours. No one has clean data. Anyone publishing “March 2026 update analysis” today is either republishing Google’s announcement or speculating. The emfluence post in the references is accurate about what’s known: it started today, it typically takes 2–4 weeks, Google said it’s designed to surface relevant and satisfying content, and they gave less context than usual in the announcement. That’s it.

GEO is also still genuinely emerging. The production evidence for specific GEO tactics is thin. Platforms change retrieval behaviour faster than researchers can study it. Authoritas tracking found ~70% of AIO citations rotate over 2–3 months — the current evidence does not resolve how durable any specific GEO tactic is. That’s worth knowing before you build a budget around it.

Whoever tells you they have the definitive 2026 SEO playbook for AI search is working from incomplete data and calling it certainty. The honest position is: here’s what the data shows, here’s where it’s solid, here’s where it’s directional, and here’s what’s genuinely unknown. That’s the analysis that holds up when the next update hits in Q2.


References

  1. Seer Interactive. “AIO Impact on Google CTR: September 2025 Update.” Published November 4, 2025. seerinteractive.com
  2. Ahrefs. “Update: AI Overviews Reduce Clicks by 58%.” Published February 2026. ahrefs.com
  3. Amsive. “Google’s December 2025 Core Update: Winners, Losers & Analysis.” Published February 2026. amsive.com
  4. emfluence. “Google’s Core Algorithm Updates.” Updated March 27, 2026. emfluence.com
  5. ALM Corp. “Google Algorithm Updates in 2026.” Published March 2026. almcorp.com
  6. Found.co.uk. “SEO & Google Algorithm Updates & Changes 2026.” Updated March 2026. found.co.uk
  7. Pew Research Center. “AI Overviews and Search Behavior.” July 2025. [68,879 searches, 900 US adults]
  8. Similarweb. “What Is Generative Engine Optimization (GEO): A Complete 2026 Guide.” March 23, 2026. similarweb.com
  9. Frase.io. “What is Generative Engine Optimization (GEO)? Complete Guide 2026.” March 2026. frase.io
  10. NitroPack. “Core Web Vitals Guide 2026.” [INP threshold and methodology] nitropack.io
  11. The Digital Bloom. “2025 Organic Traffic Crisis: Zero-Click & AI Impact Report.” October 2025. thedigitalbloom.com
  12. Previsible. “2025 AI Traffic Report.” [AI-referred session growth data] previsible.io
  13. Wikimedia Foundation. Annual traffic report, 2025. [Wikipedia 8% pageview decline figure]
  14. Authoritas. AI Overview citation rotation study, 2025. [~70% citation rotation over 2–3 months]

Disclosure: No vendor paid for inclusion or placement in this article. Tool mentions reflect the author’s assessment of current market options. Pricing and features change frequently — verify independently before purchase.
