TL;DR: On Tuesday, 18 November 2025, a global outage at Cloudflare disrupted parts of the internet for several hours. The issue originated within Cloudflare’s own systems and was not a cyberattack. Most of the websites we host experienced instability for up to three hours, after which normal service resumed.

  • Cause: A faulty configuration file used by Cloudflare’s Bot Management feature exceeded software limits and temporarily broke request handling across parts of its network (11:20–14:30 UTC primary impact; full normalisation by 17:06 UTC).
  • Impact: Major global brands (e.g., X, ChatGPT, Spotify, Zoom, Visa, Ikea) were affected alongside numerous smaller sites.
  • Search visibility: Brief 5xx errors like these do not harm Google or Bing rankings; crawlers slow down and retry.
  • Our stance: While the incident was outside our control, we apologise to all clients who were inconvenienced. We’re enhancing failover and communication procedures.

For details on our infrastructure and support options, see our Website Hosting & Domain Services page.

What happened on 18 November - timeline, cause, and who was hit


The timeline. At 11:20 UTC, Cloudflare began experiencing failures that surfaced to end‑users as 5xx error pages. Engineers initially suspected attack‑like symptoms, then identified the real culprit and began stabilising core traffic flows by around 14:30 UTC. All systems returned to normal by 17:06 UTC.

The root cause. Cloudflare’s post‑incident analysis attributes the outage to an internal change that caused a database query to over‑populate a Bot Management feature file. The file doubled in size, surpassed a built‑in limit, and triggered a crash in the proxy software that handles customer traffic. Once Cloudflare halted propagation of the bad file and rolled back, services recovered. Cloudflare emphasised the event was not a cyberattack.
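To make the failure mode concrete, here is a minimal sketch (in Python for readability; Cloudflare’s proxy is not written in Python, and the real limit and error handling differ) of how a hard cap on a machine‑generated file can turn a data problem into a crash:

```python
# Hypothetical sketch, not Cloudflare's actual code: a loader that enforces
# a preallocated cap on a machine-generated feature file.
MAX_FEATURES = 200  # illustrative limit, chosen up front for performance

def load_feature_file(path: str) -> list[str]:
    with open(path) as f:
        features = [line.strip() for line in f if line.strip()]
    if len(features) > MAX_FEATURES:
        # If callers treat this as unrecoverable, every process that loads
        # the oversized file fails at once across the whole fleet.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {MAX_FEATURES}"
        )
    return features
```

Because the same file propagated everywhere, the crash was global rather than isolated; rolling back to a good copy was enough to restore service.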

Scale of disruption. Because Cloudflare accelerates and protects a significant portion of global web traffic, even a partial failure reverberated widely. During UK business hours and early morning in the US, users encountered error messages across a range of high‑profile destinations. The Guardian’s live coverage described impact on platforms including Spotify, ChatGPT, X (formerly Twitter), Zoom, Microsoft Teams, retailers such as Asda and M&S, and payments/telecoms brands such as Visa and Vodafone.

Other outlets reported overlapping impacts. ABC News noted Cloudflare’s confirmation that the issue was resolved after several hours and quoted its CTO explicitly ruling out a cyberattack, adding that ChatGPT and X were among the affected services. The Associated Press highlighted interruptions touching League of Legends, Shopify, Dropbox, Coinbase, Moody’s, and public transit systems such as NJ Transit and France’s SNCF.

Representative list of affected big sites and services (drawn from the above reports and status dashboards):

  • X (formerly Twitter)
  • ChatGPT / OpenAI
  • Spotify
  • Zoom & Microsoft Teams
  • Visa
  • Vodafone & Vinted
  • Asda & M&S
  • Ikea & Canva
  • Shopify & Dropbox
  • Coinbase & Moody’s
  • NJ Transit & SNCF (public transit)

Cloudflare’s own status and blog updates—which we link below—provide the canonical timeline. Where third‑party trackers (e.g., Downdetector) showed spikes, they were consistent with Cloudflare’s stated windows.

Why this one felt so visible. Cloudflare is a foundational layer for the modern web: its network front‑ends millions of sites to accelerate delivery, mitigate DDoS attacks, and filter automated traffic. As UK media observed, Cloudflare handles roughly a fifth of global web traffic, which is why any blip creates outsized ripple effects.

External references for this section: Cloudflare post‑incident blog; The Guardian live blog; ABC News report; Associated Press coverage; Cloudflare status page (links below).

What this meant for our clients - our apology, the practical impact, and next steps


Our apology. The outage was outside of our direct control, but it affected you all the same. The majority of the sites we host were unstable for up to three hours on Tuesday, 18 November. We’re genuinely sorry for the disruption to your teams and customers, and we appreciate the patience and professionalism many of you showed while we monitored Cloudflare’s recovery and validated our own systems.

What you may have seen. Depending on traffic levels and caching, some visitors encountered a Cloudflare 5xx error page; others saw content load intermittently. Admin logins protected by Cloudflare Turnstile and certain API calls could have failed during the core impact window. This aligns with Cloudflare’s account of transient 5xx responses and temporary authentication issues during the incident.

E‑commerce and form submissions. Transactions that reached your origin servers succeeded as normal; requests that failed at the edge never reached your application. If your analytics show a dip during the impact window, it reflects requests that Cloudflare could not forward rather than an application‑side fault.

Search rankings and SEO. A frequent concern after any outage is whether Google or Bing will penalise your site. The answer here is reassuring: when widespread infrastructure incidents cause short‑lived 5xx errors, search engines throttle crawling and come back later. There is no lasting ranking impact from a few hours of 5xx responses. This point was reiterated by the search community on the day, supported by statements from Google’s John Mueller.
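As a rough illustration of that crawler behaviour (this is a simplified sketch, not Google’s or Bing’s actual logic), a polite fetcher backs off and retries on 5xx rather than discarding the URL:

```python
import time
import urllib.request
from urllib.error import HTTPError

def polite_fetch(url: str, max_attempts: int = 4) -> bytes | None:
    """Simplified sketch of how a well-behaved crawler treats 5xx errors:
    slow down and come back later instead of dropping the page."""
    delay = 60.0  # illustrative starting pause, in seconds
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except HTTPError as err:
            if 500 <= err.code < 600:
                time.sleep(delay)   # server trouble: back off, then retry
                delay *= 4          # exponential backoff between visits
            else:
                raise               # 4xx and other errors are handled differently
    return None  # give up for now; the URL is revisited on a later crawl
```

The practical upshot: a few hours of 5xx responses reads to search engines as a transient server problem, not a signal about your site’s quality.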

Security posture. Cloudflare confirmed the event was not the result of a cyberattack and that mitigation was achieved by rolling back the faulty feature file. We saw no evidence of data compromise.

What we did during the incident. Our team:

  • Verified origin infrastructure (servers, databases, and upstream provider metrics) to rule out localised faults; a simplified version of this check is sketched after this list.
  • Monitored Cloudflare’s status, engineering notes, and third‑party telemetry to track recovery and confirm resolution.
  • Communicated updates to clients with observed impact and performed post‑recovery smoke tests on critical journeys (checkout, account access, lead forms).
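As an example of the first check above, here is a simplified sketch (our real tooling is more involved; origin_ip and hostname are placeholders) of requesting a site directly from its origin, bypassing the CDN:

```python
import ssl
import http.client

def check_origin_directly(origin_ip: str, hostname: str, path: str = "/") -> int:
    """Hypothetical sketch: fetch a page straight from the origin server,
    bypassing the CDN, to separate edge failures from application faults."""
    # The TLS certificate is issued for the hostname, not the IP address,
    # so relax verification for this diagnostic probe only.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    conn = http.client.HTTPSConnection(origin_ip, timeout=10, context=ctx)
    try:
        # Send the real hostname so name-based virtual hosting still routes us.
        conn.request("GET", path, headers={"Host": hostname})
        return conn.getresponse().status
    finally:
        conn.close()

# A 200 from the origin while the public URL returns 5xx points at the
# edge, not at your application.
```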

What we’re doing next. Incidents like this are rare but instructive. We are:

  • Reviewing multi‑CDN / multi‑edge options for mission‑critical sites where additional failover is warranted, and expanding our runbooks for rapid edge routing changes.
  • Tuning caching and TTL strategies (see the sketch after this list) so popular content remains highly available during edge instability.
  • Improving comms with status hooks and optional SMS alerts for extended incidents affecting shared infrastructure.
  • Scheduling resilience reviews for clients who request them. If you’d like one, contact Gary or explore our Hosting & Domain Services to discuss options.
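On the caching point above, one widely supported mechanism is the stale-if-error Cache-Control extension from RFC 5861, which lets a cache keep serving a stored copy when fresh fetches fail with 5xx errors. A minimal sketch with illustrative values (CDN support for these directives varies, so check your provider’s documentation):

```python
def cache_headers(max_age: int = 300,
                  stale_while_revalidate: int = 60,
                  stale_if_error: int = 86400) -> dict[str, str]:
    """Hypothetical sketch: Cache-Control directives (RFC 5861) that let a
    CDN keep serving stored copies of popular pages during short outages."""
    return {
        "Cache-Control": (
            f"public, max-age={max_age}, "
            f"stale-while-revalidate={stale_while_revalidate}, "
            f"stale-if-error={stale_if_error}"  # serve stale for up to 24h on 5xx
        )
    }
```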

What you can do now. If you still see stale errors, clear browser/CDN caches and verify third‑party integrations (payment gateways, CRMs) are normal. Should anything look off, please open a ticket and we’ll investigate immediately.
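If you prefer to verify from a script rather than a browser, a quick probe such as the sketch below (the URL is a placeholder) confirms the page loads and reports Cloudflare’s cf-cache-status response header, which shows whether the response came from cache:

```python
import urllib.request

def probe(url: str) -> None:
    """Quick post-incident check: confirm the public URL responds and show
    whether Cloudflare served it from cache or fetched it from the origin."""
    req = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.headers.get("cf-cache-status", "n/a"))

probe("https://example.com/")  # placeholder: use your own domain
```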

Context from independent sources. For those who want the technical deep dive from Cloudflare (including exact timestamps and the underlying ClickHouse permissions change that inflated the feature file) we recommend the official post‑incident write‑up. For broader context about the scale of the event and the brands impacted, see coverage from The Guardian, ABC News, and AP. And for SEO reassurance, Barry Schwartz’s analysis at Search Engine Roundtable remains a go‑to reference for how Google and Bing treat short‑term 5xx incidents.

Once again, our sincere apologies. Even though this outage originated upstream, we own the experience of our clients. Thank you for bearing with us while we monitored and validated recovery. If you have questions about resilience tiers, incident communications, or multi‑CDN strategies, we’re here to help.

Key Takeaways

  • The 18 November 2025 Cloudflare outage was caused by an internal configuration issue, not a cyberattack.
  • The disruption affected a wide range of websites and services, creating intermittent instability for users during the incident window.
  • For most sites, the most visible symptoms were loading failures and server errors while Cloudflare engineers identified the cause and rolled back changes.
  • Short-lived infrastructure outages like this are unlikely to cause lasting SEO harm, as search engines typically retry crawling after temporary errors.
  • The incident is a reminder to invest in monitoring, incident comms, and resilience planning when you rely on third-party infrastructure.

References & Further Reading

  • Cloudflare post‑incident blog and status page updates (canonical timeline and root‑cause analysis)
  • The Guardian live blog coverage of the outage
  • ABC News report, including the CTO’s statement ruling out a cyberattack
  • Associated Press coverage of affected services
  • Search Engine Roundtable (Barry Schwartz) on how Google and Bing treat short‑lived 5xx incidents