
Why Will Wikipedia Wither?

aka: in the age of AI, what is the role of summarizers

Wikipedia is the world's free encyclopedia.

Some definitions: an encyclopedia summarizes knowledge—drawn from what we'll call primary sources—in a useful, digestible format. And primary sources are, well, primary! Research papers, quotes, police reports, press releases, survey data, photos, stats on Pokémon, medical records, …the list goes on.

So maybe you can see where this is going. 👀

Wikipedia intends to be an unbiased, representative summary of primary source data. (Whether it is or not is a different post, but let's go with this.)

I have this theory that, as we as a society head down the infinite slippery slide of all-knowing AI, anything that 'summarizes'—not just Wikipedia, but also literal journalism—has no utility in a world where an AI can just do this for you.

Star Trek

There's a famous, memeable scene from Star Trek IV: The Voyage Home, released in 1986, a few months after I was born. Scotty tries to talk to the computer. Laughable at the time, but today this is something we can just do.

A scene from 1986's Star Trek movie
go ahead, watch the whole scene, as a treat

When I think about the future, I have great respect for the sci-fi writers of the past: what did they imagine, especially in the seemingly inane world-building that purely fills out a scene?

(As a digression, I love that in The Expanse, various scenes feature intermodal containers but IN SPACE. Because why would humanity redesign the standard? 📦🚀)

I won't be the first or last person to think about this, but I think that Star Trek predicted this new world aeons ago (and the newer series basically approach it the same way).

In Star Trek, or other sci-fi canon, the primary way our protagonists—or honestly, average citizens—interact with technology is indirect, via what we now know as LLMs or whatever, that summarize and present information derived from primary sources.

These queries might be "is the ship on fire", but also "what is the population of planet XYZ", or "who was the victor in the Great War of 2775".

We don't see our stars sit around reading blog posts trying to glean some insight into the culture of a new race they're meeting, not just because it'd make for pretty lackluster TV, but because this is a 'solved problem' in their universe.

Deep Implications

Let's apply this to today's world, or say the world we want to build given the advent of LLMs (and LLMs masquerading as "agentic AI").

I think it basically tells us that the web is dying.

This is a big take. My job at Google for years was to promote it.

But even in the early days of Gemini or ChatGPT, I found that the best use of AI was basically just to 'summarize' some content, news or otherwise, that was otherwise covered in ads and cookie notices. And while I do think content sites have realized their hubris here, and generally pulled back on some of the noise, I don't see a world where this business model works.

But to take this point to its logical extreme, why do, say, news sites even exist? I'm aware this is a horrific idea—we're killing off journalism—but at its core, journalism is reporting on what happened, i.e., primary sources.

Instead, why not simply ask an all-knowing AI to summarize those same primary sources? Isn't that what journalists and Wikipedia do?

Again, it goes without saying that this (as it stands today) is a terrible idea.

Back to my point about the web: Google is diving head-first into the death of the web, because I can answer my queries with zero-click searches—that's even without using Gemini explicitly. And of course, ChatGPT was never a search engine to begin with, so every answer it gave you was zero-click.

So why have the results at all?

Critical Thinking

I haven't thought this through too much—I plan to write another post on what AI means for education, especially for the sort of future I want my young kids to experience.

But right now, in 2025, some students live in the ludicrous world you already know about: they press a button to receive an essay, which is submitted to a teacher who might just pass it back to another AI to grade it.

One naïve interpretation is that this may weed out performative assessments. Is the counter-intuitive 'benefit' here that we can dispense with topics that students don't actually want to learn? Is this today's calculator? (Of course, this assumes they want to learn anything at all.)

Primary Sources

I think we're moving towards a world where the importance of primary sources—content, data, whatever—is elevated.

This blog post is itself a primary source—it's an opinion, which arguably still counts. And LLMs will happily slurp it up probably minutes after I post it, just like every bit of content out there.

And cynically, you could twist my argument here to mean that everyone should be a "content creator".

But I actually kind of mean that. We should all be collecting (and publishing) small data.

If AIs are free, and they are effectively all-knowing (again, big ifs), then the benefit is that they can identify new, novel data, or connect the dots between data in interesting ways. The average research paper is read by zero people (rounded down).

Is the next big thing going to be the idea of collecting new and novel data, or at least working out how to best and most effectively publish your own discoveries?

Will the new journalist class publish not 'articles' as we know them now, but purely their primary discoveries, in as plain a format as possible, for an LLM to consume and surface to the world?

Discovery

Going back to Star Trek v. Wikipedia (2025), one open issue with how we imagine people of the future interacting with content is discovery.

A better question: where are the doom-scrollers in the Star Trek universe? Again, this doesn't make for great content, but is Boimler sitting in his bunk reading a book (AI-generated or not), or doom-scrolling his TikTok feed?

And zero-click searches are great for answers, but seemingly not great for discovering more. (To be fair, maybe that will change.) But for better or worse, Wikipedia is curated, and it has novel things like headings that I can scan to decide whether I really want to dig into the 'controversy' section for my favorite actor.

There's also the concern that an agent acting on my behalf will reinforce my biases. If all I ever ask is whether my favorite actors own a dog, every answer will keep reporting that, rather than surfacing something that might push me off that path.

Parting Thoughts

I don't know why I wanted to write this post. I think I'm trying to imagine the world I want, given the constraints—given where we seem to be heading.

Do search engines survive this change? Does journalism survive? Does it need to?

These are all interesting questions I don't have the answer to. Anyway, bye 👋