The AI / Web Search Time Bomb

 

A series of tubes. (A deep cut if you’re under 35.) Also, that’s the Paulaner brewery in Munich.

 

This post may seem somewhat perpendicular to the subject of beer, but it’s smack-dab in the center of my professional career as a writer/sometime journalist. And it does intersect with beer, too, in ways that might be highly salient in three to five years.


Prior to the mid 1990s, if you wanted information about a topic, any topic, you had to find a physical object with words stained on it—a newspaper, book, dictionary, map, yellow pages, etc. Such a situation seems incredibly slow and laborious, and even though I lived through those benighted times, I have a hard time imagining it.

Then came the internet, and nearly all that information became digital; with the late-aughts arrival of the smartphone, we could access most of humanity’s collected knowledge using a small device we kept in our pockets or purses. The convenience was an astonishing advancement, but fundamentally, the information we gathered had not changed, only the mechanism of accessing it. Dictionaries and maps went online, the yellow pages became company websites, and books, newspapers, and magazines started living digital lives on servers.


We are on the edge of a massive transformation in this basic relationship. Given the revolutionary technologies that led to this place, it’s surprising to see the latest development come thanks to a minor, incremental innovation you can find at the top of any Google search: an AI summary. But that little algorithmic widget (and related tech) has the power to undermine the information ecosystem.

It used to be that you’d find a blue link that would take you to a web page. The search engine was the mechanism that connected you to the information, much like a library or a dictionary was the mechanism in the pre-internet days. On the internet, information is digital, but you still needed to go find it. Search engines were the bridge between people and that information.

Now, however, Google uses AI to scrape the top search results and summarize the information. It doesn’t do this for all searches yet—you still get the blue links to news sites for current events. But for general categories, Google wants to make sure you have an instant answer, no clicking necessary. Even when a subject is ambiguous, it offers the user helpful info. Type in “Casablanca,” and AI tells you this is both a movie and a city.

From the user perspective, this may look like a handy shortcut. The summaries are usually okay, and they’re getting better, but they’re still inconsistent enough that I rarely glance at them. But oldsters will remember when Wikipedia debuted; it was so bad that for years it was a punchline. Revisions have transformed it over time, however, and it’s become a valuable resource everyone uses and trusts. Google’s AI summaries will follow a similar, but far faster, evolution. The user won’t have to spend minutes clicking around and trying to find and digest information; they’ll have an answer within seconds—and in six months, a year, most people will use and trust Google’s AI summary rather than bothering to click through.

It’s not just Google. More and more people are starting to interface with chatbots like ChatGPT instead of ever visiting the internet directly. They don’t do searches; they ask the bots to spit out answers. Like Google, these bots scrape the internet for information and offer instant summaries on any topic.

But here’s the thing: by breaking the connection between the reader and the writer/publisher, AI also undermines the motivation for humans to create the information on which it relies. We don’t think of those summaries as the result of IP theft, but to a greater or lesser degree, they are. This is why Google doesn’t offer summaries of the news—doing so would make obvious how directly Google was stealing and repackaging the work of human reporters. Or, to put it another way: Silicon Valley is using AI to create bespoke shadow internets tailored to the user, while concealing the source of all that information we once had to consult directly.

Case Study
Do a search on the history of IPA, and Google will helpfully offer the following summary:

“India Pale Ale (IPA), a style of beer characterized by its high hop content, strong flavor, and higher alcohol content, originated in Britain during the 18th and 19th centuries. The British East India Company, supplying beer to its employees in India, spurred the development of this style.”

A small link button follows this summary for the tiny percentage of people who care to check the source.

Is it a good summary? Not really. The two sentences seem to be plagiarized, or at least lifted from different parts of an article. They kind of tell part of the story, but they don’t quite connect to each other, and I suspect Martyn Cornell, the writer who has done more than any other historian to clarify the history of IPA, would squirm uncomfortably at some of the imprecise and/or confusing phrases (originated during the 18th and 19th centuries? the East India Company “spurred” the development?). I chose IPA because its history is tricky and many people have spent the past century and a half mischaracterizing it. It’s a tough subject, resistant to summary. I’ll check back from time to time to see how the summary changes.

As a writer, I am alarmed by any system that anonymizes my work, appropriates it without payment (I believe attorneys refer to this as “stealing anything that is not nailed down”), and spits out a less nuanced, more inaccurate, and more boringly written version of it. If people are no longer visiting this site for information, what’s my incentive to write it? Why would I agree to be the unpaid intern for a voraciously greedy software company that is busily working to put me out of business?

When that happens, AI will start scraping the garbage that’s left, and pretty soon those adequate summaries will themselves become garbage. That won’t create a societal catastrophe when we’re talking about beer, but it’s another matter when it happens to subjects like health care, scientific studies, and independent news. It’s easy to see how reliable sources could be replaced by the misinformation and conspiracy theories that already infest the web.

It’s already becoming a problem. In 2022, 48% of my traffic came from search. Through the first five months of 2025, only 42% of my traffic came through search. Surveys on search behavior vary, but they all agree that searches ending without a clickthrough, and searches made inside a chatbot, are both increasing. I am not sure how this will ultimately impact the viability of this site, but for newspapers and magazines, it is an existential crisis. Ad rates are based on traffic, so when traffic declines, so do revenues. This is why every newspaper is desperate to sign up members and get them onto a company app where people will still see ads. The for-profit media business model is already teetering on nonfunctional; even if AI summaries only dent traffic rather than shattering it, that could prove fatal.

Finally, what about the entities that rely on the internet for their businesses or charitable activities? If you owned a brewery, would you feel comfortable entrusting Google to provide information about your company to the public? What happens when the AI summary is just wrong and damages your business? I am attempting to steer people to the Celebrate Oregon Beer website because it has the most accurate information on the internet on its subject. I want people to understand the state’s history and culture and its hops and breweries. One of the site’s best features is its searchable brewery database and map. It’s a super handy tool! But if no one ever finds it because they’re relying on AI summaries, what’s the use?

Humans are creatures of convenience. They aren’t going to think deeply about where their information comes from. A lot of them aren’t going to care much about the information they do get. They will choose an easy source of information, even if it’s dubious, over having to click and hunt nearly every time. This is not a possible future, either—it’s already here. I am very worried about the effect all of this is going to have not just on the Beervana Blog, but in obvious and less obvious ways society-wide. The internet made information incredibly accessible. Now AI is doing the same with misinformation.