Answer engines are the new fake news

A great, fictional man once declared: “I believe virtually everything I read.” David St. Hubbins, lead singer and guitarist of Spinal Tap, mocked the earnest confidence of rock stars in the same way AI futurists are now mocking critical thinking itself. 

Right now, most of the tech industry has adopted St. Hubbins’ line without the irony. Google is embedding AI into Chrome. Tech leaders are declaring the end of websites. Hundreds of links will collapse into single answers, traffic will disappear, and the open web will be hollowed out. The future belongs to whoever wins inclusion in the AI’s response, not whoever builds the best site.

Sigh.

We spent the last decade learning that you can’t believe everything on Facebook. Now we’re about to make the same mistake with ChatGPT, Claude, and Gemini.

Clean story. Wrong conclusion. It assumes people will stop thinking critically about information just because it arrives in a prettier package.

Same Problem, New Wrapper

The fake news crisis taught us something: Polished presentation doesn’t equal reliable information. Nice formatting, confident tone, and shareable graphics do not come with a guarantee of truth.

We had to relearn basic media literacy. Check the source. Understand methodology. Look for bias. Read multiple perspectives. Think critically. Now answer engines arrive with a seductive promise: “Don’t worry about all that. Just trust what we tell you.” This is fake news 2.0.

The Workslop Warning

Harvard Business Review documented what happens when people stop interrogating AI outputs. They call it “workslop”: content that looks professional but lacks substance. Polished slides, structured reports, and articulate summaries that are incomplete, missing context, and often wrong.

Employees now spend two hours on average cleaning up each instance. One described it as “creating a mentally lazy, slow-thinking society.”

Another said: “I had to waste time checking it with my own research, then waste more time redoing the work myself.”

This is what happens when we outsource critical thinking. The polish looks good. The substance isn’t there. Someone downstream pays the price. If AI can’t reliably produce good work internally, where context and accountability exist, why would we blindly trust it externally, where neither exists?

High Stakes Require Verification

Imagine your doctor uses an AI summary for your diagnosis. Your lawyer relies on ChatGPT for contract advice. Your financial advisor trusts Gemini’s recommendations without checking. You’d demand they verify, right? Check sources. Show methodology. Prove they’re not just accepting whatever the algorithm says.

Medical decisions, legal issues, financial choices, and safety concerns all require source transparency. You need to see the work. You need context. You need to verify. A chat interface doesn’t change that fundamental need. It just makes it easier to skip those steps.

These facts point to a clear, if countercultural, conclusion.

Websites Aren’t Going Anywhere

Yes, discovery patterns are changing. Yes, traffic shifts. Yes, AI surfaces some content while burying others. That doesn’t make websites obsolete. It makes them more important.

The sites that die will deserve it: SEO farms gaming algorithms, content mills producing garbage. The sites that survive will offer what compressed answers can’t: verifiable sources, transparent methodologies, deep context that can’t be summarized without losing meaning.

When fake news dominated social media, the solution wasn’t “stop using sources.” It was “get better at evaluating them.” Same thing here. Answer engines are a new entry point, not a replacement for verification. The smart response to an AI answer isn’t “thanks, I believe you.” It’s “interesting, now let me dig deeper.”

We’re Not That Lazy

The “websites are dead” thesis assumes something bleak: that humans will stop being curious, critical, and careful about information that matters. That we’ll just accept whatever Google tells us.

People want to understand things deeply, not just know the answer. They want to form opinions, not inherit them from algorithms. They want to verify claims when stakes are high. That requires going to sources. Comparing perspectives. Thinking critically instead of letting technology think for you. You can’t do all of that in a chat window.

The Bar Just Got Higher

AI answer engines aren’t killing websites. They’re exposing which ones were never worth visiting.

The question isn’t whether websites survive. It’s whether your website offers something an algorithm can’t: real expertise, transparent sources, and content valuable enough that people want the full story, not just the summary.

We learned this with fake news. Now we’re learning it again with answer engines.

Trust, but verify. Always verify.
