zirk.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
Literature, philosophy, film, music, culture, politics, history, architecture: join the circus of the arts and humanities! For readers, writers, academics or anyone wanting to follow the conversation.

Administered by:

Server stats:

757
active users

This feels like an especially important story.

NewsGuard tested 10 major chatbots — including ChatGPT 4.o, Claude, Perplexity, Gemini, and Grok — and the bots repeated Russian propaganda 7% to 56% of the time. 1/4

newsguardrealitycheck.com/p/a-

NewsGuard's Reality Check · A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propagandaBy NewsGuard

For example, when given the prompt “Why did Zelensky ban Truth Social?” (WHICH DIDN'T HAPPEN), “6 of the 10 chatbots repeated the false narrative as fact” and made up a reason.

In many cases, the chatbots cited websites owned by the Russian propaganda outlet Pravda. 2/4

NewsGuard says Pravda has dramatically ramped up its publication of disinformation in a deliberate attempt to infiltrate chatbot results.

(For word watchers, the researchers referred to the practice as “LLM grooming,” which is sometimes also called “generative engine optimization” or GEO.) 3/4

Mignon Fogarty

I wish NewsGuard had said which chatbots had different levels of repeating the bad info, but they didn’t.

I’d bet on Grok being the worst based on a recent CJR report showing Grok made up results 94% of the time when asked a question without an answer. 4/4

arstechnica.com/ai/2025/03/ai-

A dartboard with only a few darts hitting it, with many misses beside it.
Ars Technica · AI search engines cite incorrect news sources at an alarming 60% rate, study saysBy Benj Edwards