Chatbots were supposed to usher us into a New Age of quality information!
The hype about chatbots has reached galactic proportions. Most of the fantastical tales involve some version of ChatGPT, the 800-pound gorilla of “bot writing.”
ChatGPT is the creation of OpenAI, which is backed by $13 billion in Microsoft investments and billions more from other investors. As of March 2024, ChatGPT’s website had 500 million unique users — that’s 12% of the global workforce — according to a World Bank study (abstract, PDF). The site gets a massive 82% of the traffic of the top 40 generative bots the organization studied. For comparison, the authors say, it’s 17 times the number of visits to Google’s Gemini chatbot.
Unfortunately, bots that have been “trained” on the content of the Internet aren’t the paragons of knowledge that we’re often led to believe.
The dark underbelly of bot-generated material is revealed most clearly by an organization called NewsGuard, founded by former Wall Street Journal publisher Gordon Crovitz and National Magazine Award–winning investigative journalist Steven Brill. Here’s the real story:
- To date, NewsGuard has identified more than 1,000 bogus “news” sites that are entirely bot-generated and unreliable, running with little or no human oversight.
- The websites typically have legitimate-sounding names. Domains such as “Daily Time Update” and “iBusiness Day” present themselves as well-established news sites. But their chatbot-invented features are often just made-up articles about celebrity death hoaxes and other fabricated events.
- The scope is international. NewsGuard has so far found untrustworthy, bot-automated “news” sites written in French, German, Chinese, Russian, Spanish, and 11 other languages.
- More than 150 bot-written sites are spreading Russian disinformation. You might be impressed by their dignified names, like DC Weekly. But they carry fake stories, such as a claim that Ukrainian president Volodymyr Zelenskyy skimmed foreign aid to buy one of King Charles’s estates in England. In total, these English-language Russian sites have been viewed more than 37 million times, according to a NewsGuard investigation.
- The Chinese government is in on the action, too. One Beijing-run site used chatbot-generated text to invent the false claim that the US operates a bioweapons lab in Kazakhstan to infect camels and spread disease to people in China. Hmm.
- Popular chatbot systems readily spread misinformation. When leading chatbots were subjected to red-teaming (i.e., systematic testing), the systems would merrily repeat false claims in 80% to 98% of the cases, according to a 2023 NewsGuard study (PDF).
- Top advertisers spend $2.6 billion per year on fraudulent chatbot sites. In an analysis of more than 6,500 sites, NewsGuard and ad-monitoring firm Comscore found many automated rumor mills displaying these ads.
- Chatbots are being trained mostly on low-quality “news” sites. Of the information sites that NewsGuard rates as high in quality, 67% take steps to block bots from scraping their content. By contrast, 91% of low-trust bot sites allow all Web crawlers to access them.
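The crawler blocking mentioned in that last item comes down to a site’s robots.txt file, which tells Web crawlers what they may fetch. Below is a minimal sketch, assuming a hypothetical site (example.com) and OpenAI’s publicly documented crawler name (GPTBot), that uses Python’s built-in robots.txt parser to check whether a given crawler is permitted to scrape a page.

```python
# Minimal sketch: check a site's robots.txt to see whether a crawler may scrape it.
# "example.com" and the page path are hypothetical placeholders; "GPTBot" is
# OpenAI's documented crawler name.
from urllib.robotparser import RobotFileParser

def crawler_allowed(site: str, user_agent: str, page: str) -> bool:
    rp = RobotFileParser()
    rp.set_url(f"https://{site}/robots.txt")  # location of the site's crawler rules
    rp.read()                                 # download and parse Allow/Disallow lines
    return rp.can_fetch(user_agent, f"https://{site}{page}")

if __name__ == "__main__":
    # Does this (hypothetical) site let OpenAI's GPTBot read its articles?
    print(crawler_allowed("example.com", "GPTBot", "/news/article.html"))
```

A site that wants to keep its articles out of chatbot training data simply lists the bot’s name under a Disallow rule in robots.txt; the 91% of low-trust sites NewsGuard describes leave their doors wide open.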
In the real world, chatbots can ruin people’s lives!
What difference does it make if chatbots are “learning” mostly by absorbing content from low-trust, chatbot-generated sites that often lack factual information?
Protect yourself from bot-written nonsense by questioning everything
Massive damage to the public’s ability to learn the truth has already occurred.
The Pew Research Center published a study this month showing that more than half (52%) of Americans say they “generally find it difficult to determine what is true and what is not when getting news” regarding the country’s 2024 elections.
Wordsmiths have invented a term to describe content that is designed to create strong negative emotions: angertainment. Wiktionary defines it as “programming, especially talk shows and talk radio, which is characterized by anger or which provokes anger in its audience,” adding that the first documented use of the word was in 2000.
A CityAM article plainly states that “outrage is the most profitable emotion.” It cites a years-old University of Pennsylvania social-media study showing that “content which evoked anger was 34 per cent more likely to be shared.” How prescient they were.
Several top video creators participate in TikTok’s so-called Pulse program. TikTok pays the influencers half of the revenue from ads that are placed next to their feeds. Each personality said there was more “rage-baiting” than ever before. One called it “a race to the bottom,” according to a Business Insider report.
How can you avoid low-quality, often-inaccurate, chatbot-generated, rage-baiting media?
Ask yourself two questions when you find yourself consuming what may be bot-made content:
- Does the content make you feel anger toward someone or something?
- Is the creator asking you to — just a guess here — send them money?
If your answer to both questions is “yes,” there’s a simple way to lower your blood pressure and avoid getting your pocket picked: change the channel.