Indigenous human societies have many things in common. Among them is the invention of games. Some of those games have survived to the present day. The most familiar example is the modern Olympic Games, derived from those of ancient Greece. Another, less familiar example is the Arctic Winter Games. Last month, the 2023 edition of those games was held in Canada. While many competitors at this biennial event are of indigenous descent, an entire category of events, known as the Dene Games, is based on ancient games invented by the indigenous Dene peoples.
The Maya peoples also invented games. In particular, their ball game (which the Maya called pitz, adapted from an earlier Olmec version) has been the subject of much research and speculation by historians, anthropologists and archaeologists. This month's photo was taken at the site of one of the largest Maya ball courts ever discovered, at Chichén Itzá in Mexico.
Often, indigenous games had associations with war. The association of pitz with war was a part of ceremonial life at Chichén Itzá. Last month, most of the usual suspects (the mainstream media) reported that, for the first time, residential quarters had been discovered at Chichén Itzá. Their reports were misinformation. Those residential quarters were actually discovered over a hundred years ago.
Those news stories were so misinformed that they could have been generated by Artificial Intelligence bots. The mainstream media also wrote stories about such bots last month. Their main, factual story appeared when Alphabet, the parent company of Google, lost US$100 billion in market value in a single day. That loss came after Google's AI chatbot, named Bard, made a simple mistake on the subject of astronomy.
That story led to a plethora of mainstream clickbait about the impending takeover of humanity by AI chatbots. Some of the more informed journalism, however, should give us cause for concern. Clarkesworld, a prominent publisher of science fiction, used to accept direct, unsolicited submissions from first-time authors, bypassing the need for representation by a human literary agent. That changed last month, when the publisher saw a massive spike in submissions that it rejected for plagiarism. That spike, Clarkesworld said, was due to submissions generated by AI bots.
I "conversed" last month with one of the prominent AI chatbots. I was not impressed. Fourteen months after the launch of the James Webb Space Telescope, I would have expected the chatbot to know that the telescope is no longer something that "will be" launched into space (especially after it told me that its knowledge base is updated "regularly"). Nor was I surprised last month to hear of a Tesla electric vehicle recall due to a beta version of its Full Self-Driving software. Driving safely and writing good science fiction require either human intelligence or Artificial General Intelligence. The latter does not yet exist, and today's chatbots are greatly overhyped.
There is potential danger here, but it doesn't come from AI chatbots themselves. It lies in our misuse of them as tools. Some AI chatbots come with warnings that the content they generate may be inaccurate or misleading. While "human chatbots" can also be misleading, we often fall for their misinformation if it is congruent with our own ideologies.
Indigenous people fell for misinformation from their "human chatbots" when they were told that their games justified war. I would suggest that nothing has really changed. As more AI chatbots appear online, the potential for them to call us to arms increases. We were told fifty years ago (by Pete Townshend) that "in the battle on the streets/You fight computers and receipts." If we continue to misuse them, computers will soon be fighting us.
If you enjoyed reading this month's opinion editorial, please consider supporting independent, advertising-free journalism by buying us a coffee to help cover the cost of hosting our website. Please click on the link or scan the QR code. Thanks!