
Thursday, March 30, 2023

Artificial Stupidity

A few weeks ago I posted a sample of ChatGPT output, specifically a (very lame) fairy tale. What I didn't discuss here, as far as I remember, was that I later asked it some fairly technical questions in my academic specialty and it answered them accurately enough that a student would definitely get full credit on an exam. The New England Journal of Medicine today has launched what will be a series of articles on so-called Artificial Intelligence in medicine, beginning with ChatGPT and its competitors. 


Unfortunately it's paywalled, but I will just tell you that ChatGPT can pass the U.S. Medical Licensing Examination with a commendable score, produce accurate notes of a clinical encounter from an audio recording, and provide accurate specialty consultation. There don't seem to have been formal studies, but it appears to be no more fallible than a human physician, although its proponents do insist that human experts still need to review its output. 


You may have read or heard about the open letter with 1,000 signatories (of whom, for some mysterious reason, the NYT singled out Elon Musk, whom they apparently regard as the world's most perspicacious human) calling for a moratorium on the development of chatbots, seeing potential grave dangers to society. They may be right, but history since the 19th century has made it clear that you can't put the technological genie back in the bottle. Actually, I would say history since the neolithic revolution tells us that. People will inevitably grab hold of technologies for the immediate apparent rewards, but we only find out later what the harms may be, and by then the technology is so deeply embedded that we don't have a way out -- this goes for everything from agriculture to the automobile. 

 

The only reason there is hope to free ourselves from fossil fuels is that there are alternative technologies that can achieve pretty much the same results. But we aren't talking about giving up our cars and our electric lights and our factories, we're just talking about powering them differently. It does seem to me that once AI is embedded in processes that have hitherto been reliant on the 3 pounds of gray goo in our heads, it will be pretty much impossible to get it out. 


I think the most dystopian visions are highly improbable. The science fiction writer Dan Simmons, in the Hyperion tetralogy [spoiler alert -- you don't really figure this out until near the end], envisioned as long ago as 1989 a far future in which an artificial intelligence ecosystem had escaped from human control and come to exploit humanity for its own ends. (He also envisioned that human descendants from the even farther future would send a messiah to free humanity from its oppression, but that's probably not relevant here.) People do worry about something like this happening concretely, in the fairly near future. I don't find that plausible with existing technology, because these entities have no goals or desires of their own. They don't want anything, they don't want to control anything, they're purely reactive. But who knows -- somebody might create a goal-directed bot that does get out of control before we can pull the plug. 

 

But of more concern is how these technologies will affect social relations, employment, and the information environment. Beyond putting physicians out of work, they could be exploited to deceive and manipulate, and could unintentionally restructure society in ways we can't predict or even conceive. I don't know how alarmed we ought to be, but we do need to be paying attention.


2 comments:

Truman Bradley said...


Great post. Scary stuff.

Don Quixote said...

How are we supposed to know with utter certainty that you wrote today's blog post unassisted by AI technology?