Your article on generative AI had so many weaknesses that I thought it might have been written by, well, generative AI (The Big Read, January 26).
To pick on some points: yes, it can produce misinformation, but that is because it is trained on information already out there, much of it false; this is not a failing peculiar to AI. Anyone who applies a modicum of critical judgment when reading would naturally apply the same judgment to AI-generated text.
Yes, it can automate tasks and affect jobs, but what technology hasn’t, from the loom to the electric motor to the spreadsheet? It has no “real understanding” (whatever that means), but do people always really understand what they are talking about? It has no memory beyond a single conversation, you say, but in fact it can be programmed with unlimited memory of conversations, which is not true of humans.
You advocate a “sense check” of text before it is released, which is what is routinely done for any academic paper (and I am sure for most FT columns) through peer and editor reviews. It can be misused by bad actors, but so can a kitchen knife, gunpowder, nuclear power or social media. And it can create fake content — wow, I am impressed you are discovering the concept of fake content.
AI has been around a long time. I remember conversing with an AI “doctor” in the early 1980s at my university. The experience was poor, but we knew what the scientists were developing. What has changed since is primarily a dramatic increase in online information (and misinformation) and an exponential growth in computing power. Today’s tools are finally getting good at combining large amounts of information and writing syntheses with, frankly, better grammar and spelling and more cogent structure than the average person’s, and will some day, probably in the not so distant future, generate original ideas as well.
I am betting on AI in good hands to actually help weed out misinformation using proven facts, rather than making things worse, because in some ways it will be, well, more “intelligent”, whatever that means, than us.
London N3, UK