LLMs, ChatGPT, and Luddites
I’ve seen a few articles and status updates of late calling people who are against the development of large language models (LLMs) like ChatGPT or Bard ‘Luddites’. The focus of those articles is the potential for this technology to displace human jobs, exacerbate existing biases, or facilitate the spread of misinformation. Or all of the above.
I wrote about Luddites back in 2014, after Audrey Watters spoke of them in her ALT keynote, which is why this annoys me again – the claim that anyone who is against technological development is a Luddite.
No. The Luddites were not opposed to new technology (weaving looms); they were against the misuse of that technology (mechanical weaving looms) in a deceitful manner to avoid labour laws.
So, as I wrote nine years ago:
The modern “Luddite pedagogues will wield a [metaphorical] hammer, but they won’t see any urgency in bringing it down on trivial things like touch-screen gadgetry. Instead, the targets lie elsewhere.”
So, Luddites do not oppose development or advancement. They do not oppose technology in any form. To brand someone a Luddite is to acknowledge their understanding of the implications and application of technology in the setting described. To call someone a Luddite is, in fact, to show respect for their moral and ethical consideration of the use of technology.
Critics of LLMs do not necessarily oppose technological progress (though some do); rather, they advocate for a responsible approach to this innovation, one that considers the ethical, social, and economic impact of these technologies. By engaging in a more informed discussion we can work together to ensure that the development and deployment of LLMs and other AI technologies benefit society as a whole.
The call for a halt to LLM development seems to have come a little too late. I agree with the sentiment, with the need to think more clearly about the direction, ethics, development, and ultimate goal of this work, but why wasn’t this considered over the last few years whilst these tools were in the planning stages? What were the project and product teams thinking – have they not been paying attention to all the books and films of the last 80 years where an AI or sentient robot goes a bit ‘odd’? Did no one think to say, “hang on folks, there’s another aspect to this we need guidance on”?
In March it was reported that Microsoft had laid off its AI ethics and society team, just as the importance of their work was becoming ever more apparent. That’s really bad timing.
Whatever we end up using these new tools for – in academia, with and by students, and in the world of work – it’s clear this is a step we can’t take back. We need to understand them and the opportunities they offer, in order to include them properly and appropriately, whilst understanding and working with any limitations, technical or ethical.
UPDATE – Here’s one person, Nick Chatrath, who has a “cautiously optimistic outlook on how artificial intelligence will change our world” based on the “countless hours I’ve spent in conversation with the researchers, scientists, and leaders who are building and implementing AI systems.” This is good; here’s someone who is close to the people who designed and developed the tools, and to their perspectives on how they did it, not just what they did.
- I’ve revived my Flipboard account of late to act as somewhere I can collate stories and important articles I find in this area. Feel free to browse, follow, and suggest additions.
Cautious steps forward seem like the wisest choice, but tech companies driven by profit and a “first out of the door” mentality usually ignore caution until it’s too late.
Kevin