Is Something Like a Successful Neo-Luddite Movement Too Much to Hope For?
I should be honest. For the most part, this is not a topic that I want to write about. I don't find myself wowed or particularly interested in ChatGPT, DALL-E, or any related tech, and I honestly can't understand why most people would be, other than out of a mild curiosity or as a fleeting distraction. In fact, I have a theory. I would guess that the average civilian plucked from the public gives any of these apps a month or two of attention before moving on with their lives, while the more avaricious employers and business owners (and people or governments who are otherwise looking to deceive) will meanwhile be adding them to their arsenals. Because despite finding an optimistic spin on their potential in many places, it's hard not to be skeptical about how these technologies could end up being used. I would apply this to all of the latest art and speech generators, but the chatbots seem the most dubious.
So let's focus on one for a minute. There are already stories floating around about students using ChatGPT to write their papers for them, as well as about the app being capable of writing malware and potentially causing other cybersecurity issues in the future as the technology improves. Then there is the fact that OpenAI, the company that built the app, outsourced a portion of the job to a company in Kenya where the employees were paid less than two dollars an hour to classify and filter harmful text. One example of such text was "a graphic description of a man having sex with a dog in the presence of a young child". That's just one. The Kenyan company ended up cancelling their deal with OpenAI eight months early due to the "torture" of the work. And this is all public information after the app has been on the market for just under four months now. Kind of a rough start.
Despite this, you can find tons of prognosticating online about the beneficial applications of ChatGPT across a variety of different industries. In fact, if you ask the app itself what industries it will affect it gives you a speculative list that includes banking, finance, education, retail, and marketing, among others. Some jobs that are expected to be impacted within those industries include customer service, tutoring, research and development, copy editing, and many more. It's not a short list.
And I noticed in the generated response two frequently used, choice words concerning the role of the technology as it might apply to the humans currently holding these jobs: it aims to "provide" and "assist". This could actually prove to be the case for a while. But even when you go to the source on whether or not it will eliminate certain industries altogether, it answers, "Yes, there is a possibility that certain industries could be eliminated or significantly impacted by technological advancements like artificial intelligence, automation, and robotics." I know that this is not an original or mind-blowing point to make about AI in general. But the contradiction in this case between "assisting" and "eliminating" is pretty glaring, and it seems to be the same conflicted message being sold as a societal good by the articles praising ChatGPT online. At least you will have some temporary assistance from the bot that you help train to take your job from you.
I am already getting out of my depth here, though. I have no predictions or paranoias about specifically where and how widespread this technology will be in the future because it's impossible to know. But I do have some more fundamental and possibly naive questions regarding all of these apps. For instance, did anyone ask for this? Does some portion of us accept the story that tech companies sell us every time (that their product is something you didn't know you needed until yesterday, that convenience is your god, or whatever other fiction they come up with) while the rest of us are either oblivious or feel helpless to stop the march of progress? Because that breakdown seems about right. And all the while, there is no broad public demand for these products. Just big-brained tech bros competing with each other to be the first to release the thing that no one asked for into the wild. Tell me if I am missing something.
The other day I re-listened to a great interview with one of the original tech bros, Jaron Lanier, though he also happens to be a brilliant thinker and critic of the current ad-model internet. And not really a bro in any stereotypical sense of the word. There are a handful of worthy quotes from him in the interview, some relevant to this conversation specifically. In one, when referring to the "data dignity" model of what our future could look like, he says, "Data dignity is where you don't believe there is ever a brain in a box. There is no autonomous robot, all it is is giant human collaborations for which people are acknowledged and paid." It's a refreshing and hopeful way of imagining a fully AI-integrated society, but it nonetheless strikes me as strange. I don't doubt that he has inside information on some crazy tech that is being developed unbeknownst to the general public. But there is still something about the implied inevitability of the full integration that feels wrong, because since when is such a specific future already written? And why are we talking about how to cope with it instead of how to prevent going in a direction that requires coping in the first place? Meanwhile, that tone of inevitability totally dominates the broader conversation. In other words, Lanier is just one of many people forming different models of dealing with a future that they've already decided is certain.
There is another part of that same interview where he is discussing the eventuality of tree trimming robots doing jobs that are currently held by people. He says, "You know, there should be a bunch of robots on a larger area than people can cover doing a better job." I don't quote this outside of the greater context of his point to make him look bad. It's obvious how much of a genuine humanist he is when you listen to the entire interview, but his point again brought up some potentially naive questions in my mind. Why should the tech world, or any niche within it, get to single-handedly determine the fates of other industries? Do the human tree trimmers in this example have any say over whether or not they have to hand their jobs over to the robots?
That feels like one of the more obvious but under-asked questions on this subject. And, I realize, a totally idealistic one. I understand that industries always have and always will change and die over time. The recording industry is a good example, and it demonstrates a relatively organic way that such a change can go down. Laptops became more affordable, along with recording software and recording equipment, and eventually booking expensive studio time became less of a necessity for bands to be able to record. And while this is obviously an unfortunate development for studio owners, the takeaway is that it was the result of advancements in several, mostly unrelated areas having a cumulative effect on one industry over time. But the swift and calculating nature of what some of this AI proposes to do to several industries at once is very different, which might be exactly why, at least in the case of OpenAI, they need to create an uplifting narrative of "artificial general intelligence that benefits all of humanity" in order to sell it to you.
And so I remain incredibly skeptical, and hopeful that there is some movement, whether it's Neo-Luddite or otherwise, that makes enough of an impression to remind us of how we might want to stay human for a little bit longer. Because I think that we desperately need competing narratives for what our future could look like. Maybe then, this one imagining of it won't feel quite so inevitable. I'm also hoping not to feel compelled to write about anything tech-related again, but we'll see if that sticks. Until next time.