While large language models like OpenAI’s ChatGPT, or generative models like Midjourney and the like, are somewhat interesting to me from a technological standpoint, the hype around both the models and their applications – which this time has even reached outside the tech bubble – is just incredibly boring.
People just can’t help themselves when a shiny new toy is put in front of them, throwing all common sense and ethical considerations out the window. Take the fact that all of these models are owned by a handful of for-profit companies – yes, looking at you, decentralized-Web3 bros who are suddenly all about a handful of closed-source machine learning models, with no knowledge of where the training data comes from. Then again, maybe “decentralized” for them always really just meant “deregulated”.
Questions about where exactly the vast amounts of training data come from are conveniently swept under the rug. A lawsuit filed against GitHub/Microsoft argues that using open-source code as training data for CoPilot amounts to software piracy, since many of the licenses involved require attribution. Similar lawsuits exist against the companies behind image-generating models: Getty Images, for example, is suing Stability AI for copyright infringement, alleging its images were used without permission.
Mandy Brown had a good take on another angle to this whole story. Recently, fears of AI turning sentient have grown louder yet again. But as Mandy Brown argues in Smoke Screen, all of this is a distraction from the inequalities already caused by applying so-called “artificial intelligence” in the worlds of medical care, policing, and beyond.
[…] the fear that some people may lose their superior status to a machine is the same fear that they may lose it to people they already deem inferior. It’s part and parcel of a blowback against human rights being extended to Black people, to women, to trans folks, to the disabled, to everyone they long assumed was deservedly less worthy (of money, care, attention, or respect) than themselves.
The story that “artificial intelligence” tells is a smoke screen. But smoke offers only temporary cover. It fades if it isn’t replenished. We have the power to tell different stories, to counter the narrative of “artificial intelligence” with one that is rooted in democracy and equality, in a vision of a living world in which life is not ranked according to perceived value under capitalism but in which care is extended to all.