Four Opinions on LLMs
I have four opinions on LLMs. I'm not quite sure they count as doublethink yet.
Obviously this is another take on the fever dream that seems to have captured the world at the moment. Depending on who you talk to, Large Language Models / AI (they are different, but that’s for another time) are either going to solve all human problems or bring on the end of society as we know it (these are also not mutually exclusive). I’ve managed to distil my opinions down to four positions, and at the moment I’m still trying to work out whether they meet the definition of double(quadruple?)-think.
1. The intellectual property theft that underpins AI training is morally wrong
Quite a turnaround: from big tech using the DMCA to crack down on individuals, to the biggest tech companies ingesting every copyrighted work on the face of the earth and being very open about it. The people whose content was used to train the machines should have been compensated, and the content should have been licensed before being allowed into a training set. This goes for everything from Open Source code to everything any artist ever created. Despite this less than auspicious start in life…
2. As a technology, AI is amazing and has massive potential
Like everyone else, when ChatGPT launched, I enjoyed getting my words translated into pirate and then moved on with my life. I then came to think of it as ‘fancy autocomplete that might just make stuff up like a ten year old in the playground’. Since the launch of agentic tools [1], I’ve become much more optimistic. This has also coincided with the point at which I started getting actual day-to-day value out of the tooling. Many more of my side projects (including this website) have been completed since I started using ChatGPT to write code snippets, and the Codex / Claude Code workflow is a revelation.
I’ll also get this out of the way: I agree with everyone who claims that for real work, you can’t beat the craftspersonship of an experienced developer or team of developers. I am not attempting to use an LLM to build large-scale, hardened financial transaction systems, or anything that might touch on GDPR [2] or the UK Online Safety Act [3]. I am, instead, using it to help me set up self-hosted services and build small apps that (at most) a dozen people might interact with.
However, like any tool, it’s just a tool. In a few years the boosters will have either cashed out / moved on / gone bankrupt, and we’ll have a useful piece of technology that we can use to make the world better, should we choose to.
3. The economics of AI is a bubble that might take a lot of things with it when it bursts
Just read Ed Zitron for the details, and realise that we’re firmly into the phase of billionaires juicing the rest of society before everything collapses.
I am also a bit bitter that the PC I built in 2022 might be worth more than it was when I bought it because I went overboard on SSDs and RAM.
I am very hopeful that, in time, we’ll have local models that can run on consumer hardware that are as powerful as Claude Sonnet 4.6. Once that’s possible, I’ll buy some hardware and enjoy the consistency of running my own LLM backend. When that happens, I don’t know where the AI companies will be getting their revenue from. Every medium to large enterprise will just roll their own or buy cheap hosted models from companies who aren’t burning money training new models.
4. AI is being used to justify all the worst behaviours of our neocapitalist society
AI isn’t making people redundant. Corporations in search of ‘shareholder value’ are making people redundant because they’re banking on the promises that the AI boosters are pushing, with no understanding of the actual technology, nor any reverence for the companies and people that they claim to lead.
I’m not a huge fan of our current mode of capitalism, and feel that this technology has been hijacked by a group of people with a very specific end-game, and it doesn’t involve everyone having safe, fulfilling lives where they can explore their potential and contribute to a fairer, more just society.
Final Thoughts
Writing this has made me think a bit more about the environmental angle on the technology and whether it counts as a distinct opinion on AI. I thought about including it as a fifth opinion above, but I think it’s fundamentally wrapped up in the last two.
We have an amazing new technology that is definitely not pricing in its externalities. However, that isn’t a feature of LLMs specifically [4]. Until we get a more general handle on pricing long-term externalities into short-term decision making, this will remain another thing that’s ‘too hard’ to fix. So unless we find a general fix for the climate crisis and global geopolitics, I’m going to leave it as another string on that already terrible bow.
Notes
1. I’m drawing a distinction here between the LLM itself and the harnesses that have been built to allow it to do work.
2. I like GDPR, but think it’s basically ignored by big tech companies.
3. The result of non-technical people ignoring technical reality while making policy.
4. We are still unable to properly account for the externalities of cars, and those are a pretty mature technology that’s been with us for over 100 years.