AI everywhere
In 2023 the Korean government launched a programme to create "AI digital textbooks". The Deputy Prime Minister and Minister of Education said:
“By harnessing the power of AI digital textbooks in our communication and learning, we can achieve the vision of ‘personalized education for all’”
Unsurprisingly, the programme failed, though not before textbook companies spent $567 million. So maybe the programme didn't fail; after all, the purpose of a system is what it does. Educational "reform" is mostly a system for transferring public monies to well-connected tech companies. And while the textbook companies might be in the hole for the moment, I'm sure the AI companies got paid.
The basic idea of learner-driven education is great...for highly motivated learners. But most students don't fall into that category. So even if AI-assisted education were actually capable of consistently producing correct answers, you'd still probably end up with lots of bored students trying to jailbreak their AI. And the motivated students? Maybe my experience isn't typical, but chatbots come across as dumb. Good-natured, but not very bright.
- The hidden axis: the left-right spectrum has a non-ideology problem — Strength In Numbers, G. Elliott Morris
This one bothered me more, though it probably matters a lot less in the broad scheme of things. As part of their analysis, they relied on an LLM:
To solve these problems, I rely on a large language model (LLM) to return various pieces of data for each answer in our survey. Using the computational backend to OpenAI’s GPT-5.1 model (in other words, using computer code instead of ChatGPT on my browser) I send each text answer to the LLM and ask it to perform the following tasks.
They go on to explain more of the specifics, and include their actual prompts. That's what you should do for reproducibility's sake, but when you're using an LLM it seems more like going through the motions of reproducibility. Not only are LLMs black boxes, they're also probabilistic: they aren't designed to produce the same output from the same prompt.
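To make that concrete, here's a minimal sketch of what "the same prompt, twice" looks like against the API. It assumes the OpenAI Python client; the model name is taken from the quoted post, and the toy survey answer is my own invention. Even with temperature pinned to 0 and a fixed seed, OpenAI documents determinism as best-effort, not guaranteed.

```python
# Minimal sketch, assuming the OpenAI Python client (openai >= 1.x).
# The model name follows the quoted post; the survey answer is a toy example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Classify this survey answer as left, right, or neither: "
    "'Taxes on the wealthy should be higher.'"
)

def classify(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-5.1",  # as named in the quoted post
        messages=[{"role": "user", "content": prompt}],
        temperature=0,    # the most deterministic setting available
        seed=0,           # best-effort reproducibility, not a guarantee
    )
    return resp.choices[0].message.content

# If the pipeline were truly reproducible, this set would always contain
# exactly one element, on every run, forever.
runs = {classify(PROMPT) for _ in range(5)}
print(f"{len(runs)} distinct outputs across 5 identical calls")
```

The interesting output isn't any particular classification; it's whether five identical calls ever disagree. And rerunning the whole script later is an even weaker guarantee, since the hosted model can be updated underneath you.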
While they come with a learning curve and some risks, LLMs have been successfully used by both academics and practitioners to study various topics in politics today. For example, LLMs are good at placing elected politicians on the left-right ideological spectrum, and can also categorize historical speeches made by U.S. political figures. While LLMs/AI fall short of many promises made by enthusiasts over the last couple years, we are relying on them for the one thing they are good at: summarizing and analyzing text. [Emphasis added]
I'm skeptical. They're not "summarizing" text, and they're not "analyzing" it. They might produce an output that's a summary of text, and they might produce something you could call an analysis of it, but it's still just an imitation of those actions. Sometimes a very good imitation (not always; my experience with LLM summaries leaves a lot to be desired), but treating it as analysis is still classic pseudoscience.