
Let's sort out some thoughts on AI

AI isn't that big a deal. A sense of loss is understandable. We need to find more essential forms of happiness. Making money is a separate problem. The point where AI usage fees become sufficiently expensive will eventually come — do what you can in the meantime.


Where AI is right now

It’s been about six months since Claude Code Opus became widely used among developers. A model called Claude Mythos was recently announced, and apparently its cybersecurity performance is so outstanding that, due to the risks, it won’t be released to the public — only to select enterprises first.

AI is shifting the landscape in many ways. So I’ve had all sorts of thoughts, and after talking them over with fellow developers, I wanted to sort them out.

Is AI really that amazing?

Thanks to AI, things that used to take ages now get done in an instant. Even if the unreleased Mythos model ends up costing around $1,000 a month, the productivity math will still work out in its favor, and people will flock to it. What’s impressive is that AI can be applied to areas that were previously hard to streamline. But I don’t think “productivity gains” in themselves open up a genuinely new horizon.

The pace of technological advancement has always accelerated. The progress made over the 20th century is unprecedented in human history. Viewed this way, humanity really is remarkable, and so is our adaptability.

The moment when AI-driven productivity gains stop being news is bound to come. It's easy to imagine a world where AI's contribution to productivity is taken for granted, predictable, governed by something like laws. People will then thirst for a new form of innovation.

AI is, at its core, a tool. It makes tedious tasks easier. I can’t agree with the claim that AI is something more than a tool. That’s a political issue.

Getting past the sense of loss

I keep paying attention to the sense of deprivation that AI's excellent capabilities bring. The feeling that what used to be my ability, my asset, my means of self-expression is gradually fading away is a bit depressing. A sense of loss. It's simply the flow of the times; there's nothing to be done but accept it, I suppose… I plan to read The Future That Came First soon. It's said to be about the people who lived through the post-AlphaGo era in the world of Go. In any case, the important question is this: how do I live a satisfied life even though AI is superior?

How one finds satisfaction is a question each person has to answer on their own. If countless people had been drawing their life satisfaction from labor and came to believe AI was taking all of it away, AI's advancement would be blocked by political measures. Right now, not many seem to feel that way; most seem to perceive it as an excellent tool that makes their dreams easier to realize. The reason people talk about "the end of SaaS" is, I think, that the belief has spread that you can now build and run your own tools yourself.

Separately from the sense of loss, whether you can make money is a different matter.

How are we going to make money going forward?

To live in the vast world of financial capitalism, you need money to spend. I think this will be progressively resolved in the following two directions. Along the way, some will suffer and some will prosper.

  1. Adapt quickly: In a constantly changing era, you can adapt unbelievably well, outrun competitors, or seize entirely new opportunities. You need some luck, but in a rapidly shifting world, the competitive landscape keeps getting twisted around, so it might actually be more doable than it looks.
  2. Political resolution: The enormous concentration of capital driven by AI will produce even sharper inequality. When that happens, there’s no choice but a political solution. A candidate promising to somehow unlock the piled-up money and redistribute it well will likely win.

By the way, is the skill of using AI well meaningful? I don’t think it’s particularly meaningful. Being good with Excel is nice, but it doesn’t create any special value. AI, as an efficient tool, can massively amplify even faint possibilities, but nothing comes out of nothing. In a world where AI technology itself is shifting wildly, competitors can easily leapfrog you, so the edge is small. And by the time AI stabilizes and becomes predictable, everyone will use it well, so there’ll be no edge. You might score a one-off jackpot somewhere in between, but you can’t sustain it.

Supply and demand

Running AI costs money. Since everyone wants better AI, the companies building LLMs (OpenAI, Google, Anthropic) will pour massive investment into advancing it, and the market will keep facing supply far below demand. From the companies' perspective, they also have to recoup that investment to some extent, so usage fees will keep climbing too. Eventually it will converge on a point where getting a senior developer's work out of AI costs about as much as a senior developer's salary.

I don’t know when that point will arrive, but that it will arrive is certain. By then, the roles in which humans are more efficient will become clear. For those roles, hiring people will be cheaper than using AI, so jobs in those areas will grow significantly.

The world’s geniuses keep extending the shelf life of LLMs, making it look almost limitless, but a bubble can’t grow forever. (It can, however, sustain a reasonable size for a long time…)

Will AI bring a world where we don’t have to work?

I think it’ll take at least 500 years. This isn’t a technology issue; it’s a political one. It’s a matter of changing people’s fundamental perception of power relations. Power is part of the nature of humans, and of living beings in general. Homo sapiens has existed for 300,000 years and has done well for itself. If 300,000 years of human inertia, and on a larger scale hundreds of millions of years of biological inertia, is still operating, then going against that enormous current is no easy feat.

Winning the competition; cooperating in order to win it. The ruling class fulfilling duties and responsibilities in exchange for its position; the ruled class receiving protection in exchange for theirs. Within that order, spending one’s time working is not merely about earning money; it’s a means of maintaining the order, of playing one’s role as a member of societies both large and small. A world that doesn’t need work is a world where that order has disappeared. Since power couldn’t be expressed, it might be extremely peaceful, but it would also be a rigid, stagnant society. Or a chaotic one with no power at all.

That said, power relations aren’t a fixed concept. Human rights being taken for granted is only a few hundred years old, and women’s suffrage has only about a hundred years of history. Humanity has the ability to build up systems and overcome its nature. A world where work isn’t required is too radical to arrive in the near future, but there’s no rule saying we can’t eventually reach an orderly, far more equal society in which most people spend their time on nothing but creative pursuits. So I’ve guessed about 500 years.

The limits of context

Context is not infinite. Assuming LLMs continue to be used, research on how to efficiently manage context will go on endlessly. As AI consumers, we just need to manage context reasonably well. Rather than chasing extreme efficiency, it may be better to wait for the models themselves to improve.

The most worthwhile thing is removing unnecessary context. For example,

  • Removing unused, unnecessary code (if immediate removal is hard, at least mark it as deprecated)
  • Removing duplicate code or documents, and caring about a Single Source of Truth.

Tasks like these can be delegated to static tools. Static tools are deterministic (same input, same output), which eases the burden on the LLM. Tools like lexers, Language Servers, ESLint, and knip are excellent.
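To make this concrete, here’s a minimal sketch (the file name and both functions are hypothetical) of marking not-yet-deletable code as deprecated while keeping a single source of truth for the live path:

```typescript
// legacy-totals.ts (hypothetical file: the names here are made up for illustration)

/**
 * @deprecated Use `calculateTotal` instead. The JSDoc tag lets editors,
 * lint rules, and code-reading agents see at a glance that this path is
 * on its way out, even before it can actually be deleted.
 */
export function legacyCalculateTotal(prices: number[]): number {
  let total = 0;
  for (const price of prices) {
    total += price;
  }
  return total;
}

// The single source of truth for the same calculation.
export function calculateTotal(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price, 0);
}
```

From there, a deterministic checker such as knip (run with `npx knip` once its entry points are configured) can list the exports and files that nothing references, so dead code gets deleted before it ever takes up space in the model’s context.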

The consumer perspective

What do people spend money on?

  • Travel, to experience new worlds or escape the tedium of daily life.
  • Nice clothes or a fine home, to express or show off themselves.
  • Books or classes, to satisfy intellectual desires. (Or to earn degrees.)
  • Their children’s education expenses, carrying the dream of climbing to a higher class.
  • Starting a small business, to prepare for retirement. (But the failure rate is substantial.)
  • Going to gatherings, trying to overcome loneliness and find a sense of connection.
  • Squandering money on gambling addiction.
  • Buying expensive gifts to win someone’s affection.
  • Real estate, stock investing — growing money itself can become the goal.

AI advancing doesn’t change these desires. Since AI is merely a tool, it can be used as a novel means to satisfy the above desires better, but it can’t shift the direction.

Responsibility

AI has no legal status. Unlike individuals or corporations, it isn’t recognized as an agent. Therefore, it has no responsibilities, no duties, and no rights. The individuals/corporations that own AI will bear that weight.

Ownership

Recently I read an article titled Sensing the discomfort in the face of the Claude Code leak, and it left a deep impression. There was an incident where Claude Code’s source code was leaked, and personal stories of people downloading and analyzing it were everywhere. I realized, a bit late, that my own sensibilities had been warped too. The leaked source code felt almost like public information, so I even thought the quick-footed might as well take a look. Why? Because our sense that an AI-produced output is something that can be exclusively owned is weak. Why is that? I think it’s because AI plays a large role in producing the result, and that AI was built from all sorts of information gathered across the globe, so the entity that could actually claim exclusivity feels dispersed, like fog.

Generally, when too many intermediaries stand in between, moral sensibility slackens. What if the material benefits we take for granted today were in fact obtained by exploiting countless people? But there aren’t many people who seriously reflect on this bare face of capitalism day to day.

As AI advances, gray zones where it’s unclear whether ownership can be claimed are also growing. It’s true that institutions aren’t keeping up. Institutions require people’s agreement in order to form, which is another way of saying that agreement doesn’t exist yet. We can keep waiting for agreement to form, or we can drive it forward with active advocacy. Another political issue.

What AI can’t do

Let’s narrow this to “now” for the moment. Thinking about the future gets far too vast. There are of course things AI can’t do.

  1. What AI has learned: routine tasks that don’t require deep thought would fall here. If the knowledge is already widely dispersed in the world, AI can handle it at something close to expert level.
  2. Areas AI hasn’t learned but where creativity can shine: since a vector has direction, if there’s an unknown world slightly beyond that direction, AI can shine a bright light on it. I think it can, to some extent, generate knowledge, information, and concepts that existed nowhere in the world. It draws conclusions better than humans do.
  3. Areas AI can’t reach: the context an LLM can hold is limited, and information that can only be expressed by holding context beyond that limit is beyond AI’s reach. The context limit can mean a quantitative limit, or a lack of our own ability as users to inject context.

But this is a logical argument, so it isn’t especially useful.

Leftover notes

  • What if true AGI appears and reaches a self-improving singularity! I think it’s still far off, and even if it’s realized, due to various political problems it won’t hit suddenly like a nuclear bomb going off.
  • What is intelligence? Who knows. This isn’t very important either. The way LLMs think has come to resemble the way humans do, and the assessment that AI writes code better than humans has also become familiar.