Will we never program in plain English, even with AI?
This post is kind of a response to this article by Ben Kehoe. It's not really a response, since I agree with the article; it's more of an elaboration on some of the points in it.
The most commonly accepted definition of a program is a sequence of instructions that we write for a machine to execute. These instructions have to be specific enough to remove any ambiguity from them. Programming languages evolved to allow precise instructions at different levels of abstraction. But they’re still precise.
If you ignore programming languages and attempt to instruct a machine with plain English, you'll quickly realise how impossible this is. Plain English is simply too ambiguous for any kind of precise instruction.
We figured that out a long time ago when writing laws, and now we write them in a language so divorced from plain English that essentially everyone has trouble reading law text¹. We needed to make law text more specific, but we weren't yet smart enough to realise we needed a different language for that (law text predates computing by quite some time).
So, people who say that anyone will be able to program computers with AI have no idea what they're talking about. Computers, at a lower level, still need very precise instructions to do any work. This will always be the case as long as we have computers like the ones we have today. This means that no, we'll never be able to use plain English to program, and any attempt to do so will quickly turn into its own programming language.
What will happen, however, is that we will definitely set goals in plain English with AI. And something else will figure out the specific instructions to achieve that goal.
Text like “create a game where the player controls a character in a 2D grid with a top-down view, …” is entirely goal-setting. There’s one instruction (create a game) followed by a million words describing what it is that we want. Programming is about a sequence of instructions.
So we can then come up with text like “create a game … and then make a post on Mastodon about the game with a thumbnail with the main character in it”. Sequence of instructions, right? Yes, and at this point the line between programming and goal-setting blurs.
Even we as programmers have languages that are more about goal-setting than instructing, but we still call them programming languages. That's because we don't really know what programming is; we've just been stuck on a specific view of programming for the past few decades, thinking we had it all figured out. It took the current form of AI to make more people think about these things again, but well, at least it's happening. Progress.
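To make that concrete, here is a minimal sketch in Python using the built-in sqlite3 module (the table and data are made up for the example). The loop is classic instructing: it spells out every step. The SQL query is goal-setting: it states what we want and leaves the query engine to figure out the steps. We call both "programming".

```python
import sqlite3

# Made-up example data: a tiny in-memory table of games and their ratings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE games (title TEXT, rating REAL)")
conn.executemany(
    "INSERT INTO games VALUES (?, ?)",
    [("Asteroids", 4.5), ("Pong", 3.0), ("Tetris", 4.8)],
)

# Instructing: a sequence of steps -- iterate, test, collect.
highly_rated = []
for title, rating in conn.execute("SELECT title, rating FROM games"):
    if rating >= 4.0:
        highly_rated.append(title)

# Goal-setting: state the condition, let the engine decide how to satisfy it.
goal_based = [
    row[0]
    for row in conn.execute("SELECT title FROM games WHERE rating >= 4.0")
]

assert highly_rated == goal_based  # same result, two mindsets
```

The point isn't that SQL is special; it's that we already blur the line between describing a goal and listing instructions, and nobody finds that strange.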
We will either evolve our definition of what programming is to include goal-setting, and create a different term for current-world programming, or we'll go the other way and create a new term for this mix of goal-setting and instructing. I'm not good with names, so I'll let someone else take the credit and figure out a name for it, but I have a bet: we'll end up using a different term for this mix of goal-setting and instructing, because we have decades of content, books, experience, whatever, about "normal programming". Evolving the term "programming" to mean more things would be a significant change, and we all know humans don't like significant changes.
So I’m betting the following sentence won’t age like milk: no, we will never program in plain English, even with AI.
-
As an aside, I think we should've invented a specific language for law a loooong time ago, because the current plain-language abomination used for law text is absolutely ridiculous. Besides, it doesn't even work as intended, given that we still need systems in place to argue about what the law text actually means when people disagree and conflict over it. ↩︎