Programming, like writing, gets harder as you get better.

Writing’s hard. I don’t think anyone who has done it and taken it seriously, whether in creative fiction or in precise, technical nonfiction, disagrees with this claim. What makes it either very difficult or endlessly rewarding, depending on one’s perspective, is that it remains challenging as one progresses, because one’s increasing competency is paced equally with, if not outpaced by, the escalating standards to which one holds one’s own work. Many of the millions of wannabe novelists out there believe that the reason they haven’t written (or even started) “their novel” is that the first novel is just too damn hard. Most writers would say that, although it’s the hardest to sell, the first novel is actually the easiest to write. No reputation is at risk, the first novel is expected to be mediocre, and, most importantly, one’s intense self-criticism hasn’t yet set in, at least not in full force. From what I’m told by experienced authors, writing never gets easier, even for the immensely talented and skilled. One notable exception exists: those who write only to make money and who have consciously decided to make writing purely a matter of economic optimization. As far as I’m concerned, such people don’t qualify as writers, but that’s another topic for another time.

Programming is similar. Performing specific tasks obviously becomes easier as a programmer’s competency grows, but good programmers don’t want only to “solve the problem”. They want to solve the problem correctly, which entails writing code that is generally useful, extensible, and of high aesthetic quality. The code should not be needlessly slow, complicated, or brittle, even if those concerns are irrelevant to the immediate use case (e.g. an inefficient algorithm may be acceptable on small data sets, but it is intolerable in code that might be expected to scale to larger inputs). “Kludges” (inelegant solutions) and “anti-patterns” (such as busy-waiting to implement an event loop) that may be acceptable to a novice programmer just trying to get a program working become embarrassments to intermediate programmers and intolerable to experts.
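To make the busy-waiting example concrete, here is a minimal sketch in OCaml (my own illustration, not code from this essay), contrasting a polling loop against a blocking wait on standard input. It assumes only OCaml’s standard unix library:

```ocaml
(* Sketch of the busy-waiting anti-pattern, using stdin as the event
   source. Hypothetical illustration; build with e.g.
   `ocamlfind ocamlopt -package unix -linkpkg wait.ml` (file name is
   arbitrary). *)

(* Anti-pattern: poll in a tight loop, burning a full CPU core even
   while nothing is happening. A zero timeout makes select return
   immediately instead of waiting. *)
let busy_wait_for_input () : unit =
  let ready () =
    match Unix.select [ Unix.stdin ] [] [] 0.0 with
    | [], _, _ -> false
    | _ -> true
  in
  while not (ready ()) do
    () (* spin *)
  done

(* The fix: a negative timeout tells select to block, so the OS
   suspends the process until the descriptor is actually readable. *)
let block_for_input () : unit =
  ignore (Unix.select [ Unix.stdin ] [] [] (-1.0))

let () =
  block_for_input ();
  print_endline ("read: " ^ input_line stdin)
```

The blocking version does the same job with none of the wasted cycles; real event loops generalize the idea with mechanisms like epoll or kqueue, or with a library such as Lwt.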

Definitions of good programming often diverge as well. In the 1960s, clever, fast, self-modifying assembly code might be considered “good”, because it solved hard problems at record speed, even though it would be opaque to anyone required to maintain it in the future. The scope of an average program was smaller than it is today, and a large project written in such a style would likely be discarded if major revisions were required from anyone other than the author of the original code. In the 2010s, such unmaintainable code would hardly be considered good, even if it were 20% faster than a more maintainable alternative. Then again, the same code may be perfectly acceptable if generated by a compiler, since humans rarely read the machine or assembly code their compilers emit.

Though there is no strong consensus on what constitutes good code, it’s a matter on which many programmers are immensely opinionated. It has to work, obviously, but that’s setting a low bar. Even the worst programmers can make software “work” according to a minimal specification, given enough time and allowance for inelegance; but the code of a bad or even mediocre programmer is often so unpleasant to read, use, and maintain that it inspires a gnawing and universal desire to throw it all out and start anew.

For my part, I would say that a good programmer must be a good teacher. The code and its documentation should teach how the code is to be used and how each component works. Ideally, programmers would write software in such a manner that the function of every line of code is self-evident, thanks both to the innate clarity of the language (a virtue of, say, OCaml) and to the quality of the documentation. In practice, most managers will never budget enough time to make this a reality, but it’s what software engineers should aim for when they can.
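As a toy illustration of what “self-evident” might look like (an invented example, not one from the essay), consider a small OCaml function whose doc comment, names, and types state its full contract, so a reader needs no outside explanation:

```ocaml
(* Hypothetical example of self-documenting code: the contract lives in
   the doc comment and the type signature, not in a separate manual. *)

(** [mean xs] is the arithmetic mean of [xs].
    Raises [Invalid_argument] if [xs] is empty. *)
let mean (xs : float list) : float =
  match xs with
  | [] -> invalid_arg "mean: empty list"
  | _ -> List.fold_left ( +. ) 0.0 xs /. float_of_int (List.length xs)

let () = Printf.printf "%g\n" (mean [ 1.0; 2.0; 3.0 ]) (* prints 2 *)
```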

Here we venture into the thicket of aesthetics, where every rule has exceptions that must be learned through practice, and where “known unknowns” (matters on which one knows of one’s lack of knowledge) are only a fraction of total unknowns. (An example of a “known unknown”, for me, would be the German language. I know that it exists, along with a few stray words and grammatical principles, but I can’t read or write it. An unknown unknown would be any of the six thousand extant languages that I’ve never heard of.) And that is what makes writing, and software engineering as well, increase in difficulty as one’s skill increases. As one’s knowledge grows, one’s awareness of the gaps in that knowledge grows at a faster rate, so one’s perceived “knowledge ratio” decreases even as one’s actual ignorance wanes. A problem with one known solution is easy to solve; when there are ten known solutions, and one knows there might be a hundred more worth studying, selecting the best becomes very difficult.

A genre of essay I sometimes find myself writing is the “problem essay”: the first act describes an undesirable or inefficient situation, the second act explores its logical consequences and avenues of approach, and the final act proposes solutions. This is a very common pattern in writing. Here would be the point at which I propose a “solution”, but frankly, I don’t have one. To tell the truth, I don’t know whether the counterintuitive tendency of a craft’s difficulty to increase with one’s improving skill and knowledge is a “problem” in the first place.

Actually, as a game-design snob who enjoys a well-structured challenge, I rather like this aspect of disciplines like writing and computer programming. It keeps things interesting.