Amateur and Professional Software
It’s common, in software engineering circles, to hear professional developers deriding code written by academics. Some of that criticism is fair: academic software is almost uniformly plagued by a lack of documentation, tests, build scripts, and thoughtfully considered function names. By the standards of industrial code it’s usually thoroughly sub-par. Anyone who has worked in both academia and industry tends to cringe when thinking back on the code they wrote as a graduate student (I certainly do!).
There are several reasons for this discrepancy. A common observation is that grad students learn to code while supervised by professors, whose programming experience mostly comes from their time as grad students, ad infinitum. That’s not entirely true—most professors and graduate students have had at least a bit of industry exposure—but it’s not wrong, either.1 Academic programming is a bit inbred, and academia as a whole would probably be better off with more exposure to practices in industry.
That’s an ancillary issue, though. The bigger problem is simply that academic and industrial software have different goals.
The surest guarantee about a piece of industrial software is that it’ll be changed, and its construction revolves around this fact. Industrial software is written to solve a problem, but the vast majority of its life will be in maintenance, where it’ll be frequently inspected and modified by a rotating cast of engineers. Almost all of the properties that we think of as defining “good” code in industry—test coverage, small and focused modules, consistent and thoughtful naming, reproducible builds, CI/CD, and so on—are valuable because they make change easier.
And this is where, as professional developers, we embarrass ourselves by totally misunderstanding academic software. Because academic software doesn’t have these “good” properties, we assume that the people who built it are amateurs who simply didn’t know any better. And while there may, occasionally, be some truth to that, the real issue is that academic software is mostly disposable.
As a professional engineer, I want my code to be useful and maintainable for a long time. That’s the definition of success, and generally indicates that I’ve provided some substantial business value.
But as an academic I want to spread my ideas, usually through publishing papers. That often requires a collection of tools to gather and analyze data, or a proof of concept for a particular algorithm, but the code isn’t the product. The paper is.
I’m on vacation at the moment, and I was sitting on a beach this afternoon reading The Psychology of Computer Programming, a brilliant book written by Gerald Weinberg in 1971.2
In Chapter 7, “Variations in the Programming Task,” he distinguishes between the problems of the professional programmer and those of the amateur (which, as he uses the term, describes anyone whose main task isn’t producing software, including academic computer scientists—there’s no negative connotation intended). I’ll quote his judgment at length:
There is an asymmetry in the relationship between amateur and professional programmers, because the one cannot appreciate the complexities that the other faces. Nonetheless, the professional often commits the error of deriding the work of the amateur for not being sufficiently professional; and this error is much less excusable than that made by the amateur in understanding the distance between himself and the professional. The professional, if he is truly professional, should know better, whereas the amateur cannot. The amateur may fail to program an elaborate error-handling routine because he doesn’t know how or doesn’t even know what an error-handling routine is. But then, why should he know, if he doesn’t need one? Isn’t it much worse for the professional to insist on treating a tiny one-time program for personal use as if it were an operating system intended to be used by thousands of people for five or ten years?
When we treat academic software—which, if it has come to our attention at all, has almost certainly succeeded at its intended purpose—as if it were a failure because it doesn’t satisfy a thoroughly unrelated set of criteria, we’re the ones in the wrong!
We should measure work by its ability to satisfy its authors’ and its users’ requirements, not ours.
This doesn’t just apply to academic software, by the way. The software involved in working with political campaigns, for example, has a lot of the same properties as academic software. Software written for a particular campaign is often effectively discarded after the election… if your candidate won, and the code’s being decommissioned, why bother with documentation?
Different criteria for success may even apply to the same code over time. A startup may intentionally choose to take on a lot of technical debt by quickly cobbling together an unmaintainable MVP as a proof of concept to secure a seed round, then find that they need to nurse it along and grow it into a “real” product that’ll be maintained and extended for a decade or more.
And, heck, I’ll do this in my own work, sometimes in the same day!3 My standards for my professional work are relatively high, since I’m thinking about long-term maintainability and how easily a new engineer could understand how my code works. My quality standards for my dotfiles, though, or for little utility scripts? Well, that’s a different story.
Before we decide whether programs (or programmers!) are “good,” we need to take into account what they’re good for. A system is only good inasmuch as it’s fit for a particular purpose.
1. Let’s not pretend that a few summer internships, for example, constitute “industry experience.”
2. Yes, yes, that’s my beach reading, apparently. But that’s my jam!
3. Would we refer to this transition as code-switching? :D