Category Archives: Technology

The Avalanche Theory of Programming Language Evolution

One of the other things that listening to Bjarne Stroustrup reminded me of is an idea that I’ve had kicking around in my head for quite some time about the way programming languages evolve. I can’t exactly quote Bjarne here, but I think he said something to the effect of, “One of the reasons I give this talk is because people still think of C++ as it existed back in 1986, not as it exists today.” Which reminded me of something I read a long time ago about avalanches and the way that they work.

The interesting thing about avalanches is that, should you ever be unlucky enough to be caught in one, they go through two distinct phases. In the first phase, while the avalanche is making its way down the slope, it behaves almost as if it were a liquid. Everything is moving, and it’s theoretically possible (if you don’t get bashed in the head by a rock or tree or something, and you can tell which way is “up”) to “swim” to the top of the avalanche and sort of float/surf on top of it. And it turns out that it’s pretty important to do this if you happen to be caught in an avalanche. Because once the avalanche hits the bottom of the slope and stops moving, it suddenly transitions to another state: a solid. At this point, if you aren’t at least partially free of the avalanche (i.e. on top of it), you’re totally screwed because you are now basically encased in a block of concrete. Not only will you be totally unable to move, but you’re quickly going to suffocate because there’s no way to get fresh air. Unless someone is close by with a locator and a shovel, you’re basically dead.

Programming languages, as far as I can tell, seem to evolve in a way similar to avalanches. Once a programming language “breaks free”–and, to be honest, most never do–it starts accelerating down the slope. At this point, the design of the language is very fluid as new developers pour into the space. And by “design of the language,” I don’t just mean the language spec but everything that makes up the practical design of a language, from the minute (coding standards, delimiter formatting, identifier casing) to the large-scale (best practices for APIs, componentization, etc.). Everything is shifting and changing and growing and it’s very exciting!

And then… boom! It all stops and freezes into place, just like that. Usually, this happens around maybe the second or third major release of the language, long enough for the initial kinks to get worked out and for things to cool and take shape. And the interesting thing about it is that it’s not so much that the language designers are done, as it is that from that point on the language design effectively becomes fixed in the minds of the public. There are a number of reasons, but I think it has to do with reaching a critical mass of things like books, blog posts, articles, samples, and course materials, and a critical mass of developers who were trained in a particular version of the language. Once enough of that material is out there and enough people have adopted a particular usage of the language, those usages effectively become the “standard” for the language.

And from that point on, I think, the language designers are essentially like those poor souls trapped at the bottom of the avalanche–they’re still alive and kicking (well, for a while, at least) but they increasingly find that they can’t really move anything. They can pump out new features and new ideas, but smaller and smaller slices of the developers using those languages are even aware of those features, much less willing to retrain (and rethink) to take advantage of them.

It makes me seriously wonder about all the work we did in VS 2008 to add things like LINQ to VB and C#. I suspect the .NET development avalanche largely came to rest around the time of VS 2005 (or maybe even VS 2003), and while I think a lot of high-end developers like and use LINQ, I don’t know that it ever penetrated the huge unwashed masses of .NET developers. I’m not saying we shouldn’t have done it–I think, at the very least, it helps push the language design conversation forward and influences the next generation of programming languages as they start their descent down the slopes–but just that I wonder.

And I also have to say that I’m very much a participant in this process. I originally learned C++ all by myself back in the ancient year of 1989 by buying myself Bjarne’s first C++ book and a copy of Zortech C++ for my IBM 386 PC. For a long, long time, C++ was, for me, effectively the C++ that I learned way back when C++ compilers were just front-ends for C compilers. Even with lots of exposure to more modern programming concepts while working on VB and so on, it’s taken me a long time to break the old habits and stretch within the C++ language. And, I have to admit, once I did, it’s really not a bad language. But I suspect I’m part of a somewhat small portion of the C++ market.

Anyway, all it really means is that I expectantly scan the metaphorical slopes, waiting for the large “BANG!” that will herald the descent of a new avalanche and the chance to try and surf it once again…


The Secret to Understanding C++ (and why we teach C++ the wrong way)

A little over a year ago, I asked “How on earth do normal people learn C++?” which reflected some of my frustration as I re-engaged with the language and tried to make sense of what “modern” C++ had become. Over time, I’ve found that as I’ve become more and more familiar with the language again, things have begun to make more sense. Then a couple of days ago I went to a talk by Bjarne Stroustrup (whose name, apparently, I have no hope of ever pronouncing correctly) and the secret of understanding C++ suddenly crystallized in my mind.

I have to say, I found the talk quite interesting, which was a huge accomplishment because: a) I usually don’t like sitting listening to talks under the best of circumstances, and b) he was basically covering the “whys and wherefores” of C++, which is something I’m already fairly familiar with. However, in listening to the designer of C++ talk about his language, I was struck by a realization: the secret to understanding C++ is to think like the machine, not like the programmer.

You see, the fundamental mistake that most teachers make when teaching C++ to human beings is to teach C++ like they teach other programming languages. But with the exception of C, most modern programming languages are designed around hiding as many details of how things actually happen on the machine as possible. They’re designed to allow humans to explain to the computer what they want to do in a way that’s as close to the way humans think (or, at least, how engineers think) as possible. And since C++ superficially looks like some of those languages, teachers just apply the same methodology. Hence, when you learn about classes, teachers tend to spend most of their time on the conceptual level, talking about inheritance and encapsulation and such.

But the way Bjarne talks about C++, it’s clear that everything C++ does is designed while thinking hard about the question: how will this translate to the machine level? This may be a completely obvious point for a language whose main purpose in life is systems programming, but I don’t think I’d ever really grokked how deeply that idea is baked into C++. And once I really looked at things that way, things made a lot more sense. Instead of teaching classes at just the conceptual level, you really need to teach classes at the implementation level for C++ to make sense.

For example, I’ve never been in a programming class that discussed how C++ classes are actually implemented at runtime using vtables and such. Instead, I had to learn all that on my own by implementing a programming language on the Common Language Runtime. The CLR hides a lot of the nitty-gritty of implementing inheritance from the C# and VB programmer, but the language implementer has to understand it at a fairly deep level to make sure they handle cross-language interop correctly. As such, I find myself continually falling back on my CLR experience when looking at C++ features and thinking, “How is this supposed to work?” I can’t imagine how people who haven’t had to confront these kinds of implementation-level details figure it out.
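To make that concrete, here’s a minimal sketch of the kind of thing I mean. This is my own hand-written illustration, not any particular compiler’s (or the CLR’s) actual object layout, but it shows the standard trick: every object carries a hidden pointer to a per-class table of function pointers, and a virtual call becomes an indirect call through that table.

    #include <cstdio>

    // What you'd write in C++:
    //
    //   struct Shape          { double x, y; virtual double area() const; };
    //   struct Circle : Shape { double radius; double area() const override; };
    //
    // Roughly what the compiler turns it into (written out by hand here):

    struct Shape;

    // One function-pointer slot per virtual function.
    struct ShapeVTable {
        double (*area)(const Shape* self);
    };

    // Every object starts with a hidden pointer to its class's vtable,
    // followed by its data members.
    struct Shape {
        const ShapeVTable* vptr;
        double x, y;
    };

    // A derived class lays out the base subobject first, appends its own
    // members, and installs a vtable whose slots point at its overrides.
    struct Circle {
        Shape base;
        double radius;
    };

    double circle_area(const Shape* self) {
        // 'self' points at the Shape sitting at the start of a Circle.
        const Circle* c = reinterpret_cast<const Circle*>(self);
        return 3.14159 * c->radius * c->radius;
    }

    const ShapeVTable circle_vtable = { &circle_area };

    int main() {
        Circle c = { { &circle_vtable, 0.0, 0.0 }, 2.0 };
        Shape* s = &c.base;   // a "base class" pointer to a derived object

        // What the virtual call s->area() compiles down to:
        // load the hidden vptr, index into the table, make an indirect call.
        std::printf("area = %f\n", s->vptr->area(s));
        return 0;
    }

Once you’ve seen that picture, things like why a virtual call costs an extra indirection, or why slicing an object loses its derived behavior, stop being mysterious.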

It makes me wonder if a proper C++ programming course would actually work in the opposite direction of how most classes (that I’ve seen) do it. Instead of starting at the conceptual level, start at the machine level. Here is a machine: CPU, registers, memory. Here’s how basic C++ expressions map to them. Here’s how basic C++ structures map to them. Here’s how you use those to build C++ classes and inheritance. And so on. By the time you got to move semantics vs. copy semantics, people might actually understand what you’re talking about.


Writing a Compiler in Haskell

In the useless memories department, xkcd’s cartoon today took me back:

In my junior year of college, I took the standard programming languages course that goes over fun stuff like how programming languages are put together. The main project for the class, of course, is to build a compiler for a small language. The twist was that the professor for this class happened to be one of the designers of the Haskell language, so you can guess what programming language the compilers had to be written in.

The thing is, this class took place probably in the fall of 1990 or the spring of 1991. According to Wikipedia, Haskell debuted in 1990, so first and foremost the tools we were working with were… uh, primitive at best. I think the Haskell interpreter was written in Common LISP, and basically you took your program, invoked the interpreter on it, went and got yourself a nice cup of tea (so to speak), and then came back to an answer (if you were lucky), an undecipherable error message (if you were only kind of unlucky), or just nothing (most of the time). Definitely honed the skill of “psychic debugging.”

Anyway, with a brand-new language (that, of course, had no manuals or books written about it yet) and an alpha-level (at best) compiler, as I remember it at least half the class never even managed to get something working. I somehow managed to grok enough of Haskell to be able to write a functioning compiler that took our toy language in and produced correct output. I was one of the lucky ones. But here’s the thing…

It was, without a doubt, one of the most beautiful programs that I’ve ever written. Just a real work of art, with the data flowing through the code in one of the most natural ways I’ve ever seen. Just awesome. It’s the one piece of code that I look back on and wish that I still had a copy of.

So I’ve always had a soft spot in my heart for Haskell. Even if nobody else could actually understand what it was doing or why.


Thinking of rewriting your codebase? Don’t.

A friendly word of advice: if you’re thinking of rewriting your codebase… don’t. Just don’t. Please. Really.

Yes, I know, your codebase as it exists today is a steaming pile of crap. Yes, I know that you had no idea what you were doing when you first wrote it. Or that it was written by idiots who have long since moved on. Or that the world has changed and whatever style of development was in vogue at the time is now antiquated. Yes, I know that you’re tired of dealing with the same old architectural limitations over and over again. And that the programming language you wrote it in is no longer fashionable. And that there are lots of new features in the newer versions of the language that would allow you to express things so much more elegantly. Or maybe, god help me, you think that you can write it more modularly this time and that will allow you to more quickly iterate on new features.

Whatever you think, let me make a bold pronouncement: YOU ARE WRONG. WRONG, WRONG, WRONG. FOR THE LOVE OF GOD STOP WHAT YOU ARE DOING RIGHT NOW AND PUT DOWN THE KEYBOARD BEFORE YOU HURT ANYONE.

OK, now that that’s out of my system, let’s get to the natural question: why? Why am I saying this? Because if you ignore my advice and plunge ahead anyway, you’re going to run into what I modestly call:

Vick’s Law of Rewrites: The cost of rewriting any substantial codebase while preserving compatibility is proportional to the time it took to write that codebase in the first place.

In other words, if it took you one year to write your codebase, it’s going to take you on the order of one year to rewrite that codebase. But, of course, we’re not talking about codebases that are only around one year old, are we? No, we aren’t. Instead, people usually start talking about rewrites around the 5-7 year mark. Which means that, as per the law above, it’s going to take on the order of 5-7 years to rewrite that codebase and preserve any semblance of compatibility. Which is not what most people who embark on large rewrites usually think. They think that they can do it in substantially less time than it took in the first place. And they’re always wrong, in my experience.

I first came to this law way back when I worked on Access and the leaders of the project (who, I realize now, were still laughably young, but seemed very old and wise to me at the time) started talking about rewriting Access after we shipped version 2.0 (development time: approx. 4 years total at that point). At the time, I owned the grid control that was the underpinning of the Access data sheet and other controls, and let me tell you–that piece of code was a bit of a beast. Lots of corner cases to make the UI work “just so.” I was talking to one of the leads about this and he dismissed me with a proverbial wave of the hand: “Oh, no, since we understand how that code works now, we can rewrite it in three months.” I think it was at that moment I knew that the rewrite was doomed, although it took three more years for the team to get to that realization.

The bottom line is that the cost of most substantial codebases comes not from the big thinking stuff, but from details, details, details and from bugs, bugs, bugs. And every time you write new code–even if you totally understand what it’s supposed to be doing–you’ve still got the details and bugs to deal with. And, of course, it’s highly unlikely that you do totally understand what it’s supposed to be doing. One of the things that continually humbled me when I worked on Visual Basic was how we’d be discussing some finer point of, say, overload resolution, and someone would point out some rule that I didn’t remember. No way, I’d say, that’s not a rule. Then I’d go check the language specification which I wrote, and there it would be, in black and white. I had totally forgotten it even though I had written it down myself. And the truth is, there are a million things like this that you will inevitably miss when you rewrite, and then you will have to spend time fixing them up. And don’t even get me started on the bugs. Plus, you’re probably rewriting on a new platform (because who wants to write on that old, antiquated platform you were writing on before?), and now you’ve got to relearn all the tricks you knew about how to work with the old platform but which don’t work on the new one.

In my experience, there are three ways around this rule:

  1. You’re working on a small or very young codebase. OK, fine. In that case it’s perfectly OK to rewrite. But that’s not really what I’m talking about here.
  2. Decide that you don’t care so much about preserving compatibility or existing functionality. Downside: Don’t expect your users to be particularly pleased, to put it mildly (see: Visual Basic .NET).
  3. Adopt a refactor rather than rewrite strategy. That is, instead of rewriting the whole codebase at once, take staged steps towards the long-term goal, either rewriting one component at a time, or refactoring one aspect of the codebase at a time. Then stabilizing, fixing problems, etc. Downside: Not as sexy or satisfying as chucking the whole thing out and “starting clean.” Requires actually acquiring a full understanding of your existing codebase (which you probably don’t have). Plus, this will take a lot longer than even rewriting, but at least you always have a compatible, working product at every stage of the game.

Rewriting can do enormous damage to a product, because you typically end up freezing the development of your product until the rewrite is done. Maybe at best you leave behind a skeleton crew to pump out a few minor features while the bulk of the team (esp. your star coders) works on the fun stuff. Regardless, this means your product stagnates while the rest of the world moves on. And if, god forbid, the rewrite fails, then you run the risk of your team moving on and leaving the skeleton crew as the development team. I’ve seen this happen to products that, arguably, never really recovered from it.

So please, learn to live with that crappy codebase you’ve got and tend to it wisely. The product you save may be your own.


The Use/Build Fallacy

Working in the language space, especially in language design, you frequently encounter people who fall victim to what I call the “Use/Build Fallacy.” It goes something like this:

Because I know how to use something, I know how to build it as well.

This fallacy is best illustrated by a story I heard from a friend who’s a teacher (another profession that frequently has to deal with this). She was teaching middle school when teacher conferences rolled around. Talking to the father of one of her students, she explained to him that his daughter was having a lot of trouble in English class and that, based on her observations of how hard the daughter was working, she was pretty sure that the daughter had some sort of language learning disability. She therefore strongly recommended that he take his daughter to an expert to get tested, and that she be tutored by someone trained to deal with the specific kind of learning disability. The father balked, mainly because he didn’t like the idea that all this would cost him money. “Can’t you just help her more in class?” he asked. My friend explained that she was helping her all she could, but she wasn’t an expert in diagnosing learning disabilities and his daughter really needed to see someone who had the appropriate training.

After a bit of back-and-forth, the father finally got exasperated and said, “Fine, I’ll just tutor her myself! I mean, how hard could it be? I went to school!” My friend then shot back, “Look, you’re a general contractor, right? What would you think if I came to you and said, ‘I don’t need you to build my house—I’ve lived in a house before, so how hard could it be to build one myself?’” This, finally, stumped him. I’m not sure whether he actually got his daughter the help she needed, but the story stuck with me because my friend’s response is the perfect distillation of the Use/Build Fallacy.

Note that I’m not saying that just because you’re not an expert on something you can’t have an opinion. I may not know how to build a house, but that doesn’t mean I have nothing to say to the contractors if I decide to do some renovations on my house. Not falling prey to the fallacy, though, means that I always keep a healthy respect for the expert in a field—as long as they truly seem to know what they’re talking about. (I hear this from friends who are architects all the time—they get hired by someone to build or renovate a house for them, and then their client spends all their time endlessly arguing with everything they do. Why bother to hire an expert if you think you already know how to do it yourself?)

I try to remember this myself every time I encounter some aspect of some programming language that I don’t like. Right now, I’m neck-deep in C++ code and it’s tempting to spend all my time kvetching about how horrible a job Bjarne has done over the years. And then I try to remember—even as someone who’s actually built a language—that this stuff is hard. A language of any complexity has a huge number of moving parts, all of which interact with each other in an unpredictable manner. Historical choices can come back to bite you in all sorts of unexpected ways. Oftentimes all you have are a bunch of imperfect choices, and you have to simply pick the least bad of them all. And then you get to sit there and listen to everyone on the sidelines complain about how horrible a job you’ve done and how they could do it so much better than you because, hey, they’ve used a programming language before.

So I try to temper my complaints with a little humility, and remember how much different building is from using.

Black Hole Projects

OK, so I may have reset my blog, but there were some interesting posts that probably shouldn’t disappear totally down the memory hole. This is one of them, which I am rescuing from back in 2004 because of its continuing relevance. It seems that six months can’t go by without something I hear making me think of this. Edited from the original for clarity and to bring it up to date.

Many, many years ago, Steve Maine took the opportunity to reminisce about a project at Microsoft that was being worked on while he was an intern. He says:

[ When I was an intern… ] there was this mythical project codenamed “Netdocs”, and it was a black hole into which entire teams disappeared. I had several intern friends who got transferred to the Netdocs team and were never heard from again. Everyone knew that Netdocs was huge and that there were a ton of people working on it, but nobody had any idea what the project actually did.

I also knew a few people who invested quite a few years of their lives into “Netdocs” and it got me thinking about the phenomenon of “black hole projects” at Microsoft (and elsewhere, I’ll wager). There was one I was very close to very early in my career that I managed to avoid, many others that I just watched from afar, and one or two that I got dragged into despite my best intentions. I can’t really talk about most of them since most never saw the light of day, but it did get me thinking about the peculiarly immutable traits of a black hole project. They seem to be:

  • They must have absurdly grandiose goals. Something like “fundamentally reimagine the way that people work with computers.” Nobody, including the people who originate the goals, has a clear idea what the goals actually mean.
  • They must involve throwing out some large existing codebase and rewriting everything from scratch, “the right way, this time.”
  • They must have completely unrealistic deadlines. Often this is because they believe that they can rewrite the original codebase in much, much less time than it took to write that codebase in the first place.
  • They must have completely unrealistic beliefs about compatibility. Usually this takes the form of believing you can rewrite a huge codebase and preserve all of its little quirks without a massive amount of extra effort.
  • They are always “six months” from major deadlines that never seem to arrive. Or, if they do arrive, another milestone is added on to the end of the project to compensate.
  • They must consume huge amounts of resources, sucking the lifeblood out of one or more established products that make significant amounts of money or have significant market share.
  • They must take over any group that does anything that relates to their absurdly broad goals, especially if that group is small, focused, has modest goals, and actually has a hope of shipping in a reasonable timeframe.
  • They must be prominently featured as demos in public settings such as company meetings, all-hands, conferences, etc. to the point where people groan “Oh, god, not another demo of this thing. When is it ever going to ship?”
  • They usually are prominently talked up publicly by high level executives for years before dying a quiet death.
  • They usually involve “componentizing” some monolithic application or system. This means that not only are you rewriting a huge amount of code, you’re also splitting it up across one or more teams that have to all seamlessly work together.
  • As a result of the previous point, they also usually involve absolutely massive integration problems as different teams try madly to get their components working with each other.
  • They usually involve rewriting the application or system on top of brand-new technology that has not been proven at a large scale yet. As such, they get to flush out all the scalability problems with the new technology.
  • They are usually led by one or more Captain Ahabs, madly pursuing the white whale with absolute conviction, while the deckhands stand around saying “Gee, that whale looks awfully big. I’m not sure we can really take him down.”
  • Finally, 90% of the time, they must fail and die a flaming death, taking down other products with them (or at least severely damaging them). If they do ship, they must have taken at least 4-5 years to ship and be at least 2 years overdue.

It’s kind of frightening how easy it is to come up with this list – it all kind of just poured out. Looking back over 19 years at Microsoft, I’m also kind of frightened at how many projects this describes. Including some projects that are ongoing at the moment…


Seven Rules for Beginning Programmers

A little while ago Phil Wadler posted “VS Naipaul’s Rules for Beginners,” listing the famous author’s seven rules for beginning writers. Upon reading them it occurred to me that, with a little adaptation, they could equally apply to beginning programmers. So, with apologies to Mr. Naipaul, here are my “Rules for Beginners:”

  1. Do not write long procedures. A procedure should not have more than ten or twelve lines.
  2. Each procedure should have a clear purpose. It should not overlap in purpose with the procedures that went before or come after. A good program is a series of clear, non-overlapping procedures.
  3. Do not use fancy language features. If you’re using something more than variable declarations, procedure calls, control flow statements and arithmetic operators, there is something wrong. The use of simple language features compels you to think about what you are writing. Even difficult algorithms can be broken down into simple language features.
  4. Never use language features whose meaning you are not sure of. If you break this rule you should look for other work.
  5. The beginner should avoid using copy and paste, except when copying code from one program they have written to a new one they are writing. Use as few files as possible.
  6. Avoid the abstract. Always go for the concrete. [Ed. note: This one applies unchanged.]
  7. Every day, for six months at least, practice programming in this way. Short statements; short, clear, concrete procedures. It may be awkward, but it’s training you in the use of a programming language. It may even be getting rid of the bad programming language habits you picked up at the university. You may go beyond these rules after you have thoroughly understood and mastered them.
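Just to illustrate what the first three rules look like in practice, here’s a toy example of my own (not from Naipaul or Wadler): a small task written as short, non-overlapping procedures using nothing fancier than declarations, calls, control flow, and arithmetic.

    #include <cstdio>

    // Each procedure is short and has exactly one clear purpose.

    double sum(const double values[], int count) {
        double total = 0.0;
        for (int i = 0; i < count; i++) {
            total = total + values[i];
        }
        return total;
    }

    double average(const double values[], int count) {
        if (count == 0) {
            return 0.0;
        }
        return sum(values, count) / count;
    }

    int countAbove(const double values[], int count, double threshold) {
        int above = 0;
        for (int i = 0; i < count; i++) {
            if (values[i] > threshold) {
                above = above + 1;
            }
        }
        return above;
    }

    int main() {
        double scores[] = { 71.0, 85.0, 62.0, 90.0, 78.0 };
        int count = 5;
        double mean = average(scores, count);
        std::printf("average: %.1f, above average: %d\n",
                    mean, countAbove(scores, count, mean));
        return 0;
    }

It’s boring code, and that’s exactly the point: every procedure can be read and understood in a few seconds.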


Murphy’s Computer Law

A long time ago, my family took a trip to Expo ’86 in Vancouver, with stop-offs in San Francisco and Los Angeles. In LA, we went on the Universal studio tour, something which I basically have no memory of. I did get a memento, though–a poster entitled “Murphy’s Computer Law” with a bunch of humorous computing “laws” on it. This poster went up in my room, accompanied me to college and has been in most of my offices at Microsoft. However, a few years ago, a corner ripped off in a move. Then while it was sitting around waiting to be repaired, it got a bit stained. And then I realized just how dated and ratty the thing looked. So, I figured it was time to retire it. However, I would like to hang on to the “laws” since some of them are still quite pertinent, even if some are quite outdated. So here they are, on my “permanent record:”

Murphy’s Computer Law:

  1. Murphy never would have used one.
  2. Murphy would have loved them.

Bove’s Theorem: The remaining work to finish in order to reach your goal increases as the deadline approaches.

Brooks’ Law: Adding manpower to a late software project makes it later.

Canada Bill Jones’ Motto: It’s morally wrong to allow naïve end users to keep their money.

Cann’s Axiom: When all else fails, read the instructions.

Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.

Deadline-Dan’s Demo Demonstration: The higher the “higher-ups” are who’ve come to see your demo, the lower your chances are of giving a successful one.

Deadline-Dan’s Demon: Every task takes twice as long as you think it will take. If you double the time you think it will take, it will actually take four times as long.

Demian’s Observation: There is always one item on the screen menu that is mislabeled and should read “ABANDON HOPE ALL YE WHO ENTER HERE.”

Dr. Caligari’s Come-back: A bad sector disk error occurs only after you’ve done several hours of work without performing a backup.

Estridge’s Law: No matter how large and standardized the marketplace is, IBM can redefine it. [ed: later “Microsoft,” now “Apple,” I guess]

Finagle’s Rules:

  1. To study an application best, understand it thoroughly before you start.
  2. Always keep a record of data. It indicates you’ve been working.
  3. Always draw your curves, then plot the reading.
  4. In case of doubt, make it sound convincing.
  5. Program results should always be reproducible. They should all fail in the same way.
  6. Do not believe in miracles. Rely on them.

Franklin’s Rule: Blessed is the end user who expects nothing, for he/she will not be disappointed.

Gilb’s Laws of Unreliability:

  1. At the source of every error which is blamed on the computer you will find at least two human errors, including the error of blaming it on the computer.
  2. Any system which depends on human reliability is unreliable.
  3. Undetectable errors are infinite in variety, in contrast to detectable errors, which by definition are limited.
  4. Investment in reliability will increase until it exceeds the probable cost of errors, or until someone insists on getting some useful work done.

Gummidge’s Law: The amount of expertise varies in inverse proportion to the number of statements understood by the general public.

Harp’s Corollary to Estridge’s Law: Your “IBM PC-compatible” computer grows more incompatible with every passing moment.

Heller’s Law: The first myth of management is that it exists.

Hinds’ Law of Computer Programming:

  1. Any given program, when running, is obsolete.
  2. If a program is useful, it will have to be changed.
  3. If a program is useless, it will have to be documented.
  4. Any given program will expand to fill all available memory.
  5. The value of a program is proportional to the weight of its output.
  6. Program complexity grows until it exceeds the capability of the programmer who must maintain it.
  7. Make it possible for programmers to write programs in English, and you will find that programmers cannot write English.

Hoare’s Law of Large Programs: Inside every large program is a small program struggling to get out.

The Last One’s Law of Program Generators: A program generator creates programs that are more “buggy” than the program generator.

Meskimen’s Law: There’s never time to do it right, but always time to do it over.

Murphy’s Fourth Law: If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong.

Murphy’s Law of Thermodynamics: Things get worse under pressure.

Ninety-Ninety Rule of Project Schedules: The first ninety percent of the task takes ninety percent of the time, and the last ten percent takes the other ninety percent. [ed: words to live by]

Nixon’s Theorem: The man who can smile when things go wrong has thought of someone he can blame it on.

Nolan’s Placebo: An ounce of image is worth a pound of performance.

Osborn’s Law: Variables won’t, constants aren’t.

O’Toole’s Commentary on Murphy’s Law: Murphy was an optimist.

Peer’s Law: The solution to a problem changes the problem.

Rhodes’ Corollary to Hoare’s Law: Inside every complex and unworkable program is a useful routine struggling to be free.

Robert E. Lee’s Truce: Judgment comes from experience; experience comes from poor judgment.

Sattinger’s Law: It works better if you plug it in.

Shaw’s Principle: Build a system that even a fool can use, and only a fool will want to use it. [ed: also known as “Bob’s Law”]

SNAFU Equations:

  1. Given any problem containing N equations, there will be N+1 unknowns.
  2. An object or bit of information most needed will be least available.
  3. Any device requiring service or adjustment will be least accessible.
  4. Interchangeable devices won’t.
  5. In any human endeavor, once you have exhausted all possibilities and fail, there will be one solution, simple and obvious, highly visible to everyone else.
  6. Badness comes in waves.

Thoreau’s Theories of Adaptation:

  1. After months of training, when you finally understand all of a program’s commands, a revised version of the program arrives with an all-new command structure. [ed: also known as the “Office Principle”]
  2. After designing a useful routine that gets around a familiar “bug” in the system, the system is revised, the “bug” is taken away, and you’re left with a useless routine.
  3. Efforts in improving a program’s “user friendliness” invariably lead to work in improving users’ “computer literacy.”
  4. That’s not a “bug”, that’s a feature!

Weinberg’s Corollary: An expert is a person who avoids the small errors while sweeping on to the grand fallacy.

Weinberg’s Law: If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.

Zymurgy’s First Law of Evolving System Dynamics: Once you open a can of worms, the only way to recan them is to use a larger can.

Wood’s Axiom: As soon as a still-to-be-finished computer task becomes a life-or-death situation, the power fails.

The Five Levels of Incompetence

In my “Learning and Teaching” post last week, I talked about the different stages of learning, from “unconscious incompetence” up to “unconscious competence.” It occurred to me today, though, that there really are different levels within those levels and, in particular, there are some very distinct levels of incompetence that I’ve encountered in my nearly (yikes!) two decades of working in the industry. The reason why the levels of incompetence are somewhat more important than the various levels of competence, it seems to me, is that incompetent people are often a very real threat to the stability of teams that they work in, while competent people usually aren’t.

The five levels of incompetence are, in increasing order of danger:


Level 1: The N00b.

There’s not much to say about the n00b since, let’s face it, we’ve all been there. Hiring n00bs is unavoidable in most situations. Being a n00b is a basic fact of life.

Danger: Low, assuming that they are properly sandboxed or are experienced enough to sandbox themselves. (Otherwise, they’re likely to hit “launch” instead of “lunch” and then you’re really in trouble.)


Level 2: Out of their depth.

Typically, this is someone who’s not really incompetent in general but who’s just been pushed up to a level of responsibility beyond their capabilities (a.k.a. a victim of the Peter Principle). I’ve seen this happen most often in situations where a senior, experienced person leaves the team and the leadership decides that they have to put someone equally senior in their place regardless of whether that person can, you know, actually do the job. So they pluck someone who’s senior but not as experienced and plop them into the departing person’s chair.

Danger: Moderate. Because the person isn’t completely incompetent, they tend to be able to give the appearance of competence and avoid leading the team totally off the rails but usually end up leading the team in circles. So the team doesn’t make any forward progress and people eventually wise up and leave.


Level 3: Dumb and Dumber.

Now we start getting into the fun levels. This person is just plain incompetent, someone placed into a totally inappropriate position for them (which, for the most part, is going to be any position). I’ve actually encountered very few instances of this in my career, and it usually happens when someone transfers between two wildly different kinds of jobs. That tends to mask, for a little while anyway, their complete lack of ability to actually do anything under the guise of just being a n00b.

Danger: High to moderate. It really depends on how fast everyone figures out just how incompetent the person is–usually, truly incompetent people get shunted aside as soon as everyone figures out what’s going on. If that takes too long, competent people tend to get pissed off and leave.


Level 4: Bozo.

The difference between a Bozo and a Dumb and Dumber is that a Bozo is a Dumb and Dumber who thinks he is competent. Bozos tend to believe that they are as good or better than everyone else, deserve special treatment, and that their genius is being under-rewarded. And they ignore the fact that they have absolutely no idea what they are doing.

My best Bozo story is a Program Manager that I worked with a long time ago. I implemented a feature he specified. He entered a bug saying that the feature didn’t work correctly. I resolved the bug “by design” after I verified that the feature worked exactly the way he specified it. He then came to my office and started to argue with me that I shouldn’t have resolved the bug “by design,” because the feature didn’t work correctly. Finally, I pulled out a copy of his specification and pointed at the paragraph that said exactly how the feature should work. He then got totally exasperated with me and started ranting that I was supposed to implement the feature “the way he wanted the feature to work, not the way he specified it!”

Danger: High. Bozos are always on the lookout to get ahead (in line with their great abilities), so they often manage to worm their way into management positions. They then tend to lead the team the way Mr. Toad drives motorcars: careening all over the road until they finally end up in the ditch. Bozos are adept at bringing down even the most experienced team in a surprisingly short amount of time.


Level 5: Evil Genius.

I was debating whether this is even a level of incompetence at all, because in many ways Evil Geniuses are not incompetent people. Quite the contrary, they are often quite adept at many things, including manipulation, spin, intimidation, self-aggrandizement, and sucking up. But I think in a deep sense, Evil Geniuses are just a more highly evolved form of Bozos because the end result tends to be the same: the team blows up in a very spectacular way. However, while a Bozo usually does this in a totally oblivious way (“What happened?”), it’s often all part of an Evil Genius’s plan to use the force of the explosion to propel them ever higher. These are the kind of guys who end up running major corporations and then running them totally into the ground. And then jumping ship to run an even bigger corporation. But, at the core, I think that Evil Geniuses act this way because they couldn’t actually figure out how to do things in an above-board manner. Thankfully, I’ve met very few Evil Geniuses in my day. And those I have met, I’ve been able to largely avoid.

There are only three types of programmers in the world…

…and they are:

  1. Programmers who want to write an operating system
  2. Programmers who want to write a compiler
  3. Programmers who want to write a database

It’s not that every programmer ever actually works on one of these, just that every programmer seems to dream of doing one of these things. It’s the primary reason why things like Linux exist. Yes, open source, blah, blah, blah, OS choices, blah, blah, blah, evil Microsoft, blah, blah, blah. But I would bet my bottom dollar that 9 out of 10 of the people donating their valuable time to the Linux project do so not because they want an alternative to Windows but because they always dreamed of being OS hackers. It’s also why there are so many damn programming languages out there: all the people who sit around dreaming of being, I don’t know, James Gosling or something.

(I think with the advent of the Internet, it’s likely that there’s now a fourth kind of programmer who wants to write websites, but I’m not totally sure about that yet.)

The interesting thing about these categories is that the Venn diagram tends, in my experience, to be pretty distinct–most “data” guys aren’t also “language” guys, and most “language” guys aren’t also “OS” guys, and so on. My theory is that it’s like the parable of the blind men and the elephant: although we all grapple with basically the same set of problems, each kind of programmer grapples with a different aspect of it.

The blind men and the elephant

I say all this because although I started out working in databases, it’s clear to me that I’ve always been a “language” guy. In college, I did so-so in the OS course and never touched a database course (I’m not even sure they were offered), but my compiler course netted me a special letter of commendation from the professor (the only one I ever got). Anyway, now I’m back in the “data” world as an even more confirmed “language” guy, and the most interesting thing is how many of the problems are the same, but the way they’re conceptualized, handled, or even talked about is different from what I was used to when working on programming languages. It’s kind of… refreshing to see things in a different light. More on that soon.