Category Archives: Technology

The Avalanche Theory of Programming Language Evolution

One of the other things that listening to Bjarne Stroustrup reminded me of is an idea that I’ve had kicking around in my head for quite some time about the way programming languages evolve. I can’t exactly quote Bjarne here, but I think he said something to the effect of, “One of the reasons I give this talk is because people still think of C++ as it existed back in 1986, not as it exists today.” Which reminded me of something I read a long time ago about avalanches and the way that they work.

The interesting thing about avalanches is that, should you ever be unlucky enough to be caught in one, they go through two distinct phases. In the first phase, while the avalanche is making its way down the slope, it behaves almost as if it were a liquid. Everything is moving, and it’s theoretically possible (if you don’t get bashed in the head by a rock or tree or something, and you can tell which way is “up”) to “swim” to the top of the avalanche and sort of float/surf on top of it. And it turns out that it’s pretty important to do this if you happen to be caught in an avalanche. Because once the avalanche hits the bottom of the slope and stops moving, it suddenly transitions to another state: a solid. At this point, if you aren’t at least partially free of the avalanche (i.e. on top of it), you’re totally screwed because you are now basically encased in a block of concrete. Not only will you be totally unable to move, but you’re quickly going to suffocate because there’s no way to get fresh air. Unless someone is close by with a locator and a shovel, you’re basically dead.

Programming languages, as far as I can tell, seem to evolve in a way similar to avalanches. Once a programming language “breaks free”–and, to be honest, most never do–it starts accelerating down the slope. At this point, the design of the language is very fluid as new developers pour into the space. And by “design of the language,” I don’t just mean the language spec but everything that makes up the practical design of a language, from the minute (coding standards, delimiter formatting, identifier casing) to the large-scale (best practices for APIs, componentization, etc.). Everything is shifting and changing and growing and it’s very exciting!

And then… boom! It all stops and freezes into place, just like that. Usually this happens around the second or third major release of the language, long enough for the initial kinks to get worked out and for things to cool and take shape. And the interesting thing about it is that it's not so much that the language designers are done as it is that, from that point on, the language design effectively becomes fixed in the minds of the public. There are a number of reasons, but I think it has to do with reaching a critical mass of things like books, blog posts, articles, samples, course materials, etc., and a critical mass of developers who were trained in a particular version of the language. Once enough of that stuff is out there and enough people have adopted a particular usage of the language, they effectively become the "standard" for the language.

And from that point on, I think, the language designer is essentially like those poor souls trapped at the bottom of the avalanche–they’re still alive and kicking (well, for a while, at least) but they increasingly find that they can’t really move anything. They can pump out new features and new ideas, but smaller and smaller slices of the developers in those languages are even aware of those features, much less willing to retrain (and rethink) to take advantage of them.

It makes me seriously wonder about all the work we did in VS 2008 to add things like LINQ to VB and C#. I suspect the .NET development avalanche largely came to rest around the time of VS 2005 (or maybe even VS 2003), and while I think a lot of high-end developers like and use LINQ, I don’t know that it ever penetrated the huge unwashed masses of .NET developers. I’m not saying we shouldn’t have done it–I think at very least, it helps push the language design conversation forward and influences the next generation of programming languages as they start their descent down the slopes–but just that I wonder.

And I also have to say that I'm very much a participant in this process. I originally learned C++ all by myself back in the ancient year of 1989 by buying myself Bjarne's first C++ book and a copy of Zortech C++ for my IBM 386 PC. For a long, long time, the C++ I used was effectively the C++ that I learned way back when C++ compilers were just front ends for C compilers. Even with lots of exposure to more modern programming concepts while working on VB and so on, it's taken me a long time to break the old habits and stretch within the C++ language. And, I have to admit, it's really not a bad language once I did it. But I suspect I'm part of a somewhat small portion of the C++ market.

Anyway, all it really means is that I expectantly scan the metaphorical slopes, waiting for the large “BANG!” that will herald the descent of a new avalanche and the chance to try and surf it once again…

You should also follow me on Twitter here.

The Secret to Understanding C++ (and why we teach C++ the wrong way)

A little over a year ago, I asked “How on earth do normal people learn C++?” which reflected some of my frustration as I re-engaged with the language and tried to make sense of what “modern” C++ had become. Over time, I’ve found that as I’ve become more and more familiar with the language again, things have begun to make more sense. Then a couple of days ago I went to a talk by Bjarne Stroustrup (whose name, apparently, I have no hope of ever pronouncing correctly) and the secret of understanding C++ suddenly crystallized in my mind.

I have to say, I found the talk quite interesting which was a huge accomplishment because: a) I usually don’t like sitting listening to talks under the best of circumstances, and b) he was basically covering the “whys and wherefores” of C++, which is something I’m already fairly familiar with. However, in listening to the designer of C++ talk about his language, I was struck by a realization: the secret to understanding C++ is to think like the machine, not like the programmer.

You see, the fundamental mistake that most teachers make when teaching C++ to human beings is to teach C++ like they teach other programming languages. But with the exception of C, most modern programming languages are designed around hiding as many details of how things actually happen on the machine as possible. They’re designed to allow humans to explain to the computer what they want to do in a way that’s as close to the way humans think (or, at least, how engineers think) as possible. And since C++ superficially looks like some of those languages, teachers just apply the same methodology. Hence, when you learn about classes, teachers tend to spend most of their time on the conceptual level, talking about inheritance and encapsulation and such.

But the way Bjarne talks about C++, it's clear that everything C++ does is designed while thinking hard about the question: how will this translate to the machine level? This may be a completely obvious point for a language whose main purpose in life is systems programming, but I don't think I'd ever really grokked how deeply that idea is baked into C++. And once I really looked at things that way, everything made a lot more sense. Instead of teaching classes at just the conceptual level, you really need to teach classes at the implementation level for C++ to make sense.

For example, I've never been in a programming class that discussed how C++ classes are actually implemented at runtime using vtables and such. Instead, I had to learn all that on my own by implementing a programming language on the Common Language Runtime. The CLR hides a lot of the nitty-gritty of implementing inheritance from the C# and VB programmer, but the language implementer has to understand it at a fairly deep level to make sure they handle cross-language interop correctly. As such, I find myself continually falling back on my CLR experience when looking at C++ features and thinking, "How is this supposed to work?" I can't imagine how people who haven't had to confront these kinds of implementation-level details figure it out.

It makes me wonder if a proper C++ programming course would actually work in the opposite direction of how most courses (that I've seen) do it. Instead of starting at the conceptual level, start at the machine level. Here is a machine: CPU, registers, memory. Here's how basic C++ expressions map to them. Here's how basic C++ structures map to them. Here's how you use those to build C++ classes and inheritance. And so on. By the time you get to move semantics vs. copy semantics, people might actually understand what you're talking about.

You should also follow me on Twitter here.

Writing a Compiler in Haskell

In the useless memories department, xkcd’s cartoon today took me back:

In my junior year of college, I took the standard programming languages course that goes over fun stuff like how programming languages are put together. The main project for the class, of course, is to build a compiler for a small language. The twist was that the professor for this class happened to be one of the designers of the Haskell language, so you can guess what programming language the compilers had to be written in.

The thing is, this class took place probably in the fall of 1990 or the spring of 1991. According to Wikipedia, Haskell debuted in 1990, so first and foremost the tools we were working with were… uh, primitive at best. I think the Haskell interpreter was written in Common Lisp, and basically you took your program, invoked the interpreter on it, went and got yourself a nice cup of tea (so to speak), and then came back to an answer (if you were lucky), an undecipherable error message (if you were only kind of unlucky), or just nothing (most of the time). Definitely honed the skill of "psychic debugging."

Anyway, with a brand-new language (that, of course, had no manuals or books written about it yet) and an alpha-level (at best) compiler, as I remember it at least half the class never even managed to get something working. I somehow managed to grok enough of Haskell to be able to write a functioning compiler that took our toy language in and produced correct output. I was one of the lucky ones. But here’s the thing…

It was, without a doubt, one of the most beautiful programs that I’ve ever written. Just a real work of art, with the data flowing through the code in one of the most natural ways I’ve ever seen. Just awesome. It’s the one piece of code that I look back on and wish that I still had a copy of.

So I’ve always had a soft spot in my heart for Haskell. Even if nobody else could actually understand what it was doing or why.

You should also follow me on Twitter here.

Thinking of rewriting your codebase? Don’t.

A friendly word of advice: if you’re thinking of rewriting your codebase… don’t. Just don’t. Please. Really.

Yes, I know, your codebase as it exists today is a steaming pile of crap. Yes, I know that you had no idea what you were doing when you first wrote it. Or that it was written by idiots who have long since moved on. Or that the world has changed and whatever style of development that was in vogue at the time is now antiquated. Yes, I know that you're tired of dealing with the same old architectural limitations over and over again. And that the programming language you wrote it in is no longer fashionable. And that there are lots of new features in the newer versions of the language that would allow you to express things so much more elegantly. Or maybe, god help me, you think that you can write it more modularly this time and that will allow you to more quickly iterate on new features.


OK, now that that’s out of my system, let’s get to the natural question: why? Why am I saying this? Because if you ignore my advice and plunge ahead anyway, you’re going to run into what I modestly call:

Vick’s Law of Rewrites: The cost of rewriting any substantial codebase while preserving compatibility is proportional to the time it took to write that codebase in the first place.

In other words, if it took you one year to write your codebase, it's going to take you on the order of one year to rewrite that codebase. But, of course, we're not talking about codebases that are only around one year old, are we? No, we aren't. Instead, people usually start talking about rewrites around the 5-7 year mark. Which means that, as per the law above, it's going to take on the order of 5-7 years to rewrite that codebase and preserve any semblance of compatibility. Which is not what most people who embark on large rewrites usually think. They think that they can do it in substantially less time than it took in the first place. And they're always wrong, in my experience.

I first came to this law way back when I worked on Access and the leaders of the project (who, I realize now, were still laughably young, but seemed very old and wise to me at the time) started talking about rewriting Access after we shipped version 2.0 (development time: approx. 4 years total at that point). At the time, I owned the grid control that was the underpinning of the Access data sheet and other controls, and let me tell you–that piece of code was a bit of a beast. Lots of corner cases to make the UI work “just so.” I was talking to one of the leads about this and he dismissed me with a proverbial wave of the hand: “Oh, no, since we understand how that code works now, we can rewrite it in three months.” I think it was at that moment I knew that the rewrite was doomed, although it took three more years for the team to get to that realization.

The bottom line is that the cost of most substantial codebases comes not from the big thinking stuff, but from details, details, details and from bugs, bugs, bugs. And every time you write new code–even if you totally understand what it's supposed to be doing–you've still got the details and bugs to deal with. And, of course, it's highly unlikely that you totally understand what it's supposed to be doing. One of the things that continually humbled me when I worked on Visual Basic was how we'd be discussing some finer point of, say, overload resolution, and someone would point out some rule that I didn't remember. No way, I'd say, that's not a rule. Then I'd go check the language specification which I wrote, and there it would be, in black and white. I had totally forgotten it even though I had written it down myself. And the truth is, there are a million things like this that you will inevitably miss when you rewrite, and then you will have to spend time fixing up. And don't even get me started on the bugs. Plus, you're probably rewriting on a new platform (because who wants to write on that old, antiquated platform you were writing on before?), and now you've got to relearn all the tricks you knew about how to work with the old platform but which don't work on the new one.

In my experience, there are three ways around this rule:

  1. You’re working on a small or very young codebase. OK, fine. In that case it’s perfectly OK to rewrite. But that’s not really what I’m talking about here.
  2. Decide that you don’t care so much about preserving compatibility or existing functionality. Downside: Don’t expect your users to be particularly pleased, to put it mildly (see: Visual Basic .NET).
  3. Adopt a refactor rather than rewrite strategy. That is, instead of rewriting the whole codebase at once, take staged steps towards the long-term goal, either rewriting one component at a time, or refactoring one aspect of the codebase at a time. Then stabilizing, fixing problems, etc. Downside: Not as sexy or satisfying as chucking the whole thing out and “starting clean.” Requires actually acquiring a full understanding of your existing codebase (which you probably don’t have). Plus, this will take a lot longer than even rewriting, but at least you always have a compatible, working product at every stage of the game.

Rewriting can do enormous damage to a product, because you typically end up freezing the development of your product until the rewrite is done. Maybe at best you leave behind a skeleton crew to pump out a few minor features while the bulk of the team (esp. your star coders) works on the fun stuff. Regardless, this means your product stagnates while the rest of the world moves on. And if, god forbid, the rewrite fails, then you run the risk of your team moving on and leaving the skeleton crew as the development team. I've seen this happen to products that, arguably, never really recover from it.

So please, learn to live with that crappy codebase you’ve got and tend to it wisely. The product you save may be your own.

You should also follow me on Twitter here.

The Use/Build Fallacy

Working in the language space, especially in language design, you frequently encounter people who fall victim to what I call the “Use/Build Fallacy.” It goes something like this:

Because I know how to use something, I know how to build it as well.

This fallacy is best illustrated by a story I heard from a friend who's a teacher (another profession that frequently has to deal with this). She was teaching middle school when teacher conferences rolled around. Talking to the father of one of her students, she explained to him that his daughter was having a lot of trouble in English class and that, based on her observations of how hard the daughter was working, she was pretty sure that the daughter had some sort of language learning disability. She therefore strongly recommended that he take his daughter to an expert to get tested, and that she be tutored by someone trained to deal with the specific kind of learning disability. The father was unmoved, mainly because he didn't like the idea that all this would cost him money. "Can't you just help her more in class?" he asked. My friend explained that she was helping her all she could, but she wasn't an expert in diagnosing learning disabilities and his daughter really needed to see someone who had the appropriate training.

After a bit of back-and-forth, the father finally got exasperated and said, “Fine, I’ll just tutor her myself! I mean, how hard could it be? I went to school!” My friend then shot back, “Look, you’re a general contractor, right? What would you think if I came to you and said, ‘I don’t need you to build my house—I’ve lived in a house before, so how hard could it be to build one myself?” This, finally, stumped him. I’m not sure whether he actually got his daughter the help she needed, but the story stuck with me because my friend’s response is the perfect distillation of the Use/Build Fallacy.

Note that I’m not saying that just because you’re not an expert on something you can’t have an opinion. I may not know how to build a house, but that doesn’t mean I have nothing to say to the contractors if I decide to do some renovations on my house. Not falling prey to the fallacy, though, means that I always keep a healthy respect for the expert in a field—as long as they truly seem to know what they’re talking about. (I hear this from friends who are architects all the time—they get hired by someone to build or renovate a house for them, and then their client spends all their time endlessly arguing with everything they do. Why bother to hire an expert if you think you already know how to do it yourself?)

I try to remember this myself every time I encounter some aspect of some programming language that I don't like. Right now, I'm neck-deep in C++ code and it's tempting to spend all my time kvetching about how horrible a job Bjarne has done over the years. And then I try to remember–even as someone who's actually built a language–that this stuff is hard. A language of any complexity has a huge number of moving parts, all of which interact with each other in an unpredictable manner. Historical choices can come back to bite you in all sorts of unexpected ways. Oftentimes all you have are a bunch of imperfect choices, and you have to simply pick the least bad of them all. And then you get to sit there and listen to everyone on the sidelines complain about how horrible a job you've done and how they could do it so much better than you because, hey, they've used a programming language before.

So I try to temper my complaints with a little humility, and remember how different building is from using.

Black Hole Projects

OK, so I may have reset my blog, but there were some interesting posts that probably shouldn't disappear totally down the memory hole. This is one of them, which I am rescuing from back in 2004 because of its continuing relevance. It seems that six months can't go by without something I hear making me think of it. Edited from the original for clarity and to bring it up-to-date.

Many, many years ago, Steve Maine took the opportunity to reminisce about a project at Microsoft that was being worked on while he was an intern. He says:

[ When I was an intern… ] there was this mythical project codenamed “Netdocs”, and it was a black hole into which entire teams disappeared. I had several intern friends who got transferred to the Netdocs team and were never heard from again. Everyone knew that Netdocs was huge and that there were a ton of people working on it, but nobody had any idea what the project actually did.

I also knew a few people who invested quite a few years of their lives into “Netdocs” and it got me thinking about the phenomenon of “black hole projects” at Microsoft (and elsewhere, I’ll wager). There was one I was very close to very early in my career that I managed to avoid, many others that I just watched from afar, and one or two that I got dragged into despite my best intentions. I can’t really talk about most of them since most never saw the light of day, but it did get me thinking about the peculiarly immutable traits of a black hole project. They seem to be:

  • They must have absurdly grandiose goals. Something like “fundamentally reimagine the way that people work with computers.” Nobody, including the people who originate the goals, has a clear idea what the goals actually mean.
  • They must involve throwing out some large existing codebase and rewriting everything from scratch, “the right way, this time.”
  • They must have completely unrealistic deadlines. Often this is because they believe that they can rewrite the original codebase in much, much less time than it took to write that codebase in the first place.
  • They must have completely unrealistic beliefs about compatibility. Usually this takes the form of believing you can rewrite a huge codebase and preserve all of its little quirks without a massive amount of extra effort.
  • They are always "six months" from major deadlines that never seem to arrive. Or, if they do arrive, another milestone is added on to the end of the project to compensate.
  • They must consume huge amounts of resources, sucking the lifeblood out of one or more established products that make significant amounts of money or have significant market share.
  • They must take over any group that does anything that relates to their absurdly broad goals, especially if that group is small, focused, has modest goals, and actually has a hope of shipping in a reasonable timeframe.
  • They must be prominently featured as demos in public settings such as company meetings, all-hands, conferences, etc. to the point where people groan “Oh, god, not another demo of this thing. When is it ever going to ship?”
  • They usually are prominently talked up publicly by high level executives for years before dying a quiet death.
  • They usually involve “componentizing” some monolithic application or system. This means that not only are you rewriting a huge amount of code, you’re also splitting it up across one or more teams that have to all seamlessly work together.
  • As a result of the previous point, they also usually involve absolutely massive integration problems as different teams try madly to get their components working with each other.
  • They usually involve rewriting the application or system on top of brand-new technology that has not been proven at a large scale yet. As such, they get to flush out all the scalability problems with the new technology.
  • They are usually led by one or more Captain Ahabs, madly pursuing the white whale with absolute conviction, while the deckhands stand around saying “Gee, that whale looks awfully big. I’m not sure we can really take him down.”
  • Finally, 90% of the time, they must fail and die a flaming death, taking down other products with them (or at least severely damaging those products). If they do ship, they must have taken at least 4-5 years to ship and be at least 2 years overdue.

It's kind of frightening how easy it was to come up with this list – it all just poured out. Looking back over 19 years at Microsoft, I'm also kind of frightened at how many projects this describes. Including some projects that are ongoing at the moment…

You should also follow me on Twitter here.

Seven Rules for Beginning Programmers

A little while ago Phil Wadler posted “VS Naipaul’s Rules for Beginners,” listing the famous author’s seven rules for beginning writers. Upon reading them it occurred to me that, with a little adaptation, they could equally apply to beginning programmers. So, with apologies to Mr. Naipaul, here are my “Rules for Beginners:”

  1. Do not write long procedures. A procedure should not have more than ten or twelve lines.
  2. Each procedure should have a clear purpose. It should not overlap in purpose with the procedures that went before or come after. A good program is a series of clear, non-overlapping procedures.
  3. Do not use fancy language features. If you’re using something more than variable declarations, procedure calls, control flow statements and arithmetic operators, there is something wrong. The use of simple language features compels you to think about what you are writing. Even difficult algorithms can be broken down into simple language features.
  4. Never use language features whose meaning you are not sure of. If you break this rule you should look for other work.
  5. The beginner should avoid using copy and paste, except when copying code from one program they have written to a new one they are writing. Use as few files as possible.
  6. Avoid the abstract. Always go for the concrete. [Ed. note: This one applies unchanged.]
  7. Every day, for six months at least, practice programming in this way. Short statements; short, clear, concrete procedures. It may be awkward, but it’s training you in the use of a programming language. It may even be getting rid of the bad programming language habits you picked up at the university. You may go beyond these rules after you have thoroughly understood and mastered them.

You should also follow me on Twitter here.