Category Archives: Personal

My (Terrifying) Meeting with Bill Gates

This being Halloween and all, I thought I would relate one of the most frightening experiences I’ve had in my two decades working at Microsoft. I was reminded of it this weekend when we had a small reunion for everyone who’s worked on Access over its 25-or-so-year history. It was a bit of old home week, seeing people who in some cases I haven’t seen in well over a decade and a half.

Anyway, it reminded me of an old practice at Microsoft called the “BillG review,” which was a (I think) twice-yearly meeting every major product team had with Bill Gates. They’d go over their progress since the last meeting, talk about their future plans, and give him the chance to ask questions and give direction. As one can imagine, this was a really huge deal (especially back in the days when Microsoft was still a 12,000-person company). A bad BillG review could be extremely traumatic, especially since Bill was not particularly known for his warm-and-fuzziness, nor did he suffer fools gladly. It could also radically alter product plans, depending on whether he agreed with what you were doing or not.

For most of my time in Access, I was too junior to actually attend these reviews, much less give any part of the presentation. I’d mostly just hear about it in the form of a flurry of design changes after each one. But by the time we’d gotten to working on Access ’97, a large majority of the senior folks had split off from the team to work on an ill-fated rewrite of Access. The main focus of Access ’97 was doing something about all the performance we’d lost moving to 32-bit (i.e. Windows 95) and integrating COM and VBA for the first time, and I had elected myself to be in charge of the development side of that effort. So when it came time to do the BillG review, I was tapped to give part of the presentation on the performance work. I was also there to be thrown to the lions in the event that Bill started drilling into some highly technical question, as he was famous for doing (cf. Joel’s discussion of his first BillG review).

So the day of the review rolls around and I show up at Bill’s executive conference room with the rest of the team and various Access leaders. Of course, Bill’s running late, so Tod Nielsen (who was Access business unit manager at the time, I believe) decides to entertain us with colorful stories of BillG reviews past. And he decides to tell us the story of the final Omega BillG review.

Now, Omega was the desktop database project that preceded Access. They worked on it for about a year and a half (I think) before it got cancelled, all the code was thrown out, and they restarted on a new project that became Access. I wasn’t around for Omega, but I had heard lots of horror stories about it getting cancelled from people who’d been on that team. As you can imagine, then, the final BillG review for Omega was probably not a particularly happy event.

As I remember Tod telling it, he said that they were going through a list of what wasn’t going well with Omega when, all of a sudden, Bill loses it and starts swearing. “Get f–king recruiting in here, I want f–king recruiting in here right now!” Everyone’s a bit puzzled (and worried), and so they say, “OK, Bill, why do you want recruiting?” He replies, “Because I want to find out what f–king colleges we recruited you guys from and tell them not to f–king recruit there any more because they clearly produce f–king idiots!” Ouch. At that point, the team knew the review was over, so they basically said, “All right, Bill, we’ll let you calm down and talk to you later,” and left. Tod thought the whole thing was hilarious… now. (It’s also possible he embellished the story a bit, I can’t testify to the veracity of his tale…)

Of course, as the person who was about to present about the primary feature of Access 97 to Bill f–king Gates, I was absolutely terrified. Great, I thought, I’m totally screwed. I’m going to die. Thankfully, Bill showed up, we did the review, and aside from one tense moment, everything went extremely smoothly. Then I got to sit and listen while Bill and the VPs sat around for a little while and discussed when they were going to merge our division in with Office, like they were moving pieces around on a chess board. Fascinating.

Not coincidentally, my other “scariest story” from my time at Microsoft also involves Bill Gates, but that’s a story for another time…

You should also follow me on Twitter here.

Undoing some of the damage…

OK, well, this is a bit embarrassing, but here goes: you know how I threw away the accumulated blogging of nearly seven years (a.k.a. Hitting the Big Red Switch) in favor of a clean slate?

I realized that maybe I went… a little bit overboard.

Having just now completed what I can only describe as one of the more trying periods of my life (getting separated, getting divorced, selling a house, along with some questionable career decisions, none of them experiences I can recommend to anyone), I can look back and say with some conviction that my decision to drop my blog history was motivated in no small part by a desire to wipe my entire slate clean. Unfortunately, I think I ended up throwing out the baby with the bathwater, since there was some genuinely useful stuff back in there. (And a lot of stuff that I’m sure no one will ever care about again, if they ever did.)

At any rate, I sucked it up, pulled out the data backup and ported all the blog posts over to WordPress. So my entire blog history has been restored. Yay! There are still some image links I’m working to restore, but by and large it’s all there. So enjoy!

(I realize it’s been quiet around here. There’s not much I can say about the stuff I’ve been working on (more on that sooner rather than later, I hope), although you should really check out some of the cool stuff that some of my coworkers have been doing with TypeScript. It’s awesome!)

“I Didn’t Quit.”

I ran across this great blog entry today called “What’s The Most Difficult CEO Skill? Managing Your Own Psychology”. I don’t expect to ever, ever, ever be a CEO, but I think a lot of the lessons that Ben Horowitz talks about apply to just about any leadership position you might ever find yourself in, from team lead to Little League coach. Things like needing to learn to handle stress, needing to take the responsibility seriously but not too seriously, needing to cope with loneliness, etc. But the thing that caught my eye was the end of the entry:

Whenever I meet a successful CEO, I ask them how they did it. Mediocre CEOs point to their brilliant strategic moves or their intuitive business sense or a variety of other self-congratulatory explanations. The great CEOs tend to be remarkably consistent in their answers. They all say: “I didn’t quit.”

In my experience, this one phrase—“I didn’t quit.”—is the dividing line between success and failure. A few of the successes that I’ve had in my life (such as they are) can be attributed to dumb luck or raw talent. But most, and certainly the most satisfying ones, were primarily attributable to the fact that I didn’t give up even though I often might have wanted to.

The hard part is that it’s often impossible to know whether you’re on the right path or not. Sure things may turn out to be dead ends. Things that people say cannot be done may turn out to be golden opportunities. The best thing you can do is be clear on what’s important to you, follow your gut, and… don’t quit.


External Requests Versus Internal Requirements

A commenter observed that a side-effect of my blog reset is that now any answers on Stack Overflow that pointed to my blog are broken. This is an unfortunate situation that I didn’t consider when I decided to reset my blog, and it did give me some pangs of regret when I looked at the list of answers that referred to me. It also got me thinking about the inevitable conflict in life between external requests (i.e. things other people want from me) and internal requirements (i.e. things that I want for myself).

From the Internet’s perspective, it would be happy if every piece of information that ever appears on the web would: a) stay on the web forever, and b) stay in exactly the same place forever. This makes total sense from the collective perspective. I’ve been amazed by the number of times that I’ve gone to research some obscure thing (such as how to unbind a DOS executable from a DOS/4GW extender… don’t ask me why) and found some answer from back in the dark ages (like, 1996). Information disappearing, from a public utility standpoint, is a Very Bad Thing because you never know how useful that information might be some day. In college I was always amazed at the variety of interesting historical information that could be deduced from everyday things like private letters, diaries, commercial correspondence, etc. The true enemy of the scholar, indeed, is the person who throws away… well, anything, really.

But from my perspective, it’s a lot of dead weight. I mean, a lot of the stuff I dumped off my blog was written five or six years ago. Some of it was wrong and a lot of it was irrelevant when looked at through the lens of later events. Some of it reminded me of things that I’d rather forget all about. And some of it was, frankly, embarrassing. One could say, “Well, then, just don’t look at it!” And, most of the time, I didn’t. But, you know, it was still there, taunting me from the archive list on the right hand side of the blog.

Some changes in my personal life recently motivated me to go through a bunch of boxes that had been sitting in storage for years. They were mostly full of stuff from my high school and college years: old papers, letters, other random stuff. I’d been holding on to a lot of it because, well, it was my stuff. But in looking through it, I realized how much of a burden most of it had become. Like the blog, I didn’t look at the boxes very often, but there they always were, taking up space, having to be moved around, making me keep track of them. So I decided to go through and throw away anything that didn’t have a strong, tangible, positive, personal meaning to me. I’d say I threw away about 90% of what I had stored in those boxes, tons of stuff I no longer even remembered anything about. And the wonderful thing? Once I was over the initial trauma, I felt a lot better, freer, and lighter. It’s amazing what wonders getting rid of old stuff can do for you, even if it does make my theoretical future biographer’s job harder.

So my apologies to the Internet: I realize that losing my minuscule contributions to global knowledge might make life a little more difficult, and I’m sorry about that. But I have to say: I feel a whole lot better letting go of that stuff. I’m sure it’s going to cost me some (or even a lot) of visitors, but it seems like a small price to pay. At least, for me.


Murphy’s Computer Law

A long time ago, my family took a trip to Expo ’86 in Vancouver, with stop-offs in San Francisco and Los Angeles. In LA, we went on the Universal Studios tour, something which I basically have no memory of. I did get a memento, though: a poster entitled “Murphy’s Computer Law” with a bunch of humorous computing “laws” on it. This poster went up in my room, accompanied me to college, and has been in most of my offices at Microsoft. However, a few years ago, a corner ripped off in a move. Then, while it was sitting around waiting to be repaired, it got a bit stained. And then I realized just how dated and ratty the thing looked. So I figured it was time to retire it. However, I would like to hang on to the “laws,” since some of them are still quite pertinent, even if others are quite outdated. So here they are, on my “permanent record:”

Murphy’s Computer Law:

  1. Murphy never would have used one.
  2. Murphy would have loved them.

Bove’s Theorem: The remaining work to finish in order to reach your goal increases as the deadline approaches.

Brooks’ Law: Adding manpower to a late software project makes it later.

Canada Bill Jones’ Motto: It’s morally wrong to allow naïve end users to keep their money.

Cann’s Axiom: When all else fails, read the instructions.

Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.

Deadline-Dan’s Demo Demonstration: The higher the “higher-ups” are who’ve come to see your demo, the lower your chances are of giving a successful one.

Deadline-Dan’s Demon: Every task takes twice as long as you think it will take. If you double the time you think it will take, it will actually take four times as long.

Demian’s Observation: There is always one item on the screen menu that is mislabeled and should read “ABANDON HOPE ALL YE WHO ENTER HERE.”

Dr. Caligari’s Come-back: A bad sector disk error occurs only after you’ve done several hours of work without performing a backup.

Estridge’s Law: No matter how large and standardized the marketplace is, IBM can redefine it. [ed: later “Microsoft,” now “Apple,” I guess]

Finagle’s Rules:

  1. To study an application best, understand it thoroughly before you start.
  2. Always keep a record of data. It indicates you’ve been working.
  3. Always draw your curves, then plot the reading.
  4. In case of doubt, make it sound convincing.
  5. Program results should always be reproducible. They should all fail in the same way.
  6. Do not believe in miracles. Rely on them.

Franklin’s Rule: Blessed is the end user who expects nothing, for he/she will not be disappointed.

Gilb’s Laws of Unreliability:

  1. At the source of every error which is blamed on the computer you will find at least two human errors, including the error of blaming it on the computer.
  2. Any system which depends on human reliability is unreliable.
  3. Undetectable errors are infinite in variety, in contrast to detectable errors, which by definition are limited.
  4. Investment in reliability will increase until it exceeds the probable cost of errors, or until someone insists on getting some useful work done.

Gummidge’s Law: The amount of expertise varies in inverse proportion to the number of statements understood by the general public.

Harp’s Corollary to Estridge’s Law: Your “IBM PC-compatible” computer grows more incompatible with every passing moment.

Heller’s Law: The first myth of management is that it exists.

Hinds’ Law of Computer Programming:

  1. Any given program, when running, is obsolete.
  2. If a program is useful, it will have to be changed.
  3. If a program is useless, it will have to be documented.
  4. Any given program will expand to fill all available memory.
  5. The value of a program is proportional to the weight of its output.
  6. Program complexity grows until it exceeds the capability of the programmer who must maintain it.
  7. Make it possible for programmers to write programs in English, and you will find that programmers cannot write English.

Hoare’s Law of Large Programs: Inside every large program is a small program struggling to get out.

The Last One’s Law of Program Generators: A program generator creates programs that are more “buggy” than the program generator.

Meskimen’s Law: There’s never time to do it right, but always time to do it over.

Murphy’s Fourth Law: If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong.

Murphy’s Law of Thermodynamics: Things get worse under pressure.

Ninety-Ninety Rule of Project Schedules: The first ninety percent of the task takes ninety percent of the time, and the last ten percent takes the other ninety percent. [ed: words to live by]

Nixon’s Theorem: The man who can smile when things go wrong has thought of someone he can blame it on.

Nolan’s Placebo: An ounce of image is worth a pound of performance.

Osborn’s Law: Variables won’t, constants aren’t.

O’Toole’s Commentary on Murphy’s Law: Murphy was an optimist.

Peer’s Law: The solution to a problem changes the problem.

Rhodes’ Corollary to Hoare’s Law: Inside every complex and unworkable program is a useful routine struggling to be free.

Robert E. Lee’s Truce: Judgment comes from experience; experience comes from poor judgment.

Sattinger’s Law: It works better if you plug it in.

Shaw’s Principle: Build a system that even a fool can use, and only a fool will want to use it. [ed: also known as “Bob’s Law”]

SNAFU Equations:

  1. Given a problem containing N equations, there will be N+1 unknowns.
  2. An object or bit of information most needed will be least available.
  3. Any device requiring service or adjustment will be least accessible.
  4. Interchangeable devices won’t.
  5. In any human endeavor, once you have exhausted all possibilities and failed, there will be one solution, simple and obvious, highly visible to everyone else.
  6. Badness comes in waves.

Thoreau’s Theories of Adaptation:

  1. After months of training, you finally understand all of a program’s commands, and a revised version of the program arrives with an all-new command structure. [ed: also known as the “Office Principle”]
  2. After designing a useful routine that gets around a familiar “bug” in the system, the system is revised, the “bug” is taken away, and you’re left with a useless routine.
  3. Efforts in improving a program’s “user friendliness” invariably lead to work in improving users’ “computer literacy.”
  4. That’s not a “bug”, that’s a feature!

Weinberg’s Corollary: An expert is a person who avoids the small errors while sweeping on to the grand fallacy.

Weinberg’s Law: If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.

Zymurgy’s First Law of Evolving System Dynamics: Once you open a can of worms, the only way to recan them is to use a larger can.

Wood’s Axiom: As soon as a still-to-be-finished computer task becomes a life-or-death situation, the power fails.

T-SQL Tuesday #8: Learning and Teaching


Since I’m joining the T-SQL community, I thought I’d try my hand at a “T-SQL Tuesday” that I could actually have an opinion about. This week’s question (hosted by Robert Davis, a.k.a. @SQLSoldier on Twitter) is “How do you learn? How do you teach? What are you learning or teaching?” and is very relevant for me because, of course, I just joined the T-SQL team a short while ago and am doing a whole lot of learning at the moment.


How I learn

I was going to say “by doing,” but I don’t think that’s accurate enough, because there are lots of kinds of “doing.” I’m reminded of something they said when I was learning to ballroom dance for my wedding reception. They said that when learning anything new, people tend to go through four distinct stages: “unconscious incompetence” (i.e. you don’t know how bad you are), “conscious incompetence” (i.e. you know exactly how bad you are), “conscious competence” (i.e. you’re good but you have to pay attention), and “unconscious competence” (i.e. you’re good and it seems effortless). So when I’m starting something new, I’m doing a lot of things, but most of what I’m doing is learning just how little I actually know. That’s helpful and necessary, but it’s not exactly what I’d call “real” learning. The real learning seems to come between the second and third stages, when I’ve discovered just how bad I am and am now working on figuring out how to be less bad. When I get to the fourth stage, the learning starts to taper off, and that’s when I really get to enjoy the state of knowing (which I think is also called the state of “flow”) and I get to have a lot of fun.

The interesting implication of this is that when I’m entering a new area, my first attempts necessarily aren’t going to be that great, because I don’t know what I don’t know yet. So the initial doing isn’t really very helpful in learning the area, nor is it likely to look much like what I’m going to end up with if I keep on learning. It’s only when I’ve got something, and I know, at least at some level, how bad it is, that I can start learning the area. Ironically, when the true learning starts, it mostly looks like anal-retentiveness and neat-freakishness: going over and over and over something I’ve done, trying to make it better and suck less. In other words, to start really learning something, I have to take something I’ve already done and go back and start pulling at the loose threads, seeing how it unravels and then figuring out how to reweave it properly. That’s when I really get to figure out how things are supposed to work.

(I’ll note here that this is the number one mistake that I’ve seen most new programmers make. They’re like the verse from The Rubaiyat of Omar Khayyam:

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.

They write their code once and then abandon it, never returning, always moving on to the next thing. Thus, they never actually get the chance to learn how to do things properly and always stay in that state of unconscious incompetence.)

Ironically, the situation I’m stepping into in SQL Server is perfect for “real” learning, because I get to largely shortcut the first stage of unconscious incompetence. That is to say, there’s already this large, mature artifact (i.e. the SQL Server codebase), so I don’t need to go through the trouble of creating something imperfect; someone’s already done that for me. I can spend just a few short weeks realizing just how little I actually know about anything and then jump straight to pulling threads and seeing what starts coming apart. Metaphorically, of course. I’m not gunning to have SQL Server fall apart on me or anything.

I actually think this can be more fun than starting something brand new and blazing the path, though that’s often not how the world sees it.


How I teach

I think I’m actually going to touch on this in more detail soon, but the short answer is: “by writing.” I’m pretty consciously incompetent when it comes to standing up in front of people and teaching them things, but I’ve been practicing writing for a whole lot longer and am better at it than at speaking. And writing is just another form of what I was talking about in the previous section: first, I sit down and try to write an explanation of whatever it is I’m trying to say. Then I realize how pathetically inadequate it is (most of the time) at saying what I want it to say, or how little I understand what I’m trying to talk about. So I start pulling on the threads again and seeing what I can unravel and rework. And I find myself learning more not only about the process and practice of writing, but also about whatever it is I’m trying to explain.

I think writing can be a wonderful way to teach people things, but I think it only really works (even for technical writing!) if you follow the dictum of “writing what you know (and love).” In the end, I guess any teaching medium works if the teacher is interested enough in the subject, knowledgeable enough about it, and has a real passion for teaching (as opposed to a passion for having people listen to them, which is something entirely different).


Well, that’s about it. Hope this was interesting!

There are only three types of programmers in the world…

…and they are:

  1. Programmers who want to write an operating system
  2. Programmers who want to write a compiler
  3. Programmers who want to write a database

It’s not that every programmer ever actually works on one of these, just that every programmer seems to dream of doing one of these things. It’s the primary reason why things like Linux exist. Yes, open source, blah, blah, blah, OS choices, blah, blah, blah, evil Microsoft, blah, blah, blah. But I would bet my bottom dollar that 9 out of 10 of the people donating their valuable time to the Linux project do so not because they want an alternative to Windows but because they always dreamed of being OS hackers. It’s also why there are so many damn programming languages out there: all the people who sit around dreaming of being, I don’t know, James Gosling or something.

(I think with the advent of the Internet, it’s likely that there’s now a fourth kind of programmer who wants to write websites, but I’m not totally sure about that yet.)

The interesting thing about these categories is that the Venn diagram tends, in my experience, to be pretty distinct: most “data” guys aren’t also “language” guys, most “language” guys aren’t also “OS” guys, and so on. My theory is that it’s like the parable of the blind men and the elephant: although we all grapple with basically the same set of problems, each kind of programmer grapples with a different aspect of it.

The blind men and the elephant

I say all this because, although I started out working in databases, it’s clear to me that I’ve always been a “language” guy. In college, I did so-so in the OS course and never touched a database course (I’m not even sure they were offered), but my compiler course netted me a special letter of commendation from the professor (the only one I ever got). Anyway, now I’m back in the “data” world as an even more confirmed “language” guy, and the most interesting thing is how many of the problems are the same, but the way they’re conceptualized, handled, or even talked about is different from what I’ve been used to from working on programming languages. It’s kind of… refreshing to see things in a different light. More on that soon.