Language wars and Kilkenny cats

I try to stay out of the language wars as much as possible, and I’ve particularly sat this round out because it was started by a company that’s trying to sell something. However, the word that A Word A Day sent out today seemed strangely appropriate. The word was “Kilkenny cats” and you can read more about them here. I particularly liked the limerick:

There wanst was two cats of Kilkenny
Each thought there was one cat too many
So they fought and they fit
And they scratched and they bit
‘Til instead of two cats there weren’t any.

Appropriate…

Boo to you, too!

Don pointed to the release of a new .NET programming language called boo. I downloaded the boo manifesto and took a look, and it seems very interesting (and it’s nice to see some language experimentation going on on the CLR!). Some of the ideas aren’t unfamiliar – C# already has iterators, and we’ve kicked around the idea of being able to enable late binding on a per-variable basis (called “duck typing” in the manifesto) – but others are less familiar. One idea I’m curious to see play out is extensibility in the language. In the manifesto Rodrigo says:

I wanted a language I could extend with my own constructs.

This is one of those ideas that’s always seemed to me to be great on the face of it but questionable because of the law of unintended consequences. On the one hand, how can you argue with giving people the ability to extend the language as they see fit, especially given that compiler releases can be a year or more apart? On the other hand, it’s so easy to screw up language design even when you’ve been doing it a long time that I wonder if you won’t quickly end up with a language that’s no longer comprehensible by man or machine.
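(An aside on the “duck typing” point above: VB already has a cousin of this feature, so here’s a minimal sketch of what per-variable late binding feels like in practice. The Duck and Dog classes are made up for illustration, and the sketch assumes Option Strict Off, which in today’s VB turns late binding on for the whole file – the manifesto’s twist is making it opt-in per variable.)

    Option Strict Off

    Imports System

    Module DuckTypingSketch
        Class Duck
            Public Sub Speak()
                Console.WriteLine("Quack!")
            End Sub
        End Class

        Class Dog
            Public Sub Speak()
                Console.WriteLine("Woof!")
            End Sub
        End Class

        Sub Main()
            ' Late-bound: Speak is looked up at run time, so any object
            ' that happens to have a Speak method will do.
            Dim critter As Object = New Duck()
            critter.Speak()
            critter = New Dog()
            critter.Speak()   ' fails at run time if the object has no Speak
        End Sub
    End Module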

One thing that Rodrigo loves about .NET is the nice way that the entire set of class libraries is architected together in a pretty clean and consistent manner. This is a great advance over the hodgepodge approaches of the past, where fundamental libraries were incompatible or worked in entirely different ways. I wonder, though, if allowing extensible languages isn’t taking a step backwards into that world. Central control of a language ensures some reasonable amount of consistency in thought and design – if anyone is allowed to add extensions, will the language gradually devolve into the language of Babel? Or will only extensions that preserve the core style survive while the others die off? I guess we’ll just have to see.

I will add that one major problem with our compilers today is that entirely too much information is locked up in them. We’ve started exposing compilation information through mechanisms such as the Visual Studio code model, but we need to go a lot further to enable more advanced and extensible tools. It’s certainly possible to expose a lot more information about what exists in the code that’s being compiled without having to make that information modifiable outside of the compiler, and I think that’s something that’s unquestionably worth pursuing in the long run.

Why do hobbyists matter?

After Kathleen worried about losing the hobbyist programmer on .NET, Rory came back with the question “Should the hobbyist programmer matter to Microsoft?” His thesis, in a nutshell, was:

I say that we don’t worry about the hobbyists – don’t dissuade them from coding in .NET, but don’t cater to them either.

I understand where he’s coming from, but I think that the terminology is confusing the issue. When we talk about “hobbyist programmer,” it evokes images of guys tinkering in their garages or in their basements on the weekend. And, yeah, maybe if it was just the equivalent of a bunch of guys (or gals) building model trains or making furniture or rebuilding old cars, it wouldn’t matter so much. But the reality is that the hobbyist programmer doesn’t just program on the weekends – they’re also programming during the week at their “real” jobs.

Before I started working on VB, I worked on Access. And I cannot count the number of times that customer testimonials started along the lines of “I was fooling around with Access one day and managed to write this small app to help manage my group. Once my department found out about it, they started using it to manage the department. Now my whole company uses it!” One of the key aspects of Access’s success was this kind of “viral adoption” where some tinkerer used it to solve some local problem that ended up solving a company-wide problem. The same holds for VB – lots of VB applications in corporations started life as someone’s side project. As I put it in a recent presentation, “Throwaway applications have a way of becoming mission critical applications.” And where do those throwaway applications come from? Hobbyist programmers.

With the spread of computing into more and more industries, the people who don’t consider themselves programmers become more and more important because they’re the beachhead for “real programming” to make its way in. For example, the throwaway applications that hobbyists write ultimately help drive demand for professional programmers to come in and “professionalize” the applications so that they scale correctly for the corporation. Also, as hobbyist applications make companies more open to the benefits of technology, they open the door to commercial software that can augment or replace the homegrown applications and maybe do a better job. And, of course, hobbyist programmers usually need lots of help, which drives demand for websites, magazines, books, consultants, etc.

So, in much the same way that small businesses serve a vital function in keeping the economy going so that large corporations can thrive, hobbyists play a vital role in sustaining the ecosystem that supports the professional programmers. Even if the professional programmers don’t always appreciate that…

Neither a borrower nor a lender be…

Geek Noise pointed to a tirade against the Cult of Performance that brought to mind a criticism that Jet had of my 10 Rules of Performance entry. The points raised are very well taken – it can actually be more damaging to obsess about performance prematurely than to obsess about it too late. The point that I’m arguing for is moderation. For some reason, developers like to live at extremes – either they’re this way or that, never in the middle. Either they never think about performance or they’re completely obsessed with it. Instead, I’m arguing that performance should be a part of the development process and the thought process, but not the only consideration. (If it were, most applications would never ship.)

I suppose this is all human nature, if you look at the way that people tend to polarize in other areas. The title for this entry comes from a speech Polonius gives his son Laertes in Hamlet, in which he purports to give him some life advice. Since Polonius is sort of a doddering old windbag, most of the speech boils down to useless platitudes along the lines of be smart, but not too smart; be nice, but not too nice; be rich, but not too rich. However, he does end with a good bit of advice that, nonetheless, has got to be the most difficult to follow:

This above all: to thine own self be true,
And it must follow, as the night the day,
Thou canst not then be false to any man.

If you think about it, a lot of my performance advice boils down to a variation on this theme. Most of what I argue for is to stop and take the time to understand the true nature of your application and work with that. Obsessing about the performance of code you haven’t even written yet doesn’t fall into that category…

The Ten Rules of Performance

An Updated Introduction: Seven Years Later (February 9, 2004)

I wrote this document a long time ago to capture the conclusions I had reached from working on application performance for a few years. At the time that I wrote it, the way in which Microsoft dealt with performance in its products was starting to undergo a large-scale transformation. Performance analysis up to that time had largely been an ad-hoc effort and was hampered by a lack of well-known “best practices” and, in some cases, the tools necessary to get the job done. As the transition started from the 16-bit world to the 32-bit world, this kind of approach was clearly insufficient to deal with the increasing popularity of our products and the increasing demands that were being placed on them. The past seven years have seen major changes in the way that performance is integrated into the development process at Microsoft, and many of the “rules” that I outline below have become internalized into the daily processes of most major groups in the company. Even though a lot of what follows is no longer fresh thinking, I still get requests for the document internally, which leads me to believe that there’s still value in saying things that many people already know. So I’ve decided to publish it here, for your edification and in the hope that someone might find it useful. (Historical note: The “major Microsoft application” that I refer to below was not VB.)

A Short Introduction (May 28, 1997)

In the fall of 1995, I was part of a team working on an upgrade to a major Microsoft application when we noticed that we had a little problem — we were slower than the previous version. Not just a little slower, but shockingly slower. This fact had largely been ignored by the development team (including myself), since we assumed it would improve once we were finished coding up our new features. However, after a while the management team started to get a little worried. E-mails were sent around asking whether the developers could spend some time focusing on performance. Thinking “How hard could this be?” I replied, saying that I’d be happy to own performance. This should be easy, I thought naively: just tweak a few loops, rewrite some brain-dead code, eliminate some unnecessary work, and we’ll be back in the black. A full two years (and many man-years’ worth of work on many people’s part) later, we finally achieved the performance improvements that I was sure would take me, working alone, only a few months to achieve.

Unsurprisingly, I had approached the problem of performance with a lot of assumptions about what I would find and what I would need to do. Most of those assumptions turned out to be dead wrong or, at best, wildly inaccurate. It took many painful months of work to learn how to “really” approach the problem of performance, and that was just the beginning — once I figured out what to do, I discovered that there was a huge amount of work ahead! From that painful experience, I have come up with a set of “Ten Rules of Performance,” intended to help others avoid the errors that I made. So, without further ado…

The Rules:

Rule #1: Don’t assume you know anything.

In the immortal words of Pogo, “We have met the enemy, and he is us.” Your biggest enemy in dealing with performance is by far all the little assumptions about your application you carry around inside your head. Because you designed the code, because you’ve worked on the operating system for years, because you did well in your college CS classes, you’re tempted to believe that you understand how your application works. Well, you don’t. You understand how it’s supposed to work. Unfortunately, performance work deals with how things actually work, which in many cases is completely different. Bugs, design shortcuts and unforeseen cases can all cause computer systems to behave (and execute code) in unexpected, surprising ways. If you want to get anywhere with performance, you must continuously test and re-test all assumptions you have: about the system, about your components, about your code. If you’re content to assume you know what’s going on and never bother to prove you know what’s going on, start getting used to saying the following phrase: “I don’t know what’s wrong… It’s supposed to be fast!”

Rule #2: Never take your eyes off the ball

For most developers, performance exists as an abstract problem at the very beginning of the development cycle and a concrete problem at the very end of the cycle. In between, they’ve got better things to be doing. As a result, typically developers write code in a way that they assume will be fast (breaking rule #1) and then end up scrambling like crazy when the beta feedback comes back that the product is too slow. Of course, by the time the product is in beta there’s no real time to go back and redesign things without slipping, so the best that can be done is usually some simple band-aiding of the problem and praying that everyone is buying faster machines this Christmas.

If you’re serious about performance, you must start thinking about it when you begin designing your code and can only stop thinking about it when the final golden bits have been sent to manufacturing. In between, you must never, ever stop testing, analyzing and working on the performance of your code. Slowness is insidious — it will sneak into your product while you’re not looking. The price of speed is eternal vigilance.

Rule #3: Be afraid of the dark

Part of the reason why development teams find it so easy to ignore performance problems in favor of working on new features (or on bugs) is that rarely, if ever, is there anything to make them sit up and pay attention. Developers notice when the schedule shows them falling behind, and they start to panic when their bug list begins to grow too long. But most teams never have any kind of performance benchmark that can show developers how slow (or fast) things actually are. Instead, most teams thrash around in the dark, randomly addressing performance in an extremely ad hoc way and failing to motivate their developers to do anything about the problems that exist.

One of the most critical elements of a successful performance strategy is a set of reproducible, real-world benchmarks run over a long period of time. If the benchmarks are not reproducible or real-world, they are liable to be dismissed by everyone as insignificant. And they must be run over a long period of time (and against previous versions) to give a real level of comparison. Most importantly, they must be run on a typical user’s machine. Usually, coming up with such numbers will be an eye-opening experience for you and others on your team. “What do you mean that my feature has slowed down 146% since the previous version?!?” It’s a great motivator and will tell you what you really need to be working on.
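To make that concrete, here’s a minimal sketch of the sort of harness I mean. RunScenario is a made-up stand-in for whatever you’re tracking (boot, file open, whatever), and Stopwatch assumes the Whidbey-era framework – the mechanics matter much less than running the same scenario, on the same class of machine, build after build:

    Imports System
    Imports System.Diagnostics

    Module BenchmarkSketch
        ' Stand-in for the real-world scenario being tracked.
        Sub RunScenario()
            Threading.Thread.Sleep(50)
        End Sub

        Sub Main()
            Const Iterations As Integer = 10

            ' Warm-up run so one-time initialization doesn't skew the numbers.
            RunScenario()

            Dim totalMs As Long = 0
            For i As Integer = 1 To Iterations
                Dim timer As Stopwatch = Stopwatch.StartNew()
                RunScenario()
                timer.Stop()
                totalMs += timer.ElapsedMilliseconds
            Next

            ' Log the date next to the average so runs can be compared
            ' over time and against previous versions.
            Console.WriteLine("{0:yyyy-MM-dd}: average {1}ms over {2} runs", _
                              Date.Today, totalMs \ Iterations, Iterations)
        End Sub
    End Module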

Rule #4: Assume things will always get worse

The typical state of affairs in a development team is that the developers are always behind the eight ball. There’s another milestone coming up that you have to get those twenty features done for, and then once that milestone is done there’s another one right around the corner. What gets lost in this rush is the incentive for you to take some time as you go along to make sure that non-critical performance problems are fixed. At the end of milestone 1, your benchmarks may say that your feature is 15% slower but you’ve got a lot of work to do and, hey, it’s only milestone 1! At the end of milestone 2, the benchmarks now tell you your feature is 30% slower, but you’re pushing for an alpha release and you just don’t have time to worry about it. At the end of milestone 3, you’re code complete, pushing for beta and the benchmarks say that your feature is now 90% slower and program management is beginning to freak out. Under pressure, you finally profile the feature and discover the design problems that you started out with back in milestone 1. Only now with beta just weeks away and then a push to RTM, there’s no way you can go back and redesign things from the ground up! Avoid this mistake — always assume that however bad things are now, they’re only going to get worse in the future, so you’d better deal with them now. The longer you wait, the worse it’s going to be for you. It’s true more often than you think.

Rule #5: Problems have to be seen to be believed (or: profile, profile, profile)

Here’s the typical project’s approach to performance: Performance problems are identified. Development goes off, thinks about their design and says “We’ve got it! The problem must be X. If we just do Y, everything will be fixed!” Development goes off and does Y. Surprisingly, the performance problems persist. Development goes off, thinks about their design and says “We’ve got it! The problem must be A. If we just do B, everything will be fixed!” Development goes off and does B. Surprisingly, the performance problems persist. Development goes off… well, you get the idea. It’s amazing how many iterations of this some development groups will go through before they actually admit that they don’t know exactly what’s going on and bother to profile their code to find out. If you can’t point to a profile that shows what’s going on, you can’t say you know what’s wrong.

Every developer needs a good profiler. Even if you don’t deal with performance regularly, I say: Learn it, love it, live it. It’s an invaluable tool in a developer’s toolbox, right up there with a good compiler and debugger. Even if your code is running with acceptable speed, regularly profiling your code can reveal surprising information about its actual behavior.
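One thing to learn about whatever profiler you pick is what kind of time it’s actually measuring, because CPU time and elapsed time can tell very different stories (this is exactly the trap in rule #9 below). A small sketch of the difference, again assuming the Whidbey-era framework, using some deliberately disk-bound busywork:

    Imports System
    Imports System.Diagnostics
    Imports System.IO

    Module TimeKinds
        Sub Main()
            Dim proc As Process = Process.GetCurrentProcess()
            Dim cpuBefore As TimeSpan = proc.TotalProcessorTime
            Dim wall As Stopwatch = Stopwatch.StartNew()

            ' Disk-bound work: very little CPU, plenty of elapsed time.
            Dim scratch As String = Path.GetTempFileName()
            For i As Integer = 1 To 100
                File.WriteAllText(scratch, New String("x"c, 100000))
            Next
            File.Delete(scratch)

            wall.Stop()
            proc.Refresh()

            ' A CPU-only profile reports the first number; users feel the second.
            Console.WriteLine("CPU time:  {0:F0}ms", _
                proc.TotalProcessorTime.Subtract(cpuBefore).TotalMilliseconds)
            Console.WriteLine("Wall time: {0}ms", wall.ElapsedMilliseconds)
        End Sub
    End Module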

Rule #6: 90% of performance problems are designed in, not coded in

This is a hard rule to swallow, because a lot of developers assume that performance problems have more to do with code issues (badly designed loops, etc.) than with the overall application design. The sad fact of the matter is that in all but the luckiest groups, most of the big performance problems you’re going to confront are not the nice and tidy kind of issue where someone is doing something really dumb. Instead, they’re going to be difficult-to-pinpoint situations where several pieces of code interact in ways that end up being slow. Solving the problems usually requires redesigning the way large chunks of your code are structured (very bad) or redesigning the way several components interact (even worse). And given that most pieces of an application are interrelated these days, a small change in the design of one piece of code may cascade into changes in several other pieces of code. Either way, it’s not going to be simple or easy. That’s why you need to diagnose problems as soon as you can and get at them before you’ve piled a lot of code on top of your designs.

Also, don’t fall in love with your code. Most programmers take a justifiable pride in the code that they write and even more pride in the overall design of the code. However, this means that many times when you point out to them that their intellectually beautiful design causes horrendous performance problems and that several changes are going to be needed, they tend not to take it very well. “How can we mar the elegance and undeniable usability of this design?” they ask, horrified, adding that perhaps you should look elsewhere for your performance gains. Don’t fall into this trap. A design that is beautiful but slow is like a Ferrari with the engine of a Yugo — sure, it looks great, but you certainly can’t take it very far. Truly elegant designs are beautiful and fast.

Rule #7: Performance is an iterative process

At this point, you’re probably starting to come to the realization that the rules outlined so far tend to contradict one another. You can’t gauge the performance of a design until you’ve coded it and can profile it. However, if you’ve got a problem, it’s most likely going to be in your design, not your code. So, basically, there’s no way to tell how good a design is going to be until it’s too late to do anything about it! Not exactly, though. If you take the standard linear model of development (design, code, test, ship), you’re right: it’s impossible to heed all the rules. However, if you look at the development process as being iterative (design, code, test, re-design, re-code, re-test, re-design, re-code, re-test, …, ship), then it becomes possible. You will probably have to go through and test several designs before you reach the “right” one. Look at one of the most performance-obsessed companies in the software business: id Software (producers of the games Doom and Quake). In the development of their 3D display engines (which are super performance-critical), they will often go through several entirely different designs per week, rewriting their engine as often as necessary to achieve the result they want. Fortunately, we’re not all that performance-sensitive, but if you expect to design your code once and get it right the first time, expect to be more wrong than right.

Rule #8: You’re either part of the solution or part of the problem

This is a simple rule: don’t be lazy. Because we all tend to be very busy people, the reflexive way we deal with difficult problems is to push them off to someone else. If your application is slow, you blame one of your external components and say “it’s their problem.” If you’re one of the external components, you blame the application for using you in a way you didn’t expect and say “it’s their problem.” Or you blame the operating system and say “it’s their problem.” Or you blame the user for doing something stupid and say “it’s their problem.” The problem with this way of dealing with things is that soon the performance issue (which must be solved) is bouncing around like a pinball until it’s lucky enough to land on someone who’s going to say “I don’t care whose problem this is, we’ve got to solve it” and then does. In the end, it doesn’t matter whose fault it is, just that the problem gets fixed. You may be entirely correct that some boneheaded developer on another team caused your performance regression, but if it’s your feature, it’s up to you to find a solution. If you think this is unfair, get over it. Our customers don’t blame a particular developer for a performance problem, they blame Microsoft.

Also, don’t live with a mystery. At one point in working on the boot performance of my application, I had done all the right things (profiled, re-designed, re-profiled, etc.), but I started getting strange results. My profiles showed that I’d sped boot up by 30%, but the benchmarks we were running showed it had slowed down by 30%. My first instinct was to dismiss the benchmarks as being wrong, but they had been so reliable in the past (see rule #3) that I couldn’t do that. So I was left with a mystery. My second instinct was to ignore this mystery, report that I’d sped the feature up 30%, and move on. Fortunately, program management was also reading the benchmark results, so I couldn’t slime out of it that easily. So I was forced to spend a few weeks beating my head against a wall trying to figure out what was going on. In the process, I discovered rule #9 below, which explained the mystery. Case closed. I’ve seen many, many developers (including myself on plenty of other occasions) fall into the trap of leaving mysteries unsolved. If you’ve got a mystery, some nagging detail that isn’t quite right, some performance slowdown that you can’t quite explain, don’t be lazy and don’t stop until you’ve solved the mystery. Otherwise you may miss the key to your entire performance puzzle.

Rule #9: It’s the memory, stupid.

As I mentioned above, I reached a point in working on speeding up application boot where my profiles showed that I was 30% faster, but the benchmarks indicated I was 30% slower. After much hair-pulling, I discovered that the profiling method I had chosen effectively filtered out the time the system spent doing things like faulting memory pages in and flushing dirty pages out to disk. Given that: 1) we faulted a lot of code in from disk on boot, and 2) we allocated a lot of dynamic memory on boot, I was effectively filtering a huge percentage of the boot time out of the profiles! A flip of a switch and suddenly my profiles were in line with the benchmarks, indicating I had a lot of work to do. This taught me a key to understanding performance, namely that memory pages used are generally much more important than CPU cycles used. Intellectually, this makes sense: while CPU performance has been rapidly increasing every year, the amount of time it takes to access memory chips hasn’t been keeping up. And even worse, the amount of time it takes to access the disk lags even further behind. So if you have really tight boot code that nonetheless causes 1 megabyte of code to be faulted in from the disk, you’re going to be almost entirely gated by the speed of the disk controller, not the CPU. And if you end up using so much memory that the operating system is forced to start paging memory out (and then later forced to start paging it back in), you’re in real trouble.
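The back-of-envelope arithmetic here is sobering. Using assumed, era-typical numbers – 4K pages and roughly 10ms for a random disk read – that megabyte of faulted-in boot code can cost a couple of seconds of pure disk time all by itself, no matter how tight the code is:

    Imports System

    Module PageFaultMath
        Sub Main()
            Const CodeBytes As Long = 1024 * 1024   ' 1 MB of code faulted in on boot
            Const PageBytes As Long = 4 * 1024      ' typical x86 page size
            Const MsPerFault As Long = 10           ' assumed random disk read cost

            ' Worst case, every page is a separate trip to the disk:
            Dim faults As Long = CodeBytes \ PageBytes   ' 256 hard page faults
            Console.WriteLine("{0} faults x {1}ms = {2}ms of pure disk time", _
                              faults, MsPerFault, faults * MsPerFault)
        End Sub
    End Module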

Rule #10: Don’t do anything unless you absolutely have to

This final rule addresses the most common design error that developers make: doing work that they don’t absolutely have to. Often, developers will initialize structures or allocate resources up front because it simplifies the overall design of the code. And, to a certain degree, this is a good idea if it would be painful to do the initialization (or other work) further down the line. But oftentimes this practice leads to doing a huge amount of initialization so that the code is ready to handle all kinds of situations that may or may not occur. If you’re not 100% absolutely sure that a piece of code is going to need to be executed, then don’t execute it! Conversely, when delaying initialization code, be aware of where that work is going to go. If you move an expensive initialization routine out of one critical feature and into another one, you may not have bought yourself much. It’s a bit of a shell game, so be aware of what you’re doing.
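In code, delaying the work is usually as simple as moving initialization behind a property. A minimal sketch, with a made-up SpellChecker standing in for anything that’s expensive to set up:

    Class SpellChecker
        Public Sub New()
            ' Stand-in for expensive setup work (loading a dictionary, say).
        End Sub

        Public Sub Check(ByVal text As String)
        End Sub
    End Class

    Class Editor
        Private _checker As SpellChecker

        ' The expensive construction happens on first use rather than at
        ' startup, so users who never check spelling never pay for it.
        Private ReadOnly Property Checker() As SpellChecker
            Get
                If _checker Is Nothing Then
                    _checker = New SpellChecker()
                End If
                Return _checker
            End Get
        End Property

        Public Sub CheckSpelling(ByVal text As String)
            Checker.Check(text)
        End Sub
    End Class

(And per the shell game caveat: if the first use always lands somewhere equally critical, nothing has been won.)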

Also, keep in mind that memory is as important as code speed, so don’t accumulate state unless you absolutely have to. One of the big reasons why memory is such a problem is the mindset among programmers that CPU cycles should be preserved at all costs. If a result is calculated at one point in a program and might be needed later on elsewhere, programmers automatically stash that result away “in case I need it later.” And in some expensive cases, this is a good idea. However, oftentimes the result is that ten different developers think, “All I need is x bytes to store this value. That’s much cheaper than the cycles it took me to calculate it.” And then soon your application has slowed to a crawl as the memory swaps in and out like crazy. Now not only are you wasting tons of cycles going out to the disk to fetch a page that would have been much cheaper to recalculate, you’re also spending more of your time managing all the state you’ve accumulated. It sounds counterintuitive, but it’s true: recalculate everything on the fly; save only the really expensive, important stuff.
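A sketch of the same judgment call from the other side, with a made-up Document class: a word count is cheap to recompute, so caching it trades a few cycles for state that has to be kept correct and, worse, kept in memory:

    Class Document
        Private _text As String = ""

        ' Tempting: a cached _wordCount field, updated on every edit, "in
        ' case I need it later." That's more state to manage and more
        ' memory to page.
        ' Usually better: recompute on demand. Splitting even a sizable
        ' string costs microseconds; a hard page fault costs milliseconds.
        Public ReadOnly Property WordCount() As Integer
            Get
                Return _text.Split(" "c).Length
            End Get
        End Property
    End Class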

© 1997-2004 Paul Vick

Mysticism and the PDC

I’ve followed with interest some of the discussions touched off by Ole’s entry on “Oh, not Oooh” and his follow-up piece. On the whole, I would have to agree with his overall thesis regarding the run-up to the PDC: even though I work for Microsoft and think the PDC is going to be awesome, I’ve found the quasi-mystical aura that many people (who I won’t mention by name) are trying to impart to the conference somewhat confounding at times.

My biggest issue with the mysticism is that it engenders this whole “complexity vs. simplicity” debate that’s swirled around Ole’s entries, and which I think is a red herring. The question is not how complex or simple a technology is; the question is how organic it is. Case in point: in many of these discussions, the transition from VB6 to VB.NET is held up as a canonical example of moving from less complexity to more. But the truth is that, from a complexity standpoint, VB6 was probably a more complex product than VB.NET is. What’s different, though, is how the two products surfaced their complexity to the user. VB6 did an excellent job of tucking away the things that most people didn’t need in their day-to-day work – maybe a little too well at times. And it did an excellent job of sticking features right where you expected them to be. (A big feature for Whidbey is going to be returning Edit and Continue to the product, but the reality is that before VS 2002 most people probably weren’t even aware that Edit and Continue was a feature at all. They just did what they wanted to do and it worked. No thought required.)

In many ways, though, VB.NET is much simpler in terms of design and implementation than VB6 was. The problem (such as it is) is that complexity in VB.NET has been surfaced in ways that still do not feel entirely natural. Indeed, a lot of the work that our team has before us in Whidbey and beyond is not removing complexity but trying to create a more natural way of working with the product. This is what I mean when I say that technology should be “organic.” Well-designed products don’t surface every bell and whistle to the user; they follow a principle of Everything in its Place. Using technology should be all about a natural unfolding of the capabilities of the technology as you need them, not shoving all this cool stuff in your face at once. And despite all the hoo-hah about complexity, I think that the .NET platform has actually gotten a pretty good start on this, even if there are some rough edges that need to be worked on.

Ultimately, I think that this is really what the PDC is going to be about: letting people know how we’re going to make their lives easier and more natural, not how we’re going to “change Windows as you know it.” If we do our job right, developers moving to Avalon and Longhorn and Yukon and Whidbey and so on should feel like they’re meeting an old friend who they haven’t seen for a long time. Sure, there’s going to be a lot of catching up to do, but you should also immediately feel that familiar rapport, that feeling like you’ve known them all your life.

Whether we achieve that remains to be seen, and I think that’s worth showing up to the PDC for…

“With great power comes great responsibility…”

Eric Sink just put up an entry that could easily be subtitled “why I think Edit and Continue is a bad idea.” Although he’s just good-naturedly razzing VS in general, his description of a sloppy working style fits totally with what EnC is trying to enable:

Don’t take time to think about all the details. Instead, go for the instant gratification. Just hit F5 and see if the code works. When it doesn’t, try another quick fix and hit F5 again. Eventually the code will work, without ever having to give it any real thought.

Now, I’ll say I have no idea what Eric thinks about EnC, but there are certainly others who use this line of argument to attack the idea of having Edit and Continue in a developer product: namely, that it’s a bad idea because it enables people to code in a style that is ultimately non-productive, if not downright dangerous. And all that is true. Really. I kind of glossed over it when talking about EnC before, but it is a real dilemma: giving the consumer more power is what they want, but it also greatly increases the chance that they’ll misuse it. And then everyone might pay.

An analogy that suggests itself is the car. Today’s automobiles are insanely more powerful than Ford’s primitive Model T – the top speed of a Model T was something on the order of 45 mph. And as the power of the automobile has increased, so has its dangerousness. A Ford Model T traveling at 45 mph can cause a good deal of damage, but it’s really nothing compared to what, say, a Ford Expedition could do when traveling at its top speed (100 mph? 120 mph?). Even worse, many cars these days are built specifically to encourage a driver’s sense of power and invulnerability, allowing them to drive more sloppily or dangerously than they might if they were driving something much smaller. Modern cars also boast a whole lot of features, like four-wheel drive, that can lull drivers into a false sense of security, encouraging them to get themselves into jams that they would never get into otherwise.

And yet, I don’t think the genie can be put back into the bottle. Although some might wish it, I don’t believe we can go back to the days of driving Model Ts that are slower and less powerful than what we have today. Technological advances have a way of raising the stakes of ignorance and stupidity on everyone’s part – even good drivers get into accidents – but that’s just the price of progress, to my mind. The problem is that as technology gives people more power, there is a commensurate need to teach people how to use that power responsibly. No one is allowed to legally get behind the wheel of a car until they’ve proven that they can reasonably handle it, and that privilege can be taken away at any time if they abuse it.

However, we’re not going to be licensing programmers anytime soon, so what else is there to do? Limit the power and usability of our products to prevent the hoi polloi from (mis)using them? Or do the benefits to society created by putting so much computing power into the hands of the public outweigh the costs that are associated with that? I don’t think there’s any easy answer to that (especially after SoBig and MSBlaster and all their predecessors), but I generally fall on the side of progress. I do think we need better computing education out there, though, not just teaching how to slap a program together but also how to think about programming.

We sometimes joke that we should build a feature into the product that turns off all the “advanced” features. Then, when you try to use one of them, we would pop up a dialog that says “You’re trying to use an advanced feature. Before we can let you use this, you must answer the following questions correctly:” and then tests you to see if you’re advanced enough to really use the feature. It’s tempting but, of course, it would never work. Just as no one thinks they are a below-average driver, most people wouldn’t appreciate being told that they aren’t an “advanced” enough programmer to use a feature…