
Default Instances

Fresh off of writer’s block, I thought I’d dive straight back into the sea of controversy and talk about a feature called “default instances.” Default instances are a “new” feature in VB 2005 that is really the return of a very old feature. The return of default instances has stirred some very passionate debate, so I’m going to address the topic in three separate entries. In this entry, I’m going to describe what default instances are at a technical level. In my next entry, I’m going to talk about what I see as the positive case for default instances. Then, in the final entry, I will talk about some limitations of default instances and address the controversy about their reintroduction more directly. You can choose whether or not you want to wait for the last entry before you start throwing brickbats…

So, what are default instances? Well, they’re exactly what their name suggests: a way to provide a default instance of a class that you can use without having to New the class. If you’ve used versions of VB prior to VB 2002 (i.e. pre-.NET), you’ll probably have come across the most common default instance, the form default instance. For example, in VB 6.0, if you created a form named Test, you could say in code:

Test.Show()

And voila! an instance of the form would just magically appear. What was going on behind the scenes was a bit of magic. When you referred to “Test,” the VB runtime would go check and see if it already had a default instance of Test lying around. If it did, then it would hand back that default instance when you called Show(). If there wasn’t a default instance lying around, the runtime would create a new instance of Test and use that as the default instance going forward.

Default instances disappeared in VB 2002 and VB 2003, but are making a comeback in VB 2005. The high-level “why,” I’ll leave to my second entry, so let’s talk about the “how” for the moment. Hopefully, all of you are familiar with the introduction of the My namespace in VB 2005. (If not, go read this MSDN article and then come back.) One part of the My namespace is the Forms object. If you bring up a VB 2005 project with a Form named Test in it, you’ll notice that the My.Forms object has a property called Test. This property returns a default instance of Test. So you can say, sort of like you did in VB 6.0, “My.Forms.Test.Show()” and voila! up pops a new instance of Test. What happens behind the scenes is that the Test property of My.Forms checks to see if it’s already returned an instance of Test and, if it has, it just returns that instance. If it hasn’t returned an instance yet, it just creates a new instance. Pretty simple.
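
Conceptually, the Test property on My.Forms behaves something like the following sketch. This is simplified, of course; the real generated code also has to deal with threading, disposed instances and name resolution, but the lazy-creation logic is the idea:

Private m_Test As Test

Public ReadOnly Property Test() As Test
    Get
        ' If we've already handed out a default instance, return it;
        ' otherwise, create one on the fly and remember it.
        If m_Test Is Nothing Then
            m_Test = New Test()
        End If
        Return m_Test
    End Get
End Property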

We also extended default instances to work in exactly the same way that VB 6.0 default instances did. Besides saying My.Forms.Test.Show(), you can also just say Test.Show() and it will also access the default instance. Essentially, Test.Show() is a shortcut to saying My.Forms.Test.Show().

There are, however, a couple of differences between the way default instances are implemented in VB6 and the way they are implemented in VB 2005:

  • In VB6, if you tried to use the Is operator to test the default instance to see if it was Nothing, the expression would always evaluate to False. This is because referring to the default instance caused it to be created if it didn’t exist! In VB 2005, we recognize when you are testing the default instance using the Is (or IsNot) operator and won’t auto-create the instance in that case. So you can see if the default instance exists or not.
  • A common source of default instance errors was using the default instance instead of Me within the default instance’s type. For example, if Form1’s Load event handler contains the statement “Form1.BackColor = Color.Red”, this will only change the background color of the default instance’s form. If you create a new instance of Form1 that is not the default instance, its background color will not be red! To assist users in this case, we give an error when you refer to the default instance inside of the type, telling you to use Me instead, as shown in the sketch below.
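
To make that second pitfall concrete, here’s a sketch of the VB6-style mistake (the form and handler names are just illustrative):

Public Class Form1
    Inherits System.Windows.Forms.Form

    Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) _
            Handles MyBase.Load
        ' In VB6 this compiled and silently changed the default
        ' instance's background color; in VB 2005 it's an error
        ' telling you to use Me instead.
        Form1.BackColor = System.Drawing.Color.Red

        ' What you almost always want:
        Me.BackColor = System.Drawing.Color.Red
    End Sub
End Class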

Well, that’s about it for the technical side of things. Now onto the good, the bad and the ugly…

Feedback request on new warning

We’ve been internally discussing a new warning that we’ve added to VB 2005 and how we should treat it for upgrade, and I’d like to get some feedback from those people who have been using the beta or the community previews. A common error that we see is using a reference variable before it’s been assigned a value. For example:

Dim x As Collection
x.Add(10)

This code will throw an exception because I forgot to create a new collection using New. In VB 2005, this code will produce the following warning:

Variable 'x' is used before it has been assigned a value. A null reference exception could result at runtime.
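
The fix, of course, is to actually create the collection before using it:

Dim x As New Collection()
x.Add(10)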

We produce this warning by doing control-flow analysis on the function and determining that you’re using the variable before you’ve given it a value. This is all well and good, but the warning isn’t perfect. Just because a variable hasn’t been assigned a value doesn’t necessarily mean that there’s going to be a problem. For example:

Dim x As Collection

If Test() Then
    x = New Collection()
End If

If x IsNot Nothing Then
    x.Add(10)
End If

The test “x IsNot Nothing” will cause a warning that x might be used before it has been assigned a value. This is sort-of true if you look at the code simplistically, but it’s really not a problem, of course, because we’re just testing against Nothing. An ideal solution, and one that we’re considering beyond VB 2005, is to add a more sophisticated level of analysis to the compiler that would recognize situations like the one above as safe. But that’s beyond the scope of this release. For now, the analysis is limited to identifying places where you use a reference variable that may not have been assigned a value.
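
In the meantime, one way to quiet the warning in a case like this is to explicitly initialize the variable to Nothing yourself; an explicit initializer counts as an assignment as far as the flow analysis is concerned:

Dim x As Collection = Nothing  ' the initializer satisfies the flow analysis

If Test() Then
    x = New Collection()
End If

If x IsNot Nothing Then
    x.Add(10)
End If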

Now, to the question: have people found this warning to be annoying on upgrade? New projects don’t seem to have major issues with this warning because you fix the warnings as you go along, by and large. But upgraded projects can be a different story: depending on the coding style used, a project might have a ton of these warnings show up when you first open it in VB 2005. For example, I recently upgraded the VBParser project and spent a little while fixing up 100 or so of these warnings, all of which were what you might call “false positives” – I was using variables before they had been assigned, but always in a safe and courteous manner.

What have people’s experiences been with this warning on upgrade? Was it annoying? Was it helpful? Would it be desirable to not have the warning turned on by default in upgrade? Just to be clear: we’re not planning on changing anything as it stands right now. But now that the beta’s been out in the wild for a while, we thought it might be a good time to validate our assumptions against some cold, hard customer feedback…

Writer’s block

When people haven’t written in a while they usually start off with some kind of apology like “sorry I haven’t written in so long, but I’ve been doing xyz.” My feeling is: blogs (with some exceptions) are strictly amateur hour, so no apologies are necessary for life intervening. That’s just the way it is, and people should come to expect that from blogs – you get what you pay for.

That said, I’d really like to say that I haven’t been writing in a while because I’ve been busy, but that’s not really it. The holidays took up a lot of time, and I spent a good chunk of the last month or so trying to get the VB Language Specification updated to reflect all the good stuff that we’re doing in VB 2005. Even though, in theory, updating the spec is just a matter of merging individual specifications into the master specification, I tend to do a lot of rewriting to fit things into the general flow, and because I’m a latent control freak. (Rewriting the spec also helped turn up an interesting number of bugs in our beta compiler. It’s amazing what simply trying out what the spec says is supposed to work will produce.) Hopefully the spec will be generally available in the near future (I’m gunning for when Beta2 RTMs, whenever that is), but it’s not like that kept me from writing.

No, I think it’s just plain old writer’s block. There’s always stuff you can write about, but if you’re not interested or inspired, then the writing (and the writing process) is stiff and boring. Until the magic inspiration comes back, I figured the best thing I can do is work on other things and not foist off a bunch of uninteresting junk on everyone. Whatever the reason, it feels kind of like the mojo is coming back, so I’m hopeful that the words will start flowing again. If not, well, I guess this was just a blip…

Die spammer scum!

You may have noticed the comments coming on and off today… I’m getting inundated by an extremely aggressive comment spammer (I believe other .Text sites are also getting hit hard), and I’m trying to block them out. Hopefully it’ll work, but please bear with me.

Dogfooding and Microsoft

Every corporate culture has its own set of acronyms, TLAs (three letter abbreviations) and jargon, and Microsoft is no different. I try not to let it slip too much into my blog entries, but a comment from M.J. Easton reminded me that a while back I did use one without explanation. In an entry talking about the DirectCast operator, I said:

In addition to the fact that we like VB, it’s also a great way to dogfood the product.

I don’t believe the verb “to dogfood” is unique to Microsoft at all, but it’s certainly an integral part of our culture. It’s short for “to eat one’s own dogfood,” which means “to use the same product that you are trying to sell to your customers.” Dogfooding serves several purposes, but the main reasons are:

1) It proves to customers that we believe in the product.

2) Because dogfooding usually means using beta (or pre-beta) software, it helps flush more bugs out of the product.

3) It makes us suffer the same bugs and design flaws that we inflict on users, thus giving us incentive to fix them.

4) It’s a valuable reality check that the product is actually as good as we say it is.

5) Because Microsoft is such a large organization, it can flush out problems that could not otherwise be found prior to full-scale rollout at launch. (This holds especially true for corporate server products such as Exchange, SQL, IIS, etc.)

6) We learn how our products actually work, which, more often than not, is not exactly how we think they work.

All in all, dogfooding is an extremely valuable, if sometimes painful, thing that we do at Microsoft.

(You can find a deeper discussion of the etymology of the word in the Wikipedia entry on it.)

Beard = success?

What I want to know about this theory is: what happens if you’re someone like me, who cycles between growing a beard and going beardless? Or does it just matter whether your official picture has a beard? I’ve got one right now, so does that mean I’m doing better work than I was when I didn’t have one a few months ago?

Was .NET a black hole project?

Eric Lippert astutely pointed out a hole that I consciously left in my discussion of black hole projects – namely, that they sometimes succeed. I left that part out because I figured it probably merited a little discussion of its own and I didn’t want to complicate the whole “black hole” narrative.

Eric’s completely correct that you could argue that .NET was a black hole project, albeit one that succeeded. It managed to hit most, if not all, of the bullet points I listed, including the last one (I started work on what would become VB.NET in mid-1998, after all, and some people had been working on it for several months by that point). I distinctly remember the first meeting I had with Brian Harry where he outlined a proposal for what would become the metadata engine in .NET. I thought to myself “this guy is completely crazy” because of his grandiose goals. A few months later, I remember sitting in a co-worker’s office talking about the emerging .NET strategy. “Do you think this thing will actually fly?” she asked me. “I have no idea,” I replied. “I give it about a 50/50 chance. I guess we’ll see.”

However, I disagree with Eric a bit that it’s extremely hard to distinguish between a black hole project that’s going to implode and a black hole project that’s going to succeed. I think there are some traits that distinguish the projects that have succeeded. I’m less certain about them because 4 out of 5 black hole projects have failed, so I have less data to extrapolate from, but here’s my take on it…

The number one, no question about it, don’t leave home without it, trait of successful black hole projects is:

  • Despite having grandiose goals, they are fundamentally a response to a serious and concrete competitive threat.

If .NET had started just as some abstract project to change the way that people program, then I’d be willing to wager that it would have gone down in flames at some point. What kept the teams focused and moving forward was the fact that, let’s be honest here, Java was breathing down Microsoft’s neck. Without that serious competitive pressure, odds are pretty decent that the .NET project would have eventually collapsed under its own weight because we wouldn’t have had the kinds of checks and balances that competition imposed on us. But the reality of needing to compete with Java helped keep our eyes on the prize, as it were, and counterbalanced the tendency of black hole projects to spin out of control.

Some other traits of successful projects are:

  • They tend to be led by people who, while they may be fantastically idealistic, are ultimately realists when push comes to shove.
  • It also seems to be vitally important that the project leadership be very, very technical. This allows them to stay in touch with what is actually happening in the project rather than having to rely on status reports (which are usually skewed in the optimistic direction) and allows them to ask the hard questions.
  • Hidden under the absurdly grandiose goals must be some set of real life problems that real life customers need solved.

I think what I’m starting to get at is that the line that divides success from failure is ultimately how in touch with reality the project team is. The Catch-22 is that to make large leaps, you often have to unmoor yourself a bit from reality so that you can see beyond what is into what could be. But once you’ve unmoored yourself from reality, it’s easy to go all the way and completely lose touch with reality. That’s the point at which things start to spiral out of control. The projects that succeed, in turn, manage to keep one foot in reality at all times. This keeps them from going absolutely insane. And, I think, it’s not as hard as it seems to get a sense of whether a particular project has one foot in reality or not. Usually, all you have to do is talk to a cross-section of the people working on it. They’ll know pretty well what the state of the project is.

I should close by saying my intention wasn’t to knock ambitious projects. I think they’re totally necessary to help our industry make those big leaps forward. It’s more of a question of how people go about doing ambitious projects that I sometimes have a problem with. Projects that think big but try as best they can to stay practical are really the best kind of projects to work on. I can’t say that every moment of working on .NET has been fun, but overall it’s been a wonderful experience and one that I’m very glad I got to be a part of…

Black hole projects

After hearing about a product named Netdoc from Scoble, Steve Maine takes the opportunity to reminisce about a similarly code-named project at Microsoft (that has nothing to do with the new product). He says:

The name “Netdocs” reminds me of my experience as an intern at MS in 2000. There was this mythical project codenamed “Netdocs”, and it was a black hole into which entire teams disappeared. I had several intern friends who got transferred to the Netdocs team and were never heard from again. Everyone knew that Netdocs was huge and that there were a ton of people working on it, but nobody had any idea what the project actually did.

I left Office just about the time that Netdocs really started going, but I do know a few people who invested quite a few years of their lives into it. I can’t say that I know much more than Steve about it, but it did get me thinking about other “black hole projects” at Microsoft. There was one I was very close to earlier in my career that I managed not to get myself sucked into and several others that I just watched from afar. None I can really talk about since they never saw the light of day, but it did get me thinking about the peculiar traits of a black hole project. They seem to be:

  • They must have absurdly grandiose goals. Something like “fundamentally reimagine the way that people work with computers.” Nobody, including the people who originate the goals, has a clear idea what the goals actually mean.
  • They must involve throwing out some large existing codebase and rewriting everything from scratch, “the right way, this time.”
  • They must have completely unrealistic deadlines. Usually this is because they believe that they can rewrite the original codebase in much, much less time than it took to write that codebase in the first place.
  • They must have completely unrealistic beliefs about compatibility. Usually this takes the form of believing you can rewrite a huge codebase and preserve all of the little quirks and such without a massive amount of extra effort.
  • They are always “six months” from a major deadline that never seems to arrive. Or, if it does arrive, another milestone is added on to the end of the project to compensate.
  • They must consume huge amounts of resources, sucking the lifeblood out of one or more established products that make significant amounts of money or have significant marketshare.
  • They must take over any group that does anything that relates to their absurdly broad goals, especially if that group is small, focused, has modest goals and actually has a hope of shipping in a reasonable timeframe.
  • They must be prominently featured as demos at several company meetings, to the point where people groan “Oh, god, not another demo of this thing. When is it ever going to ship?”
  • They usually are prominently talked up by BillG publicly years before shipping/dying a quiet death.
  • They usually involve “componentizing” some monolithic application or system. This means that not only are you rewriting a huge amount of code, you’re also splitting it up across multiple teams that all have to work together seamlessly.
  • As a result of the previous point, they also usually involve absolutely massive integration problems as different teams try madly to get their components working with each other.
  • They usually involve rewriting the application or system on top of brand-new technology that has not been proven at a large scale yet. As such, they get to flush out all the scalability problems with the new technology.
  • They are usually led by one or more Captain Ahabs, madly pursuing the white whale with absolute conviction, while the deckhands stand around saying “Gee, that whale looks awfully big. I’m not sure we can really take him down.”
  • Finally, 90% of the time, they must fail and die a flaming death, possibly taking down or damaging other products with it. If they do ship, they must have taken at least 4-5 years to ship and be at least 2 years overdue.

I’m kind of frightened at how easy it was to come up with this list – it all just kind of poured out. Looking back over 12.5 years at Microsoft, I’m also kind of frightened at how many projects this describes. Including some projects that are ongoing at the moment…
