Author Archives: paulvick

Feature scheduling and the ‘Using’ statement

After introducing Ask a Language Designer with much fanfare, I’ve been very negligent in actually answering anything submitted to it. So let me start to make amends. The first questions I want to address were narrowly asked but touch on a broader question. Both Muhammad Zahalqa and Kevin Westhead ask why VB doesn’t have a ‘using’ statement like C# has and when/if we will have one in the future.

The general question of "Why doesn’t VB have feature x?" is, as you can imagine, one that keeps coming up over time. It’s gotten even more common since C# came on to the scene, since VB and C# are much closer to each other than, say, VB and C++ are. So I thought I’d spend a little time talking in general about why some features that exist in other languages don’t make it into a particular release of VB. And, in the process, I’ll address the question of the ‘using’ statement.

When a feature doesn’t make it into VB, it’s typically because of one of the following reasons:

  1. It’s a bad feature. I’m throwing this in here because it does happen every once in a while that we get a request for a feature that some other language has that we think is just a bad idea. There isn’t anything that I can think of off the top of my head that’s fallen into this category, but it does happen.
  2. It’s not a good match for VB. There are some features that we think just don’t make sense for our customers and the kinds of things that they do. It’s not that we think that nobody would ever use the feature, just that it’s very unlikely that most of our users would need or use the feature. The prime example of this is the unsafe code feature in C#. Although there are situations in which people use unsafe code, they tend to be pretty limited and very advanced. In general, we believe that even very advanced programmers can happily exist in the .NET environment without ever needing unsafe code and that it tends to mainly be programmers coming to C# from C and C++ that find the feature useful. (Our experience since shipping VS 2002 and VS 2003 has so far validated these beliefs.)
  3. It’s not a good cost/benefit tradeoff. Some features are good ideas, but the benefit gained by having the feature doesn’t seem to justify the time it would take to design, specify, implement, test, document and ship the feature when compared against other features we want to do. To pick a prosaic example, the C# command-line compiler csc.exe has a feature where it will read in a "default" response file that contains references to all of the .NET Framework DLLs. This means you don’t have to explicitly "/r" DLLs like System.Windows.Forms.DLL when compiling an application on the command line. It’s a nice idea and handy on the command line, but in the past the overhead of implementing the feature in vbc.exe was judged to be higher than its benefits. So it didn’t make it into the product for VS 2002 or VS 2003. The danger, of course, is that little features like this can end up being constantly cut from release after release because they’re not big enough to be "must have" features, but not small enough to escape the ax. (You’ll just have to give it a try in Whidbey to see whether it made it in this time or not…)
  4. We ran out of time. This has got to be the most painful situation for the product team, because it’s something we desperately wanted to get done but for one reason or another, we just didn’t have time. So instead we have to sit back and suffer the slings and arrows of outrageous fortune for not having the feature, contenting ourselves only with the knowledge that next time will be different. A prime example of this is operator overloading. We really wanted it for VS 2002/2003 but it was too big to fit into the schedule. We’ll definitely be doing it in Whidbey.

So what about the ‘using’ statement? One of the major struggles of the VS 2002 product cycle was trying to figure out how to deal with the transition from reference counting (i.e. the COM model) to garbage collection (i.e. the .NET model). We spent a lot of time trying to see if we could make reference counting coexist with garbage collection in a way that: a) was reasonable and b) was comprehensible by someone less than a rocket scientist. While we were doing this, we deferred making any decisions on garbage-collection-related features like the ‘using’ statement. After banging our heads against the proverbial wall for what seemed like forever, we came to the very difficult conclusion that we couldn’t reconcile the two and that full-on garbage collection was the way to go. The problem was, we didn’t reach this dead end until late in the product cycle, which meant that there wasn’t enough time left to properly deal with a feature like ‘using’. Which, let me just say, really, really, really sucked. So the painful decision was made to ship without the feature and, yes, we said to ourselves next time will be different.
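For context, C#’s ‘using’ statement is essentially shorthand for a Try/Finally block that calls Dispose on an IDisposable object. Here’s a rough sketch of the pattern you’d write out by hand in VB .NET 2002/2003 in the absence of the feature (the file name is just an example):

```vb
' The hand-written equivalent of C#'s:  using (reader) { ... }
Dim reader As New System.IO.StreamReader("data.txt")
Try
    Console.WriteLine(reader.ReadToEnd())
Finally
    ' Cleanup runs whether or not an exception was thrown
    reader.Close()
End Try
```

The ‘using’ statement saves you the Try/Finally boilerplate and makes it impossible to forget the cleanup call.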

Is this time different? This is another one of those things that you’ll have to try out in Whidbey to see… (wink, wink)

Visual Basic Decodes Human Genome

I was reading the entry by Philip Greenspun that’s been floating around the blogosphere today comparing Java to SUVs. Interesting, but can you figure out what in the following quote caught my eye?

A project done in Java will cost 5 times as much, take twice as long, and be harder to maintain than a project done in a scripting language such as PHP or Perl. People who are serious about getting the job done on time and under budget will use tools such as Visual Basic (controlled all the machines that decoded the human genome). But the programmers and managers using Java will feel good about themselves because they are using a tool that, in theory, has a lot of power for handling problems of tremendous complexity.

VB controlled all the machines that decoded the human genome? I’d never heard that, but there’s at least some evidence on the web that it’s true. (Really, it’s one of those things that’s just "too good to check.") Does that mean we can take downstream credit for all the things we learn from the human genome? "Visual Basic Finds Cure For Cancer?" I guess we’ll just have to wait and see…

Fantasy vs reality

After letting it languish on my hard drive for several months, I finally uninstalled Arx Fatalis today. I’d made it (I think) about half-way through the game before losing interest and never getting back to it, which was disappointing. One of my all-time favorite computer games is Ultima Underworld, which the Arx designers said in interviews they were trying to emulate. Unfortunately, while they borrowed much of the interface logic and much of the basic plot of Underworld, somewhere along the line they lost whatever it was that made Underworld such an awesome game.

I think one of the biggest problems with these kinds of games today is that they end up focusing on trying to give the player a detailed world rather than an enjoyable story. Comparing Arx and Underworld, the former has a much more detailed world, both in terms of backstory and in terms of physical reality. Whereas the Stygian Abyss in Underworld was nothing more than a crude approximation of a real space – I think the dwarf “city” was just one big room with a couple of dwarves in it – the city of Arx has architecture, a castle, denizens who have jobs, etc. But the designers get so caught up in creating someplace “real” that they start to forget about what makes a game actually fun. (There are also some really practical problems with creating spaces that approximate the real world, namely that you end up spending a lot of really uninteresting time backtracking through areas that you’ve already visited to get to somewhere else. If I want to do that, I can go take a hike in the real world.)

This same problem also really ruined another game I should have loved, Deus Ex. After all, it was created by the same guy who oversaw my hands-down favorite game ever, System Shock. As a side plug, if you’ve never played System Shock, I highly recommend tracking down a CD of it in a remainder bin somewhere and doing whatever you have to do to your computer to get it to run. It was the most amazing game I’ve ever played, not least because I really felt like I was there on Citadel Station. Even now, I can recall a tactile sense of the layout of the station and some of the more important locations. And all this with a graphics engine that would be laughable today.

When I first heard about Deus Ex, I was very excited, although I was worried by the interviews in which the team went on and on about how they adapted real-world locations in loving detail for the game. Sure enough, the game does render futuristic versions of real locations in great detail. The game starts in New York City on Liberty Island, and when I was in New York recently I found myself on a boat sailing past Liberty Island. I was idly staring at a dock on the far side of the island when I suddenly had a shock of recognition – I’d been there! Well, not really, I’d just been there in the game, but the game had been accurate enough that I actually could recognize the layout of the island. But even though the game was so “real” in this sense, from a game perspective, the world was… well… empty. As accurate as they were, all the real world locations in Deus Ex felt like disconnected set pieces. This was compounded by the fact that I ended up not giving a damn about the character I was playing. I mean, the guy’s brother is being offed by an evil group bent on world domination, and all I’m thinking is “Damn, couldn’t they get a better voice actor for this gig?” Contrast that with, say, Max Payne, which had exactly the same kind of New York set pieces and the same over-the-top mythical pretensions (while Deus Ex steals from Christianity, Max Payne steals heavily from Norse mythology). Even though the places in Max Payne were less realistic than those in Deus Ex, it kept my attention throughout the entire game. And whereas I uninstalled Deus Ex halfway through, I had a great time with Max Payne through the very last gun battle.

Ultimately, I think how “real” a game is has nothing to do with whether the game is any good. The good games out there make me identify with the protagonist or the story, just like a good movie, book or TV show. They feel more real in an emotional sense… more human. Clichéd as he was, Max Payne was much closer to a real person than the robotic J.C. Denton. And the “how the hell do I get out of here?” plot of Underworld was much more accessible than the “I’m a messenger from the gods from another plane sent to stop another god from blah, blah, blah, blah” plot of Arx.

Now if someone would just rebuild System Shock and Underworld on top of the latest Unreal engine, I’d be as happy as a clam…

Fair warning…

As you may remember, two months ago I moved from the old BlogX codebase to some custom software that I wrote based on BlogX. When I did that, the RSS feed for my site moved to http://www.panopticoncentral.net/rss.aspx, and I put a permanent redirect at the old URL. Well, the time has come to drop the old URL, so if for some reason you still haven’t moved over, please do so ASAP. It’s going to stop working in the next few days. You’ve been warned!

Useless language constructs

Frans points out some C# and VB language constructs that he thinks are superfluous. In the interests of a deeper understanding of the language, here’s the thinking behind the three VB ones:

  • ReadOnly and WriteOnly property specifiers. When we were coming up with the new property syntax in VS 2002, we discussed this issue in great detail. Some people felt that the extra bit of explicitness was nice, but what really got it into the language was interfaces and abstract properties. In both of those cases, you don’t actually specify an implementation for the Get and the Set, but you need to specify whether you expect them to be there or not. One way of attacking the problem is the way C# does it, where you just specify an empty Get and/or Set. But with VB’s line orientation, this ended up with a lot of wasted screen real estate (and it just looked weird in VB to have a Get with no End Get). ReadOnly and WriteOnly were the main alternative, and we liked them much better. We did talk about whether they should be optional for properties that do have a Get and/or a Set, but we felt that it was better to have consistency about when we did and did not require them.
  • WithEvents with form controls. The truth is that WithEvents fields hide a whole lot of magic. (If you’re interested in exactly what, you should check out what the VB language specification has to say about WithEvents.) The most important thing that happens, though, is that a WithEvents field is always wrapped in a property. This is not a minor thing for two reasons. First, it changes the behavior of the field when it’s passed by reference – real fields can be passed true byref, while properties have to be passed copy-in/copy-out. (I just recently wrote the part of my language book on this, maybe I’ll post some of that in here at some point.) But more importantly, accessing a property has slightly more overhead than accessing a field, so there’s a slight penalty to declaring a field WithEvents. Given those two things, we decided we wanted to retain the VB6 convention of specifying WithEvents so that it wasn’t something that was done lightly or accidentally. The problem now, though, is that designers such as the Winforms and Webforms designers just put "WithEvents" on things willy-nilly so that they can add handlers later, resulting in the opposite effect of what was intended. This unintended consequence is something we’re trying to work through in Whidbey.
  • Overloads when Overrides. This is really just a bug in the language design. (Yes, we do have those.) Frans’s logic is completely correct on this point, and it’s something we were already looking at fixing. But for now, you’re stuck with it…
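To illustrate the first two points, here’s a small sketch (the type and member names are invented for the example): ReadOnly lets an interface declare a property contract without an empty Get … End Get body, and a WithEvents field is what makes the Handles clause work:

```vb
Interface IShape
    ' ReadOnly states the contract without an empty Get ... End Get body
    ReadOnly Property Area() As Double
End Interface

Class Form1
    Inherits System.Windows.Forms.Form

    ' WithEvents silently wraps the field in a property so Handles can
    ' hook up and unhook event handlers when the field is assigned
    Private WithEvents Button1 As System.Windows.Forms.Button

    Private Sub Button1_Click(ByVal sender As Object, _
                              ByVal e As System.EventArgs) Handles Button1.Click
        ' ...
    End Sub
End Class
```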

Feel free to throw other things you think are superfluous my way through comments or through Ask a Language Designer. (Yes, I am going to be answering some questions from there soon. I promise!)

How the subroutine got its parentheses

EricLi does a good job explaining why we started requiring parentheses around subroutine calls in VB.NET, among other things. Back in the day, we would regularly get a bug report every month or so complaining that “ByRef parameters aren’t working!” The problem would inevitably be that the developer was calling a subroutine with one parameter and using parentheses. These bug reports came from both inside and outside the company, and that wasn’t even counting all the sample code I saw where people would include incorrect parentheses and it just ended up not mattering (or the bug hadn’t been caught yet). Even after working with Visual Basic code for years, I spent one afternoon trying to figure out why my ByRef param wasn’t behaving as ByRef…

(Minor sidenote: there actually isn’t a special rule about parentheses making a parameter be passed ByVal; it’s just a side effect of the language. Parentheses form a grouping operator that simply evaluates to the value of whatever’s inside them. So the expression (x) has the same value as the expression x, but its classification changes from a variable, which can be passed by reference, to a value, which cannot.)
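A quick sketch of the effect (the names are invented for the example):

```vb
Sub Increment(ByRef n As Integer)
    n += 1
End Sub

Sub Demo()
    Dim x As Integer = 1
    Increment(x)    ' x is a variable: passed by reference, x becomes 2
    Increment((x))  ' (x) is a value: only the copy is incremented, x stays 2
End Sub
```

In VB6, where subroutine calls took no parentheses, `Increment (x)` silently did the second thing while looking like the first, which is exactly the bug people kept reporting.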

When we started making changes to the language for Visual Basic .NET, this was one of those minor issues that we decided to clean up. From the constant stream of bug reports and from our own experiences, it was clear that this was something that tripped people up, even experienced developers. We believed that most people wouldn’t have trouble adjusting to the change and it would make VB.NET code less buggy, so the tradeoff seemed to be a good one.

This is, however, the source of one of the things I find most annoying about the VB.NET editor. We wanted to help people make the transition away from parenthesis-free subroutine calls, so we added a small “autocorrection” in the editor. If you enter the line

foo bar, baz

in the editor, when you hit Return, we’ll assume you were trying an old-style subroutine call and will pretty list it to:

foo(bar, baz)

This is all well and good, but there are many times in the editor where I end up momentarily creating invalid lines that look like subroutine calls. It drives me nuts that the editor “helpfully” adds parentheses. Usually, it happens when I’m writing comments and I break off the end of a comment to start a new comment line. Then

' This is a very long comment I would like to break if at all possible, please

becomes

' This is a very long comment I would like to break if at
all(possible, please)

if I place the cursor after the word “at” and hit Return. We’re talking about whether this is actually helping people and whether we can be smarter about when we insert parentheses and such.

The results are in…

…and it looks like a dark horse candidate has taken the crown. Looking at my referrer log for the month, so far the counts are approximately:

Scoble – 388
Don – 457
Raymond – 984

So, yes, Raymond has managed to kick everyone’s behinds by a convincing margin in just a few days. To be fair, it might be that the humorous bug reports he linked to were more enticing to browsers, but who knows? This is hardly scientific…

The horror! The horror!

Eric Lippert has started a new blog, and I think I’m suffering from some low-grade PTSD… Before I started working on Visual Basic .NET, I spent a year and a half in purgator… I mean, working on OLE Automation. (I’d moved over to OLEAut from Access because I wanted to work on language technologies and I figured OLE Automation was where it was at. I think a few weeks after I moved over, I had this meeting with some guy named Brian Harry about this weird project he was starting to write some new metadata engine…) Eric’s extended fantasias on BSTRs and such are giving me some severe flashbacks to COM. It reminds me why I really do think .NET is a big step forward.