Just a reminder: there’ll be an MSDN chat with the VB language team tomorrow. Be there!
Feature scheduling and the ‘Using’ statement
After introducing it with much fanfare, I’ve been very negligent in actually answering anything submitted to Ask a Language Designer. So let me start to make amends. The first questions I want to address were narrowly asked but touch on a broader question. Both Muhammad Zahalqa and Kevin Westhead ask why VB doesn’t have a ‘using’ statement like C# has and when/if we will have one in the future.
The general question of "Why doesn’t VB have feature x?" is, as you can imagine, one that keeps coming up over time. It’s gotten even more common since C# came onto the scene, since VB and C# are much closer to each other than, say, VB and C++ are. So I thought I’d spend a little time talking in general about why some features that exist in other languages don’t make it into a particular release of VB. And, in the process, I’ll address the question of the ‘using’ statement.
When a feature doesn’t make it into VB, it’s typically because of one of the following reasons:
- It’s a bad feature. I’m throwing this in here because it does happen every once in a while that we get a request for a feature that some other language has that we think is just a bad idea. There isn’t anything that I can think of off the top of my head that’s fallen into this category, but it does happen.
- It’s not a good match for VB. There are some features that we think just don’t make sense for our customers and the kinds of things that they do. It’s not that we think that nobody would ever use the feature, just that it’s very unlikely that most of our users would need or use the feature. The prime example of this is the unsafe code feature in C#. Although there are situations in which people use unsafe code, they tend to be pretty limited and very advanced. In general, we believe that even very advanced programmers can happily exist in the .NET environment without ever needing unsafe code and that it tends to mainly be programmers coming to C# from C and C++ that find the feature useful. (Our experience since shipping VS 2002 and VS 2003 has so far validated these beliefs.)
- It’s not a good cost/benefit tradeoff. Some features are good ideas, but the benefit gained by having the feature doesn’t seem to justify the time it would take to design, specify, implement, test, document and ship the feature when compared against other features we want to do. To pick a prosaic example, the C# command-line compiler csc.exe has a feature where it will read in a "default" response file that contains references to all of the .NET Framework DLLs. This means you don’t have to explicitly "/r" DLLs like System.Windows.Forms.DLL when compiling an application on the command line. It’s a nice idea and handy on the command line, but in the past the overhead of implementing the feature in vbc.exe was judged to be higher than its benefits. So it didn’t make it into the product for VS 2002 or VS 2003. The danger, of course, is that little features like this can end up being constantly cut from release after release because they’re not big enough to be "must have" features, but not small enough to escape the ax. (You’ll just have to give it a try in Whidbey to see whether it made it in this time or not…)
- We ran out of time. This has got to be the most painful situation for the product team, because it’s something we desperately wanted to get done but for one reason or another, we just didn’t have time. So instead we have to sit back and suffer the slings and arrows of outrageous fortune for not having the feature, contenting ourselves only with the knowledge that next time will be different. A prime example of this is operator overloading. We really wanted it for VS 2002/2003 but it was too big to fit into the schedule. We’ll definitely be doing it in Whidbey.
So what about the ‘using’ statement? One of the major struggles of the VS 2002 product cycle was trying to figure out how to deal with the transition from reference counting (i.e. the COM model) to garbage collection (i.e. the .NET model). We spent a lot of time trying to see if we could make reference counting coexist with garbage collection in a way that: a) was reasonable and b) was comprehensible by someone less than a rocket scientist. While we were doing this, we deferred making any decisions on garbage collection related features like the ‘using’ statement. After banging our heads against the proverbial wall for what seemed like forever, we came to the very difficult conclusion that we couldn’t reconcile the two and that full-on garbage collection was the way to go. The problem was, we didn’t reach this dead end until late in the product cycle, which meant that there wasn’t enough time left to properly deal with a feature like ‘using’. Which, let me just say, really, really, really sucked. So the painful decision was made to ship without the feature and, yes, we said to ourselves next time will be different.
Is this time different? This is another one of those things that you’ll have to try out in Whidbey to see… (wink, wink)
Visual Basic Decodes Human Genome
I was reading the entry by Phillip Greenspun that’s been floating around the blogosphere today comparing Java to SUVs. Interesting, but can you figure out what in the following quote caught my eye?
A project done in Java will cost 5 times as much, take twice as long, and be harder to maintain than a project done in a scripting language such as PHP or Perl. People who are serious about getting the job done on time and under budget will use tools such as Visual Basic (controlled all the machines that decoded the human genome). But the programmers and managers using Java will feel good about themselves because they are using a tool that, in theory, has a lot of power for handling problems of tremendous complexity.
VB controlled all the machines that decoded the human genome? I’d never heard that, but there’s at least some evidence on the web that it’s true. (Really, it’s one of those things that’s just "too good to check.") Does that mean we can take downstream credit for all the things we learn from the human genome? "Visual Basic Finds Cure For Cancer?" I guess we’ll just have to wait and see…
Useless language constructs
Frans points out some C# and VB language constructs that he thinks are superfluous. In the interests of a deeper understanding of the language, here’s the thinking behind the three VB ones:
- ReadOnly and WriteOnly property specifiers. When we were coming up with the new property syntax in VS 2002, we discussed this issue in great detail. Some people felt that the extra bit of explicitness was nice, but what really got it into the language was interfaces and abstract properties. In both of those cases, you don’t actually specify an implementation for the Get and the Set, but you need to specify whether you expect them to be there or not. One way of attacking the problem is the way C# does it, where you just specify an empty Get and/or Set. But with VB’s line orientation, this ended up with a lot of wasted screen real estate (and it just looked weird in VB to have a Get with no End Get). ReadOnly and WriteOnly were the main alternative, and we liked them much better. We did talk about whether they should be optional for properties that do have a Get and/or a Set, but we felt that it was better to have consistency about when we did and did not require them.
- WithEvents with form controls. The truth is that WithEvents fields hide a whole lot of magic. (If you’re interested in exactly what, you should check out what the VB language specification has to say about WithEvents.) The most important thing that happens, though, is that a WithEvents field is always wrapped in a property. This is not a minor thing for two reasons. First, it changes the behavior of the field when it’s passed by reference – real fields can be passed true byref, while properties have to be passed copy-in/copy-out. (I just recently wrote the part of my language book on this, maybe I’ll post some of that in here at some point.) But more importantly, accessing a property has slightly more overhead than accessing a field, so there’s a slight penalty to declaring a field WithEvents. Given those two things, we decided we wanted to retain the VB6 convention of specifying WithEvents so that it wasn’t something that was done lightly or accidentally. The problem now, though, is that designers such as the Winforms and Webforms designers just put "WithEvents" on things willy-nilly so that they can add handlers later, resulting in the opposite effect of what was intended. This unintended consequence is something we’re trying to work through in Whidbey.
- Overloads when Overrides. This is really just a bug in the language design. (Yes, we do have those.) Frans’s logic is completely correct on this point, and it’s something we were already looking at fixing. But for now, you’re stuck with it…
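To make the first bullet concrete, here’s a minimal sketch of the interface case (IShape and Circle are hypothetical names, not from the original post): a ReadOnly property in an interface states that implementers must supply a Get and may not supply a Set, without wasting lines on an empty body the way an empty C#-style accessor would.

```vb
' Hypothetical example: ReadOnly declares intent in an interface
' without supplying any implementation.
Interface IShape
    ReadOnly Property Area() As Double
End Interface

Class Circle
    Implements IShape

    Private _radius As Double

    Public Sub New(ByVal radius As Double)
        _radius = radius
    End Sub

    ' The implementing class provides only a Get; a Set is not allowed.
    Public ReadOnly Property Area() As Double Implements IShape.Area
        Get
            Return Math.PI * _radius * _radius
        End Get
    End Property
End Class
```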
Feel free to throw other things you think are superfluous my way through comments or through Ask a Language Designer. (Yes, I am going to be answering some questions from there soon. I promise!)
MSDN chat, round 2
Just a quick note: the VB.NET language design chat that we had on MSDN last month was so successful, we’re going to be having another one at the end of this month (September 30th). More details can be found on the MSDN chat page. Hope to see you all there!
How the subroutine got its parentheses
EricLi does a good job explaining why we started requiring parentheses around subroutine calls in VB.NET, among other things. Back in the day, we would get a bug report about every month or so complaining that “ByRef parameters aren’t working!” The problem would inevitably be that the developer was calling a subroutine with one parameter and wrapping it in parentheses. These bug reports came from both inside and outside the company, and that isn’t even counting all the sample code I saw where people included incorrect parentheses and it just ended up not mattering (or the bug hadn’t been caught yet). Even after working with Visual Basic code for years, I spent one afternoon trying to figure out why my ByRef param wasn’t behaving as ByRef…
(Minor sidenote: there actually isn’t a special rule about parentheses making a parameter be passed ByVal; it’s just a side effect of the language. Parentheses form a grouping operator that simply evaluates to the value of whatever’s inside them. So the expression (x) has the same value as the expression x, but its classification changes from a variable, which can be passed by reference, to a value, which cannot.)
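A minimal sketch of the effect (Increment is a hypothetical routine, not from the original post):

```vb
' A ByRef parameter lets the callee change the caller's variable.
Sub Increment(ByRef x As Integer)
    x += 1
End Sub

Sub Demo()
    Dim n As Integer = 5
    Increment(n)    ' n is a variable, passed by reference: n becomes 6
    Increment((n))  ' (n) is a value, so a temporary is passed: n stays 6
End Sub
```

The second call is exactly the "extra parentheses" mistake the bug reports described: the code compiles cleanly, but the caller never sees the update.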
When we started making changes to the language for Visual Basic .NET, this was one of those minor issues that we decided to clean up. From the constant stream of bug reports and from our own experiences, it was clear that this was something that tripped people up, even experienced developers. We believed that most people wouldn’t have trouble adjusting to the change and it would make VB.NET code less buggy, so the tradeoff seemed to be a good one.
This is, however, the source of one of the things I find most annoying about the VB.NET editor. We wanted to help people make the transition from parenthesis-free subroutine calls, so we added a small “autocorrection” in the editor. If you enter the line
foo bar, baz
in the editor, when you hit Return, we’ll assume you were trying an old-style subroutine call and will pretty list it to:
foo(bar, baz)
This is all well and good, but there are many times in the editor when I end up momentarily creating invalid lines that look like subroutine calls. It drives me nuts that the editor “helpfully” adds parentheses. Usually it happens when I’m writing comments and I break off the end of a comment to start a new comment line. Then
' This is a very long comment I would like to break if at all possible, please
becomes
' This is a very long comment I would like to break if at
all(possible, please)
if I place the cursor after the word “at” and hit Return. We’re talking about whether this is actually helping people and whether we can be smarter about when we paste in parentheses and such.
We rock .NET Rocks!
Looks like Carl just posted the latest installment of the .NET Rocks! online interview program, featuring yours truly and my fellow VBer Amanda Silver. I haven’t yet had the guts to listen to it… First, because I’m sure I’m going to have the usual “Oh my God, I don’t really sound like that, do I?” reaction, and second, because I’m afraid that somewhere in there I blurted out some super-secret thing that I shouldn’t have talked about and that’s going to get me fired or something. How’s that for a lure to listen? (I’m sure that I didn’t say anything like that, but then again my mouth does sometimes get ahead of my brain, which is why I like blogging. Plenty of time to review before you post.)
Assuming I didn’t spill any trade secrets, it was a lot of fun talking with Carl and Mark, and I’d be happy to do it again sometime in the future. But I think Amanda and I need to get better pictures. The quality of DataGrid Girl’s picture just put ours to shame…
Small plug…
Kathleen Dollard, who I’ve had a lot of good conversations with over the years, has written a guest editorial for Visual Studio Magazine. She says nice things about VB.NET, which is always good to hear!
“With great power comes great responsibility…”
Eric Sink just put up an entry that could easily be subtitled “why I think Edit and Continue is a bad idea.” Although he’s just good naturedly razzing VS in general, his description of a sloppy working style fits totally with what EnC is trying to enable:
Don’t take time to think about all the details. Instead, go for the instant gratification. Just hit F5 and see if the code works. When it doesn’t, try another quick fix and hit F5 again. Eventually the code will work, without ever having to give it any real thought.
Now, I’ll say I have no idea what Eric thinks about EnC, but there are certainly others who use this line of argument to attack the idea of having Edit and Continue in a developer product. Namely that it’s a bad idea because it enables people to code in a style that is ultimately non-productive if not downright dangerous. And all that is true. Really. I kind of glossed over it when talking about EnC before, but it is a real dilemma: giving the consumer more power is what they want, but it also greatly increases the chance that they’ll misuse it. And then everyone might pay.
An analogy that suggests itself is the car. Today’s automobiles are insanely more powerful than Ford’s primitive Model T – the top speed of a Model T was something on the order of 45 mph. And as the power of the automobile has increased, so has the danger it poses. A Ford Model T traveling at 45 mph can cause a good deal of damage, but it’s really nothing compared to what, say, a Ford Expedition could do when traveling at its top speed (100 mph? 120 mph?). Even worse, many cars these days are built specifically to encourage a driver’s sense of power and invulnerability, allowing them to drive more sloppily or dangerously than they might if they were driving something much smaller. Modern cars also boast a whole lot of features like 4 wheel drive that can lull drivers into a false sense of complacency, encouraging them to get themselves into jams that they would never get into otherwise.
And yet, I don’t think the genie can be put back into the bottle. Although some might wish it, I don’t believe we can go back to the day of driving Model Ts that are slower and less powerful than what we have today. Technological advances have a way of raising the stakes of ignorance and stupidity on everyone’s part – even good drivers get into accidents – but that’s just the price of progress, to my mind. The problem is that as technology gives people more power, there is a commensurate need to teach people how to use that power responsibly. No one is allowed to legally get behind the wheel of a car until they’ve proven that they can reasonably handle it, and that privilege can be taken away from them at any time if they abuse it.
However, we’re not going to be licensing programmers anytime soon, so what else is there to do? Limit the power and usability of our products to prevent the hoi polloi from (mis)using them? Or do the benefits to society created by putting so much computing power into the hands of the public outweigh the costs that are associated with that? I don’t think there’s any easy answer to that (especially after SoBig and MSBlaster and all their predecessors), but I generally fall on the side of progress. I do think we need better computing education out there, though, not just teaching how to slap a program together but also how to think about programming.
We sometimes joke that we should build a feature into the program that turns off all the “advanced” features of the product. Then, when you try to use one of them, we would pop up a dialog that says “You’re trying to use an advanced feature. Before we can let you use this, you must answer the following questions correctly:” and then test them to see if they’re advanced enough to really use the feature. It’s tempting but, of course, it would never work. Just as no one thinks they are a below-average driver, most people wouldn’t appreciate being told that they aren’t “advanced” enough of a programmer to use a feature…
The Ballad of AndAlso and OrElse
Last week, Rachel posted an entry talking about AndAlso and OrElse. Besides Integer and array bounds, this has got to be one of the most sensitive changes we made to the language in VB .NET 2002. So I thought I’d talk a little bit about it.
Prior to VB.NET, the VB language only had the And and Or operators. They were essentially bitwise operators, which means that they took their two operands and performed an AND or OR operation on each bit position to produce the resulting bit. So 3 Or 4 = 7, and 2 And 4 = 0. At the same time, the Boolean value True was considered to be equivalent to the value -1, which is just a 1 in each bit position. As a result, the bitwise operators behaved as if they were logical operators when working on Boolean values. So True And False = False, and True Or False = True. And if you mixed and matched Boolean and numeric values, things pretty much worked the way you’d expect them to. So True And 4 = 4, and 10 Or False = 10.
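Those identities can be sketched in classic, pre-.NET VB syntax:

```vb
' Bitwise on numeric operands:
Debug.Print 3 Or 4         ' 011 Or 100 = 111, prints 7
Debug.Print 2 And 4        ' 010 And 100 = 000, prints 0

' Logical on Boolean operands (True is -1, every bit set):
Debug.Print True And False ' prints False
Debug.Print True Or False  ' prints True

' Mixed operands: since True is -1, ANDing with it keeps every bit:
Debug.Print True And 4     ' -1 And 4 = 4
Debug.Print 10 Or False    ' 10 Or 0 = 10
```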
This all worked pretty well, and avoided the situation you have in languages like C where there are separate logical operators (|| and &&) and bitwise operators (| and &). However, there were some problems. The biggest was that it was not possible to support short-circuiting behaviors when doing logical AND and OR operations. Short circuiting is fantastically useful, especially when dealing with reference types. It is extremely common to want to write code along the lines of:
If (Not x Is Nothing) And (x.y = 10) Then
...
End If
Without short circuiting, though, this code will throw an exception if x is Nothing, because both sides of the operation are always evaluated. (Having learned about the usefulness of short circuiting from other languages, this limitation bit me all the time when I was working in VB, and I even saw quite a few bugs like this created by people who knew nothing about short circuiting but just expected it to “do the right thing.”)
One of the things we wanted to do in VB.NET, then, was add short-circuiting logical operations. One way of doing it would have been to change the meaning of And and Or when the two operands were typed as Boolean, but this seemed a very unacceptable solution. In general, it is very bad to overload two very different behaviors on top of the same keyword. In other words, without knowing what the result types of Foo() and Bar() were, it would be hard to know what the behavior of
If Foo() And Bar() Then
...
End If
would be. Would Bar() always be evaluated, or would it sometimes not be evaluated? Even worse, the behavior of the expression might change drastically by shifting the type of just one of the operands.
The only other alternative, then, was to introduce new operators. Our first thought was that logical operations are much more common than bitwise operations, so we should make And and Or be logical operators and add new bitwise operators named BitAnd, BitOr, BitXor and BitNot (the last two being for completeness). However, during one of the betas it became obvious that this was a pretty bad idea. A VB user who forgets that the new operators exist and uses And when he means BitAnd and Or when he means BitOr would get code that compiles but produces “bad” results. For example:
If 2 And 4 Then
...
End If
Assuming Option Strict was off, this would produce the value True (or -1) instead of the value False (or 0), as it did in previous versions. Given that this would be a very easy error for programmers to make and it would have an effect that could be very hard to track down and understand, it seemed clear to us that changing the meaning of these operators would be untenable.
That left the option of adding new logical operators, and we spent a lot of time thinking about potential new keywords. I think I’ll do a whole entry later just on the question of choosing new keywords, but suffice it to say that AndAlso and OrElse were the best choices, in our humble opinions. Ironically, we’re not even the first language to use those names in this way – the functional language ML has andalso and orelse operators, too. (I don’t think we were aware of that at the time we were choosing the keywords, although I could be wrong about that.)
People seem to enjoy poking fun at the keywords, but after working with them for a while, I think they work pretty well and were the right way to add the new feature to the language without breaking people in bizarre and unexpected ways. Then again, as always, I’m biased…
[Correction 09/15/03: Fixed small math error pointed out by Stuart, below.]