Category Archives: Technology

I Heart Beagle Brothers

Jeff Atwood’s little entry on cheatsheets sure brought back some memories… I loved Beagle Brothers. As a general measure of comparison, I think Beagle Brothers had more cool in one little tip/trick box than Google has ever had with their cute variations on the Google logo. Definitely one of the things I look back on with fondness…

I’ve also thought about trying to create a VB.NET language cheat sheet one of these days, but it’s on that list of “things to do when I have time.” Yeah, right…

Latin as a prerequisite for programming?

I’m catching up on my blog reading and just plowed my way through Joel’s curmudgeonly “old guy” rant about The Perils of JavaSchools. I don’t have a lot to say about the central thesis of his rant — I’ve always been of two minds about the efficacy of the Darwinian theory of weeding the weak out through hazing-type classes — but there was an analogy that caught my eye:

Heck, in 1900, Latin and Greek were required subjects in college, not because they served any purpose, but because they were sort of considered an obvious requirement for educated people. In some sense my argument is no different than the argument made by the pro-Latin people (all four of them). “[Latin] trains your mind. Trains your memory. Unraveling a Latin sentence is an excellent exercise in thought, a real intellectual puzzle, and a good introduction to logical thinking,” writes Scott Barker. But I can’t find a single university that requires Latin any more. Are pointers and recursion the Latin and Greek of Computer Science?

I actually took four years of Latin in high school because I had had such a horrible experience trying to learn to speak French in middle school that I was desperate for any language that I didn’t have to listen to or speak. The joke ended up being on me, though, because when I took an Italian class in college, I realized that — difficulties with French aside — Latin was much, much harder to learn than most modern Romance languages. After all, in most of them a noun tends to have just two aspects: gender and/or number. In Latin, though, you have declensions in which the noun changes form based on its role in the sentence. Just that alone made Latin quite a challenge. And a pleasure, I might add, due to the fact that I had an excellent teacher.

Interestingly, though, I think that Latin actually has helped me a lot with my current job. After all, pretty much all you do in Latin class is translate Latin to English and back again on paper (unless you work in the Vatican). And, if you think about it, pretty much all compilers do is sit around day after day translating one language into another. So a lot of the same concepts and methodologies that I learned translating Arma virumque cano, Troiae qui primus ab oris… map fairly well into translating something like If x = 5 Then y = 10. Sure, there are lots of differences between human languages and computer languages, but at some level language is language. So I guess I’m one of those four pro-Latin people and maybe the only pro-Latin person who thinks that learning Latin might help you later when you learn computer programming…
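
Just to make the analogy a little more concrete, here’s roughly what that one-liner “translates” into on the compiler’s side of the fence. This is a hand-written sketch, not actual compiler output; the IL in the comments is only an approximation, and the surrounding module is just scaffolding.

```vb
Imports System

Module TranslationSketch
    Sub Main()
        Dim x As Integer = 5
        Dim y As Integer = 0

        If x = 5 Then y = 10

        ' The compiler renders that "sentence" into IL along these lines:
        '   ldloc.0          load x
        '   ldc.i4.5         load the constant 5
        '   bne.un.s skip    if they differ, jump past the assignment
        '   ldc.i4.s 10      load the constant 10
        '   stloc.1          store it into y
        ' skip:

        Console.WriteLine(y)
    End Sub
End Module
```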

(I should also add that the real payoff of Latin is the opportunity to translate some of the really great masters of Roman literature. Translating the Catiline orations by Cicero gives you a chance to see a real master politician and orator at work in the midst of a pretty gripping political thriller. And Virgil’s Aeneid — at least, the parts we made it through in a year — was just wonderful. While watching the otherwise wretched Troy, I was able to keep myself awake by speculating whether Aeneas would show up with his father on his back when Troy finally burned; the fact that he did was pretty much the only thing that I liked about that movie.)

Sometimes it’s the little victories that matter the most…

I think Rico’s spot on when he says that the real way you win the performance war is 5% at a time. Actually, I think he’s being overly optimistic — a lot of the time, it seems like you win the performance war 1% at a time. It’s much more like trench warfare than blitzkrieg.

There’s also a larger idea at work here. Rico’s point is that in a mature product, you shouldn’t be able to come up with a huge performance win in most cases because, if you can, why didn’t somebody think of it before? The thing is, this applies to pretty much any aspect of a mature product. As we think about the future of Visual Basic, I can assure you that we all sit around dreaming of the revolutionary new feature that will return us to the days of explosive growth that the product experienced early in its life. And, hey, it’s always possible that we’re going to latch on to the next game-changing development methodology that will revolutionize how people write programs and cause an influx of another 30 million or so programmers. It just isn’t likely. After all, a lot of very smart people inside and outside of Microsoft have been looking at this problem for a very long time and so far we haven’t gotten radically beyond many of the fundamental ideas that made VB so hot a decade ago.

It’s also why I really don’t envy the guys working on Office. After all, if you’re a developer in Word, what are you doing? It’s not like there’s some new radical paradigm for text editing out there — we’ve wrung most of the major gains that are to be had out of WYSIWYG. Same goes for Excel — the spreadsheet metaphor has reached a high level of maturity. So what do you do besides dreaming up newer and newer ways to arrange your toolbars and menus? Collect your paycheck and go home?

This is where we get back to trench warfare. Even though, yes, a lot of the “big ideas” have been pretty well mined out, it’s not like we’ve reached a state of perfection. Looking at Word and Excel and Visual Basic, there are still lots and lots and lots of little things that can be better. Refactoring isn’t going to revolutionize programming the way that a GUI builder did, but it’s still a nice, incremental improvement over what came before. It’s the 5% gain or the 1% gain instead of the 50% gain, and that’s in many ways just where we are as an industry.

Personally, I would love to be the guy who dreams up the next really big thing in the programming world, the one that’s going to put my name in the history books (or, at least, computer history books). And, who knows? Maybe I’ll win the lottery. It happens. But if that day never does come, I’d still be happy improving the lives of VB developers by 50% or 75% just by making those incremental improvements that make their lives easier one step at a time. It might not be enough to get us another 30 million developers in one shot, but in the long run, who knows?

Everything old is new again…

In one of the comments to the “Introducing LINQ” entry that I wrote, Unilynx wrote:

Sounds like what we’ve been doing for five years already 🙂

This was a comment that came up several times at the PDC from various sources: “What’s so revolutionary about this stuff? We’ve been doing this kind of thing for years!” On the one hand, what’s unique about LINQ is how it’s built, its openness and flexibility, and its unification of data querying across domains. But on the other hand, yeah, let’s be honest: as Newton would say, if we’re seeing further, it’s only because we’re standing on the shoulders of giants. My standard response to this line of thought is: there are really only 15 good ideas in computer science, and all of them were discovered thirty years ago or more. What happens is that the programming world just rediscovers them over and over and over again, each time pretending that the ideas are brand new.
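
For anyone who hasn’t seen it yet, here’s a small sketch of the kind of unified query I’m talking about, written against plain in-memory objects. The Customer type and the sample data are made up for illustration, and the syntax shown is the query syntax as it eventually settled down, not necessarily the PDC bits.

```vb
Imports System
Imports System.Collections.Generic
Imports System.Linq

Module LinqSketch

    Class Customer
        Public Property Name As String
        Public Property City As String
    End Class

    Sub Main()
        Dim customers As New List(Of Customer) From {
            New Customer With {.Name = "Ada", .City = "Seattle"},
            New Customer With {.Name = "Grace", .City = "Portland"},
            New Customer With {.Name = "Linus", .City = "Seattle"}
        }

        ' The same query shape works over objects here, and over relational
        ' or XML data with the appropriate provider.
        Dim locals = From c In customers
                     Where c.City = "Seattle"
                     Order By c.Name
                     Select c.Name

        For Each name In locals
            Console.WriteLine(name)
        Next
    End Sub

End Module
```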

Erik Meijer had a good comment in the languages panel that if you want to know what the next big thing in programming is going to be, all you have to do is look at what was hot twenty years ago. Because that tends to be the length of time it takes for the wheel to turn a full crank…

Orwell on programming

In the Micronews (our internal newspaper), there was an article talking about teaching English that used the following quote from George Orwell:

To write or even speak English is not a science but an art. There are no reliable words. Whoever writes English is involved in a struggle that never lets up even for a sentence. He is struggling against vagueness, against obscurity, against the lure of the decorative adjective, against the encroachment of Latin and Greek, and, above all, against the worn-out phrases and dead metaphors with which the language is cluttered up.

As I read it, though, it occurred to me that much the same could be said for working in a programming language…

So it has a name… the Sapir-Whorf hypothesis.

A long, long time ago I riffed a little bit on the question of “why do we have VB and C#?” The idea I was trying to struggle towards was that even though two languages may share many similar constructs and be able to express roughly the same thing, design decisions on little things can make a big difference in the experience of using the language. Now I learn from Dave Remy that this is hardly a new idea and even has a name: the Sapir-Whorf hypothesis. There’s even a discussion about how it might or might not apply to programming languages. Well, you learn something new every day…

(And, yes, I am a geek. When I first saw the phrase “the Sapir-Whorf hypothesis,” my first thought was, “what does this have to do with Star Trek?”)

Relational people vs. object people

As we move towards Beta2 and are preparing for the headlong rush towards RTM (release to manufacturing), I’ve been lucky enough to be able to carve out some time to start doing some research for ideas that might show up past Whidbey. We’re not talking anything formal here, no feature lists or anything like that. Just the opportunity to spend some time noodling with ideas on the whiteboard and fiddling with some prototypes. It’s one of the perks of having been around a long time – especially since most people on the team are still totally heads-down on shipping.

One of my areas of investigation has been the relationship between data access and the VB language and how we might allow people to work with data more effectively. This is an area with a lot of creative ferment going on, and I’m having the good fortune to spend some time working with people like Erik Meijer (one of the original authors of ), kicking new ideas around. In many ways, it’s a return to my roots – I started life at Microsoft on the Access team and worked a lot on the query designer generating SQL. So to get to go back and think about how to make data really easy to use is just a wonderful opportunity.

As I start to ease more back into thinking about data and data access, though, I find myself fascinated by a schism between the data world and the language world that was not obvious to me back before I’d worked on either side of the divide. I find it kind of curious, but it seems to break down like this:

On the one side, it appears, are the database folks. Database folks usually cut their teeth designing databases and database products. They are extremely comfortable with the concept of normalization and prefer to deal with data stored in rowsets. Although they readily admit its limitations, they basically like the SQL language and feel that it is a very logical and powerful way of manipulating data. Their command of SQL allows them to slice and dice information in truly astonishing ways. In .NET, they prefer to work directly with datasets (maybe strongly-typed, maybe not), because that’s a very natural way for them to work and maps well to their domain knowledge.

On the other side are the language folks. Language folks usually cut their teeth working with programming languages and compilers. They are extremely comfortable with the concept of encapsulation and prefer to deal with data stored in hierarchical object models. They generally dislike the SQL language and feel that it is a very poorly designed language, although they may envy its power. Their command of programming languages allows them to build astonishingly useful models of data that encapsulate large amounts of metadata. In .NET, they prefer to work with object/relational mappers, because an object model is a very natural way for them to work and maps well to their domain knowledge.
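
To put the two mindsets side by side, here’s a rough sketch of the same lookup done each camp’s way. The table layout, column names, and the Customer class are all made up for illustration; neither snippet is meant as a best practice, just as a picture of the two styles.

```vb
Imports System
Imports System.Collections.Generic
Imports System.Data

Module TwoCamps

    ' The database camp: rows and columns, filtered with a SQL-ish expression.
    Sub RelationalStyle(ByVal customers As DataTable)
        For Each row As DataRow In customers.Select("City = 'Seattle'")
            Console.WriteLine(row("Name"))
        Next
    End Sub

    ' The language camp: an encapsulated object model navigated in code.
    Class Customer
        Public Property Name As String
        Public Property City As String
    End Class

    Sub ObjectStyle(ByVal customers As List(Of Customer))
        For Each c As Customer In customers
            If c.City = "Seattle" Then
                Console.WriteLine(c.Name)
            End If
        Next
    End Sub

End Module
```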

The defining feature of this divide is that the guys in one camp tend to think that the guys in the other camp are crazy. (And, yes, I know, this is really a three-way battle, but I’m just leaving the XML guys out of it for now. Those guys really are crazy.) It’s just another one of those religious debates, like case-sensitivity or curly braces, in which the extremists on either side espouse the One True Way, tossing hand grenades at the other side, and the moderates in the middle just try and keep their damn heads down.

This entry was sparked by Andy Conrad’s eminently reasonable thoughts on datasets. All he’s saying is something that seems completely obvious to me — sometimes relational is better, sometimes objects are better, it just depends on what you’re doing. And yet, it seems to me that this sort of pragmatic view of data is somewhat… lacking in some of the discussions of data access that I’ve seen elsewhere. Either it’s all O/R mapping and you’d be insane to want to work with your data in a relational way, or it’s all SQL and you’d be insane to want to work with your data as objects, and to heck with the idea that sometimes they’re just two great tastes that taste great together…

Anyway, it’s nice to be coming home a bit. More news when there’s something actually to talk about…

Playing the performance shell game

Another “just links” entry… I thought that Raymond’s entry on the performance shell game was particularly good, and Michael’s additional take was very relevant. Much performance work involves shifting work around from scenario to scenario, and when doing so it is vitally important that you keep track of the bigger picture. The fact that it’s not possible to test every scenario can easily lead to tunnel vision, where you get so motivated to improve one important scenario (application startup) that you lose sight of perhaps an even more important scenario (OS startup) that’s “not your department,” but which impacts users all the same…

Beard = success?

What I want to know about this theory is: what happens if you’re someone like me, who cycles between growing a beard and going beardless? Or does it just matter whether your official picture has a beard? I’ve got one right now, so does that mean I’m doing better work than when I didn’t have one months ago?

Dynamic languages/dynamic environments

The .NET Languages blog recently pointed me to an SD Times article by Larry O’Brien entitled “Dynamic Do-Over.” Most of the later part of the article talked about IronPython and Jim Hugunin, but the earlier part touched on something that I’ve discussed earlier: the question of language strictness when it comes to typing. The more I think about it, the more I believe that static typing is a good thing and something that should be encouraged wherever possible. But when I say that, I don’t mean to say that there isn’t something of value in all those scripty-like dynamic languages out there. I think Larry hits the nail on the head in his article: what makes dynamic languages so great is not their loose type systems, but their dynamic environments.

In the end, I think anything that helps the average programmer be more productive is a good thing. By and large, static typing satisfies this dictum: static typing enables all kinds of programmer productivity features like IntelliSense, better error messages at compile time, etc. (One could argue, I suppose, that you could lose the static typing and use type inferencing instead, but I wonder whether it would be possible to build a complete enough type inferencing ruleset that a) was implementable, b) made some kind of sense, and c) could compete with just stating the damn type of your variables.) Dynamic environments also do this: edit and continue (pace Franz et al.), continuable exceptions, being able to call functions at design time, etc. So I think marrying the two worlds has some fascinating possibilities.
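
To put that in VB terms (this is my own toy example, not anything from Larry’s article): with Option Strict On, the compiler catches the typo below at compile time and IntelliSense walks you right past it; with Option Strict Off, the same call is late bound, compiles just fine, and blows up only at run time.

```vb
Option Strict Off   ' allow late binding so the contrast below compiles

Imports System

Module TypingSketch
    Sub Main()
        Dim tight As String = "hello"
        Console.WriteLine(tight.Length)   ' statically typed: checked at compile time, IntelliSense works

        Dim loose As Object = "hello"
        Console.WriteLine(loose.Length)   ' late bound: resolved at run time, happens to work

        Console.WriteLine(loose.Lenght)   ' typo still compiles, throws MissingMemberException at run time
    End Sub
End Module
```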

I should add, though, that I don’t believe loose typing has no use. One application for loose typing that I’m particularly interested in is modeling unstructured or semi-structured data such as XML. I think the work that the E4X group has been doing is particularly interesting…