Category Archives: Oslo

Another transition…

After spending a year and a half working on “M”, I’ve decided to make another change in what I’m doing and move over to the SQL Server Programmability team. That’s the team responsible for things like the T-SQL language and runtime in SQL Server. Working on “M” was a lot of fun and the team was great, but after spending a good, long while down in the bowels of a GLR parser, I decided that that was enough and that it was time to do something else. Working on SQL Server programmability is, in some ways, a combination of all my previous jobs: a bit of data from Access, a bit of runtime from OLE Automation, and a bit of programming language from Visual Basic and “M”. It’s also an interesting challenge: a product that’s both well established and confronting a lot of new challenges. I think it’s going to be quite a bit of fun!

It does mean saying goodbye to “M”, and that was sad (although, really, they’re still in the same division and not that far away), but that’s the way it goes. I’ll be looking forward to their next CTP, which is where people will see a lot of the hard work that’s been going on and the overall direction that the language is headed. There’s a lot of cool stuff coming, and I think people will find it very interesting!

Changing jobs also means that I’m back to drinking from the firehose, learning the ins and outs of the guts of the SQL Server engine, as well as T-SQL. Interesting stuff. Any good T-SQL/SQL Server blogs anyone can recommend?

A Channel9 E2E2E video…

Charles just posted a new “Expert2Expert2Expert” talk on Channel9 on “Programming Data.” This was a talk, moderated by Erik Meijer, with me and Michael Rys about data and programming and “M” and SQL Server and more. It was an enjoyable conversation, but Erik did observe afterwards that I’d managed to avoid saying too much about “M” specifically! There wasn’t any mysterious intent behind that: as I said in my previous blog entry, things have been continuing to move in the “M” world and there just isn’t a lot new that I can say at the moment. Soon, hopefully, soon.

Radio Silence

Gosh, it’s been six months since I’ve said anything on this blog. I was beginning to wonder myself whether I’d ever come back or if this just would become yet another decaying corner of the Internet. Anyway, things have been quiet around here for several reasons.

First, there’s just been a lot going on in my world, and blogging (and tweeting and facebooking) has been pretty low on the priority list. None of it is worth broadcasting to the entire world on the Internet, but suffice it to say that it’s been a very heavy year.

Second, I finally realized I really needed a break after ten years slugging it out on VB. I knew I was burned out when I changed jobs, but I don’t think I knew how burned out I really was. The last year has been very therapeutic in that I’ve mostly just kept my head down, shown up at work, and done the work of the average developer: write code, fix bugs, etc. That’s been great, and I had forgotten how much fun it is to just write code. It’s not the only thing I want to do in my life, but it’s been very nice after the roller coaster of 3.5 major versions of VB to just show up, do my job, and go home. I think I’m now starting to look beyond that, but there you are.

And, finally, I do try to live by the maxim that “if you don’t have anything to say, don’t say it.” Working on M has been fun, but it became clear to me around the time of my last blog post just how much I didn’t know about what I was talking about. (In fact, I owe several corrections on my previous blog post, which I’m working on.) That’s been OK, though, since my team has clearly gone through several rounds of figuring out what we’re doing, too. It does feel like we’re reaching one of those inflection points where things are about to get a whole lot clearer about M, which is great and maybe means I’ll have more to talk about.

So, yes, I’m still alive. We’ll see how much I have to say.

(By the way, sadly, I won’t be at PDC this year, so have fun without me!)

“Oslo” has a May 2009 CTP…

In case you missed it, we pushed out a new CTP of “Oslo” this week. You can get it at the Oslo Developer Center. New stuff includes:

  • The “Quadrant” modeling tool. Use Quadrant to browse and edit models in a repository database.
  • Domain models for the UML 2.1 specification encompassing Use Case, Activity, Class, Sequence, Component diagrams, profiles and templates.
  • An XMI importer supporting the 2.1 specifications, and covering the diagrams identified above.
  • A domain model and loader for System.Runtime.

There isn’t a huge amount of change for the language portion of M in this CTP. Most of the major differences are under the hood and will only be apparent if you’re calling the underlying APIs directly. The only big thing I can think of is that the “identifier” keyword is no longer needed in a grammar; we’ll automatically detect the situation for you.

The feature I spent most of our last milestone working on appears to be finally coming together, so I hope to have more to say about that soon.

Catching up on free media

Things have been a bit quiet around Panopticon Central lately because I’ve been heads down developing a particularly gnarly feature in M. More on that if and when it starts to see the light of day. But in the meantime, a few interviews/talks have been posted that I wanted to point out:

Hope you enjoy them!

Revisiting “DSLs are a bad idea”…

My post a few months ago entitled “DSLs: Definitely a bad idea!” certainly seemed to touch a nerve for some people. What was intended to be a sort of lighthearted throwaway entry engendered much stronger reactions than I’d expected. So I thought it might be worthwhile to back up for a second and revisit the issue, address some (though not all) of the comments, and talk about how they connect to how I see M.

To start with, yes, of course, I was engaging in a bit of hyperbole by saying that DSLs are a “bad idea.” DSLs are just a tool, and tools are neither bad nor good in and of themselves; it’s all in how you use them. I don’t believe that DSLs are so dangerous that only highly qualified and well-studied people should be allowed to attempt them. But I do believe that they have enough capacity to cause mischief that they should not be entered into lightly unless you are just fooling around. I mean, if you’re just creating a DSL for your own amusement, then who cares? Have a ball! In fact, I think it’s a great way to learn about programming languages and how they are put together. But if you’re in an environment where you’re going to depend on that DSL for something, or if other people are going to have to depend on it, then you should really take a step back and think carefully before heading down that path. Because DSLs have a nasty habit of taking on lives of their own if you’re not really careful about it.

The question then arises: “Paul, you work on the DSL component of M. Why are you telling us we should think twice before creating a DSL? Aren’t you speaking against your own interest?” I don’t see it that way, for this reason: what I’m mainly talking about here is that one should pause before one creates an entirely new DSL. Maybe this was an imprecision in my previous piece that I should have corrected, but I think one can have much less concern when one is working in a domain where there is already a language of some sort defined.

This, to my mind, is where M’s DSL capabilities really shine. If you’ve got a domain that has no domain-specific language associated with it, think carefully before you embark on creating one. But if you’re working in some domain that already has some domain-specific language associated with it, the damage is already done, so to speak, so you might as well jump in and try to make things better! In particular, if you’ve got a domain that has a language associated with it that is more machine-friendly than human-friendly (*cough* angle brackets *cough*), then I think that M’s DSL capabilities are going to be hugely useful for you.

So overall, I’m just cautioning against the hammeritis that often infects the programming community (you know, “when all you have is a hammer…”). DSLs are a wonderful tool, and I’m hoping to start talking soon about how you put them together. Just, you know, hammer responsibly.

LL vs. LR vs. GLR

Having covered tokens vs. syntax, we can now talk about what “kind” of parser generator MGrammar is. As I said in a previous post, if you want the full discussion of parser generators you should really read the Dragon book, but here’s sort of how things go in a nutshell:

There are traditionally two ways to interpret grammars: LL and LR. In both cases the first “L” means that the parser reads its input left-to-right. The second letter describes the derivation the parser produces: “L” for a leftmost derivation and “R” for a rightmost derivation (built in reverse). An LL parser is also called a “top-down” parser because it starts with the top-most (or “start”) syntax production of the language and then starts trying to fill in from there. So if you’ve got a grammar for, say, Visual Basic, an LL parser starts by saying “OK, I’m parsing a file. Well, a file can start with an Option statement, so I’ll start looking there.” If it doesn’t find an Option statement, it says “OK, no Option statements, the next thing I can see is an Import statement, so I’ll start looking for those.” And so on, down the grammar. This is also called “predictive” parsing: the parser always knows what it expects next, and either it finds it or it doesn’t.

An LL parser is nice because it fits with how we humans tend to think about parsing text. In fact, both the Visual Basic and C# parsers are handwritten LL parsers. LL parsers also have some nice properties for error recovery: since you always know what you’re expecting, it’s easy to tell the user what you were expecting to see. On the other hand, there are situations where prediction can fail you when trying to figure out what went wrong, usually when something is really out of place (say, a class declaration in the middle of a function declaration). This is because an LL parser only knows what it expects next; it doesn’t think about all the things that could occur somewhere else.
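To make the top-down idea concrete, here’s a minimal sketch of what a handwritten recursive-descent (LL) parser might look like for a tiny, made-up fragment of that “Option statements, then Import statements” structure. It’s Python and purely illustrative; the grammar and token names are invented for this example and are nothing like the real Visual Basic grammar.

```python
# A minimal recursive-descent (LL) parser sketch for a toy grammar:
#   File   ::= Option* Import*
#   Option ::= "Option" "Strict"
#   Import ::= "Imports" name
# The token list is assumed to come from a separate lexer.

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def expect(self, expected):
        if self.peek() != expected:
            # Because the parser is predictive, it knows exactly what it wanted here.
            raise SyntaxError(f"expected {expected!r}, found {self.peek()!r}")
        self.pos += 1

    def parse_file(self):
        statements = []
        # Predict: a file can start with Option statements...
        while self.peek() == "Option":
            statements.append(self.parse_option())
        # ...and then Import statements.
        while self.peek() == "Imports":
            statements.append(self.parse_import())
        return ("File", statements)

    def parse_option(self):
        self.expect("Option")
        self.expect("Strict")
        return ("OptionStatement",)

    def parse_import(self):
        self.expect("Imports")
        name = self.peek()
        self.pos += 1
        return ("ImportStatement", name)

print(Parser(["Option", "Strict", "Imports", "System"]).parse_file())
# ('File', [('OptionStatement',), ('ImportStatement', 'System')])
```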

An LR parser is the reverse of an LL parser. Instead of starting with the syntax and trying to fit tokens into it, it starts with the tokens and tries to piece together something that fits into the syntax. So if we’re parsing a Visual Basic file with an LR parser, it would start by saying “OK, I found an ‘Option’ token. Does that fit anywhere in the grammar?” Since it hasn’t made a complete anything yet, it’d shift ‘Option’ onto a stack and then look at the next part of the input. “OK, I found a ‘Strict’ token. Does a ‘Strict’ token combined with an ‘Option’ token fit anywhere in the grammar?” Since there is an ‘Option Strict’ statement, it can shift ‘Strict’ onto the stack and then reduce both elements to an ‘Option Strict statement’. Then it would ask “Does ‘Option Strict statement’ fit anywhere in the grammar?” and so on.
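Here’s a rough sketch of that shift/reduce loop, again in Python with an invented two-rule grammar. A real LR parser decides when to shift and when to reduce from precomputed tables; this sketch just greedily reduces whenever the top of the stack matches a rule, which is enough to show the idea.

```python
# A naive shift-reduce loop for a toy grammar, to illustrate the LR idea.
# Each rule maps a right-hand side (a tuple of symbols) to the symbol it reduces to.
RULES = {
    ("Option", "Strict"): "OptionStatement",
    ("OptionStatement",): "File",
}

def parse(tokens):
    stack = []
    for token in tokens:
        stack.append(token)            # shift the next token onto the stack
        reduced = True
        while reduced:                 # keep reducing while the top of the stack matches a rule
            reduced = False
            for rhs, lhs in RULES.items():
                if tuple(stack[-len(rhs):]) == rhs:
                    del stack[-len(rhs):]      # pop the matched symbols...
                    stack.append(lhs)          # ...and replace them with the production
                    reduced = True
                    break
    return stack

print(parse(["Option", "Strict"]))     # ['File']
```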

As you can see, an LR parser builds up the parse tree from the bottom: first it figures out the leaves and then tries to assemble those leaves into a coherent tree. An LR grammar is nice because it’s very easy to generate a set of tables that describe a machine that can process the grammar, so it’s easy to generate an efficient program to recognize the grammar. A downside of an LR parser is that when things go wrong you don’t have the context that an LL parser has: you don’t necessarily know what you’re expecting to come next, so you can have a harder time recovering from problems.

Another downside of an LR parser is that it may find ambiguities in the grammar that wouldn’t exist for an LL parser. For example, a token sequence might reduce into two different syntax productions in the grammar that are totally distinct if you know what the context is. But since an LR parser doesn’t know what syntax production it’s looking for, it’s ambiguous. This means that coming up with a grammar for an LR parser can be difficult, because you get lots of conflicts that are not obvious to a human reading the grammar.

But all is not lost. That’s because there’s an improvement on an LR parser, called a GLR parser, that fixes many of these ambiguities. The “G” stands for “Generalized,” and what a GLR parser does differently from a regular LR parser is allow for multiple threads of parsing. In other words, when you run into a conflict where, say, a set of tokens can reduce into two different productions, you simply explore both possibilities. The odds are that in most grammars the ambiguity will resolve itself pretty quickly (i.e., one of the threads will reach an error state), so you just run all possibilities in parallel and wait for one of the threads to survive the others. This fixes most of the conflicts caused solely by the nature of bottom-up parsing.
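Here’s a deliberately naive sketch of that “explore every possibility” idea, building on the shift-reduce sketch above. The grammar is again made up: a bare name could be either a type or an expression, which is a genuine ambiguity for a bottom-up parser. Rather than forking only on real conflicts and sharing work through a graph-structured stack the way an actual GLR parser does, this sketch simply copies the whole stack for every possible shift and reduction; dead-end threads disappear and the surviving thread is the parse.

```python
# A naive "generalized" shift-reduce sketch: every possible action forks a new
# parse thread (a copy of the stack). Threads that can't make progress die off;
# the threads that survive to consume all the input are the valid parses.

RULES = [
    (("NAME",), "Type"),            # a bare name could be a type...
    (("NAME",), "Expression"),      # ...or an expression: an ambiguity for a bottom-up parser
    (("Type", "NAME"), "Declaration"),
]

def possible_reductions(stack):
    for rhs, lhs in RULES:
        if tuple(stack[-len(rhs):]) == rhs:
            yield stack[:-len(rhs)] + [lhs]

def parse_all(tokens, goal):
    threads = [([], 0)]                      # each thread is (stack, input position)
    results = []
    while threads:
        stack, pos = threads.pop()
        if pos == len(tokens) and stack == [goal]:
            results.append(stack)            # this thread survived to a complete parse
            continue
        if pos < len(tokens):
            threads.append((stack + [tokens[pos]], pos + 1))   # fork: shift the next token
        for reduced in possible_reductions(stack):
            threads.append((reduced, pos))                     # fork: each possible reduction
    return results

# "NAME NAME" only parses as a Declaration if the first NAME is read as a Type;
# the thread that reduced it to an Expression hits a dead end and vanishes.
print(parse_all(["NAME", "NAME"], goal="Declaration"))   # [['Declaration']]
```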

As you might have guessed by the way I’ve written this entry, MGrammar is a GLR parser generator. There are some tweaks to the algorithm that we make, which I can discuss later, but the core of what we do is GLR. I wasn’t actually part of the original design team, so I can’t comment on exactly what led us to choose GLR over, say, LL, but my thinking above probably touches on some of the reasons.

I’d also add that LL and LR are not the last word in parser technology. There are also other approaches to parsing such as Parsing Expression Grammars (PEGs) and the related implementation strategy packrat parsing. And research continues today, so I’m sure there’ll be even more strategies in the future.

Lexing and Parsing

I want to talk in more detail about how the MGrammar parser works, but before I delve too deeply into that I wanted to talk a little bit about some basic parsing concepts so we can be sure we’re on the same page. This may be a bit basic for some, so you can just skip ahead if you already know all this.

A language grammar is actually made up of two different grammars: a lexical grammar and a syntactic grammar. Although a grammar may not distinguish between the two in its specification (as we’ll see later), every grammar has these two things contained within it. The lexical grammar translates characters (the raw material of language) into lexical units called tokens. The syntactic grammar then translates the tokens into syntactic structures called syntax trees.

Let’s take a look at this in practical terms. Let’s say you’ve got the following sentence:

Alice kissed Bob.

Fundamentally, this sentence is just a collection of characters: the upper-case letter A, the lower-case letter l, the lower-case letter i, and so on. Lexically analyzing this string takes these characters and shapes them into the basic tokens of English: punctuation and words. So a lexical analysis might return the following tokens: the word Alice, the word kissed, the word Bob, and a period.

One thing to notice here is what’s missing from the list of tokens: every character in the string “Alice kissed Bob.” is represented in the token list except for two characters. We’ve dropped the spaces between the words out of the token list, and we’ve done this because spaces aren’t significant in English syntax. (As anyone who was taught to type two spaces after a period can attest.) In other words, you can have as many spaces as you want between tokens and they don’t mean anything except as delimiters between tokens. So “AlicekissedBob.” is not the same as “Alice kissed Bob.” but “Alice   kissed   Bob.” is the same as “Alice kissed Bob.” (Although it looks a little funny.) Since spaces don’t matter once we’ve formed the tokens, we toss them out.
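To make that concrete, here’s roughly what such a lexer might look like. This is a purely illustrative Python sketch; the token names and patterns are invented for this example.

```python
import re

# A toy lexer for "English": words and punctuation become tokens,
# and whitespace is recognized but deliberately thrown away.
TOKEN_PATTERNS = [
    ("WORD",        r"[A-Za-z]+"),
    ("PUNCTUATION", r"[.!?]"),
    ("WHITESPACE",  r"\s+"),
]

def lex(text):
    tokens = []
    pos = 0
    while pos < len(text):
        for kind, pattern in TOKEN_PATTERNS:
            match = re.match(pattern, text[pos:])
            if match:
                if kind != "WHITESPACE":          # spaces only delimit tokens, so drop them
                    tokens.append((kind, match.group()))
                pos += match.end()
                break
        else:
            raise ValueError(f"unexpected character {text[pos]!r}")
    return tokens

print(lex("Alice kissed Bob."))
# [('WORD', 'Alice'), ('WORD', 'kissed'), ('WORD', 'Bob'), ('PUNCTUATION', '.')]
```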

Once we’ve got the tokens, we can then proceed to do the syntactic analysis. In the case of English, a common syntax pattern is “<subject> <verb> <direct object>.” Looking at the sentence, we can pretty easily discern this pattern at work: the subject of the sentence is “Alice,” the verb is “kissed,” the direct object is “Bob,” and the sentence ends with a period. Since the verb links the subject and the direct object, we’d probably come up with a syntax tree that looks something like this:

[Figure: syntax tree for “Alice kissed Bob.”, with the verb at the root linking the subject and the direct object]
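An equally toy sketch of that syntactic step, taking the token list produced by the lexer sketch above and fitting it into the “<subject> <verb> <direct object>” pattern (again, the structure and names here are invented for illustration):

```python
# A toy syntactic analyzer: fit the token stream into the pattern
#   Sentence ::= WORD WORD WORD "."
# and build a tree with the verb linking the subject and the direct object.

def parse_sentence(tokens):
    kinds = [kind for kind, _ in tokens]
    if kinds != ["WORD", "WORD", "WORD", "PUNCTUATION"]:
        raise SyntaxError("expected '<subject> <verb> <direct object>.'")
    subject, verb, direct_object = [text for _, text in tokens[:3]]
    return {"verb": verb, "subject": subject, "direct object": direct_object}

# The token list below is the output of the lexer sketch above.
tokens = [("WORD", "Alice"), ("WORD", "kissed"), ("WORD", "Bob"), ("PUNCTUATION", ".")]
print(parse_sentence(tokens))
# {'verb': 'kissed', 'subject': 'Alice', 'direct object': 'Bob'}
```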

As I said before, some languages make a bigger deal of lexical vs. syntactic grammars than others. For example, if you go and read the XML spec, you’ll see that they don’t really call out what’s a token and what’s syntax. On the other hand, the Visual Basic language specification is extremely explicit about the distinction between tokens and syntax, even going so far as to have two separate grammars. How you think about tokens vs. syntax has big implications for the implementation of your parser, which I’ll talk about in my next entry.