Category Archives: M

Another transition…

After spending a year and a half working on “M”, I’ve decided to make another change in what I’m doing and move over to the SQL Server Programmability team. That’s the team responsible for things like the T-SQL language and runtime in SQL Server. Working on “M” was a lot of fun and the team was great, but after spending a good, long while down in the bowels of a GLR parser, I decided that was enough and that it was time to do something else. Working on SQL Server programmability is, in some ways, a combination of all my previous jobs: a bit of data from Access, a bit of runtime from OLE Automation, and a bit of programming language from Visual Basic and “M”. It’s also an interesting challenge: a product that’s both well established and confronting a lot of new challenges. I think it’s going to be quite a bit of fun!

It does mean saying goodbye to “M”, and that was sad (although, really, they’re still in the same division and not that far away), but that’s the way it goes. I’ll be looking forward to their next CTP, which is where people will see a lot of the hard work that’s been going on and the overall direction that the language is headed. There’s a lot of cool stuff coming, and I think people will find it very interesting!

Changing jobs also means that I’m back to drinking from the firehose, learning the ins and outs of the guts of the SQL Server engine, as well as T-SQL. Interesting stuff. Any good T-SQL/SQL Server blogs anyone can recommend?

A Channel9 E2E2E video…

Charles just posted a new “Expert2Expert2Expert” talk on Channel9 on “Programming Data.” This was a talk, moderated by Erik Meijer, with me and Michael Rys about data and programming and “M” and SQL Server and more. It was an enjoyable conversation, but Erik did observe afterwards that I’d managed to avoid saying too much about “M” specifically! There wasn’t any mysterious intent there: as I said in my previous blog entry, things have been continuing to move in the “M” world and there just isn’t a lot of new stuff I can say at the moment. Soon, hopefully, soon.

“Oslo” has a May 2009 CTP…

In case you missed it, we pushed out a new CTP of “Oslo” this week. You can get it at the Oslo Developer Center. New stuff includes:

  • The “Quadrant” modeling tool. Use Quadrant to browse and edit models in a repository database.
  • Domain models for the UML 2.1 specification, encompassing Use Case, Activity, Class, Sequence, and Component diagrams, plus profiles and templates.
  • An XMI importer supporting the 2.1 specification and covering the diagrams identified above.
  • A domain model and loader for System.Runtime.

There isn’t a huge amount of change to the language portion of M in this CTP. Most of the major differences are under the hood and will only be apparent if you’re calling the underlying APIs directly. The only big thing I can think of is that the “identifier” keyword is no longer needed in a grammar: we’ll automatically detect the situation for you.

The feature I spent most of our last milestone working on appears to be finally coming together, so I hope to have more to say about that soon.

Catching up on free media

Things have been a bit quiet around Panopticon Central lately because I’ve been heads down developing a particularly gnarly feature in M. More on that if and when it starts to see the light of day. But in the meantime, a few interviews and talks have been posted that I wanted to point out:

Hope you enjoy them!

Revisiting “DSLs are a bad idea”…

My post a few months ago entitled “DSLs: Definitely a bad idea!” certainly seemed to touch a nerve for some people. What was intended to be a sort of lighthearted throwaway entry engendered much stronger reactions than I’d expected. So I thought it might be worthwhile to back up for a second and revisit the issue, address some (though not all) of the comments, and talk about how they connect to how I see M.

To start with, yes, of course, I was engaging in a bit of hyperbole by saying that DSLs are a “bad idea.” DSLs are just a tool, and tools are neither bad nor good in and of themselves; it’s all in how you use them. I don’t believe that DSLs are so dangerous that only highly qualified and well-studied people should be allowed to attempt them. But I do believe that they have enough capacity to cause mischief that they should not be entered into lightly unless you are just fooling around. I mean, if you’re just creating a DSL for your own amusement, then who cares? Have a ball! In fact, I think it’s a great way to learn about programming languages and how they are put together. But if you’re in an environment where you’re going to depend on that DSL for something, or if other people are going to have to depend on it, then you should really take a step back and think carefully before heading down that path. Because DSLs have a nasty habit of taking on lives of their own if you’re not really careful about it.

The question then arises: “Paul, you work on the DSL component of M. Why are you telling us we should think twice before creating a DSL? Aren’t you speaking against your own interest?” I don’t see it that way, for this reason: what I’m mainly talking about here is that one should pause before one creates an entirely new DSL. Maybe this was imprecision in my previous piece that I should have corrected, but I think one can have much less concern when one is working in a domain where there is already a language of some sort defined.

This, to my mind, is where M’s DSL capabilities really shine. If you’ve got a domain that has no domain-specific language associated with it, think carefully before you embark on creating one. But if you’re working in some domain that already has some domain-specific language associated with it, the damage is already done, so to speak, so you might as well jump in and try to make things better! In particular, if you’ve got a domain that has a language associated with it that is more machine-friendly than human-friendly (*cough* angle brackets *cough*), then I think that M’s DSL capabilities are going to be hugely useful for you.

So overall, I’m just cautioning against the hammeritis that often infects the programming community (you know, “when all you have is a hammer…”). DSLs are a wonderful tool, and I’m hoping to start talking soon about how you put them together. Just, you know, hammer responsibly.

LL vs. LR vs. GLR

Having covered tokens vs. syntax, we can now talk about what “kind” of parser generator MGrammar is. As I said in a previous post, if you want the full discussion of parser generators you should really read the Dragon book, but here’s sort of how things go in a nutshell:

There are traditionally two ways to interpret grammars: LL and LR. In both cases the first “L” means that the parser reads the input left-to-right. The second letter describes how the parse is built: “L” means the parser builds a leftmost derivation, working top-down, while “R” means it builds a rightmost derivation in reverse, working bottom-up. An LL parser is also called a “top-down” parser because it starts with the top-most (or “start”) syntax production of the language and then starts trying to fill in from there. So if you’ve got a grammar for, say, Visual Basic, an LL parser starts by saying “OK, I’m parsing a file. Well, a file can start with an Option statement, so I’ll start looking there.” If it doesn’t find an Option statement, it says “OK, no Option statements, the next thing I can see is an Imports statement, so I’ll start looking for those.” And so on, down the grammar. This is also called “predictive” parsing: the parser always knows what it expects next and either it finds it or it doesn’t.
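To make that concrete, here’s a minimal sketch (in Python, purely for illustration) of a predictive parser working the way described above: each function encodes one production, always knows what it expects next, and either finds it or reports an error. The token list and the simplified “grammar” here are my own invention, not the real Visual Basic grammar.

```python
# A toy top-down (LL-style) parser for the start of a Visual Basic-like file.
# Each parse_* function consumes tokens left to right, always knowing what it
# expects to see next.

def parse_file(tokens):
    pos = 0
    # A file can start with Option statements...
    while pos < len(tokens) and tokens[pos] == "Option":
        pos = parse_option_statement(tokens, pos)
    # ...followed by Imports statements, and so on down the grammar.
    while pos < len(tokens) and tokens[pos] == "Imports":
        pos = parse_imports_statement(tokens, pos)
    return pos

def parse_option_statement(tokens, pos):
    pos += 1                                      # consume 'Option'
    if pos < len(tokens) and tokens[pos] in ("Strict", "Explicit"):
        return pos + 1                            # consume 'Strict'/'Explicit'
    raise SyntaxError("expected 'Strict' or 'Explicit' after 'Option'")

def parse_imports_statement(tokens, pos):
    pos += 1                                      # consume 'Imports'
    if pos < len(tokens):
        return pos + 1                            # consume the namespace name
    raise SyntaxError("expected a namespace after 'Imports'")

parse_file(["Option", "Strict", "Imports", "System"])
```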

An LL parser is nice because it fits with how we humans tend to think about parsing text. In fact, both the Visual Basic and C# parsers are handwritten LL parsers. LL parsers also have some nice properties for error recovery: since you always know what you’re expecting, it’s easy to tell the user what you were expecting to see. On the other hand, there are situations where prediction can fail you when trying to figure out what went wrong, usually when something is really out of place, say, a class declaration in the middle of a function declaration. This is because an LL parser only knows what it expects next; it doesn’t think about all the things that could occur somewhere else.

An LR parser is the reverse of an LL parser. Instead of starting with the syntax and trying to fit tokens into it, it starts with the tokens and tries to piece together something that fits into the syntax. So if we’re parsing a Visual Basic file with an LR parser, it would start by saying “OK, I found an ‘Option’ token. Does that fit anywhere in the grammar?” Since that doesn’t make a complete anything yet, it’d shift ‘Option’ onto a stack and then look at the next part of the input. “OK, I found a ‘Strict’ token. Does a ‘Strict’ token combined with an ‘Option’ token fit anywhere in the grammar?” Since there is an ‘Option Strict’ statement, it can shift ‘Strict’ onto the stack and then reduce both elements to an ‘Option Strict’ statement. Then it would ask “Does an ‘Option Strict’ statement fit anywhere in the grammar?” and so on.
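Here’s an equally toy-sized sketch of that shift-reduce behavior, again in Python. The two grammar rules are just enough to trace the ‘Option Strict’ example; a real LR parser drives this loop from generated tables rather than a hand-written rule list.

```python
# A toy bottom-up (shift-reduce) recognizer for the 'Option Strict' example.
# Each rule maps a sequence on top of the stack to the production it reduces to.
RULES = [
    (("Option", "Strict"), "OptionStrictStatement"),
    (("OptionStrictStatement",), "Statement"),
]

def shift_reduce(tokens):
    stack, rest = [], list(tokens)
    while rest or any(tuple(stack[-len(body):]) == body for body, _ in RULES):
        reduced = False
        # Reduce whenever the top of the stack matches a rule body...
        for body, head in RULES:
            if tuple(stack[-len(body):]) == body:
                del stack[-len(body):]
                stack.append(head)
                print("reduce ->", stack)
                reduced = True
                break
        # ...otherwise shift the next token onto the stack.
        if not reduced and rest:
            stack.append(rest.pop(0))
            print("shift  ->", stack)
    return stack

shift_reduce(["Option", "Strict"])
# shift  -> ['Option']
# shift  -> ['Option', 'Strict']
# reduce -> ['OptionStrictStatement']
# reduce -> ['Statement']
```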

As you can see, an LR parser builds up the parse tree from the bottom: first it figures out the leaves and then tries to assemble those leaves into a coherent tree. An LR grammar is nice because it’s very easy to generate a set of tables that describe a machine that can process the grammar. So it’s easy to generate an efficient program to recognize the grammar. A downside of an LR parser is that when things go wrong you don’t have the context that an LL parser has; you don’t necessarily know what you’re expecting to come next, so you can have a harder time recovering from problems.

Another downside of an LR parser is that it may find ambiguities in the grammar that wouldn’t exist for an LL parser. For example, a token sequence might reduce into two different syntax productions that are totally distinct if you know what the context is. But since an LR parser doesn’t know what syntax production it’s looking for, the sequence is ambiguous. This means that coming up with a grammar for an LR parser can be difficult, because you get lots of conflicts that are not obvious to a human reading the grammar.

But all is not lost. That’s because there’s an improvement on the LR parser, called a GLR parser, that resolves many of these ambiguities. The “G” stands for “Generalized,” and what a GLR parser does differently from a regular LR parser is that it allows for multiple threads of parsing. In other words, when you run into a conflict where, say, a set of tokens can reduce into two different productions, you simply explore both possibilities. The odds are that in most grammars the ambiguity will resolve itself pretty quickly (i.e., one of the threads will reach an error state), so you just run all the possibilities in parallel and wait for one of the threads to survive the others. This fixes most of the conflicts caused solely by the nature of bottom-up parsing.
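Here’s a deliberately brute-force sketch of that “fork on a conflict” idea. The grammar is invented so that reducing ‘a’ is ambiguous (it could be an A or a B) until the next token arrives; a real GLR implementation shares work through a graph-structured stack instead of copying whole stacks like this.

```python
# Toy GLR-flavored recognizer: when more than one action is possible,
# fork a new parsing "thread" for each choice and let the wrong ones die.
#
# Invented grammar:  S -> A 'x' | B 'y'    A -> 'a'    B -> 'a'
# On seeing 'a', a plain LR parser can't decide between A and B.
GRAMMAR = {
    "S": [["A", "x"], ["B", "y"]],
    "A": [["a"]],
    "B": [["a"]],
}

def parse(tokens, goal="S"):
    threads = [((), tuple(tokens))]        # each thread: (stack, remaining input)
    while threads:
        next_threads = []
        for stack, rest in threads:
            if stack == (goal,) and not rest:
                return True                # one thread survived to the end
            # Fork a thread for every reduction that fits the top of the stack.
            for head, bodies in GRAMMAR.items():
                for body in bodies:
                    if list(stack[-len(body):]) == body:
                        next_threads.append((stack[:-len(body)] + (head,), rest))
            # And one more thread that shifts the next token.
            if rest:
                next_threads.append((stack + (rest[0],), rest[1:]))
        threads = next_threads
    return False                           # every thread hit a dead end

print(parse(["a", "x"]))   # True  - the A branch survives
print(parse(["a", "y"]))   # True  - the B branch survives
print(parse(["a", "z"]))   # False - both branches die
```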

As you might have guessed by the way I’ve written this entry, MGrammar is a GLR parser generator. There are some tweaks to the algorithm that we make, which I can discuss later, but the core of what we do is GLR. I wasn’t actually part of the original design team, so I can’t comment on exactly what led us to choose GLR over, say, LL, but my thinking above probably touches on some of the reasons.

I’d also add that LL and LR are not the last word in parser technology. There are other approaches to parsing, such as Parsing Expression Grammars (PEGs) and the related implementation strategy, packrat parsing. And research continues today, so I’m sure there’ll be even more strategies in the future.

Lexing and Parsing

I want to talk in more detail about how the MGrammar parser works, but before I delve too deeply into that I wanted to cover a few basic parsing concepts so we can be sure we’re on the same page. This may be a bit basic for some, so you can just skip ahead if you already know all this.

A language grammar is actually made up of two different grammars: a lexical grammar and a syntactic grammar. Although a grammar may not distinguish between the two in its specification (as we’ll see later), every grammar has these two things contained within it. The lexical grammar translates characters (the raw material of language) into lexical units called tokens. The syntactic grammar then translates the tokens into syntactic structures called syntax trees.

Let’s take a look at this in practical terms. Let’s say you’ve got the following sentence:

Alice kissed Bob.

Fundamentally, this sentence is just a collection of characters: the upper-case letter A, the lower-case letter l, the lower-case letter i, and so on. Lexically analyzing this string takes these characters and shapes them into the basic tokens of English: punctuation and words. So a lexical analysis might return the following tokens: the word Alice, the word kissed, the word Bob, and a period.

One thing to notice here is what’s missing in the list of tokens: every character in the string “Alice kissed Bob.” is represented in the token list except for two, the spaces between the words. We’ve dropped the spaces out of the token list because spaces aren’t significant in English syntax. (As anyone who was taught to type two spaces after a period can attest.) You can have as many spaces as you want between tokens and they don’t mean anything except as delimiters between the tokens. In other words, “AlicekissedBob.” is not the same as “Alice kissed Bob.”, but “Alice    kissed    Bob.” (with extra spaces) is the same as “Alice kissed Bob.” (Although it looks a little funny.) Since spaces don’t matter once we’ve formed the tokens, we toss them out.
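As a tiny illustration of that lexical step (in Python, with a made-up token definition rather than anything from a real grammar), here’s a lexer that keeps words and punctuation and silently drops the spaces:

```python
import re

# Split the raw characters into word and punctuation tokens;
# spaces match neither pattern, so they simply disappear.
def tokenize(text):
    return re.findall(r"[A-Za-z]+|[.,!?]", text)

print(tokenize("Alice kissed Bob."))          # ['Alice', 'kissed', 'Bob', '.']
print(tokenize("Alice    kissed    Bob."))    # same tokens; extra spaces ignored
```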

Once we’ve got the tokens, we can then proceed to do the syntactic analysis. In the case of English, a common syntax pattern is “<subject> <verb> <direct object>.” Looking at the sentence, we can pretty easily discern this pattern at work: the subject of the sentence is “Alice,” the verb is “kissed,” the direct object is “Bob,” and the sentence ends with a period. Since the verb links the subject and the direct object, we’d probably come up with a syntax tree that looks something like this:

[Diagram: syntax tree for “Alice kissed Bob.”, with the verb linking the subject and the direct object]
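In code form, that same tree might be sketched as a nested structure like this (the field names are just illustrative):

```python
# The verb sits at the top of the tree, linking the subject and the direct object.
syntax_tree = {
    "sentence": {
        "verb": "kissed",
        "subject": {"noun": "Alice"},
        "object": {"noun": "Bob"},
        "terminator": ".",
    }
}
```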

As I said before, some languages make a bigger deal of the lexical vs. syntactic distinction than others. For example, if you go and read the XML spec, you’ll see that it doesn’t really call out what’s a token and what’s syntax. On the other hand, the Visual Basic language specification is extremely explicit about the distinction between tokens and syntax, even going so far as to have two separate grammars. How you think about tokens vs. syntax has big implications for the implementation of your parser, which I’ll talk about in my next entry.

DSLs: Definitely a bad idea!

Last Friday Rocky posted an entry on his weblog entitled “DSLs – fun, cool, but maybe a bad idea?” and my reaction was:

Maybe a bad idea? Maybe a bad idea? Of course they’re a bad idea!

This may seem strange coming from someone who’s working on the heart of Oslo/M’s DSL capabilities, but stick with me here.

To begin with, writing a new language, domain-specific or not, is a lot like opening a restaurant: everyone thinks they can do it because they’ve eaten at one and it looks like fun. But the truth is that something like 50% of restaurants fail in the first year, and probably 90% fail over the long haul. It’s incredibly difficult and time consuming just to keep a restaurant alive as a going concern, but doing that while actually coming up with something people want to eat is nearly impossible. I’m amazed new restaurants open up at all, given the odds.

Languages are the same way. It’s mind-bendingly, stupendously difficult to build a new language, even one that’s largely based on an existing one. Yet many programmers think, “hey, I use languages, how hard can this be?” and go at it. Like restaurants, probably over 98% of them fail to ever gain any traction at all, but god bless the optimists because without them we’d never get the 2% of languages that succeed. I’m personally willing to sacrifice the millions of dollars and hours wasted on languages that never make it just so that we can get languages like C# and Java and Ruby and Python and so on. (One language omitted to preserve at least the appearance of humility.)

So the fact that coming up with a new language is a bad idea shouldn’t dissuade people from developing new DSLs; it should just give them pause and, hopefully, a little humility. The key, I think, is to start small and to stay small. A domain-specific language needs to stay, well, domain-specific and not try to branch out to become a general-purpose programming language. The problem is that it’s really, really hard to resist the temptation to do that. You just have to keep in mind that all the really successful DSLs have remained little languages, and repeat that to yourself every day. If you do that, you’ll be fine. Or, put another way:

Remember kids, DSLs don’t kill people, people kill people.