
Blogging is like waiting for the bus…

…nothing arrives for a long time and then a bunch of them show up together. (Apologies to Alan Moore for the corrupted quote.)

I was having a nice, leisurely week getting caught up but couldn’t think of anything inspiring to write about to save my life. Then I go and get very ill (ending up briefly in the ER to get rehydrated because it was after hours at the doctor’s offices, happy, happy, joy, joy) and I find lots of interesting things to blog about showing up in my inbox. Feh.

The fever seems to have broken (knock on wood) and some of the other fun aspects of whatever virus I’ve got seem to have passed, but I’m still only partially here. I’ll see if I can catch up some this weekend…

In defense of Krispy Kreme…

I could have written a comment on Raymond’s blog about his dis of Krispy Kreme, but then I thought I’d write a full-fledged entry. He says:

I don’t understand the appeal of KK donuts. They have no flavor; it’s just sugar.

…and with that, Raymond proves that he did not grow up in the South. First, it is a maxim of Southern cooking that anything that tastes good with sugar will taste even better with lots more sugar. Case in point: iced tea. Frankly, I don’t know how the rest of the world manages to drink tea without a ton of sugar dumped in it. My mom’s recipe for iced tea? Make about a quart of tea and then dump in 3/4 a cup of sugar. Sure, what you get tastes more like sugar water than tea, but that’s the point. Southerners would pour raw granulated sugar down their throats if their stomachs could handle it… Which, now that I think about it, may explain Krispy Kremes…

Of course, the reality is that my opinions of Krispy Kremes, like Raymond’s opinions of Dunkin Donuts, have been bred in. Growing up in North Carolina, home of Krispy Kreme, I’ve been eating those suckers since I was a kid. Nothing says “home” like Krispy Kreme, which is why the KK stores opening up in the Pacific Northwest have induced a kind of cognitive dissonance every time I drive by them. (It’s also weird to see stores that are so new… I’m more used to stores that are so old my parents went to them as kids. I also think those older machines make better doughnuts, but I could be wrong on that.) Now if we could just get a damn Chick-Fil-A franchise opened up in Seattle, I’d never have to go home to see my parents again!

I will have to admit, though, that Seattleites were totally out of control when the first KK opened in Issaquah. There were, like, hour-long lines at 4 in the morning. People would ask me, when they found out I was from a state that already had Krispy Kremes, “are they really that good?” My answer was always: “They’re really good but, hey, they’re still just doughnuts.”

(I will add that my parents are safe – I’ll continue to visit them until Bullock’s Bar B Que starts licensing franchises! Mmmmm…. hush puppies…)

The Ten Rules of Performance

An Updated Introduction: Seven Years Later (February 9, 2004)

I wrote this document a long time ago to capture the conclusions I had reached from working on application performance for a few years. At the time that I wrote it, the way in which Microsoft dealt with performance in its products was starting to undergo a large-scale transformation. Performance analysis up to that time had largely been an ad-hoc effort and was hampered by a lack of well-known “best practices” and, in some cases, the tools necessary to get the job done. As the transition started from the 16-bit world to the 32-bit world, this kind of approach was clearly insufficient to deal with the increasing popularity of our products and the increasing demands that were being placed on them. The past seven years have seen major changes in the way that performance is integrated into the development process at Microsoft, and many of the “rules” that I outline below have become internalized into the daily processes of most major groups in the company. Even though a lot of what follows is no longer fresh thinking, I still get requests for the document internally, which leads me to believe that there’s still value in saying things that many people already know. So I’ve decided to provide it publicly here, for your edification and in the hope that someone might find it useful. (Historical note: the “major Microsoft application” that I refer to below was not VB.)

A Short Introduction (May 28, 1997)

In the fall of 1995, I was part of a team working on an upgrade to a major Microsoft application when we noticed that we had a little problem — we were slower than the previous version. Not just a little slower, but shockingly slower. This fact had largely been ignored by the development team (including myself), since we assumed it would improve once we were finished coding up our new features. However, after a while the management team started to get a little worried. E-mails were sent around asking whether the developers could spend some time focusing on performance. Thinking “How hard could this be?” I replied, saying that I’d be happy to own performance. This should be easy, I thought naively: just tweak a few loops, rewrite some brain-dead code, eliminate some unnecessary work and we’ll be back in the black. A full two years (and many man-years’ worth of work on many people’s part) later, we finally achieved the performance improvements that I had been sure I could deliver single-handedly in a few months.

Unsurprisingly, I had approached the problem of performance with a lot of assumptions about what I would find and what I would need to do. Most of those assumptions turned out to be dead wrong or, at best, wildly inaccurate. It took many painful months of work to learn how to “really” approach the problem of performance, and that was just the beginning — once I figured out what to do, I discovered that there was a huge amount of work ahead! From that painful experience, I have come up with a set of “Ten Rules of Performance,” intended to help others avoid the errors that I made. So, without further ado…

The Rules:

Rule #1: Don’t assume you know anything.

In the immortal words of Pogo, “We have met the enemy, and he is us.” Your biggest enemy in dealing with performance is by far all the little assumptions about your application you carry around inside your head. Because you designed the code, because you’ve worked on the operating system for years, because you did well in your college CS classes, you’re tempted to believe that you understand how your application works. Well, you don’t. You understand how it’s supposed to work. Unfortunately, performance work deals with how things actually work, which in many cases is completely different. Bugs, design shortcuts and unforeseen cases can all cause computer systems to behave (and execute code) in unexpected, surprising ways. If you want to get anywhere with performance, you must continuously test and re-test all assumptions you have: about the system, about your components, about your code. If you’re content to assume you know what’s going on and never bother to prove you know what’s going on, start getting used to saying the following phrase: “I don’t know what’s wrong… It’s supposed to be fast!”

Rule #2: Never take your eyes off the ball

For most developers, performance exists as an abstract problem at the very beginning of the development cycle and a concrete problem at the very end of the cycle. In between, they’ve got better things to be doing. As a result, typically developers write code in a way that they assume will be fast (breaking rule #1) and then end up scrambling like crazy when the beta feedback comes back that the product is too slow. Of course, by the time the product is in beta there’s no real time to go back and redesign things without slipping, so the best that can be done is usually some simple band-aiding of the problem and praying that everyone is buying faster machines this Christmas.

If you’re serious about performance, you must start thinking about it when you begin designing your code and can only stop thinking about it when the final golden bits have been sent to manufacturing. In between, you must never, ever stop testing, analyzing and working on the performance of your code. Slowness is insidious — it will sneak into your product while you’re not looking. The price of speed is eternal vigilance.

Rule #3: Be afraid of the dark

Part of the reason why development teams find it so easy to ignore performance problems in favor of working on new features (or on bugs) is that rarely, if ever, is there anything to make them sit up and pay attention. Developers notice when the schedule shows them falling behind, and they start to panic when their bug list begins to grow too long. But most teams never have any kind of performance benchmark that can show developers how slow (or fast) things actually are. Instead, most teams thrash around in the dark, randomly addressing performance in an extremely ad hoc way and failing to motivate their developers to do anything about the problems that exist.

One of the most critical elements of a successful performance strategy is a set of reproducible real-world benchmarks run over a long period of time. If the benchmarks are not reproducible or real-world, they are liable to be dismissed by everyone as insignificant. And they must be run over a long period of time (and against previous versions) to give a real level of comparison. Most importantly, they must be run on a typical user’s machine. Usually, coming up with such numbers will be an eye opening experience for you and others on your team. “What do you mean that my feature has slowed down 146% since the previous version?!?” It’s a great motivator and will tell you what you really need to be working on.

Rule #4: Assume things will always get worse

The typical state of affairs in a development team is that the developers are always behind the eight ball. There’s another milestone coming up that you have to get those twenty features done for, and then once that milestone is done there’s another one right around the corner. What gets lost in this rush is the incentive for you to take some time as you go along to make sure that non-critical performance problems are fixed. At the end of milestone 1, your benchmarks may say that your feature is 15% slower but you’ve got a lot of work to do and, hey, it’s only milestone 1! At the end of milestone 2, the benchmarks now tell you your feature is 30% slower, but you’re pushing for an alpha release and you just don’t have time to worry about it. At the end of milestone 3, you’re code complete, pushing for beta and the benchmarks say that your feature is now 90% slower and program management is beginning to freak out. Under pressure, you finally profile the feature and discover the design problems that you started out with back in milestone 1. Only now with beta just weeks away and then a push to RTM, there’s no way you can go back and redesign things from the ground up! Avoid this mistake — always assume that however bad things are now, they’re only going to get worse in the future, so you’d better deal with them now. The longer you wait, the worse it’s going to be for you. It’s true more often than you think.

Rule #5: Problems have to be seen to be believed (or: profile, profile, profile)

Here’s the typical project’s approach to performance: Performance problems are identified. Development goes off, thinks about their design and says “We’ve got it! The problem must be X. If we just do Y, everything will be fixed!” Development goes off and does Y. Surprisingly, the performance problems persist. Development goes off, thinks about their design and says “We’ve got it! The problem must be A. If we just do B, everything will be fixed!” Development goes off and does B. Surprisingly, the performance problems persist. Development goes off… well, you get the idea. It’s amazing how many iterations of this some development groups will go through before they actually admit that they don’t know exactly what’s going on and bother to profile their code to find out. If you can’t point to a profile that shows what’s going on, you can’t say you know what’s wrong.

Every developer needs a good profiler. Even if you don’t deal with performance regularly, I say: Learn it, love it, live it. It’s an invaluable tool in a developer’s toolbox, right up there with a good compiler and debugger. Even if your code is running with acceptable speed, regularly profiling your code can reveal surprising information about its actual behavior.

Rule #6: 90% of performance problems are designed in, not coded in

This is a hard rule to swallow because a lot of developers assume that performance problems have more to do with code issues (badly designed loops, etc.) than with the overall application design. The sad fact of the matter is that in all but the luckiest groups, most of the big performance problems you’re going to confront are not the nice and tidy kind of issues where someone is doing something really dumb. Instead, they’re going to be difficult-to-pinpoint situations where several pieces of code interact in ways that end up being slow. Solving such problems usually requires a redesign of the way large chunks of your code are structured (very bad) or a redesign of the way several components interact (even worse). And given that most pieces of an application are interrelated these days, a small change in the design of one piece of code may cascade into changes in several other pieces of code. Either way it’s not going to be simple or easy. That’s why you need to diagnose problems as soon as you can and get at them before you’ve piled a lot of code on top of your designs.

Also, don’t fall in love with your code. Most programmers take a justifiable pride in the code that they write and even more pride in the overall design of the code. However, this means that many times when you point out to them that their intellectually beautiful design causes horrendous performance problems and that several changes are going to be needed, they tend not to take it very well. “How can we mar the elegance and undeniable usability of this design?” they ask, horrified, adding that perhaps you should look elsewhere for your performance gains. Don’t fall into this trap. A design that is beautiful but slow is like a Ferrari with the engine of a Yugo — sure, it looks great, but you certainly can’t take it very far. Truly elegant designs are beautiful and fast.

Rule #7: Performance is an iterative process

At this point, you’re probably starting to come to the realization that the rules outlined so far tend to contradict one another. You can’t gauge the performance of a design until you’ve coded it and can profile it. However, if you’ve got a problem, it’s most likely going to be your design, not your code. So, basically, there’s no way to tell how good a design is going to be until it’s too late to do anything about it! Not exactly, though. If you take the standard linear model of development (design, code, test, ship), you’re right: it’s impossible to heed all the rules. However, if you look at the development process as being iterative (design, code, test, re-design, re-code, re-test, re-design, re-code, re-test, …, ship), then it becomes possible. You will probably have to go through and test several designs before you reach the “right” one. Look at one of the most performance-obsessed companies in the software business: Id Software (producers of the games Doom and Quake). In the development of their 3D display engines (which are super performance-critical), they often will go through several entirely different designs per week, rewriting their engine as often as necessary to achieve the result they want. Fortunately, we’re not all that performance-sensitive, but if you expect to design your code once and get it right the first time, expect to be more wrong than right.

Rule #8: You’re either part of the solution or part of the problem

This is a simple rule: don’t be lazy. Because we all tend to be very busy people, the reflexive way we deal with difficult problems is to push them off to someone else. If your application is slow, you blame one of your external components and say “it’s their problem.” If you’re one of the external components, you blame the application for using you in a way you didn’t expect and say “it’s their problem.” Or you blame the operating system and say “it’s their problem.” Or you blame the user for doing something stupid and say “it’s their problem.” The problem with this way of dealing with these things is that soon the performance issue (which must be solved) is bouncing around like a pinball until it’s lucky enough to land on someone who’s going to say “I don’t care whose problem this is, we’ve got to solve it” and then does. In the end, it doesn’t matter whose fault it is, just that the problem gets fixed. You may be entirely correct that some boneheaded developer on another team caused your performance regression, but if it’s your feature it’s up to you to find a solution. If you think this is unfair, get over it. Our customers don’t blame a particular developer for a performance problem, they blame Microsoft.

Also, don’t live with a mystery. At one point in working on the boot performance of my application, I had done all the right things (profiled, re-designed, re-profiled, etc) but I started getting strange results. My profiles showed that I’d sped boot up by 30%, but the benchmarks we were running showed it had slowed down by 30%. My first instinct was to dismiss the benchmarks as being wrong, but they had been so reliable in the past (see rule #3) that I couldn’t do that. So I was left with a mystery. My second instinct was to ignore this mystery, report that I’d sped the feature up 30% and move on. Fortunately, program management was also reading the benchmark results, so I couldn’t slime out of it that easily. So I was forced to spend a few weeks beating my head against a wall trying to figure out what was going on. In the process, I discovered rule #9 below which explained the mystery. Case closed. I’ve seen many, many developers (including myself on plenty of other occasions) fall into the trap of leaving mysteries unsolved. If you’ve got a mystery, some nagging detail that isn’t quite right, some performance slowdown that you can’t quite explain, don’t be lazy and don’t stop until you’ve solved the mystery. Otherwise you may miss the key to your entire performance puzzle.

Rule #9: It’s the memory, stupid.

As I mentioned above, I reached a point in working on speeding up application boot where my profiles showed that I was 30% faster, but the benchmarks indicated I was 30% slower. After much hair-pulling, I discovered that the profiling method that I had chosen effectively filtered out the time the system spent doing things like faulting memory pages in and flushing dirty pages out to disk. Given that: 1) We faulted a lot of code in from disk to boot, and 2) We allocated a lot of dynamic memory on boot, I was effectively filtering a huge percentage of the boot time out of the profiles! A flip of a switch and suddenly my profiles were in line with the benchmarks, indicating I had a lot of work to do. This taught me a key to understanding performance, namely that memory pages used are generally much more important than CPU cycles used. Intellectually, this makes sense: while CPU performance has been rapidly increasing every year, the amount of time it takes to access memory chips hasn’t been keeping up. And even worse, the amount of time it takes to access the disk lags even further behind. So if you have really tight boot code that nonetheless causes 1 megabyte of code to be faulted in from the disk, you’re going to be almost entirely gated by the speed of the disk controller, not the CPU. And if you end up using so much memory that the operating system is forced to start paging memory out (and then later forced to start paging it back in), you’re in real trouble.
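
To put some rough numbers behind that (a back-of-envelope sketch using my own assumed figures, not measurements from those profiles), consider what faulting in 1 megabyte of code costs on a disk of that era:

    Imports System

    ' Back-of-envelope page-fault arithmetic. The page size is the standard x86
    ' value; the per-fault cost is an assumed ballpark for a mid-90s hard disk.
    Module PageFaultMath
        Sub Main()
            Const PageSizeBytes As Integer = 4096             ' x86 page size
            Const CodeFaultedBytes As Integer = 1024 * 1024   ' 1 MB of code faulted in at boot
            Const MsPerHardFault As Double = 10.0             ' assumed disk cost per hard fault

            Dim pages As Integer = CodeFaultedBytes \ PageSizeBytes
            Dim diskMs As Double = pages * MsPerHardFault

            Console.WriteLine("Hard faults: " & pages.ToString())            ' 256
            Console.WriteLine("Disk time:   ~" & diskMs.ToString() & " ms")  ' ~2560 ms
        End Sub
    End Module

A couple of seconds of pure disk time dwarfs whatever the CPU does with that code once it’s resident, which is exactly why the page-aware profiles told such a different story.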

Rule #10: Don’t do anything unless you absolutely have to

This final rule addresses the most common design error that developers make: doing work that they don’t absolutely have to. Often, developers will initialize structures or allocate resources up front because it simplifies the overall design of the code. And, to a certain degree, this is a good idea if it would be painful to do the initialization (or other work) further down the line. But oftentimes this practice leads to doing a huge amount of initialization so that the code is ready to handle all kinds of situations that may or may not occur. If you’re not 100% absolutely sure that a piece of code is going to need to be executed, then don’t execute it! Conversely, when delaying initialization code, be aware of where that work is going to go. If you move an expensive initialization routine out of one critical feature and into another one, you may not have bought yourself much. It’s a bit of a shell game, so be aware of what you’re doing.
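
As a concrete (and deliberately simplified) illustration, here’s a sketch of deferring an expensive initialization until first use; the class and member names are made up for the example, not taken from any real code:

    ' A minimal lazy-initialization sketch: the table is only built the first
    ' time somebody actually asks for a value, not when the object is created.
    Public Class SquareTable
        Private m_table As Integer()

        Public Function Lookup(ByVal index As Integer) As Integer
            ' Do the expensive work on first use instead of up front "just in case".
            If m_table Is Nothing Then
                m_table = BuildTable()
            End If
            Return m_table(index)
        End Function

        Private Function BuildTable() As Integer()
            ' Stand-in for an expensive setup step (reading a file, building a
            ' big structure, hitting the registry, and so on).
            Dim table(1023) As Integer
            Dim i As Integer
            For i = 0 To table.Length - 1
                table(i) = i * i
            Next
            Return table
        End Function
    End Class

Of course, if the first call to Lookup is itself on a critical path, you’ve only moved the cost rather than removed it, which is the shell game mentioned above.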

Also, keep in mind that memory is as important as code speed, so don’t accumulate state unless you absolutely have to. One of the big reasons why memory is such a problem is the mindset of programmers that CPU cycles should be preserved at all costs. If a result gets calculated at one point in the program and might be needed later on elsewhere, programmers automatically stash that result away “in case I need it later.” And in some cases, where the result really is expensive to compute, this is a good idea. However, oftentimes the result is that ten different developers each think, “All I need is x bytes to store this value. That’s much cheaper than the cycles it took me to calculate it.” And then soon your application has slowed to a crawl as the memory swaps in and out like crazy. Now not only are you wasting tons of cycles going out to the disk to fetch a page that would have been much cheaper to recalculate, you’re also spending more of your time managing all the state you’ve accumulated. It sounds counterintuitive, but it’s true: recalculate everything on the fly; save only the really expensive, important stuff.

© 1997-2004 Paul Vick

Bay .NET User Group talk, February 12th

Something that I think I’ve forgotten to mention up until now (shame on me!): I’m going to be giving a talk on VB .NET 2.0 at the Bay .NET User Group this Thursday, February 12th. The show starts at about 6:45pm and should run for about an hour and a half – more details can be found at the Bay .NET User Group page. Even though the blurb I wrote claims that I’m going to focus entirely on the language, I’m thinking I may talk about some related IDE features as well. Should be a good talk.

(I think that Addison-Wesley may also be sending along some pre-production copies of my book for raffling! Just a few more weeks until the real thing comes along!)

DevDays are coming!

Just wanted to make a quick plug for the upcoming DevDays presentations across the US. It’ll be a great event with lots of good information (and another chance to get a copy of the Whidbey technology preview!) and the smart client track is all going to be in VB.NET. Be there or be rectangular!

What does /optimize do?

Cory, while talking about VB compilation options, asks “what exactly can one expect to gain by enabling optimizations?” It’s a good question, and one that the docs are kind of vague about. Turning on optimizations, which is the default for release builds, does three major things at the compiler level:

  • It removes any NOP instructions that we would otherwise emit to assist in debugging. When optimizations are off (and debugging information is turned on), the compiler will emit NOP instructions for lines that don’t have any actual IL associated with them but which you might want to put a breakpoint on. The most common example of something like this would be the “End If” of an “If” statement – there’s no actual IL emitted for an End If, so if we don’t emit a NOP, the debugger won’t let you set a breakpoint on it. Turning on optimizations forces the compiler not to emit the NOPs.
  • We do a simple basic-block analysis of the generated IL to remove any dead code blocks. That is, we break each method apart into blocks of IL separated by branch instructions. By doing a quick analysis of how the blocks interrelate, we can identify any blocks that have no branches into them – code that will never be executed and can be omitted, making the assembly slightly smaller. We also do some minor branch optimizations at this point – for example, if you GoTo a statement that is itself a GoTo, we just optimize the first GoTo to jump straight to the second GoTo’s target.
  • We emit a DebuggableAttribute with IsJITOptimizerDisabled set to False. Basically, this allows the run-time JIT to optimize the code however it sees fit, including reordering and inlining code. This produces smaller and more efficient code, but it means that trying to debug the code can be very challenging (as anyone who’s tried it will tell you). I don’t know the actual list of optimizations the JIT performs – maybe someone like Chris Brumme will chime in at some point on this. (See the sketch just below for one way to check which setting your own assembly ended up with.)
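
Here’s a small sketch of that check (my own illustration, not anything from the compiler documentation) that reflects over the executing assembly and prints what its DebuggableAttribute says; it assumes a plain console project:

    Imports System
    Imports System.Diagnostics
    Imports System.Reflection

    ' Reads back the DebuggableAttribute (if any) that the compiler emitted for
    ' this assembly and reports whether the JIT optimizer was left enabled.
    Module CheckOptimize
        Sub Main()
            Dim asm As [Assembly] = [Assembly].GetExecutingAssembly()
            Dim attrs As Object() = asm.GetCustomAttributes(GetType(DebuggableAttribute), False)

            If attrs.Length = 0 Then
                Console.WriteLine("No DebuggableAttribute found.")
            Else
                Dim dbg As DebuggableAttribute = CType(attrs(0), DebuggableAttribute)
                Console.WriteLine("IsJITOptimizerDisabled = " & dbg.IsJITOptimizerDisabled.ToString())
            End If
        End Sub
    End Module

Built with /optimize+ you’d expect to see False here, and with /optimize- (and debugging information on) True, matching the description above.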

The long and the short of it is that the optimization switch enables optimizations that might make setting breakpoints and stepping through your code harder. They’re all low-level optimizations, so they’re really applicable to pretty much every kind of application in existence. They’re all also very incremental optimizations, which means that the benefit of any one optimization is likely to be nearly unmeasurable except in the most degenerate cases, but in aggregate they can be quite significant. I definitely recommend that everyone leave this option on for their release builds…

Minor site update

Just in case anything goes wrong: I made a slight update to the site today. Mostly cosmetic changes such as cleaning up my blogroll and trimming it down to blogs that I actually read rather than just browse. The biggest change is introducing caching for all the dynamic pages (i.e. the entire site) rather than just the RSS feed. I’ve been getting hit with a lot of bandwidth caused, I think, by people’s ISA servers fetching and refetching pages. Caching should help while I continue to investigate what’s going on.

Oh, and I banned my first IP address… Somebody’s running some kind of aggregator that has no host string and ignores caching. After pinging me every 8 minutes for the past week or so and eating up something like 125 meg all on their own, I figure that’s enough…

First we got the bomb, and that was good…

While catching up on the blog entries from when I was gone, I ran across Julia’s second-hand link to a flash version of Tom Lehrer’s The Elements song and I thought I’d pass it along as well.

I discovered Tom Lehrer when I was nine years old and in the fourth grade, which sounds incredibly strange to me now but didn’t seem so strange at the time. My best friend at the time, Tom Kraines, got the LP of That Was the Year That Was from God knows where and he made a tape of it for me. I would have to say that 90% of the humor of the album went completely over my head (I had no idea what genuflecting was, to say the least, or what nearly any of the political references meant), but we thought the songs were hilarious nonetheless. From there, of course, we went on to An Evening Wasted With Tom Lehrer and the rest was history. (The Oedipus Rex song still cracks me the hell up.)

The funny part of it all is that many things that I learned about from Tom Lehrer’s songs made no sense to me until some random point later in my life. The Elements and the last bit of Clementine were just funny bits to me until I discovered Gilbert and Sullivan in high school. And I was totally shocked when I got to Yale and first heard my new alma mater (Bright College Years) and The Whiffenpoof Song, realizing finally what the old Harvard man was lampooning in Bright College Days. (Now I know where the “tables down at Mory’s” are…)

Anyway, it’s one of those cherished bits of flotsam and jetsam from childhood, so…

Back from vacation

Just wanted to post a quick note and say that I made it back from vacation! At this point, I’m still not all here – Kenya and Tanzania are 11 time zones away from Seattle, so there’s a bit of jet lag involved – and I’ve got a lot to catch up on, so it’s going to be a little while before I get back into the swing of things.

With a trip as long and as varied as the one we took, it’s hard to succinctly talk about how it went, but I will say that the trip was wonderful! From start to finish, I’d have to say it was one of the most unique experiences I’ve ever had in my life and one that I could heartily recommend. East Africa is a fascinating place, both from a cultural perspective and a natural one. The wildlife was quite amazing and unique, and the people were just great. When all the pictures come back, I’ll see what I can do about putting some representative shots up.

Some thoughts on travel:

  • It is a sad fact of travel that you never know what you need (and how little you’ll need) until after the trip is over. Thankfully, we did not forget to take anything important – a fact that I attribute solely to the foresight of my wife, Andrea – but we did take a few things that were completely superfluous. This would have been less of a big deal if we hadn’t been travelling so much and for so long.
  • As a corollary to the previous point, it’s amazing how much use you can get out of a good pair of safari pants without needing to wash them.

Some thoughts on the trip:

  • When they found out that my wife lived in Kenya as a child, many Kenyans that we talked to apologized for the state that their country was in relative to the time she lived there. This was one of the saddest parts of our trip, especially given that the Kenyans we met, by and large, were incredibly nice, open and optimistic people. So to hear them say this about their own country was really heartbreaking. (With the recent change in the government, though, many people were hopeful that this situation is starting to turn around.)
  • Although I’ve long been intellectually aware of how incredibly wealthy the First World is relative to the Third World, this intellectual knowledge is nothing compared to actually experiencing it firsthand. Even experiencing it firsthand, it was sometimes hard to grasp.

Some recommendations:

  • If you’re looking for a luxury safari experience, you should definitely check out the Lewa Safari Camp. The tents are very comfortable, the food is amazing, the staff is great and the wildlife is spectacular. Yeah, if you’re going to Kenya or Tanzania, you really should head to the Serengeti/Masai Mara and see that too, but there is no question that Lewa was our favorite of the safari places that we went.
  • If you’re looking for a place to relax, you should definitely check out Loldia House. We stopped off there near the end of our trip, and it was one of the most restful places to stay I’ve ever been to. Beautiful scenery, wonderful food and just a great experience.

(I would just add – as great as Lewa Downs and Loldia House were, they’re only the tip of the iceberg. There are lots of great places to go and things to do in East Africa.)

  • I brought both a pair of tennis shoes and a pair of Lands’ End Men’s High Waterproof Lightweight Hikers boots. I shouldn’t have bothered with the tennis shoes – the boots were so comfortable, I just wore them everywhere.
  • And, finally, I can heartily recommend our travel company, Global Adrenaline. Our trip was very well organized and went off without a hitch, and they were extremely helpful in organizing our custom itinerary. It was a great experience working with them!

The final comment that I’d make is that it was interesting to see how my worries about travel to East Africa measured up to the reality. In the travel packet that we got from Global Adrenaline right before we left, they included a section that should have been called “Why This Trip Is A Very Bad Idea” or, alternatively, “Make Sure Your Will Is Current.” In it, they call out all the stuff that could go wrong on a trip like this – the current terrorism alerts for Kenya, a list of fun tropical diseases you can catch, and problems with petty theft, armed carjackings and the like. For a careful guy like me, this section made me extremely nervous. (I also made the fatal mistake of rereading them on the plane ride there. My advice: once you’re headed on your way, either you’re ready or you’re not, so don’t obsess about it.)

What was interesting, though, was how relative all that stuff turned out to be. The “high risk” of terrorism, disease and crime is really relative to the US, where the chance of any of these things happening (despite media scare tactics) is incredibly low. In other words, a “high risk” of something bad happening really means something like a 15% chance of something bad happening rather than a 0.5% chance of something bad happening. And if you take relatively simple precautions (avoid certain hotels, get the correct vaccinations and medications, watch your possessions, avoid walking in certain places at night), the chance of anything bad happening goes way down.

All this would be pretty academic if the East African economies weren’t so dependent on tourism. All these issues have had a real and material impact on the fortunes of the places we visited, which is a real shame. Without my wife’s adventurous spirit, I doubt I ever would have overcome my fears and gone on the trip, but now that I’m back, I’m really glad I did it!

So, in summary: The trip was great, it’s wonderful to be home, and there’s more to come…