Sunday, November 22, 2009

Angry Players Make Sunday More Interesting

Youtopia has been growing quickly the last couple of weeks. It's fun to watch and the team is really excited about it. Of course, with the growth comes a lot of performance tuning of our code. Today we hit an issue I wasn't expecting at all...

We've been running Windows 2008, IIS7, and ASP.NET 3.5 in production for a while now, but haven't had to do much of any performance tuning. It just works, and is fast. Which is awesome!

But today, Youtopia was running slowly and requests were hanging so I investigated. The databases were performing normally and not having any locking issues. The network looked good. The memcached cluster was healthy. The queueing service looked great. The ASP.NET performance counters even looked good at first glance.

None of the diagnostic performance monitors I'd used in the past (such as Requests in Application Queue) showed the issue, but requests were absolutely being queued -- or otherwise not processed immediately. There were also plenty of free worker and IOCP threads. The only thing that clued me in was the Pipeline Instance Count and Requests Executing counters were exactly the same (96) on all the servers. So I started investigating from there.

It turns out that due to the way the IIS7 ASP.NET integrated-mode threading model functions, there is a (configurable) request limit of 12 per CPU. We hit this limit in Youtopia today because we hold requests open for asynchronous Comet-like communications and there were over 288 people online simultaneously. Our three eight-core web servers each had 96 (8*12) people connected to them and weren't really serving any other requests. We aren't running into any thread configuration limits, as the long-running requests are asynchronous and not using ASP.NET worker threads.

Here are a few great links that came out of my research.

With ASP.NET 3.5 SP1 it boils down to a simple configuration file change. Use something like this in the aspnet.config file (on x64 it's at C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet.config). The values shown are the defaults; adjust maxConcurrentRequestsPerCPU to suit your needs.

<system.web>
  <applicationPool maxConcurrentRequestsPerCPU="12" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>
</system.web>

In addition, the application pool needs to be configured to allow more requests; by default it only allows 1000 concurrent requests. This is done under the Advanced Settings for the application pool in the IIS 7 manager. Set Queue Length to 5000 to match the requestQueueLimit setting above.
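If you script your server builds, the same setting can be flipped from the command line with appcmd. A hedged one-liner (the pool name is made up; double check the attribute name against your IIS version):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /queueLength:5000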

Monday, November 16, 2009

Ditch Your Events (Part 1)

About four months ago Max, Hive7's Lawful Evil CEO, decided we needed to take our games to the next level and build something fun and accessible that everyone who plays "those farming games" would want to play. We all brainstormed, pitched our ideas to the company, and everyone voted by comparing every idea against every other – I wish we had a digital photo of the giant matrix on the whiteboard. There were a bunch of great ideas, but in the end... I won! Youtopia was born.

Youtopia was released to the public about three months from its inception. Hats off to the dev and art team for pulling this one together. A new technology for the developers and fully animated objects for the art team led to much blood, sweat, and tears, but we got 'er done! Of course, we're still actively developing Youtopia, and there are lots of great things planned for the future! But, back to my tech article...

It's been a long time since I've stepped out of my comfort zone and learned a new (to me) technology. Don't get me wrong, I'm always experimenting with the latest .NET based thingie-ma-bobbers out there, but I haven't used a completely foreign development environment since C#/.NET came out over eight years ago. But for this project I needed to learn Flash/AS3, and it needed to be done yesterday. Luckily for me nobody else on our dev team knew Flash so I could still pretend like I knew what I was talking about and make lots of (un)educated architectural decisions without anyone being the wiser!

One such recent decision was to use an event driven property binding system. Youtopia's engine is based on a great open source game engine, brought to you by some of the Dynamix/GarageGames people, called the PushButton Engine (or PBE). In PBE there is a class called PropertyReference. This class facilitates a late-bound approach for one component to read the value of a property (member variable or getter/setter) on another component. It's a pretty cool pattern, but it requires you to poll the target component whenever you want to know if the property changed. This works fine when you're talking about tens or hundreds of components. But in Youtopia we have thousands of entities in the scene at once. We needed this binding to be event-driven.

Of course, with my .NET background I immediately reached for the INotifyPropertyChanged pattern used in .NET's data binding infrastructure. With INotifyPropertyChanged it is the responsibility of the object owning the property to raise an event whenever a property value changes. Any listeners will then immediately know they need to poll for the new value if they want it.
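For reference, here's roughly what that pattern looks like on the .NET side (a minimal C# sketch, not Youtopia code):

using System.ComponentModel;

public class Player : INotifyPropertyChanged
{
    private int _gold;

    public event PropertyChangedEventHandler PropertyChanged;

    public int Gold
    {
        get { return _gold; }
        set
        {
            if (_gold == value)
                return;
            _gold = value;
            //tell listeners which property changed; they poll for the new value if they care
            OnPropertyChanged("Gold");
        }
    }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}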

This works great in .NET and is very performant. But in Flash, events are a whole other story. They are an extremely feature-rich subsystem that I don't really want to get into here. In the end, all the features and memory allocations involved in raising an event add up to worse performance than Youtopia can afford. We need every bit of CPU power on that single Flash thread and really shouldn't be wasting it raising events.

So, I shamelessly copied the .NET patterns and brought them over to AS3. Let's start at the core. In order for things to perform their best, I couldn't use the built-in Events. Though Troy did the benchmarking legwork, he didn't provide an implementation we could use to register callbacks and call multiple functions. So, I wrote a MulticastFunction that behaves a whole lot like the MulticastDelegate in .NET. Usage is really straightforward.

var func:MulticastFunction = new MulticastFunction();

//register my listener callback
func.add(
    function():void
    {
        //this callback does amazingly cool stuff
        trace("hello from the callback");
    });

//calls all the callbacks that have been added, in the order they were added
func.apply();

As you can see, dealing with the MulticastFunction is a lot like the EventDispatcher, but each MulticastFunction is only designed to be used for a single event. So, to use it for events, create a public getter on your class named something reasonable and add your callbacks to it. Done!
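For example, a component might expose a "moved" event like this (a hypothetical sketch; all the names are made up):

public class SpatialComponent
{
    //one MulticastFunction per "event"
    private var _moved:MulticastFunction = new MulticastFunction();

    public function get moved():MulticastFunction
    {
        return _moved;
    }

    public function setPosition(x:Number, y:Number):void
    {
        //...update the position here...
        _moved.apply(); //notify the listeners
    }
}

//a listener elsewhere just does: spatial.moved.add(onSpatialMoved);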

Ok, I realize I keep talking about event dispatching speed, but haven't put my money where my mouth is. I wrote some benchmarks of my own and here is the output with a release build, in the latest standalone Flash 10 player. It does five test runs. Download the Source

running tests...
Event dispatching took 848ms
MulticastFunction took 355ms

running tests...
Event dispatching took 846ms
MulticastFunction took 351ms

running tests...
Event dispatching took 834ms
MulticastFunction took 352ms

running tests...
Event dispatching took 836ms
MulticastFunction took 351ms

running tests...
Event dispatching took 823ms
MulticastFunction took 343ms

Yup, that's right. MulticastFunction is nearly 2.5x faster, and I haven't spent much time tuning it. For example, it's using an Array under the hood and doing more work than it needs to during the apply call. Events will also become less performant over time, as you have to create (and potentially clone) Event objects for every dispatch, causing a lot of garbage collection pressure. Here's the MulticastFunction, with lots of comments, or you can download the source:

package com.jdconley
{
    /**
     * A wrapper that mimics the synchronous behavior of the MulticastDelegate used in .NET for events.
     * This doesn't support any of the async methods, as we don't have free threading here.
     * It also doesn't support return values.
     * See: http://msdn.microsoft.com/en-us/library/system.multicastdelegate.aspx
     */
    public class MulticastFunction
    {
        private var _functions:Array = [];
        private var _iterators:int = 0;

        /**
         * Adds a function to be called when apply is called.
         * If the function is already in the list it won't be added twice.
         * Returns true if the function was added.
         **/
        public function add(func:Function):Boolean
        {
            var i:int = _functions.indexOf(func);
            if (i > -1)
                return false;

            //add new functions to the end so they are picked up live during an apply
            _functions.push(func);
            return true;
        }

        /**
         * Removes a function from the list of functions called when apply is called.
         * Returns true if the function was removed.
         **/
        public function remove(func:Function):Boolean
        {
            var i:int = _functions.indexOf(func);
            if (i < 0)
                return false;

            if (_iterators == 0)
                _functions.splice(i, 1);
            else
                _functions[i] = null;

            return true;
        }

        /**
         * Synchronously applies all functions that have been added.
         * Functions can be safely added or removed during an apply and changes will take effect immediately.
         * Added functions will be called, and removed functions will not.
         **/
        public function apply(thisArg:*=null, argArray:*=null):void
        {
            _iterators++;
            var holes:Boolean = false;

            for (var i:int = 0; i < _functions.length; i++)
            {
                var f:Function = _functions[i];
                if (f == null)
                    holes = true;
                else
                    f.apply(thisArg, argArray);
            }

            //cleanup holes left by removing functions during this apply call.
            //if any of the function apply's throw an error the state of _iterators will be off.
            //but, we'll only leak array slot memory if functions are removed.
            //putting a try/finally or try/catch block here significantly decreases performance.
            if (--_iterators == 0 && holes)
            {
                for (i = _functions.length - 1; i >= 0; i--)
                {
                    if (_functions[i] == null)
                        _functions.splice(i, 1);
                }
            }
        }

        /**
         * Removes all functions from the list. Stops the current apply call, if there is one.
         **/
        public function clear():void
        {
            _functions = [];
        }
    }
}

Although capture, bubble, weak references, and priority are handy features of the Flash eventing system, they're not always necessary and will hurt your performance when you might have thousands of them firing per frame.

In Part 2 we'll put this MulticastFunction to use in a more meaningful way with the INotifyPropertyChanged implementation.

Friday, November 13, 2009

Anyone still out there?

Wow, I haven't posted in a while. In recent months I've been focused intently on a few things.

  1. Babies! My wife and I had twins in February.
  2. Learning a new technology while shipping an amazing game at Hive7.
  3. Working on a cool open source project.

I won't bore all you geeks with the baby stuff. If you can find the link to my personal blog you can go look at lots of pictures.

You should all check out Youtopia (the new game we shipped). We're really proud of this one.

So, drumroll please... *in my most awesome announcer voice* And, the new technology is... Flash! That's right, this Microsoft fanboy is now in the Flash camp. I really wish I could be working with Silverlight, but well, you can't build a game that runs on Facebook and make people install something. It just won't work. Once Silverlight has a market share more like Flash Player, then we're in business.

What do I dislike most about Flash? The development environments (yes, plural) for Flash pale in comparison to Visual Studio. Compiling is slow. Stuff crashes a lot. Heck, I even got the compiler to throw a null pointer exception on a few occasions! Debugging is a pain. The garbage collector isn't very fast. You only have one thread to work with. Hey Adobe, is it still 1998?

All that being said, Flash (and more specifically Actionscript 3 and Flash Player) is actually really mature now and a decent piece of technology. It has most things a developer looks for in a language/runtime. And, well, it allows us to create a really rich and interactive experience that runs in your browser and doesn't require you to install anything. Obviously the business case here wins out over my whining.

I think I've spent enough time talking. Coming very soon, a useful post that contains lots of great technical info from the perspective of a C# junkie diving head first into Flash.

Friday, June 26, 2009

Functional Optimistic Concurrency in C#

A few months ago Phil Haack wrote about how C# 3.0 is a gateway drug to functional programming. (Yeah, that's how long ago I started writing this blog.) I couldn't agree more. I find myself solving problems using functional rather than imperative programming quite often nowadays. It's much more elegant for many problem spaces.

Before we go any further, here's the sample app used for this article. Even if you don't like my writing, you should play with it. Yeah, you! optimistic-concurrency.zip

One problem space that fits very well with functional patterns is in developing apps that have to use optimistic concurrency to maintain data consistency at scale. Here at Hive7 we build PvP games. In such games, multiple people and background processes are often affecting the same entity at the same time. We can't use coarse grained locks or high isolation levels in MS-SQL, or the whole game would come to a halt. Here's a common scenario in a game like Knighthood:

Multiple rival lords are attacking my Kingdom at once trying to steal my most prized vassal, my wife! My wall is staffed with a heavy defense, and my hospital has a strong set of medics healing my kingdom over time. But to keep a handle on the attack I also have to continuously spend gold to heal my defensive army.

In this common use case there are a number of subtleties. First, multiple people are attacking me at once. That means they're doing damage to my defenses in real time, and at the same time. My hospital is healing my vassals over time. This occurs in a background process once every few minutes. And I'm triggering an instant heal to my defensive vassals using my gold supply. My Marketplace is also generating gold for me over time in another background process. To top it all off, this is happening across a cluster of application servers that are certain to be processing multiple requests simultaneously. Phew!

So what does all that mean? Well, basically, there are a lot of possibilities for change conflicts. And we have to deal with those conflicts to both keep a consistent data model and perform well.

There are a number of potential strategies for managing these change conflicts in the persistent store – a few beefy Microsoft SQL Server databases in our case. We chose to go with optimistic concurrency and an abort-on-conflict transaction strategy. That basically means when we write data to the database we make sure we are always writing the most recent version of a row. If an application attempts to write an old version of the row, the data access layer throws an exception and aborts the transaction. Knighthood uses NHibernate, so the validation is done for us automatically using a simple version number on the row (there's a sketch of the mapping after the list below). The basic algorithm is:

  1. Read data and serialize into objects (done by NHibernate)
  2. Modify objects in code
  3. Tell NHibernate to persist the changes, which does the following

    1. Increments the version number
    2. Finds all the changes and batches up insert/update calls
    3. Uses the version number in the WHERE clause of updates like: "UPDATE Table SET Col1='blah' WHERE Version=36"
    4. Checks the rows modified reported by SQL server and throws an exception if it's an unexpected number

As you can imagine, this fails regularly in a high concurrency scenario, but it succeeds orders of magnitude more often than not. It's also pretty standard for any web app nowadays.

The only problem is, to preserve consistency, an exception is thrown and the transaction is aborted when change conflicts occur. That means whatever request the application or user issued fails. We could show the user a friendly error message, but that would be a frustrating experience. Nobody likes seeing errors for non-obvious reasons. And in the case of headless software running in the background the error would just be in a log somewhere. If it's something important that needs to happen, then we have to make sure it gets done! So us imperative programmers devise a retry scheme and write a loop with an exception trap around our code. Maybe you get clever and create a class that does this which raises an event any time you need to execute your retry-able code. But, this gets pretty cumbersome. Enter functional programming!

We have a little class named DataActions that is used to simplify and consolidate this retry process and make it painless to use. I'm going to use LINQ to SQL as the example here. Here's some usage code:

DataActions.ExecuteOptimisticSubmitChanges<GameDataContext>(
    dc =>
    {
        var playerToMod = dc.Players.Where(p => p.ID == playerId).Single();
        SetRandomGold(playerToMod);
    });

As you can see it's really straightforward. Notice all the goodness going on there. We don't have to instantiate our own DataContext, manually submit the changes, or worry at all about transactions. It's all handled by the wrapper. And, you just have to provide some code to execute once the DataContext has been instantiated.

The ExecuteOptimisticSubmitChanges helper method itself is pretty simple as well:

public static void ExecuteOptimisticSubmitChanges<TDataContext>(Action<TDataContext> action)
    where TDataContext : DataContext, new()
{
    Retry(() =>
    {
        using (var ts = new TransactionScope())
        {
            using (var dc = new TDataContext())
            {
                action(dc);
                dc.SubmitChanges();
                ts.Complete();
            }
        }
    });
}

And, finally, we have the Retry method:

public static void Retry(Action a)
{
    const int retries = 5;
    for (int i = 0; i < retries; i++)
    {
        try
        {
            a();
            break;
        }
        catch
        {
            if (i == retries - 1)
                throw;

            //exponential/random retry back-off.
            var rand = new Random(Guid.NewGuid().GetHashCode());
            int nextTry = rand.Next(
                (int)Math.Pow(i, 2), (int)Math.Pow(i + 1, 2) + 1);

            Thread.Sleep(nextTry);
        }
    }
}

When you string all this together you get pseudo-stacks that look like:

MyCode
ExecuteOptimisticSubmitChanges
Retry
ExecuteOptimisticSubmitChanges
MyCode

So, why should you care? The calling code is really easy to read, and you get a number of other benefits with this code. In addition to handling exceptions caused by concurrency errors, you also get retries on deadlocks and the more common SQL connection errors.
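One caveat: the Retry above traps every exception. In production you may want to retry only errors that are actually transient. Here's a hedged sketch of such a filter (not in the sample; note the SqlException may be wrapped, so it walks InnerException):

using System;
using System.Data.SqlClient;

public static class TransientErrors
{
    //1205 = deadlock victim, -2 = client timeout
    public static bool IsTransient(Exception ex)
    {
        for (; ex != null; ex = ex.InnerException)
        {
            var sqlEx = ex as SqlException;
            if (sqlEx != null && (sqlEx.Number == 1205 || sqlEx.Number == -2))
                return true;
        }
        return false;
    }
}

The catch in Retry then becomes catch (Exception ex) { if (!TransientErrors.IsTransient(ex) || i == retries - 1) throw; ... }, and you'd also treat your ORM's concurrency exception as retryable (ChangeConflictException in LINQ to SQL, for example).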

I put together a little sample application you can play with. It uses these helpers and has a SQL database with it. The sample simulates really high concurrency and you can watch it deal gracefully with deadlocks. Then you can change line 29 of Program.cs and execute the same concurrent code without retries enabled. It outputs the number of failed transactions and a bunch of other interesting stuff to the console. Here's some example output:

...
Retrying after iteration 0 in 1ms
Retrying after iteration 0 in 0ms
Thread finished with 0 failures. Concurrency at 3
Retrying after iteration 1 in 3ms
Retrying after iteration 1 in 4ms
Thread finished with 0 failures. Concurrency at 2
Retrying after iteration 2 in 5ms
Thread finished with 0 failures. Concurrency at 1
Retrying after iteration 3 in 15ms
Thread finished with 0 failures. Concurrency at 0

0 total failures and 7 total retries.
All done. Hit enter to exit.

And the same test run with retries disabled:

...
Starting worker. Concurrency at 8
Thread finished with 0 failures. Concurrency at 7
Thread finished with 0 failures. Concurrency at 6
Thread finished with 1 failures. Concurrency at 5
Thread finished with 1 failures. Concurrency at 4
Thread finished with 1 failures. Concurrency at 2
Thread finished with 2 failures. Concurrency at 3
Thread finished with 0 failures. Concurrency at 1
Thread finished with 2 failures. Concurrency at 0

7 total failures and 0 total retries.
All done. Hit enter to exit.

Here's the download link again: optimistic-concurrency.zip

Let me know if you have any questions.

Tuesday, June 2, 2009

A new basket for my eggs

Hopefully after reading that title you're thinking of the old adage "Don't put all your eggs in one basket" and not something crude. Ok, I admit, either way it works for me. You're still reading.

Let me preface by saying I love what's going on at Hive7, but a guy's gotta have a side project. In fact I wrote about this phenomenon a while back. And, in my mind, that side project might as well make me some lunch money.

For the last two years or so I've been really interested in digital photos and the untapped markets that lie within. In fact, I got introduced to Hive7 while trying to sell myself to an investor to get some angel funding in the space. I'm not a pro photographer wannabe or anything like that. I just think digital photos are a great medium for sharing life with friends and family. I have built Friend Photosaver for Facebook (a screen saver using Facebook photos), Photo Feeds for Facebook (automatic photo RSS creator for Facebook), and Photozap (a tool to download Facebook photos as a zip file).

Those applications are all pretty cool, but didn't really strike me (or anyone else) as especially compelling. But, they did lead me down the path of building something that I think is pretty interesting.

Everyone has a digital camera or cell phone camera. When you go to a social gathering of any sort there are usually tens to hundreds of photos taken. Think of weddings, birthdays, graduations, family BBQs, night clubs... What happens to these photos? Someone copies them to their computer, or uploads them to a photo sharing web site. They send out links or maybe share the photos through a social network's tagging or posting features or some such. That's all fine and good, but I think there's more to be had.

Enter pixur.me. Quoting the about page:

Pixur.me is a different kind of online photo sharing service. Our mission is to focus on the person receiving photos, rather than the one taking them. There are a lot of great services where you can organize your own photos and share them with people, but we think that's only half of the equation.

Can you find all the cute pictures of your kids from your last family vacation? Or how about all the photos from your wedding that your guests took? Could your mother find those same photos?

You could if your family was using pixur.me! What if all the photos that everyone took at that last vacation or your wedding were in one spot? Even though Aunt Sue uses Flickr, and you use Facebook, and your mother uses Picasa. That's pixur.me. Create a Stream and see for yourself! Once your stream is created anyone can add photos to it, regardless of where they are stored online.


That's it. Another basket awaiting some eggs. Give it a spin and let me know what you think. Of course, it's not very interesting if you just use it by yourself. Create a stream and give out the link at your next gathering. Or maybe start a stream that your extended family can add photos to so grandma can see them all in one spot.

Oh yeah, I almost forgot this is a technical blog. This project started out as a technology experiment so it's built on Windows Azure and ASP.NET MVC. Very cool stuff. I'll have to write more about them later...

Thursday, March 12, 2009

ioDrive, Changing the Way You Code

Introduction

In my lifetime there have been very few technologies that have created a paradigm shift in the software industry – I was born just after the spinning magnetic hard drive was created. Off the top of my head I can think of: the Internet (thanks Al!), optical disks, Windows, and parallel computing. From each of these technologies entirely new software industries were born and development methodology drastically changed. We're at the beginning of another such change, this time in data storage.

Random Story for Context

At Hive7 we make web based social games for platforms like Facebook and Myspace. We're a tiny startup, but producing a successful game on these platforms means we're writing code to deal with millions of monthly users, and thousands of simultaneous users pounding away at our games. Because our games are web based they're basically written like you'd write any other web application. They're stateless, with multiple RDBMS back end servers for most of the data storage. Game state is pretty small so we don't really store that much data per user. We don't have Google sized problems to solve or anything. Our main problem is with speed.

When you're surfing the web you want it to be fast but can live with a page taking a few seconds to load here and there. When you're playing a game, on the other hand, you want instant gratification. A full second is just way too long to wait to see the results of your action. Your character's life might be on the line!

To accomplish this speed in our games we currently buy high end commodity hardware for our database servers, and have a huge cluster of memcached that we tap into. It works. But, properly implementing caching is complex. And those DB servers are big 3U power hungry monsters! Here's a typical disk configuration of one of our DB servers:



Each of those drives is 15k RPM 72 GB SAS (or whatever the fastest is at the time of build). And the RAID controllers are very high end with loads of cache. And here's the kicker! We can only use about 25% of the capacity of these arrays before the database write load gets too high and performance starts to suffer. They cost us about $10k apiece. Sure, there are much more complex architectures we could use to gain performance. Or we could spend a few hundred grand and pick up a good SAN of some sort. Or we could drop some coin for Samsung SSDs. But, those options are a bit outside the price we want to pay for our hardware, not to mention the necessary rack space and power requirements.

Enter the ioDrive. With read/write speeds that are very close to the 24 SSD monster that Samsung recently touted, at a way lower price, I have a hard time imagining choosing the 24 drive option. Maybe if you had massive storage requirements, but for pure performance you can't beat the ioDrive price/performance ratio right now. I don't remember if I'm allowed to comment on pricing, but you can contact a sales rep at Fusion-io for more info.

Last month we picked up one of these bad boys for testing. In summary, "WOW!" I spent a few hours this week putting the ioDrive through the wringer and comparing it to a couple different disk configurations in our datacenter. My main goal was to see if this is a viable option to help us consolidate databases and/or speed up existing servers.

The Configuration

ioDrive System (my workstation)

  • Windows Server 2008 x64 Standard Edition
  • 4 CPU Cores
  • 6 GB Ram
  • 80 GB ioDrive
  • Log and Data files on same drive

Fast Disk System

  • Windows Server 2008 x64 Standard Edition
  • 8 CPU Cores
  • 8 GB Ram
  • 16 15k RPM 72 GB SAS Drives (visualization above)
  • Log and Data files on different arrays

Big and Slow Disk System

  • Windows Server 2008 x64 Standard Edition
  • 4 CPU Cores
  • 8 GB Ram
  • 12 7200 RPM 500 GB SATA Drives
  • Log and Data files on different arrays

Test Configuration

For this test I used SQLIOSim with two five minute test runs. We were really only interested in simulating database workloads. If you want a more comprehensive set of tests check out Tom's Hardware. I should also mention that this was obviously not a test of equals. Both disk based systems have a clear RAM advantage and the fast disk system has a clear CPU advantage. The hardware chipsets and CPU's are also slightly different, but they're the same generation of Intel chips. In any case, when you see the results you'll see how this had a negligible effect. We're talking orders of magnitude differences in performance here...

I ran two different configurations through SQLIOSim. One was the "Default" configuration that ships with the tool. It represents a pretty typical load on a SQL Server disk system for a general use SQL server. The other was one I created called "Write Heavy Memory Constrained". The write heavy one was designed to simulate the usage in a typical game, where, due to caching, we have way more writes than reads to a database. Also, the write heavy one is much more parallel. It uses 100 simulated simultaneous random access users where the default one has only 8. And, with the write heavy one there is no chance the entire data set can be cached in memory. It puts a serious strain on the disk subsystem.

I took the output from SQLIOSim and imported it into Excel to do some analysis. I was primarily concerned with two metrics: IO Duration and IO Operation count. These two things tell me all I need to know. First, how long does it take the device to perform IO on average, and how many can it get done in the given time period.

Test Results

Write Heavy Memory Constrained Workload
Metric                            ioDrive       Slow Disks       Fast Disks
Total IO Operations               10,625,381    1,309,673        3,260,725
Total IO Time (ms)                17,625,337    1,730,147,612    356,839,912
Cumulative Avg IO Duration (ms)   1.66          1,321.05         109.44


Wow, 100x faster IO's on average!


Over 20x less time spent doing IO operations!


And over 3x more operations performed. This would have been way higher, but the ioDrive system was CPU constrained, taking 100% CPU. Looks like we'll be loading up at least 8 cores in any database servers we build with these cards!

Default Workload
Metric                            ioDrive       Slow Disks       Fast Disks
Total IO Operations               690,753       287,180          456,300
Total IO Time (ms)                3,616,903     231,859,576      93,991,055
Cumulative Avg IO Duration (ms)   5.24          807.37           205.99


40x faster on average in this workload! Looks like the bulk operations and larger IO's present in this workload narrowed the gap a bit.


This time, a little under 30x less time spent doing IO operations!


Only 1.5x more total operations this round. This time we weren't CPU constrained, and I didn't take the time to dig in to the "why" on this one. Based on the raw data I would guess this is caused by IO blocking a lot more often for ioDrive than the fast RAID system. This probably has to do with the caching system in the RAID cards under this mixed write workload. You'll notice if you look at the raw report, that the ioDrive has no read or write cache at the device level. It doesn't really need it.

In case you want to see the raw data or the SQLIOSim configuration files, you can download the package here: ioDrive Test Results

Conclusion

Wow! ioDrive is going to be scary fast in a database server, especially when it comes to tiny random write IO's, parallelism, and memory constraints. I think we'll be seeing a lot of new interesting software development and system architectures due to this type of technology. The industry is changing. You no longer need either tons of cache (or cash) or tons of RAM to get great performance out of your data store. We're talking 100x better performance than our fast commodity arrays. I think it's safe to say we'll be using these devices in production in the near future. Since this device is currently plugged into my workstation, maybe I'll post another review about how it's improving my development productivity so you can convince your boss to buy you one. :)

Monday, February 9, 2009

You might be a great hacker if you...

A number of years ago I was doing some mentoring at a California state agency that shall remain nameless. I got my butt up in time to be into their office at 8am. (Ok, I'll be honest, usually I got up in time. I was late on a few occasions.) I led them down the path of learning ASP.NET from scratch. Together we built a great product that is still in use today on a highly trafficked web site. Some time late in the mentoring project a student came up to me and asked the strangest question. He wanted to know how I learned everything I was teaching them. He wanted to take the same classes.




The guy was an amazing engineer. He was methodical, had great documentation, dotted all his i's and crossed all his t's. But he wasn't a great hacker. He was a bit slow, and didn't have much creativity. It was around that time I started paying more attention to the traits of great hackers, before Paul coined the term. The bastard! At that time I was really just looking for people that learned quickly and could get things done faster than the rest. Some day maybe I'll figure out how to be all witty and important and coin terms.

So, here we go, in no particular order – except the first one, which is the obvious transition from my enticing story above:

  • started off as a script kiddie

    A typical scenario looks something like the following, though could occur in any area (not just video games):

    1. Play Quake 2
    2. Get stupidly good at Quake 2
    3. Get bored with Quake 2
    4. Figure out how to cheat by writing scripts to rocket jump or speed hack
    5. Realize what you just did was coding, and it's fun and amazingly rewarding
    6. Change all direction in life (yes, even when you're 12 years old) so you can do more coding

  • often forget to shave

    "Wait a minute, you mean people do their own laundry?" Yes, you are exceptionally lazy. That's ok. You have more important things to do in life than worry about how you smell/look.

  • have ever worked on a project for 24 hours straight

    "Hold on! You don't work without sleep until a problem is solved?" And no, last minute procrastination doesn't count, and neither do production outages. Everyone's been there. I'm talking about all night sessions working to solve a problem that could have been done over a few normal working days

  • instantly quiet a room when you speak

    You don't talk much. You spend most of your time listening to others. You find idle chit chat to be boring and have quantitatively determined it's a waste of time. Maybe it's because you don't have much to say. But, what I think is more likely is that you only say things that are relevant and important. Thus, when you speak, the room listens. A room full of strangers doesn't count. They won't know you only say important things until you've trained them that way.

  • have ever gone to visit a friend and proceeded to ignore them because you must finish that stupid puzzle they had on their coffee table before putting it down

    See photo at top of post.

  • get asked to help debug other people's code

    There's a certain amount of pride a developer has over their code. No matter how logical it is to call someone in for help, it's always the last thing we do. If you're the guy people call for help, you're on the right track.

  • are naturally good at video games

    Ever pick up a game, and within minutes beat or come really close to beating, someone who's been playing it for months? This is a sure fire sign of your analytical and problem solving skills. Come to think of it, I think I'm going to start adding this to my interview process.

  • use every operating system in existence

    Sure, you think Windows sucks, but you use it because you play games on it and deep inside you know it doesn't really suck much worse than the competition. You know Linux is the best (DUH!) but you play with FreeBSD. You have OSX running on your laptop because those big icons and MacBooks are sexy. But really, it's more about curiosity than anything.

  • make a habit of picking up a new technology over the weekend

    Lego Mindstorms, anyone? Oooh, how about Adobe AIR or Microsoft Azure or iPhone development. You catch my drift.

  • are extremely critical of everything

    You find fault in everything from your takeout food to web sites to world economic systems. The world is an imperfect mess that needs to be cleaned up. And, of course, you could do it with a weekend and your new favorite development platform (that you haven't used yet)!


This is all I could come up with in the time I set aside for this blog post. So what do you all think? What am I missing? I'll update the post with your ideas as they come in, if they don't suck.

Monday, February 2, 2009

Abstracting Away Azure: How to Run Outside of the Cloud

I had a lot of fun over our holiday break this December working on prototype projects for up and coming technologies. One of those projects dealt with Windows Azure, or, the Azure Services Platform. Azure is basically a cloud application hosting environment put together by Microsoft. The idea is, you build your web apps in .NET and publish them to the nebulous cloud. Once in the cloud they scale and perform well and you don't have to deal with any of the headaches of managing things at the OS/System level.

But with the recent economic news out of Redmond I've been wondering about the future of its more experimental CTP/Alpha/Omega/Whatever-They-Call-It projects such as Azure. If you're not familiar with the project, I suggest you venture on over and check it out now.

Unlike other cloud hosting platforms out there, with Azure you don't have to maintain the operating system. Not only do you get the benefits of cloud computing, but you don't even need a system administrator to run the thing. Of course, the fact that you don't have control of the operating system has its drawbacks.

With Azure you can't run unmanaged code, you're stuck in Medium trust, and you can only build a port 80/443 HTTP application. If you want to run memcached or Velocity or streaming media codecs, well, you can't. If you want to host a game server that communicates with UDP or some non-http protocol, you can't do that either. But, for most custom web applications, everything you need is there. They host a "database" for you, a queue service, you can run background services, and you even get a shared logging service.

All of the services they provide seem to work as advertised and are promised to be extremely scalable. But, one thing they don't talk about (and I can't say I blame them) is how you might run your applications if they're not hosted in the cloud. In our company this just isn't acceptable. If we put out a game and our hosting provider ceases to exist, or no longer meets our needs, we had better be able to move to a new hosting provider! So, I'll give you some tips based on my experiences building prototype Azure applications on how you can easily design your applications to run outside of the cloud.

The Main Azure Features
  • Table storage
  • Queue services
  • Blob storage
  • Logging
  • Background services (Worker Role)
Table/Queue/Blob

Abstracting away tables, queues, and blobs is fairly simple but takes a bit of up front planning. You do basically the same thing you'd do if you were building an application on a large team that is designed to work with any data storage back end. At a high level:


In order to maintain the abstraction it's very important that your UI and background services don't interact directly with the Azure services. First off, use DTO entities. If all else fails and your new back end storage isn't compatible with Azure, you can always fall back to re-writing the layer that talks to it and you don't have to change any of your UI code. Do not expose the PartitionKey and RowKey values on your DTO entities. Leave the partitioning scheme as an implementation detail of your Service/Model layer. It will change if you have to move your data into Amazon's SimpleDB, for example. Since Azure Table Storage uses the ADO.NET Entity Framework at the core, there actually isn't much you need to do to the entities in order to make them portable to other Table-like storage systems. Also, the Blob and Queue storage services are quite simple and abstracting their interface is a matter of tens of lines of code.

Create interfaces for the layer that the UI communicates with and use a dependency injection (DI) framework such as StructureMap or Castle to inject your implementations that communicate with Azure.
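As a concrete (hypothetical) example, the UI only ever sees an interface, and the Azure flavor gets wired up once at startup. This is StructureMap 2.5 syntax from memory, so double check it against your version:

//the UI and worker code only ever see this
public interface IPhotoBlobStore
{
    void Save(string key, byte[] data);
    byte[] Load(string key);
}

//one implementation talks to Azure Blob storage; another could talk to disk or S3
public class AzurePhotoBlobStore : IPhotoBlobStore
{
    public void Save(string key, byte[] data) { /* call Azure Blob storage here */ }
    public byte[] Load(string key) { /* call Azure Blob storage here */ return null; }
}

//wired up once at application startup
ObjectFactory.Initialize(x =>
    x.ForRequestedType<IPhotoBlobStore>()
     .TheDefaultIsConcreteType<AzurePhotoBlobStore>());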

I use StructureMap on a day-to-day basis, and I was disappointed that it didn't work out of the box. I had to make a couple modifications to the source to get it to run under medium trust. First, you need to add an AllowPartiallyTrustedCallersAttribute to the assembly and then remove the security assertion that's asserting the right to read the machine name (you don't have access to the machine name in medium trust). You can download my updated version here (patch and binary): StructureMap-2.5-PartialTrust.zip

That's it. With your UI not talking directly to the Azure services you'll have an extra layer of code to maintain, but you'll be thankful if you ever need to pull it out of the cloud.

Logging

For all my non-Azure projects I use log4net for logging. It's a simple, flexible, open-source logging engine. You might want to use Microsoft's Enterprise Library. Whatever. Just like with the storage engines, the key to being able to move off of the Azure logging service some day is to not use it in your applications directly. I wrote a little Appender plugin for log4net that writes logs to the Azure RoleManager if the app is loaded into the Azure context. Most of the code is mapping the multitude of log4net log levels to the Azure event log names. Here's the code:

public class AzureRoleManagerAppender : AppenderSkeleton
{
    public AzureRoleManagerAppender()
    {
    }

    public AzureRoleManagerAppender(ILayout layout)
    {
        Layout = layout;
    }

    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        if (null == Layout)
            Layout = new log4net.Layout.SimpleLayout();

        var sb = new StringBuilder();
        using (var sr = new StringWriter(sb))
        {
            Layout.Format(sr, loggingEvent);
            sr.Flush();

            if (RoleManager.IsRoleManagerRunning)
                RoleManager.WriteToLog(GetEventLogName(loggingEvent), sb.ToString());
            else
                System.Diagnostics.Trace.Write(sb.ToString(), GetEventLogName(loggingEvent));
        }
    }

    protected virtual string GetEventLogName(LoggingEvent loggingEvent)
    {
        if (loggingEvent.Level == Level.Alert)
            return "Critical";
        else if (loggingEvent.Level == Level.Critical)
            return "Critical";
        else if (loggingEvent.Level == Level.Debug)
            return "Verbose";
        else if (loggingEvent.Level == Level.Emergency)
            return "Critical";
        else if (loggingEvent.Level == Level.Error)
            return "Error";
        else if (loggingEvent.Level == Level.Fatal)
            return "Critical";
        else if (loggingEvent.Level == Level.Fine)
            return "Information";
        else if (loggingEvent.Level == Level.Finer)
            return "Information";
        else if (loggingEvent.Level == Level.Finest)
            return "Information";
        else if (loggingEvent.Level == Level.Info)
            return "Information";
        else if (loggingEvent.Level == Level.Notice)
            return "Information";
        else if (loggingEvent.Level == Level.Severe)
            return "Critical";
        else if (loggingEvent.Level == Level.Trace)
            return "Verbose";
        else if (loggingEvent.Level == Level.Verbose)
            return "Verbose";
        else if (loggingEvent.Level == Level.Warn)
            return "Warning";
        else
            return "Information";
    }
}

Then you just configure log4net as usual, and go on your merry way. Write your logs to log4net rather than to the Azure log manager.

<log4net>
  <appender name="azure" type="AzureRoleManagerAppender,MyAssembly">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%logger - %message" />
    </layout>
  </appender>

  <root>
    <level value="ALL" />
    <appender-ref ref="azure" />
  </root>
</log4net>

private ILog _log = LogManager.GetLogger(typeof(WorkerRole));

...

_log.Info("Starting worker process");

Background Services

Background services (Worker Roles) are basically Windows Services. The key difference, though, is in the behavior of the Start method. In Windows Service land you're expected to exit the Start method when the service has started. In Azure, the Start method is more like a Main and when it exits Azure assumes your service has completed its task and is restarted. I'd just write all your code in your RoleEntryPoint and not worry about any abstraction for the Worker Role. It's simple enough to just refactor and move to a Windows Service model if need be. But, just like in your UI, don't communicate directly with Azure back end services like Table, Queue, and Blob storage.
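From my memory of the CTP-era SDK (double check the type names against your version), a worker that accounts for this looks something like the following. DoWork is a made-up placeholder:

using System;
using System.Threading;
using Microsoft.ServiceHosting.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Start()
    {
        //unlike a Windows Service, returning from Start tells Azure the role is done,
        //so loop forever and do one unit of work per iteration
        while (true)
        {
            DoWork(); //talks to your abstraction layer, never directly to Azure storage
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }

    public override RoleStatus GetHealthStatus()
    {
        return RoleStatus.Healthy;
    }

    private void DoWork()
    {
        //pull from your queue abstraction, process, repeat
    }
}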

So there you have it. The basics of abstracting away Azure. I don't think Microsoft plans on canceling this project any time soon, but if they do (or you want to host elsewhere) you'll be ready! I, for one, am really excited about the future potential of Azure and we may even use it here, but we will be designing our applications so they can easily be ported to a different platform just in case.

Monday, January 26, 2009

Put down the abstract factory and get something done

It seems as a group, us programmers have our priorities screwed up. Programmers value clean, concise code. Code that requires no documentation. Code that perfectly uses design patterns and best practices. Code that other programmers will look at and think "wow, I wish I was as l33t as this guy."

But, let's get real. That stuff doesn't matter.

Why do you write code? Well, chances are someone pays you to do it. Of course the best programmers also love what they do and do it for fun. But at the end of the day it's your profession. Maybe you're (un)lucky enough to be doing it for yourself in your own startup.

In the early years of a startup only one thing should matter to a programmer: shipping your product to meet your customers' needs. Everything else we do is simply a result of this, the humblest of goals. Without a product people want, you have no revenue. Without the revenue you have no company. Without the company, well, you get the idea.

Startups are widely considered the purest form of a company. You exist to meet a perceived need with a pretty small scope. There aren't layers of management or TPS reports to get in the way of getting things done. The only barrier to getting something done is yourself. No excuses. It is in this environment where an engineer's need for perfection must be replaced with a hacker's passion to get things done, and get them done fast. Nothing else matters.

Maintainability isn't a factor. Best practices don't matter. Design patterns don't matter. All that matters is getting things done. Don't worry about scalability until you have to. Instantiate that object. Who cares about the factory. Skip the interface, and create a static class. Some day if you need the interface come back and re-factor your code. With the power of the IDE these days re-factoring is a lot less scary than it used to be.

This may sound short sighted, and it is. In fact, that's the point. Who knows if the company will even exist in a year to have anything to maintain. Projects change. You have to adapt. You will never know how your code will be used 5 years from now. Stop thinking about it. 5 years ago did you think you'd be integrating your [random business application] with this Facebook thing? I bet you've thought about it now. Not to mention, 5 years from now it's likely the entire programming paradigm will have changed. Were they thinking about AJAX when they designed ASP.NET? How about 3D graphics in desktop applications when the window message pump was developed? Such is life in our fast paced world. No amount of overly designed or perfectly formatted code will change it.

If you find yourself maintaining this horribly designed, hacked together legacy code from the early days of a company be thankful and bask in its glory. Without that spaghetti nightmare you wouldn't have that job. It was that short sighted thinking that was able to get something done and create a profitable product/company.

Of course, I'm not advocating you just toss all your code in a button's click event or anything that silly. Be smart, organize things well, but don't waste time overly designing code to be flexible. If you have to spend more than a couple hours sketching out your design, it's probably too complicated. Write some code. Re-factor it if you need to. You don't need a proper RESTful architecture, or a perfect DDD. Your application isn't going to change from Microsoft SQL to MySQL some day.

Alright, I'll admit it. If you're building enterprise server products, or work on a large team, or are building framework products for developers to use, then ignore everything I've said. Of course, then I'd question why you're a startup in that position in the first place....

So I urge you, especially if you're in a startup, to put down the abstract factory and get something done.

Tuesday, January 20, 2009

Concurrency. It's like doing the dishes

Since we moved to Palo Alto I've had the luxury of walking to work every day. Usually that's where I do my deep thinking. By the time I cruise by the Whole Foods it's really easy to ignore the activist-of-the-day petitioning something about global warming. But yesterday was different.

My walks to and from work were pretty normal. When I got home I decided to clean up a bit around the house, was doing the dishes, and had an odd moment of clarity. I threw down the sponge and ran over to my laptop to jot this down.

Usually I'm at a loss for analogy when explaining how concurrency works to a developer who has never had to deal with it before. So I throw out all kinds of highly technical terms and their eyes glaze over. But you know, it's actually really simple.

Managing concurrency is like doing the dishes. You can hand wash everything and be sure it gets cleaned perfectly every time or you can stick the dishes straight into the dish washer and take your chances. Most of the time everything will come out clean, but every couple loads you'll get a dish you need to wash again. Going straight into the dishwasher is way faster, and you can even do more than one dish at a time (assuming you have two hands).

If you want the technical description, I leave that as an exercise to the reader. Here's a Wikipedia article. And another over at Microsoft that's specific to database concurrency. See, told ya it's like doing the dishes.

Monday, January 19, 2009

Don't hire a programmer if they don't code for fun

I'm not the first person to talk about this paradigm and I won't be the last. Every single programmer I've seen that is exceptionally good at their job also does it for fun. They have an itch. It must be scratched. No matter how fun and lenient the work place, they always have their own project to work on. Their own passion. But, I think there is more to it, or I'd just cite some previous articles and be done with it.

But first, what do I mean by exceptionally good at their job? Well, Steve McConnell has made quite a name for himself in recent history bringing forth the research on 10x Software Development. This is the level of exceptional I'm talking about. The guy that takes a 10 minute set of verbal requirements, extrapolates, and builds a Web 4.0 Whooziwhatsit in a day, before you even know what Web 4.0 is. Paul Graham calls these guys great hackers, Joel Spolsky says they're smart and get things done. We just call them rock stars.

But, to possibly be a rock star, it's not enough for the programmer to just have a side project. The side project has to be fun (for them). Maybe they get a kick out of programming Lego Mindstorms to walk their chinchilla or creating an app for their mobile phone that synthesizes unique farting noises for ring tones based on the names in their address book. Whatever it is, they should be doing it for the pure joy they get by flexing their creative muscle.

Next time you're doing a phone interview ask the candidate about side projects early in the call. Dig in a little bit. Expect the rock star to change her mood and instantly become a lot more talkative. The passion will be self evident. If it isn't, this person isn't your rock star.

Obviously fun coding projects aren't the only indicator of a rock star, but they're a good way to filter out programmers that just do it for a paycheck.

Friday, January 16, 2009

ASP.NET MVC sucks and so does jQuery and PHP

Apparently, saying something sucks gets you a lot of hits. I think I'll use this tactic more often. My post on 10 Reasons ASP.NET Webforms Suck has been quite the talk in our tiny little .NET blog world this week. Who knew you all had such strong opinions on the matter!

ASP.NET doesn't completely suck

Saying something sucks doesn't mean it isn't good enough, or isn't the best option. In my mind everything sucks. My cell phone sucks, my laptop sucks, my operating system sucks, my car sucks. They all need improvements. They are all far from perfect. It is this mindset that drives me to build better software. If you can't see the flaws around you how can you improve on them? ASP.NET 4.0 didn't come around for fun. It came around because 3.5 sucks and needs improvement, and so on, and so forth.

Quite a few of you wrote some great rebuttals. Some were utter nonsense, but hey, this is the internet. That's to be expected. I'd like to talk about my favorite comments:

"It's obviously not a perfect design, but, it did it's job." – Robert Sweeney


Indeed, my thoughts exactly. It did its job. The internet has moved along really fast. Webforms are lagging behind a bit. Sure, it's still perfect for RAD business type applications. But build a web game on it, or a "Web 2.0" web site, or other consumer facing web product. The level of customization you end up doing to work within the bounds of the framework's abstraction starts to become silly.

"I agree that is does suck now." ... "over time, better ways to do things are created and naturally, the old ways get laid to rest" – shaun


Yeah, this is the nature of things. The technology will be around forever, but the development world will pass it by. We like shiny new things.

"Anyway, this 'hidding how HTTP works' philosophie that ASP.NET follows in every single corner of the framework is the real problem. Django, Ruby on Rails, and PHP doesn't try to hide the fact that you are building a website/page/app and help you in the process of coding with helpers, decorators or simple functions." – Angel


YES! HTTP, HTML, CSS, Javascript. These are the technologies we work with on the web. They're simple. Learn them, love them, embrace them. It'll also make your skills a lot more transferable should you ever be looking for work.

"Newb" – rabbit

pwned!

"loooks like newly migrated from php/java." – web spider


Did you read my post? I have been using ASP.NET since it was in beta. I live and breathe this stuff and use it every day. I'm simply pointing out the flaws I see.

"I've used PHP, Drupal, Rails and even FastCGI in the bad old early days and find I'm always coming back ASP.Net. Security, data abstraction layers, controls, validation, scalability, application recycling, caching, session management and great development/debugging environments are just to hard to pass up." – Mike

Yeah, I love ASP.NET too. I don't use anything else on a serious basis. You definitely mentioned my favorite parts of it, especially the last one.

"This model has been created back in what 1999/ 2000 when MS started working on .NET 1.0 (was released in 2002). So we are talking the model/architecture is almost a decade old, way before the Web 2.0/Ajax days." – Bart Czernicki

Yes! It's old (mature as some say)! Is it at all possible there is a better way now?

Chris Vanderheyden said a lot

"Honestly, after more than 8 years of professional experience: YES, your SHOULD be out of that highschool mentality. (Look my editor can do only 2 colors i am so L33T...) "

"I am a developer, i write logic, not translations. I WANT my HTML abstracted. I don't want to write zeroes and ones down to the NIC now do i. "

Since you likely know me only from my 'it sucks' post, you're going to find this shocking. I agree with most of the justifications you made (except the ones I listed above). My biggest beef is with your comment on my #1. You obviously have no sense of humor. ;) And, why wouldn't you want to write html? It's a simple human readable markup language, not binary networking protocols. XHTML+CSS is abstraction at its best. In fact it's usually just as simple as the abstractions provided by ASP.NET controls. I mean, really, can you actually point at one of your ASP.NET apps that would run outside of the context of your modern web browser? Something other than html 4.0 or whatever you're using? You have to learn a lot about the quirks of ASP.NET to get things done well. Why not learn the quirks in html/css/js? Oh wait, you do have to do that too. The leaky abstraction abounds.

"I only have 1 reason... Leaky abstraction over HTTP that introduces instead of removing complexity. Every other reason is a derivative or effect of this one reason" – Greg Young

Thanks Greg. You always have a nice way of distilling things down. But that wouldn't make nearly as fun of a blog post!

"I think that JD Conley is just sarcastic. Actually he loves .NET" - br_other

No, I'm not being sarcastic. Perhaps dramatic. Yes, I love .NET. But it's not without its flaws.

"sos un boludo" – Sebastian

This is cooler than the "newb" comment! Trash talk in a foreign language!

"You don't have to use <%= ClientID%> stuff at all There is a much better way. I claim ASP.NET webforms has the "best" integration with client side DOM. You think I am kidding ? Have you ever heard IScriptControl ? I guess you didn't" – onur

Indeed I have heard of IScriptControl, and I use it quite a bit. It's an interesting and often useful abstraction. Though I always laugh at myself, since to use it I write C# code to generate some js code to call some other js code that I could have just called in the first place if I were working in the markup.
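
For anyone who hasn't seen the pattern, here's a rough sketch of what an IScriptControl implementation looks like. The GreeterControl, Sample.Greeter, and Greeter.js names are made up for illustration, and this assumes a ScriptManager on the page. Note how much of the C# exists purely to emit the js that wires up the client class:

using System;
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

public class GreeterControl : WebControl, IScriptControl
{
    public IEnumerable<ScriptReference> GetScriptReferences()
    {
        // Tell the ScriptManager which .js files this control depends on.
        yield return new ScriptReference("~/Scripts/Greeter.js");
    }

    public IEnumerable<ScriptDescriptor> GetScriptDescriptors()
    {
        // Server-side code that generates the client-side code that
        // instantiates Sample.Greeter against this control's element.
        var descriptor = new ScriptControlDescriptor("Sample.Greeter", ClientID);
        descriptor.AddProperty("message", "Hello from the server");
        yield return descriptor;
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        ScriptManager.GetCurrent(Page).RegisterScriptControl(this);
    }

    protected override void Render(HtmlTextWriter writer)
    {
        base.Render(writer);
        ScriptManager.GetCurrent(Page).RegisterScriptDescriptors(this);
    }
}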

And finally, we have the people who decided to write a full rebuttal on their slice of the net. Cool. Thanks for the link backs! Hope you enjoyed my comments.

Mike Pope also posted an interesting commentary on the matter. Us silly kids and our toys! Aren't we allowed to change our minds, backpedal, or get excited by new technology?

Happy hacking!

Wednesday, January 14, 2009

Fire and Forget Email, Webservices and More in ASP.NET

Oftentimes when you're working on a web site you want to fire and forget an email, a web method call, or, most commonly in our case, a Facebook call. There's a good chance there's a Framework method available to do that for you quite simply; they're suffixed with the word Async. For email there's the System.Net.Mail.SmtpClient class. The following dirt-simple code will send an email for you asynchronously:

Download the sample code

var s = new SmtpClient();
s.SendCompleted +=
    (sender2, e2) =>
    {
        // do something when the send is done:
        // retry if error, etc.
    };

s.SendAsync(from.Text, to.Text, "", message.Text, null);

Well, that's pretty darn simple! Create a new SmtpClient, call SendAsync, and pass in your message data. Cool. There's even a whole set of classes to help you with attachments, multiple formats (like html and text), and so on. From a console app or Windows Service this will work beautifully. The problem is, in an ASP.NET page it won't. If you do this in a Page_Load or button click event, for example, you'll get the following helpful error message:

Asynchronous operations are not allowed in this context. Page starting an asynchronous operation has to have the Async attribute set to true and an asynchronous operation can only be started on a page prior to PreRenderComplete event.

Basically, ASP.NET is saying it's not prepared for you to make an Async call. No problem! ASP.NET has a nifty page directive. Just set Async="True". The MSDN documentation says: "Makes the page an asynchronous handler (that is, it causes the page to use an implementation of IHttpAsyncHandler to process requests)." What does that mean? Well, there are a whole bunch of posts on this, so if you're not familiar, search around for "asp.net async page" and come back here. Also do a search for "async" in my blog. I've posted about it a lot. It's one of my favorite features in ASP.NET.
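
For reference, it's just an attribute in the @ Page directive at the top of your .aspx (the page and class names here are placeholders):

<%@ Page Language="C#" Async="true" AutoEventWireup="true"
    CodeBehind="SendMail.aspx.cs" Inherits="SampleApp.SendMail" %>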

So, now you've got the Async page directive down and you think all is good. But then, suddenly, you notice page load times start to increase. Your phone is ringing. Users are complaining. After mere minutes of debugging (after all, you're a kung fu debugger, right?) you realize your ASP.NET page is waiting for the email to send. "What the heck is going on here? This was an Async call," you mumble under your breath. You curse Microsoft and write an angry blog post about it. What happened?

When you set that Async="True" directive on your page, you told ASP.NET that you want to do page rendering asynchronously. What you may not have realized is that the call is asynchronous with respect to the use of threads, not the serving of the page. Let me clarify. With Async="True", ASP.NET waits for all Async calls to complete before finishing page rendering. It's designed so you can kick off long-running IO operations, like calling a database or web service, writing files, and sending email, without tying up a valuable worker thread in your ASP.NET threadpool. Instead, the IO operation gets queued up down in unmanaged Windows land, and IOCP magic and the shared IO threads kick in. If you truly want to fire-and-forget, and not have your Async calls affect your page load time, here's your answer.

using (new SynchronizationContextSwitcher())
{
    var s = new SmtpClient();
    s.SendCompleted +=
        (sender2, e2) =>
        {
            // do something when the send is done:
            // retry if error, etc.
        };

    s.SendAsync(from.Text, to.Text, "", message.Text, null);
}

It should be noted that in this sample, when the SendCompleted anonymous method is called, you are no longer in the ASP.NET context. The SynchronizationContextSwitcher removed that context and put you in no context at all, so you're just free ballin'. This is important: you can't mess with the Request, Page, Response, etc. We're talking serious multi-threading now. In fact, it's even likely that delegate will be executing at the same time as some other method in your page's lifecycle, on a whole other thread. So pass anything you want to use from the page via the last parameter on the SendAsync call, pull it out of the EventArgs in your SendCompleted handler, and don't touch the page object or anything in it.
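
To make that concrete, here's a minimal sketch of passing state through that last parameter instead of touching the page. The trace call is just a stand-in for whatever error handling you'd actually do:

// Copy what we need from the page *before* kicking off the send.
var recipient = to.Text;

var s = new SmtpClient();
s.SendCompleted +=
    (sender2, e2) =>
    {
        // e2.UserState is whatever we passed as the last SendAsync argument.
        var who = (string)e2.UserState;
        if (e2.Error != null)
        {
            // No Page/Request/Response here -- we may be on another
            // thread, after the page has already been served.
            System.Diagnostics.Trace.WriteLine(
                "Send to " + who + " failed: " + e2.Error.Message);
        }
    };

// The last parameter is the userToken that shows up as e2.UserState.
s.SendAsync(from.Text, recipient, "", message.Text, recipient);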

I must confess: I didn't write this SynchronizationContextSwitcher class. Another developer on our team (Boris) did, and then it was improved by a random good Samaritan named Richard. It's also based on this one, which is quite a bit more featureful/complicated.

Anyway, simply wrap your send (or any Async) call in a using block like this and, for the scope of that block, any Async operations will happen as if you weren't even in ASP.NET and didn't have a Request context to worry about. Your page will be served immediately, without waiting for your Async call to complete. Of course, this does have caveats. By doing a true fire and forget, there is now the potential your email won't get sent and you won't even know about it. ASP.NET could shut down your app domain halfway through the send, and you and the user would be none the wiser. So care must be taken to either store these things in some other reliable place before the Async call, or (as in our case) accept that whatever you're firing off isn't critical, so a few missed ones here and there won't matter.

using System;
using System.Threading;

public class SynchronizationContextSwitcher : IDisposable
{
    private ExecutionContext _executionContext;
    private readonly SynchronizationContext _oldContext;
    private readonly SynchronizationContext _newContext;

    // By default, switch to a plain SynchronizationContext,
    // i.e. no context at all as far as ASP.NET is concerned.
    public SynchronizationContextSwitcher()
        : this(new SynchronizationContext())
    {
    }

    public SynchronizationContextSwitcher(SynchronizationContext context)
    {
        _newContext = context;
        _executionContext = Thread.CurrentThread.ExecutionContext;
        _oldContext = SynchronizationContext.Current;
        SynchronizationContext.SetSynchronizationContext(context);
    }

    public void Dispose()
    {
        if (null != _executionContext)
        {
            // Guard against disposing on a different thread, or after
            // someone else has swapped the context out from under us.
            if (_executionContext != Thread.CurrentThread.ExecutionContext)
                throw new InvalidOperationException("Dispose called on wrong thread.");

            if (_newContext != SynchronizationContext.Current)
                throw new InvalidOperationException("The SynchronizationContext has changed.");

            // Restore the original (ASP.NET) context.
            SynchronizationContext.SetSynchronizationContext(_oldContext);
            _executionContext = null;
        }
    }
}

I whipped up a small sample project to demo the effects I talk about here. There are two pages: one that is async and one that isn't. It demos the error you get if you try to use an Async method on a non-async page, and it simulates a slow email server on the async page so you can see fire and forget in action.

Async methods are extremely useful even if you're not doing fire and forget. Most of the samples you see for asynchronous ASP.NET pages use IAsyncResult and the Begin*/End* methods. Those are pretty complicated, and if the event-based Async method is available, why not use it? I've written about the benefits of async programming quite a lot; search for "async" up at the top right of the page.
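
For contrast, here's roughly what the Begin*/End* flavor looks like on an async page. This is just a sketch: the class name and slow URL are made up, and any Begin*/End* IO pair plugs in the same way.

using System;
using System.Net;
using System.Web.UI;

public partial class BeginEndSample : Page
{
    private WebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        _request = WebRequest.Create("http://example.com/slow-service");

        // Requires Async="True". ASP.NET calls the begin handler, frees the
        // worker thread, and calls the end handler when the IO completes.
        AddOnPreRenderCompleteAsync(
            (s, args, callback, state) => _request.BeginGetResponse(callback, state),
            ar =>
            {
                using (var response = _request.EndGetResponse(ar))
                {
                    // consume the response here
                }
            });
    }
}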

Monday, January 12, 2009

10 Reasons ASP.NET Webforms Suck

I've always been a .NET Fanboy. I've been on the bandwagon since its inception. I've developed quite a few shipping .NET products for the web, Windows, and Linux. I've given talks at user groups, created a consulting company, and mentored developers new to .NET. I always experiment with the latest toys and try to stay ahead of the technology curve. In my most recent role at Hive7, I've been focused on web technology. We have some pretty large scale games (millions of players) built on ASP.NET webforms and ASP.NET AJAX. It's been about 8 years since I've written a full blown web app that wasn't in ASP.NET webforms. Sure, there's the occasional small PHP or static html site, but no "real" applications have been built on anything but ASP.NET. I think I've been missing out.

I'm going to preface this by saying one thing. Ever try to train someone new to ASP.NET? Especially someone with any other web programming experience. It's not easy. That to me is a sign of suck, or maybe fail.

The Reasons (in order of frustration)
  1. Other web developers assume you're inferior

    Let's face it, if you're coding in ASP.NET you are NOT initially considered one of the cool kids. It's automatically assumed you're a corporate lackey with no programming fu. You have to prove yourself. It sucks. Yes, this is #1. After all, don't you want other people to think you're cool? Or am I the only one still living in high school...?


  2. One form to rule us all, one form to bind us

    I don't think I have much to say on this, other than: Why? What was the design decision behind overloading the html form and only letting us have one? Why? Why? Why?


  3. Viewstate

    Ever accidentally generated a 1MB (simple) page by just using standard controls?


  4. ID insanity

    Mapping IDs in html elements to IDs in code starts out innocent enough. But throw in nested controls (a recommended design practice) and hold on for your life. Once you get used to it everything makes sense. But try showing your dhtml/javascript guy how to use codebehind to grab a ClientID and pass that to his javascript code... (see the sketch after this list)


  5. Html abstraction

    I truly hate that in webforms you don't really write html. Browser-independent rendering is just a bad, horrible idea. The abstraction sounds nice on the surface, but some day it will bite you. Web developers should know how to write html, understand the web programming model, and understand the cross-platform implications of their code.


  6. Postbacks everywhere

    Linkbuttons, and any of the controls with 'autopostback', should be taken out into the street and shot. Posting back to the initial page as the default way to perform an action is just counterintuitive. And then, how do you consume this action? An event handler? Weird.


  7. Request lifecycle

    Init, Load, PreRender. WTF? Try explaining that one to your javascript guy. The fact that we need a 10 step lifecycle for things to work sends off warning bells in my head.


  8. Getting data to the client

    Ok, I've got this cool data driven web site. And now I want to do some AJAX. How do you interface your server code with your Javascript? You can pick one of 20 methods, none of which are simple, and all leave the developer scratching his head. Sometimes things magically work. Usually they don't.


  9. Ugly URL's

    Ok, so this one is low on the list because the latest service pack added a routing engine. But hey, it's bugged me for the last 8 years. Customers want pretty URLs, and webforms did not deliver without much hackery.


  10. Codebehind

    I love C#. But the concept of codebehind just seems weird. Why is there a separate file that's coupled to the html? This nifty abstraction has been the cause of so many developer questions and Visual Studio environment issues that I don't even want to go there.


  11. The odd feeling that you have to beat the framework into submission to get it to do what you want

    Ok, this is #11, I know. But hey, something just feels wrong in webforms. Like you're trying to stick a square peg through a round hole.
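
To illustrate #4, here's the kind of glue you end up writing just so client script can find a server control. A hypothetical sketch (the control name is made up); if the TextBox lives inside a user control, master page, or other naming container, the id that actually hits the browser is not "playerName":

<asp:TextBox ID="playerName" runat="server" />

<script type="text/javascript">
    // ClientID is the mangled id ASP.NET actually rendered,
    // e.g. "ctl00_body_playerName" -- only the server knows it.
    var box = document.getElementById('<%= playerName.ClientID %>');
    box.value = 'hello';
</script>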


As I look over this list I realize that most of the things I hate about ASP.NET Webforms relate to the choices that were made about abstractions. I don't understand what was so scary about the web programming model that these decisions were made. In fact, now that I think about it, I'd have been happier sticking with the classic ASP programming model than using webforms. Oh well. The last 8 years will now be known as "the time in my life when I had to code on that ASP.NET webforms junk". Ok, I'm done complaining for now. Posts in the foreseeable future will be about happy things like ASP.NET MVC, Azure, and jQuery.

Most recently I've been working with the ASP.NET MVC framework and I have to say: wow. What a relief. It reminds me of web programming back when I actually wrote simple html for my first web site. It's not really the MVC pattern per se that attracts me. The joy comes from not being coupled to the whims of the ASP.NET framework developers. I can write javascript, html, and css. I can write server side code. And guess what, they're not necessarily coupled! Maybe in 8 years I'll be singing a different tune. But for now, I'm happy again.

Edit: I posted a follow up.

About the Author

Wow, you made it to the bottom! That means we're destined to be life long friends. Follow Me on Twitter.

I am an entrepreneur and hacker. I'm a Cofounder at RealCrowd. Most recently I was CTO at Hive7, a social gaming startup that sold to Playdom and then Disney. These are my stories.

You can find far too much information about me on linkedin: http://linkedin.com/in/jdconley. No, I'm not interested in an amazing Paradox DBA role in the Antarctic with an excellent culture!