Facebook is the Death Star, and we're all building it.

Do you ever wonder what happened to all the innocent construction workers on the Death Star (v2.0) when it was destroyed? If you've watched Kevin Smith's Clerks, you certainly would have. If you haven't seen it, Google "Clerks Star Wars" and watch a short clip from it. For those that can't be bothered, the discussion between a customer and a store clerk revolves around the destruction of the second Death Star in Return of the Jedi. Given that it was still under construction at the time, it is almost certain that many thousands of workers would've been killed in the blast. The customer feels that it's terrible that so many people were killed just doing their jobs (roofers, plumbers, electricians, etc.) when they weren't, strictly speaking, part of the "Empire". The clerk argues that the independent contractor has a moral obligation to know who they're working for before they take on the job. If you're building a Death Star, it's really your duty to know what it is you're helping to build. If you get killed in the process, well... you knew the risk when you took on the job.

What does this have to do with you? Well, Facebook is the Death Star, and you're helping build it.

Ok, so you're not part of the Empire, but you're contributing to the construction nonetheless. You're the plumber or the electrician, the guy that forgot to put in safety rails, the girl that builds the strangely located trash compactors in prison blocks and then ensures they have large tentacled creatures in them, or maybe you're just the one responsible for naming Mr. Coffee and Mr. Radar. The point is, you don't need to work for the Empire to be complicit in its success.

Tim Berners-Lee, in his Scientific American article, wrote, "If we, the Web's users, allow these and other trends to proceed unchecked, the Web could be broken into fragmented islands. We could lose the freedom to connect with whichever Web sites we want." One of the trends he speaks of is the walled garden of information: silos of user data owned by the likes of Facebook and LinkedIn. He argues, "Each site is a silo, walled off from the others. Yes, your site's pages are on the Web, but your data are not."

He quite clearly sees the onus on us, as web users, to prevent this from happening. But I think it actually goes further than that. Data portability is certainly an issue, but friend data is at best transient and, for any individual, can become outdated in a year or two. Sure, those awesome photos of your mate dressed as the Death Star may be stuck permanently in Facebook's database, but in a year or two you're not going to care. Even the network of friends you build today is quite possibly a lot different from the one you'll create in two or three years' time. This data is usually not critical and is, more importantly, replaceable. People left MySpace for Facebook and built new networks, and they can certainly leave Facebook for Diaspora or whatever comes up next. No, the real problem is Facebook becoming a de facto standard for your online identity.

Many applications now provide the ability to log on to their system using Facebook Connect. In many cases, this is now the exclusive means of registering. Signing up to just about anything is becoming a simple matter of clicking the "Facebook Connect" button. For many of us, it's almost too tempting to take the quick route rather than signing up for a whole new account with a new password. What we're not doing, however, is stopping to think about what we're really agreeing to. By joining something like Digg with that one click, we're permanently tying our Digg identity to our Facebook identity. Without thinking, we've made an implicit decision that our Facebook account will be active longer than our Digg account. From this point on, we can't delete our Facebook account without losing access to Digg as well. By doing this, we've all decided to make Facebook our permanent record, our online authentication protocol and our secure means of access to dozens of websites and applications.

Most of us sign up to a social network with little thought to the security behind it. After all, do I really care if someone knows what I did last summer? But if your Facebook account becomes your online passport, how secure is it? If your Facebook account is compromised, how many sites does that hacker now have access to? How many sites can a hacker sign up to and assume your identity in the process? How many sites have your credit card stored securely for easy checkout? The hacker now has access to it all. The more ubiquitous Facebook becomes in your life, the more likely it is to be compromised and the more destructive it will be when it happens. What was originally a means to connect with friends has become your single online identity. And you don't even need a hacker installing malware on your PC. This is an identity that is persistently logged in on your home laptop, your iPad, your iPhone and your work PC. Have you ever lost your phone, or even just left your computer logged on at work? Someone now has access to everything you do online. Worse than that, they even have a handy list of every website you're signed up to with Facebook Connect.

Returning for a moment to the Star Wars analogy; remember when Jar Jar Binks moved to grant Senator Palpatine emergency powers (thus handing him control of the Republic) and you were screaming at your TV because everyone in the damned movie must be a complete and utter moron not to see it happening? Well...you just put Jar Jar Binks in charge of your online identity. Nice work, dumbass.

These are all personal issues, things that affect the end user. I'm not proposing people go out and delete their Facebook accounts. Facebook Connect is actually really handy. But what about the big picture? What happens when everybody on the Internet uses Facebook as their online identity? What happens when sites start offering Facebook Connect as the only means to sign up? Without realising it, you've made Facebook the sole authentication system for logging on to just about anything on the Internet. A closed network owned by a private company now has a complete record of every site every person on the web visits. Think about that for a second. Remember that kid from that Social Network movie you saw the other day? He's holding all the keys to everything...literally.

The danger in Facebook isn't that you can't take your friends elsewhere; the danger is in having it become a de facto standard means of authentication on the Web. Every time somebody hands the keys to their Blippy account over to Facebook, that is one more person who is now "stuck" with Facebook. You can decide to leave your social network behind, but if you use Facebook Connect, leaving Facebook also means leaving behind every account on any one of a hundred different websites. The barrier to entry is insignificant, yet the barrier to exit is so insurmountable that it's scary. It doesn't matter what the Diaspora project does. The social network is all but irrelevant now; it's the online authentication hook that will ensure Facebook survives any new up-and-comer.

Of course, there are ways to stop this from happening. As developers, we're the ones building the Death Star. Every time we add Facebook Connect, we're adding another piece of armour to make Facebook stronger, more powerful. Our users want Facebook Connect, and that's fine, but we have an obligation to give them a way out too. Some sites, like Groupon, give users the ability to disconnect their account from Facebook. You add a password, and you can then log in manually. The problem is, if I delete my Facebook account before I do this, I can't log in to Groupon to flick the switch! This is something akin to uninstalling Adobe Photoshop only to realise that you had to deactivate it first. The problem is that now you can't deactivate it without it being installed, and you can't install it without deactivating it. Catch-22.

The real problem is having Facebook as a permanent, centralised authentication system in the first place. It's one thing to "get started" with Facebook Connect, but an application should always provide another means of access that doesn't involve Facebook (even for those who signed up with Connect in the first place). If Facebook becomes inaccessible, or we want to stop supporting Facebook, or even if Facebook stops supporting us, we need to have a solution ready that allows users to continue using our application without that reliance on Facebook Connect.

The goal here is to prevent Facebook "lock-in", and I believe the humble "forgot password" is the place to start. I'm putting forward the suggestion that applications that implement Facebook Connect should add a "login without Facebook" link next to their "login with Facebook" link. This would work something like a "forgot password" function for Facebook users who no longer want to connect using Facebook. It would transparently modify their account to allow manual login and simultaneously perform a "forgot password" retrieval. The user gets an email with a link to create a password, and from then on they can use that to log in instead of Facebook. Groupon is actually one site out there that is already subtly doing just this using their existing "forgot password" function, but it certainly isn't obvious. As time goes on, users are going to become more and more aware of how much they're relying on Facebook for authentication, and eventually they're going to want out. It's up to us as developers to provide that "out".
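To make that a little more concrete, here's a rough sketch of what the flow could look like in an ASP.NET MVC app. Everything here (the repository, reset service and mailer) is hypothetical plumbing assumed to already exist; the point is just the two steps: flip the account over to allowing manual login, then reuse the existing forgot-password machinery.

using System.Web.Mvc;

public class AccountController : Controller
{
    // hypothetical dependencies, assumed to exist elsewhere in the app
    private readonly IUserRepository _users;
    private readonly IPasswordResetService _resets;
    private readonly IMailer _mailer;

    public AccountController(IUserRepository users, IPasswordResetService resets, IMailer mailer)
    {
        _users = users;
        _resets = resets;
        _mailer = mailer;
    }

    // sits behind the "login without Facebook" link
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult LoginWithoutFacebook(string email)
    {
        var user = _users.FindByEmail(email);
        if (user == null)
            return View("UnknownEmail");

        // step 1: allow a manual (email + password) login on this account
        user.AllowsPasswordLogin = true;
        _users.Save(user);

        // step 2: piggyback on the existing "forgot password" machinery
        var token = _resets.CreateResetToken(user);
        _mailer.SendPasswordSetupEmail(user.Email, token);

        return View("CheckYourEmail");
    }
}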

Of course, Facebook may have already worked this out. Now they want you to use Facebook for your email as well...can you see where I'm going with this?

We're building the Death Star, make no mistake. And every time a user signs up using Facebook Connect, another Ewok dies. Do you want that on your conscience? We have an obligation to ensure that there are as many exposed shield generators and poorly positioned ventilation shafts as possible. Only by creating weaknesses in the Death Star as we assist in its construction can we ensure that some whiny kid in an X-Wing can destroy it when the world finally realises what we've built. As developers, it's our responsibility to make sure we don't contribute to the yet-to-be-coined "Facebook lock-in". We must provide our users with not only a means to use our applications without Facebook, but also a means to use our applications should they have already stopped using Facebook. By perpetuating the need for Facebook logins in our applications, we perpetuate the security risk that Facebook Connect presents to our already over-connected users.

Why you absolutely MUST write an API when you write your next app

I'm a bit of a hacker. I've worked with serious software engineers, and I'm definitely not one of those, although I can be when it's required. They're the types that spend more time writing and talking about what they're going to code than they do actually coding it. The word engineer suits these types perfectly. A piece of software, to them, is constructed piece by piece from pre-defined, pre-designed and pre-fabricated code. There is more art than science to what I do. I liken what I do more to gardening than to building a bridge. To me, software is an organic thing, and I start software the way I start in the garden: I pick up a shovel and start digging. Over time, as plants grow (or die), the yard changes and you change with it. I'm not building bank software or landing planes. I'm building to an ever-changing target, evolving my software to suit a need that is never well defined, never fully known. It's whatever I feel like making it.

The big disclaimer here is that sometimes I need to be an engineer. If I'm working for a client on a fixed price project, then obviously everything needs to be engineered to a certain goal. So I can work that way, I just prefer not to. My client work is structured and defined, my pet projects are the result of an ever evolving process of discovery and learning. Of course, that isn't to say you can forget about security, performance or reliability...but it means these things are just one part of an iterative process, not a project in themselves.

For the most part, not being constrained by things like "best practices" is a wonderful freedom. I think of something, and I start coding it. When I'm done, I might go back and tweak, or even throw away the prototype and start again (kind of like how we're onto our second Japanese Maple, after the first one died). It's wonderful when you know nobody will ever look at your code or discover your hacks; it allows you to solve problems now without worrying too much about later. It's like an author taking shortcuts with facts, names and places instead of wasting time researching; the research just gets in the way of writing the story. But there is a big problem with this type of development. What you gain in agility, you lose in portability. By coding for yourself, you're forgetting about everyone else.

If you build a successful product, there is a fair chance that at some point you're going to need to get someone else involved. It may be an employee joining your coding team, or showing a potential investor what you've been working on, or it may be releasing an API for others to integrate with your system. For the first time you're suddenly faced with that embarrassing feeling of your parents dropping in to visit...and it's the morning after a really, really big party. It's at this point you suddenly regret all the shortcuts, the laziness, the hacks and the sambuca...oh Lord, the sambuca!

That's where writing a public API from day 1 comes in.

You don't have to have your whole house neat, just the bits your parents are going to see. Writing a public API as you develop your system forces you to keep a certain level of decency without constraining your ability to be agile elsewhere. By using your own API you also force yourself to follow the same rules that you expect others to follow when interfacing with your code. It prevents you from taking shortcuts that you rightly wouldn't let anyone else take.
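To put that in concrete terms, here's a hypothetical sketch (none of these class names come from a real project) of what "using your own API" can look like in practice: the public API controller and your own pages both go through the same front door, so you can't quietly bend the rules for yourself.

using System.Web.Mvc;

// the one "front door" to your data - a hypothetical service interface
public interface ITestResultService
{
    string GetResultsJson(int testId);
}

// the endpoint third parties will eventually hit...
public class ApiController : Controller
{
    private readonly ITestResultService _results;
    public ApiController(ITestResultService results) { _results = results; }

    public ActionResult Results(int id)
    {
        return Content(_results.GetResultsJson(id), "application/json");
    }
}

// ...and your own dashboard uses exactly the same front door -
// no sneaky direct SQL that an API consumer couldn't replicate
public class DashboardController : Controller
{
    private readonly ITestResultService _results;
    public DashboardController(ITestResultService results) { _results = results; }

    public ActionResult Index(int id)
    {
        ViewData["resultsJson"] = _results.GetResultsJson(id);
        return View();
    }
}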

If you're an agile or "lean" developer, the one thing you never think about is what this app is going to need to do in the future. You're only ever worried about what it needs to do today. This means your code can evolve into a horrible flying spaghetti monster (if you believe in such things). What makes sense today, may not seem like such a good idea tomorrow. By having a well defined core API you have at least one part of your system (hopefully the important bit) that is well documented, well considered and well written. It's something akin to maintaining a tidy formal lounge whilst the rest of your house is being subjected to an ill-considered conga line.

When it eventually comes time to make your API public, it's already tested and known to work. You know what problems your users are likely to encounter, because you've already encountered them yourself. Best of all, you're not trying to shoehorn a heap of public points of access where they were never intended. Writing an API as you go means you solve a lot of problems before they ever happen...and all without having to think about it too much. The API is as flexible as you want it to be until the day you make it public.

As you grow your application, your API grows with it. The more reliable it becomes, the more likely you are to turn to it rather than hacking in a new piece of code elsewhere. When new developers come on board they have something they can immediately recognise and understand. We're all terrible at documenting our code, but with a published API you're all of a sudden advertising what your application does...you want it to be well documented. Your users won't stand for anything less.

This is a big part of why so many developers recommend working on open source projects. Open source means everything you do is available for anyone to see. Hacks and cheats are found and highlighted by others, and hopefully you'll learn a better way in the process. You lose this benefit of peer review when you're working on your own private projects. Writing to your own API is the next best thing. Hacks look a whole lot worse when you have to document them for someone else.

Go from .NET to Ruby in One day. For (and by) the complete Linux n00b

There are 1001 guides out there for installing bits and pieces of Ruby, Linux or Rails. This is intended as an absolute beginner's guide, specifically for Windows users who don't know jack about Linux, Ruby OR Rails. I'm a complete n00b at this. I spent a day or two trying to get this setup working, finally figured it out, and have gone back and done it all again whilst writing this guide on how I did it. I've tried to find the easiest way with the least complications. Advanced users will laugh at me, but they can go #&^$ themselves. If this whole setup wasn't so difficult, I wouldn't have needed to spend so long working out half of this stuff in the first place. I hope it helps other .NET Ruby hopefuls, but please keep in mind, I am FAR from an expert, so comments/improvements/suggestions are more than welcome!

I was issued something of a challenge last week: write our next web app in Ruby. Not such a big deal, right? I mean, it's just another language? Can't be that hard? I read this quickstart guide, and whilst Ruby is a rather odd little language, it didn't appear to be anything I couldn't handle. So, I decided to give it a go.

The first thing I'd suggest, as a .NET developer of nearly 10 years, is that if you're seriously considering giving Ruby on Rails a go, play around with .NET MVC first. As a big fan of MVC, I found getting into Rails a lot easier, as many of the concepts found in MVC were...well...stolen from Rails. If you're familiar with Models, Views, Controllers, lambda expressions, partials and all that stuff, Rails will be a breeze. Ruby is still an ugly SOB in my opinion...but at least Rails will feel familiar to you.

Step 1. Installing Linux.

You could use Cygwin, or even Windows, but I'd really suggest not. I tried this, and it's just a pain in the ass. Besides, you won't be using either in our production environment, so you'd best get used to Linux sooner rather than later! I chose Ubuntu, just because that's what came up in Google. If that statement doesn't make it clear, I'm a complete Linux beginner. I used Linux about 10 years ago and haven't touched it since. So if lack of Linux knowledge is what's been stopping you, have no fear. If I can manage it, anyone can.

You have a couple of options to get up and running with Linux: set up a new Linux box, or run a virtual machine. I didn't have a spare box lying around, so I chose the latter. So, the first thing you'll need to do is go off and download Oracle VM VirtualBox and Ubuntu. Then mount the Ubuntu ISO as a virtual CD drive in Windows (there are plenty of apps to do this, Google it). Install VirtualBox and create a new machine. Call it Linux and choose Linux/Ubuntu as your OS type. You can pretty safely click Next repeatedly during the rest of the install. Once it has created the VM, go to Settings > Storage. Click the little empty CD controller and, from the CD/DVD device on the right, choose the drive which has your Linux ISO. Click OK and start up the VM.

If you've done everything right to this point, Ubuntu should start installing. If it crashes because it can't find your ISO, go back and check your settings. Assuming you have a fairly standard machine, it should all install easily enough. Skip the language packs if you don't need them; they take forever to download and install. Once it has installed, you'll need to "power off" the VM and remove the ISO so that Linux can boot. When you first get into Linux, it'll want to install a heap of updates...just let it.

OK, so if you're a Linux n00b like me, you won't even know where to start. In fact, you're probably wondering why the damned VM window is so small. To fix this, go to the Devices menu in VirtualBox and click Install Guest Additions. This will mount another ISO in Linux. Go to Applications > Accessories > Terminal. Welcome to Linux's DOS prompt :P
cd /media
cd VBOXADDITIONS_3.2.0_61806/
(bit of a tip for the REAL n00bs...rather than typing the directory, just hit TAB and it will autocomplete)
sudo sh ./VBoxLinuxAdditions-x86.run
This will go off and install a heap of stuff to make Linux run nicer with your host environment, including allowing you to have nicer screen resolutions. Once it is done, do a reboot. A couple of quick tips: Right Ctrl is your host key, and Host+F (that's Right Ctrl+F) will toggle fullscreen. That's it, Step 1 complete! The best part is that you haven't completely committed to using Linux; it's not too late to back out. You're only ever a delete key away from being back in the safe hands of Uncle Bill and his crazy Cousin Steve.

Step 2. Installing Ruby.

Unfortunately, that was the easy part. The next part caused me a lot of grief. I eventually worked it out, but only after a lot of Googling and a lot of swearing, and it turns out I didn't actually need to do most of it. Hopefully I can save you from the same fate. The first thing you need to know is: whenever you're Googling, resist the temptation to cut and paste helpful command line tips and tricks. These often include references to a specific version of a package, and can screw everything up real quick. If you need to install ANYTHING, make double sure that you've got the right version. The problem here is that so many versions of Ruby, Gems and Rails have come and gone, and the installation methods, requirements and problems are all different. This can be a major headache for Linux beginners! Even going through this the second time, I'm still running into issues!

So, crack open the terminal again. If you haven't already guessed, you'll spend a lot of time here! The first thing to note is a command called "sudo". Sudo is the Linux version of "run as administrator". Most of the commands you run for installation will be prefixed with this. It will prompt you for your admin password and then carry on. The second thing to note is that, for the most part, you won't need to download anything from any third party website when doing installs. Linux has a rather nifty package installer that gets and installs everything for you! If you're keen you can download source packages and compile them yourself, but we won't need to do any of that. So theoretically, Ubuntu should have most of what you need already, but we do need to go and get a few bits and pieces. Make sure that the updater has finished before trying this, as it won't work if it's still running. The first thing you need is ruby.
sudo apt-get install ruby-full build-essential
Run this to make sure it installed ok.
ruby -v
This will print the version you just installed (hopefully!). The apt-get command goes off, grabs the latest version of ruby and installs it along with a heap of other libraries for you. Done! You've run your first Linux command and installed ruby! To install gems:
sudo apt-get install rubygems
Run this to make sure it installed ok
gem -v
While you're here, install the ruby-debug gem
sudo gem install ruby-debug
Done and done.

Step 3. Installing Rails

Ok, so now we have ruby, and ruby gems. Now we need rails.
sudo apt-get install rails
Yeah...that's it. Now, that all looks pretty simple, and you could've done most of it from the one line. But there are PLENTY of guides out there that will show you how to wget your source packages, untar them, compile them, run them, blah blah blah. Linux zealots want to know why people still use Windows over Linux? Have a read of your guides! You have to have a PhD in astrophysics just to learn how to run a web app!
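As with ruby and gems, it's worth a quick check that rails actually installed (the exact version you get will depend on what's in the Ubuntu repositories at the time):
rails -v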

Step 4.  Install your IDE

Given that you're a .NET developer, you're going to need an IDE. None of this command line editing BS. We'll be using Aptana Studio. Go here: http://www.aptana.org/studio/download and download the archive. You're most likely using Firefox, so you should be able to find the zip in the downloads folder. Extract the archive to your home directory. You can then run Aptana directly from there.

You can also create a shortcut in your Applications menu. In Ubuntu, go to Preferences > Main Menu > Programming. Click the New Item button, fill in the details, navigate to the Aptana executable and select it, and also click the icon to change it to the Aptana one. Click OK, close, and check it out in your Applications menu!

Great, that was easy! Well...not so fast! This is Linux! You can't install something that easily! You need to do some more command line hacking. Ubuntu doesn't come with Sun Java, which appears to be a prerequisite for Aptana, so now we need to install that. First we need to add the Java repo:
sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
Then get the updated list of files
sudo apt-get update
Then install Java
sudo apt-get install sun-java6-jre
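Before firing up Aptana, it's worth a quick check that Java installed properly (the exact version string will vary):
java -version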
You can now run Aptana. Once you open Aptana, you'll also want to click on the Plugins button and install RadRails. It's pretty self-explanatory. That's about all you need to get up and running. If at this point you're wondering "why the hell am I doing this???", I don't blame you! We're nearly done...hang in there! At this point you should be able to run a pretty basic website. So this is a good time to....

Step 5.  Watch a video

You won't have curl installed yet, so you'll need to install it before you get too far into the video, along with mongrel (a web server). Just run:
sudo apt-get install curl
sudo apt-get install mongrel
If you ever find yourself missing something, that's usually a good way to try to install it. This 15 minute video will take you about 2 hours to get through. With a bit of luck, everything you've done till now will see you through it. Use Aptana as your text editor for the examples. You'll be pausing every ten seconds, but stick with it. By the end you'll understand why so many developers love RoR...even if you still think Ruby is a pile of poo (like me), you'll see how quick it is to get something up and running in Rails. Before watching the video, make sure you create yourself a code directory.
cd ~
mkdir code
cd code
And now enjoy the video! http://media.rubyonrails.org/video/rails_blog_2.mov
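For reference, the basic flow the video walks you through looks something like this (I'm assuming the Rails 2.x that apt-get installs, which is what the video uses; newer Rails versions use slightly different commands):
rails blog
cd blog
ruby script/server
Then point your browser at http://localhost:3000 to see it running.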

Step 6. What next?

The good news is that if you got through the video and everything worked, you have everything you need to create your first web app. One good thing about Ruby is that there is a plethora of resources out there to get you moving from here. The one major pitfall is that there are HEAPS of tutorials that are hopelessly out of date, and they can not only lead you astray but cause you some serious headaches if you mess up your installation! There is a lot still to learn about RoR, especially for me, but I hope this is a good starting point. My goal was to write something that got ME up and running. I figure there are plenty of Windows developers out there (like me) that wouldn't have the foggiest idea where to start, so if that's you, I hope this has helped! Cheers, Alan

Introducing Navflow

As some of you may know, we have been hinting at something new over the last couple of weeks. Not a new feature or a redesign, but an entirely new app, and today I am happy to announce that the fruit of our labour is launching under the moniker of Navflow.

With Navflow we believe we have created a novel approach to testing how people navigate websites and applications. The traditional approach to running usability tests of this nature has mostly involved testing live interfaces. While this is a perfectly valid means of gaining valuable user insight, design changes can be prohibitively costly and difficult to implement once a website or application is live.

Our solution was to move testing into the design phase and give designers tools that let them run user analysis using mock-ups and wireframes. This approach has proven very successful on fivesecondtest.com, with almost a thousand people participating in and creating tests every day. For Navflow we wanted to take design-phase testing even further and let designers see how users navigate their designs without needing to build them first. To that end we have built a testing platform that takes a series of images and creates a conversion funnel from them. The designer is responsible for highlighting the areas of each image that count as successful conversions, and we allow successful clicks to proceed along the funnel.

Navflow is currently in beta and open to registrations; however, new accounts will need to be activated in batches in order for us to stay on top of any bugs or other issues that may arise with a new launch. Bear in mind that this is a beta, so it may still be a little rough around the edges. We will be working hard over the coming weeks to get everything up to scratch. We are eager to hear your thoughts, so please send any bug reports, feedback or even praise to support@navflow.com.

And so with that, happy testing!

Single file upload, using plupload

I'm pretty happy with plupload for all my multi-upload needs. One thing that seems to be missing "out of the box" is the ability to limit the number of queued files. It is possible, with a little event wrangling, to force plupload into allowing only a single upload.

The first thing you need to do is disable plupload's "multi_selection" setting.

var uploader = new plupload.Uploader({
    runtimes: 'gears,flash,silverlight,browserplus,html5',
    multi_selection: false,
    ...
});

An important thing to note here: this doesn't prevent the queuing of multiple files, it only prevents multiple SELECTION of files. There are a few options for preventing the queuing of multiple files. The one I've used below works for my needs, but there are other options.

uploader.bind('QueueChanged', function(up) {
    // a file has been queued and we're not already uploading, so kick off the upload
    if (up.files.length > 0 && uploader.state != 2) { // 2 == plupload.STARTED
        uploader.start();
    }
});

Basically, this is a bit of a cheat. Whenever we select a file, we check that we're not already uploading and then automatically start the upload. At this point you could then choose to disable the uploader altogether, or in our case, display an icon for the uploaded file and give the user the option to replace it. I've seen other people use the event "FileAdded", but this event looks like it should really be called "BeforeFileAdded", as the newly added file doesn't appear to be added to the "files" collection until after the event has run.

It's worth noting that this method is really only appropriate if you're using your own custom upload interface. Obviously the QueueChanged event is triggered when adding AND removing files. Not to mention that our little cheat doesn't actually clear the queue (although there is nothing stopping you from doing that).
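If you do want to tidy up after yourself, one option (just a sketch, so test it against your plupload version) is to drop the file from the queue once its upload finishes, using the FileUploaded event:
uploader.bind('FileUploaded', function(up, file) {
    // the single file is done, so remove it and the next selection starts with an empty queue
    up.removeFile(file);
});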

Using plupload with ASP.NET

I hate file uploaders. The simplest way is always the ugliest, and having anything "nice" requires days of backbreaking labour, and even then you're not even close to being sure it will work across all browsers. It's frustrating to do so much work and then have some users receive seemingly random errors for no apparent reason!

Matt pointed this neat file uploader out to me the other day: plupload. It is very simple and has about a half dozen "fallbacks" for supporting different technologies, from Google Gears and Flash to HTML5.

For the most part, implementing plupload is a matter of downloading the zip, unzipping it and whacking in some sample code. But there IS a catch. Plupload uses BINARY STREAMING, not your bog standard multipart upload. So if you're hoping for a simple drag and drop replacement for your existing code, sorry to disappoint. But it's not that hard to convert your existing app into a binary stream app!

Of course, there is an easy PHP implementation from the folks that wrote plupload, but I really struggled to find an ASP.NET one. So here's one I wrote based loosely on the PHP version! This is not complete by any stretch (you'd want to handle caching, I/O errors, folder creation etc), but this is the guts of it to save you some research.

Two main things you'll want to take note of.

1. Plupload supports chunks - this is a way to get around upload file size restrictions. Basically, the file can be chopped up and sent as multiple parts, and then put back together again when it arrives. If you look at the requests going to the server, you will see there are actually multiple requests PER FILE. This means we need to stitch the chunks back together when we're done at the other end.

2. Data handling - plupload puts the file name and chunk info in the querystring, and all the file data is in Request.InputStream...so don't bother trying to work out why Request.Files is empty!

Here are the basics of what you need to implement plupload in ASP.NET! Easy as! If you're using any other file uploader, change it NOW!

// the chunk index and the original file name come across on the querystring
int chunk = Request.QueryString["chunk"] != null ? int.Parse(Request.QueryString["chunk"]) : 0;
string fileName = Request.QueryString["name"] != null ? Request.QueryString["name"] : "";

// read the raw input stream into a buffer
Byte[] buffer = new Byte[Request.InputStream.Length];
Request.InputStream.Read(buffer, 0, buffer.Length);

// open the target file; the first chunk creates it, later chunks append to it
FileStream fs = new FileStream(Server.MapPath("/files/" + fileName),
    chunk == 0 ? FileMode.OpenOrCreate : FileMode.Append);

// write the buffer to the file
fs.Write(buffer, 0, buffer.Length);
fs.Close();
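For reference, here's roughly what the matching client-side settings look like, assuming the handler above is published at a hypothetical /upload.ashx. The chunk_size setting is what makes plupload split a file into the multiple requests handled above:
var uploader = new plupload.Uploader({
    runtimes: 'gears,flash,silverlight,browserplus,html5',
    url: '/upload.ashx',          // hypothetical path to the handler above
    chunk_size: '1mb',            // files larger than this are sent as 1MB chunks
    browse_button: 'pickfiles'    // id of the element that opens the file dialog
});
uploader.init();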

Of course, if you're REALLY lazy, they've just recently updated plupload to support multipart uploads...but that's just boring!

fivesecondtest.com update coming soon...very soon!

We've been working hard on the latest update to fivesecondtest.com. We've made a lot of changes, many of which won't be visible in the update, but there are some fairly notable changes that WILL be visible in the next few days.

The first major change is the addition of Premium tests. Yes, we all knew it was coming eventually! The first thing I have to mention is that our free tests will remain free, in exactly the same state that they are now, and will forever remain free! Premium tests, however, give you three benefits (with more to come!):
1. You will receive double the number of responses given to free tests.
2. You will receive your results much quicker than free tests.
3. You get to feel good about supporting fivesecondtest.com!

One thing that hasn't been terribly clear in the past is that tests don't keep getting results forever. Whilst it may seem a little unfair, currently most tests will receive about 12 results. We don't decide this; it is a factor of how many tests are created versus how many responses are given. At the moment, that ratio is 12 to 1. That means for every test that is created today, 12 tests are viewed. The upshot of this is that we spread the love evenly, ensuring that all users get roughly the same number of results. For some people 12 results isn't enough, whilst for others 12 is plenty. So we're giving you the choice. Upgrade if you want, but don't feel obligated! Keep in mind this is also about the speed with which you get your results, not just the quantity!

The astute among you may realise that if premium tests are getting MORE results, then everyone else must be getting less! You'd be correct. Partially. Another thing we're trying to do in this release, and more so in the future, is promoting the idea of Karma. We're letting you see who is doing your test, to give you an opportunity to do theirs in return. At the moment this is purely to help you help those that helped you. In the future, there will be benefits to your kindness...and yes, we ARE watching who is being naughty and nice.

Another feature we're adding is the non-repetition of tests. Once you've done a test, you should never see it again. If you do, one of two things has happened...someone has created multiple tests that LOOK the same, or my code is broken...in which case, let me know!!

We've now added a really nice manage page. This is a central location for managing your tests. Here you can enable, disable, upgrade, share and delete all your tests. In the future we'll be adding more features here, for example the ability to get results in your inbox, or to subscribe to an RSS feed for a test's results.

We haven't changed the actual test process too much, although we've made it a little quicker for people who've seen the instructions before: you won't have to sit through them again. The upload process has been revamped to clean up our front page, and we've changed how we show site activity on the front page. There have been a lot of other little changes here and there, and a lot of changes in the background that will enable us to release some much requested features in the near future.

Anyway...this is just a heads up! Matt will be giving another update when we go live, which should be in the next few days! Cheers and thanks for your awesome support to date! Angry Monkeys

How game designers can help application developers : Part 1

I'm a gamer. Our house has an Xbox 360, a PS3 (the new slim variety), a Wii, an old Xbox, an old PS2, a GameCube, 2 Nintendo DSs and even a Game Boy Micro, plus the PC and a pair of iPhone 3GSs. Most of these get a fairly regular workout, except perhaps the Micro and the GameCube. Suffice to say, we're a gaming house. My wife is a Nintendo nut, and I'm more into PC and Xbox gaming. The PS3 is a blu-ray player....

What's this got to do with application design? More than you'd think! Usability, in application design, is often an afterthought. Not only in terms of interface design, but in terms of the user experience or user journey. This is mostly true of developer-driven application design: the sort of design that evolves at the hands of a developer, only to be "spruced up" by a designer later (if we're lucky). We consider how things work, but not really how they will be experienced. You can spot these apps a mile away. I know, because I've built plenty of them in my time. You don't realise how bad they are until you see someone trying to actually use your application in a real world environment, something many developers never ever witness and as a result never learn from. Some typical examples, if I may:
  • Features that were clearly important at the start of development remain visually prominent well after the feature has been demoted in importance or the focus of the application has changed.
  • Repetitive tasks use interfaces which are too cumbersome and time consuming to work with, often as a result of a developer building an interface in isolation rather than as part of a work-flow.
  • Building a data interface which requires pre-existing data but with no way to add it, requiring the user to leave their current process to add the prerequisite data somewhere else.
  • Interface elements which are either non-obvious in their function or "tricky" to use for the inexperienced.
  • The worst offender: inconsistent design and implementation. When a developer is left to their own devices, they will often come up with different solutions to the same problem. There is nothing worse than using an application which implements the same process two different ways in two different areas!
I'll be honest, you'll see a few of these issues in our own Fivesecondtest.com. But we're working on addressing these...I promise! Whilst most of these issues could be easily addressed by a better pre-build process or even just some rudimentary user testing, this is often something that is out of the hands of the developer. Often in a small team these tasks fall into the hands of the dev, something which, let's face it, we're all ill-equipped to deal with. Not only that, but often the entire build process is focused on getting the job done quickly, rather than having a good product at the end.

So, back to the games.... Imagine you're playing an RTS like Command and Conquer, Age of Empires or Starcraft. The enemy is approaching and you're in dire trouble. You need to build a wall, some defence towers and at least 20 units to fend off the attack. If all that isn't enough, you have only about 5 minutes in which to do it. The pressure is on! If the game is well designed, you will have enough time to fend off the horde. If the game is well designed, you will be able to achieve it all with a minimal amount of effort (time pressure aside!). Finally, if the game is well designed, there will be minimal interference from unrelated factors whilst you attempt to achieve your goals.

Let's think about that for a moment. If it takes too long for us to complete our task, we will be overrun by the Zerg swarm and lose the game. If it is too difficult to complete the task, we will most likely give up and go play something else. If the game interrupts us constantly by notifying us that our citizens are unhappy, that another player wants to trade with us and that we should consider building more farms, we will get annoyed at the game and blame it for making us lose.

Game developers spend crazy amounts of time considering these issues to ensure their game is fun to play, and yet in reality the same issues face us as application developers. We may not expect our users to have fun, but we still want them to be able to achieve their goals, and hopefully enjoy it enough to come back again next time! Come back next week for Part 2 of "How game designers can help application developers".

ExtJS and .NET MVC

I've been using jQuery with .NET MVC for a while now, and it works wonderfully. I've made a couple of posts previously on the subject. In my most recent project I've been using ExtJS with .NET MVC, and it works wonderfully well (for the most part!). Since the latest version of ExtJS (v3) came out, it has gotten a whole lot easier, with full REST support for data stores.

When I first sat down to write my data components for this particular project, I made my own subclass of the Store class (or JsonStore, depending on what I was doing) and implemented my own REST-style interface. This basically involved providing a URL and then appending _get, _set, _delete etc to get the appropriate URL. It was messy and ugly and didn't work like REST should. Fortunately, around this time version 3 of ExtJS came out and solved all my issues in one go. Now, by including the property "restful: true" in your Store, the store will automatically wire up the create, read, update and delete actions for you.
var store = new Ext.data.Store({
    restful: true,
    ...
});
In my case, I am using a datagrid with a row editor. This means that when I click add, delete, update or load, the data store will take care of everything. All I need to do is provide a URL. Now, most ASP.NET developers will have never come across this, but it's definitely something you should be aware of. All 4 actions use the same URL (and hence the same method name in the controller) but with varying HttpVerbs.
[AcceptVerbs(HttpVerbs.Delete)]
[AcceptVerbs(HttpVerbs.Post)]
[AcceptVerbs(HttpVerbs.Put)]
[AcceptVerbs(HttpVerbs.Get)]
What this means is that in your controller you can have ONE method name (but 4 separate methods) to handle all requests for this data type. Not only is it a lot neater, it's a lot easier to understand.
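To make that concrete, here's a rough sketch of what the controller can end up looking like. This is an illustration only: the Token model, the action bodies, the routing and the exact JSON shape ExtJS expects are all assumptions, and I'm assuming MVC 2 for JsonRequestBehavior.

using System;
using System.Web.Mvc;

// hypothetical model bound from the grid's JSON
public class Token
{
    public Guid TokenId { get; set; }
    public Guid UserId { get; set; }
}

public class TokensController : Controller
{
    // all four actions share the one name; AcceptVerbs picks the right
    // one based on the HTTP verb the ExtJS store sends

    [AcceptVerbs(HttpVerbs.Get)]
    public ActionResult Index()
    {
        // load and return the records the grid should display
        return Json(new Token[0], JsonRequestBehavior.AllowGet);
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Index(Token token)
    {
        // insert the new row, then echo it back so the store can pick up any generated values
        return Json(token);
    }

    [AcceptVerbs(HttpVerbs.Put)]
    public ActionResult Index(Guid id, Token token)
    {
        // update the existing row
        return Json(token);
    }

    [AcceptVerbs(HttpVerbs.Delete)]
    public ActionResult Index(Guid id)
    {
        // remove the row
        return new EmptyResult();
    }
}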

Managing Databases in .NET

Migrating databases has always been a pain in the ass. In the past, I've used tools like Red Gate SQL Compare to get the job done. On the last major project I worked on (with a team of a dozen developers) we used a custom solution for managing SQL updates and change scripts. Nasty business. The main issue with updating databases is not only making sure that your test database matches your staging database, which matches your live database, BUT that the database of each instance matches the code running alongside it. Regardless of which method you use, the code-SQL versioning has always been a pain. I can't count the number of times an SQL update has been applied when the relevant code patch hasn't!

This problem is magnified when you are working in an "Agile" environment. When you're making frequent code updates with a scope that is constantly changing, database changes unfortunately are a necessary evil. The Ruby crowd have been using database migrations for some time now, and it's only recently that the .NET community is catching up. I've recently begun using MigratorDotNet. It's not perfect, and is still fairly immature, but it makes managing databases a lot easier than keeping track of change scripts! My particular implementation probably isn't the most recommended method...but it works for me, and it saves a lot of messing around with build targets and project configurations. If you haven't already, take a look at the wiki and have a read of the "getting started" page...cos I'm going to assume you know what this code does.....
[Migration(1)]
public class _001_Init : Migration
{
    public override void Up()
    {
        Database.AddTable("dbo.tbl_Tokens",
            new Column("TokenId", DbType.Guid, ColumnProperty.PrimaryKey),
            new Column("UserId", DbType.Guid, ColumnProperty.NotNull),
            new Column("DatePurchased", DbType.DateTime),
            new Column("DateUsed", DbType.DateTime)
        );
    }
}
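One thing the snippet above glosses over: each migration also typically gets a Down() that reverses whatever Up() did, which is what lets the migrator roll the schema back (depending on your MigratorDotNet version, the base class may actually require it). For the table above it would be something like this:

public override void Down()
{
    // undo the Up() above so the migrator can roll back to an earlier version
    Database.RemoveTable("dbo.tbl_Tokens");
}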
Where I change my process is in how I set up my projects and how I run my migrations. This is for a web application, obviously, btw. Do this much (taken from the getting started wiki):

1. Add a new class library project. In this example it's called DBMigration.
2. Create a lib directory in your DBMigration project, and extract all the MigratorDotNet DLLs into it. You can exclude database-specific DLLs that you don't need, e.g. if you're not using Oracle, you don't need Oracle.DataAccess.dll.
3. In your DBMigration project, add a reference to Migrator.Framework.dll. That's the only additional reference you need in the project.

Now....ignore the rest of the steps, and do this instead :)

4. Create your first migration class (like the one above).
5. In your web application, add a reference to Migrator, Migrator.Framework and your DBMigration project.

Here's the trick for the lazy peeps that can't be bothered messing with MSBuild and build targets! In your Global.asax file...you want something like this:
protected void Application_Start()
{
    string strConnString = ConfigurationManager.ConnectionStrings["AspNetSqlProvider"].ConnectionString;
    Assembly a = System.Reflection.Assembly.Load("DBMigration");

    Migrator.Migrator m = new Migrator.Migrator("SqlServer", strConnString, a);
    m.MigrateToLastVersion();
}
Whenever you update the code for your application, on first run the application will automagically update the database using your migration scripts! Nifty! A few important things to take note of here! The first thing to understand is the connection string. I set up my connection strings using an external XML file which is referenced from web.config...like this:
<connectionStrings configSource="connections.config"/>
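The connections.config file itself just contains the connectionStrings section that would otherwise live directly in web.config. The connection string below is only a placeholder, so obviously use your own:
<connectionStrings>
  <add name="AspNetSqlProvider"
       connectionString="Data Source=.;Initial Catalog=MyAppDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>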
And in my connections.config file I have the connection string for the live OR test OR staging server. There is a different version of this file on each respective server (so make sure it's not in your solution if you use VS deployment!). Why? So that when I deploy my application to the test server, or to staging, it will update the database for THAT server only. This means I have a seamless means of updating my database at the same time as updating my code. No batch files, no configuration, no build targets! Just run it, and it updates the right database!

Now, obviously this is NOT ideal for all situations. You'd really need to be sure that this approach is suitable for what you need. For us, we do infrequent updates to the live server, but are constantly updating our staging server. This approach means that whenever we deploy and run the website, we know the database is going to be up to date and correct!