Wednesday, 21 March 2012

Fun with GitHub

A while ago, working at home, I was using Ștefan Rusu's excellent aws2js npm module for wiring up Node.js servers to AWS. I wanted to make a small change, so I forked his GitHub repo, made my changes and sent him a pull request. A very short while later he took my changes, fixed my mistakes, and the outcome went into the next release of his module. A great experience for me, and a good advert for the GitHub way of collaborating.

Today I'm working on the same code base and I want to make another small change to Stefan's code. So, the tasks I need to accomplish go something like this:

  1. Check out my fork
  2. Pull all the changes from Stefan's master to bring myself up to date
  3. Check that into my repo
  4. Change whatever I want to change
  5. Check the changes into my repo
  6. Give Stefan another pull request.

There's a further complication: I'm working in the office today, so I need to set up my work machine to be capable of pushing to my GitHub repos. This post is an aide-memoire for me about that last issue and steps 1 to 3.

To get my work machine set up, I need to do something like this:

cd ~/.ssh

rm id_rsa id_rsa.pub           # or stash them somewhere safe to put back later

ssh-keygen -t rsa -C "my-email@my-domain.com"
                               # and hit return to the various questions

ssh-add ~/.ssh/id_rsa

vi ~/.ssh/id_rsa.pub           # and copy the contents


Then I log into my github account, "add an SSH key", and paste the copied file contents.

Now for steps 1 to 3:

# 1. Check out my fork

git clone git@github.com:dcleal/aws2js.git

# 2. Pull all the changes from Stefan's master to bring myself up to date

git remote add upstream git://github.com/SaltwaterC/aws2js.git

git fetch upstream

git merge upstream/master

# 3. Check that into my repo

git push origin master

Friday, 16 March 2012

Ten tips for scaling in AWS

We love Amazon Web Services: the last thing a fast-growing company like ours needs is to worry about how many servers to own. Much, much easier to use Amazon's technology to let our server stack grow to respond to demand.

However, it's one thing to say 'set it up so that it grows to meet demand', and slightly another to make it so. Here are our ten top tips. Some of these are detailed and technical, others are broad-brush architectural principles, but hopefully there's at least one of interest to most readers.

1. Don't design servers to shut down

Once you have an army of servers, there will be casualties. Servers will crash, Amazon will occasionally reboot one, others will just go AWOL. You need to design your system to cope with chaotic shutdown, so why worry about the orderly kind as well? If you start from the beginning by just pulling the plug, you're more likely to be ready when the plug falls out.

2. The database is the only bottleneck

Most of the bottlenecks you might be used to from outside the cloud are dealt with by AWS; it's up to you to design around the AWS tools. So, don't put data on disk and mount NFS, just put it in S3. Don't try to roll your own server messaging, make SQS work. Use load balancers and auto scaling groups. About the only place you're likely to have a bottleneck is in front of the database, so focus your technical smarts there.

3. Use AWS I/O

Particularly useful if you have a lot of videos coming in and out: don't send and receive them through your web servers, use AWS servers instead. Their content distribution network is simple to use, so use it. And your contributors can upload directly to S3, so let them do that too...

4. S3 upload gotcha

... although, bear in mind the annoying restriction on S3 POST uploads: the bucket key must be the first field in the multipart upload. This can be a bit tricky if, for example, your client code holds the fields in a dictionary that doesn't preserve order, but you need to work around it. Groan. You can see why they might want the bucket key before the file contents, but why before everything else?
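
To see the workaround concretely, here's a sketch in Node of building the multipart body by hand, so the field order is entirely under our control (every value below is a placeholder, and a real POST form also needs a proper policy and signature):

var accessKeyId = 'AKIA...';                      // placeholders throughout
var policyDocument = '...base64 policy...';
var signature = '...signature...';
var boundary = '----S3UploadBoundary';

function fieldPart(name, value) {
  return '--' + boundary + '\r\n' +
         'Content-Disposition: form-data; name="' + name + '"\r\n\r\n' +
         value + '\r\n';
}

// The "key" part is written first; the file part must come last.
var preamble =
  fieldPart('key', 'incoming/video-123.mp4') +
  fieldPart('AWSAccessKeyId', accessKeyId) +
  fieldPart('policy', policyDocument) +
  fieldPart('signature', signature) +
  '--' + boundary + '\r\n' +
  'Content-Disposition: form-data; name="file"; filename="video.mp4"\r\n' +
  'Content-Type: video/mp4\r\n\r\n';
// ...then stream the file bytes, then close with '\r\n--' + boundary + '--\r\n'.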

5. Script (nearly) everything

Each step that's manual can and will go wrong at just the wrong moment. One of the beauties of AWS is that you can use fifty computers for an hour and never use them again. Take advantage of this by making it simple to create a whole environment for a short while and then throw it away. Give yourself shortcut ways to scoop up log files from an army, log into machines, and so on: time spent on this is never wasted, even in the quite short term.

6. Don't expect things to happen immediately

All those scripts need to be robust enough to cope with major variations in the time it takes to do something. Most operations need a kind of "do it, wait for a while checking whether it happened every few seconds, finally bail if it didn't" logic.
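
Here's the shape of that logic as a small Node helper (waitFor and check are our own names, nothing standard):

// Run check() every intervalMs until it reports success; bail after timeoutMs.
function waitFor(check, intervalMs, timeoutMs, callback) {
  var deadline = Date.now() + timeoutMs;

  function poll() {
    check(function (err, done) {
      if (err) return callback(err);
      if (done) return callback(null);
      if (Date.now() > deadline) return callback(new Error('timed out'));
      setTimeout(poll, intervalMs);
    });
  }

  poll();
}

// e.g. waitFor(instanceIsRunning, 5000, 600000, carryOn);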

7. Use cloud-init

If you're using Linux boxes, use the cloud-init package to tailor machines at launch time. We create one machine image for each release of our software, whether the machines are web servers or video processors, and whether they're in the production or test environments. Then we use a launch configuration to attach data to a machine at start up, which cloud-init picks up to tell the machine what to do and who to do it with. That way we have high confidence that test machines and production machines will behave the same (they're built off identical machine images), and the flexibility to add new environments and move environments to different releases without rebuilding our code.
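
As a sketch of the instance's side of this arrangement, a Node process can read the launch data back from the EC2 metadata service (this assumes the user data is plain text; cloud-init itself consumes the same data):

var http = require('http');

// Every EC2 instance can fetch its own user data from this well-known address.
http.get({ host: '169.254.169.254', path: '/latest/user-data' }, function (res) {
  var userData = '';
  res.on('data', function (chunk) { userData += chunk; });
  res.on('end', function () {
    console.log('Launched with: ' + userData);
  });
});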

8. Use elastic IPs for special computers

Our database servers need to be reachable by our armies of web servers and video processors. We achieve this by assigning them elastic IP addresses, which means they won't change address. It also means that if one goes down, its replacement steps into place without reconfiguring the other servers.

9. Use the Amazon-assigned public DNS name for those special computers

Once a computer has an elastic IP, it has an Amazon-assigned public DNS name, a fixed public IP, a private DNS name, and a private IP. Traffic routed via the public IP address will incur fees, and the private IP and private DNS name might change, so point your other servers at the public DNS name. The DNS servers inside AWS resolve it to the private IP, so there are no fees.
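
You can see the effect from inside an instance (the hostname below is made up):

var dns = require('dns');

// Inside AWS this resolves to the instance's private address, e.g. 10.x.x.x;
// outside AWS the same name resolves to the public address.
dns.lookup('ec2-203-0-113-25.compute-1.amazonaws.com', function (err, address) {
  console.log(err || address);
});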

10. Integrate continuously

You do continuous integration, running the tests and doing a build on every check-in. If you don't, then now would be a good time to start. Once the build is built, deploy it to AWS and run some tests against a real environment. Machines are cheap, so why not?

Error handling in nodeJS

We at Vyclone are developing our web server using Node.js and the Express web framework. As a server technology, we think Node has many great characteristics, but a well-developed and consistent story on error handling isn't one of them. In this article we'll explore some of the different situations that arise in everyday Node code, and then the strategies we're currently adopting to deal with them.

HTTP client

Here's a small example to play with. We provide a web service that lets people upload videos. Once we have a video with a latitude and longitude, we look up a place name using a separate web service, store the coordinates and place name in our local database, and then report the place name back to our user. Here's a first go at the necessary code (the geocoding service and the db module are stand-ins for the real things):
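
var http = require('http');
var db = require('./db');                       // stand-in for our data layer

function reportPlaceName(response, latitude, longitude) {
  var placeName;

  http.get({ host: 'geocoder.example.com',     // stand-in geocoding service
             path: '/reverse?lat=' + latitude + '&lon=' + longitude },
           readPlace);

  function readPlace(geocoderResponse) {
    var body = '';
    geocoderResponse.on('data', function (chunk) { body += chunk; });
    geocoderResponse.on('end', function () { storeLocation(body); });
  }

  function storeLocation(body) {
    placeName = JSON.parse(body).result.place.name;
    db.storeLocation(latitude, longitude, placeName, sendResponse);
  }

  function sendResponse(err) {
    response.send({ placeName: placeName });    // ignores err, for now
  }
}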


[As an aside, notice the way we're chaining callbacks. We're not using libraries like step or async here. Our experience of trying these is that they obscure the flow of control without hiding any complexity. Lots of people have noticed that nesting callbacks "naturally" just means you run out of space at the right-hand side of the editor screen. So, we just chain with named functions that describe the next step in a process. We do use async when we want things to happen in parallel.]

Notice the 'response' object we've passed in - we're in the context of a request to our server, and this is the HttpResponse on which we'll respond to our client. Anyway, the code is somewhat complete, and on one of those days when everything pans out perfectly, it might even work.

However, there's absolutely no error handling. Here are a few things that might go wrong:
  1. the http request might encounter an error. These are signalled using an 'error' event.
  2. the JSON parse might fail. This synchronous call will throw an exception in that case. Alternatively, we might get an exception even after a successful parse if we have the structure wrong (that is, there turns out not to be a "name" inside a "place" inside a "result").
  3. the database store might fail. Notice our callback from the database uses the common Node practice of passing us a first parameter, "err", whose value is either undefined (nothing went wrong) or an Error object.
  4. something goes unexpectedly wrong in the library. For example, if there's a DNS failure, Node's HTTP client doesn't tell us using an 'error' event, it just throws an (unhandled) exception.
And there's the problem with error handling in Node: there are so many different ways that errors can announce themselves.

Our goal is to capture all the different error types and handle them in the same way, without scattering too much clutter over our code. To this end, we've written ourselves some general-purpose error handling support. Here's how our example looks with these error handlers applied (again, a sketch rather than the production version):
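
function reportPlaceName(response, latitude, longitude) {
  var placeName;
  var dealWithError = errorHandlerFunc(response);

  var request = http.get({ host: 'geocoder.example.com',
                           path: '/reverse?lat=' + latitude + '&lon=' + longitude },
                         readPlace);
  request.on('error', dealWithError);            // type 1

  function readPlace(geocoderResponse) {
    var body = '';
    geocoderResponse.on('data', function (chunk) { body += chunk; });
    geocoderResponse.on('end', function () {
      tryCatchFunc(dealWithError, storeLocation)(body);
    });
  }

  function storeLocation(body) {
    placeName = JSON.parse(body).result.place.name;   // type 2: may throw
    db.storeLocation(latitude, longitude, placeName,
                     tryCatchFunc(dealWithError, sendResponse));
  }

  function sendResponse(err) {
    handleErrors(err);                           // type 3: throw any db error
    response.send({ placeName: placeName });
  }
}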


There are three mystery functions here: errorHandlerFunc, tryCatchFunc and handleErrors.

errorHandlerFunc returns a function that takes an Error as an argument and does something suitable with it. In this code it's a closure around our response object, so that after logging any error it receives it can send a suitable response to our client. In reality we have a few different error handlers that do different things in different contexts. We use this directly, via the function 'dealWithError', to deal with type 1 errors from our earlier list.

tryCatchFunc returns a function, tryCatch. tryCatch executes the function passed to tryCatchFunc, passing on any arguments tryCatch itself receives, but inside a try-catch block. This traps any thrown Errors and passes them to our error handler. This deals with type 2 errors. For example, when we parse the JSON, any errors thrown are eventually caught by the tryCatch wrapped around storeLocation.

handleErrors checks its argument. If the argument evaluates to true, it assumes that the argument is an Error and throws it, otherwise it does nothing. We're careful only to do this when we're inside our tryCatch protection, so that these errors also end up getting passed to our error handler. This deals with type 3 errors: for example, it ensures that any errors we get from our database are thrown to the tryCatch wrapped around sendResponse.


Here's a partial implementation of the generic error handlers:
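
function errorHandlerFunc(response) {
  return function (error) {
    console.log('Error handling request: ' + error.stack);
    response.send(500);                          // Express fills in a default body
  };
}

function tryCatchFunc(errorHandler, func) {
  return function tryCatch() {
    try {
      return func.apply(this, arguments);
    } catch (error) {
      errorHandler(error);
    }
  };
}

function handleErrors(err) {
  if (err) {
    throw (err instanceof Error) ? err : new Error(err);
  }
}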


Hopefully that's all fairly clear.

So, that's nearly all our errors wrapped up and processed through one route. There are a couple of outstanding problems for another day:


  1. We aren't handling our type 4 errors. These are kind of tricky: node 0.6 lets you catch unhandled exceptions (there's a sketch after this list), but then what? Our approach is to restart the server - it only takes a few seconds, and we're in the fortunate position of operating at a scale where there's a group of similar servers to take the strain in the meantime. Anything else seems unsafe.
  2. The Error objects aren't terribly useful for problem diagnosis in cases 1 and 3. That's because the only part of our code that's mentioned in the stack trace is the generic error handling code - it's not easy to tell where our code went wrong. For this reason, the production versions of our error handlers are more complicated, in order to doctor the stacks and to allow coders to add extra messages where it might be worthwhile. More on this another time.
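
Here's the shape of that last-ditch handler (a sketch; our production version logs rather more before dying):

process.on('uncaughtException', function (err) {
  console.log('Uncaught exception, restarting: ' + err.stack);
  process.exit(1);                   // whatever supervises the process restarts it
});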

Thursday, 23 February 2012

I love bash

Rob and I just wrote this:

tools/AutoScaling-1.0.49.1/bin/as-describe-launch-configs $SECURITY --max-records 100 | grep launch-all-staging | cut -d ' ' -f3 | sort -r | tail -n +6 | xargs -I BC tools/AutoScaling-1.0.49.1/bin/as-delete-launch-config $SECURITY --force --launch-config BC

And now we only have six launch configurations in AWS whose names start with "launch-all-staging". It's only a shame that the command line is too long to tweet.

Thursday, 26 January 2012

Object-oriented programming in Javascript without classes

Like a lot of people, I've been programming with objects for a long time. Far too long in Java, and before that in Smalltalk, C++, and even CLOS. In all those languages, objects start with classes. Javascript's different, though. There are actually two object-oriented programming models you can adopt: one looks like a traditional class-based language, whereas the other uses closures instead and lets Javascript's functional nature shine through.

Javascript Classes

The first of these programming models looks a lot like other object-oriented programming languages. Define a class, add a few methods, new up a few instances and poke them. Something like this toy Account:
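
function Account(balance) {
  this.balance = balance;
}

Account.prototype.deposit = function (amount) {
  this.balance += amount;
};

Account.prototype.handleDebit = function (amount) {
  this.balance -= amount;
};

var account = new Account(100);
account.deposit(10);
account.handleDebit(25);
console.log(account.balance);        // 85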

So far, so good. Let's not worry about whether this is a good way to deal with money, or even whether it constitutes good object-oriented design. Instead, let's look at what this tells us about Javascript the object-oriented language. There are some things that might look like problems if you're used to "proper" (Java) classes. All the instance variables are public; come to that, so are all the methods. There's also a certain amount of clutter - what's with the mysterious "prototype"? And there's some weird stuff going on with "this". Try this method:
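
Account.prototype.handleSeveralDebits = function (amounts) {
  amounts.forEach(function (amount) {
    this.handleDebit(amount);        // broken: "this" is not the account here
  });
};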

This won't work as it stands. The keyword "this" refers to the instance of Account that we're working with inside a method of Account, but NOT inside a nested function. So in this example, inside the anonymous function that's mapped across the array, "this" refers to the global object rather than our account. To get this to work I need this magic incantation:
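
Account.prototype.handleSeveralDebits = function (amounts) {
  var self = this;                   // capture the account while "this" means it
  amounts.forEach(function (amount) {
    self.handleDebit(amount);
  });
};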

To put it another way, this this isn't the same as that this. We Node JS programmers live in a world where every other line involves a nested function for NodeJS to call back, so we see a lot of this.

And now imagine that I want to make handleDebit private so that only handleSeveralDebits is public. There's no way to achieve this while handleDebit is still attached to the prototype. There are lots more hoops you can jump through (Douglas Crockford has several) but these just add to the clutter.

Javascript objects without classes

To recap: Javascript classes gave us public instance variables, the prototype special variable, "self=this", no private methods... Fortunately, Javascript is a functional language with closures as well. How about this sort of implementation:
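
// No class: just a factory function returning an object.
function makeAccount(balance) {

  function deposit(amount) {
    balance += amount;
  }

  function handleDebit(amount) {     // private: never exposed
    balance -= amount;
  }

  function handleSeveralDebits(amounts) {
    amounts.forEach(handleDebit);    // no "this", no "self"
  }

  function getBalance() {
    return balance;
  }

  return {
    deposit: deposit,
    handleSeveralDebits: handleSeveralDebits,
    getBalance: getBalance
  };
}

var account = makeAccount(100);
account.handleSeveralDebits([10, 5]);
console.log(account.getBalance());   // 85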

A simple function that returns an object. There's no class here, but that's ok, because in Javascript an object is whatever I say it is. Objects made like this will behave just like instances of the Account class, except that we won't be able to get at their instance variables or the private handleDebit method. We aren't using the "new" keyword, nor the "prototype", and best of all, not even the dreaded "this". There's simply less stuff to read. And there's less stuff to comprehend: just try googling "understanding the keywords prototype and constructor in JavaScript".

An alternative that some like is this:
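
function makeAccount(balance) {
  return {
    deposit: deposit,
    handleSeveralDebits: handleSeveralDebits,
    getBalance: getBalance
  };

  // Function declarations are hoisted, so returning before them is fine.
  function deposit(amount) { balance += amount; }
  function handleDebit(amount) { balance -= amount; }
  function handleSeveralDebits(amounts) { amounts.forEach(handleDebit); }
  function getBalance() { return balance; }
}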

Much the same, but the public API is a bit easier to scan.

Caveat emptor

It's not all good news, though. These styles use a lot more memory, because each instance has its own copy of each function. In Node 0.5.9, at least, this is the difference between (very roughly) 40 and 275 bytes per Account object. Which may or may not matter to your real application.

Conclusion

Javascript doesn't need classes. It's instructive that classes need some reserved keywords whose behaviour is easy to misunderstand, whereas the classless closure only needs the basic syntax of the language. I think this hints that programming with classes is an uneasy bolt-on intended to make the language approachable for Java exiles, whereas core Javascript wants us to use classless objects with a purely functional syntax.

In this article we haven't talked about inheritance. My general attitude is that inheritance is a not particularly important special case of delegation, but that sounds like a topic for another day.