Posts Tagged architecture

Building and implementing a Single Sign-On solution

Most modern web applications start as a monolithic code base and, as complexity increases, the once small app gets split apart into many “modules”. In other cases, engineers opt for a SOA design approach from the beginning. One way or another, we start running multiple separate applications that need to interact seamlessly. My goal will be to describe some of the high-level challenges and solutions found in implementing a Single-Sign-On service.

Authentication vs Authorization

I wish these two words didn’t share the same root because it surely confuses a lot of people. My most frequently-discussed example is OAuth. Every time I start talking about implementing a centralized/unified authentication system, someone jumps in and suggests that we use OAuth. The challenge is that OAuth is an authorization system, not an authentication system.

It’s tricky, because you might actually be “authenticating” yourself to website X using OAuth. What you are really doing is allowing website X to use your information stored by the OAuth provider. It is true that OAuth offers a pseudo-authentication approach via its provider but that is not the main goal of OAuth: the Auth in OAuth stands for Authorization, not Authentication.

Here is how we could briefly describe each role:

  • Authentication: recognizes who you are.
  • Authorization: knows what you are allowed to do, or what you allow others to do.

If you feel stuck in your design and something seems wrong, ask yourself if you might be confusing the two auth words. This article will only focus on authentication.

A Common Scenario

SSO diagram with 3 top applications connecting to a central authentication service.

This is probably the most common structure, though I made it slightly more complex by drawing the three main apps in different programming languages. We have three web applications running on different subdomains and sharing account data via a centralized authentication service.


The goals are simple:

  • Keep authentication and basic account data isolated.
  • Allow users to stay logged in while browsing different apps.

Implementing such a system should be easy. That said, if you migrate an existing app to an architecture like that, you will spend 80% of your time decoupling your legacy code from authentication and wondering what data should be centralized and what should be distributed. Unfortunately, I can’t tell you what to do there since this is very domain specific. Instead, let’s see how to do the “easy part.”

Centralizing and Isolating Shared Account Data

At this point, you more than likely have each of your apps talk directly to shared database tables that contain user account data. The first step is to migrate away from doing that. We need a single interface that is the only entry point to create or update shared account data. Some of the data we have in the database might be app specific and therefore should stay within each app, anything that is shared across apps should be moved behind the new interface.

Often your centralized authentication system will store the following information:

  • ID
  • first name
  • last name
  • login/nickname
  • email
  • hashed password
  • salt
  • creation timestamp
  • update timestamp
  • account state (verified, disabled …)

Do not duplicate this data in each app; instead, have each app rely on the account ID to query the data that is specific to a given account in that app. Technically that means that instead of using SQL joins, you will query your database using the ID as part of the condition.
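As a small illustration, here is a minimal sketch assuming Rails-style ActiveRecord models and a hypothetical per-app blog_posts table; the app-specific table simply stores the shared account ID and is always queried with it:

# App-specific model living inside one of the client apps.
# account_id references the ID managed by the central accounts service;
# there is no local accounts table to join against.
class BlogPost < ActiveRecord::Base
end

# Instead of: SELECT * FROM blog_posts JOIN accounts ON accounts.id = blog_posts.account_id ...
# we use the account ID (obtained at authentication and kept in the session) as a condition:
posts = BlogPost.where(:account_id => current_account_id)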

My suggestion is to do things slowly but surely. Migrate your database schema piece by piece, making sure everything keeps working. Once the other pieces are in place, you can migrate one code path at a time until your entire code base is moved over. You might then want to change your DB credentials to read-only access, and eventually to no access at all.

Login workflow

Each of our apps already has a way for users to log in. We don’t want to change the user experience; instead, we want to make a transparent modification so the authentication check is done centrally instead of locally. The easiest way to do that is to keep your current login forms, but instead of POSTing them to your local apps, POST them to a centralized authentication API. (SSL is strongly recommended.)

diagram showing the login workflow

As shown above, the login form now submits to an endpoint in the authentication application. The form will more than likely include a login or email and a clear text password, as well as a hidden callback/redirect URL so that the authentication API can redirect the user’s browser back to the original app. For security reasons, you might want to whitelist the domains your authentication app is allowed to redirect to.

Internally, the authentication app looks up the account record matching the identifier (email or login), hashes the submitted clear-text password with the stored salt, and compares the result against the stored password hash. If the verification is successful, a token is generated containing some user data (for instance: ID, first name, last name, email, creation date, authentication timestamp). If the verification fails, no token is generated. Finally, the user’s browser is redirected to the callback/redirect URL provided in the request, with the token passed along.
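Here is a rough sketch of what that endpoint could look like (Sinatra-style; the Account and AuthToken helpers are hypothetical stand-ins for your own persistence and token code):

require 'sinatra'

# Central login endpoint that all client apps POST their login forms to.
post '/sessions' do
  account = Account.find_by_email_or_login(params[:login])   # hypothetical lookup
  if account && account.valid_password?(params[:password])   # hashes the clear password with the stored salt
    token = AuthToken.generate(:id         => account.id,
                               :email      => account.email,
                               :first_name => account.first_name,
                               :authenticated_at => Time.now.to_i)
    redirect "#{params[:callback_url]}?token=#{Rack::Utils.escape(token)}"
  else
    # Verification failed: no token is generated, send the user back to the login form.
    redirect "#{params[:callback_url]}?error=invalid_credentials"
  end
end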

You might want to encrypt and/or sign the token so that the clients can verify and trust that it comes from a trusted source. A great fit for that is RSA: keep the private key only on the auth server(s) and distribute the public key to all your client apps, so only the auth server can produce tokens that the clients can verify. Other strong approaches would also work. For instance, you could simply add a signature to the params sent back so the clients can check their authenticity; HMAC or DSA signatures are great for that. In some cases, though, you don’t want people to see the content of the data you send back, which is especially true if you are sending back a ‘mobile’ token for instance, but that’s a different story. What’s important to consider is that we need a way to ensure that the data sent back to the client can’t be tampered with. You should also make sure you prevent replay attacks, for instance by embedding a timestamp and rejecting tokens that are too old.
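To make the signature idea concrete, here is a minimal HMAC-based sketch using Ruby’s OpenSSL and JSON standard libraries (the key handling, payload format and expiry window are all assumptions, not a prescription):

require 'openssl'
require 'base64'
require 'json'

HMAC_SECRET = 'secret shared between the auth server and client apps'

# Auth server side: serialize the payload, add a timestamp to limit replays,
# and append an HMAC-SHA256 signature.
def sign_token(payload)
  data = Base64.urlsafe_encode64(payload.merge(:issued_at => Time.now.to_i).to_json)
  sig  = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), HMAC_SECRET, data)
  "#{data}--#{sig}"
end

# Client app side: recompute the signature and reject stale tokens.
def verify_token(token, max_age = 300)
  data, sig = token.to_s.split('--', 2)
  return nil unless sig
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), HMAC_SECRET, data)
  return nil unless sig == expected   # use a constant-time comparison in production
  payload = JSON.parse(Base64.urlsafe_decode64(data))
  return nil if Time.now.to_i - payload['issued_at'].to_i > max_age   # crude replay/expiry guard
  payload
end

With RSA or DSA, sign_token would use the private key on the auth server and verify_token would only need the public key distributed to the client apps.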

On the other side, the application receives a GET request with a token param. If the token is empty or can’t be decrypted/verified, authentication has failed. At that point, we need to show the user the login page again and let him/her try again. If, on the other hand, the token checks out, its content should be saved in the session so future requests can reuse the data.

We described the authentication workflow, but if a user logs in to application X, (s)he won’t be logged in to application Y or Z. The trick here is to set a top-level domain cookie that can be seen by all applications running on subdomains. Of course, this solution only works for apps on the same domain, but we’ll see later how to handle apps on different domains.

The cookie doesn’t need to contain a lot of data: its value can be the account ID, a timestamp (to know when authentication happened) and a signature. The signature is critical here since this cookie will allow users to be automatically logged in to other sites. I’d recommend HMAC or DSA to generate the signature. DSA, very much like RSA, is an asymmetric algorithm relying on a public/private key pair, which offers more security than something based on a shared secret like HMAC. But that’s really up to you.
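In a Rails-style controller, setting that cookie could look roughly like this (the sign helper stands for whatever HMAC/DSA signing code you settled on, and the domain is obviously an example):

# Called right after a successful authentication so all apps on *.example.com
# can detect the global login.
def set_global_auth_cookie(account_id)
  issued_at = Time.now.to_i
  payload   = "#{account_id}|#{issued_at}"
  cookies[:global_auth] = {
    :value    => "#{payload}|#{sign(payload)}",   # sign is the assumed HMAC/DSA helper
    :domain   => '.example.com',                  # top-level domain shared by the subdomain apps
    :secure   => true,                            # only send the cookie over SSL
    :httponly => true
  }
end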

Finally, we need to set a filter in each application. This auto-login filter checks for the presence of an auth cookie on the top-level domain and the absence of a local session. If that’s the case, a session is automatically created using the account ID from the cookie value, after the cookie integrity is verified. We could also share the session between all our apps, but in most cases the data stored by each app is very specific and it’s safer/cleaner to keep the sessions isolated. Integration with an app running on a different service will also be easier if the sessions are isolated.
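A sketch of such an auto-login filter in a Rails-style application controller (the cookie layout and the verify_signature helper are the assumptions made above):

class ApplicationController < ActionController::Base
  before_filter :auto_login_from_global_cookie

  private

  # No local session but a valid top-level domain auth cookie:
  # create the local session transparently.
  def auto_login_from_global_cookie
    return if session[:account_id]          # already logged in locally
    cookie = cookies[:global_auth]
    return unless cookie
    account_id, issued_at, signature = cookie.split('|', 3)
    if verify_signature("#{account_id}|#{issued_at}", signature)   # assumed helper
      session[:account_id] = account_id
    end
  end
end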



Registration workflow

For registration, as for login, we can take one of two approaches: point the user’s browser to the auth API, or make S2S (server to server) calls from within our apps to the Authentication app. POSTing a form directly to the API is a great way to reduce duplicated logic and traffic on each client app, so I’ll demonstrate this approach.

The approach is the same as the one we used for login. The difference is that instead of returning a token, we just return some params (ID, email and potential errors). The redirect/callback URL will also obviously be different than for login. You could decide to encrypt the data you send back, but in this scenario what I would do is set an auth cookie at the top-level domain when the account is created, so the “client” application can auto-login the user. The information sent back in the redirect is used to re-display the registration form with the error information and the email entered by the user.

At this point, our implementation is almost complete. We can create an account and log in using the defined credentials. Users can switch from one app to another without having to log in again because we are using a shared signed cookie that can only be created by the authentication app and can be verified by all “client” apps. Our code is simple, safe and efficient.

Updating or deleting an account

The next thing we need is a way to update or delete an account. This is something that needs to happen between a “client” app and the authentication/accounts app, so we’ll make S2S (server to server) calls. To ensure the security of our apps and to offer a nice way to log requests, each client will use an API token/key to communicate with the authentication/accounts app. The API key can be passed in an X-header so this concern stays out of the request params, and the code can handle the X-header authentication separately from the actual service implementation. S2S services should have a filter verifying and logging API requests based on the key sent with each request. The rest is straightforward.
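As a rough sketch (Sinatra-style; the header name and the ApiClient lookup are assumptions):

require 'sinatra'

# Runs before every S2S endpoint: verify and log the API key sent as an X-header.
before '/s2s/*' do
  api_key = request.env['HTTP_X_API_KEY']              # the X-API-KEY request header
  @client = api_key && ApiClient.find_by_key(api_key)  # assumed lookup of trusted clients
  halt 401, 'invalid or missing API key' unless @client
  # log the call (client name, path, params) here for auditing
end

# The actual service implementation stays free of authentication concerns.
put '/s2s/accounts/:id' do
  # ... update the account designated by params[:id] ...
end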

Using different domains

Until now, we assumed all our apps were on the same top-level domain. In reality, you will often find yourself with apps on different domains. This means that you can’t use the shared signed cookie approach anymore. However, there is a simple trick that will allow you to avoid requiring your users to log in again as they switch apps.


The trick: when a local session isn’t present, the app on the other domain loads an iframe pointing to a page on the authentication/accounts app, which verifies that a valid cookie was set on the main top-level domain. If that is the case, the user is already globally logged in, and the iframe can tell its host page to redirect to an application endpoint, passing an auth token the same way we did during authentication. The app then creates a session and redirects the user back to where (s)he started. Subsequent requests will see the local session and this process will be skipped.

If the authentication application doesn’t find a signed cookie, the iframe can display a login form or redirect the iframe host to a login form, depending on the desired behavior.
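On the authentication app side, the page loaded by the iframe boils down to something like this sketch (Sinatra-style, reusing the cookie and token helpers assumed earlier):

require 'sinatra'

# Loaded inside an iframe by apps living on a different domain.
get '/sso/check' do
  cookie = request.cookies['global_auth']
  if cookie && valid_auth_cookie?(cookie)     # assumed signature check
    token = auth_token_from_cookie(cookie)    # assumed helper, same kind of token as the login flow
    # Tell the hosting page to redirect to the app's callback with the token.
    "<script>window.parent.location = '#{params[:callback_url]}?token=#{Rack::Utils.escape(token)}';</script>"
  else
    erb :login_form   # or redirect the parent frame to a login page
  end
end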

Something to keep in mind when using multiple apps and domains is that you need to keep the shared cookies/sessions in sync, meaning that if you log out from an app, you need to also delete the auth cookie to ensure that users are globally logged out. (It also means that you might always want to use an iframe to check the login status and auto-logoff users).


Mobile clients

Another part of implementing a SSO solution is handling mobile clients. Mobile clients need to be able to register/login and update accounts. However, unlike S2S service clients, mobile clients should only be allowed to modify data on behalf of a given user. To do that, I recommend providing opaque mobile tokens during the login process. This token can then be sent with each request in an X-header so the service can authenticate the user making the request. Again, SSL is strongly recommended.
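A sketch of the mobile filter (Sinatra-style; the header name and the Account lookup are assumptions):

require 'sinatra'

# Every mobile call must carry the opaque token handed out at login.
before '/mobile/*' do
  token    = request.env['HTTP_X_MOBILE_TOKEN']             # the X-MOBILE-TOKEN request header
  @account = token && Account.find_by_mobile_token(token)   # assumed lookup
  halt 401, 'invalid or missing mobile token' unless @account
end

get '/mobile/account' do
  # Only data belonging to @account can be read or modified here.
  @account.to_json
end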

In this approach, we don’t use a cookie, and we don’t actually need an SSO solution so much as a unified authentication system.


Writing web services

Our Authentication/Accounts application turns out to be a pure web API app.

We also have 3 sets of APIs:

  • Public APIs: can be accessed from anywhere, no authentication required
  • S2S APIs: authenticated via API keys and only available to trusted clients
  • Mobile APIs: authenticated via a mobile token and limited in scope.

We don’t need dynamic HTML views, just simple web service related code. While this is a little bit off topic, I’d like to take a minute to show you how I personally like writing web service applications.

Something I care a lot about when implementing web APIs is validating incoming params. This is an opinionated approach that I picked up while at Sony and that I think should be used every time you implement a web API. As a matter of fact, I wrote a Ruby DSL library (Weasel Diesel) allowing you to describe a given service, its incoming params, and the expected output. This DSL is hooked into a web backend so you can implement services using a web engine such as Sinatra or maybe Rails 3. Based on the DSL description, incoming parameters are verified before being processed. The other advantage is that you can generate documentation based on the API description, as well as automated tests.

You might be familiar with Grape, another DSL for web services. Besides the obvious style difference, Weasel Diesel offers the following advantages:

  • input validation/sanitization
  • service isolation
  • generated documentation
  • contract based design
Here is a hello world web service implemented using Weasel Diesel and Sinatra:
describe_service "hello_world" do |service|
service.formats :json
service.http_verb :get
service.disable_auth # on by default
service.param.string :name, :default => 'World'
service.response do |response|
response.object do |obj|
obj.string :message, :doc => "The greeting message sent back. Defaults to 'World'"
obj.datetime :at, :doc => "The timestamp of when the message was dispatched"
service.documentation do |doc|
doc.overall "This service provides a simple hello world implementation example."
doc.param :name, "The name of the person to greet."
doc.example "<code>curl -I 'http://localhost:9292/hello_world?name=Matt'</code>"
service.implementation do
{:message => "Hello #{params[:name]}", :at =>}.to_json

Basic test validating the contract defined in the DSL against the actual output when the service is called:

class HelloWorldTest < MiniTest::Unit::TestCase
  def test_response
    TestApi.get "/hello_world", :name => 'Matt'
    assert_api_response # Weasel Diesel test helper verifying the response against the service description
  end
end

Generated documentation:

If the DSL and its features seem appealing to you and you are interested in digging more into it, the easiest way is to fork this demo repo and start writing your own services.

The DSL has been used in production for more than a year, but there certainly are tweaks and small changes that can make the user experience even better. Feel free to fork the DSL repo and send me Pull Requests.



Designing for scalability

Designing beautiful and scalable software is hard. Really hard.

It’s hard for many reasons. But what makes it even harder is that software scalability is a relatively new challenge, something only really done in big companies, companies that are not really keen on sharing their knowledge. The amount of academic work done on software design is quite limited compared to other types of design, and shared knowledge about scalable design is almost nonexistent. (Don’t expect to find detailed information about scaling online video games either; the industry is super secretive, and even though this is a niche market where finding skilled/experienced developers is really challenging, information is not shared outside a game project.)

I don’t pretend to have the required knowledge to cover this topic at length. However, I do have some exposure and figured I should share what I learned so others can benefit from my experience and push the discussion further.

Designing scalable software is just like any other type of software design, with a few unique constraints. If I had to define the key requirements of a great design I would have to quote Frederick P. Brooks:

“Great designs have conceptual integrity – unity, economy, clarity”

This is true for any type of design and one should always start there.
Don’t just jump on your keyboard and start writing tests/code right away. Take a minute to think about your design.
That will save you hours of refactoring and headaches.

You’re a designer and might not even know it

You might not be designing the next NASA engine but you are more than likely designing an API that you and others will use. As a matter of fact, unless you write code that will never be seen again, you are writing an Application Programming Interface (API). Every single class, method, function you write is an API that you and others will use. Remember that every time you write code, you are the implementer of a design, and therefore you are a designer.


Giana and I, discussing design patterns

When thinking about your design, focus on design concepts instead of implementation details. A design concept must be clear: simple to explain with words and to draw on a whiteboard. If you can’t draw and explain your design on a whiteboard, you have failed one of the great design requirements: clarity. If you work alone, or your coworkers are tired of hearing you, try rubber ducking your design ideas. It’s the same concept as rubber duck debugging, where a programmer forces himself to explain his code, line by line, to a rubber duck on his desk; but instead of talking about the code, you explain your design and why it’s awesome (I’ve recently done this with my baby girl and it’s been really helpful).

Keeping the design integrity

One of the challenges of designing scalable software is that your constraints are often very unique to your product. Off-the-shelf solutions don’t work for you, and the specific solution used by another project can’t be transposed to your project because the cause and the effect of what you need to scale are different. The problem is that you can really quickly lose design integrity.

Let’s take a look at a concrete example to see how the design integrity can be lost or even not defined at all.
Let’s pretend we want to write a suite of web APIs for video games.

We can look at this task from different perspectives:

  • Video game deadlines are crazy, let’s find a way to release as many APIs ASAP.
  • We’re going to get a huge amount of traffic, let’s make sure we don’t crash and burn.
  • We need to make sure our APIs are simple to use for the dev teams integrating them.

Each of these perspectives reflects a facet of the challenge. Other facets exist that I didn’t mention but that a business person might have listed right away, one of which being: How can we do that for the least amount of money?

To design our API suite, we first need to understand the different perspectives. Gaining this understanding will help us design something better but it will also help us communicate better with the different stakeholders. Once we have a decent understanding of the constraints and expectations, someone needs to explicitly define the design values and their priorities. This is a crucial step in the design process. Systems nowadays are too complicated to be handled by only one person and keeping design integrity requires clear communication.

Design goal and values

The best way to communicate the design is to write a simple sentence defining the primary goal:
“Build a robust, efficient and flexible middleware solution leveraged by external teams to develop online video game features.”

This is a bit like the mission statement of your project, or the elevator pitch you give someone that asks you what you are working on.

Associated with the primary goal are a host of desiderata, or secondary objectives. These are the key objectives used to weigh technical decisions. It’s important for the design to highlight a scale of values so one can refer to them to decide if his/her idea fits the design or not. Here is an example:

  1. Stability
  2. Performance / Scalability
  3. Encapsulation / Modularity
  4. Conventions
  5. Documentation
  6. Reusability / Maintainability

Often these desiderata apply to most of your projects and reflect your team/company’s technical values. The list might seem simple and unnecessary but, believe me, it will reduce the arguments where John tells Jane that her idea sucks but his is better because he “knows better”. Having an objective reference to point to when trying to decide which way to go is extremely valuable and will reduce the amount of office drama.


Finally, make sure to explicitly define all the major constraints and to acknowledge the team’s concerns. Here is a small example of what could be listed (which also reflects the previously mentioned perspectives):

  • hard deadlines
  • external teams involved
  • huge load expected
  • limited support available
  • requirements changing quickly
  • limited budget
  • unknown hosting architecture/constraints

Remember that design is always iterative because the constraints keep changing. That’s just the way it is and a lot of technical constraints only appear as you implement or test your design. That’s also why the design needs to be clear but the implementation needs to be flexible.

Reads vs writes

Most web apps out there are read heavy, meaning that the stored data gets read far more often than it gets modified. Scaling these types of systems is easier because one can introduce a cache layer, an intermediary store which acts as a fast buffer and avoids putting load on the backends. The cost reduction is huge because, if you architected your app properly, the data is read from the data store only once (or once every X minutes) after being created/modified.

Caching is so important that it’s even built into the HTTP protocol, making basic caching trivial.
Speaking of HTTP, a common problem I often see when serving HTML content to a browser is that, even though the backend calls are the same, some information needs to be customized for the current visitor, which prevents you from caching the entire page. An easy solution in this case is to still cache the entire page, but use JavaScript to fetch the custom data from the backend and modify the cached markup directly in the visitor’s browser. As part of your design, you will more than likely need to implement multiple layers of caching and use technologies such as query caching, Varnish, Squid, Memcached, memoization, etc.
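To illustrate the read-through idea on the Ruby side, here is a tiny sketch using the Dalli memcached client (the gem choice, key naming and TTL are assumptions):

require 'dalli'

CACHE = Dalli::Client.new('localhost:11211')

# Read-through cache: try memcached first, fall back to the datastore on a miss,
# and keep the result around for 5 minutes.
def cached_product(product_id)
  CACHE.fetch("product/#{product_id}", 300) do
    Product.find(product_id)   # assumed model; only hit on a cache miss
  end
end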

The problem is that, as your system gets more traffic, you will notice that the volume of DB/network writes becomes your bottleneck. You will also notice a drop in your cache hit ratio, because only a small portion of your cached data is retrieved often by many clients. At this point, you will need to denormalize to avoid contention, shard your data into silos, or write to cache and flush to the data store when it is available and not overwhelmed.

Asynchronous processing

One way to avoid write contention is to use async processing. The concept is simple. Instead of directly writing to your datastore after your backend receives a request, you put a message in a queue with all the information needed to run the operation later. On the other side, you have a set number of workers receiving messages and operating on them one after the other.

The advantage of such an approach is that you control the number of workers and therefore the maximum number of concurrent writes to your datastore. You can also inspect the queue before it gets worked and maybe coalesce some messages or remove outdated/duplicated ones. Finally, you can assign more workers to some message types, making sure the important messages get processed first.

Another advantage of this design is that the client doesn’t hang (and potentially time out) while you’re processing the data. You can also drain a long queue faster by starting more workers to catch up and retiring them later.
Your app is more resilient to errors, and failed async jobs can simply be restarted.
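Here is a bare-bones sketch of the idea using only Ruby’s standard library (a real system would use something like Resque, RabbitMQ or SQS, and persist stands for your actual datastore write):

require 'thread'

QUEUE = Queue.new   # stand-in for a real message broker

# Web side: instead of writing to the datastore inline, enqueue the operation.
def record_purchase(account_id, product_id)
  QUEUE << {:type => :purchase, :account_id => account_id, :product_id => product_id}
end

# Worker side: a fixed number of workers bounds the concurrent writes.
WORKER_COUNT = 4
workers = WORKER_COUNT.times.map do
  Thread.new do
    while (message = QUEUE.pop)
      # coalesce or drop outdated/duplicate messages here if needed, then persist
      persist(message)   # assumed datastore write
    end
  end
end
# in a standalone worker process you would then call workers.each(&:join)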

Load test, monitor and be proactive

Even the best designs have weak spots and will have to be improved once they are released. Don’t wait for your system to fall apart before looking for solutions. Monitor your app. Every single part of your app. Look for patterns showing signs of potential problems and imagine what you could do to resolve them if they start manifesting.

Of course before getting there, you will need to understand each part of your system and benchmark/load test/profile your app so you can be ready to face the storm.

Benchmarks and load tests are both super important and, too often, not reflective of what you will really face later on. They are usually great at identifying major problems that should be resolved right away, but fail to show the one big problem you will see on day one when you have to deal with 20k concurrent requests. Use them as indicators, rely on your experience and learn about problems others have faced. This will help you build a knowledge of scalability challenges, their root causes, and their potential solutions.

For benchmarking Ruby code, I use the built-in benchmark tool available in the standard lib.
For simple load testing, I use httperf/autobench and siege.
For anything more complicated, I use JMeter.
In the video game industry, we also often use sims using the client’s code to create load.
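Going back to the built-in benchmark tool mentioned above, a minimal example looks like this (the measured snippets are arbitrary):

require 'benchmark'

Benchmark.bm(15) do |x|
  x.report('concatenation') { 100_000.times { 'foo' + 'bar' } }
  x.report('interpolation') { 100_000.times { "foo#{'bar'}" } }
end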

Benchmarking without profiling is often useless. Unlike other programming languages, Ruby doesn’t yet have awesome, easy-to-use profiling tools, but things are evolving quickly. Here are some tools I use regularly.

The Ruby wrapper around google perftools is really good.
Before using perftools as often as I do now, I frequently used ruby-prof with kcachegrind.
Ruby 1.9 lets you inspect its garbage collector as explained in a previous post.
And when using MacRuby, I often use DTrace.

Other misc. things I learned


Documentation is critical. It doesn’t matter how you do it, but you need to make sure you document what you want to build, how you build it, and why you build it. Documenting will help you and the others working on the project, and will keep you in check. I have started documenting an API only to realize that the design was flawed. Maybe it’s just the way you name a method or a class, maybe it’s a weird method signature, or maybe the entire workflow is wrong; when you document things, design errors stand out more obviously.

To document Ruby code, I use yard, which is quite similar to javadoc. Code documentation, when writing in a duck-typed language, is, for me, very important since it makes the API designer’s expectations much clearer. I also often add English documentation, written in markdown files and compiled by yard. If you say that your code is simple and that it doesn’t require documentation because anyone can just read it and understand it, then you have totally missed the point. Yes, it’s more work to keep documentation and code in sync. But people using web APIs don’t have access to the implementation details. The people distributing compiled APIs don’t give access to their implementation. And honestly, the API should be decoupled from the implementation. I shouldn’t have to guess how to use your API based on how you implemented the code underneath; otherwise my assumptions might be totally wrong.
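For instance, a yard-documented method might look like this (the method itself is just an illustration):

# Greets a person by name.
#
# @param [String] name The name of the person to greet.
# @param [Hash] opts Extra options.
# @option opts [Boolean] :shout Whether to return the greeting in uppercase.
# @return [String] The greeting message.
def greet(name, opts = {})
  message = "Hello #{name}"
  opts[:shout] ? message.upcase : message
end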


With great power comes great responsibility. The law of system entropy says that systems become more disorganized over time, so don’t start with complicated code if you can avoid it! Just because your programming language lets you do crazy stuff doesn’t mean you have to do it. More than 90% of the time, your code can be written without voodoo and be easier to read, easier to understand, easier to maintain and faster to execute.

If you can’t figure out how to *not* use metaprogramming or weird patterns, take a step back and look at your design: did you miss something?
Also, don’t reinvent the wheel. Use the language the way it was designed to be used. Keep your APIs as small as possible; don’t expose too much, as it will be virtually impossible to remove later on.

As an example, look to what extent Rails modified the Ruby language:

In Rails’ console (Rails 2, Ruby 1.8.7)

>> Array.ancestors
=> [Array, ActiveSupport::CoreExtensions::Array::RandomAccess,
 ActiveSupport::CoreExtensions::Array::Grouping, ActiveSupport::CoreExtensions::Array::ExtractOptions,
 ActiveSupport::CoreExtensions::Array::Conversions, ActiveSupport::CoreExtensions::Array::Access,
 Enumerable, Object, ERB::Util, ActiveSupport::Dependencies::Loadable, Base64::Deprecated, Base64, ...]
>> [].methods.size
=> 233

In irb:

>> Array.ancestors
=> [Array, Enumerable, Object, Kernel]
>> [].methods.size
=> 149

Removing any of these added methods is virtually impossible since some piece of code somewhere might rely on it.

Abstraction & its dangers

Often when designing an API, it’s preferable to offer a well-defined public API which delegates the work to a private implementation shared between multiple public APIs. This approach avoids duplication, makes maintenance easy, and allows for more flexibility. As an example, we can have a public matchmaking API which delegates most of the work to a private matchmaking interface. If required, swapping the private interface would be totally transparent to the public API. This approach has a downside, however. Having a shared private implementation does create a duplication of APIs: it leaves us with both a public and a private API, because we need an API for public access and a private API for the public API to connect to. But when we weigh the benefits and look at what is duplicated, we realize that this trade-off is worth it.
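In Ruby, that separation could be sketched like this (the matchmaking classes are purely illustrative):

# Public, documented API: the only thing callers (and their tests) should touch.
class Matchmaking
  def self.find_opponent(player_id)
    backend.find_opponent(player_id)
  end

  # Swapping the private implementation is invisible to callers.
  def self.backend
    @backend ||= InMemoryMatchmaker.new
  end

  def self.backend=(implementation)
    @backend = implementation
  end
end

# Private implementation: free to change, shard, or be replaced by a remote service call.
class InMemoryMatchmaker
  def find_opponent(player_id)
    # ... pick an available player of similar skill ...
  end
end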

Keeping a certain level of abstraction is important to keep the separation of concerns as clear as possible. You want to layer your design so that each layer is responsible for itself, only knows about itself, and has limited interactions with other layers. By factoring/isolating the different modules, you can keep a simple, elegant, easy-to-maintain system. This is a key element of design, but one needs to be careful not to obfuscate the design by over-abstracting his/her code. This is particularly important when designing a scalable app, because you will often need to be able to easily swap parts to optimize each part of your system.

That said, a lot of code out there is unnecessarily complicated. I sometimes wonder if the authors of such code are trying to show that they know some cool language tricks. Or maybe this is due to the fact that, too often, people are impressed by code they don’t understand. The problem with overly complicated or magical code is that it creates yet another abstraction layer between the end user and the API. It makes the API more opaque, and that’s a cost you have to take into consideration. Every time you abstract something, you have a cost associated with the abstraction. This cost can be calculated in terms of performance loss, clarity loss and maintainability cost.

This is exactly the same problem encountered when trying to normalize data in a database.
Normalizing is a great concept which makes a lot of sense … until you realize that the cost of keeping your data normalized is too great and it becomes a major bottleneck, not letting you scale your application.
It is at this moment (and probably only then) that you need to denormalize your data.

It’s the same thing with code abstraction. It’s fine to abstract, unless the abstraction is such that it requires too much work to understand what is going on. A bit of duplication is often worth it, but be careful to not abuse it.


Ruby has a decent debugger called ruby-debug and I’m amazed by the number of people who haven’t heard of it.
I don’t know what I would do if I couldn’t use breakpoints and get an interactive shell to debug Ruby code.
Please people! This is 2011, stop using print statements as a means of debugging!


That’s it for this post. It was longer than expected and I feel I didn’t really cover anything in depth, but hopefully you learned something new or at least read something that piqued your interest. I look forward to reading your comments and, hopefully, your blog posts sharing your experience in designing scalable software.



Causality of scalability

Part of my job at Sony PlayStation is to architect scalable systems which can handle a horde of excited players eager to be the first to play the latest awesome game and who would play for 14-24 hours straight. In other words, I need to make sure a system can “scale”. In my case, a scalable system is a system that can go from a few hundred concurrent users/players to hundreds of thousands of concurrent users/players and stay stable for months.

One can achieve scalability in many ways, and if you expect me to provide you with a magical formula you will be disappointed. I actually believe that you can scale almost anything if you have the adequate resources. So saying that X or Y doesn’t scale is, for me, a sign that people are taking shortcuts in their explanations (X or Y are really hard to scale, so they don’t scale) or that they don’t understand the causality of scaling. What I am exploring in this post is the relationship between cause and effect when trying to make a system scalable. We will see that the scalability challenge is not new and not exclusive to the tech world. We will study the traditional approach to scaling as well as the challenge of scaling in relation to the web, and what to be aware of when planning to make a solution scalable.

Scaling outside of the tech world

Trying to scale isn’t new. It goes back to well before technology was invented. Scaling something up or increasing something in size or number is a goal businesses have aimed for ever since the oldest profession in the world was invented. A prostitute wanting to scale up her business was limited by her own time and body. She would reach a point where she couldn’t take more clients. (Independent contractors surely know what I am talking about!) So a prostitute wanting to scale up would usually become a madam/Mama-san and scale the business by having girls work for her.

Another simple example would be a restaurant. A restaurant can handle up to a certain number of covers/clients at once; after that, customers have to wait in line. The restaurant example is interesting because you can clearly see that opening a huge restaurant with a capacity of 1,000 covers might not be a good idea. First, because the cost of running such a restaurant might be much more than the income generated. But also because even if the restaurant does 1,000 covers at peak time, it won’t stay that busy during the entire time it’s open. So now you have to deal with waiters/waitresses, busboys and other staff who won’t have anything to do. As you have probably understood already, scaling a restaurant means that the scaling has to be done in a cost-effective manner. And what’s even more interesting is that what we could have thought was the bottleneck (the number of concurrent covers) can be easily scaled up, but it wouldn’t provide real scalability. In fact, this choice would cascade into other areas of management like staffing and the building size. Often, the scaling solution for restaurants is to open new locations, which can result in keeping the lines shorter, targeting new markets and reducing risks, since one failing branch won’t dramatically affect the others.


Scaling in the traditional tech world

If you’ve ever done console development or worked on embedded devices, you know that they are restricted by some key elements: it can be memory, CPU, hard drive space, etc. You have to “cram” as many features as you can into the device, working around the fixed limitations of the hardware. In the console industry, what’s interesting to note is that the hardware doesn’t change often, but people expect that a new game on the same platform will do things better than the previous game, even though the limitations are exactly the same. This is quite a challenging problem because you have to fight against the hardware limitations by optimizing your code to be super efficient. That’s exactly the reason why console video game developers manage memory manually instead of relying on a garbage collector. This way they can squeeze every resource they can out of the console.

The great advantage of this type of development is that you can reproduce and accurately anticipate issues. The bottlenecks/limitations are well known and immutable! If you find a workaround in your lab, you know that the solution will work for everyone. Console video game developers (and to some extent, iOS developers) don’t have to wonder how their game will behave if the player has an old graphics card or not enough RAM.

But ever since we started distributing the processing power, scaling technology has become more challenging.

Scaling on the web

Scaling a web based solution might actually seem quite like scaling a restaurant, except that you can’t easily open multiple locations since the concept of proximity in web browsing isn’t really as concrete as in real life. So the solution can’t be directly transposed. Most people will only have to scale up by optimizing their code running on one server, or maybe two. That’s because their service/app is not, and won’t be, generating high traffic. Scaling such systems is common and one can rely on work done in the past decades for good examples of solutions.

However, some web apps/games are or will become high traffic. And because every single entrepreneur I’ve met believes that their solution will be high traffic, they think they need to be able to scale and that it should therefore be engineered for scale from the beginning. (This is, by the way, the reason scalability is a buzzword and you can sell almost anything technical by saying that it scales.) The problem with this approach is that people want scalability but don’t understand its causality. In other words, they don’t understand the relationship between cause and effect related to making a solution scalable.

Basically, we can reduce the concept of causality of scalability to something like this: you change a piece of the architecture to handle more traffic, but this part has an effect on other parts that also need to change, and the pursuit of scalability almost never ends (just ask Google). Making a system scalable needs to have a well-defined cause and expected effect; otherwise it’s a waste. In other words, the effect of scaling engenders the need for solutions which themselves have complex effects on a lot of aspects of a system. Let’s make it clearer by looking at a simple example:

Simple Architecture by Matt Aimonetti

We have an e-commerce website, and this website uses a web application with a database to store products and transactions. Your system is made of one web server handling the requests and one database storing the data. Everything goes well until Black Friday, Christmas or Mother’s Day arrives, and now some customers are complaining that they can’t access your website or that it’s too slow. This is also sometimes referred to as the digg/slashdot/reddit effect. All of a sudden you have a peak of traffic and your website can’t handle it. This is actually a very simple use case, but it’s also the only use case most people on the web need to worry about.

The causality of wanting this solution to scale is simple, you want to scale so you can sell more and have happy customers. The effect is that the system needs to become more complex.

To scale such a system, you need to find the root cause of the problem. You might have a few issues, but start by focusing on the main one. In this case, it’s more than likely that your web server (frontend) cannot handle more than x requests/second. Interestingly enough, the number of reqs/s might not match the result of your load tests. That’s probably because you didn’t expect the usage pattern that you are seeing, but that’s a whole different topic. At this point you need to understand why you can’t go above the x reqs/s limit you’re hitting. Where is the bottleneck? Is it that your application code is too slow? Is it that the database has been brought to its knees? Or maybe the web server serves as many requests as technically possible, but it’s still not enough based on the traffic you are getting.

If we stop right here, we can see that there can be multiple reasons why the solution doesn’t scale. But what’s even more interesting is that the root cause this time depends on the usage pattern, and it is really hard to anticipate all patterns. If we wanted to make this system scale, we could do it in different ways.

To give you some canned answers: if the bottleneck is that your code is too slow, you should check whether the code is slow because of the DB queries made (too many queries, slow queries, etc.). Is it slow because you are doing something complex that can’t be easily improved, or is it because you are relying on solutions that are known to not support concurrent traffic easily? More than likely, you will end up going for the easy caching approach. By caching some data (full responses, chunks of data, partial responses, etc.) you avoid hitting your application layer and can therefore handle more traffic.

More complex Architecture by Matt Aimonetti

Caching avoids data processing & DB access

If your code is as fast as it can be, then a solution is to add more application servers or to make some processes asynchronous. But that means you now need to change the topology of your system, the way you deploy code and the way you route traffic. You will also increase the load on the database by opening more connections, and maybe the database will now become the new bottleneck. You might also start seeing race conditions, and you are certainly increasing the maintenance burden and complexity (i.e. cost) of your system (caching might end up having the same effect, depending on the caching solution chosen).

load balanced approach by Matt Aimonetti

One way of scaling is to load balance the traffic

Just looking at these possible causes and the various solutions (we didn’t even mention DB replication, sharding, NoSQL, etc.), we can clearly see that making a system scalable has some concrete effects on system complexity/maintenance which directly translate into increased cost.

If you are an engineer, you obviously want your system to be super scalable and handle millions of requests per second. But if you are a business person, you want to be realistic and evaluate the consequences of not scaling past a certain point and convert that into a loss. Then you weigh the cost of not scaling against the cost of “maybe” scaling and you make a decision.

The problem here, though, is that scaling is a bit like another buzzword: SEO (Search Engine Optimization). A lot of people/solutions will promise scaling capabilities without really understanding the big picture. Simple systems can easily scale up using simple solutions, but only up to a certain level. After that, what you need to do to scale becomes so complex that anyone promising you the moon probably doesn’t know what they are talking about. If there were a one-size-fits-all, easy solution for scaling, we would all be using it, from your brother for his blog, to Google, not to mention Amazon.

Speaking of Amazon, I hear a lot of people saying that Amazon's AWS services are “THE WAY” (i.e. the only way) to scale your applications. I agree that it’s a compelling solution in a lot of cases, but it’s far from being a silver bullet. Remember that the cause and effect of why you need to scale are probably different from anyone else’s.

Amazon Web Services

Let me give you a very concrete example of where AWS services might not be a good idea: high traffic sites with lots of database writes and low latency requirements.

Zynga, the famous social game company behind Farmville, Mafia Wars, etc., is using AWS, and it seems that they might have found themselves in the same scenario as above. And that would be almost correct. Zynga games have huge traffic and they do a ton of DB writes. However, I don’t think they need low latency, since their game clients are browsers and Flash clients and their games are mainly async, so they just need to be able to handle unstable latency. We’ll see in a second how they manage to perform on the AWS cloud.

The major problem with AWS when you have a high traffic site is IO: IO reliability, IO latency, IO availability. By IO, I’m referring to network connections (internal/external) and disk access. Put differently, when you design your system and you know you are going to run on AWS, you need to take into consideration that your solution should survive with zero or limited IO, because you will more than likely be IO bound. This means that your traditional design won’t work, because your database hard drive won’t be available for 30s or will be totally saturated. You also need to have a super redundant system because you are going to randomly lose machines. Point number one: moving your existing application from a dedicated hosting solution to AWS might not help you scale if you didn’t architect it to be resilient to bad IO. Simply put, and to pick only one example: if you were expecting your database to always be able to write properly to disk, you will have problems.

Octocat, the GitHub mascot

The solution depends on how you want to look at it and where you are in your project. You can go the Zynga route and design/redesign your entire architecture to be highly redundant, not rely on disk access (everything is kept in memory and flushed to disk when available) and tolerate a certain % of data loss. Or you can go with the GitHub approach and mix dedicated hardware for IO with “cloud” front end servers, all on the same network. One solution isn’t better than the other; they are just different and depend on your needs. GitHub and Zynga both need to scale, but they have different requirements.

When it comes to scaling, things are not black or white. To stay on the AWS topic, let’s take another example: Amazon Relational Database Service (RDS). Earlier today, I was complaining on Twitter that RDS doesn’t and probably won’t let you use the MySQL HandlerSocket plugin any time soon, even though it’s been released for almost 6 months and used in prod by many. Then someone asked me if using this plugin would offset the scalability cost savings. The quick and wrong answer is yes. By using the plugin, you can potentially get rid of your Memcached servers, probably your Redis/MongoDB/CouchDB servers or whatever NoSQL solution you use, and just keep the database servers you currently have. You might have to beef up your DB servers a bit, but it would certainly be a huge cost reduction, your system would be simpler and easier to maintain, and the data would be more consistent. Sounds good, right? After all, the biggest online social game company designed it and uses it.

The only problem is that RDS is an AWS service and, like every AWS service, it suffers from poor IO. So, if you decided not to use RDS and instead ran your own MySQL servers on AWS with the HandlerSocket plugin, it wouldn’t bring you much improvement (1). Actually, if you are already IO bound, it would make things worse, because you are centralizing your system around the most unreliable part of your architecture. Based on that premise, RDS won’t support HandlerSocket because RDS runs on the same AWS architecture and has to deal with the same IO constraints. What’s the solution, you might ask? Amazon already went through these scaling problems and they offer a custom, non-relational data storage solution working around their own issues called SimpleDB. But why would they improve RDS and fix a really hard problem when they already offer an alternative solution? Easy. SimpleDB forces you to redesign your architecture to work with their custom solution and, guess what? You are now locked in to that vendor!

So the answer is yes, you can offset scalability costs if you don’t use AWS or any other provider with bad IO. Now you should look at the cost of moving away from AWS and see if it’s worth it. How much of your code and of your system is vendor-specific? Is that something you can easily change? The fog library, for instance, supports multiple cloud providers. Are you using something similar? Can you transition to it? Can you easily deploy to another hosting company? (Opscode Chef makes that task much easier.) But if, for one reason or another, you have to stick with AWS/<other cloud provider>, make sure that the business people in charge understand the consequences and the cost related to that choice.


My point is not to tell you not to design a scalable solution, or not to use AWS, or that RDS sucks. My point is to show that making a system scale is hard and has some drastic effects that are not always obvious. There aren’t any silver bullet solutions, and you need to be really careful about the consequences (and costs) involved with trying to scale. Make sure it’s worth it and that you have a plan. Define measurable goals for your scalability even though it’s really hard; don’t try to scale to infinity and beyond, that won’t work. Having to redesign later on to handle even more traffic is a good problem to have; don’t over-engineer.

Finally, be careful to understand the consequences of your decisions. What seems to be an almost trivial scaling move, such as moving your app from dedicated hosting to a specific cloud provider, might end up getting you into a vendor lock-in situation!

1: I assume that you are IO bound. If you are not and your DB data fits in memory/cache, then HS on AWS is fine but if that’s the case what’s your bottleneck? ;)
