This is largely common sense already, but I still frequently run into people who don't know how dangerous this is or how to properly store user credentials. The many Anonymous hacks in the past year that resulted in the leaking of users' passwords also show that many sites still store passwords in either clear-text or encrypted form. It's actually quite simple to store credentials safely, so here's a quick recap and example.
The biggest issue with storing passwords is that you always have to assume that someone can get access to your database. Yes, even if it's not directly exposed to the outside world, which it never should be. Whatever security measures you've put in place to protect your database, it's a good idea to assume that sooner or later, someone will punch a hole through them and read the data. So obviously, you really don't want to store clear-text passwords. You also don't want to store encrypted passwords, because encrypted data can always be decrypted. And if people get access to those encrypted passwords when they weren't supposed to, it's wise to assume that they also know how to decrypt them, or that it won't take them long to figure it out.
A much better approach is to store a hashed representation of the password instead, using a strong one-way cryptographic algorithm and a unique salt value per password. If the cryptographic algorithm is one-way, it means you can't apply another algorithm to get the original source value again. The only way to compare passwords is to apply the cryptographic algorithm on a given password using the originally used salt value, and then compare the resulting hash with the one you've stored. If they are identical, the given password is the same as the one that was used originally. If they differ, the password is invalid.
Attackers can still employ rainbow tables to try to find password values that generate the same hashes as the ones in your database. Luckily, generating rainbow tables takes time and plenty of space as well, which makes it much harder for attackers to find the passwords. This is why it's so important to use a unique salt value per password: it effectively means that a rainbow table would have to be generated for every single salt value you've used, making it practically infeasible to find the original password values.
Let's demonstrate this with a simple example. The example is from a Node.js application, but this technique can be applied with whatever technology stack you're using.
This is my User model:
Notice that the salt property of my User type has its default value set to 'uuid.v1'. In this case, uuid.v1 is a function which will be invoked by Mongoose whenever a new User instance is created. Every User instance will thus have a UUID value stored in its salt property. You can also see that I'm not storing the given passwordString in the setPassword function, but that I calculate the hash value based on the passwordString and the UUID salt value.
NOTE: the code above uses SHA-256 to create the hash. These days, a better alternative is to use bcrypt, which is specifically designed to be slow, making brute-forcing a much more expensive and impractical operation.
Suppose I create a user with the following code:
Its database representation will look like this:
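The original dump is gone, but based on the model described above, the stored document would look roughly like this (all values are placeholders):

```
{
  "_id": "<ObjectId>",
  "email": "some.user@example.com",
  "salt": "<the UUID that was generated for this user>",
  "password": "<64-character SHA-256 hex digest of the password concatenated with the salt>"
}
```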
If an attacker got access to this, he'd have to generate a rainbow table using the salt value, which takes time, and even then there's no guarantee that the rainbow table will actually contain the correct password. Again, this is why it's so important to use a unique salt for every password. You can also use whatever value you want as the salt, so if you can derive it from some other fields or by using a specific formula, you don't need to store the actual salt value. It's recommended to use a long salt value though. Theoretically speaking, it's safer if the salt value isn't stored as openly as I'm doing here, but even with the salt value clearly visible to a potential attacker, it would still be practically infeasible for him to generate all those rainbow tables.
And of course, my actual authentication function is still very simple as well:
So as you can see, there's nothing hard or complicated about storing credentials in a secure manner. It's quite easy to do, and there's really no downside to doing it.
Written by Davy Brion, published on 1/23/2012 9:53:51 PM
I've only been using the server that's hosting this blog for a week or two, so I'm still keeping a close eye on it. I check usage graphs (CPU, disk I/O and network) a couple of times a day to verify that things are still running smoothly. This morning, I saw a noticeable increase in CPU usage and network activity that lasted for about 11 hours. I logged into the machine, checked some logs and found out that someone had conducted an 11-hour brute-force SSH attack. It doesn't make much sense to try that on my server since my SSH daemon doesn't allow password authentication, and indeed there was no successful login during the attack. So no harm done, right?
Even if such an attack is not successful, it does consume resources on the targeted server(s). And wasteful, unnecessary resource usage has always been a bit of a pet peeve of mine, so I wanted to prevent this from happening again. For this particular scenario, it's pretty easy. I installed DenyHosts, which routinely checks for repeated failed log-in attempts (I configured the threshold at 5) and adds the offending IP addresses to /etc/hosts.deny, so every subsequent SSH connection attempt from those addresses is denied immediately. Each offending IP address is purged from /etc/hosts.deny after 1 week. Then I added a firewall rule that prevents you from connecting over SSH more than 5 times in 60 seconds. If you go over 5 connections, it simply starts dropping packets, and by the time the drop behavior for your IP address expires, you'll already have been added to /etc/hosts.deny. As I said, pretty easy in this scenario, because there are great tools I can rely on.
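The firewall side can be done in several ways; with iptables, a pair of rules along these lines (using the recent module — my actual rule may have differed in the details, and chain placement depends on your setup) gives you the drop-after-5-connections-per-minute behavior:

```shell
# track every new SSH connection per source address
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name ssh --set
# drop the 6th (and any later) new connection from the same address within 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name ssh --update --seconds 60 --hitcount 6 -j DROP
```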
But what would you do if you had to implement a strategy to deal with this yourself? The most interesting approach I've heard of is to add an incremental delay on each failed authentication attempt. If the user fails the authentication check, delay the response by 1 second. If the user fails a second time, delay the response by 2 seconds. A third failure means a delay of 3 seconds, and so on. This pretty much makes a brute-force or dictionary attack impossible. The key, though, is that you can't block any of your request-handling threads, because then you open yourself up to an easy DoS attack.
Implementing this for a web application built on Node.js and Express.js is incredibly easy (there's an ASP.NET MVC example later in this post btw). I took the authorization example of Express.js and made just a few minor changes. First of all, I added the delayAuthenticationResponse function:
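The listing didn't survive, but per the description below it boiled down to something like this (the session property name is my assumption):

```javascript
// bump the per-session failure counter and schedule the response callback
// attempts * 1000 ms in the future; setTimeout never blocks the event loop
function delayAuthenticationResponse(session, callback) {
  session.authenticationAttempts = (session.authenticationAttempts || 0) + 1;
  setTimeout(callback, session.authenticationAttempts * 1000);
}
```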
This is the most important part of the implementation. Every time we get here, we increment the number of attempts for this user by one and store the number in the user's session. Side note: this is one of the few things you'd actually want to use a session for: session-related data. Then we schedule the callback to be executed after the number of attempts * 1000 milliseconds have passed. The important part to remember here is that Node's event loop is not blocked by this, so our ability to handle other requests is not impaired in any way. The only one who suffers here is the attacker. Note that in a real world implementation, you'd probably only want to start increasing the delay after 5 attempts or so, in order to not piss off users who're just having problems remembering their password.
Then I changed the authenticate function so that it receives a session as the first parameter, and uses our delayAuthenticationResponse function whenever something goes wrong:
After that, it's just a matter of changing the function that is assigned to the login route:
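The route code was lost as well; the handler essentially just passes the session along to the authenticate function and redirects based on the outcome. In this sketch the route path, form field names and redirect targets are assumptions, and a trivial authenticate stand-in is included so it runs on its own:

```javascript
// trivial stand-in for the session-aware authenticate function
function authenticate(session, name, pass, fn) {
  if (name === 'tj' && pass === 'foobar') return fn(null, { name: name });
  session.authenticationAttempts = (session.authenticationAttempts || 0) + 1;
  setTimeout(function () { fn(new Error('authentication failed')); },
             session.authenticationAttempts * 1000);
}

// Express-style login handler: hand the session to authenticate and
// redirect once the (possibly delayed) result comes back
function login(req, res) {
  authenticate(req.session, req.body.username, req.body.password,
    function (err, user) {
      if (err) return res.redirect('/login'); // failed, after the delay
      req.session.user = user;
      res.redirect('/restricted');
    });
}

// wiring it up in Express would then be: app.post('/login', login);
```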
And there we go. This effectively makes it impossible to brute-force your way into this web application, and I'm sure you'll agree it was rather easy to do. Of course, this is only because Node.js is inherently non-blocking. In an environment where non-blocking is the exception rather than the rule, you have to take a few more things into account when trying to implement this strategy.
For instance, ASP.NET MVC is a typical blocking web framework. There's a certain number of threads that are waiting to handle requests, and once they receive a request, they process that request in its entirety. That means that if your code has to wait on something, the request handling thread is blocked and can't handle any other requests. So obviously, if you'd like to implement this strategy for dealing with repeated failed log-ins, you really want to avoid doing something like this:
(note: this is a slightly modified LogOn method from the default AccountController when selecting 'internet application' in the MVC project wizard)
While this looks like it does the same as the Node/Express example, it certainly doesn't. The experience for the attacker is the same, because each failed attempt increases the response time by an extra second. But on your server, the thread handling the request is blocked the whole time and is thus incapable of handling other requests while you're making the attacker wait.
Luckily, you can use ASP.NET MVC's asynchronous controllers to provide an asynchronous implementation of an action without blocking the request handling thread:
Your controller has to inherit from AsyncController instead of Controller to make this work. Of course, it's much more complicated and requires more ceremony compared to the Node/Express approach, but then again, ASP.NET MVC isn't optimized for this kind of usage whereas Node/Express definitely is.
Either way, no matter what web framework you use, if you can add an incremental delay to the response of each failed log-in attempt without blocking a request-handling thread, you've added a very effective and low-cost protection against brute-force and dictionary attacks.