I've only been using the server that's hosting this blog for a week or two, so I'm still keeping a close eye on it. I check usage graphs (CPU, disk I/O and network) a couple of times a day to verify that things are still running smoothly. This morning, I saw a noticeable increase in CPU usage and network activity that lasted for about 11 hours. I logged into the machine, checked some logs and found out that someone had conducted an 11-hour brute-force SSH attack. It doesn't make much sense to try that on my server since my SSH daemon doesn't allow password authentication, and indeed there was no successful login during the attack. So no harm done, right?
Even if such an attack is not successful, it does consume resources on the targeted server(s). And wasteful, unnecessary resource usage has always been a bit of a pet peeve of mine, so I wanted to prevent this from happening again. For this particular scenario, it's pretty easy. I installed DenyHosts, which routinely checks for repeated failed log-in attempts (I configured the threshold at 5) and adds the offending IP addresses to /etc/hosts.deny, so every subsequent SSH connection attempt from those IP addresses will be denied immediately. Each offending IP address is purged from /etc/hosts.deny after 1 week. Then I added a firewall rule that prevents you from opening more than 5 SSH connections in 60 seconds. If you go over 5 connections, the firewall just starts dropping your packets, and by the time the drop behavior for your IP address expires, you'll have been added to /etc/hosts.deny already. As I said, pretty easy in this scenario because there are great tools I can rely on.
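For what it's worth, that kind of firewall rule can be sketched with iptables' "recent" module. The post doesn't show the actual rule, so the port, list name and thresholds below are assumptions to adapt to your own setup:

```shell
# Hypothetical sketch of the rate-limiting rule described above, using
# iptables' "recent" module. First, record every new SSH connection
# attempt per source IP in a list named SSH...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name SSH --set
# ...then drop packets from any IP that opened more than 5 new
# connections within the last 60 seconds.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name SSH --update --seconds 60 --hitcount 6 -j DROP
```

Tools like DenyHosts then take over for the longer ban, as described above.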
But what would you do if you had to implement a strategy to deal with this yourself? The most interesting approach I've heard of is to add an incremental delay to each failed authentication attempt. If the user fails the authentication check, delay the response by 1 second. If the user fails a second time, delay the response by 2 seconds. A third failure means a delay of 3 seconds, and so on. This pretty much makes a brute-force or dictionary attack impractical. The key, though, is that you can't block any of your request-handling threads, because then you open yourself up to an easy DoS attack.
Implementing this for a web application built on Node.js and Express.js is incredibly easy (there's an ASP.NET MVC example later in this post btw). I took the authorization example of Express.js and made just a few minor changes. First of all, I added the delayAuthenticationResponse function:
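The original snippet didn't survive here, so this is a hedged sketch of what a delayAuthenticationResponse along those lines could look like; only the function name comes from the post, and the session field name failedAttempts is my assumption:

```javascript
// Illustrative sketch: count failed attempts in the user's session and
// schedule the response callback without blocking the event loop.
function delayAuthenticationResponse(session, callback) {
  // one more failed attempt for this user (field name assumed)
  session.failedAttempts = (session.failedAttempts || 0) + 1;
  // setTimeout just schedules the callback on the event loop; nothing
  // blocks, so other requests are handled normally while this one waits
  setTimeout(callback, session.failedAttempts * 1000);
}
```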
This is the most important part of the implementation. Every time we get here, we increment the number of attempts for this user by one and store the number in the user's session. Side note: this is one of the few things you'd actually want to use a session for: session-related data. Then we schedule the callback to be executed after the number of attempts * 1000 milliseconds has passed. The important thing to remember here is that Node's event loop is not blocked by this, so our ability to handle other requests is not impaired in any way. The only one who suffers here is the attacker. Note that in a real-world implementation, you'd probably only want to start increasing the delay after 5 attempts or so, in order not to piss off users who are just having trouble remembering their password.
Then I changed the authenticate function so that it receives a session as the first parameter, and uses our delayAuthenticationResponse function whenever something goes wrong:
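Since the snippet is missing here, this is an illustrative sketch of such an authenticate function. The in-memory user store, the plain-text password check and the delay helper are stand-ins of my own, not the post's actual code:

```javascript
// Delay helper as described in the post: failure responses are postponed
// by attempts * 1000 ms via setTimeout, which never blocks the event loop.
function delayAuthenticationResponse(session, callback) {
  session.failedAttempts = (session.failedAttempts || 0) + 1;
  setTimeout(callback, session.failedAttempts * 1000);
}

// Stand-in user store for illustration only.
var users = { davy: { name: 'davy', password: 'secret' } };

// The reworked authenticate function: it now takes the session as its
// first parameter and funnels every failure through the incremental delay.
function authenticate(session, name, password, callback) {
  var user = users[name];
  if (!user || user.password !== password) {
    // unknown user or bad password: answer only after the growing delay
    return delayAuthenticationResponse(session, function () {
      callback(new Error('invalid credentials'));
    });
  }
  callback(null, user); // success is answered immediately
}
```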
After that, it's just a matter of changing the function that is assigned to the login route:
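The route handler itself is also missing from this copy, so here's a self-contained sketch under the same assumptions as before (stand-in user store, assumed field names); the Express wiring would be something like app.post('/login', login):

```javascript
// Same illustrative helpers as before, so this sketch runs on its own.
function delayAuthenticationResponse(session, callback) {
  session.failedAttempts = (session.failedAttempts || 0) + 1;
  setTimeout(callback, session.failedAttempts * 1000);
}

var users = { davy: { name: 'davy', password: 'secret' } };

function authenticate(session, name, password, callback) {
  var user = users[name];
  if (!user || user.password !== password) {
    return delayAuthenticationResponse(session, function () {
      callback(new Error('invalid credentials'));
    });
  }
  callback(null, user);
}

// The login route handler: req.session is handed to authenticate, so the
// delay bookkeeping lives in the user's session.
function login(req, res) {
  authenticate(req.session, req.body.username, req.body.password,
    function (err, user) {
      if (err || !user) {
        // by the time we get here, the response has already been delayed
        return res.redirect('/login');
      }
      // a real app would regenerate the session before storing the user
      req.session.user = user;
      res.redirect('/restricted');
    });
}
```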
And there we go. This effectively makes it impossible to brute-force your way into this web application, and I'm sure you'll agree it was rather easy to do. Of course, this is only because Node.js is inherently non-blocking. In an environment where non-blocking is the exception rather than the rule, you have to take a few more things into account when trying to implement this strategy.
For instance, ASP.NET MVC is a typical blocking web framework. There's a certain number of threads that are waiting to handle requests, and once they receive a request, they process that request in its entirety. That means that if your code has to wait on something, the request handling thread is blocked and can't handle any other requests. So obviously, if you'd like to implement this strategy for dealing with repeated failed log-ins, you really want to avoid doing something like this:
(note: this is a slightly modified LogOn method from the default AccountController when selecting 'internet application' in the MVC project wizard)
While this looks like it does the same as the Node/Express example, it certainly doesn't. The experience for the attacker is the same, because each failed attempt increases the response time by an extra second. But on your server, the thread handling the request is blocked the whole time and is thus incapable of handling other requests while you're making the attacker wait.
Luckily, you can use ASP.NET MVC's asynchronous controllers to provide an asynchronous implementation of an action without blocking the request handling thread:
Your controller has to inherit from AsyncController instead of Controller to make this work. Of course, it's much more complicated and requires more ceremony compared to the Node/Express approach, but then again, ASP.NET MVC isn't optimized for this kind of usage whereas Node/Express definitely is.
Either way, no matter what web framework you use, if you can add an incremental delay to the response of each failed log-in attempt without blocking a request-handling-thread, you've added a very effective and low-cost protection against brute-force and dictionary attacks.
When you're pushing out localized content to your users, you don't want to mess up any possible output caching you've got set up (or would want to set up later on). One common approach to deal with this is to always include the relevant language code as a route parameter in your URLs. It works great with output caching because each localized version of the content will be accessible through its own URL, and as an extra benefit, your content is indexable by search engines in every language you support as well.
The only downside to that approach is that you absolutely have to make sure that the correct language code is always included in each URL you put on your pages. That's tedious work at best, error-prone at worst. Ideally, each URL that you generate on your pages automatically has the current language code included in it. And obviously, you want to be able to provide it explicitly as well (for language selection links for example). It took me a while to figure out how this can be done with ASP.NET MVC but I did manage to find a pretty nice solution.
I was browsing the MVC source code (see how useful this whole open source thing is?) to look for some kind of hook I could use to influence how URLs are generated when you use Url.Action or Html.ActionLink in your views. And it turns out that there is one, though it's not really an obvious one. Whenever you use Url.Action or Html.ActionLink, ASP.NET MVC calls the GetVirtualPath method for each defined Route object and the first returned VirtualPathData instance is the one that will provide the final URL that is rendered in your links. So we first need to come up with our own custom Route class:
Now we have to make sure that we define a route of this type before our normal routes are defined:
This ensures that our AutoLocalizingRoute instance will get a chance to provide a VirtualPathData instance whenever an action-URL is needed, before the standard Route instance is called to create one.
Now, all we need is something that sets the current thread's Culture and UICulture properties based on the language code in the URL of each request. I did this with an HttpModule, of which only this part is relevant here:
And that's it. Every time we generate a URL to a Controller Action, the current language code will be included automagically, so there's no chance of us forgetting it somewhere.
Written by Davy Brion, published on 4/6/2011 9:24:29 PM
I recently got convention-based localization of display labels working with ASP.NET MVC, and this week, I wanted to get something similar working for required field validation messages. ASP.NET MVC3 shows a default validation message when a required field is not filled in, unless you specify a resource provider and the name of the resource key when you put the Required attribute on a property. Just like with the display values of labels, I wanted a convention-based approach for this. I wanted ASP.NET MVC to look for a resource key following the NameOfModelClass_NameOfProperty_Required convention.
After some googling and browsing the MVC3 source code, I couldn't really find the hook I needed to make this happen, so I let it rest for a few days and got back to it later on. I had more luck the second time around and found the hook I needed. ASP.NET MVC uses the RequiredAttributeAdapter class to retrieve a ModelClientValidationRequiredRule which by default is initialized with the default error message. The trick was just to inherit from this class and return a ModelClientValidationRequiredRule with your own message, and then register that class with the DataAnnotationsModelValidatorProvider. This is the new subclass of the RequiredAttributeAdapter class:
And this is how you tell ASP.NET MVC to use it:
And that's it... with this approach you have full control over how the message is formatted.
Written by Davy Brion, published on 4/3/2011 7:45:01 PM
A feature of ASP.NET MVC that I really like is that when you use the LabelFor extension method in a strongly-typed view, the LabelFor implementation will try to retrieve and use metadata for the property you're creating a label for. For instance:
This will generate an HTML label for the SomeProperty property of your model. If you need localized views, you can annotate the property in your model like this:
And the label will be generated with a localized value from your application's resources depending on the culture of the current user. Which is great, but putting those Display attributes on each property gets quite tedious. It would be better if the localized value was automatically retrieved based on a convention. Something like NameOfModelClass_NameOfProperty.
It turns out that ASP.NET MVC uses a DataAnnotationsModelMetadataProvider by default to retrieve this metadata, and that you can provide a different implementation to be used by the framework. We still want to take advantage of those DataAnnotations, but we just want to add some convention-based default behavior to it as well. So we inherited from DataAnnotationsModelMetadataProvider and came up with something pretty simple like this:
We first call the base implementation, which will get the values from annotations if they're present. If no DisplayName value was set based on annotations, we check whether a value is present in our resources based on the convention and, if so, add it to the metadata before we return it. Then we instruct ASP.NET MVC to use this provider instead of the default:
And now, every label will be localized automatically if a translation is present in the resources with the expected key. Not sure if this is the best way to do this (better suggestions are welcome!), but it's certainly a big step up from having to annotate each property.
Written by Davy Brion, published on 3/19/2011 11:41:15 AM
I recently had to research which UI technology would be the best choice for the applications that my client is going to build in the next couple of years. This is a .NET shop, so there are 2 major directions you could move into: standards-based web development, or Silverlight. When you have to recommend one over the other, you ideally want to be able to back up your choice with more than just some opinions. So we made a list of candidates and did a POC for each one. Then we came up with a list of criteria, grouped in a bunch of categories. The criteria were all assigned a weight, and we scored each of them for all candidates.
In this post, I want to go over the categories of criteria and discuss our findings. I'm also going to share the spreadsheet so you can go through the numbers yourself. Depending on your needs or your opinions, you can change the weights and the scores and see how that affects the outcome. I removed some of the criteria that were specific to my client, but that didn't have a significant impact on the outcome. For this post, I also limited the candidates to ASP.NET MVC 3 in combination with the jQuery family (jQuery Core, jQuery UI and jQuery Mobile) and Silverlight.
Here's a quick listing of the categories and some of their criteria (for the actual list, check the spreadsheet... the link is at the end of the post):
- User experience (compelling UI, accessibility, intuitive/ease-of-use, accessible from multiple devices, accessible from multiple platforms)
- Infrastructure (easy/flexible deployment, monitorability)
- Security (safe from XSS, CSRF)
- Performance (server footprint, client-side resource usage, asynchronicity, UI responsiveness, initial load times)
- Code/Architecture (maturity, reusability of validation logic, simplicity, maintainability, flexibility, power, testability, i18n, feedback cycle, learning curve, potential efficiency, rapid application prototyping, readable URLs, extensibility)
- People (limits the number of required skills, mindshare, documentation, community support, commercial support)
- Strategic (future-proof, standards-compliant, differentiator, backing, vision)
- License (do we have access to the code?)
- Tools (IDE support, availability of extra tools, free 3rd party component availability, commercial 3rd party component availability)
Depending on what you or your organization requires, some of these might not apply to you. Perhaps there are other criteria that you find important and that we missed. Nevertheless, I think this is a pretty comprehensive list which covers most of the factors that you need to think about when making this kind of decision.
This graph visualizes how both technologies scored, grouped by category:
I'm sure there are quite a few things about that image that surprise you. The first thing you might be thinking is "how can Silverlight score so badly when it comes to User Experience?". The answer to that is quite simple: if your users aren't using a desktop/laptop with Windows or OS X on it, there is no experience to be had at all. Users that require assistive technology are out of luck as well, since accessibility support in Silverlight is still very poor. If you take those factors into account, it really doesn't matter much that you can easily make Silverlight applications incredibly flashy (pardon the pun). Besides, most people get bored and annoyed with excessive animations rather quickly, so you're often better off not overdoing it. With that in mind, jQuery UI and HTML5 will easily meet your needs for that kind of stuff.
Another area where Silverlight scores very poorly is the strategic department. The fact that it's not standards-compliant obviously hurts a lot here, but there's more to it than that. First of all, the mobile story (again) pretty much kills it. Android and iOS don't support it. We already know it's never going to work on iOS, and as long as it doesn't work on iOS, Android has no reason whatsoever to provide support, because Silverlight simply isn't important in the grand scheme of things to any of the important players. Microsoft hasn't even announced a Silverlight browser plugin for WP7 yet, and who knows if it ever will? That means that Silverlight web applications aren't usable on any mobile device right now, except for slates running a full Windows OS, which looks like it's only a tiny portion of the market. Secondly, despite its original tagline of "Lighting up the web", it appears that Microsoft only has about 3 scenarios in mind where it still actively pushes Silverlight: internal business applications, video streaming and native WP7 development. While internal business applications are certainly a large part of what we're going to do in the next couple of years, we're also going to build things that are available publicly and to a large variety of people. Going with Silverlight for the internal applications and HTML(5) for the public-facing applications wouldn't be very cost-efficient either, since that means we'd have to train our developers for both. And it wouldn't make much sense anyway, since HTML(5) is a great fit for internal business apps as well.
So we have 3 categories where Silverlight scores better than ASP.NET MVC3/jQuery, but that's far from sufficient to close the gap. Based on the weights we assigned to the criteria, the maximum possible score is 732. ASP.NET MVC3 with jQuery scored 568. Silverlight scored 304. Obviously, the results will vary depending on what you find important, which is why we asked an analyst from one of those large IT research & advisory companies to give us some feedback on this. The analyst agreed entirely with our findings and our data, and confirmed that his company is recommending moving towards HTML5 to all of their customers. He even went as far as to say that Silverlight is hard to recommend unless you're not targeting any mobile users, the applications are internal-only and you've already invested in the technology. I can't provide a link for any of this yet, but a paper about this will be published soon, so I'll either link to it when it's out (if it's publicly available) or at least reference it.
I encourage anyone who is faced with the same decision to use the spreadsheet and modify it to your needs (adding more criteria, changing weights and/or scores, whatever) to see which one is the best fit for your situation. You can download the spreadsheet here.