I recently referred to an interesting .NET job as a 'non-typical .NET job'. I hadn't used that term before, so I thought that was rather interesting. But what exactly do I mean by 'non-typical .NET job'? It's pretty simple really: a job where you're using .NET technology without blindly following Microsoft's guidelines, recommendations and software for developing on the .NET platform. It basically means that you'll use whatever you think is most appropriate for what you're trying to do.
The biggest problem in the .NET world is that most companies doing .NET development just stick to what Microsoft tells them to use, and how to use it. Many .NET developers focus largely on that, because they know all too well that it increases their odds of getting hired. And let's face it: Microsoft has a solution for practically everything. The only problem is that those solutions are rarely the best at what they're trying to solve. But hey, no manager gets fired for going with Microsoft, right?
The result is that there are too many companies and too many developers that focus only on what Microsoft offers. But there's a lot more to software development than what Microsoft offers, or even knows about. There are countless examples of Microsoft being late to whatever technical party is interesting at the time. And when they show up, they certainly don't always make a good impression.
If you're the kind of developer who likes to learn from what other software development communities are doing, odds are high that you're screwed. There is an interesting OSS community within the .NET world, and it frequently produces great solutions, quite often based on success stories coming from other development communities. The problem is not that .NET developers don't have great solutions available to them. The problem is that the majority of them simply don't know about those solutions, because there hasn't been any Microsoft hype around them. And the developers who do know about them often aren't allowed to use them, because their managers are sceptical, most likely also because there's no Microsoft backing for the technology or architectural style being proposed.
I'm not advocating the avoidance of Microsoft products or solutions. By all means, use Microsoft products if they are indeed the best solution to your problem. But do be aware of the things that are getting attention outside of the Microsoft sphere and use them when it makes more sense to use them. That's the essence of the 'non-typical .NET job' and that's exactly what makes it interesting: using the right tool for the right job.
I've only been using the server that's hosting this blog for a week or two, so I'm still keeping a close eye on it. I check usage graphs (CPU, disk I/O and network) a couple of times a day to verify whether things are still running smoothly. This morning, I saw a noticeable increase in CPU usage and network activity that lasted for about 11 hours. I logged into the machine, checked some logs and found out that someone had conducted an 11-hour brute-force SSH attack. It doesn't make much sense to try that on my server since my SSH daemon doesn't allow password authentication, and indeed there was no successful login during the attack. So no harm done, right?
Even if such an attack isn't successful, it does consume resources on the targeted server(s). And wasteful, unnecessary resource usage has always been a bit of a pet peeve of mine, so I wanted to prevent this from happening again. For this particular scenario, it's pretty easy. I installed DenyHosts, which routinely checks for repeated failed log-in attempts (I configured the threshold at 5) and adds the offending IP addresses to /etc/hosts.deny, so every subsequent SSH connection attempt from those addresses is denied immediately. Each offending IP address is purged from /etc/hosts.deny after a week. Then I added a firewall rule that limits each IP address to 5 new SSH connections per 60 seconds. If you go over 5 connections, it simply starts dropping your packets, and by the time the drop behavior for your IP address expires, you'll already have been added to /etc/hosts.deny. As I said, pretty easy in this scenario because there are great tools I can rely on.
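For reference, that kind of rate limiting can be expressed with iptables' recent module; something along these lines (a sketch, not necessarily the exact rules I used):

```sh
# track new SSH connections per source address
iptables -I INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
# drop the 6th (and subsequent) new connection from the same address within 60 seconds
iptables -I INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 6 --name SSH -j DROP
```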
But what would you do if you had to implement a strategy to deal with this yourself? The most interesting approach I've heard of is to add an incremental delay to each failed authentication attempt. If the user fails the authentication check, delay the response by 1 second. If the user fails a second time, delay the response by 2 seconds. A third failure means a delay of 3 seconds, and so on. This makes a brute-force or dictionary attack impractical, since each guess becomes progressively more expensive. The key, though, is that you can't block any of your request-handling threads while you wait, because then you'd open yourself up to an easy DoS attack.
Implementing this for a web application built on Node.js and Express.js is incredibly easy (there's an ASP.NET MVC example later in this post btw). I took the authorization example of Express.js and made just a few minor changes. First of all, I added the delayAuthenticationResponse function:
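Something along these lines (a minimal sketch; the session property name is an assumption):

```js
function delayAuthenticationResponse(session, callback) {
  // track the number of failed attempts in this user's session
  session.failedAuthenticationAttempts = (session.failedAuthenticationAttempts || 0) + 1;
  // setTimeout schedules the callback without blocking the event loop,
  // so other requests are still being served while the attacker waits
  setTimeout(callback, session.failedAuthenticationAttempts * 1000);
}
```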
This is the most important part of the implementation. Every time we get here, we increment the number of failed attempts for this user by one and store that number in the user's session. Side note: this is one of the few things you'd actually want to use a session for, since the attempt count is truly session-specific state. Then we schedule the callback to be executed after the number of attempts * 1000 milliseconds have passed. The important thing to remember here is that Node's event loop is not blocked by this, so our ability to handle other requests is not impaired in any way. The only one who suffers here is the attacker. Note that in a real-world implementation, you'd probably only want to start increasing the delay after 5 attempts or so, in order to not piss off users who're just having trouble remembering their password.
Then I changed the authenticate function so that it receives a session as the first parameter, and uses our delayAuthenticationResponse function whenever something goes wrong:
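A sketch of that change, assuming the in-memory users store and the hash function from Express.js's auth example:

```js
function authenticate(session, name, pass, fn) {
  var user = users[name];
  if (!user) {
    // unknown user: delay the failure response
    return delayAuthenticationResponse(session, function() {
      fn(new Error('cannot find user'));
    });
  }
  hash(pass, user.salt, function(err, hashedPass) {
    if (err) return fn(err);
    if (hashedPass === user.hash) return fn(null, user); // success: respond immediately
    // wrong password: delay the failure response
    delayAuthenticationResponse(session, function() {
      fn(new Error('invalid password'));
    });
  });
}
```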
After that, it's just a matter of changing the function that is assigned to the login route:
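Roughly like this (a sketch based on the same example app):

```js
app.post('/login', function(req, res) {
  // pass the session along so authenticate can track failed attempts in it
  authenticate(req.session, req.body.username, req.body.password, function(err, user) {
    if (user) {
      req.session.regenerate(function() {
        req.session.user = user;
        res.redirect('/');
      });
    } else {
      req.session.error = 'Authentication failed, please check your username and password.';
      res.redirect('back'); // back to the login form
    }
  });
});
```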
And there we go. This effectively makes it impractical to brute-force your way into this web application, and I'm sure you'll agree it was rather easy to do. Of course, this is only because Node.js is inherently non-blocking. In an environment where non-blocking is the exception rather than the rule, you have to take a few more things into account when implementing this strategy.
For instance, ASP.NET MVC is a typical blocking web framework. There's a certain number of threads that are waiting to handle requests, and once they receive a request, they process that request in its entirety. That means that if your code has to wait on something, the request handling thread is blocked and can't handle any other requests. So obviously, if you'd like to implement this strategy for dealing with repeated failed log-ins, you really want to avoid doing something like this:
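A sketch of that anti-pattern (member names follow the default MVC 'internet application' template; the Thread.Sleep is the part to avoid):

```csharp
[HttpPost]
public ActionResult LogOn(LogOnModel model, string returnUrl)
{
    if (ModelState.IsValid)
    {
        if (MembershipService.ValidateUser(model.UserName, model.Password))
        {
            FormsService.SignIn(model.UserName, model.RememberMe);
            return Redirect(returnUrl ?? "~/");
        }

        // track failed attempts in the session, like in the Node example
        var failedAttempts = ((int?)Session["FailedLogOnAttempts"] ?? 0) + 1;
        Session["FailedLogOnAttempts"] = failedAttempts;

        // BAD: this blocks the request-handling thread for the entire delay
        System.Threading.Thread.Sleep(failedAttempts * 1000);

        ModelState.AddModelError("", "The user name or password provided is incorrect.");
    }
    return View(model);
}
```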
(note: this is a slightly modified LogOn method from the default AccountController when selecting 'internet application' in the MVC project wizard)
While this looks like it does the same as the Node/Express example, it certainly doesn't. The experience for the attacker is the same, because each failed attempt increases the response time by an extra second. But on your server, the thread handling the request is blocked the whole time, and is thus incapable of handling other requests while you're making the attacker wait.
Luckily, you can use ASP.NET MVC's asynchronous controllers to provide an asynchronous implementation of an action without blocking the request handling thread:
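A sketch of how that can look with the Async/Completed action pair (same assumed MembershipService/FormsService helpers as above; the timer-based completion is the essential part):

```csharp
public class AccountController : AsyncController
{
    [HttpPost]
    public void LogOnAsync(LogOnModel model, string returnUrl)
    {
        AsyncManager.OutstandingOperations.Increment();

        if (ModelState.IsValid && MembershipService.ValidateUser(model.UserName, model.Password))
        {
            FormsService.SignIn(model.UserName, model.RememberMe);
            AsyncManager.Parameters["succeeded"] = true;
            AsyncManager.Parameters["returnUrl"] = returnUrl;
            AsyncManager.OutstandingOperations.Decrement();
            return;
        }

        var failedAttempts = ((int?)Session["FailedLogOnAttempts"] ?? 0) + 1;
        Session["FailedLogOnAttempts"] = failedAttempts;
        ModelState.AddModelError("", "The user name or password provided is incorrect.");
        AsyncManager.Parameters["succeeded"] = false;
        AsyncManager.Parameters["returnUrl"] = returnUrl;

        // complete the operation from a timer callback instead of sleeping:
        // no request-handling thread is tied up while the attacker waits
        System.Threading.Timer timer = null;
        timer = new System.Threading.Timer(_ =>
        {
            timer.Dispose();
            AsyncManager.OutstandingOperations.Decrement();
        }, null, failedAttempts * 1000, System.Threading.Timeout.Infinite);
    }

    public ActionResult LogOnCompleted(bool succeeded, string returnUrl)
    {
        return succeeded ? (ActionResult)Redirect(returnUrl ?? "~/") : View("LogOn");
    }
}
```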
Your controller has to inherit from AsyncController instead of Controller to make this work. Of course, it's much more complicated and requires more ceremony compared to the Node/Express approach, but then again, ASP.NET MVC isn't optimized for this kind of usage whereas Node/Express definitely is.
Either way, no matter which web framework you use, if you can add an incremental delay to the response of each failed log-in attempt without blocking a request-handling thread, you've added a very effective and low-cost protection against brute-force and dictionary attacks.
Concepts like Dependency Injection and the use of an Inversion Of Control container have gradually become more popular and accepted in the .NET world over the last 2 years or so. There are, however, still quite a few people who doubt the validity of these concepts. Quite a few of them seem to think that Dependency Injection, for instance, is only useful for increasing the testability of your code. I wanted to go over a real-world example which shows a huge benefit (IMO) that can be attributed solely to thinking about loose coupling, using Dependency Injection and using an Inversion Of Control container.
I recently wrote a post about how you can use an Agatha service layer fully in-process instead of having to host it in a different service through WCF. The only reason why it’s so easy to do that, is because Agatha’s design makes heavy use of loose coupling and dependency injection. And obviously, it makes use of an Inversion Of Control container to manage the dependencies and to wire everything together.
There are basically 3 very important parts to Agatha. The first is the Request Processor. This class basically delegates the handling of each incoming request to its corresponding request handler. The other important parts are the Request Dispatcher (for synchronous client-side usage) and the Asynchronous Request Dispatcher (for asynchronous client-side usage). I'll just refer to these two classes as the request dispatchers for the rest of this post. When the service layer is hosted through a WCF service, the request dispatchers need to communicate with the Request Processor through a proxy (either a synchronous one, or an asynchronous one). It would’ve been very easy to tie the implementations of the request dispatchers directly to the implementation of the WCF proxies. If I had gone that way, I wouldn’t have been able to offer the ability to run the service layer in process though, since the request dispatchers would require the WCF proxies to be used. I could’ve hosted the WCF service in process, but that would really be overkill if you don’t really want to use WCF.
As easy as it would’ve been to tie the request dispatchers directly to the proxies, it was just as easy to make them depend on a slightly more abstract component. There are two interfaces to communicate with the Request Processor:
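They look roughly like this (recalled from Agatha's source, so the exact signatures may differ in detail):

```csharp
public interface IRequestProcessor : IDisposable
{
    Response[] Process(params Request[] requestsToProcess);
}

public interface IAsyncRequestProcessor : IDisposable
{
    void ProcessRequestsAsync(Request[] requestsToProcess,
        Action<ProcessRequestsAsyncCompletedArgs> processedRequestsHandler);
}
```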
Notice that there are no WCF attributes on these interfaces. I have two more WCF-enabled interfaces, namely the IWcfRequestProcessor and the IAsyncWcfRequestProcessor. They define the same methods as their non-WCF counterparts, but have all of the required WCF attributes placed on top of those methods.
Typically, when creating (or generating) a WCF proxy, the proxy will only implement the interface of the service contract (in this case, that would be the IWcfRequestProcessor or the IAsyncWcfRequestProcessor interface). Any class that wants to use these proxies would then need an instance of that proxy to be able to communicate with the WCF service. In Agatha's proxies, I chose a slightly different approach: they implement both the WCF interface as well as the non-WCF interface. Since the method definitions are identical, this doesn't require any extra work. Take a look at the class definitions of those proxies:
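A sketch of the declarations, with the method bodies omitted:

```csharp
public class RequestProcessorProxy : ClientBase<IWcfRequestProcessor>, IRequestProcessor
{
    // forwards IRequestProcessor.Process to the WCF channel
}

public class AsyncRequestProcessorProxy : ClientBase<IAsyncWcfRequestProcessor>, IAsyncRequestProcessor
{
    // forwards the asynchronous operations to the WCF channel
}
```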
As you can see, both proxy classes inherit from WCF’s ClientBase class and pass the WCF-specific interface as the generic type parameter to ClientBase. That means that these proxies can execute the WCF operations defined in the WCF-specific interface. These proxy classes also implement the original interfaces, which means that these proxies can be used by any class which requires an instance of IRequestProcessor and IAsyncRequestProcessor.
In Agatha, there is no class that directly uses either the RequestProcessorProxy or the AsyncRequestProcessorProxy class. Instead, the request dispatchers depend solely on IRequestProcessor or IAsyncRequestProcessor:
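A trimmed-down sketch of what that dependency looks like (the dispatcher class and interface names are recalled from Agatha and may differ slightly):

```csharp
public class RequestDispatcher : IRequestDispatcher
{
    private readonly IRequestProcessor requestProcessor;

    // the dispatcher only sees the abstraction, never a concrete proxy or processor
    public RequestDispatcher(IRequestProcessor requestProcessor)
    {
        this.requestProcessor = requestProcessor;
    }

    // ... dispatching logic elided ...
}

public class AsyncRequestDispatcher : IAsyncRequestDispatcher
{
    private readonly IAsyncRequestProcessor requestProcessor;

    public AsyncRequestDispatcher(IAsyncRequestProcessor requestProcessor)
    {
        this.requestProcessor = requestProcessor;
    }

    // ... dispatching logic elided ...
}
```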
The dispatchers never know exactly who they’re talking to. This means that they can communicate either through a WCF proxy, or the real Request Processor, depending on how your Inversion Of Control container is configured. That is a huge benefit because it gives you a tremendous amount of flexibility when it comes to how you want to use your service layer (in process, in another service, on another machine, whatever) and it really didn’t take any extra effort with regards to development time to achieve that flexibility. If I want to use Agatha with a different communication technology, then I could do so by simply creating implementations of the IRequestProcessor and IAsyncRequestProcessor interfaces which deal with the specifics of whatever that different communication technology might be. I would also have to change the way the implementations are registered with the Inversion Of Control container to make sure that the new implementations are used. But I wouldn’t have to change anything else.
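To make the swap concrete: with Castle Windsor, for example, the difference between remote and in-process usage boils down to which implementation you register (a sketch, not Agatha's actual registration code):

```csharp
// remote usage: the dispatchers get the WCF proxy injected...
container.Register(
    Component.For<IRequestProcessor>().ImplementedBy<RequestProcessorProxy>());

// ...or, for in-process usage, the real RequestProcessor instead
container.Register(
    Component.For<IRequestProcessor>().ImplementedBy<RequestProcessor>());
```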
In this case, dependency injection and loose coupling were not used to increase testability. The increased testability that comes from this approach is merely an added bonus.
We're experimenting at work with a somewhat different approach to how we structure our views and how they interact with each other. We already know that our views will be reused in many different contexts, so having them communicate with other views is something that needs to be done in a very loosely coupled manner. I don't want any of the views to even know about the existence of other views, let alone know about specific instances of them.
But these views do have to interact with each other. I didn't want to use typical events, because that would require either a certain view to know about another view in order to subscribe to its events, or some other component which knows which views need to be hooked up to each other. We really need maximum flexibility for what we have in mind with our views, so it only made sense to finally start using the Event Aggregation pattern. The idea is that a view can publish events without knowing who is subscribed to them, and that subscribers are notified whenever these events occur without knowing anything about who published them. Instead, both publishers and subscribers only know about the Event Aggregator. A publisher tells the aggregator to publish an event to all subscribed listeners for that event. Each subscriber simply tells the aggregator "I'd like to be notified whenever this event occurs, and I don't care where it comes from".
Plenty of implementations of this pattern can be found online already, so I figured: why not add my own? :p
First of all, an event is nothing more than a class that inherits from this class:
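Which, in its simplest form, is just an empty marker class (a sketch; the real base class might carry a bit more):

```csharp
public abstract class Event { }
```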
Every event should inherit from this class, and add whatever necessary properties that are important for that particular type of event.
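For instance, a hypothetical event with a single property could look like this (the name and property are purely illustrative):

```csharp
public class CustomerSelectedEvent : Event
{
    public CustomerSelectedEvent(Guid customerId)
    {
        CustomerId = customerId;
    }

    public Guid CustomerId { get; private set; }
}
```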
If a class is interested in listening to a specific event, it needs to implement the following interface:
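A sketch, reconstructed from how it's used below: a non-generic marker interface plus a generic one per event type:

```csharp
public interface IListener { }

public interface IListener<TEvent> : IListener where TEvent : Event
{
    void Handle(TEvent receivedEvent);
}
```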
If a class is interested in multiple events, it simply needs to implement the generic IListener interface for each type of event that it wants to handle.
Then we obviously need the Event Aggregator. I wanted an aggregator that allowed listeners to either subscribe/unsubscribe to/from very specific events, or just subscribe/unsubscribe to/from whatever it supports. So I have the following IEventAggregator interface:
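Reconstructed from the description that follows, it looks roughly like this:

```csharp
public interface IEventAggregator
{
    // subscribe/unsubscribe for every IListener<TEvent> the listener implements
    void Subscribe(IListener listener);
    void Unsubscribe(IListener listener);

    // subscribe/unsubscribe for one specific event type
    void Subscribe<TEvent>(IListener<TEvent> listener) where TEvent : Event;
    void Unsubscribe<TEvent>(IListener<TEvent> listener) where TEvent : Event;

    void Publish<TEvent>(TEvent eventToPublish) where TEvent : Event;
}
```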
The Subscribe and Unsubscribe methods that simply take an IListener reference will subscribe or unsubscribe the given listener to/from every event that it can handle; in other words, for every generic IListener interface that it implements. But you also have the ability to subscribe to or unsubscribe from a specific event type.
And here's the implementation:
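A sketch along these lines (the actual code isn't reproduced here; IDispatcher is assumed to expose a BeginInvoke(Action) method):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class EventAggregator : IEventAggregator
{
    private readonly IDispatcher dispatcher;
    private readonly Dictionary<Type, List<IListener>> listeners =
        new Dictionary<Type, List<IListener>>();

    public EventAggregator(IDispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    public void Subscribe(IListener listener)
    {
        // subscribe the listener to every event type it can handle
        foreach (var eventType in GetHandledEventTypes(listener))
            AddListener(eventType, listener);
    }

    public void Unsubscribe(IListener listener)
    {
        foreach (var eventType in GetHandledEventTypes(listener))
            RemoveListener(eventType, listener);
    }

    public void Subscribe<TEvent>(IListener<TEvent> listener) where TEvent : Event
    {
        AddListener(typeof(TEvent), listener);
    }

    public void Unsubscribe<TEvent>(IListener<TEvent> listener) where TEvent : Event
    {
        RemoveListener(typeof(TEvent), listener);
    }

    public void Publish<TEvent>(TEvent eventToPublish) where TEvent : Event
    {
        List<IListener> subscribers;
        lock (listeners)
        {
            if (!listeners.TryGetValue(typeof(TEvent), out subscribers)) return;
            subscribers = new List<IListener>(subscribers); // copy: handlers may (un)subscribe
        }

        foreach (var listener in subscribers)
        {
            var typedListener = (IListener<TEvent>)listener;
            // marshal the Handle call to the UI thread through the wrapped dispatcher
            dispatcher.BeginInvoke(() => typedListener.Handle(eventToPublish));
        }
    }

    private void AddListener(Type eventType, IListener listener)
    {
        lock (listeners)
        {
            List<IListener> subscribers;
            if (!listeners.TryGetValue(eventType, out subscribers))
                listeners[eventType] = subscribers = new List<IListener>();
            if (!subscribers.Contains(listener)) subscribers.Add(listener);
        }
    }

    private void RemoveListener(Type eventType, IListener listener)
    {
        lock (listeners)
        {
            List<IListener> subscribers;
            if (listeners.TryGetValue(eventType, out subscribers))
                subscribers.Remove(listener);
        }
    }

    private static IEnumerable<Type> GetHandledEventTypes(IListener listener)
    {
        // every IListener<TEvent> interface the listener implements
        return listener.GetType().GetInterfaces()
            .Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IListener<>))
            .Select(i => i.GetGenericArguments()[0]);
    }
}
```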
In case you're wondering... the IDispatcher interface is merely a way to wrap Silverlight's real Dispatcher. We wrap it so we can use a different implementation of the IDispatcher in our automated tests. Other than that, the implementation is very straightforward.
We started using this very recently, so the implementation might change in the upcoming weeks, but for now it does what it needs to do and it does so pretty well. In our case, every view's Presenter automatically has an IEventAggregator property, so whenever we need to publish an event, we can simply do something like EventAggregator.Publish(new SomeEvent(someParameter)). Presenters that need to listen to events can simply call EventAggregator.Subscribe(this), or only subscribe to specific events when they need to, and their specific Handle method will be called whenever someone publishes the event. This also allowed us to get rid of the somewhat awkward syntax for testing events (subscription, unsubscription and the actual handling of events) with mocking frameworks.
And as a last bonus, I put a call to the Unsubscribe(IListener) method in the Dispose method of our base Presenter implementation, which means that no Presenter will be left subscribed to events by accident anymore :)
If you have a web application which communicates with a remote service, it's important to protect that web application from any problems the remote service might be dealing with. For instance, if the remote service goes down (for whatever reason), you really don't want your application to keep making calls to it. These failing calls increase the load on a service that is already having problems, and they also block your threads, which takes resources away from handling your application's other requests. One pattern which is very well suited to this situation is the Circuit Breaker (read up on it unless you're already familiar with the pattern).
The biggest issue I have with my previous implementation is that it required you to invoke the circuit breaker manually around every potentially risky call. I don't like having to call my circuit breaker whenever I want to make a service call, because as a consumer of a service proxy, I shouldn't even know the circuit breaker exists. I also don't want any coupling between my service proxy and the actual circuit breaker. Sounds like a good candidate for some AOP magic, right?
We're going to use Castle Windsor's Interceptors to make this work. First, the implementation of the CircuitBreaker class:
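A sketch of such a class (the original isn't reproduced here; the threshold/timeout parameter names are assumptions):

```csharp
using System;
using Castle.DynamicProxy; // Windsor's interceptors implement this IInterceptor

public class CircuitBreaker : IInterceptor
{
    private readonly object syncRoot = new object();
    private readonly int threshold;          // failures before the circuit opens
    private readonly TimeSpan openTimeout;   // how long the circuit stays open
    private int failureCount;
    private DateTime lastFailure;

    public CircuitBreaker(int threshold, TimeSpan openTimeout)
    {
        this.threshold = threshold;
        this.openTimeout = openTimeout;
    }

    public void Intercept(IInvocation invocation)
    {
        lock (syncRoot)
        {
            // open circuit: fail fast instead of calling the struggling service
            if (failureCount >= threshold && DateTime.UtcNow - lastFailure < openTimeout)
                throw new OpenCircuitException();
        }

        try
        {
            // proceed with the original (intercepted) method call
            invocation.Proceed();
            lock (syncRoot) failureCount = 0; // success closes the circuit again
        }
        catch (Exception)
        {
            lock (syncRoot)
            {
                failureCount++;
                lastFailure = DateTime.UtcNow;
            }
            throw;
        }
    }
}

public class OpenCircuitException : Exception { }
```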
Notice how the CircuitBreaker class implements Windsor's IInterceptor interface. The Intercept method will be called by Windsor whenever we try to call a method from a protected component. Within the Intercept method we can add the necessary logic to apply the Circuit Breaker pattern to the code that was originally called.
Now we just need to configure the Windsor IOC container to apply this bit of AOP magic for us.
First, we register the CircuitBreaker with the container:
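Something along these lines, using Windsor's fluent registration API (the parameter values are just examples):

```csharp
// requires Castle.MicroKernel.Registration
container.Register(
    Component.For<CircuitBreaker>()
        .Named("circuitBreaker")
        .LifeStyle.Singleton
        .DependsOn(new
        {
            threshold = 5,                        // open after 5 consecutive failures
            openTimeout = TimeSpan.FromMinutes(1) // stay open for one minute
        }));
```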
Notice that we register the CircuitBreaker implementation with a Singleton lifestyle, a custom name and the required constructor parameters to create an instance of the CircuitBreaker.
Then we register our service proxy:
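For example (IMyServiceProxy and MyServiceProxy are placeholders for your own service proxy):

```csharp
container.Register(
    Component.For<IMyServiceProxy>()
        .ImplementedBy<MyServiceProxy>()
        .LifeStyle.Transient
        .Interceptors(InterceptorReference.ForKey("circuitBreaker")).Anywhere);
```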
Notice how we registered the service proxy as a transient component, while referencing the singleton CircuitBreaker interceptor. This means that each resolved instance of our service proxy will be protected by the same CircuitBreaker instance. If you have multiple services that you want to protect, simply register multiple CircuitBreakers with different keys and link each service you want to protect with the correct CircuitBreaker key.