If you're new to Ruby and you're coming from a static language like C# or Java, you'll probably wonder why there isn't much interest in Dependency Injection in the Ruby community. The answer is quite simple: you don't need it. Now, that's not to say that Dependency Injection isn't a valuable technique in your toolbox. In fact, if you're doing C# or Java I'd even go as far as saying it's absolutely necessary to use Dependency Injection in most of your code. Two of the biggest reasons (I know there are more, but let's focus on these for now) why Dependency Injection is important if you're using a static language are these:
- Highly increased testability because you can control the dependencies during automated tests
- Lowered coupling between classes which enables you to change implementations of dependencies at will (granted, not a lot of people actually do that often but it certainly is a real benefit)
In Ruby however, you don't really need dependency injection to achieve the 2 benefits mentioned above, as I hope the following contrived example shows. Suppose we have the following 2 classes.
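The original listing didn't survive here; what follows is a minimal sketch of what it presumably looked like, with the class and method names taken from the surrounding text (the body of do_something_with is just a stand-in):

```ruby
class Dependency
  def do_something_with(values)
    values.map(&:to_s) # stand-in for whatever the real dependency does
  end
end

class SomeClass
  def work_your_magic_on(values)
    dependency = Dependency.new # direct instantiation of the dependency
    dependency.do_something_with(values)
  end
end
```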
This is no good. The work_your_magic_on method of SomeClass directly instantiates a new instance of the Dependency class. During automated tests, we could actually replace the implementation of the new method of the Dependency class to return a stub or a mock instead of an instance of the real thing. But we could never easily change the implementation of the dependency that SomeClass requires to function properly in real production code without screwing up everything else that also happens to depend on the Dependency class.
If you're coming from a static language, you'd probably be inclined to change SomeClass to this:
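The missing listing presumably looked something like this sketch, with the dependency injected through the constructor:

```ruby
class Dependency
  def do_something_with(values)
    values.map(&:to_s) # stand-in for whatever the real dependency does
  end
end

class SomeClass
  # constructor injection: the caller supplies the dependency
  def initialize(dependency)
    @dependency = dependency
  end

  def work_your_magic_on(values)
    @dependency.do_something_with(values)
  end
end
```

Any object that responds to do_something_with can now be passed in, including stubs and mocks.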
Ahh, that's much better. The dependency is now injected through SomeClass' initializer method and we can very easily achieve the 2 benefits mentioned above by passing whatever we want to each instance of SomeClass, as long as it has a do_something_with method. The biggest downside however is that every consumer of SomeClass instances now needs to know about the dependencies that it requires to function properly. This quickly becomes very painful because using Dependency Injection throughout your codebase will very quickly lead to having to satisfy the dependencies of the dependencies of the dependencies of the dependencies of the class you actually need to use. Before long, that requires a good Inversion Of Control Container to handle all of these dependencies for you. There's just one problem: there doesn't seem to be a widely used IOC Container in Ruby. Which in itself tells you that it's simply not needed in Ruby.
There's a better way to modify SomeClass:
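The missing listing presumably moved the instantiation behind a method, along these lines:

```ruby
class Dependency
  def do_something_with(values)
    values.map(&:to_s) # stand-in for whatever the real dependency does
  end
end

class SomeClass
  def work_your_magic_on(values)
    get_dependency.do_something_with(values)
  end

  def get_dependency
    Dependency.new # the 'default' implementation of the dependency
  end
end
```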
Now, the ALT.NET fanbois will still tell you how wrong this is because SomeClass still has a direct dependency on the Dependency class. I should know, because I was one of them. And again, in C# or Java I'd definitely agree that this code is bad. Not so in Ruby, however, because I can easily replace the actual implementation of SomeClass' dependency in both automated tests and actual production code, without impacting anything else that uses the Dependency class.
Suppose we have the following test:
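The original test listing is missing; here is a framework-free sketch of the idea (the original presumably used Test::Unit or RSpec, and the class definitions are repeated so the snippet is self-contained):

```ruby
class Dependency
  def do_something_with(values)
    values.map(&:to_s)
  end
end

class SomeClass
  def work_your_magic_on(values)
    get_dependency.do_something_with(values)
  end

  def get_dependency
    Dependency.new
  end
end

# the 'test': without access to the dependency, all we can check is the
# return value; whether the dependency was used correctly stays unverified
some_class = SomeClass.new
result = some_class.work_your_magic_on([1, 2, 3])
raise "unexpected result" unless result == ["1", "2", "3"]
```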
Since I don't provide the dependency to the object that I'm testing, I can't really verify that the dependency was used correctly. I'm also not using any mocking framework since I want to show how the language itself takes away the need to inject your dependencies. Given the following spy class:
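A sketch of such a spy (the class name is my own): it stands in for the real dependency and records the arguments it was called with, so the test can inspect them afterwards.

```ruby
class DependencySpy
  attr_reader :received_values

  def do_something_with(values)
    @received_values = values # record the interaction for later verification
  end
end
```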
I could now write my test like this:
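A sketch of that test (definitions repeated so the snippet is self-contained; I use define_singleton_method here, where the original may have used a plain def some_class.get_dependency):

```ruby
class Dependency
  def do_something_with(values)
    values.map(&:to_s)
  end
end

class SomeClass
  def work_your_magic_on(values)
    get_dependency.do_something_with(values)
  end

  def get_dependency
    Dependency.new
  end
end

class DependencySpy
  attr_reader :received_values

  def do_something_with(values)
    @received_values = values
  end
end

# the interaction test
spy = DependencySpy.new
some_class = SomeClass.new
# insert a get_dependency method on this one instance only; it precedes
# SomeClass#get_dependency during Ruby's method lookup
some_class.define_singleton_method(:get_dependency) { spy }
some_class.work_your_magic_on([1, 2, 3])
raise "spy was not used" unless spy.received_values == [1, 2, 3]
```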
And it'll pass. The 'trick' is that I simply change the implementation of the get_dependency method for the instance that I have. This doesn't change anything at the class level, merely at the instance level. Technically, I don't really change the implementation of get_dependency in SomeClass; I merely define a singleton method on this particular instance, which precedes the one in SomeClass during Ruby's method lookup.
Also, I would like to point out that this kind of testing isn't exactly something that I'd encourage you to do, though this technique can be useful if you really need to do interaction testing. But it's a good illustration of why you don't really need to do Dependency Injection in Ruby to write testable code.
Now you might be wondering, what's the difference between doing something like this and using a tool like TypeMock in .NET to basically achieve the same thing? Well, when writing code like this in C# and testing it with TypeMock, you achieve one of the benefits that you could have with using Dependency Injection: being able to control the dependencies. But you can't change the implementation of the dependency at runtime in normal production code. In Ruby, with the approach outlined above, I can still easily achieve that like this:
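A sketch of that runtime swap (OtherDependency is a hypothetical replacement implementation; the earlier definitions are repeated so the snippet is self-contained):

```ruby
class Dependency
  def do_something_with(values)
    values.map(&:to_s)
  end
end

class OtherDependency
  def do_something_with(values)
    values.map { |v| "other: #{v}" } # a different implementation
  end
end

class SomeClass
  def work_your_magic_on(values)
    get_dependency.do_something_with(values)
  end

  def get_dependency
    Dependency.new
  end
end

# later, possibly in a completely different file: reopen SomeClass and
# replace the default implementation of the dependency
class SomeClass
  def get_dependency
    OtherDependency.new
  end
end
```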
If this code is executed after the earlier definition of SomeClass, it will reopen SomeClass and change the implementation of the get_dependency method for every instance, both existing ones and those created afterwards. This effectively gives you the ability to change the implementation of a dependency at runtime in production code, without having to use Dependency Injection. Now, some Dependency Injection purists will still claim that this approach is bad because SomeClass knows which implementation of the dependency it uses. And my question to those people is: so what? I can easily change it in any situation I'd run into. You can also consider the presence of the actual type of the dependency as the default implementation to use, without having to force the requirement of this knowledge on consumers.
There is still one situation where I would probably use Dependency Injection in Ruby though, and that is when you want to benefit from what I consider to be yet another great reason to use Dependency Injection in static languages:
- Not having to know anything about the lifecycle of your dependencies
In this case, it's probably much easier to just inject a long-living dependency in an object with a shorter lifecycle. However, if the dependency is basically a singleton (which is still the default for many of the .NET IOC Containers), then I actually would consider implementing the singleton as a class with nothing but class methods (similar to static methods in static languages, but not quite since you can still change them whenever you want) and having my other classes that depend on it call those methods directly, or through helper methods that I can still change when I need to.
I'm sure many people will disagree with some of the points I try to make in this post, but until I get some actual real-world reasons that invalidate my points, I simply don't see the point in sticking to a set of rules and guidelines that were largely made up out of necessity to deal with shortcomings of static languages. That's not to say that dynamic languages don't have any shortcomings or drawbacks, but it does mean that the rules and guidelines of how to write good code are, well, simply different. And as such, it wouldn't be wise to blindly stick with rules that were made for a different way of programming. Question what you already know, because it might not be relevant to what you need to do now.
Julian Birch recently posted a reaction to my reaction to Uncle Bob’s IOC lunacy post. Julian mistakenly thinks that I have a problem with using factories. I definitely don’t have a problem with using factories. I just have a problem with using them in the manner that Uncle Bob suggested in his post. Jeffrey Palermo also recently posted an example of where he thinks using a factory is better than injecting dependencies. I wanted to react to both those posts with a real-world example of where I prefer to use a factory and how I use an IOC container to do so.
As you probably know by now, I'm a big proponent of using IOC containers. I've never stated that they should be used in every single application, but using one certainly pays off in most applications, especially as complexity increases gradually. When you use an IOC container, you’ll have much less need for factories. There are however two situations where I certainly prefer to use a factory. And that is when:
- a certain dependency might not always be used by a class and that dependency is expensive to create
- I might need multiple instances of a dependency during the lifetime of a class
A good example of both those situations is using Agatha’s AsyncRequestDispatcher class from a Silverlight user control. Creating an AsyncRequestDispatcher is expensive because it in turn requires an instance of the AsyncRequestProcessorProxy class. The AsyncRequestProcessorProxy class inherits from WCF’s ClientBase class, and those types are relatively expensive to create. And due to the asynchronous nature of those service calls, you can never deterministically dispose of an AsyncRequestDispatcher instance with absolute certainty that it won’t be disposed before the response of the service call comes back from the service. Because of that, the AsyncRequestDispatcher class was designed to dispose of itself automatically once the response is returned. This effectively means that you’ll need a new instance of AsyncRequestDispatcher whenever you need to make an asynchronous call to an Agatha service.
For most user controls, it obviously doesn’t make sense to inject one instance of the AsyncRequestDispatcher into the user control because you often need to make multiple service calls depending on user interactions or other events. A good way to deal with this is to ask a factory to create the AsyncRequestDispatcher whenever you need one. Which is why Agatha offers the AsyncRequestDispatcherFactory class, which simply resolves an AsyncRequestDispatcher from the container and returns it.
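Agatha's actual factory is a C# class and its listing isn't included here; as a language-neutral sketch of the same container-backed factory pattern, here it is in Ruby (the Container API below is entirely hypothetical):

```ruby
# Hypothetical container: registrations map a key to a block that knows
# how to build the component
class Container
  def initialize
    @registrations = {}
  end

  def register(key, &factory_block)
    @registrations[key] = factory_block
  end

  def resolve(key)
    @registrations.fetch(key).call
  end
end

# The factory merely resolves an instance and returns it; it knows nothing
# about how the instance (or its dependencies) gets built or registered
class AsyncRequestDispatcherFactory
  def initialize(container)
    @container = container
  end

  def create_async_request_dispatcher
    @container.resolve(:async_request_dispatcher)
  end
end
```

Note that nothing is created until the factory is asked, and every call yields a fresh instance, which is exactly what the two situations above require.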
Now, some of you are probably thinking: what is the difference between this and Uncle Bob’s example? Well, unlike Uncle Bob’s example this factory is not responsible for registering the required components to create an AsyncRequestDispatcher with the container. It merely resolves an instance and returns it. And yes, it uses the container to do so. I could actually just new up an AsyncRequestDispatcher myself but then the factory would also have to know about its dependencies and make sure it’d be able to create them. If those dependencies have dependencies of their own, I'm back to dealing with dependencies manually which is exactly what I'm trying to avoid throughout my codebase. I can’t come up with a single reason why this would be wrong, but some of you probably will so feel free to point out those reasons.
The benefit of this factory is that it enables me to delay the instantiation of an expensive dependency, and it also enables me to get more than one instance if I need to during the lifetime of a single object. At the same time, it doesn’t have any of the downsides of Uncle Bob’s approach. Also, this approach is something that you won’t need to resort to throughout your codebase, it’s more for edge-cases.
Now, how do we use this factory? Instead of new’ing the factory manually like Jeffrey does in his example, the factory is registered with the IOC container and I simply inject the factory into the class that needs to use it. This still enables me to change implementations of both the factory as well as the instances the factory needs to create. And it also makes it easy to stub or mock the factory during tests. I get all of the benefits from using Dependency Injection and an IOC container, and I have yet to notice any downsides to this approach. But again, I only use this approach in the 2 situations I mentioned in the beginning of this post. In most cases, you really don’t need factories anymore. And when you do, just leverage your IOC container to make the factory as simple and dumb as possible.
Uncle Bob has done it again. In his latest post, he actually advises people to go back to manual dependency injection instead of relying on a framework (our beloved IOC containers) to do it for us. You really ought to check out the post and his examples in particular. Done with that? Ok.
There are two major problems that I have with his post. The first is this one:
“I like this because now all the Guice is in one well understood place. I don’t have Guice all over my application. Rather, I've got factories that contain the Guice. Guicey factories that keep the Guice from being smeared all through my application. What’s more, if I wanted to replace Guice with some other DI framework, I know exactly what classes would need to change, and how to change them. So I've kept Guice uncoupled from my application.”
For those of you who don’t know, Guice is an IOC container from Google. Uncle Bob doesn’t want references of an IOC container spread out throughout his application, but he doesn’t seem to mind coupling his factories with his IOC container for some reason. He seems to think that he can actually switch to a different container more easily because of this, because he would only have to modify his factories.
Here’s the deal. If you have more than a handful of references to your IOC container in your code (apart from the registration obviously), then you really don’t know what using an IOC container is all about. Yes, I know Uncle Bob is supposed to be a legend. Yes, I realize that I'm saying that a legend in the OO world doesn’t seem to grasp some of the most important concepts of using an IOC container. Can you really disagree with that after reading his post?
His whole idea of making it easier to switch to another container with his approach is ludicrous. He would have to modify every single factory that he uses, and in his trivial non-real world examples that wouldn’t be a lot to do but out here in the real world, we’d end up with a boatload of those factories and they would all have to be modified. Conversely, if you’re using an IOC container in the way it is meant to be used, you’d only have to change your registration code and the one or two places where you resolve something manually through the container. And that’s it. Yes, you typically only need one or two places where you use the container directly. Well, unless you’re Uncle Bob of course.
My other problem with his post is with the sample that he uses. It really is a very simplistic example and the approach he outlines simply does not scale when you’re dealing with large, real-world codebases. His approach might work for the size of the examples in his books, or for the code of Fitnesse, but if I were to apply his recommended approach for some of the systems that we’re working on, it would lead to a terrible mess very quickly. Not exactly what I'd have in mind when thinking of Clean Code.
The one thing I do like about this post? Well, there are quite a few people who follow Uncle Bob blindly and can’t say a single bad thing about his thoughts/ideas/approaches. I know quite a few of them make extensive use of IOC containers and some of them are even involved in the development of these containers. Hopefully, Uncle Bob’s post will finally convince these people that blindly following anyone is simply never a good idea and that you have to keep a critical mindset for everything that you read, no matter who wrote it.
Concepts like Dependency Injection and using an Inversion Of Control container have gradually become more popular and accepted in the .NET world in the last 2 years or so. There are however still quite a few people who doubt the validity of these concepts. Quite a few of them seem to think that Dependency Injection, for instance, is only useful to increase the testability of your code. I wanted to go over a real-world example which shows a huge benefit (IMO) which can be solely attributed to thinking about loose coupling, using Dependency Injection and using an Inversion Of Control container.
I recently wrote a post about how you can use an Agatha service layer fully in-process instead of having to host it in a different service through WCF. The only reason why it’s so easy to do that, is because Agatha’s design makes heavy use of loose coupling and dependency injection. And obviously, it makes use of an Inversion Of Control container to manage the dependencies and to wire everything together.
There are basically 3 very important parts to Agatha. The first is the Request Processor. This class basically delegates the handling of each incoming request to its corresponding request handler. The other important parts are the Request Dispatcher (for synchronous client-side usage) and the Asynchronous Request Dispatcher (for asynchronous client-side usage). I'll just refer to these two classes as the request dispatchers for the rest of this post. When the service layer is hosted through a WCF service, the request dispatchers need to communicate with the Request Processor through a proxy (either a synchronous one, or an asynchronous one). It would’ve been very easy to tie the implementations of the request dispatchers directly to the implementation of the WCF proxies. If I had gone that way, I wouldn’t have been able to offer the ability to run the service layer in process though, since the request dispatchers would require the WCF proxies to be used. I could’ve hosted the WCF service in process, but that would really be overkill if you don’t really want to use WCF.
As easy as it would’ve been to tie the request dispatchers directly to the proxies, it was just as easy to make them depend on a slightly more abstract component. There are two interfaces to communicate with the Request Processor: IRequestProcessor for synchronous usage and IAsyncRequestProcessor for asynchronous usage.
Notice that there are no WCF attributes on these interfaces. I have two more WCF-enabled interfaces, namely the IWcfRequestProcessor and the IAsyncWcfRequestProcessor. They define the same methods as their non-WCF counterparts, but have all of the required WCF attributes placed on top of those methods.
Typically when creating (or generating) a WCF proxy, the proxy will only implement the interface of the service contract (in this case, that would be the IWcfRequestProcessor or the IAsyncWcfRequestProcessor interface). Any class that wants to use these proxies would then need an instance of that proxy to be able to communicate with the WCF service. In Agatha’s proxies, I chose a slightly different approach: they implement both the WCF interface and the non-WCF interface. Since the method definitions are identical, this doesn’t require any extra work. Take a look at the class definitions of those proxies: both RequestProcessorProxy and AsyncRequestProcessorProxy inherit from WCF’s ClientBase class and pass the WCF-specific interface as the generic type parameter to ClientBase. That means these proxies can execute the WCF operations defined in the WCF-specific interface. These proxy classes also implement the original interfaces, which means they can be used by any class which requires an instance of IRequestProcessor or IAsyncRequestProcessor.
In Agatha, there is no class that directly uses either the RequestProcessorProxy or the AsyncRequestProcessorProxy class. Instead, the request dispatchers depend solely on IRequestProcessor or IAsyncRequestProcessor.
The dispatchers never know exactly who they’re talking to. This means that they can communicate either through a WCF proxy, or the real Request Processor, depending on how your Inversion Of Control container is configured. That is a huge benefit because it gives you a tremendous amount of flexibility when it comes to how you want to use your service layer (in process, in another service, on another machine, whatever) and it really didn’t take any extra effort with regards to development time to achieve that flexibility. If I want to use Agatha with a different communication technology, then I could do so by simply creating implementations of the IRequestProcessor and IAsyncRequestProcessor interfaces which deal with the specifics of whatever that different communication technology might be. I would also have to change the way the implementations are registered with the Inversion Of Control container to make sure that the new implementations are used. But I wouldn’t have to change anything else.
In this case, dependency injection and loose coupling were not used to increase testability. The increased testability that comes from using this approach is an added bonus.
For those of you who've used Dependency Injection, you know that the two most common ways of injecting a dependency into a class are constructor injection and setter injection. For those of you who haven't used Dependency Injection yet, here's a simple example which shows both techniques:
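The original (presumably C#) listing is missing; here are the same two techniques sketched in Ruby, with hypothetical names (MyService, perform, decorate are my own):

```ruby
class MyService
  # setter injection: the optional dependency can be supplied (or swapped)
  # after the instance has been created
  attr_writer :optional_dependency

  # constructor injection: the required dependency must be supplied up front
  def initialize(required_dependency)
    @required_dependency = required_dependency
  end

  def do_work
    result = @required_dependency.perform
    # the optional dependency is only used when it has actually been set
    @optional_dependency ? @optional_dependency.decorate(result) : result
  end
end
```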
This example is very abstract, but it should be pretty clear. Constructor injection is used to inject the required dependency whenever an instance of MyService is created, whereas setter injection is used to inject the optional dependency after the instance is created. I obviously can't speak for everyone who uses Dependency Injection, but generally speaking most people use constructor injection for required dependencies and setter injection for optional dependencies.
There is however one situation in which I prefer setter injection for required dependencies over constructor injection: dependencies of abstract classes or base classes. For instance, in our service layer each incoming request is handled by a specific RequestHandler. Most of our RequestHandlers need our NHibernate infrastructure to be set up, which is automatically taken care of by our UnitOfWork implementation. So we have the following NhRequestHandler class (simplified for the purpose of this blog post):
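The original C# listing is missing; below is a hypothetical Ruby rendition of the same structure (the unit-of-work method names start and finish, and the derived handler, are my own inventions):

```ruby
# the base handler gets its unit of work through setter injection, so
# derived handlers never have to mention it in their constructors
class NhRequestHandler
  attr_writer :unit_of_work # required, but injected via a setter by the container

  def handle(request)
    @unit_of_work.start # blows up if the container didn't inject it
    response = handle_request(request)
    @unit_of_work.finish
    response
  end
end

# a derived handler only declares its *own* dependencies in its constructor
class GetCustomerHandler < NhRequestHandler
  def initialize(customer_repository)
    @customer_repository = customer_repository
  end

  def handle_request(request)
    @customer_repository.get(request)
  end
end
```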
As you can see, the IUnitOfWork dependency is a required dependency: you would get a NullReferenceException when trying to handle a request without an IUnitOfWork instance present. Yet, I really don't want to put it in the constructor, because then each and every RequestHandler that derives from this class would also have to put it in its constructor, even though most of them won't access the IUnitOfWork directly.
Actually, most of our applications inherit from the NhRequestHandler and then add some more dependencies that some kind of base BusinessRequestHandler will need. These are dependencies to deal with authentication, authorization, user context, application context, etc... Some of these dependencies will be used by the derived RequestHandlers, some won't. All of them however will indeed be used by the BusinessRequestHandler so they are definitely required dependencies. Using constructor injection for these dependencies would lead to 'noise' in every derived RequestHandler's constructor.
Instead, we use setter injection for all of a base type's dependencies, and constructor injection only for the dependencies of the derived types. It keeps the constructors as clean as they can be and avoids unnecessary noise. We know that our IOC container will fulfill all of the constructor dependencies as well as each property dependency in the inheritance hierarchy, so there's no chance of anything going wrong there. Unless, of course, somebody seriously breaks the IOC configuration, but in that case our applications won't even make it through the simplest of requests, so that's not something that will ever happen unnoticed.
And for our tests, we always inherit our fixtures from things like HandlerTest or ControllerTest or whatever where all of those property dependencies are automatically set up with mocks or stubs, so it doesn't really cause problems there either.
Written by Davy Brion, published on 11/2/2009 7:00:31 AM