I'm using Jasmine for the automated tests in my breakable toy project. It's a BDD framework, very similar to RSpec, so it makes it really easy to write your tests in the given-when-then style, which I'm (finally) starting to like more and more. What makes given-when-then testing interesting is that you have three explicit steps in a test: setting up the 'given' part, doing something in the 'when' part, and asserting on it in the 'then' part. It makes it especially easy to reuse the 'givens', and even the 'whens' if you just want to add a few 'thens'.
Let's first start off with a bad example, one that I wrote about 2 weeks ago:
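The snippet itself is no longer embedded here, but based on the description below it looked roughly like the sketch that follows. Everything in it is hypothetical: the in-memory customerRepository and the customer fields are stand-ins for the real database code, and the inline describe/it/expect functions mimic just enough of Jasmine's API (plus jasmine-node's asyncSpecWait/asyncSpecDone) that the sketch runs on its own.

```javascript
// Inline stand-ins for Jasmine / jasmine-node so this sketch is self-contained.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
  return { toEqual: function (expected) {
    if (actual !== expected) throw new Error(actual + ' !== ' + expected);
  } };
}
function asyncSpecWait() {} // in jasmine-node these pause/resume the runner;
function asyncSpecDone() {} // they're no-ops in this synchronous sketch

// A fake repository with a callback-style API, standing in for the real
// asynchronous database code (it calls back synchronously to stay simple).
var db = {};
var customerRepository = {
  save: function (customer, callback) { db[customer.id] = { name: customer.name }; callback(); },
  get: function (id, callback) { callback(db[id]); }
};

// The bad version: the given, the when AND the then all crammed into one spec.
describe('given an existing customer', function () {
  it('should contain the same values that have been inserted', function () {
    var customer = { id: 1, name: 'John Doe' };
    customerRepository.save(customer, function () {              // the 'given'
      customerRepository.get(customer.id, function (retrieved) { // the 'when'
        expect(retrieved.name).toEqual(customer.name);           // the 'then'
        asyncSpecDone();
      });
    });
    asyncSpecWait();
  });
});
```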
It's organized in a given-when-then style, but in a bad way. The only benefit I'm getting from it here is that the structure is sort of easy to read: given an existing customer, when it is retrieved from the database, then it should contain the same values that were inserted. When I wrote it, I knew that saving the customer belonged in the 'given' step, retrieving it in the 'when' step, and comparing the fields of the inserted and retrieved customers in the 'then' step. Instead, everything is being done in the 'then' step.
When I wrote that code, I figured it would just be easier to do it this way, because on Node.JS every I/O call is asynchronous and I thought it would hurt readability if I were to split everything up according to the given-when-then rules due to the asynchronous calls. But then I wanted to add tests for updating and deleting a customer. In both cases, the 'given' part would again be 'given an existing customer'. So I wanted to add them in the right place, which meant choosing between duplicating the code to save the customer in each test, or biting the bullet, splitting it up properly, and dealing with the asynchronous calls.
Let's start with our original example, and move the saving of the customer to the 'given' step, and the retrieval to the 'when' step:
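The refactored code isn't shown here anymore either; what follows is a hypothetical sketch of it, using the same inline stand-ins for Jasmine's API and the same fake repository so it runs on its own.

```javascript
// Inline stand-ins for Jasmine / jasmine-node so this sketch is self-contained.
var befores = [];
function describe(name, fn) { var saved = befores.slice(); fn(); befores = saved; }
function beforeEach(fn) { befores.push(fn); }
function it(name, fn) { befores.forEach(function (before) { before(); }); fn(); }
function expect(actual) {
  return { toEqual: function (expected) {
    if (actual !== expected) throw new Error(actual + ' !== ' + expected);
  } };
}
function asyncSpecWait() {} // no-ops here; jasmine-node uses them to pause/resume
function asyncSpecDone() {} // the runner around asynchronous calls

// Fake callback-style repository standing in for the real database code.
var db = {};
var customerRepository = {
  save: function (customer, callback) { db[customer.id] = { name: customer.name }; callback(); },
  get: function (id, callback) { callback(db[id]); }
};

var customer, retrievedCustomer;

describe('given an existing customer', function () {
  beforeEach(function () { // the 'given': save a customer first
    customer = { id: 1, name: 'John Doe' };
    customerRepository.save(customer, function () { asyncSpecDone(); });
    asyncSpecWait();
  });

  describe('when it is retrieved from the database', function () {
    beforeEach(function () { // the 'when': retrieve it
      customerRepository.get(customer.id, function (result) {
        retrievedCustomer = result;
        asyncSpecDone();
      });
      asyncSpecWait();
    });

    it('should contain the same values that have been inserted', function () {
      expect(retrievedCustomer.name).toEqual(customer.name); // only the 'then' left
    });
  });
});
```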
Despite ending up with more lines of code, there are some notable improvements here. We take advantage of the beforeEach method, which is executed once before each spec (in the case of Jasmine, a call to the 'it' method is a spec), including the specs in nested suites (in the case of Jasmine, a call to the 'describe' method creates a new suite). Most BDD frameworks have something similar. Due to the asynchronous nature of Node.JS, we use the asyncSpecWait() and asyncSpecDone() calls (added to Jasmine by jasmine-node) to wait until the asynchronous calls have completed before moving on to the next step. In production Node.JS code you really don't want to block like this, since that completely takes away the benefits of the platform, but for automated tests it makes sense. This enables us to put the right code in the right place: saving the customer is done in the 'given' step, retrieving it in the 'when' step, and the 'then' step only contains the code to verify that both instances are equal. If we need to verify something else about retrieved customers, we can easily add more specs (calls to the 'it' method) without having to repeat any of the setup work.
Now we can also add the tests for the update and delete scenarios, within the context of 'given an existing customer'.
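Those tests aren't shown here anymore either; a hypothetical sketch of them follows, again with inline stand-ins for Jasmine's API and a fake repository (the update and remove methods on it are my guesses at the real API) so that it runs standalone.

```javascript
// Inline stand-ins for Jasmine / jasmine-node so this sketch is self-contained.
var befores = [];
function describe(name, fn) { var saved = befores.slice(); fn(); befores = saved; }
function beforeEach(fn) { befores.push(fn); }
function it(name, fn) { befores.forEach(function (before) { before(); }); fn(); }
function expect(actual) {
  return { toEqual: function (expected) {
    if (actual !== expected) throw new Error(actual + ' !== ' + expected);
  } };
}
function asyncSpecWait() {} // no-ops in this synchronous sketch
function asyncSpecDone() {}

// Fake callback-style repository; update/remove are hypothetical method names.
var db = {};
var customerRepository = {
  save: function (customer, callback) { db[customer.id] = { name: customer.name }; callback(); },
  get: function (id, callback) { callback(db[id]); },
  update: function (customer, callback) { db[customer.id] = { name: customer.name }; callback(); },
  remove: function (id, callback) { delete db[id]; callback(); }
};

var customer;

describe('given an existing customer', function () {
  beforeEach(function () { // the shared 'given' for every nested suite
    customer = { id: 1, name: 'John Doe' };
    customerRepository.save(customer, function () { asyncSpecDone(); });
    asyncSpecWait();
  });

  describe('when it is updated', function () {
    beforeEach(function () {
      customer.name = 'Jane Doe';
      customerRepository.update(customer, function () { asyncSpecDone(); });
      asyncSpecWait();
    });

    it('should persist the new values', function () {
      customerRepository.get(customer.id, function (retrieved) {
        expect(retrieved.name).toEqual('Jane Doe');
        asyncSpecDone();
      });
      asyncSpecWait();
    });
  });

  describe('when it is deleted', function () {
    beforeEach(function () {
      customerRepository.remove(customer.id, function () { asyncSpecDone(); });
      asyncSpecWait();
    });

    it('should no longer be retrievable', function () {
      customerRepository.get(customer.id, function (retrieved) {
        expect(retrieved).toEqual(undefined);
        asyncSpecDone();
      });
      asyncSpecWait();
    });
  });
});
```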
In this case, the only duplication we have is the calls to asyncSpecWait and asyncSpecDone, which can't really be avoided with this style of testing on the Node.JS platform. Other than that, each part of the code is focused solely on what it needs to do. If you're using a BDD framework, be sure to leverage it so that each part of your test code is as focused on its task as it can be.
Lost some time yesterday trying to get something working with MS Test (not my choice, but that's what my client uses) that I had expected to be easy. After all, it was especially easy to get working with NUnit. I wanted to create a base testing fixture which would instantiate one instance of Internet Explorer for the entire test run, and make that instance available to each test in the assembly. Sounds easy, no?
First problem: MS Test runs each test on a different thread.
When you use IE through WatiN, it uses COM behind the scenes. Accessing COM objects from different threads is not a safe thing to do and can lead to the following exception: System.Runtime.InteropServices.InvalidComObjectException: COM object that has been separated from its underlying RCW cannot be used.
Running each test individually worked, but running the entire suite made every test except the first one fail with that exception, because MS Test uses a different thread for each test (I suppose the development team did that to make sure it was enterprisey). Quite annoying, but luckily for me, the only other guy in the world who uses MS Test with WatiN ran into the same problem and described his workaround on his blog.
I made minor modifications to his IEStaticInstanceHelper class (basically just turned it into a static class) so my version looks like this:
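The class itself is no longer shown here; the following is a hypothetical reconstruction of it (field and member names are my guesses). The idea is to remember the IE window handle and the hash code of the thread that last used the instance, and to reattach to the same window via WatiN's AttachToIE whenever a different test thread asks for it.

```csharp
using System.Threading;
using WatiN.Core;

public static class IEStaticInstanceHelper
{
    private static IE _ie;
    private static int _ieThread;
    private static string _ieHwnd;

    public static IE IE
    {
        get
        {
            int currentThread = Thread.CurrentThread.GetHashCode();
            if (currentThread != _ieThread)
            {
                // a different test thread is asking for IE: reattach by window handle
                _ie = IE.AttachToIE(Find.By("hwnd", _ieHwnd));
                _ieThread = currentThread;
            }
            return _ie;
        }
        set
        {
            _ie = value;
            _ieHwnd = _ie.hWnd.ToString();
            _ieThread = Thread.CurrentThread.GetHashCode();
        }
    }
}
```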
I also had the following AssemblyInitialize and AssemblyCleanup methods:
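Those methods aren't shown anymore either; a hypothetical sketch of them, matching the shape the next paragraph describes (MS Test requires them to be static and to live in a class marked with TestClass):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

[TestClass]
public class AssemblySetupFixture
{
    [AssemblyInitialize]
    public static void AssemblyInitialize(TestContext context) // parameter is required!
    {
        IEStaticInstanceHelper.IE = new IE();
    }

    [AssemblyCleanup]
    public static void AssemblyCleanup()
    {
        IEStaticInstanceHelper.IE.Close();
    }
}
```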
MS Test will call the AssemblyInitialize method before any test in the assembly is executed, provided that you don't forget to add the TestContext parameter to your method, or it will silently be ignored (WTF?!). It'll also call the AssemblyCleanup method once, after all tests in the assembly have finished executing.
Second problem: MS Test runs the AssemblyCleanup method in an MTA thread, even though each test is executed in STA threads by default.
As you can see in my AssemblyCleanup method, I access the IE property of IEStaticInstanceHelper. That property getter contains the following line:
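The line itself is missing here; presumably it was the WatiN reattachment call, something like this (reconstructed):

```csharp
_ie = IE.AttachToIE(Find.By("hwnd", _ieHwnd));
```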
That line works perfectly during the execution of the tests. When it is called from the AssemblyCleanup method, it times out after 30 seconds because it can't seem to find the IE window with the handle (_ieHwnd) that is known to be valid. And this, apparently, is because the current thread is an MTA thread when we're within the AssemblyCleanup method instead of an STA thread. I can't for the life of me figure out why they'd use an MTA thread for the AssemblyCleanup method while they use STA threads for the tests, but I will again assume it was done to keep up to the high enterprisey standard that people expect from something like MS Test.
The solution, while a horrible hack, is quite simple and works perfectly:
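The code isn't shown anymore, but the hack presumably boiled down to forcing the cleanup work onto an explicitly created STA thread, roughly like this (a hypothetical sketch; this replaces the AssemblyCleanup body inside the same TestClass as before):

```csharp
using System.Threading;

[AssemblyCleanup]
public static void AssemblyCleanup()
{
    // MS Test calls this on an MTA thread, so spin up an STA thread,
    // close IE from there, and wait for it to finish.
    var thread = new Thread(() => IEStaticInstanceHelper.IE.Close());
    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}
```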
There... nice and enterprisey.
Written by Davy Brion, published on 3/24/2011 6:26:03 PM
Just ran into something that I thought was pretty cool. If you're using WatiN, it's relatively easy to write browser-based automated tests without resorting to recorded tests. And since WatiN supports multiple browsers, you can write those tests in a browser-agnostic manner. And if you make use of NUnit's Generic Fixtures (introduced in NUnit 2.5), you can very easily run those tests in multiple browsers as well. Suppose you have the following base test fixture:
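The base fixture itself is no longer shown here; a hypothetical sketch of what it might have looked like follows (the BrowserTests name is my placeholder; WatiN 2.x gives IE and FireFox a common Browser base class with parameterless constructors):

```csharp
using NUnit.Framework;
using WatiN.Core;

// Generic over the concrete WatiN browser type; NUnit 2.5's generic
// fixtures let subclasses run once per browser.
public abstract class BrowserTests<TBrowser> where TBrowser : Browser, new()
{
    protected Browser Browser { get; private set; }

    [SetUp]
    public void CreateBrowser()
    {
        Browser = new TBrowser();
    }

    [TearDown]
    public void CloseBrowser()
    {
        Browser.Dispose();
    }
}
```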
You can then write a test fixture like this:
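The example fixture is also missing; a hypothetical sketch, assuming a generic base fixture named BrowserTests&lt;TBrowser&gt; along the lines the post describes:

```csharp
using NUnit.Framework;
using WatiN.Core;

[TestFixture(typeof(IE))]
[TestFixture(typeof(FireFox))]
public class HomePageTests<TBrowser> : BrowserTests<TBrowser> where TBrowser : Browser, new()
{
    [Test]
    public void CanNavigateToHomePage()
    {
        // hypothetical URL, just to illustrate a browser-agnostic test
        Browser.GoTo("http://example.com/");
        Assert.That(Browser.Url, Is.StringContaining("example.com"));
    }
}
```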
And when you run your tests, it will run this test once in IE, and once in Firefox.
ReSharper's test runner has issues with this though... it does run the tests, but it doesn't report any feedback on them. The normal NUnit test runner does show the feedback correctly.
Written by Davy Brion, published on 2/14/2011 5:52:27 PM
I wanted to write some tests for the EventPublisher Ruby module I've been playing around with, so I figured I'd just use RSpec for it since that appears to be the most popular testing library in the Ruby world. Now, in the .NET world I never really got into the whole BDD thing and I stuck with TDD because I was quite happy with the coverage that it gave me. In Ruby however, due to the whole dynamic environment I think it's more important to test functionality as completely as possible with as little knowledge as possible of implementation details while mocking/stubbing/faking as little as possible. That doesn't mean I wouldn't mock anything in Ruby tests... it just means that I would try to follow my own rules on the subject as much as possible, whereas in the .NET world many of us (myself included) probably go a little overboard with the whole mocking/stubbing/faking thing.
Something to keep in mind for the rest of this post: I did not write my tests first for this thing. I know, I know, test-first is better. I generally prefer to write my tests before my real code as well, but in this case, the EventPublisher code was the result of just some first time Ruby experiments, and since I'm pretty happy with the code I don't want to get rid of it just so I could do it "right" by re-writing it test-first. So these tests were not meant to drive the design, only to verify the correctness of the code. Also note that the tests are not complete yet. More should be added, but I thought I had enough to post here and hopefully collect some feedback from you guys/gals.
When I started with these tests for the EventPublisher module, I instinctively wanted to test at too technical a level, like I often do in .NET. For instance, I wrote a test that proved that when you called the subscribe method, the passed-in method was actually added to the Event instance that the EventPublisher uses. The thing is: if you use the EventPublisher, you never directly use Event instances. So why on earth should I even know about them in my tests, right? After all, they are an implementation detail. I had to switch my reasoning from "is the code doing what I, a software developer, think it should do?" to something along the lines of "what needs to happen when I trigger an event?". For instance, if I trigger an event, all I should care about is that the subscribed methods are called correctly and that they receive their arguments correctly. How that actually happens is something that I probably shouldn't care about at all in these tests.
I eventually ended up with the following:
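The specs themselves are no longer included here; the sketch below is a hypothetical reconstruction of the style. The EventPublisher module shown is a stand-in with the assumed public surface (subscribe and trigger), the describe/it functions are tiny shims so the snippet runs with plain Ruby instead of the rspec gem, and the expectations are written as plain assertions for the same reason.

```ruby
# Stand-in for the real EventPublisher module; only subscribe/trigger are
# assumed from the post, the internals here are invented.
module EventPublisher
  def subscribe(event, handler)
    handlers_for(event) << handler
  end

  def trigger(event, *args)
    handlers_for(event).each { |handler| handler.call(*args) }
  end

  private

  def handlers_for(event)
    @handlers ||= Hash.new { |hash, key| hash[key] = [] }
    @handlers[event]
  end
end

# Tiny shims so the spec-style code below runs without the rspec gem.
def describe(description)
  puts description
  yield
end

def it(description)
  yield
  puts "  #{description}"
end

class Publisher
  include EventPublisher
end

describe "an object that publishes events" do
  it "calls each subscribed handler with the triggered arguments" do
    publisher = Publisher.new
    calls = []
    publisher.subscribe(:saved, lambda { |*args| calls << args })
    publisher.subscribe(:saved, lambda { |*args| calls << args })
    publisher.trigger(:saved, 1, 2)
    raise "handlers not called correctly" unless calls == [[1, 2], [1, 2]]
  end

  it "only notifies handlers subscribed to the triggered event" do
    publisher = Publisher.new
    called = false
    publisher.subscribe(:deleted, lambda { |*args| called = true })
    publisher.trigger(:saved)
    raise "wrong handler was called" if called
  end
end
```

Note that the spec bodies only exercise subscribe and trigger; nothing in them knows how the handlers are stored internally.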
There are a couple of things I like about this. For starters, the output of running this code looks like this:
Anyone can read that and understand what kind of functionality is supported.
Another big benefit of these tests is that they contain zero knowledge of the actual implementation of the EventPublisher module. They merely initiate its functionality, and verify whether the expected behavior in the given functional context occurred. I could seriously refactor (or even rewrite) the actual EventPublisher code and I wouldn't have to change my tests as long as I don't change the name and arguments of the subscribe and trigger methods.
For now, I'm pretty happy with this style and organization of tests and will probably stick with it for a while in my Ruby coding. Unless one (or some) of you tell me how I can improve it :)
I was asked to show how you can easily do CRUD tests, so here's a base class that makes it very easy.
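The base class itself is not shown in this post anymore; the following is a hypothetical sketch of it. It assumes NUnit and NHibernate (the inverse="true" mapping remark below suggests NHibernate), with a SessionFactory supplied by the concrete fixture; all names other than the four abstract methods described below are my guesses.

```csharp
using NHibernate;
using NUnit.Framework;

public abstract class CrudTests<TEntity> where TEntity : class
{
    protected abstract ISessionFactory SessionFactory { get; }

    protected abstract TEntity BuildEntity();
    protected abstract void ModifyEntity(TEntity entity);
    protected abstract void AssertAreEqual(TEntity expected, TEntity actual);
    protected abstract void AssertValidId(TEntity entity);

    [Test]
    public void CanCreateAndRetrieveEntity()
    {
        var entity = BuildEntity();
        object id;
        using (var session = SessionFactory.OpenSession())
        {
            id = session.Save(entity);
            session.Flush();
        }
        AssertValidId(entity);
        using (var session = SessionFactory.OpenSession())
        {
            AssertAreEqual(entity, session.Get<TEntity>(id));
        }
    }

    [Test]
    public void CanUpdateEntity()
    {
        var entity = BuildEntity();
        object id;
        using (var session = SessionFactory.OpenSession())
        {
            id = session.Save(entity);
            session.Flush();
        }
        ModifyEntity(entity);
        using (var session = SessionFactory.OpenSession())
        {
            session.Update(entity);
            session.Flush();
        }
        using (var session = SessionFactory.OpenSession())
        {
            AssertAreEqual(entity, session.Get<TEntity>(id));
        }
    }

    [Test]
    public void CanDeleteEntity()
    {
        var entity = BuildEntity();
        object id;
        using (var session = SessionFactory.OpenSession())
        {
            id = session.Save(entity);
            session.Flush();
        }
        using (var session = SessionFactory.OpenSession())
        {
            session.Delete(session.Get<TEntity>(id));
            session.Flush();
        }
        using (var session = SessionFactory.OpenSession())
        {
            Assert.IsNull(session.Get<TEntity>(id));
        }
    }
}
```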
Simply inherit from this class, implement the BuildEntity, ModifyEntity, AssertAreEqual and AssertValidId methods and that’s it. Those methods are usually pretty simple. In BuildEntity you just create an unpersisted entity and assign values to the properties, in ModifyEntity you modify the properties, and in AssertAreEqual you compare the properties of both instances. In AssertValidId, you make sure that the ID value is ok (depending on your identifier strategy).
This is good for regular CRUD operations, though we typically add extra tests when we want to test cascades or one-to-many associations mapped with inverse="true".