Groovy Mocking a Method That Exists on Object

In Groovy, the find method exists on Object. find is also an instance method on EntityManager, which is commonly used in JPA to look up an entity by its database id. I was trying to create a Mock of EntityManager like:

def emMock = new MockFor(EntityManager)
emMock.demand.find { Class clazz, Object rquid -> return new Request(rqUID: rquid) }
def em = emMock.proxyDelegateInstance()

This gave me an error though:

groovy.lang.MissingMethodException: No signature of method: com.onelinefix.services.RequestRepositoryImplTest$_2_get_closure1.doCall() is applicable for argument types: (groovy.mock.interceptor.Demand) values: [groovy.mock.interceptor.Demand@a27ebd9]
Possible solutions: doCall(java.lang.Class, java.lang.Object), findAll(), findAll(), isCase(java.lang.Object), isCase(java.lang.Object)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:264) ~[groovy-all-1.8.6.jar:1.8.6]
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:877) ~[groovy-all-1.8.6.jar:1.8.6]
at groovy.lang.Closure.call(Closure.java:412) ~[groovy-all-1.8.6.jar:1.8.6]
at groovy.lang.Closure.call(Closure.java:425) ~[groovy-all-1.8.6.jar:1.8.6]

It turns out that find is a DefaultGroovyMethod added to all Objects, and this was confusing mock.demand into thinking I was trying to call Object.find.

Luckily there is a workaround, and that’s to use the ordinal argument version of demand.find.


emMock.demand.find(1) { Class clazz, Object rquid -> return new Request(rqUID: rquid) }

That’s enough of a change to invoke the mock version instead of the DefaultGroovyMethod version.
Now if only I knew why…

MSBuild Task for PartCover

I continue to lament the dearth of options for test coverage tools in the .NET world.

In the Java world you have open source tools like Emma and Cobertura (and many more) that are widely used and supported, as well as proprietary tools like Clover.

In .NET we have an open source NCover on SourceForge that requires odd code instrumentation and seems essentially dead, another NCover which is proprietary and costs money, and PartCover which is open source but doesn’t seem very active.

Don’t get me wrong, NCover.org is a good option if you are willing to spend the money for it. But with a team of 30+ and a CI server, I’m not sure if I want to drop $10k on it. (NCover up until version 1.5.8 was Free Software (GPL) before it was closed. Who has the source and why haven’t you forked it yet?)


If you’re not willing to pay, that basically leaves PartCover. But of course you want to integrate your code coverage with your automated build. There was no MSBuild support out of the box, so I decided to build it.

Creating an MSBuild Task

I needed to do two things to run PartCover:

  1. Register the native PartCover.CorDriver.dll
  2. Execute the PartCover.exe with the proper options

Register Native DLL

For the details on how to register a native DLL using .NET code, see my earlier post Register and Unregister COM DLL from .NET Code.

Execute PartCover

The MSBuild framework provides a ToolTask base class whose whole purpose is for executing external command line tools. I used this as the base of the task.

1. ToolName

First you override the ToolName property to return the name of the EXE to run. Nothing special here, it’s just the executable name.


protected override string ToolName
{
    get { return "PartCover.exe"; }
}

2. Properties

Next, to start building the task, define all of the settings that a user will need to set to execute it. Create those as properties on the class and MSBuild will set them for you. Start with the simple things that someone will need to pass to get the tool to execute properly, and build from there for other properties. If possible, give the properties sane defaults so that people don’t have to override them in their build file.


// ...

/// <summary>
/// The application to execute to get the coverage results.
/// Generally this will be your unit testing exe.
/// </summary>
public string Target
{
    get { return _target; }
    set { _target = value; }
}

/// <summary>
/// The arguments to pass to the executable
/// </summary>
public string TargetArgs
{
    get { return _targetArgs; }
    set { _targetArgs = value; }
}

public string WorkingDirectory
{
    get { return _workingDirectory; }
    set { _workingDirectory = value; }
}

// ...

3. Command Arguments

Then you need to override the GenerateCommandLineCommands() method. The whole purpose of this method is to construct the command line parameters that need to be passed to the tool, using the Properties defined in the task.


protected override string GenerateCommandLineCommands()
{
    StringBuilder builder = new StringBuilder();
    AppendIfPresent(builder, "--target", Target);
    AppendIfPresent(builder, "--target-work-dir", WorkingDirectory);
    AppendIfPresent(builder, "--target-args", QuoteIfNeeded(TargetArgs));
    AppendIfPresent(builder, "--output", Output);

    AppendMultipleItemsTo(builder, "--include", Include);
    AppendMultipleItemsTo(builder, "--exclude", Exclude);

    Log.LogCommandLine(builder.ToString());

    return builder.ToString();
}

4. Execute

Finally, if you have anything special to do, you can override the Execute() method. In this case, I wanted to handle the registering and de-registering of the native CorDriver.dll. Make sure that you call the base.Execute() method so that the ToolTask can do the work that it needs to do.


public override bool Execute()
{
    string corDriverPath = Path.Combine(ToolPath, CorDriverName);
    Log.LogMessage("CoreDriver: {0}", corDriverPath);
    using (Registrar registrar = new Registrar(corDriverPath))
    {
        registrar.RegisterComDLL();
        return base.Execute();
    }
}

To see the whole thing, download the files at the bottom of this post.

How to Use PartCover with MSBuild

Now that you have a custom task, you need to create a Target in your MSBuild file to execute it. Something along these lines should work; the task and assembly names below are just examples (use whatever you named them when you built the task), and the paths need to point at your own tree.

<UsingTask TaskName="PartCover.MSBuild.PartCover"
           AssemblyFile="tools\PartCover.MSBuild.dll" />

<Target Name="Coverage">
  <PartCover ToolPath="tools\PartCover"
             Target="tools\nunit\nunit-console.exe"
             TargetArgs="MyProject.Test.dll /xml=nunit-results.xml"
             WorkingDirectory="."
             Output="partcover-results.xml"
             Include="[MyProject]*"
             Exclude="[MyProject.Test]*" />
</Target>

Download the code:
PartCover MSbuild.zip

Good luck and I hope someone else finds this useful.

On Singletons

The Singleton Pattern is one of the most widely used patterns from the Gang of Four (GoF) Design Patterns book. One of the reasons that it’s so widely used, I think, is because it’s also very easy to understand. The basic idea is that you control the creation of an object so that it will only ever have one instance. This is achieved by having a private constructor and a static instance that is accessed through a static method on the Class.

Singleton Example:

public class FirstSingleton {

    private static final FirstSingleton instance = new FirstSingleton();

    public static FirstSingleton instance() {
        return instance;
    }

    private FirstSingleton() {
    }
}

Reasons to Avoid the Singleton Pattern

Practical Limitations

There are some practical limitations of the Singleton pattern though. With modern VM based languages like Java there can be multiple classloaders in a given application. A Singleton then is no longer a Singleton, as there will be an instance per classloader. So if you can’t enforce an actual single instance, why use the Singleton Pattern? You might answer that the classloader case is a rare one and that in a “normal” application you’re not going to run into it. And you would be right. But I still think that you should avoid the Singleton pattern.
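
To make that concrete, here is a minimal sketch (the classpath location is hypothetical) that loads FirstSingleton through two isolated classloaders. Each loader gets its own Class object, and therefore its own “singleton” instance.

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to the compiled FirstSingleton class
        URL[] classpath = { new File("build/classes/").toURI().toURL() };

        // Two loaders with no parent, so neither delegates to the other
        ClassLoader loaderA = new URLClassLoader(classpath, null);
        ClassLoader loaderB = new URLClassLoader(classpath, null);

        Class<?> classA = loaderA.loadClass("FirstSingleton");
        Class<?> classB = loaderB.loadClass("FirstSingleton");

        Object instanceA = classA.getMethod("instance").invoke(null);
        Object instanceB = classB.getMethod("instance").invoke(null);

        System.out.println(classA == classB);       // false: two Class objects
        System.out.println(instanceA == instanceB); // false: two "singletons"
    }
}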

Multiple Concerns

Fundamentally, a Class in Object Oriented Programming should have one and only one responsibility. Following the Single Responsibility Principle often leads to good design and the benefits of a highly-cohesive system, such as being easier to understand and easier to change and maintain over time.

The Singleton, by its very definition, has multiple responsibilities. The primary responsibility defined in the pattern is controlling construction of Objects of the Class. Of course, from an application point of view, this is pretty weak. What you really want from a class is a useful service. So if you have a Singleton Data Access class, which is more important: the singularity of the instance, or the ability to get at the data in your data store? I really hope you picked the latter.

There are a number of ways that you can control access to the construction of a class that do not rely on the Singleton pattern. A Factory or a Container, for example, can be used to get at an instance in such a way that the calling code doesn’t care how the class gets constructed. Whether a new instance is created or a single instance returned should be immaterial to the user of that instance. This also does a good job of dealing with the Single Responsibility concern raised earlier: now the enforcement of a single instance is handled by another class whose sole responsibility is that enforcement.

Testability

One consideration that must be made when you are designing software is: How are you going to test the code? If no consideration is given, then often the code will be very difficult to test and quality will suffer.

The example of a Singleton above is relatively simple. But what happens if you introduce a dependency into your Singleton? Now you have tight coupling between two classes. If the dependency does something like access the database or call a web service, or itself has a bunch of dependencies, then all of a sudden the code becomes very hard to test.


public class SecondSingleton {

    private static final SecondSingleton instance = new SecondSingleton();

    private SomeDependency myDependency;

    public static SecondSingleton instance() {
        return instance;
    }

    private SecondSingleton() {
        myDependency = new SomeDependency();
    }
}

I’ve written previously about Mock Objects for testing. Mock Objects allow you to create dummy implementations of dependencies to make testing a Class much easier. In the Singleton example here, you can see that there would be no way to inject a Mock instance into the SecondSingleton class.
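
For example, a JUnit 3 style test sketch makes the problem obvious: there is simply no seam where a mock could be handed in.

import junit.framework.TestCase;

public class SecondSingletonTest extends TestCase {

    public void testSomethingThatUsesTheDependency() {
        // The only way to get an instance is the static accessor, and the
        // real SomeDependency has already been constructed inside the
        // private constructor. There is nowhere to substitute a mock.
        SecondSingleton singleton = SecondSingleton.instance();

        // ... any behavior that touches SomeDependency now hits the real
        // database, web service, etc.
    }
}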

Factories or Containers to Enforce Single Instances

One of the best ways to access dependencies in a class is through Constructor based Dependency Injection. This can be done through a container like Spring or Pico Container.

You can of course do this yourself as well. If you are using a Factory for object creation, then the singleton enforcement and the dependency injection can happen in the same class (whose sole responsibility is to manage the construction of objects).


public class NotSingleton {

    private SomeDependency myDependency;

    // Package-private so only the Factory (in the same package) can construct it
    NotSingleton(SomeDependency depends) {
        myDependency = depends;
    }
}

public class Factory {

    private NotSingleton notsingleInstance
            = new NotSingleton(
                    (SomeDependency) lookupType(SomeDependency.class));

    public NotSingleton getNotSingleton() {
        return notsingleInstance;
    }

    /**
     * Get an instance for a given Interface
     * @param clazz The interface for which you want the instance
     */
    public Object lookupType(Class clazz) {
        // ...
    }
}

With this structure you can now independently test all of your classes without worrying about the interactions of dependent objects. You also have not structured the code in such a way that having multiple instances should cause any problems, so your code won’t break if the classloader case ever comes up.

Mocking .NET Objects with NUnit

NUnit is my Unit Testing tool of choice for .NET development. Microsoft provides a unit testing framework but it only works with some higher-end versions of Visual Studio. They’re so similar that it’s almost ridiculous that Microsoft created their own version.
(See one of my previous posts for more information on Automating NUnit with MSBuild.) In the Java world it’s fairly common to use Mocking to help make unit testing easier. I’ve written about using JMock for Unit Testing in Java. In this post, I’d like to talk about a relatively new feature of NUnit which now supports Mocks out of the box.

What Are Mock Objects

Mock Objects are a technique that allows you to isolate classes from their dependencies for testing purposes. This isolation allows for fine-grained testing of single methods in a single class, possibly even before the dependent classes are fully implemented. It allows your tests to run quickly and it makes testing small pieces of functionality much easier. When you’ve tested individual pieces of code in isolation you can have much higher confidence in larger-grained tests. This isolation becomes even more interesting when you are dealing with dependencies such as a data layer or a web service layer. External calls like that can be very time consuming or could fail if the remote system is down for maintenance.

One of the great things about using Mock Objects is that they force you to think about the dependencies that your classes and methods have. They force you to think about the coupling between your classes. If you have high coupling then your code is often harder to test. If you have a loosely coupled design then testing and using Mock Objects is much easier. Thinking about those design notions early can help you more easily manage change over time.

Maxims:

  • Good design is better than bad design
  • Loosely coupled objects are usually a better design than tightly coupled objects
  • Testing improves code quality and developer efficiency over time
  • Testing is easier with a loosely coupled design

A Sample Project

We’re going to start with some simple code. We create a Domain object called Person and an interface for a Data Access object called IPersonRepository. Pretty simple at this point.

public class Person
{
    public string Id;
    public string FirstName;
    public string LastName;

    public Person(string newId, string fn, string ln)
    {
        Id = newId;
        FirstName = fn;
        LastName = ln;
    }
}


public interface IPersonRepository
{
    List<Person> GetPeople();
    Person GetPersonById(string id);
}

Next we create a PersonService object. This would represent all of the business logic in our application. It would interact with the Data Access tier and return information to the UI layer for display.

We wire together our objects using Constructor based Dependency Injection. All of the dependent Objects are sent in through the constructor. This allows for loose coupling, since the PersonService doesn’t know about the implementing class, only the interface. And since it’s done in the constructor, we can never have an invalid PersonService, as could happen if there were a setter for the IPersonRepository implementation.

This is again a fairly straightforward implementation, but I hope it’s enough to illustrate the issue at hand.

public class PersonService
{
    private IPersonRepository personRepos;

    public PersonService(IPersonRepository repos)
    {
        personRepos = repos;
    }

    public List<Person> GetAllPeople()
    {
        return personRepos.GetPeople();
    }

    public List<Person> GetAllPeopleSorted()
    {
        List<Person> people = personRepos.GetPeople();
        people.Sort(delegate(Person lhp, Person rhp) {
            return lhp.LastName.CompareTo(rhp.LastName);
        });
        return people;
    }

    public Person GetPerson(string id)
    {
        try
        {
            return personRepos.GetPersonById(id);
        }
        catch (ArgumentException)
        {
            return null; // no person with that id was found
        }
    }
}

Using Mocks with NUnit

Now we can start testing our PersonService. Notice that we haven’t even implemented the IPersonRepository yet. That way we can make sure that everything in our PersonService class works as expected without having to think about other layers of the application.


using System;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Mocks;

[TestFixture]
public class PersonServiceTest
{
    // The dynamic mock proxy that we will use to implement IPersonRepository
    private DynamicMock personRepositoryMock;

    // Set up some testing data
    private Person onePerson = new Person("1", "Wendy", "Whiner");
    private Person secondPerson = new Person("2", "Aaron", "Adams");
    private List<Person> peopleList;

    [SetUp]
    public void TestInit()
    {
        peopleList = new List<Person>();
        peopleList.Add(onePerson);
        peopleList.Add(secondPerson);
        // Construct a Mock Object of the IPersonRepository Interface
        personRepositoryMock = new DynamicMock(typeof (IPersonRepository));
    }

    [Test]
    public void TestGetAllPeople()
    {
        // Tell that mock object when the "GetPeople" method is
        // called to return a predefined list of people
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);
        // Construct a Person service with the Mock IPersonRepository
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        // Call methods and assert tests
        Assert.AreEqual(2, service.GetAllPeople().Count);
    }

    [Test]
    public void TestGetAllPeopleSorted()
    {
        // Tell that mock object when the "GetPeople" method is called to
        // return a predefined list of people
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        // This method really has "business logic" in it - the sorting of people
        List<Person> people = service.GetAllPeopleSorted();
        Assert.IsNotNull(people);
        Assert.AreEqual(2, people.Count);
        // Make sure the first person returned is the correct one
        Person p = people[0];
        Assert.AreEqual("Adams", p.LastName);
    }

    [Test]
    public void TestGetSinglePersonWithValidId()
    {
        // Tell that mock object when the "GetPersonById" method is called to
        // return a predefined Person
        personRepositoryMock.ExpectAndReturn("GetPersonById", onePerson, "1");
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        Person p = service.GetPerson("1");
        Assert.IsNotNull(p);
        Assert.AreEqual(p.Id, "1");
    }

    [Test]
    public void TestGetSinglePersonWithInvalidId()
    {
        // Tell that mock object when the "GetPersonById" is called with a null
        // value to throw an ArgumentException
        personRepositoryMock.ExpectAndThrow("GetPersonById",
            new ArgumentException("Invalid person id."), null);
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        // The only way to get null is if the underlying IPersonRepository
        // threw an ArgumentException
        Assert.IsNull(service.GetPerson(null));
    }
}

The PersonService doesn’t have a lot of logic in it, but I hope this illustrates how easily you can test various conditions using Mock objects. It also illustrates the idea of testing early, by allowing you to test some code before all of the dependent objects are implemented.

While the Mocks built into NUnit might not be the most powerful or complete Mocking library out there, it should be sufficient for most uses. I’m sure they will continue to improve them over time as well, so I look forward to them becoming more powerful (and having better documentation) in the future.

Download Code Example:
NMock C# Example Project

For more on NUnit you might like to check out:

MSBuild with NUnit

I’ve written about Unit Testing and Build Automation in the past, but mostly dealing with Java projects and tools (because I usually write about things I’m working on at the time). Well, I’ve started a .NET project for the first time in a while, so I want to solve some of the same problems in this environment.

Why MSBuild?

In the Java world, the natural choice for automation is usually Ant. NAnt was created as a work-alike for the .NET platform and is a very good tool. I’ve built custom extensions for database automation in the past and used it successfully. So why would I consider MSBuild? With the advent of Visual Studio 2005, Microsoft included MSBuild as the build tool for its projects. All of the project files generated by Visual Studio are MSBuild files in disguise. This makes it really easy to leverage the project files to chain together a nice automated build.

Why NUnit?

Microsoft is a true believer in Not Invented Here, and as such they created their own unit testing framework that looks and acts almost exactly the same as NUnit. The MS unit testing framework only comes with the higher-end Team System versions of Visual Studio. In addition, it is very hard to deploy the DLLs to build systems: the only way to get the proper DLLs installed is to install the full Visual Studio. That seems like a really big problem to me. With NUnit you can include the needed DLLs in your version control system and reference them in the project, which makes it very easy to build and test code and use a Continuous Integration process.

How to Integrate MSBuild and NUnit

Here’s the good news: all of the work has been done for you. The good people at Tigris.org (the home of Subversion) have created a series of MSBuild Tasks filling in the gaps that MSBuild was missing. One of the included Tasks is an NUnit task. Their hard work and the simple script below should get you started with a build that can run your tests as well.

Example Build File

Here’s a trimmed-down example. It assumes the MSBuild Community Tasks targets file is installed in its default location; adjust the project names and paths for your own solution.

<Project DefaultTargets="Test"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

  <PropertyGroup>
    <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
    <OutputPath>bin\$(Configuration)</OutputPath>
    <ProjectDir>ProjectName</ProjectDir>
    <ProjectTestDir>ProjectNameTest</ProjectTestDir>
    <ProjectFile>$(ProjectDir)\ProjectName.csproj</ProjectFile>
    <ProjectTestFile>$(ProjectTestDir)\ProjectNameTest.csproj</ProjectTestFile>
  </PropertyGroup>

  <Target Name="Build">
    <MSBuild Projects="$(ProjectFile);$(ProjectTestFile)"
             Properties="Configuration=$(Configuration)" />
  </Target>

  <Target Name="Test" DependsOnTargets="Build">
    <CreateItem Include="$(ProjectTestDir)\$(OutputPath)\$(ProjectTestDir).dll">
      <Output TaskParameter="Include" ItemName="TestAssemblies" />
    </CreateItem>
    <NUnit Assemblies="@(TestAssemblies)" />
  </Target>

</Project>

The “Test” target above uses the NUnit task to run all of the NUnit tests found in the test DLL. There are a number of different ways to create ItemGroups in MSBuild, all of which are odd and confusing. If anyone has any ideas on what the best practices are in this regard, let me know.

One final plug for my favorite Visual Studio plugin, ReSharper. It includes support for NUnit. It will help you create NUnit Tests and then lets you run them and debug them from within your IDE. It’s almost like you have a real IDE all of a sudden!

References

NUnit
MSBuild Tasks
Integrating MSBuild with CruiseControl.NET

Pragmatic Unit Testing in C# with NUnit
NUnit Pocket Reference
Deploying .NET Applications: Learning MSBuild and ClickOnce

jMock For Mocking Unit Tests

Ok, so I might not exactly be on the cutting edge with this post, but I just started playing with jMock, a framework for creating Mock objects.

Mock objects are used to isolate various layers of an application by providing a fake implementation that mimics the behavior of the real implementation but offers deterministic behavior. They are very useful for isolating database layers as well, because DB access can slow down unit tests dramatically, and if unit tests take too long then they won’t get run. Like Inversion of Control (IoC), Mock Objects can be done by hand without the use of a framework, but frameworks can make the job a lot easier.

jMock allows you to set up the methods that you expect to be called and define the return values for a given set of parameters. You basically script the expected calls with the return values you want. This lets you isolate classes so that you can test just a single method or class at a time, thus simplifying the tests.


public class MockTester extends MockObjectTestCase {
    Mock mockOrderService;
    OrderService orderService;

    public void setUp() throws Exception {
        mockOrderService = new Mock(OrderService.class);
        orderService = (OrderService) mockOrderService.proxy();
    }

    public void testSomeServiceMethod() throws Exception {
        String orderId = "orderId";

        // The "doesOrderExist" method will be called once with the orderId parameter
        // and will return a true boolean value.
        // If the method isn't called, then the mock will complain.
        mockOrderService.expects(once())
            .method("doesOrderExist")
            .with(eq(orderId))
            .will(returnValue(true));

        FullfillmentService fullfillment = new FullfillmentServiceImpl(orderService);
        assertTrue(fullfillment.confirmOrder(orderId));
    }
}

One thing to realize is that by default jMock will only create proxies for interfaces. If you want to mock concrete classes, you’ll need the jmock-cglib extension and the asm library so that it can create proxies of the concrete classes.

I find this sort of scripting of Mock objects very compelling. It allows you to focus on testing the behavior of a very isolated piece of code. It even allows you to test code without having written all of the dependent objects. I encourage you to check it out for yourself.

Relentless Build Automation

How far is too far when you are doing build automation? My answer to that question is that it’s basically not possible to go too far. Any task that you need to do more than three times deserves automation; three times is my heuristic. The longer and more complex the task, the fewer times you should allow yourself to do it by hand. Compiling, for example, is a pain if you don’t automate it, so you automate it the first time. Computers are good at one thing: running algorithms. If you are doing those algorithms by hand, you’re wasting your computer’s potential.

Why Automate?

There are two major reasons to automate:

  1. Reducing errors
  2. Being lazy

Reducing errors is easy. We want to have a repeatable, dependable process that anyone can use to achieve the same results. If your deploy master has to know 15 steps and the ins and outs of some complex process, what happens when she goes on vacation? You don’t do deployment. By automating you allow other people to perform the same task and you keep your computer working for you.

But being lazy? Yes! A truly great computer programmer is lazy and does everything in their power to avoid having to do work that they’ve already done before. Likewise, if you’ve done it 10 times, doesn’t it get boring to do the same thing over and over again? Wouldn’t you rather be getting coffee or reading blogs while your computer works at what it does best? In addition to general laziness, automation will make your life easier when you want to do other things that you know you should be doing, like unit testing, integration testing and functional testing.

What to Automate

Going back to my introduction, I say automate everything that you or someone new coming onto a project will need to do more than a few times. A little bit of effort up front can save every member of a team a lot of effort later on. So, as a punch list to get you started:

  • Compiling code
  • Packaging code into JARs, WARs, etc
  • Running unit tests

Most people doing Java development will automate code compilation and packaging all of the time. Some of the time they’ll automate unit tests, sometimes deployment, and rarely a few other things. We usually use some excellent script-based tools for that kind of thing like Ant or Maven. Even if you do all of these things, is that going far enough? I think that for most projects the answer is no.

If you are slightly more ambitious you will also automate:

  • Integration testing
  • Functional testing
  • Deployment

There, now you’ve got all of your day-to-day activities automated. So you must be done, right?

Generating Databases

Databases are the bane of automation. People don’t take the time to script their database creation or to script their default or testing data. When you don’t script your database, you make integration and functional testing very difficult. It’s really easy to write functional tests and integration tests if they can rely on the data being in a known state. If they cannot, then running them is very difficult because the data underneath is always changing. That usually means that integration and automated functional testing are deemed too hard and are not done. But don’t blame the tests. You’re avoiding a little bit of work that, if you did it, would allow you to avoid A LOT of work. (I think little jobs that prevent you from having to do big jobs are a good tradeoff.)

Generating your database will make new team members’ lives a breeze. They should be able to check out a project from source control and, with one simple command, generate a database, compile the code, run the tests and know with confidence that everything is working for them. When they make any changes to the project they can repeat this process to know with confidence that they haven’t broken anything. Likewise in day-to-day development, being able to get back to a known state is going to give you the ability to develop with more confidence and test with ease.

Ant has a core task for running SQL queries, so you don’t even need any other tools to accomplish this. One snag is that running DDL commands through JDBC can cause problems. I found a great article on executing PL/SQL with Ant that you can probably apply to other databases as well. It allows you to run more database-specific commands to build your schema from a blank database. This helped me overcome the last hurdle in a recent project and get the entire database scripted from scratch. So check it out.

I find that having a few standard Ant tasks makes this a breeze:

  • A task to create tablespaces (if you have separate tablespaces for each database)
  • A task to create a user
  • A task to generate the full schema
  • A task to delete the user and drop the schema

Together these tasks can be woven together to create the full database. To support development and testing, it is also handy to have loadable datasets around. Either maintain a list of SQL INSERT statements or use a tool like DBUnit to load the data into your database (a small DBUnit sketch follows the list below). Often it can be nice to have a few sets of base data:

  • Unit testing data
  • Functional testing data
  • Demo data for showing off to clients and managers
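
If you go the DBUnit route, loading a dataset is only a few lines of Java. This is just a sketch: the JDBC URL, credentials and file name are placeholders, and it assumes DBUnit’s flat XML dataset format.

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;

public class LoadTestData {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        Connection jdbc = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "app_user", "secret");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // A flat XML file with one element per row, e.g. <PERSON ID="1" .../>
        IDataSet dataSet = new FlatXmlDataSet(
                new FileInputStream("testdata/functional-test-data.xml"));

        // Wipe the tables named in the dataset and insert the known rows
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        connection.close();
    }
}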

The only thing lacking is an easy way to dump a schema from a working database so that you can use nice GUI tools to modify the database and then dump it out using an easy Ant target. Even with this, you’ll likely need to maintain change scripts so that you can apply them to working production systems without wiping out all of the data. I guess since I’m complaining, that will have to be my next side project. Leave a comment if you know how to do this.

Conclusion

With a little bit of effort in going all the way with automation, you can make it easy for new people to come onto a project, you can make it easier to create and run all kinds of tests, and you can give everyone more confidence in the entire development process. The tools exist to make automation easy, so why not use them?

What things do you like to automate that you see people skipping on projects?

Resources

Java Unit Testing: JUnit, TestNG
Web Functional Testing: Selenium
Java Build Automation: Ant

Update

Looks like someone is a couple of steps ahead of me. While not a final release, the Apache DdlUtils project contains a set of Ant tasks that give you the ability to read and write database schemas as well as load data sets defined in XML documents. Something to keep an eye on…