Testing and Internal Implementation in .NET

Switching back and forth between Java and .NET lets you see some of the differences between the two platforms more easily. This happened to me the other day when I switched from Java to .NET and was writing unit tests. In Java, the access modifiers are public, private, protected and default. In C# they are public, private, protected and internal. The public and private modifiers are very similar in the two languages. Protected differs slightly: Java allows access from both derived classes and classes in the same package, while C# allows access only from derived classes. Where things diverge more is in the default/internal differences. Default in Java restricts access to the same package, while internal in C# restricts access to the same Assembly (generally a single DLL).

What does this have to do with testing, you might ask?

It’s a good OO design principle to expose only those things that are part of the contract of a class or package and to leave the implementation hidden as much as possible. This is called encapsulation. You can make methods private or default/internal. You can make entire classes default/internal and only publicly expose an interface that clients need to use.

A common practice in the Java world is to mimic the package layout of your main source code in your test code. When you mimic that layout, your test classes and implementation classes end up in the same packages, so your test classes can access all those default members to test them. In C#, because internal access is based on the Assembly rather than the namespace, this doesn’t work.

Luckily there’s an easy workaround.

In the AssemblyInfo.cs of your main project add:

[assembly: InternalsVisibleTo("SomeOther.AssemblyName.Test")]

Where SomeOther.AssemblyName.Test is the name of the Assembly that contains your tests for the target assembly. Then the test code can access internal details of the assembly. And you can easily test the things that other calling code might not have access to.
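As a quick sketch of how this plays out (the class, method and assembly names here are hypothetical), an internal class in the main assembly can be exercised directly from an NUnit test in the test assembly once InternalsVisibleTo is in place:

```csharp
using NUnit.Framework;

// In the main assembly (SomeOther.AssemblyName) -- not visible outside
// the assembly except to assemblies named in InternalsVisibleTo.
internal class PriceCalculator
{
    internal decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price - (price * rate);
    }
}

// In the test assembly (SomeOther.AssemblyName.Test)
[TestFixture]
public class PriceCalculatorTest
{
    [Test]
    public void ApplyDiscount_ReducesPrice()
    {
        // Compiles only because of the InternalsVisibleTo attribute
        PriceCalculator calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.ApplyDiscount(100m, 0.1m));
    }
}
```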

Using Quartz.NET, Spring.NET and NHibernate to run Scheduled Tasks in ASP.NET

Running scheduled tasks in web applications is not normally a straightforward thing to do. Web applications are built to respond to requests from users. This request/response lifecycle doesn’t always match well to a long-running thread that wakes up to run a task every 10 minutes or at 2 AM every day.

ASP.NET Scheduled Task Options

Using ASP.NET running on Windows, there are a number of different options that you could choose to implement this. Windows’ built-in Scheduled Tasks can be used to periodically execute a program. A Windows Service could be constructed that uses a Timer or a Thread to periodically do the work. Both Scheduled Tasks and a Windows Service require you to write a standalone program. You can share DLLs from your Web application, but in the end it is a separate app that needs to be maintained. Another option, if you go this route, is to turn the Scheduled Task or Service into a simple Web Service or REST client that can call your Web application but doesn’t need any knowledge of the jobs themselves.

Another option is an Open Source tool called Quartz.NET. Quartz.NET is based on the popular Java scheduled task runner called (not surprisingly) Quartz. Quartz.NET is a full-featured system that manages Jobs that do the work and Triggers that allow you to specify when you want those jobs run. It can run in your web application itself or as an external service.

The simplest approach to get started is to run Quartz.NET directly in your Web application as a process in IIS. The downside to this is that IIS will periodically recycle its processes and won’t necessarily start a new one until a new web request is made. Assuming you can deal with this nondeterministic behavior, running in an IIS process will be fine. It also creates a relatively easy path that will allow you to migrate to the external service process at a later point if need be.

I’m an ALT.NET kind of .NET developer, so I like to use tools like NHibernate for ORM and Spring.NET for Dependency Injection, AOP and generally wiring everything together. The good news is that Spring.NET supports Quartz.NET through its Scheduling API. Start with that for some basic information on using Quartz.NET with Spring. The bad news is that the documentation is a bit thin and the examples basic. I attempt to remedy that in part here.

Using Quartz.NET, NHibernate and Spring.NET to run Scheduled Tasks

The goal is to integrate an existing Spring managed object like a Service or a DAL that uses NHibernate with a Quartz Job that will run on a periodic basis.

To start with you need to create an interface for your service and then implement that interface. The implementation I’ll leave to you and your problem, but you can imagine the example below using one or more NHibernate DALs to look up Users, find their email preferences, etc.

Implementing Services and Jobs


public interface IEmailService
{
    void SendEveryoneEmails();
}

When implementing your Job you need to know a few details about how Quartz works:

  1. The first thing to understand is that if you are going to use the AdoJobStore to store your Jobs and Triggers in the database, the Job needs to be Serializable. Generally speaking, your DAL classes, NHibernate sessions and the like are not going to be serializable. To get around that, we make the properties set-only so that they will not be serialized when the Job is stored in the database.
  2. The second thing to understand is that your Job will not be running in the context of the Web application or a request, so anything you rely on to set up connections (such as an OpenSessionInView filter) will not apply to Jobs run by Quartz. This means that you will need to set up your own NHibernate session for all of the dependent objects to use. Luckily Spring provides some help with this in the SessionScope class. This is the same class that the OpenSessionInView filter is built on.

Using the Service interface you created, you then create a Job that Quartz.NET can run. Quartz.NET provides the IJob interface that you can implement. Spring.NET provides a base class that implements that interface, called QuartzJobObject, that helps deal with injecting dependencies.


using NHibernate;
using Quartz;
using Spring.Data.NHibernate.Support;
using Spring.Scheduling.Quartz;

public class CustomJob : QuartzJobObject
{
    private ISessionFactory sessionFactory;
    private IEmailService emailService;

    // Set-only so they don't get serialized
    public ISessionFactory SessionFactory { set { sessionFactory = value; } }
    public IEmailService EmailService { set { emailService = value; } }

    protected override void ExecuteInternal(JobExecutionContext ctx)
    {
        // SessionScope is the same thing as used by OpenSessionInView
        using (var ss = new SessionScope(sessionFactory, true))
        {
            emailService.SendEveryoneEmails();
            ss.Close();
        }
    }
}

Wiring Services and Jobs Together with Spring

Now that you have your classes created you need to wire everything together using Spring.

First we have our DALs and Services wired in to Spring with something like the following:
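Something along these lines; the object ids and type names are placeholders for your own classes, and the session factory wiring follows the usual Spring.NET NHibernate setup:

```xml
<objects xmlns="http://www.springframework.net">
  <object id="userDao" type="MyApp.Dal.UserDao, MyApp">
    <property name="SessionFactory" ref="NHibernateSessionFactory" />
  </object>

  <object id="emailService" type="MyApp.Services.EmailService, MyApp">
    <property name="UserDao" ref="userDao" />
  </object>
</objects>
```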



Next you create a Job that references the Type of the Job that you just created. The type is referenced instead of the instance because the lifecycle of the Job is managed by Quartz itself. It deals with instantiation, serialization and deserialization of the object itself. This is a bit different than what you might expect from a Spring service normally.
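With Spring.NET's Quartz integration that might look like the following; the id and type names are assumptions based on the CustomJob class above:

```xml
<object id="customJob" type="Spring.Scheduling.Quartz.JobDetailObject, Spring.Scheduling.Quartz">
  <!-- Only the Type is given; Quartz instantiates the Job itself -->
  <property name="JobType" value="MyApp.Jobs.CustomJob, MyApp" />
</object>
```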


Once your Job is created, you create a Trigger that will run the Job based on your rules. Quartz (and Spring) offer two types of Triggers: SimpleTriggers and CronTriggers. SimpleTriggers allow you to specify things like “Run this task every 30 minutes”. CronTriggers follow a crontab format for specifying when Jobs should run. The CronTrigger is very flexible but can be a little confusing if you aren’t familiar with cron. It’s worth getting to know for that flexibility though.
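A sketch of both trigger styles; the ids, the customJob reference and the schedule values are assumptions for illustration:

```xml
<!-- "Run this task every 30 minutes" -->
<object id="halfHourTrigger" type="Spring.Scheduling.Quartz.SimpleTriggerObject, Spring.Scheduling.Quartz">
  <property name="JobDetail" ref="customJob" />
  <property name="RepeatInterval" value="00:30:00" />
</object>

<!-- Run at 2 AM every day, using a cron expression -->
<object id="nightlyTrigger" type="Spring.Scheduling.Quartz.CronTriggerObject, Spring.Scheduling.Quartz">
  <property name="JobDetail" ref="customJob" />
  <property name="CronExpressionString" value="0 0 2 * * ?" />
</object>
```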



The last piece that needs to be done is the integration of the SchedulerFactory. The SchedulerFactory brings together Jobs and Triggers with all of the other configuration needed to run Quartz.NET jobs.

A couple of things to understand about configuring the SchedulerFactory:

  1. Specifying the DbProvider property (where DbProvider is the db:provider setup used by your NHibernate configuration) tells the SchedulerFactory to use the AdoJobStore and store the Jobs and Trigger information in the database. The tables will need to exist already and Quartz provides a script for this task.
  2. Running on SQL Server requires a slight change to Quartz. It uses a locking mechanism to prevent Jobs from running concurrently. For some reason the default configuration uses a FOR UPDATE query that is not supported by SQL Server. (I don’t understand exactly why a .NET utility wouldn’t work with SQL Server out of the box?)
    To fix the locking, a QuartzProperty needs to be set in the SchedulerFactory configuration.
  3. The JobFactory is set to the SpringObjectJobFactory because it handles the injection of dependencies into QuartzJobObject like the one we created above.
  4. SchedulerContextAsMap is a property on the SchedulerFactory that allows you to set properties that will be passed to your Jobs when they are created by the SpringObjectJobFactory. This is where you set all of the Property names and the corresponding instance references to Spring configured objects. Those objects will be set into your Job instances whenever they are deserialized and run by Quartz.

Here’s the whole ScheduleFactory configuration put together:
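Here's a sketch of what that configuration might look like; the object ids (customJob, nightlyTrigger, emailService), the DbProvider reference, and the session factory name are assumptions carried over from the earlier examples, and the selectWithLockSQL value is the commonly cited SQL Server workaround:

```xml
<object id="schedulerFactory" type="Spring.Scheduling.Quartz.SchedulerFactoryObject, Spring.Scheduling.Quartz">
  <!-- Store Jobs and Triggers in the database via the AdoJobStore -->
  <property name="DbProvider" ref="DbProvider" />
  <property name="QuartzProperties">
    <dictionary>
      <!-- SQL Server-friendly lock query instead of the default FOR UPDATE -->
      <entry key="quartz.jobStore.selectWithLockSQL"
             value="SELECT * FROM {0}LOCKS WITH (UPDLOCK) WHERE LOCK_NAME = @lockName" />
    </dictionary>
  </property>
  <!-- SpringObjectJobFactory injects dependencies into QuartzJobObjects -->
  <property name="JobFactory">
    <object type="Spring.Scheduling.Quartz.SpringObjectJobFactory, Spring.Scheduling.Quartz" />
  </property>
  <!-- Property names matched to Spring objects, set on Jobs when they run -->
  <property name="SchedulerContextAsMap">
    <dictionary>
      <entry key="SessionFactory" value-ref="NHibernateSessionFactory" />
      <entry key="EmailService" value-ref="emailService" />
    </dictionary>
  </property>
  <property name="Triggers">
    <list>
      <ref object="nightlyTrigger" />
    </list>
  </property>
</object>
```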







Conclusion

Scheduled tasks in ASP.NET applications shouldn’t be too much trouble anymore. Reusing existing Service and DAL classes allows you to easily create scheduled tasks using existing, tested code. Quartz.NET looks to be a good solution for these situations.

StringBuilder and my Biggest Pet Peeve

What You Should Know About Strings

In both Java and .NET (and other languages) String objects are immutable. They don’t change. Period. If you “change” a String, it creates a new String. That includes String concatenation using a +.


// One string created: "foo"
String foo = "foo";
// foo still exists; " bar" is created, and the combination of foo and " bar" is a third string
String bar = foo + " bar";

Ok, if you don’t know this, fine. But if you don’t know this, why would you EVER use a StringBuilder?

Why Does StringBuilder Exist?

We know that Strings are immutable. If you need to do a bunch of string modification, concatenation, replacement – you will create a bunch of strings. Ok, great…why do I care? We care because we are told that creating a lot of Objects (and then later having to Garbage Collect them) is inefficient. To start with, I will guarantee right now that concatenating strings WILL NOT be the thing that prevents your application from performing. Guaranteed. Period.

Ok fine, it’s not going to be a problem. But you want to be a responsible coder and not do things that are intentionally inefficient if you can help it.

So you use a StringBuilder. StringBuilder is implemented internally as an array of characters. The code manages the allocation and copying of data to new arrays if the buffer gets filled. It sometimes over allocates the new buffer so that it has to perform allocations less often. You sacrifice a bit of memory overhead to avoid some Object creation and Garbage Collection later.

My Biggest Pet Peeve

Your use of StringBuilder is a premature optimization but probably a forgivable one.

So, WHY, OH WHY do you do this:

StringBuilder sb = new StringBuilder();
sb.Append("Foo: " + fooValue + " \n");
sb.Append("Bar: " + barValue + "\n");

It makes me have violent thoughts. Please stop.
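If you've decided a StringBuilder is warranted, let it do all of the work. Chaining Append calls pushes each value straight into the builder's buffer and avoids creating the intermediate concatenated strings entirely (fooValue and barValue stand in for whatever variables you have):

```csharp
using System.Text;

public static class MessageBuilder
{
    public static string Build(int fooValue, int barValue)
    {
        // Each piece goes directly into the builder's buffer;
        // no intermediate concatenated strings are created.
        StringBuilder sb = new StringBuilder();
        sb.Append("Foo: ").Append(fooValue).Append(" \n");
        sb.Append("Bar: ").Append(barValue).Append("\n");
        return sb.ToString();
    }
}
```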

Update:
For some of the reasons why Strings are immutable, see this post on immutability and its positive qualities.

Register and Unregister COM DLL from .NET Code

Background

The command regsvr32 is used to register a native, unmanaged code DLL so that it is available via COM. With a registered DLL you can use COM Interop to call that code from .NET Managed code.

Regsvr32 is unmanaged code and as such makes use of some existing functions that are defined in kernel32.dll. Fortunately, .NET makes available a pretty easy-to-use foreign function interface (FFI) in the form of P/Invoke.

In general, to call an unmanaged function you just need to use the DllImport attribute on an extern function to tell the CLR how to access the function.

e.g.:

[DllImport("shell32.dll")]
static extern int DllGetVersion(ref DLLVERSIONINFO pdvi);

Registering an Unmanaged DLL with C# Code

regsvr32 actually calls functions defined within the DLL itself in what is known as a self-registering DLL. So assuming your DLL is self-registering then you should be able to use this approach as well. The only thing we need to do is figure out what functions to call.

It turns out there are two basic functions: LoadLibrary and GetProcAddress.

LoadLibrary

LoadLibrary returns a handle to the module (a pointer to a structure).

After you are done with your library you can clean up by calling FreeLibrary, passing it the handle that was returned from LoadLibrary.

GetProcAddress

GetProcAddress finds a function defined in a module and returns a pointer to that function. A function pointer allows you to call a method in a dynamic way. It is functionally equivalent to a delegate in managed code.

In C e.g.:

(*some_func)();

Put it All Together

Now we have a basic algorithm to register a DLL:

  1. LoadLibrary to get a handle to the library
  2. GetProcAddress to get a function pointer to the proper function to register the DLL
  3. Call the function returned from GetProcAddress
  4. Cleanup

Mix that in with some error checking code and I got the following:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

public class Registrar : IDisposable
{
    private IntPtr hLib;

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, ExactSpelling = true, SetLastError = true)]
    internal static extern IntPtr GetProcAddress(IntPtr hModule, string procName);

    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern IntPtr LoadLibrary(string lpFileName);

    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool FreeLibrary(IntPtr hModule);

    internal delegate int PointerToMethodInvoker();

    public Registrar(string filePath)
    {
        hLib = LoadLibrary(filePath);
        if (IntPtr.Zero == hLib)
        {
            int errno = Marshal.GetLastWin32Error();
            throw new Win32Exception(errno, "Failed to load library.");
        }
    }

    public void RegisterComDLL()
    {
        CallPointerMethod("DllRegisterServer");
    }

    public void UnRegisterComDLL()
    {
        CallPointerMethod("DllUnregisterServer");
    }

    private void CallPointerMethod(string methodName)
    {
        IntPtr dllEntryPoint = GetProcAddress(hLib, methodName);
        if (IntPtr.Zero == dllEntryPoint)
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
        PointerToMethodInvoker drs =
            (PointerToMethodInvoker) Marshal.GetDelegateForFunctionPointer(dllEntryPoint,
                typeof(PointerToMethodInvoker));
        drs();
    }

    public void Dispose()
    {
        if (IntPtr.Zero != hLib)
        {
            UnRegisterComDLL();
            FreeLibrary(hLib);
            hLib = IntPtr.Zero;
        }
    }
}

Note:
The requirement I was dealing with was a build script so I wanted to register the unmanaged DLL, use it, and then unregister it so the computer would be in its previous state. If you want to leave the DLL registered, such as for an install program, you would need to modify the above example.

To call this code you just need to pass it a path to the dll that needs to be registered.

using (Registrar registrar = new Registrar("path\\to\\com.dll"))
{
    registrar.RegisterComDLL();
    return base.Execute();
}

Resources:
Check out pinvoke.net for a lot of good documentation and examples of how to call native methods from managed code.

Database Migrations for .NET

One of the more difficult things to manage in software projects is often changing a database schema over time. On the projects that I work on, we don’t usually have DBAs who manage the schema so it is left up to the developers to figure out. The other thing you have to manage is applying changes to the database in such a way that you don’t disrupt the work of other developers on your team. We need the change to go in at the same time as the code so that Continuous Integration can work.

Migrations

While I don’t know if they were invented there, migrations seem to have been popularized by Ruby on Rails. Rails is a database-centric framework that implies the properties of your domain from the schema of your database. For that reason it makes sense that they came up with a very good way of managing changes to the database schema over time. These are some example migrations to give you an idea of the basics of creating a schema.

001_AddAddressTable.cs:

using Migrator.Framework;
using System.Data;

[Migration(1)]
public class AddAddressTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Address",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("street", DbType.String, 50),
            new Column("city", DbType.String, 50),
            new Column("state", DbType.StringFixedLength, 2),
            new Column("postal_code", DbType.String, 10)
        );
    }

    public override void Down()
    {
        Database.RemoveTable("Address");
    }
}

002_AddAddressColumns.cs:

using Migrator.Framework;
using System.Data;

[Migration(2)]
public class AddAddressColumns : Migration
{
    public override void Up()
    {
        Database.AddColumn("Address", new Column("street2", DbType.String, 50));
        Database.AddColumn("Address", new Column("street3", DbType.String, 50));
    }

    public override void Down()
    {
        Database.RemoveColumn("Address", "street2");
        Database.RemoveColumn("Address", "street3");
    }
}

003_AddPersonTable.cs:

using Migrator.Framework;
using System.Data;

[Migration(3)]
public class AddPersonTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Person",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("first_name", DbType.String, 50),
            new Column("last_name", DbType.String, 50),
            new Column("address_id", DbType.Int32, ColumnProperty.Unsigned)
        );
        Database.AddForeignKey("FK_PERSON_ADDRESS", "Person", "address_id", "Address", "id");
    }

    public override void Down()
    {
        Database.RemoveTable("Person");
    }
}

Run Your Migrations

The best way to run your migrations will be to integrate it into your build automation tool of choice. If you are not using one, now is the time.

MigratorDotNet supports MSBuild and NAnt.

MSBuild:
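A sketch of what the MSBuild integration might look like; the task assembly path, provider name, connection string and migrations assembly are all assumptions for your environment:

```xml
<UsingTask TaskName="Migrator.MSBuild.Migrate"
           AssemblyFile="lib\Migrator.MSBuild.dll" />

<Target Name="Migrate" DependsOnTargets="Build">
  <Migrate Provider="SqlServer"
           Connectionstring="Database=MyApp;Data Source=localhost;User Id=dev;Password=dev;"
           Migrations="bin\MyApp.Migrations.dll" />
</Target>
```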







NAnt:
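And a comparable sketch for NAnt, with the same caveats about assumed paths and names:

```xml
<loadtasks assembly="lib/Migrator.NAnt.dll" />

<target name="migrate" depends="build">
  <migrate provider="SqlServer"
           connectionstring="Database=MyApp;Data Source=localhost;User Id=dev;Password=dev;"
           migrations="bin/MyApp.Migrations.dll" />
</target>
```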



So You Want to Migrate?

Some more documentation and examples are available from the MigratorDotNet project. Some of the changes represented here are still in an experimental branch that is in the process of being merged.


MigratorDotNet is a continuation of code started by Marc-André Cournoyer and Nick Hemsley.

Implementing Mixins with C# Extension Methods

Wikipedia defines a mixin as “a class that provides a certain functionality to be inherited by a subclass, but is not meant to stand alone. Inheriting from a mixin is not a form of specialization but is rather a means to collect functionality. A subclass may even choose to inherit most or all of its functionality by inheriting from one or more mixins through multiple inheritance. A mixin can also be viewed as an interface with implemented methods.”

Inheritance defines an “is a” relationship between classes. A car “is a” vehicle — a car is a specialization of a vehicle. A mixin, on the other hand, is a means of sharing functionality among a series of related classes that do not inherit from the same base class. A mixin allows you to reuse code without implying a relationship among the classes. It also allows you to get around the Single Inheritance model a bit to do this.

An extension method is a new feature of C# 3.0. It is essentially a way of providing methods on existing classes that you may or may not have access to. This is similar in many ways to “open classes” that allow you to add methods to existing classes in languages such as Ruby. The interesting thing is that you can add domain-specific methods to framework classes like String or Int.

e.g.:
3.Days().FromNow()


public static class TimeMixin {
    public static TimeSpan Days(this int days)
    {
        return new TimeSpan(days, 0, 0, 0);
    }

    public static DateTime FromNow(this TimeSpan timeSpan)
    {
        return DateTime.Now.Add(timeSpan);
    }
}

This is a great way to keep code looking and feeling like Object Oriented code and to avoid a whole bunch of Utility classes with static methods. Avoiding these Utility classes keeps the power of OO for dealing with complexity through polymorphism and avoids the often more complex structures that come out of procedural code.

So, how do you use Extension Methods as Mixins?

Define an Interface (or use an existing one)

This could be anything from a marker interface to a full interface that defines a contract. In this simple example, I’ll make use of an existing interface.

public interface IComparable {
    int CompareTo(object other);
}

Create Extension Methods for the Interface


namespace Mixins {
    public static class ComparableExtension {
        public static bool GreaterThan(this IComparable leftHand, object other) {
            return leftHand.CompareTo(other) > 0;
        }

        public static bool LessThan(this IComparable leftHand, object other) {
            return leftHand.CompareTo(other) < 0;
        }
    }
}

Notice the 'this' in the method declaration before the first parameter. The 'this' denotes an extension method. It defines the types of objects that will have this method available to be called on them.

Implement the interface in a Concrete class


namespace Domain {
    public class Money : IComparable {
        private double amount;
        private string currency;
        private CurrencyConverter converter;

        public Money(CurrencyConverter converter, double amount, string currency) {
            this.amount = amount;
            this.currency = currency;
            this.converter = converter;
        }

        public int CompareTo(object other) {
            double otherAmount = converter.ConvertAmount(this.currency, (Money) other);
            return this.amount.CompareTo(otherAmount);
        }
    }
}

Use the Code


namespace Domain {
    using Mixins;

    public class Account : IComparable {
        private Money currentBalance;

        public void Withdraw(Money amount) {
            if (currentBalance.LessThan(amount)) {
                throw new BalanceExceededException(currentBalance);
            }
            // ... implement the rest of the method
        }

        public int CompareTo(object other) {
            return currentBalance.CompareTo(((Account) other).currentBalance);
        }
    }
}

Now the LessThan and GreaterThan method implementations are reusable on both the Money and the Account classes. They are of course available on any class that implements IComparable.

So now you can avoid some limitations of Single Inheritance and increase the amount of code reuse that is available to you with the concept of a Mixin. This is probably officially a 'sharp knife' so be careful with this kind of programming. It can be non-intuitive to people and can probably be greatly overused but it is a very valuable tool in many cases.

ObjectBuilder Can Inject to UserControls As Well

This is a followup to my previous post on integrating ObjectBuilder and ASP.NET. As I was playing around with the solution I hit on the fact that I was only injecting at the Page level. As ASP.NET is a component model, you can end up with custom User Controls that would need injected properties as well. There is a relatively simple, if not entirely obvious way to do that as well.

Building on the previous example, you can hook into the lifecycle of a page that you are injecting. You cannot access the controls directly in the PreRequestHandlerExecute of the IHttpModule because they have not been instantiated yet. Instead, you can create a callback event handler for the Page.InitComplete event and inject properties at that point.


void InjectProperties(object sender, EventArgs e)
{
    IHttpHandler h = app.Context.CurrentHandler;
    if (h is DefaultHttpHandler)
        return;
    chain.Head.BuildUp(builderContext, h.GetType(), h, null);
    if (h is Page)
    {
        // Register a page lifecycle event handler to inject
        // user controls on the page itself
        page = (Page) h;
        page.InitComplete += InjectControls;
    }
}

private void InjectControls(object sender, EventArgs e)
{
    InjectControls(page);
    if (null != page.Form)
        InjectControls(page.Form);
}

private void InjectControls(Control mainControl)
{
    if (mainControl.Controls != null && mainControl.Controls.Count > 0)
    {
        foreach (Control c in mainControl.Controls)
        {
            if (c is UserControl)
            {
                chain.Head.BuildUp(builderContext, c.GetType(), c, null);
                InjectControls(c);
            }
        }
    }
}

As you see with this example, you need to recursively inject the objects into all of the controls, in case there are any nested controls that might need injecting. The other thing to notice is that the top-level Page does not contain all of the controls that are declared in a Form on the page, so you need to handle both of those paths.

Hopefully this will get you further along the path to being able to do dependency injection in your ASP.NET application.

Integrating ObjectBuilder with ASP.NET

ObjectBuilder is a .NET framework made for building Inversion of Control or Dependency Injection containers. Unlike Spring or Pico, ObjectBuilder is not a full DI framework; instead, it’s a framework for building DI solutions. Yes, that’s incredibly weird to me too.

Why Use Object Builder?

Why would you use ObjectBuilder instead of a fully baked DI solution? In the .NET world it seems like if it’s not built by Microsoft then it’s not on the table. People really are looking for the “garden path”. So in part it’s really a psychological thing. On the other hand the Microsoft Application Blocks and Enterprise Library are built using ObjectBuilder. So, if you are already using some of those solutions, then you already have a dependency on ObjectBuilder. It is natural to try to control the number of dependencies that a project relies upon. In this case, you have a simple solution that just requires a bit of custom code.

In addition to the fact that it’s basically a roll-your-own DI framework, the other downside of ObjectBuilder is that the documentation is non-existent. It has said ‘Under construction…’ since July of 2006, so don’t get your hopes up if you don’t want to look at the API and read the source.

Wire Together Services

If you are going to build a service you could do property based injection. What this means is that in your service layer all of the dependencies are passed into a service via property setters. This keeps the code clean and makes changing the implementation of a dependency very easy. If you have a hardcoded constructor for your dependency in a class then making this kind of change will be harder. It also makes Unit Testing very easy because you can mock out dependencies and isolate the layers of your code.


public class SomeService
{
    private SomeDAO someDao;

    [Dependency]
    public SomeDAO SomeDAO
    {
        get { return someDao; }
        set { someDao = value; }
    }

    public void Save(SomeObject o)
    {
        if (o.IsValid)
            SomeDAO.Save(o);
    }
}

ASP.NET

Most of the examples you can find on the web dealing with ObjectBuilder are about how to use ObjectBuilder as a ServiceBroker where you instantiate a service type object in a view.


public partial class SomePage : System.Web.UI.Page
{
    private SomeService someService;

    protected void Page_Load(object sender, EventArgs e)
    {
        Builder builder = NewBuilder();
        someService = builder.BuildUp<SomeService>(null, null, null);
    }

    private Builder NewBuilder()
    {
        // Create a Builder
    }
}

While that’s ok, I think it’s a pretty limiting way to do things. Once you get to any sort of complex object construction, you are going to have to create another class to encapsulate all of the policies and strategies for building objects. The other issue is that using the Services in UI code is very different from composing the services themselves. It also stands in stark contrast to the clean implementation of the Service layer.

I want to make my pages look more like my Service layer.


public partial class SomePage : System.Web.UI.Page
{
    private SomeService someService;

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    [Dependency]
    public SomeService SomeService
    {
        get { return someService; }
        set { someService = value; }
    }
}

IHttpModules to the Rescue

Page lifecycles are handled by the ASP.NET worker processes so there is no way to construct your own pages and do the Dependency Injection. I immediately went to the idea of a filter (which is how this is done in the Java world). The filter concept in ASP.NET is implemented as an IHttpModule.

The module should do 2 main things:

  • Construct and manage the DI container
  • Inject Properties into the Page object that is going to handle the request

Construct a series of ObjectBuilder classes to create your DI container.

Locator locator = new Locator();
ILifetimeContainer container = new LifetimeContainer();

// Setup the strategy for how objects can be created
BuilderStrategyChain chain = new BuilderStrategyChain();
chain.Add(new CreationStrategy());
chain.Add(new PropertyReflectionStrategy());
chain.Add(new PropertySetterStrategy());
locator.Add(typeof(ILifetimeContainer), container);

// Create a context to build an object
BuilderContext builderContext = new BuilderContext(chain, locator, null);
builderContext.Policies.SetDefault(new DefaultCreationPolicy());

You can access the current Page that will handle the request in an IHttpModule using:

HttpApplication app;
app.Context.CurrentHandler; // The Page handler

Now to tie it all together we create an IHttpModule that will filter each request and wire up our Pages with their dependencies. IHttpModules are configured to respond to callback events related to the life cycle of a request. In this case we need to wire up our Pages after the PostMapRequestHandler event, because that is the point at which the Page has been created. I set this up on the PreRequestHandlerExecute because everything, including the Session, is set up at that point and it is right before the Page methods are called.


/// <summary>
/// ObjectBuilderModule handles PropertyBased Injection for ASP.NET Forms.
/// </summary>

public class ObjectBuilderModule : IHttpModule
{
    private HttpApplication app;
    private readonly Locator locator = new Locator();
    private readonly ILifetimeContainer container = new LifetimeContainer();
    private readonly BuilderStrategyChain chain = new BuilderStrategyChain();
    private readonly BuilderContext builderContext;

    public ObjectBuilderModule()
    {
        chain.Add(new CreationStrategy());
        chain.Add(new PropertyReflectionStrategy());
        chain.Add(new PropertySetterStrategy());
        locator.Add(typeof(ILifetimeContainer), container);
        builderContext = new BuilderContext(chain, locator, null);
        builderContext.Policies.SetDefault(new DefaultCreationPolicy());
    }

    public void Init(HttpApplication context)
    {
        app = context;
        // PreRequestHandlerExecute so that everything is set up, including the Session, etc.
        app.PreRequestHandlerExecute += InjectProperties;
    }

    void InjectProperties(object sender, EventArgs e)
    {
        IHttpHandler h = app.Context.CurrentHandler;
        if (h is DefaultHttpHandler)
            return;
        chain.Head.BuildUp(builderContext, h.GetType(), h, null);
    }

    public void Dispose()
    {
        // Unsubscribe from the same event we subscribed to in Init
        app.PreRequestHandlerExecute -= InjectProperties;
    }
}

Conclusion

Hopefully this shows you that you can create a powerful, transparent solution for doing Dependency Injection with ASP.NET. This isn’t necessarily a complete solution. We could implement many other things like Singleton management for various service classes for example. Now all you have to do is extend it for your application’s own needs.

Mocking .NET Objects with NUnit

NUnit is my Unit Testing tool of choice for .NET development. Microsoft provides a unit testing framework but it only works with some higher-end versions of Visual Studio. They’re so similar that it’s almost ridiculous that Microsoft created their own version.
(See one of my previous posts for more information on Automating NUnit with MSBuild.) In the Java world it’s fairly common to do Mocking to help make unit testing easier. I’ve written about using JMock for Unit Testing in Java. In this post, I’d like to talk about a relatively new feature of NUnit which now supports Mocks out of the box.

What Are Mock Objects

Mock Objects are a technique that allows you to isolate classes from their dependencies for testing purposes. This isolation allows for fine-grained testing of single methods in a single class, possibly even before the dependent classes are fully implemented. It lets your tests run quickly and makes testing small pieces of functionality much easier. When you’ve tested individual pieces of code in isolation, you can have much higher confidence in larger-grained tests. Isolation becomes even more interesting when you are dealing with dependencies such as a data layer or a web service layer: external calls like that can be very time consuming, or could fail if the remote system is down for maintenance.

One of the great things about using Mock Objects is that they force you to think about the dependencies that your classes and methods have. It forces you to think about the coupling between your classes. If you have high coupling then your code is often harder to test. If you have a loosely coupled design then testing and using Mock Objects is very much easier. Thinking about those design notions early can help you more easily manage change over time.

Maxims:

  • Good design is better than bad design
  • Loosely coupled objects are usually a better design than tightly coupled objects
  • Testing improves code quality and developer efficiency over time
  • Testing is easier with a loosely coupled design

A Sample Project

We’re going to start with some simple code. We create a Domain object called Person and an interface for a Data Access object called IPersonRepository. Pretty simple at this point.

public class Person
{
    public string Id;
    public string FirstName;
    public string LastName;

    public Person(string newId, string fn, string ln)
    {
        Id = newId;
        FirstName = fn;
        LastName = ln;
    }
}


public interface IPersonRepository
{
    List<Person> GetPeople();
    Person GetPersonById(string id);
}

Next we create a PersonService object. This would represent all of the business logic in our application. It would interact with the Data Access tier and return information to the UI layer for display.

We wire our objects together using constructor-based Dependency Injection: all of the dependencies are passed in through the constructor. This keeps the coupling loose, since the PersonService knows only the IPersonRepository interface, not the implementing class. And because the dependency is supplied in the constructor, a PersonService can never exist in an invalid state, which could happen if the IPersonRepository implementation were supplied through a setter instead.

This is again a fairly straightforward implementation, but I hope enough to display the issue at hand.

public class PersonService
{
    private IPersonRepository personRepos;

    public PersonService(IPersonRepository repos)
    {
        personRepos = repos;
    }

    public List<Person> GetAllPeople()
    {
        return personRepos.GetPeople();
    }

    public List<Person> GetAllPeopleSorted()
    {
        List<Person> people = personRepos.GetPeople();
        people.Sort(delegate(Person lhp, Person rhp) {
            return lhp.LastName.CompareTo(rhp.LastName);
        });
        return people;
    }

    public Person GetPerson(string id)
    {
        try
        {
            return personRepos.GetPersonById(id);
        }
        catch (ArgumentException)
        {
            return null; // no person with that id was found
        }
    }
}

Using Mocks with NUnit

Now we can start testing our PersonService. Notice that we haven’t even implemented the IPersonRepository yet. That way we can make sure that everything in our PersonService class works as expected without having to think about other layers of the application.


using System;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Mocks;

[TestFixture]
public class PersonServiceTest
{
    // The dynamic mock proxy that we will use to implement IPersonRepository
    private DynamicMock personRepositoryMock;

    // Set up some testing data
    private Person onePerson = new Person("1", "Wendy", "Whiner");
    private Person secondPerson = new Person("2", "Aaron", "Adams");
    private List<Person> peopleList;

    [SetUp]
    public void TestInit()
    {
        peopleList = new List<Person>();
        peopleList.Add(onePerson);
        peopleList.Add(secondPerson);
        // Construct a Mock Object of the IPersonRepository interface
        personRepositoryMock = new DynamicMock(typeof(IPersonRepository));
    }

    [Test]
    public void TestGetAllPeople()
    {
        // Tell the mock object to return a predefined list of people
        // when the "GetPeople" method is called
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);
        // Construct a PersonService with the mock IPersonRepository
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        // Call methods and assert tests
        Assert.AreEqual(2, service.GetAllPeople().Count);
    }

    [Test]
    public void TestGetAllPeopleSorted()
    {
        // Tell the mock object to return a predefined list of people
        // when the "GetPeople" method is called
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        // This method really has "business logic" in it - the sorting of people
        List<Person> people = service.GetAllPeopleSorted();
        Assert.IsNotNull(people);
        Assert.AreEqual(2, people.Count);
        // Make sure the first person returned is the correct one
        Person p = people[0];
        Assert.AreEqual("Adams", p.LastName);
    }

    [Test]
    public void TestGetSinglePersonWithValidId()
    {
        // Tell the mock object to return a predefined Person
        // when the "GetPersonById" method is called
        personRepositoryMock.ExpectAndReturn("GetPersonById", onePerson, "1");
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        Person p = service.GetPerson("1");
        Assert.IsNotNull(p);
        Assert.AreEqual("1", p.Id);
    }

    [Test]
    public void TestGetSinglePersonWithInvalidId()
    {
        // Tell the mock object to throw an ArgumentException when
        // "GetPersonById" is called with a null value
        personRepositoryMock.ExpectAndThrow("GetPersonById",
            new ArgumentException("Invalid person id."), null);
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        // The only way to get null is if the underlying IPersonRepository
        // threw an ArgumentException
        Assert.IsNull(service.GetPerson(null));
    }
}

The PersonService doesn’t have a lot of logic in it, but I hope this illustrates how easily you can test various conditions using Mock Objects. It also illustrates the idea of testing early, since you can test some code before all of the dependent objects are implemented.

While the Mocks built into NUnit might not be the most powerful or complete Mocking library out there, it should be sufficient for most uses. I’m sure they will continue to improve them over time as well, so I look forward to them becoming more powerful (and having better documentation) in the future.

Download Code Example:
NMock C# Example Project


More .NET Compact Framework Woes

I posted previously on a bug in the .NET Compact Framework with the XmlEnum attribute when there is whitespace in the name. Well, I’ve run into some other interesting “features”.

The first thing to realize is that not everything that works on the Full Framework works on the Compact Framework.

What Works With Serialization

First, the good news: Arrays work everywhere, on both the Compact Framework and the Full Framework. The downside, of course, is that Arrays are very inconvenient. For example, you have to resize them manually.

The second thing that works is the plain, non-generic IList. When you map one, a collection is used to implement it. Of course, one of the big improvements in .NET 2.0 was the introduction of Generics. Generics allow you to have strongly-typed collections without manually implementing them for each specific type.
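To make the working case concrete, here is a minimal sketch of the non-generic style that serializes without trouble on both frameworks. The Order and Items names are illustrative assumptions, not from the article, and I’m using a concrete ArrayList as the backing collection:

```csharp
using System;
using System.Collections;
using System.IO;
using System.Xml.Serialization;

// Illustrative type: a non-generic collection field maps cleanly,
// as long as you give XmlSerializer a Type hint for the items.
public class Order
{
    [XmlElement(Type = typeof(string), ElementName = "item")]
    public ArrayList Items = new ArrayList();
}

public static class Demo
{
    public static string Serialize()
    {
        Order order = new Order();
        order.Items.Add("widget");
        XmlSerializer serializer = new XmlSerializer(typeof(Order));
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, order);
        return writer.ToString();
    }

    public static void Main()
    {
        // The output contains one <item> element per entry in the ArrayList
        Console.WriteLine(Serialize());
    }
}
```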

More Generics Problems

XML Serialization works if you use the non-generic IList. But the generic interfaces are another story. The short version: you cannot use IList<T> for XML serialization; it plain just doesn’t work.

Here’s where the difference between the Full Framework and the Compact Framework comes into play. On the Full Framework, you can map to a concrete collection whether it’s generic or not, so List<T> will work. Unfortunately, this does not work on the Compact Framework.


[XmlElement(Name="foo")]
public List<string> Foos
{
    get { return this.foos; }
    set { this.foos = value; }
}

You end up with an exception:

Two mappings for string[].

Stack Trace:
at System.Xml.Serialization.TypeContainer.AddType()
at System.Xml.Serialization.TypeContainer.AddType()
at System.Xml.Serialization.XmlSerializationReflector.AddIXmlSerializableType()
at System.Xml.Serialization.XmlSerializationReflector.AddType()
at System.Xml.Serialization.XmlSerializationReflector.FindType()
at System.Xml.Serialization.XmlSerializationReflector.FindType()
.....

So What Do You Do About It

You basically have two options:

  1. Use Arrays
  2. Create your own custom, strongly-typed collections classes

Arrays

The advantage of Arrays is that they are very simple and require no extra collection code. The downside, as I said above, is that you have to do the array resizing manually.


public static T[] AppendItem<T>(T[] theArray, T newItem)
{
    T[] newArray = new T[theArray.Length + 1];
    Array.Copy(theArray, newArray, theArray.Length);
    newArray[newArray.Length - 1] = newItem;
    return newArray;
}
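The helper is generic, so one copy serves every element type. A self-contained sketch with a quick usage check follows; the ArrayUtil class name is mine, not from the article:

```csharp
using System;

public static class ArrayUtil
{
    // Generic append helper, equivalent to the AppendItem method shown above:
    // allocate a new array one slot larger, copy, and fill the last slot.
    public static T[] AppendItem<T>(T[] theArray, T newItem)
    {
        T[] newArray = new T[theArray.Length + 1];
        Array.Copy(theArray, newArray, theArray.Length);
        newArray[newArray.Length - 1] = newItem;
        return newArray;
    }

    public static void Main()
    {
        string[] names = new string[] { "Alice" };
        names = AppendItem(names, "Bob");
        // names now has length 2, with "Bob" in the last slot
        Console.WriteLine(names.Length);
        Console.WriteLine(names[1]);
    }
}
```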

Custom, Strongly-Typed Collections

If you don’t want to use Arrays and deal with manually resizing them, you can build your own Collections classes for each of your types.

Create your strongly-typed collection:

[Serializable]
[EditorBrowsable(EditorBrowsableState.Advanced)]
public class EmployeeCollection : ArrayList {
    public Employee Add(Employee obj) {
        base.Add(obj);
        return obj;
    }

    public Employee Add() {
        return Add(new Employee());
    }

    public void Insert(int index, Employee obj) {
        base.Insert(index, obj);
    }

    public void Remove(Employee obj) {
        base.Remove(obj);
    }

    new public Employee this[int index] {
        get { return (Employee) base[index]; }
        set { base[index] = value; }
    }
}

Then use that collection in your class to map:

[XmlRoot("company")]
public class Company {
    private EmployeeCollection employees;

    [XmlElement(Type=typeof(Employee), ElementName="employee", IsNullable=false)]
    [EditorBrowsable(EditorBrowsableState.Advanced)]
    public EmployeeCollection Employees {
        get { return this.employees; }
        set { this.employees = value; }
    }
}
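With the collection mapped, serialization itself is unremarkable. Here is a self-contained sketch; the classes are minimal stand-ins for the Employee, EmployeeCollection, and Company types above, trimmed so the example compiles on its own, and the Name field on Employee is an illustrative assumption:

```csharp
using System;
using System.Collections;
using System.IO;
using System.Xml.Serialization;

// Minimal stand-in for Employee; the Name field is an assumption for illustration.
public class Employee
{
    [XmlAttribute("name")]
    public string Name;
}

// Trimmed version of the strongly-typed collection shown above.
public class EmployeeCollection : ArrayList
{
    public Employee Add(Employee obj)
    {
        base.Add(obj);
        return obj;
    }
}

[XmlRoot("company")]
public class Company
{
    [XmlElement(Type = typeof(Employee), ElementName = "employee", IsNullable = false)]
    public EmployeeCollection Employees = new EmployeeCollection();
}

public static class SerializeDemo
{
    public static string Serialize(Company c)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Company));
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, c);
        return writer.ToString();
    }

    public static void Main()
    {
        Company company = new Company();
        Employee e = new Employee();
        e.Name = "Pat";
        company.Employees.Add(e);
        // Emits a <company> root with one <employee> element per item
        Console.WriteLine(Serialize(company));
    }
}
```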

Pick Your Poison

The problems with generic collections in the .NET Compact Framework seem like yet another bug. So pick your poison and choose whichever workaround seems simpler to you. Hope that helps someone out.