Testing and Internal Implementation in .NET

Switching back and forth between Java and .NET makes it easier to see some of the differences between the two platforms. This happened to me the other day when I switched from Java to .NET and was writing unit tests. In Java, the access modifiers are public, private, protected and default. In C# they are public, private, protected and internal. The public and private modifiers are very similar in the two languages. Protected is slightly different: Java allows access from derived classes as well as from classes in the same package, while C# only allows access from derived classes. Where things diverge more is the default/internal difference. Default in Java restricts access to the same package, while internal in C# restricts access to the same assembly (generally a single DLL).

What does this have to do with testing you might ask?

It’s a good OO design principle to expose only those things that are part of the contract to a class or package and to leave the implementation hidden as much as possible. This is called encapsulation. You can make methods private or default/internal. You can make entire classes default/internal and only publicly expose an interface that clients need to use.

A common practice in the Java world is to mimic the package layout of your main source code in your test code. When you mimic that layout, your test classes and implementation classes end up in the same packages, so your test classes can access all of those default members in order to test them. In C#, because internal access is based on the assembly rather than the namespace, this doesn't work.

Luckily there’s an easy workaround.

In the AssemblyInfo.cs of your main project add:

[assembly: InternalsVisibleTo("SomeOther.AssemblyName.Test")]

Where SomeOther.AssemblyName.Test is the name of the assembly that contains your tests for the target assembly. The test code can then access the internal details of that assembly, and you can easily test the things that other calling code might not have access to.
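As a small sketch of how this plays out (all type and assembly names here are hypothetical), an internal class in the main assembly becomes directly testable from the named test assembly:

```csharp
using System.Diagnostics;
using System.Runtime.CompilerServices;

// In the main project's AssemblyInfo.cs:
[assembly: InternalsVisibleTo("MyApp.Test")]

// An internal implementation detail -- invisible to normal callers outside the assembly
internal class GreetingFormatter
{
    internal string Format(string name)
    {
        return "Hello, " + name;
    }
}

// In the MyApp.Test assembly, a test class can now exercise the internal type directly
public class GreetingFormatterTest
{
    public void FormatsGreeting()
    {
        var formatter = new GreetingFormatter();
        Debug.Assert(formatter.Format("Joe") == "Hello, Joe");
    }
}
```

Without the InternalsVisibleTo attribute, the test assembly would fail to compile against GreetingFormatter at all.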

Using Quartz.NET, Spring.NET and NHibernate to run Scheduled Tasks in ASP.NET

Running scheduled tasks in web applications is not normally a straightforward thing to do. Web applications are built to respond to requests from users. That request/response lifecycle doesn't map well to a long-running thread that wakes up to run a task every 10 minutes or at 2 AM every day.

ASP.NET Scheduled Task Options

Using ASP.NET running on Windows, there are a number of different options you could choose to implement this. Windows' built-in Scheduled Tasks can be set up to periodically execute a program. A Windows Service could be constructed that uses a Timer or a Thread to periodically do the work. Both Scheduled Tasks and Windows Services require you to write a standalone program. You can share DLLs from your web application, but in the end it is a separate app that needs to be maintained. Another option, if you go this route, is to turn the Scheduled Task or Service into a simple Web Service or REST client that calls into your web application but doesn't need any knowledge of the jobs themselves.

Another option is an Open Source tool called Quartz.NET. Quartz.NET is based on the popular Java scheduled task runner called (not surprisingly) Quartz. Quartz.NET is a full-featured system that manages Jobs that do the work and Triggers that allow you to specify when you want those jobs run. It can run in your web application itself or as an external service.

The simplest approach to get started is to run directly in your web application as a process in IIS. The downside is that IIS will periodically recycle its processes and won't necessarily start a new one until a new web request is made. Assuming you can deal with this nondeterministic behavior, running in an IIS process will be fine. It also creates a relatively easy path to migrate to the external service process at a later point if need be.

I’m an ALT.NET kind of .NET developer, so I like to use tools like NHibernate for ORM and Spring.NET for Dependency Injection, AOP and generally wiring everything together. The good news is that Spring.NET supports Quartz.NET through its Scheduling API. Start with that for some basic information on using Quartz.NET with Spring. The bad news is that the documentation is a bit thin and the examples basic. I attempt to remedy that in part here.

Using Quartz.NET, NHibernate and Spring.NET to run Scheduled Tasks

The goal is to integrate an existing Spring managed object like a Service or a DAL that uses NHibernate with a Quartz Job that will run on a periodic basis.

To start with, you need to create an interface for your service and then implement that interface. The implementation I'll leave to you and your problem, but you can imagine the example below using one or more NHibernate DALs to look up Users, find their email preferences, etc.

Implementing Services and Jobs

public interface IEmailService
{
    void SendEveryoneEmails();
}

When implementing your Job you need to know a few details about how Quartz works:

  1. The first thing to understand is that if you are going to use the AdoJobStore to keep your Jobs and Triggers in the database, the Job needs to be Serializable. Generally speaking, your DAL classes, NHibernate sessions and the like are not going to be serializable. To get around that, we make the properties set-only so that they will not be serialized when the Job is stored in the database.
  2. The second thing to understand is that your Job will not be running in the context of the web application or a request, so anything you rely on to set up connections (such as an OpenSessionInView filter) will not apply to Jobs run by Quartz. This means you will need to set up your own NHibernate session for all of the dependent objects to use. Luckily Spring provides some help with this in the SessionScope class. This is the same class that underlies the OpenSessionInView filter.

Using the service interface you created, you then create a Job that Quartz.NET can run. Quartz.NET provides the IJob interface that you can implement. Spring.NET provides a base class that implements that interface, QuartzJobObject, which helps deal with injecting dependencies.

using NHibernate;
using Quartz;
using Spring.Data.NHibernate.Support;
using Spring.Scheduling.Quartz;

public class CustomJob : QuartzJobObject
{
    private ISessionFactory sessionFactory;
    private IEmailService emailService;

    // Set-only so they don't get serialized
    public ISessionFactory SessionFactory { set { sessionFactory = value; } }
    public IEmailService EmailService { set { emailService = value; } }

    protected override void ExecuteInternal(JobExecutionContext ctx)
    {
        // SessionScope is the same mechanism used by OpenSessionInView
        using (new SessionScope(sessionFactory, true))
        {
            emailService.SendEveryoneEmails();
        }
    }
}

Wiring Services and Jobs Together with Spring

Now that you have your classes created you need to wire everything together using Spring.

First we have our DALs and Services wired in to Spring with something like the following:

<object id="UserDAL" type="MyApp.DAL.UserDAL, MyApp.Data">
  <property name="SessionFactory" ref="NHibernateSessionFactory" />
</object>
<object id="EmailService" type="MyApp.Service.EmailService, MyApp.Service">
  <property name="UserDAL" ref="UserDAL" />
</object>

Next you create a Job that references the Type of the Job that you just created. The type is referenced instead of the instance because the lifecycle of the Job is managed by Quartz itself. It deals with instantiation, serialization and deserialization of the object itself. This is a bit different than what you might expect from a Spring service normally.

<object id="CustomJob" type="Spring.Scheduling.Quartz.JobDetailObject, Spring.Scheduling.Quartz">
    <property name="JobType" value="MyApp.Jobs.CustomJob, MyApp.Jobs" />
</object>

Once your Job is created, you create a Trigger that will run the Job based on your rules. Quartz (and Spring) offer two types of Triggers: SimpleTriggers and CronTriggers. SimpleTriggers allow you to specify things like "run this task every 30 minutes". CronTriggers follow a crontab format for specifying when Jobs should run. The CronTrigger is very flexible, but could be a little confusing if you aren't familiar with cron. It's worth getting to know for that flexibility though.

<object id="CustomJobTrigger" type="Spring.Scheduling.Quartz.CronTriggerObject, Spring.Scheduling.Quartz">
    <property name="JobDetail" ref="CustomJob"/>
    <property name="CronExpressionString" value="0 0 2 * * ?" /> <!-- run every morning at 2 AM -->
    <property name="MisfireInstructionName" value="FireOnceNow" />
</object>
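For comparison, a SimpleTrigger version of the same wiring might look something like this (SimpleTriggerObject is Spring.NET's counterpart to CronTriggerObject; the 30-minute interval here is just an example):

```xml
<object id="CustomJobSimpleTrigger" type="Spring.Scheduling.Quartz.SimpleTriggerObject, Spring.Scheduling.Quartz">
    <property name="JobDetail" ref="CustomJob"/>
    <property name="RepeatInterval" value="00:30:00"/> <!-- run every 30 minutes -->
</object>
```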

The last piece that needs to be done is the integration of the SchedulerFactory. The SchedulerFactory brings together Jobs and Triggers with all of the other configuration needed to run Quartz.NET jobs.

A couple of things to understand about configuring the SchedulerFactory:

  1. Setting the DbProvider property (where DbProvider is the db:provider setup used by your NHibernate configuration) tells the SchedulerFactory to use the AdoJobStore and keep the Job and Trigger information in the database. The tables need to exist already; Quartz.NET provides a script for this task.
  2. Running on SQL Server requires a slight change to Quartz. It uses a locking mechanism to prevent Jobs from running concurrently, and for some reason the default configuration uses a FOR UPDATE query that is not supported by SQL Server. (I don't understand why a .NET utility wouldn't work with SQL Server out of the box.) To fix the locking, the quartz.jobStore.selectWithLockSQL property needs to be overridden, as shown in the full configuration below.
  3. The JobFactory is set to the SpringObjectJobFactory because it handles the injection of dependencies into QuartzJobObject like the one we created above.
  4. SchedulerContextAsMap is a property on the SchedulerFactory that allows you to set properties that will be passed to your Jobs when they are created by the SpringObjectJobFactory. This is where you set all of the Property names and the corresponding instance references to Spring configured objects. Those objects will be set into your Job instances whenever they are deserialized and run by Quartz.

Here’s the whole SchedulerFactory configuration put together:

<object id="SchedulerFactory" type="Spring.Scheduling.Quartz.SchedulerFactoryObject, Spring.Scheduling.Quartz">
    <property name="JobFactory">
        <object type="Spring.Scheduling.Quartz.SpringObjectJobFactory, Spring.Scheduling.Quartz"/>
    </property>
    <property name="SchedulerContextAsMap">
        <dictionary>
            <entry key="EmailService" value-ref="EmailService" />
            <entry key="SessionFactory" value-ref="NHibernateSessionFactory" />
        </dictionary>
    </property>
    <property name="DbProvider" ref="DbProvider"/>
    <property name="QuartzProperties">
        <dictionary>
            <entry key="quartz.jobStore.selectWithLockSQL" value="SELECT * FROM {0}LOCKS WHERE LOCK_NAME=@lockName"/>
        </dictionary>
    </property>
    <property name="triggers">
        <list>
            <ref object="CustomJobTrigger" />
        </list>
    </property>
</object>


Scheduled tasks in ASP.NET applications shouldn’t be too much trouble anymore. Reusing existing Service and DAL classes allows you to easily create scheduled tasks using existing, tested code. Quartz.NET looks to be a good solution for these situations.

DRY your CruiseControl.NET Configuration

Don’t Repeat Yourself (DRY) is one of the principles of good software development. The idea is that there should ideally be one and only one “source of knowledge” for a particular fact or calculation in a system. Basically it comes down to not copying-and-pasting code around or duplicating code if at all possible. The advantages of this are many.

Advantages of DRY

  • There will be less code to maintain
  • If a bug is found, it should only have to be fixed in one place
  • If an algorithm or process is changed, it only needs to be changed in one place
  • More of the code should become reusable because as you do this you will parameterize methods to make them flexible for more cases

If it’s good for code isn’t it good for other things like configuration? Why yes it is.

Using CruiseControl.NET Configuration Builder

The Configuration Preprocessor allows you to define string properties and full blocks of XML to use for substitution and replacement. To start using the Configuration Preprocessor, you add the XML namespace xmlns:cb="urn:ccnet.config.builder" to your document to tell the config parser that you plan to use it.

From there you can define a simple property like:

<cb:define client="xxx"/>
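Once defined, the preprocessor substitutes the property wherever you reference it with $(name). For instance (the names and paths here are illustrative):

```xml
<cb:define client="ExampleClient"/>

<!-- later, $(client) expands to "ExampleClient" -->
<project name="$(client) Build">
    <workingDirectory>D:\Builds\$(client)</workingDirectory>
</project>
```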

Or you can make it a full block of XML:

<cb:define name="svn-block">
    <sourcecontrol type="svn">
        <!-- ... -->
    </sourcecontrol>
</cb:define>

Defining Reusable Blocks

Using these ideas, I wanted to come up with a templated approach that would allow me to share configuration among multiple projects. That way, if I added new statistics or changed the layout of my build server, I would only have to change it in a single place, keeping things DRY. It also encourages more consistency across multiple projects, making things easier to understand.

So, I started defining some reusable blocks in the main ccnet.config file which you can see below. The exact details will depend on your configuration of course.

Full Example of config.xml

How to add a new project:
Step 1. Create a config file named "<config>-project.xml"
Step 2. Add the project reference below
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
    <!-- cb defines to compose reusable blocks of configuration -->
    <!-- use <cb:define client="xxx"/> and <cb:define project="yyy"/> to use these -->
    <cb:define name="svn-block">
        <sourcecontrol type="svn">
            <!-- ... -->
        </sourcecontrol>
    </cb:define>
    <cb:define name="msbuild-20-block">
        <msbuild>
            <!-- ... -->
            <buildArgs>$(build-args) </buildArgs>
            <logger>D:\Program Files\CruiseControl.NET\server\Rodemeyer.MsBuildToCCNet.dll</logger>
        </msbuild>
    </cb:define>
    <cb:define name="msbuild-35-block">
        <msbuild>
            <!-- ... -->
            <buildArgs>$(build-args) </buildArgs>
            <logger>D:\Program Files\CruiseControl.NET\server\Rodemeyer.MsBuildToCCNet.dll</logger>
        </msbuild>
    </cb:define>
    <cb:define name="merge-block">
        <!-- Merge the output of tests, code coverage and fxcop -->
        <merge>
            <!-- ... -->
        </merge>
    </cb:define>
    <cb:define name="loggers-block">
        <modificationHistory onlyLogWhenChangesFound="true" />
        <!-- ... -->
    </cb:define>
    <cb:define name="stats-block">
        <statistics>
            <statisticList>
                <firstMatch name="Svn Revision" xpath="//modifications/modification/changeNumber" />
                <firstMatch name="Coverage" xpath="//coverageReport/project/@coverage" generateGraph="true"/>
                <firstMatch name="Warnings" xpath="//msbuild/@warning_count" generateGraph="true"/>
                <firstMatch name="Errors" xpath="//msbuild/@error_count" generateGraph="true"/>
                <!-- NDepend -->
                <firstMatch name="ILInstructions" xpath="//ApplicationMetrics/@NILInstruction" />
                <firstMatch name="Average Complexity" xpath="//ApplicationMetrics/MethodCC/@Avg" />
                <firstMatch name="Max Complexity" xpath="//ApplicationMetrics/MethodCC/@MaxVal" />
                <firstMatch name="LinesOfCode" xpath="//ApplicationMetrics/@NbLinesOfCode" generateGraph="true"/>
                <firstMatch name="LinesOfComment" xpath="//ApplicationMetrics/@NbLinesOfComment" generateGraph="true"/>
            </statisticList>
        </statistics>
    </cb:define>
    <cb:include href="config-client-project.xml"/>
    <cb:include href="config-client2-project-trunk.xml"/>
</cruisecontrol>

At the end of the file you can see the cb:include references. Those one-line includes pull in the configuration of each project. This makes things easier to manage, I think, because you only have to look at the individual project configuration.

Using Reusable Blocks in Individual Configuration Files

From there I need to make use of those defined blocks in an individual project file. The first thing I needed to do was to set the properties that I had defined as simple string replacements in the reusable blocks. Normally you would do that with cb:define as I showed above. But the catch is that you can only have one property defined with a given name, and if you include multiple project configurations that doesn't work. What does work is using cb:scope definitions, which allow a value to be defined only within a specific scope.
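A scope looks something like this (the property names and values here are illustrative):

```xml
<cb:scope client="ExampleClient"
          project="SpecialProject"
          build-args="/p:Configuration=Release">
    <!-- blocks included here see these values as $(client), $(project), $(build-args) -->
</cb:scope>
```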


From there you just need to include the blocks that you defined in the main ccnet.config within the scope block.

Full Example of Project Configuration

<!-- CruiseControl.NET configuration -->
<project name="ExampleClient SpecialProject" xmlns:cb="urn:ccnet.config.builder">
    <cb:scope client="ExampleClient" project="SpecialProject">
        <!-- ... source control, build and merge blocks included from the main config ... -->
        <publishers>
            <!-- Enable collection of project statistics -->
            <cb:stats-block/>
            <email mailhost="smtp.example.com" from="ccnet@example.com" includeDetails="true">
                <users>
                    <user name="Developer One" group="buildmaster" address="dev1@example.com" />
                    <user name="Developer Two" group="developers" address="dev2@example.com" />
                </users>
                <groups>
                    <group name="developers" notification="change" />
                    <group name="buildmaster" notification="change" />
                </groups>
            </email>
        </publishers>
    </cb:scope>
</project>

As you can see, the only one I didn’t template out was the email block because that depends on the developers working on each project.

Have fun bringing simplicity and consistency to your Cruise Control.NET configuration!

For the full details see the CruiseControl.NET Configuration Preprocessor documentation.

MSBuild Task for PartCover

I continue to lament the dearth of options for test coverage in the .NET world.

In the Java world you have open source tools like Emma and Cobertura that are widely used and supported (and many more) as well as proprietary tools like Clover available.

In .NET we have the open source NCover on SourceForge, which requires you to do odd code instrumentation and is essentially dead it seems; another NCover, which is proprietary and costs money; and PartCover, which is open source but doesn't seem very active.

Don’t get me wrong, NCover.org is a good option if you are willing to spend the money for it. But with a team of 30+ and a CI server, I’m not sure if I want to drop $10k on it. (NCover up until version 1.5.8 was Free Software (GPL) before it was closed. Who has the source and why haven’t you forked it yet?)


If you’re not willing to pay that basically leaves PartCover. But of course you want to integrate your code coverage with your automated build. There was no support for MSBuild out of the box, so I decided to build it.

Creating an MSBuild Task

I need to do 2 things to run PartCover:

  1. Register the native PartCover.CorDriver.dll
  2. Execute the PartCover.exe with the proper options

Register Native DLL

To see the details on how to register a native DLL using .NET code, see my earlier post Register and Unregister COM DLL from .NET Code.

Execute PartCover

The MSBuild framework provides a ToolTask base class whose whole purpose is for executing external command line tools. I used this as the base of the task.

1. ToolName

First you override the ToolName property to return the name of the EXE to run. Nothing special here, it’s just the executable name.

protected override string ToolName
{
    get { return "PartCover.exe"; }
}
2. Properties

Next, to start building the task, define all of the settings that a user will need to set to execute the task. You create those as properties on the class and they will be set by MSBuild. Start with the simple things that someone must pass for the tool to execute properly; you can build from there for other properties. If possible, give the properties sane defaults so that people don't have to override them in their build file.

// ...
/// <summary>
/// The application to execute to get the coverage results.
/// Generally this will be your unit testing exe.
/// </summary>
public string Target
{
    get { return _target; }
    set { _target = value; }
}

/// <summary>
/// The arguments to pass to the <see cref="Target"/> executable
/// </summary>
public string TargetArgs
{
    get { return _targetArgs; }
    set { _targetArgs = value; }
}

public string WorkingDirectory
{
    get { return _workingDirectory; }
    set { _workingDirectory = value; }
}
// ...
3. Command Arguments

Then you need to override the GenerateCommandLineCommands() method. The whole purpose of this method is to construct the command line parameters that need to be passed to the ToolName command, using the properties defined on the task.

protected override string GenerateCommandLineCommands()
{
    StringBuilder builder = new StringBuilder();
    AppendIfPresent(builder, "--target", Target);
    AppendIfPresent(builder, "--target-work-dir", WorkingDirectory);
    AppendIfPresent(builder, "--target-args", QuoteIfNeeded(TargetArgs));
    AppendIfPresent(builder, "--output", Output);
    AppendMultipleItemsTo(builder, "--include", Include);
    AppendMultipleItemsTo(builder, "--exclude", Exclude);
    return builder.ToString();
}
4. Execute

Finally, if you have anything special to do, you can override the Execute() method. In this case, I wanted to handle the registering and unregistering of the CorDriver.dll. Make sure that you call the base.Execute() method so that the ToolTask can do the work it needs to do.

public override bool Execute()
{
    string corDriverPath = Path.Combine(ToolPath, CorDriverName);
    Log.LogMessage("CoreDriver: {0}", corDriverPath);
    using (Registrar registrar = new Registrar(corDriverPath))
    {
        registrar.RegisterComDLL();
        return base.Execute();
    }
}

To see the whole thing, download the files at the bottom of this post.

How to Use PartCover with MSBuild

Now that you have a Custom task you need to create a Target in your MSBuild file to execute the task.

<!-- Register the PartCover.MSBuild.dll so the PartCover task is available -->
<UsingTask TaskName="PartCover.MSBuild.PartCover" AssemblyFile="$(LibDirectory)/PartCover/PartCover.MSBuild.dll" />
<!-- Setup an item so you can use it in your task -->
<ItemGroup>
    <TestAssemblies Include="src/ZorchedProj/bin/$(Configuration)/ZorchedProj.Tests.dll" />
</ItemGroup>
<!-- Create a Target to call the PartCover task -->
<Target Name="Test" DependsOnTargets="CoreBuild">
    <!-- Configure the task to execute (other attributes such as Target, Output, Include and Exclude are elided here) -->
    <PartCover ToolPath="$(LibDirectory)/PartCover"
               TargetArgs="%(TestAssemblies.FullPath) /xml=%(TestAssemblies.Filename).xml /labels /nologo /noshadow" />
</Target>

Download the code:
PartCover MSbuild.zip

Good luck and I hope someone else finds this useful.

Tracking Project Metrics

How do you track the health of your software projects? Ideally you could come up with a few easy-to-collect metrics and have your Continuous Integration system generate the information, maybe even graphing it over time. What we track is going to be based on a set of beliefs and assumptions, so I think we should make those clear.

My Software Metrics Beliefs and Assumptions

  • The simplest code that solves the problem is the best. Simple does not mean rote or repetitive. It means well designed, well abstracted, well factored.
  • Unit Testing improves the quality of code.
  • Overly complex code, code that is not well factored, “big” code is hard to unit test.
  • The metrics need to be easy to interpret and easy to gather or I won’t do it.

Based on those beliefs and assumptions we have defined the kinds of things we care about. We want simple, small classes and methods. We want classes that fit the Single Responsibility Principle. We want unit test coverage. And we want to know when we deviate from those things.

Inevitably this won’t tell you the whole picture of a project. Some deviation is inevitable as well (we’re not perfect). But this is giving us a picture into a project that would let us look at “hot spots” and determine if they are things we want to deal with. It will never tell you if the system does what a user really wants. It will never fully tell you if a project will be successful.

The Metrics I Came Up With

  • Unit Test Coverage – How many code paths are being exercised in our tests.
  • Cyclomatic Complexity – Number of methods over 10, 20, 40
  • Lines of Code – General size information
  • Methods over a certain size – Number of methods over 15, 30, 45 lines
  • Classes over a certain size – Number of classes over 150, 300, 600 lines
  • Afferent/Efferent Coupling – Dead code and code that does too much


I’m currently thinking mostly about .NET projects because at work we do a lot of small to mid-size projects using .NET. Many of the tools already exist and are Open Source in the Java world and many Java Continuous Integration servers support calculating some of those metrics for you already. So you’re probably in pretty good shape if you’re doing Java development.


Out of the box, FxCop doesn't really give you many of these things. But it does provide a way to fairly easily write your own rules using its API.

For example, to calculate Cyclomatic Complexity you can implement a class that extends BaseIntrospectionRule and overrides VisitBranch and VisitSwitchInstruction.

public override void VisitBranch(Branch branch)
{
    if (branch.Condition != null)
        Level++;
}

public override void VisitSwitchInstruction(SwitchInstruction switchInstruction)
{
    Level += switchInstruction.Targets.Count;
}


PartCover is an open source code coverage tool for .NET. It does the job but does not have what I’d call any “wow” factor.

NCover is a proprietary coverage tool. High on wow factor, but expensive. NCover was once free and open source and gained a good following, but they switched and closed it and made the decision to charge $150 – $300 a license for it depending on the version.

(I lament the state of the .NET ecosystem with regard to tooling. Either an OK open source version or an expensive commercial version is not a real choice. There are so many good options in the Unit Testing space, but not in the coverage space.)

NDepend Mini-Review

Disclosure: I received a free copy of NDepend from the creator. I would like to think that didn’t influence this at all, but wanted to make that known.

One of the tools that I’ve been looking at for .NET projects is NDepend. It seems to cover all of the cases that I mentioned above except for code coverage (although it integrates with NCover, I haven’t looked at that). It has a really cool query language, which looks a lot like SQL, that lets you customize any of the existing metrics it tracks or write your own. It comes with so many metrics that in practice it can seem quite overwhelming. I think the right thing to do for most projects is to pick the handful that you care about and limit it to that.
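To give a flavor of that query language, a rule flagging overly complex methods might look something like this (the syntax follows NDepend's CQL documentation; the threshold of 20 is just an example):

```
// Methods that are probably doing too much
SELECT METHODS WHERE CyclomaticComplexity > 20 ORDER BY CyclomaticComplexity DESC
```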

NDepend comes with NAnt and MSBuild tasks that will let you integrate it into your build automation. It also comes with an XSL stylesheet to integrate the NDepend output into CruiseControl.NET for reporting purposes.

Some things you might run into:

  • C# is the primary supported language (e.g. no support for code-level Cyclomatic Complexity for non-C# languages, though IL-level CC is still supported). Only a problem if you are forced to use VB.NET or another CLR language, of course.
  • A single license is $400 and 20 licenses or more are over $200 each. So price could be a concern to many people. I’m thinking it might make the most sense to run this on a CI server, so in that case you would probably only need a small number of licenses.
  • It integrates with code coverage tools, but currently only NCover. See the previous comments about the cost of NCover combined with the cost of NDepend if price is an issue.

What Else?

What does everyone else think? What do you care about and what are the right metrics for keeping track of the health of a software project?

Register and Unregister COM DLL from .NET Code


The command regsvr32 is used to register a native, unmanaged code DLL so that it is available via COM. With a registered DLL you can use COM Interop to call that code from .NET Managed code.

Regsvr32 is unmanaged code and as such makes use of some existing functions that are defined in the kernel32.dll. Fortunately .NET makes available a pretty easy to use foreign function interface (FFI) in the form of P/Invoke.

In general, to call an unmanaged function you just need to apply the DllImport attribute to an extern function to tell the CLR how to access it.


[DllImport("shell32.dll")] // e.g. for a function exported by shell32
static extern int DllGetVersion(ref DLLVERSIONINFO pdvi);

Registering an Unmanaged DLL with C# Code

regsvr32 actually calls functions defined within the DLL itself, in what is known as a self-registering DLL. So, assuming your DLL is self-registering, you should be able to use this approach as well. The only thing we need to do is figure out which functions to call.

It turns out there are two basic functions: LoadLibrary and GetProcAddress.


LoadLibrary returns a handle to the module (a pointer to a structure).

After you are done with your library you can clean up by calling FreeLibrary, passing it the handle that was returned from LoadLibrary.


GetProcAddress finds a function defined in a module and returns a pointer to that function. A function pointer allows you to call a method in a dynamic way. It is functionally equivalent to a delegate in managed code.



Put it All Together

Now we have a basic algorithm to register a DLL:

  1. LoadLibrary to get a handle to the library
  2. GetProcAddress to get a function pointer to the proper function to register the DLL
  3. Call the function returned from GetProcAddress
  4. Cleanup

Mix that in with some error checking code and I got the following:

public class Registrar : IDisposable
{
    private IntPtr hLib;

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, ExactSpelling = true, SetLastError = true)]
    internal static extern IntPtr GetProcAddress(IntPtr hModule, string procName);
    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern IntPtr LoadLibrary(string lpFileName);
    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool FreeLibrary(IntPtr hModule);

    internal delegate int PointerToMethodInvoker();

    public Registrar(string filePath)
    {
        hLib = LoadLibrary(filePath);
        if (IntPtr.Zero == hLib)
        {
            int errno = Marshal.GetLastWin32Error();
            throw new Win32Exception(errno, "Failed to load library.");
        }
    }

    // Self-registering DLLs expose these standard entry points
    public void RegisterComDLL()
    {
        CallPointerMethod("DllRegisterServer");
    }

    public void UnRegisterComDLL()
    {
        CallPointerMethod("DllUnregisterServer");
    }

    private void CallPointerMethod(string methodName)
    {
        IntPtr dllEntryPoint = GetProcAddress(hLib, methodName);
        if (IntPtr.Zero == dllEntryPoint)
            throw new Win32Exception(Marshal.GetLastWin32Error());
        PointerToMethodInvoker drs =
               (PointerToMethodInvoker) Marshal.GetDelegateForFunctionPointer(dllEntryPoint,
                   typeof(PointerToMethodInvoker));
        drs();
    }

    public void Dispose()
    {
        if (IntPtr.Zero != hLib)
        {
            FreeLibrary(hLib);
            hLib = IntPtr.Zero;
        }
    }
}

The requirement I was dealing with was a build script so I wanted to register the unmanaged DLL, use it, and then unregister it so the computer would be in its previous state. If you want to leave the DLL registered, such as for an install program, you would need to modify the above example.

To call this code you just need to pass it a path to the dll that needs to be registered.

using (Registrar registrar = new Registrar("path\\to\\com.dll"))
{
    registrar.RegisterComDLL();
    return base.Execute();
}

Check out pinvoke.net for a lot of good documentation and examples of how to call native methods from managed code.

Database Migrations for .NET

One of the more difficult things to manage in software projects is often changing a database schema over time. On the projects that I work on, we don’t usually have DBAs who manage the schema so it is left up to the developers to figure out. The other thing you have to manage is applying changes to the database in such a way that you don’t disrupt the work of other developers on your team. We need the change to go in at the same time as the code so that Continuous Integration can work.


While I don’t know if they were invented there, migrations seem to have been popularized by Ruby on Rails. Rails is a database-centric framework that infers the properties of your domain from the schema of your database. For that reason it makes sense that Rails came up with a very good way of managing database changes. Here are some example migrations to give you an idea of the basics of creating a schema.


using Migrator.Framework;
using System.Data;
// The number in the Migration attribute gives the order migrations are applied in
[Migration(1)]
public class AddAddressTable : Migration
{
    override public void Up() {
        Database.AddTable("Address",
             new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
             new Column("street", DbType.String, 50),
             new Column("city", DbType.String, 50),
             new Column("state", DbType.StringFixedLength, 2),
             new Column("postal_code", DbType.String, 10));
    }
    override public void Down() {
        Database.RemoveTable("Address");
    }
}


using Migrator.Framework;
using System.Data;
[Migration(2)]
public class AddAddressColumns : Migration
{
    public override void Up() {
        Database.AddColumn("Address", new Column("street2", DbType.String, 50));
        Database.AddColumn("Address", new Column("street3", DbType.String, 50));
    }
    public override void Down() {
        Database.RemoveColumn("Address", "street2");
        Database.RemoveColumn("Address", "street3");
    }
}


using Migrator.Framework;
using System.Data;
[Migration(3)]
public class AddPersonTable : Migration
{
    public override void Up() {
        Database.AddTable("Person",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("first_name", DbType.String, 50),
            new Column("last_name", DbType.String, 50),
            new Column("address_id", DbType.Int32, ColumnProperty.Unsigned));
        Database.AddForeignKey("FK_PERSON_ADDRESS", "Person", "address_id", "Address", "id");
    }
    public override void Down() {
        Database.RemoveTable("Person");
    }
}

Run Your Migrations

The best way to run your migrations is to integrate them into your build automation tool of choice. If you are not using one, now is the time.

MigratorDotNet supports MSBuild and NAnt.


<Target Name="Migrate" DependsOnTargets="Build">
    <CreateProperty Value="-1" Condition="'$(SchemaVersion)'==''">
        <Output TaskParameter="Value" PropertyName="SchemaVersion"/>
    </CreateProperty>
    <!-- the migrations assembly path here is illustrative -->
    <Migrate Provider="SqlServer" 
            Connectionstring="Database=MyDB;Data Source=localhost;User Id=;Password=;" 
            Migrations="bin/MyProject.dll"
            To="$(SchemaVersion)"/>
</Target>


<target name="migrate" description="Migrate the database" depends="build">
  <property name="version" value="-1" overwrite="false" />
  <!-- the migrations assembly path here is illustrative -->
  <migrate
    provider="SqlServer"
    connectionstring="Database=MyDB;Data Source=localhost;User Id=;Password=;"
    migrations="bin/MyProject.dll"
    to="${version}" />
</target>

So You Want to Migrate?

More documentation and examples are available on the MigratorDotNet site. Some of the changes shown here are still in an experimental branch that is in the process of being merged.

MigratorDotNet is a continuation of code started by Marc-André Cournoyer and Nick Hemsley.

ALT.NET in Milwaukee

I am a generalist. I like Ruby and Groovy, Rails and Grails, Objective C and Python sometimes. I use bash scripts and I use Java and .NET too. I work on a MacBook Pro running OS X and a Thinkpad running Windows XP. I run my server on Ubuntu Linux. I used to run Linux at home a lot more, but have basically just switched to the Mac, the mullet of OSes – business on top and party in the back! (No, I don’t have a mullet; yes, I love Mac OS X.)

But this post was about .NET, right? Well, the whole idea behind ALT.NET is that we have learned from our experiences. Whether I’m doing a web app in Grails, a handheld application in .NET CF or a desktop application using Objective C, I want to bring all of the experience that I have in each of them to the game. When I do ASP.NET I want to leverage the things I’ve seen using Hibernate and Spring in Java and MVC in Rails, Grails and Objective C. I like to think that I fit the solution to the problem and not the other way around. Having broad-based experience helps with that.

ALT.NET is about bringing all of those ideas along with the ideas of Agile development, testing, continuous integration, refactoring and generally embracing change to the .NET world. It’s about evaluating tools on their merits regardless of the vendor. Microsoft is just another “3rd Party vendor” and gets no special treatment.

Dan Miser is a former bigwig in the Delphi community who now sits next to me at work. He’s a .NET guy who owns a Mac and got excited about Rails. He’s taken it upon himself to organize an ALT.NET group in Milwaukee.

Do you believe that choosing the right tools doesn’t depend on who makes them? Do you believe that Open Source works? Do you know more than one language and more than one platform?
Check out his site for more information.

Implementing Mixins with C# Extension Methods

Wikipedia defines a mixin as “a class that provides a certain functionality to be inherited by a subclass, but is not meant to stand alone. Inheriting from a mixin is not a form of specialization but is rather a means to collect functionality. A subclass may even choose to inherit most or all of its functionality by inheriting from one or more mixins through multiple inheritance. A mixin can also be viewed as an interface with implemented methods.”

Inheritance defines an “is a” relationship between classes. A car “is a” vehicle: a car is a specialization of a vehicle. A mixin, on the other hand, is a means of sharing functionality among a series of related classes that do not inherit from the same base class. A mixin allows you to reuse code without implying a relationship among the classes. It also allows you to get around the Single Inheritance model a bit to do this.

An extension method is a new feature of C# 3.0. It is essentially a way of providing methods on existing classes that you may or may not have access to. This is similar in many ways to “open classes” that allow you to add methods to existing classes in languages such as Ruby. The interesting thing is that you can add domain-specific methods to framework classes like String or Int32.


public static class TimeMixin {
    public static TimeSpan Days(this int days) {
        return new TimeSpan(days, 0, 0, 0);
    }

    public static DateTime FromNow(this TimeSpan timeSpan) {
        return DateTime.Now.Add(timeSpan);
    }
}
This is a great way to keep code looking and feeling like Object Oriented code and to avoid a whole bunch of Utility classes full of static methods. Avoiding those utility classes preserves the power of OO for dealing with complexity through polymorphism, and avoids the often more complex structures that come out of procedural code.
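To illustrate the call-site difference, here is the TimeMixin in use; the Program/Main wrapper and the hypothetical TimeUtil comparison in the comment are mine, added so the sample stands alone:

```csharp
using System;

public static class TimeMixin {
    // Same extension methods as above, repeated so this compiles standalone.
    public static TimeSpan Days(this int days) {
        return new TimeSpan(days, 0, 0, 0);
    }
    public static DateTime FromNow(this TimeSpan timeSpan) {
        return DateTime.Now.Add(timeSpan);
    }
}

public class Program {
    public static void Main() {
        // Reads fluently, compared to a utility-class call
        // like TimeUtil.FromNow(TimeUtil.Days(5)).
        DateTime deadline = 5.Days().FromNow();
        Console.WriteLine(deadline > DateTime.Now); // True
    }
}
```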

So, how do you use Extension Methods as Mixins?

Define an Interface (or use an existing one)

This could be anything from a marker interface to a full interface that defines a contract. In this simple example, I’ll make use of an existing interface.

public interface IComparable {
    int CompareTo(object other);
}

Create Extension Methods for the Interface

namespace Mixins {
    public static class ComparableExtension {
        public static bool GreaterThan(this IComparable leftHand, object other) {
            return leftHand.CompareTo(other) > 0;
        }

        public static bool LessThan(this IComparable leftHand, object other) {
            return leftHand.CompareTo(other) < 0;
        }
    }
}

Notice the ‘this’ in the method declaration before the first parameter. The ‘this’ denotes an extension method, and the type of that first parameter defines which objects will have the method available to be called on them.

Implement the interface in a Concrete class

namespace Domain {
    public class Money : IComparable {
        private double amount;
        private string currency;
        private CurrencyConverter converter;

        public Money(CurrencyConverter converter, double amount, string currency) {
            this.amount = amount;
            this.currency = currency;
            this.converter = converter;
        }

        public int CompareTo(object other) {
            double otherAmount = converter.ConvertAmount(this.currency, (Money) other);
            return this.amount.CompareTo(otherAmount);
        }
    }
}

Use the Code

namespace Domain {
    using Mixins;

    public class Account : IComparable {
        private Money currentBalance;

        public void Withdraw(Money amount) {
            if (currentBalance.LessThan(amount)) {
                throw new BalanceExceededException(currentBalance);
            }
            // ... implement the rest of the method
        }

        public int CompareTo(object other) {
            return currentBalance.CompareTo(((Account) other).currentBalance);
        }
    }
}

Now the LessThan and GreaterThan implementations are reusable on both the Money and the Account classes. They are, of course, available on any class that implements IComparable.
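To see that last point concretely, here is a small standalone sketch applying the same ComparableExtension methods to built-in types, which already implement System.IComparable. The Demo namespace and Program class are mine, added for illustration:

```csharp
using System;

namespace Mixins {
    // Same extension methods as above, repeated so this compiles standalone.
    public static class ComparableExtension {
        public static bool GreaterThan(this IComparable leftHand, object other) {
            return leftHand.CompareTo(other) > 0;
        }
        public static bool LessThan(this IComparable leftHand, object other) {
            return leftHand.CompareTo(other) < 0;
        }
    }
}

namespace Demo {
    using System;
    using Mixins;

    public class Program {
        public static void Main() {
            // int and string implement System.IComparable,
            // so they pick up the mixin methods with no extra code.
            Console.WriteLine(5.GreaterThan(3));           // True
            Console.WriteLine("apple".LessThan("banana")); // True
        }
    }
}
```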

So now you can avoid some limitations of Single Inheritance and increase the amount of code reuse available to you with the concept of a Mixin. This is probably officially a ‘sharp knife’, so be careful with this kind of programming. It can be non-intuitive to people and can certainly be overused, but it is a very valuable tool in many cases.

ALT.NET Milwaukee

Dan wrote a post about ALT.NET in Milwaukee. Basically the idea of ALT.NET is to take the best practices from software development, to find the best tools available in .NET regardless of the vendor, and to use them when they make sense. Some people see the term as divisive, an “us vs. them” sort of thing. I don’t really agree.

ALT.NET is not a divisive term to me. It’s a distinct and simple name. There might be other alternatives, like INDY.NET? As in indie music? I don’t think that would be terrible, but it doesn’t have the same connotation as an indie film or an indie band. What it generally comes down to is “I know more than one programming language” .NET. We’ve done Java and Ruby (and/or Python, Perl, Delphi, Scheme) and seen what those languages and frameworks have to offer.

Microsoft provides a “full stack”: OS, database, IDE, language and some frameworks in there. This is very helpful. It makes getting started with the .NET platform very easy because you don’t have to make a lot of choices up front. Their goal is to get developers to use their tools, solutions and practices. There is nothing wrong with that. Their target market is mostly dominated by mid-sized companies, often with average talent (don’t take that the wrong way; there are obviously exceptions). This is why they focus so much on RAD and click-and-run kinds of development. It makes sense for a certain market.

But is it right for every market, for every developer, or as a solution to every problem? I think the obvious answer is no. Just because the whole solution is not right for every scenario doesn’t mean you can’t use parts of it, or go to the buffet instead of ordering off the set menu. That is ALT.NET.

Because of this full-stack mentality at MS, I think almost by definition anything not provided by Microsoft is “alternative” in the .NET world. Scrum and XP are not MSF, so you’ve chosen to use an alternative process in the .NET world. The tools we use in those disciplines are generally not developed by Microsoft. That’s ok. We want to use the right tools for the job and the right tools to support the ways that we want to work. We’re comfortable with open source and a bit of trial and error. We’re comfortable with support via email and forums. We’re comfortable with a bit of ambiguity and living with the consequences of our decisions.

Alternatives and choices are not about “us vs. them” or “elite vs. noobs”; it’s just about choice. Use the right tool for the job; fit the tools and frameworks to the problem at hand and not the other way around.

I think ALT.NET is a valuable idea. It takes the best ideas from other development languages and cultures and brings them to the .NET world. It makes available a broader set of tools and techniques that can help better solve some problems.

What ALT.NET do I use?

  • CruiseControl.NET for continuous integration
  • log4net for logging
  • NUnit for unit tests
  • NCover for code coverage of those unit tests
  • ReSharper for awesome refactoring support in Visual Studio

What am I looking at and hopeful about?

  • ORM – such as NHibernate
  • Dependency Injection – such as Spring.NET
  • MVC frameworks
  • Front Controller frameworks for web development

What do you use that’s off the garden path?

Check out Dan’s blog and help get some ideas going for ALT.NETers in the Milwaukee area.