What is the Cost of Not Doing Things?

We’re really good at measuring the cost of some things. We’re good at measuring the cost of new computers for everyone on the team, the cost per hour of a resource on a project, and the time it will take to complete a new feature.

What we don’t seem to be good at is measuring the cost of not doing things. What is the cost of maintaining an application on 10 year old technology instead of upgrading it as newer versions come out? What is the cost of not having unit tests and automated test suites? What is the cost of running many different versions of a framework or a virtual machine?

Unfortunately this leaves us with a problem. When we cannot quantify the cost of inaction it often looks like a reasonable choice because we assume that it’s free. That assumption is the root of a lot of problems.

This post is just me ranting, I wish I had the answer.

Using Quartz.NET, Spring.NET and NHibernate to run Scheduled Tasks in ASP.NET

Running scheduled tasks in web applications is not normally a straightforward thing to do. Web applications are built to receive requests from users and respond to them. This request/response lifecycle doesn’t map well to a long-running thread that wakes up to run a task every 10 minutes or at 2 AM every day.

ASP.NET Scheduled Task Options

Using ASP.NET running on Windows, there are a number of different options you could choose from to implement this. Windows’ built-in Scheduled Tasks can periodically execute a program. A Windows Service could be constructed that uses a Timer or a Thread to periodically do the work. Both Scheduled Tasks and Windows Services require you to write a standalone program. You can share DLLs from your Web application, but in the end it is a separate app that needs to be maintained. Another option if you go this route is to turn the Scheduled Task or Service into a simple Web Service or REST client that calls into your Web application but doesn’t need any knowledge of the jobs themselves.

Another option is an Open Source tool called Quartz.NET. Quartz.NET is based on the popular Java scheduled task runner called (not surprisingly) Quartz. Quartz.NET is a full-featured system that manages Jobs that do the work and Triggers that allow you to specify when you want those jobs run. It can run in your web application itself or as an external service.

The simplest approach to get started is to run Quartz.NET directly in your Web application inside the IIS process. The downside is that IIS periodically recycles its processes and won’t necessarily start a new one until a new web request is made. Assuming you can deal with this nondeterministic behavior, running in an IIS process will be fine. It also creates a relatively easy path to migrate to the external service approach at a later point if need be.

I’m an ALT.NET kind of .NET developer, so I like to use tools like NHibernate for ORM and Spring.NET for Dependency Injection, AOP and generally wiring everything together. The good news is that Spring.NET supports Quartz.NET through its Scheduling API. Start with that for some basic information on using Quartz.NET with Spring. The bad news is that the documentation is a bit thin and the examples are basic. I attempt to remedy that in part here.

Using Quartz.NET, NHibernate and Spring.NET to run Scheduled Tasks

The goal is to integrate an existing Spring managed object like a Service or a DAL that uses NHibernate with a Quartz Job that will run on a periodic basis.

To start with, you need to create an interface for your service and then implement that interface. The implementation I’ll leave to you and your problem, but you can imagine the example below using one or more NHibernate DALs to look up Users, find their email preferences, etc.

Implementing Services and Jobs

public interface IEmailService
{
    void SendEveryoneEmails();
}

When implementing your Job you need to know a few details about how Quartz works:

  1. The first thing to understand is that if you are going to use the AdoJobStore to keep your Jobs and Triggers in the database, the Job needs to be Serializable. Generally speaking, your DAL classes, NHibernate sessions and the like are not going to be serializable. To get around that, we make the properties set-only so that they will not be serialized when the Job is stored in the database.
  2. The second thing to understand is that your Job will not be running in the context of the Web application or a request, so anything you rely on to set up connections (such as an OpenSessionInView filter) will not apply to Jobs run by Quartz. This means you will need to set up your own NHibernate session for all of the dependent objects to use. Luckily Spring provides some help with this in the SessionScope class, which is the same class used by the OpenSessionInView filter.

Using the Service interface you created, you then create a Job that Quartz.NET can run. Quartz.NET provides the IJob interface for you to implement. Spring.NET provides a base class that implements that interface, called QuartzJobObject, which helps deal with injecting dependencies.

using NHibernate;
using Quartz;
using Spring.Data.NHibernate.Support;
using Spring.Scheduling.Quartz;
 
public class CustomJob : QuartzJobObject
{
    private ISessionFactory sessionFactory;
    private IEmailService emailService;
 
    // Set only so they don't get serialized
    public ISessionFactory SessionFactory { set { sessionFactory = value; } }
    public IEmailService EmailService { set { emailService = value; } }
 
    protected override void ExecuteInternal(JobExecutionContext ctx)
    {
        // Session scope is the same thing as used by OpenSessionInView
        using (var ss = new SessionScope(sessionFactory, true))
        {
            emailService.SendEveryoneEmails();
            ss.Close();
        }
    }
}

Wiring Services and Jobs Together with Spring

Now that you have your classes created you need to wire everything together using Spring.

First we have our DALs and Services wired in to Spring with something like the following:

<object id="UserDAL" type="MyApp.DAL.UserDAL, MyApp.Data">
  <property name="SessionFactory" ref="NHibernateSessionFactory" />
</object>
<object id="EmailService" type="MyApp.Service.EmailService, MyApp.Service">
  <property name="UserDAL" ref="UserDAL" />
</object>

Next you create a Job that references the Type of the Job class you just created. The type is referenced instead of an instance because the lifecycle of the Job is managed by Quartz itself. Quartz deals with instantiation, serialization and deserialization of the object. This is a bit different from how Spring normally manages objects.

<object id="CustomJob" type="Spring.Scheduling.Quartz.JobDetailObject, Spring.Scheduling.Quartz">
    <property name="JobType" value="MyApp.Jobs.CustomJob, MyApp.Jobs" />
</object>

Once your Job is created, you create a Trigger that will run the Job based on your rules. Quartz (and Spring) offer two types of Triggers: SimpleTriggers and CronTriggers. SimpleTriggers allow you to specify things like “Run this task every 30 minutes”. CronTriggers follow a crontab-like format for specifying when Jobs should run. The CronTrigger is very flexible, but could be a little confusing if you aren’t familiar with cron. It’s worth getting to know for that flexibility though.
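
For orientation, Quartz cron expressions are not quite classic crontab: they have six required fields (seconds, minutes, hours, day-of-month, month, day-of-week) plus an optional year, and use ? to mean “no specific value” in one of the two day fields. A few standard examples (not specific to this application):

```text
0 0/30 * * * ?      # every 30 minutes
0 0 2 * * ?         # every day at 2:00 AM
0 0 2 ? * MON-FRI   # at 2:00 AM on weekdays only
```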

<object id="CustomJobTrigger" type="Spring.Scheduling.Quartz.CronTriggerObject, Spring.Scheduling.Quartz">
    <property name="JobDetail" ref="CustomJob"/>
    <property name="CronExpressionString" value="0 0 2 * * ?" /> <!-- run every morning at 2 AM -->
    <property name="MisfireInstructionName" value="FireOnceNow" />
</object>

The last piece that needs to be done is the integration of the SchedulerFactory. The SchedulerFactory brings together Jobs and Triggers with all of the other configuration needed to run Quartz.NET jobs.

A couple of things to understand about configuring the SchedulerFactory:

  1. Specifying the DbProvider property (where DbProvider is the db:provider setup used by your NHibernate configuration) tells the SchedulerFactory to use the AdoJobStore and keep the Jobs and Trigger information in the database. The tables need to exist already; Quartz provides a script for this task.
  2. Running on SQL Server requires a slight change to Quartz. Quartz uses a locking mechanism to prevent Jobs from running concurrently, and for some reason the default configuration uses a FOR UPDATE query that is not supported by SQL Server. (I don’t understand exactly why a .NET utility wouldn’t work with SQL Server out of the box.)
    To fix the locking, the quartz.jobStore.selectWithLockSQL property needs to be set, as shown in the full configuration below.
  3. The JobFactory is set to the SpringObjectJobFactory because it handles the injection of dependencies into QuartzJobObject like the one we created above.
  4. SchedulerContextAsMap is a property on the SchedulerFactory that allows you to set properties that will be passed to your Jobs when they are created by the SpringObjectJobFactory. This is where you set all of the Property names and the corresponding instance references to Spring configured objects. Those objects will be set into your Job instances whenever they are deserialized and run by Quartz.

Here’s the whole ScheduleFactory configuration put together:

<object id="SchedulerFactory" type="Spring.Scheduling.Quartz.SchedulerFactoryObject, Spring.Scheduling.Quartz">
    <property name="JobFactory">
        <object type="Spring.Scheduling.Quartz.SpringObjectJobFactory, Spring.Scheduling.Quartz"/>
    </property>
    <property name="SchedulerContextAsMap">
        <dictionary>
            <entry key="EmailService" value-ref="EmailService" />
            <entry key="SessionFactory" value-ref="NHibernateSessionFactory" />
        </dictionary>
    </property>
    <property name="DbProvider" ref="DbProvider"/>
    <property name="QuartzProperties">
        <dictionary>
            <entry key="quartz.jobStore.selectWithLockSQL" value="SELECT * FROM {0}LOCKS WHERE LOCK_NAME=@lockName"/>
        </dictionary>
    </property>
    <property name="triggers">
        <list>
            <ref object="CustomJobTrigger" />
        </list>
    </property>
</object>

Conclusion

Scheduled tasks in ASP.NET applications shouldn’t be too much trouble anymore. Reusing existing Service and DAL classes allows you to easily create scheduled tasks using existing, tested code. Quartz.NET looks to be a good solution for these situations.

Mocking .NET Objects with NUnit

NUnit is my Unit Testing tool of choice for .NET development. Microsoft provides a unit testing framework but it only works with some higher-end versions of Visual Studio. They’re so similar that it’s almost ridiculous that Microsoft created their own version.
(See one of my previous posts for more information on Automating NUnit with MSBuild.) In the Java world it’s fairly common to do Mocking to help make unit testing easier. I’ve written about using JMock for Unit Testing in Java. In this post, I’d like to talk about a relatively new feature of NUnit which now supports Mocks out of the box.

What Are Mock Objects?

Mock Objects provide a technique for isolating classes from their dependencies for testing purposes. This isolation allows for fine-grained testing of single methods in a single class, possibly even before the dependent classes are fully implemented. It allows your tests to run quickly and makes testing small pieces of functionality much easier. When you’ve tested individual pieces of code in isolation, you can have much higher confidence in larger-grained tests. This isolation becomes even more interesting when you are dealing with dependencies such as a data layer or a web service layer. External calls like that can be very time-consuming or can fail if the remote system is down for maintenance.

One of the great things about using Mock Objects is that they force you to think about the dependencies that your classes and methods have. They force you to think about the coupling between your classes. If you have high coupling, your code is often harder to test. If you have a loosely coupled design, then testing and using Mock Objects is much easier. Thinking about those design notions early can help you more easily manage change over time.

Maxims:

  • Good design is better than bad design
  • Loosely coupled objects are usually a better design than tightly coupled objects
  • Testing improves code quality and developer efficiency over time
  • Testing is easier with a loosely coupled design

A Sample Project

We’re going to start with some simple code. We create a Domain object called Person and an interface for a Data Access object called IPersonRepository. Pretty simple at this point.

public class Person
{
    public string Id;
    public string FirstName;
    public string LastName;
    public Person(string newId, string fn, string ln)
    {
        Id = newId;
        FirstName = fn;
        LastName = ln;
    }
}

public interface IPersonRepository
{
    List<Person> GetPeople();
    Person GetPersonById(string id);
}

Next we create a PersonService object. This would represent all of the business logic in our application. It would interact with the Data Access tier and return information to the UI layer for display.

We wire our objects together using constructor-based Dependency Injection. All of the dependent objects are passed in through the constructor. This allows for loose coupling, since the PersonService doesn’t know about the implementing class, only the interface. Because it’s done in the constructor, we can also never have an invalid PersonService, as could be the case if there were a setter for the IPersonRepository implementation.

This is again a fairly straightforward implementation, but I hope enough to display the issue at hand.

public class PersonService
{
    private IPersonRepository personRepos;
    public PersonService(IPersonRepository repos)
    {
        personRepos = repos;
    }
    public List<Person> GetAllPeople()
    {
        return personRepos.GetPeople();
    }
    public List<Person> GetAllPeopleSorted()
    {
        List<Person> people = personRepos.GetPeople();
        people.Sort(delegate(Person lhp, Person rhp) { 
            return lhp.LastName.CompareTo(rhp.LastName); 
        });
        return people;
    }
    public Person GetPerson(string id)
    {
        try
        {
            return personRepos.GetPersonById(id);
        }
        catch (ArgumentException)
        {
            return null; // no person with that id was found
        }
    }
}

Using Mocks with NUnit

Now we can start testing our PersonService. Notice that we haven’t even implemented the IPersonRepository yet. That way we can make sure that everything in our PersonService class works as expected without having to think about other layers of the application.

using System;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Mocks;
[TestFixture]
public class PersonServiceTest
{
    // The dynamic mock proxy that we will use to implement IPersonRepository
    private DynamicMock personRepositoryMock;
    // Set up some testing data
    private Person onePerson = new Person("1", "Wendy", "Whiner");
    private Person secondPerson = new Person("2", "Aaron", "Adams");
    private List<Person> peopleList;
    [SetUp]
    public void TestInit()
    {
        peopleList = new List<Person>();
        peopleList.Add(onePerson);
        peopleList.Add(secondPerson);
        // Construct a Mock Object of the IPersonRepository Interface
        personRepositoryMock = new DynamicMock(typeof (IPersonRepository));
    }
    [Test]
    public void TestGetAllPeople()
    {
        // Tell that mock object when the "GetPeople" method is 
        // called to return a predefined list of people
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);
        // Construct a Person service with the Mock IPersonRepository
        PersonService service = new PersonService(
             (IPersonRepository) personRepositoryMock.MockInstance);
        // Call methods and assert tests
        Assert.AreEqual(2, service.GetAllPeople().Count);
    }
    [Test]
    public void TestGetAllPeopleSorted()
    {
        // Tell that mock object when the "GetPeople" method is called to 
        // return a predefined list of people
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);
        PersonService service = new PersonService(
                (IPersonRepository) personRepositoryMock.MockInstance);
        // This method really has "business logic" in it - the sorting of people
        List<Person> people = service.GetAllPeopleSorted();
        Assert.IsNotNull(people);
        Assert.AreEqual(2, people.Count);
        // Make sure the first person returned is the correct one
        Person p = people[0];
        Assert.AreEqual("Adams", p.LastName);
    }
    [Test]
    public void TestGetSinglePersonWithValidId()
    {
        // Tell that mock object when the "GetPerson" method is called to 
        // return a predefined Person
        personRepositoryMock.ExpectAndReturn("GetPersonById", onePerson, "1");
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);
        Person p = service.GetPerson("1");
        Assert.IsNotNull(p);
        Assert.AreEqual(p.Id, "1");
    }
    [Test]
    public void TestGetSinglePersonWithInvalidId()
    {
        // Tell that mock object when the "GetPersonById" is called with a null
        // value to throw an ArgumentException
        personRepositoryMock.ExpectAndThrow("GetPersonById", 
             new ArgumentException("Invalid person id."), null);
        PersonService service = new PersonService(
                 (IPersonRepository) personRepositoryMock.MockInstance);
        // The only way to get null is if the underlying IPersonRepository 
        // threw an ArgumentException
        Assert.IsNull(service.GetPerson(null));
    }
}

The PersonService doesn’t have a lot of logic in it, but I hope this illustrates how easily you can test various conditions using Mock Objects. It also illustrates the idea of testing early, allowing you to test some code before all of the dependent objects are implemented.

While the Mocks built into NUnit might not be the most powerful or complete Mocking library out there, it should be sufficient for most uses. I’m sure they will continue to improve them over time as well, so I look forward to them becoming more powerful (and having better documentation) in the future.

Download Code Example:
NMock C# Example Project


Understanding Domain Specific Languages as Jargon

Domain Specific Languages (DSLs) are syntaxes created to model a very specific problem domain. Domain Specific Languages are not a new concept; some people call them ‘little languages’. The Unix world has a bunch of little languages: grep, awk, sed, lex, and yacc all exhibit features of domain specific languages. They are little tools that do one thing well. In these cases they are often highly encoded and not in any sort of natural language. Modern domain specific languages should aim to be humane and literate in the language of the user.

Domain Specific Languages should be expressed in the language of the problem being solved. They are a higher level of abstraction than for loops and object instantiation. They are at the level of abstraction of the problem space. Neal Ford uses the example of “venti nonfat decaf whip latte”. What am I talking about if I use those terms? If you guessed coffee, then you know the Jargon of one coffee chain out there. The person listening to the order understands that you are ordering a decaf coffee drink of a certain size, with non-fat milk and whipped cream. There is a lot of shared context in the Jargon of the coffee drinker and the coffee order taker. This shared context sets the stage for a rich conversation without a lot of unnecessary noise. This is true of all Jargon.


# CoffeeDSL.rb
# This is the input from the user, likely read from a file
# or input through a user interface of some sort
CoffeeInput = "venti nonfat decaf whip latte"

class Coffee

  def method_missing(symbol)
    name = symbol.to_s
    if %w(venti grande).include?(name)
      @size = name
    elsif %w(whip nowhip).include?(name)
      @whip = 'whip'.eql?(name)
    elsif %w(caf decaf halfcaf).include?(name)
      @caf = name
    elsif %w(regular latte cappuccino).include?(name)
      @type = name
    elsif %w(milk nonfat).include?(name)
      @milk = name
    else
      raise ArgumentError, "Unknown coffee information: #{name}."
    end
  end

  def order
    params = ''
    params += @milk + ' ' if @milk
    params += @caf + ' ' if @caf
    params += 'whip ' if @whip
    print "Ordering coffee: #{@size} #{params}#{@type}\n"
  end

  def load
    # turn one line into multi-line "method calls"
    cleaned = CoffeeInput.gsub(/\s+/, "\n")
    self.instance_eval(cleaned)
  end
end

# this is your code which loads the DSL input and executes it
coffee = Coffee.new
coffee.load # load the user input
coffee.order # submit the order

Jargon is the terminology of a specific profession or group. Does your user community or problem space have a vocabulary? Can they express the things they want out of a system using that vocabulary or Jargon? If so, there is a very real possibility that you could use a DSL to solve some set of problems for those users.

What about field validation?

# ValidationDSL.rb
# Input from the user that would be read in
Input = <<DSL
min_length 1
max_length 10
DSL

class ValidateDSL
  def min_length(length)
    @min_length = length
  end

  def max_length(length)
    @max_length = length
  end

  def validate(field)
    if @max_length and field.length > @max_length
      return false
    end
    if @min_length and field.length < @min_length
      return false
    end
    return true
  end

  def load
    self.instance_eval(Input)
  end
end

val = ValidateDSL.new
val.load
print val.validate('foo')
print "\n"
print val.validate('')
print "\n"
print val.validate('abbbbbbbbbbbbbbbbbbbbbb')
print "\n"

How does your user community talk about a problem? Can they easily express what they intend with simple Jargon that they already know? Is that more natural for a power user than some complicated UI with buttons and checkboxes? Then you might have a good place to use a DSL.

.NET Makes Me Mad (Generics and Collections edition)

Ok, so I’ve decided I need to rant a little bit about .NET. This ends up in part being, “What I like about Java that I don’t like about C#”. I think this is fair though. It’s not like C# and .NET were developed in a vacuum. It’s not like C# is the first Object Oriented, VM run language. As such I think it’s fair to point out where they should have learned from others.

Generics Don’t Fully Support Covariant Types

Generic collections in .NET can only handle a single type of object well. You can add a sub-type to a Collection, but if you have two Collections with covariant types, you cannot mix them without jumping through hoops.

Example

Adding a single object of a covariant (sub)type works, but adding a whole Generic Collection of a covariant type does not.

abstract class Vehicle {
}
 
class Car : Vehicle {
}
 
class MotorCycle : Vehicle {
}
 
List<Vehicle> vehicles = new List<Vehicle>();
vehicles.Add(new Car());    // This is OK
 
List<MotorCycle> motorCycles = LoadMotorCycles();
vehicles.AddRange(motorCycles);    // This does not work!

To make it work with AddRange, you have to perform a manual conversion

public IList<Vehicle> AllVehicles
{
    get 
    {
         List<Vehicle> vehicleList = new List<Vehicle>();
         vehicleList
             .AddRange(AllCars.ConvertAll(
                 new Converter<Car, Vehicle>(ToVehicle)));
         vehicleList
             .AddRange(AllCycles.ConvertAll(
                 new Converter<MotorCycle, Vehicle>(ToVehicle)));
         return vehicleList ;
    }
}
 
private static Vehicle ToVehicle<T>(T vehic) where T : Vehicle
{
    return vehic;
}

What’s Good About .NET Generics

.NET Generic type information is available at runtime. In Java, Generics are implemented via erasure: all of the type checking is done at compile time, the compiler inserts explicit casts into the code for you, and at runtime the code looks the same as if generics were never used. .NET chose not to use erasure but to make the type information available at runtime. This is generally more efficient and less prone to errors or problems with reflection. So good work there.
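
The contrast is easy to demonstrate from the Java side. This little sketch shows erasure in action: at runtime both lists report the exact same class because the element type parameter is gone, whereas in .NET typeof(List<string>) and typeof(List<int>) are distinct runtime types.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();

        // After erasure, both are plain ArrayList at runtime;
        // the element type parameter no longer exists.
        System.out.println(strings.getClass() == numbers.getClass()); // prints "true"
    }
}
```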

Note: The Java folks did this so as not to break backwards compatibility. I think that major revisions should be allowed to break backwards compatibility when there are compelling reasons to do so.

.NET Collections

Are collections classes such a mysterious art?

.NET does not have a Set or a Bag. These are generally useful and very common collections. A Set guarantees the uniqueness of its elements behind a List-like interface. A Bag can contain any objects; the unique thing about it is that it keeps a count of how many times the same object was added.

Example of a Bag

Bag fruitBag = new Bag();
Banana b = new Banana();
Apple a = new Apple();
fruitBag.Add(b);
fruitBag.Add(b);
fruitBag.Add(a);
 
int bananas = fruitBag.GetCount(b);

The SortedList and the SortedDictionary both have a Dictionary interface. Why wouldn’t the SortedList have, uhh maybe, a List interface?

The IList interface is so anemic as to be basically worthless. IList does not even have an AddRange method (or an AddAll) to merge the values of one collection into another. It’s so limited that it makes it very hard to return interfaces from classes, which is good practice for encapsulating the implementation details of methods.

What Do You Think?

Do you have things about .NET that annoy you? If so, leave a comment and let me know what it is.

Hibernate Query Translators

I’ve recently been doing some performance testing and tuning on an application. It makes use of Hibernate for data access and ORM and Spring to configure and wire everything together. As I was looking at the configuration, I came upon the fact that we were using the ClassicQueryTranslatorFactory. The job of the Query Translator is to turn HQL queries into SQL queries. The ClassicQueryTranslatorFactory is the version that was included in Hibernate 2. In Hibernate 3 they created a new Query Translator, the ASTQueryTranslatorFactory. This Query Translator makes use of Antlr, a Java-based parser generator in the vein of lex and yacc.

I switched out the ClassicQueryTranslatorFactory for the ASTQueryTranslatorFactory and saw an immediate performance boost of about 15% for the application. I also noticed that fewer queries were being generated for the application’s page loads. Of course this application uses quite a bit of HQL, so if you do not make use of HQL extensively you might not see the same benefits.
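
The switch itself is a single Hibernate property. As a sketch (the bean id and session factory layout here are illustrative, but the property key and translator class names are the standard Hibernate 3 ones), a Spring-managed LocalSessionFactoryBean might configure it like this:

```xml
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="hibernateProperties">
        <props>
            <!-- Hibernate 3's Antlr-based translator -->
            <prop key="hibernate.query.factory_class">org.hibernate.hql.ast.ASTQueryTranslatorFactory</prop>
            <!-- or, for the Hibernate 2 behavior:
                 org.hibernate.hql.classic.ClassicQueryTranslatorFactory -->
        </props>
    </property>
</bean>
```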

I have yet to see any documentation or any other evidence to support the claim that the newer ASTQueryTranslatorFactory would offer better performance, but in my case it seems like it has. Has anyone else noticed this behavior?

Continuous Integration Revisited

I had a chance to install and play with JetBrains’ new Team City beta today. Team City at its most basic is a Continuous Integration server. Continuous Integration (CI) systems are used to help manage a team’s software development process. Every time a developer checks in code, the CI server will check out the code from the source control system, compile it, run unit tests and perform static analysis and code coverage. This is a great way to back up your development process, ensuring that you always have a working source tree and that development is never held up by mistakes.

There are other CI systems available, for example, in the past, I’ve used and liked CruiseControl. It is a very capable system that uses Ant build files to compile and test your code. It’s also Open Source and free.

I want to look at a couple of things that I really liked about Team City that make it shine.

Team City Configuration

The first part is the installation and configuration. It’s dead simple.

Team City is distributed as a WAR file, so if you already have a Servlet container like Tomcat up and running it’s as easy as dropping the WAR file into the webapps directory and deploying it through the management interface.

After the installation all of the configuration of a project can be done directly through the web interface, which is very nice compared to hand editing XML files that is often the case in other CI systems like CruiseControl.

The only gotcha is that in addition to Team City itself, you have to install one or more Build Agents (this is really a feature; more on that later). I skipped this step at first and of course nothing would compile. Directly from the web application you can run either a Java Web Start or a Windows Installer application that will install a Build Agent on the computer. In my case I used the same computer where I was installing Team City; in a more intensive environment, it might make sense to install it on a different machine.

Multiple Build Agents

While it’s an extra step at the beginning, having multiple Build Agents is a really great idea. You can use many machines to build your projects, spreading the load around to different machines. This is especially great if you have a big project with lots of unit tests (you do have lots of unit tests, right?) or just a bunch of projects and configurations that you would like to build. As software developers know, compiling code and running tests is a pretty intensive activity. Spreading the load around means that you can do more concurrent builds. This is a great way to decouple where things get compiled and tested from the centralized management application, so you don’t have to run multiple instances of the CI system on different machines just to load balance.

Multi-Platform

Not only is the server itself multi-platform, thanks to it being written in Java, but Team City can compile and test both Java and .NET code. It can use a number of different configurations for compiling and testing. In Java, it can currently use Ant, Maven and Idea build files. On the .NET side of things, it can use MSBuild files and Visual Studio 2005 Solution files.

In addition to compiling code for these platforms, it can use either JUnit or NUnit for running Unit Tests as well. I really like this aspect of it because it offers shops that do more than one language a way to use the same systems for all of their development. At work we do both .NET development and Java development for our clients, so this makes a lot of sense for us. At least in the consulting world, this is a big win.

Delayed Commits

Team City has IDE integration built in. Currently it only supports Idea (why would you need anything else?) but they have already announced that for the 1.5 release they will support Eclipse, Netbeans and Visual Studio. There are a number of features that you need to have the Plugin installed to use, so for the short term Team City will be an OK solution, but not a great solution for non-Idea users.

Delayed Commits is quite possibly the best idea in Team City. It basically allows you to “commit” your changes to the server, but only have them checked into the source control repository if they compile and test correctly. No longer do you have to wait around for the CI server to report back that your checkin was successful before going to lunch or going home for the night. You can now Delay Commit your code and if everything works out it will be checked in for all the other developers to pull down, if not no one will be the wiser and you will fix any issues when you get back to your workstation.

Buttery

The UI is also very nice. They effectively use AJAX techniques to have a really nice user experience.

While this product is in Beta still and I haven’t had a chance to use it in anger yet, I think it’s a very promising tool. JetBrains has a great track record in delivering excellent products. I’m a huge fan of Idea and I can’t do .NET development without ReSharper. I think they have another hit on their hands with Team City. I haven’t heard anything about pricing yet, but the JetBrains stuff seems to usually be affordable.

As an aside, I have no intention of taking anything away from CruiseControl. It is still a great application as well (did I mention it’s free?).

(Jet Brains people, Yes, I’m willing to take paid endorsements :))