Java 7 Code Coverage with Gradle and Jacoco


Steven Dicks’ post on JaCoCo and Gradle is a great start to integrating JaCoCo and Gradle; this is a small iteration on top of that work.

Java 7 Code Coverage

The state of code coverage took a serious turn for the worse when Java 7 came out. The byte-code changes in Java 7 effectively made Emma and Cobertura defunct; they will not work with Java 7 constructs. Fortunately there is a new player in town called JaCoCo (for Java Code Coverage). JaCoCo is the successor to Emma, built on the knowledge the Eclipse and Emma teams have gained over the years about how best to do code coverage, and it works with Java 7 out of the box.

The advantage of using established tools is that they are generally well supported across your toolchain. JaCoCo is fairly new, so support in Gradle isn’t so smooth. Fortunately Steven’s post got me started down the right path. The one thing that I wanted to improve right away was to use transitive dependency declarations as opposed to having local jar files in my source repository. JaCoCo is now available in the Maven repos, so we can do that. One thing to note is that the default artifacts in the Maven repo are Eclipse plugins, so we need to reference the “runtime” classifier in our dependency.

The Gradle Script

configurations {
    codeCoverage
    codeCoverageAnt
}

dependencies {
    // Versions elided in the original; fill in the JaCoCo version you want
    codeCoverage 'org.jacoco:org.jacoco.agent:'
    codeCoverageAnt 'org.jacoco:org.jacoco.ant:'
}

test {
    systemProperties = System.properties
    jvmArgs "-javaagent:${configurations.codeCoverage.asPath}=destfile=${project.buildDir.path}/coverage-results/jacoco.exec,sessionid=HSServ,append=false"
}

task generateCoverageReport << {
    ant {
        taskdef(name: 'jacocoreport', classname: 'org.jacoco.ant.ReportTask',
                classpath: configurations.codeCoverageAnt.asPath)
        mkdir dir: "build/reports/coverage"
        jacocoreport {
            executiondata {
                fileset(dir: "build/coverage-results") {
                    include name: 'jacoco.exec'
                }
            }
            structure(name: project.name) {
                classfiles {
                    fileset(dir: "build/classes/main") {
                        exclude name: 'org/ifxforum/**/*'
                        exclude name: 'org/gca/euronet/generated**/*'
                    }
                }
                sourcefiles(encoding: 'CP1252') {
                    fileset dir: "src/main/java"
                }
            }
            xml  destfile: "build/reports/coverage/jacoco.xml"
            html destdir:  "build/reports/coverage"
        }
    }
}

A Few Details

The magic is in the jvmArgs of the test block. JaCoCo is run as a Java agent, which uses the runtime instrumentation feature added in Java 6 to inspect the running code. Extra arguments can be added to JaCoCo there, including things like excludes to leave specific classes out of coverage. The available parameters are the same as the Maven JaCoCo parameters.

The generateCoverageReport task converts the jacoco.exec binary into html files for human consumption. If you're just integrating with a CI tool, like Jenkins, then you probably don't need this, but it's handy for local use and to dig into the details of what's covered.

Loose Ends

One problem that I ran into was referencing project paths like the project.buildDir from within an Ant task. Hopefully someone will come along and let me know how that's done.

Check Multiple Mercurial Repositories for Incoming Changes

Currently I have a whole bunch of Mercurial repositories in a directory. All of these are cloned from a central repository that the team pushes their changes to. I like to generally keep my local repositories up-to-date so that I can review changes. Manually running hg incoming -R some_directory on 20 different projects is a lot of work. So I automated it with a simple shell script.

This script will run incoming (or outgoing) on all of the local repositories and print the results to the console. Then I can manually sync the ones that have changed if I want.

I called this file and run it like: ./ incoming


#!/bin/bash

# Find all the directories that are mercurial repos
dirs=(`find . -name ".hg"`)
# Remove the /.hg from the path and that's the base repo dir
merc_dirs=( "${dirs[@]//\/.hg/}" )

case $1 in
incoming)
    for indir in "${merc_dirs[@]}"; do
        echo "Checking: ${indir}"
        hg -R "$indir" incoming
    done
    ;;
outgoing)
    for outdir in "${merc_dirs[@]}"; do
        echo "Checking: ${outdir}"
        hg -R "$outdir" outgoing
    done
    ;;
*)
    echo "Usage: $0 [incoming|outgoing]"
    ;;
esac

I guess the next major improvement would be to capture the output and then automatically sync the ones that have changed, but I haven’t gotten around to that yet.
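The array-substitution trick used above to turn a path like ./client/.hg into ./client is plain Bash parameter expansion, and can be checked in isolation without Mercurial (the directory names here are made up):

```shell
#!/bin/bash
# Simulated output of `find . -name ".hg"` (paths are examples)
dirs=( "./projA/.hg" "./projB/.hg" )
# Strip the /.hg suffix from every element to get the repository roots
merc_dirs=( "${dirs[@]//\/.hg/}" )
printf '%s\n' "${merc_dirs[@]}"
# prints:
# ./projA
# ./projB
```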

DRY your CruiseControl.NET Configuration

Don’t Repeat Yourself (DRY) is one of the principles of good software development. The idea is that there should ideally be one and only one “source of knowledge” for a particular fact or calculation in a system. Basically it comes down to not copying-and-pasting code around or duplicating code if at all possible. The advantages of this are many.

Advantages of DRY

  • There will be less code to maintain
  • If a bug is found, it should only have to be fixed in one place
  • If an algorithm or process is changed, it only needs to be changed in one place
  • More of the code should become reusable because as you do this you will parameterize methods to make them flexible for more cases

If it’s good for code isn’t it good for other things like configuration? Why yes it is.

Using CruiseControl.NET Configuration Builder

The Configuration Preprocessor allows you to define string properties and full blocks of XML to use for substitution and replacement. To start using the Configuration Preprocessor, you add the XML namespace xmlns:cb="urn:ccnet.config.builder" to your document to tell the config parser that you plan to do this.

From there you can define a simple property, such as the source-control path $(client)/$(project)/trunk or the working directory D:\Builds-Net\projects\$(client)\$(project)\trunk, or you can make a full block of XML available for reuse.

Defining Reusable Blocks

Using these ideas I wanted to come up with a templated approach that would allow me to share configuration among multiple projects. That way, if I added new statistics or change the layout of my build server, I would only have to change it in a single place. Thus keeping things DRY. It also encourages more consistency across multiple projects making things easier to understand.

So, I started defining some reusable blocks in the main ccnet.config file which you can see below. The exact details will depend on your configuration of course.

Full Example of ccnet.config

(Example omitted. It defined the Subversion path $(client)/$(project)/trunk and the working directory D:\Builds-Net\projects\$(client)\$(project)\trunk as properties, plus a reusable msbuild block that builds build.proj with $(build-args) and logs through Rodemeyer.MsBuildToCCNet.dll from the CruiseControl.NET server directory.)

At the end of the file you can see the cb:include references. Those are one-line includes to include the configuration of each project. This makes things easier to manage, I think, because you only have to look at the individual project configuration.

Using Reusable Blocks in Individual Configuration Files

From there I need to make use of those defined blocks in an individual project file. The first thing I needed to do was to set the parameters that I had defined as simple string replacements in the reusable blocks. Normally you would do that with cb:define as I showed above, but the trick is that you can only have one property with a given name defined; if you include multiple project configurations, that doesn’t work. What does work is using cb:scope definitions, which allow a value to be defined only within a specific scope.


From there you just need to start including the blocks that you defined in the main ccnet.config within the scope block.

Full Example of Project Configuration

As you can see, the only one I didn’t template out was the email block because that depends on the developers working on each project.

Have fun bringing simplicity and consistency to your Cruise Control.NET configuration!

For the full details see the CruiseControl.NET Configuration Preprocessor documentation.

Windows Subversion Maintenance Scripts

Previously I wrote about the Subversion Maintenance Scripts that I use for doing things like backups and upgrades. They were bash shell scripts that automated dealing with multiple Subversion repositories. The secret was that those guys were being run using Cygwin on Windows. Recently we got a new, more powerful server to run our Virtuals on. I decided to rewrite the scripts using native Windows Batch files so that I wouldn’t have to install Cygwin again. Hopefully someone else will find this useful too.

Windows Batch Script for Creating a Default Repository Structure

I work for a custom software development consulting firm so we have a lot of different clients. We don’t want to mix different clients’ code so we create a repository per client. We often get multiple projects per client though (yeah, they like us) so we go ahead and create a project under the client. We also like to use the trunk, tags and branches directories by default.

With all of these rules: Automate!


@echo off

REM Check command line args to make sure a customer and project name were passed
IF [%1]==[] GOTO usage
IF [%2]==[] GOTO usage

REM If svnadmin is not in your path, uncomment the following
REM PATH=%PATH%;"D:\Program Files\Subversion\bin"

REM Set up variables with the directories
REM The original SET lines were elided, so these reconstruct the intent
SET CUR_DIR=%~dp0
SET CUR_DIR=%CUR_DIR:\=/%
SET SVN_DIR=%CUR_DIR%%1
SET PROJ_DIR=file:///%SVN_DIR%/%2

echo Creating directory in %SVN_DIR%

svnadmin create "%SVN_DIR%"
svn -m "Create default structure." mkdir %PROJ_DIR% %PROJ_DIR%/trunk %PROJ_DIR%/tags %PROJ_DIR%/branches
GOTO end

:usage
echo Usage: %0 ^<customer^> ^<project^>

:end
echo Done

Some of the interesting pieces:
%~dp0 gives you the directory that the current script lives in.
%CUR_DIR:\=/% replaces the backslash (\) with forward slashes (/) so that the SVN file:// URL will work correctly.

Windows Batch Script to Dump Your Subversion Repositories

The next thing you might want to do is to dump all of your repositories so that you can back them up or upgrade SVN or move to a new server. With just one repository it’s easy to do by hand. With 20+ repositories it’s much nicer to run a script and then go do something else.


@echo off

REM Example paths; the originals were elided
SET REPOS_DIR=D:\svn-repos
SET DUMP_DIR=D:\svn-dump

echo Creating directory %DUMP_DIR%
mkdir %DUMP_DIR%

echo Dumping...

FOR /f %%i IN ('dir /B /A:D %REPOS_DIR%\*') DO (
    echo Dumping %REPOS_DIR%\%%i to %DUMP_DIR%\%%i.dump
    svnadmin dump %REPOS_DIR%\%%i > %DUMP_DIR%\%%i.dump
)

Some of the interesting pieces:
FOR /f %%i IN ('dir /B /A:D %REPOS_DIR%\*') DO is the part that finds each directory (in this case an SVN repository). Then for each repository you run the svnadmin dump command and send the dump file to a different directory.

Windows Batch Script to Load Your Subversion Repositories

After you’ve dumped all of your repositories, you need to restore them (because you’ve upgraded or moved to a new server) by doing the reverse of the dump script.


@echo off

REM Example paths; the originals were elided
SET REPOS_DIR=D:\svn-repos
SET DUMP_DIR=D:\svn-dump

echo Restoring....

REM %%~ni becomes the dump name without .dump

FOR /f %%i IN ('dir /B %DUMP_DIR%\*.dump') DO (
    echo Restoring %DUMP_DIR%\%%i to %REPOS_DIR%\%%~ni
    svnadmin create %REPOS_DIR%\%%~ni
    svnadmin load %REPOS_DIR%\%%~ni < %DUMP_DIR%\%%i
)

Some of the interesting pieces:
FOR in this case is the same as the dump, except it's looking for all the dump files.
%%i holds the name of the dump file (e.g. repo.dump). %%~ni strips off the extension to give you just the name of the file (e.g. repo).

Now you can manage your repositories with ease if you're stuck on a Windows machine.

Capistrano and Ferret DRB

This is a bit of a followup to my previous post on Capistrano with Git and Passenger. I decided to use Ferret via the acts_as_ferret (AAF) plugin. Ferret is a full-text search engine inspired by Apache’s Lucene but written in Ruby.

Basically Ferret and Lucene keep a full-text index outside of the database that allows them to quickly perform full-text searches and find the identifiers of rows in your database. Then you can go get those objects out of the database. It’s pretty slick.

Ferret uses DRb as a means of supporting multiple-concurrent clients and for scaling across multiple machines. You really don’t need to know much about DRb to use AAF, but you do need to run the ferret DRb server in your production environment. Which gets us to…

Automating The Starting and Stopping of ferret_server

A few lines of code in your Capistrano deploy.rb and you are off and running.

before "deploy:start" do
  run "#{current_path}/script/ferret_server -e production start"
end

after "deploy:stop" do
  run "#{current_path}/script/ferret_server -e production stop"
end

after 'deploy:restart' do
  run "cd #{current_path} && ./script/ferret_server -e production stop"
  run "cd #{current_path} && ./script/ferret_server -e production start"
end

Except it doesn’t work. I ended up with some errors like:
could not execute command
no such file to load — /usr/bin/../config/environment

It also ends up that it’s not Capistrano’s fault.

Acts As Ferret server_manager.rb

In the file vendor/plugins/acts_as_ferret/lib/server_manager.rb there is a line that sets up where to look for its environment information. For some reason this is the default:

# require(File.join(File.dirname(__FILE__), '../../../../config/environment'))
require(File.join(File.dirname(ENV['_']), '../config/environment'))

If you notice, there is a line commented out. It just so happens that uncommenting that line and commenting out the other fixed the issue for me. It ends up that ENV['_'] points to the path of the executable that launched the process, which in this case is /usr/bin/env, and that doesn’t work. I’m not sure why that’s the default behavior.
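A quick shell check shows the problem: the dirname of /usr/bin/env is /usr/bin, so the relative '../config/environment' ends up resolving to /usr/config/environment instead of the Rails application's config directory.

```shell
#!/bin/sh
# ENV['_'] typically holds /usr/bin/env when the process was started
# through `env`; File.dirname of that path is just /usr/bin.
dirname /usr/bin/env
# prints: /usr/bin
```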

Anyway, it’s easily fixed:

require(File.join(File.dirname(__FILE__), '../../../../config/environment'))
# require(File.join(File.dirname(ENV['_']), '../config/environment'))

With that fix in place, the Capistrano deployment will restart the Ferret DRb server when you deploy your application.

According to John in the comments below you can fix the AAF problem without changing the code as well. Just add default_run_options[:shell] = false to your Capistrano script and that will take care of it.

Database Migrations for .NET

One of the more difficult things to manage in software projects is often changing a database schema over time. On the projects that I work on, we don’t usually have DBAs who manage the schema so it is left up to the developers to figure out. The other thing you have to manage is applying changes to the database in such a way that you don’t disrupt the work of other developers on your team. We need the change to go in at the same time as the code so that Continuous Integration can work.


While I don’t know if they were invented there, migrations seem to have been popularized by Ruby on Rails. Rails is a database-centric framework that infers the properties of your domain from the schema of your database. For that reason it makes sense that they came up with a very good way of managing schema changes. These are some example migrations to give you an idea of the basics of creating a schema.


using Migrator.Framework;
using System.Data;

[Migration(1)]
public class AddAddressTable : Migration
{
    override public void Up()
    {
        Database.AddTable("Address",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("street", DbType.String, 50),
            new Column("city", DbType.String, 50),
            new Column("state", DbType.StringFixedLength, 2),
            new Column("postal_code", DbType.String, 10)
        );
    }

    override public void Down()
    {
        Database.RemoveTable("Address");
    }
}

using Migrator.Framework;
using System.Data;

[Migration(2)]
public class AddAddressColumns : Migration
{
    public override void Up()
    {
        Database.AddColumn("Address", new Column("street2", DbType.String, 50));
        Database.AddColumn("Address", new Column("street3", DbType.String, 50));
    }

    public override void Down()
    {
        Database.RemoveColumn("Address", "street2");
        Database.RemoveColumn("Address", "street3");
    }
}


using Migrator.Framework;
using System.Data;

[Migration(3)]
public class AddPersonTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Person",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("first_name", DbType.String, 50),
            new Column("last_name", DbType.String, 50),
            new Column("address_id", DbType.Int32, ColumnProperty.Unsigned)
        );
        Database.AddForeignKey("FK_PERSON_ADDRESS", "Person", "address_id", "Address", "id");
    }

    public override void Down()
    {
        Database.RemoveTable("Person");
    }
}
Run Your Migrations

The best way to run your migrations will be to integrate it into your build automation tool of choice. If you are not using one, now is the time.

MigratorDotNet supports MSBuild and NAnt.



So You Want to Migrate?

Some more documentation and examples are available on the MigratorDotNet site. Some of the changes represented here are still in an experimental branch that is in the process of being merged.

MigratorDotNet is a continuation of code started by Marc-André Cournoyer and Nick Hemsley.

Start a New Branch on your Remote Git Repository

Git is a distributed version control system so it allows you to create branches locally and commit against them. It also supports a more centralized repository model. When using a centralized repository you can push changes to it so that others can pull them more easily. I have a tendency to work on multiple computers. Because of this, I like to use a centralized repository to track the branches as I work on them. That way no matter what machine I’m on, I can still get at my branches.

The Workflow

My workflow is generally something like this:

  1. Create a remote branch
  2. Create a local branch that tracks it
  3. Work, Test, Commit (repeat) – this is all local
  4. Push (pushes commits to the remote repository)

Git commands can be a bit esoteric at times and I can’t always seem to remember how to create a remote git branch and then start working on new code. There also seem to be multiple ways of doing it. I’m documenting the way that seems to work for me so that I can remember it. Maybe it will help someone else too.

Creating a Remote Branch

  1. Create the remote branch
    git push origin origin:refs/heads/new_feature_name

  2. Make sure everything is up-to-date
    git fetch origin

  3. Then you can see that the branch is created.
    git branch -r

This should show ‘origin/new_feature_name’

  4. Start tracking the new branch
    git checkout --track -b new_feature_name origin/new_feature_name

This means that when you do pulls that it will get the latest from that branch as well.

  5. Make sure everything is up-to-date
    git pull
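The whole sequence can be dry-run locally against a throwaway bare repository standing in for the central one. All the paths and the branch name here are examples; the push uses HEAD as the source ref, which does the same thing as the post's command once origin/HEAD exists.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# A bare repository standing in for the central server
git init --bare -q "$tmp/central.git"

# A working clone with an initial commit
git clone -q "$tmp/central.git" "$tmp/work"
cd "$tmp/work"
git config user.email "dev@example.com"
git config user.name  "Dev"
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD

# 1. Create the remote branch
git push -q origin HEAD:refs/heads/new_feature_name
# 2. Make sure everything is up-to-date
git fetch -q origin
# 3. See that the branch was created (lists origin/new_feature_name)
git branch -r
# 4. Start tracking the new branch
git checkout -q --track -b new_feature_name origin/new_feature_name
# 5. Make sure everything is up-to-date
git pull -q
```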

Cleaning up Mistakes

If you make a mistake you can always delete the remote branch
git push origin :refs/heads/new_feature_name
(Ok Git’ers – that has to be the least intuitive command ever.)

Use the Branch from Another Location

When you get to another computer or clone the git repository to a new computer, then you just need to start tracking the new branch again.
git branch -r to show all the remote branches
git checkout --track -b new_branch origin/new_feature_name to start tracking the new branch

Automate it A Bit

Luckily, that’s a pretty easy thing to automate with a small shell script.



#!/bin/sh

if [ $# -ne 1 ]; then
    echo 1>&2 "Usage: $0 branch_name"
    exit 127
fi

branch_name=$1
git push origin origin:refs/heads/${branch_name}
git fetch origin
git checkout --track -b ${branch_name} origin/${branch_name}
git pull


CruiseControl With a Specific Version of Grails

Continuous Integration is a good practice in software development. It helps catch problems early to prevent them from becoming bigger problems later. It helps to reinforce other practices like frequent checkins and unit testing as well. I’m using CruiseControl (CC) for Continuous Integration at the moment.

One of the things about Grails is that it is really run through a series of scripts and classes that set up the environment. The Ant scripts really just delegate the work to those Grails scripts. To run properly, the GRAILS_HOME environment variable needs to be set so that Grails can find the proper classes, etc. This is not a problem if you are running a single Grails application in Continuous Integration. The issue arises when you want to run multiple projects against different versions of Grails. A project I’m working on uncovered a bug in the 1.0.2 release of Grails. The code worked fine on 1.0.1, so I wanted to run against that specific version of Grails.

It ends up this is not too hard with a few small changes to your Ant build.xml file.

First you declare some properties that hold the paths to the Grails directory and the grails executable (the .bat version if your CC server is on Windows).

Next you declare a custom target to execute on the CC server, referencing the ‘cc-grails’ property declared above. The key is that you must override GRAILS_HOME when you execute the grails script.
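For comparison, the same idea from a Unix shell (the Grails install path here is an example, not from the post):

```shell
#!/bin/sh
# Sketch: pin the Grails version for a single build by overriding
# GRAILS_HOME just for this invocation. The path is an example.
GRAILS_HOME=/opt/grails-1.0.1
export GRAILS_HOME
echo "Building with GRAILS_HOME=$GRAILS_HOME"
# "$GRAILS_HOME/bin/grails" test-app   # run where Grails is installed
```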

Now the Continuous Integration of your Grails app runs against a specific version of Grails.

The Full build.xml

Test Automation Seminar

A long, long time ago in a land far away, I worked with my friend Frank Cohen to help him build the first version of a Web Service and Web Application test tool that was called TestMaker. Since then, Frank has made all kinds of improvements turning it into a really nice, graphical, scriptable testing tool. Frank has written books on Fast SOA as well as Java Testing and Design.

Now Frank is putting on a Test Automation Seminar covering a broad array of topics. He’ll be talking about his own test-automation tool TestMaker, naturally, but he’ll also be talking about others including Selenium. The seminar is going to cover a lot of currently hot technologies and techniques. In addition to general web-application testing, they are going to get into Ajax/Web 2.0 testing, REST and SOAP.

If you ever wanted to know more about functional test automation or performance and scalability testing, this might be a good, hands-on seminar to get you jump started. Check it out.

From Frank:

PushToTest is hosting a 2-day seminar on open-source test automation. We will be covering load and scalability tests of Web applications, Ajax, Web 2.0, Service Oriented Architecture (SOA), and functional testing of Windows and Java desktop applications. We will teach you TestMaker, soapUI, Glassbox, Selenium, and several other open-source tools.

Automated Subversion Tagging With MSBuild

I’ve written previously about using MSBuild With Nunit as well as a bit of a Manifesto on Relentless Build Automation. I believe that automating the build and deployment process is a necessary step to ensure the reliable delivery of quality software.

Release Management

One of the things that we as software developers have to do is regularly make releases, either to QA or to a customer. When we make releases we need to be able to continue to develop the software to add new features and fix bugs. Tagging is a safety process that we use in a version control system to allow us to easily get to the code that we used to build a specific release. When you Tag, you can always rebuild that release. So, if weeks or months down the road you need to fix a critical bug, but don’t want to release new features, you can get back to the Tag, create a Branch, fix the bug and release the new build to your users.
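Under the hood a Subversion tag is just a cheap server-side copy. From a shell, the equivalent of what such a task automates looks like this (the repository URLs are examples):

```shell
#!/bin/sh
# Tag the current trunk as REL-1.2 (URLs are examples)
svn copy -m "Tagging release 1.2" \
    http://svn.example.com/repos/myproject/trunk \
    http://svn.example.com/repos/myproject/tags/REL-1.2
```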

How Do We Ensure We Can Recreate Releases

How can we ensure that we will be able to recreate a release that we make to either QA or a customer? Use Automation to Tag your builds when you create them of course.

I’ve contributed a new SvnCopy task to the MSBuild Community Tasks project, which was just accepted and committed. It is currently in the Subversion repository for the project but should be available shortly in an official build. This will allow you to easily automate the process of tagging or branching your builds when you release. Subversion uses the copy metaphor for both branching and tagging operations, which is different from some other version control systems.


You can then integrate the process of creating a tag every time you generate a build by tying together tasks with dependencies. In the example below, the GenerateTestBuild target calls GenerateCabFiles and Tag to automatically build the installer and tag Subversion with the current revision number.

Hopefully this will help you get started on some more automation.

MSBuild Community Tasks version 1.2 has been released containing this code. You can get it here.
