Check Multiple Mercurial Repositories for Incoming Changes

Currently I have a whole bunch of Mercurial repositories in a directory. All of these are cloned from a central repository that the team pushes their changes to. I generally like to keep my local repositories up-to-date so that I can review changes. Manually running hg incoming -R some_directory on 20 different projects is a lot of work, so I automated it with a simple shell script.

This script will run incoming (or outgoing) on all of the local repositories and print the results to the console. Then I can manually sync the ones that have changed if I want.

I called this file hgcheckall.sh and run it like: ./hgcheckall.sh incoming

#!/bin/bash

# Find all the directories that are mercurial repos
dirs=(`find . -name ".hg"`)
# Remove the /.hg from the path and that's the base repo dir
merc_dirs=( "${dirs[@]//\/.hg/}" )

case $1 in
    incoming)
        for indir in ${merc_dirs[@]}; do
            echo "Checking: ${indir}"
            hg -R "$indir" incoming
        done
        ;;
    outgoing)
        for outdir in ${merc_dirs[@]}; do
            echo "Checking: ${outdir}"
            hg -R "$outdir" outgoing
        done
        ;;
    *)
        echo "Usage: hgcheckall.sh [incoming|outgoing]"
        ;;
esac

I guess the next major improvement would be to capture the output and then automatically sync the ones that have changed, but I haven’t gotten around to that yet.
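
If I did, a minimal sketch might look something like this (the script name is just a suggestion; it relies on hg incoming exiting with status 0 only when there are incoming changes):

#!/bin/bash
# hgsyncall.sh - sketch of that next step: pull and update any repo with incoming changes

dirs=(`find . -name ".hg"`)
merc_dirs=( "${dirs[@]//\/.hg/}" )

for dir in ${merc_dirs[@]}; do
    echo "Checking: ${dir}"
    # hg incoming exits 0 when there are incoming changesets, 1 when there are none
    if hg -R "$dir" incoming > /dev/null; then
        echo "Changes found, syncing ${dir}"
        hg -R "$dir" pull -u
    fi
done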

Using Ruby Subversion Bindings to Create Repositories

Subversion has a default Web UI that is served up by Apache if you run Subversion that way. It is pretty boring and read-only. Then there are things like WebSVN that make it less boring, but still read-only. I got curious about what it would take to make something even less boring and NOT read-only. Why not allow me to create a new repository on the server? Why not allow me to create a new default directory structure for a new project in a repository all through a web interface?

Parent Path Setup

All of my repositories are set up under an Apache SVNParentPath. That directive makes Apache assume that every directory under a given path is an SVN repository. That structure makes it easy to deal with multiple repositories and secure them with a single security scheme. Working from that assumption, it is also easy to write code that lists the existing repositories and creates new ones in a known location.
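
For reference, the Apache side of that is just a Location block with SVNParentPath pointing at the directory that holds all of the repositories (the paths and authentication setup here are only an example):

<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    AuthType Basic
    AuthName "Subversion Repositories"
    AuthUserFile /etc/svn-auth-file
    Require valid-user
</Location>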

SvnAdmin With Ruby Subversion Bindings

Subversion provides language bindings for a number of different languages (Java, Python, Perl, PHP and Ruby) in addition to the native C libraries. Using these bindings it becomes fairly easy to deal with Subversion. The only hiccup will be dealing with the apparent lack of documentation for the code. So be prepared to do some exploration, digging and reading of the code.

I chose to try this using Ruby because it was quick and easy and it was a language I was already familiar with.

First you need to know how to create a new repository and open an existing repository. Fortunately those are simple, one-line operations:

Svn::Repos.create(repos_path, {})
repos = Svn::Repos.open(repos_path)

There was nothing special (from what I could tell) that would allow you to determine if a repository already existed, so I just created a simple function using the Ruby File operations to check whether a directory already existed. This code would let me determine if I needed to create a new repository or open an existing one:

def repository_exists?(repos_path)
  File.directory?(repos_path)
end

Now that I have a repository open, I wanted to build a project structure using the default conventions I use for Subversion projects. My convention is to have a repository named after a client, top-level directories named for the client's projects, and the standard trunk, branches and tags within each project. Depending on the kind of work you do, that convention may or may not make sense for you.

With that decided, I created the code to write that structure into a repository. One thing I found is that the Subversion bindings let you make a set of changes within a transaction so that they are all recorded as a single commit. I thought this was a good thing, so I performed these operations as a transaction:


txn = repos.fs.transaction

# create the top-level, project based directory
txn.root.make_dir(project_name)

# create the trunk, tags and branches for the new project
%w(trunk tags branches).each do |dir|
  txn.root.make_dir("#{project_name}/#{dir}")
end

repos.commit(txn)

Finally I put all of those things together into a class. The class had the concept of being initialized to a base Parent Path so all of the operations would know to start from that location:

require "svn/repos"

class SvnAdmin
def initialize(parent_path)
@base_path = parent_path
end

# Provides a list of directory entries. path must be a directory.
def ls(client_name, path="/", revision=nil)
repos_path = File.join(@base_path, client_name)
repos = ensure_repository(repos_path)

entries = repos.fs.root(revision).dir_entries(path)
entries.keys
end

def create(client_name, project_name)
repos_path = File.join(@base_path, client_name)
repos = ensure_repository(repos_path)

txn = repos.fs.transaction
txn.root.make_dir(project_name)
%w(trunk tags branches).each do |dir|
txn.root.make_dir("#{project_name}/#{dir}")
end

repos.commit(txn)
end

private
def ensure_repository(repos_path)
if ! repository_exists?(repos_path)
Svn::Repos.create(repos_path, {})
end
repos = Svn::Repos.open(repos_path)
end

def repository_exists?(repos_path)
File.directory?(repos_path)
end

end
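
Using the class is then just a couple of calls (the parent path, client and project names here are only examples):

admin = SvnAdmin.new("/var/svn")
admin.create("acme", "website")   # creates /var/svn/acme if needed, adds website/trunk, tags, branches
admin.ls("acme")                  # => ["website"]
admin.ls("acme", "/website")      # => ["trunk", "tags", "branches"]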

SvnAdmin from Rails

Now that I had some simple code to create new repositories or add a new project to an existing repository I decided to wrap it in a simple Rails application that would allow me to create repositories using a web-based interface.

To start with, I'm not going to use a database or any ActiveRecord classes in this project (which you might do if you wanted authentication or something else), so I disabled ActiveRecord in config/environment.rb:
config.frameworks -= [ :active_record ]

Then I created a ReposController to manage the Subversion repositories. This controller contains a couple of simple actions:

  1. An index action to list the existing repositories (directories)
  2. A new action to display a form to enter the client and project names
  3. A create action to use the SvnAdmin class to create a new repository and/or project


require "svnadmin"

class ReposController < ApplicationController
  layout 'default'

  def index
    @dirs = Dir.new(SVN_PARENT_PATH).entries.sort.reject { |p| p == "." or p == ".." }
  end

  def new
  end

  def create
    svn = SvnAdmin.new(SVN_PARENT_PATH)
    repos = params[:repos]
    respond_to do |format|
      begin
        svn.create(repos[:client_name], repos[:project_name])
        flash[:notice] = "Successfully created."
        format.html { redirect_to :action => "index" }
        format.xml { head :ok }
      rescue
        flash[:error] = "Failed to create structure."
        format.html { redirect_to :action => "index" }
        format.xml { render :xml => flash[:error], :status => :unprocessable_entity }
      end
    end
  end
end
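
To wire those actions up I just needed a resource route in config/routes.rb (this assumes you want the standard RESTful paths that map.resources provides):

map.resources :repos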

You can also easily create a route and a ProjectsController that allows you to see all of the projects within a repository.

The route in config/routes.rb is simply:

map.connect ':repos/projects/',
:controller => 'projects',
:action => 'index'

And the ProjectsController looks up the :repos param to open the proper repository and list the top-level directories with it:

require "svnadmin"

class ProjectsController < ApplicationController
  layout 'default'

  def index
    repos_path = params[:repos]
    svn = SvnAdmin.new(SVN_PARENT_PATH)
    @projects = svn.ls(repos_path)
  end
end

Hopefully that will help you handle your Subversion administration. It should let you code up your conventions so that they are followed whenever a new repository is created or a new project is started.

DRY your CruiseControl.NET Configuration

Don’t Repeat Yourself (DRY) is one of the principles of good software development. The idea is that there should ideally be one and only one “source of knowledge” for a particular fact or calculation in a system. Basically it comes down to not copying-and-pasting code around or duplicating code if at all possible. The advantages of this are many.

Advantages of DRY

  • There will be less code to maintain
  • If a bug is found, it should only have to be fixed in one place
  • If an algorithm or process is changed, it only needs to be changed in one place
  • More of the code should become reusable because as you do this you will parameterize methods to make them flexible for more cases

If it’s good for code isn’t it good for other things like configuration? Why yes it is.

Using CruiseControl.NET Configuration Builder

The Configuration Preprocessor allows you to define string properties and full blocks of XML to use for substitution and replacement. To start using the Configuration Preprocessor, you add an XML namespace, xmlns:cb="urn:ccnet.config.builder", to your document to tell the config parser that you plan to do this.

From there you can define a simple property like:

<cb:define client="example-client"/>

Or you can make it a full block of XML:

<cb:define name="svn-block">
    <sourcecontrol type="svn">
        <trunkUrl>http://svn.example.com/svn/$(client)/$(project)/trunk</trunkUrl>
        <workingDirectory>D:\Builds-Net\projects\$(client)\$(project)\trunk</workingDirectory>
        <executable>svn.exe</executable>
        <autoGetSource>true</autoGetSource>
    </sourcecontrol>
</cb:define>

Defining Reusable Blocks

Using these ideas I wanted to come up with a templated approach that would allow me to share configuration among multiple projects. That way, if I added new statistics or changed the layout of my build server, I would only have to change it in a single place, keeping things DRY. It also encourages more consistency across multiple projects, making things easier to understand.

So, I started defining some reusable blocks in the main ccnet.config file which you can see below. The exact details will depend on your configuration of course.

Full Example of the Main ccnet.config

<cruisecontrol xmlns:cb="urn:ccnet.config.builder">

    <!-- Shared source control block -->
    <cb:define name="svn-block">
        <sourcecontrol type="svn">
            <trunkUrl>http://svn.example.com/svn/$(client)/$(project)/trunk</trunkUrl>
            <workingDirectory>D:\Builds-Net\projects\$(client)\$(project)\trunk</workingDirectory>
            <executable>svn.exe</executable>
            <autoGetSource>true</autoGetSource>
        </sourcecontrol>
    </cb:define>

    <!-- Shared MSBuild task for .NET 2.0 projects -->
    <cb:define name="msbuild-2.0-block">
        <msbuild>
            <executable>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\MSBuild.exe</executable>
            <workingDirectory>D:\Builds-Net\projects\$(client)\$(project)\trunk</workingDirectory>
            <projectFile>build.proj</projectFile>
            <buildArgs>$(build-args)</buildArgs>
            <targets>$(build-targets)</targets>
            <timeout>600</timeout>
            <logger>D:\Program Files\CruiseControl.NET\server\Rodemeyer.MsBuildToCCNet.dll</logger>
        </msbuild>
    </cb:define>

    <!-- Shared MSBuild task for .NET 3.5 projects -->
    <cb:define name="msbuild-3.5-block">
        <msbuild>
            <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
            <workingDirectory>D:\Builds-Net\projects\$(client)\$(project)\trunk</workingDirectory>
            <projectFile>build.proj</projectFile>
            <buildArgs>$(build-args)</buildArgs>
            <targets>$(build-targets)</targets>
            <timeout>600</timeout>
            <logger>D:\Program Files\CruiseControl.NET\server\Rodemeyer.MsBuildToCCNet.dll</logger>
        </msbuild>
    </cb:define>

    <!-- Merge the unit test, coverage and FxCop output into the build results -->
    <cb:define name="merge-block">
        <merge>
            <files>
                <file>D:\Builds-Net\projects\$(client)\$(project)\trunk\*.Test.xml</file>
                <file>D:\Builds-Net\projects\$(client)\$(project)\trunk\*.CoverageMerge.xml</file>
                <file>D:\Builds-Net\projects\$(client)\$(project)\trunk\*.CoverageSummary.xml</file>
                <file>D:\Builds-Net\projects\$(client)\$(project)\trunk\*.FxCop.xml</file>
            </files>
        </merge>
    </cb:define>

    <!-- Shared logging configuration -->
    <cb:define name="loggers-block">
        <xmllogger>
            <logDir>D:\Builds-Net\projects\$(client)\$(project)\logs</logDir>
        </xmllogger>
    </cb:define>

    <!-- One include per project; each file holds that project's configuration -->
    <cb:include href="projects/client1-project1.ccnet.config"/>
    <cb:include href="projects/client1-project2.ccnet.config"/>
    <cb:include href="projects/client2-project1.ccnet.config"/>

</cruisecontrol>

At the end of the file you can see the cb:include references. Those are one-line includes to include the configuration of each project. This makes things easier to manage, I think, because you only have to look at the individual project configuration.

Using Reusable Blocks in Individual Configuration Files

From there I need to make use of those defined blocks in each individual project file. The first thing I needed to do was to set the parameters that I had defined as simple string replacements in the reusable blocks. Normally you would do that with cb:define as I showed above. But the trick is that you can only have one property with a given name defined. If you include multiple project configurations that doesn't work. What does work is using cb:scope definitions. This allows for a value to be defined only within a specific scope.

<cb:scope client="example-client"
          project="example-project"
          build-args="/p:Configuration=Release"
          build-targets="Test">
    ...
</cb:scope>

From there you just need to start including the blocks that you defined in the main ccnet.config within the scope block.

Full Example of Project Configuration

<project name="example-client-example-project">
    <cb:scope client="example-client"
              project="example-project"
              build-args="/p:Configuration=Release"
              build-targets="Test">
        <cb:svn-block/>
        <tasks>
            <cb:msbuild-2.0-block/>
        </tasks>
        <publishers>
            <cb:merge-block/>
            <cb:loggers-block/>
            <email from="build@example.com" mailhost="mail.example.com" includeDetails="true">
                <users>
                    <user name="Developer One" group="devs" address="dev1@example.com"/>
                </users>
                <groups>
                    <group name="devs" notification="change"/>
                </groups>
            </email>
        </publishers>
    </cb:scope>
</project>

As you can see, the only one I didn’t template out was the email block because that depends on the developers working on each project.

Have fun bringing simplicity and consistency to your CruiseControl.NET configuration!

For the full details see the CruiseControl.NET Configuration Preprocessor documentation.

MSBuild Task for PartCover

I continue to lament the dearth of options for test coverage tools in the .NET world.

In the Java world you have open source tools like Emma and Cobertura that are widely used and supported (and many more) as well as proprietary tools like Clover available.

In .NET we have an open source NCover on SourceForge that requires you to do odd code instrumentation and seems essentially dead, another NCover which is proprietary and costs money, and PartCover which is open source but doesn't seem very active.

Don’t get me wrong, NCover.org is a good option if you are willing to spend the money for it. But with a team of 30+ and a CI server, I’m not sure if I want to drop $10k on it. (NCover up until version 1.5.8 was Free Software (GPL) before it was closed. Who has the source and why haven’t you forked it yet?)


If you're not willing to pay, that basically leaves PartCover. But of course you want to integrate your code coverage with your automated build. There was no support for MSBuild out of the box, so I decided to build it.

Creating an MSBuild Task

I need to do 2 things to run PartCover:

  1. Register the native PartCover.CorDriver.dll
  2. Execute the PartCover.exe with the proper options

Register Native DLL

To see the details on how to register a native DLL using .NET code, see my earlier post, Register and Unregister COM DLL from .NET Code.
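
If you don't want to click through, here is a rough sketch of a Registrar class like the one used in the Execute override below. It assumes the LoadLibrary/DllRegisterServer approach from that post, and that unregistering on Dispose is the behavior you want:

using System;
using System.Runtime.InteropServices;

public class Registrar : IDisposable
{
    private IntPtr hLib;

    private delegate int PointerToMethodInvoker();

    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool FreeLibrary(IntPtr hModule);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    private static extern IntPtr GetProcAddress(IntPtr hModule, string procName);

    public Registrar(string filePath)
    {
        hLib = LoadLibrary(filePath);
        if (hLib == IntPtr.Zero)
            throw new Exception("Unable to load library: " + filePath);
    }

    // Calls the DllRegisterServer entry point exported by the native COM DLL
    public void RegisterComDLL()
    {
        CallPointerMethod("DllRegisterServer");
    }

    // Calls the DllUnregisterServer entry point exported by the native COM DLL
    public void UnRegisterComDLL()
    {
        CallPointerMethod("DllUnregisterServer");
    }

    private void CallPointerMethod(string methodName)
    {
        IntPtr entryPoint = GetProcAddress(hLib, methodName);
        if (entryPoint == IntPtr.Zero)
            throw new Exception("Unable to find entry point: " + methodName);
        PointerToMethodInvoker invoke =
            (PointerToMethodInvoker)Marshal.GetDelegateForFunctionPointer(
                entryPoint, typeof(PointerToMethodInvoker));
        invoke();
    }

    // Assumption: unregister and release the library when the using block ends
    public void Dispose()
    {
        if (hLib != IntPtr.Zero)
        {
            UnRegisterComDLL();
            FreeLibrary(hLib);
            hLib = IntPtr.Zero;
        }
    }
}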

Execute PartCover

The MSBuild framework provides a ToolTask base class whose whole purpose is for executing external command line tools. I used this as the base of the task.

1. ToolName

First you override the ToolName property to return the name of the EXE to run. Nothing special here, it’s just the executable name.


protected override string ToolName
{
    get { return "PartCover.exe"; }
}

2. Properties

Next, to start building the task, define all of the settings that a user will need to set to execute it. You then create those as properties on the class and MSBuild will set them for you. Start with the simple things that someone will need to pass to get the tool to execute properly. You can build from there for other properties. If possible, give the properties sane defaults so that people don't have to override them in their build file.


// ...

/// <summary>
/// The application to execute to get the coverage results.
/// Generally this will be your unit testing exe.
/// </summary>
public string Target
{
    get { return _target; }
    set { _target = value; }
}

/// <summary>
/// The arguments to pass to the executable
/// </summary>
public string TargetArgs
{
    get { return _targetArgs; }
    set { _targetArgs = value; }
}

/// <summary>
/// The directory to run the target application from
/// </summary>
public string WorkingDirectory
{
    get { return _workingDirectory; }
    set { _workingDirectory = value; }
}

// ...

3. Command Arguments

Then you need to override the GenerateCommandLineCommands() method. The whole purpose of this method is to construct any command line parameters that need to be passed to the ToolName command using the properties defined in the task.


protected override string GenerateCommandLineCommands()
{
    StringBuilder builder = new StringBuilder();
    AppendIfPresent(builder, "--target", Target);
    AppendIfPresent(builder, "--target-work-dir", WorkingDirectory);
    AppendIfPresent(builder, "--target-args", QuoteIfNeeded(TargetArgs));
    AppendIfPresent(builder, "--output", Output);

    AppendMultipleItemsTo(builder, "--include", Include);
    AppendMultipleItemsTo(builder, "--exclude", Exclude);

    Log.LogCommandLine(builder.ToString());

    return builder.ToString();
}
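
The AppendIfPresent, AppendMultipleItemsTo and QuoteIfNeeded helpers are just string building. A sketch of what they might look like (this assumes Include and Exclude are ITaskItem[] properties and that System.Text and Microsoft.Build.Framework are imported; adjust to your own property types):

private void AppendIfPresent(StringBuilder builder, string flag, string value)
{
    if (!String.IsNullOrEmpty(value))
    {
        builder.AppendFormat(" {0} {1}", flag, value);
    }
}

private void AppendMultipleItemsTo(StringBuilder builder, string flag, ITaskItem[] items)
{
    if (items == null)
    {
        return;
    }
    foreach (ITaskItem item in items)
    {
        builder.AppendFormat(" {0} {1}", flag, item.ItemSpec);
    }
}

private static string QuoteIfNeeded(string value)
{
    if (String.IsNullOrEmpty(value) || !value.Contains(" "))
    {
        return value;
    }
    return "\"" + value + "\"";
}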

4. Execute

Finally, if you have anything special to do, you can override the Execute() method. In this case, I wanted to handle the registering and de-registering of the native CorDriver DLL. Make sure that you call the base.Execute() method so that the ToolTask can do the work that it needs to do.


public override bool Execute()
{
    string corDriverPath = Path.Combine(ToolPath, CorDriverName);
    Log.LogMessage("CoreDriver: {0}", corDriverPath);
    using (Registrar registrar = new Registrar(corDriverPath))
    {
        registrar.RegisterComDLL();
        return base.Execute();
    }
}

To see the whole thing, download the files at the bottom of this post.

How to Use PartCover with MSBuild

Now that you have a custom task you need to create a Target in your MSBuild file to execute the task. The task name, paths and include/exclude filters below are only examples and will need to be adjusted to match your own project:

<UsingTask TaskName="PartCover.MSBuild.PartCover"
           AssemblyFile="$(ToolsDir)\PartCover.MSBuild.dll" />

<Target Name="Coverage" DependsOnTargets="Build">
    <PartCover ToolPath="$(ToolsDir)\PartCover"
               Target="$(ToolsDir)\nunit\nunit-console.exe"
               TargetArgs="MyProject.Test.dll /xml=MyProject.Test.xml"
               WorkingDirectory="$(MSBuildProjectDirectory)"
               Output="partcover.xml"
               Include="[MyProject]*"
               Exclude="[MyProject.Test]*" />
</Target>

Download the code:
PartCover MSbuild.zip

Good luck and I hope someone else finds this useful.

Tracking Project Metrics

How do you track the health of your software projects? Ideally you could come up with a few easy-to-collect metrics and have your Continuous Integration system generate the information and maybe even graph it over time. What we track is going to be based on a set of beliefs and assumptions, so I think we should make those clear.

My Software Metrics Beliefs and Assumptions

  • The simplest code that solves the problem is the best. Simple does not mean rote or repetitive. It means well designed, well abstracted, well factored.
  • Unit Testing improves the quality of code.
  • Overly complex code, code that is not well factored, “big” code is hard to unit test.
  • The metrics need to be easy to interpret and easy to gather or I won’t do it.

Based on those beliefs and assumptions we have defined the kinds of things we care about. We want simple, small classes and methods. We want classes that fit the Single Responsibility Principle. We want unit test coverage. And we want to know when we deviate from those things.

Inevitably this won’t tell you the whole picture of a project. Some deviation is inevitable as well (we’re not perfect). But this is giving us a picture into a project that would let us look at “hot spots” and determine if they are things we want to deal with. It will never tell you if the system does what a user really wants. It will never fully tell you if a project will be successful.

The Metrics I Came Up With

  • Unit Test Coverage – How many code paths are being exercised in our tests.
  • Cyclomatic Complexity – Number of methods over 10, 20, 40
  • Lines of Code – General size information
  • Methods over a certain size – Number of methods over 15, 30, 45 lines
  • Classes over a certain size – Number of classes over 150, 300, 600 lines
  • Afferent/Efferent Coupling – Dead code and code that does too much

Tools

I’m currently thinking mostly about .NET projects because at work we do a lot of small to mid-size projects using .NET. Many of the tools already exist and are Open Source in the Java world and many Java Continuous Integration servers support calculating some of those metrics for you already. So you’re probably in pretty good shape if you’re doing Java development.

FxCop

Out-of-the-box FxCop doesn't really give you many of these things. But it does provide a way for you to fairly easily write your own rules using its API.

For example, to calculate Cyclomatic Complexity you can implement a class that extends BaseIntrospectionRule and overrides VisitBranch and VisitSwitchInstruction.


public override void VisitBranch(Branch branch)
{
    if (branch.Condition != null)
    {
        Level++;
    }
    base.VisitBranch(branch);
}

public override void VisitSwitchInstruction(SwitchInstruction switchInstruction)
{
    Level += switchInstruction.Targets.Count;
    base.VisitSwitchInstruction(switchInstruction);
}

Coverage

PartCover is an open source code coverage tool for .NET. It does the job but does not have what I’d call any “wow” factor.

NCover is a proprietary coverage tool. High on wow factor, but expensive. NCover was once free and open source and gained a good following, but they switched and closed it and made the decision to charge $150 – $300 a license for it depending on the version.

(I lament the state of the .NET ecosystem with regard to tooling. Either an OK open source version or an expensive commercial version is not a real choice. There are so many good options in the Unit Testing space, but not in the coverage space.)

NDepend Mini-Review

Disclosure: I received a free copy of NDepend from the creator. I would like to think that didn’t influence this at all, but wanted to make that known.

One of the tools that I've been looking at for .NET projects is NDepend. It seems to cover all of the cases that I mentioned above except for code coverage (it integrates with NCover, but I haven't looked at that yet). It has a really cool query language, which looks a lot like SQL, that lets you customize any of the existing metrics it tracks or write your own (though it ships with so many that I have mostly just customized existing ones). In practice the sheer number of metrics can seem quite overwhelming. I think the right thing to do for most projects is to pick the handful that you care about and limit it to that.

NDepend comes with NAnt and MSBuild tasks that will let you integrate it into your build automation. It also comes with an XSL stylesheet to integrate the NDepend output into CruiseControl.NET for reporting purposes.

Some things you might run into:

  • C# is the primary supported language (e.g. there is no support for code-level Cyclomatic Complexity for non-C# languages, though IL-level CC is still supported). Only a problem if you are forced to do VB.NET, of course, or use another CLR language.
  • A single license is $400 and 20 licenses or more are over $200 each. So price could be a concern to many people. I’m thinking it might make the most sense to run this on a CI server, so in that case you would probably only need a small number of licenses.
  • It integrates with code coverage tools, but currently only NCover. See the previous comments about the cost of NCover combined with the cost of NDepend if price is an issue.

What Else?

What does everyone else think? What do you care about and what are the right metrics for keeping track of the health of a software project?

Capistrano and Ferret DRB

This is a bit of a followup to my previous post on Capistrano with Git and Passenger. I decided to use Ferret via the acts_as_ferret (AAF) plugin. Ferret is a full-text search engine inspired by Apache's Lucene but written in Ruby.

Basically Ferret and Lucene keep a full-text index outside of the database that allows them to quickly perform full-text searches and find the identifiers of rows in your database. Then you can go get those objects out of the database. It's pretty slick.

Ferret uses DRb as a means of supporting multiple-concurrent clients and for scaling across multiple machines. You really don’t need to know much about DRb to use AAF, but you do need to run the ferret DRb server in your production environment. Which gets us to…

Automating The Starting and Stopping of ferret_server

A few lines of code in your Capistrano deploy.rb and you are off and running.

before "deploy:start" do
run "#{current_path}/script/ferret_server -e production start"
end

after "deploy:stop" do
run "#{current_path}/script/ferret_server -e production stop"
end

after 'deploy:restart' do
run "cd #{current_path} && ./script/ferret_server -e production stop"
run "cd #{current_path} && ./script/ferret_server -e production start"
end

Except it doesn’t work. I ended up with some errors like:
could not execute command
no such file to load -- /usr/bin/../config/environment

It turns out that it's not Capistrano's fault.

Acts As Ferret server_manager.rb

In the file vendor/plugins/acts_as_ferret/lib/server_manager.rb there is a line that sets up where to look for its environment information. For some reason this is the default:

# require(File.join(File.dirname(__FILE__), '../../../../config/environment'))
require(File.join(File.dirname(ENV['_']), '../config/environment'))

If you notice, there is a line commented out. It just so happens that uncommenting that line and commenting out the other fixed the issue for me. It ends up that ENV['_'] points to the base path of the executable, and that's /usr/bin/env. And that doesn't work. I'm not sure why that's the default behavior.

Anyway, it’s easily fixed:

require(File.join(File.dirname(__FILE__), '../../../../config/environment'))
# require(File.join(File.dirname(ENV['_']), '../config/environment'))

With that fix in place, the Capistrano deployment will restart the Ferret DRb server when you deploy your application.

Update
According to John in the comments below you can fix the AAF problem without changing the code as well. Just add default_run_options[:shell] = false to your Capistrano script and that will take care of it.
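
In other words, something like this near the top of your deploy.rb (I have not tried it myself, but it is reported to work):

# Don't wrap remote commands in a login shell so ferret_server can find its environment
default_run_options[:shell] = false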

Package Grails DBMigrations in your WAR File

The Grails DBMigrate Plugin is a handy way to give you control over the generation of your database if you don’t want Grails to auto-munge your schema. It works fine in development, but when you create a WAR for deployment on another machine the Migrations are not packaged in the WAR so the database can’t be created or updated.

Packaging DBMigrate Scripts for Deployment

Here’s the quick solution to package your DBMigrations in your WAR file.

Just add this to your Config.groovy file and when you generate the WAR the migrations will be included and found by the Plugin.

grails.war.resources = { stagingDir ->
    copy(todir: "${stagingDir}/WEB-INF/classes/grails-app/migrations") {
        fileset(dir: "grails-app/migrations")
    }
}
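
If you want to double-check that it worked, build the WAR and look for the migrations inside it (the WAR file name here is just an example):

grails war
unzip -l myapp-0.1.war | grep grails-app/migrations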

Aside

The only problem with DBMigrate is that it doesn't handle creating or updating Stored Procedures and Functions, which means if you want to use those – for Reporting purposes, let's say – you're out of luck currently.

Database Migrations for .NET

One of the more difficult things to manage in software projects is often changing a database schema over time. On the projects that I work on, we don’t usually have DBAs who manage the schema so it is left up to the developers to figure out. The other thing you have to manage is applying changes to the database in such a way that you don’t disrupt the work of other developers on your team. We need the change to go in at the same time as the code so that Continuous Integration can work.

Migrations

While I don't know if they were invented there, migrations seem to have been popularized by Ruby on Rails. Rails is a database-centric framework that infers the properties of your domain from the schema of your database. For that reason it makes sense that they came up with a very good way of managing changes to the schema over time. MigratorDotNet brings the same migration idea to .NET. These are some example migrations to give you an idea of the basics of creating a schema.

001_AddAddressTable.cs:

using Migrator.Framework;
using System.Data;

[Migration(1)]
public class AddAddressTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Address",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("street", DbType.String, 50),
            new Column("city", DbType.String, 50),
            new Column("state", DbType.StringFixedLength, 2),
            new Column("postal_code", DbType.String, 10)
        );
    }

    public override void Down()
    {
        Database.RemoveTable("Address");
    }
}

002_AddAddressColumns.cs:

using Migrator.Framework;
using System.Data;

[Migration(2)]
public class AddAddressColumns : Migration
{
    public override void Up()
    {
        Database.AddColumn("Address", new Column("street2", DbType.String, 50));
        Database.AddColumn("Address", new Column("street3", DbType.String, 50));
    }

    public override void Down()
    {
        Database.RemoveColumn("Address", "street2");
        Database.RemoveColumn("Address", "street3");
    }
}

003_AddPersonTable.cs:

using Migrator.Framework;
using System.Data;

[Migration(3)]
public class AddPersonTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Person",
            new Column("id", DbType.Int32, ColumnProperty.PrimaryKey),
            new Column("first_name", DbType.String, 50),
            new Column("last_name", DbType.String, 50),
            new Column("address_id", DbType.Int32, ColumnProperty.Unsigned)
        );
        Database.AddForeignKey("FK_PERSON_ADDRESS", "Person", "address_id", "Address", "id");
    }

    public override void Down()
    {
        Database.RemoveTable("Person");
    }
}

Run Your Migrations

The best way to run your migrations is to integrate them into your build automation tool of choice. If you are not using one, now is the time.

MigratorDotNet supports MSBuild and NAnt.

MSBuild:

<UsingTask TaskName="Migrator.MSBuild.Migrate"
           AssemblyFile="lib\Migrator.MSBuild.dll" />

<Target Name="Migrate" DependsOnTargets="Build">
    <Migrate Provider="SqlServer"
             ConnectionString="Database=MyDB;Data Source=localhost;"
             Migrations="bin/MyProject.dll" />
</Target>

NAnt:

<loadtasks assembly="lib/Migrator.NAnt.dll" />

<target name="migrate" description="Migrate the database" depends="build">
    <migrate provider="SqlServer"
             connectionstring="Database=MyDB;Data Source=localhost;"
             migrations="bin/MyProject.dll" />
</target>

So You Want to Migrate?

Some more documentation and examples are available on the MigratorDotNet project site. Some of the changes shown here are still in an experimental branch that is in the process of being merged.


MigratorDotNet is a continuation of code started by Marc-André Cournoyer and Nick Hemsley.

Start a New Branch on your Remote Git Repository

Git is a distributed version control system so it allows you to create branches locally and commit against them. It also supports a more centralized repository model. When using a centralized repository you can push changes to it so that others can pull them more easily. I have a tendency to work on multiple computers. Because of this, I like to use a centralized repository to track the branches as I work on them. That way no matter what machine I’m on, I can still get at my branches.

The Workflow

My workflow is generally something like this:

  1. Create a remote branch
  2. Create a local branch that tracks it
  3. Work, Test, Commit (repeat) – this is all local
  4. Push (pushes commits to the remote repository)

Git commands can be a bit esoteric at times and I can't always seem to remember how to create a remote git branch and then start working on new code. There also seem to be multiple ways of doing it. I'm documenting the way that seems to work for me so that I can remember it. Maybe it will help someone else too.

Creating a Remote Branch

  1. Create the remote branch
    git push origin origin:refs/heads/new_feature_name

  2. Make sure everything is up-to-date
    git fetch origin

  3. Then you can see that the branch is created.
    git branch -r

This should show ‘origin/new_feature_name’

  4. Start tracking the new branch
    git checkout --track -b new_feature_name origin/new_feature_name

This means that when you do pulls that it will get the latest from that branch as well.

  5. Make sure everything is up-to-date
    git pull

Cleaning up Mistakes

If you make a mistake you can always delete the remote branch
git push origin :heads/new_feature_name
(Ok Git’ers – that has to be the least intuitive command ever.)

Use the Branch from Another Location

When you get to another computer or clone the git repository to a new computer, then you just need to start tracking the new branch again.
git branch -r to show all the remote branches
git checkout --track -b new_branch origin/new_feature_name to start tracking the new branch

Automate it A Bit

Luckily, that's a pretty easy thing to automate with a small shell script:

#!/bin/sh
# git-create-branch <branch_name>

if [ $# -ne 1 ]; then
    echo 1>&2 "Usage: $0 branch_name"
    exit 127
fi

branch_name=$1
git push origin origin:refs/heads/${branch_name}
git fetch origin
git checkout --track -b ${branch_name} origin/${branch_name}
git pull
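
Drop that somewhere on your PATH as git-create-branch and creating a tracked remote branch becomes a one-liner (git will even pick it up as a subcommand, since it finds git-* scripts on the PATH):

git-create-branch new_feature_name
# or equivalently
git create-branch new_feature_name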

For further help, you might want to check out: