DRYing Grails Criteria Queries

When you’re writing code, Don’t Repeat Yourself. Now say that 5 times. *rimshot*

One of the things that I find myself repeating a lot in many business apps is queries. It’s common to have a rule or filter that applies to many different cases. I came across such a situation recently and wanted to figure out a way to share that filter across many different queries. This is what I came up with for keeping those Criteria DRY.

To start with, I’ll use an example of an Article. This could be a blog post or a newspaper article. One of the rules of the system is that Articles need to be published before they are visible to end users. Because of this seemingly simple rule, every time we query for Articles we will need to check the published flag. If you have a lot of queries, that ends up being a lot of repetition.

Here’s our example domain class:

package net.zorched.domain

class Article {
    String name
    String slug
    String category

    boolean published

    static constraints = {
        name(blank: false)
        slug(nullable: true)
    }
}

Now we need to add a query that will retrieve our domain instance by its slug. (A slug is a publishing term for the short name given to an article; on the web it has become a common search engine optimization technique where a readable title is used in the URL instead of an artificial ID.) To perform that query we might write something like this on the Article class:

static getBySlug(String slug) {
    withCriteria(uniqueResult: true) {
        and {
            eq('published', true)
            eq('slug', slug)
        }
    }
}

We want to query based on the slug, but we also want to only allow a published Article to be shown. This would allow us to unpublish an article if necessary. Without the published filter, people could still view an article through a leaked link even after it was unpublished.

Next we decide we want to list all of the Articles in a particular category, so we write something like this, again filtering by the published flag.

static findAllByCategory(String category) {
    withCriteria {
        and {
            eq('published', true)
            eq('category', category)
        }
    }
}

Two simple examples like this might not be that big of a deal. But you can easily see how this would grow if you added more custom queries or had more complicated filtering logic. Another common case would be if you had the same filter across many different domain objects. (What if the Article had attachments and comments, all of which needed their own approval?) What you need is a way to share that logic among multiple withCriteria calls.

The trick to this is understanding how withCriteria and createCriteria work in GORM. They are both implemented using a custom class called HibernateCriteriaBuilder. That class invokes the closures that you pass to it on itself. Sounds confusing? Basically, the elements in the closure of your criteria queries get executed as if they were called on an instance of HibernateCriteriaBuilder.

e.g.

withCriteria {
    eq('a', 1)
    like('b', '%foo%')
}

would be the equivalent of calling something like:


def builder = new HibernateCriteriaBuilder(...)
builder.eq('a', 1)
builder.like('b', '%foo%')

That little bit of knowledge allows you to reach into your metaprogramming bag of tricks and add new calls to HibernateCriteriaBuilder. Every class in Groovy has a metaClass that is used to extend that class. In this case we’ll add a Closure that will combine our criteria with other criteria like so:

HibernateCriteriaBuilder.metaClass.published = { Closure c ->
    and {
        eq('published', true)
        c()
    }
}

This ANDs together our eq call with all of the other criteria in the passed-in closure.
Now we can put the whole thing together into a domain class with a reusable filter.


package net.zorched.domain

import grails.orm.HibernateCriteriaBuilder

class Article {

    static {
        // monkey patch HibernateCriteriaBuilder to have a reusable 'published' filter
        HibernateCriteriaBuilder.metaClass.published = { Closure c ->
            and {
                eq('published', true)
                c()
            }
        }
    }

    String name
    String slug
    String category

    boolean published
    Date datePublished

    def publish() {
        published = true
        datePublished = new Date()
    }

    static def createSlug(n) {
        return n.replaceAll('[^A-Za-z0-9\\s]', '')
                .replaceAll('\\s', '-')
                .toLowerCase()
    }

    static findAllApprovedByCategory(String category) {
        withCriteria {
            published {
                eq('category', category)
            }
        }
    }

    static getBySlug(String slug) {
        withCriteria(uniqueResult: true) {
            published {
                eq('slug', slug)
            }
        }
    }

    static constraints = {
        name(blank: false)
        datePublished(nullable: true)
        slug(nullable: true)
    }
}
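
Since the filter is patched onto HibernateCriteriaBuilder itself, any other domain class with a published flag can reuse it. A minimal sketch (this Comment class and its fields are hypothetical):

class Comment {
    String author
    String body
    boolean published

    static findAllPublishedByAuthor(String author) {
        withCriteria {
            published {
                eq('author', author)
            }
        }
    }
}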

And there you have it. Do you have any other techniques that can be used to DRY criteria?

Using Quartz.NET, Spring.NET and NHibernate to run Scheduled Tasks in ASP.NET

Running scheduled tasks in web applications is not normally a straightforward thing to do. Web applications are built to respond to requests from users. This request/response lifecycle doesn’t match well to a long-running thread that wakes up to run a task every 10 minutes or at 2 AM every day.

ASP.NET Scheduled Task Options

Using ASP.NET running on Windows, there are a number of different options that you could choose from to implement this. Windows’ built-in Scheduled Tasks can be used to periodically execute a program. Or a Windows Service could be constructed that uses a Timer or a Thread to periodically do the work. Both Scheduled Tasks and Windows Services require you to write a standalone program. You can share DLLs from your Web application, but in the end it is a separate app that needs to be maintained. Another option if you go this route is to turn the Scheduled Task or Service into a simple Web Service or REST client that can call into your Web application but doesn’t need any knowledge of the jobs themselves.

Another option is an Open Source tool called Quartz.NET. Quartz.NET is based on the popular Java scheduled task runner called (not surprisingly) Quartz. Quartz.NET is a full-featured system that manages Jobs that do the work and Triggers that allow you to specify when you want those jobs run. It can run in your web application itself or as an external service.

The simplest approach to get started is to run Quartz.NET directly in your Web application as a process in IIS. The downside to this is that IIS will periodically recycle its processes and won’t necessarily start a new one until a new web request is made. Assuming you can deal with this nondeterministic behavior, running in an IIS process will be fine. It also creates a relatively easy path for migrating to the external service process at a later point if need be.

I’m an ALT.NET kind of .NET developer, so I like to use tools like NHibernate for ORM and Spring.NET for Dependency Injection, AOP and generally wiring everything together. The good news is that Spring.NET supports Quartz.NET through its Scheduling API. Start with that for some basic information on using Quartz.NET with Spring. The bad news is that the documentation is a bit thin and the examples are basic. I attempt to remedy that in part here.

Using Quartz.NET, NHibernate and Spring.NET to run Scheduled Tasks

The goal is to integrate an existing Spring managed object like a Service or a DAL that uses NHibernate with a Quartz Job that will run on a periodic basis.

To start with you need to create an interface for your service and then implement that interface. The implementation I’ll leave to you and your problem, but you can imagine the example below using one or more NHibernate DALs to look up Users, find their email preferences, etc.

Implementing Services and Jobs


public interface IEmailService
{
    void SendEveryoneEmails();
}

When implementing your Job you need to know a few details about how Quartz works:

  1. The first thing to understand is that if you are going to use the AdoJobStore to keep your Jobs and Triggers in the database, the Job needs to be Serializable. Generally speaking, your DAL classes, NHibernate sessions and the like are not going to be serializable. To get around that, we make the properties set-only so that they will not be serialized when the Job is stored in the database.
  2. The second thing to understand is that your Job will not be running in the context of the Web application or a request, so anything you rely on to set up connections (such as an OpenSessionInView filter) will not apply to Jobs run by Quartz. This means you will need to set up your own NHibernate session for all of the dependent objects to use. Luckily Spring provides some help with this in the SessionScope class. This is the same base class as is used by the OpenSessionInView filter.

Using the Service interface you created, you then create a Job that Quartz.NET can run. Quartz.NET provides the IJob interface that you can implement. Spring.NET provides a base class that implements that interface, QuartzJobObject, which helps deal with injecting dependencies.


using NHibernate;
using Quartz;
using Spring.Data.NHibernate.Support;
using Spring.Scheduling.Quartz;

public class CustomJob : QuartzJobObject
{
    private ISessionFactory sessionFactory;
    private IEmailService emailService;

    // Set-only so they don't get serialized
    public ISessionFactory SessionFactory { set { sessionFactory = value; } }
    public IEmailService EmailService { set { emailService = value; } }

    protected override void ExecuteInternal(JobExecutionContext ctx)
    {
        // SessionScope is the same mechanism used by OpenSessionInView
        using (var ss = new SessionScope(sessionFactory, true))
        {
            emailService.SendEveryoneEmails();
            ss.Close();
        }
    }
}

Wiring Services and Jobs Together with Spring

Now that you have your classes created you need to wire everything together using Spring.

First we have our DALs and Services wired into Spring with something like the following:
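
A minimal sketch of what those object definitions might look like (the UserDao and EmailService types and the NHibernateSessionFactory id here are placeholders, not names from a real project):

<objects xmlns="http://www.springframework.net">
  <object id="UserDao" type="MyApp.Data.UserDao, MyApp">
    <property name="SessionFactory" ref="NHibernateSessionFactory"/>
  </object>

  <object id="EmailService" type="MyApp.Services.EmailService, MyApp">
    <property name="UserDao" ref="UserDao"/>
  </object>
</objects>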

Next you create a Job definition that references the Type of the Job you just created. The type is referenced instead of an instance because the lifecycle of the Job is managed by Quartz itself. Quartz deals with instantiation, serialization and deserialization of the object. This is a bit different than what you might normally expect from a Spring-managed service.
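
With Spring.NET’s Quartz integration the Job definition might look something like this sketch (object ids and assembly names are assumptions that may vary with your Spring.NET version):

<object id="CustomJob" type="Spring.Scheduling.Quartz.JobDetailObject, Spring.Scheduling.Quartz">
  <property name="JobType" value="MyApp.Jobs.CustomJob, MyApp"/>
</object>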

Once your Job is defined, you create a Trigger that will run the Job based on your rules. Quartz (and Spring) offer two types of Triggers: SimpleTriggers and CronTriggers. SimpleTriggers allow you to specify things like “Run this task every 30 minutes”. CronTriggers follow a crontab format for specifying when Jobs should run. The CronTrigger is very flexible, but could be a little confusing if you aren’t familiar with cron. It’s worth getting to know for that flexibility though.
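
A sketch of a CronTrigger definition that runs the Job at 2 AM every day (ids are placeholders matching the Job sketch above):

<object id="CustomJobTrigger" type="Spring.Scheduling.Quartz.CronTriggerObject, Spring.Scheduling.Quartz">
  <property name="JobDetail" ref="CustomJob"/>
  <property name="CronExpressionString" value="0 0 2 * * ?"/>
</object>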

The last piece that needs to be done is the integration of the SchedulerFactory. The SchedulerFactory brings together Jobs and Triggers with all of the other configuration needed to run Quartz.NET jobs.

A couple of things to understand about configuring the SchedulerFactory:

  1. Specifying a DbProvider property (where DbProvider is the db:provider setup used by your NHibernate configuration) tells the SchedulerFactory to use the AdoJobStore and store the Jobs and Trigger information in the database. The tables will need to exist already, and Quartz provides a script for this task.
  2. Running on SQL Server requires a slight change to Quartz. It uses a locking mechanism to prevent Jobs from running concurrently. For some reason the default configuration uses a FOR UPDATE query that is not supported by SQL Server. (I don’t understand exactly why a .NET utility wouldn’t work with SQL Server out of the box.) To fix the locking, a quartz property needs to be set; see the lockHandler entry in the configuration below.
  3. The JobFactory is set to the SpringObjectJobFactory because it handles the injection of dependencies into QuartzJobObjects like the one we created above.
  4. SchedulerContextAsMap is a property on the SchedulerFactory that allows you to set properties that will be passed to your Jobs when they are created by the SpringObjectJobFactory. This is where you set all of the property names and the corresponding references to Spring-configured objects. Those objects will be set on your Job instances whenever they are deserialized and run by Quartz.

Here’s the whole SchedulerFactory configuration put together:
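
A sketch under the same assumptions as the snippets above. The SchedulerContextAsMap keys must match the property names on CustomJob, and the lockHandler entry is the SQL Server fix from point 2:

<object id="SchedulerFactory" type="Spring.Scheduling.Quartz.SchedulerFactoryObject, Spring.Scheduling.Quartz">
  <property name="DbProvider" ref="DbProvider"/>
  <property name="JobFactory">
    <object type="Spring.Scheduling.Quartz.SpringObjectJobFactory, Spring.Scheduling.Quartz"/>
  </property>
  <property name="SchedulerContextAsMap">
    <dictionary>
      <entry key="SessionFactory" value-ref="NHibernateSessionFactory"/>
      <entry key="EmailService" value-ref="EmailService"/>
    </dictionary>
  </property>
  <property name="QuartzProperties">
    <dictionary>
      <entry key="quartz.jobStore.lockHandler.type"
             value="Quartz.Impl.AdoJobStore.UpdateLockRowSemaphore, Quartz"/>
    </dictionary>
  </property>
  <property name="Triggers">
    <list>
      <ref object="CustomJobTrigger"/>
    </list>
  </property>
</object>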

Conclusion

Scheduled tasks in ASP.NET applications shouldn’t be too much trouble anymore. Reusing existing Service and DAL classes allows you to easily create scheduled tasks using existing, tested code. Quartz.NET looks to be a good solution for these situations.

CruiseControl With a Specific Version of Grails

Continuous Integration is a good practice in software development. It helps catch problems early to prevent them from becoming bigger problems later. It helps to reinforce other practices like frequent checkins and unit testing as well. I’m using CruiseControl (CC) for Continuous Integration at the moment.

One of the things about Grails is that it is really run through a series of scripts and classes that set up the environment. The Ant scripts really just delegate the work to those Grails scripts. To run properly, the GRAILS_HOME environment variable needs to be set so that it can find the proper classes, etc. This is not a problem if you are running a single Grails application in Continuous Integration. The issue arises when you want to run multiple applications against different versions of Grails. A project I’m working on uncovered a bug in the 1.0.2 release of Grails. The code worked fine on 1.0.1, so I wanted to run against that specific version of Grails.

It ends up this is not too hard with a few small changes to your Ant build.xml file.

First you can declare some properties that hold the paths to the Grails directory and the grails executable (the .bat version if your CC server is on Windows).
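
Something like the following sketch (the install path is an assumption; point it at the Grails version you need):

<property name="cc-grails-home" value="/opt/grails-1.0.1"/>
<property name="cc-grails" value="${cc-grails-home}/bin/grails"/>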

Next you can declare a custom target to execute on the CC server. It references the ‘cc-grails’ property you declared. The key is that you must override GRAILS_HOME when you execute the grails script.
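
A sketch of that target (‘test-app’ is an assumed goal; adjust the arguments to whatever your build runs). Your CruiseControl configuration would then invoke this ‘cruise’ target:

<target name="cruise" description="Run the tests under CruiseControl against a specific Grails version">
  <exec executable="${cc-grails}" failonerror="true">
    <env key="GRAILS_HOME" value="${cc-grails-home}"/>
    <arg value="test-app"/>
  </exec>
</target>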

Now the Continuous Integration of your Grails app runs against a specific version of Grails.

The Full build.xml
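
A sketch of the complete file under the same assumptions as the snippets above:

<project name="grails-app" default="test" basedir=".">

  <property environment="env"/>
  <property name="grails-home" value="${env.GRAILS_HOME}"/>
  <property name="grails" value="${grails-home}/bin/grails"/>

  <!-- The specific Grails version used only by the CruiseControl server -->
  <property name="cc-grails-home" value="/opt/grails-1.0.1"/>
  <property name="cc-grails" value="${cc-grails-home}/bin/grails"/>

  <target name="test" description="Run the application tests">
    <exec executable="${grails}" failonerror="true">
      <arg value="test-app"/>
    </exec>
  </target>

  <target name="cruise" description="Run the tests under CruiseControl against a specific Grails version">
    <exec executable="${cc-grails}" failonerror="true">
      <env key="GRAILS_HOME" value="${cc-grails-home}"/>
      <arg value="test-app"/>
    </exec>
  </target>

</project>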

Integrating ObjectBuilder with ASP.NET

ObjectBuilder is a .NET framework made for building Inversion of Control or Dependency Injection containers. Unlike Spring or Pico, ObjectBuilder is not a full DI framework; instead it’s a framework for building DI solutions. Yes, that’s incredibly weird to me too.

Why Use ObjectBuilder?

Why would you use ObjectBuilder instead of a fully baked DI solution? In the .NET world it seems like if it’s not built by Microsoft then it’s not on the table. People really are looking for the “garden path”. So in part it’s really a psychological thing. On the other hand the Microsoft Application Blocks and Enterprise Library are built using ObjectBuilder. So, if you are already using some of those solutions, then you already have a dependency on ObjectBuilder. It is natural to try to control the number of dependencies that a project relies upon. In this case, you have a simple solution that just requires a bit of custom code.

In addition to the fact that it’s basically a roll-your-own DI framework, the other downside of ObjectBuilder is that the documentation is non-existent. It has said ‘Under construction…’ since July of 2006, so don’t get your hopes up if you don’t want to look at the API and read the source.

Wire Together Services

If you are going to build a service you could use property-based injection, meaning that in your service layer all of the dependencies are passed into a service via property setters. This keeps the code clean and makes changing the implementation of a dependency very easy. If you instead hardcode the construction of a dependency in a class, making that kind of change is harder. Property-based injection also makes unit testing very easy because you can mock out dependencies and isolate the layers of your code.


public class SomeService
{
    private SomeDAO someDao;

    [Dependency]
    public SomeDAO SomeDAO
    {
        get { return someDao; }
        set { someDao = value; }
    }

    public void Save(SomeObject o)
    {
        if (o.IsValid)
            SomeDAO.Save(o);
    }
}

ASP.NET

Most of the examples you can find on the web dealing with ObjectBuilder are about how to use ObjectBuilder as a ServiceBroker where you instantiate a service type object in a view.


public partial class SomePage : System.Web.UI.Page
{
    private SomeService someService;

    protected void Page_Load(object sender, EventArgs e)
    {
        Builder builder = NewBuilder();
        someService = builder.BuildUp<SomeService>(null, null, null);
    }

    private Builder NewBuilder()
    {
        // Create and configure a Builder (strategies, policies, etc.)
        return new Builder();
    }
}

While that’s ok, I think it’s a pretty limiting way to do things. Once you get to any sort of complex object construction, you are going to have to create another class to encapsulate all of the policies and strategies for building objects. The other issue is that using the Services in UI code is very different from composing the services themselves. It also stands in stark contrast to the clean implementation of the Service layer.

I want to make my pages look more like my Service layer.


public partial class SomePage : System.Web.UI.Page
{
    private SomeService someService;

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    [Dependency]
    public SomeService SomeService
    {
        get { return someService; }
        set { someService = value; }
    }
}

IHttpModules to the Rescue

Page lifecycles are handled by the ASP.NET worker process, so there is no way to construct your own Pages and do the Dependency Injection there. I immediately went to the idea of a filter (which is how this is done in the Java world). The filter concept in ASP.NET is implemented as an IHttpModule.

The module should do 2 main things:

  • Construct and manage the DI container
  • Inject Properties into the Page object that is going to handle the request

Construct a series of ObjectBuilder classes to create your DI container.

Locator locator = new Locator();
ILifetimeContainer container = new LifetimeContainer();

// Setup the strategy for how objects can be created
BuilderStrategyChain chain = new BuilderStrategyChain();
chain.Add(new CreationStrategy());
chain.Add(new PropertyReflectionStrategy());
chain.Add(new PropertySetterStrategy());
locator.Add(typeof(ILifetimeContainer), container);

// Create a context to build an object
BuilderContext builderContext = new BuilderContext(chain, locator, null);
builderContext.Policies.SetDefault(new DefaultCreationPolicy());

You can access the current Page that will handle the request in an IHttpModule through the HttpApplication passed to Init:

// app is the HttpApplication instance passed to IHttpModule.Init
IHttpHandler handler = app.Context.CurrentHandler; // the Page handler

Now to tie it all together we create an IHttpModule that will filter each request and wire up our Pages with their dependencies. IHttpModules are configured to respond to callback events in the life cycle of a request. In this case we need to wire up our Pages after PostMapRequestHandler, because the Page handler has been created by that point. I hook PreRequestHandlerExecute because everything is set up at that point and it is right before the Page methods are called.


/// <summary>
/// ObjectBuilderModule handles property-based injection for ASP.NET pages.
/// </summary>
public class ObjectBuilderModule : IHttpModule
{
    private HttpApplication app;
    private readonly Locator locator = new Locator();
    private readonly ILifetimeContainer container = new LifetimeContainer();
    private readonly BuilderStrategyChain chain = new BuilderStrategyChain();
    private readonly BuilderContext builderContext;

    public ObjectBuilderModule()
    {
        chain.Add(new CreationStrategy());
        chain.Add(new PropertyReflectionStrategy());
        chain.Add(new PropertySetterStrategy());
        locator.Add(typeof(ILifetimeContainer), container);
        builderContext = new BuilderContext(chain, locator, null);
        builderContext.Policies.SetDefault(new DefaultCreationPolicy());
    }

    public void Init(HttpApplication context)
    {
        app = context;
        // PreRequestHandlerExecute so that everything is set up, including the Session
        app.PreRequestHandlerExecute += InjectProperties;
    }

    void InjectProperties(object sender, EventArgs e)
    {
        IHttpHandler h = app.Context.CurrentHandler;
        if (h is DefaultHttpHandler)
            return;
        chain.Head.BuildUp(builderContext, h.GetType(), h, null);
    }

    public void Dispose()
    {
        app.PreRequestHandlerExecute -= InjectProperties;
    }
}
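
To activate the module you register it in web.config; the module name and the type’s namespace and assembly here are assumptions based on the class above:

<system.web>
  <httpModules>
    <add name="ObjectBuilderModule" type="MyApp.Web.ObjectBuilderModule, MyApp"/>
  </httpModules>
</system.web>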

Conclusion

Hopefully this shows you that you can create a powerful, transparent solution for doing Dependency Injection with ASP.NET. This isn’t necessarily a complete solution. We could implement many other things like Singleton management for various service classes for example. Now all you have to do is extend it for your application’s own needs.

All Database Backed Web Apps are “Just CRUD Apps”

It’s time to end the debate between “Just simple CRUD Apps” and “more complex” apps.

Very complex behavior can be expressed through a small set of fairly simple rules. This is often described as emergent behavior, or just emergence. The general idea is that simple building blocks can construct complex systems – atoms form molecules, molecules form proteins, proteins form cells, and cells form a human. Each of the building blocks is simple, but the end result is an amazingly complex organism.

The same pattern of building on sets of simple commands shows up in software development. In object-oriented development, groups of simple methods are used to create more complex behavior, and simple objects collaborate to form more complex behavior. Likewise, all of the databases in the world – answering questions about all kinds of things, aggregating and filtering – are manipulated by four simple statements. To access data using SQL, you use four simple commands:

SQL

  • SELECT
  • INSERT
  • UPDATE
  • DELETE

What other SQL operations are there? None. The only other things you can do are DDL operations to modify a database schema. Every operation that can be done to manipulate data in a schema can be expressed by some combination of these four simple statements.

REST/HTTP

  • GET
  • POST
  • PUT
  • DELETE

HTTP happens to map the same concepts as SQL to simple Verbs. While HTTP supports a few other Verbs – OPTIONS, HEAD, TRACE, and CONNECT – these methods are generally for diagnostics, discovery and proxy support. Two independent protocols have defined the same set of general purpose actions and have decided that they are the only ones needed. I would hazard a guess that this is not a coincidence.

The input from the user and the storage of the system can only be manipulated using four simple commands that correspond to Create, Read, Update and Delete. This simple, constrained set of operations allows you to build systems that solve complex problems. As a software developer you’re doing something wrong if the way that you are building an application doesn’t allow you to start simple and only add complexity as needed. Software is at its best when it is only as complex as it needs to be to solve the problem at hand. Software is also at its best when it can easily be modified to add complexity as needed without drastic rework. Intentionally imposing some constraints on a design can help enforce a consistent, simple design through the entire system.

Mongrel Cluster and Apache Need Memory

I use a VPS hosted by SliceHost as my personal server. SliceHost uses Xen to host multiple instances of Linux on a single machine. The performance of this setup has been very good.

I have been running:

  • Apache 2.2 with PHP
  • MySQL 5
  • Postfix Mail Server
  • Courier IMAP Server
  • ssh for remote access of course

I recently started playing with a site built using Radiant CMS which is itself built on top of Ruby on Rails. So, I’ve added to the mix:

  • 3 Mongrel instances running under mongrel_cluster

These Mongrel instances are proxied behind Apache using mod_proxy_balancer as described here. This setup works very well and is more and more becoming the de facto standard for deploying Rails applications. Even the Ruby on Rails sites are deployed with this setup now. It allows you to serve all of your dynamic content through Rails and all of your static content through Apache. This gives you all of the speed and robustness that Apache has to offer (after all, it runs over 50% of all the hosts on the internet) for serving static content without burdening Mongrel with that task.
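
For reference, a sketch of the relevant Apache configuration (the three ports are assumptions and should match your mongrel_cluster settings):

<Proxy balancer://mongrel_cluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
</Proxy>

ProxyPass / balancer://mongrel_cluster/
ProxyPassReverse / balancer://mongrel_cluster/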

I was noticing that the site was pretty slow though. I tracked it down to the fact that I had started using too much memory. I was running the site on a VPS with 256M of RAM, but with the new Mongrel instances I had just pushed my server into swap space. Web applications in general are happier with more RAM, and in this case that is definitely borne out. I upped the VPS to 512M of RAM and things became VERY SNAPPY! I didn’t do a scientific before-and-after, but page loads prior to the upgrade were taking about 5-10s; after the memory increase you can’t tell if the application is static or dynamic.

So, if you’re running into performance issues with Mongrel behind an Apache mod_proxy_balancer setup, check your memory. If you are running into swap space then you are likely to see serious performance issues. Let me know of any other simple tweaks to get more performance out of this setup if you have them.

As an aside:
Big kudos to SliceHost on their VPS upgrade capabilities. I clicked 2 buttons on my web-based management console and about 10 minutes later I was running on a bigger VPS. You can’t ask for much better than that if you need to scale up a server!

Update:
I guess Lighttpd and Nginx both support running PHP applications under FastCGI. You might want to try that kind of setup if you are so inclined. I’m still an Apache partisan.

Interact with REST Services from the Command Line

REST is becoming more popular as a means of implementing Service Oriented Architectures (SOA) as well as merely providing simple remote APIs for interacting with systems. The main reason for this is that it provides a very simple means of creating and consuming Services. Contrasted with SOA implementations like SOAP, REST can be a relief in its simplicity.

One of the main advantages of REST is that it requires no tooling to use. Unlike SOAP, it is very easy to construct ad-hoc clients to consume a RESTful service. These examples use curl, a command-line utility available on Unix systems or via Cygwin on Windows. The same concepts can be translated to anything that can send HTTP requests.

Example REST Service with Ruby on Rails

As the example implementation, I’ll use a Ruby on Rails controller. Rails has very good support for implementing RESTful services, so it is easy to show.

To get started with this example you can generate a Rails project and the Order object with the following commands:

rails order_example
cd order_example
./script/generate resource order name:string

Then you can implement a RESTful controller with the following code:

class OrdersController < ApplicationController
  # GET /orders
  # GET /orders.xml
  def index
    @orders = Order.find(:all)
    respond_to do |format|
      format.html # index.rhtml
      format.xml { render :xml => @orders.to_xml }
    end
  end

  # GET /orders/1
  # GET /orders/1.xml
  def show
    @order = Order.find(params[:id])
    respond_to do |format|
      format.html # show.rhtml
      format.xml { render :xml => @order.to_xml }
    end
  end

  # POST /orders
  # POST /orders.xml
  def create
    @order = Order.new(params[:order])
    respond_to do |format|
      if @order.save
        flash[:notice] = 'Order was successfully created.'
        format.html { redirect_to order_url(@order) }
        format.xml { head :created, :location => order_url(@order) }
      else
        format.html { render :action => "new" }
        format.xml { render :xml => @order.errors.to_xml }
      end
    end
  end

  # PUT /orders/1
  # PUT /orders/1.xml
  def update
    @order = Order.find(params[:id])
    respond_to do |format|
      if @order.update_attributes(params[:order])
        flash[:notice] = 'Order was successfully updated.'
        format.html { redirect_to order_url(@order) }
        format.xml { head :ok }
      else
        format.html { render :action => "edit" }
        format.xml { render :xml => @order.errors.to_xml }
      end
    end
  end

  # DELETE /orders/1
  # DELETE /orders/1.xml
  def destroy
    @order = Order.find(params[:id])
    @order.destroy
    respond_to do |format|
      format.html { redirect_to orders_url }
      format.xml { head :ok }
    end
  end
end

This controller allows you to respond to all of the Actions that can be taken on a Resource: GET, POST, PUT and DELETE.

Command Line Interaction with the Service

Start the Rails application and then you can see the following commands at work.

./script/server

Get a list of all of the Orders

The first thing you want to do is get a list of all of the orders in the system. To do this we perform a GET command asking for an XML response. The URI in this case represents a list of all the Orders in the system.

curl -X GET -H 'Accept: application/xml' http://localhost:3000/orders

Get a single Order

If we want the XML representation of a single order, we ask for a specific Order using a URI that represents just that one Order.

curl -X GET -H 'Accept: application/xml' http://localhost:3000/orders/15

Delete an Order

REST keeps things simple by having consistent Resource URIs. The URI that represents Order number 15 can also be used to Delete or Modify that Order. In this case the URI for the GET is the same, but we ask it to delete the Order instead.

curl -X DELETE -H 'Accept: application/xml' http://localhost:3000/orders/15

Modify an existing Order

Just as with delete, if we want to modify an Order we use the URI that represents that specific Order. The only difference is that we have to tell the server that we are sending it XML, and then actually send the XML.

curl -i -X PUT -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<order><name>Foo</name></order>' http://localhost:3000/orders/15

Create a new Order

Creating an Order looks very similar to modifying an Order, but the URI changes to the Resource URI for the collection of all Orders. The response to this command will be an HTTP 201 Created whose Location header gives you the URI of the newly created Order Resource.

curl -i -X POST -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<order><name>Foo</name></order>' http://localhost:3000/orders/

Conclusion

I think you can see how easily you can interact with a REST service using only the most basic tools available, namely simple Unix command-line utilities. This simplicity offers a lot of power, flexibility and interoperability that you lose when you implement services with more complicated implementations such as SOAP. That’s not to say that SOAP and all of the WS-* specifications don’t have their place, because they do. When you can implement a simple solution that meets your needs, you will often find that solution has a surprising amount of added benefits, such as flexibility.

Making Session Data Available to Models in Ruby on Rails

Ruby on Rails is implemented as the Model View Controller (MVC) pattern. This pattern separates the context of the Web Application (in the Controller and the View) from the core Model of the application. The Model contains the Domain objects which encapsulate business logic, data retrieval, etc. The View displays information to the user and allows them to provide input to the application. The Controller handles the interactions between the View and the Model.

This separation is a very good design principle that generally helps prevent spaghetti code. Sometimes though the separation might break down.


The following is really an alternative to using ActionController::Caching::Sweeper, which is really a hybrid Model/Controller-scoped Observer. It seems to me, based on the name, that its intent is much more specific than giving Observers access to session data. Which do you prefer?

Rails provides the concept of a Model Observer. This Observer allows you to write code that responds to the lifecycle events of Model objects. For example, you could record some information every time an Account changed using the following Observer:

class AccountObserver < ActiveRecord::Observer
  def after_update(record)
    Audit.audit_change(record.account_id, record.new_balance)
  end
end

You might have noticed a limitation with the previous API though. You didn't notice? The only information passed to the Observer is the object that is being changed. What if you want more context than that? For example, what if you want to audit not only the values that changed, but also the user who made the change?

class AccountObserver < ActiveRecord::Observer
  def after_update(record)
    Audit.audit_change(current_user, record.account_id, record.new_balance)
  end
end

How do you get the current_user value? Well, you have to plan ahead a little bit. The User in this application is stored in the HTTP Session when the user is authenticated. The session isn't directly available to the Model level (including the Observers) so you have to figure out a way around this. One way to accomplish this is by using a named Thread local variable. Using Mongrel as a web server, each HTTP request is served by its own thread. That means that a variable stored as thread local will be available for the entire processing of a request.

The UserInfo module encapsulates reading and writing the User object from/to the Thread local. This module can then be mixed in with other objects for easy access.

module UserInfo
  def current_user
    Thread.current[:user]
  end

  def self.current_user=(user)
    Thread.current[:user] = user
  end
end

A before_filter set in the ApplicationController will be called before any action is called in any controller. You can take advantage of this to copy a value out of the HTTP session and set it in the Thread local:


class ApplicationController < ActionController::Base
  include UserInfo

  # Pick a unique cookie name to distinguish our session data from others'
  session :session_key => '_app_session_id'

  before_filter :set_user

  protected

  def authenticate
    unless session[:user]
      redirect_to :controller => "login"
      return false
    end
  end

  # Sets the current user into a named Thread local so that it can be accessed
  # by models and observers
  def set_user
    UserInfo.current_user = session[:user]
  end
end

At any point in an Observer or a Model class where you need access to those values, you can just mix in the helper module and then use its methods to access the data. In this final example we mix the UserInfo module into our AccountObserver, and it now has access to the current_user method:

class AccountObserver < ActiveRecord::Observer
  include UserInfo

  def after_update(record)
    Audit.audit_change(current_user, record.account_id, record.new_balance)
  end
end

You generally shouldn't need this kind of trick outside of an Observer. In most cases the Controller should pass all of the information needed by a Model object to it through its methods. That will allow the Model objects to interact and the Controller to do the orchestration needed. But in a few special cases, this trick might be handy.

RESTful Rails for Ajax

Ruby on Rails 1.2 added full support for building RESTful services to the already nice web page support. REST is a conceptually simple, yet incredibly powerful, idea. REST is a Web-based API to an application. It builds on the basic building blocks of HTTP: URLs and HTTP Methods (think verbs).

A URL (Uniform Resource Locator) uniquely identifies a resource on the web. HTTP uses the concept of Methods to give context to a request to a URL. Most developers will be familiar with GET and POST. These methods are used to get a resource and to modify a resource respectively. But there are other HTTP verbs as well. The two other interesting ones for a REST service are PUT and DELETE. Both of these are pretty self-explanatory: PUT creates a resource at the remote location and DELETE removes one.

For Example:
GET
http://example.com/catalog/item/1

DELETE
http://example.com/catalog/item/1

Both of these use the same URL, but the HTTP Method means different things will happen.

Creating A RESTful API with Rails

Rails makes it easy to create a RESTful application. The same controllers can be used to render a web page and to provide a programmatic API to your application.

Rails provides a scaffold_resource generator that creates the skeleton of a resource driven application:

./script/generate scaffold_resource order

This creates Model, View and Controllers just like regular scaffold, but unlike scaffold, it adds some extra functionality.

class OrdersController < ApplicationController
  # GET /orders/1
  # GET /orders/1.xml
  def show
    @order = Order.find(params[:id])
    respond_to do |format|
      format.html # show.rhtml
      format.xml { render :xml => @order.to_xml }
    end
  end
end

Now if you request a path ending in .xml it will render the response as an XML document that can be consumed by another program.

Applying REST as an Ajax solution

The great news is that you can use this RESTful API directly for building a highly dynamic Ajax application. (See my post on using Ajax with PHP for an example.) But what’s even cooler is that you can use the same technique to build a JSON API. JSON is much easier and less resource intensive to consume in an Ajax application than XML.

Changing your controller to support JSON ends up being really easy. All you have to do is add a single line in the respond_to block to support it:

class OrdersController < ApplicationController
  # GET /orders/1
  # GET /orders/1.xml
  # GET /orders/1.js
  def show
    @order = Order.find(params[:id])
    respond_to do |format|
      format.html # show.rhtml
      format.xml { render :xml => @order.to_xml }
      format.js { render :json => @order.to_json }
    end
  end
end

Just like in the XML example, if you make a request that ends in .js then you will get a response rendered as JSON. Consuming that JSON Service with something like Dojo allows you to easily create a dynamic web application.


dojo.require("dojo.event.*");
dojo.require("dojo.io.*");
dojo.require("dojo.date.*");
dojo.require("dojo.lfx.*");

function getOrder(id) {
dojo.io.bind({url: "/orders/" + id + ".js", handler: showOrder, mimetype: "text/json"});
}

function showOrder(type, data, evt) {
dojo.dom.removeChildren(dojo.byId('order'));
appendOrderPart('order_number', data.attributes.order_number);
appendOrderPart('time', data.attributes.time);
dojo.lfx.highlight(dojo.byId('order'), dojo.lfx.Color);
}
function appendOrderPart(id, value) {
var element = document.createElement("div");
element.id=id;
element.innerHTML=value;
dojo.byId('order').appendChild(element);
}
function init() {
getOrder(1);
}

dojo.addOnLoad(init);

Conclusion

With a few simple lines of code you can not only render a web page, you can also create an XML API and a JSON API. That’s what I call developer friendly!

Using SQL Compact Edition Under ASP.NET

What is SQLCE?

SQL Compact Edition is the “low end” version of a SQL database solution from Microsoft. It is a single-file, application-managed database implementation. It doesn’t have all of the bells and whistles of the high-end database solutions. This is great when you realize that the next smallest version, SQL Express, is over a 100MB install.

The beta of this software was called SQL Everywhere Edition (SQLEV). Microsoft decided that they didn’t like the name, so they went with SQL Compact Edition (SQLCE). The name Compact Edition is a bit of a misnomer. It can be used on the Compact Framework, but it can also be used on the full framework anywhere a single-file, zero-install database might be needed. Well, almost anywhere at least: there is an explicit check when running on the full framework to make sure that you are not using it in an ASP.NET application. SQLCE is not a very scalable solution, and it has some inherent limitations with concurrent connections, for example. That is fine, though, if you go back to what you are using it for: an embedded, single-file database.

I ran into a case where I needed a small web service to run where an embedded database makes a lot of sense. I’m using the HttpListener class to run my own Http.sys server without using IIS. This still counts as ASP.NET to the SQLCE code though.

Force SQLCE to run under ASP.NET

Steve Lasker posted a blog entry on how to use SQLCE under ASP.NET with the pre-release SQLEV. Under SQLEV you set a flag that tells the SQL Connection, “yes, I know this isn’t supported, but let me do it anyway”:

AppDomain.CurrentDomain.SetData("SQLServerEverywhereUnderWebHosting", true)

As you can see, the name of the product is right there in the key. Well, they changed the name of the product, so they changed the name of the key. If you were using the beta for development and are now switching over to the release version of SQLCE, you will need to change the key:

AppDomain.CurrentDomain.SetData("SQLServerCompactEditionUnderWebHosting", true)

That should allow you to use the database under ASP.NET. Now you can revel in the fact that you are swimming in unsupported waters!
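
Here’s a minimal sketch of putting the key and a connection together at startup (the class, connection string and file name are assumptions):

using System;
using System.Data.SqlServerCe;

public static class Database
{
    public static void Init()
    {
        // Tell SQLCE that we know ASP.NET hosting is unsupported and want it anyway
        AppDomain.CurrentDomain.SetData("SQLServerCompactEditionUnderWebHosting", true);

        using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\app.sdf"))
        {
            conn.Open();
            // ... execute commands against the embedded database
        }
    }
}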

Special Thanks
I found this using Lutz Roeder’s .NET Reflector, the great .NET disassembler. You should check it out. It can be a great way to track down details of an API implementation when the abstraction is a bit leaky.