USB to Serial Adapters in VMware

I needed to do some work using a pin pad (a device that allows you to enter a numeric code at a point-of-sale or other system) and needed to test it in a 32-bit Windows environment. The pin pad uses a serial port to communicate with a computer. Of course no portable computers (maybe no desktops either; it’s been years since I’ve had one) have serial ports anymore, so you have to use a USB to Serial adapter. Many brands of these converters use a chip by a company called Prolific; that chip lives in the converter and does the translation from serial to USB and back. I found the drivers from Prolific and the device loaded just fine on Windows 7. I then created a Windows XP image in VMware.

This is where the fun started.

I loaded the driver in Windows XP, but no matter what I tried I couldn’t get VMware to take control of the adapter. Every time I chose Virtual Machine -> Removable Devices -> Prolific USB device -> Connect it would give me an error: “Driver error”. Nice and specific and helpful, right? I rebooted the host and the guest, tried without the driver installed on the host, and rebooted again. Try as I might, nothing worked.

Long story short: I was plugging the adapter into a USB3 port, and either VMware or the driver didn’t like USB3. So if you are seeing a similar problem, find out what kind of USB port you are using. I was running Windows 7 on a Lenovo W510. This machine has both USB2 and USB3 ports. The two obvious USB ports on the left-hand side are USB3. There is a USB2 port on the back, but I keep another device plugged into that one, so I didn’t even think to try it. It turns out, though, that the W510 has a dual eSATA/USB2 port next to the USB3 ports. That dual port looks physically different from a normal USB port, so I assumed it was just eSATA. I plugged the USB to serial adapter into this dual port and everything worked flawlessly.

Hopefully that helps someone save the time that I wasted trying to get it to work.

What is the Cost of Not Doing Things?

We’re really good at measuring the cost of some things. We’re good at measuring the cost of new computers for everyone on the team, we’re good at measuring the cost per hour of a resource on a project, and we’re good at measuring the time it will take to complete a new feature.

It seems like people are not good at measuring the cost of not doing things. What is the cost of maintaining an application on 10-year-old technology instead of upgrading it to newer versions as they come out? What is the cost of not having unit tests and automated test suites? What is the cost of running many different versions of a framework or a virtual machine?

Unfortunately this leaves us with a problem. When we cannot quantify the cost of inaction, inaction often looks like a reasonable choice because we assume that it’s free. That assumption is the root of a lot of problems.

This post is just me ranting; I wish I had the answer.

NoSQL with MongoDB and Ruby Presentation

I presented at the Milwaukee Ruby User’s Group tonight on NoSQL using MongoDB and Ruby.

Code Snippets for the Presentation

Basic Operations

// insert data
db.factories.insert( { name: "Miller", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Lakefront", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Point", metro: { city: "Steven's Point", state: "WI" } } );
db.factories.insert( { name: "Pabst", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Blatz", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Coors", metro: { city: "Golden Springs", state: "CO" } } );

// simple queries
db.factories.find()
db.factories.findOne()
db.factories.find( { "metro.city" : "Milwaukee" } )
db.factories.find( { "metro.state": {$in : ["WI", "CO"] } } )

// update data
db.factories.update( { name: "Lakefront"}, { $set : { thebest : true } } );
db.factories.find()

// delete data
db.factories.remove({name:"Coors"})
db.factories.remove()

Ruby Example


require 'rubygems'
require 'mongo'
include Mongo

db = Connection.new.db('sample-db')
coll = db.collection('factories')

coll.remove

coll.insert( { :name => "Miller", :metro => { :city => "Milwaukee", :state => "WI" } } )
coll.insert( { :name => "Lakefront", :metro => { :city => "Milwaukee", :state => "WI" } } )
coll.insert( { :name => "Point", :metro => { :city => "Steven's Point", :state => "WI" } } )
coll.insert( { :name => "Pabst", :metro => { :city => "Milwaukee", :state => "WI" } } )
coll.insert( { :name => "Blatz", :metro => { :city => "Milwaukee", :state => "WI" } } )
coll.insert( { :name => "Coors", :metro => { :city => "Golden Springs", :state => "CO" } } )

puts "There are #{coll.count()} factories. Here they are:"
coll.find().each { |doc| puts doc.inspect }
map_fn = 'function () { emit(this.metro.city, this.name); }'
reduce_fn = 'function (k, vals) { return vals.join(","); }'
# map_reduce returns the result collection, so query it to see the output
coll.map_reduce(map_fn, reduce_fn).find().each { |r| puts r.inspect }

Map Reduce Example


db.factories.insert( { name: "Miller", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Lakefront", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Point", metro: { city: "Steven's Point", state: "WI" } } );
db.factories.insert( { name: "Pabst", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Blatz", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Coors", metro: { city: "Golden Springs", state: "CO" } } );

var fmap = function () {
    emit(this.metro.city, this.name);
}
var fred = function (k, vals) {
    return vals.join(",");
}
res = db.factories.mapReduce(fmap, fred)
db[res.result].find()
db[res.result].drop()
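
For reference, the find() on the result collection (before the drop) returns the grouped names, as shown with the same data later in the MapReduce write-up:

{ "_id" : "Golden Springs", "value" : "Coors" }
{ "_id" : "Milwaukee", "value" : "Miller,Lakefront,Pabst,Blatz" }
{ "_id" : "Steven's Point", "value" : "Point" }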

The Presentation

Download NoSQL with MongoDB and Ruby Slides

Thanks to Meghan at 10Gen for sending stickers and a copy of MongoDB: The Definitive Guide that I gave out as a door prize. I read the book quickly this weekend before the talk and found it quite good, so I recommend it if you want to get started with MongoDB.

MongoDB: MapReduce Functions for Grouping

SQL GROUP BY allows you to perform aggregate functions on data sets: to count all of the stores in each state, to average a series of related numbers, etc. MongoDB has some aggregate functions, but they are fairly limited in scope. The MongoDB group function also suffers from the fact that it does not work on sharded configurations. So how do you perform grouped queries using MongoDB? By using MapReduce functions, of course (you read the title, right?).

Understanding MapReduce

Understanding MapReduce requires, or at least is made much easier by, understanding functional programming concepts. map and reduce (fold, inject) are functions that come from Lisp and have been inherited by a lot of languages (Scheme, Smalltalk, Ruby, Python).

map
A higher-order function which transforms a list by applying a function to each of its elements. Its return value is the transformed list. In MongoDB terms, the map is a function that is run for each Document in a collection and can return a value for that row to be included in the transformed list.
reduce
A higher-order function that iterates an arbitrary function over a data structure and builds up a return value. The reduce function takes the values returned by map and allows you to run a function to manipulate those values in some way.
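
Both concepts are easy to see in plain JavaScript, which is also the language of the MongoDB shell. A quick sketch with nothing MongoDB-specific in it:

// map transforms each element and returns the transformed list
[1, 2, 3].map(function (x) { return x * 2; });              // => [2, 4, 6]

// reduce folds the values down to a single accumulated result
[2, 4, 6].reduce(function (sum, x) { return sum + x; }, 0); // => 12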

Some Examples

Let’s start with some sample data:

db.factories.insert( { name: "Miller", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Lakefront", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Point", metro: { city: "Steven's Point", state: "WI" } } );
db.factories.insert( { name: "Pabst", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Blatz", metro: { city: "Milwaukee", state: "WI" } } );
db.factories.insert( { name: "Coors", metro: { city: "Golden Springs", state: "CO" } } );
db.factories.find()

Let’s say I want to count the number of factories in each of the cities (ignore the fact that I could have the same city in more than one state; I don’t in my data). For a count, I write a function that “emits” the group-by key and a value that can be counted. It can be any value, but for simplicity I’ll make it 1. emit() is a MongoDB server-side function that you use to identify a value in a row that should be added to the transformed list. If emit() is not called, then the values for that row will be excluded from the results.

mapCity = function () {
    emit(this.metro.city, 1);
}

The next piece is the reduce() function. The reduce function will be passed a key and an array of values that were collected by the map() function. I know my map function returns a 1 for each row keyed by city. So the reduce function will be called with a key of “Golden Springs” and a single-element array containing a 1. For “Milwaukee” it will be passed a 4-element array of 1s.

reduceCount = function (k, vals) {
    var sum = 0;
    for (var i in vals) {
        sum += vals[i];
    }
    return sum;
}

With those 2 functions I can call the mapReduce function to perform my query.

res = db.factories.mapReduce(mapCity, reduceCount)
db[res.result].find()

This results in:

{ "_id" : "Golden Springs", "value" : 1 }
{ "_id" : "Milwaukee", "value" : 4 }
{ "_id" : "Steven's Point", "value" : 1 }

Counting is not the only thing I can do, of course. Anything can be returned by the map function, including complex JSON objects. In this example I combine the names of all of the factories in a given city into a simple comma-separated list.

mapCity = function () {
    emit(this.metro.city, this.name);
}
reduceNames = function (k, vals) {
    return vals.join(",");
}
res = db.factories.mapReduce(mapCity, reduceNames)
db[res.result].find()

This gives you:

{ "_id" : "Golden Springs", "value" : "Coors" }
{ "_id" : "Milwaukee", "value" : "Miller,Lakefront,Pabst,Blatz" }
{ "_id" : "Steven's Point", "value" : "Point" }

Conclusion

These are fairly simple examples, but I think it helps to work through this kind of simple thing to fully understand a new technique before you have to work with harder examples.


MongoDB Replication is Easy

Database replication with MongoDB is easy to set up. Replication duplicates all of the data from a master to one or more slave instances and allows for safety and quick recovery in case of a problem with your master database. Here is an example of how quick and easy it is to test out replication in MongoDB.

Create a couple of directories for holding your mongo databases:

mkdir master slave

Start by running an instance of the “master” database:

cd master
mongod --master --dbpath .

Start a new terminal window and continue by running an instance of a “slave” database. This example runs on the same machine as the master, which is great for testing, but it wouldn’t be a good setup if you were really trying to implement replication in a production environment, since you would still have a single point of failure.

cd slave
mongod --slave --port 27018 --dbpath . --source localhost

And start another terminal window to use as the client:

mongo
db.person.save( {name:'Geoff Lane'} )
db.person.save( {name:'Joe Smith'} )
db.person.find()
db.person.save( {name:'Jim Johnson', age: 65} )
db.person.find()
Now kill the master instance in your terminal with Control+C. This simulates the master server dying. Lastly, connect to the slave instance with a mongo client by specifying the port:

mongo --port 27018
db.person.find()
As you can see, db.person.find() returns all of the values that were saved on the master as well, which shows that replication is working. One of the other interesting facts is that you can start a slave instance even after the mongod master is already running and has data, and all of the existing data will be replicated over to the slave as well. This all works without ever shutting down your mongod master instance, which allows you to add replication after the fact with no downtime.
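
You can also ask the slave how it is doing. A quick check using the shell helper, as I remember it working in the master/slave setup (run from the client connected to the slave):

db.printSlaveReplicationInfo()  // shows the sync source and how far behind the slave is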
For more on MongoDB check out these books:

* MongoDB: The Definitive Guide
* The Definitive Guide to MongoDB: The NoSQL Database for Cloud and Desktop Computing
* MongoDB for Web Development (Developer’s Library)

MongoDB and Java: Find an item by Id

MongoDB is one of a number of new databases that have cropped up lately eschewing SQL. These NoSQL databases provide non-relational models that are suitable for solving different kinds of problems. This camp includes document-oriented, tabular, and key/value-oriented models, among others. These non-relational databases are supposed to excel at scalability through parallelization and replication, but sometimes (although not always) at the expense of some of the transactional guarantees of SQL databases.

Why would you care about any of this? Document-oriented databases allow each document to store arbitrary pieces of data. This allows for much easier customization of data storage, such as when you want to store custom fields. Many of these databases also make horizontal scaling quite simple, as well as providing high performance for write-heavy applications.

With this in mind I figured I should look and see what’s there. So I started looking at MongoDB.

Start by creating an object to add to the database

With MongoDB, a collection is conceptually similar to a table in a SQL database. It holds a collection of related documents. A DBObject represents a document that you want to add to a collection. MongoDB automatically creates an id for each document that you add. That id is set in the DBObject after you pass it to the save method of the collection. In a real world application you might need that id to later access the document.
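
The snippets that follow assume you already have a DB instance in hand. With the Java driver of that era, getting one looked something like this (the host and database name here are placeholders):

import com.mongodb.DB;
import com.mongodb.Mongo;

// connect to a local mongod and select a database
Mongo mongo = new Mongo("localhost");
DB db = mongo.getDB("sample-db");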


import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

DBObject obj = new BasicDBObject();
obj.put("title", getTitle());
obj.put("body", getBody());

DBCollection coll = db.getCollection("note");
coll.save(obj);

// save() sets the generated _id on the object, so we can read it back
String idString = obj.get("_id").toString();

Retrieve an object previously added to a collection

To get a document from MongoDB you again use a DBObject. It does double duty in this case, acting as the set of parameters you want to use to identify a matching document. (There are ways to do comparisons other than equality, of course, but I’ll leave that for a later post.) Using this as a “query by example” model, we can set the _id property that we previously retrieved. The one catch is that the id is not just a string; it’s actually an instance of an ObjectId. Fortunately, once you know that, it’s quite easy to construct an instance from the string value.


import org.bson.types.ObjectId;

String idString = "4b8f0d0e3a1f2b0c4d5e6f70";  // an ObjectId is a 24-character hex string
DBCollection coll = db.getCollection(getCollectionName());
DBObject searchById = new BasicDBObject("_id", new ObjectId(idString));
DBObject found = coll.findOne(searchById);
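
Once you have the document, reading fields back out goes through the same map-like interface. A small sketch, assuming the title and body fields saved earlier:

String title = (String) found.get("title");
String body = (String) found.get("body");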

A couple of easy examples, but it wasn’t obvious to me when I started how to get the id of a document that I had just added to the database. More to come in the future.


Random but Evenly Distributed Sets of Numbers

Let’s say one is a computer programmer, and let’s say one’s wife (or roommate, or significant other) does social science research (a totally hypothetical scenario, of course). When doing social science research one needs to create randomized groups of participants.

So in a group you might be testing a variable X and you need to divide the group in half to test that variable with a number of control subjects. In this scenario, one needs to create groups A and B (or 1 and 2) such that there are the same number of A and B in a given set.

Normally you can just generate an arbitrary set of random numbers. But if you need a set of 10 numbers containing exactly five 1s and five 2s, that’s a different problem. You could of course generate a set, test whether it’s evenly distributed, and throw it away if it doesn’t meet the constraint. That is an inefficient way of generating your sets, though, because you would likely have to throw a lot of them away.

The solution to this conundrum is to use random numbers not to create your set, but to shuffle it. Luckily other smart people have already worked out proven ways to do this. The Fisher-Yates shuffle is the technique that I used.


from random import randrange

items = [1,1,1,1,1,2,2,2,2,2]

# Fisher-Yates shuffle, Durstenfeld in-place implementation
n = len(items)
while n > 1:
    k = randrange(n)  # 0..n-1
    n = n - 1
    items[k], items[n] = items[n], items[k]

print items  # e.g. [2, 1, 2, 1, 1, 2, 1, 2, 2, 1]

Basically the Fisher-Yates shuffle picks a random item and puts it at the end of the list by swapping the end with the randomly selected item. It then continues with the “unpicked” numbers, putting each at the end of the unpicked section, until it reaches the beginning of the list. Once it has traveled through the whole list, you have a randomly sorted set.

That’s fine if you’re a programmer and want to run Python from the command line, changing your items set when you need a different size, etc. But if you want an end user who’s not a programmer to use it, you had better come up with something a bit more configurable. So I needed to allow for some variation in the sets that get generated and shuffled.

Based on this information I created a simple class that lets you specify three properties defining the random sets to generate: blocks, the number of individual sets; blockSize, the number of numbers in each block; and groups, the number of variants within each block. So you could generate a set of 10 blocks, each with 12 numbers containing 1, 2, and 3 evenly distributed (i.e. four of each number).


from random import randrange

class Shuffler(object):
    def __init__(self, blocks, blockSize, groups):
        self.blocks = blocks
        self._groups = groups
        self._blockSize = blockSize
        self.deck = self.make_deck()

    @property
    def blockSize(self):
        return self._blockSize

    # the blockSize setter also re-initializes the deck
    @blockSize.setter
    def blockSize(self, value):
        self._blockSize = value
        self.deck = self.make_deck()

    @property
    def groups(self):
        return self._groups

    # the groups setter also re-initializes the deck
    @groups.setter
    def groups(self, value):
        self._groups = value
        self.deck = self.make_deck()

    def is_valid_group_size(self):
        return self.blockSize % self.groups == 0

    # Fisher-Yates shuffle, Durstenfeld in-place implementation
    def shuffle(self):
        items = self.deck[:]  # copy the deck for an in-place shuffle
        n = len(items)
        while n > 1:
            k = randrange(n)  # 0..n-1
            n = n - 1
            items[k], items[n] = items[n], items[k]
        return items

    def make_deck(self):
        result = []
        for i in range(self.blockSize):
            result.append(i % self.groups + 1)
        return result
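
A quick way to try the class out (hypothetical usage; your output will vary because it’s random):

# 10 blocks of 12 numbers with the values 1..3 evenly distributed
s = Shuffler(10, 12, 3)
print s.shuffle()  # e.g. [3, 1, 2, 2, 3, 1, 1, 3, 2, 1, 2, 3]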

Finally, to wrap the shuffler in a nice command-line interface, I created a simple specialization of the Shuffler class that takes command-line arguments and generates blocks with the requested properties.


#!/usr/bin/env python
"""
Usage: %(program)s [options] [-b blocks] [-s size] [-g groups]
Options:
  -h, --help    This help message
  -b, --blocks  The number of blocks (default: 20)
  -s, --size    The number of participants in each group (default: 10)
  -g, --groups  The number of groups in each block (default: 2)
"""

import random, sys, getopt
import shuffle

program = sys.argv[0]

class CliShuffler(shuffle.Shuffler):
    def __init__(self):
        shuffle.Shuffler.__init__(self, 20, 10, 2)

    def print_results(self):
        for i in range(self.blocks):
            print ' '.join([str(n) for n in self.shuffle()])

    def usage(self):
        print >> sys.stderr, __doc__ % globals()
        sys.exit()

    def run(self, argv):
        try:
            opts, args = getopt.getopt(argv, "hg:s:b:", ["help", "groups=", "size=", "blocks="])
        except getopt.GetoptError, err:
            # print the problem, e.g. "option -a not recognized", then the help text
            print str(err)
            self.usage()

        for opt, arg in opts:
            if opt in ("-b", "--blocks"):
                try:
                    self.blocks = int(arg)
                except ValueError:
                    self.usage()
            elif opt in ("-s", "--size"):
                try:
                    self.blockSize = int(arg)
                except ValueError:
                    self.usage()
            elif opt in ("-g", "--groups"):
                try:
                    self.groups = int(arg)
                except ValueError:
                    self.usage()
            elif opt in ("-h", "--help"):
                self.usage()
            else:
                assert False, "unhandled option"

        if not self.is_valid_group_size():
            print("Block size must be evenly divisible by groups to get an even grouping.")
            self.usage()

        self.print_results()

if __name__ == "__main__":
    shuffler = CliShuffler()
    shuffler.run(sys.argv[1:])
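
Running it from a shell looks something like this, assuming the script above is saved as shuffle_cli.py (your numbers will differ since the output is random):

python shuffle_cli.py -b 3 -s 6 -g 2
2 1 1 2 2 1
1 2 2 1 1 2
2 2 1 1 2 1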

Happy shuffling…

Announcing Grails Constraints Custom Domain Constraint Plugin

I’ve released my first public Grails Plugin today.

The Grails Constraints plugin gives you the ability to create custom constraints that you can apply to your domain classes to validate them. These are applied and act just like the built-in domain class constraints.

Why would you want this?

Grails provides two generic, catch-all constraints in the core framework:

  • validator – a generic closure mechanism for validation
  • matches – a regular expression mechanism

While those work, I often find myself wanting to use the same constraints on multiple domain classes (think Social Security number, phone number, zip code, etc.), and I don’t like to repeat those regular expressions or validations all over the place.

What does this plugin do?

With the Grails Constraints plugin, you can write a custom Constraint class, drop it in /grails-app/utils/ and it will automatically be available to all of your domain classes.

Example

I create a new constraint by hand in /grails-app/utils/ComparisonConstraint.groovy. (You can also use the provided create-constraint script, e.g. grails create-constraint com.foo.MyConstraint.)

class ComparisonConstraint {
    static name = "compareTo"
    static expectsParams = true

    def validate = { val, target ->
        def compareVal = target."$params"
        if (null == val || null == compareVal)
            return false

        return val.compareTo(compareVal) == 0
    }
}

Then you can apply your constraint to your Domain class:

class Login {
    String password
    String confirm

    static constraints = {
        password(compareTo: 'confirm')
    }
}
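
A quick sanity check of what that buys you (hypothetical usage from a console, assuming the plugin is installed):

def login = new Login(password: 'secret', confirm: 'sekret')
assert !login.validate()  // compareTo fails because password != confirm

login = new Login(password: 'secret', confirm: 'secret')
assert login.validate()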

See Grails Custom Constraints Plugin for the full documentation on what all of the above means, and for the source code.

DRYing Grails Criteria Queries

When you’re writing code, Don’t Repeat Yourself. Now say that 5 times. *rimshot*

One of the things that I find myself repeating a lot in many business apps is queries. It’s common to have a rule or filter that applies to many different cases. I came across such a situation recently and wanted to figure out a way to share that filter across many different queries. This is what I came up with for keeping those criteria DRY.

To start with, I’ll use the example of an Article. This could be a blog post or a newspaper article. One of the rules of the system is that an Article needs to be published before it is visible to end users. Because of this seemingly simple rule, every time we query for Articles we need to check the published flag. If you have a lot of queries, that ends up being a lot of repetition.

Here’s our example domain class:

package net.zorched.domain

class Article {
    String name
    String slug
    String category

    boolean published

    static constraints = {
        name(blank: false)
        slug(nullable: true)
    }
}

Now we need to add a query that retrieves our domain instance by its slug (a slug is a publishing term for a short name given to an article; on the web it has become a search engine optimization technique that uses the title instead of an artificial ID). To perform that query we might write something like this on the Article class:

static getBySlug(String slug) {
    withCriteria(uniqueResult: true) {
        and {
            eq('published', true)
            eq('slug', slug)
        }
    }
}

We want to query based on the slug, but we also want to show only a published Article. This lets us unpublish an article if necessary; without the published filter, people could still view the article if the link had gotten out.

Next we decide we want to list all of the Articles in a particular category, so we write something like this, again filtering by the published flag.

static findAllByCategory(String category) {
    withCriteria {
        and {
            eq('published', true)
            eq('category', category)
        }
    }
}

Two simple examples like this might not be that big of a deal. But you can easily see how this would grow if you added more custom queries or if you had more complicated filtering logic. Another common case is needing the same filter across many different domain objects. (What if the Article had attachments and comments, all of which needed their own approval?) What you need is a way to share that logic among multiple withCriteria calls.

The trick to this is understanding how withCriteria and createCriteria work in GORM. They are both implemented using a custom class called HibernateCriteriaBuilder. That class invokes the closures that you pass to it on itself. Sounds confusing? Basically, the elements in the closure of your criteria queries get executed as if they were called on an instance of HibernateCriteriaBuilder.

e.g.

withCriteria {
    eq('a', 1)
    like('b', '%foo%')
}

would be the equivalent of calling something like:


def builder = new HibernateCriteriaBuilder(...)
builder.eq('a', 1)
builder.like('b', '%foo%')

That little bit of knowledge allows you to reach into your metaprogramming bag of tricks and add new calls to HibernateCriteriaBuilder. Every class in Groovy has a metaClass that is used to extend types of that class. In this case we’ll add a closure that combines our criteria with other criteria, like so:

HibernateCriteriaBuilder.metaClass.published = { Closure c ->
    and {
        eq('published', true)
        c()
    }
}

This ANDs together our eq call with all of the other parts of the passed-in closure. Now we can put the whole thing together into a domain class with a reusable filter.


package net.zorched.domain

import grails.orm.HibernateCriteriaBuilder

class Article {

    static {
        // monkey patch HibernateCriteriaBuilder to have a reusable 'published' filter
        HibernateCriteriaBuilder.metaClass.published = { Closure c ->
            and {
                eq('published', true)
                c()
            }
        }
    }

    String name
    String slug
    String category

    boolean published
    Date datePublished

    def publish() {
        published = true
        datePublished = new Date()
    }

    static def createSlug(n) {
        return n.replaceAll('[^A-Za-z0-9\\s]', '').replaceAll('\\s', '-').toLowerCase()
    }

    static findAllApprovedByCategory(String category) {
        withCriteria {
            published {
                eq('category', category)
            }
        }
    }

    static getBySlug(String slug) {
        withCriteria(uniqueResult: true) {
            published {
                eq('slug', slug)
            }
        }
    }

    static constraints = {
        name(blank: false)
        datePublished(nullable: true)
        slug(nullable: true)
    }
}
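
With that in place the call sites stay clean; a hypothetical session:

def article = Article.getBySlug('my-first-post')
def articles = Article.findAllApprovedByCategory('news')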

And there you have it. Do you have any other techniques that can be used to DRY criteria?

Struts2 Map Form to Collection of Objects

The Struts2 documentation contains examples that are often basic at best, which can make it challenging to figure out how to do things. I was working on creating a form that would allow me to select values from a list to connect two objects in a One-to-Many relationship. This is a common relationship for many things. In this example, I’ll use a User and a Role class to demonstrate the concept.

For background, here are a JPA-mapped User class and Role class.

import java.util.List;
import javax.persistence.*;

@Entity
public class User {

    private Long id;
    // ... other member variables
    private List<Role> roles;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    @OneToMany
    @JoinTable(name = "UserRoles",
        joinColumns = @JoinColumn(name = "user_Id"),
        inverseJoinColumns = @JoinColumn(name = "role_Id"),
        uniqueConstraints = @UniqueConstraint(columnNames = {"user_Id", "role_Id"})
    )
    public List<Role> getRoles() {
        return roles;
    }

    public void setRoles(List<Role> roles) {
        this.roles = roles;
    }

    // ... other properties
}

@Entity
public class Role {

    private Long id;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    // ... other properties
}

A list of Roles exists in the database. When a User is created, they are assigned one or more Roles. The User and Roles are connected in the database through a join table, as defined in the mapping. At this point I created some DAO repository classes to manage the persistence, and an Action to handle the form values. (The details of JPA, setting up persistence, and Actions are beyond the scope of this post.)

The part that caused me the most grief ended up being the form. I wanted to add a checkbox list that contained all of the Roles. The example on the Struts site for checkboxlist, a control with 43 documented properties, is about as minimal as an example can get.
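
As best I can recall, it amounts to a single bare tag; the attribute values here are just the documentation’s placeholders:

<s:checkboxlist name="foo" list="bar"/>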



Needless to say, there was some ‘figuring out’ to be done.

The form itself is pretty vanilla for the most part. The checkboxlist is the interesting part because it’s what allows us to map the User to the Roles. I knew that I was looking for something to put into the value property of the control that would tell it to pre-select the Role values that were already associated with the User.

I started out with something like:
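
Reconstructing that first attempt (the list and property names are my stand-ins, not the original markup), the important part is that value is handed the Role objects themselves:

<s:checkboxlist name="user.roles" list="allRoles" listKey="id" listValue="name" value="user.roles"/>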


That didn’t work. When you think about it, that makes sense, because the keys in the list are ids and the values supplied are Role objects. So I needed to figure out how to get the ids from the Roles. I could have done that in the Action class, but it seemed like there should be a better way, one that would let me keep working in more of a domain fashion.

Doing some research into OGNL, I came upon the list projections section, which was the key…

The OGNL projection syntax gives us user.roles.{id}. Basically that is a list comprehension that takes the list of Role objects and turns it into a list of Role ids. That list of ids becomes the list of values that will be pre-selected.

Knowing that, I can now create a checkbox list that includes the pre-selected values on an edit form:
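
Again with stand-in names, the only change from the first attempt is the OGNL projection in the value attribute:

<s:checkboxlist name="user.roles" list="allRoles" listKey="id" listValue="name" value="user.roles.{id}"/>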


Enjoy.