Category Archives: .NET

ObjectBuilder Can Inject to UserControls As Well

This is a follow-up to my previous post on integrating ObjectBuilder and ASP.NET. As I was playing around with that solution I realized that I was only injecting at the Page level. Since ASP.NET is a component model, you can end up with custom User Controls that need injected properties as well. There is a relatively simple, if not entirely obvious, way to do that too.

Building on the previous example, you can hook into the lifecycle of a page that you are injecting. You cannot access the controls directly in the PreRequestHandlerExecute event of the IHttpModule because they have not been instantiated yet. Instead, you can register an event handler for the Page.InitComplete event and inject properties at that point.


void InjectProperties(object sender, EventArgs e)
{
    IHttpHandler h = app.Context.CurrentHandler;
    if (h is DefaultHttpHandler)
        return;

    chain.Head.BuildUp(builderContext, h.GetType(), h, null);
    if (h is Page)
    {
        // Register a page lifecycle event handler to inject
        // user controls on the page itself
        page = (Page) h;
        page.InitComplete += InjectControls;
    }
}

private void InjectControls(object sender, EventArgs e)
{
    InjectControls(page);
    if (null != page.Form)
        InjectControls(page.Form);
}

private void InjectControls(Control mainControl)
{
    if (mainControl.Controls != null && mainControl.Controls.Count > 0)
    {
        foreach (Control c in mainControl.Controls)
        {
            if (c is UserControl)
            {
                chain.Head.BuildUp(builderContext, c.GetType(), c, null);
                InjectControls(c);
            }
        }
    }
}

As you can see in this example, you need to recursively inject into all of the controls, in case there are nested controls that also need injecting. The other thing to notice is that the Controls collection of the top-level Page does not contain the controls declared inside a Form on the page, so you need to handle both of those paths.

Hopefully this will get you further along the path to being able to do dependency injection in your ASP.NET application.

Integrating ObjectBuilder with ASP.NET

ObjectBuilder is a .NET framework for building Inversion of Control or Dependency Injection containers. Unlike Spring or Pico, ObjectBuilder is not a full DI framework; instead, it's a framework for building DI solutions. Yes, that's incredibly weird to me too.

Why Use ObjectBuilder?

Why would you use ObjectBuilder instead of a fully baked DI solution? In the .NET world it often seems like if it's not built by Microsoft, it's not on the table; people really are looking for the "garden path". So in part it's a psychological thing. On the other hand, the Microsoft Application Blocks and Enterprise Library are built using ObjectBuilder, so if you are already using some of those solutions, you already have a dependency on ObjectBuilder. It is natural to try to control the number of dependencies that a project relies upon, and in this case you get a simple solution that just requires a bit of custom code.

In addition to being basically a roll-your-own DI framework, the other downside of ObjectBuilder is that the documentation is non-existent. It has said 'Under construction…' since July of 2006, so don't get your hopes up if you don't want to read the API and the source.

Wire Together Services

If you are going to build a service, you can use property-based injection: in your service layer, all of a service's dependencies are passed in via property setters. This keeps the code clean and makes changing the implementation of a dependency very easy. If a class hardcodes the construction of its dependencies, that kind of change is much harder. It also makes unit testing very easy, because you can mock out dependencies and isolate the layers of your code.


public class SomeService
{
    private SomeDAO someDao;

    [Dependency]
    public SomeDAO SomeDAO
    {
        get { return someDao; }
        set { someDao = value; }
    }

    public void Save(SomeObject o)
    {
        if (o.IsValid)
            SomeDAO.Save(o);
    }
}

ASP.NET

Most of the examples you can find on the web dealing with ObjectBuilder show how to use it as a Service Broker, where you instantiate a service-type object in a view.


public partial class SomePage : System.Web.UI.Page
{
    private SomeService someService;

    protected void Page_Load(object sender, EventArgs e)
    {
        Builder builder = NewBuilder();
        someService = builder.BuildUp<SomeService>(null, null, null);
    }

    private Builder NewBuilder()
    {
        // Create and configure a Builder
    }
}

While that's OK, I think it's a pretty limiting way to do things. Once you get to any sort of complex object construction, you will have to create another class to encapsulate all of the policies and strategies for building objects. The other issue is that using the services in UI code becomes very different from composing the services themselves, and it stands in stark contrast to the clean implementation of the service layer.

I want to make my pages look more like my Service layer.


public partial class SomePage : System.Web.UI.Page
{
    private SomeService someService;

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    [Dependency]
    public SomeService SomeService
    {
        get { return someService; }
        set { someService = value; }
    }
}

IHttpModules to the Rescue

Page lifecycles are handled by the ASP.NET worker process, so there is no way to construct your own Pages and do the Dependency Injection there. I immediately went to the idea of a filter (which is how this is done in the Java world). The filter concept in ASP.NET is implemented as an IHttpModule.

The module should do two main things:

  • Construct and manage the DI container
  • Inject Properties into the Page object that is going to handle the request

Construct a series of ObjectBuilder classes to create your DI container.

Locator locator = new Locator();
ILifetimeContainer container = new LifetimeContainer();

// Setup the strategy for how objects can be created
BuilderStrategyChain chain = new BuilderStrategyChain();
chain.Add(new CreationStrategy());
chain.Add(new PropertyReflectionStrategy());
chain.Add(new PropertySetterStrategy());
locator.Add(typeof(ILifetimeContainer), container);

// Create a context to build an object
BuilderContext builderContext = new BuilderContext(chain, locator, null);
builderContext.Policies.SetDefault(new DefaultCreationPolicy());

You can access the current Page that will handle the request in an IHttpModule using:

HttpApplication app;
app.Context.CurrentHandler; // The Page handler

Now, to tie it all together, we create an IHttpModule that will filter each request and wire up our Pages with their dependencies. IHttpModules are configured to respond to callback events related to the life cycle of a request. We have to wire up our Pages after PostMapRequestHandler, because the Page handler is not created until then; I chose PreRequestHandlerExecute because at that point everything, including the Session, is set up, and it fires right before the Page methods are called.


/// <summary>
/// ObjectBuilderModule handles property-based injection for ASP.NET Forms.
/// </summary>
public class ObjectBuilderModule : IHttpModule
{
    private HttpApplication app;
    private readonly Locator locator = new Locator();
    private readonly ILifetimeContainer container = new LifetimeContainer();
    private readonly BuilderStrategyChain chain = new BuilderStrategyChain();
    private readonly BuilderContext builderContext;

    public ObjectBuilderModule()
    {
        chain.Add(new CreationStrategy());
        chain.Add(new PropertyReflectionStrategy());
        chain.Add(new PropertySetterStrategy());
        locator.Add(typeof(ILifetimeContainer), container);
        builderContext = new BuilderContext(chain, locator, null);
        builderContext.Policies.SetDefault(new DefaultCreationPolicy());
    }

    public void Init(HttpApplication context)
    {
        app = context;
        // PreRequestHandlerExecute so that everything is set up, including the Session
        app.PreRequestHandlerExecute += InjectProperties;
    }

    void InjectProperties(object sender, EventArgs e)
    {
        IHttpHandler h = app.Context.CurrentHandler;
        if (h is DefaultHttpHandler)
            return;
        chain.Head.BuildUp(builderContext, h.GetType(), h, null);
    }

    public void Dispose()
    {
        // Unsubscribe from the same event we subscribed to in Init
        app.PreRequestHandlerExecute -= InjectProperties;
    }
}

Conclusion

Hopefully this shows you that you can create a powerful, transparent solution for doing Dependency Injection with ASP.NET. This isn't necessarily a complete solution; we could implement many other things, such as singleton management for various service classes. Now all you have to do is extend it for your application's own needs. A rough sketch of one approach to singletons follows.
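As a rough sketch of that kind of extension (my own workaround, not ObjectBuilder's built-in SingletonStrategy, which exists but is beyond this post), the module could cache one built-up instance per service type:

// Rough sketch: cache one built-up instance per service type.
// Not thread-safe; a real version would need locking.
private readonly Dictionary<Type, object> singletons = new Dictionary<Type, object>();

private object BuildUpSingleton(Type serviceType)
{
    object instance;
    if (!singletons.TryGetValue(serviceType, out instance))
    {
        instance = Activator.CreateInstance(serviceType);
        chain.Head.BuildUp(builderContext, serviceType, instance, null);
        singletons[serviceType] = instance;
    }
    return instance;
}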

Mocking .NET Objects with NUnit

NUnit is my Unit Testing tool of choice for .NET development. Microsoft provides a unit testing framework but it only works with some higher-end versions of Visual Studio. They’re so similar that it’s almost ridiculous that Microsoft created their own version.
(See one of my previous posts for more information on Automating NUnit with MSBuild.) In the Java world it's fairly common to use mocking to make unit testing easier, and I've written about using JMock for Unit Testing in Java. In this post, I'd like to talk about a relatively new feature of NUnit: it now supports Mocks out of the box.

What Are Mock Objects

Mock Objects are a technique that allows you to isolate classes from their dependencies for testing purposes. That isolation allows for fine-grained testing of single methods in a single class, possibly even before the dependent classes are fully implemented. It lets your tests run quickly and makes testing small pieces of functionality much easier, and once you have tested individual pieces of code in isolation you can have much higher confidence in larger-grained tests. Isolation becomes even more interesting when the dependencies are something like a data layer or a web service layer, where external calls can be very time consuming or can fail if the remote system is down for maintenance.

One of the great things about using Mock Objects is that they force you to think about the dependencies that your classes and methods have, and about the coupling between your classes. If you have high coupling then your code is often harder to test. If you have a loosely coupled design then testing with Mock Objects is much easier. Thinking about those design notions early helps you manage change over time.

Maxims:

  • Good design is better than bad design
  • Loosely coupled objects are usually a better design than tightly coupled objects
  • Testing improves code quality and developer efficiency over time
  • Testing is easier with a loosely coupled design

A Sample Project

We’re going to start with some simple code. We create a Domain object called Person and an interface for a Data Access object called IPersonRepository. Pretty simple at this point.

public class Person
{
    public string Id;
    public string FirstName;
    public string LastName;

    public Person(string newId, string fn, string ln)
    {
        Id = newId;
        FirstName = fn;
        LastName = ln;
    }
}


public interface IPersonRepository
{
    List<Person> GetPeople();
    Person GetPersonById(string id);
}

Next we create a PersonService object. This would represent all of the business logic in our application. It would interact with the Data Access tier and return information to the UI layer for display.

We wire together our objects using constructor-based Dependency Injection: all of the dependent objects are passed in through the constructor. This allows for loose coupling, since the PersonService knows only the interface and not the implementing class. Because the wiring is done in the constructor, we can also never have an invalid PersonService, as we could if there were only a setter for the IPersonRepository implementation.

This is again a fairly straightforward implementation, but I hope enough to display the issue at hand.

public class PersonService
{
    private IPersonRepository personRepos;

    public PersonService(IPersonRepository repos)
    {
        personRepos = repos;
    }

    public List<Person> GetAllPeople()
    {
        return personRepos.GetPeople();
    }

    public List<Person> GetAllPeopleSorted()
    {
        List<Person> people = personRepos.GetPeople();
        people.Sort(delegate(Person lhp, Person rhp) {
            return lhp.LastName.CompareTo(rhp.LastName);
        });
        return people;
    }

    public Person GetPerson(string id)
    {
        try
        {
            return personRepos.GetPersonById(id);
        }
        catch (ArgumentException)
        {
            return null; // no person with that id was found
        }
    }
}

Using Mocks with NUnit

Now we can start testing our PersonService. Notice that we haven’t even implemented the IPersonRepository yet. That way we can make sure that everything in our PersonService class works as expected without having to think about other layers of the application.


using System;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Mocks;

[TestFixture]
public class PersonServiceTest
{
    // The dynamic mock proxy that we will use to implement IPersonRepository
    private DynamicMock personRepositoryMock;

    // Set up some testing data
    private Person onePerson = new Person("1", "Wendy", "Whiner");
    private Person secondPerson = new Person("2", "Aaron", "Adams");
    private List<Person> peopleList;

    [SetUp]
    public void TestInit()
    {
        peopleList = new List<Person>();
        peopleList.Add(onePerson);
        peopleList.Add(secondPerson);

        // Construct a Mock Object of the IPersonRepository Interface
        personRepositoryMock = new DynamicMock(typeof (IPersonRepository));
    }

    [Test]
    public void TestGetAllPeople()
    {
        // Tell the mock object to return a predefined list of people
        // when the "GetPeople" method is called
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);

        // Construct a PersonService with the Mock IPersonRepository
        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);

        // Call methods and assert tests
        Assert.AreEqual(2, service.GetAllPeople().Count);
    }

    [Test]
    public void TestGetAllPeopleSorted()
    {
        // Tell the mock object to return a predefined list of people
        // when the "GetPeople" method is called
        personRepositoryMock.ExpectAndReturn("GetPeople", peopleList);

        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);

        // This method really has "business logic" in it - the sorting of people
        List<Person> people = service.GetAllPeopleSorted();
        Assert.IsNotNull(people);
        Assert.AreEqual(2, people.Count);

        // Make sure the first person returned is the correct one
        Person p = people[0];
        Assert.AreEqual("Adams", p.LastName);
    }

    [Test]
    public void TestGetSinglePersonWithValidId()
    {
        // Tell the mock object to return a predefined Person
        // when the "GetPersonById" method is called with "1"
        personRepositoryMock.ExpectAndReturn("GetPersonById", onePerson, "1");

        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);

        Person p = service.GetPerson("1");
        Assert.IsNotNull(p);
        Assert.AreEqual(p.Id, "1");
    }

    [Test]
    public void TestGetSinglePersonWithInvalidId()
    {
        // Tell the mock object to throw an ArgumentException when
        // "GetPersonById" is called with a null value
        personRepositoryMock.ExpectAndThrow("GetPersonById",
            new ArgumentException("Invalid person id."), null);

        PersonService service = new PersonService(
            (IPersonRepository) personRepositoryMock.MockInstance);

        // The only way to get null is if the underlying IPersonRepository
        // threw an ArgumentException
        Assert.IsNull(service.GetPerson(null));
    }
}

The PersonService doesn't have a lot of logic in it, but I hope this illustrates how easily you can test various conditions using Mock Objects. It also illustrates the idea of testing early, by letting you test some code before all of the dependent objects are implemented.

While the Mocks built into NUnit might not be the most powerful or complete Mocking library out there, it should be sufficient for most uses. I’m sure they will continue to improve them over time as well, so I look forward to them becoming more powerful (and having better documentation) in the future.

Download Code Example:
NMock C# Example Project


Custom XML Serialization for the .NET Compact Framework

.NET provides a whole slew of utilities for serializing objects into an XML form. But as I wrote in my previous post, the .NET Compact Framework has serious problems with this serialization. The good news is that you can leverage all of the existing attributes and tricks that you think should work (if it weren't so buggy) and use them in your own serialization scheme.

Get Started

For example, I want to know if I should skip a given member. There are a number of different things I can check: Is a reference type null? Is there an XmlIgnore attribute? Is there a PropertyNameSpecified value set to false? All of those questions can easily be answered using reflection.


/// <summary>
/// Should the current property be skipped based on rules
/// such as the existence of a propertySpecified value set to false?
/// </summary>
/// <param name="member">The MemberInfo to check</param>
/// <param name="o">The object that contained this member</param>
/// <returns>true if this member should be skipped</returns>
public bool SkipMember(MemberInfo member, object o)
{
    object val = null;
    if (member.MemberType == MemberTypes.Field)
    {
        val = ((FieldInfo)member).GetValue(o);
    }
    else if (member.MemberType == MemberTypes.Property)
    {
        val = ((PropertyInfo)member).GetValue(o, null);
    }

    if (null == val)
        return true;

    string propertyToTest = member.Name + "Specified";

    PropertyInfo specifiedProperty = o.GetType().GetProperty(propertyToTest);
    if (null != specifiedProperty && !(bool)specifiedProperty.GetValue(o, null))
        return true;

    FieldInfo specifiedField = o.GetType().GetField(propertyToTest, FIELD_BINDING_FLAGS);
    if (null != specifiedField && !(bool)specifiedField.GetValue(o))
        return true;

    return member.IsDefined(typeof(XmlIgnoreAttribute), false);
}

I can use a similar "fall-through" strategy to determine the name of the element to write, using the XmlElement attribute, for example. Now that I know I can answer some basic questions about an object using the built-in attributes that .NET uses for serialization, I can get down to serious serialization.
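To make that concrete, here is a minimal sketch of that fall-through name lookup. GetElementName is a hypothetical helper, not part of the code shown in this post: it prefers the name from an XmlElement attribute and falls back to the member's own name.

public string GetElementName(MemberInfo member)
{
    // Prefer an explicit name from [XmlElement("...")] if one is present
    object[] attrs = member.GetCustomAttributes(typeof(XmlElementAttribute), false);
    if (attrs.Length > 0)
    {
        XmlElementAttribute element = (XmlElementAttribute) attrs[0];
        if (!string.IsNullOrEmpty(element.ElementName))
            return element.ElementName;
    }

    // Fall through to the member's own name
    return member.Name;
}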

We’re all Object-Oriented programmers these days right? Right!? So to start I decided that the best way to handle this problem was to decompose it into a bunch of simpler problems.

ITagWriter

There are two things that we can write in XML: Elements and Attributes. So I created an interface, ITagWriter, with two concrete implementations corresponding to those two XML node types: AttributeTagWriter and ElementTagWriter. These classes allow me to write the structure of the XML document.


/// <summary>
/// Interface to implement to write different Xml tags,
/// either Elements or Attributes.
/// </summary>
internal interface ITagWriter
{
    /// <summary>
    /// Write the opening Xml tag with the given name.
    /// </summary>
    /// <param name="doc">The XML Document to write the tag to.</param>
    /// <param name="tagName">The name of the tag</param>
    void WriteStart(XmlWriter doc, string tagName);

    /// <summary>
    /// Write the appropriate end tag.
    /// </summary>
    /// <param name="doc">The XML Document to write the tag to.</param>
    void WriteEnd(XmlWriter doc);
}
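To give you an idea of scale, an ElementTagWriter can be as small as this (a sketch; the post doesn't show the original implementations):

internal class ElementTagWriter : ITagWriter
{
    public void WriteStart(XmlWriter doc, string tagName)
    {
        // Elements open with a start tag...
        doc.WriteStartElement(tagName);
    }

    public void WriteEnd(XmlWriter doc)
    {
        // ...and close with a matching end tag
        doc.WriteEndElement();
    }
}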

IValueWriter

With the ability to write the structure, I then need to be able to write out the values of the various objects and their properties. Just like with the ITagWriter interface, I decided to create an IValueWriter for the various kinds of values that I would need to write. The types I came up with were ObjectWriter, CollectionValueWriter, EnumValueWriter, SimpleValueWriter, and XmlElementValueWriter.


/// <summary>
/// Interface to implement to write different kinds of values.
/// </summary>
internal interface IValueWriter
{
    /// <summary>
    /// Write the Entry value to the XmlDocument.
    /// </summary>
    /// <param name="doc">The XML Document to write the tag to.</param>
    /// <param name="entry">The meta-information and value to write.</param>
    void Write(XmlWriter doc, CustomSerializationEntry entry);
}

You'll notice that the CustomSerializationEntry class is the parameter of the IValueWriter.Write() method. This class contains all of the metadata and the value for a given property of an object. It gives us an easy way to ask questions about that property: Is it a collection? Is it an enum? Is there a sort order? Basically, the idea is to encapsulate all of the things that are interesting from a serialization point of view.

To help manage the interaction, I also created a basic TypeLookup class. The job of this class is to determine what type of ITagWriter and IValueWriter to use for a given CustomSerializationEntry instance. This centralizes that decision-making in a single class and keeps the individual writer implementations much simpler: they just ask for the correct writer and call the methods defined in the interface, without caring what type they are writing. All hail the power of encapsulation and abstraction!
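A sketch of what TypeLookup might look like; the flags on CustomSerializationEntry used here (IsEnum, IsCollection) are illustrative guesses, not the actual implementation:

internal class TypeLookup
{
    public IValueWriter GetValueWriter(CustomSerializationEntry entry)
    {
        // Pick the writer that understands this entry's value
        if (entry.IsEnum)
            return new EnumValueWriter();
        if (entry.IsCollection)
            return new CollectionValueWriter();
        return new SimpleValueWriter();
    }
}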

Start Serializing

I bootstrap the serialization by creating an ObjectWriter to handle the outermost object. From there, the ObjectWriter takes over, constructing CustomSerializationEntry objects for each of the serialized object’s properties. The type of the property determines the type of IValueWriter that is used to write the property value.


/// <summary>
/// Serialize an object using the given
/// xmlRoot as the root element name.
/// </summary>
/// <param name="o">The object to serialize</param>
/// <param name="xmlRoot">The name of the root element</param>
/// <returns>The serialized XML as a string</returns>
public string Serialize(object o, string xmlRoot)
{
    StringBuilder sb = new StringBuilder();
    using (XmlTextWriter writer = new XmlTextWriter(new StringWriter(sb)))
    {
        writer.Formatting = Formatting.Indented;

        XmlWriter xmlDoc = XmlWriter.Create(writer);
        WriteRootElement(xmlDoc, o, xmlRoot);
    }

    return sb.ToString();
}

private static void WriteRootElement(XmlWriter doc, object o, string rootElement)
{
    doc.WriteStartDocument();

    ObjectWriter writer = new ObjectWriter(new TypeLookup());
    writer.Write(doc, o, rootElement);

    doc.WriteEndDocument();
}

The ObjectWriter itself creates a CustomSerializationEntry for all the properties that should be written. It then loops over the properties. Notice how it uses the TypeLookup (lookup) to ask for the proper value writer for each of the properties.

// ...
public void Write(XmlWriter doc, object o, string elementName)
{
    doc.WriteStartElement(elementName);

    IEnumerable<CustomSerializationEntry> entries = GetMemberInfo(o);
    foreach (CustomSerializationEntry currentEntry in entries)
    {
        lookup.GetValueWriter(currentEntry).Write(doc, currentEntry);
    }

    doc.WriteEndElement();
}
// ...

Conclusion

OK, so I left out a lot of details! But if I gave you all of the answers it wouldn't be any fun, now would it? I hope you can see how decomposing the problem of serialization turns it into a series of relatively simple problems that you can answer. So, if I can do this in about 500 lines of code, how come Microsoft can't implement a decent XML Serializer for the .NET Compact Framework?

.NET Compact Framework Serialization Bugs

XML Serialization in the .NET Compact Framework seems to be buggy enough that it is generally not useful if you need to serialize a class that conforms to a schema of some sort. If all you need to do is serialize and deserialize representations of a class you are probably fine. But if you need to use the serialized data to interoperate with a service (for example) it likely will not work.

Enums

I wrote previously about problems with Enum serialization on the .NET Compact Framework. To summarize: the XmlEnum attribute allows you to change the value that is serialized. One reason to do this is to limit the valid values in the document. Of course, enums have naming restrictions, such as not being able to contain spaces. So one of the main reasons to use XmlEnum is to put spaces in the serialized name.

The problem is that the XmlEnum under the .NET Compact Framework truncates the value at the first space. So, in the example below your Enums would serialize to “Some” and “Another” instead of the correct “Some Value” and “Another Value”.


public enum Foo {
    [XmlEnum("Some Value")]
    Some,
    [XmlEnum("Another Value")]
    Other
}

Controlling Serialization of Value Types

.NET has reference types and value types. Reference types are allocated on the heap and can be null. Value types are allocated on the stack (or inline in their containing object) and cannot be null. Which is which is not always obvious: int and double are value types, but so is DateTime; string, however, is a reference type.
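A quick illustration of the distinction:

int age = 0;                       // value type: can never be null
DateTime when = default(DateTime); // also a value type, despite feeling "object-like"
string name = null;                // reference type: null is allowed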

The serialization code is smart enough not to serialize null values. But what about value types? They cannot be null, so how do you determine whether they should be serialized? There are two ways to do it.

  1. DefaultValue
  2. PropertySpecified

DefaultValue

One way is to specify a default value. If the property has that default value it will not be serialized.

Example:

[XmlAttribute("age"), DefaultValue(-1)]
public int Age;

I have not found any problem with this on the .NET Compact Framework, but I haven't used it extensively.
DefaultValue does not make sense for every case, though. What about a value where negative, positive, and zero are all meaningful? What about a boolean where true and false are both meaningful and distinct from "unset"? Valid values are really a business concept, and many business concepts will not constrain values in a way that lets you designate a DefaultValue to control serialization, so this is not always useful.

See: more information on DefaultValue.

PropertySpecified

.NET serialization also allows you to specify a control value that determines whether a property should be serialized. The control value takes the form propertynameSpecified and is a boolean. If it is false, the property it controls will not be serialized; if it is true, it will be.

Example:

[XmlIgnore]
public bool FooSpecified = false;

[XmlElement("foo")]
public int Foo {
    get { return this.foo; }
    set {
        this.foo = value;
        this.FooSpecified = true;
    }
}

This works fine on the Full Framework. The problem is with the .NET Compact Framework: when the serializer comes across a propertynameSpecified value that is false, serialization of that class stops entirely. This means that if you have 5 properties and the second one has a control value set to false, only the first value will be serialized!
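As a hypothetical illustration (this class is mine, not from any framework docs): on the Compact Framework, if MiddleNameSpecified is false, serialization stops after FirstName, and LastName is silently dropped.

public class Name
{
    [XmlElement("first")]
    public string FirstName;

    [XmlIgnore]
    public bool MiddleNameSpecified = false;

    [XmlElement("middle")]
    public string MiddleName;

    [XmlElement("last")]
    public string LastName; // never written when MiddleNameSpecified == false
}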

Coming Soon

In a future post, I will write about how you can relatively easily write your own XML Serializer for the .NET Compact Framework in about 500 lines of code.

Update: follow-up post on Writing a Custom XML Serializer has been posted.

Automated Subversion Tagging With MSBuild

I've written previously about using MSBuild with NUnit as well as a bit of a manifesto on Relentless Build Automation. I believe that automating the build and deployment process is a necessary step to ensure the reliable delivery of quality software.

Release Management

One of the things that we as software developers have to do regularly is make releases, either to QA or to a customer. After we make a release, we need to be able to keep developing the software to add new features and fix bugs. Tagging is a safety process used in a version control system to let us easily get back to the code that was used to build a specific release. When you tag, you can always rebuild that release. So, if weeks or months down the road you need to fix a critical bug but don't want to release new features, you can go back to the tag, create a branch, fix the bug, and release the new build to your users.
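In Subversion, a tag is just a cheap copy. Done by hand, tagging a release looks something like this (the repository URL is hypothetical):

svn copy http://svn.example.com/repo/trunk \
    http://svn.example.com/repo/tags/REL-1.0 \
    -m "Tagging release 1.0"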

How Do We Ensure We Can Recreate Releases

How can we ensure that we will be able to recreate a release that we make to either QA or a customer? Use automation to tag your builds when you create them, of course.

I've contributed a new SvnCopy Task to the MSBuild Community Tasks project, which was just accepted and committed. It is currently in the project's Subversion repository and should be available shortly in an official build. It lets you easily automate tagging or branching your builds when you release. Subversion uses the copy metaphor for both branching and tagging, which is different from some other version control systems.

Example:

<Target Name="Tag">
  <!-- The XML was mangled in the original post; task element names are
       reconstructed, and SvnInfo is assumed for fetching the remote revision -->
  <SvnInfo RepositoryPath="$(SvnRemoteRoot)"
           Username="$(SvnUserName)" Password="$(SvnPassword)">
    <Output TaskParameter="Revision" PropertyName="RemoteSvnRevisionNumber" />
  </SvnInfo>

  <Error Condition="'$(SvnUserName)' == ''"
         Text="You must set your Subversion Username."/>

  <SvnCopy SourcePath="$(SvnRemoteRoot)/trunk"
           DestinationPath="$(SvnRemoteRoot)/tags/REV-$(RemoteSvnRevisionNumber)"
           Message="Auto-tagging Revision: $(RemoteSvnRevisionNumber)"
           Username="$(SvnUserName)" Password="$(SvnPassword)"/>
</Target>
You can then create a tag every time you generate a build by tying targets together with dependencies. In the example below, GenerateTestBuild calls GenerateCabFiles and Tag to automatically build the installer and tag Subversion with the current revision number.



Command="devenv "$(SolutionFileName)" /build $(Configuration) /project $(ConfigCabFileProject)"/>
Command="devenv "$(SolutionFileName)" /build $(Configuration) /project $(UiCabFileProject)"/>




DestinationFolder="$(DestRoot)\Build$(SvnRevisionNumber)"/>
DestinationFolder="$(DestRoot)\Build$(SvnRevisionNumber)"/>

DestinationFolder="$(DestRoot)\$(BuildFolderPrefix)$(SvnRevisionNumber)\FormXmls"/>


Hopefully this will help you get started on some more automation.

Update:
MSBuild Community Tasks version 1.2 has been released containing this code. You can get it here.

Resources

MSBuild Community Tasks
Subversion

More .NET Compact Framework Woes

I posted previously on a bug in the .NET Compact Framework with the XmlEnum attribute when the value contains whitespace. Well, I've run into some other interesting "features".

The first thing to realize is that things that work on the Full Framework don't necessarily work on the Compact Framework.

What Works With Serialization

First, the good news: arrays work everywhere, on both the Compact Framework and the Full Framework. The downside, of course, is that arrays are very inconvenient; you have to resize them yourself, for example.

The second thing that works is the regular, non-generic IList: when you map one, a concrete collection is used. Of course, one of the big improvements in .NET 2.0 was the introduction of generics, which allow you to have strongly-typed collections without manually implementing one for each specific type.

More Generics Problems

XML serialization works if you use the non-generic IList, but there are problems with the generic interfaces. Actually, more than problems: the short story is that you cannot use IList<T> for XML serialization. It plain doesn't work.

Here's where the difference between the Full Framework and the Compact Framework comes into play. On the Full Framework, you can map to a concrete collection whether it's generic or not, so List<T> will work. Unfortunately, this does not work on the Compact Framework.


[XmlElement(Name="foo")]
public List<string> Foos
{
    get { return this.foos; }
    set { this.foos = value; }
}

You end up with an exception:

Two mappings for string[].

Stack Trace:
at System.Xml.Serialization.TypeContainer.AddType()
at System.Xml.Serialization.TypeContainer.AddType()
at System.Xml.Serialization.XmlSerializationReflector.AddIXmlSerializableType()
at System.Xml.Serialization.XmlSerializationReflector.AddType()
at System.Xml.Serialization.XmlSerializationReflector.FindType()
at System.Xml.Serialization.XmlSerializationReflector.FindType()
.....

So What Do You Do About It

You basically have two options:

  1. Use Arrays
  2. Create your own custom, strongly-typed collections classes

Arrays

The advantage is that this is very simple and requires no extra collection classes. The downside, as I said above, is that you have to write code to resize the arrays manually.


public T[] AppendItem<T>(T[] theArray, T newItem) {
    T[] newArray = new T[theArray.Length + 1];
    Array.Copy(theArray, newArray, theArray.Length);
    newArray[newArray.Length - 1] = newItem;
    return newArray;
}
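Usage is straightforward, assuming the helper above is in scope (a trivial example):

string[] names = new string[0];
names = AppendItem(names, "Ann"); // allocates a new, larger array each call
names = AppendItem(names, "Bob");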

Custom, Strongly-Typed Collections

If you don’t want to use Arrays and deal with manually resizing them, you can build your own Collections classes for each of your types.

Create your strongly-typed collection:

[Serializable]
[EditorBrowsable(EditorBrowsableState.Advanced)]
public class EmployeeCollection : ArrayList {

    public Employee Add(Employee obj) {
        base.Add(obj);
        return obj;
    }

    public Employee Add() {
        return Add(new Employee());
    }

    public void Insert(int index, Employee obj) {
        base.Insert(index, obj);
    }

    public void Remove(Employee obj) {
        base.Remove(obj);
    }

    new public Employee this[int index] {
        get { return (Employee) base[index]; }
        set { base[index] = value; }
    }
}

Then use that collection in your class to map:

[XmlRoot("company")]
class Company {
private EmployeeCollection employees;

[XmlElement(Type=typeof(Employee),ElementName="employee",IsNullable=false)]
[EditorBrowsable(EditorBrowsableState.Advanced)]
public EmployeeCollection Employees {
get { return this.employees; }
set { this.employees = value; }
}
}

Pick Your Poison

The problems with generic collections in the .NET Compact Framework seem like yet another bug. So pick your poison and choose whichever workaround seems simpler to you. Hope that helps someone out.

.NET Makes Me Mad (Generics and Collections edition)

OK, so I've decided I need to rant a little bit about .NET. This ends up, in part, being "what I like about Java that I don't like about C#". I think that's fair, though: C# and .NET weren't developed in a vacuum, and C# is hardly the first object-oriented, VM-run language. As such, I think it's fair to point out where they should have learned from others.

Generics Don’t Fully Support Covariant Types

Generic collections in .NET only handle a single type of object well. You can add a sub-type to a collection, but if you have two collections with covariant element types, you cannot mix them without jumping through hoops.

Example

The simple case of adding a single Covariant type works, but when dealing with a Generic Collection of covariant types, it does not.

abstract class Vehicle {
}

class Car : Vehicle {
}

class MotorCycle : Vehicle {
}

List<Vehicle> vehicles = new List<Vehicle>();
vehicles.Add(new Car()); // This is OK

List<MotorCycle> motorCycles = LoadMotorCycles();
vehicles.AddRange(motorCycles); // This does not work!

To make it work with AddRange, you have to perform a manual conversion

public IList<Vehicle> AllVehicles
{
    get
    {
        List<Vehicle> vehicleList = new List<Vehicle>();
        vehicleList.AddRange(AllCars.ConvertAll(
            new Converter<Car, Vehicle>(ToVehicle)));
        vehicleList.AddRange(AllCycles.ConvertAll(
            new Converter<MotorCycle, Vehicle>(ToVehicle)));
        return vehicleList;
    }
}

private static Vehicle ToVehicle<T>(T vehic) where T : Vehicle
{
    return vehic;
}

What’s Good About .NET Generics

.NET generic type information is available at runtime. In Java, generics are implemented by erasure: all of the type checking is done at compile time, the compiler inserts explicit casts into the code for you, and at runtime the code looks the same as if generics were never used. .NET chose not to use erasure but to make the type information available at runtime. This is generally more efficient and less prone to errors or problems with reflection. So, good work there.
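A quick illustration of reified generics:

List<int> numbers = new List<int>();

// The type argument survives to runtime; this prints "System.Int32"
Console.WriteLine(numbers.GetType().GetGenericArguments()[0]);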

Note: (The Java folks did this so as not to break backwards compatibility. I think that major revisions should be allowed to break backwards compatibility when there are compelling reasons to do so.)

.NET Collections

Are collections classes such a mysterious art?

.NET does not have a Set or a Bag. These are generally useful and very common collections. A Set guarantees the uniqueness of elements in a List-like interface. A Bag can contain any objects; the unique thing about it is that it keeps a count of how many times each object was added.
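For example, a Set might be used like this (a hypothetical API; .NET 2.0 ships nothing like it, and HashSet<T> only arrived later):

Set<string> tags = new Set<string>();
tags.Add("csharp");
tags.Add("csharp"); // duplicate: ignored

int count = tags.Count; // 1 - elements are unique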

Example of a Bag


Bag<Fruit> fruitBag = new Bag<Fruit>();
Banana b = new Banana();
Apple a = new Apple();

fruitBag.Add(b);
fruitBag.Add(b);
fruitBag.Add(a);

int bananas = fruitBag.GetCount(b);

The SortedList and the SortedDictionary both expose a Dictionary interface. Why wouldn't the SortedList have, uh, maybe, a List interface?

The IList<T> interface is so anemic as to be basically worthless. It does not even have an AddRange method (or an AddAll) to merge the values of one collection into another. It's so limited that it makes it very hard to return interfaces from classes, which is otherwise a good way to encapsulate the implementation details of your methods.
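So you end up writing little helpers like this one (my own workaround sketch, not a framework API):

// Merge one collection into an IList<T>, since AddRange is missing
public static void AddAll<T>(IList<T> target, IEnumerable<T> items)
{
    foreach (T item in items)
        target.Add(item);
}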

What Do You Think

Do you have things about .NET that annoy you? If so, leave a comment and let me know what they are.

Using SQL Compact Edition Under ASP.NET

What is SQLCE?

SQL Compact Edition is the "low end" version of a SQL database solution from Microsoft. It is a single-file, application-managed database implementation. It doesn't have all of the bells and whistles of the high-end database solutions, which is great when you realize that the next lowest version, SQL Express, is over a 100MB install.

The beta of this software was called SQL Everywhere Edition (SQLEV). Microsoft decided that they didn't like the name, so they went with SQL Compact Edition (SQLCE). The name Compact Edition is a bit of a misnomer: it can be used on the Compact Framework, but it can also be used on the full framework anywhere a single-file, zero-install database might be needed. Well, almost everywhere at least. There is an explicit check when running on the Full Framework to make sure that you are not using it in an ASP.NET application. SQLCE is not a very scalable solution and has some inherent limitations with concurrent connections, for example. That is fine, though, if you go back to what you are using it for: an embedded, single-file database. Well, I ran into a case where I need a small web service to be running where an embedded database makes a lot of sense. I'm using the HttpListener class to run my own Http.sys server without using IIS, but this still counts as ASP.NET as far as the SQLCE code is concerned.
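For context, hosting your own Http.sys server with HttpListener looks roughly like this (the prefix URL is just an example):

HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/myservice/");
listener.Start();

// Block until a request arrives, then answer it
HttpListenerContext context = listener.GetContext();
byte[] body = System.Text.Encoding.UTF8.GetBytes("Hello");
context.Response.OutputStream.Write(body, 0, body.Length);
context.Response.Close();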

Force SQLCE to run under ASP.NET

Steve Lasker posted a blog entry on how to use SQLCE under ASP.NET using the pre-release version, SQLEV. Under SQLEV you set a flag that tells the SQL connection, "yes, I know this isn't supported, but let me do it anyway":

AppDomain.CurrentDomain.SetData("SQLServerEverywhereUnderWebHosting", true)

As you can see, the name of the product is right there in the key. Well, they changed the name of the product, and so they changed the name of the key. So, if you were using the beta for development and are now switching over to the release version of SQLCE, you will need to change the key:

AppDomain.CurrentDomain.SetData("SQLServerCompactEditionUnderWebHosting", true)

That should allow you to use the database under ASP.NET. Now you can revel in the fact that you are swimming in unsupported waters!

Special Thanks
I found this using the great .NET disassembler Lutz Roeder’s .NET Reflector. You should check it out. It can be a great way to track down details of an API implementation when the abstraction is a bit leaky.

Bug In the .NET CompactFramework XmlEnum with Whitespace

In .NET you can use attributes to mark up properties in a class to tell the XmlSerializer how to marshal that class to and from XML. There are a number of different attributes you can use; for example, you can use enums for restricted lists of values.

E.g:


public enum Foo {
[XmlEnum("Some Value")
Some,
[XmlEnum("Another")]
Other

}

To represent something like <element>Some Value</element>

At least that’s the idea…
It seems that values marked with XmlEnum whose names contain whitespace will not be deserialized on the .NET CF 2.0. It throws an InvalidOperationException saying it can't find the enum, using only the part of the string before the space as the value.

I would expect it to deserialize into the enum Foo.Some. All of this works on the Full Framework.

Workaround

If you control the schema, you can work around this issue by changing the allowed values not to have spaces in them. But in many cases you might be trying to integrate with an existing service or schema that you do not control, and you won't have the ability to change it. If that's the case, what do you do?

Luckily I found some code to help me get the XmlEnumAttribute value for an Enum field (thanks for sharing).

I made some slight modifications to it, so that it was a little safer (I think).

/// <summary>
/// Convert an Enum value to a string representation using the XmlEnum value if
/// it is appropriate.
/// </summary>
public static string ConvertToString(Enum e)
{
    if (null == e)
        throw new ArgumentNullException("Enum type can not be null.");

    // Get the Type of the enum
    Type t = e.GetType();

    // Get the FieldInfo for the member field with the enum's name
    FieldInfo info = t.GetField(e.ToString("G"));

    // Check to see if the XmlEnumAttribute is defined on this field
    if (null != info && info.IsDefined(typeof(XmlEnumAttribute), false))
    {
        // Get the XmlEnumAttribute
        object[] attrs = info.GetCustomAttributes(typeof(XmlEnumAttribute), false);
        if (attrs.Length > 0)
        {
            XmlEnumAttribute att = (XmlEnumAttribute)attrs[0];
            return att.Name;
        }
    }

    // If no XmlEnumAttribute was found then return the string version of the enum.
    return e.ToString("G");
}

I also needed to go the other way, converting a string back into an Enum, so that I can support deserializing from XML into the object model.

/// <summary>
/// Get an Enum value for a given type of Enum based on the string name of the Enum
/// itself or the XmlEnum element that it has.
/// </summary>
public static Enum ConvertToEnum(Type enumType, string name)
{
    // Look for an XmlEnumAttribute with the given name
    foreach (FieldInfo fi in enumType.GetFields())
    {
        object[] attrs = fi.GetCustomAttributes(typeof(XmlEnumAttribute), false);
        if ((attrs.Length > 0 && ((XmlEnumAttribute)attrs[0]).Name == name)
            || fi.Name == name)
            return (Enum) Enum.Parse(enumType, fi.Name, false);
    }

    return null;
}

Using this code, you can offer an API to client code that uses the strongly-typed Enum, but does a manual conversion for the serialization process.

class Bar {

    private Foo fooType;

    // The code called by the application
    [XmlIgnore]
    public Foo FooType {
        get { return fooType; }
        set { fooType = value; }
    }

    // The code used to manage serialization.
    // Ideally this could be private, but that doesn't work with the serializer.
    [XmlElement("baz")]
    public string FooTypeForSerializer {
        get { return ConvertToString(fooType); }
        set { fooType = (Foo) ConvertToEnum(typeof(Foo), value); }
    }
}

Conclusion

One big workaround for what seems to me to be a bug. But it works, and I can continue to get work done. It should also be easy to refactor this workaround out if the bug ever gets fixed: just change the [XmlIgnore] tag and delete the ForSerializer methods.

I hope that helps if you run into a similar problem.