TFS Upgrade Nightmares

Recently we needed to upgrade our TFS 2012 server to TFS 2013.  The last time we upgraded TFS 2010 to TFS 2012 it was a nightmare so we took some more precautions this time.  Alas they didn’t help.  This upgrade proved to be just as big of a nightmare as the last time.  Here’s my story of upgrading TFS based upon my 2 previous attempts.

Let’s start with our environment.  We have the TFS 2012 application tier hosted on its own Windows Server 2008 R2 machine.  When we first installed TFS 2012 we were running an older version of SQL that we couldn’t upgrade so we deployed SQL Server 2008 R2 to the TFS server along with SSRS.  Later on we got a SQL 2008 R2 instance running on another server so we moved just the TFS database to it.  At the time we didn’t have any SSRS servers so we left the reporting DB on the TFS server.  Our database servers are managed by our data group so we don’t have full permissions to the databases.

As part of upgrading to TFS 2013 we had to upgrade our SQL server to 2012 because SQL 2008 R2 isn’t supported.  Therefore we decided to break the upgrade into 2 steps done about a week apart.  The first step would be migrating our existing TFS and reporting databases to a new SQL 2012 server dedicated to TFS.  Once it is on its own dedicated server we can manage it better.  We would take the current databases offline and let the new setup bake for a week. 

The second step would have us upgrade the TFS server to TFS 2013.  We were doing an in place upgrade because: a) the server is more than sufficient, and b) we didn’t want to have to send out new URLs.

Problem 1 – Database Privileges

The first problem we ran into is with how TFS manages database permissions.  Over the years we’ve had people come and go so I need to add and remove TFS administrators.  Unfortunately this requires that you be a SQL admin.  Why?  I don’t know but it certainly causes the DB admins fits when I tell them I need SQL admin rights just to add a user to the admin console.

When upgrading TFS you need full sysadmin rights not only on the SQL instance but also on SSAS (for reporting) and SSRS.  This is understandable but it still doesn’t make the DB admins happy.

Problem 2 – Moving a TFS Database

The problem with moving the TFS database is that you cannot do it without reinstalling TFS.  The documentation says you can: rerun the install (which should be really quick) and, when the configuration tool starts, choose the option to upgrade.  Alas, the configuration options are all disabled. 

The only real solution is to uninstall TFS and then reinstall it.  Why can’t we have an option to move the database?  Reinstalling TFS is a complete waste of time just to move the database.  Even worse is that uninstalling TFS takes longer than installing it.  Why can’t we access the configuration tool outside the installer?  Looking at the command line it is actually the standard admin tool but with a parameter added so why can’t we have a shortcut in the Start Menu for it?  Trying to reinstall TFS introduces the next problem.

Problem 4 – Reinstalling TFS

Have you ever tried to find the version of TFS that you originally installed with?  In our case we were running Update 2 but every link to Update 2 will redirect you to Update 4.  You have 2 options: keep looking or upgrade.

The only place I actually found Update 2 was by going back to MSDN Downloads.  Why doesn’t MS provide easier access to the previous updates?  Ideally we would just keep a copy of the update on the server but the full version is over 1 GB in size and our servers don’t have a lot of hard drive space (because they don’t need that much).  Remember, the only reason you need the installer is so you can move the database.  So I decided to go ahead and update to Update 4.

UPDATE 3: Someone from the TFS team responded that all you need to do to move the database is the following:

  • Stop the collection
  • Backup/restore the database
  • Edit the collection settings to move the database

I do see these options in TFS but I'm not sure whether that would include the configuration database (which got moved as well) or just the collection database.  The next time I need to move the database I'll have to try it and see.

Problem 5 –  Restoring the Reporting Database

After restoring the database and installing Update 4 I started the configuration process to point it to the new database.  The configuration tool found the database and confirmed it could be updated but it refused to work with the new SSRS instance.  We had already copied the reporting database to the new server and restored the encryption key but it continued to fail with a TFS error saying the reporting server could not be found.  The various suggestions it provided were all useless – firewall, permissions, bad name.  We checked and rechecked everything but found nothing.

At this point it was getting late and we needed TFS online so we decided to abort.  We brought the original databases back online and I pointed our newly reinstalled TFS server back at them.  It had no problem finding the database or SSRS instance.  But it did let us know that it would be updating our databases to Update 4.  No problem.  We let it do its validation checks and, beyond the firewall warning, it said everything would be great.  We finished the configuration wizard, it completed the update steps and then began running the collection upgrades.  At this point we got a failure in one of the scripts – “fails in WorkItemTrackingToDev11M55.sql”. 

Problem 6 – TFS 2012 != TFS 2012 for Any Value of Update

Hmm, a quick google found this reference.  Evidently Update 4 requires that you’re running SQL 2008 R2 SP1 even though TFS Update 2 does not.  Does MS even bother documenting system requirement changes for updates?  Not everybody upgrades their servers every time a new update comes out.

Now we’re in trouble.  We cannot upgrade our SQL server during an after-hours TFS migration.  For reasons unknown to anyone but the TFS team all the validations succeeded, the databases were partially updated and then the update scripts failed.  Why isn’t this caught before it is too late?  What is the purpose of running validation if it doesn’t catch things that will cause the upgrade to fail?

We ended up having to restore the databases to get them back to Update 2.  By that time I had downloaded Update 2 from MS, uninstalled Update 4 and reinstalled Update 2.  Now we were back up and running, at least for now.  We went home defeated and annoyed.

Problem 7 – Reporting, Part 2

A few days later we were ready to move the databases again.  This time we knew a reinstall of Update 2 was needed so we were ready.  Uninstall, move databases, reinstall, run the configuration wizard.  Again we ran into the problem where it could find the TFS database and even the Warehouse and Analysis databases but it refused to recognize the SSRS instance.  We verified we could access SSRS but the installer refused. 

We decided to risk it.  We manually entered the reporting URLs and continued anyway.  TFS did its configuration and everything checked out OK.  But when we tried to run a report we got an error about the datasource.  We verified the source was correct but the reports weren’t working.  But this time we saw something about the databases needing to be rebuilt.  Arg!! I just wanted to go home.  We started the rebuild of the databases but it was taking too long to run so we left it and hoped for the best.  Late that night we verified the rebuild was done and our reports were working.  Success!!!

UPDATE 1: Unfortunately not everything is going well with the reports.  We were finally able to track the issue down to the user account I was using to configure TFS.  Evidently you need to be an administrator on the machine hosting the SSRS instance (not just an admin for SSRS).  Without admin privileges you cannot access the WMI provider remotely, which causes the configuration to fail.  Needless to say the network admins were not pleased.  They tried to strip things down to minimal privileges but nothing short of full admin worked. 

UPDATE 2: We have run into another, more serious issue though: we can no longer create new team projects in any collection.  I can create a new collection but creating any team project fails.  It always fails when trying to create the SSRS reports and the error message is useless.  The error message says it cannot connect to SSRS so the host name is bad, the connection timed out or the database is corrupt.  Now I'm really annoyed.  In a last ditch effort we are rebuilding the reporting databases.  If this doesn't work we may need to retreat back to the old server.

It was another brutal upgrade and all we really accomplished was moving the databases.  We are still planning to upgrade to TFS 2013 but nobody is looking forward to it.  Since the TFS upgrade won’t require moving servers or databases we’re hoping it’ll be easier, but experience says otherwise.

Final Thoughts

Is it always going to be this bad?  I don’t believe so.  I’ve run TFS in other places with the SQL database and application tiers on the same machine and I’ve had minimal trouble upgrading over the years.  But I don’t believe the environment I described earlier is at all unique.  So I’m confused as to why we have so many issues upgrading TFS when it is a standard use case.  Why is it so hard to move a TFS database?  Why do we have to uninstall/reinstall so much?  Why does the installer do verification on the databases but still fail to catch obvious issues (like server versions)?  TFS really has a long way to go before the upgrade process is smooth.  Until then I’ll live in fear every time I need to update it.

Posted Sat, Apr 12 2014 by Michael Taylor
A Smarter WCF Service Client, Part 4

In the last article we presented a solution for calling WCF services that needed only a single line of code and no using statements.  But it had many limitations, including:

  • Reliance on a static class
  • Testability was low for any code using it
  • Reliance on the WCF client rather than the contract
  • Not optimized for multiple calls

In reality the previous series of articles were all about setting up the infrastructure to support the final solution.  All this infrastructure will be hidden from code when we are done.  Let’s get started.

Service Template

Ultimately we will be using T4 to generate service clients but the templates are complex and we do not yet even have an idea of what we’re building.  For this article we will build the final service client by hand.  In subsequent articles we will convert the code to a template. 

The service client should meet the following goals:

  • Creatable using a standard new operator with no requirements for specifying any WCF endpoint information.
  • Implement the WCF contract interface so instances can be used as parameters such as in an IoC.
  • Instances do not need to be disposed.
  • Should allow multiple calls to be optimized through the same client.
  • Extensible to allow callers to adjust the client if needed.

ServiceChannelClient

To start we will create a new abstract class that will serve as the base class for service clients.  While we could reuse the ServiceClientWrapper of previous articles that would expose too much WCF infrastructure.  Instead the class will wrap calls to ServiceClientWrapper and ServiceClientFactory from the previous articles.  All the functionality is protected since this is an abstract class.

public abstract class ServiceChannelClient<TChannel> where TChannel : class
{
    protected virtual ServiceClientWrapper<TChannel> CreateInstance ()
    {
        return ServiceClientFactory.CreateAndWrap<TChannel>();
    }

    protected virtual void InvokeMethod ( Action<TChannel> action )
    {
        ServiceClientFactory.InvokeMethod<TChannel>(action, CreateInstance);
    }

    protected virtual TResult InvokeMethod<TResult> ( Func<TChannel, TResult> action )
    {
        return ServiceClientFactory.InvokeMethod<TChannel, TResult>(action, CreateInstance);
    }
}


Basically the class just wraps the classes from previous articles.  It is about to get more functionality but first let’s implement a WCF service client using the class.

Typed Service Clients

Creating a class to wrap a WCF service is now pretty straightforward (depending upon the size of the interface).  We need only create a derived type and implement the interface using the methods provided in the base class.

public class EmployeeServiceClient : ServiceChannelClient<IEmployeeService>, IEmployeeService
{
    public Employee Get ( int id )
    {
        return InvokeMethod<Employee>(c => c.Get(id));
    }

    public Employee Update ( Employee employee )
    {
        return InvokeMethod<Employee>(c => c.Update(employee));
    }

    public IEnumerable<Employee> GetEmployees ()
    {
        return InvokeMethod<IEnumerable<Employee>>(c => c.GetEmployees());
    }
}


If we wanted we could make all the methods virtual to allow for extensibility.  We could also mark the type as partial to extend its abilities but the above code is sufficient until we move to T4. 

Switching the application over to use the new client is straightforward.

  • Remove the service reference code
  • Add the service interface assembly to the project
  • Replace all the existing calls with the new type

When the service reference code is removed it will remove the configuration entry for the service as well so that information will need to be put back in.  For now ensure the endpoint name matches the class name and that the contract name matches the namespace-qualified name of the interface.

<!-- Original -->
<client>
  <endpoint address="http://localhost:21054/EmployeeService.svc"
            binding="basicHttpBinding"
            bindingConfiguration="BasicHttpBinding_IEmployeeService"
            contract="ServiceReference1.IEmployeeService"
            name="BasicHttpBinding_IEmployeeService" />
</client>

<!-- New -->
<client>
  <endpoint address="http://localhost:21054/EmployeeService.svc"
            binding="basicHttpBinding"
            bindingConfiguration="BasicHttpBinding_IEmployeeService"
            contract="ServiceLib.IEmployeeService"
            name="EmployeeServiceClient" />
</client>


The matching of the endpoint name with the client type name is only a convenience.  We can add a constructor that allows the endpoint name to be specified if we want; a rough sketch of that option is shown below.
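
Here is one way such a constructor might look.  This sketch is not from the article; it assumes the ServiceClientWrapper constructor that takes an endpoint configuration name from Part 2.

public class EmployeeServiceClient : ServiceChannelClient<IEmployeeService>, IEmployeeService
{
    public EmployeeServiceClient ()
    { }

    public EmployeeServiceClient ( string endpointName )
    {
        m_endpointName = endpointName;
    }

    public Employee Get ( int id )
    {
        return InvokeMethod<Employee>(c => c.Get(id));
    }

    public Employee Update ( Employee employee )
    {
        return InvokeMethod<Employee>(c => c.Update(employee));
    }

    public IEnumerable<Employee> GetEmployees ()
    {
        return InvokeMethod<IEnumerable<Employee>>(c => c.GetEmployees());
    }

    protected override ServiceClientWrapper<IEmployeeService> CreateInstance ()
    {
        //Fall back to the default endpoint resolution when no name was supplied
        return String.IsNullOrEmpty(m_endpointName)
                    ? base.CreateInstance()
                    : new ServiceClientWrapper<IEmployeeService>(m_endpointName);
    }

    private readonly string m_endpointName;
}

Finally we can fix up the calls to the client.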

//Original
ServiceClientFactory.InvokeMethod<ServiceReference1.IEmployeeService>(c =>
{
    dataGridView1.DataSource = c.GetEmployees();
});

//New
dataGridView1.DataSource = Client.GetEmployees();

//Poor man's injection
private IEmployeeService Client
{
    get { return m_client.Value; }
}
private readonly Lazy<IEmployeeService> m_client = new Lazy<IEmployeeService>(() => new EmployeeServiceClient());


Now that we don’t need to dispose of objects or rely on static classes we can use just the interface which makes testing much easier.
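
To illustrate the testing benefit, here is a rough sketch (not from the article) of a fake service being used in place of the real client.  FakeEmployeeService is a hypothetical test double and the sketch assumes Employee has a public default constructor.

public class FakeEmployeeService : IEmployeeService
{
    public Employee Get ( int id )
    {
        return new Employee();
    }

    public Employee Update ( Employee employee )
    {
        return employee;
    }

    public IEnumerable<Employee> GetEmployees ()
    {
        return new[] { new Employee() };
    }
}

//Code under test only sees IEmployeeService so the fake drops right in
IEmployeeService service = new FakeEmployeeService();
dataGridView1.DataSource = service.GetEmployees();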

Multiple Calls

We’re almost done but we want to make one more change.  WCF service calls are not cheap so most calls tend to do several things at once rather than focusing on a single task like standard OOP would have you do.  But there are times when multiple calls are needed and the above code would require 2 client connections to be created.  Take, for example, this code in the sample application that creates 2 connections just to update an employee.

private void OnNewEmployee ( object sender, EventArgs e )
{
    var dlg = new EmployeeForm();
    if (dlg.ShowDialog(this) != DialogResult.OK)
        return;

    //Call the service
    Client.Update(dlg.Employee);

    LoadEmployees();
}

private void LoadEmployees ( )
{
    dataGridView1.DataSource = Client.GetEmployees();
}


What we want to do is allow the client to tell us they plan to make multiple calls so we can optimize the client creation.  Since the client needs to be able to define the lifetime of the calls it makes sense to use a using statement.  All we need to do is provide a mechanism for creating a type that controls the lifetime of our client.  We can add this directly to the base client.

public abstract class ServiceChannelClient<TChannel> : ISupportsPersistentChannel
                    where TChannel : class
{
    protected bool HasOpenChannel
    {
        get { return m_channel != null; }
    }

    protected virtual void CloseChannel ()
    {
        var channel = Interlocked.Exchange(ref m_channel, null);
        if (channel != null)
            channel.Close();
    }

    protected virtual ServiceClientWrapper<TChannel> CreateInstance ()
    {
        return ServiceClientFactory.CreateAndWrap<TChannel>();
    }

    protected virtual void InvokeMethod ( Action<TChannel> action )
    {
        if (HasOpenChannel)
            action(m_channel.Client);
        else
            ServiceClientFactory.InvokeMethod<TChannel>(action, CreateInstance);
    }

    protected virtual TResult InvokeMethod<TResult> ( Func<TChannel, TResult> action )
    {
        return HasOpenChannel ? action(m_channel.Client)
                              : ServiceClientFactory.InvokeMethod<TChannel, TResult>(action, CreateInstance);
    }

    protected virtual void OpenChannel ()
    {
        if (m_channel == null)
        {
            var channel = CreateInstance();

            var oldChannel = Interlocked.CompareExchange(ref m_channel, channel, null);
            if (oldChannel != null)
                channel.Close();
        };
    }

    void ISupportsPersistentChannel.Close ()
    {
        CloseChannel();
    }

    void ISupportsPersistentChannel.Open ()
    {
        OpenChannel();
    }

    private ServiceClientWrapper<TChannel> m_channel;
}


Now the base class can have a temporary or permanent connection open depending upon the caller’s needs.  But how do we expose this to the caller?  Introducing the ISupportsPersistentChannel interface.  The sole purpose of this interface is to expose Open/Close methods.  Notice the methods are not exposed publicly but only through the interface.  This prevents callers from calling these methods accidentally.  Why is this important?  Why do we need an interface?  It’s all about interchangeability.
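
The article does not show the interface definition itself, but based on how it is used it is presumably little more than this:

public interface ISupportsPersistentChannel
{
    void Open ();

    void Close ();
}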

Creating A Persistent Channel

Remember that one of the requirements is that we want to be able to use the service interface directly without regard to the implementation, including mocks.  Therefore persistence cannot require that we use the client class.  Instead we add an extension method to the interface which gives us the ability to open a persistent channel irrespective of the implementation.  The caller code will work the same whether we’re using our client type or a test mock.  Here’s the implementation.

public static class EmployeeServiceExtensions
{
    public static IDisposable OpenPersistentChannel ( this IEmployeeService source )
    {
        var batch = source as ISupportsPersistentChannel;
        if (batch != null)
        {
            batch.Open();

            return new ActionDisposable(batch.Close);
        };

        return Disposable.Empty;
    }
}


The key is that the extension method will look for the ISupportsPersistentChannel interface.  For types deriving from ServiceChannelClient the interface will exist.  For other types it likely will not.  If the interface does exist then the channel is opened and an instance of a disposable class is returned that will close the channel.  Note the client itself is never returned.  If the interface does not exist then a disposable object is returned that does nothing.  This makes it safe for both production and test code.  Here’s how the caller would use it.

private void OnNewEmployee ( object sender, EventArgs e )
{
    var dlg = new EmployeeForm();
    if (dlg.ShowDialog(this) != DialogResult.OK)
        return;

    //Call the service
    using (var proxy = Client.OpenPersistentChannel())
    {
        Client.Update(dlg.Employee);

        LoadEmployees(Client);
    };
}

private void LoadEmployees ( )
{
    LoadEmployees(Client);
}

private void LoadEmployees ( IEmployeeService service )
{
    dataGridView1.DataSource = service.GetEmployees();
}

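The ActionDisposable and Disposable.Empty helpers used by the extension method are not shown in the article (an existing helper such as Rx’s Disposable would work just as well).  A minimal sketch of what they might look like:

public sealed class ActionDisposable : IDisposable
{
    public ActionDisposable ( Action action )
    {
        m_action = action;
    }

    public void Dispose ()
    {
        //Run the supplied action, e.g. closing the persistent channel
        if (m_action != null)
            m_action();
    }

    private readonly Action m_action;
}

public static class Disposable
{
    //A disposable that does nothing, for implementations without a persistent channel
    public static readonly IDisposable Empty = new ActionDisposable(null);
}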

Next Steps

We are finally at the point where we have replaced the original service reference logic with a fully testable client that eliminates the dependency on WCF plumbing.  The resultant code is clean and can be mocked.  In the general case of a single call to WCF the caller does not need to do anything special.  For multiple calls the caller can optimize performance with a using statement and it won’t break any test implementations.  The only issue remaining is the amount of code we’d need to write for each WCF service.  The example client we created can be converted to a T4 template but it is going to require quite a bit of work.  However once we’re done we won’t have to revisit this code ever again.  We’ll pick up T4 next time.

A Smarter WCF Service Client, Part 3

In the last article we replaced the standard WCF service client with an improved version.  But beyond solving an issue with cleaning up the client we haven’t really improved the calling code any.  In this article we are going to look at one approach for making the calling code cleaner.  The less code needed to call something, the cleaner the calling code tends to be.  We will present one approach for making WCF calls one-liners.  This is not always the best solution so we will talk about the disadvantages as well.

ServiceClientFactory

ServiceClientFactory is a static class that wraps calls to WCF services.  Its sole purpose is to hide the boilerplate code involved in most service calls.  If you analyze a standard WCF call you will see the following pattern.

using (var client = new SomeServiceClientProxy())
{
    //Call a service method
};

Boilerplate code is ideal for encapsulation as it rarely changes, reduces the amount of code that has to be written and eliminates issues with copy/paste coding.  ServiceClientFactory wraps all this boilerplate code into a single method call making it easier to manage.

Here is the class boiled down to its core.  It really isn’t that complicated but it dramatically reduces the amount of code that has to be written by clients.

public static class ServiceClientFactory
{
    public static ServiceClientWrapper<TChannel> CreateAndWrap<TChannel> () where TChannel : class
    {
        return new ServiceClientWrapper<TChannel>();
    }

    public static ServiceClientWrapper<TChannel> CreateAndWrap<TChannel> ( Binding binding, EndpointAddress remoteAddress ) where TChannel : class
    {
        return new ServiceClientWrapper<TChannel>(binding, remoteAddress);
    }

    public static ServiceClientWrapper<TChannel> CreateAndWrap<TChannel> ( InstanceContext callbackInstance,
                                        Binding binding, EndpointAddress remoteAddress ) where TChannel : class
    {
        return new ServiceClientWrapper<TChannel>(callbackInstance, binding, remoteAddress);
    }

    public static ServiceClientWrapper<TChannel> CreateAndWrap<TChannel> ( InstanceContext callbackInstance,
                                        string endpointConfigurationName, EndpointAddress remoteAddress ) where TChannel : class
    {
        return new ServiceClientWrapper<TChannel>(callbackInstance, endpointConfigurationName, remoteAddress);
    }

    public static ServiceClientWrapper<TChannel> CreateAndWrap<TChannel> ( InstanceContext callbackInstance,
                                        string endpointConfigurationName, string remoteAddress ) where TChannel : class
    {
        return new ServiceClientWrapper<TChannel>(callbackInstance, endpointConfigurationName, remoteAddress);
    }

    public static void InvokeMethod<TChannel> ( Action<TChannel> invocation ) where TChannel : class
    {
        if (invocation == null)
            throw new ArgumentNullException("invocation");

        using (var proxy = CreateAndWrap<TChannel>())
        {
            invocation(proxy.Client);
        };
    }

    public static void InvokeMethod<TChannel> ( Action<TChannel> invocation, Func<ServiceClientWrapper<TChannel>> initializer
                                                ) where TChannel : class
    {
        if (invocation == null)
            throw new ArgumentNullException("invocation");

        Func<ServiceClientWrapper<TChannel>> init = initializer ?? (() => CreateAndWrap<TChannel>());

        using (var proxy = init())
        {
            invocation(proxy.Client);
        };
    }

    public static TResult InvokeMethod<TChannel, TResult> ( Func<TChannel, TResult> invocation ) where TChannel : class
    {
        if (invocation == null)
            throw new ArgumentNullException("invocation");

        using (var proxy = CreateAndWrap<TChannel>())
        {
            return invocation(proxy.Client);
        };
    }

    public static TResult InvokeMethod<TChannel, TResult> ( Func<TChannel, TResult> invocation, Func<ServiceClientWrapper<TChannel>> initializer
                                                            ) where TChannel : class
    {
        if (invocation == null)
            throw new ArgumentNullException("invocation");

        Func<ServiceClientWrapper<TChannel>> init = initializer ?? (() => CreateAndWrap<TChannel>());

        using (var proxy = init())
        {
            return invocation(proxy.Client);
        };
    }
}

Invoking Methods

The core function ServiceClientFactory provides is to call WCF methods.  With ServiceClientFactory it is a single line of code.  Internally the code creates an instance of the service client, using the ServiceClientWrapper from the last article, wrapped in a using statement.  It then invokes the action provided as a parameter, passing it the client proxy.  The client code from previous articles can be rewritten as follows.

ServiceClientFactory.InvokeMethod<ServiceReference1.IEmployeeService>(c => c.Update(dlg.Employee));

In a good SOA architecture you are taught that service calls should be broad, meaning a single service call should do all the work necessary to handle the request.  Unlike traditional object-oriented code, it is generally bad if you need to make multiple calls to a service in a single request.  Hence ServiceClientFactory is optimized for the common case where a request needs to call one and only one service method.  In the original version it took 4 lines of code to call one method.  Now it takes one.

Another benefit of this approach is that the action works with the service interface and not the proxy; this hides the actual proxy implementation from the client code because all the client sees is the interface.  If you need to do more work than just a single method call then use a multi-statement lambda.

ServiceClientFactory.InvokeMethod<ServiceInterface>(c =>
{
    //Call first method
    //Do some work
    //Call another method
});

Calling a method that returns a value is just as simple: add the return type as a generic parameter to the method.

private void ShowEmployeeDetails ( int id )
{
    var employee = ServiceClientFactory.InvokeMethod<ServiceReference1.IEmployeeService, Employee>(
                        c => c.Get(id)
        );

    var form = new EmployeeForm();
    form.Employee = employee;

    form.ShowDialog(this);
}

Proxy Generation

The other core function provided by ServiceClientFactory is to create the WCF service client using CreateAndWrap.  In the simplest case it just creates an instance of the wrapper but there are various overloads that accept endpoints, bindings and other parameters for the cases where the service is not defined declaratively in the config file.  In the rare cases where a client needs to create an explicit instance of the proxy it can do so using these methods.  The only time this becomes a burden is when you want to create a proxy in a non-standard way and invoke one of its methods.  For this special case there is an overload of InvokeMethod that accepts a function that returns the wrapper.

public static TResult InvokeMethod<TChannel, TResult> ( Func<TChannel, TResult> invocation, Func<ServiceClientWrapper<TChannel>> initializer
                                                        ) where TChannel : class
{
    if (invocation == null)
        throw new ArgumentNullException("invocation");

    Func<ServiceClientWrapper<TChannel>> init = initializer ?? (() => CreateAndWrap<TChannel>());

    using (var proxy = init())
    {
        return invocation(proxy.Client);
    };
}

Disadvantages

Seem simple?  It should because that is the point.  With very little code we can completely eliminate the boilerplate code that is written for something as simple as calling a WCF service.  By using our custom service client from last time we also resolve the issue with clean up.  And all this is mostly invisible to the actual client code.  This is a great first step in refactoring calls to WCF services but we are not done yet.  All this code is actually infrastructure code for the final solution that we are working toward.  As such it has some disadvantages if you choose to start using it right now.

Perhaps the biggest issue with the above code is the reliance on a static class.  Static classes cannot be easily mocked during testing so any code that uses the static class cannot be easily unit tested without the WCF services being available.  You can work around this by creating a virtual method that then calls the static class but you will end up with lots of virtual methods (probably one for each service method) just for unit testing purposes.
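
A sketch of that workaround might look like the following; EmployeeRepository is purely illustrative and not part of the article’s code.

public class EmployeeRepository
{
    //Virtual so a test double can override it and avoid the static class entirely
    public virtual IEnumerable<Employee> GetEmployees ()
    {
        return ServiceClientFactory.InvokeMethod<ServiceReference1.IEmployeeService, IEnumerable<Employee>>(
                    c => c.GetEmployees());
    }
}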

Another issue is that it tightly couples the code with the WCF service call.  It is clear by looking at the code you are making a WCF call which partially defeats the purpose of using WCF to begin with.  WCF is about contracts so that is what you should be using, yet the static class prevents code from using just the contract (at least outside the action parameter).

Yet another issue with the code revolves around the proxy creation.  WCF provides quite a few different options for creating a client in order to handle the various approaches.  This mostly has to be replicated in the factory class as well.  This leads to method overloading which can be confusing.  There is bound to be some case that the factory doesn’t handle but it is not easy to extend or change the static class without changing the actual source code.

Even with these limitations it is a great first step toward refactoring to the final solution.  If you can live with the disadvantages then this code can be used now.  But next time we’ll start working toward the final form that client code can use that fixes all these issues.

A Smarter WCF Service Client, Part 2

In the last article we made a case for why the standard WCF client generated by a service reference is not a great idea.  Among the issues listed were testability and proper cleanup.  To fix these issues we will need to ultimately replace service references altogether but this is time consuming for large code bases.  When we started down this path at my company we did it using an iterative approach.  While the intermediate stages may appear to be (and, in fact, may be) a step back we are ultimately working toward a solution that I believe is far better. 

Wrapping the Client

ClientBase<T> is the base class for service clients.  As mentioned previously this type does not properly handle error states when cleaning up the client.  The type implements IDisposable which in turn calls Close.  The implementation of Close does not look at the state of the channel other than to confirm it is not null.  In some cases an exception during a service call will set the entire channel to a faulted state.  When this occurs calling Close will throw an exception.  Instead it is necessary to call Abort.  We can handle this in our code by using a try-catch statement with the appropriate checks but that code is boilerplate and easy to forget.  A better solution is to fix the implementation.  Here is a simple class that does this wrapping.

public class ServiceClientWrapper<TChannel> : ClientBase<TChannel>, IDisposable
                    where TChannel: class
{
    public ServiceClientWrapper ()
    {
    }

    public ServiceClientWrapper ( string endpointConfigurationName ) : base(endpointConfigurationName)
    {
    }

    public TChannel Client
    {
        get { return Channel; }
    }

    public new void Close ()
    {
        ((IDisposable)this).Dispose();
    }

    protected virtual void Dispose ( bool disposing )
    {
        try
        {
            if (State != CommunicationState.Closed)
                base.Close();
        } catch (CommunicationException)
        {
            base.Abort();                
        } catch (TimeoutException)
        {
            base.Abort();
        } catch
        {
            base.Abort();
            throw;
        };
    }

    void IDisposable.Dispose ()
    {
        Dispose(true);

        GC.SuppressFinalize(this);
    }
}

The above code just shows the core methods.  For consistency with ClientBase<T> the actual implementation includes additional constructors to allow for creating clients.  The wrapper is designed to replace existing code that uses ClientBase<T>.  Also note that it is not designed for use directly in code; it is a building block for a better solution.

The WCF interface is exposed via the Client property.  For now we will use this property whenever we need to call the WCF services.  By the end of this series this restriction will be gone so we will just accept it for now.

Another interesting point about the wrapper is that it implements IDisposable.  ClientBase<T> already implements this interface so why are we reimplementing it?  Because the base type doesn’t follow the correct approach to implementing the interface.  When implementing this interface a class is supposed to provide an overridable method that derived classes can implement, but this one does not.  In order to fix the cleanup code we have to reimplement the interface and the actual method.  But we will implement it properly, both in terms of derived types and in handling faulted channels.

Using the Wrapper

Now that we have the wrapper we can replace all instances of ClientBase<T> with it.  But the service reference code should not be modified because it is auto-generated.  We are not yet ready to fully replace the service reference code but we can, at least, replace any places where we create the service client with our new type.  Unfortunately we will lose some functionality for now.  Here are the original and updated versions of the application code.

//Original
using (var client = new ServiceReference1.EmployeeServiceClient())
{
    client.Update(dlg.Employee);
};

//New
using (var client = new ServiceClientWrapper<ServiceReference1.IEmployeeService>())
{
    client.Client.Update(dlg.Employee);
};

It is not much cleaner than before but it is only the first step in a larger solution.  Even with the new client there is still too much code in my opinion.  The best code is the code you don’t have to write and using statements are nothing more than infrastructure pushing its way into code.  Next time we will eliminate the need for even this much code.

A Smarter WCF Service Client, Part 1

WCF is a great way to implement service-based APIs but the standard approach to consuming a service leaves a lot to be desired.  In this series of articles I will discuss the approach that I’ve used in commercial applications to make consuming WCF services much cleaner and simpler.  We will start with the standard approach and identify its flaws.  From there we will work toward a solution that has minimal impact on consumers while providing maximal benefit.  The solution itself is large but not overly complex.  More importantly, once finished, you never need to worry about it again.

Before Getting Started

This series of articles has prerequisite knowledge.  If any of the following items are not familiar to you then read up on them before continuing.

Sample WCF Service

For purposes of demonstration we need a simple WCF service to call.  Since the focus is on the client our service will not do anything useful.  We will start with a simple service interface and expand as we get into more complicated scenarios.  The service interface (and any types needed by the interface) will be contained in a class library separate from the service implementation.  This is a traditional approach to service development.

[ServiceContract]
public interface IEmployeeService
{
    [OperationContract]
    Employee Get ( int id );

    [OperationContract]
    Employee Update ( Employee employee );

    [OperationContract]
    IEnumerable<Employee> GetEmployees ();
}

Caveat: The purpose of this series of articles is to demonstrate how to write a better service client.  The focus is not to demonstrate how to write proper service interfaces.  Therefore the interface will be created in a manner that best demonstrates the techniques being shown and not necessarily how you should write a production ready service interface.

The service implementation is not relevant for consumption so we will ignore it.

Service References

The standard approach to consuming a WCF service is using a service reference.  Visual Studio (VS) will generate the necessary service proxy to talk with the service along with any contract types.  On the surface this sounds great and is a common approach but it has many, many flaws.  Before we get there though here’s an example of how you might consume the service using the generated code.

private void LoadEmployees ( )
{
    using (var client = new ServiceReference1.EmployeeServiceClient())
    {
        dataGridView1.DataSource = client.GetEmployees();
    };
}

Problems with Service References

Caveat: I loathe service references.  At my company we forbid the use of service references because they introduce more problems than they solve.  We have a superior approach anyway.

There are many problems with service references in general. 

  • In order to add the reference the service must be running somewhere.  Often this is the developer’s machine during development.  This also means the service needs to be runnable before the client can even be started making parallel development harder.
  • A service reference generates lots of files (including .datasource files in some cases) that you might or might not need.  Even worse is that whenever the reference is updated all the files are refreshed as well.
  • The URL used to generate the reference is stored in the generated code (.svcmap).  If the reference is updated then the original URL is used.  This can cause problems if the original URL is old or refers to a developer’s machine.  Even worse is that changing the URL in order to regenerate the reference causes all the files to change even if there are no actual code changes involved.
  • Unless you check the appropriate options when setting up the service reference then all non-standard data types used by the service are recreated in each project.  This makes it difficult to share types between a service reference and underlying libraries.  Even though the types are the same the compiler sees them differently and will not allow you to convert from a reference-generated type to an assembly-provided type.  Note that the default for newer versions of VS is to reuse types in referenced assemblies but the references have to already exist and developers have to know not to turn this option off.

The generated code has problems as well.

  • The service client needs to be disposed when it is not needed anymore.  If you’re using interface-based programming then this starts to fall apart.  If you are using an IoC this is less of a concern. 
  • A using statement is not sufficient because the default client proxy will throw an exception when disposing itself under certain conditions.  The above code should actually be written like this.

    private void LoadEmployees ( )
    {
        ServiceReference1.EmployeeServiceClient client = null;
        try
        {
            client = new ServiceReference1.EmployeeServiceClient();
            dataGridView1.DataSource = client.GetEmployees();                
        } finally
        {
            try
            {
                if (client.State != CommunicationState.Closed)
                    client.Close();
            } catch
            {
                client.Abort();
            };
        };            
    }
  • While WCF uses interfaces to hide the implementation details, the generated code actually contains a different interface that happens to share the same name.  The interface is defined in the service reference code.  This makes using the original interface across projects more difficult.
  • Even worse is that some of the method signatures may be changed.  For example enumerables and collections get converted to arrays, by default.  Parameters can even be moved around.
  • In Visual Studio you can use Find All References to find all references to types and members.  But if you are using service references then FAR won’t detect them because, again, the service reference generates a new interface.
  • Any data contracts that are pulled over are modified as well.  They include properties that are not on the original object.  If any code uses these properties then they are now using infrastructure provided by WCF which makes unit testing harder.
  • Don’t even think about sharing data that was generated by a service reference in one project with any other project (even if it is the same service).  The types are different to the compiler regardless of their name.
  • The whole reason WCF uses interfaces is for abstraction but because of how service references work there is no easy way to abstract the actual usage of the service. 

Building a Smarter Client

Building a smarter client is not terribly hard but it does require quite a bit of work.  However once we’re done we won’t have to write this code again.  Furthermore we will have an easily extensible framework to add new features as needed.  Here is the rough outline of how we’re going to get to a smarter client.

  • Modify the existing code to isolate the service reference code.
  • Move all service contracts and data contracts to an assembly accessible by clients (using a shared folder, NuGet, etc).
  • Create a more robust service client wrapper that can handle the lifetime of the channel.
  • Create an example client implementation using the wrapper.
  • Modify client code to use the service contract directly and rely on the example client where needed.
  • Convert the example client into a T4 template that can be used to generate clients for any contract.
  • Extend the solution to provide support for optional functionality such as asynchronous calls and optimized client lifetime management.
Adding Dates to C#

In .NET the DateTime type represents both a date and a time.  .NET doesn’t really have a pure date type.  Ironically it does have a pure time type in the form of TimeSpan.  In general this isn’t an issue but when you really just need a date (e.g. the end of a grading period) then you have to be aware of the time.  For example when comparing 2 date/time values the time is included in the comparison even if it does not matter.  To eliminate time from the comparison you would need to set both times to the same value (e.g. 00:00:00).  Another example is conversion to a string.  A date/time value will include the time so you have to use a format string to eliminate it.  Yet another issue is that creating specific dates using DateTime isn’t exactly readable (e.g. July 4th, 2000 is new DateTime(2000, 7, 4)).
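
A quick illustration of the comparison problem:

//The deadline is really just a date but DateTime drags the time along
var due = new DateTime(2013, 12, 31);
var submitted = new DateTime(2013, 12, 31, 8, 30, 0);

bool onTime = submitted <= due;                   //false - the 8:30 AM time is part of the comparison
bool onTimeByDate = submitted.Date <= due.Date;   //true - only the date portions are compared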

In this post we’ll create a pure date type based upon code I have been using for a while.  When originally creating this type I had the following goals.

  • Unobtrusive – It should work interchangeably with DateTime.  It should also be immutable like DateTime.
  • Performant – There should be no noticeable performance difference between using it and a regular DateTime.
  • Flexible – It should behave like any other type which means it should be easy to add extension methods to handle business-specific date requirements.
  • Readable – In the rare cases where an absolute date is required (i.e. unit testing) then it should be readable.

Caveat 1: This new type will conflict with the existing Visual Basic Date keyword.  The VB keyword is backed by a DateTime value.

Caveat 2: This type is designed to work with American-style dates.  For other cultures some changes may be necessary to support the culture-specific requirements.

Introducing the Date Type

We will start with the base type definition.  Because performance is important and it is immutable we will make it a value type.  Here is the basic definition.

[Serializable]
public struct Date
{
    public Date ( DateTime value )
    {
        m_dt = value.Date;
    }

    public Date ( int year, int month, int day )
    {
        m_dt = new DateTime(year, month, day);
    }

    public int Day
    {
        get { return m_dt.Day; }
    }

    public int Month
    {
        get { return m_dt.Month; }
    }

    public int Year 
    {
        get { return m_dt.Year; }
    }

    private readonly DateTime m_dt;
}

The type is little more than a wrapper around a DateTime with a couple of properties to expose the date information.  An alternative approach to creating the type would be to use a simple integral value to represent the date.  This would reduce the size of the type but at significant expense.  Implementing add/subtract functionality would be harder as we would need to take into account month lengths and leap years.  We would also need to implement more complicated parsing and formatting.  Furthermore we lose the interoperability with DateTime during serialization/deserialization.  In my opinion the additional size overhead is an acceptable cost.  Yet another approach would be to store the number of days since a fixed date.  This would simplify the math but the other problems remain.

Date provides a couple of constructors to build a date either from its parts or an existing DateTime.  The type is also serializable so it can be used more easily.  One of the core assumptions used in the code is that the time is always zero.  Therefore whenever we receive a DateTime as a parameter we will always ensure we remove the time component.

Here’s how it might be used.

[TestMethod]
public void Ctor_FromDateTime ()
{
    var expected = new DateTime(2013, 12, 10);

    var actual = new Date(expected);

    actual.Day.Should().Be(expected.Day);
    actual.Month.Should().Be(expected.Month);
    actual.Year.Should().Be(expected.Year);
}

[TestMethod]
public void Ctor_FromParts ()
{
    int expectedDay = 10, expectedMonth = 12, expectedYear = 2013;

    var actual = new Date(expectedYear, expectedMonth, expectedDay);

    actual.Day.Should().Be(expectedDay);
    actual.Month.Should().Be(expectedMonth);
    actual.Year.Should().Be(expectedYear);
}

Moving Functionality to Date

Now that the base type is in place we can move date specific functionality from DateTime to the new type.  This will make Date more interchangeable with DateTime.

public static readonly Date MaxValue = new Date(DateTime.MaxValue);
public static readonly Date MinValue = new Date(DateTime.MinValue);
public static readonly Date None = new Date();

public DayOfWeek DayOfWeek
{
    get { return m_dt.DayOfWeek; }
}
public int DayOfYear
{
    get { return m_dt.DayOfYear; }
}

public bool IsLeapYear
{
    get { return DateTime.IsLeapYear(Year); }
}

public Date AddDays ( int value )
{
    return new Date(m_dt.AddDays(value));
}

public Date AddMonths ( int value )
{
    return new Date(m_dt.AddMonths(value));
}

public Date AddYears ( int value )
{
    return new Date(m_dt.AddYears(value));
}

Interoperability with DateTime

While we have a constructor that can accept a DateTime we should make it easier to convert to and from DateTime.  However there is one problem: converting from DateTime to Date loses data (the time).  Therefore we will make the conversion from Date to DateTime implicit but the conversion from DateTime to Date has to be explicit.

public static implicit operator DateTime ( Date value )
{
    return value.m_dt;
}

public static explicit operator Date ( DateTime value )
{
    return new Date(value);
}

public DateTime At ( TimeSpan time )
{
    return m_dt.Add(time);
}

public DateTime At ( int hours, int minutes, int seconds )
{
    return m_dt.Add(new TimeSpan(hours, minutes, seconds));
}

Here’s how you might use this functionality.

var someDt = new DateTime(2013, 10, 15);
var someDate = (Date)someDt;

DateTime anotherDt = someDate;
var withTime = someDate.At(12, 22, 45);

Because DateTime to Date is explicit we will add an extension method to DateTime to convert it to a date.

public static Date ToDate ( this DateTime source )
{
    return new Date(source);
}

//Usage
var target = new DateTime(2013, 5, 10, 12, 20, 30);

var actual = target.ToDate();

Implementing Equality

With the base type and interoperability with DateTime out of the way we can flesh out some standard value type functionality like IEquatable.  Equality for a date is simply confirming the date part matches.  Unlike DateTime the time portion is completely ignored.  For interoperability we will implement equality for both Date and DateTime.  When comparing a Date to a DateTime the time portion will be ignored.  Following standard rules for implementing equality we ultimately just need to implement the core Equals method.  The code sample contains the full implementation but here is the core method.

public override int GetHashCode ()
{
    return m_dt.GetHashCode();
}

public bool Equals ( Date other )
{
    return m_dt.Equals(other.m_dt);
}

public bool Equals ( DateTime other )
{
    return m_dt == other.Date;
}

Implementing Comparison

Comparing dates is incredibly useful, and simple, so we can implement IComparable as well.  As with equality we will implement it for both Date and DateTime.  The core methods are shown here.

public int CompareTo ( Date other )
{
    return m_dt.CompareTo(other.m_dt);
}

public int CompareTo ( DateTime other )
{
    return m_dt.CompareTo(other.Date);
}

Formatting and Parsing

One of the issues with trying to use DateTime for dates is that it will, by default, print out the time as well.  So for Date we will override this behavior and have it only print out the date portion.  Additionally we will implement IFormattable so that a developer can customize the behavior.  We could go out of our way and define a custom formatter along with format specifiers but dates are complex and we don’t really want to reinvent the wheel.  Therefore we will simply defer to the underlying DateTime.  This means that a developer could specify a time format if they wanted.  But this is a reasonable tradeoff given the complexity of formatting.

public string ToLongDateString ()
{
    return m_dt.ToLongDateString();
}

public string ToShortDateString ()
{
    return m_dt.ToShortDateString();
}

public override string ToString ()
{
    return ToLongDateString();
}

public string ToString ( string format )
{
    return m_dt.ToString(format);
}

public string ToString ( string format, IFormatProvider formatProvider )
{
    return m_dt.ToString(format, formatProvider);
}

Parsing is a little harder simply because parsing a date requires understanding the various formats.  As with formatting we will simply defer to DateTime and truncate any time that comes back.  The core implementation is shown below.

public static Date Parse ( string value )
{
    return new Date(DateTime.Parse(value));
}

public static Date ParseExact ( string value, string format, IFormatProvider formatProvider )
{
    return new Date(DateTime.ParseExact(value, format, formatProvider));
}

Here is how it would be used.

var target = "10/20/2012";

var actual = Date.Parse(target);

Creating Dates Fluently

One issue that I have with DateTime is that it is not clear, when creating values, what the actual date is.  I much prefer a fluent interface.  For Date we can create a fluent interface that allows us to create dates in a more readable format.  This is mostly useful for unit tests where fixed dates are useful but it can be used anywhere.   Here’s how it might be used.

var left = new Date(2010, 5, 1);
var right = new DateTime(2010, 5, 31);

var target = Dates.May(15, 2010);

var actual = target.IsBetween(left, right);

actual.Should().BeTrue();

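Neither Dates nor IsBetween is shown in the article (both are in the attached code).  Here is a guess at their shape based only on the usage above; the class names and signatures are assumptions.

public static class Dates
{
    //One method per month; only May is shown here
    public static Date May ( int day, int year )
    {
        return new Date(year, 5, day);
    }
}

public static class DateExtensions
{
    public static bool IsBetween ( this Date source, DateTime start, DateTime end )
    {
        //Date converts implicitly to DateTime and CompareTo(DateTime) ignores the time portion
        return source.CompareTo(start) >= 0 && source.CompareTo(end) <= 0;
    }
}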
This is far more readable than the constructor.  While we could add all the various functionality to the Date type I feel that the core type is already large enough.  Furthermore the usefulness of defining exact dates in non-testing code is limited so it is a clean separation to have the fluent functionality in its own type.  Dates exposes a method for each month of the year.  For each month you can specify a day and year.  This, of course, follows the American approach to dates but you could extend it to use other styles as well.  For example if you prefer the European approach where day precedes month you could do this.

public static class EuropeanDates
{
    public static Date January ( this int source, int year )
    {
        return new Date(year, 1, source);
    }
}

//Usage
return 10.January(2013);

There are other methods in the type as well that allow for the creation of partial dates.  This is useful in a few cases where the year or day is not yet available.  To support this some helper types (ending in -Part) have been added to represent partial dates.  They are not designed for general use but as part of creating a full date.

var target = Dates.January()
                    .OfYear(2013)
                    .Day(10);

target.Day.Should().Be(10);
target.Month.Should().Be(1);
target.Year.Should().Be(2013);

One interesting benefit of this is that extension methods can be created to build specific dates from partial information.  For example you can define an extension method that returns the first weekend of a month and year.

public static Date GetFirstWeekend ( this MonthYearPart source )
{
    switch (source.FirstDayOfMonth().DayOfWeek)
    {                
        case DayOfWeek.Monday: return source.Day(6);
        case DayOfWeek.Tuesday: return source.Day(5);
        case DayOfWeek.Wednesday: return source.Day(4);
        case DayOfWeek.Thursday: return source.Day(3);
        case DayOfWeek.Friday: return source.Day(2);
    };

    return source.Day(1);
}

There is also a Months type that represents the standard months in an (American) year.  It is interesting to note that a standard static class with constant integral members is used in lieu of an enumeration to eliminate the need for a cast when used with the standard Date methods.
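
The Months type is not shown in the article but, from the description, it is presumably just a set of constants:

public static class Months
{
    public const int January = 1;
    public const int February = 2;
    public const int March = 3;
    public const int April = 4;
    public const int May = 5;
    public const int June = 6;
    public const int July = 7;
    public const int August = 8;
    public const int September = 9;
    public const int October = 10;
    public const int November = 11;
    public const int December = 12;
}

//Usage - no cast needed because the constants are already ints
var independenceDay = new Date(2000, Months.July, 4);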

Extending the Type

One of the key goals was to make the type extensible.  This allows for business and application extensions to be added easily.  For example we could add an extension that determines if a Date is on a weekend like this.

public static bool IsWeekend ( this Date source )
{
    return source.DayOfWeek == DayOfWeek.Saturday || source.DayOfWeek == DayOfWeek.Sunday;
}

A few extension methods have already been added for the core type.  Here’s a summary.

  • FromDayOfYear – Static method to create dates from the day of the year
  • IsLeapDay – Determines if the date is a leap day
  • IsSet – Determines if the date is set
  • AddWeeks – Adds a number of weeks to the date
  • Difference – Determines the difference between 2 Dates or DateTimes
  • DifferenceAbsolute – Determines the absolute difference between 2 Dates or DateTimes
  • FirstDayOfMonth/LastDayOfMonth
  • Yesterday/Tomorrow
  • LastMonth/NextMonth
  • IsBetween – Various methods to determine ordering of dates

Finally, date ranges are very common in programs (start/end of pay periods, reports, etc).  Included is a simple DateRange type that can be used to represent a date range using the Date type. 
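
DateRange is also only described, not shown.  A minimal sketch of how it might look (the real implementation is in the attached code and may differ):

public struct DateRange
{
    public DateRange ( Date start, Date end )
    {
        m_start = start;
        m_end = end;
    }

    public Date Start
    {
        get { return m_start; }
    }

    public Date End
    {
        get { return m_end; }
    }

    public bool Contains ( Date value )
    {
        //Inclusive on both ends
        return value.CompareTo(m_start) >= 0 && value.CompareTo(m_end) <= 0;
    }

    private readonly Date m_start;
    private readonly Date m_end;
}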

Attachment: P3NetDates.zip
Posted Mon, Dec 23 2013 by Michael Taylor
Entity Framework 6 Conventions

I was incredibly excited when conventions were finally made public in EF6.  A convention allows you to set up a policy that a model will follow.  For example EF comes with a convention that table names are plural while entity names are singular.  EF has supported conventions for a while but the necessary public interface was not exposed until EF6.  In this post I’m going to walk through creating a simple convention.

Data Annotations

A data annotation is an attribute that you can apply to a model property to control how it is treated.  EF ships with several data annotations including ColumnAttribute, KeyAttribute and StringLengthAttribute.  Data annotations are used in many places in the framework including validation and binding.  In fact the above annotations are actually part of the core framework and not EF.  Annotations make it simple to add information to a model. 

Unfortunately EF will ignore annotations it does not recognize.  In some cases you can derive from an existing annotation and EF will work.  For example it is common to use type aliases in SQL for commonly used column types.  If a type alias is defined to limit a string to a specific length then StringLengthAttribute can be used to create a new data annotation to represent that. 

The following code defines a maximum string length of 50 to match a type alias in SQL.  Since it derives from an annotation that EF recognizes it will be used.

public class NameDefinition : StringLengthAttribute
{
    public NameDefinition () : base(50)
    {  }
}

Here is how it might be used in a model.

[Column("RoleName")]
[NameDefinition]
public string Name { get; set; }

Data Configurations

When data annotations will not work you have to resort to a data configuration type that is registered with the model.  The configuration type can modify the model even more than the data annotations but it does require an extra type and that you register it.

For example EF assumes strings are Unicode by default.  In most databases this is recommended but there are cases where it is not true.  There is no data annotation that identifies the character set of a string so a data configuration has to be used.  Here is how you might define an ANSI string.

internal class UserConfiguration : EntityTypeConfiguration<User>
{
    public UserConfiguration ()
    {
        Property(x => x.Name).IsUnicode(false);
    }
}

Just defining the type is not sufficient because EF doesn’t know about it.  Prior to EF 6 you would have had to override the model creation of the context and add the configuration like so.

protected override void OnModelCreating ( DbModelBuilder modelBuilder )
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.Configurations.Add(new UserConfiguration());           
}

This was a pain for maintenance so many developers created a helper method to find all such types using reflection.  Fortunately EF6 added such a method.

protected override void OnModelCreating ( DbModelBuilder modelBuilder )
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.Configurations.AddFromAssembly(this.GetType().Assembly);
}

The above code will load all the data configurations in the given assembly.

Conventions

As the number of custom annotations and configurations grow the above solutions start to become problematic.  Ideally we want to be able to define a custom annotation and then have that annotation applied to a model at model creation.  It would be nice if we could derive from some base “model annotation” attribute and EF would recognize it, but this isn’t possible with the existing annotations.  EF does, however, support the concept of user-defined conventions. 

A user-defined convention is a rule that you can define for a model or portions thereof.  EF already has a couple but starting with EF6 we can define our own.  Going back to the ANSI string of earlier it might be necessary to make all strings of a model ANSI.  Rather than adding a configuration for each model that has string properties we can define a convention that sets all strings to ANSI.

public class AnsiStringConvention : Convention
{
    public AnsiStringConvention ()
    {
        Properties<string>().Configure(c => c.IsUnicode(false));
    }
}

Registering the convention is as simple as this.

protected override void OnModelCreating ( DbModelBuilder modelBuilder )
{
    base.OnModelCreating(modelBuilder);

    //Set up conventions
    modelBuilder.Conventions.Add(new Conventions.AnsiStringConvention());            

    //Register configurations
    //modelBuilder.Configurations.AddFromAssembly(this.GetType().Assembly);
}

For one-off conventions you can actually just apply the convention at model creation rather than creating a custom type but I find the custom type to be cleaner.
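As a point of reference, here is roughly what the one-off, inline approach looks like using EF6’s lightweight convention API (a sketch with the same effect as the AnsiStringConvention type above).

protected override void OnModelCreating ( DbModelBuilder modelBuilder )
{
    base.OnModelCreating(modelBuilder);

    //One-off convention applied inline rather than via a custom Convention type
    modelBuilder.Properties<string>().Configure(c => c.IsUnicode(false));
}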

Custom Annotations

Even with conventions there will be occasions when a convention needs to be violated.  For example maybe all the strings in a database are Unicode (or ANSI) but a couple are not.  It would be nice to be able to make the convention smarter to handle these exceptions.  Data annotations are a great way to handle this. 

Here is a simple data annotation that identifies a string as either Unicode (the default) or ANSI.

[AttributeUsage(AttributeTargets.Property, AllowMultiple=false, Inherited=true)]
public class UnicodeAttribute : Attribute
{
    public UnicodeAttribute ( ) : this(true)
    { }

    public UnicodeAttribute ( bool isUnicode )
    {
        IsUnicode = isUnicode;
    }

    public bool IsUnicode { get; private set; }
}

The attribute can now be applied to the properties of a model. 

[Unicode]
public string Name { get; set; }
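For the exceptional columns the attribute can be applied with an explicit value.  A hypothetical ANSI column (the property name is purely illustrative) might look like this.

[Unicode(false)]
public string LegacyCode { get; set; }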

By itself it does nothing so we need to create a new convention that detects this attribute and applies it accordingly. 

public class CharSetConvention : Convention
{
    public CharSetConvention ()
    {
        //For each string property with a Unicode attribute, set the property to use the attribute's value
        Properties<string>()
            .Having(p => p.GetCustomAttributes(false).OfType<UnicodeAttribute>())
            .Configure((c,a) =>
            {
                if (a.Any())
                    c.IsUnicode(a.First().IsUnicode);
            });
    }
}

This convention gets all string properties with the custom annotation.  It then applies the appropriate configuration to the property.

Ordering

One issue to be aware of with conventions is ordering.  Conventions may conflict and therefore the order in which they are applied is important.  EF follows a last convention wins approach.  The Conventions property allows you to order conventions relative to each other if needed.  With our new annotation and convention we can run into this issue.  The charset convention needs to be applied after the ANSI string convention.
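Here is a sketch of registering both conventions so the charset convention is applied after the ANSI string convention.  AddAfter is one way to make the ordering explicit; given the last-wins behavior, simply adding the charset convention second should work as well.

protected override void OnModelCreating ( DbModelBuilder modelBuilder )
{
    base.OnModelCreating(modelBuilder);

    //The ANSI string convention is added first...
    modelBuilder.Conventions.Add(new AnsiStringConvention());

    //...and the charset convention is explicitly ordered after it so it wins for
    //properties that carry the Unicode attribute
    modelBuilder.Conventions.AddAfter<AnsiStringConvention>(new CharSetConvention());
}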

Summary

Conventions are going to eliminate the need for data configurations in many cases.  Custom conventions will allow developers to create annotations for common database properties like type aliases and column defaults.  Conventions can be used to configure more than just model properties though.  Refer to MSDN for examples on how to use conventions to control other aspects of model creation as well. 

Language Friendly Type Names

.NET uses Type everywhere to represent type information.  Not surprisingly Type is language-agnostic.  In many cases it is useful to get the friendly (aka language-specific) name for a Type object.  .NET does not provide this easily.  There are several different approaches but none of them work really well if you want the name that would have been used in your favorite language.  This post will discuss some of the options available and then provide a more general solution to the problem that doesn’t actually require much effort.

Different Type Categories

Before discussing the options available it is useful to summarize the different categories of types.  Each category of types can potentially result in different syntax for the type name.  Furthermore many of the existing options do not work for all the categories.  Therefore we will define each of the categories of types and what kind of output we would expect.  For simplicity we will use C# as the example language but the same concept applies to other languages.

  • Primitives – This includes any type that is implicitly known by the language.  Most primitives have aliases in a language.  For example Int32 is aliased as int in C# and Integer in VB.
  • Arrays – This includes both single and multi-dimensional arrays (rectangular arrays).  It also includes arrays of arrays (jagged arrays) which may have a different syntax depending upon the language.  For example C# uses a different syntax for rectangular arrays than it does for jagged arrays.
  • Pointers – This includes pointers to any other category of types.  Some languages do not support pointers.
  • Closed generic types – This includes any generic type that has all type parameters specified with a type (i.e. List<int>).
  • Open generic types – This includes any generic type where one or more type parameters do not yet have a type (i.e. IList<T>).
  • Nested types – This includes any type nested inside another type (generally a class).
  • Nullable types – This includes the nullable version of any value type, primitives included.  Some languages (like C#) have a special syntax for nullables.
  • Simple – All other types, including normal value and reference types, require no special consideration.  The type name is language friendly.

It is important to remember that a full type can be in several different categories.  Building the friendly name requires the type to be broken down, in the correct order, into its individual categories.  For example List<int>[] is an array of closed generic types where the type parameter is a primitive.

Available Approaches

The following approaches are available in the framework currently.  Each has advantages and disadvantages.

  • Type.Name – The Type class has a property (actually two) to get the type name.  But the returned string is the formal framework name.  For example primitives are returned as the formal .NET type and generic types include the type parameter count suffix.  (See the example after this list.)
  • CodeDomProvider.GetTypeOutput – The CodeDOM provides a method to get the type name given a CodeTypeReference.  This is the currently recommended approach to this problem.  There are several issues with this approach though.  The CodeDOM is not lightweight especially when you need to create the provider.  The CodeDOM also requires a CodeTypeReference so the type has to be converted.  Here’s the code required to do this.

    var provider = new Microsoft.CSharp.CSharpCodeProvider();
    var typeRef = new System.CodeDom.CodeTypeReference(typeof(Int32));
    var actual = provider.GetTypeOutput(typeRef);

    Unto itself it is not too bad but if performance is important then the CodeDOM is going to hurt.  Even worse however is that the method returns back a fully-qualified type name for each type.  If you don’t want or need the full name then you’d have to parse out the resulting string.  It also does not handle nullable types.
  • TypeName – VB provides a helper function that can get a type name.  While it is a function for VB it is also callable by using Microsoft.VisualBasic.Information.TypeName.  Unfortunately it requires an instance of the type and not just the type name.
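To make the Type.Name limitation concrete, here is roughly what the raw framework names look like for a few types.

Console.WriteLine(typeof(int).Name);                     // Int32
Console.WriteLine(typeof(List<int>).Name);               // List`1
Console.WriteLine(typeof(int?).Name);                    // Nullable`1
Console.WriteLine(typeof(Dictionary<string, int>).Name); // Dictionary`2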

None of the above approaches really work well.  What we want is to be able to pass any type to a method and get back the string equivalent.  Since different languages use different syntax we will need to identify the language we want to use as well.  It should be very fast and handle all the categories mentioned earlier.  Since none of the above approaches are very good we will create our own and it is surprisingly easy once you’ve broken the problem down.

Type Name Provider

Converting a type to a string involves two separate components: processing and formatting.  During processing the type is taken apart to identify what category it is.  This can be recursive as more complex types, like arrays, are broken up into their subtypes.  Processing is generally the same regardless of the target language.  Formatting, on the other hand, requires a target language.  It involves converting the category to the language specific syntax. 

To keep things simple, but flexible, a simple abstract class called TypeNameProvider will contain the processing workflow.  Derived types can override the workflow if needed.  Each type category will have its own abstract method for formatting.  Derived types will provide the language-specific implementation.

public abstract class TypeNameProvider
{
    public string GetTypeName ( Type type )
    {
        if (type == null)
            throw new ArgumentNullException("type");

        return GetTypeNameCore(type);           
    }
    protected virtual string GetTypeNameCore ( Type type )
    {
        //Do processing of type
    }

    // Abstract format methods
}

For C# the language provider is called CSharpTypeNameProvider and simply derives from TypeNameProvider.

Processing – Simple Types

Simple types include any type not handled by any other category, including primitives.  During processing simple type formatting will be the default behavior if none of the other categories are found. 

protected virtual string GetTypeNameCore ( Type type )
{            
    return ProcessSimpleType(type);
}
protected virtual string ProcessSimpleType ( Type type )
{
    return FormatSimpleType(type);
}

protected virtual string FormatSimpleType ( Type type )
{
    return type.Name;
}

Formatting a simple type just returns the type name.  Primitives, which are language-specific, can be handled in this way as well.  For C# the primitive types are stored in a static dictionary along with their aliases.  A lookup is done on the simple type and the alias is returned, if found.

protected override string ProcessSimpleType ( Type type )
{
    string alias;
    if (s_aliasMappings.TryGetValue(type, out alias))
        return alias;

    return FormatSimpleType(type);
}
private static readonly Dictionary<Type, string> s_aliasMappings;
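The initialization of the dictionary isn’t shown above; a minimal sketch of the static constructor (only the common aliases listed) might look like this.

static CSharpTypeNameProvider ()
{
    //Map the framework primitives to their C# aliases (partial list)
    s_aliasMappings = new Dictionary<Type, string>()
    {
        { typeof(bool), "bool" }, { typeof(byte), "byte" }, { typeof(sbyte), "sbyte" },
        { typeof(char), "char" }, { typeof(decimal), "decimal" }, { typeof(double), "double" },
        { typeof(float), "float" }, { typeof(int), "int" }, { typeof(uint), "uint" },
        { typeof(long), "long" }, { typeof(ulong), "ulong" }, { typeof(short), "short" },
        { typeof(ushort), "ushort" }, { typeof(object), "object" }, { typeof(string), "string" },
        { typeof(void), "void" }
    };
}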

Processing – Generic Types

Generic types are a little harder to handle.  IsGenericType determines if a type is a generic type (open or closed).  IsGenericTypeDefinition is true if the type is open or false if it is closed.  Combining these calls will identify a closed generic type that can be processed.

if (type.IsGenericType && !type.IsGenericTypeDefinition)
    return ProcessClosedGenericType(type);

To process the type the base type needs to be extracted along with each of the type arguments.  The information will then be passed to the format method for final processing.

protected virtual string ProcessClosedGenericType ( Type type )
{
    var baseType = type.GetGenericTypeDefinition();

    var typeArgs = type.GetGenericArguments();

    return FormatClosedGenericType(baseType, typeArgs);
}

The C# implementation of the formatting is implemented like this.

protected override string FormatClosedGenericType(Type baseType, Type[] typeArguments)
{        
    var argStrings = String.Join(", ", from a in typeArguments select GetTypeName(a));

    //Format => Type<arg1, arg2, ...>
    return String.Format("{0}<{1}>", RemoveTrailingGenericSuffix(GetTypeName(baseType)), argStrings);
}
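RemoveTrailingGenericSuffix isn’t shown above; a minimal sketch that strips the arity suffix (`1, `2, ...) from the framework type name might look like this.

private static string RemoveTrailingGenericSuffix ( string name )
{
    //Generic type names end with a backtick followed by the number of type parameters
    var index = name.IndexOf('`');

    return index < 0 ? name : name.Substring(0, index);
}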

Processing – Nullable Types

Now that generic types are out of the way handling nullable types is simply a matter of special casing the base type.  This could be handled by each language provider but since several languages provide a special syntax we will handle the logic in the base class’s generic type processing.

if (baseType.IsValueType && baseType.Name == "Nullable`1")
    return FormatNullableType(typeArgs[0]);

The C# implementation looks like this.

protected override string FormatNullableType ( Type type )
{
    //Format => Type?   
    return GetTypeName(type) + "?";
}

Processing – Arrays

Arrays are complicated by the fact that they can be single-dimensional, multi-dimensional or jagged.  Arrays need to be processed before most other types.

if (type.IsArray)
    return ProcessArrayType(type);

Processing an array requires separating the element type (which may be an array) from the array definition and then identifying the number of dimensions the array has. 

protected virtual string ProcessArrayType ( Type type )
{
    var elementType = type.GetElementType();
    var dimensions = type.GetArrayRank();

    return FormatArrayType(elementType, dimensions);
}

For C# a multi-dimensional array requires inserting a comma for each dimension past the first.  The implementation looks like this.

protected override string FormatArrayType ( Type elementType, int dimensions )
{
    //Format => Type[,,,]           
    return String.Format("{0}[{1}]", GetTypeName(elementType), new string(',', dimensions - 1));
}

Processing – Pointers, ByRef

Not all languages support a pointer but it is still a valid .NET type.  Additionally there is a special category for by ref types.  These need to be handled early in the processing because the modifier needs to be stripped so the base type can be processed. 

if (type.IsPointer)
    return ProcessPointerType(type);
if (type.IsByRef)
    return ProcessByRefType(type);

Processing either of these types requires getting the element type and then formatting it appropriately.

protected virtual string ProcessByRefType ( Type type )
{
    var refType = type.GetElementType();

    return FormatByRefType(refType);
}
protected virtual string ProcessPointerType ( Type type )
{
    var pointerType = type.GetElementType();

    return FormatPointerType(pointerType);
}

The C# implementation looks like this.

protected override string FormatByRefType ( Type elementType )
{
    //C# doesn't have a byref syntax for general types
    return GetTypeName(elementType);
}
protected override string FormatPointerType ( Type elementType )
{
    //Format => Type*
    return GetTypeName(elementType) + "*";
}
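Putting the pieces together, the overall workflow in the base GetTypeNameCore might look like this (a sketch assembled from the checks above; nullable handling happens inside the generic type processing as described earlier).

protected virtual string GetTypeNameCore ( Type type )
{
    //Pointer and by ref modifiers need to be stripped first
    if (type.IsPointer)
        return ProcessPointerType(type);
    if (type.IsByRef)
        return ProcessByRefType(type);

    //Arrays are handled before most other types
    if (type.IsArray)
        return ProcessArrayType(type);

    //Closed generic types (nullable types are special cased inside)
    if (type.IsGenericType && !type.IsGenericTypeDefinition)
        return ProcessClosedGenericType(type);

    //Everything else is a simple type
    return ProcessSimpleType(type);
}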

Processing – Other Categories

My ultimate goal for originally writing this code was to be able to generate cleaner T4 template code.  Therefore the code was not written to handle categories of types that wouldn’t appear in a template.  The following types are not supported but the code could be modified to support them rather easily if desired.

Nested types – The general guideline is that a nested type is not publicly available.  As such exposing a public, nested type in a T4 template does not make a lot of sense and is not supported.  A nested type will be displayed as ParentType+NestedType.  It would be relatively easy to handle a nested type by breaking up the parent from the child and replacing the plus sign with the separator for the language.

Open generic types – As of yet it has not been necessary for me to generate an open type in T4 and therefore the code does not support this category.  To support an open generic type it is necessary to modify the generic type code a little.  If the type is generic but it is a generic type definition then instead of retrieving the type arguments the type parameters would be used instead.  Keep in mind that an open type may have a mix of arguments and parameters.  Where it would get more difficult is with constraints.  Each constraint would have to be generated as well.  For each constraint it could be a constraint keyword (depending upon the language) or a type requirement.

Namespaces

The above code should now work with any type that might be found in the framework.  But there is one more scenario that ideally will be handled.  In a T4 template we cannot always assume that the namespace for a type is included.  It makes sense to allow an option to include the namespace on the type.  Because of the recursive nature of the provider we really only need to add the namespace in the FormatSimpleType method.

public bool IncludeNamespace { get; set; }

protected virtual string FormatSimpleType ( Type type )
{
    return IncludeNamespace ? type.FullName : type.Name;
}

Simplifying the Usage

We want this to be as easy as possible so it makes sense to create an extension method off of Type that returns the language friendly name.  Because I’m a C# developer the base extension will assume C# but overloads will be provided to allow for other providers.  Here’s the basic code.

public static class TypeExtensions
{
    public static string GetFriendlyName ( this Type source )
    {
        return GetFriendlyName(source, new CSharpTypeNameProvider());
    }

    public static string GetFriendlyName ( this Type source, bool includeNamespace )
    {
        return GetFriendlyName(source, new CSharpTypeNameProvider() { IncludeNamespace = includeNamespace });
    }

    public static string GetFriendlyName ( this Type source, TypeNameProvider provider )
    {
        if (provider == null)
            throw new ArgumentNullException("provider");

        return provider.GetTypeName(source);
    }
}
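As a quick sanity check, usage then boils down to a single call.  With the C# provider sketched above, a reasonably involved type should come out looking like the declaration you would write by hand.

var friendly = typeof(Dictionary<string, List<int?>>[]).GetFriendlyName();
//Expected (given the provider above): Dictionary<string, List<int?>>[]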

Enhancements

There are several enhancements that can be made to the code if desired.

  • A type provider is pretty static in its behavior.  Currently the namespace option is a property on the provider.  It might be better to move the option into a simple options type that can be passed to the provider as an argument.  This would make it easier to add other options later on.
  • Since the provider is static in its behavior it might be useful to expose a singleton instance that can be used rather than requiring a new instance to be created each time.
  • You could go further and expose “standard” providers off of a type (like StringComparer does).
  • Add support for nested types by replacing the plus sign with the parent type name.
  • Add support for open generic types by processing both the type arguments and the type parameters along with the constraints.
Attachment: P3NetTypeNames.zip
Posted Sun, Sep 29 2013 by Michael Taylor | no comments
Entity Framework and User Context

Auditing is generally important in most databases because it is important to know who changed data and when.  How auditing data is stored depends upon the system requirements but in general the date/time and user who made a change is important.  SQL Server already provides the infrastructure to identify the who and what.  Setting up EF to provide this information is straightforward once you know how EF works.  In this post I’ll illustrate a simple approach we’ve been using in web applications for over a year with no issues and very little effort.  

Database Structure

The focus of this article is on EF and the user context so we will use a simple auditing approach where only the create and last modified information is saved.  It would be straightforward to expand this to store all changes in an auditing table.  For our purposes we’ll assume a simple table of products where the audit columns are stored directly on the table. 

CREATE TABLE [dbo].[Products]
(
    [Id] INT NOT NULL PRIMARY KEY IDENTITY, 
    [Name] VARCHAR(100) NOT NULL, 
    [Price] MONEY NOT NULL, 
    [CreateDate] DATETIME NOT NULL, 
    [CreateUser] VARCHAR(128) NOT NULL, 
    [LastModifiedDate] DATETIME NULL, 
    [LastModifiedUser] VARCHAR(128) NULL
)

The CreateDate and CreateUser columns are set when an insert occurs via default values.  The LastModifiedDate and LastModifiedUser columns are set when an update occurs via a trigger.  The dates are set to the current UTC date.  The user will be set based upon some rules discussed next.

Getting User Context

In SQL the typical way to get the user context (which includes the user’s name) is to use SUSER_NAME() or equivalent.  This returns back the user name of the user associated with the current SQL session.  This information is generally passed as part of the connection string.  If your application is using Integrated Security (Windows authentication) and the application is running on a user’s desktop then this will work just fine. 

For external web applications this does not work well for the following reasons.

  • Each user of the application would need their own Windows login
  • Each user would also need a SQL CAL or the database would need to have per-server licensing.
  • Scalability would suffer because each user of the application would get their own connection to the database

Even for internal web applications a shared database account is often used for licensing and scalability reasons.  The result is that getting the current user would return the shared account rather than the actual user name.  For web applications the current SQL session is not sufficient.

Another approach that is often used is to pass parameters to the database during updates (generally stored procedures) that include the user information.  While this is certainly easy to do there are some problems with this approach as well.

  • Auditing data is mixed in with the functional data
  • Each sproc would need to handle auditing rather than centralizing it in a trigger or other mechanism
  • Clients are responsible for passing the correct data which is error prone if it is being done all over the application
  • Maintenance is harder because the auditing is riddled throughout the application and database

Fortunately SQL has a better approach.  CONTEXT_INFO is a small value (up to 128 bytes) that can be associated with a session.  Each session gets its own value and the value is completely up to the database and application to manage.  The application is responsible for setting the value each time it connects to the database and the database can then use whatever value was stored.  To simplify things a stored procedure can be used to set the value.

CREATE PROCEDURE [dbo].[SetUserContext]
    @userName VARCHAR(128)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @context VARBINARY(128)
    SET @context = CONVERT(BINARY(128), @userName)

    SET CONTEXT_INFO @context
END

To use the context value simply call CONTEXT_INFO().  It might be a good idea to fall back to SUSER_NAME if it is not available.  For simplicity this can be wrapped up in a user-defined function that can be used both as the default value for the create column and for the last modified value in the trigger.

CREATE FUNCTION [dbo].[GetUserContext] ()
RETURNS VARCHAR(128)
AS
BEGIN
    RETURN COALESCE(CONVERT(VARCHAR(128), CONTEXT_INFO()), SUSER_NAME())
END

To set the create column values, default values need to be added to the columns.

[CreateDate] DATETIME NOT NULL DEFAULT getutcdate() , 
[CreateUser] VARCHAR(128) NOT NULL DEFAULT dbo.GetUserContext(), 

An update trigger needs to be set up to use the context information for the modified columns. 

CREATE TRIGGER [dbo].[trg_Products_Update]
    ON [dbo].[Products] AFTER UPDATE
AS BEGIN
    SET NOCOUNT ON

    UPDATE Products
    SET
        LastModifiedUser = dbo.GetUserContext(),
        LastModifiedDate = GETUTCDATE()
    FROM Products p INNER JOIN inserted i ON p.Id = i.Id
END

EF Model

Now that the database is set up we can define a simple POCO to support it.  It is important to note that the audit columns need to be marked as database generated so the application will not save any changes made to them.

[Table("Products")]
public class Product
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    [Required]
    [StringLength(100)]        
    public string Name { get; set; }

    [Required]
    public decimal Price { get; set; }
                
    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public string CreateUser { get; set; }

    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public DateTime CreateDate { get; set; }

    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public string LastModifiedUser { get; set; }

    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public DateTime? LastModifiedDate { get; set; }
}

The corresponding context would look like this.

public class SampleDbContext : DbContext
{
    public SampleDbContext ( string connectionString ) : base(connectionString)
    {
    }

    public DbSet<Product> Products { get; set; }

    public string UserName { get; set; }
}

Notice the property that was added for user name.  Finally some sample code to insert a record.

using (var ctx = CreateContext())
{                
    var item = new Product() { Name = "Router", Price = 200 };
                
    ctx.Products.Add(item);
    ctx.SaveChanges();
};

Setting the Context in EF

Calling a stored procedure in EF is not too difficult.  But there is a problem with how EF works.  Calling a stored procedure using ExecuteSqlCommand opens a connection to the database, runs the command and then closes the connection.  When EF saves changes inside SaveChanges it will open a new connection to the database.  As mentioned earlier the context is per session.  Since EF will not be using the same session for each call we have to set the user context using the same connection that the changes will use.  (Note: I believe versions of EF after 5 may provide better support for opening a connection early)

Fortunately EF is pretty smart so it is possible to open the connection explicitly inside SaveChanges and EF will not open a new connection.  However ExecuteSqlCommand will still close the connection when it is done so we cannot use that.  Instead we have to manually call the stored procedure after opening the connection and before performing the actual update.  The user name will come from the property we added earlier.

public override int SaveChanges()
{           
    SetUserContext();

    return base.SaveChanges();
}

private void SetUserContext ()
{
    if (String.IsNullOrWhiteSpace(UserName))
        return;

    //Open a connection to the database so the session is set up
    this.Database.Connection.Open();

    //Set the user context
    //Cannot use ExecuteSqlCommand here as it will close the connection
    using (var cmd = this.Database.Connection.CreateCommand())
    {
        var parm = cmd.CreateParameter();
        parm.ParameterName = "@userName";
        parm.Value = UserName;

        cmd.CommandText = "SetUserContext";
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.Add(parm);

        cmd.ExecuteNonQuery();
    };
}

Now whenever changes are saved in EF we will set the user context prior to saving any changes.  The audit triggers will get the user information set through the EF context and set the audit columns appropriately.  The only real question now is where to set the user name information.  For most applications this will likely be done when the context is created through whatever factory is being used (often an IoC).  The user information will likely come from the thread’s identity information but it can be set to anything. 

static SampleDbContext CreateContext ()
{
    var ctx = new SampleDbContext("SampleDb");

    //Get from thread identity or something
    ctx.UserName = "Bob";

    return ctx;
}

Drawbacks

One drawback to this approach is it only works for model changes.  If your application modifies data using stored procedures that do not involve a call to SaveChanges then the context will not be set.  One workaround would be to create a general method in the context that executes stored procedures but also sets the user context first.  The same rule applies though that the database connection has to remain open for both calls in order to be effective.
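A minimal sketch of such a helper on SampleDbContext might look like this (the method name is illustrative).  It reuses the SetUserContext method shown earlier and then runs the command on the same open connection.

public int ExecuteStoredProcedureWithUserContext ( string procedureName )
{
    //SetUserContext opens the connection (when a user name is set) so the
    //stored procedure below runs on the same session
    SetUserContext();

    if (this.Database.Connection.State != System.Data.ConnectionState.Open)
        this.Database.Connection.Open();

    using (var cmd = this.Database.Connection.CreateCommand())
    {
        cmd.CommandText = procedureName;
        cmd.CommandType = System.Data.CommandType.StoredProcedure;

        return cmd.ExecuteNonQuery();
    };
}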

Another drawback is that this specific implementation is SQL Server specific.  Of course the same concept can be applied to other databases as well but the code will likely have to be modified accordingly.

Yet another issue to be aware of is that the context is limited to 128 bytes.  A user name in SQL can be up to 128 Unicode characters.  Therefore if you are using Unicode user names then the full name may not fit.

Environmental Transforms/AppSettings Transforms for Visual Studio 2013 Preview

In a recent series of articles I discussed how to create an environmental transform template that could be run on each build.  I also posted a series of articles on how to create a template that generates a strongly typed class to back the appSettings in a configuration file.  Alas shortly thereafter VS2013 Preview was released and changes to VS have broken the code.  This post will discuss the minor changes that need to be made to get everything to work.

Shared Targets

The first problem is with the shared .targets file.  Within the file an inline task is used to run the environmental transforms during a build.  Because of an issue with MSBuild it has to dynamically load the necessary assembly to do the transform and, as such, uses the dynamic keyword to keep things simple.  MSBuild v12 (the version shipping with VS2013) has a “bug” that causes the dynamic binding to fail.  I reported this to Microsoft on the Connect site.  Microsoft identified it as a bug but said it was unlikely to be fixed before release.  Fortunately though they provided a workaround that I’ll discuss next.

For VS2013 you can continue to use the same shared targets file but you will need to make an adjustment to the inline task.  MSBuild has a problem (I don’t really understand why) with inline tasks that use the AssemblyFile attribute to identify the assembly while also using the dynamic keyword.  Fortunately the solution is to simply switch to the AssemblyName attribute instead.  The assembly name needs to be fully qualified but otherwise it is a simple change.

<UsingTask TaskName="TransformXmlFiles" 
    TaskFactory="CodeTaskFactory"
    AssemblyName="Microsoft.Build.Tasks.v4.0,Version=4.0.0.0,PublicKeyToken=b03f5f7f11d50a3a,Culture=Neutral">

After this change is made the .targets file will now work with VS 2012 and VS 2013 Preview. 

VSIX Installation Target

The documentation for defining versioning ranges for the Visual Studio installation target is wrong as of VS 2013 Preview.  I reported this to Microsoft via Connect and they confirmed they would be updating it.  The first issue is that the documentation states that if you specify just a version (i.e. 11.0) then that identifies a minimal version with no maximal.  However, as of VS 2013 Preview, this is no longer true.  Instead a single version indicates that the extension will only work with that version of VS.  Thus VS 2012 extensions will not show up for VS 2013.  To fix that we need to use a version range.

This brings up the second set of errors in the documentation.  The documentation states that a square bracket ([) indicates a maximum inclusive version but then it mixes inclusive and exclusive with parentheses.  After some testing I’ve found that square brackets should be used for minimum/maximum inclusive ranges.  So to support VS 2012 and VS 2013 you should use the version range of [11.0,12.0].  Updating the VSIX manifest accordingly allows the extension to be installed in either version.

T4 Toolbox

The next problem is that the T4 templates rely on the T4 Toolbox which hasn’t been updated yet.  Without that extension our custom extension will not install.  To work around the issue we need to temporarily modify the VSIX package for T4 Toolbox so it will install on VS 2013.

  1. Download T4 Toolbox from the Visual Studio Gallery
  2. Open the VSIX file (it is a zip file).
  3. Extract the extension.vsixmanifest file.
  4. Open the file in a text editor.
  5. Change the line that identifies the installation target to use the version range mentioned earlier, [11.0,12.0].
  6. Save and close the file.
  7. Add the file back to the VSIX file.
  8. Run the installer and it should install for VS 2013 Preview.

Note that this works for any VSIX file but ideally you should wait for the author to verify their extension before installing it.

Nested Projects

This is not actually a problem with VS 2013 but rather an implementation issue with the code we use to find projects.  Here’s the relevant piece of code.

foreach (EnvDTE.Project project in DteInstance.Solution.Projects)
{
   if (String.Compare(project.Name, projectName, true) == 0)
      return project;
};

Unfortunately this code will only return root level projects.  Projects that are contained in solution folders (yes it happens) will not be seen as only the root items are returned.  To fix that we will need to modify the code to enumerate solution folders as well.  Here is the updated code that will now find projects in subfolders.

foreach (var project in GetAllProjects(source, false))
{
   if (String.Compare(project.Name, projectName, true) == 0)
      return project;
};

public static IEnumerable<EnvDTE.Project> GetAllProjects ( this EnvDTE.DTE source, bool includeFolders )
{
   foreach (EnvDTE.Project project in source.Solution.Projects)
   {
      var isFolder = IsProjectFolder(project);

      if (isFolder)
      {
         foreach (var item in GetProjectsCore(project, includeFolders))
            yield return item;
      };

      if (!isFolder || includeFolders)
         yield return project;
   };
}

private static IEnumerable<EnvDTE.Project> GetProjectsCore ( EnvDTE.Project project, bool includeFolders )
{
   foreach (EnvDTE.ProjectItem item in project.ProjectItems)
   {
      if (item.SubProject != null)
      {
         var isFolder = IsProjectFolder(item.SubProject);
         if (isFolder)
         {
            foreach (var child in GetProjectsCore(item.SubProject, includeFolders))
               yield return child;
         };

         if (!isFolder || includeFolders)
            yield return item.SubProject;
      };
   };
}
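The IsProjectFolder helper isn’t shown above; a minimal sketch, assuming a reference to EnvDTE80, would compare the project kind against the well-known solution folder GUID.

private static bool IsProjectFolder ( EnvDTE.Project project )
{
   //Solution folders use a well-known project kind GUID
   return String.Compare(project.Kind, EnvDTE80.ProjectKinds.vsProjectKindSolutionFolder, true) == 0;
}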

Summary

Getting the transforms and code working with VS 2013 Preview was pretty straightforward although it would have been nice if they had just worked.  Nevertheless none of the changes were that bad.  I’ve attached an updated copy of the files from the earlier articles.

Visual Studio 2013 Preview

Note: This is strictly my opinion and in no way should be conveyed as the opinion of anyone else.

Now that VS2013 Preview is available I can provide some feedback about it.  Then I can provide my opinion about whether it is a mandatory upgrade or not.

General Thoughts

Honestly, there are no new features in VS2013 that are truly critical.  VS2013 is a minor update, in my opinion, to support Win 8.1 and .NET 4.5.1.  There are no new language features or changes to even blog about.  The exception is with WinRT apps.  As such don’t expect a VS2010 to VS2012 jump in performance, behavior or features.

WinRT Apps

Almost all the enhancements made in VS2013 are around WinRT apps.  If you write WinRT apps then you’ll be targeting Win 8.1.  Therefore VS2013 is pretty much going to be mandatory.  There are lots of changes to support the enhancements Win 8.1 is going to provide.  Along the same front there are lots of enhancements made to JavaScript and C++ to support WinRT apps. 

Languages

C++

The C++ team continues to make improvements in this release.  If you are using C++ then you’ll be glad to hear that C++ is more compliant with the latest standard.  There are also some useful enhancements that you’ll want to check out.  If you don’t use C++ then none of this will matter to you.

JavaScript

Given that JavaScript is a core language in WinRT it is no surprise that the JS editor got more love in this release.  I haven’t verified but I’ve heard that Go to Definition works correctly now and that Intellisense is working much better.  If you do a lot of web development then this should be great.

Debugger

The debugger team has added some new features as well.  The one feature I’m really excited about is viewing the return value.  Since the early days of C++ it has been possible to see the return value of a function even if the calling program didn’t capture it.  .NET has never had this feature, until now.  I cannot count the times I wanted to see the return value of a function but wasn’t capturing the result.  Therefore I had to temporarily modify my code to capture the results and then make sure I undid the changes before checking in the code.  With VS2013 method return values show up in the Autos window even if the value isn’t being explicitly captured.  Awesome!!!

Edit and Continue

Are you sitting down?  One of the most requested features in VS history is EnC support for 64-bit managed code.  MS has always said that this feature required both CLR and Windows changes to support and the demand wasn’t there yet.  Well starting with VS2013 EnC with 64-bit managed code is supported.  EnC is not trivial to implement and I haven’t seen it in action yet so I cannot confirm whether it is as solid as 32-bit EnC but this is at least a step forward.

Managed Crash Dumps

One of the challenging aspects of managed code is debugging a crash dump file.  Managed memory isn’t like native memory so finding objects and piecing together chains can be hard.  With VS2013 you can load a managed crash dump into VS and VS will be able to view the dump as a managed process such that you can see managed memory objects.  I haven’t been able to play around with this feature to see if it provides any real benefits but it sounds promising.

Roaming Settings

Going along with the roaming profile concept of Windows, VS2013 is getting on the bandwagon.  One of the biggest issues with setting up a dev machine is getting your VS configured the way you want.  With roaming settings this becomes less of an issue.  Provided you hook VS up to your MS account (an option when you start VS the first time) then VS will pull down your VS profile.  This allows you to share VS settings across machines.  In this first release it isn’t clear what all will be pulled other than theme and possibly window layouts but still this feature has the potential to grow into a truly useful tool. 

One thing that caught me off guard is the fact that the roaming settings appear to be tied to your online TFS account rather than your MS account.  This means you’ll have to set up a TFS account whether you want the rest of the services or not.  Even after setting all this up I didn’t see any options to configure settings for VS.  In reality you have to go to Tools\Options –> Environment\Synchronized Settings to specify whether roaming settings are enabled or not and what options you want to roam.  I like the fact that I have some choices; I just wish it was easier to get to.  At this point it seems disconnected from my account link.

User Interface

General Changes

When Win8 was released the mandate from MS was for the products to comply with the new UI features.  VS did that and the backlash during beta was intense.  All caps menus and lack of color really angered a lot of people.  MS added some color back before final release but were pretty adamant that the new UI was better.  After release the anger still hasn’t fully subsided but MS has learned a valuable lesson.  One of the big changes coming in VS2013 is adding more color back into the UI.  This will make it far easier to identify things like selected items and to distinguish icons.  There is still work to do but I’m glad the VS team listened and is retreating on their original decisions.  There are also some other UX treatments being made such as altering lines to make things look nicer overall.  Compared to VS2012, VS2013 has a much friendlier UI.

Notifications

There is now a notification icon next to the Quick Search control.  VS will now display notifications here rather than (or perhaps in addition to) toast notices on the Task Bar.  It is unclear yet whether extensions will be able to hook into this and whether extensions/updates will be displayed here as well.

Editor Changes

Outside the chrome stuff there are various tweaks and changes to allow more formatting of code.  Most of the changes aren’t compelling enough to warrant an upgrade but they are certainly nice to have.

One feature that is really nice is the new Code Indicator feature in the editor.  Above methods you will now see informational data such as the # of references to that method (think Find All References).  It is a link so if you click on it you’ll get to see the actual references.  Find All References is a heavily used feature in my world so this unto itself is worth its weight in gold. 

Code indicators also include other things like test results.  This is configurable from Tools\Options –> Text Editors\All Languages.  I was privileged to be able to play around with its predecessors in VS2012 and I can assure you that when I lost that functionality on the last VS update I really missed it.  This feature is incredibly useful.

Source Control

VS2012 and TFS2012 have had partial Git support for a while.  It is officially integrated in VS2013.  If you are a fan of Git then it’ll be there for you.

Another nice change is related to multiple source control providers.  Sometimes you want to use an in-house TFS server and other times you might want to use TFS Services.  Unfortunately this process isn’t streamlined.  The process of connecting to a local TFS server remains the same.  For TFS Services you’ll create a new TFS server connection as well.  But this triggers a login prompt to TFS Services even though I’m already hooked up via my MS account.  This same workflow also occurs when you are trying to edit your profile.  Hopefully this will be resolved before release.  Once you get past all that though you can now see each of your source control servers and their projects which makes it really easy to switch between them. 

Project Compatibility

When VS2012 came out one of the very big features was backwards compatibility with VS2010 SP1.  This is one of the reasons that VS2012 did so well.  It allowed for mixed development.  VS2013 makes this same guarantee but the link here indicates a slightly different story.  Some project types indicate VS2010 SP1 support and others don’t.  There are even duplicate entries for the same project type with different compatibility statements.  Hopefully MS will clear this up before release.  Right now it would be safe to assume VS2012 is the only compatible version.

Final Thoughts

The question boils down to whether VS2013 is a mandatory upgrade or not.  My initial thoughts were no unless you were doing WinRT work.  However given some critical new features like Code Indicators, 64-bit EnC and seeing return values of methods I’m now in the camp that VS2013 is going to be a good upgrade provided the upgrade price is reasonable.

One thing that has not been clarified yet is the upgrade cost from VS2012.  Will there be an upgrade price or will devs have to pay full price?  If there is no upgrade price then I don’t expect VS2013 to do well at all.  There simply aren’t any critical features that justify the high cost of VS.  Of course most devs either have MSDN or SA so the cost isn’t relevant but hobby developers or those who didn’t go the above route will be out of luck.  We’ll have to wait until MS publishes the price before we can decide.

Windows 8.1 Preview

Note: This is strictly my opinion and in no way should be conveyed as the opinion of anyone else.

Now that the Win 8.1 Preview is out I can give my personal opinion on it.  First we should discuss some of the new features and then whether it is a mandatory upgrade or not.  Note that I’m ignoring all the new features around corporate environments and Windows Store apps. 

Microsoft vs. Local Account

When Win8 released you had the option of a local vs. MS account.  An MS account is effectively your Windows Live account and allows Windows to hook up to all that information including your contacts, email, etc.  For a home user this can be really useful compared to a local account which is specific to your machine.  Since Win8 was the first release to support this it wasn’t that useful.  Now that Win8.1 is coming you can start to see some of the benefits.  For example my home machine is Win8 and I’m running Win8.1 in a VM.  Both are hooked up to my MS account.  (The preview only allows an MS account during installation but local accounts will be available upon release)  As I was playing around with some of the Win8.1 settings I noticed they were already configured the way I wanted (because it pulled them from my main machine).  Likewise I changed the background image in Win8.1 and my Win8 machine updated (eventually).  Some of this is pretty nice but I still haven’t bought into the roaming settings concept because not every setting is one I want on every machine.  I really wish this was configurable.  So, for now, if you want different settings between Win8/Win8.1 machines you should use different accounts.

Desktop

The first feature that is really nice is the ability to log directly into the desktop.  With Win8 you always logged into the Start screen.  For a tablet this is fine but for desktops the Start screen isn’t as useful.  In Win8.1 you can default to the desktop at login by going to Taskbar and Navigation Properties, selecting the Navigation tab and checking the option to go to the desktop.

Honestly, now that I’ve been using Win8 for a while I don’t mind the Start screen as much because almost all my common programs are there.  If MS would just allow me to pin anything to the Start screen I would be happy but there are some things that cannot be pinned there (shortcuts that look like URLs but aren’t).

While you’re on the Navigation tab you might also notice a new option called Replace Command Prompt with PowerShell.  This effectively makes PowerShell more accessible and is speeding up the end of the DOS prompt.

Start Button

This got a lot of publicity.  One of the biggest complaints about Win8 was that the Start button was removed.  Win8.1 adds it back.  This is sort of like a genie wish though; you need to be very careful about how you word it.  People didn’t actually miss the Start button.  They missed the Start Menu which appears when you click the Start button.  The Start button is back in Win8.1 but the Start Menu isn’t.

What MS has done is move some common functionality into the fixed menu.  You still won’t have the Start Menu from Win7 and prior.  What is not clear at this time is whether the button is pulling shortcuts from some location that can be manipulated such that it will be possible to add items to the menu. 

The new button does provide some useful functionality though such as easy access to PowerShell, Run command and shutdown commands.  One of the things that came up during the Win8 beta was no easy way to shut down the machine.  MS assumed, I can only imagine, that you would never shut down your machine so they made it a 4 step process.  Now you can easily shut down or restart your computer from the Start button.  I still don’t think this is going to make people happy.  Somebody will probably provide a hack to solve this so we’ll just have to wait.

Start Screen

The Start screen has a few interesting new features.  The first one that will be noticeable is that you can name the columns of icons you have.  This may be useful to help distinguish the various columns.  Personally I have only a couple of columns so I can keep them apart but if you fill up your Start screen with lots of icons this could be useful.

Another feature is the ability to resize the icons.  Depending upon the icon you can make it small, medium or large.  It appears that Win Store apps can be resized to all 3 sizes but regular program icons can only be small or medium.  The API documents that tiles can be 4 different sizes but only 3 show up on the screen.  I personally like how the Windows Phone allows you to resize an icon just by touching and dragging it.  This might be possible on a touch screen in Win8.1 as well.

Somewhat related is that more than one Win Store app can be on the screen at the same time.  This has been mentioned in several previews and in the documentation but it isn’t intuitive on how to do it.  What you do is open a Win Store app and then move your mouse to the top of the window until it turns into the hand icon.  Then hold the left mouse button and drag the window to either the left or right side of the screen.  If you drag the window to the left (or right) of the screen then you can remove it from the split screen view.  This places it effectively in the “toolbox” and allows you to move Win Store apps to and from split screen depending upon what you’re doing at the time.

In theory you can do this for each monitor so if you have 2 monitors you can have 4 apps running at once.  It’s sort of funny how the single window, single focus concept of Win8 is already going away and MS is moving back toward multiple windows at the same time again.

Under Construction

There are still lots of issues with Win8 that Win8.1 doesn’t resolve.  The management aspect is still scattered to the four winds.  The Control Panel remains the best place to go to do all the system management but Win8.1 still provides links all over the place to sub pieces.  For an IT person I’d recommend just sticking with Control Panel (which is one of the options on the Start button).

Homegroup still doesn’t seem to add any value to me and yet by default you’ll get auto-added to a Homegroup when you install.  This adds a needless layer of removal once the OS is installed.  Please MS, give me the option of joining a Homegroup when I install.

The OS continues to run software at startup that I neither asked for nor want.  SkyDrive and the touch keyboard start automatically.  Even worse is that I turned off the touch keyboard and it was still there the next time I logged in.  I turned it off again but it is still running in the background.  Please MS, stop adding more stuff to an already bloated OS.  Don’t make me have to turn things off via hacks!!

Worthy Upgrade

At this point it is time to evaluate whether Win8.1 is a mandatory upgrade or not.  The answer is, it depends.  The upgrade is free for Win8 users.  If you’re running Win8 then Win8.1 is a recommended upgrade if you want the newer features.  Let’s face it, Win8 was more akin to Vista while Win8.1 is more like Win7.  Anybody running Win8 should go ahead and apply the free update.

If you’re running Win7 then Win8.1 isn’t really a mandatory upgrade.  Most people didn’t upgrade either because Win7 was doing just fine or the Win8 UI was so different.  Nothing has really changed on that front so if you’re content with Win7 then stay with it.  Once you are ready to upgrade though you’ll be switching to Win8.1.  If you’re running anything prior to Win7 then you probably need to upgrade so Win8.1 makes the most sense.  Just be ready to learn the new UI.

Environmental Config Transforms, Part 2

Update 29 July 2013: Refer to this link for updated code supporting Visual Studio 2013 Preview.

In the previous article we updated the build process to support environmental config transforms in projects and to have the transforms run on any build.  In this article we are going to package up the implementation to make it very easy to add to any project.  There are a couple of things we want to wrap up.

  • Place the .targets file into a shared location
  • Add an import into the current project to the .targets file
  • Add environmental config transforms into the project
  • Remove the default config transforms generated by Visual Studio

Installing the Shared Targets File

The default location for storing shared .targets files is MSBuildExtensionsPath.  This property is set automatically during builds and will, by default, be pointing to C:\Program Files (x86)\MSBuild.  If you look under this directory you will see that the standard .targets files shipped by .NET are already there.  We need to place our .targets file under there as well but we should create a subdirectory for our file.  This prevents us from colliding with other files and provides a convenient place to store additional files if we need them.  For this article I’m going to call the directory P3Net.

For a single machine you can easily just create the directory and copy the file into it.  But for several machines a better approach is to provide an installation script.  I’ve attached a simple one called Install.ps1 to this article.  One issue that the installation has is that you have to be an administrator to write to the directory so either the script has to be run using an elevated account or it will need to be manually done using Windows Explorer. 

Regardless of how the file ultimately gets installed, we now need to update the project file that we used last time to use the new path. 

<Import Project="$(MSBuildExtensionsPath32)\P3Net\P3Net.targets"/>

Reloading the project and rebuilding the solution should result in no errors.  In the future if we need to modify the .targets file then we’ll have to redeploy it.  We could, if we wanted, move this to a VS extension but the extension would need to be installed using administrator privileges. 

Creating the Item Template

Now that we have the .targets file in a shared location we can set about creating an item template to generate the config transforms for any project we want.  In previous articles we went over how to create item templates using T4 and how to deploy them.  In this case we’ll be creating an item template but it won’t use T4.  We will however use the same deployment process that we used in the previous articles.  If you have not yet done so then download the version from the final article.  We will add the template to the solution so it is deployed like the others.

Before we can create the template we need to decide what config transforms we need for our environments and what they should contain by default.  To keep it simple we will stick with the environments we defined in the last article (Production, Test) and our transforms will simply contain the environment name.  For your environments you’ll want to set up any additional, standard transforms you’ll need.  It is important to note that the actual set of environment transform files isn’t relevant to the build since it builds all the transforms it finds. 

Following the instructions for adding new item templates that were discussed in the template article we do the following to the ItemTemplates project.

  1. Create a new directory called EnvConfigs
  2. Add the environmental configs that we want into the folder.  Since we do not know whether this is a web or Windows project, rename the files to base.environment.config.  We’ll see later how to rename them.
  3. For each transform set the following properties:
    • Build Action = Content
    • Copy to Output = Do Not Copy
    • Include in VSIX = True
  4. Add the .vstemplate file to the project and update it accordingly.
  5. Set the following properties for the .vstemplate file
    • Build Action = Content
    • Copy to Output = Do Not Copy
    • Include in VSIX = True
    • (Optional) Category = My Templates

Here’s what my .vstemplate looks like.

<?xml version="1.0" encoding="utf-8"?>
<VSTemplate Version="3.0.0" Type="Item" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" xmlns:sdk="http://schemas.microsoft.com/developer/vstemplate-sdkextension/2010">
  <TemplateData>
    <Name>Environmental Configuration Transforms</Name>
    <Description>Template for generating basic environmental config transforms.</Description>
    <Icon Package="{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}" ID="4600" />
    <TemplateID>f7a423ac-7948-42a7-b9b9-cba719569106</TemplateID>
    <ProjectType>CSharp</ProjectType>
    <RequiredFrameworkVersion>4.0</RequiredFrameworkVersion>    
    <DefaultName>Environment</DefaultName>
  </TemplateData>
  <TemplateContent>
      <ProjectItem SubType="Code" ReplaceParameters="true" ItemType="None" TargetFileName="web.config\web.Test.config">base.Test.config</ProjectItem>
      <ProjectItem SubType="Code" ReplaceParameters="true" ItemType="None" TargetFileName="web.config\web.Production.config">base.Production.config</ProjectItem>
  </TemplateContent>
</VSTemplate>

There are a couple of interesting points about this file.  Notice the Icon element.  Since VS already ships with icons for config transforms I’m using the same values for my icon.  This gives the user a consistent icon for both versions.  The DefaultName element is not a full file name and won’t actually be used anyway.

The most interesting part of this is the template contents.  There is a project item for each environment transform.  The target file name is a partial path starting with the web.config file.  This causes VS to insert the transforms as subfiles under the base file.  This mimics the behavior that you see in VS today.

At this point we have a working item template but there are still a couple of issues.  The first issue is that the template is keyed to a web project.  If we try to use it on a Windows project it will be using the wrong config.  We need to fix that, but we cannot do it with the existing .vstemplate file as is.  Instead we need to modify the project item entries to use a template property that we can replace when the template runs.  Here are the updated project item elements.

<ProjectItem SubType="Code" ReplaceParameters="true" ItemType="None" TargetFileName="$BaseConfigFileName$.config\$BaseConfigFileName$.Test.config">base.Test.config</ProjectItem>
<ProjectItem SubType="Code" ReplaceParameters="true" ItemType="None" TargetFileName="$BaseConfigFileName$.config\$BaseConfigFileName$.Production.config">base.Production.config</ProjectItem>

Now all we need to do is replace the template property with the actual config file name when it is inserted.  But to do that we need to use a template wizard.

Creating a Template Wizard

Most templates do not need any code to support them but some do.  For those that need custom code you have to write a template wizard.  The documentation places lots of restrictions on template wizards, but based upon the current behavior of VS, and building on the existing template deployment architecture, we can simply create a new template wizard project alongside our template projects and deploy it along with the item templates.

  1. Create a new class library called P3Net.MyTemplateWizards.
  2. Add references to the following assemblies
    • EnvDTE (set Embed Interop Types to false)
    • Microsoft.Build
    • Microsoft.VisualStudio.TemplateWizardInterface
    • System.Windows.Forms
    • TemplateLib (contains some extension methods)
  3. Add a new class called EnvironmentConfigsWizard that implements IWizard.

The code for the wizard is too long to post so I’ll only mention the RunStartedCore method.  This method is called when the template runs.  It is responsible for doing the heavy lifting.  Here’s the code.

private void RunStartedCore ( EnvDTE.DTE dte, 
                Dictionary<string, string> replacementsDictionary, 
                WizardRunKind runKind, 
                object[] customParams )
{
    //Get the current project                
    var project = dte.GetCurrentProject();

    //Get the configuration file
    var configFile = GetConfigurationFile(project);
    if (configFile == null)
        ReportErrorAndCancel("No configuration file could be found.");

    //Set the template parameters
    replacementsDictionary.Add("$IsWebProject$", configFile.ProjectType == ProjectType.Web ? "1" : "0");
    replacementsDictionary.Add("$BaseConfigFileName$", configFile.BaseConfigFileName);
}

The method first finds the config file in the project.  Based upon the config file name it knows whether this is a web or Windows project.  It then updates the template property accordingly so that the .vstemplate file will generate the correct target information.
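The rest of the wizard code is not shown here, but to make the flow concrete here is a minimal sketch of what GetConfigurationFile might look like.  ConfigFileInfo and ProjectType are simple helper types of my own that just hold the values used above; the actual implementation may differ.

private ConfigFileInfo GetConfigurationFile ( EnvDTE.Project project )
{
    //Look for the design-time config file - web.config for web projects, app.config for Windows projects
    foreach (EnvDTE.ProjectItem item in project.ProjectItems)
    {
        if (String.Compare(item.Name, "web.config", true) == 0)
            return new ConfigFileInfo { ConfigurationItem = item, ProjectType = ProjectType.Web, BaseConfigFileName = "web" };

        if (String.Compare(item.Name, "app.config", true) == 0)
            return new ConfigFileInfo { ConfigurationItem = item, ProjectType = ProjectType.Windows, BaseConfigFileName = "app" };
    };

    return null;
}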

Now that the template wizard is defined we need to update the .vstemplate file to reference the wizard.  Add the following to the end of the .vstemplate file.

<WizardExtension>
    <Assembly>P3Net.MyTemplateWizards, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null</Assembly>
    <FullClassName>P3Net.MyTemplateWizards.EnvironmentConfigsWizard</FullClassName>
</WizardExtension>

The above elements identify the assembly to load and the class within the assembly that will be used to support adding the template to the project.  Now we need to hook it up to the deployment project.

Adding Wizard to Setup

To add the template wizard library to the setup we do the following.

  1. Open the .vsixmanifest file in the designer
  2. Go to the Assets tab and click New
  3. Select Assembly as the type
  4. Select A project in the current solution as the source
  5. Select the project
  6. Click OK

The template wizard will now be deployed as part of the extension.  We have just a couple more issues to resolve before we are done.

Removing the Default Config Transforms

The next issue we need to resolve is the removal of the default transforms that are most likely in the project.  This is a simple matter of removing the files from the project so we update the RunStartedCore method from earlier to find and remove any existing transforms before we add our new ones.

//Remove the default config transforms if they exist            
RemoveProjectItem(configFile.ConfigurationItem, configFile.BaseConfigFileName + ".Debug.config");
RemoveProjectItem(configFile.ConfigurationItem, configFile.BaseConfigFileName + ".Release.config");

The above code only removes the default transforms.  If you wanted to remove them all you would need to update the code.
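RemoveProjectItem is not shown in the article either.  A minimal sketch, assuming the transforms show up as child items of the config file, might look like this.

private void RemoveProjectItem ( EnvDTE.ProjectItem parent, string fileName )
{
    //Find the transform under the config file and remove it from the project
    //(Remove leaves the file on disk whereas Delete would remove it as well)
    foreach (EnvDTE.ProjectItem child in parent.ProjectItems)
    {
        if (String.Compare(child.Name, fileName, true) == 0)
        {
            child.Remove();
            return;
        };
    };
}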

Importing the Targets File

The final issue to solve is getting the shared .targets file into the project.  We don’t want to have to do that manually, so we will modify the template method to check for the import of the .targets file.  If it hasn’t been imported yet then the template will add it.  Here’s the code.

private void EnsureStandardTargetIsImported ( EnvDTE.Project project )
{
    var buildProject = ProjectCollection.GlobalProjectCollection.GetLoadedProjects(project.FullName).First();

    //Check for the import
    var hasImport = (from i in buildProject.Xml.Imports
                     where String.Compare(Path.GetFileName(i.Project), SharedTargetsFileName, true) == 0
                     select i).Any();
    if (hasImport)
        return;

    //Make sure it exists first
    var extensionsPath = buildProject.GetPropertyValue("MSBuildExtensionsPath32");
    var targetsPath = Path.Combine(SharedTargetsFilePath, SharedTargetsFileName);
    var fullPath = Path.Combine(extensionsPath, targetsPath);
    if (!File.Exists(fullPath))
        ReportErrorAndCancel("The standard .targets file could not be located.");

    //Add it             
    buildProject.Xml.AddImport(@"$(MSBuildExtensionsPath32)\" + targetsPath);
}

The above method gets the imports from the project and looks for the shared .targets file.  If it isn’t found then an import is added; otherwise nothing happens.  A call to this method is placed in the template method shown earlier.
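For clarity, the corresponding call added to RunStartedCore is just this (the end of the method is a reasonable spot for it).

//Make sure the shared .targets file is imported into the project
EnsureStandardTargetIsImported(project);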

Conclusion

We are done.  We have a new item template that generates the environmental transforms that we need and cleans up the existing version that VS uses.  Whenever the project is built all the environmental transforms are generated and stored so we can easily build once and deploy to any of our environments.

Additionally we have added a template wizard to our T4 template project that we can use as a basis for more advanced templates in the future.  Finally we created a shared .targets file that we can use to add new build tasks to any project. 

Environmental Config Transforms, Part 1

Configuration file transforms have been around for quite a while and most companies use them to generate different config files for each environment but Visual Studio does not really support the functionality most companies need.  In this article I’m going to talk about why the existing Visual Studio implementation is flawed and how to fix it so that you can have true environmental transforms that are actually useful.

For purposes of this article we will assume a simple web application (although it applies to any application) that will be deployed to 2 different environments – test and production.  We will also assume that developers debug the application on their local machine and therefore should not have to do any special tasks to debug the application locally.  As such we’ll have a 3rd, default, environment – debug.

Per Configuration Transforms

One very big issue with the existing VS implementation of transforms is that it is tied to the configuration (Debug, Release) of the project being built.  For example if you’re building a project in debug mode then only the debug transform is run.  There are several problems with this approach.  The first problem is that VS is tying environmental settings to build configurations.  I don’t know about you but since my earliest years in the business I have heard (and followed) the philosophy that each (potentially releasable) build has to be tested.  Therefore if we have 3 environments we would need to have 3 different configurations (one for each environment) and a separate transform for each.  Furthermore we’d have to build the application 3 times even though just the transform should change.  When you start getting into large applications with many different projects it becomes clear that managing configuration per environment just doesn’t add up.  A single build should be deployable to any environment along with its transformed config file. 

Another issue with this approach is verification of transforms.  If I make a change to the root config file or any transform I’d like to ideally build and verify all the transforms still work.  I shouldn’t have to build my application for each environment to verify one change.

Clearly the VS approach of tying environmental transforms to build configurations just does not make sense.  Unfortunately that is the default.  Fortunately the rest of this article will show you how to create environmental transforms that avoid these issues.  Specifically this article will show you how to:

  • Have 1 transform for each environment (not configuration)
  • Validate all transforms at build time
  • Provide the final, transformed configs for each environment to simplify deployment
  • Make it very easy to add environmental transforms to new projects

For this article I’ve created a simple ASP.NET application that simply displays the environment name.  It was generated using VS 2012 and then stripped of everything except the default page with a label for the environment name.

Per Environment Transforms

To get started we need to agree upon an approach.  Following the default VS approach we will create a separate config file for each environment called web.environment.config.  Ideally these transforms will be stored under the config file.  Within each environmental transform will be the standard transforms for that environment.  For example in development we will likely leave debugging information turned on but in the production environment we will turn off all debugging. 

Assuming you have an application open you would simply add a new config transform for each environment (test, production).  Since the transform syntax can be a little different the easiest approach is to simply copy and paste the existing web.debug.config transform.  Since we do not want per-configuration transforms, the default Debug and Release transforms can be removed.  Here’s what you should have in the final project.

  • Web.config
    • Web.Production.Config
    • Web.Test.Config

For clarification I’ll add a simple app setting entry that contains the environmental name so it is easier to identify which transform was used.

<appSettings>
    <add key="Name" value="Debug" />
</appSettings>

And the corresponding Test transform

<appSettings>
    <add key="Name" value="Test" xdt:Transform="SetAttributes" xdt:Locator="Match(key)"/>
</appSettings>
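For completeness, the page code behind just reads the setting and drops it into the label.  Something like this works (the label name is whatever you used; ConfigurationManager lives in System.Configuration).

protected void Page_Load ( object sender, EventArgs e )
{
    //Display the environment name stored in the config file
    lblEnvironment.Text = ConfigurationManager.AppSettings["Name"];
}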

Running the application at this point would print out “Debug” because VS cannot find a transform for the configuration.  More importantly without a manual process we do not have the environmental transforms to test. 

Configuration File Names

Before moving on it is important to note that there are two different types of config files we have to deal with.  Web-based projects use a web.config file.  The filename does not change between design and run time, so web config files are simple to deal with.  On the other hand Windows-based projects use app.config at design time but the file is copied and renamed to myapp.exe.config where myapp is the name of the executable.  To support the transformation of either type of file we have to write extra code to select the correct base name and transform the file to the correct final name.  This will add some extra code later on but does not overly complicate the process.

Building the Transforms

Whenever we do a build we want the transforms to run, irrelevant of configuration.  This provides both the ability to validate any change to the config or the transforms at build time and to allow us to have the fully transformed configs so that we can deploy the build and its config to any environment, without the need for another build.

To trigger something at build time you are going to have to use MS Build.  A project file is really nothing more than a set of MS Build tasks that run.  To do something during a build you need only call a pre-defined task, or write your own.  If the task is involved or is going to be used in many projects then it is best to store it separately in a targets file.  .NET ships with lots of them.  For building the transforms we will store the appropriate task in a targets file that can be easily reused in any number of project files.

Creating the Targets File

A description of how targets files and MS Build work is beyond the scope of this article.  You can refer to the attached download for the full file.  I am just going to highlight the process of building the transforms.  Here is the base flow:

  • Get the source file to be transformed
  • Get the list of transform files to be applied
  • For each transform file apply it to the source file to get the output file
  • Save the output file

All this work will require a custom task (TransformXmlFiles).  MS Build supports different approaches to building tasks but I’m opting for an inline task to simplify deployment.  As such the task code is part of the file and contained within a UsingTask.

First we need to set up some parameters for the transformation task.  MS Build tasks are parameter-based so we define the following:

  • SourceFile – The config file that will be transformed (web.config or app.config)
  • TransformFiles – The list of files to process
  • OutputDirectory – The directory where the final transformed files will be stored.
  • TargetFile – The name of the final transform file
  • ProjectName – The project name
  • ToolsDirectory – The path to the VS installation

To trigger a task the project file must either explicitly call the task or the task must hook into the build process.  Since I want the .targets file to be as non-intrusive as possible in the project file I have opted to automatically run the transformation task after a build.  Using this approach I can completely configure the task within the .targets file and project files need only import the file to get the behavior.  Here’s the relevant section.

<!-- Because of the MS Build "bug" VSToolsPath is needed but it isn't always available so handle that case now -->
<PropertyGroup>
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">11.0</VisualStudioVersion>
    <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
</PropertyGroup>


<!-- Get the config transform files -->
<ItemGroup>
    <WebConfigTransformFiles Include="web.*.config" Exclude="web.config" />
    <AppConfigTransformFiles Include="app.*.config" Exclude="app.config" />
</ItemGroup>
    
    <!-- Runs after a successful build -->
<Target Name="TransformConfigurationFiles" AfterTargets="AfterBuild">
    <TransformXmlFiles TransformFiles="@(WebConfigTransformFiles)" SourceFile="web.config" TargetFile="web.config" 
                            OutputDirectory="$([System.IO.Path]::Combine($(OutDir), 'Configs'))" ProjectName="$(MSBuildProjectName)" 
                            ToolsDirectory="$(VSToolsPath)" />

    <TransformXmlFiles TransformFiles="@(AppConfigTransformFiles)" SourceFile="app.config" TargetFile="$(TargetFileName).config"
                            OutputDirectory="$([System.IO.Path]::Combine($(OutDir), 'Configs'))" ProjectName="$(MSBuildProjectName)"
                            ToolsDirectory="$(VSToolsPath)" />
</Target>  

Notice that the transform task is called twice, once for each type of config.  This is simpler (in a task) than trying to determine which type of config to transform.  Notice also that in each case the parameters are set accordingly to ensure the final transform is generated.  The target file name is set based upon whether we are transforming a web or app config.

The output directory is set to the project’s output directory with a subdirectory of Configs.  Additionally the project name is determined by the project property.  This results in a hierarchy where all the configs are stored as subfolders based upon the project and environment name.  This allows a single solution to have multiple projects with multiple environmental transforms and each one goes into a separate directory for ease of deployment.
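For example, assuming a web project named MyWebApp (the name is just for illustration) with Test and Production transforms, the build output would contain something like this.

  • $(OutDir)\Configs
    • Test\MyWebApp\web.config
    • Production\MyWebApp\web.config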

MS Build “Bug”

All that remains is the actual logic for transforming the config file. 

if (TransformFiles != null && TransformFiles.Length > 0)
{
    //The reference assembly path is only used for compilation so force the assembly to load so it is available when we need it
    var assemblyWebPublishing = Assembly.LoadFrom(Path.Combine(ToolsDirectory, @"Web\Microsoft.Web.Publishing.Tasks.dll"));

    dynamic transform = assemblyWebPublishing.CreateInstance("Microsoft.Web.Publishing.Tasks.TransformXml");
    //var transform = new TransformXml();

    transform.BuildEngine = this.BuildEngine;
    transform.HostObject = this.HostObject;

    foreach (var inputFile in TransformFiles)
    {
        //Get the env name
        var fileParts = Path.GetFileNameWithoutExtension(inputFile.ItemSpec).Split('.');
        var envName = fileParts.LastOrDefault();

        //Build output directory as base output directory plus environment plus project (if supplied)                        
        var outDir = Path.Combine(OutputDirectory, envName);
        if (!String.IsNullOrEmpty(ProjectName))
            outDir = Path.Combine(outDir, ProjectName);

        //Build the output path
        var outFile = Path.Combine(outDir, TargetFile);

        //Make sure the directory exists
        if (!Directory.Exists(outDir))
            Directory.CreateDirectory(outDir);

        //Transform the config                
        transform.Destination = outFile;
        transform.Source = SourceFile;
        transform.Transform = inputFile.ItemSpec;
        transform.Execute();
    };
};

The actual transformation is triggered by calling the same code that VS itself uses.  That functionality resides in a web publishing assembly shipped with VS.  Note: If you use TFS Build with a project using this .targets file then either VS needs to be installed on the build agent or the assembly needs to be copied to a location that MSBuild will use.

In and of itself this is trivial except for a “bug” in MSBuild.  MSBuild allows you to specify assembly references for inline tasks and it will honor the path during compilation.  Unfortunately when the task is then run the assembly will not be found because the runtime path is not being configured.  I reported this bug on Connect but whatever engineer was assigned to the ticket completely missed the point and marked the bug as By Design.  Connect can be a nightmare to use because of the lackluster support that MS sometimes gives it.  Rather than fighting that battle I simply worked around the issue by loading the assembly manually.  Note the use of dynamic to work around the compile-time reference issues.

Using the .targets file

That completes the .targets file.  Now all we need to do is put it someplace the build can find it and then add an import into the project file.  The standard location for shared .targets files is MSBuildExtensionsPath32, which is created when .NET is installed and contains most of the standard .targets files.  To keep things simple I’ll just reference the .targets file that is in the solution directly.  For real solutions the .targets file should be stored in the standard location.

Now just add an import into the project file (I prefer at the bottom).

<Import Project="$(SolutionDir)Targets\P3Net.targets"/>

Rebuild, and the output directory should contain a Configs directory with the transformed files in subdirectories.  To verify that validation is occurring you can add a bad transform to one of the transform files and you should get a compilation warning.

Next Time

We now have the ability to generate environmental transforms during a VS build for any type of project.  We’ve solved the problem we started with but we can go further.  The current solution, while slick, is a little much to do for every new project.  Next time we’ll wrap this up in a deployable package and add a template to automate the setup so we can add it to, and run it on, any new project.

Using T4 to Create an AppSettings Wrapper, Part 7

Update 29 July 2013: Refer to this link for updated code supporting Visual Studio 2013 Preview.

In this final article in the series we’re finishing up the deployment projects started last time.  When we’re complete we’ll have a set of projects that can be used to build and deploy T4 templates where needed.  The projects will be able to support any number of templates so maintenance will be simple even as more templates are added.  In the previous article we moved the core template functionality into a library that we can expand upon.  We also set up an item template project to store the root T4 template that someone would use.  In this article we’re going to create the project to store the nested templates (we’ll discuss why later) along with the Visual Studio Extension (vsix) file that will be used to install everything.

Corrections

There are a couple of corrections that need to be made from the source in the last article to prepare for the setup project.

In the AppSettingsTemplate.tt file the assembly reference for the shared assembly needs to include the .dll otherwise the T4 host will assume it is coming from the GAC.  An assembly reference for EnvDTE is also missing so it needs to be added.

<#@ assembly name="TemplateLib.dll" #>
<#@ assembly name="EnvDTE" #>

In the nested template there were still some hard-coded references to the original class name rather than using the ClassName property in the template.  Do the replacement so the template behaves properly based upon the name that is ultimately used.

Nested Template Project

For the nested templates we need to create a new VS Project Template project.  When a developer uses one of the item templates they will only add the core template to their project.  The nested template, and support assembly, will need to be stored in a shared location that the T4 host can find.  The easiest way to do this is to use a VS Project Template project.  All the shared templates will be stored in this project.  We could technically even store the base template class code in this project but I find it easier to keep them separate. 

Add a new project to the solution (Visual C#\Extensibility\C# Project Template) called TextTemplates (the name is not really relevant).  Once the project has been added remove all the files from the project as they will not be needed.  Add a reference to the shared assembly project that was created earlier.

This project mirrors the item template project so set up the same folder structure for each template as needed.  The nested template file will be moved to this project.  Every time a new nested template is added the following steps need to be followed.

  1. Create a subfolder in the project
  2. Add the nested .tt file to the subfolder
  3. For each .tt file set the following properties for the item
    • Build Action = Content
    • Copy to Output = Copy Always
    • Custom Tool = (blank)
    • Include in VSIX = True

There is only one issue with the approach we’re taking – T4 does not know where to find our nested templates and custom assembly.  Therefore we need to update the search path that T4 uses to include the installation path of the package.  The simplest way to do that is to add it to the package definition (.pkgdef) file.  When a package is installed the package definition is processed to allow the package to do any customization it needs.  We can use this feature to update the T4 include path.

Add a new text file to the project.  Its name must match the name of the project and it should have an extension of .pkgdef (i.e. TextTemplates.pkgdef).  Change the following properties for the item.

  • Build Action = Content
  • Copy to Output = Copy Always
  • Include in VSIX = True

When T4 runs across an assembly or include directive that it cannot find then it uses the registry to search for additional paths.  Each time we add a new template we need to add the path to the template to the list using the .pkgdef file.  Note that the subfolder we use in the project will correspond to the subfolder that the template is installed to so we need to include the full path.  Here’s the code we’ll add to the package definition file.

[$RootKey$\TextTemplating\IncludeFolders\.tt]

"IncludeMyAppSettings"="$PackageFolder$\TextTemplates\AppSettings"

This will add a new key to the registry with the given name and value.  The installer will replace $PackageFolder$ with the installation path for the package.  The project name (TextTemplates) follows that and the last part is the folder that was used when adding the template to the project.  It is critical that the key name start with “Include” otherwise the T4 host will ignore it.  It must also be unique.  Each new subfolder of nested templates will need a corresponding name-value pair in the definition file.

Some discussion of this can be found in MSDN.  But credit for pointing me in the correct direction when I was trying to figure this out has to go here.

VSIX Project

We’re on the home stretch.  We have set up the project for the shared assembly code, defined the item templates that will be available to the developers and got the support templates hooked up to T4.  The final step is to create a VSIX file to install everything.  By far this is the most frustrating part because VSIX is very picky about how things have to work and it can be a bear to work with.

Create a new VSIX Project (Visual C#\Extensibility\VSIX Project) called MyTemplateSetup.  The manifest editor will open up.  The manifest controls several important aspects of the setup including the information the user sees, the files to be installed and the version of the setup.

  1. Set Product Name to the name you want the extension to appear as in the gallery.
  2. The Product ID must be unique and should already be set.
  3. Set Author to an appropriate value.
  4. The Version should default to 1.0.
  5. Provide a description of the extension.
  6. Since this is a template installer the Tags should probably be set to Templates.
  7. Optionally set the other attributes such as licensing, icon and release notes.

Switch to the Install Targets tab.  This tab indicates what version(s) and edition(s) of Visual Studio the extension supports.  The default is Visual Studio 2012 Pro or higher which should be fine.  Microsoft licensing does not allow third-party extensions to the Express products.  If a newer version of Visual Studio comes out then the version number can be updated.

Switch to the Assets tab.  This is where we identify the files to be installed.  This is also where things can get difficult if the projects were not created using the right project type.

Add a new asset for the shared assembly. 

  1. Type is Microsoft.VisualStudio.Assembly
  2. Source is a project in the solution
  3. Project will be the shared assembly project
  4. Click OK

Add a new asset for the item templates.

  1. Type is Microsoft.VisualStudio.ItemTemplate
  2. Source is a project in the solution
  3. Project will be the item template project
  4. Click OK

Notice that the path that is displayed includes an output group.  This property is set inside the item template project.  If the referenced project actually isn’t an item template then the build will fail.

Add a new asset for the nested templates.

  1. Type is Microsoft.VisualStudio.VsPackage
  2. Source is a project in the solution
  3. Project will be the text template project 
  4. Click OK

An interesting (and sometimes problematic) thing that happens is that project assets are added as references to the project.  For the nested templates this is problematic because by default the setup project expects an assembly to be generated.  For the nested templates there is no assembly so open the properties for the nested template reference and set Reference Output Assembly to False.  If this is not done then a compilation error will occur.

Switch to the Dependencies tab.  This is where any dependencies can be defined.  The .NET framework is already included but we depend upon T4 Toolbox so that needs to be added as well.  Assuming it is already installed on the machine you can do the following to add the dependency.

  1. Source is Installed Extension
  2. Name is T4 Toolbox
  3. Version range will be set to the current version but you can adjust this as needed
  4. Click OK

One issue with dependencies is that if the user does not have the dependency installed they may get a generic error message before they get information about the missing dependency.  Hence it is useful to put dependencies in the description of the extension.

Almost done.  The last thing we need to do is add another package definition file (.pkgdef) matching the name of the shared assembly project.  As was done earlier, configure the item properties.

  • Build Action = Content
  • Copy to Output = Copy Always
  • Include in VSIX = True

The package definition file used earlier had an entry for each template.  This file will contain an entry for the shared assembly.

[$RootKey$\TextTemplating\IncludeFolders\.tt]

"IncludeMyTemplateAssemblies"="$PackageFolder$"

Compilation

Time to compile the setup but before you do I recommend that you change the VSIX properties in the setup project to not deploy to the experimental instance of VS.  The experimental instance of VS is designed to allow you to test your packages without impacting your main development environment.  Unfortunately in my experience it does not work well when you have additional packages or extensions installed.  You end up sitting through lots of error dialogs. 

If you do not use the experimental instance of VS then you won’t be able to easily debug your template.  However there is a simple approach that I find useful.  In most cases you will develop your template in a standalone project so you can tweak it.  Once it is added to the extension, though, you can still change it by finding the directory where the extension was installed.  By default it will be a randomly generated directory under <VSDir>\Common7\IDE\Extensions for shared extensions or <profiledir>\AppData\Local\Microsoft\VisualStudio\11.0\Extensions for user extensions.  Once you find the directory you can edit and/or replace the files until you’ve resolved any issues you’re having.  You can then update the project and redeploy.

To test the setup simply run the .vsix file.  Start VS, create or open a project and add the new item template to the project.  Confirm the generated code is correct.  In general if you are adding or removing extensions you should ensure that all instances of VS are closed first.  VS doesn’t fully install/remove extensions until it is restarted and having multiple instances running can cause problems with that.

Versioning

Versioning of the extension is incredibly important.  When VS is looking for an updated extension it uses the version as defined in the manifest.  Ensure that whenever you build a new version of the setup that you increment the version number.  You should also consider keeping the major/minor version number of the shared assembly in sync with the manifest file.

Do not change the product ID.  If you do then this becomes a brand new extension that can be installed side-by-side with the old one.  In most cases this causes conflicts that you would want to avoid.

Troubleshooting

Troubleshooting T4 templates is an art more than a science but here are a few thoughts based upon my experience.

If you get a compilation error while compiling the setup project saying it cannot find the assembly for the nested templates project then you forgot to change the reference’s properties.  Refer to the earlier section on fixing that.

If the root template cannot find the nested template then the registry was not updated properly.  This can occur if the extension was installed but VS was not restarted, multiple instances of VS were running or if something went wrong.  Uninstall the extension, restart VS to clean everything up, shut down VS and reinstall the extension.

If the shared assembly file cannot be found then the above comments about the root template also apply.  Additionally ensure that the assembly reference in the nested template includes the .dll or the assembly is in the GAC.

If the generated file contains an error token then use Error List to determine the actual error(s) that occurred.  In most cases it is a compilation issue with the nested template.  Load up the nested template in VS and edit it until it compiles.  Then rebuild the deployment package.

If the item templates do not show up then ensure that they were properly added to the setup project and that they appear in the extension directory.  The path to the extension directory is contained in the install log that is accessible at the end of the extension installation.

When adding a new template get it working in a standalone console application and then move it into the templates solution.  This will make it easier to make changes and you can still rely on the shared assembly and break up the template into the item and nested components.

Enhancements

There are quite a few enhancements that could be made with all this code.  One area that I personally recommend is using data annotations for validating configuration properties.  Apply annotations to the configuration properties where appropriate.  Alternatively implement IValidatableObject in the template class.  Then modify the Validate method in the base class to validate the annotations as part of the core validation.  This eliminates the need for per-template validation of things like required parameters.
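As a rough sketch, and assuming the MyTemplate base class from earlier in the series, the override might look something like this.  ValidationContext, ValidationResult and Validator come from System.ComponentModel.DataAnnotations and List<T> from System.Collections.Generic.

protected override void Validate ()
{
    base.Validate();

    //Validate any data annotations applied to the template's public properties
    var results = new List<ValidationResult>();
    var context = new ValidationContext(this, null, null);

    if (!Validator.TryValidateObject(this, context, results, true))
    {
        foreach (var result in results)
            Error(result.ErrorMessage);
    };
}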

Another area of improvement is breaking out all the non-template functionality.  I personally like extension methods so you could move that functionality into extension methods. 

Yet another area is defining some common template generation methods that derived types can override.  For example most T4-generated code should be marked as non-user code so the debugger won’t step into it.  Wrapping this up in a method allows derived types to call it where appropriate and, optionally, add their own attributes.

For deployment you can provide the developers with the VSIX package.  If you’ve read my earlier article on hosting your own private VS extension gallery then you could deploy the VSIX to that as well.

Attachment: MyTemplates.zip
Posted Sun, Apr 28 2013 by Michael Taylor

Using T4 to Create an AppSettings Wrapper

Update 29 July 2013: Refer to this link for updated code supporting Visual Studio 2013 Preview.

This is the table of contents for the series on using T4 to create an AppSettings wrapper for use in any code.

  1. Creating a static T4 Template for AppSettings
  2. Creating a dynamic T4 Template for AppSettings
  3. Reading Settings from the Configuration File
  4. Separating Customizable Parts from Static Parts
  5. Customizing the Template
  6. Creating a Deployment Package (Part 1)
  7. Creating a Deployment Package (Part 2)
Posted Sun, Apr 21 2013 by Michael Taylor

Using T4 to Create an AppSettings Wrapper, Part 6

In this (almost) final post in the series on creating an AppSettings wrapper in T4 we’re going to package all the work we’ve done into an easily deployable and reusable package that we can extend to include other templates.  We’ll also make sure that adding the template to a project is as easy as Add New Item.

Here’s the list of things we’ll accomplish

  • Create a custom base class and move all the shared template functionality to it.
  • Create a VS extension (VSIX) to install the parts of our template.
  • Install the shared template components into a centralized directory so projects only need to include the core template.

Preparation

There will be 4 projects by the time we’re done so now is a good time to create a new solution for all the template work.  Create a new blank solution in Visual Studio (Other Project Types\Visual Studio Solutions).  This solution will be able to host any number of T4 templates so you should use a generic name (i.e. MyTemplates).

We’ll be building a VSIX so the Visual Studio 2012 SDK needs to be installed.  If the T4 Toolbox isn’t installed then that will be needed as well.

Shared Template Assembly

Let’s start by moving all the common functionality out of the nested template and into a reusable base class.  The base class will contain any logic that can be shared across T4 templates.  Create a new Class Library (i.e. TemplateLib) to store the base class and any additional functionality that will be needed by all templates.  Since this will be hosted by VS it needs to target the .NET 4.5 framework.  Delete any created class file.

We have been using a lot of assemblies so we need to add references to all of them.  Note that when referencing VS assemblies there may be several copies available.  Prefer the versions in the Program Files directories over the VS directory to make it easier to move solutions.

  • Assemblies\Extensions
    • EnvDTE (version 8.0.0.0)
    • Microsoft.VisualStudio.OLE.Interop
    • Microsoft.VisualStudio.Shell.11.0
    • Microsoft.VisualStudio.Shell.Design
    • Microsoft.VisualStudio.Shell.Interop.11.0
    • Microsoft.VisualStudio.TextTemplating.11.0
    • Microsoft.VisualStudio.TextTemplating.Interfaces.11.0
    • VSLangProj
  • Assemblies\Framework
    • System.Configuration
  • T4 Toolbox installation directory
    • T4Toolbox
    • T4Toolbox.VisualStudio

For the T4 Toolbox binaries you’ll unfortunately have to reference them from the installation directory where VS puts them.  By default this will be a folder under your AppData\Local\Microsoft\VisualStudio\11.0\Extensions directory.  If you like you could also copy the files into a shared assemblies folder under the solution to make it easier to use.

When adding EnvDTE and VSLangProj be sure to open the reference properties and set Embed Interop Types to False.  Otherwise you’ll get several compilation warnings.

With all the references in place create a new class that will represent the base template class (i.e. MyTemplate).  It should be public, abstract and derive from CSharpTemplate in the T4 Toolbox. 

/// <summary>Base class for T4 templates.</summary>
public abstract class MyTemplate : CSharpTemplate
{
}

Move all the members not specifically related to the app settings template into the base class.  Make them protected so they are not exposed outside the hierarchy.  Here are the members I see that should be moved.

  • ActiveProject (and field)
  • DteInstance (and field)
  • FindConfigProjectItem
  • FindItem
  • FindProject
  • GetFriendlyName
  • GetProjectItemFileName

With the members moved the nested template can be cleaned up to remove unused namespaces and assembly references.  All it contains now is the functionality specifically needed for dealing with the settings.

The nested template will need a reference to our assembly and an import for the namespace.  Additionally, the nested template needs to derive from the new class.


<#@ assembly name="TemplateLib" #>
<#@ include file="T4Toolbox.tt" #>
…
<#@ Import namespace="TemplateLib" #>
<#+
public class AppSettingsTemplate : MyTemplate

At this point the template will not work correctly because T4 cannot find the custom assembly.  To temporarily confirm all the changes work you can copy the assembly to the directory containing the templates.

Item Template Project

In order to use the template developers will need to be able to add the template from Add New Item.  To simplify deployment we can create a new template project that installs the items into the appropriate location.  This project can install any number of item templates so as more T4 templates are created they can be added here.

Create a new Item Template project (Visual C#\Extensibility\C# Item Template) called ItemTemplates.  Since we might have several different templates in this project I like to create a folder for each template (i.e. AppSettings).  Each new T4 template can be placed into its own subfolder so there are no worries about collisions.  Every time a new item template is added the following steps need to be followed.

  1. Create a subfolder in the project
  2. Add the .tt file to the subfolder along with the .vstemplate and optional .ico files
  3. For each .tt file set the following properties for the item
    • Build Action = Content
    • Custom Tool = (blank)
    • Include in VSIX = True
  4. For the .vstemplate file set the following properties
    • Build Action = VSTemplate
    • Include in VSIX = False
    • Category = (see below)
  5. In the VSTemplate file
    • Set RequiredFrameworkVersion as appropriate
    • Set a unique TemplateID
    • Set the name and description
    • Set DefaultName accordingly
    • Reference the icon included in the project
    • Add any referenced assemblies
    • Change the existing ProjectItem entry to reference the .tt file instead
    • For each ProjectItem set the ItemType to the appropriate Build Action (for .tt files it should be set to None)

Move the .vstemplate and .ico files that were generated into the template folder.  The .cs file can be deleted.  Rename the copied files to match the template name.  Finally copy the root T4 template from previous articles into the folder.  Then follow the steps above to complete the process.

The Category property for the .vstemplate allows you to group your templates into a folder in Add New Items.  It should be used to keep your custom templates separate from VS or others.

Time Out

This post is getting long so it’s time to take a break.  In the next (and final) post we’ll create the other 2 projects to finish up the deployment of the T4 template.

Attachment: MyTemplates.zip
Posted Sun, Apr 21 2013 by Michael Taylor

Using T4 to Create an AppSettings Wrapper, Part 5

In the last article we broke out the template into 2 separate template files: main and nested.  A developer adds the main template to their project but it includes the nested template where the actual work is done.  In this article we’re going to update the included template to allow a developer to alter the generated code without having to touch the nested template.

In the original article we listed 6 requirements for the template.  At this point we have met half of them.  By adding customization we will meet the final three.  They are:

  • Exclude some settings
  • Change the type of a setting
  • Use the configuration file of a different project

Customizing a Template

Customizing a template is actually not that hard now that it’s broken up.  The key is in understanding what we can and cannot change when we release updates.  The main template can be updated but in order to get the updates a developer would need to remove the main template from their project and add it back.  The nested template can be updated as needed.  Provided the developer reruns the template they will get any changes.

To customize a template we add public members (properties or methods) to the template class in the nested template.  As with a normal class we can then use the properties/methods when it comes time to execute the template.  We can even add new customizations in future updates without breaking the developer’s existing customizations.

In the main template a developer can call the public members after the template class instance is created but before it renders.  Since they will not have access to Intellisense it makes sense to provide some commented code demonstrating the available customizations.  Additionally we can provide some basic customizations if desired.  Since the main template is never updated once it is added to the project (unless the developer does it explicitly) the customizations are preserved even if the nested template is updated.  Now let’s focus on adding each of the customizations we listed earlier.

Excluding Settings

With certain projects we often find extra settings that we do not care about included in the appSettings section.  ASP.NET is a good example because it stores several options there.  We probably do not want to generate code for these.  For this customization we will allow the developer to exclude a setting based upon its name. 

In the nested template add a public method called ExcludeSetting that accepts the name of a setting.  Some frameworks have a lot of settings so it would be painful to exclude each one.  Therefore we’ll make the exclusion functionality a little smarter by allowing an asterisk on the end (i.e. webpages*).  The asterisk will be a wildcard for anything that follows the name when doing matching.  To simplify the code define 2 private lists to store the excluded settings (one for regular and one for wildcard exclusions).  Also define a private method called IsExcluded that takes a setting name and returns whether it should be excluded or not based upon the lists.

public AppSettingsTemplate ExcludeSetting ( string settingName )
{   
   if (settingName.Contains("*"))
   {
      var token = settingName.Substring(0, settingName.IndexOf('*'));
      m_exclusionMasks.Add(token);
   } else
      m_exclusions.Add(settingName);

   return this;
}

private List<string> m_exclusions = new List<string>();
private List<string> m_exclusionMasks = new List<string>();

private bool IsExcluded ( string settingName )
{
    return m_exclusions.Where(x => String.Compare(x, settingName, true) == 0).Any() ||
            m_exclusionMasks.Where(x => settingName.StartsWith(x, StringComparison.OrdinalIgnoreCase)).Any();
}

In the code where the settings are enumerated call the IsExcluded method to determine if the setting should be generated or not.

<#+
   foreach (KeyValueConfigurationElement setting in GetAppSettings()) { 
      if (IsExcluded(setting.Key))
         continue;

      var type = GetSettingType(setting.Value); #>

Now we can add some default entries for some commonly used frameworks.  Notice that we can chain calls together because the public method returns the template instance.

<# var template = new AppSettingsTemplate();
    
// Exclude any settings that are not needed, use * for a wildcard
template.ExcludeSetting("aspnet:*")
        .ExcludeSetting("webpages:*");
  
template.Render(); 
#>

Changing a Setting’s Type

By default the template will determine the type of a setting based upon its value.  But sometimes the value doesn’t properly indicate the type (i.e. 123 for a phone extension) or a specific type is needed (i.e. a long even if the value is 123).  For this we will allow the developer to specify the type to use for a setting rather than relying on the default behavior.

Add a public method to the nested template called OverrideSettingType.  It should accept a setting name and a type.  Store the setting name in a private dictionary for later.

public AppSettingsTemplate OverrideSettingType ( string settingName, Type settingType )
{
   m_settingsTypes[settingName] = settingType;

   return this;
}

private Dictionary<string, Type> m_settingsTypes = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

Modify the GetSettingType method to check the override dictionary before trying to determine the type of the setting based upon its value.  Note that the setting name needs to be added to the parameter list so this check can be done.  Update the calling code accordingly.

private Type GetSettingType ( string name, string value )
{
   Type explicitType;

   if (m_settingsTypes.TryGetValue(name, out explicitType) && explicitType != null)
      return explicitType;

   //Use heuristics
}

Here is how it might look in the main template.

//Override a setting's type
template.OverrideSettingType("IntValue", typeof(long));

Using a Different Project’s Configuration

Normally you would agree that a project should only rely on its own settings but there are a few cases where this doesn’t make sense.  One example is a WCF service host.  The host generally consists of only the configuration file for the service and the .svc file.  The actual implementation is stored in a separate project so the service can be re-hosted with ease.  In this case we would want to add the template to the implementation project but have it rely on the host project’s configuration file.  For this customization we will allow the developer to specify a project other than the active project from which to read the configuration file.

Add a new public property to the nested template called ConfigurationProject.  If it is set then it identifies the project containing the config file that will be read.

public string ConfigurationProject { get; set; }

The hard part is changing the nested template to use a different project.  Fortunately we already have almost all the code.  In an earlier article we wrote a function to get the settings given a ProjectItem.  Up until now we’ve been using ActiveProject.  To use a different project we need only find the project in the solution and then pass it to FindConfigProjectItem instead.

private KeyValueConfigurationCollection GetAppSettings ()
{
    var project = ActiveProject;

    //If a custom configuration file is specified then find the project
    if (!String.IsNullOrEmpty(ConfigurationProject))
        project = FindProject(ConfigurationProject);

    //Get the config file
    var configItem = FindConfigProjectItem(project);

    return GetAppSettings(configItem);
}

public EnvDTE.Project FindProject ( string projectName )
{
    if (String.IsNullOrEmpty(projectName))
        return null;

    foreach (EnvDTE.Project project in DteInstance.Solution.Projects)
    {
        if (String.Compare(project.Name, projectName, true) == 0)
            return project;
    };

    return null;
}

Here is how a developer would specify it in the main template.

//Change the project to use
template.ConfigurationProject = "";

Validation

Up until now we haven’t really needed any validation.  But as we allow developers to customize the template it makes sense to ensure the template can be generated.  If something goes wrong the template host generally spews out useless messages.  Before we finish up let’s add some basic validation to the template.  With T4 you can generate either warnings or errors.  With the T4 Toolbox these can be generated using the Warning and Error methods.

For this template the following validation seems appropriate.

  1. If a custom configuration project is set then the project must exist otherwise it is an error.
  2. If no config file can be found then this would be a warning.
  3. If the config file has no app settings then this would be a warning.

I personally like to do validation before the template runs so we’ll add a Validate method that wraps the validation logic.  T4 Toolbox already defines a base validation method so we can override it.  As an optimization we will store off the found settings so we do not have to search for them again when we render.

Add a Validate method to the nested template that handles all the validation rules.

private KeyValueConfigurationCollection AppSettingsInfo { get; set; }

protected override void Validate ()
{
    base.Validate();
    KeyValueConfigurationCollection info = null;

    var project = String.IsNullOrEmpty(ConfigurationProject) ? ActiveProject : FindProject(ConfigurationProject);
    if (project != null)
    {
        var configItem = FindConfigProjectItem(project);
        if (configItem != null)
        {
            info = GetAppSettings(configItem);
            if (info == null || info.Count == 0)
                Warning("No appSetting entries found.");
        } else
            Warning("Unable to locate configuration file.");
    } else
        Error("Unable to locate configuration project '{0}'.", ConfigurationProject);

    AppSettingsInfo = info ?? new KeyValueConfigurationCollection();
}

The original GetAppSettings method can go away now and inside the template text we can replace the call to it with AppSettingsInfo.  When the template runs the validation method will execute and we’ll get pretty errors and warnings if appropriate.
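Putting the two changes together (AppSettingsInfo in place of the GetAppSettings call, plus the extra name parameter on GetSettingType), the loop from earlier ends up looking something like this.

<#+
   foreach (KeyValueConfigurationElement setting in AppSettingsInfo) { 
      if (IsExcluded(setting.Key))
         continue;

      var type = GetSettingType(setting.Key, setting.Value); #>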

Next Steps

At this point we have a template that a developer can add to their project and customize to meet their needs.  The template relies on a nested template to do the actual heavy lifting of generating the final file.  There are some downsides to this approach including the fact that the project needs to include multiple files even though only 1 is added to the project.  Another is the fact that even though we have nested the core functionality we would still need to replicate the code if we wanted to create a different template.  Even worse is the fact that if we update the core template (add enhancements or fix bugs) then every project that relies on the template would need to get an updated version.  In the next article we’ll package the template up into a reusable component that is easy to deploy and update and will have minimal impact on the end developer.

Posted Sun, Apr 7 2013 by Michael Taylor

NuGet with Active Directory Support

In a previous article I discussed how to host a private NuGet repository.  If you aren't familiar with NuGet then please refer to that article.  If you're hosting a private gallery then chances are you're on a network (probably an Active Directory one).  One downside to hosting a private NuGet gallery is that it is tied to Forms authentication.  For a public repository this makes sense but for a private one it would be better to have NuGet use Windows authentication.  This article will discuss the changes that have to be made to the NuGet source to support Windows authentication.  The changes are scattered but minor. 

NuGet Authentication

Official NuGet uses Forms authentication.  When a user is browsing the site or downloading packages then they have not logged in yet.  When a user attempts to do something like manage a package or upload one then NuGet prompts for login.  It does this using the standard Authorize attribute on the appropriate controller actions.  If the user does not have an account yet then they are prompted to register.  The registration process associates the user name, password and email address.  If configured then the user must confirm their email address before their account is recognized.  If the user owns packages then emails sent from NuGet will go to their registered email address.

NuGet defines a single role - Admin.  A user is either an admin or they are not.  There is no UI for assigning roles.  Instead a database call needs to occur.  A normal user can manage their own packages and send emails to other owners.  An admin can manage any packages and assign ownership.

NuGet View Modes

Out of the box it would seem that adding Windows authentication to NuGet would be a simple matter of changing the authentication mode but unfortunately it isn't that easy.  NuGet can be accessed several different ways.  Some support Windows auth and some don't.

  • Website viewing - In this mode a user should be able to view the packages and even download them without having to be authenticated.  In terms of Windows auth any user should probably have permissions.
  • Website uploading - Only users who are authenticated should be able to upload packages. 
  • Visual Studio - VS will connect to NuGet to get the list of packages and to download them using the web API.  This shouldn't require any special privileges.  More importantly VS will not respond to a challenge/response from the server so Windows auth cannot be used.

Because of the different ways of accessing NuGet it seems that the best approach will be to allow anybody to access the site (assuming they have the right Windows group membership).  The Admin role can remain and be used as needed.  A new role, Authors, is needed to control who can upload packages.  This role is only used when accessing the website for uploads.

Before going any further you need to get the NuGet source and ensure it compiles correctly.  I walked through that process in the previous article.  We need to make some changes to the NuGet files and we need to add some additional files.  I'll walk through the process step by step.

Preparing NuGet

Before continuing further be sure that Windows authentication has been installed for IIS.  Also ensure that the app pool that is hosting NuGet has read/write permissions to the App_Data directory.  All other IIS changes will be handled by the web.config file.  If you are using a version of IIS prior to Server 2008 then you might need to make some of these changes to IIS directly.

We will be adding new files to NuGet and making changes to existing ones so we want to try to keep them separated.  In the NuGet website add a new folder to store the new files (i.e. Custom).  Whenever we add new files they will go here.

Replacing User Service

NuGet uses IUserService to create and manage users.  It is clear that IUserService was written based upon a Forms auth approach because it doesn't really encapsulate anything other than Forms auth.  For the most part we just need to replace a few methods that don't make sense in an AD environment.  To do that we'll create a new type called ADUserService and have NuGet use it instead.  Because we aren't providing a full implementation we'll derive from the existing type.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Security;

namespace NuGetGallery
{
    public class ADUserService : UserService
    {
        public ADUserService ( GallerySetting settings,
                               ICryptographyService cryptoService,
                               IEntityRepository<User> userRepository )
            : base(settings, cryptoService, userRepository)
        {
        }
    }    
}

When a user registers with NuGet they must enter their user name, password and email.  Once the user has entered the information it is stored in the database by calling Create.  The user is associated with packages through the user table in the database.  Irrespective of authentication we need to ensure that the entry is created.  For AD we don't need to do anything different except for the confirmation email and password.  By default a confirmation email is sent that the user must respond to.  For AD we don't need to send a confirmation email.  There is a setting in NuGet to disable confirmation emails but we'll go ahead and ensure that the email is confirmed when the user is created.

public override User Create ( string username, string password, string emailAddress )
{
    var user = base.Create(username, password, emailAddress);

    //Confirm the email address
    ConfirmEmailAddress(user, user.EmailConfirmationToken);

    return user;
}

Whenever NuGet needs user information it will find the user by calling one of several find methods.  A couple of these require a password.  Since we're using AD we don't need the password (in fact we won't have it).  When we create the user we'll use a dummy password.  When searching for a user we will use only the user's name and ignore any password.

public override User FindByUsernameAndPassword ( string username, string password )
{
    //Just search by user name
    return base.FindByUsername(username);
}

public override User FindByUsernameOrEmailAddressAndPassword ( string usernameOrEmail, string password )
{
    //Ignore the password and just search by user name
    return base.FindByUsername(usernameOrEmail);
}

That completes the change for the user service.  Now we just need to register it by modifying ContainerBindings.Load (App_Start\ContainerBindings.cs).

//MODIFIED: Use ADUserService
Bind<IUserService>()
    .To<ADUserService>()
    .InRequestScope();

 

Updating the User View

The view UserDisplay.cshtml is responsible for displaying the log on, register and log off buttons.  None of these make sense with Windows auth so modify the view to remove them.  I went ahead and displayed the full domain name of the user.  If you wanted to get fancy you could either display only the user name or even query AD to get the user's display name.

<div class="user-display">
   <span class="welcome">@User.Identity.Name</span>
</div>
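If you do want the fancier option, a hypothetical helper along these lines could look up the user's display name in AD (this is not part of the NuGet source and requires a reference to System.DirectoryServices.AccountManagement):

using System;
using System.DirectoryServices.AccountManagement;
using System.Linq;

//Hypothetical helper for showing the user's AD display name instead of DOMAIN\user
public static class UserDisplayHelper
{
    public static string GetDisplayName ( string domainQualifiedName )
    {
        //Strip the domain portion (DOMAIN\user -> user)
        var userName = domainQualifiedName.Split('\\').Last();

        using (var context = new PrincipalContext(ContextType.Domain))
        {
            var user = UserPrincipal.FindByIdentity(context, userName);
            return (user != null && !String.IsNullOrEmpty(user.DisplayName)) ? user.DisplayName : domainQualifiedName;
        }
    }
}

The view could then call @UserDisplayHelper.GetDisplayName(User.Identity.Name) instead of @User.Identity.Name.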

 

Adding Authors Role

By default a user going to NuGet won't require authentication so the user service won't be called.  However when a user does anything that requires authentication, such as trying to upload a package, they will get redirected to the login page.  This happens because NuGet uses the Authorize attribute on the controller action.  Since we'll be using Windows auth the user will already be authenticated, so we could simply remove the attribute.  But at some point a user has to be added to the NuGet database in order to upload packages.  We could do that whenever a user comes to the site but presumably most users don't need an account.  Instead it is probably better to only create the NuGet account when the user tries to do something that requires one (i.e. an author action).  So instead of removing the authorization attribute we'll create a new action filter attribute that verifies the user is in the Authors role.  As part of this check the filter will ensure that the user has an account in the NuGet database.  Add a new action filter called MustBeAuthorAttribute.

public class MustBeAuthorAttribute : AuthorizeAttribute
{
    public MustBeAuthorAttribute ()
    {
        this.Roles = "Authors";
    }

    public string ViewUrl
    {
        get { return "~/Users/Account/AccessDenied"; }
    }

    public override void OnAuthorization ( AuthorizationContext filterContext )
    {
        base.OnAuthorization(filterContext);

        if (filterContext.Result is HttpUnauthorizedResult)
        {
            filterContext.Result = new RedirectResult(ViewUrl);
            return;
        };

        EnsureUserIsRegistered(filterContext.HttpContext.User.Identity.Name);
    }

    private void EnsureUserIsRegistered ( string username )
    {
        var svc = GetUserService();
        var user = svc.FindByUsername(username);
        if (user == null)
        {
            //Get the user name without the domain
            var member = Membership.GetUser(username.Split('\\').Last());

            //Create the user in NuGet
            svc.Create(username, "", member.Email);
        };
    }

    private IUserService GetUserService()
    {
        return (IUserService)Container.Kernel.GetService(typeof(IUserService));
    }
}

In order to be authorized the user must be in the Authors role (we'll set this up later).  If the user is authorized then we confirm they have a user account with NuGet by looking them up.  If we don't find them then we call the Membership API to get their email address (we'll see how this works later) and then create their user account.  Notice the password is empty because we don't need it.  The final step is to replace all instances of [Authorize] with the new attribute.
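For example, a package upload action would go from [Authorize] to the new attribute (the action shown here is illustrative rather than the exact NuGet controller code):

//MODIFIED: Verify the Authors role (and auto-register the user) instead of just requiring authentication
[MustBeAuthor]
public virtual ActionResult UploadPackage ()
{
    ...
}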

If the user is not authenticated then they are redirected to a new access denied page.

@model SignInRequest
@{
   ViewBag.Title = "Access Denied";
}

<h1>Access Denied</h1>

<p> You have insufficient privileges to access this page.</p>

 

public partial class AuthenticationController : Controller
{
    public virtual ActionResult AccessDenied ()
    {
        return View();
    }
    ...
}

 

Switching to AD Authentication

It's time to modify the config file to use AD authentication instead of Forms authentication.  .NET already ships with a provider that will use AD and implement the functionality needed by the Membership API.  All we need to do is configure it.  Replace the existing authentication element with this.

<authentication mode="Windows" />

Add an authorization element that allows all users access to the site.  You could technically limit access to specific users (such as developers) but that might cause problems with Visual Studio.  Also note that the config file already contains an element under a location element.  Do not change that element!!

<authorization>
   <allow users="*" />
</authorization>

Add the necessary role and membership providers to use AD authentication.

<roleManager enabled="true" cacheRolesInCookie="false" defaultProvider="RoleManagerAzManProvider">
    <providers>
        <clear />
        <add name="RoleManagerAzManProvider"
             type="System.Web.Security.AuthorizationStoreRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="LocalPolicyStore"
             applicationName="NuGet" />
    </providers>
</roleManager>
<membership defaultProvider="ActiveDirectoryProvider">
    <providers>
        <add name="ActiveDirectoryProvider"
             type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="ActiveDirectoryConnection"
             connectionProtection="Secure"
             enablePasswordReset="false"
             requiresQuestionAndAnswer="false"
             enableSearchMethods="true"
             attributeMapUsername="sAMAccountName" />
    </providers>
</membership>

This hooks up AD to the Membership API allowing us to query for basic AD information such as user name and email address.  The above configuration relies on AzMan which we'll discuss next. 
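As a quick sanity check that the providers are wired up you can query them directly, e.g. from a scratch page or controller action (the user and domain names below are purely illustrative):

using System.Web.Security;

//Query the ActiveDirectoryMembershipProvider for basic user information
var member = Membership.GetUser("jsmith");
var email = (member != null) ? member.Email : null;

//Query the AzMan-backed role provider for application role membership
bool isAuthor = Roles.IsUserInRole(@"MYDOMAIN\jsmith", "Authors");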

Note: Ensure that the web site has Windows authentication enabled but not Anonymous authentication, otherwise IIS will not serve the site properly.

Authorization Manager

Since we're using application roles and AD understands Windows groups we need a translation layer.  Fortunately newer versions of Windows have us covered with Authorization Manager (AzMan).  AzMan allows us to define our application roles and store them in one of several different data stores.  IT administrators can then configure which AD users and groups are a member of each role.  For NuGet I'm going to use AzMan backed by an XML file but you could just as easily use a SQL database.  Getting the data store set up is pretty straightforward.

  1. Start Microsoft Management Console (mmc.exe).
  2. Add or browse to Authorization Manager.
  3. Create a new data store (i.e. AzManNuGetStore.xml).  Note: You must be in Developer Mode (menu Action\Options) to set up the store but you can switch back to Administrator Mode after the roles have been defined.
  4. Under the store add a new application called NuGet.  This value must match the role manager's applicationName entry in the config file.
  5. Under Definitions\Role Definitions define the application roles - Authors, Admins.
  6. Save the data store and optionally switch it back to Administrator Mode.

At this point the application specific AzMan settings are completed but no AD users or groups are associated with the roles.  To assign an AD user or group to an application role open the store in MMC and go to the Role Assignments section.  In general dev leads and managers will likely be Admins while the developer groups will be Authors.

Place the store file in the root of the web site so it can be found.  The app pool identity will need read access to this file.  Now the membership provider needs to be pointed to the file.  To do that we'll add two entries to the connectionStrings section.  The first entry (LocalPolicyStore) provides the path to the AzMan store.  This must match the role manager's connectionStringName entry.  The second entry will be the AD connection information.  This must match the membership provider's connectionStringName entry.

<add name="LocalPolicyStoreconnectionString="msxml:{path}/AzManNuGetStore.xml" />
<add name="ActiveDirectoryConnectionconnectionString="LDAP://{domain}/DC={subdomain},DC={subdomain}" />

Note that the path for the AzMan store uses URL slashes (/) between directory names.  That's it.  Now ASP.NET is hooked up to AD for authentication and AzMan is providing the roles based upon the AD groups that are configured.

Additional Enhancements

As I mentioned in the earlier article there are some things about NuGet that don't work well in a private environment, at least for me.  Therefore I went ahead and updated the code for them as well.  The next few paragraphs discuss some of these enhancements.  Refer to the original article for more information.

I wanted to make the option of using HTTPS configurable so the code didn't have to keep changing.  Therefore I added a new gallery setting (Gallery.RequireHttps) to allow it to be toggled on and off.  It is set in the config file.  I updated the RequireRemoteHttpsAttribute to read the app setting and do nothing if it is disabled.  I didn't bother caching the result but you could if you wanted to. 

public void OnAuthorization(AuthorizationContext filterContext)
{
    //MODIFIED: Don't require HTTPS if it is disabled
    var requireHttps = Configuration.ReadAppSettings("requireHttps", x => (x != null) ? Boolean.Parse(x) : false);
    if (!requireHttps)
        return;

Email, by default, uses SSL.  I wanted to be able to turn this off as well.  Most of the email settings are stored in the database so that would be the logical place to change it, but NuGet uses EF and updating the models would be too much work, so I added yet another gallery setting (Gallery.EnableSmtpSsl) to control whether or not to use SSL for email.  To use this setting I modified the ContainerBindings.Load (App_Start\ContainerBindings.cs) method to read the setting when it creates the mailSenderThunk object.  Here's the relevant code for reference.

var mailSenderThunk = new Lazy<IMailSender>(
    () =>
        {
            var settings = Kernel.Get<GallerySetting>();
            if (settings.UseSmtp)
            {
                var mailSenderConfiguration = new MailSenderConfiguration
                    {
                        DeliveryMethod = SmtpDeliveryMethod.Network,
                        Host = settings.SmtpHost,
                        Port = settings.SmtpPort,

                        //MODIFIED: Pull from the settings
                        EnableSsl = Configuration.ReadAppSettings("enableSmtpSsl", x => (x != null) ? Boolean.Parse(x) : false)
                    };

The newer versions of NuGet display statistics on the home page.  The stats are handled via stats.js.  Unfortunately this file has a bug in the path that it uses to get the stats.  If the site is hosted as a full website then it will work but if it is an application under a website then the path is wrong.  Since the stats are handled in a Javascript file the change isn't as simple as I'd like.  After talking with one of my Javascript experts we came up with this.

  1. Add a new Javascript variable to the shared layout page that stores the root path of the site.
  2. Modify stats.js to use the Javascript variable rather than using a rooted directory.

<script type="text/javascript">
var rootUrl = "@Url.Content("~")";
</script>

//Inside stats.js: getStats
$.get(rootUrl + 'stats/totals', function(data) {

That should resolve any issues with the stats not displaying on the main page.  The site should now be configured to use AD authentication.  You should confirm the website is working properly as well as being able to view packages from Visual Studio and during builds.

What's Not Covered

Some areas of NuGet have not been covered and may need to be changed if you use them.

  • Uploading packages from the web API might work since the web API doesn't require any special privileges.  The user credentials that are passed should authenticate against the custom user service but some tweaks may be necessary.
  • Accessing any of the existing user APIs will probably not fail but won't work as expected.  I originally started removing all the code but NuGet uses T4MVC which puts way too many hooks into the system for my taste.  In the end I left the existing API in place so I wouldn't have to touch every file.
  • Non-Active Directory networks won't work directly since the user information is not available.  Specifically the email address would need to be obtained through some other means.
Using T4 to Create an AppSettings Wrapper, Part 4

In the previous article we finished the basic appsettings template that allowed us to generate a strongly typed class from the settings in a configuration file.  We now want to expand the template so it can be customized depending upon the needs of the project, which will be the topic of a later article.  Before we get there though it is time to refactor the code to make it more reusable and easier to maintain.  That is the focus of this article.

T4 Toolbox

Under the hood a template is nothing more than a compiled class that implements a method to render the template.  Up to this point we have been using the default class because our template wasn't that complicated but we want to be able to customize the template so it is time to move away from the default settings.  The T4 Toolbox is a handy set of extensions to T4 that allows us to work with the template as a class rather than as a text file.  By switching to a class approach we can take advantage of all the features we are used to in normal code but still ultimately generate a template.  There is nothing we are going to implement that you couldn't do by hand or with another library but the T4 Toolbox makes things easier so we'll use it.  You'll need to download and install it before moving on.

To make the best use of T4 Toolbox we need to move our template into a class that derives from Template.  We can then implement the TransformText method to generate our template.  First ensure that the T4 Toolbox is installed through Extension Manager.  Normally if you're creating a new template you will add a new template file using T4 Toolbox\Template but since we already have one we'll just modify our existing template file.  

Including Files

T4 allows you to include text files inside templates using the include directive.  It works similarly to C++ #includes in that the contents of the file are inserted into the template file wherever the directive resides.  This is useful for sharing common functionality across multiple templates.  The included file must be in the same directory as the template, a directory relative to the template or a path that T4 will search.

Open the template file and add an include for the T4 Toolbox.

<#@ include file="T4Toolbox.tt" #>

This include brings in the infrastructure needed by T4 Toolbox.  I normally place it after the assembly directives but before the import directives.  A word of warning: T4 complains if it finds multiple assembly or import directives for the same string, so try to keep these directives in a single file.

Creating the Template Class

Now it is time to move the template into a Template class. Right after the import directives start a new class block.  Define a new class that derives from CSharpTemplate.  The class name is not that relevant.

...
<#@ output extension=".generated.cs" #>
<#+ 
public class AppSettingsTemplate : CSharpTemplate
{
}
#>

Move any methods from the original template inside the class body.  They should probably be private since they are used only for generating the template.  The last method in the class should be the override for TransformText.  You can generate the template body by making method calls but the easier approach is to simply end the class block and start the template text.  After the template text will be another class block that finishes the TransformText method.  The call to GenerationEnvironment is what returns the template text from the method.  The template text outside of the class block is automatically added to the returned text.

    public override string TransformText ()
    {
        var className = MakeIdentifier(Path.GetFileNameWithoutExtension(Host.TemplateFile));
#>
/*
 * This file is auto-generated from a config file.
 * Do not modify this file.  Modifications will be lost when the file is regenerated.
 */

...
<#+
        return this.GenerationEnvironment.ToString();
    }
}
#>

Remember that indentation and blank lines can be confusing in templates so it might be necessary to play around with the template to get them right.  The goal isn't to make the template be styled correctly but rather the generated code.

Notice that the className variable that was in a statement block before has been moved inside the method.  Variables defined in the method are accessible to the template text.  Also note that any statement blocks used inside the template text need to be switched to class blocks because a statement block cannot follow a class block.

Removing Unneeded Code

When we were writing the template in previous articles we defined a few helper methods.  Some of these can go away because T4 Toolbox provides them for us.

MakeIdentifier() was used to create a valid identifier given a string.  It can be replaced by a call to PropertyName() if you want a Pascal cased identifier or FieldName() if you want a camel cased identifier.  Replace the calls and remove the method.

GetNamespaceForTemplate() was used to determine the namespace to use for the type.  It can be replaced by a call to the property DefaultNamespace which should provide the same information.  Replace the calls and remove the method.

Host was used to access the host information.  T4 Toolbox wraps the host to better provide isolation from the underlying implementation.  Replace references of Host with TransformationContext.Current.Host.
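Putting those three replacements together, the start of TransformText would now look something like this (a sketch only; your variable names may differ):

    public override string TransformText ()
    {
        //PropertyName replaces MakeIdentifier and TransformationContext.Current.Host replaces Host
        var host = TransformationContext.Current.Host;
        var className = PropertyName(Path.GetFileNameWithoutExtension(host.TemplateFile));

        //DefaultNamespace replaces GetNamespaceForTemplate
        var namespaceName = DefaultNamespace;
        ...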

Nested Templates

If you run the template at this point you'll see that the generated file is empty.  What's going on?  If you look at your template file you'll realize that all you've done is defined a template class.  There isn't anything that is actually causing the template to run.  Time to fix that.

A nested template is a template inside a template.  It is one of the key techniques that we can use to both encapsulate a template and make it configurable.  The issue with the current template is that someone would need to know how the template works before they could customize it.  Additionally they would need to change the template file to make any customizations.  If we later release a new version of our template then they would either need to manually merge the changes in or redo their customizations.  A nested template allows us to expose a simple template that exposes the customizable parts (generally through properties) while calling the real template to do the actual work.  This allows us to separate the real template from the customizable parts.  Let's break our template up into the customizable part and the template part.

I like the customizable template to bear the name that will be shown in the Add New Item dialog and the real template to match the template class name, so rename the existing template file to AppSettingsTemplate.tt.  Then create a new template file called AppSettings.tt which will be the customizable part.

The customizable template represents what will get generated so it needs the template directive, the output directive and an include of the real template.

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ output extension=".generated.cs" #>
<#@ include file="AppSettingsTemplate.tt" #>
<#
#>

The real template doesn't need the template or output directives anymore.  In fact it would be an error to leave them in.  Instead it just defines the assemblies and namespaces needed to generate the template.  Additionally we need to tell VS not to treat it as a template file anymore so view the properties of the file and remove the custom tool setting.  A caveat to this is that VS will not automatically regenerate the output if you modify this template anymore.  If you make any changes to it you will need to regenerate it manually.
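For reference, the top of AppSettingsTemplate.tt is now reduced to the directives the template class itself needs, something along these lines (the exact assembly and import directives depend on what the helper methods from the earlier parts use, so treat this as a sketch):

<#@ assembly name="System.Configuration" #>
<#@ assembly name="EnvDTE" #>
<#@ include file="T4Toolbox.tt" #>
<#@ import namespace="System.Configuration" #>
<#@ import namespace="EnvDTE" #>
<#+ 
public class AppSettingsTemplate : CSharpTemplate
{
   ...
}
#>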

The customizable template needs to create an instance of the real template, call any customization members that it needs to and then render the template.  Since we have no customization yet it would boil down to this.

<#
   var template = new AppSettingsTemplate();

   //Do customization here

   template.Render();
#>

We've now set the stage to allow our template to be customized.  In the next article we'll expose customization points to make our template truly useful in real world projects.

Posted Sun, Mar 24 2013 by Michael Taylor | 18 comment(s)