My Blog is Moving


Hi folks – It’s that time again. Time to move my blog. Currently, my blog is hosted on the MSMVPs site. It’s been a good home overall for my blog, but lately I’ve decided that I really want to take blogging more seriously, and the only way to do that is to do it right. The previous host seemed to have some stability issues, but I was happy there. Now my blog is on my own domain. For now I will continue to cross-post onto my old blog, but if you have RSS feeds or bookmarks pointing at my blog – please update them!

Posted by vcsjones | with no comments

Writing a Managed Internet Explorer Extension: Part 5 – Working with the DOM

Internet Explorer is known for having a quirky rendering engine. Most web developers are familiar with the concept of a rendering engine. Most know that Firefox uses Gecko, and Chrome / Safari use WebKit. WebKit itself has an interesting history, originally forked from the KHTML project by Apple. However, even when pressed, not many can name Internet Explorer’s engine. Most browsers also indicate their rendering engine in their User Agent. For example, my current Chrome one is “Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7”. Internet Explorer’s was simply referred to as “Internet Explorer”. The actual name of IE’s rendering engine is Trident. It’s been part of Internet Explorer since 4.0 – it’s just deeply integrated into Internet Explorer. At its heart, Trident lives in the mshtml.dll and shdocvw.dll libraries in the system32 directory. Earlier in this series, you referenced these libraries as COM type libraries.

Accessing IE’s DOM from a BHO is in some regards very similar to doing it from JavaScript. It has the oh-so-familiar getElementById, and the rest of the gang. You’re also constrained, like JavaScript, by the minimum version of IE you plan to support with your BHO. If your BHO is going to be commercial, it isn’t unreasonable to still support IE6. In many respects, you will be using OLE Automation to manipulate the DOM.

As in JavaScript, it is desirable to know which version of IE you are working against. Many JavaScript developers will tell you it’s poor practice to code against versions of a browser; rather, you should test whether a feature is available in the browser. That keeps the JavaScript agnostic to the browser. However, we know we are only coding against IE. I have no strong recommendation one way or the other, but I’ll show you both. This is probably the simplest way to just get IE’s version:

var version = Process.GetCurrentProcess().MainModule.FileVersionInfo;

That provides a plethora of information about IE’s version. The ProductMajorPart will tell you if it's 6, 7, or 8. There are many other details in there – it can tell you if it’s a debug build, the service pack, etc. You may have surmised that if JavaScript can do it, then we can do it the same way JavaScript does, using the appVersion property. Before you start going crazy looking for it on the IWebBrowser2 interface though – I’ll tell you it’s not there. Nor is it on any of the HTMLDocument interfaces. It has its own special interface, called IOmNavigator. That interface is defined in mshtml.dll – so since you have already referenced that type library, you already have access to it – but how do you get an instance of it?

It isn’t difficult, but this is where the interface complexity has its disadvantages. IOmNavigator hangs off the window, and the IHTMLDocument2 interface can provide a path to the window.

    var document = (IHTMLDocument2) _webBrowser2;
    var appVersion = document.parentWindow.navigator.appVersion;

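Going the FileVersionInfo route from earlier, a version check might look like the following sketch. One assumption of mine for illustration: the BHO is loaded in-process, so the current process is iexplore.exe and its main module carries IE’s version.

```csharp
using System.Diagnostics;

// Inside a BHO the current process is iexplore.exe, so the main module's
// version information is Internet Explorer's own version.
FileVersionInfo ieVersion = Process.GetCurrentProcess().MainModule.FileVersionInfo;
if (ieVersion.ProductMajorPart >= 7)
{
    // IE 7 or later.
}
```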
However, if we wanted to do the right thing and test for feature availability rather than relying on version numbers, how do we do that?

The most straightforward approach is determining which interfaces an object supports. Most of your DOM work is going to be done through the Document property off of IWebBrowser2. This is of type HTMLDocument, but there are several different interfaces available: every time a change was made to the Document API, a new interface was created to maintain backward compatibility. (Remember, COM uses interface querying, so it makes more sense in that respect.)

In .NET we can do something similar using the “is” keyword.

    private void _webBrowser2Events_DocumentComplete(object pdisp, ref object url)
    {
        if (!ReferenceEquals(pdisp, _pUnkSite))
        {
            return;
        }
        if (_pUnkSite.Document is IHTMLDocument5)
        {
            //IHTMLDocument5 was introduced in IE6, so we are at least IE6
        }
    }


There are several IHTMLDocumentX interfaces, currently up to IHTMLDocument7, which is part of the IE9 Beta.

WAIT! Where is IHTMLDocument6?

The MSDN Documentation for IHTMLDocument6 says it’s there for IE 8. Yet there is a good chance you won’t see it even if you have IE 8 installed.

This is a downside of the automatically generated COM wrapper. If you look at the reference named MSHTML and view its properties, you’ll notice that its Path is actually in the GAC, something like this: C:\Windows\assembly\GAC\Microsoft.mshtml\7.0.3300.0__b03f5f7f11d50a3a\Microsoft.mshtml.dll

Microsoft shipped a GAC’ed version of this COM wrapper, which is used within the .NET Framework itself. However, the one in the GAC is sorely out of date, and we can’t take that assembly out of the GAC without risking a lot of problems.

What to do?

We are going to manually generate a COM wrapper around MSHTML without the Add Reference dialog. Pop open the Visual Studio 2010 Command Prompt. The tool we will be using, tlbimp, is part of the .NET Framework SDK.

The resulting command should look something like this:

tlbimp.exe /out:mshtml.dll /keyfile:key.snk /machine:X86 mshtml.tlb

This explicitly generates a new COM wrapper and writes it out to mshtml.dll in the current working directory. The /keyfile switch is important – the assembly should be strong-name signed, and you should already have a strong name key since one is required for regasm. mshtml.tlb is a type library found in your system32 directory. The newly generated assembly will contain the IHTMLDocument6 interface, as we expect. If you have the IE 9 beta installed, you will see IHTMLDocument7 as well. NOTE: This is a pretty hefty type library. It might take a few minutes to generate the COM wrapper. Patience.
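Once the regenerated wrapper is referenced in place of the GAC copy, the newer interface can be probed the same way as before – a sketch, reusing the `_pUnkSite` field from the earlier parts of this series:

```csharp
// Sketch: with the freshly generated mshtml.dll referenced, the IE 8-era
// interface becomes visible and can be tested with a simple "is" check.
if (_pUnkSite.Document is IHTMLDocument6)
{
    // IHTMLDocument6 was introduced with IE 8, so we are at least IE 8.
}
```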

If you are happy just being able to access the DOM using IE 6’s interfaces, then I wouldn’t bother with this. There are advantages to using the one in the GAC (smaller distributable, etc).

In summary, you have two different means of detecting a browser’s features: checking the version of the browser, or testing whether an interface is implemented. I would personally recommend testing against interfaces, because there is always a tiny chance that Microsoft may remove functionality in a future version. That’s doubtful for the IHTMLDocument interfaces, but for other things it’s a reality.

Now that we have a way of knowing which APIs are at our disposal, we can manipulate the DOM however we see fit. There isn’t much to explain there – if you think it’s hard, it’s probably because it is. It’s no different than trying to do it in JavaScript.

This is an extremely useful page when trying to figure out which interface you should be using based on a markup tag:

Posted by vcsjones | 1 comment(s)

Writing a Managed Internet Explorer Extension: Part 4–Debugging

Picking up where we left off with Writing a Managed Internet Explorer Extension, debugging is where I wanted to go next. I promise I’ll get to more “feature”-level stuff, but when stuff goes wrong – and it will – you need to know how to use your toolset. .NET developers typically write some code and press F5 to see it work. When an exception occurs, the debugger, already attached, steps up to the plate and tells you everything that is wrong. When you write an Internet Explorer extension it isn’t as simple as that. You need to attach the debugger to an existing process, and even then it won’t treat you like you’re used to. Notably, breakpoints aren’t going to launch the debugger until the debugger is already attached. So we have a few options, and some tricks up our sleeves, to get the debugger to aid us.

Explicit “Breakpoints”

The simplest way to emulate a breakpoint is to put the following code in there:

    Debugger.Break();

Think of that as a breakpoint that is baked into your code. One thing to note, if you’ve never used it before, is that the Break method has a [Conditional(“DEBUG”)] attribute on it – so it’ll only work if you are compiling in Debug. When this code gets hit, a fault will occur, and IE will ask you if you want to close, or attach a debugger. Now is your opportunity to say “I want a debugger!” and attach.

It’ll look like just a normal Internet Explorer crash, but if you probe at the details, “Problem Signature 09” will tell you if it’s a break. When working on a BHO, check this every time IE “crashes” – it’s very easy to forget that these are in there. It’s also important that you compile in Release mode when releasing to ensure none of these sneak out into the wild. The user isn’t going to look at the details and say, “Oh it’s just a breakpoint. I’ll attach and hit ‘continue’ and everything will be OK”. Once that’s done, choose Visual Studio as your debugger of choice (more on that later) and you should feel close to home.


This is by far one of the easiest ways to attach a debugger. The problem is that it requires a code change to get working, meaning you need to change the code, close all instances of IE, drop in the new DLL, restart Internet Explorer, and get it back into the state it was in. A suggestion would be to attach in SetSite when the site isn't null. (That’s when the BHO is starting up. Refresher here.) That way, your debugger is always attached throughout the lifetime of the BHO. The disadvantage is that it gets intrusive if you like IE as just a browser. You can always disable the extension or run IE in Safe Mode when you want to use it as an actual browser. If you take this approach, I recommend using Debugger.Launch(). I'll leave you to the MSDN documents for the details, but Launch won’t fault the application – it skips straight to the “Which debugger do you want to use?” dialog.
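That approach looks roughly like the sketch below, against the SetSite implementation from the earlier parts of this series (trimmed down to the relevant part – the rest of your normal SetSite wiring is assumed):

```csharp
public void SetSite(object site)
{
    if (site != null)
    {
#if DEBUG
        // Prompts for a debugger immediately at BHO startup, with no fault dialog.
        System.Diagnostics.Debugger.Launch();
#endif
        // ... the rest of the usual SetSite wiring goes here ...
    }
}
```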

Attaching to an Existing Process

You can just as well attach to an existing process like you normally would, but there is one drawback: “Which process do I want to attach to?” In IE 8 that can be a difficult question to answer. Each tab has its own process (a trend in new-generation browsers – IE was the first to support it). You will have a minimum of two IE processes: one for each tab, and one per actual instance of IE acting as a conductor for the other processes. Already, with just a single tab open, you have a 50/50 chance of getting it right if you guess. Visual Studio can give us some help though. If you pull up the Attach to Process dialog, you should see your two instances of IE. The “Type” column should give it away: we want the one with Managed code in it (after all, the title of this blog series is "Writing a Managed Extension”).

Once you’re attached, you can set regular breakpoints the normal way and they’ll get hit. Simple!


It isn’t quite as easy when you have multiple tabs open – sometimes that’s required when debugging a tricky issue. You have a few options here:

  1. When building a UI for your BHO (it’s a catch-22 – I know I haven’t gotten there yet), have it display the PID of the current process. That’s easy enough to do using the Process class. You can dumb it down a little more and write a log file to a safe location (IE is picky about where BHOs can write to the file system – refresher here).
  2. Attach to all tab processes. That can lead to a lot of confusion about which tab you are currently in, because if you have two tabs open and a breakpoint gets hit – which tab did it? The Threads window should help you there if that is the route you choose.
  3. Always debug with a single tab, if you can.
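For the first option, getting the PID is a one-liner; a sketch of the sort of thing you might drop into the BHO (where you surface it – UI or log file – is up to you):

```csharp
using System.Diagnostics;

// Surface the current tab's process ID so you know which iexplore.exe to attach to.
int pid = Process.GetCurrentProcess().Id;
Debug.WriteLine("BHO loaded in process " + pid);
```
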

Power Debugging

There is a trick you can do in Visual Studio to gain access to some additional debugging features. Hopefully this isn’t brand-new material to everyone, but for some I suspect it is. If you manually choose what you want to attach to, include Managed code and Native code. Attaching to Native is very helpful if you are trying to debug a COM marshaling issue, and plenty of other issues. We can use the SOS managed debugger extension to diagnose issues at the CLR level, and even poke at the CLR’s native memory and objects. Once attached in Visual Studio with Native code, get to a breakpoint or pause, and launch the Immediate Window. Type .load sos and hit enter. If it worked, you should get a message like “extension C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos.dll loaded”. There are many blogs out there about SOS (Son of Strike). I may blog on it later. Some useful commands are:

  • !help
    Pretty self-explanatory. Shows you some available commands.
  • !dso (!DumpStackObjects)
    Does a dump of all CLR objects that are in the stack.
  • !dumpheap
    Dumps the entire heap. Careful! – that really means the entire heap. A more generally useful form is to specify the -stat flag (!dumpheap -stat). This gives you general statistics about the heap: it will tell you which objects are in memory, and how many of them there are. This is a useful starting point if you believe there is a memory leak – it can at least tell you what you are leaking.
  • !soe (!StopOnException)
    Again, I feel that the name is pretty self-explanatory. The usage is a little tricky for beginners. A simple example would be, “I want to stop whenever there is an OutOfMemoryException”. This is useful for some types of exceptions; OOM is a good example. The problem with debugging an OOM in a purely managed way is that the CLR is dying by the time the exception happens, so you will get limited information by slapping a debugger on the managed code. For an OOM, a !dumpheap -stat is a good place to start. Other examples where this is useful are Access Violations (more common when doing marshaling or Platform Invoke), Stack Overflows, and Thread Aborts. The usage is !soe -create System.AccessViolationException.
  • !CLRStack
    This will dump the CLR's stack only; the native stack is left out. It is the same as the normal managed stacks you have looked at, but it has some cool parameters. The -p parameter will show you the values of the parameters that were passed into each frame. Often, that will be the address of what was passed in; use !DumpObject with the address to figure out exactly what it was. The -l flag dumps the locals in the frame, and -a dumps both parameters and locals.
  • !DumpStack
    This is like !CLRStack on steroids. It has the managed stack like !CLRStack, but also the native stack. It’s useful if you use Platform Invoke. This command is best used outside of Visual Studio, in something like WinDbg – more on that further down.

That’s just the tip of the iceberg though. The complete documentation on MSDN is here; it lists commands that !help doesn’t – so have a look. However, you’re not getting your money’s worth by doing this in Visual Studio. Visual Studio is great for managed debugging and using SOS, but when you want to use native commands such as !analyze, Visual Studio falls short. In addition, SOS is limited to the debugging functionality that Visual Studio provides it – you may often see a message like “Error during command: IDebugClient asked for unimplemented interface”, because Visual Studio doesn’t fully implement the features that SOS is asking for.

Other debuggers, like WinDbg, are significantly more powerful, at the cost of not being as simple to use. If there is demand for further details, I’ll post them. Using WinDbg is fairly similar: once you are attached, run .load C:\Windows\Microsoft.NET\Framework64\v4.0.30319\sos.dll – in WinDbg, you need to specify the full path. In addition, you will want to get symbols from Microsoft’s symbol server. There are symbols for mscoree and mscorwks, and having symbols for these can significantly help diagnose native-to-managed (and vice-versa) transitions.

Happy Debugging!

Posted by vcsjones | 1 comment(s)

A Really Super Light and Simple IoC Container for Windows Phone 7

I finally managed to get the Windows Phone 7 tools installed. I’m not going to vent about that anymore because I feel like I’ve done that enough already – and once they did install correctly, they’ve been very pleasurable to use. I started working on an application, and old habits die hard. I like Inversion of Control; it’s a major need for me simply because I’ve forced my mind to work like that. I’ve previously worked with Unity and Windsor. Unity has grown on me a lot, and I like it. However, neither of them seems to work on Windows Phone 7, or at least they weren’t designed to. So I wrote my own really super simple IoC container for Windows Phone 7. I wanted the following features, and nothing else (for now):

  1. Able to register types such that the arguments of their constructors are resolved
  2. Able to register an interface and type such that if the interface is resolved, the component is resolved and its constructor arguments are resolved
  3. Assume that there is either a default constructor or a single constructor that takes parameters
  4. Everything will have a singleton lifetime.

Super simple requirements. This will be running on phone hardware, so it needs to be lightweight, too. It fits in a single class of about 65 lines. I’ve split it into two different files using a partial class to keep the registration separate. I can imagine some additional features that other people might want, such as a transient lifestyle, using WeakReferences if you are registering a lot of things, etc.

For the source, see the Gist on GitHub.
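For a feel of what the four requirements above boil down to, here is a minimal sketch of that kind of container – my own illustrative version, not the code from the Gist. The name `TinyContainer` and the example types are made up:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: singleton lifetimes, constructor arguments resolved recursively,
// and a single-constructor assumption, per the requirements listed above.
public static class TinyContainer
{
    private static readonly Dictionary<Type, Type> Map = new Dictionary<Type, Type>();
    private static readonly Dictionary<Type, object> Instances = new Dictionary<Type, object>();

    // Requirement 2: map an interface to the component that implements it.
    public static void Register<TContract, TImplementation>() where TImplementation : TContract
    {
        Map[typeof(TContract)] = typeof(TImplementation);
    }

    public static TContract Resolve<TContract>()
    {
        return (TContract)Resolve(typeof(TContract));
    }

    private static object Resolve(Type type)
    {
        object existing;
        if (Instances.TryGetValue(type, out existing))
        {
            return existing; // Requirement 4: everything is a singleton.
        }

        Type concrete;
        if (!Map.TryGetValue(type, out concrete))
        {
            concrete = type; // Requirement 1: concrete types resolve directly.
        }

        // Requirement 3: assume a default constructor or a single one with parameters.
        var ctor = concrete.GetConstructors().Single();
        var args = ctor.GetParameters().Select(p => Resolve(p.ParameterType)).ToArray();
        var instance = ctor.Invoke(args);
        Instances[type] = instance;
        return instance;
    }
}
```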

Posted by vcsjones | 3 comment(s)

My Netbook Vaio P running Chrome OS - The Experience

I have a small little netbook, a Vaio P. It’s a cool little laptop, and I took it in favor of a clunker laptop. It handles 95% of my needs, and it’s actually pretty fast: a 128 GB solid state drive, and 2 GB of (non-upgradable) RAM. It ran Windows 7 perfectly, only hiccupping on performance when it came to video and medium-to-large Visual Studio solutions. But I didn’t get it for development; I got it for a browser and a chat client (this was pre-iPad days). I figured if all I used it for was that, most of the time just a browser, I owed it to myself to try installing Chrome OS.

I’d previously played with it in a virtual machine, and was very impressed by the boot time (it boots faster than the BIOS can do a POST). It’s borderline an instant-on operating system. That’s typically what I use the netbook for – open it up, use a browser for 30 seconds to figure something out, and then I am done. Windows will go into standby, but it ends up hibernating pretty quickly because my battery desperately needs to be replaced. I’m impatient; I want it on now. That’s what Chrome OS seemed to offer.

Installing Chrome OS is not nearly as simple as installing another distribution. In fact, it has no installer. Going even further, there isn’t an official build of it yet. I ended up getting the Flow version from Hexxeh. Basically you are given an IMG file, which is a drive image. You have to image it to something, I used a USB Key per the instructions. Having another Linux distribution is close to required – you need grub to boot, and an easy way to copy partitions (dd). First, in Windows 7, I used Computer Management to get rid of all of the partitions I didn’t need (like the recovery one) and the other one that Windows 7 likes to use for BitLocker purposes. I just used BCDEdit to make the main partition bootable. Doing this from Windows 7 is highly preferable because while GPart can resize an NTFS Partition, it will often make Windows unhappy and require that you repair the boot loader. I ended up with a drive that had 7 GB free in the front, then the NTFS volume for Windows 7. I shrunk the Windows 7 partition by 3 GB and left those 3 GB unallocated. I installed Ubuntu on the 7 GB space in the front (which works great by the way, more on that later), and the other 3 GB would be for Chrome. I won’t go into the details of installing Chrome because I think the documentation is pretty good, but if you fear spending an angering hour with grub when something might go wrong or don’t want to risk destroying all of your partitions, this may not be for you.

In the end, I got everything working correctly. The first thing you will notice is that Chrome OS requires an internet connection to log in. But you need to log in to configure WiFi. Fortunately, I have the Ethernet / VGA adapter for it, so I just plugged in, logged in, then configured WiFi. Alternatively, you could use the username “facepunch” and the password “facepunch” to log in and configure your WiFi… which leads me to my first point. Hexxeh, as he goes by, is a 17-year-old kid – a brilliant one at that. However, I have reservations about the safety of these distributions. I’m not doubting him, but he could, like any other person including me, make a mistake. He could be doing worse and harvesting passwords. The 17-year-old me would find “facepunch” funny (and the current me does, too), but it sends a bit of a mixed message.

After getting logged in, you’ll go through a few EULAs, all of which are for Java. After that, you’re done! Unfortunately, that’s as far as I got. It was dirt slow – actually unusable. This isn’t Hexxeh’s fault either. I’d put most of the blame, if not all of it, on the lack of proper video drivers for the Intel GMA 500. If you’ve installed a Linux distribution on your Vaio P, you know you have to do a few magic tricks to get your video card working at something other than awful. With Chrome OS, that isn’t really an option due to the partition layout. It was disappointing – quite a bit of work for a net loss. When Chrome OS goes live, hopefully I can try again. This little laptop will always have a home in my tech closet.

On a related topic, Ubuntu 10.10 works great on it. There is one trick to getting it installed: the only way I could get the installer to work correctly was a NetInstall using UNetbootin. Booting directly from a USB key resulted in the error “(initramfs) Unable to find a medium containing a live file system.” Basically, even though it had just booted from USB, it couldn’t read from it. On top of that, once I was booted into the setup, it couldn’t figure out my WiFi, so everything had to be downloaded via LAN. Once I got that far, everything went smoothly. Take the time to straighten out the video drivers and you’ll be happy. My feeling on performance was basically “Ubuntu boots faster than Windows 7 but runs noticeably slower once booted.” I’m sure there are a lot more tweaks I can do to get this working better. We’ll see! I know, “real” Linux guys will tell you, “If you are after performance, why the hell did you install Ubuntu?” Yeah, I know. Maybe I’ll try another one later.

The verdict on Chrome OS is don’t bother with Hexxeh’s flow build. Wait for a newer one. Ubuntu is worth a shot though.

Posted by vcsjones | with no comments

The vCard

My MVC site will have a little bit of social networking in it. One of the requirements is to be able to export a contact into a vCard. vCard is an exchange format used like an electronic business card. It’s the format used by Outlook, Thunderbird, or even Apple’s iPhone when exchanging contacts. It’s an open standard; the latest version is 3.0, specified in RFC 2426. It’s been around for a while, too – the 3.0 specification was put out in 1998. It’s been 12 years, and not much has changed with the vCard. This specification was put out before XML came to true fruition, so it’s got a bit of an odd format. Here is an example of a vCard, with the bare minimum specified.

BEGIN:VCARD
VERSION:3.0
N:Jones;Kevin
FN:Kevin Jones
END:VCARD

It’s a set of “types” and values that are delimited by a colon. There can be optional parameters specified with the type as well, such as an indicator for special encoding rules. The types can be looked up in the documentation, but these are:

  • N: The name components.
  • FN: The Full Name. Possibly could be considered the Display Name.
  • REV: The revision. At minimum, the Date portion must be specified, such as “2010-08-01”. The Time portion and Offset portion are optional. The format requires that dates are padded with zeros, like “08” instead of “8”.
  • BEGIN, END: Pretty obvious. The card must end and begin with these values.
  • VERSION: Also obvious. The version of the specification you are sticking to. See the RFC for notes on differences between 3.0 and 2.1.
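As a quick aside on the REV padding rule above, .NET’s custom date format strings produce the zero-padded form directly – an illustration of mine, not part of the library:

```csharp
using System;

// "yyyy-MM-dd" zero-pads the month and day, as the vCard format requires.
string rev = new DateTime(2010, 8, 1).ToString("yyyy-MM-dd"); // "2010-08-01"
```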

My implementation sticks to the RFC’s design as closely as possible, and tries to implement all features supported by vCard, including embedding binary objects (like a picture or sound clip). There are two important classes: the VCard class and the RFC2426VCardFormatter. I used the formatting engine built into .NET since people are familiar with it, and we can implement some cool features that way. The VCard class is the card itself, and RFC2426VCardFormatter implements IFormatProvider and ICustomFormatter to convert a VCard instance to a string. Here is a sample:

    VCard card = new VCard
    {
        NameFamily = "Jones",
        NameGiven = "Kevin",
        NameMiddle = "George",
        NameFormatted = "Kevin Jones",
        JobTitle = "Team Lead",
        JobCompany = "Thycotic Software Ltd",
        JobRole = "Programmer",
        DateBirth = new DateTimeOffset(1987, 8, 7, 4, 30, 0, new TimeSpan(-5, 0, 0)),
        NameSort = "Jones",
        NameNickname = "Kev",
        Url = "",
        Mailer = "Outlook 2010",
        Note = "Is a connoisseur of cheese." + Environment.NewLine + "And poor speller."
    };
    Console.Out.WriteLine(card);

The output will include:

FN:Kevin Jones
TITLE:Team Lead
ORG:Thycotic Software Ltd
MAILER:Outlook 2010
NOTE:Is a connoisseur of cheese.\nAnd poor speller.
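The \n in the NOTE line above is RFC 2426’s text-value escaping at work. A minimal sketch of that rule – my own simplified version, not necessarily how the library implements it:

```csharp
// RFC 2426 escaping for text values: backslash, semicolon and comma are
// backslash-escaped, and newlines become the two-character sequence "\n".
static string EscapeText(string value)
{
    return value
        .Replace("\\", "\\\\")
        .Replace(";", "\\;")
        .Replace(",", "\\,")
        .Replace("\r\n", "\\n")
        .Replace("\n", "\\n");
}
```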

The formatting engine supports emitting just portions of the vCard object as well. Changing the Console.Out.WriteLine to:

    Console.Out.WriteLine("{0:BEGIN;VERSION;N;FN;REV;END}", card);

will only output the BEGIN, VERSION, N, FN, REV, and END portions of the vCard. A link to the code library is here. It’s not 100% implemented yet, but it supports all of the features that Outlook and most other mailing software care about, including the PHOTO type, addresses, emails, phone numbers, etc. The “AGENT” type may be the only one I never get around to implementing. See the XML comments for additional usage and functionality. Let me know if there are any bugs!

NOTE: If your browser insists that the ZIP file is a TGZ extension, just change it back to ZIP. Not sure why the MIME type is wrong.

Posted by vcsjones | 2 comment(s)

The Power Struggle of FilterAttribute

I’ve been doing a lot of MVC2 work lately, and have been indescribably thrilled with how easy it is to write clean code with it (or at least what I consider clean code). Being able to unit test my controllers and have separation from everything else is like magic. OK, maybe I am a little late to this ballgame. I discovered a very cool feature of MVC2, and that is the FilterAttribute. When reading documentation about how to ensure controller actions can only be run if the user is authenticated, I naturally came to the AuthorizeAttribute. It was that simple! I read the documentation, and to my delight it is extensible, letting you make your own FilterAttributes. It becomes even more powerful when you put the IAuthorizationFilter interface on your attribute, too. Now I can preemptively short-circuit the execution of an action.

I wanted an attribute that would let me say, “Hey, if you are already logged in, just go here instead.” It doesn’t make sense to show a Sign Up page if the user is already logged in – just take them Home. Here is what I ended up with:

    public class RedirectToActionIfAuthenticatedAttribute : FilterAttribute, IAuthorizationFilter
    {
        private readonly string _controller;
        private readonly string _action;

        public RedirectToActionIfAuthenticatedAttribute(string actionName, string controllerName)
        {
            _controller = controllerName;
            _action = actionName;
        }

        public void OnAuthorization(AuthorizationContext filterContext)
        {
            var authenticationService = IoC.Resolve<IAuthenticationService>();
            if (authenticationService.GetCurrentUser().HasValue)
            {
                filterContext.Result = new RedirectToRouteResult
                    (
                        new RouteValueDictionary
                            {
                                {"controller", _controller},
                                {"action", _action}
                            });
            }
        }
    }

The implementation is easy: the OnAuthorization method comes from IAuthorizationFilter. It seems a bit odd to be using this for things that aren’t purely authorization related, but the built-in attributes in the MVC2 kit also use it this way, so I relaxed a bit. At this point, filterContext has a property called Result. If you leave it null, the attribute has no effect. If you set it to something by the time OnAuthorization exits, that result trumps the execution of your controller action. In this case, I am assuming you are using the out-of-the-box default route and populating the controller and action.

It has a verbose name, but I tend to like these kinds of names. Regardless, I can now throw this attribute on controller actions to redirect them wherever I want if the user is already signed in. Its usage is like so:

    [RedirectToActionIfAuthenticated("MyAccount", "Home")]
    public ActionResult SignUp()
    {
        return View();
    }

If a user tries to view the SignUp view and they are already authenticated, then just take them to the MyAccount action (which in this case is a view) on the Home controller.

At this point, there was a slew of applications I could think of for this kind of attribute. However…

Is it Metadata Abuse?

I was debating, and leaning towards no. One thing that kept coming back to mind is that attributes are just metadata – or so I was taught originally. I can easily see this being taken to extremes that exceed their purpose. I only need this attribute twice (so far) in my code base: the SignUp and LogOn views. It’s not a security thing, more of a usability thing, so I am not worried about putting them on the HttpPost actions – just the HttpGet for now. Would it be more correct to just make the decision inside the action itself? For some applications, like the AuthorizeAttribute, I can easily see the need. If you have a few dozen controller actions – and you probably do – duplicating the authentication logic that many times would draw big criticism. The other part is, I can test the attribute using my favorite testing tools, but testing the controller is now a bit different – do I test that the controller has the attribute? Can I test it without it being an integration test? A test that just asserts an attribute is present doesn’t give much meaning. I know the attribute is there – I can see it.

I’m still not 110% sure on my “style” of using MVC2 and a pattern I can stick to. I really like that MVC2 has all of the extension points I need. Still, there is that popular saying: “Just because you can, doesn’t mean you should.”

Posted by vcsjones | with no comments

Fading Controls with Mouse Movement in WPF

This is an off-topic post from my IE Extension Writing series (which I am working on, I promise!). I was playing with a WPF app; it’s a simple photo viewer. I wanted the UI to be “control-less” and only show the picture, but I also wanted some user interface elements. I decided to use controls that sit on top of the image, fade in when there is mouse movement, and fade out when the mouse is idle – sort of like how Windows Media Player works when viewing a video in full screen.

It’s pretty quick, but it might save someone some time. This can be done easily and purely in XAML using Event Triggers. Here is the markup I used:

<EventTrigger RoutedEvent="Mouse.MouseMove">
    <BeginStoryboard HandoffBehavior="Compose">
        <Storyboard AutoReverse="False">
            <DoubleAnimation To="1" Storyboard.TargetName="ScaleSlider" Storyboard.TargetProperty="Opacity" Duration="0:0:0.1875" />
            <DoubleAnimation To="0" Storyboard.TargetName="ScaleSlider" Storyboard.TargetProperty="Opacity" BeginTime="0:0:3.0" Duration="0:0:0.1875" />
        </Storyboard>
    </BeginStoryboard>
</EventTrigger>

and this EventTrigger is on Window.Triggers. ScaleSlider is a slider control on the current window. You can use whatever control you’d like. When the mouse moves, it fades the control in, and fades it out after 3 seconds of the mouse being idle.

This is a quick and dirty app, but it should work for most people. The caveat to this simple example is that all you are doing is hiding the control: users can still interact with it even though it isn’t visible, say through the keyboard. My workaround is to bind the IsEnabled property of the control to its own Opacity. Here is the markup for my slider:

<Slider Orientation="Vertical" Margin="0, 20, 30, 20" Maximum="5" Minimum="0.05" Value="1" TickFrequency="0.3"
        Name="ScaleSlider" HorizontalAlignment="Right" Opacity="0" TickPlacement="BottomRight" ValueChanged="ScaleSlider_ValueChanged"
        IsEnabled="{Binding Mode=OneWay, RelativeSource={x:Static RelativeSource.Self}, Path=Opacity}" />

This works nicely; the slider disables itself when its Opacity is zero.

Posted by vcsjones | 1 comment(s)

Writing a Managed Internet Explorer Extension: Part 3

I’m debating where to take this little series, and I think I am at a point where we need to start explaining Internet Explorer itself, and why writing these things can be a bit tricky. I don’t want to write a blog series where people blindly copy and paste code without knowing what IE is doing.

I am not a professional at it, but I’ve written browser extensions for most popular browsers: IE, Chrome, Firefox, and Safari. In terms of difficulty, IE takes it. That’s probably why there isn’t a big extension community for IE. Let’s go in the Way Back Machine…

At its pinnacle, IE was used by some 95% of web surfers, across IE 5 and IE 6. If you are a developer, you probably hear a lot of criticism of IE 6, and rightly so. Back then, IE supported a plug-in model with that notorious name: ActiveX. It was criticized for allowing web pages to ship and run arbitrary code. Of course, all of that changed, and now IE really gets in your face before one of those things runs. In fact, ActiveX is one of the reasons why some intranet apps still require IE 6. Regardless, the message to Microsoft was clear: we need security!

Security was addressed in IE 7, and even more so in IE 8 with the help of Windows Vista and Windows 7.

Hopefully by now you’ve had the opportunity to play around with writing IE add-ons, but you may have noticed some odd behavior, such as when accessing the file system.

UAC / Integrity Access

UAC (User Account Control) was introduced in Windows Vista. There was a lot of noise over it, but it does make things more secure, even if that lousy dialog is turned off – then it’s just transparent to the user. The purpose of UAC is the Principle of Least Privilege: don’t give a program access to a securable object, like a file, unless it needs access to it. Even if your application will never touch a specific file, another application might figure out a way to exploit your application into doing dirty deeds for it. UAC provides a mechanism for temporarily granting access to a securable object that the application would normally not have permission to. UAC introduced the concepts of Elevated and Normal. Normal is what the user operates under until a UAC prompt shows up.

Those two names are just used on the surface, though… underneath there are actually three integrity levels. Aptly named, they are called Low, Medium, and High. Medium is Normal, and High is Elevated.

IE is a program that uses Low by default. The integrity level is carried on process and thread tokens. In theory, you could run your own application in “Low”. Low has its own SID: “S-1-16-4096”. If we start a process with this SID in its token, it will be low integrity. You can see this article for a chunk of code that does that. It’s hard to do in managed code and will require a good amount of platform invoke. You can also use this technique with threads.

Ultimately, Low mode has some really hard-core security limitations. You have no access to the file system, except a few useful places:

  • %USERPROFILE%\Local Settings\Temporary Internet Files\Low
  • %USERPROFILE%\Local Settings\Temp\Low
  • %USERPROFILE%\AppData\LocalLow
  • %USERPROFILE%\Cookies\Low
  • %USERPROFILE%\Favorites\Low
  • %USERPROFILE%\History\Low

That’s it. No user documents, nada. Some of those directories may not even exist if a Low process hasn’t attempted to create them yet. If your extension is only going to store settings, I recommend putting them into %USERPROFILE%\AppData\LocalLow. This directory only exists in Windows Vista and up. Windows XP has no UAC and no Protected Mode, so you are free to do as you please on Windows XP!

To determine the path of LocalLow, I use this code. A domain policy might move it elsewhere, or it might change in a future version of Windows:

public static class LocalLowDirectoryProvider
{
    private static readonly Lazy<string> _lazyLocalLowDirectory = new Lazy<string>(LazyGetLocalLowDirectory, LazyThreadSafetyMode.ExecutionAndPublication);

    private const uint S_OK = 0;

    public static string LocalLowDirectory
    {
        get
        {
            return _lazyLocalLowDirectory.Value;
        }
    }

    private static string LazyGetLocalLowDirectory()
    {
        // SHGetKnownFolderPath only exists on Vista and later; probe for it first.
        var shell32Handle = LoadLibrary("shell32.dll");
        try
        {
            var procAddress = GetProcAddress(shell32Handle, "SHGetKnownFolderPath");
            if (procAddress == IntPtr.Zero)
            {
                return null;
            }
        }
        finally
        {
            FreeLibrary(shell32Handle);
        }
        var localLowSavePath = IntPtr.Zero;
        try
        {
            // FOLDERID_LocalAppDataLow
            if (SHGetKnownFolderPath(new Guid("A520A1A4-1780-4FF6-BD18-167343C5AF16"), 0, IntPtr.Zero, out localLowSavePath) != S_OK)
            {
                return null;
            }
            return Marshal.PtrToStringUni(localLowSavePath);
        }
        finally
        {
            if (localLowSavePath != IntPtr.Zero)
            {
                Marshal.FreeCoTaskMem(localLowSavePath);
            }
        }
    }

    [DllImport("shell32.dll", CallingConvention = CallingConvention.StdCall, EntryPoint = "SHGetKnownFolderPath")]
    private static extern uint SHGetKnownFolderPath([MarshalAs(UnmanagedType.LPStruct)] Guid rfid, uint dwFlags, IntPtr hToken, out IntPtr pszPath);

    [DllImport("kernel32.dll", CallingConvention = CallingConvention.StdCall, EntryPoint = "GetProcAddress", CharSet = CharSet.Ansi)]
    private static extern IntPtr GetProcAddress([In] IntPtr hModule, [In, MarshalAs(UnmanagedType.LPStr)] string lpProcName);

    [DllImport("kernel32.dll", CallingConvention = CallingConvention.StdCall, EntryPoint = "LoadLibrary", CharSet = CharSet.Auto)]
    private static extern IntPtr LoadLibrary([In, MarshalAs(UnmanagedType.LPTStr)] string lpFileName);

    [DllImport("kernel32.dll", CallingConvention = CallingConvention.StdCall, EntryPoint = "FreeLibrary")]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool FreeLibrary([In] IntPtr hModule);
}
It returns null if there is no LocalLow directory. The Lazy<T> class provides thread-safe caching for this value, since it will never change (at least it shouldn’t).
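The once-only behavior of Lazy<T> can be seen in isolation with a small sketch (no COM involved; the names here are mine, not from the BHO – the factory just counts its invocations where the real code would call SHGetKnownFolderPath):

```csharp
using System;
using System.Threading;

public static class LazyDemo
{
    // Counts how many times the factory actually ran.
    public static int CallCount;

    private static readonly Lazy<string> _cached =
        new Lazy<string>(Compute, LazyThreadSafetyMode.ExecutionAndPublication);

    private static string Compute()
    {
        CallCount++;
        return "expensive-result";
    }

    // Every read goes through the Lazy<T>; only the first read computes.
    public static string Value
    {
        get { return _cached.Value; }
    }
}
```

ExecutionAndPublication also guarantees the factory runs at most once even if several threads race on the first read, which matters for a BHO since IE can call into it from more than one place.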

However, if you need to access the file system outside of these white-listed directories, you have a couple of options, which we’ll cover later in our journey:

  1. Use IE’s built in Open File and Save File dialogs. They will give you access to the file.
  2. Use a broker process / COM server. We’ll discuss this one later.

Loose Coupling

This tends to trip up managed developers. Starting with IE 8, each tab is its own process. That breaks things developers get comfortable with, like the fact that a static / shared variable is now unique per tab. That was one of the design goals of decoupling tabs – they can only talk to each other through securable means, like RPC. Even IE 7, which does not have a process per tab, still isolates the BHO instances from one another. As far as the BHO knows, a tab is a window.

Every time a new tab is opened, that tab gets its own instance of the BHO. This was originally done to keep IE 7 as backward compatible with BHOs as possible. In IE 6, each window was its own process, so BHOs got comfortable assuming there would only be one instance of themselves running. This loose coupling also changes how dialogs might be shown from a BHO. We’ll get into that when we discuss UI design and interaction.

In Part 4, we will get back to making a BHO do useful things. I just felt I had to get this off my chest.

Posted by vcsjones | 3 comment(s)

Writing a Managed Internet Explorer Extension: Part 2.5

When we last met in part 2, we looked at how events work, how to wire them, and more importantly how to unwire them. I also mentioned that we could use attachEvent and detachEvent rather than the events on the interfaces. This is useful if you don’t know what type of element you are attaching an event to.

attachEvent and detachEvent

attachEvent is part of the IHTMLElement2 interface, and fortunately all elements and tags implement this interface, so long as you are targeting Internet Explorer 5.0+. attachEvent takes two parameters: a string indicating which event to attach to, and the handler itself. Its signature looks like this:

HRESULT attachEvent(
    BSTR         event,    // in
    IDispatch    *pDisp,   // in
    VARIANT_BOOL *pfResult // out, retval
);

The IDispatch is the event handler. Unfortunately this means it isn’t as simple as passing a delegate to it and “it just works”. We need to implement a class that will marshal the handler to COM correctly.

Also note that, like in Part 2, we need to call detachEvent to stop memory from leaking. Rather than implement the IDispatch interface ourselves, we can use the IReflect interface to give the COM marshaller all of the help it needs. When the event is invoked through IDispatch, it will use the name “[DISPID=0]”. We can handle that in the InvokeMember implementation of IReflect; otherwise we just pass the call through to our own type. The nice part of this approach is that it uses a plain delegate we are probably already familiar with. In this case, I’ve called the class EventProxy.

public class EventProxy : IReflect
{
    private readonly string _eventName;
    private readonly IHTMLElement2 _target;
    private readonly Action<CEventObj> _eventHandler;
    private readonly Type _type;

    public EventProxy(string eventName, IHTMLElement2 target, Action<CEventObj> eventHandler)
    {
        _eventName = eventName;
        _target = target;
        _eventHandler = eventHandler;
        _type = typeof(EventProxy);
    }

    public IHTMLElement2 Target
    {
        get { return _target; }
    }

    public string EventName
    {
        get { return _eventName; }
    }

    public void OnHtmlEvent(object o)
    {
        InvokeClrEvent(o as CEventObj);
    }

    private void InvokeClrEvent(CEventObj o)
    {
        if (_eventHandler != null)
        {
            _eventHandler(o);
        }
    }

    public MethodInfo GetMethod(string name, BindingFlags bindingAttr, Binder binder, Type[] types, ParameterModifier[] modifiers)
    {
        return _type.GetMethod(name, bindingAttr, binder, types, modifiers);
    }

    public MethodInfo GetMethod(string name, BindingFlags bindingAttr)
    {
        return _type.GetMethod(name, bindingAttr);
    }

    public MethodInfo[] GetMethods(BindingFlags bindingAttr)
    {
        return _type.GetMethods(bindingAttr);
    }

    public FieldInfo GetField(string name, BindingFlags bindingAttr)
    {
        return _type.GetField(name, bindingAttr);
    }

    public FieldInfo[] GetFields(BindingFlags bindingAttr)
    {
        return _type.GetFields(bindingAttr);
    }

    public PropertyInfo GetProperty(string name, BindingFlags bindingAttr)
    {
        return _type.GetProperty(name, bindingAttr);
    }

    public PropertyInfo GetProperty(string name, BindingFlags bindingAttr, Binder binder, Type returnType, Type[] types, ParameterModifier[] modifiers)
    {
        return _type.GetProperty(name, bindingAttr, binder, returnType, types, modifiers);
    }

    public PropertyInfo[] GetProperties(BindingFlags bindingAttr)
    {
        return _type.GetProperties(bindingAttr);
    }

    public MemberInfo[] GetMember(string name, BindingFlags bindingAttr)
    {
        return _type.GetMember(name, bindingAttr);
    }

    public MemberInfo[] GetMembers(BindingFlags bindingAttr)
    {
        return _type.GetMembers(bindingAttr);
    }

    public object InvokeMember(string name, BindingFlags invokeAttr, Binder binder, object target, object[] args, ParameterModifier[] modifiers, CultureInfo culture, string[] namedParameters)
    {
        if (name == "[DISPID=0]")
        {
            OnHtmlEvent(args == null ? null : args.Length == 0 ? null : args[0]);
            return null;
        }
        return _type.InvokeMember(name, invokeAttr, binder, target, args, modifiers, culture, namedParameters);
    }

    public Type UnderlyingSystemType
    {
        get { return _type.UnderlyingSystemType; }
    }
}

Instances of EventProxy can be handed as the IDispatch to attachEvent. We would use it like so:

private void _webBrowser2Events_DocumentComplete(object pdisp, ref object url)
{
    var document = (HTMLDocument)_webBrowser2.Document;
    var htmlElements = document.getElementsByTagName("input").OfType<IHTMLElement2>();
    foreach (var htmlElement in htmlElements)
    {
        // s.type used here as an illustrative message detail
        Action<CEventObj> handler = s => MessageBox.Show("Clicked: " + s.type);
        var proxy = new EventProxy("onclick", htmlElement, handler);
        htmlElement.attachEvent("onclick", proxy);
    }
}

Like in part 2, we need to keep a running dictionary of all of the events we attach, then call detachEvent in BeforeNavigate2. I’ll leave that to you, but at the end of the series I will post the entire working solution as a VS 2010 project.

DocumentComplete Fired Multiple Times

If you use the code from above or from part two, you may notice that clicking an input element causes the dialog to show several times. That is because DocumentComplete is called more than once. DocumentComplete fires whenever any document completes, not just the top-level one, so content from an <iframe> will cause DocumentComplete to fire again. Sometimes this behavior is desirable, but in this case it is not. How do we ensure we only react once, for the main document?

DocumentComplete gives us two things: the URL of the document that was loaded, and a pointer to a dispatch object. Simply put, if pdisp is the same reference as the IWebBrowser2 instance you saved in SetSite, then it’s the “root” document. This check ensures we only act once, when the main document is complete:

if (!ReferenceEquals(pdisp, _webBrowser2))
{
    return;
}

As I mentioned in part 2, part 3 will be about manipulating the DOM. I just wanted to cover this first.

Posted by vcsjones | 2 comment(s)

Writing a Managed Internet Explorer Extension: Part 2

Continuing my miniseries from Writing a Managed Internet Explorer Extension: Part 1, where we discussed how to set up a simple Internet Explorer Browser Helper Object in C# and got a basic, but somewhat useless, example working. Now we want to interact with the Document Object Model a bit more, including listening for events, like when a button is clicked. I’ll assume that you are all caught up on the basics from my previous post, and we will continue to use the sample solution.

Elements in the HTMLDocument can be accessed by getElementById, getElementsByName, getElementsByTagName, and so on. We’ll use getElementsByTagName, and then filter on a “type” attribute of “button” or “submit”.

An issue that regularly comes up with the generated .NET MSHTML library is its endless web of delegates, events, and interfaces. Looking at the object explorer, you can see that there are several delegates per type. This makes it tricky to say “I want to handle the ‘onclick’ event for all elements”, because there is no common interface they all implement with a single onclick event. However, if you are brave, you can let dynamic types in .NET Framework 4.0 solve that for you. Otherwise you have a complex web of casting ahead of you.

Another issue that you may run into is conflicting member names. Yes, you would think this isn’t possible, but the CLR allows it – I just don’t believe the C# and VB.NET compilers allow it. For example, on the interface HTMLInputElement, there is a property called “onclick” and an event called “onclick”. This interface will not compile under C# 4:

public interface HelloWorld
{
    event Action HelloWorld;
    string HelloWorld { get; }
}

However, an interesting fact about the CLR is that it allows methods and properties to be overloaded by return type. Crazy, huh? Here is some bare-bones MSIL you can compile on your own using ilasm to see it in action:

.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
  .ver 4:0:0:0
}

.module MislExample.dll
.imagebase 0x00400000
.file alignment 0x00000200
.stackreserve 0x00100000
.subsystem 0x0003
.corflags 0x0000000b

.class interface public abstract auto ansi MislExample.HelloWorld
{
  .method public hidebysig newslot specialname abstract virtual
          instance void  add_HelloWorld
            (class [mscorlib]System.Action 'value') cil managed
  {
  }

  .method public hidebysig newslot specialname abstract virtual
          instance void  remove_HelloWorld
            (class [mscorlib]System.Action 'value') cil managed
  {
  }

  .method public hidebysig newslot specialname abstract virtual
          instance string  get_HelloWorld() cil managed
  {
  }

  .event [mscorlib]System.Action HelloWorld
  {
    .addon instance void MislExample.HelloWorld::
            add_HelloWorld(class [mscorlib]System.Action)
    .removeon instance void MislExample.HelloWorld::
            remove_HelloWorld(class [mscorlib]System.Action)
  }

  .property instance string HelloWorld()
  {
    .get instance string MislExample.HelloWorld::get_HelloWorld()
  }
}

That MSIL isn’t fully complete as it lacks any sort of manifest, but it will compile and .NET Reflector will be able to see it. You might have trouble referencing it from a C# or VB.NET project.

You can work around this issue by being explicit: cast the object to the interface to gain access to the event, or do something clever with LINQ:

void _webBrowser2Events_DocumentComplete(object pDisp, ref object URL)
{
    var document = (HTMLDocument)_webBrowser2.Document;
    var inputElements = from element in document.getElementsByTagName("input").Cast<HTMLInputElement>()
                        select new { Class = element, Interface = (HTMLInputTextElementEvents2_Event)element };
    foreach (var inputElement in inputElements)
    {
        inputElement.Interface.onclick += inputElement_Click;
    }
}

static bool inputElement_Click(IHTMLEventObj htmlEventObj)
{
    htmlEventObj.cancelBubble = true;
    MessageBox.Show("You clicked an input element!");
    return false;
}

This is pretty straightforward: whenever the document is complete, loop through all of the input elements and attach an onclick handler to each. Despite the name of the interface, this works with all HTMLInputElement objects.

Great! We have events wired up. Unfortunately, we’re not done. This appears to work on first try, but go ahead and load the add-on and use IE for a while. It starts consuming more and more memory. We have written a beast with an unquenchable thirst for memory! We can see that in Son of Strike, too:

MT Count TotalSize Class Name
03c87ecc 3502 112064 mshtml.HTMLInputTextElementEvents2_onclickEventHandler
06c2aac0 570 9120 mshtml.HTMLInputElementClass

This is a bad figure, because it never goes down, even if we garbage collect. With just a few minutes of use of Internet Explorer, there is a huge number of event handlers. The reason is that we never unwire the event handlers, so we are leaking them. Many people have bemoaned this problem in .NET: an event subscription keeps a reference to the handler’s target alive. Many people have written framework wrappers for events to use “weak events” – events that don’t keep a strong reference. Both strong and weak references have their advantages.

I’ve found the best way to handle this is to keep a running Dictionary of all the events you subscribed to, and unwire them in BeforeNavigate2 by looping through the dictionary and then clearing it, allowing the handlers to be garbage collected.
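The bookkeeping pattern itself is independent of MSHTML, so here is a minimal stand-alone sketch of it. FakeElement and the other names are hypothetical stand-ins for the COM event interfaces; the dictionary maps each handler to the source it was attached to, exactly as described above:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for a COM event source such as an input element.
public class FakeElement
{
    public event EventHandler Click;

    public void RaiseClick()
    {
        if (Click != null) Click(this, EventArgs.Empty);
    }
}

public class EventBookkeeper
{
    // Handler -> source it was attached to, mirroring the dictionary approach.
    private readonly Dictionary<EventHandler, FakeElement> _wired =
        new Dictionary<EventHandler, FakeElement>();

    public int ClickCount { get; private set; }

    public void Wire(FakeElement element)
    {
        EventHandler handler = (s, e) => { ClickCount++; };
        element.Click += handler;
        _wired.Add(handler, element);
    }

    // In the real BHO this runs on BeforeNavigate2: unwire everything,
    // then clear so the handlers can be collected.
    public void UnwireAll()
    {
        foreach (var pair in _wired)
        {
            pair.Value.Click -= pair.Key;
        }
        _wired.Clear();
    }
}
```

Each call to Wire creates a distinct delegate instance, so the delegates serve as unique dictionary keys even though they all point at the same logic.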

Here is my final code for unwiring events:

[ComVisible(true),
Guid("9AB12757-BDAF-4F9A-8DE8-413C3615590C"),
ClassInterface(ClassInterfaceType.None)]
public class BHO : IObjectWithSite
{
    private object _pUnkSite;
    private IWebBrowser2 _webBrowser2;
    private DWebBrowserEvents2_Event _webBrowser2Events;
    private readonly Dictionary
        <
            HTMLInputTextElementEvents2_onclickEventHandler,
            HTMLInputTextElementEvents2_Event
        > _wiredEvents
        = new Dictionary
        <
            HTMLInputTextElementEvents2_onclickEventHandler,
            HTMLInputTextElementEvents2_Event
        >();

    public int SetSite(object pUnkSite)
    {
        if (pUnkSite != null)
        {
            _pUnkSite = pUnkSite;
            _webBrowser2 = (IWebBrowser2)pUnkSite;
            _webBrowser2Events = (DWebBrowserEvents2_Event)pUnkSite;
            _webBrowser2Events.DocumentComplete += _webBrowser2Events_DocumentComplete;
            _webBrowser2Events.BeforeNavigate2 += _webBrowser2Events_BeforeNavigate2;
        }
        else
        {
            _webBrowser2Events.DocumentComplete -= _webBrowser2Events_DocumentComplete;
            _webBrowser2Events.BeforeNavigate2 -= _webBrowser2Events_BeforeNavigate2;
            _pUnkSite = null;
        }
        return 0;
    }

    void _webBrowser2Events_BeforeNavigate2(object pDisp, ref object URL, ref object Flags,
        ref object TargetFrameName, ref object PostData, ref object Headers, ref bool Cancel)
    {
        foreach (var wiredEvent in _wiredEvents)
        {
            wiredEvent.Value.onclick -= wiredEvent.Key;
        }
        _wiredEvents.Clear();
    }

    void _webBrowser2Events_DocumentComplete(object pDisp, ref object URL)
    {
        var document = (HTMLDocument)_webBrowser2.Document;
        var inputElements = from element in document.getElementsByTagName("input").Cast<HTMLInputElement>()
                            select new { Class = element, Interface = (HTMLInputTextElementEvents2_Event)element };
        foreach (var inputElement in inputElements)
        {
            HTMLInputTextElementEvents2_onclickEventHandler interfaceOnOnclick = inputElement_Click;
            inputElement.Interface.onclick += interfaceOnOnclick;
            _wiredEvents.Add(interfaceOnOnclick, inputElement.Interface);
        }
    }

    static bool inputElement_Click(IHTMLEventObj htmlEventObj)
    {
        htmlEventObj.cancelBubble = true;
        MessageBox.Show("You clicked an input!");
        return false;
    }

    public int GetSite(ref Guid riid, out IntPtr ppvSite)
    {
        var pUnk = Marshal.GetIUnknownForObject(_pUnkSite);
        try
        {
            return Marshal.QueryInterface(pUnk, ref riid, out ppvSite);
        }
        finally
        {
            Marshal.Release(pUnk);
        }
    }
}

After performing the same level of stress as before, there were only 209 instances of HTMLInputTextElementEvents2_onclickEventHandler. That is still a bit high, but it’s because the Garbage Collector hadn’t yet done its cleanup. The Garbage Collector makes counting how many objects are in memory a bit subjective. If we really cared, we could check which of those have a reference count greater than zero to get the full result, but I think that’s above the call of duty, right?

There are alternative ways to wire events. If the strong typing and plethora of interfaces is getting to you, it’s possible to use attachEvent and detachEvent, though that requires converting the handlers into objects that COM can understand.

In Part 3 we will look into manipulating the DOM.

Posted by vcsjones | 1 comment(s)

Writing a Managed Internet Explorer Extension: Part 1

I’ve recently had the pleasure of writing an Internet Explorer add-on. I found this to be somewhat difficult for a few reasons and decided to document my findings here.

Managed vs Native

One difficult decision I had to make before writing a single line of code was: what do I write it with? I am a C# developer, and would prefer to stay in that world if possible. However, this add-on was intended for commercial use, and I couldn’t make the decision solely based on preference.

Add-ons to Internet Explorer are called Browser Helper Objects, often documented as BHOs. They are COM types, so if we are going to do this managed, we will be doing some COM interop. I’ve done this before, but mostly at the level of tinkering, or deciding to go back to native. The .NET Framework had another benefit for me: WPF. My BHO requires a user interface, and building one natively isn’t as easy or elegant as it is with WPF. Ultimately I decided to go with the .NET Framework 4, and it’s the only version I can recommend.

Previous versions of the CLR had a serious drawback when exposing types to COM: they always used the latest version of the CLR on the machine. If you wrote a BHO against the .NET Framework 1.1 and 2.0 was installed, the assembly would be loaded using the .NET Framework 2.0. This can lead to unexpected behavior. Starting with the .NET Framework 4, COM-visible types are guaranteed to run against the CLR they were compiled with.

The Basics of COM and IE

Internet Explorer uses COM as its means of extending its functionality. Using .NET, we can create managed types and expose them to COM, and Internet Explorer will be none the wiser. COM heavily uses interfaces to provide functionality; our BHO will be a single class that implements a COM interface. Let’s start by making a C# Class Library in Visual Studio. Before we can start writing code, we need to let the compiler know we will be generating COM types. This is done by setting “Register for COM interop” in our project settings on the “Build” tab. While you are on the Build tab, change the Platform target to “x86”, as we will only be dealing with 32-bit IE even if you are running a 64-bit OS. Now that’s out of the way, let’s make our first class. We’ll call it BHO.

namespace IeAddOnDemo
{
    public class BHO
    {
    }
}

By itself, this class is not useful at all, nor can COM do anything with it. We need to let COM know this type is useful with a few key attributes. The first is ComVisibleAttribute(true), which does exactly what it looks like. The next is GuidAttribute. This is important because all COM types have a unique GUID; yours must be unique per type. Just make your own in Visual Studio by clicking “Tools” and “Create GUID”. Finally, there is ClassInterfaceAttribute, which will be set to None. Optionally, you can set ProgIdAttribute if you want. This lets you specify your own named identifier that will be used when the COM type is registered; otherwise it defaults to your class name. Here is what my class looks like now:

[ComVisible(true),
Guid("9AB12757-BDAF-4F9A-8DE8-413C3615590C"),
ClassInterface(ClassInterfaceType.None)]
public class BHO
{
}

So now our type can be registered, but it isn’t useful to IE. Our class needs to implement an interface that IE expects all BHOs to implement: IObjectWithSite. This is an existing COM interface that we will re-define in managed code, letting the CLR know it’s actually a COM interface through a series of attributes.

[ComImport,
Guid("FC4801A3-2BA9-11CF-A229-00AA003D7352"),
InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IObjectWithSite
{
    [PreserveSig]
    int SetSite([In, MarshalAs(UnmanagedType.IUnknown)]object pUnkSite);
    [PreserveSig]
    int GetSite(ref Guid riid, out IntPtr ppvSite);
}

You can directly copy and paste this into your project. Make sure you don’t change the GUID; it is already defined by Internet Explorer. The ComImport attribute indicates that this interface is a COM interface. The InterfaceTypeAttribute is important: all of the COM interfaces we will be working with are InterfaceIsIUnknown. Every COM interface derives from a base interface; in this case, IObjectWithSite derives from IUnknown. We won’t actually be doing anything with IUnknown, though, that .NET can’t already do for us with the help of the Marshal class.

The GetSite and SetSite methods will be called by Internet Explorer; we just need to provide the implementation. SetSite is of the most interest to us. pUnkSite will be another IUnknown interface. Since we won’t be using the IUnknown interface directly, we’ll just use object instead and be happy with that. We’ll add IObjectWithSite to our BHO class.

Before that, we need to add a few references to our project. Bring up the Add Reference dialog, and switch over to the COM tab. We’ll be adding these:

  • Microsoft HTML Object Library
  • Microsoft Internet Controls

.NET will automatically generate the .NET wrappers for us rather than having to declare all of them by hand like we did with IObjectWithSite. These libraries contain the useful parts of Internet Explorer that allow us to do cool things, like manipulate the Document Object Model of a page. Now let’s add our interface.

    [ComVisible(true),
    Guid("9AB12757-BDAF-4F9A-8DE8-413C3615590C"),
    ClassInterface(ClassInterfaceType.None)]
    public class BHO : IObjectWithSite
    {
        private object _pUnkSite;

        public int SetSite(object pUnkSite)
        {
            _pUnkSite = pUnkSite;
            return 0;
        }

        public int GetSite(ref Guid riid, out IntPtr ppvSite)
        {
            var pUnk = Marshal.GetIUnknownForObject(_pUnkSite);
            try
            {
                return Marshal.QueryInterface(pUnk, ref riid, out ppvSite);
            }
            finally
            {
                Marshal.Release(pUnk);
            }
        }
    }

The GetSite implementation is fairly vanilla and can be used for all BHOs. Internet Explorer will call GetSite with an interface GUID, and we defer back to the object we received in SetSite. SetSite gives us an object, but it isn’t just any plain ol’ boring object: it actually implements a number of useful interfaces, like IWebBrowser2 and DWebBrowserEvents2_Event. SetSite is usually called twice by IE. When pUnkSite is not null, it’s an object that is IE itself. When pUnkSite is null, it means IE is shutting down and we need to do our cleanup. We can cast pUnkSite to those two interfaces, which are in the COM libraries we referenced earlier. The return value should always be S_OK, or 0. With IWebBrowser2 we can manipulate the DOM, and with DWebBrowserEvents2_Event we can listen for events. Let’s add some simple functionality: whenever a page is loaded, display a message box with its title. Here is what my final code looks like:

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;
    using mshtml;
    using SHDocVw;

    namespace IeAddOnDemo
    {
        [ComVisible(true),
        Guid("9AB12757-BDAF-4F9A-8DE8-413C3615590C"),
        ClassInterface(ClassInterfaceType.None)]
        public class BHO : IObjectWithSite
        {
            private object _pUnkSite;
            private IWebBrowser2 _webBrowser2;
            private DWebBrowserEvents2_Event _webBrowser2Events;

            public int SetSite(object pUnkSite)
            {
                if (pUnkSite != null)
                {
                    _pUnkSite = pUnkSite;
                    _webBrowser2 = (IWebBrowser2)pUnkSite;
                    _webBrowser2Events = (DWebBrowserEvents2_Event)pUnkSite;
                    _webBrowser2Events.DocumentComplete += _webBrowser2Events_DocumentComplete;
                }
                else
                {
                    _webBrowser2Events.DocumentComplete -= _webBrowser2Events_DocumentComplete;
                    _pUnkSite = null;
                }
                return 0;
            }

            void _webBrowser2Events_DocumentComplete(object pDisp, ref object URL)
            {
                // Document is typed as object; cast it to the DOM document.
                var document = (HTMLDocument)_webBrowser2.Document;
                MessageBox.Show(document.title);
            }

            public int GetSite(ref Guid riid, out IntPtr ppvSite)
            {
                var pUnk = Marshal.GetIUnknownForObject(_pUnkSite);
                try
                {
                    return Marshal.QueryInterface(pUnk, ref riid, out ppvSite);
                }
                finally
                {
                    Marshal.Release(pUnk);
                }
            }
        }
    }

Notice that in SetSite, if pUnkSite is null, I remove the event wireup. This is required; otherwise pUnkSite won’t get released properly, and IE is likely to crash when the user tries to close it.


We have our code now, but how do we get IE to do anything with the assembly? First we need to register it. The .NET Framework comes with a tool called regasm, which registers our .NET assembly as if it were a COM library. Before we can do that, we need to give our assembly a strong name – if you don’t sign the assembly with a strong name, regasm is going to complain.
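One era-appropriate way to add the strong name is the AssemblyKeyFile attribute – a sketch, where the key file name is made up; generate the key pair with sn.exe -k, or use the Signing tab in the project properties instead:

```csharp
// AssemblyInfo.cs – signs the output assembly with a strong name key pair.
// "IeAddOnDemo.snk" is a hypothetical file name; create one with: sn.exe -k IeAddOnDemo.snk
[assembly: System.Reflection.AssemblyKeyFile("IeAddOnDemo.snk")]
```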

What you will want to do now is open the Visual Studio Command Prompt, found in your Start menu along with Visual Studio. You’ll want to run it as an administrator, too, and change your working directory to your project’s output. Then call regasm like this:

    regasm.exe /register /codebase IeAddOnDemo.dll

If all goes well, you will see “Types registered successfully”. Let’s verify: open the registry by running regedit.exe and look under HKEY_CLASSES_ROOT\CLSID. Remember the GUID you used in the attribute on BHO? It should be under there. In my example, I should see HKEY_CLASSES_ROOT\CLSID\{9AB12757-BDAF-4F9A-8DE8-413C3615590C}. Lo and behold, I do.

Note that if you are using a 64-bit operating system, you will see it under HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{your guid here}. That’s OK and expected.

We have the COM class registered, but IE still doesn’t know it’s there. We need to add it to the list of BHOs, which live under this key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects

or this for x64 machines:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects

Note that the key “Browser Helper Objects” may not exist if a BHO has never been installed on the machine. If it’s not there, go ahead and create it.

Finally, create a subkey under “Browser Helper Objects” using the same GUID that was registered. Make sure to include the curly braces, like you saw earlier under CLSID. So now I have a key path:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects\{9AB12757-BDAF-4F9A-8DE8-413C3615590C}
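If you’d rather not click through regedit, the same registration can be captured in a .reg file – a sketch using this example’s GUID and the Wow6432Node path from my x64 machine (drop Wow6432Node on 32-bit Windows; the default value string is just a label of your choosing):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects\{9AB12757-BDAF-4F9A-8DE8-413C3615590C}]
@="IeAddOnDemo BHO"
```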

If you want, you can set the default value of the key to a string of your choice to make it more identifiable in the registry. Lastly, you will want to create a DWORD value under the key called NoExplorer with a value of 1. This stops Windows Explorer from loading the add-on, limiting it to just Internet Explorer. I haven’t tested my add-on with Windows Explorer, so I have no idea whether it works there. Now go ahead and start IE, and if all went according to plan you will see this:


Looking under “Tools” and “Manage Add Ons” we see our BHO is listed there.


If you want to unregister your add on, simply do the following:

  1. Delete your BHO registration. In my case it’s the key “HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects\{9AB12757-BDAF-4F9A-8DE8-413C3615590C}”
  2. Call regasm like we did before, except use /unregister rather than /register.
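The two steps above can be scripted as well – a sketch using this example’s paths (run from an elevated Visual Studio command prompt; these are Windows-only tools):

```shell
REM Remove the BHO registration key (Wow6432Node path from my x64 machine):
reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects\{9AB12757-BDAF-4F9A-8DE8-413C3615590C}" /f

REM Unregister the COM types:
regasm.exe /unregister IeAddOnDemo.dll
```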

The next blog post will cover listening to DOM events, such as when a button is clicked.

Posted by vcsjones | 9 comment(s)

The AJAX Update Panel and Recursive Common Table Expressions

So my blog has become quiet again, but don’t worry! I’ve simply started pushing more of my posts toward our company blog. Check out my latest posts there:

The AJAX UpdatePanel: Is it your worst enemy?

Recursive Common Table Expressions

And from here on out, there’s going to be a series on .NET 4.0.

Posted by vcsjones | 1 comment(s)

Vitamin D, Sunshine, and Rainbows

Today is the start of me coming off of a client project that I have been on and off for years now. I really like the project and I tend to jump in when the client is planning some large items over the coming months. Sadly that reign has come to an end, and it’s off to different things. Regardless, a somewhat unique thing that we do as often as possible is work at the client’s location, their building, their office. While this provides enormous benefits when working with the client, it does mean that we take what we can get when it comes to office space. Recently, they gave us a new room when they expanded to the floor below them. It’s a nice room with great tables and layout for pair programming, comfy chairs, and good development machines. Though the joke around the office is we’re going to suffer from a Vitamin D deficiency. Why? It’s because there is no window or place that natural light gets in.

Today I had the good fortune of working next to a window in our office.

I never really considered how much of a difference it makes; I always wanted to think “lamps are good enough.” Honestly, though, I found myself having more energy and being much perkier – and I’m not usually one known to be “perky”, either.

So I want opinions: how important is a window in the office to you?

Posted by vcsjones | 1 comment(s)

Retargeting Assemblies for SQLite

If you read my previous post about dealing with SQLite and SQL CE, then you know that I am on a mission to get unit tests working correctly with a SQLite database. I decided to see if I could get “assembly retargeting” working. First, what is “retargeting”?

.NET allows assemblies to be “redirected”, in a sense, when the platform is different. This is how the .NET Compact Framework actually works: all .NET CF assemblies are retargetable. In your application, you reference assemblies such as System.dll – yet ever notice that a .NET Compact Framework application will run on a desktop? That’s because the runtime is silently retargeting those references to the full assemblies, like System. This is key to getting the application working on different platforms. Under all of the covers, .NET relies heavily on platform invoke to do the heavy lifting in the WinForms world, and the P/Invoke calls made by a Windows Mobile application and a workstation application are wildly different. This is why redirection is needed for one assembly to work on multiple platforms.
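You can see the retargetable flag surface in an assembly’s display name. A small illustrative sketch – the version number here is made up, and the public key token is the one from the SQLite wrapper seen later in this post:

```csharp
using System;
using System.Reflection;

class RetargetableDemo
{
    static void Main()
    {
        // A retargetable reference carries "Retargetable=Yes" in its display
        // name, which tells the binder it may satisfy the reference with a
        // different implementation (e.g. the full-framework build instead of
        // the Compact Framework one).
        var name = new AssemblyName(
            "System.Data.SQLite, Version=1.0.65.0, Culture=neutral, " +
            "PublicKeyToken=1fdb50b1b62b4c84, Retargetable=Yes");

        // The flag is parsed out of the display name onto AssemblyName.Flags.
        bool isRetargetable = (name.Flags & AssemblyNameFlags.Retargetable) != 0;
        Console.WriteLine(isRetargetable);
    }
}
```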

How can we get this to work for SQLite?

There are a few things you need to get retargeting to work. You need two assemblies: one for the Compact Framework and one for the full framework. I am using the managed wrapper provided by PHX software, which happens to be open source as well. They have ZIPs of both the full and compact assemblies.

The second catch is that they need to have the same strong name. Oddly, these two managed wrappers use a different strong name key depending on the platform, but since the project is open source it’s very easy to download, synchronize the keys, and recompile.

The final bit is putting the assemblies in a place the application will look for them without causing problems. In your project, add a reference to the Compact assembly. Leave this for now.

Open a command prompt with administrative permissions and use gacutil to install the Full one.

Finally, you need to make sure the unmanaged libraries end up in your project output directory for both compact and full builds. There are 5 files:

  • System.Data.SQLite.exp
  • System.Data.SQLite.lib
  • SQLite.Interop.065.lib
  • SQLite.Interop.065.exp
  • SQLite.Interop.065.dll

All of these are needed. The best bet is to include them in your project with the compile action set to “None” and “Copy if newer” for copying.

All things done right… tada! It works. But what is really going on? What we are really doing is tricking the .NET Framework into loading the full framework assembly instead of the compact assembly. This has to do with how assemblies are resolved. Without going into a lot of detail, assemblies are resolved in this order:

  1. The GAC
  2. The probing path of the current application as specified by the AppDomain (like the bin directory)

That’s the very brief description, and a bit simplified. For the real down-and-dirty, check out this MSDN article. When the runtime looks in the GAC, it finds the assembly we installed. The assembly hash isn’t the same, but since the reference is retargetable and the strong name keys match, the runtime loads it from the GAC anyway and doesn’t even bother looking at the one we actually referenced. When the application is deployed to a mobile device, the assembly is not in the GAC, so the runtime loads the one we referenced, which should have been copied to the output. As for platform invoke, since we copied both the Win32 and the ARM native libraries, P/Invoke resolves their location from the deployed directory.

Neat stuff, but seemed harder than it could have been.

For what it’s worth, this is also how SQL CE works. However, it does its magic by putting the desktop version of the assembly in the GAC and shipping the compact version with the application.

Posted by vcsjones | 1 comment(s)

C# and VB.NET

I don’t like to think of myself as a “C# Developer”. I prefer to think of myself as a “.NET Developer”, mainly because C# is not the only tool I know how to use. However, I primarily use C# as a language of choice because that’s the majority of what most .NET developers use, and I never had a reason to know or understand VB.NET. It’s not to say I was clueless about VB.NET, I’ve glanced at the syntax and I felt that it was similar enough to C# that I didn’t really need to get into the dirty details. Nor do I want to spend the time on comparing the two. There have been countless blog posts on that already.

Now, I chose to write a Compact Framework application in VB.NET. Some of my co-workers looked at me like I was nuts. That’s one thing I’ve really never gotten: why does the general C# crowd hate VB.NET? And when I say hate, I mean hate – that’s at least the sense I get from them. I’ve asked, and one answer was “It’s too wordy and verbose,” but does that really justify hating it?

Now I ask those C# developers: can you, with a pen and paper and no references, write a property in VB.NET? How about a class with a generic? Maybe a generic with multiple type constraints? Subscribing to an event? You can see where I am going with this. I once did a day of consulting, helping someone port a .NET 1.1 application to the latest stuff. It went smoother than they expected once I explained the difference between a “Web Site” and a “Web Application”. They had some time left, so they asked me, “What’s new in VB.NET?”


Well, there are generics of course, but I struggled to remember the syntax for simple things, like a property. Also, the editor for VB.NET is slightly different in how it handles IntelliSense completion, so I botched that a few times. It was rather embarrassing. I decided then and there that there was zero excuse for me, as a consultant, not to know VB.NET as well as I know C#. To do that the most effective way possible, I needed to write a real application in it.

I’m comfortable with VB.NET now – in fact, in some ways I prefer it to C#. I am really looking forward to the VB.NET “catch-up” in VB 10. One thing I noticed is that C# has a lot of syntactic sugar that VB.NET lacks, like auto properties. The killer for me right now is that a lambda expression must be a Function instead of a Sub.

I have a new appreciation for VB.NET and VB.NET developers. Maybe I’ll do a code camp or user group presentation in it sometime :-).

Posted by vcsjones | 2 comment(s)

Nastiness with Data in the .NET Compact Framework and the spiral downward (With Updates!)

The First Try

As of late, if you for whatever reason on earth follow me on Twitter, you might’ve picked up on the fact that I am working on a Windows Mobile project, written in VB.NET. Here is my experience working with a SQL database in the .NET Compact Framework – hopefully it will save someone some headaches.

I naturally wanted to use SQL CE for a database. It comes out of the box, right? It’s made by the SQL team, which I usually have no problems with. SQL Server has been a solid Microsoft product for years, so let’s use that, I mean Visual Studio was “nice” enough to install it for me.

Well, on a previous application we used SQL CE, and it immediately crashed the application when we ran a query. Let’s look at some numbers. The database contained 2 tables, one foreign-keyed to the other: TableA contained roughly 45,000 records, and TableB roughly 4 million, with a foreign key to TableA. The database would be read-only – no inserts, updates, deletes, nothing of the sort. TableA contained 2 columns: an identity column and an nvarchar lookup that was 250 characters wide (500 bytes for those counting). TableB contained an identity, a reference to TableA, and 5 nvarchar columns that were also 250 characters.

Not the most complex of datasets. Also note that I am generalizing names of tables and columns to avoid being specific about a project.

Now, the device was admittedly underpowered. We’re talking a 250 MHz ARM processor with 16 MB of RAM, 8 MB of which was being used by the OS and a Bluetooth loader. That left us 8 MB of memory. Not exactly a ton, right?

SQL CE 3.5 flat out ran out of memory. Poof, gone. Well, we tweaked some settings in the connection string, like pool size, max working set, etc. No dice. The database was already given to me the way it was – was TableA really needed? Let’s merge the two tables, the thought being that the JOIN was killing it. Nope, didn’t make things better. What was the query? A lookup by an ID:

    SELECT Id, FirstName, LastName, Location FROM TableB WHERE LookupId = @LookupId

Yeah, pretty simple. It should only return one record, too. It was accessed using a data reader to avoid the overhead of DataSets, LINQ, etc. Here’s where things got interesting. LookupId was an nvarchar, but was really just a string of digits, like 3487982429803758 – too big for an Int, but small enough for a BigInt. The data had leading zeros in it, but regardless, each number would be unique. So let’s change the column to a BigInt. After the conversion, though, the data just didn’t look right – some numbers were flat out converted wrong. Here is how I converted it:

    ALTER TABLE [TableB]
        ADD [LookupIdTemp] BIGINT NULL
    GO
    UPDATE [TableB] SET [LookupIdTemp] = CONVERT(BIGINT, [LookupId])
    GO
    ALTER TABLE [TableB]
        DROP COLUMN [LookupId]
    GO
    ALTER TABLE [TableB]
        ADD [LookupId] BIGINT NULL
    GO
    UPDATE [TableB] SET [LookupId] = [LookupIdTemp]
    GO
    ALTER TABLE [TableB]
        DROP COLUMN [LookupIdTemp]
    GO
    ALTER TABLE [TableA]
    GO

Not very fast, but I was able to execute it from a desktop machine and let it run for a few minutes (sp_rename doesn’t exist in SQL CE, for those unaware). I tried it on some sample data, and it seemed to work fine. Not so. Here is sample input and output showing where things didn’t work and where they did. See if you notice a pattern.

Input          Output
5              5
10             10
150            150
1000           1000
05             0
050            5
001000         10
010            1
0000000010     0

See a pattern emerging? It’s the leading zeros. SQL CE does not convert a string to a BIGINT correctly if it has leading zeros. Simple as that. For every insignificant zero on the left, it chomps that many digits from the right.

Input          Output
5              5
10             10
150            150
1000           1000
05             5
050            50
001000         1000
010            10
0000000010     00000001

See now? That’s pretty pathetic. As expected, if you do this with a full SQL engine, it works correctly. Also, SQL CE only seems to do this for BIGINT – the conversion works just fine for INT, SMALLINT, TINYINT, you name it. I got in touch with the SQL CE team, and they acknowledged the bug. They were also kind enough to provide a workaround: put a sign in front of the digits, i.e. “+050” is properly converted to 50.
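Applied to the conversion script from earlier, the workaround looks something like this – a sketch, assuming SQL CE’s + string concatenation behaves here as it does in full SQL Server:

```sql
-- Prefixing an explicit sign sidesteps the leading-zero truncation bug:
UPDATE [TableB] SET [LookupIdTemp] = CONVERT(BIGINT, '+' + [LookupId])
GO
```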

Even after all of that, the query flat out didn’t run on the device. We started to worry that the device just wasn’t powerful enough, so we tested it on a slightly better device, a 550 MHz XScale with 64 MB of RAM. It ran, but it was slow. Painfully slow. Now we really started getting worried. Was our goal just too much for a CE device running a .NET application? Were we going to have to move to something native, like eMbedded Visual C++?

Well, what about indexes? Did we have our LookupId column indexed? You bet – that’s the first thing you do to a lookup column with a ton of rows. It didn’t seem to help much, either. Usually indexes perform miracles, so what were we doing wrong?

We went through all sorts of suggestions, like using the Seek operation and downgrading to SQL CE 2.0 (you never know), and nothing worked.

Well, let’s try SQLite, at the suggestion of a hardware vendor. We found a nice app on CodePlex that even converted the database for us. SQLite is a native implementation, and we had a tricky time finding a managed wrapper for it, but we did. The conversion took ages, but the managed wrapper was very clean and written in ADO.NET style. Let’s give it a try.

It came back instantly. I thought something had gone wrong, or that the conversion hadn’t brought over all of the rows. A count check confirmed that all of the millions of rows were there. Though I had tried a LookupId of “3” – what happens when I try 10000000000000?

It came back 10 seconds later. Ah, but wait – we hadn’t pulled the index over. Let’s add the index and try again. The size of the database grew significantly, from 400 MB to 750 MB, but the database was on a CompactFlash card, so we weren’t too concerned about that. Aside from the size, the query now always returned instantly (or at least sub-second).

I don’t know exactly why SQLite is infinitely better, but I think SQL CE implements indexes in a strange way – the size of the database didn’t seem to change much in SQL CE when the index was added. If anyone wants to tell me how indexes work in SQL CE, I’d be happy to know – or to tell me if I was doing something wrong. Some background research seemed to affirm my point about SQL CE: it’s very bloated for the Compact Framework.

One last little annoyance: deploying the SQL CE runtime is not as simple as including an assembly in your project – it requires a full CAB installation. Well, that isn’t exactly correct: a SQL CE MVP was kind enough to tell me that it can be distributed with the application. However, I was not able to find anywhere on MSDN, or many other places on the internet, that explains how to do that. What files do I need? Where do I get them? Microsoft wants you to use the MSI so they can update it through Windows Update.

That left me with a pretty bad taste in my mouth for SQL CE. Given that the only differences between the SQL CE and SQLite code were a few class names, I didn’t think I ever wanted to use SQL CE again.

The Second Try

I was later working on a project that needed to persist cached data somewhere – this time a WPF-based desktop application. SQLite came to mind, but it had one flaw that annoyed me: different assemblies for x86 and x86-64. I could have two different builds, or late-bind, or do something clever, but I thought it was time to try SQL CE again – I don’t plan on having more than 100 rows in any given table anyway. SQL CE isn’t much better here, either: again, it isn’t redistributable in any form other than an MSI, or at least the documentation doesn’t recommend it. But at least I had the freedom to just distribute the SQL CE MSI for the correct platform, and the app would just work.

You might be thinking, “Why not just force your application to run as x86 all the time and avoid the x86-64 problem?” Well, I am a 64-bit junkie. If it’s a managed app, it had damn well better work on the OS’s preferred architecture – a pet peeve of mine, I suppose – unless there is a good reason to force x86, like performance problems with 64-bit (remember, the x86-64 JITter is slower), or moving a legacy application to at least work properly on x86-64 as a starting point.

SQL CE is actually an OK database platform for the desktop. There are some quirky bugs, like the BIGINT issue, but it’s not bad, and I suspect it will improve. However, I will never use it for a Compact Framework application again, regardless of how big I think the data is going to be.

Unit Testing Insanity

Now, I do unit testing whenever I can. Sometimes a client prefers I don’t, due to time constraints, or doesn’t see the benefit. I push for it, but there is only so much I can push. The previous client was one of those, so I didn’t notice this issue until recently.

If you’ve never written a Compact Framework application, there are a few things to know. First, an assembly targeted at the Compact Framework is fully capable of being referenced by full .NET Framework assemblies, which makes sense. The stranger part is that a Compact Framework project can reference a full .NET Framework assembly – it just may not run properly. With that in mind, here is a sample solution setup:

  • ProjectFoo.Data
  • ProjectFoo.Business
  • UnitTests.ProjectFoo.Data
  • UnitTests.ProjectFoo.Business

All of them are Compact Framework class libraries in VB.NET. The UnitTests projects need to be Compact Framework projects because, for whatever reason, a full-framework VB.NET project cannot reference a Compact Framework VB.NET project (though C# can do it). My reasons for using VB.NET are worth another blog post later.

Regardless, things were going smoothly with this approach: NUnit has no problem working with the Compact Framework, my CI server runs the tests fine, ReSharper runs them – seems like a good setup. Time to add some data! I went with SQLite, since this is a Compact Framework application. I referenced it and added all of the native dependencies – at this point it’s like clockwork for me. My first test exercises a class that creates the database for the first time. Here was my result:

System.IO.FileLoadException: Could not load file or assembly 'System.Data.SQLite, Version=, Culture=neutral, PublicKeyToken=1fdb50b1b62b4c84, Retargetable=Yes' or one of its dependencies. The given assembly name or codebase was invalid.

Huh? Let’s check the bin directory and make sure it’s there. Yep.

I knew this wasn’t going to be an easy one. I’ve dealt with some bad assembly-loading problems before, back when I was writing a multi-AppDomain project, and knew the best place to start was the Fusion log. It happened to already be enabled from previous problems like this, which is lucky, I guess. Here’s what was in the log:

*** Assembly Binder Log Entry  (9/28/2009 @ 12:33:27 AM) ***

The operation failed.
Bind result: hr = 0x80131047. No description available.

Assembly manager loaded from:  C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorwks.dll
Running under executable  C:\Program Files (x86)\JetBrains\ReSharper\v4.5\Bin\JetBrains.ReSharper.TaskRunner.exe
--- A detailed error log follows. 

=== Pre-bind state information ===
LOG: User = VCSJONESDC\Kevin Jones
LOG: DisplayName = System.Data.SQLite, Version=, Culture=neutral, PublicKeyToken=1fdb50b1b62b4c84, Retargetable=Yes
LOG: Appbase = file:///C:/development/Thycotic/SecretServer.WindowsMobile/trunk/UnitTests.SecretServer.WindowsMobile.Data/bin/Debug
LOG: Initial PrivatePath = NULL
LOG: Dynamic Base = NULL
LOG: Cache Base = NULL
LOG: AppName = UnitTests.SecretServer.WindowsMobile.Data
Calling assembly : SecretServer.WindowsMobile.Data, Version=1.0.3557.41832, Culture=neutral, PublicKeyToken=null.
LOG: This bind starts in default load context.
LOG: No application configuration file found.
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v2.0.50727\config\machine.config.
ERR: Failed to apply policy (hr = 0x80131047).
ERR: Unrecoverable error occurred during pre-download check (hr = 0x80131047).

Not too much help. The HR is the same as the exception’s, which I double-checked like so:

    [System.Runtime.InteropServices.Marshal]::ThrowExceptionForHR(0x80131047)

Yes, I used a PowerShell script to test this out. Once you get used to it, I find it very handy for such things.
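As an aside, if the Fusion log isn’t already enabled on your machine, it’s just a few registry values – a sketch; the log path is any directory you like, and it must already exist:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion]
"EnableLog"=dword:00000001
"ForceLog"=dword:00000001
"LogFailures"=dword:00000001
"LogPath"="C:\\FusionLogs\\"
```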

So, let’s start digging into the IL of SQLite using our good friend ILDASM – Reflector doesn’t quite show one of the inner workings we want. If you look at the manifest of SQLite, it’s got this oddball in there:

    .module extern 'SQLite.Interop.065.DLL'

Reflector simply shows it as a comment at the top of the manifest disassembly. So what on earth is SQLite.Interop.065.DLL?

Other assemblies contain the same kind of .module reference, and interestingly they can point to unmanaged libraries, like kernel32.dll. This is not the same as the unpopular .NET module – a .NET module is, in short, an assembly without a manifest. So what is SQLite.Interop.065.DLL? It’s not a valid Win32 PE or PE+ file, either. A look at it with a dependency walker tells us it’s compiled for the ARM architecture and its subsystem is Windows CE 2.0+.

Here’s a screen grab.


For those curious, coredll.dll is the Windows CE equivalent of kernel32.dll, but since I am using Windows 7 it obviously won’t be found on my desktop.

What does this mean? Why is a managed assembly referencing an unmanaged assembly? Well, it isn’t exactly a “reference”. Whenever you platform invoke into a DLL, the DLL is added as a “module” in the manifest. Let’s try it. Make a new C# project and throw in this code:

    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.LPTStr)]
    internal static extern string CharUpper
        (
            [In, MarshalAs(UnmanagedType.LPTStr)] string lpsz
        );

Now look at our manifest in ILDASM; we have this:

    .module extern user32.dll

I’m not entirely sure why it was quoted in the first example and not in the second, or if it has any significance.

The reason for the exception is now pretty obvious: we’re trying to platform invoke into a library that my processor is incapable of handling.

Makes sense, but I am a bit bummed. Is SQL CE a better solution? Since SQLite is retargetable, can I use different versions of SQLite depending on the platform? I’m not sure right now, but I’ll be sure to follow up. If you have a working solution for this I would love to hear it, too.

Posted by vcsjones | 5 comment(s)

Thoughts on Office 2010 Technical Preview

While it leaked out via BitTorrent, as most things do these days (erm, Windows 7 “RTM”, as people are calling it), I decided to wait until it was available to me on Microsoft Connect. Yesterday I got the chance to download and install it, and I wanted to share my thoughts on it as well.

Native x64

Yes, Office 2010 is now available as a native x64 application. This actually surprised me – that’s a huge undertaking, and I didn’t immediately see the benefit. However, people who spend their lives doing analysis and equations in Excel on an x64 operating system will appreciate it. One thing that does interest me is how this affects third-party tools: a 64-bit program cannot load a 32-bit library, and Office is inherently COM oriented, so I wonder how this will play with plug-ins. Most virus scanners offer some plug-in for scanning a document as it is loaded, and macro-enabled documents are a long-time distributor of viruses. Office 2007 took a small bite out of that by changing the extension for macro-enabled documents as well as tightening the security.

Initially I was also worried about some of the downsides of x64 – mainly pointer bloat. Pointers on x64 are 8 bytes wide, as opposed to 4 on x86, which can lead to additional memory use. I didn’t see any obvious memory issues in any of the Office programs, though.

Outlook 2010

Right off the bat I noticed they fixed something that was really annoying: if you use Outlook Anywhere (Exchange over HTTP), Outlook would not remember your password. Every time Outlook started, you had to enter your password – no choice. My Windows password is pretty strong, so someone would have to get into my Windows account before they could read my email. However, since Outlook forced me to type my password every time, it also forced me to create a somewhat weak password – at least one that I could remember. Since we also have Outlook Web Access, someone could have gotten into my email through the web client if they guessed that password. Now I am able to keep my Exchange password complex and have only specific devices – which are always secured with their own passwords – remember it.

Aside from that addition, they also did a little interface revamping, specifically the addition of the Ribbon. It looks more polished and allows you to access more features. Something that also irritated me in Office 2007 was the inconsistent UI across the Office suite – notably, Outlook and Visio were missing the Ribbon. Both have been fixed.

Outlook 2010 does a better job of organizing your email now as well, as opposed to a linear timeline of when it was received. It takes a bit of a GMail approach by grouping related emails into a conversation-like format. It was a bit confusing at first, so here is how I think it works. To the left of each group of messages is an expanding arrow that lets you view the conversation. Initially, it only shows the messages in the Inbox. If you click it again, it includes the entire conversation – messages from Sent Items and the Inbox. Once you get the swing of it, it makes finding emails a lot easier. If you get a lot of email in one day, you can catch up on what was already said much more quickly. Of course, if the idea of grouping emails disgusts you, you can go back to the linear timeline – or any other sorting mechanism.

Something else that is new is the idea of Quick Steps – and I really like this. It allows you to create predefined actions on a message, say “Move to folder Y and mark it as read” or “Forward this to Tom and send it”. Here’s the kicker: you can assign keyboard shortcuts to them as well. My boss wanted this feature so badly he even took to writing a VBA script that did it for him in Outlook 2007.

Maybe it’s just me, but it appears that some basic icon artwork is missing in the Technical Preview and is substituted with orange dots.

Small feature, but cool: it now tells you how much of your space quota you are using. Maybe 2007 did this too, but I couldn’t find it. My quota is 500 MB and I am using 80. This probably only works for Exchange, possibly anything IMAP – I’ll have to test it.

Composing emails hasn’t changed that much. Aside from the Ribbon, some other small features, like “insert a screenshot”, are now there as well.

Stability overall seems good. My mailbox isn’t huge, so I can’t really vouch for performance, but it is still a COM-based application, and thus at its root single-threaded. It didn’t hang on me, though – I’ll update if it does.

One final thing, which I hope they fix, is the Outlook Today screen. That screen hasn’t changed since Outlook 97 or 2000, and it definitely feels worn and old next to the slick new Ribbon UI. There’s so much potential in that screen, but it isn’t being used.

Perhaps there are some additional things I missed. I can’t test it against Exchange 2010, so I don’t know whether using Exchange 2010 unlocks some additional goodies.

Word 2010, Excel 2010, PowerPoint 2010

Word and Excel are important applications to many people, and unfortunately, at least so far, I can’t see anything different. The UI is updated to the new Office tab rather than the Office button, but the look and feel are the same. Startup is lightning fast, though – I don’t even have time to see the splash screen. It also seems to handle large documents better without getting sluggish like it used to. Maybe that is a newfound power of x64, but I like it!

I’ll keep playing around and update if I find anything new.


One of my buddies pointed out to me that Office 2010 has better support for the open document formats. Though 2007 also supported them, many people called that support incomplete, and it often saved files in a way that only Office could open (thus defeating the point of being “open”). I would be a little surprised, but not completely, if this turns out to be a 2010-only feature and not also an update to 2007 by means of a service pack.

I’m not sure if it is because it is coming with Windows 7 or Office 2010, but there are a few new fonts as well.

Posted by vcsjones | with no comments

ASP.NET AJAX 4.0 Client Template Rendering – The Observer Pattern - Part 2

We last left off here, with a simple introduction to AJAX 4.0 Client Template Rendering. I’d catch up on that one, if you haven’t read it, before you read this one.

Our last example was pretty simple: we took a chunk of JSON and displayed it in an unordered list with simple JavaScript and the AJAX Framework. Let’s take this a step further by actually manipulating the data. The framework allows manipulation of data, and it does this with the observer pattern. The definition of the observer pattern is a bit loose, but Microsoft is advertising this as “a true implementation of the observer pattern” rather than a notification pattern that uses constructs like events. This leads to a new chunk of the AJAX Framework, Sys.Observer. We need to understand how this pattern works before we can manipulate data.

This allows the AJAX Framework to watch, or “observe”, certain things for changes. That can be a DOM element or any JavaScript object; in our case, we are working with a collection. The simplest means of working with our collection is the Sys.Observer.add() method. It adds an item to a collection, and also notifies the AJAX Framework that the collection was changed. If the AJAX Framework has something bound to our collection, like our unordered list, it knows it needs to update the list.

Take a look at our observer in action:

   function add() {
       Sys.Observer.add(descriptions, { Name: "Cat", Description: "Cats make a meowing sound" });
   }

This code sample is built on the code in the previous blog entry.

When we call add(), we are adding a new item to the descriptions collection. Not only does this add it to the collection, it also notifies anything that is using the descriptions collection that it was changed, and the AJAX Framework correctly “rebinds” the UI – in our case, the unordered list.

Sys.Observer has a lot of the additional functionality you would expect for manipulating a collection – remove, removeAt, and so on. The full details are outlined on MSDN.

If we add an input element that calls our add function, we can see it in action:

   <input type="button" value="Do Add" onclick="add()" />

Neat, huh? It took little effort to get us this far, and the observer pattern is pretty powerful. Using it, we can also create our own listeners and react appropriately when the observer makes a change to an object – and it goes far beyond working with collections.
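To get a feel for what the framework is doing under the hood, here is a minimal sketch of the observer pattern in plain JavaScript. To be clear, this is not the Sys.Observer implementation – the names ObservableCollection and addListener are made up for illustration. The idea is simply that an observable collection notifies its registered listeners whenever it changes:

```javascript
// A minimal observer sketch (illustrative only -- not the Sys.Observer source).
function ObservableCollection() {
    this.items = [];
    this.listeners = [];
}

// Register a callback to be invoked whenever the collection changes.
ObservableCollection.prototype.addListener = function (callback) {
    this.listeners.push(callback);
};

// Add an item, then notify every listener of the change.
ObservableCollection.prototype.add = function (item) {
    this.items.push(item);
    var self = this;
    this.listeners.forEach(function (listener) {
        listener(self.items, item);
    });
};

// Usage: a "UI" listener that reacts whenever the data changes,
// much like the DataView rebinding our unordered list.
var descriptions = new ObservableCollection();
descriptions.addListener(function (items, added) {
    console.log("Collection changed, now " + items.length + " item(s); added: " + added.Name);
});
descriptions.add({ Name: "Cat", Description: "Cats make a meowing sound" });
```

The framework’s real version works on ordinary arrays without a wrapper type, which is why Sys.Observer.add(descriptions, item) takes the collection as its first argument.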

We’ll dig further into the observer pattern in the next post.

Posted by vcsjones | with no comments

ASP.NET AJAX 4.0 Client Template Rendering

Recently I have been dabbling with the ASP.NET 4.0 Framework. There were two things that immediately got my interest:

  1. An actual new version of the CLR. More to come on that later…
  2. AJAX 4.0

Now, I can’t say I was pumped and excited about it, but a certain element caught my attention, and that is Client Rendering. This basically allows you to bind simple HTML to JSON or a plain JavaScript array. The idea is interesting, but I wasn’t sure how well it would play.

To get started, I looked here: This is a brief overview of the application. Not entirely informative, but it got me started. I fired up Visual Studio 2010 and began to tinker.

To start off, I wanted to build something simple – no web service calls, just the basics; I’m a slow learner. So what exactly am I doing with this? Given a JSON object with some information in it, I want to display it on the page. There have been a few ways of doing this in the past: jQuery has UI components for adapting JSON objects to tables, ordered elements, etc. The Microsoft implementation isn’t too different. It defines a binding for some sort of repeatable HTML element – tables have repeating rows, unordered lists have repeating list items, and so on. Unordered lists are pretty simple, so let’s start there. Let’s say we are given a JSON object similar to this:

   var descriptions = [{ Name: "Foo", Description: "Foo is a great product." }, { Name: "Bar", Description: "Bar is OK, but not great."}];

Pretty simple. We have two objects in a collection, each with a Name and a Description property. Now we want to show them in an unordered list.

First, to get started, we need a ScriptManager in our page. This wires in all of Microsoft’s AJAX scripts for us.

   <asp:ScriptManager ID="MyScriptManager" runat="server">
       <Scripts>
           <asp:ScriptReference Name="MicrosoftAjaxTemplates.js" />
       </Scripts>
   </asp:ScriptManager>

I just plopped this right after the <body> tag. Notice the script reference works simply by name. One of the new features of AJAX 4.0 is that Microsoft broke the AJAX Framework JavaScript apart into different libraries. This allows you to selectively use the libraries you need and avoid large chunks of script you don’t. In this case, we need the AJAX Templates script.

The AJAX functionality we will be using is an attached DataView. The scripting library works by attaching itself to an HTML element – in our case, an unordered list. The attachment is done with a simple attribute on the element, sys:attach, specifying what you want to attach. Before we can go there, we need to tell our browser certain things, mainly namespaces. The body tag will contain certain namespace imports, similar to how WPF XAML imports namespaces. We want the DataView and the Sys namespaces, so we’ll throw those in there:

   <body xmlns:sys="javascript:Sys"
       xmlns:dataview="javascript:Sys.UI.DataView"
       sys:activate="*">

There is one additional attribute in there, sys:activate; we’ll touch on that in a bit. So now we want to build our unordered list and attach a dataview to it. Let’s put together our unordered list and go through it.

   <ul id="descriptionsList" sys:attach="dataview" class="sys-template" dataview:data="{{ descriptions }}">
       <li>
           <h3>{{ Name }}</h3>
           <div>{{ Description }}</div>
           <hr />
       </li>
   </ul>

Notice the sys:attach attribute – we are attaching a dataview to our unordered list. The other important attribute is dataview:data, which is basically what data you want to bind to; here it is bound to the descriptions JSON object we declared. Finally, we have a list item that is our “template” for each of the items. We denote a binding using double curly braces, {{ }} – so {{ Name }} binds to the Name property in our JSON, and so on.
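Conceptually, the {{ }} binding works like placeholder substitution: the template markup is repeated once per item, with each binding replaced by the matching property. Here is a toy version in plain JavaScript – purely illustrative, not how the DataView is actually implemented (the real one clones DOM nodes and supports live rebinding), and renderTemplate is a made-up name:

```javascript
// Toy template renderer: repeats the template for each item, replacing
// {{ Prop }} placeholders with that item's property values.
function renderTemplate(template, items) {
    return items.map(function (item) {
        return template.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, prop) {
            return item[prop];
        });
    }).join("");
}

var descriptions = [
    { Name: "Foo", Description: "Foo is a great product." },
    { Name: "Bar", Description: "Bar is OK, but not great." }
];

// Logs one <li> per item, wrapped in a <ul>.
var listItems = renderTemplate("<li><h3>{{ Name }}</h3><div>{{ Description }}</div></li>", descriptions);
console.log("<ul>" + listItems + "</ul>");
```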

There is one little piece left, and that is the sys-template class. This is just a minor style that sets display: none. It hides the entire element, and the AJAX Framework sets the display properly once it has finished loading all of the data and rendering it.

Altogether, it renders something like this:


Altogether, this was all of my source:

   <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ClientTemplates.aspx.cs" Inherits="DataControls.ClientTemplates" %>
   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
   <html xmlns="" >
   <head runat="server">
       <style type="text/css">
       .sys-template
       {
           display: none;
       }
       </style>
       <script type="text/javascript">
           var descriptions = [{ Name: "Foo", Description: "Foo is a great product." }, { Name: "Bar", Description: "Bar is OK, but not great."}];
       </script>
   </head>
   <body xmlns:sys="javascript:Sys"
       xmlns:dataview="javascript:Sys.UI.DataView"
       sys:activate="*">
       <form id="form1" runat="server">
           <div>
               <asp:ScriptManager ID="MyScriptManager" runat="server">
                   <Scripts>
                       <asp:ScriptReference Name="MicrosoftAjaxTemplates.js" />
                   </Scripts>
               </asp:ScriptManager>
               <ul id="descriptionsList" sys:attach="dataview" class="sys-template" dataview:data="{{ descriptions }}">
                   <li>
                       <h3>{{ Name }}</h3>
                       <div>{{ Description }}</div>
                       <hr />
                   </li>
               </ul>
           </div>
       </form>
   </body>
   </html>

And I didn’t write a single line in the code-behind, either.

This demonstrates the basic functionality of the binding. Next, we’ll look at how to bind to dynamic data from a web service call and how to manipulate the data.

Update: If you want to download a Visual Studio 2010 project, I’ve uploaded it here: It looks like Community Server is blocking some of the code snippets, thinking they are XSS.

Posted by vcsjones | 1 comment(s)