March 2007 - Posts
I have to admit I slept rather well last night. Train traffic diminished greatly after 22:00, and the windows had double glazing as well, helping to reduce the noise.
What really ensured my night’s rest, however, were the earplugs. Advice to newbie business travelers: always bring earplugs.
The hotel did not have bacon for breakfast.
I finished my day 1 report last night, so I used the extra time this morning to post that report and catch up on my email. Apparently, some spamming company or other has discovered the contact form on my blog. I might have to disallow anonymous contacts soon, since the deluge of spam is increasing daily.
Improve database application performance with SQL Server Service Broker
This session is hosted by Bob Beauchemin. He looks like my boss’s twin brother. Really bizarre.
This session is one that I went to because there was nothing else that interested me, but now I am glad I went there.
It was 100% code demo. T-SQL code, to be precise. It showed how you can design your database-enabled logic to work with SQL transactions. The main advantage is that your applications can keep on working even if one or more of the back-end servers are offline.
This technology uses built-in message queues that allow for much faster transactions than what you would get with the DTC (Distributed Transaction Coordinator).
These queues turn database actions into asynchronous operations.
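The underlying pattern is easy to sketch outside of T-SQL. This is just a toy Python analogy of the queueing idea (Service Broker itself does this with durable queues inside the database, via CREATE QUEUE, SEND and RECEIVE): the front end enqueues work and returns immediately, and the back end catches up whenever it comes online.

```python
import queue

# Toy stand-in for a Service Broker queue (the real thing is durable
# and lives inside the database).
orders = queue.Queue()

def place_order(order_id):
    # Front end: enqueue the work and return at once,
    # even while no worker is running.
    orders.put(order_id)
    return "accepted"

def process_pending():
    # Back end: once it comes online, drain everything that
    # accumulated while it was offline.
    done = []
    while not orders.empty():
        done.append(orders.get())
    return done

statuses = [place_order(i) for i in (1, 2, 3)]  # back end "offline"
processed = process_pending()                   # back end catches up
```

The decoupling is the whole point: the front end never blocks on the back end being reachable.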
This demo was very clear and concise, and one of the better code demos I have ever seen. Nuff said.
ADO.NET vNext – LINQ, Object Services and the Entity Data Model
Originally I was going to go to a Workflow Foundation demo by Ingo Rammer, but I had already seen a beginner’s intro once at tech-ed.
I knew literally nothing about vNext and object services, so another code demo by Bob Beauchemin might be a better use of my time.
vNext, Object Services and the Entity Data Model are going to be part of Orcas.
The Entity Data Model is a new data model accompanied by three XML schemas for defining and mapping the data flow from a custom data store (like the registry, the GAC or whatever) to ADO.NET. The model contains hooks for third-party provider writers.
Object Services is a technology that allows you to work with object databases: databases containing data objects that can have inheritance and other object-oriented properties.
I have to admit that I still don’t know that much about it, let alone understand it, but now at least I know it exists.
This session is hosted by Bruce Payette, and is about Windows PowerShell.
This is actually an IT session, not a development session.
PowerShell is the new .NET-enabled command line interface. From the demos it looked really cool. It can use all available .NET components, so it allows you to do basically anything you could do with a Unix or DOS command shell, as well as anything you can do with .NET.
It is also fully customizable via plug-in modules.
To show the power of PowerShell, there were a couple of interesting demos: a Space Invaders program, and some really interesting uses of plug-in modules, like an extension that allows you to ‘cd’ into the registry to get information.
PowerShell cannot do anything that e.g. C# could not also do, but because you are writing shell scripts, it is much easier to string different programs together to get something done.
Another nicety is that PowerShell accepts both Unix commands (ln, rm, …) and DOS commands (dir, copy, …).
One important improvement over standard Unix-style scripting is that you can pipe .NET types into other programs without having to format and parse everything to and from text.
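The point generalizes beyond PowerShell. Here is a contrived Python comparison of the two styles (PowerShell gets the second style for free, because its pipeline carries .NET objects rather than text):

```python
# Text pipeline: every stage has to re-parse the previous stage's output.
lines = ["app.log 1024", "sys.log 2048"]
big_from_text = [line.split()[0] for line in lines
                 if int(line.split()[1]) > 1500]

# Object pipeline: stages pass structured values; no formatting, no parsing.
files = [{"name": "app.log", "size": 1024},
         {"name": "sys.log", "size": 2048}]
big_from_objects = [f["name"] for f in files if f["size"] > 1500]
```

Both give the same answer here, but the text version breaks as soon as a filename contains a space; the object version does not care.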
At this point I should point out that the room was packed, and the majority of the people in the audience were older sysadmins, and not developers.
Everybody there seemed to find it perfectly obvious that 44/7 equates to 6.2857… That is a sure sign of not being a developer. I would have expected it to be 6. Whatever.
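For what it’s worth, the developer reflex comes from integer division, which most programming languages apply when both operands are whole numbers. In Python terms:

```python
print(44 / 7)   # floating-point division: 6.285714..., the sysadmin answer
print(44 // 7)  # integer (truncating) division: 6, the developer answer
```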
Anyway, PowerShell looks really interesting. So interesting, in fact, that I bought a book about it. It looks like something that can be really useful in a sysadmin role. Since I am going to have to maintain a standalone production line network, it might be worth diving into.
Bruce is an excellent speaker. He also did a book-signing session at the book stand, so I had him sign my copy of ‘PowerShell’. What can I say… I am a geek.
Continuous integration with and without Team System
Another talk delivered by Roy Osherove.
This session is about continuous integration. CI means that every time a check-in is done (or at a fixed time interval), your project is checked out to a build server and built.
This way you can immediately verify whether your changes (or someone else’s) broke the build.
Continuous integration has the advantage that you discover build problems as soon as possible.
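Stripped to its essence, the trigger logic is tiny. A hypothetical sketch in Python (the revision ids and the build callback are made up for illustration; real CI servers hook into the version control system instead):

```python
def ci_step(last_built, latest, build):
    # Fire a build only when a new check-in has appeared since the
    # last build; return the revision we have now caught up to.
    if latest != last_built:
        build()
    return latest

builds = []
# A new check-in arrives: def456 differs from abc123, so a build fires.
rev = ci_step("abc123", "def456", lambda: builds.append("def456"))
# Nothing changed since: no build this time.
rev = ci_step(rev, "def456", lambda: builds.append("again"))
```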
Roy showed several tools that can help you to automate a build:
· NAnt: works for smaller projects, but can be tedious to set up if you are not an XML masochist.
· MSBuild: same problem. XML hell.
· Team Foundation Server MSBuild: better, but very limited in the current version of TFS. The Orcas TFS build utility will be much better, though.
· FinalBuilder: a third-party tool that allows you to configure an automated build graphically. The very nice thing about this tool is that it comes with dozens of configurable actions, like deploying build output, burning DVDs, running unit tests, … This was the speaker’s favorite tool.
· Visual Build: comparable to FinalBuilder.
Having such a build system in place is only the first step, because now you’ll have to set it up so that it starts by itself with each change.
There are several tools for this:
- CruiseControl.NET
- Draco.NET
- Something that I cannot read in my notes anymore
These were not really shown, but their use was explained.
Continuous building is also possible with TFS, but it takes some custom programming. TFS supports third-party plug-ins through .NET interfaces. With a custom plug-in you can register for TFS events, like ‘check-in complete’ and other such events.
These events can be used to let the plug-in trigger a check-out and a rebuild. This takes quite a bit of programming, but there are already a couple of open-source plug-ins that you can use.
This session also came with a song, though it was not as nice as the one from his first session. It was very interesting, and convinced me of the necessity of having an automated build system in place if you are on a project with more than one other programmer.
Building WCF based services with WF enabled business logic
Another session delivered by Christian Weyer.
As with his previous sessions, he again began with the ABC story, which I’d heard four times by now. Sigh… But I digress. After 15 minutes the session really took off.
Thinking about it, it would make perfect sense to implement WF functionality in a WCF service. After all, if you start a custom workflow, you’d want someone to be able to interact with it, no?
As it turns out, the current releases of WCF and WF make this very hard to do, and require a lot of custom programming.
Chris showed how you can do it, and demonstrated how his multimedia demo project implements it.
He also showed how this is implemented in Orcas. Luckily, WF and WCF are perfectly matched in the Orcas release of .NET, so that bodes well for the future. One of these days, Chris will upload his demo project to his blog so that we can all look at it. It uses WPF, WCF, WF, SQL Server and some other stuff.
End of the day
The conference is over. I decided not to stay for the closing keynote. I don’t believe in keynotes. At least with the kick-off keynote you have the comfort that you can do something interesting when it is over.
The trip home took 2.5 hours because, surprise surprise, there was an accident.
I was completely exhausted from trying to remember and learn all these new things over the last 2 days.
But all in all it was very interesting, despite the lack of C++ (or any native code at all).
Before going to the conference center ICC in Gent, I checked in to the hotel where I’ll be staying tonight.
It is located in a quiet street with little traffic. I sure hope that I can sleep tonight, because on arrival I discovered that there is a railroad right in front of my hotel room (like 10 meters away).
Funny that they didn’t mention that on the website.
Ah well. I only stay there for one night. My room is very clean, comfortable and the people here are very friendly.
The ICC itself is within walking distance of the hotel, located in a beautiful park. Luckily, it was no problem to register for the event without the printed registration papers that I had forgotten in my car.
For some reason, keynotes generally fail to excite me. Maybe I have just become too cynical after nine years in the consulting business to be affected by a marketing whizbang speech.
It was a typical American keynote with lots of multimedia. Luckily the keynote speaker, David Chappell, managed to fill 1.5 hours of speech without boring me too much. That is an achievement in itself.
David delivered a good story built around what he calls the ‘Brian May principles’:
- Work together.
- Stretch yourself (specialization is for insects)
- Know your tools.
Brian May was the guitarist of the rock band Queen. Apart from being a brilliant musician, Brian published an article in Nature as an astronomy grad student. He also built his own guitar.
David then went on to give a survey of .NET3.0, Forefront, System Center and Longhorn Server, explaining how the aforementioned principles apply to them and to the developers and IT professionals using them.
The keynote conclusion was that we can all be a rock star like Brian May in our own way.
WCF communication patterns
This session is hosted by Christian Weyer. I have seen him before in Barcelona and I hope this session is as good as the previous one.
I like to soak up as much WCF and WF information as I can, because I’ll soon be starting at my new company in the role of sole systems engineer, responsible for systems integration and custom applications.
The beginning of this session is equivalent to the Barcelona intro session on WCF with the ABC story: Address, Bindings and Contract.
Then there was an explanation of the issues involved with WCF application scaling. Basically you have to start using asynchronous programming, threads and callbacks to avoid synchronous behavior.
This was coupled with an explanation of the different channel types in WCF:
- One way. This is a fire-and-forget method of sending messages without waiting for them to be handled. One caveat: sending itself is synchronous, so the call only returns once the message is really gone.
- Duplex. Communication channels are bidirectional, allowing a process to send a message and receive multiple return messages over the same channel. By default this is only possible with session-based bindings.
- Request-reply. A process sends a message to someone else and then gets one reply afterwards.
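To pin down the one-way and request-reply semantics, here is a threadless toy sketch in Python using plain in-process queues (WCF of course does this over real channels and bindings; the message names are my own invention):

```python
import queue

requests, replies = queue.Queue(), queue.Queue()

def one_way(msg):
    # Fire and forget: enqueue and return immediately; no answer expected.
    requests.put(msg)

def request_reply_send(msg):
    # Request/reply, split in two halves so the sketch needs no threads:
    # first the request goes out...
    requests.put("ask:" + msg)

def request_reply_wait():
    # ...then the caller blocks until exactly one reply arrives.
    return replies.get()

def service_once():
    # Toy service: consume one message; answer only if it was a request.
    msg = requests.get()
    if msg.startswith("ask:"):
        replies.put("ack:" + msg[4:])

one_way("log this")          # nothing will ever come back for this one
request_reply_send("price")
service_once()               # handles the one-way message, posts no reply
service_once()               # handles the request and posts a reply
answer = request_reply_wait()
```

Duplex would need a second channel flowing back to the caller, which is exactly why it requires session-based bindings in WCF.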
WCF is extensible at each level of the stack. You can extend channels, protocols, etc., but it is fairly complex.
The demo was very similar to the demo in the architectural session in Barcelona last year. Only this time, Chris focused only on the WCF part of the story.
Visual Studio Team System for DB developers
This is one of those sessions I attend because there is nothing else that sounds interesting. It is hosted by Roy Osherove from Israel. He is a surprisingly witty speaker and interesting to listen to.
Roy gave a quick overview of Visual Studio Team Suite and Visual Studio Team Foundation Server, and how they relate to VSTS for DB developers.
Basically, VSTS for DB devs allows developers to develop databases (with tables, stored procedures, …) in Visual Studio, integrated with TFS.
This means that you have all the support of tasks, bug tracking, unit testing and nightly builds, but applicable to database designs.
This is really something, because there is nothing comparable from any other vendor at the moment. DB design is usually kept separate from the rest of the source tree, and the two usually evolve somewhat independently, leading to integration mismatches and problems late in the development phase.
There were some demos of database unit testing, integrated with a nightly build system.
It was really impressive, and I can easily see the justification for spending a couple of thousand euros on this tool in a DB development environment.
Using this will save you lots and lots of pain.
At the end of the session, Roy took his guitar and sang ‘the database development song’ to the tune of ‘Sound of silence’. Really. No joke.
And it was a nice song, too. It is definitely something people will remember when the folks at home ask, ‘So, how was the conference?’ ‘Oh well, there was this guy that sang a song about databases…’ I can already imagine my wife’s face when I tell her this.
This session, too, was hosted by Roy Osherove.
Regular expressions are something I have not yet used ‘for real’ in any of my projects, but I can see the importance of a good understanding of regular expressions.
For those who don’t know what regular expressions are: a regular expression allows you to define a pattern that describes what a piece of text should look like, and then apply that pattern to some arbitrary input data. You would use regexes to verify input fields for telephone numbers etc.
.NET regex allows you to
- Describe text using patterns.
- Validate, manipulate or parse text.
Two tools that can help you develop regexes are The Regulator and Regulazy, both available for download from www.osherove.com.
Regular expressions can save you enormous amounts of time and money, because you don’t have to write custom parsing code that allows for all sorts of optional fields and parameters.
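As an illustration of the input-validation use case: the pattern below is my own toy example, not one from the session, written here in Python (the .NET Regex class uses essentially the same pattern syntax):

```python
import re

# Optional "+" and country code, then digit groups separated by spaces,
# dots or dashes. Deliberately simple; real phone validation is messier.
PHONE = re.compile(r"^\+?\d{1,3}([ .-]?\d{1,4})+$")

def is_phone(text):
    return PHONE.fullmatch(text) is not None

ok = is_phone("+32 9 123 45 67")   # accepted
bad = is_phone("call me maybe")    # rejected
```

One compact line of pattern replaces a surprising amount of hand-written parsing code, which is exactly the time-and-money argument above.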
The problem with regular expressions is that they can quickly become unreadable for other human beings.
The conclusion of this session: .NET regexes are really powerful and can save you a lot of development and debugging time, but they should only be used where appropriate, because of their inherent maintenance (readability) issues.
I really got a good idea of how I can use .NET regexes. This was a good session.
It ended with another guitar song about regular expressions, though it was not as good as the DB song.
WCF practices from the field
This session was hosted by Christian Weyer, and continued where his previous session stopped. The first 5 minutes and slides were the same. After that it quickly picked up pace with practical WCF development issues.
This time his code demo focused on hosting WF inside a WCF service, but without really looking at the WF part. Rather, the binding via WCF was highlighted.
The crux of his talk was that interop between WCF and other (non-WCF) tools is really not easy. Partly this is because WCF is still v1.
The default bindings are useless in 99% of all real-world scenarios. This is because of the ‘secure by default’ design, which limits virtually everything into uselessness. The good news is that it is very easy to configure bindings, as long as you are aware of which settings you should override immediately.
Some of the settings are very obscure and illogically named. For example, the ReceiveTimeout setting is really the session timeout. There is a tool in the SDK that allows you to configure these settings.
Another of his points is that the Microsoft slogan ‘choose any contract and use it with any binding’ is really a pipe dream. In real-world scenarios there are so many binding parameters to configure that you should instead choose one binding and stick with it. After all, there is little point in changing bindings just for the sake of it.
Binary data transfer across HTTP is the best binding in most cases, and you should only pick something else if there is a pressing need.
The whole point of WCF is message exchange. To allow objects to be sent over the wire, they have to be serialized, which can be done with several serializers:
- BinaryFormatter and SoapFormatter
The preferred one is the latter, as it has been custom-tuned for WCF, though it can only be used if interop with other languages (e.g. Java) is not a requirement. Or was that the NetDataContractFormatter? I can’t remember.
Anyway, there are a couple of important points for WCF servers:
- Ideally they should be self-hosted in an NT service to minimize dependencies. IIS 7 is also an option, and it is said to be stable and robust. Unfortunately, it carries an ugly stigma from the IIS 4 and IIS 5 days, when IIS was one of the most horrid pieces of crap to be connected to the internet.
- Make sure your code runs with non-administrator privileges. This means you have to create the connection points in advance, using a config tool, because registering endpoints with http.sys (which runs in the kernel) is unavailable to non-admin accounts.
To consume a WCF service from the client side:
· Use svcutil or ‘add web reference’ to create an interop assembly, or
· use the .NET interfaces themselves. After all, they ARE the WCF contracts. Note that the first call is VERY expensive, since the whole WCF stack has to be created.
· Avoid session affinity, since sessions can be lost. NetNamedPipe, NetTcp and WspTcp bindings all have an implicit dependency on session semantics.
Some points in conclusion:
- WCF supports almost any communication scenario known to man, but can be tedious to configure.
- If interop is not an issue, use secure transport (http over SSL).
- Security is a feature, and secure interop is a miracle (or rather: a pipe dream).
- WCF has built-in support for WMI, event logging, tracing, …
All in all a very good talk.
VS Orcas, improving end-to-end application-level performance
This session was hosted by Jay Schmelzer, and was going to cover the following things:
· Advances for all VS languages: VB and C#. This one immediately had me miffed. Excuse me!? What happened to C++? Steve Teixeira (Group PM for Visual C++) seems convinced that C++ is getting a considerable amount of improvements as well. Better tell him that was cancelled.
I mean, it’s not like I expect a wide audience to be interested in VC++, but if this talk is to be about Visual Studio, he could at least mention that C++ still exists.
After all, Tech-Ed Barcelona showed that VC++ still has a lot of developer interest.
· .NET multi-platform targeting. This is kinda cool, since it allows VS to create applications for different .NET Framework versions; i.e., the .NET Framework version is no longer coupled to a specific VS version.
· .NET 3.5 design surfaces.
· Web / Office 2007 apps.
This session was going to be a code demo. Cool, I like those.
Are you a Dilbert fan by any chance? I am. Several years ago I read a Dilbert comic (I still have it, but cannot publish it here obviously) that had Dilbert sitting in front of some important customers, demo-ing their latest product.
Dilbert starts the demo and says: “Our product does not yet have a user interface, but if it had, you’d see something here, here, and sometimes here. And then you’d be saying ‘I’d gotta get me some of that’. Any questions?”
Now picture a Group Program Manager in front of an audience of 300. He is going to show the new VS features. He starts a database connection wizard or something. An empty dialog box appears.
He then says ‘This feature does not yet work. But if it did, you’d see this and that, and you would be able to yada yada yada and you would really like it yada yada’.
At that point I picked up my stuff and left. As novel as it was to have a code demo where you have to imagine the actual demo, I figured I’d avoid the rush and leave early to get dinner.
No offense to Jay, who is a good speaker, but this session was not up to par.
End of day 1
Overall a good day. It was kind of tiring, with a keynote and five sessions crammed into one day, but good nonetheless.
It was not quite as nice as Tech-Ed, though. Some points for improvement:
- You could only get drinks and snacks on the ground floor. This means that after each session, there is a surge of people going down for a drink.
- No free wireless access. You had to go to one of two stands with eight Ethernet cables and physically plug in.
- Coffee was served in little cups. I preferred the large cups in Barcelona, which you could easily take with you into the sessions.
I happened to run into two former colleagues who now work for a small consulting company offering services in the .NET and IT area. They were doing well, and it was nice catching up with them.
I had dinner in a nice restaurant near the ICC, called ‘Grade’ (www.grade.be).
Should you ever be around the ICC and feeling hungry, it is a good place to eat.
Yesterday I was searching for some information on FireWire and the User Mode Driver Foundation (UMDF) and I ended up on the blog of Ilias Tsigkogiannis.
I scanned the page and lo and behold: He links to my articles on driver development using the KMDF, which are hosted on codeproject.
Two Microsoft KMDF developers and one independent driver developer (Don Burn) were kind enough to proofread them before publication, but I didn't know that someone from Microsoft linked to them as tutorials. :-)
That's one of the 'advantages' of being active in a niche domain where very little information is easily available: it's a small world.
Here I am, getting to know my new PC. It is a racehorse:
- 4 GB of DDR2
- Core 2 Duo 6600
- Nvidia GF-7950 GT
- WD 320 GB SATA II, 16 MB cache, 300 MB/s, RAID-1
I installed Vista x64 because I want to use it as a development platform and test target. Of course, I want to transfer all of the data on my trusty yet ancient laptop to this beast.
So I hook up the firewire cable, go to the network configuration tab, ...
Wait a minute... where is my FireWire connection? I know it is there because it works in the XP system that I use in dual boot. Hmm... They can't have been this stupid, can they?
Turns out they can, and they did: FireWire is no longer a supported carrier for TCP/IP under Vista.
I have been using network shares over FireWire for a long time to transfer large amounts of data from my laptop to my desktop for backup to DVD. The only alternatives are a 100 Mb/s crossover network cable or WiFi. Not fun.
So it seems that my XP system will come in handy after all. I hope I don't discover any more of these weird decisions.
Since I will have to do COM in my new job, I wanted to make sure I knew COM inside out. I have used COM before, but only client side. I never wrote a COM server.
So I decided to finally work my way through ‘Essential COM’ to learn the nitty-gritty details of COM. After all, a server is where all the ugly stuff happens.
I decided to ignore ATL and use only the raw platform COM API. This is tedious and cumbersome, but at least I’d know exactly how a COM server works. Learning ATL afterwards should not be too hard, because at least I’ll know what I am doing and why things are done a certain way.
The best way to describe the learning process is with the words of my fellow C++ MVP and friend, Carl Daniel: ‘As much archeology as engineering at this point in the lifecycle of COM’
Examples and tutorials on the internet are generally several years old and can have compilation problems. Another problem is that it is very hard to find examples of advanced functionality. The reason is simple: COM was invented before internet coding communities like today’s existed.
Then there is MSDN…
The COM documentation looks like it was written by a team of lawyers. Everything is factually correct, wide open to interpretation, and only to be understood if you can memorize and cross-reference everything within your mind. And of course, a lot of things are simply never mentioned.
Let’s just say I have had a lot of fun so far. The great thing about hours and days of frustration and hitting brick walls is that it feels sooooo good when you finally figure out how stuff works.
I think I’ve got a good understanding now of COM. I put my demo project in sourcesafe and ‘released’ a new version with each major new feature. If I have some free time I’ll write articles for www.codeproject.com, explaining each step along the way.
The first few articles will be relatively simple. The one about my current implementation will be really advanced, and cover a special topic that is very neat.
I won’t spoil the surprise :-) but I can say that literally nobody has ever written an online article about it. The only mention of that particular technique I found was in a well-respected COM reference book. And it was dead wrong…
So that proves that it is still possible to do very interesting stuff with technology that is over a decade old.
It is with some pride that I can announce my first publication that is printed on processed dead trees, a.k.a. paper.
It is titled ‘Designing a Device API: Part 1 - What It Means, and Why You Should Do It’ and it is published in the ‘NT Insider’; a journal for systems programmers that is published by OSR (www.osronline.com).
It is the first article in a series that will explain the issues in developing a user mode API on top of a device driver. Each follow-up article will focus on a specific topic that is relevant for API development; explaining techniques, best practices, and common pitfalls. The tool I use for creating the user mode API is Visual C++.
The article itself can be viewed online on their website.
http://www.osronline.com/article.cfm?id=482 (free registration is required).