TechEd Amsterdam 2012: Day 1
As it turns out, my prediction about breakfast was true. In fact, I’d say it wasn’t even worthy of the name ‘breakfast’. There were a handful of bread rolls, no bacon, no cheese, no meat, no honey… They did have some pre-packaged jam, chocolate paste, cream cheese, and hard-boiled eggs. Exactly what I would have expected from such a loungy, modern, artsy place. Hip people apparently don’t eat real food. I’ll have to see if there is a good breakfast place around here.
At least the coffee was good.
I’ll have to find a pharmacist today to buy some earplugs. I never travel without. And I am certain I put a pair of earplugs in my travel bag. There must have been a freak quantum event, opening up a wormhole in my luggage, making the plugs disappear.
To be fair, the hotel rooms are very quiet and soundproof. But the shower head was dripping. It stopped after a while, but I figured I might put in my earplugs so that I wouldn’t wake up if it started again. Yet I didn’t have any. I figured that if MacGyver could escape from a tub of acid with only a chocolate bar, I could improvise earplugs with the items available to me in my room.
I am not entirely certain, but there is a good chance I am the first person ever to improvise earplugs using only gummi bears and the cellophane wrapper of a plastic cup. The result was surprisingly ergonomic and effective. :-)
The RAI is only a good five-minute walk from the hotel, which is nice. The RAI itself is still quiet; the crew are still in the process of building up the event halls. There is something nice about walking around in such vast halls before things have started. You’re unnoticed and all by yourself while being around other people. The coffee is great.
I was here early, and managed to get hold of Kate Gregory while she was preparing for the session. We spent over half an hour just catching up on things and life. That’s one of the good things about TechEd. You get to meet the same people again and again, and keep in touch across jobs and years.
Kate started the day with an overview of the new C++ language features in Visual Studio 2010 and 2012; more specifically auto, shared_ptr, and unique_ptr. It’s been a while since I programmed native C++, but I have to say that with just these three things alone, C++ has become a lot more readable and robust. I’ll have to play around with C++ again to get a feel for it.
To be more precise, I am writing a parser / compiler / code analysis tool in my free time for the DeltaV phase code running our plant. I already had a basic version that I can use, but it is rather ugly, and does not really produce an executable parse tree. It uses regular expressions to parse the code, and does not support all language features.
For the sake of doing it, I have started a new project in my free time to build a real tokenizer / parser that can later be used to run the code in simulation mode and to perform more detailed static analysis. Currently I am doing that in C# for convenience’s sake. With over 100 megabytes of DeltaV code to crunch through, it will be interesting to see if the same algorithm in native C++ (using smart pointers and other new C++ goodness) is going to be faster and smaller.
After the break, the topic changed to Application Lifecycle Management, which is interesting, but not really applicable to me at the moment since I am not working in a development team environment. ALM is basically what Team Foundation used to be. ALM is going to be in all versions of VS; the amount of functionality will depend on which version you have.
I’ve heard good things about TFS, and many horror stories about setting it up, which can easily take 2 to 3 weeks of billable consulting time. To mitigate that issue, Microsoft now provides a TFS hosted on Azure. For now it is free, though it may not stay that way. For smaller companies this may become a very good option, depending on the SLA of course. You don’t want to discover one day that your project history is gone for good. I suspect it will start costing money if you want a guaranteed SLA. Even then it will be much more cost-effective for small companies than hosting their own TFS and dealing with maintenance.
Code coverage and unit testing
The next part of the talk covered code analysis, unit testing, and code coverage for native projects. I have to say it looks impressive. Code analysis now supports native code, meaning you can get code and class diagrams for native code, where that used to be available only for managed code. Unit testing and code coverage work pretty much like they do for managed code.
Instead of working with method attributes, there are macros that provide similar functionality, and you don’t even have to know how they work. The unit testing tooling provides test results, code coverage, and various UI features that make it very convenient. I sure wish I had had that available on earlier projects.
Lunch was ok. It was a sandwich / salad lunch. The sandwiches were good and the company was great, since I had lunch with Kate. The only downside is that I probably needed half the calories of the lunch to get to and from the lunch hall.
Library vs language
After the break, the topic switched to lambda functions and how they can be used with for_each. Following on from that, she covered parallel_for_each, allowing the programmer to distribute a for_each loop across multiple CPU cores. That looks very interesting, giving programmers a massive speed boost for repetitive tasks that are not interdependent.
My current code parser (the one I wrote for analyzing our process control code) already uses parallelization, but does so manually via a thread pool and explicit handling of task completions. This looks interesting.
With the additional performance gain of native memory management, it will be interesting to see if a native implementation of my DeltaV code parser will be quicker. That would also be a good opportunity to get re-acquainted with C++ and the new language features, as well as keep my development portfolio up to date. If I ever want to get back into development, I’ll need to have something to show for the last couple of years.
The final leg of the parallelization story was C++ AMP, which stands for ‘Accelerated Massive Parallelism’ and uses the GPU instead of the CPU. The video card needs to be DirectX compatible. If it is, you can create small tasks that the GPU can execute in parallel. As you all (should) know, a modern video card contains dozens or hundreds of pipelines which are perfect for executing simple drudge work.
Those pipelines suck at branching. They can’t handle it properly, and even if they can, performance goes from warp to suck. If you need to branch or do anything complex, you have to stick with the CPU cores. But anything that can be broken down to ‘do this simple task a gazillion times’ will make your GPU scream without even breaking a sweat.
At the end of the talk there was an overview of the different types of containers and their pros and cons, and the last part of the talk covered algorithms and some general programming remarks.
I had a great day talking with, and listening to Kate. Somehow the day just flew by. I really like C++, make no mistake, but I had my reservations about 8 hours of C++. Yet I shouldn’t have worried because the talk was very interesting, and Kate covered a lot of diverse topics.
I forgot to mention it yesterday, but I had great Japanese food at restaurant Takara. Takara is Japanese for 'treasure'. Very good food at a very reasonable price. Tonight I had a steak at restaurant ‘Toon grill’ (Toon rhymes with Tone). It was Argentinian beef, and one of the best steaks I’ve ever eaten.
All in all, day 1 was very much worth it.