Tech-Ed developers Barcelona: Thursday
Yesterday evening my Tech-Ed-issued tram card decided to die on me. I hadn’t noticed, but as luck would have it, a bunch of auditors entered the tram, accompanied by some tough-looking stewards with handcuffs and batons.
My card proved to be invalid, and the auditor said something about the magnetic band failing. He wanted to replace it with a new one, and I tried to tell him that I should get one with at least as many rides left as I would normally have.
At this point I should point out that he did not speak a word of English, and I obviously don’t speak Spanish. Luckily he decided to do the wise thing and gave me my card back, gesturing ‘it’s OK’.
This saved us both an hour of frustration, trying to understand each other and solving this in a way that would satisfy us both. The old ‘I am a dumb but innocent tourist’ trick still does the job.
Since I only have to take the tram two more times, I’ll just keep using it instead of trying to get it replaced.
So after a healthy breakfast I now arrive at the convention center to get my first cup of coffee for the day. In fact the coffee is so good that I call them little cups of happiness.
The fact that I call them that does NOT mean that I am addicted, ok?! Now let me get to the caffeine altar to get my cup of happiness before going to the first session.
DEV321: Delving into VS team system for software developers
This talk is hosted by Brian Randell, and it covers Visual Studio Team System for Developers. We don’t use this at my company (because it costs $$$, and it is not our core business) but I have it through my MSDN subscription, and I figured I’d see what it was all about.
Brian is another good speaker, and keeps the tone light while covering all different topics. It is mostly a succession of demos, glued together by a minimum of slides.
The key idea is that testing alone is not enough to guarantee a high quality app, and there were some statistics to back this up, along with a graph showing that the cost of a bug increases the later it is found.
I’ll just go over the different topics in sequence.
- Unit testing of managed code via VS, and generating the unit tests automatically. This is a really cool feature, and it can save you lots of time in setting up your test harnesses. Unfortunately this feature is not available for unmanaged C++.
- Code coverage. This tool can show you how much of your code was really executed in your tests. This is useful to determine the amount of dead code in your app, as well as making sure that your tests cover most of the code you’ve written.
- Static analysis. This allows you to analyze the source code itself to look for things that look suspicious. For managed code you use FxCop, for native C++ code you use PREfast.
- Dynamic code analysis. This is only available for native C++ applications. It checks for heap violations, handle violations and locking errors while your code is running, and presents you with a list of problem spots that you need to correct.
- Code profiling. This allows you to take performance measurements while your program is running to identify performance hotspots in your code. This can be done through sampling (which you do to get a first rough idea) or instrumentation (which you do to get detailed results).
You don’t use instrumentation from the beginning because that would fill up your memory and hard drive very quickly.
Overall, this was a very good session that gave me a good idea about the capabilities of Team System for developers.
I think I’ll start using at least the analysis tools for some of my larger projects. Unit testing is less of an option because most of my code is unmanaged, but I might do it on some C# code just to get familiar.
All in all this was another very good session.
DEV365: porting .NET applications to 64 bit Xeon and Itanium
This session is hosted by Samah Tawfik of Intel Corporation.
This was another session I really wanted to see because 64 bit is going to become very real within the next year, and it is a good idea to learn about it so that I know what this will mean for me.
Not that I see myself developing for Itanium any time soon, but you never know what might happen. Chances are I’ll buy a dual or quad core Xeon in the beginning of next year, so this might be useful at a personal level too, because who doesn’t write his own custom apps for automating household tasks, right?
The architectural differences between the 64-bit Xeon and the 32-bit Xeon can be summarized as:
- Extra memory space
- Extra registers
- Double-width (64-bit) integers
- Flat virtual address space
- 32-bit legacy mode, 64/32 compatibility mode, and 64/64 mode
Then there was an architectural comparison between x64 and ia64.
Basically, the major reasons to upgrade to 64 bit are the extra memory space, and the extra registers and instructions that allow compilers to take advantage of extra features.
There was a benchmark that actually showed that 64-bit compiled code can be significantly slower than 32-bit code for smaller loads. The reason is that the 64-bit code is not yet able to take advantage of all the 64-bit features while still carrying the overhead of providing them.
Intel has a suite of tools that allow you to design, develop and debug applications that take full advantage of 64-bit features and parallelism, and they integrate fully with Visual Studio.
For .NET you basically don’t have to care whether your code will run on 32 or 64 bit, as long as you are not using P/Invoke or unsafe code. If you are, you should set the target CPU type to restrict the .NET assembly to run in only 32-bit or only 64-bit mode. Not doing so can crash your app at runtime, because native DLLs are only loaded when a native function is executed for the first time.
The presentation itself was not very good. It was not bad either; it was just OK. This is not criticism of Samah. I have been giving presentations like this myself for a long time, and I started out being very bad, progressed to mediocre, and have now reached a point where I am just average.
Speaking in front of an audience and connecting with it is hard, and it takes a lot of skill. Not everybody is able to do it equally well.
The Q&A session was very good however, and Samah handled difficult questions very well, and gave clear and concise answers. She knew what she was talking about and was well prepared.
One thing I learned was that if an app is purely computationally intensive, going 64-bit might not give you any advantage, at least for 64-bit .NET code. Just going 64-bit is not a magic bullet.
Native code, on the other hand, would really be able to take advantage of 64-bit CPUs, because the native compiler can really optimize algorithms and play with registers and pipelines in a way that is not possible for the .NET JIT.
DEV320: What’s new in Visual C++ ‘Orcas’
This talk is hosted by Steve Teixeira, and we are all glad that he has his luggage back by now.
This conference room is actually one of the bigger ones, with room for roughly 200 people. It is slowly filling up, but I suspect that it is not going to get very crowded, partially because this session is about what’s coming in the future, instead of using what there is right now.
Still, there are at least 100 people here.
Before I go any further I need to stress that Steve mentioned that the list of new features is not set in stone, nor is there a fixed release date for Orcas. There is a very good chance that the features mentioned below will make it into Orcas, but the feature list is not yet publicly committed to.
The Visual C++ mission is to provide world-class native tools while bridging next-gen technologies.
There are basically three major types of projects that need C++:
- Projects that have to be cross-platform.
- Projects that have a large existing C++ codebase.
- Projects that need a large degree of control over runtime behavior.
To cater to these different types of projects, Orcas VC++ is moving in the following directions:
- Support platform technology and renew investment in unmanaged libs
- New development for MFC libraries.
- Making VC a good Vista LUA citizen
- Supporting the Vista SDK
- Advanced interop with managed code
- STL/CLR template library to provide a very easy template-based means of converting managed types to unmanaged types and vice versa
- Developer agility:
- Improve compiler throughput by enabling concurrent compilation of cpp files, as well as enabling incremental builds of mixed-mode solutions by looking at assembly metadata, and only considering files changed if the metadata they generate has changed.
- Allow targeting of multiple .NET platforms, so that using Orcas does not mean you have to upgrade to .NET 3.x
- Deliver a new C++ class designer
There is of course also a list of things that are going to be cut from Orcas:
- ATL Server: this is going to be split off from VC and converted into a shared source project, kind of like what happened with WTL.
- /clr:oldSyntax: it is going to be removed. With Orcas it will no longer be possible to compile the old Managed Extensions for C++ code. This is a good thing (I will write a separate article on my blog about this).
- Pre-Windows 2000 targets are deprecated. Another ‘thank God’. Windows 98 was very good at the time it was released. It was much better than Windows 95. However, it is severely limited by the fact that the Win32 API and the Windows system itself have evolved toward the NT family of systems. Functionally, Windows 2000 was the marriage between the 9x series and NT 4.0.
Windows Me is what happens when you extend a design beyond the parameters it was designed for. It should never have existed. It is unstable, and all new functionality has been grafted on in a way that makes it look like the Frankenstein of operating systems.
By removing support for those systems, large parts of MFC and the other libraries and SDKs can be significantly cleaned up and reduced in code size.
- /Wp64: this is a compiler switch that you can use to warn about 64-bit portability errors. It was introduced at a time when the 64-bit compiler did not yet ship.
Since the 64 bit compiler does ship, it makes no sense to support this switch anymore. If you want to know if there are 64 bit compilation issues, just compile with the 64 bit compiler.
Steve also explained that generally, things are deprecated in one release, and then removed in the next release.
However, this is not going to happen with all the unsafe C runtime functions that are now marked as deprecated.
Basically, Microsoft introduced a ‘safe’ version for each function in the C runtime which is susceptible to buffer overflow problems, and they needed a way to tell the programmers that you could better use the new functions instead of the old ones.
Unfortunately, someone decided that such a mechanism already existed in the form of the deprecation warning, and decided to use that, much to the horror of many C++ programmers.
The VC++ team has recognized the unfortunate message they delivered, and it will be changed in the Orcas release to a better, less hostile message that does not contain the word ‘deprecated’.
The C runtime functions which are now marked deprecated will NEVER be removed from VC++, because they are part of the standard C/C++ runtime.
All in all this was another high quality lecture.
DEV326: Not faster processors but more processors
There weren’t a whole lot of other options, so I chose to go to this session. My only other choice would have been DEV359: .NET hidden treasures, but after reading the intro I decided that since I already knew 3 of the 4 treasures they mentioned, the session would probably be just a waste of my time.
Anyway, this session is hosted by Carl Franklin.
Great. This entire presentation is going to be code demos (yaay) but it is done in VB.NET. (barf). Couldn’t they at least have mentioned this in the session description?
I’m out of here.
DEV356: Using OpenMP and MSMPI to develop parallel high performance apps
This session is hosted by Saptak Sen.
Since the topic is OpenMP it is very unlikely to cover VB.NET. Now don’t get me wrong, the market for VB.NET is still huge, and I don’t belittle the people who use it, but I really don’t like the verboseness of the language.
Windows 2003 Compute Cluster Server (WCCS) has the following goals / key concepts behind it:
- Simplified deployment and submission and monitoring of jobs.
- Leveraging existing knowledge and infrastructure to simplify HPC.
- Allow programmers to use a familiar development environment.
WCCS is actually made up of Windows 2003 Cluster edition, and the Compute Cluster Pack (CCP).
The OS is used to manage the hardware and to provide a high bandwidth low latency interconnect.
CCP contains the support for standard MPI, job scheduler and CCS SDK.
OpenMP is supported by Visual Studio directly, but only in C++. You can use OpenMP in a .NET application, but only if you use Visual C++. This is one of the areas in which C++ gets to have the cake.
OpenMP can get you big gains in the parallelization of long loops without loop dependencies.
The number of OpenMP threads can be set statically at compile time, or dynamically at runtime.
MSMPI is a networked protocol that allows you to distribute tasks by sending messages to different nodes in the HPC network. There is of course more to it than that, but that’s the gist of it.
There is a lot of scheduling, security and other stuff going on, but functionally, nodes can send messages to each other that trigger them to do something, after which they return the results. It is more complicated than OpenMP, but easier than programming everything yourself using Windows sockets.
The presentation wasn’t bad, but this technology is not likely to be something I will ever use, except perhaps the OpenMP stuff, which might be worth diving into to learn the basics.