For a code review at work, I had to write a review document. Among other things, I had to list the different stored procedures in my database, together with their security configuration. A bit of googling got me the various pieces of the puzzle, and I put them together for my purpose:
SELECT o.name AS object_name,
       dp.name AS principal_name,
       dp.type_desc AS principal_type_desc,
       p.state_desc AS permission_state_desc
FROM sys.all_objects o
LEFT OUTER JOIN sys.database_permissions p
    ON p.major_id = o.object_id
LEFT OUTER JOIN sys.database_principals dp
    ON p.grantee_principal_id = dp.principal_id
WHERE o.type = 'P'
  AND o.is_ms_shipped = 0
  AND o.[name] NOT LIKE 'sp[_]%diagram%'
ORDER BY object_name
I used an outer join for the database permissions instead of an inner join like in the example I copied, because I wanted to show all procedures, even those without any security configuration attached.
My apologies for yesterday's blog post. If I look at it in the preview pane or in the editor, it looks well formatted, with the various options presented in a neat list. I just noticed that in the actual blog view it looked like a programming book vomited all over my blog page. The problem is that I can't really see anything I can change to make it look good. The only way I know at the moment is to copy / paste from Word. The online blog editor is depressingly basic. I'll figure something out.
EDIT: I discovered that the msmvps theme is responsible for this. I have no clue about CSS, so rather than try to figure out what the exact problem is, I switched to a different theme and it is OK now.
This came up in a forum discussion: what flavors of C++ are there, and what do they mean?
- There is regular C. Not C++ of course, but it bears mentioning because C is supported by the C++ compiler. Not all features of the C standard are supported; which ones are is mostly customer driven (customers being the people with a lot of money, probably not you and me).
- Regular C++, also referred to as native C++ or ISO C++. Code is portable across platforms and compiled to native machine code. There is no ABI for C++, so compiled code is not binary compatible between compilers, or even between different versions of the same compiler.
- Managed C++, also known as Managed Extensions for C++. It was the first version of the Microsoft C++ language for dealing with the CLR. It was an abomination upon men. It was ugly and had a host of problems. If you've never seen it, consider yourself lucky. If you ever had to use it, you have my sympathy. It was deprecated with Visual Studio 2005, and we like to pretend it never existed.
- C++/CLI. This is the current version of the C++ language that targets the CLR. Contrary to Managed C++, the language and the syntax are user friendly. It is meant to allow native C++ and .NET to interoperate.
- C++/CX. This is the version of C++ that is used for building C++ Metro apps with XAML. It looks a lot like C++/CLI; when I first saw it, I thought it actually was C++/CLI. It is something different however, and for obvious reasons you cannot mix C++/CLI and C++/CX in the same source file. The compiler would not know which is which.
Range for is another of those syntactic sugar enhancements that lets you do the same as before, but with less code. Often you find yourself iterating through all elements of a container and doing something with each of them. In my original example, that was done by the following code:
for (auto i = data.begin(); i != data.end(); ++i)
    cout << i->a << endl;
It is the traditional way to do this, but it means you're declaring an iterator, getting the begin and end, and then ++ing from start to finish. With C++0x, you can write this in shorthand with the construct known as 'range for', which means what it says: 'do this for the entire range'. It is very similar to the foreach keyword in C#.
for (auto& i : data)
    cout << i.a << endl;
As you can see, this is just a nicer way to do the same thing you did before.
7) Go easy on the coffee. People tend to drink more coffee than they normally do, for various reasons, including the fact that the coffee at tech-ed events tends to be high quality. If you overdo it, it can mess with your body in various ways.
8) Go easy on the sugar. Every break there are muffins, waffles, sugared snacks, and other sweets. If you start feeding a lot of sugar into your system, your slow metabolism shuts down. When the sugar has burned up, you’ll start feeling tired, drowsy and generally bad. At that point you’ll either have to get more sugar (which only makes the problem worse) or wait for your slow metabolism to kick in again. Until the latter happens, you’ll feel like crap.
9) An important point for people with one or more food allergies: do not assume that familiar dishes will be made with familiar ingredients. Case in point: the Berliner rolls they had for dessert two days ago. In Belgium, these are made only from buns, powdered sugar, and custard. The ones here also had a thick slice of strawberry between the bun and the custard. If I hadn’t checked, I might have eaten it. That would have been bad. In the same vein: if you are allergic to certain spices or herbs, do not assume that the ones in a dish are the ones you are familiar with.
4) Take your notes during the session. You get so much information that you will have forgotten a lot by the end of the session if you don’t. You’ll also forget the structure in which it was presented. Writing things down as you go along makes it much easier. If you make your notes directly on your laptop, it is even more efficient because you can write down most of the text while you are there. You’ll only need a couple of minutes after the session for editing, instead of spending 15 minutes per session. Also, while taking and editing notes into blog posts is a lot of work, it forces you to make summaries for later reference, and you’ll remember things better.
5) Read the summaries of the sessions you are thinking of going to. Sometimes session titles are a bit deceptive, and it is annoying to have to leave a session after 10 minutes and then find another one to join while it is already in progress.
6) Session summaries are not always clear, and sometimes overlapping sessions are scheduled in the same timeslot. Adapt your planned schedule as the week goes by. If sessions on topics A and B are scheduled at the same time, and topic A was already covered in an earlier session (even if that wasn’t clear in advance from the summary), it makes sense to go to session B, even if you had originally planned to go to session A.
With C++0x, the >> operator has become context sensitive. If you don't use templates you may wonder what the point is, but this is a very useful little feature that makes things a bit more readable and will save you many compiler errors if you use templates.
If you look at my previous C++ post, you'll see that I declared a template variable like this:
vector<MyData<int> > data;
You'll notice that there is a space between the '>' symbols. This is because in previous versions of the standard, two consecutive '>' symbols were always considered to be a shift operator. It would be perfectly natural for the '>' symbols to be placed right next to each other; it would be more readable, and certainly more intuitive. In the new standard, this has been fixed: '>>' is now interpreted in a context sensitive way. If the symbols are part of a template declaration, they are considered to be two individual closing brackets. If they are part of an expression, they are considered to be the shift operator. So from now on, you can do the following and it will finally compile cleanly.
vector<MyData<int>> data;
It is indeed a small thing, but anyone working with templates for a while has bumped into this more than once.
Over the years I've made a short list of things that help me survive tech ed.
1) Arrive early. Travel the day before the event. It means you’ll sacrifice your Sunday afternoon. However, it also means you avoid the rush traffic, you can register and check in without needing to hurry, and if something goes wrong during travel, you have enough time to deal with it. Additionally, you can start the seminar well rested after a good night’s sleep and a relaxing breakfast.
2) If you have a choice in the matter, pick your hotel as close to the event as possible. This way, you don’t lose time in the mornings and evenings on public transportation. Your schedule is busy enough already without adding additional hassles to it. And if you can walk to the event center, you are not bound to transportation schedules.
3) Don’t party. Paying attention throughout the day, making notes and talking to people is tiring. You’ll need to unwind and recover if you want to be able to give it your all the next day. If you’ve only had 4 hours of sleep and a lot of rich food and alcohol the day before, you’ll have trouble staying alert. Then again, tech-ed is also an excellent moment to network and talk with other people, so find a balance.
With this short post, I am kicking off my new series of articles on the new C++ standard implementation in Visual C++ 2012. These articles will all be short, covering a single new thing each. This way you can see the impact of each change or improvement all by itself. That is easier to read, and of course also easier for me to write.
For the first article, I picked auto, simply because its impact on existing code bases is huge, and literally every C++ programmer can take advantage of it. For those who don't yet know it: auto can be used as the variable type when declaring local variables, instead of the explicit type. The end result is still a strongly typed variable, because the compiler knows what the type of that variable has to be. It just doesn't force you to type it yet again.
So instead of
MyType* localvar = foo();
you can simply write
auto localvar = foo();
It is much more convenient to use auto, because you no longer have to spell out the right type name. What makes it doubly useful is that if, during development, the return type of 'foo' changes, the change is transparent in your code. As long as the semantics of that return type stay the same, it will compile properly.
To give a more practical example: consider a simple vector with a template data type, such as can be found in many C++ projects that use the STL. The really ugly thing about it is the need for an iterator. Even for such a simple example, the iterator type declaration is ugly, and if you ever need to update the type, it will involve a lot of work. Consider the following type: vector<MyData<int> > data; In order to iterate through that vector, we need the following loop:
for (vector<MyData<int> >::iterator i = data.begin(); i != data.end(); ++i)
    cout << i->a << endl;
As you can see, it is ugly. A traditional way to deal with this is to create typedefs for all the iterator types you use in your code. This works, but you still need to do that work; the typedefs are just a way to move the ugliness into a header file where you don't notice it, and with typedefs you can still introduce errors. C++0x, on the other hand, can use the auto keyword.
for (auto i = data.begin(); i != data.end(); ++i)
    cout << i->a << endl;
And this makes the resulting source code not only much easier to read (for the maintainer) but also easier to write. And if the type of 'data' should change, then as long as the public interface is compatible with before, the code will just compile without needing syntax modifications.
VS2012 comes with two themes: 'light' and 'dark'. Some people might prefer the dark one; if you spend a long day looking at code, it may even be more ergonomic. However, I quickly discovered a problem with the dark theme when I was writing my first new C++ blog post: copy / paste really sucks. The bulk of the code in VS under the dark theme is white. Copy white code into a blog window with a normal white background and... where did the code go?
I've decided to start blogging again, on the subject of C++. A couple of years ago, just before the release of VS2010, I had become jaded with C++. The standard was still nowhere near finalized, and Visual C++ was getting none of the 'designer' love.
Sure, we had C++/CLI, but only after the abomination that was Managed C++. And while C++/CLI was a decent language and indeed 'just worked', the only thing it was good for was writing glue code to run native code in a managed wrapper. For all other things, C# was a vastly better choice.
Fast forward a couple of years, and it is a whole new world.
The standard has finally been ratified, C++ has gotten a much needed refresher (both language and library wise), it has suddenly become hip again with Metro and the need for fast code with a small footprint, and with interesting things like PPL, AMP and ALM, there is a brave new world to be discovered. I am excited about C++ again!
I am also not typing this on my development machine or my laptop, but on the Windows Server 2012 machine that I created in the Azure cloud. It is lovely to have a performant dev machine to play with. Given the very low cost of Azure VMs, I can't really justify buying a new development machine when the old one kicks the bucket. And that is not even considering the benefits of having access to the machine everywhere, having it patched automatically, and never having to worry about hardware problems.
Ok, I suppose no one really missed me or even knew I was gone for 3 years. I also decided to come up with a new name for my blog. The cluebatman theme was getting a bit dorky; 'C++ programming on cloud 9' is better for now. The new C++ standard has made me happy, and I am running my dev machine in the cloud.
When I come up with something better, I'll change it.
Anyway, I'm back!
I checked out and brought my luggage with me to the RAI. There is a luggage / cloak room where almost no one drops off their stuff, so I am using that one instead of the main one. Hopefully, it’ll save me some time when it is time to leave.
Yesterday I hung out with Steve for a while. It’s things like these that make tech-ed more than just about learning. As I mentioned earlier, it is nice to stay in touch with people across years.
DEV332: Async made simple in Windows 8, with C# and VB.NET
This session is hosted by Dustin Campbell
Async is the norm for Windows RT, where asynchronous programming is the only way to program. Synchronous programming and blocking is no longer acceptable for user applications, in order to ensure that applications are responsive and scalable.
Futures are objects representing pieces of ongoing work. They are objects in which callbacks can be registered that have to be executed when the work is completed (like doing something with downloaded data). Futures are basically syntactic sugar to make existing async programming patterns more palatable. The only downside is that you get nested lambdas for tasks that execute in several steps. Apparently, this is called macaroni code.
To fix this, C# has await and async keywords.
Await takes the rest of a method and hooks it up as a callback for the asynchronous operation which is being awaited. The async keyword is used on the method itself to tell the compiler it has to do this. The callback will always happen on the same thread that the operation was started from, so resource contention is not a problem, because while the code is running, the thread is not doing anything else.
So while your source code looks like something that executes synchronously, it is actually broken into different pieces which are executed asynchronously. This is really neat, and it hides a lot of the ugliness of asynchronous programming. Even if you are not programming for Windows 8, this is a valuable feature for regular applications that require asynchronous programming.
Exception handling is built in, because the underlying IAsync operation captures the exception and presents it to the caller. Exceptions can then also bubble up through various completion tasks, and can be handled in the event handler like you would normally do. This is sweet, and much, much more convenient than dealing with it manually.
SIA311: Sysinternals primer: Gems
This session is hosted by Aaron Margosis. I’ve seen him present a similar talk a couple of years ago.
The room is not full; plenty of seats are left open. I think this has to do with the fact that it is the last day. Aaron announced that there would be a book signing, but also mentioned that in their infinite wisdom, the organizers decided not to have a bookstore on site. Yeah... I noticed. Someone should have his ass kicked because of it.
The entire session was demo driven, so I didn’t take notes. It was mainly about the unknown utilities, or unknown features of well known utilities, in the Sysinternals suite.
DEV334: C++ Accelerated Massive Parallelism in Visual C++ 2012
This session is hosted by Kate Gregory, and covers the new C++ AMP tools which allow you to offload number crunching to the GPU. The room is not full, I suspect it has roughly the same group of people who were also at the pre-con sessions.
The session started with the overview of why you want C++: Control, performance, portability.
With AMP, your code is sent to an accelerator. Today, this accelerator is your GPU, but other accelerators might appear. The libraries are contained in vcredist, so you can distribute your AMP app just like any other app. And because the spec is open, everyone can implement it, extend it or add to it. Apparently, Intel has already done that.
The key to moving data to and from the GPU is the class array_view<T,N>, which represents a multi-dimensional array of whatever you need. You populate those structures and then call the parallel_for_each() library function, which does all the heavy lifting and data copying for you. When parallel_for_each finishes, the result is ready for you.
You can only call (other) AMP functions. All functions must be inlineable, use only AMP supported types, and you won’t be doing pointer redirections or other C++ tricks. There is a list of things that are allowed and not allowed, but they are really all common sense.
There is also array<T,N>, which is nearly identical to array_view, but if you want to get data out, it has to be manually copied. At least that was my understanding. Things are going fast at this point so it is possible I’ve missed something.
If you want to take more control of your calculation, you can use tiling. Each GPU thread in a tile has a small programmable cache, which is identified by the new keyword tile_static. This is excellent for algorithms that use the same information over and over again. There is an overload of parallel_for_each which takes a tiled extent. However, the programmer is responsible for preventing race conditions, so use a proper access pattern with tile barriers.
What is particularly interesting is that Visual Studio 2012 has support for debugging and visualization. You can choose between CPU breakpoints and GPU breakpoints in the debugger, and you need to debug on Windows 8 apparently. It just works, and this was probably a huge chunk of work for someone, somewhere in the VS debugger team.
There is also a concurrency analyzer which is really good for figuring out CPU / GPU activity and how it correlates to your code.
That’s it for today. Time to go home.
I am glad attention got called to the fact that there is no bookshop. I’ll have to put that in the official feedback as well. And speaking of silliness: this tech-ed there was exactly one session about the new C# keywords for asynchronous programming, and one on .NET 4.5 features. And for some inexplicable reason, they were scheduled in the same timeslot. Someone dropped the ball there as well.
Tech ed was a valuable experience yet again. I’ll post an overall tech-ed wrap-up tomorrow.
I played some more with my Azure devbox, and I have to say the responsiveness is great. I had another phone call from the home front, and I am looking forward to going home and see my family.
The weather here is clearing up nicely, and the park I have to walk through was just lovely. Lots of trees and little lakes, with gaggles of geese and ducks and fish. At one point I had to jump aside or risk being trampled by a horde of women in spandex.
Today there are several interesting sessions I am looking forward to, including the next one.
What’s new in Active Directory in Windows Server 2012
This session is hosted by Samuel Devasahayam and Ulf B. Simon-Weidner.
ADPrep and DCPromo are now gone. Server Manager makes it seamless, and ADPrep functionality now runs automatically when the first machine is promoted. It can also be run remotely, which is a nice feature for remote installation. Validation checks were added to eliminate common errors. What is nice is that all functionality is implemented under the hood as PowerShell cmdlets, so everything is scriptable.
If there was a network hiccup, dcpromo would fail; this has now been made more robust. Again, for me this is less relevant because I always work on a LAN, but it is important for customers with distributed networks.
Improved management experience, including a recycle bin GUI.
A common problem with virtualization is that rolling back to a snapshot messes with Active Directory. Everything created during the rolled-back time period can become inconsistent. Therefore it became necessary to figure out a way for AD to know whether there has been a rollback. This seems to be done in cooperation with the hypervisor. What was interesting is that they still say that a snapshot is NOT a valid backup / recovery method, which is explained later on.
The example that was shown was of a RID pool which started reusing parts of the RID pool that had already been given out. Support has now been added to detect such cases and deal with them in a consistent manner.
Interesting new functionality is the ability to clone domain controllers. That is kind of cool. Admins can now take a DC and clone it into multiple copies, making it easy to deploy DCs to branch offices, but also to create redundant DCs. There was a flow chart covering the various steps taken internally, such as discarding the RID pool to prevent some of the problems mentioned earlier. You have to be careful that 3rd party software might not like being cloned. DNS, FRS and DFSR are supported.
RID improvements. Currently, it is possible to deplete the entire RID pool of a forest, after about a billion RIDs. This seems odd, but there is a known sequence of events which can lead to this problem, and the only solution was an entire forest migration. If you have a forest that is big enough to encounter this problem, that is probably not a happy scenario. The list of possible causes was covered. One topic I’ll have to read more about is DC reincarnation, because that is related to our backup and recovery scenario. I don’t think we have any of these problems, but it pays to make sure.
There was a mention of deferred index creation, which has to do with schema changes in large enterprisey networks, so not really applicable to me. Offline domain join (join across the internet) is another feature that I can see being nice for customers with a large geographic distribution. Ditto for connected accounts. This feature allows you to connect your Live ID to your AD account. It is not possible to log into AD with your Live ID though.
LDAP Logging has been improved, with added controls and behavior that is always nice for anyone.
The AD Administrative Center looks nice. There is now a single tool to manage AD; that has been long overdue. One of the things it allows you to do is enable the recycle bin. The recycle bin already existed in 2008 R2, but now it is integrated in the GUI. It is not yet nested for now (meaning OU and CN structure), and it is not yet a transparent ‘undo’. There is also a history viewer which shows the history of your AD transactions, with the PowerShell syntax visible. That is really nice, both for debugging and for scripting.
Password policies are now also granular, meaning it is possible to set different policies for different types of accounts. This is another thing that was possible before with 2008, but not via the GUI.
Clustered or load balanced services that share a security principal are now supported by Managed Service Accounts. This is also nice, considering that more and more servers are clustered. Replication and topology cmdlets are now supported for managing site topology and replication in a consistent and scriptable manner.
One thing that is again very interesting is Active Directory based activation, meaning you no longer need KMS to activate your clients and servers in a volume licensed environment.
WCL289: Windows 8 demos
This session is hosted by Brad McCabe
No PowerPoint. One thing that is nice is the lock screen: you get immediate feedback from your apps. Logging in can be done via password, swipe, or picture manipulation. The latter is interesting especially in a tablet environment, where it is easy to log on by tapping a couple of landmarks and then poking your wife in the nose.
The desktop is not a sea of icons, but rather a sea of tiles that are grouped together. The tiles themselves contain live information. The apps themselves follow this tile paradigm as well, allowing you to flip through your application. For data-driven apps that makes a lot of sense. I wonder if it will be as efficient for control type applications.
One thing that is interesting is that apps actually start to look like they do on TV, with animations, sliding panels and transparent surfaces, all without a lot of effort on the part of the application developer. That is all supported and provided by the Metro libraries and the Windows RT subsystem.
The same functionality exists between touch and mouse / keyboard. Touch is all about how you interact with the edges, mouse uses edges and right click. And it was mentioned yet again that devices and desktops have the same kind of user interface.
Apps following a share contract can easily exchange information, even though they don’t know anything about each other. The search contract allows apps to act as result providers for user searches.
The traditional desktop still exists, and can be used side by side together with the new Metro style UI.
Windows 8 has a new concept called storage spaces. All devices can be pooled together in a storage pool and storage spaces. You can allocate larger sizes than the physical space, and more space can be provisioned as it is needed. I am not quite sure how this is more useful than the ability to use dynamic disks that can be grown. It also supports mirroring and parity.
Windows To Go was demonstrated running off a USB stick. This was similar to the demonstration already shown during the keynote yesterday.
Dev317: Going beyond F11, Debug Faster and Better with Visual Studio 2012
This session is hosted by Brian A. Randell.
I think I saw Brian speak at earlier tech-ed events. He is a very good speaker if memory serves me well: very animated, and able to drag the audience along for the ride. He also understands the value and use of silences.
I first considered going to WSV326: ‘Windows Server 2012, a Techie’s Perspective’, but after looking at the summary, it seemed to overlap with the Active Directory session I saw earlier this morning. It would probably go a bit deeper on things like Kerberos and compound identity, but that is not really something that is applicable to me. The chances of me running a Windows Server 2012 environment before 2015 are slim as well.
Another candidate for this session slot was DEV314: Azure Development with Visual Studio 2012. That looked interesting, but I am unlikely to develop for the cloud or build multi-tiered applications, so it wouldn't be that useful to go to.
Install SP1; it fixes bugs in unit testing and makes it more performant.
Debugging: use ‘Just My Code’. It shows only your code in the call stack, and not other DLLs and frameworks; otherwise the entire stack is shown. Source stepping into the .NET code is also supported for the parts of the source code and symbols that Microsoft has opened up. This requires source stepping and source server support. Keep looking at the options menu for debugging, because that is where some old and new features remain unknown.
There was an explanation of the various neat things you can do with breakpoints, like making them conditional or counted. This is rather old stuff really. I didn’t know about pinnable data tips though. These are like tiny quickwatch windows which are shown over the code, and which are updated while debugging; they are visualizations of live variables. I thought that was neat. There is also a breakpoints window showing all breakpoints and various information.
Remote debugging finally works in VS2012. It existed already, but didn’t really work that well. This feature was seriously improved for Windows 8, for the purpose of remotely debugging applications on ARM or other devices. It works tethered and over WiFi. You need to install the remote debugging tools on the target platform.
Debug->Attach to process (machine name and process selection). From then on you can set breakpoints and break into the debugger.
For the paying versions of VS, all tools are available on the installation CD; Express users have to download them manually. Then they have to be installed, and you have to run the remote debugging configuration wizard. This basically grants access and configures firewall rules if there are any.
The profiler has also had improvements, making it possible to get more information about your application as it runs, even inside the simulator. Analyze -> Launch Performance Wizard. There are 4 profiling options that are more or less invasive. Each has its benefits and problems. CPU sampling can show hot code and call path issues. This can help you identify the most interesting places to optimize.
Love, hold and protect your PDBs. If you distribute applications, you need to track your PDBs and make sure you have all versions. With TFS, you can automate this process and link it to the version of your code that was built. For me this is not an issue, since I don’t have TFS; with the applications I distribute, I distribute the PDB files along with them.
At the end of the talk, there was a description of IntelliTrace. This is particularly useful if you inherit someone else’s code. It is a historical debugger, which needs Visual Studio Ultimate. The output is an i-trace file, built on the CLR debugging and profiling APIs, and you can navigate up and down the stack frames with Visual Studio. Everyone can collect logs, including users and testers. You can collect IntelliTrace events, as well as method entry and exit.
There was a demonstration showing these features, and it really looked nice. If you have a business model where you distribute a lot of applications to big customers, this is very worthwhile. Mind you, if you only have small customers it is nice as well, but your boss probably needs an argument for why spending all that money is a good idea.
SIA302: Malware hunting with the sysinternals tools
This session is hosted by Mark Russinovich.
My attendance here is a no-brainer. Firstly, the only other interesting thing in this slot is Windows 8 Metro, and Steve will be covering that as well; besides, it was already demonstrated this morning as part of the Windows 8 demo. And secondly, if there is no really compelling counter argument, you just can't not go listen to Mark Russinovich. He is to nerds and geeks all around the world what Justin Bieber is to pre-teen girls.
I am sitting outside the theatre waiting for the doors to open, because I am anticipating a big turnout for this talk. I’m also making sure my batteries are charged again.
The room is filling up completely. People are being encouraged to move to the middle of the rows because, according to the woman trying to fit everyone in the room, ‘People are not going to crawl over you; 98% of the people here are males!’
I didn’t take notes here, since the entire session was a demonstration of how to use process explorer to try and remove malware from a system without repaving it. It was an interesting demonstration, but not something I would attempt myself I think. At the end he also discussed Stuxnet and Flame, and how sophisticated they were. Stuxnet in particular is a bit scary. Not what it does, but how well it does it and remains hidden.
DEV367: Building Windows 8 Metro style apps in C++.
This talk is hosted by Steve Teixeira. As with Mark’s talk, I just can’t not go, regardless of what the other sessions would be.
Actually I am looking forward to this talk for several reasons. I am going to get back into C++ programming, and since Metro is currently the way forward, and C++ is now finally a first class citizen again, Metro is the way to go. It will be very useful to see a demo on this topic so that I can hit the ground running.
And let’s face it, if you are like me (if you are a C++ programmer, that is a strong possibility) you think that a grey dialog with a square button is an example of good UI design. Throw a listbox on that form and you’re the man. Yet if you want hip people to think your app is hip, you need your app to look like the other apps on their hip device. With metro, a lot of that work is done for you so that is good I suppose.
I rushed to this room after Mark’s talk, because Steve’s session was packed yesterday, and there is only half an hour between Mark’s session and Steve’s.
After creating a standard XAML program, you can use the manifest designer to define the ways in which Windows treats your program, the capabilities you specify your app needs, and how it gets packaged.
C++/CX is a set of language and library extensions to allow consumption and authoring of WinRT types. It is 100% native C++ code. The syntax looks like C++/CLI, and uses many of the same conventions. In fact, just by looking at it you might be lulled into thinking you are looking at C++/CLI.
It is deeply integrated with the STL, which makes sense as everything is native. An important remark was to use only C++/CX at the surface area of your application, and keep the rest in ISO C++. That way your codebase remains portable, while still having a surface that is consumable by WinRT clients.
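That layering advice can be sketched with a toy example. Assume a hypothetical StatisticsEngine class (my own illustration, not from the session): the core is plain ISO C++ and compiles anywhere, while the C++/CX surface is just a thin wrapper. The wrapper is shown only as a comment, since it would need the /ZW compiler switch.

```cpp
#include <numeric>
#include <vector>

// Portable ISO C++ core: no C++/CX anywhere, compiles on any platform.
class StatisticsEngine {
public:
    void Add(double value) { samples_.push_back(value); }
    double Mean() const {
        if (samples_.empty()) return 0.0;
        return std::accumulate(samples_.begin(), samples_.end(), 0.0)
               / samples_.size();
    }
private:
    std::vector<double> samples_;
};

// Hypothetical C++/CX surface (compiled with /ZW only); the hats and
// the ref class stay confined to this thin wrapper:
//
// public ref class StatisticsEngineRT sealed {
// public:
//     void Add(double value) { impl_.Add(value); }
//     double Mean() { return impl_.Mean(); }
// private:
//     StatisticsEngine impl_;   // the portable core does the real work
// };
```

Move the core to another platform and only the commented-out wrapper needs replacing.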
Important detail: exceptions don’t travel across module boundaries. They get translated to an HRESULT and then rehydrated into a COMException. So not only do the exception types not transfer, but you also should not derive from these exception types, because the translation will not know your exception type and you will lose your hierarchical information.
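To see what that loss of hierarchy means, here is a hypothetical sketch (the function names are mine, this is not the actual WinRT translation layer) of how a boundary flattens a derived exception into a bare error code:

```cpp
#include <stdexcept>
#include <string>

// A custom exception type with extra meaning for the caller...
struct ParseError : std::runtime_error {
    explicit ParseError(const std::string& msg) : std::runtime_error(msg) {}
};

using Hresult = unsigned long;
constexpr Hresult kEFail = 0x80004005UL;  // E_FAIL, generic failure

// ...but at the module boundary every exception collapses to a code.
Hresult BoundaryCall() {
    try {
        throw ParseError("bad input");
    } catch (const std::exception&) {
        return kEFail;  // the ParseError-ness is gone at this point
    }
}
```

On the other side, all the caller can rehydrate from that HRESULT is a generic exception; your derived type and its extra data are unrecoverable.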
Metadata of your app or dll will automatically be stored in the .winmd metadata file.
C++/CX also has partial classes. This is neat, and is what was needed to allow the IDE to work on your class and consume it without getting in your way. Your UI is completely configured via XAML, which connects to your code.
Hybrid C++ / JavaScript: high-level programming where it matters. The demo started with an HTML project with js, and not one bit of C++. This is the UI project, not the functionality behind the buttons. Then a WinRT project got added to the solution. The html project can then just consume the WinRT component.
These 2 options of programming look really similar, and you do practically the same thing. I guess the only reason to pick one over the other would be which you would be more comfortable with.
Note on deployment: the model is built on the idea that apps are distributed via the store. However, this does not really work for developers :) or enterprises or other scenarios. To provide this functionality, you can package your application. The package comes with everything it needs to install it on a different machine, and even includes the necessary information for remote debugging.
Today was the most interesting day so far. Or rather: the day in which the sessions were varied, and all were interesting. The Active Directory stuff looks great, and in my opinion this should have gone into Win2008. My guess is it was all related to lack of time, and they finally had the time to finalize and polish the things that were implemented in a rudimentary fashion in 2008.
Windows 8 looks great, and I think it will go places in the consumer market. Now if only Microsoft doesn’t drop the ball, and makes Windows 8 devices available in the European consumer market. Previous devices like the Windows Phone and the Zune were hardly a success here. I think the Zune wasn’t even released in Europe.
The debugging talk was interesting, and I learned a couple of new tricks. Brian is always a good speaker to listen to.
And I am stoked about Metro development with C++. It looks really user-friendly, and you don’t need to jump through hoops like with previous designer experiences. It makes me wonder about the future of WPF, btw. There is a large installed base for Windows Forms, and those certainly have their place.
My impression is that they came up with WPF to implement some of the ideas that made it into Metro, but without any real underlying strategy, platform support or concept of platform diversity at the time when it was implemented.
Last night I spent some time registering for Azure. It just took 5 minutes to sign up, and then sign up for the Virtual Machine functionality. All you need is a live id and a credit card. I’m still running on the 3 month trial for the moment.
Via the Azure management console you can configure the services that you use, and it is indeed as easy as it was said to be. One thing I found interesting is that a bigger machine is not more expensive than a small machine. It is just that IO for the small machine is cheaper. Interesting concept. Geographic distribution seems to be humongously expensive though. However when you think about it: if you want geographical redundancy for your own data center, it is going to cost you some real money as well.
The machine is running Windows 2012 RC, and I installed Visual Studio 2012 RC on it. Azure even was kind enough to give me the option to save the RDP file on my desktop. The one issue I can see is that I had to supply a DNS name for inside the cloud, and the one I picked originally was taken. Now, for me it is not a big deal to pick something else, but one has to wonder how many names will be picked by domain squatters.
The wireless is working flawlessly today. ‘Today’ has not yet officially started of course. It is still 45 minutes before the start of the first session, and the place is still deserted. The tables in one of the seating halls are still mostly empty, there is silence, and there is good coffee without lines at the machine. I bet I could get more work done here just sitting at a table, than sitting behind my desk at work. Silence is really undervalued.
This is a first for me. Having 2 keynotes in a row. I wonder what the point is. I did notice there is no closing keynote on the last day, probably because people tend not to show up. I guess moving the keynote is one solution, but not having 2 would be another.
Perhaps they’ve decided to do it at this moment because the delegate party was yesterday, and they anticipate that there will be many delegates with a hangover, or who might have trouble getting out of bed at a reasonable hour.
As I am waiting for the keynote to start, I am enjoying the music. They’re playing pimped up versions of songs like ‘somebody that I used to know’, with a lot of bass beats added to it. Not that I am a big fan of techno music, but I have to hand it to the organizers: they do know how to provide good sound. The quality is crisp, and the bass is strong. As a friend of mine used to say: you shouldn’t so much hear the music as feel the music. If your chest cavity is not resonating with the drum beats, the bass isn’t loud enough. I bet a Rammstein or Manowar song would sound great in this hall. There is no acoustic echo at all despite the high volume. Hint for the organizers: Hire Rammstein as the warming up act for the keynote.
Totally off topic: the coolest t-shirt I have seen so far is a simple black one with the text ‘I can see dead servers’. Thumbs up to whoever thought of that.
Apparently, the reason for the keynote split is that they wanted to have one for server 2012, and one for Windows 8. Initially I was not too thrilled about Metro, but in an environment that also contains tablets and phones, it does provide a compelling platform.
What does look impressive is that you get a Windows 7 VM with Windows 8 for running things in windows 7 mode. This is similar to the Windows XP mode that was available with Windows 7. What is new is that with a simple click, you can mount the virtual disk to explore it, or you can even add it to the boot menu, and boot directly from the disk image, thus giving you complete control of the hardware and all features (like 3D acceleration).
Another enhancement is that the touch gestures are supported via RDP. It’s not just the mouse input but the actual gesture messages.
What I thought was interesting as well is the Windows on a stick deployment. You can install windows on a memory stick and configure it fully, and then insert the stick in any computer and boot from the stick. You get full access to all hardware, except for the local disks which are hidden. This way you get all access to your own hardware (using the machine as nothing but a bunch of dead electronics) as well as isolation from any malware or other stuff which might be on that pc and which you really don’t want mixed with your (corporate) data and systems.
There was a mention that Metro apps used a new platform called WinRT which is separate from Win32. I guess that makes a lot of sense. Win32 is a dated subsystem, geared towards a paradigm which is essentially synchronous in nature. Win32 is full of blocking calls and synchronized concepts. You don’t want this behavior for tablets and phones. Blocking is bad. Whatever blocks should be handled asynchronously. To that effect, WinRT is ideal. It will probably cause more context switching and a bit more overhead, but feel more responsive. On server side, Win32 is just fine.
There was a brief mention of some changes to the Visual C# language (like the ‘await’ keyword) which adds support for the fact that stuff happens asynchronously. That way, you don’t have to chain asynchronous events (like e.g. reading a block of file data and then shoving it into a decoder) with your own glue logic. Those things look obvious and interesting, and it makes me wonder why there is no session about language changes to Visual C#.
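I’m a C++ guy, so here is the chaining problem sketched in C++ terms (ReadBlock and Decode are made-up stand-ins): without language support like ‘await’, you write the glue that waits for one step before starting the next yourself.

```cpp
#include <future>
#include <string>
#include <vector>

// Hypothetical stand-ins for "read a block of file data" and "decode it".
std::vector<char> ReadBlock() { return {'h', 'i'}; }
std::string Decode(const std::vector<char>& raw) {
    return std::string(raw.begin(), raw.end());
}

// Manual glue: launch the read, then chain the decode yourself.
// (std::launch::deferred keeps this sketch single-threaded.)
std::string ReadAndDecode() {
    std::future<std::vector<char>> pending =
        std::async(std::launch::deferred, ReadBlock);
    // This is the glue that 'await' makes implicit in C#: we must
    // explicitly wait on the first step before starting the second.
    return Decode(pending.get());
}
```

With await-style support the compiler generates that continuation for you; here every hand-off between async steps is a line you maintain yourself.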
DBI308: Practical uses and Optimization of new T-SQL features in SQL Server 2012
This session is hosted by Tobias Ternstrom.
It is a good thing I turned up early, because with 20 minutes to go, this room is packed and people are being turned away. This is for 2 reasons: this is the only remotely interesting session in this slot, and it is hosted in a rather small room. There is room for 150 to 200-ish people if I had to take a guess.
Tobias is a very good speaker. Swedish guy, and with a good sense of humor. The session went by quickly, and was very interesting for someone like me who uses and encounters T-SQL regularly without necessarily being knowledgeable enough to keep up to date with all changes.
The first thing he covered was a new way of generating ID values: the SEQUENCE object. It basically provides the same functionality as an IDENTITY column, but with the advantage of being unique across tables. In that respect they are like GUIDs, but you have a guarantee of getting sequential numbers.
Then there is the new ‘THROW’ keyword which can be used for exception handling in T-SQL within a try-catch construction. For now, the throw does not work across stored procedures, but that is on the todo list. I also learned a couple of things about RAISERROR that were new to me.
One very exciting feature I am looking forward to is the new support for window-based calculations and aggregates. This feature allows you to calculate aggregate values across only a window of rows around the current row, or for example compare values with the previous row, where ‘previous’ depends on how your window is defined. Doing this with the currently supported features involves either cursors or subqueries, which are error prone, ugly, and can make execution times explode.
The only problem with window-based ranges is that once you start using them, it will be very hard to go back to your existing SQL 2008 systems :)
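To make the window idea concrete outside of T-SQL, here is a little C++ sketch (my own illustration, not something from the session) of what a previous-row comparison like LAG(value, 1) computes over an ordered set of rows:

```cpp
#include <cstddef>
#include <vector>

// Sketch of what LAG(value, 1) OVER (ORDER BY ...) gives you:
// for each row, the difference with the previous row.
std::vector<int> DeltaWithPrevious(const std::vector<int>& values) {
    std::vector<int> deltas;
    for (std::size_t i = 0; i < values.size(); ++i) {
        int previous = (i == 0) ? 0 : values[i - 1];  // LAG default for row 1
        deltas.push_back(values[i] - previous);
    }
    return deltas;
}
```

In SQL 2008 you would express this with a self-join or correlated subquery per row; the window syntax lets the engine do this single ordered pass instead.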
And finally, there are a lot of new functions that are basically only syntactic sugar, but which make life so much easier. Often, these are keywords that mirror what other environments like Excel do, and which are just very convenient and save you a lot of manual typing.
And for those who don’t know yet: Sweden once had a 30th of February :)
When you are working with times and dates, you can get very weird edge cases and this is one of them. This is not really relevant to the topic at hand, but it does serve as a nice example that you can mention whenever someone says that time and date are easy.
DEV350: Using Windows Runtime and SDK to build metro style apps
This session was hosted by John Lam.
The room was packed. The session started with an overview of Metro, and how it uses the new Windows RT subsystem. There was an overview of the differences, like the fact that everything in RT is non blocking and asynchronous.
That concluded the powerpoint part of the session, and John proceeded with the demo. He built a metro style app in js, demonstrating the various things that were involved in making the application do something sensible.
I didn’t make any notes during this session, since it was all demo, and the entire demo pdf can be downloaded as well. It did again show the strength of Metro development.
It is worth mentioning that John is a very good speaker, and can take the audience along for a demo ride without rushing things or talking too quickly.
DEV368: Visual C++ and the Native Renaissance
This session is hosted by Steve Teixeira. My attendance here is a no-brainer. Steve is one of the best speakers at the whole of tech-ed. Even if C++ were not my favorite topic, I’d still be here.
I turned up an hour early for this session. Not because I expected a rush, but because I wanted to have a quiet place to type about the previous sessions. I didn’t get around to that because Steve was already there so we had the chance to talk.
One on-topic thing we talked about was standards conformance of the C++ compiler, which is a goal they are going for. When I asked about the ‘export’ keyword, Steve said it was removed from the standard. That kind of surprised me, because I remember some ‘over my dead body’ quote from one of the committee members. I have to say it was a good decision. I remember an article by EDG (who are the smartest compiler guys around) in which they said that implementing it was very hard, and that it didn’t really work the way you would expect it to.
Combined with the fact that there is no ABI for templates, it makes sense to finally get it out of the standard. If you can’t make the compiler standards-compliant, then make the standard compiler-compliant :) Either way improves compliance.
Meanwhile, the crowd shuffles in steadily. Earlier on I joked to Steve that I had come early to beat the rush. By now the room is completely packed, and there were 10 to 15 people standing in the back of the room. A quick calculation says that there are about 130 to 140 people, which is amazing. I’d never have guessed that. I would have expected that it would be the same 30ish people I’ve seen in the other C++ sessions.
Speaking about people, the session had a graph early on about the age statistics for C++ programmers, to prove that we are not all greybeards. The average age turned out to be about my age demographic: late 20s, early 30s. As could be expected, the age curve dropped off with increasing age. There was one guy in the survey who was allegedly 92 years old. He was probably still waiting for a build to finish.
Steve talked about the renewed interest in C++, and some of the reasons for it. For one, with the variation of computing equipment (like ARM devices) cross platform portability is starting to matter again. And with C++ for the processing and for example XAML support for the GUI, C++ is a compelling option. Especially considering that memory footprint and cpu cycles have become important again with that variety of devices.
The fact that the language standard itself was finalized also has a great deal to do with it. Things like the auto keyword and shared pointer made a huge impact. There was an example with some old style C++ code on the left and the equivalent (using modern syntax) on the right. The right part was as readable as e.g. C#. Yet 10 years ago, we’d have looked at the example on the left and said ‘Yep, that’s some good clean C++ there’. It is kind of obvious why many people didn’t want to touch C++ with a ten foot pole.
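I don’t have the exact slide, but the contrast was along these lines (a made-up example in the spirit of it): the old-style loop on top, and the same thing with auto and shared_ptr below.

```cpp
#include <map>
#include <memory>
#include <string>

struct Widget { int id = 0; };

// The "left side of the slide": 2002-style C++, spelled-out iterator
// types and raw pointers that someone must remember to delete.
int SumIdsOldStyle(const std::map<std::string, Widget*>& widgets) {
    int total = 0;
    for (std::map<std::string, Widget*>::const_iterator it = widgets.begin();
         it != widgets.end(); ++it) {
        total += it->second->id;
    }
    return total;
}

// The "right side": auto and shared_ptr do the bookkeeping,
// and the loop reads about as cleanly as the C# equivalent.
int SumIdsModern(const std::map<std::string, std::shared_ptr<Widget>>& widgets) {
    int total = 0;
    for (const auto& entry : widgets) {
        total += entry.second->id;
    }
    return total;  // ownership cleans itself up when the map goes away
}
```

Both functions compute the same thing; only the second one is something you would show a C# programmer without apologizing first.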
With the new Metro style programming (which will be shown more in-depth tomorrow) a key point is that C++ apps can finally use the same designer and same designer patterns as C#, VB.NET or js programmers. This means you have up to date tools, AND you have the option to make your C++ app look cool.
With the introduction done, Steve covered things that Kate already touched upon earlier, like PPL and AMP. The demo was awesome. It was basically the visualization of a piece of paper, on which you could draw using touch. And then (again, using touch) you could take the page and flip it, with a live 3D rendering of the page as it would look if it were a real piece of paper. There were some additional ripple and wave effects that could be turned on and off. That was really impressive from a performance point of view.
Another important part of the demos covered the way you would design a Metro app in C++. I am not going to go in detail about that here, since this topic will be handled more in-depth tomorrow.
At the end of the demo there were some intelligent questions, which is not a surprise, given an obviously intelligent audience. When the session slot had ended and there was some time for one-on-one questions, Steve was immediately swamped. Clearly, interest in C++ is on the uptake.
DEV311: Taking Control of Visual Studio through extensions and Extensibility
This session was supposed to be hosted by Anthony Cangialosi and Anthony Lindeman. There was only one presenter, and I didn’t find out there were 2 names on the planning until after the session, so sadly I can’t say which of the 2 presented.
I chose this topic because at this session slot, there was little else that was relevant to me, and VS extensibility is something I played with before. Incidentally, my experiences with VS extensibility were mentioned right at the beginning of the demo as ’the bad old days’.
It used to be that just for being allowed to use the extensibility SDK you had to register and be approved, and sign paperwork and jump through various hoops. Documentation and examples were not too great either in those days.
These days, you can extend without prerequisites and all you need is the Visual Studio SDK. From there, you are all set and you can start building both your VS extension and the deployment package needed to distribute it in a proper manner.
Here I didn’t take many notes either, because like the Metro session, it was largely demo driven. In this particular demo, the presenter started with sample boilerplate, and fleshed it out as he went along. The demo was ok. It looked interesting, but for now not something I can really use.
Today was interesting. I learned a lot about Metro, and seeing it in the whole picture instead of just as a new type of desktop, it is starting to make sense.
The highlight of the day was without a doubt Steve’s talk. The session itself was interesting of course, but what made it special was that the room was packed. And again I think it is weird that there are more talks about C++ than about all other languages combined. You won’t hear me complaining though.
Internet was up during the entire day with only a single glitch (as far as I noticed) so that is a definite improvement.
I also got my free book on Windows 2012 Server, which is nice. Speaking about books: to my amazement I discovered that this year, there is no bookstore in the exposition hall, or indeed anywhere on site. Whoever made that decision should have his head checked, because tech books sell like hotcakes at tech events like these. Geeks love tech books at least as much as free t-shirts.
I’ve bought books every year I’ve been to tech ed. The guy at the MS Press booth had gotten many complaints already, and while he was explaining the situation to me, several other people dropped by and said ‘What?! No books!?’
By the end of the day I was dead tired and glad to get back to the hotel. I think I’ll turn in early tonight, after spending some more time playing with VS 2012 on my Azure machine.
Breakfast was the same as yesterday. I thought of going someplace else, but didn’t for 3 reasons. First, tech-ed starts at 08:30, and the restaurants are at the other side of the RAI. I would really have to hurry in order to get there, have a meal, and go to the RAI. Second, the food at the restaurants is so great that my evening meal makes up for things again. And lastly … a sizable portion of the world population is dying of famine and dehydration. During the time it took for you to read this paragraph, several people just gave up and died. So it would be a bit snobbish to make a big deal out of it and scorn the perfectly good food I am given.
Now, tech-ed. There are noticeably more people here. Yesterday, I saw the exhibition hall when the builders were in the process of building the booths. I have to say that it will be impressive if everything is finished this morning, because yesterday it was still an unholy mess, like you can expect on a big construction site that is nowhere near its due date.
The other big hall where people gather in between session is completely devoid of seating arrangements. I’ll have to check out the exhibition hall later to see what it looks like. I am used to seeing these kinds of areas full of bean bags and other things that allow you to sit.
Of course yesterday evening I had to phone the home front and talk with my oldest daughter, who is apparently missing me very much. And she started asking about her present, because of course I can’t go ‘on work holiday’ without bringing back a present. I also had to explain who this ‘Kate person’ was that I had spent the day hanging out with. This was a point of interest for my wife as well :)
My youngest is much more emotionally independent. She only has to know that I come back and that I’ll have a present, and she’ll be satisfied.
The keynote
It was a usual Microsoft keynote, which started with a lot of action movie music and light effects. Lots of deep basses. The keynote was delivered by Brad Anderson, with guest appearances by speakers like Mark Russinovich. The room was almost completely full.
As an aside, I have to mention that this is the first tech-ed where the wifi experience is plain crappy. The network stays up, but the internet connection keeps crapping out. The occasional connection comes through, and then it stays up for a while and goes down again. At first I thought it was my laptop, but then I noticed the guy next to me having the same problems. I have a feeling that whoever was in charge failed to anticipate the load that 8000 nerds would place on the internet connection.
Mobile 3G and stuff like that seems to work though, given the number of mobile users who could connect to the demo application.
The keynote started with a quick overview of Microsoft Hyper-V and all the goodness you now get out of the box with Windows Server 2012. The numbers they showed certainly looked impressive. Per VM: 64 cores, 1TB (or was it 4?) of memory, and over a million IOps of data transfer. It is very uncommon for vice presidents to mention the competition during a keynote. The name ‘VMWare’ was used quite a lot. I really had the feeling that Microsoft was throwing down a gauntlet.
In all fairness, the numbers shown were certainly impressive, and seem to give VMWare a run for its money. And the important consideration is that you do get a lot out of the box with Windows Server 2012, whereas with ESX you don’t. I am not administering a VM host environment so I could be wrong, but it does look more flexible and powerful.
After that there was a demo of some of the Azure features coupled with Visual Studio 2012. As far as keynotes go, this one wasn’t too bad. The Azure demo made me change the selection of the next session I am going to. Originally I was going to see something about SQL in hybrid IT, but I decided to go to ‘Windows Azure Today and Tomorrow’ instead.
I should point out that there was not much to choose from for the first session slots. I have the impression that they kept Tuesday morning free from real content so that the late arrivals would not miss anything important.
FND05: Windows Azure Today and Tomorrow
This talk was hosted by Scott Guthrie. A very knowledgeable person for sure, but not a natural speaker like Mark Russinovich.
Scott explained a bit about Azure, and how the payment plan works. In a first for Microsoft (in my experience) you only pay for what you actually use. You can dynamically increase the cpu, memory, storage and other things when you need them, and you only pay for the time you are using them, after which you can just reduce your hardware / services.
A basic virtual machine with Windows Server 2012 costs almost nothing. I would mention the cost here if the wireless actually worked and I could check. I am not entirely clear right now if the metric looks at hours in use, or hours running with a given configuration, and whether it counts the hours when the machine is shut down.
The latter seems weird of course, but consider my scenario: I want to have a machine that is performant enough for running Visual Studio and debugging my various hobby projects, and I want to be able to use that machine from anywhere. Currently, that is a Windows 7 machine in my basement, with 4GB ram and a Core2Duo. It is getting dated but still fast enough.
However, I might need to replace that machine in the near future, and for the cost of buying a new machine, it might be worthwhile to run my dev machine in Azure cloud. Especially if it would not count the hours during which I am not using it, which would make it dirt cheap. And I would have the advantage of being able to work from my laptop in the living room, or a hotel room without needing to worry about my data or the performance metrics of whatever machine I am using.
There was some more talk about Azure and the various user scenarios. Mark Russinovich went in-depth on those in the next session, so I’ll not cover them here.
AZR208: Windows Azure Virtual Machines and Virtual Networks
This session was hosted by Mark Russinovich. I’ve heard him speak before and he is a good speaker. Mark started with an explanation of Azure, and private clouds. One thing that was made clear is that cloud machines are just VHD files, just like normal Hyper-V machines. This means that transferring machines to and from the cloud, or different cloud providers, is completely transparent. There is no lock-in.
One way to use the cloud is to create a VM in the cloud, move your application there, and then scale up the VM as needed. On top of that, you could choose to run components of the application on cloud services. One such service is SQL Server, which can be scaled up to Godzilla-like proportions. The good thing (other than not having to maintain the monster hardware) is that Microsoft takes care of patching and other things.
In your virtual machines, you can also add storage on an as-needed basis, which will be backed by cloud storage solutions. The virtual disks will be stored on redundant disks in the SAN, meaning that you are isolated from normal disk problems that might occur. Your own disks can of course be configured for max performance, like striping.
Mark then covered things like how services are organized in different groups so that software patching and network maintenance can be done in a manner that causes no resulting downtime for your applications.
And finally there was an explanation of virtual networks. It is a given that the different servers in your collective can talk to each other, but in an enterprise environment, you may want to domain-join those machines. And of course, you would not want that to happen over an internet connection. For that purpose, Azure supports a VPN connection to your own infrastructure. This is hardware VPN, and a nice feature is that Azure can generate VPN configuration scripts for the most common firewall manufacturers, like Cisco and Juniper. Once that is set up, those machines appear to be on your own local network.
I thought that was a particularly cool feature, because it allows a company to move a great number of (non-critical) machines up to the cloud, where their cost can be budgeted up front, and no on-site personnel is needed to support the infrastructure. Currently, if you are running your own VM infrastructure, you are supporting it all, and you may have a lot of infrastructure, which may cost a lot more money than needed. Then there is the square meter price of your data center, electrical power and cooling, … Even if you move only the non-critical machines to Azure, a lot of benefit can be had.
A final thing that is worth mentioning is that Azure is currently run across 16 massive data centers which can take over from each other. So if one data center goes offline due to a meteor strike (to name a cool example), another can seamlessly take over. In fact, Mark mentioned that it is a hard promise that any data store change is replicated to at least one other data center in the same geopolitical area within 15 minutes. This means that data from EU companies stays in the EU, and US data stays in the US, etc. For some people this is irrelevant, but many companies that are subject to regulatory bodies have strict requirements to make sure that certain data may never leave the EU or the US.
Before tech-ed, I had a fairly jaded view of Azure, but after the things I have seen in this and the previous sessions, I have come around. Azure (or clouds in general) is the way of the future. We are still in a transitional phase where companies start their own private clouds (be they Hyper-V or ESX or something else), but within 10 years, I suspect that many companies will move a great deal of servers into a cloud that they don’t manage themselves. After all, why would they?
And given that there is no cost of entry, that you only pay for what you use and that you can scale up and down dynamically, there is no doubt in my mind that this will take off.
WSV205: Windows Server 2012 Overview
This talk was hosted by Michael Leworthy.
As soon as I sat down and he started talking and showed an overview of his topics, I realized I had made a big mistake. This was going to be a marketing talk, ‘look how great we are’-style. I gave it 5 more minutes in which I was proven correct, and I decided to leave. There are few enough Visual Studio talks as it is, and I changed to DEV213.
DEV213: What’s new in Visual Studio 2012
This session is hosted by Orville Mcdonald. It was already 10 minutes underway when I came in, but I managed to pick up easily enough.
Right when I came in, he was about to demo how easy it is to develop metro apps in Visual Studio 2012. The main reason for Metro is to have a unified approach to developing for multiple platforms, so that your app might be useable on your desktop, on your tablet and on your phone.
I cannot judge how easy it is to develop metro style, but testing it sure looked great. There is a simulator that can be used to test your app in real scenarios. The simulator can do all the things any real tablet could do. The orientation can be changed, you can ‘slide’ your finger, and do all manual manipulations in a simulated way.
Then there was a demo of migrating a web application from a local SQL express database to one hosted on Azure. I can’t comment much on this, except that it is what I would expect from a database migration. The fact that it is in the cloud is less interesting, and I talked about that already.
One of the annoying things in VS2012 that is new, and which I cannot believe made it into the release candidate, is the all-caps menus. It is the one and only application I know with a menu in all caps, and I hope they reassess that decision. It is loud. Don’t believe me? Consider HOW RELAXING IT IS TO SPEND ALL DAY LOOKING AT A MENU IN ALL CAPS! LOUD, ISN’T IT?!?!
I was told that this will become optional in the RTM.
To be honest, I had hoped that this session would be more about language and debugging topics, but I guess Metro and Azure are the new kids on the block. In any case it was interesting to see how it works and how it can be debugged.
DEV316: Application lifecycle management tools for C++ in Visual Studio 2012
This session is hosted by Rong Lu. I managed to talk to her in private for a couple of minutes, because I wanted to know what exactly was covered. As soon as I told her that I had been to Kate’s pre-conference talk, she told me that if I had any other place to go, it might be worthwhile doing that, because she’d cover the same topics, just a bit more in depth.
I thought to go to WCL332 instead, but that turned out to be about deployment tools and deployment diagnostics. Not really my cup of tea so I decided to go to DEV316 after all.
The first thing I noticed was that the crowd was the same as during Kate’s pre-conference talk. No surprise really. The 30 of us are probably the only C++ programmers in the whole of tech-ed.
Rong covered architectural discovery. The main user scenario for this feature is analyzing code that someone else wrote. C++ codebases tend to live a long time, and many C++ programmers have to maintain or update code they didn’t write themselves. In short, the architectural discovery tool builds a diagram with the binary components. These components can then be broken down into explorable and expandable layers. It is possible to edit and save the diagram and mark it up.
This is surely a handy tool for analyzing other people’s code, as well as creating images for software design documents. It was undecided at this point, but this functionality will probably be reserved for the Ultimate edition of VS for the next release.
The next demo covered static code analysis. This is really user-friendly. You can easily figure out the problem, and there is even a right-click menu item for inserting the suppression pragma if you want it.
There are a couple hundred rules that are checked by the code analysis, and rule sets are programmer-configurable. There is 64-bit support, and all rules are available from the Pro version and above, including the concurrency analysis rules.
The unit testing framework for unmanaged C++ was shown again. This is available in all versions, though it will be really basic below VS Professional. From Premium onward, continuous run after build is available, which allows you to run the unit tests with every build. The unit testing framework is extensible with third-party frameworks.
Code coverage results were available with a single click, taking you straight to the code. Covered code was shown in blue and uncovered code in red. It looked very well made, and it will certainly be very useful for ensuring the quality of algorithms.
This session topic was very interesting, and Rong Lu held an excellent presentation. Kate Gregory was there as well.
Day 2 was filled with lots of good information. Azure was the main surprise for me. And Rong Lu’s presentation was worthwhile as well.
One interesting factoid: this edition of tech-ed there are more C++ language talks than C#, VB and F# combined. :)
Oh, and the internet connection stayed down until the end of the day. I was told at the wireless booth that they were trying to fix it. Wireless was up, but internet for the entire RAI had gone down. Perhaps they'll figure it out by tomorrow.
As it turns out, my prediction about breakfast turned out to be true. In fact, I’d say it is not even worthy of the name ‘breakfast’. There were a handful of bread rolls, no bacon, no cheese, no meat, no honey… They did have some pre-packaged jam, chocolate paste, cream cheese, and hard boiled eggs. Exactly what I would have expected from such a loungy modern artsy place. Hip people apparently don’t eat real food. I’ll have to see if there is a good breakfast place available around here.
At least the coffee was good.
I’ll have to find a pharmacist today to buy some earplugs. I never travel without. And I am certain I put a pair of earplugs in my travel bag. There must have been a freak quantum event, opening up a wormhole in my luggage, making the plugs disappear.
To be fair, the hotel rooms are very quiet and sound proof. But the shower head was dripping. It stopped after a while, but I figured I might put in my earplugs so that I wouldn’t wake up if it started again. Yet I didn’t have any. I figured that if MacGyver could escape from a tub of acid with only a chocolate bar, I could improvise earplugs with the items available to me in my room.
I am not entirely certain, but there is a good chance I am the first person to improvise ear plugs using only gummi bears and the cellophane wrapper of a plastic cup. The result was surprisingly ergonomic and effective. :)
The RAI is only a good 5-minute walk from the hotel, which is nice. The RAI itself is still quiet. The crew are still in the process of building up the event halls. There is something nice about walking around in such vast halls when things have not yet started. You’re unnoticed and all by yourself while being around other people. The coffee is great.
I was here early, and managed to get hold of Kate Gregory while she was preparing for the session. We spent over half an hour just catching up on things and life. That’s one of the good things about tech ed. You get to meet the same people again and again, and keep in touch across jobs and years.
Kate started the day with an overview of the new C++ language features of Visual Studio 2010 and 2012. More specifically: auto, shared_ptr and unique_ptr. It’s been a while since I programmed native C++, but I have to say that with just these 3 things alone, C++ has become a lot more readable and robust. I’ll have to play around with C++ again to get a feel for it.
To be more precise, I am writing a parser / compiler / code analysis tool in my free time for the DeltaV phase code running our plant. I already had a basic version that I can use, but it is rather ugly, and does not really produce an executable parse tree. It uses regular expressions to parse the code, and does not support all language features.
For the sake of doing it, I have started a new project in my free time to build a real tokenizer / parser that can be used later on to run the code in simulation mode, and to perform more detailed static analysis. Currently I am doing that in C# for convenience sake. With over 100 megabytes of DeltaV code to crunch through, it will be interesting to see if the same algorithm in native C++ (using smart pointers and other new C++ goodness) is going to be faster and smaller.
After the break, the topic changed to Application Lifecycle Management, which is interesting, but not really applicable to me because at the moment I am not working in a development team environment. ALM is basically what Team Foundation Server used to be. ALM is going to be in all versions of VS. The amount of functionality will depend on which version you have.
I’ve heard good things about TFS, and many horror stories about setting it up, which can easily take 2 to 3 weeks of billable consulting time. To mitigate that issue, Microsoft has now provided a TFS hosted on Azure. For now it is free, though it may not stay that way. For smaller companies, this may become a very good option, depending on the SLA of course. You don’t want to discover one day that your project history is gone for good. I suspect that it will start costing money if you want a set SLA. Even then it will be much more cost effective for small companies than hosting their own TFS and dealing with maintenance.
Code coverage and unit testing
The next part of the talk covered code analysis, unit testing and code coverage for native projects. I have to say that that looks impressive. For code analysis, there is now support for native code, meaning you can get code and class diagrams for native code, where that used to be only for managed code. Unit testing and code coverage work pretty much like they do for managed code.
Instead of working with method attributes, there are macros that provide similar functionality, and you don’t even have to know how they work. The unit testing provides test results, code coverage, and various UI features that make them very convenient. I sure wish I had had that available on earlier projects.
Lunch was ok. It was a sandwich / salad lunch. The sandwiches were good and the company was great, since I had lunch with Kate. The only downside is that I probably needed half the calories of the lunch to get to and from the lunch hall.
Library vs language
After the break, the topic switched to lambda functions, and how they can be used with for_each. Following from that she covered parallel_for_each, allowing the programmer to distribute a for_each loop across multiple CPU cores. That looks very interesting, allowing programmers to get a massive speed boost for repetitive tasks that are not interdependent.
My current code parser (the one I wrote for analyzing our process control code) already uses parallelization, but does so manually via a thread pool and explicit handling of task completions. This looks interesting.
With the additional performance gain of using native memory management, it will be interesting to see if a native implementation of my DeltaV code parser will be quicker. That would also be a good opportunity to get re-acquainted with C++ and the new language features, as well as keep my development portfolio up to date. If I ever want to get back into development again, I’ll need to be able to have something to show for the last couple of years.
The final leg of the parallelization example was done with AMP, which stands for ‘Accelerated Massive Parallelism’ and uses the GPU instead of the CPU. The video card needs to be DirectX compatible. If it is, then you can create small tasks that can be distributed in parallel by the GPU. As you all (should) know, a modern video card contains dozens or hundreds of pipelines which are perfect at executing simple drudge work.
Those pipelines suck at branching. They can’t handle it properly, and even if they can, performance goes from warp to suck. If you need to branch or do anything complex, you have to stick with the cpu cores. But anything that can be broken down to ‘do this simple task a gazillion times’ will make your GPU scream without even breaking a sweat.
At the end of the talk, there was an overview of the different types of containers and their pros and cons, and then the last part of the talk was about algorithms and some general programming remarks.
I had a great day talking with, and listening to Kate. Somehow the day just flew by. I really like C++, make no mistake, but I had my reservations about 8 hours of C++. Yet I shouldn’t have worried because the talk was very interesting, and Kate covered a lot of diverse topics.
I forgot to mention it yesterday, but I had great Japanese food at restaurant Takara. Takara is Japanese for 'treasure'. Very good food at a very reasonable price. Tonight I had a steak at restaurant ‘Toon grill’ (Toon rhymes with Tone). It was Argentinian beef, and one of the best steaks I’ve ever eaten.
All in all, day 1 was very much worth it.
It’s off to tech-ed again. It’s been 3 years since my last attendance. Last year, there was no tech-ed, and the year before that, my colleague was the one who could go. Technically, last year I could have gone to tech-ed in the US, but I didn’t fancy flying to LA for a 4 day event. By the time my jet-lag would be under control, I’d be on the plane home again.
Anyway, tech-ed 2012.
Some of you might remember the planes, trains and automobiles experience I had traveling to Berlin last time. Or rather, the trip to Berlin was fine, but the trip from Berlin airport to the event center was not. I have a reputation of being one of the worst navigators on earth, and I lived up to it. This year, tech-ed is in Amsterdam. Amsterdam is close, so it should be easy to get to. Right? Not quite. I took a local intercity train to Antwerp central, and there I would take the intercity to Amsterdam. I had arranged for a former colleague and friend to pick me up from Schiphol station; he would drive me to the hotel and we'd go to a restaurant afterward. Despite my usual travel anxiety, I told myself that nothing could go wrong this time.
Sadly, I was wrong. I arrived in the central station only to discover that due to work on the tracks, my train to Amsterdam had been cancelled. They told me I had to wait for 2 hours, then travel to just across the Dutch border and take a local ‘stops at every hovel’ train to Amsterdam.
Instead of that, I decided to do the sensible thing and buy a ticket for the Thalys connection to Amsterdam directly. I’ll probably have to explain to HR why I bought additional tickets, but even if they were to reject my expense report, avoiding all the extra travel stress and the hours upon hours spent in a local train would be worth the extra cost. I have to say that riding the Thalys is a joy, even in economy class. There are soft and comfortable chairs, power outlets for battery chargers, and an overall calm and soothing atmosphere. I suspect that my trip home will be nowhere near as luxurious. Yet for now, life is good.
My friend picked me up at Schiphol, and from there we went to the RAI so that I could register myself and get that out of the way before the rush. That took only 5 minutes. Getting out turned out to be harder. When we entered the underground parking, we kind of assumed that we would be able to drive out again.
That turned out to be a mistake. We had to have a ticket to open the barrier. And then my friend said ‘look, the exit next to us doesn’t have a barrier’. And a minute later we found out why, when we arrived at the entrance barrier from the wrong side. :) My friend had to drive his car backward down the spiral again. As it turns out, we had stumbled into the attendee parking, which did not have hourly rates. I had to pay 15 euros for the 5 minutes we’d been there.
Checking in at the hotel had to be done via self service. The hotel does not have a traditional reception. It was all very modern, very trendy, and rather annoying. Call me old fashioned, but when I arrive at a hotel after traveling, I just want to go in and talk to a (preferably cute female) receptionist who will give me a key and wish me a nice stay. What I do not want is to trawl through my backpack, looking for the hotel reservation details, and then negotiate my way through an on-screen menu. Then again, it was relatively easy and the application didn’t bork, so I mustn’t grumble.
Then the hotel room. Most hotel rooms try to emulate a sense of ‘home’ with various degrees of success. Some are better than others (airport hotels are notoriously bad) but most regular hotels try to give you a sense of stepping into a homely place. Whoever designed this one decided to abandon that entire concept, and go for broke. This is not a hotel room as I know it.
The best description I could come up with is that I feel like I am in the escape pod of an enterprise type starship, year 2212. You can see the pics here. http://www.citizenm.com/innovative-hotel-rooms
The shower and toilet have circular milk glass walls with blue lighting at the top. They look like transporters. Everything else is either chrome or white. The lighting is well designed though. Indirect spots and hidden indirect lighting. Also, this room comes with a remote for everything, from the color of the shower lights to the tv, to the AC unit, and more. I have to say it looks nice.
The hotel does not have a restaurant. That is not a big problem because there are plenty of places to eat close by. Japanese, Pakistani, Italian, Chinese, Dutch (of course), steak houses, etc. I won’t be wanting for food. And there is a ‘Miffy’ store as well, which is good because I can buy something for my daughters. The most critical question left is what the breakfast will be like. It will be very hard to top my breakfast experience from tech-ed 2009 in Berlin. That was the mother of all breakfasts. Given the loungy modern look of the breakfast area around here, I don’t have much hope that there will be bacon and eggs. This is probably more of a ‘yoghurt, cereal and croissant’ place. Still, we’ll see. I could be pleasantly surprised.
I'll just finish reading through the session guide, and then I'll turn in early so that I can start tech-ed with a clear head.
All in all, tech-ed was worth it this year.
If you’re an IT professional, then visiting tech-ed is a valuable learning experience. Even though most of my job involves off-the-shelf process control software, this software still runs on the Windows platform, and uses Windows and Microsoft technologies to work.
So for the purpose of administering and troubleshooting software on the Windows platform, it is important to know how the software works, what its capabilities and configuration options are, and how it fits into the larger Microsoft ecosystem.
By attending tech-ed and choosing the appropriate tracks, it is possible to keep a broad perspective. By knowing the important basic aspects of Windows 2008, SQL Server, Active Directory and other related things, I can get a better understanding, which will always come in handy eventually. Sooner or later we’ll run Windows 2008 and Vista, or ‘7’, or SQL Server 2008, or something else.
And some things are downright practical already: the capabilities of PowerShell are astounding, and can definitely make life much easier in cases where batch files are now used that are not always easy to understand, or are unable to provide much feedback when run as a scheduled task.
From that perspective, tech-ed was definitely a success.
Technorati Tags: General
After tech-ed I am excited about developing for Windows 7, and playing with the parallel computing toolset that is going to be part of Visual Studio 2010. Since my existing partitions were already quite cramped, I thought I’d install 7 on a new disk so that I can put it on a 200 GB partition.
I bought a new 640GB Western Digital disk and set out to install Windows 7. All went fine until after the first reboot, when I got a black screen and my monitor fell into power save. I tried rebooting in safe mode but that didn’t work.
For some reason, Windows 7 can sometimes do something funky to the display settings, causing the video card to turn into a problematic mode.
After some fiddling around, I found out (thanks, Google) that I can press F8 to start Windows 7 and choose an option to force the video resolution to 640x480 and boot successfully. I was then able to download the proper NVIDIA drivers, install them, and configure the proper display settings.
Check-out was a painless experience. I’d already packed my stuff yesterday so I was at the reception early enough to avoid the rush. The cloakroom in the Messe was organized properly so luggage drop-off was painless as well.
Time for a coffee and a quick e-mail check, and it was time for the first session of the day.
DEV307: Parallel computing for managed developers
This talk is hosted by Steve Teixeira.
It is a repeat of the talk that was held earlier this week. Despite that, the room is filling up nicely. When the session starts, the room is not completely packed, but just well attended.
From interviews with customers in large ISVs and game companies, there is still only a minority of programmers who program in parallel. Usually, the parallellimization (my spell checker claims this is not a real word :)) is done by one or two programmers at most, who do the infrastructure, and the rest of the team just writes their code so that it can hook into that. The number mentioned was that only a few percent of programmers program in parallel. This really surprised me, since I have been doing that for over 10 years, and I thought many people did.
One of the big reasons for this disconnect with the parallel world is that up until recently, thinking about parallelism meant that you had to think in terms of actual execution flow instead of tasks. Threads, locks, and the various patterns made it hard to focus on solving the actual problem at hand, because the concurrency plumbing around a parallel problem was so complex.
To ease the transition to parallel programming, Microsoft is working on a parallel programming toolkit that has all the basic plumbing in place, so that programmers can start thinking about task-based programming and let the runtime take care of the gory guts underneath. This way, you, the programmer, are not forced to hammer your solution into a threading paradigm and implement the guts to support cancellation, exception handling, and other things that can otherwise turn a seemingly simple threaded solution into something stupendously convoluted.
Steve is a natural born speaker, and again his presentation was a handful of PowerPoint slides, interspersed with lots of demos and code explanations. This presentation focused mainly on the need to change the way we think about parallel execution.
CLI309: Sysinternals tutorials
This talk is hosted by Aaron Margosis.
Now that I see him, I recognize him as ‘the other guy’ whose name I forgot during the virtualization -> app compat talk.
His presentation included 3 PowerPoint slides, and these were shown during the first 2 minutes. The rest of the presentation was a non-stop demo of some of the Sysinternals tools and how they can be used.
I have been using these tools for over 10 years now, and they are THE tools you need during troubleshooting, debugging, or simply if you want to know what goes on under the hood. Despite the fact that I have been using them for so long, I still saw a couple of interesting features that I hadn’t seen before.
This talk went very smoothly, and Aaron is a great speaker. As with Mark Russinovich’s talk, the room was packed full. They had already changed the room assignment so that it was now in one of the biggest rooms of the convention center, but still it was packed.
There is little point in me trying to cover the contents of this presentation here. Just download the latest release of the Sysinternals suite and start playing with it.
The food aspect of the lunch was as basic as it gets: packed lunch. The turkey sandwich wasn’t bad though.
The company was great though. Steve and I finally managed to meet up and we spent an hour and a half catching up. That was really great, and one of the nice things about going to tech-ed. Not only is the learning experience extremely valuable, but you also get to meet people from all over the world.
Because of this, I missed the last session of the week. As usual, this wasn’t a real drama. The afternoon session(s) on the last day of tech-ed is/are usually less interesting because they factor in that many people are already leaving because of their flight times.
I had thought to attend Mark Russinovich’s ‘The case of the unexplained…’ Windows troubleshooting talk, which would cover the various troubleshooting scenarios he was involved in and solved with the Sysinternals tools. I followed the blog series in which he wrote about those things, so I didn’t miss a whole lot.
Day 5 wrap-up
Today was less intense, due to the fact that everybody is leaving today, and the schedule is set to that expectation. The talks were interesting though.
I am writing this from the Starbucks at Tegel airport, and I have to admit that I misjudged the size of the mugs here. I chose the middle size because I thought that it was the size I am used to. The only reason I thought that was the relation between the different sizes that were shown.
Now that I am actually holding it, I can conclude that they don’t use ‘small’, ‘medium’ and ‘large’, but ‘large’, ‘oversized’ and ‘humongous’.
I still have some time to fill before I can check-in for my home flight, so I can get something to eat and finish my reports. I’ve already checked with my colleague and there were no dramas at work so that is good.
I have to say that Tegel airport is a much nicer place to wait than Barcelona airport. There are a lot of stores here, a Starbucks, places to sit… and so far no one is throwing me out of the premises because I am not actually buying additional coffee, so that is nice.
My flight doesn’t leave for another couple of hours so I can do some development or begin on my tech-ed wrap-up report. It won’t be as extensive as the day posts, but I always like making a summary of the week.
I am already looking forward to home, and being able to hug my kids. Unfortunately, my wife is in the US at the moment, and it will be another week before I see her again.
I had an interesting experience at the security check though. I always carry a Swiss pocketknife with me. You know the ones: with a file, screwdriver, can opener, a dozen other things, and of course, a blade. A couple of times I thought ‘I must not forget to put it in my checked luggage’. And of course I arrived at the security gate with that knife in my pocket.
I quickly put it in my carry-on bag and put everything in those plastic X-ray boxes. I was told not to take off my rings, and I triggered the metal detector. I was wanded down by a friendly security guard. For some reason, their detectors were set so sensitively that the wand beeped because of the individual rivets in my jeans, the zipper of my jeans, my rings, and even the roll of peppermints in my pocket.
No kidding, the wand beeped at the foil wrapper of my peppermints. Still, I was allowed to go through, but there was no fooling the X-ray machine. The lady kindly asked if I had a knife in my bag, after which I dutifully handed it over. She looked at it for a minute, tried to open it (she failed), said ‘Hm, ok, no problem’ and then gave it back to me with a smile.
She must have decided that I was unlikely to attempt a hijacking with a little Swiss army knife. It probably helped that it was clearly recognizable as a Swiss knife, and that I used it as a keychain with keys attached. I don’t think I’d have gotten the same treatment had it been my Spyderco. Interestingly, she made more of a fuss about the fact that my drinking bottle was still half filled with water. So I opened it and started drinking, and after a couple of gulps she told me it was ok.
It was nice to see that the German security guards were both paying attention AND showing common sense.