Extension methods, explicitly implemented interfaces and collection initializers

This post is the answer to yesterday's brainteaser. As a reminder, I was asking what purpose this code might have:

public static class Extensions
{
    public static void Add<T>(this ICollection<T> source, T item)
    {
        source.Add(item);
    }
}

There are plenty of answers, varying from completely incorrect (sorry!) to pretty much spot on.

As many people noticed, ICollection<T> already has an Add method taking an item of type T. So what difference could this make? Well, consider LinkedList<T>, which implements ICollection<T>, used as below:

// Invalid
LinkedList<int> list = new LinkedList<int>();
list.Add(10);

That's not valid code (normally).... whereas this is:

// Valid
ICollection<int> list = new LinkedList<int>();
list.Add(10);

The only difference is the compile-time type of the list variable - and that changes things because LinkedList<T> implements ICollection<T>.Add using explicit interface implementation. (Basically you're encouraged to use AddFirst and AddLast instead, to make it clear which end you're adding to. Add is equivalent to AddLast.)
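For readers who haven't bumped into this before, here's a minimal sketch of the pattern (a hypothetical type of my own, not the real LinkedList<T> source):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Hypothetical type demonstrating the pattern LinkedList<T> uses:
// ICollection<T>.Add is implemented explicitly, so it's only visible
// through an ICollection<T>-typed expression.
public class HiddenAddList<T> : ICollection<T>
{
    private readonly List<T> items = new List<T>();

    // Explicit implementation: no public Add on the class itself
    void ICollection<T>.Add(T item) { items.Add(item); }

    // Callers are steered towards this instead
    public void AddLast(T item) { items.Add(item); }

    public int Count { get { return items.Count; } }
    public bool IsReadOnly { get { return false; } }
    public void Clear() { items.Clear(); }
    public bool Contains(T item) { return items.Contains(item); }
    public void CopyTo(T[] array, int index) { items.CopyTo(array, index); }
    public bool Remove(T item) { return items.Remove(item); }
    public IEnumerator<T> GetEnumerator() { return items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class Demo
{
    static void Main()
    {
        var list = new HiddenAddList<int>();
        // list.Add(10);  // compile-time error: no accessible Add method
        ((ICollection<int>)list).Add(10);  // fine via the interface
        Console.WriteLine(list.Count);
    }
}
```

The explicitly implemented member simply doesn't appear on the class's own public surface, which is exactly why the compile-time type of the variable matters.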

Now consider the invalid code above, but with the brainteaser extension method in place. Now it's a perfectly valid call to the extension method, which happens to delegate straight to the ICollection<T> implementation. Great! But why bother? Surely we can just cast list if we really want to:

LinkedList<int> list = new LinkedList<int>();
((ICollection<int>)list).Add(10);

That's ugly (really ugly) - but it does work. But what about situations where you can't cast? They're pretty rare, but they do exist. Case in point: collection initializers. This is where the C# 6 connection comes in. As of C# 6 (at least so far...) collection initializers have changed so that an appropriate Add extension method is also permitted. So for example:

// Sometimes valid :)
LinkedList<int> list = new LinkedList<int> { 10, 20 };

That's invalid in C# 5 whatever you do, and it's only valid in C# 6 when you've got a suitable extension method in place, such as the one in yesterday's post. There's nothing to say the extension method has to be on ICollection<T>. While it might feel nice to be general, most implementations of ICollection<T> which use explicit interface implementation for ICollection<T>.Add do so for a very good reason. With the extension method in place, this is valid too...

// Valid from a language point of view...
ReadOnlyCollection<int> collection = new ReadOnlyCollection<int>(new[] { 10, 20 }) { 30, 40 };

That will compile, but it's obviously not going to succeed at execution time. (It throws NotSupportedException.)


I don't think I'd ever actually use the extension method I showed yesterday... but that's not quite the same thing as it being useless, particularly when coupled with C# 6's new-and-improved collection initializers. (The indexer functionality means you can use collection initializers with ConcurrentDictionary<,> even without extension methods, by the way.)

Explicit interface implementation is an interesting little corner of C# which is easy to forget about when you look at code - and which doesn't play nicely with dynamic typing, as I've mentioned before.

And finally...

Around the same time as I posted the brainteaser yesterday, I also remarked on how unfortunate it was that StringBuilder didn't implement IEnumerable<char>. It's not that I really want to iterate over a StringBuilder... but if it implemented IEnumerable, I could use it with a collection initializer, having added some extension methods. This would have been wonderfully evil...

using System;
using System.Text;

public static class Extensions
{
    public static void Add(this StringBuilder builder, string text)
    {
        builder.Append(text);
    }

    public static void Add(this StringBuilder builder, string format, params object[] arguments)
    {
        builder.AppendFormat(format, arguments);
    }
}

class Test
{
    static void Main()
    {
        // Sadly invalid :(
        var builder = new StringBuilder
        {
            "Just a plain message",
            { "A message with formatting, recorded at {0}", DateTime.Now }
        };
    }
}

Unfortunately it's not to be. But watch this space - I'm sure I'll find some nasty ways of abusing C# 6...

Posted by skeet | 5 comment(s)

Quick brainteaser

Just a really quick one today...

What's the point of this code? Does it have any point at all?

public static class Extensions
{
    public static void Add<T>(this ICollection<T> source, T item)
    {
        source.Add(item);
    }
}

Bonus marks if you can work out what made me think about it.

I suggest you ROT-13 answers to avoid spoilers for other readers.


C# 6: First reactions

It's been a scandalously long time since I've blogged about C#, and now that the first C# 6 preview bits are available, that feels like exactly the right thing to set the keys clacking again. Don't expect anything massively insightful from me just yet; I'd heard Mads and Dustin (individually) talk about some new features of C# 6 at conferences, but this is the first opportunity I've had to play with the bits. There are more features to come, and I suspect that the ones we've got aren't in quite the shape they'll be in the end.

First up, if you haven't been following Build, here are some of the resources to get you going:

Firstly, the fact that Roslyn is now open source (under the Apache 2 licence, no less) - this is incredible news. I gather it's already working under Mono (not sure whether that's in its entirety, or just the most important parts) which can only be a good thing. I don't know whether I'll have the time and energy to fork the code and try to implement any little features of my own, but it's definitely nice to know that I could. I'll definitely try to poke around the source code to learn good patterns for working with immutability in C#, if nothing else.

I'm not going to list the C# 6 features and go through them - read the docs that come with Roslyn for that. This is more in the way of commentary and early feedback.

Initializers for automatically implemented properties and primary constructors

I'll talk about these two together because they're so clearly related... which is part of my beef with them. Let's start out with the positive side: for very simple cases, these really will be wonderful. For times where you're really just trying to wrap up a bunch of values with no additional validation, it's really great. For example, in Noda Time I have a struct that currently looks like this:

internal struct Transition : IEquatable<Transition>
{
    private readonly Instant instant;
    private readonly Offset oldOffset;
    private readonly Offset newOffset;

    // Note: no property for oldOffset
    public Offset NewOffset { get { return newOffset; } }
    public Instant Instant { get { return instant; } }

    public Transition(Instant instant, Offset oldOffset, Offset newOffset)
    {
        this.instant = instant;
        this.oldOffset = oldOffset;
        this.newOffset = newOffset;
    }

    // Methods
}
    // Methods

(In case you're wondering: no, there's no reason for the constructor and properties to be public within an internal struct. I'll fix that shortly :)

In C# 6, that would be as simple as:

internal struct Transition(Instant instant, private Offset oldOffset, Offset newOffset) : IEquatable<Transition>
{
    public Instant Instant { get; } = instant;
    public Offset NewOffset { get; } = newOffset;

    // Methods
}

This example, entirely coincidentally, shows how you can generate both fields and properties from constructor parameters.

Yay for read-only automatically implemented properties (although at-declaration initialization isn't just restricted to read-only ones), yay for read-only fields without boilerplate constructor code, and all is generally goodness. Except... a lot of my types don't end up looking like that. There are three main issues - validation, accessibility, and structs. Validation is already mentioned in the release notes as a potential cause for concern, and that one actually can be ameliorated to some extent. Accessibility is the assumption - as far as I can see - that the primary constructor should have the same accessibility as the class itself. Struct initialization is a relatively rare issue, but one worth considering.

LocalDateTime in Noda Time highlights all three of these issues in one handy parcel. Here's some of the code:

public struct LocalDateTime // and interfaces
{
    private readonly CalendarSystem calendar;
    private readonly LocalInstant localInstant;

    internal LocalInstant LocalInstant { get { return localInstant; } }

    public CalendarSystem Calendar
    {
        get { return calendar ?? CalendarSystem.Iso; }
    }

    internal LocalDateTime(LocalInstant localInstant, CalendarSystem calendar)
    {
        Preconditions.CheckNotNull(calendar, "calendar");
        this.localInstant = localInstant;
        this.calendar = calendar;
    }

    // Other constructors etc
}

In C# 6, I could *sort of* achieve a similar result, like this:

public struct LocalDateTime(LocalInstant localInstant, CalendarSystem calendar) // and interfaces
{
    private CalendarSystem Calendar { get; } = Preconditions.CheckNotNull(calendar, "calendar");
    internal LocalInstant LocalInstant { get; } = localInstant;

    // Other constructors etc
}

I've worked around the validation, by putting it in the property initialization. That's not too bad, but it does potentially mean that your previously-centralized validation is scattered around the code a bit more - it lacks clarity.

We now have a public constructor instead of an internal one - which in this case wouldn't work for me at all, as LocalInstant is an internal type. In fact, LocalDateTime does have some public constructors, but quite often I create types with no public constructors at all, just private constructors and static factory methods. Unless I've missed some way of separating the accessibility of the constructor from the accessibility of the type, primary constructors simply won't be applicable for those types.

Finally, the struct part. Note how the Calendar property in the original code uses the null-coalescing operator. That's because values of structs can always be created without going through a constructor, making the validation less helpful than it would otherwise be. In Noda Time I've attempted to make the default values for all structs act as if they're valid values, even if they're not useful values. (ZonedDateTime uses a time zone of UTC in a similar fashion.) I suspect there's no equivalent for this using read-only automatically implemented properties - but the fix here would probably be to use an "auto read-only private field" and back that with a property - not too much of a hardship.
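To illustrate the struct issue in isolation, here's a minimal sketch (a hypothetical struct of my own, using the same null-coalescing trick as Calendar):

```csharp
using System;

struct Wrapped
{
    private readonly string name;

    public Wrapped(string name)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }
        this.name = name;
    }

    // Defend against default(Wrapped): no constructor runs for the default
    // value, so name is null there - substitute a sensible default instead,
    // just as Calendar falls back to the ISO calendar.
    public string Name { get { return name ?? "default"; } }
}

class Demo
{
    static void Main()
    {
        Wrapped w = default(Wrapped);  // bypasses every constructor
        Console.WriteLine(w.Name);     // "default", not a NullReferenceException
    }
}
```

The validation in the constructor is sound, but the default value sails straight past it - hence the guard in the property rather than relying on construction alone.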

I think what bothers me most about this pair of features is how tightly coupled they are. If you don't want to have a primary constructor (because you want to have more logic than it allows, or because you want to change the accessibility), but still want to initialize read-only properties, you're forced back to declaring a separate private backing field. I think it would actually be reasonable to treat read-only properties like read-only fields, and allow them to be initialized from normal constructors too. I doubt that I'll prevail, but I'll certainly make the point to the team. (Heck, this may be the thing I try to implement in a Roslyn fork, just for kicks. How hard can it be? ;)

Again, I want to reiterate that in simple cases - even including where there's simple validation, using helper methods - these features will be absolutely wonderful. I'd just like to be able to use them more often than it currently looks like I'd be able to.

Using static

Hooray! We'll be able to select extension methods from a single type instead of from a whole namespace, like I asked for in 2005. I'm not the only one who's been badgering the team on this for a while, so it's great to see that it's made it. I'm not 100% keen on the fact that it looks like any other using directive - I think it would help with clarity if it were "using static ..." instead, but I'm not going to beef about that. (EDIT: I see from the language design notes that this option was considered as recently as February 2014.)

Note that this will further simplify the validation in the example above - I can see myself adding a using directive for Preconditions very frequently. It will also mean that I might get round to adding a load of extension methods into NodaTime.Testing - I didn't want to introduce one namespace per kind of extension method, but this will be much cleaner. (I'll still use a separate namespace to shield any pre-C#-6 users from seeing a slew of them, mind you.)

Declaration expressions

The language guide has this to say near the end:

Declaration expressions take a little getting used to, and can probably be abused in new and interesting ways.

I'm going to reserve judgement here. I'm instinctively against them as I'm sure they'll encourage out parameter usage... but I suspect that's the point - that out parameters aren't too bad when they're easy to use. It still feels like a little bit of a workaround - the language is still really designed to return a single value, and out parameters help you out when you've got some good reason to want to return more than one value. The big question is whether returning more than one value is a good thing or not. If it's not - if it should only be done in extremis - then we shouldn't be making it easier. If it is a good thing, might there not be better ways of making it easier? Maybe not - aside from anything else, this is a relatively small change, compared with alternatives. It seems likely that a platform/language designed with multiple return values in mind would not use the same approach, but that ship has sailed.

The beneficial side effect is the cleaner way of using the "as" operator to both assign and test in one go, as per the example in the guide:

if ((string s = o as string) != null) { ... }

While this clearly lacks the joyous vulgarity of my alternative workaround:

for (string s = o as string; s != null; s = null) { ... }

... I have to grudgingly say I prefer it overall.

I'm not quite egocentric enough to think that Mads specifically had me in mind with the phrase "and can probably be abused in new and interesting ways" but I'll obviously do my best to find something.

Anything else?

The other features (exception filters, binary literals, separators in literals, indexed members and element initializers, await in catch and finally blocks, and extension Add methods) all seem fairly reasonable to me - or at least, I don't think I have anything interesting to say about them yet. I'm hopeful that exception filters could create some really interesting mayhem due to the timing of when the condition is evaluated, but I haven't got as far as abusing it in a genuinely glorious way yet.


I'm really, really looking forward to C# 6. Despite my reservations about primary constructors and read-only automatically implemented properties, they're still my favourite features - and ones which I hope can be made more widely useful before the final release.

The question I naturally ask myself is, "Where will this make Noda Time cleaner?" (Bear in mind that I'm still just a hobbyist in C# - Noda Time is as close to a "production" codebase as I have.) I can certainly see myself updating Noda Time to require C# 6 to build (but keeping the .NET 3.5 library requirement) fairly soon after the final release - there's plenty that will make life simpler... and more to come, I hope.

So many thanks, C# team - and I'll look forward to the next preview!


How many 32-bit types might we want?

I was recently directed to an article on "tiny types" - an approach to static typing which introduces distinct types for the sake of code clarity, rather than to add particular behaviour to each type. As I understand it, they're like type aliases with no conversions between the various types. (Unlike plain aliases, an object is genuinely an instance of the relevant tiny type - it doesn't have "alias erasure" as a language-based solution could easily do.)

I like the idea, and wish it were better supported in languages - but it led me to thinking more about the existing numeric types that we've got and how they're used. In particular, I was considering how in C# the "byte" type is relatively rarely used as a number, with a useful magnitude which has arithmetic performed on it. That does happen, but more often it's used either as part of other types (e.g. converting 4 bytes from a stream into an integer) or as a sequence of 8 bits.
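As a sketch of what a hand-rolled tiny type looks like in today's C# (illustrative names, not from any real codebase):

```csharp
using System;

// A "tiny type": a CustomerId is just an int underneath, but the compiler
// won't let you mix it up with an OrderId - there's no conversion between them.
public struct CustomerId : IEquatable<CustomerId>
{
    private readonly int value;
    public CustomerId(int value) { this.value = value; }

    public bool Equals(CustomerId other) { return value == other.value; }
    public override bool Equals(object obj) { return obj is CustomerId && Equals((CustomerId) obj); }
    public override int GetHashCode() { return value; }
}

public struct OrderId : IEquatable<OrderId>
{
    private readonly int value;
    public OrderId(int value) { this.value = value; }

    public bool Equals(OrderId other) { return value == other.value; }
    public override bool Equals(object obj) { return obj is OrderId && Equals((OrderId) obj); }
    public override int GetHashCode() { return value; }
}

class Demo
{
    static void Main()
    {
        var customer = new CustomerId(42);
        var order = new OrderId(42);
        // if (customer.Equals(order)) ...  // always false: different types
        Console.WriteLine(customer.Equals(new CustomerId(42)));
    }
}
```

The boilerplate per type is exactly why language-level support would be welcome: the wrapping struct adds no behaviour, only distinctness.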

It then struck me that the situations where we perform bitwise operations and the situations where we perform arithmetic operations are reasonably distinct. I wonder whether it would be worth having five separate types - which could be purely at the language level, if we wanted:

  • Float32 - regular IEEE-754 32-bit binary floating point type with arithmetic operations but no bitwise operations
  • Int32 - regular signed integer with arithmetic operations but no bitwise operations
  • UInt32 - unsigned integer with arithmetic operations but no bitwise operations
  • Bit32 - a bit sequence with bitwise operations but no arithmetic operations
  • Identity32 - a 32-bit value which only defines equality

The last type would be used for identities which happened to consist of 32 bits but where the actual bit values were only useful in terms of comparison with other identities. (In this case, an Identity64 might be more useful in practice.)

Explicit conversions which preserved the underlying bit pattern would be available, so you could easily generate a sequence of Identity32 values by having a simple Int32 counter, for example.

At the same time, I'd want to introduce bitwise operations for Bit8 and Bit16 values, rather than the "everything is promoted to 32 bits" that we currently have in Java and C#, reducing the number of pointless casts for code that performs bitwise operations.
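The current promotion behaviour - and the cast it forces - is easy to demonstrate in C#:

```csharp
using System;

class Demo
{
    static void Main()
    {
        byte flags = 0x35;
        byte mask = 0x0F;

        // byte result = flags & mask;     // compile-time error: both operands
        // are promoted to int, so the result of & is int, not byte.
        byte result = (byte) (flags & mask);  // the "pointless cast" in question

        Console.WriteLine(result);  // 5
    }
}
```

A Bit8 type with its own bitwise operators would make the cast unnecessary in code that only ever works with 8-bit values.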

The expected benefits would be the increased friction when using a type in an "unexpected" way for the value being represented - you'd need to explicitly cast to the right type - but maintaining a low-friction path when using a value in the expected way.

I haven't yet figured out what I'd want a Stream.Read(...) call to use. Probably Bit32, but I'm not sure yet.

Anyway, I figured I'd just throw it out there as a thought experiment... any reactions?


Diagnosing issues with reversible data transformations

I see a lot of problems which look somewhat different at first glance, but all have the same cause:

  • Text is losing "special characters" when I transfer it from one computer to another
  • Decryption ends up with garbage
  • Compressed data can't be decompressed
  • I can transfer text but not binary data

These are all cases of transforming and (usually) transferring data, and then performing the reverse transformation. Often there are multiple transformations involved, and they need to be carefully reversed in the appropriate order. For example:

  1. Convert text to binary using UTF-8
  2. Compress
  3. Encrypt
  4. Base64-encode
  5. Transfer (e.g. as text in XML)
  6. Base64-decode
  7. Decrypt
  8. Decompress
  9. Convert binary back to text using UTF-8

The actual details of each question can be different, but the way I'd diagnose them is the same in each case. That's what this post is about - partly so that I can just link to it when such questions arise. Although I've numbered the broad steps, it's one of those constant iteration situations - you may well need to tweak the logging before you can usefully reduce the problem, and so on.

1. Reduce the problem as far as possible

This is just my normal advice for almost any problem, but it's particularly relevant in this kind of question.

  • Start by assembling a complete program demonstrating nothing but the transformations. Using a single program which goes in both directions is simpler than producing two programs, one in each direction.
  • Remove pairs of transformations (e.g. encrypt/decrypt) at a time, until you've got the minimal set which demonstrates the problem
  • Avoid file IO if possible: hard-code short sample data which demonstrates the problem, and use in-memory streams (ByteArrayInputStream/ByteArrayOutputStream in Java; MemoryStream in .NET) for temporary results
  • If you're performing encryption, hard-code a dummy key or generate it as part of the program.
  • Remove irrelevant 3rd party dependencies if possible (it's simpler to reproduce an issue if I don't need to download other libraries first)
  • Include enough logging (just console output, usually) to make it obvious where the discrepancy lies

In my experience, this is often enough to help you fix the problem for yourself - but if you don't, you'll be in a much better position for others to help you.

2. Make sure you're really diagnosing the right data

It's quite easy to get confused when comparing expected and actual (or before and after) data... particularly strings:

  • Character encoding issues can sometimes be hidden by spurious control characters being introduced invisibly
  • Fonts that can't display all characters can make it hard to see the real data
  • Debuggers sometimes "helpfully" escape data for you
  • Variable-width fonts can make whitespace differences hard to spot

For diagnostic purposes, I find it useful to be able to log the raw UTF-16 code units which make up a string in both .NET and Java. For example, in .NET:

static void LogUtf16(string input)
{
    // Replace Console.WriteLine with your logging approach
    Console.WriteLine("Length: {0}", input.Length);
    foreach (char c in input)
    {
        Console.WriteLine("U+{0:x4}: {1}", (uint) c, c);
    }
}

Binary data has different issues, mostly in terms of displaying it in some form to start with. Our diagnosis tools are primarily textual, so you'll need to perform some kind of conversion to a string representation in order to see it at all. If you're really trying to diagnose this as binary data (so you're interested in the raw bytes) do not treat it as encoded text using UTF-8 or something similar. Hex is probably the simplest representation that allows differences to be pinpointed pretty simply. Again, logging the length of the data in question is a good first step.
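A minimal sketch of such logging, leaning on the framework's built-in binary-to-hex conversion rather than hand-rolled code:

```csharp
using System;

class HexLog
{
    // Log binary data as its length plus a hex dump - the minimal
    // diagnostics suggested above. BitConverter.ToString is a trusted,
    // platform-provided binary-to-hex conversion.
    static void LogBytes(byte[] data)
    {
        Console.WriteLine("Length: {0}", data.Length);
        Console.WriteLine(BitConverter.ToString(data));
    }

    static void Main()
    {
        LogBytes(new byte[] { 0xCA, 0xFE, 0x00, 0x1F });
        // Length: 4
        // CA-FE-00-1F
    }
}
```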

In both cases you might want to include a hash in your diagnostics. It doesn't need to be a cryptographically secure hash in any way, shape or form. Any form of hash that is likely to change if the data changes is a good start. Just make sure you can trust your hashing code! (Every piece of code you write should be considered suspect - including whatever you decide to use for converting binary to hex, for example. Use trusted third parties and APIs provided with your target platform where possible. Even though adding an extra dependency for diagnostics makes it slightly harder for others to reproduce the problem, it's better than the diagnostics themselves being suspect.)

3. Analyze a clean, full, end-to-end log

This can be tricky when you've got multiple systems and platforms (which is why if you can possibly reproduce it in a single program it makes life simpler) but it's really important to look at one log for a complete run.

Make sure you're talking about the same data end-to-end. If you're analyzing live traffic (which should be rare; unless the problem is very intermittent, this should all be done in test environments or a developer machine) or you have a shared test environment you need to be careful that you don't use part of the data from one test and part of the data from another test. I know this sounds trivial, but it's a really easy mistake to make. In particular, don't assume that the data you'll get from one part of the process will be the same run-to-run. In many cases it should be, but if the overall system isn't working, then you already know that one of your expectations is invalid.

Compare "supposed to be equal" parts of the data. As per the steps in the introduction, there should be pairs of equal data, moving from the "top and bottom" of the transformation chain towards the middle. Initially, you shouldn't care about whether you view the transformation as being correct - you're only worried about whether the output is equal to the input. If you've managed to preserve all the data, the function of the transformation (encryption, compression etc) becomes relevant - but if you're losing data, anything else is secondary. This is where the hash from the bottom of step 2 is relevant: you want to be able to determine whether the data is probably right as quickly as possible. Between "length" and "hash", you should have at least some confidence, which will let you get to the most likely problem as quickly as possible.

4. Profit! (Conclusion...)

Once you've compared the results at each step, you should get an idea of which transformations are working and which aren't. This may allow you to reduce the problem further, until you've just got a single transformation to diagnose. At that point, the problem becomes about encryption, or about compression, or about text encoding.

Depending on the situation you're in, at this point you may be able to try multiple implementations or potentially multiple platforms to work out what's wrong: for example, if you're producing a zip file and then trying to decompress it, you might want to try using a regular decompression program to open your intermediate results, or decompress the results of compressing with a standard compression tool. Or if you're trying to encrypt on Java and decrypt in C#, implement the other parts in each platform, so you can at least try to get a working "in-platform" solution - that may well be enough to find out which half has the problem.

To some extent all this blog post is about is reducing the problem as far as possible, with some ideas of how to do that. I haven't tried to warn you much about the problems you can run into in any particular domain, but you should definitely read Marc Gravell's excellent post on "How many ways can you mess up IO?" and my earlier post on understanding the meaning of your data is pretty relevant too.

As this is a "hints and tips" sort of post, I'll happily modify it to include reader contributions from comments. With any luck it'll be a useful resource for multiple Stack Overflow questions in the months and years to come...


A tale of two puzzles

As I begin to write this, I'm in a small cubicle in Philadelphia airport, on my way back from CodeMash - a wonderful conference (yet again) which I feel privileged to have attended. Personal top highlights definitely include Dustin Campbell's talk on C# 6 (I'm practically dribbling with anticipation - bits please!) and playing Settlers of Catan on an enormous board. Kevin Pilch-Bisson's talk on scriptcs was fabulous too, not least because he demonstrated its NuGet support using Noda Time. I'm very likely to use scriptcs as a tool to ease folks into the language gently if I ever get round to writing my introductory C# book. (To follow my own advice from the talk I gave, I should really just start writing it - whether the initial attempt is good or not, it's better than having nothing.)

First puzzle

During Dustin's talk, he set some puzzles around refactoring - where an "inline" operation had to be smarter than it originally appeared, for various reasons. The final puzzle particularly piqued my interest, as Dustin explained that various members of the C# team hadn't actually worked out why the code behaved the way it did... the refactoring in Roslyn worked in that it didn't change the behaviour, but more through a good general strategy than any attempt to handle this specific case.

So, the code in question is:

using System;

class X
{
    static int M(Func<int?, byte> x, object y) { return 1; }
    static int M(Func<X, byte> x, string y) { return 2; }

    const int Value = 1000;

    static void Main()
    {
        var a = M(X => (byte) X.Value, null);

        unchecked
        {
            Console.WriteLine(M(X => (byte) X.Value, null));
        }
    }
}

This produces output of:

2
Is this correct? Is it a compiler bug from the old native compiler, faithfully reproduced in Roslyn? Why would moving the expression into an "unchecked" block cause different behaviour?

In an attempt to give you enough space and time to avoid accidentally reading the answer, which is explained below, I'd like to take the time to mention the source of this puzzle. A few years ago, Microsoft hired Vladimir Reshetnikov into the testing part of the C#/VB team. Vladimir had previously worked at JetBrains on ReSharper, so it's not like he was new to C#. Every year when I see Kevin and Dustin at CodeMash, they give me more reports of the crazy things Vladimir has come up with - perversions of the language way beyond my usual fare. The stories are always highly entertaining, and I hope to meet Vladimir in person one day. He has a twitter account which I've only just started following, but which I suspect I'll find immensely distracting.


I massively overthought this for a while. I then significantly underthought it for a while. I played around with the code by:

  • Removing the second parameter from M
  • Changing the name of the parameter
  • Changing the parameter types in various ways
  • Making X.Value non-const (just static readonly)
  • Changing the value of X.Value to 100
  • Changing the lambda expression to make a call to a method using (byte) X.Value instead of returning it directly
  • Using a statement lambda

These gave me clues which I then failed to follow up on properly - partly because I was looking for something complicated. I was considering the categorization of a "cast of a constant value to a different numeric type" and similar details - instead of following through the compiler rules in a systematic way.

The expression we need to focus on is this:

M(X => (byte) X.Value, null)

This is just a method call using a lambda expression, using overload resolution to determine which overload to call and type inference to determine the type arguments to the method. In a very crude description of overload resolution, we perform the following steps:

  • Determine which overloads are applicable (i.e. which ones make sense in terms of the supplied arguments and the corresponding parameters)
  • Compare the applicable overloads to each other in terms of "betterness"
  • If one applicable overload is "better" than all the others, use that (modulo a bit more final checking)
  • If there are no applicable overloads, or no single applicable overload is better than all the others, then the call is invalid and leads to a compile-time error

My mistake was jumping straight to the second bullet point, assuming that both overloads are valid in each case.

Overload 1 - a simple parameter

First let's look at the first overload: the one where the first parameter is of type Func<int?, byte>. What does the lambda expression of X => (byte) X.Value mean when converted to that delegate type? Is it always valid?

The tricky part is working out what the simple-name X refers to as part of X.Value within the lambda expression. The important part of the spec here is the start of section 7.6.2 (simple names):

A simple-name is either of the form I or of the form I<A1, ..., AK>, where I is a single identifier and <A1, ..., AK> is an optional type-argument-list. When no type-argument-list is specified, consider K to be zero. The simple-name is evaluated and classified as follows:

  • If K is zero and the simple-name appears within a block and if the block’s (or an enclosing block’s) local variable declaration space (§3.3) contains a local variable, parameter or constant with name I, then the simple-name refers to that local variable, parameter or constant and is classified as a variable or value.

(The spec continues with other cases.) So X refers to the lambda expression parameter, which is of type int? - so X.Value refers to the underlying value of the parameter, in the normal way.

Overload 2 - a bit more complexity

What about the second overload? Here the first parameter to M is of type Func<X, byte>, so we're trying to convert the same lambda expression to that type. The same part of section 7.6.2 is used, but the spec's rules for member access also come into play when determining the meaning of the member access expression X.Value:

In a member access of the form E.I, if E is a single identifier, and if the meaning of E as a simple-name (§7.6.2) is a constant, field, property, local variable, or parameter with the same type as the meaning of E as a type-name (§3.8), then both possible meanings of E are permitted. The two possible meanings of E.I are never ambiguous, since I must necessarily be a member of the type E in both cases. In other words, the rule simply permits access to the static members and nested types of E where a compile-time error would otherwise have occurred.

It's not spelled out explicitly here, but the reason that it can't be ambiguous is that you can't have two members of the same name within the same type (other than for method overloads). You can have method overloads that are both static and instance methods, but then normal method overloading rules are enforced.

So in this case, X.Value doesn't involve the use of the parameter called X at all. Instead, it's the constant called Value within the class X. Note that this is only the case because the type of the parameter is the same as the type that the parameter's name refers to. (It isn't quite the same as the names being the same. If you have a using directive introducing an alias, such as using Y = X; then the condition can be satisfied with different names.)
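To illustrate that last point, here's a small sketch of the alias case - the declarations (a class X with a constant Value = 1000) are assumed for the sake of the example:

```csharp
using System;
using Y = X; // alias: Y is now another name for the type X

public class X
{
    public const int Value = 1000;
}

public class AliasDemo
{
    public static byte Run()
    {
        unchecked
        {
            // The parameter is named Y, and its type (via the alias) is X -
            // which is also what Y means as a type-name. So the dual-meaning
            // rule applies even though the parameter name and the type name
            // differ: Y.Value is the constant 1000, not a member of the
            // parameter.
            Func<Y, byte> f = Y => (byte) Y.Value;
            return f(null); // 232 == unchecked((byte) 1000)
        }
    }
}
```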

So what difference does unchecked make?

Now that we know what the conversions from the lambda expression to the two different delegate types mean, we can consider whether they're always valid. Taking the overload resolution out of the picture, when is each of these lines valid at compile-time?

Func<int?, byte> foo = X => (byte) X.Value;
Func<X, byte> bar = X => (byte) X.Value;

Note that we're not trying to consider whether they might throw an exception - clearly if you call foo(null) then that will fail. But will it at least compile?

The first line is always valid... but the second depends on your context, in terms of checked or unchecked arithmetic. For the most part, if you're not explicitly in a checked or unchecked context, the language assumes a default context of unchecked, but allows "external factors" (such as a compiler flag). The Microsoft compiler is unchecked by default, and you can make an assembly use checked arithmetic "globally" using the /checked compiler argument (or by going to the Advanced settings in the build properties of a Visual Studio project).

However, one exception to this "default" rule is constant expressions. From section 7.6.12 of the spec:

For constant expressions (expressions that can be fully evaluated at compile-time), the default overflow checking context is always checked. Unless a constant expression is explicitly placed in an unchecked context, overflows that occur during the compile-time evaluation of the expression always cause compile-time errors.

The cast of X.Value to byte is still a constant expression - and it's one that overflows at compile-time, because 1000 is outside the range of byte. So unless we're explicitly in an unchecked context, the code snippet above will fail for the second line.
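A minimal sketch of the distinction, assuming the default (unchecked) compilation context for the non-constant case:

```csharp
public class CheckedDemo
{
    // A constant expression explicitly placed in an unchecked context:
    // always compiles, always 232.
    public static byte ConstantUnchecked()
    {
        return unchecked((byte) 1000);
    }

    // Not a constant expression, so the "constants are checked by default"
    // rule doesn't apply: this compiles, and (under the default unchecked
    // compilation context) truncates to 232 at execution time.
    public static byte RuntimeCast()
    {
        int i = 1000;
        return (byte) i;
    }

    // A constant expression in the default context is always checked,
    // so this declaration wouldn't compile:
    // public const byte Error = (byte) 1000;   // error CS0221
}
```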

Back to overloading

Given all of that, how does it fit in with our problem? Well, at this point it's reasonably simple - so long as we're careful. Look at the first assignment, which occurs outside the unchecked statement:

var a = M(X => (byte) X.Value, null);

Which overloads are applicable here? The second argument isn't a problem - the null literal is convertible to both object and string. So can we convert the first argument (the lambda expression) to the first parameter of each overload?

As noted earlier, it's fine to convert it to a Func<int?, byte>. So the first method is definitely applicable. However, the second method requires us to convert the lambda expression to Func<X, byte>. We're not in an explicitly unchecked context, so the conversion doesn't work. Where the rule quoted above talks about "overflows that occur during the compile-time evaluation of the expression always cause compile-time errors" it doesn't mean we actually see an error message when the compiler speculatively tries to convert the lambda expression to the Func<X, byte> - it just means that the conversion isn't valid.

So, there's only one applicable method, and that gets picked and executed. As a result, the value of a is 1.

Now in the second call (inside the unchecked statement) both methods are applicable: the conversion from the lambda expression to the Func<X, byte> is valid because the constant conversion occurs within an explicitly unchecked context. We end up with two applicable methods, and need to see if one of them is definitively better than the other. This comes down to the "better function member" rules in the C# 5 spec, which in turn end up talking about better conversions from arguments to parameter types. The lambda expression conversions are effectively equal, but the conversion of null to string is "better than" the conversion to object, because there's an implicit conversion from string to object but not vice versa. (In that sense the string conversion is "more specific", to borrow Java terminology.) So the second overload is picked, and we print 2 to the console.
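The "more specific" rule for the second argument can be demonstrated in isolation - a small sketch with hypothetical overloads N:

```csharp
public class Betterness
{
    public static string N(object o) { return "object"; }
    public static string N(string s) { return "string"; }

    public static string Run()
    {
        // Both overloads are applicable for a null argument, but the
        // conversion of null to string is "better" than the conversion
        // to object, so the string overload wins.
        return N(null);
    }
}
```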



This all comes down to the cast from a constant to byte being valid in an explicitly unchecked context, but not otherwise.

One odd part is that I spotted that reasonably quickly, and assumed the problem was actually fairly straightforward. It's only when I started writing it up for this blog post that I dug into the spec to check the exact rules for the meaning of simple names in different situations. I'd solved the crux of the puzzle, but the exact details were somewhat involved. Indeed, while writing this blog post I've started two separate emails to the C# team reporting potential (equal and opposite) bugs - before discarding those emails as I got further into the specification.

This is reasonably common, in my experience: the C# language has been well designed so that we can intuitively understand the meaning of code without knowing the details of the exact rules. Those rules can be pretty labyrinthine sometimes, because the language is very precisely specified - but most of the time you don't need to read through the rules in detail.

Didn't you mention two puzzles?

Yes, yes I did.

The second puzzle is much simpler to state - and much harder to solve. To quote Vladimir's tweet verbatim:

C# quiz: write a valid C# program containing a sequence of three tokens `? null :` that remains valid after we remove the `null`.

Constructing a valid program containing "? null :" is trivial.

Constructing a valid program containing "? :" is slightly trickier, but not really hard.

Constructing a program which is valid both with and without the null token is too hard for me, at the moment. I'm stuck - and I know I'm in good company, as Kevin, Dustin and Bill Wagner were all stuck too, last time I heard. I know of two people who've solved this so far... but I'm determined to get it...

Posted by skeet | 21 comment(s)
Filed under:

Career and skills advice

It feels a little odd even to write this post, but I receive quite a few emails asking me for advice on how to get better at programming, how to get through interviews, whether it's better to be a generalist or a specialist etc. I want to make it very clear right from the start: I am not a career guidance expert. I have very little evidence that this is good advice, and it may well not be good advice for you even if it's good advice in general. Oh, and don't expect anything shockingly insightful or original, either. You may well have heard much or all of this advice before.

So, with those caveats out of the way, my random thoughts...

Communication, communication, communication

Software is all about communication, whether you're writing a design document, answering questions on Stack Overflow, giving a talk at a user conference, being grilled at an interview, or simply writing code. In fact, writing code is about communicating with two different audiences at the same time: the computer (compiler, interpreter, whatever) and whoever is going to read and maintain the code in the future.

In fact, I see improved communication as one of the primary benefits of Stack Overflow. Every time you ask or answer a question, that's an opportunity to practise communicating with other developers. Is your post as clear as it can be? Does it give just the right amount of information - nothing extraneous, but everything that's relevant - in a coherent way? Does it strike the right tone with the reader, suppressing any frustration you may currently be feeling either at the problem you're facing or the perceived flaws in someone you're replying to?

Stack Overflow is just one way you can improve your communication skills, of course. There are many others:

  • Write a blog, even if you think no-one will read it. You may be surprised. If you write about something you find interesting - and put time into writing about it well - it's very likely that someone else will find it interesting too. If you enjoy this enough, consider writing a book - there are plenty of options for publication these days, from traditional publishers to online-only approaches.
  • Find opportunities to give presentations, whether that's at user groups, conferences, or at work. I do realise that not everyone likes presenting to large groups of people, even if I find that frame of mind hard to empathise with. (Who wouldn't want to be the centre of attention? Yes, I really am that shallow.)
  • Take pride in your communication within code. Spend a little bit longer on documentation and comments than you might do normally, and really try to think about what someone reading it might be trying to discover. (Also think about what they ought to know about even if they weren't aware that they needed to know it.)
  • Read a lot, and reflect on it. While we're used to commenting on the quality of fiction and perhaps blog posts, we don't tend to consciously consider the quality of other written work. Next time you're reading the documentation of some library you're using, take a moment to ask yourself how effective it is and what contributes to the quality (or lack thereof).

You may well find that improving your communication also improves your thought processes. Clarity of ideas and clarity of expression often go hand in hand.

Coding and reflecting

Write code and then look back on it - ideally after significantly different periods of time. Code that appears "cool" today can feel immature or self-serving six months later.

Of course there are different kinds of code written under different constraints. You may find that if you're already a professional software engineer, you aren't given enough time for critical evaluation... but you may have the benefit of code reviews from your peers. (Code reviews can often be regarded as a chore, but if you approach them in the right frame of mind, you can learn a lot.) If you're contributing to an open source project - or perhaps creating one from scratch - then you may find you have more opportunity to try multiple approaches, and look back at the code you've written in the past.

Reading other people's code can be helpful too, though I find this is one of those activities which is more talked about than actually done.

Specialist or generalist?

Eric Lippert has written about how he was given advice to pick a subject and become a world expert on it, and I think there's a lot of sense in that. On the other hand, you don't want to be so constrained to a niche area of knowledge that you can't move in the wider world. There's a balance to be struck here: I'm a "generalist" in terms of having relatively few skills in specific business domains, but I'm a "specialist" in terms of having spent significant time studying C# and Java from a language perspective. Similarly I'm very restricted in terms of which aspects of software I'm actually competent at: I'd like to think I'm reasonable at library or server-side code, but my experience of UI development (whether that's web or native client) is severely limited. I'm also familiar with fewer languages than I'd like to be.

You'll need to work out what's right for you here - but it's worth doing so consciously rather than simply drifting. Of course, making deliberate decisions isn't the same thing as executing them: a few months ago I decided it would be fun to explicitly concentrate on application development for a while (WPF, Windows Store, Android and iOS via Xamarin) but so far there have been precious few results.

Sometimes people ask me which technologies are good in terms of employment. That can be a very local question, but my experience is that it doesn't really matter too much. If you find something you're passionate about, then whatever you learn from that will often be useful in a more mundane but profitable environment. Technology changes fast enough that you're almost certainly going to have to learn several languages and platforms over your career, so you might as well make at least some of those choices fun ones.


Interviews

A lot of advice has been written about interviews by people with far more experience than myself. I haven't actually been an interviewee very often, although I've now conducted quite a few interviews. A few general bits of advice though:

  • Don't exaggerate too much on your CV. If you claim you're an expert in a language and then forget how to write a method declaration, that leaves a bad impression.
  • Find out what the company is looking for beforehand, and what the interview is likely to consist of. Are you likely to be asked to code on a whiteboard or in a text editor? If so, it wouldn't be a bad idea to practise that - it feels quite different to writing code in an IDE. If you're likely to be asked questions about algorithmic complexity but university feels like a long time ago, brush up on that a bit.
  • Be interesting! This often comes down to being passionate about something - anything, really. If the only motivation you can give is "I'm bored with my current job" without saying what you'd find interesting, that won't go down well.
  • Listen carefully to what the interviewer asks you. If they specifically say to ignore one aspect of coding (validation, performance, whatever it might be) then don't justify your answer using one of those aspects. By all means mention it, but be driven by the guidance of what the interviewer has specified as important. (The balance may well shift through the interview, as the interviewer tries to evaluate you on multiple criteria.)
  • Communicate as clearly as you can, even while you're thinking. I love it when a candidate explains their thinking as they're working through a difficult problem. I can see where their mind leads them, how intuitive or methodical they are, where they may have weaknesses - and I can guide them with hints. It can be worth taking a few seconds to work out the best way of vocalising your thoughts though. Oh, and if you do write on a whiteboard, try to make it legible.


So that's about it - Jon's entirely subjective guide to a fun and profitable software engineering career. If you choose to act on some of this advice, I sincerely hope it turns out well for you, but I make no guarantees whatsoever. Best of luck either way!


Casting vs "as" - embracing exceptions

(I've ended up commenting on this issue on Stack Overflow quite a few times, so I figured it would be worth writing a blog post to refer to in the future.)

There are lots of ways of converting values from one type to another – either changing the compile-time type but actually keeping the value the same, or actually changing the value (for example converting int to double). This post will not go into all of those - it would be enormous - just two of them, in one specific situation.

The situation we're interested in here is where you have an expression (typically a variable) of one reference type, and you want an expression with a different compile-time type, without changing the actual value. You just want a different "view" on a reference. The two options we'll look at are casting and using the "as" operator. So for example:

object x = "foo";
string cast = (string) x; 
string asOperator = x as string;

The major differences between these are pretty well-understood:

  • Casting is also used for other conversions (e.g. between value types); "as" is only valid for reference type expressions (although the target type can be a nullable value type)
  • Casting can invoke user-defined conversions (if they're applicable at compile-time); "as" only ever performs a reference conversion
  • If the actual value of the expression is a non-null reference to an incompatible type, casting will throw an InvalidCastException whereas the "as" operator will result in a null value instead
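That last difference is easy to demonstrate in isolation - a small sketch (the names here are mine, chosen for the example):

```csharp
using System;

public class CastVsAs
{
    public static string Run()
    {
        object o = new object(); // a non-null reference that isn't a string

        string viaAs = o as string;   // no exception: the result is simply null
        string result = viaAs == null ? "as gave null" : viaAs;

        try
        {
            string viaCast = (string) o; // throws InvalidCastException
            return viaCast;
        }
        catch (InvalidCastException)
        {
            return result + "; cast threw";
        }
    }
}
```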

The last point is the one I'm interested in for this post, because it's a highly visible symptom of many developers' allergic reaction to exceptions. Let's look at a few examples.

Use case 1: Unconditional dereferencing

First, let's suppose we have a number of buttons all with the same event handler attached to them. The event handler should just change the text of the button to "Clicked". There are two simple approaches to this:

void HandleUsingCast(object sender, EventArgs e)
{
    Button button = (Button) sender;
    button.Text = "Clicked";
}

void HandleUsingAs(object sender, EventArgs e)
{
    Button button = sender as Button;
    button.Text = "Clicked";
}

(Obviously these aren't the method names I'd use in real code - but they're handy for differentiating between the two approaches within this post.)

In both cases, when the value of "sender" genuinely refers to a Button instance, the code will function correctly. Likewise when the value of "sender" is null, both will fail with a NullReferenceException on the second line. However, when the value of "sender" is a reference to an instance which isn't compatible with Button, the two behave differently:

  • HandleUsingCast will fail on the first line, throwing an InvalidCastException which includes information about the actual type of the object
  • HandleUsingAs will fail on the second line with a NullReferenceException

Which of these is the more useful behaviour? It seems pretty unambiguous to me that the HandleUsingCast option provides significantly more information, but still I see the code from HandleUsingAs in examples on Stack Overflow... sometimes with the rationale of "I prefer to use as instead of a cast to avoid exceptions." There's going to be an exception anyway, so there's really no good reason to use "as" here.

Use case 2: Silently ignoring invalid input

Sometimes a slight change is proposed to the above code, checking for the result of the "as" operator not being null:

void HandleUsingAs(object sender, EventArgs e)
{
    Button button = sender as Button;
    if (button != null)
    {
        button.Text = "Clicked";
    }
}

Now we really don't have an exception. We can do this with the cast as well, using the "is" operator:

void HandleUsingCast(object sender, EventArgs e)
{
    if (sender is Button)
    {
        Button button = (Button) sender;
        button.Text = "Clicked";
    }
}

These two methods now behave the same way, but here I genuinely prefer the "as" approach. Aside from anything else, it's only performing a single execution-time type check, rather than checking once with "is" and then once again with the cast. There are potential performance implications here, but in most cases they'd be insignificant - it's the principle of the thing that really bothers me. Indeed, this is basically the situation that the "as" operator was designed for. But is it an appropriate design to start with?

In this particular case, it's very unlikely that we'll have a non-Button sender unless there's been a coding error somewhere. For example, it's unlikely that bad user input or network resource issues would lead to entering this method with a sender of (say) a TextBox. So do you really want to silently ignore the problem? My personal view is that the response to a detected coding error should almost always be an exception which either goes completely uncaught or which is caught at some "top level", abandoning the current operation as cleanly as possible. (For example, in a client application it may well be appropriate to terminate the app; in a web application we wouldn't want to terminate the server process, but we'd want to abort the processing of the problematic request.) Fundamentally, the world is not in a state which we've really considered: continuing regardless could easily make things worse, potentially losing or corrupting data.

If you are going to ignore a requested operation involving an unexpected type, at least clearly log it - and then ensure that any such logs are monitored appropriately:

void HandleUsingAs(object sender, EventArgs e)
{
    Button button = sender as Button;
    if (button != null)
    {
        button.Text = "Clicked";
    }
    else
    {
        // Log an error, potentially differentiating between
        // a null sender and input of a non-Button sender.
    }
}

Use case 3: consciously handling input of an unhelpful type

Despite the general thrust of the previous use case, there certainly are cases where it makes perfect sense to use "as" to handle input of a type other than the one you're really hoping for. The simplest example of this is probably equality testing:

public sealed class Foo : IEquatable<Foo>
{
    // Fields, of course

    public override bool Equals(object obj)
    {
        return Equals(obj as Foo);
    }

    public bool Equals(Foo other)
    {
        // Note: use ReferenceEquals if you overload the == operator
        if (other == null)
        {
            return false;
        }
        // Now compare the fields of this and other for equality appropriately.
        return true; // (placeholder for the real field comparisons)
    }

    // GetHashCode etc
}

(I've deliberately sealed Foo to avoid having to worry about equality between subclasses.)

Here we know that we really do want to deal with both null and "non-null, non-Foo" references in the same way from Equals(object) - we want to return false. The simplest way of handling that is to delegate to the Equals(Foo) method which needs to handle nullity but doesn't need to worry about non-Foo references.

We're knowingly anticipating the possibility of Equals(object) being called with a non-Foo reference. The documentation for the method explicitly states what we're meant to do; this does not necessarily indicate a programming error. We could implement Equals with a cast, of course:

public override bool Equals(object obj)
{
    return obj is Foo && Equals((Foo) obj);
}

... but I dislike that for the same reasons as I disliked the cast in use case 2.

Use case 4: deferring or delegating the decision

This is the case where we pass the converted value on to another method or constructor, which is likely to store the value for later use. For example:

public Person CreatePersonWithCast(object firstName, object lastName)
{
    return new Person((string) firstName, (string) lastName);
}

public Person CreatePersonWithAs(object firstName, object lastName)
{
    return new Person(firstName as string, lastName as string);
}

In some ways use case 3 was a special case of this, where we knew what the Equals(Foo) method would do with a null reference. In general, however, there can be a significant delay between the conversion and some definite impact. It may be valid to use null for one or both arguments to the Person constructor - but is that really what you want to achieve? Is some later piece of code going to assume they're non-null?

If the constructor is going to validate its parameters and check they're non-null, we're essentially back to use case 1, just with ArgumentNullException replacing NullReferenceException: again, it's cleaner to use the cast and end up with InvalidCastException before we have the chance for anything else to go wrong.

In the worst scenario, it's really not expected that the caller will pass null arguments to the Person constructor, but due to sloppy validation the Person is constructed with no errors. The code may then proceed to do any number of things (some of them irreversible) before the problem is spotted. In this case, there may be lasting data loss or corruption and if an exception is eventually thrown, it may be very hard to trace the problem to the original CreatePersonWithAs parameter value not being a string reference.

Use case 5: taking advantage of "optional" functionality

This was suggested by Stephen Cleary in comments, and is an interesting reversal of use case 3. The idea is basically that if you know an input implements a particular interface, you can take a different - usually optimized - route to implement your desired behaviour. LINQ to Objects does this a lot, taking advantage of the fact that while IEnumerable<T> itself doesn't provide much functionality, many collections implement other interfaces such as ICollection<T>. So the implementation of Count() might include something like this:

ICollection<T> collection = source as ICollection<T>;
if (collection != null)
    return collection.Count;
// Otherwise do it the long way (GetEnumerator / MoveNext)

Again, I'm fine with using "as" here.
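For completeness, here's what a self-contained version of that sketch might look like - this is just the idea, not the real LINQ to Objects source:

```csharp
using System.Collections.Generic;

public static class Sketch
{
    // Use the O(1) Count property when the sequence happens to implement
    // ICollection<T>; otherwise fall back to iterating the whole sequence.
    public static int CountFast<T>(IEnumerable<T> source)
    {
        ICollection<T> collection = source as ICollection<T>;
        if (collection != null)
        {
            return collection.Count;
        }
        int count = 0;
        using (IEnumerator<T> iterator = source.GetEnumerator())
        {
            while (iterator.MoveNext())
            {
                count++;
            }
        }
        return count;
    }
}
```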


I have nothing against the "as" operator, when used carefully. What I dislike is the assumption that it's "safer" than a cast, simply because in error cases it doesn't throw an exception. That's more dangerous behaviour, as it allows problems to propagate. In short: whenever you have a reference conversion, consider the possibility that it might fail. What sort of situation might cause that to occur, and how do you want to proceed?

  • If everything about the system design reassures you that it really can't fail, then use a cast: if it turns out that your understanding of the system is wrong, then throwing an exception is usually preferable to proceeding in a context you didn't anticipate. Bear in mind that a null reference can be successfully cast to any nullable type, so a cast can never replace a null check.
  • If it's expected that you really might receive a reference of the "wrong" type, then think how you want to handle it. Use the "as" operator and then test whether the result was null. Bear in mind that a null result doesn't always mean the original value was a reference of a different type - it could have been a null reference to start with. Consider whether or not you need to differentiate those two situations.
  • If you really can't be bothered to think things through (and I hope none of my readers are this lazy), default to using a cast: at least you'll notice if something's wrong, and have a useful stack trace.

As a side note, writing this post has made me consider (yet again) the various types of "exception" situations we run into. At some point I may put enough thought into how we could express our intentions with regards to these situations more clearly - until then I'd recommend reading Eric Lippert's taxonomy of exceptions, which has certainly influenced my thinking.


Array covariance: not just ugly, but slow too

It seems to be quite a long time since I've written a genuine "code" blog post. Time to fix that.

This material may well be covered elsewhere – it's certainly not terrifically original, and I've been meaning to post about it for a long time. In particular, I remember mentioning it at CodeMash in 2012. Anyway, the time has now come.

Refresher on array covariance

Just as a bit of background before we delve into the performance aspect, let me remind you what array covariance is, and when it applies. The basic idea is that C# allows a reference conversion from type TDerived[] to type TBase[], so long as:

  • TDerived and TBase are both reference types (potentially interfaces)
  • There's a reference conversion from TDerived to TBase (so either TDerived is the same as TBase, or a subclass, or an implementing class etc)

Just to remind you about reference conversions, those are conversions from one reference type to another, where the result (on success) is never a reference to a different object. To quote section 6.1.6 of the C# 5 spec:

Reference conversions, implicit or explicit, never change the referential identity of the object being converted. In other words, while a reference conversion may change the type of the reference, it never changes the type or value of the object being referred to.

So as a simple example, there's a reference conversion from string to object, therefore there's a reference conversion from string[] to object[]:

string[] strings = new string[10];
object[] objects = strings;

// strings and objects now refer to the same object

There is no reference conversion between value type arrays, so you can't use the same code to convert from int[] to object[].

The nasty part is that every store operation into a reference type array now has to be checked at execution time for type safety. So to extend our sample code very slightly:

string[] strings = new string[10]; 
object[] objects = strings;

objects[0] = "string"; // This is fine
objects[0] = new Button(); // This will fail

The last line here will fail with an ArrayTypeMismatchException, to avoid storing a Button reference in a String[] object. When I said that every store operation has to be checked, that's a slight exaggeration: in theory, if the compile-time type is an array with an element type which is a sealed class, the check can be avoided as it can't fail.
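The same failure can be demonstrated without a UI dependency - a sketch using a plain object in place of the Button:

```csharp
using System;

public class CovarianceDemo
{
    public static string Run()
    {
        string[] strings = new string[10];
        object[] objects = strings;    // legal: array covariance

        objects[0] = "string";         // fine: a string into a string[]
        try
        {
            objects[0] = new object(); // checked at execution time...
            return "stored";
        }
        catch (ArrayTypeMismatchException)
        {
            return "rejected";         // ...and rejected
        }
    }
}
```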

Avoiding array covariance

I would rather arrays weren't covariant in the first place, but there's not a lot that can be done about that now. However, we can work around this, if we really need to. We know that value type arrays are not covariant... so how about we use a value type array instead, even if we want to store reference types?

All we need is a value type which can store the reference type we need – which is dead easy with a wrapper type:

public struct Wrapper<T> where T : class
{
    private readonly T value;

    public T Value { get { return value; } }

    public Wrapper(T value)
    {
        this.value = value;
    }

    public static implicit operator Wrapper<T>(T value)
    {
        return new Wrapper<T>(value);
    }
}

Now if we have a Wrapper<string>[], we can't assign that to a Wrapper<object>[] variable – the types are incompatible. If that feels a bit clunky, we can put the array into its own type:

public sealed class InvariantArray<T> where T : class
{
    private readonly Wrapper<T>[] array;

    public InvariantArray(int size)
    {
        array = new Wrapper<T>[size];
    }

    public T this[int index]
    {
        get { return array[index].Value; }
        set { array[index] = value; }
    }
}

Just to clarify, we now only have value type arrays, but ones where each value is a plain wrapper for a reference. We now can't accidentally violate type-safety at compile-time, and the CLR doesn't need to validate write operations.
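As a quick check that the workaround behaves as described, here's a self-contained sketch (repeating the two types so it compiles on its own):

```csharp
public struct Wrapper<T> where T : class
{
    private readonly T value;
    public T Value { get { return value; } }
    public Wrapper(T value) { this.value = value; }
    public static implicit operator Wrapper<T>(T value) { return new Wrapper<T>(value); }
}

public sealed class InvariantArray<T> where T : class
{
    private readonly Wrapper<T>[] array;
    public InvariantArray(int size) { array = new Wrapper<T>[size]; }
    public T this[int index]
    {
        get { return array[index].Value; }
        set { array[index] = value; }
    }
}

public class InvarianceDemo
{
    public static string Run()
    {
        var strings = new InvariantArray<string>(10);
        strings[0] = "hello"; // implicit T -> Wrapper<T> conversion in the setter
        // The covariant conversion is simply gone - this wouldn't compile:
        // InvariantArray<object> objects = strings;   // error CS0029
        return strings[0];
    }
}
```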

There's no memory overhead here – aside from the type information at the start, I'd actually expect the contents of a Wrapper<T>[] to be indistinguishable from a T[] in memory.


So, how does it perform? I've written a small console app to test it. You can download the full code, but the gist of it is that we use a stopwatch to measure how long it takes to either repeatedly write to an array, or repeatedly read from an array (validating that the value read is non-null, just to prove that we've really read something). I'm hoping I haven't fallen foul of any of the various mistakes in benchmarking which are so easy to make.

The test tries four scenarios:

  • object[] (but still storing strings)
  • string[]
  • Wrapper<string>[]
  • InvariantArray<string>

Running against an array size of 100, with 100 million iterations per test, I get the following results on my Thinkpad Twist:

Array type               Read time (ms)   Write time (ms)
object[]                 11842            44827
string[]                 12000            40865
Wrapper<string>[]        11843            29338
InvariantArray<string>   11825            32973

That's just one run, but the results are fairly consistent across runs. The one interesting deviation is the write time for object[] – I've observed it sometimes being the same as for string[], but not consistently. I don't understand this, but it does seem that the JIT isn't performing the optimization for string[] that it could if it spotted that string is sealed.

Both of the workarounds to avoid array covariance make a noticeable difference to the performance of writing to the array, without affecting read performance. Hooray!


I think it would be a very rare application which noticed a significant performance boost here, but I do like the fact that this is one of those situations where a cleaner design also leads to better performance, without many obvious technical downsides.

That said, I doubt that I'll actually be using this in real code any time soon – the fact that it's just "different" to normal C# code is a big downside in itself. Hope you found it interesting though :)

Posted by skeet | 24 comment(s)

But what does it all mean?

This year before NDC, I wrote an article for the conference edition of "The Developer" magazine. Follow that link to find the article in all its illustrated glory (along with many other fine articles, of course) – or read on for just the text.

Back when I used to post on newsgroups I would frequently be in the middle of a debate over the details of some behaviour or terminology, when one poster would say: “You’re just quibbling over semantics” as if this excused any and all previous inaccuracies. I would usually agree – I was indeed quibbling about semantics, but there’s no “just” about it.

Semantics is meaning, and that’s at the heart of communication – so for example, a debate over whether it’s correct to say that Java uses pass-by-reference¹ is all about semantics. Without semantics, there’s nothing to talk about.

This has been going on for years, and I’m quite used to being the pedant in any conversation when it comes to terminology – it’s a topic close to my heart. But over the years – and importantly since my attention has migrated to Stack Overflow, which tends to be more about real problems developers are facing than abstract discussions – I’ve noticed that I’m now being picky in the same sort of way, but about the meaning of data instead of terminology.

Data under the microscope

When it comes down to it, all the data we use is just bits – 1s and 0s. We assemble order from the chaos by ascribing meaning to those bits… and not just once, but in a whole hierarchy. For example, take the bits 01001010 00000000:

  • Taken as a little-endian 16-bit unsigned integer, they form a value of 74.
  • That 16-bit unsigned integer can be viewed as a UTF-16 code unit for the character ‘J’.
  • That character might be the first character within a string.
  • That string might be the target of a reference, which is the value for a field called “firstName”.
  • That field might be within an instance of a class called “Person”.
  • The instance of “Person” whose “firstName” field has a value
    which is a reference to the string whose first character is ‘J’ might itself be the target of a reference, which is the value for a field called “author”, within an instance of a class called “Article”.
  • The instance of “Article” whose “author” field (fill in the rest yourself…) might itself be the target of a reference which is part of a collection, stored (indirectly) via a field called “articles” in a class called “Magazine”.

As we’ve zoomed out from sixteen individual bits, at every level we’ve imposed meaning. Imagine all the individual bits of information which would be involved in a single instance of the Magazine with a dozen articles, an editorial, credits – and perhaps even images. Really imagine them, all written down next to each other, possibly without even the helpful gap between bytes that I included in our example earlier.

That’s the raw data. Everything else is “just” semantics.

So what does that have to do with me?

I’m sure I haven’t told you anything you don’t already know. Yes, we can impose meaning on these puny bits, with our awesome developer power. The trouble is that bits have a habit of rebelling if you try to impose the wrong kind of meaning on them… and we seem to do that quite a lot.

The most common example I see on Stack Overflow is treating text (strings) and binary data (image files, zip files, encrypted data) as if they were interchangeable. If you try to load a JPEG using StreamReader in .NET or FileReader in Java, you’re going to have problems. There are ways you can actually get away with it – usually by using the ISO-8859-1 encoding – but it’s a little bit like trying to drive down a road with a broken steering wheel, only making progress by bouncing off other obstacles.
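To make that concrete, here's a sketch (in C#, with invented sample bytes) of what happens when binary data takes a round trip through a text encoding:

```csharp
using System;
using System.Text;

public class BinaryAsTextDemo
{
    public static void Main()
    {
        // Pretend this is the start of a JPEG: bytes which aren't valid UTF-8
        byte[] original = { 0xFF, 0xD8, 0xFF, 0xE0 };

        // Decode as text (roughly what StreamReader does by default), then re-encode
        string asText = Encoding.UTF8.GetString(original);
        byte[] roundTripped = Encoding.UTF8.GetBytes(asText);

        // The invalid sequences were replaced with U+FFFD during decoding,
        // so the original bytes are unrecoverable
        Console.WriteLine(original.Length);
        Console.WriteLine(roundTripped.Length); // not 4 - the data is gone
    }
}
```

(ISO-8859-1 "works" precisely because every byte maps to a character and back – but you're still driving with that broken steering wheel.)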

While this is a common example, it’s far from the only one. Some of the problems which fall into this category might not obviously be due to the mishandling of data, but at a deep level they’re all quite similar:

  • SQL injection attacks due to mingling code (SQL) with data (values) instead of using parameters to keep the two separate.
  • The computer getting arithmetic “wrong” because the developer didn’t understand the meaning of floating binary point numbers, and should actually have used a floating decimal point type (such as System.Decimal or java.math.BigDecimal).
  • String formatting issues due to treating the result of a previous string formatting operation as another format string – despite the fact that now it includes user data which could really have any kind of text in it.
  • Double-encoding or double-unencoding of text data to make it safe for transport via a URL.
  • Almost anything to do with dates and times, including – but certainly not limited to – the way that java.util.Date and System.DateTime values don’t inherently have a format. They’re just values.
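The last point is worth a tiny demonstration: the format lives in the conversion, not in the value, and parsing imposes culture-specific meaning on the text:

```csharp
using System;
using System.Globalization;

public class DateFormatDemo
{
    public static void Main()
    {
        // A DateTime is just a value - formats only exist at conversion time
        DateTime date = new DateTime(2013, 6, 1);
        Console.WriteLine(date.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture)); // 2013-06-01
        Console.WriteLine(date.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture)); // 01/06/2013

        // The same text means different values under different cultures
        DateTime us = DateTime.Parse("01/06/2013", CultureInfo.GetCultureInfo("en-US")); // January 6th
        DateTime uk = DateTime.Parse("01/06/2013", CultureInfo.GetCultureInfo("en-GB")); // June 1st
        Console.WriteLine(us != uk); // True
    }
}
```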

The sheer bulk of questions which indicate a lack of understanding of the nature of data is enormous. Of course Stack Overflow only shows a tiny part of this – it doesn’t give much insight into the mountain of code which handles data correctly from the perspective of the types involved, but does entirely inappropriate things with those values from the perspective of the intended business meaning of those values.

It’s not all doom and gloom though. We have some simple but powerful weapons available in the fight against semantic drivel.


This article gives a good indication of why I’m a fan of statically typed languages. The type system can convey huge amounts of information about the nature of data, even if the business meaning of values of those types can be horribly overloaded.

Maybe it would be good if we distinguished between human-readable text which should usually be treated in a culture-sensitive way, and machine-parsable text which should usually be treated without reference to any culture. Those two types might have different operations available on them, for example – but it would almost certainly get messy very quickly.
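A quick illustration of the culture-sensitivity in question – this is just standard .NET formatting behaviour:

```csharp
using System;
using System.Globalization;

public class CultureDemo
{
    public static void Main()
    {
        double value = 1.5;

        // Human-readable text: culture-sensitive (German uses a comma)
        Console.WriteLine(value.ToString(CultureInfo.GetCultureInfo("de-DE"))); // 1,5

        // Machine-parsable text: invariant, regardless of the user's locale
        Console.WriteLine(value.ToString(CultureInfo.InvariantCulture));        // 1.5
    }
}
```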

For business-specific types though, it’s usually easy to make sure that each type is really only used for one concept, and only provides operations which are meaningful for that concept.

Meaningful names

Naming is undoubtedly hard, and I suspect most developers have had the same joyless experiences that I have of struggling for ten minutes to come up with a good class or method name, only to end up with the one we first thought of which we really don’t like… but which is simply better than all the other options we’ve considered. Still, it’s worth the struggle.

Our names don’t have to be perfect, but there’s simply no excuse for names such as “Form1” or “MyClass” which seem almost designed to carry no useful information whatsoever. Often simply the act of naming something can communicate meaning. Don’t be afraid to extract local variables in code just to clarify the meaning of some otherwise-obscure expression.
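For example (with entirely invented names and flags), extracting locals can turn an opaque condition into self-describing code:

```csharp
using System;

public static class NamingDemo
{
    // Obscure: the reader has to decode the expression
    public static bool ShouldRetryObscure(int flags, int retries)
    {
        return (flags & 0x04) != 0 && retries < 3;
    }

    // Clearer: the extracted locals carry the meaning
    public static bool ShouldRetryClear(int flags, int retries)
    {
        bool isRetryable = (flags & 0x04) != 0;
        bool belowRetryLimit = retries < 3;
        return isRetryable && belowRetryLimit;
    }

    public static void Main()
    {
        Console.WriteLine(ShouldRetryClear(0x04, 1)); // True
    }
}
```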


I don’t think I’ve ever met a developer who actually enjoys writing documentation, but I think it’s hard to deny that it’s important. There tend to be very few things that are so precisely communicated just by the names of types, properties and methods that no further information is required. What is guaranteed about the data? How should it be used – and how should it not be used?

The form, style and detail level of documentation will vary from project to project, but don’t underestimate its value. Aside from behavioural details, ask yourself what meaning you’re imposing on or assuming about the data you’re dealing with… what would happen if someone else made different assumptions? What could go wrong, and how can you prevent it by expressing your own understanding clearly? This isn’t just important for large projects with big teams, either – it’s entirely possible that the person who comes to the code with a different viewpoint is going to be you, six months later.


I apologise if this all sounds like motherhood and apple pie. I’m not saying anything new, after all – of course we all agree that we need to understand the meaning of our data. I’m really not trying to waste your time though: I want you to take a moment to appreciate just how much it matters that we understand the data we work with, and how many problems can be avoided by putting effort into communicating that effectively and reading what others have written about their data.

There are other approaches we can take beyond those I’ve listed above, too – much more technically exciting ones – around static code analysis, contracts, complicated annotations and the like. I have nothing against them, but just understanding the value of semantics is a crucial starting point, and once everyone on your team agrees on that, you’ll be in a much better position to move forward and agree on the meaning of your data itself. From there, the world’s your oyster – if you see what I mean.

¹ It doesn’t; references are passed by value, as even my non-programmer wife knows by now. That’s how often this myth comes up.

Book Review: Async in C# 5.0


A while ago I was attending one of the Developer, Developer, Developer conferences in Reading, and I heard Alex Davies give a talk about actors and async. He mentioned that he was in the process of writing a short book for O'Reilly about async in C# 5, and I offered to review it for him. Many months later (sorry Alex!) I'm finally getting round to it.

Disclaimer: The review copy was given to me for free, and equally the book is arguably a competitor of the upcoming 3rd edition of C# in Depth from the view of readers who already own the 2nd edition... so you could say I'm biased in both directions. Hopefully they cancel out.

This is a book purely on async. It's not a general C# book, and it doesn't even cover the tiny non-async features in C# 5. It's all about asynchrony. As you'd expect, it's therefore pretty short (92 pages) and can comfortably be consumed in a single session. Alex's writing style is informal and easy to read. Of course the topic of the book is anything but simple, so even though you may read the whole book in one go first time, that doesn't mean you're likely to fully internalize it straight away. The book is divided into 15 short chapters, so you can revisit specific areas as and when you need to.


I've been writing and speaking about async for about two and a half years now. I've tried various ways of explaining it, and I'm pretty sure it's one of those awkward concepts which really just needs to click eventually. I've had some mails from people for whom my explanation was the one to do the trick... and other mails from folks who only "got it" after seeing another perspective. I'd encourage anyone learning about async to read a variety of books, articles, blog posts and so on. I don't even think it's a matter of finding the single "right" explanation for you – it's a matter of letting them all percolate.

The book covers all the topics you'd expect it to:

  • Why asynchrony is important
  • Drawbacks of library-only approaches
  • How async/await behaves in general
  • Threading and synchronization contexts
  • Exceptions
  • Different code contexts (ASP.NET, WinRT, regular UI apps)
  • How async code is compiled

Additionally there are brief sections on unit testing, parallelism and actors. Personally I'd have preferred the actors part to be omitted, with more discussion on the testing side – particularly in terms of how to write deterministic asynchronous tests. However, I know that Alex is a big fan of actors, so I can forgive a little self-indulgence on that front.

There's one area where I'm not sure I agree with the advice in the book: exceptions. Alex repeatedly advises that you shouldn't let exceptions go unobserved. I used to go along with that almost without thinking – but now I'm not so sure. There are certainly cases where it's the right advice, but I'm not as comfortable with it as a blanket rule as I used to be. I'll try to put my thoughts in order on this front and blog about it separately at a later date.

That aside, this is a good, pragmatic book. To be honest, I suspect no book on async is going to go into quite as many details as the PFX team blog, and that's probably a good thing. But "Async in C# 5.0" is a very good starting point for anyone wanting to get to grips with async, and I in no way begrudge any potential C# in Depth 3rd edition sales I may lose by saying so ;)


New tool to play with: SemanticMerge

A little while ago I was contacted about a new merge tool from the company behind PlasticSCM. (I haven't used Plastic myself, but I'd heard of it.) My initial reaction was that I wasn't interested in anything which required me to learn yet another source control system, but SemanticMerge is independent of PlasticSCM.

My interest was piqued when I learned that SemanticMerge is actually built on Roslyn. I don't generally care much about implementation details, but I'm looking out for uses of Roslyn outside Microsoft, partly to see if I can gain any inspiration for using it myself. Between the name and the implementation detail, it should be fairly obvious that this is a tool which understands changes in C# source code rather better than a plain text diff.

I've had SemanticMerge installed on one of my laptops for a month or so now. Unfortunately it's getting pretty light use – my main coding outside work is on Noda Time, and as I perform the vast majority of the commits, the only time I really need to perform merges is when I've forgotten to push a commit from one machine before switching to another. I've used it for diffs though, and it seems to be doing the right thing there – showing which members have been added, moved, changed etc.

I don't believe it can currently support multi-file changes – for example, spotting that a lot of changes are all due to a rename operation – but even if it doesn't right now, I suspect that may be worked on in the future. And of course, the merge functionality is the main point.

SemanticMerge is now in beta, so pop over to the web site and give it a try.


The Open-Closed Principle, in review


I've been to a few talks on SOLID before. Most of the principles seem pretty reasonable to me – but I've never "got" the open-closed principle (OCP from here on). At CodeMash this year, I mentioned this to the wonderful Cori Drew, who said that she'd been at a user group talk where she felt it was explained well. She mailed me a link to the user group video, which I finally managed to get round to watching last week. (The OCP part is at around 1 hour 20.)

Unfortunately I still wasn't satisfied, so I thought I'd try to hit up the relevant literature. Obviously there are umpteen guides to OCP, but I decided to start with Wikipedia, and go from there. I mentioned my continuing disappointment on Twitter, and the conversation got lively. Uncle Bob Martin (one of the two "canonical sources" for OCP) wrote a follow-up blog post, and I decided it would be worth writing one of my own, too, which you're now reading.

I should say up-front that in some senses this blog post isn't so much about the details of the open-closed principle, as about the importance of careful choice of terminology at all levels. As we'll see later, when it comes to the "true" meaning of OCP, I'm pretty much with Uncle Bob: it's motherhood and apple pie. But I believe that meaning is much more clearly stated in various other principles, and that OCP as the expression of an idea is doing more harm than good.

Reading material

So what is it? (Part 1 – high level)

This is where it gets interesting. You see, there appear to be several different interpretations of the principle – some only subtly distinct, others seemingly almost unrelated. Even without looking anything up, I knew an expanded version of the name:

Modules should be open for extension and closed for modification.

The version quoted in Wikipedia and in Uncle Bob's paper actually uses "Software entities (classes, modules, functions, etc.)" instead of modules, but I'm not sure that really helps. Now I'm not naïve enough to expect everything in a principle to be clear just from the title, but I do expect some light to be shed. In this case, unfortunately I'm none the wiser. "Open" and "closed" sound permissive and restrictive respectively, but without very concrete ideas about what "extension" and "modification" mean, it's hard to tell much more.

Fair enough – so we read on to the next level. Unfortunately I don't have Bertrand Meyer's "Object-Oriented Software Construction" book (which I take to be the original), but Uncle Bob's paper is freely available. Wikipedia's summary of Meyer's version is:

The idea was that once completed, the implementation of a class could only be modified to correct errors; new or changed features would require that a different class be created. That class could reuse coding from the original class through inheritance. The derived subclass might or might not have the same interface as the original class.

Meyer's definition advocates implementation inheritance. Implementation can be reused through inheritance but interface specifications need not be. The existing implementation is closed to modifications, and new implementations need not implement the existing interface.

And Uncle Bob's high level description is:

Modules that conform to the open-closed principle have two primary attributes.

  1. They are "Open For Extension". This means that the behavior of the module can be extended. That we can make the module behave in new and different ways as the requirements of the application change, or to meet the needs of new applications.
  2. They are "Closed for Modification". The source code of such a module is inviolate. No one is allowed to make source code changes to it.

I immediately took a dislike to both of these descriptions. Both of them specifically say that the source code can't be changed, and the description of Meyer's approach to "make a change by extending a class" feels like a ghastly abuse of inheritance to me... and goes firmly against my (continued) belief in Josh Bloch's advice of "design for inheritance or prohibit it" – where in the majority of cases, designing a class for inheritance involves an awful lot of work for little gain. Designing an interface (or pure abstract class) still involves work, but with fewer restrictions and risks.

Craig Larman's article uses the term "closed" in a much more reasonable way, to my mind:

Also, the phrase "closed with respect to X" means that clients are not affected if X changes.

When I say "more reasonable way" I mean in terms of how I want to write code... not in terms of the use of the word "closed". This is simply not how the word "closed" is used elsewhere in my experience. In the rare cases where "closed" is used with "to", it's usually in terms of what's not allowed in: "This bar is closed to under 18s" for example. Indeed, that's how I read "closed to modification" and that appears to be backed up by the two quotes which say that once a class is complete, the source code cannot be changed.

Likewise the meaning of "open for extension" seems unusual to me. I'd argue that the intuitive meaning is "can be extended" – where the use of the term "extended" certainly nods towards inheritance, even if that's not the intended meaning. If the idea is "we can make the module behave differently" – as Uncle Bob's description suggests – then "open for extension" is a very odd way of expressing that idea. I'd even argue that in the example given later, it's not the "open" module that behaves differently – it's the combination of the module and its collaborators, acting as a unified program, which behaves differently after some aspects are modified.

So what is it? (Part 2 – more detail)

Reading on through the rest of Uncle Bob's paper, the ideas become much more familiar. There's a reasonable example of a function which is asked to draw a collection of shapes: the bad code is aware of all the types of shape available, and handles each one separately. The good code uses an abstraction where each shape (Circle, Square) knows how to draw itself and inherits from a common base class (Shape). Great stuff... but what's that got to do with what was described above? How are the concepts of "open" and "closed" clarified?

The answer is that they're not. The word "open" doesn't occur anywhere in the rest of the text, other than as part of the term "open-closed principle" or as a label for "open client". While it's perhaps rather easier to see this in hindsight, I suspect that any time a section which is meant to clarify a concept doesn't use some of the key words used to describe the concept in a nutshell, that description should be treated as suspect.

The word "closed" appears more often, but only in terms of "closed against" which is never actually defined. (Is "closed against" the same as "closed for"?) Without Craig Larman's explanation, sentences like this make little sense to me:

The function DrawAllShapes does not conform to the open-closed principle because it cannot be closed against new kinds of shapes.

Even Craig's explanation feels somewhat at odds with Uncle Bob's usage, as it talks about clients being affected. This is another of the issues I have with the original two descriptions: they talk about a single module being open/closed, whereas we're dealing with abstractions where there are naturally at least two pieces of code involved (and usually three). Craig's description of changes in one module not affecting clients is describing a relationship – which is a far more useful way of approaching things. Even thinking about the shape example, I'm getting increasingly confused about exactly what's open and what's closed. It feels to me like it's neither the concrete shape classes nor the shape-drawing code which is open or closed – it's the interface between the two; the abstract Shape class. After all, these statements seem reasonable:

  • The Shape class is open for extension: there can be many different concrete subclasses, and code which only depends on the Shape class doesn't need to know about them in order to use them when they are presented as shapes.
  • The Shape class is closed for modification: no existing functions can be removed (as they may be relied on by existing clients) and no new pure virtual functions can be added (as they will not be implemented by existing subclasses).
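To make those statements concrete, here's the paper's example translated into a C# sketch – the class names follow the paper, the rest is my own:

```csharp
using System;
using System.Collections.Generic;

public abstract class Shape
{
    // The point of predicted variation: how a shape draws itself
    public abstract void Draw();
}

public class Circle : Shape
{
    public override void Draw() { Console.WriteLine("Drawing a circle"); }
}

public class Square : Shape
{
    public override void Draw() { Console.WriteLine("Drawing a square"); }
}

public static class Program
{
    // The "good" version: this code knows nothing about concrete shapes,
    // so new kinds of shape never require changes here
    public static void DrawAllShapes(IEnumerable<Shape> shapes)
    {
        foreach (Shape shape in shapes)
        {
            shape.Draw();
        }
    }

    public static void Main()
    {
        DrawAllShapes(new Shape[] { new Circle(), new Square() });
    }
}
```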

It's still not how I'd choose to express it, but at least it feels like it makes sense in very concrete terms. It doesn't work well with how Uncle Bob uses the term "closed" though, so I still think I may be on a different page when it comes to that meaning. (Uncle Bob does also make the point that any significant program isn't going to adhere to the principle completely in every part of the code – but in order to judge where it's appropriate to be closed, I do really need to understand what being closed means.)

Just to make it crystal clear, other than the use of the word "closed," the low-level description of what's good and what's bad, and why, is absolutely fine. I really have no problems with it. As I said at the start, the idea being expressed makes perfect sense. It just doesn't work (for me) when expressed in the terms used at a higher level.

Protected Variation

By contrast, let's look at a closely related idea which I hadn't actually heard about before I started all this research: protected variation. This name was apparently coined by Alistair Cockburn, and Craig Larman either quotes or paraphrases this as:

Identify points of predicted variation and create a stable interface around them.

Now that's a description I can immediately identify with. Every single word of it makes sense to me, even without reading any more of Craig's article. (I have read the rest, obviously, and I'd encourage others to do so.) This goes back to Josh Bloch's "design for inheritance or prohibit it" motto: identifying points of predicted variation is hard, and it's necessary in order to create a stable interface which is neither too constrictive for implementations nor too woolly for clients. With class inheritance there's the additional concern of interactions within a class hierarchy when a virtual method is called.

So in Uncle Bob's Shape example, there is just one point of predicted variation: how the shape is drawn. PV suggests the converse as well – that alongside points of predicted variation, there may be points which will not vary. That's inherent in the API to some extent – every shape must be capable of drawing itself with no further information (the Draw method has no parameters) but it could also be extended to non-virtual aspects. For example, we could decide that every shape has one (and only one) colour, which will not change during its lifetime. That can be implemented in the Shape class itself – with no predicted variation, there's no need of polymorphism.
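Continuing the Shape sketch, the fixed aspect might look like this (the colour idea is purely for illustration):

```csharp
using System;

public abstract class Shape
{
    private readonly string colour;

    protected Shape(string colour) { this.colour = colour; }

    // No predicted variation: one colour per shape, fixed for its lifetime,
    // so it's non-virtual and implemented once, in Shape itself
    public string Colour { get { return colour; } }

    // The single point of predicted variation: drawing
    public abstract void Draw();
}

public class Circle : Shape
{
    public Circle(string colour) : base(colour) { }
    public override void Draw() { Console.WriteLine("Drawing a " + Colour + " circle"); }
}

public class Demo
{
    public static void Main()
    {
        new Circle("red").Draw(); // Drawing a red circle
    }
}
```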

Of course, the costs of incorrectly predicting variation can be high: if you predict more variation than is actually warranted, you waste effort on over-engineering. If you predict less variation than is required, you usually end up either having to change quite a lot of code (if it's all under your control) or having to come up with an "extended" interface. There's also a way of shirking responsibility for some of this predicted variation, by making parts of the API "optional" – that's like saying, "We know implementations will vary here in an incompatible way, but we're not going to try to deal with it in the API. Good luck!" This usually arises in collection APIs, around mutating operations which may or may not be valid (based on whether the collection is mutable or not).

Not only is PV easy to understand – it's easy to remember for its comedy value, at least if you're a fan of The Hitchhiker's Guide to the Galaxy. Remember Vroomfondel and Majikthise, the philosophers who invaded Cruxwan University just as Deep Thought was about to announce the answer to Life, the Universe, and Everything? Even though they were arguing with programmers, it sounds like they were actually the ones with software engineering experience:

"I'll tell you what the problem is mate," said Majikthise, "demarcation, that's the problem!"


"That's right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"

That sounds like a pretty good alternative description of Protected Variation to me.


So, that's what I don't like about OCP. The name, and the broad description – both of which I believe to be unhelpful, and poorly understood. (While I've obviously considered the possibility that I'm the only one who finds it confusing, I've heard enough variation in the explanations of it to suggest that I'm really not the only one.)

That sounds like a triviality, but I think it's rather important. I suspect that OCP has been at least mentioned in passing in thousands if not tens of thousands of user groups and conferences. The purpose of such gatherings is largely for communication of ideas – and when a sound idea is poorly expressed, an opportunity is wasted. I suspect that any time Uncle Bob has personally presented it in detail, the idea has sunk in appropriately – possibly after some initial confusion about the terminology. But what about all the misinterpretations and "glancing blows" where OCP is only mentioned as a good thing that clearly everyone wants to adhere to, with no explanation beyond the obscure ones described in part one above? How many times did they shed more confusion than light?

I believe more people are familiar with Uncle Bob's work on OCP than Bertrand Meyer's. Further, I suspect that if Bertrand Meyer hadn't already introduced the name and brief description, Uncle Bob may well have come up with far more descriptive ones himself, and the world would have been a better place. Fortunately, we do have a better name and description for a concept which is at least very closely related. (I'm not going to claim PV and OCP are identical, but close enough for a lot of uses.)

Ultimately, words matter – particularly when it comes to single sentence descriptions which act as soundbites; shorthand for communicating a complex idea. It's not about whether the more complex idea can be understood after carefully reading thorough explanations. It's about whether the shorthand conveys the essence of the idea in a clear way. On that front, I believe the open-closed principle fails – which is why I'd love to see it retired in favour of more accessible ones.

Note for new readers

I suspect this post may end up being read more widely than most of my blog entries. If you're going to leave a comment, please be aware that the CAPTCHA doesn't work on Chrome. I'm aware of this, but can't fix it myself. If you right-click on the broken image and select "open in new tab" you should get a working image. Apologies for the inconvenience.


C# in Depth 3rd edition available for early access, plus a discount code…

Readers who follow me on Twitter or Google+ know this already, but…

The third edition of C# in Depth is now available for early access from its page on the Manning website. I’ve been given a special discount code which expires at midnight EST on February 17th, so be quick if you want to use it – it gives 50% off either version. The code is “csharpsk”.

It’s likely that we’ll have a separate (permanent) discount for readers who already own the second edition, but the details of that haven’t been decided yet.

Just to be clear, the third edition is largely the second edition plus the changes to cover C# 5 – I haven’t done as much rewriting as I did for the second edition, mostly because I was already pretty happy with the second edition :) Obviously the largest (by far) feature in C# 5 is async/await, which is covered in detail in the new chapter 15.


Fun with Object and Collection Initializers

Gosh it feels like a long time since I’ve blogged – particularly since I’ve blogged anything really C#-language-related.

At some point I want to blog about my two CodeMash 2013 sessions (making the C# compiler/team cry, and learning lessons about API design from the Spice Girls) but those will take significant time – so here’s a quick post about object and collection initializers instead. Two interesting little oddities...

Is it an object initializer? Is it a collection initializer? No, it’s a syntax error!

The first part came out of a real life situation – FakeDateTimeZoneSource, if you want to look at the complete context.

Basically, I have a class designed to help test time zone-sensitive code. As ever, I like to create immutable objects, so I have a builder class. That builder class has various properties which we’d like to be able to set, and we’d also like to be able to provide it with the time zones it supports, as simply as possible. For the zones-only use case (where the other properties can just be defaulted) I want to support code like this:

var source = new FakeDateTimeZoneSource.Builder
{
    CreateZone("x"),
    CreateZone("y")
};

(CreateZone is just a method to create an arbitrary time zone with the given name.)

To achieve this, I made the Builder implement IEnumerable<DateTimeZone>, and created an Add method. (In this case the IEnumerable<> implementation actually works; in another case I've used explicit interface implementation and made the GetEnumerator() method throw NotSupportedException, as it's really not meant to be called in either case.)
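Stripped right down, the shape of that builder looks something like this (a minimal sketch for illustration only – the real FakeDateTimeZoneSource.Builder has more members, and I'm using Noda Time's DateTimeZone type here):

```csharp
using System.Collections;
using System.Collections.Generic;

public sealed class Builder : IEnumerable<DateTimeZone>
{
    private readonly List<DateTimeZone> zones = new List<DateTimeZone>();

    // This is the method the collection initializer actually calls.
    public void Add(DateTimeZone zone)
    {
        zones.Add(zone);
    }

    // Implemented explicitly: it only exists to make collection
    // initializers legal, although in this case it genuinely works.
    IEnumerator<DateTimeZone> IEnumerable<DateTimeZone>.GetEnumerator()
    {
        return zones.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return ((IEnumerable<DateTimeZone>) this).GetEnumerator();
    }
}
```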

So far, so good. The collection initializer worked perfectly as normal. But what about when we want to set some other properties? Without any time zones, that's fine:

var source = new FakeDateTimeZoneSource.Builder
{
    VersionId = "foo"
};

But how could we set VersionId and add some zones? This doesn't work:

var invalid = new FakeDateTimeZoneSource.Builder
{
    VersionId = "foo",
    CreateZone("x")
};

That's neither a valid object initializer (the second part doesn't specify a field or property) nor a valid collection initializer (the first part does set a property).

In the end, I had to expose an IList<DateTimeZone> property:

var valid = new FakeDateTimeZoneSource.Builder
{
    VersionId = "foo",
    Zones = { CreateZone("x"), CreateZone("y") }
};

An alternative would have been to expose a property of type Builder which just returned the builder itself - the same code would have been valid, but it would have been distinctly odd, and it would have allowed some really spurious code.
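To make the "spurious code" point concrete, here's a hypothetical sketch of that rejected design (not the code I actually wrote):

```csharp
// Hypothetical alternative: Zones just returns the builder itself.
public Builder Zones { get { return this; } }

// The desired usage would still have worked, because the element
// initializer calls Add on whatever Zones returns - i.e. the builder:
var builder = new FakeDateTimeZoneSource.Builder
{
    VersionId = "foo",
    Zones = { CreateZone("x"), CreateZone("y") }
};

// ...but so would nonsense like this, as every Zones is just "this":
builder.Zones.Zones.Zones.Add(CreateZone("z"));
```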

I'm happy with the result in terms of the flexibility for clients - but the class design feels a bit messy, and I wouldn't have wanted to expose this for the "production" assembly of Noda Time.

Describing all of this to a colleague gave rise to the following rather sillier observation...

Is it an object initializer? Is it a collection initializer? (Parenthetically speaking...)

In a lot of C# code, an assignment expression is just a normal expression. That means there's potentially room for ambiguity, in exactly the same kind of situation as above - when sometimes we want a collection initializer, and sometimes we want an object initializer. Consider this sample class:

using System;
using System.Collections;

class Weird : IEnumerable
{
    public string Foo { get; set; }

    private int count;
    public int Count { get { return count; } }

    public void Add(string x)
    {
        count++;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        throw new NotSupportedException();
    }
}

As you can see, it doesn't actually remember anything passed to the Add method, but it does remember how many times we've called it.

Now let's try using Weird in two ways which only differ in terms of parentheses. First up, no parentheses:

string Foo = "x";
Weird weird = new Weird { Foo = "y" };
Console.WriteLine(Foo);         // x
Console.WriteLine(weird.Foo);   // y
Console.WriteLine(weird.Count); // 0

Okay, so it's odd having a local variable called Foo, but we're basically fine. This is an object initializer, and it's setting the Foo property within the new Weird instance. Now let's add a pair of parentheses:

string Foo = "x";
Weird weird = new Weird { (Foo = "y") };
Console.WriteLine(Foo);         // y
Console.WriteLine(weird.Foo);   // Nothing (null)
Console.WriteLine(weird.Count); // 1

Just adding those parentheses turns the object initializer into a collection initializer, whose sole item is the result of the assignment operator - which is the value that has now been assigned to Foo.

Needless to say, I don't recommend using this approach in real code...

Posted by skeet | 10 comment(s)

Stack Overflow question checklist

My earlier post on how to write a good question is pretty long, and I suspect that even when I refer people to it, often they don't bother reading it. So here's a short list of questions to check after you've written a question (and to think about before you write the question):

  • Have you done some research before asking the question? 1
  • Have you explained what you've already tried to solve your problem?
  • Have you specified which language and platform you're using, including version number where relevant?
  • If your question includes code, have you written it as a short but complete program? 2
  • If your question includes code, have you checked that it's correctly formatted? 3
  • If your code doesn't compile, have you included the exact compiler error?
  • If your question doesn't include code, are you sure it shouldn't?
  • If your program throws an exception, have you included the exception, with both the message and the stack trace?
  • If your program produces different results to what you expected, have you stated what you expected, why you expected it, and the actual results?
  • If your question is related to anything locale-specific (languages, time zones) have you stated the relevant information about your system (e.g. your current time zone)?
  • Have you checked that your question looks reasonable in terms of formatting?
  • Have you checked the spelling and grammar to the best of your ability? 4
  • Have you read the whole question to yourself carefully, to make sure it makes sense and contains enough information for someone coming to it without any of the context that you already know?

    If the answer to any of these questions is "no" you should take the time to fix up your question before posting. I realize this may seem like a lot of effort, but it will help you to get a useful answer as quickly as possible. Don't forget that you're basically asking other people to help you out of the goodness of their heart - it's up to you to do all you can to make that as simple as possible.

    1 If you went from "something's not working" to "asking a question" in less than 10 minutes, you probably haven't done enough research.

    2 Ideally anyone answering the question should be able to copy your code, paste it into a text editor, compile it, run it, and observe the problem. Console applications are good for this - unless your question is directly about a user interface aspect, prefer to write a short console app. Remove anything not directly related to your question, but keep it complete enough to run.

    3 Try to avoid code which makes users scroll horizontally. You may well need to change how you split lines from how you have it in your IDE. Take the time to make it as clear as possible for those trying to help you.

    4 I realize that English isn't the first language for many Stack Overflow users. We're not looking for perfection - just some effort. If you know your English isn't good, see if a colleague or friend can help you with your question before you post it.

    Posted by skeet | 17 comment(s)

    Noda Time v1.0 released

    Go get Noda Time 1.0!

    Today is the end of the longest release cycle I've been personally involved in. On November 5th 2009, I announced my intention to write a port of Joda Time for .NET. The next day, Noda Time was born - with a lofty (foolhardy) set of targets.

    Near the end of a talk *about* Noda Time this evening, I released Noda Time 1.0.0.

    It's taken three years, but I'm immensely proud of what we've managed to achieve. We're far from "done" but I believe we're already significantly ahead of most other date/time APIs I've seen in terms of providing a clean API which reduces *incidental* complexity while highlighting the *inherent* complexity of the domain. (This is a theme I'm becoming dogmatic about on various fronts.)

    There's more to do - I can't see myself considering Noda Time to be "done" any time soon - but hopefully now we've got a stable release, we can start to build user momentum.

    One point I raised at the DotNetDevNet presentation tonight was that there's a definite benefit (in my very biased view) in just *looking into* Noda Time:

    • If you can't use it in your production code, use it when prototyping
    • If you can't use it in your prototype code, play with it in personal projects
    • If you can't use it in personal projects, read the user guide to understand the concepts

    I hope that simply looking at the various types that Noda Time provides will give you more insight into how you should be thinking about date and time handling in your code. While the BCL API has a lot of flaws, you can work around most of them if you make it crystal clear what your data means at every step. The type system will leave that largely ambiguous, but there's nothing to stop you from naming your variables descriptively, and adding appropriate comments.

    Of course, I would far prefer it if you'd start using Noda Time and raising issues on how to make it better. Spread the word.

    Oh, and if anyone from the BCL team is reading this and would like to include something like Noda Time into .NET 5 as a "next generation" date/time, I'd be *really* interested in talking to you :)

    Posted by skeet | 13 comment(s)

    How can I enumerate thee? Let me count the ways...

    This weekend, I was writing some demo code for the async chapter of C# in Depth - the idea was to decompile a simple asynchronous method and see what happened. I received quite a surprise during this, in a way which had nothing to do with asynchrony.

    Given that at execution time, text refers to an instance of System.String, and assuming nothing in the body of the loop captures the ch variable, how would you expect the following loop to be compiled?

    foreach (char ch in text)
    {
        // Body here
    }

    Before today, I could think of four answers depending on the compile-time type of text, assuming it compiled at all. One of those answers is if text is declared to be dynamic, which I'm not going to go into here. Let's stick with static typing.

    If text is declared as IEnumerable...

    In this case, the compiler can only use the non-generic IEnumerator interface, and I'd expect the code to be roughly equivalent to this:

    IEnumerator iterator = text.GetEnumerator();
    try
    {
        while (iterator.MoveNext())
        {
            char ch = (char) iterator.Current;
            // Body here
        }
    }
    finally
    {
        IDisposable disposable = iterator as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
    Note how the disposal of the iterator has to be conditional, as IEnumerator doesn't extend IDisposable.

    If text is declared as IEnumerable<char>...

    Here, we don't need any execution time casting, and the disposal can be unconditional:

    IEnumerator<char> iterator = text.GetEnumerator();
    try
    {
        while (iterator.MoveNext())
        {
            char ch = iterator.Current;
            // Body here
        }
    }
    finally
    {
        iterator.Dispose();
    }

    If text is declared as string...

    Now things get interesting. System.String implements IEnumerable<char> using explicit interface implementation, and exposes a separate public GetEnumerator() method which is declared to return a CharEnumerator.

    Usually when I find a type doing this sort of thing, it's for the sake of efficiency, to reduce heap allocations. For example, List<T>.GetEnumerator returns a List<T>.Enumerator, which is a struct with the appropriate iteration members. This means that if you use foreach over an expression of type List<T>, the iterator can stay on the stack in most cases, saving object allocation and garbage collection.
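    As a sketch of that List<T> behaviour - the struct enumerator is only visible if you call the public GetEnumerator() directly, rather than through the interface:

```csharp
using System;
using System.Collections.Generic;

List<int> numbers = new List<int> { 10, 20, 30 };

// The public GetEnumerator() returns the struct List<int>.Enumerator,
// so the iteration itself requires no heap allocation.
List<int>.Enumerator iterator = numbers.GetEnumerator();
while (iterator.MoveNext())
{
    Console.WriteLine(iterator.Current);
}

// Going via the interface forces the struct to be boxed instead:
IEnumerable<int> asInterface = numbers;
IEnumerator<int> boxed = asInterface.GetEnumerator(); // heap allocation
```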

    In this case, however, I suspect CharEnumerator was introduced (way back in .NET 1.0) to avoid having to box each character in the string. This was one reason for foreach handling to be based on types obeying the enumerable pattern, as well as there being support through the normal interfaces. It strikes me that it could still have been a structure in the same way as for List<T>, but maybe that wasn't considered as an option.

    Anyway, it means that I would have expected the code to be compiled like this, even back to C# 1:

    CharEnumerator iterator = text.GetEnumerator();
    while (iterator.MoveNext())
    {
        char ch = iterator.Current;
        // Body here
    }

    What really happens when text is declared as string...

    (This is the bit that surprised me.)

    So far, I've been assuming that the C# compiler doesn't have any special knowledge about strings, when it comes to iteration. I knew it did for arrays, but that's all. The actual result - under the C# 5 compiler, at least - is to use the Length property and the indexer directly:

    int index = 0;
    while (index < text.Length)
    {
        char ch = text[index];
        // Body here
        index++;
    }

    There's no heap allocation, and no Dispose call. If the variable in question can change its value within the loop (e.g. if it's a field, or a captured variable, or there's an assignment to it within the body) then a copy is made of the variable value (just a reference, of course) first, so that all member access is performed on the same object.
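    So for a loop over a field, for example, my understanding is that the generated code is roughly equivalent to this (a sketch of the expansion, not decompiled output; the field name is just for illustration):

```csharp
// foreach (char ch in this.text) { ... } where text is a string field
// becomes roughly:
string copy = this.text;   // copy the reference once, up front
int index = 0;
while (index < copy.Length)
{
    char ch = copy[index];
    // Body here - reassigning this.text has no effect on the loop
    index++;
}
```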


    So, there we go. There's nothing particularly mind-blowing here - certainly nothing to affect your programming style, unless you were deliberately avoiding using foreach on strings "because it's slow." It's still a good lesson in not assuming you know what the compiler is going to do though... so long as the results are as expected, I'm very happy for them to put extra smarts in, even if it does mean having to change my C# in Depth sample code a bit...

    Posted by skeet | 19 comment(s)

    Stack Overflow and personal emails

    This post is partly meant to be a general announcement, and partly meant to be something I can point people at in the future (rather than writing a short version of this on each email).

    These days, I get at least a few emails practically every day along the lines of:

    "I saw you on Stack Overflow, and would like you to answer this development question for me..."

    It's clear that the author:

    • Is aware of Stack Overflow
    • Is aware that Stack Overflow is a site for development Q&A
    • Is aware that I answer questions on Stack Overflow

    ... and yet they believe that the right way of getting me to answer a question is by emailing it to me directly. Sometimes it's a link to a Stack Overflow question, sometimes it's the question asked directly in email.

    In the early days of Stack Overflow, this wasn't too bad. I'd get maybe one email like this a week. Nowadays, it's simply too much.

    If you have a question worthy of Stack Overflow, ask it on Stack Overflow. If you've been banned from asking questions due to asking too many low-quality ones before, then I'm unlikely to enjoy answering your questions by email - learn what makes a good question instead, and edit your existing questions.

    If you've already asked the question on Stack Overflow, you should consider why you think it's more worthy of my attention than everyone else's questions. You should also consider what would happen if everyone who would like me to answer a question decided to email me.

    Of course in some cases it's appropriate. If you've already asked a question, written it as well as you can, waited a while to see if you get any answers naturally, and if it's in an area that you know I'm particularly experienced in (read: the C# language, basically) then that's fine. If your question is about something from C# in Depth - a snippet which doesn't work or some text you don't understand, for example - then it's entirely appropriate to mail me directly.

    Basically, ask yourself whether you think I will actually welcome the email. Is it about something you know I'm specifically interested in? Or are you just trying to get more attention to a question, somewhat like jumping a queue?

    I'm aware that it's possible this post makes me look either like a grumpy curmudgeon or (worse) like an egocentric pseudo-celebrity. The truth is I'm just like everyone else, with very little time on my hands - time I'd like to spend as usefully and fairly as possible.

    Posted by skeet | 33 comment(s)

    The future of "C# in Depth"

    I'm getting fairly frequent questions - mostly on Twitter - about whether there's going to be a third edition of C# in Depth. I figure it's worth answering it once in some detail rather than repeatedly in 140 characters ;)

    I'm currently writing a couple of new chapters covering the new features in C# 5 - primarily async, of course. The current "plan" is that these will be added to the existing 2nd edition to create a 3rd edition. There will be minimal changes to the existing text of the 2nd edition - basically going over the errata and editing a few places which ought to mention C# 5 early. (In particular the changes to how foreach loop variables are captured.)

    So there will definitely be new chapters. I'm hoping there'll be a full new print (and ebook of course) edition, but no contracts have been signed yet. I'm hoping that the new chapters will be provided free electronically to anyone who's already got the ebook of the 2nd edition - but we'll see. Oh, and I don't have any timelines at the moment. Work is more demanding than it was when I was writing the first and second editions, but obviously I'll try to get the job done at a reasonable pace. (Writing about async in a way which is both accessible and accurate is really tricky, by the way.)

    Of course when I've finished those, I've got two other C# books I want to be writing... when I'm not working on Noda Time, Tekpub screencasts etc...


    I had a question on Twitter around the "two other C# books". I don't want to go into too many details - partly because they're very likely to change - but my intention is to write "C# from Scratch" and "C# in Style". The first would be for complete beginners; the second wouldn't go into "how things work" so much as "how to use the language most effectively." (Yes, competition for Effective C#.) One possibility is that both would be donationware, at least in ebook form, ideally with community involvement in terms of public comments.

    I'm hoping that both will use the same codebase as an extended example, where "From Scratch" would explain what the code does, and "In Style" would explain why I chose that approach. Oh, and "From Scratch" would use unit testing as a teaching tool wherever possible, attempting to convey the idea that it's something every self-respecting dev does :)

    Posted by skeet | 13 comment(s)