Just a small correction for day 162, regarding what Casey said about UTF-16: like UTF-8, it is a variable-length encoding. It has a mechanism called surrogate pairs which allows you to encode code points above 16 bits (anything past U+FFFF).
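To make the surrogate pair part concrete, here is a minimal sketch (my own illustration, not something from the stream) of how a code point above U+FFFF gets split into two 16-bit units:

#include <stdint.h>
#include <stdio.h>

// Encode one code point > 0xFFFF into two 16-bit UTF-16 units.
static void EncodeSurrogatePair(uint32_t CodePoint, uint16_t *High, uint16_t *Low)
{
    uint32_t V = CodePoint - 0x10000;          // 20 bits remain after the offset
    *High = (uint16_t)(0xD800 + (V >> 10));    // top 10 bits -> high surrogate
    *Low  = (uint16_t)(0xDC00 + (V & 0x3FF));  // bottom 10 bits -> low surrogate
}

int main(void)
{
    uint16_t High, Low;
    EncodeSurrogatePair(0x1F500, &High, &Low);
    printf("U+1F500 -> 0x%04X 0x%04X\n", High, Low); // prints 0xD83D 0xDD00
    return 0;
}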

So for example, if you had a wchar_t const *bla and set it to U+1F500, you would get a string that is 2 wchar_t long (on Windows; Linux uses UCS-4 for wchar_t, so you get a length of 1 there. Fun times).
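You can see this for yourself with a small test program (bla is just my example name from above; the count you get depends on how big wchar_t is on your platform):

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    // U+1F500 as a wide string literal; the compiler picks the
    // encoding that matches the platform's wchar_t.
    wchar_t const *bla = L"\U0001F500";

    // Windows (16-bit wchar_t, UTF-16): prints 2, the code point is
    // stored as the surrogate pair 0xD83D 0xDD00.
    // Linux (32-bit wchar_t, UCS-4): prints 1.
    printf("length: %zu\n", wcslen(bla));

    for (size_t i = 0; i < wcslen(bla); ++i)
    {
        printf("unit %zu: 0x%X\n", i, (unsigned)bla[i]);
    }
    return 0;
}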

UCS-2 is the encoding UTF-16 was developed from, and that was the standard Windows used (I think they use UTF-16 now? Not 100% sure on that). UCS-2 is in fact fixed to 16 bits.