General Encoding Strategies
Before we get into specifics, let's distill a few best practices for working with encodings. I'm sure you can tell there's a lot to consider, so let's focus on a few key points that will help us the most.
Use UTF-8 Everywhere You Can
We know UTF-8 isn't perfect, but it's pretty darn close. There is no other single encoding you could pick that has the potential to satisfy such a wide audience. It's our best bet. For these reasons, UTF-8 is quickly becoming the preferred encoding for the Web, email, and more.
If you have a say over what encoding or encodings your software will accept, support, and deliver, choose UTF-8 whenever you can. This is absolutely the best default.
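In Ruby 1.9 and later, you can make that default explicit. Here's a minimal sketch (the file name is just an illustration): Encoding.default_external controls how Ruby interprets external data when no encoding is given, and the "w:UTF-8" / encoding: "UTF-8" forms tag each I/O boundary explicitly.

```ruby
# Make UTF-8 the default for data Ruby reads from the outside world.
Encoding.default_external = Encoding::UTF_8

# Better still: be explicit at each I/O boundary.
# (the file name here is purely illustrative)
File.open("greeting.txt", "w:UTF-8") { |f| f.write("olé\n") }
text = File.read("greeting.txt", encoding: "UTF-8")
p text.encoding  # => #<Encoding:UTF-8>
```

Being explicit at the boundary beats relying on the global default, since the default can vary with the user's locale.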
Get in the Habit of Documenting Your Encodings
We learned that you must know the encoding of a piece of data to work with it properly. While there are tools to help you guess an encoding, you really want to avoid being in that position. Part of making that happen is being a good citizen and documenting your encodings at every step.
If you send an email, make sure it specifies the correct character set. Add a meta tag to your Web pages to state the encoding; view the source of this page for an example. Document the encodings accepted and returned by your APIs. This will raise everyone's encoding awareness, which helps us all.
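Concretely, the declarations look like this. The header and tag strings below follow the MIME, HTTP, and HTML conventions; the surrounding Ruby is just a convenient way to show them side by side:

```ruby
# Email: a Content-Type header that names the character set.
mail_header = "Content-Type: text/plain; charset=UTF-8"

# Web page: the equivalent meta tag in the document head.
meta_tag = %Q{<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">}

# API: say what you return, in the HTTP response header.
api_header = "Content-Type: application/json; charset=UTF-8"

puts mail_header, meta_tag, api_header
```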
Develop Your Encoding-Safe Senses
You need to get into the habit of thinking, "Is this encoding safe?" When you call a method, ask the question. When you hand your data off to some process, reality-check the results.
Have you ever done something like str[1..-2] in Ruby 1.8? I sure have, and it's not safe. You're cutting bytes there, and that may dice a bigger character into pieces. Then your data is junk.
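You can see the hazard in modern Ruby (1.9+), where String#[] is character-aware but String#byteslice reproduces the byte-based slicing Ruby 1.8 performed. A small sketch:

```ruby
s = "über"                    # "ü" is two bytes in UTF-8

# Character-aware slicing (Ruby 1.9+): safe.
p s[1..-2]                    # => "be"

# Byte-based slicing, as Ruby 1.8's str[1..-2] effectively did:
# this cuts the two-byte "ü" in half, leaving invalid data.
broken = s.byteslice(1, s.bytesize - 2)
p broken.valid_encoding?      # => false
```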
This may sound like paranoia, but it's really not as bad as it seems. There tend to be just a few key points where you need to go out of your way to protect the data, and asking this question repeatedly teaches you to spot them.
To give an example: while enhancing the standard CSV library for Ruby 1.9's m17n (multilingualization) implementation, I needed to use some user-provided data in a Regexp. That's easy, right?
Luckily, my instincts were just good enough to make me wonder: is that safe? I fed some UTF-32 data to Regexp.escape() to find out. Remember, multibyte encodings that render seemingly normal data are great for testing edge cases. Ruby broke my data:
p Regexp.escape("+".encode("UTF-32BE"))
"\x00\x00\x00\\+"
Now, this was just a case of Ruby 1.9 still being raw around the edges. It looks like this has been fixed in current builds:
$ ruby_dev -ve 'p Regexp.escape("+".encode("UTF-32BE"))'
ruby 1.9.0 (2008-10-10 revision 0) [i386-darwin9.5.0]
"\x00\x00\x00\\\x00\x00\x00+"
Still, the point stands: sometimes you can't even trust Ruby. Be cautious.
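One cheap habit along these lines, sketched here as a hypothetical helper (not part of the original CSV work): after handing data to a method, verify the result came back in the same, still-valid encoding.

```ruby
# A simple smoke test for encoding safety: did the operation keep
# our data in the same encoding, and is the result still valid?
def encoding_safe?(input, output)
  output.encoding == input.encoding && output.valid_encoding?
end

input  = "1 + 1 ≠ 3"                 # multibyte UTF-8 test data
output = Regexp.escape(input)
p encoding_safe?(input, output)      # => true
```

A check like this at the few key points where data crosses a boundary catches most encoding breakage early.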
The natural conclusion of this is that you want to know how encodings are handled all through the pipeline your data will pass through. Does your HTML arrange to receive form data in UTF-8? Is Ruby in UTF-8 mode when it receives that data? Does the MySQL table you store that data in have an encoding set to UTF-8? Modern versions of Rails even handle two of those three steps for you. That's why it's important to look into the tools you use.
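As a rough audit of that pipeline (the checkpoints here are illustrative, not an exhaustive list):

```ruby
# 1. HTML form: <form accept-charset="UTF-8" ...>
# 2. Ruby: how will the runtime interpret incoming external data?
p Encoding.default_external           # want UTF-8 here
# 3. MySQL: the table's charset, e.g.
#    CREATE TABLE posts (...) DEFAULT CHARACTER SET utf8;
```

Walking each stage like this once, for your own stack, is usually enough to find where a wrong default is lurking.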
These strategies aren't all you will need, but they are a terrific start. They're not too much to remember, and they will greatly increase your awareness of the issues. That's the most important thing.