21 OCT 2008
General Encoding Strategies
Before we get into specifics, let's try to distill a few best practices for working with encodings. I'm sure you can tell that there's a lot to consider with encodings, so let's focus on a few key points that will help us the most.
Use UTF-8 Everywhere You Can
We know UTF-8 isn't perfect, but it's pretty darn close to perfect. There is no other single encoding you could pick that has the potential to satisfy such a wide audience. It's our best bet. For these reasons, UTF-8 is quickly becoming the preferred encoding for the Web, email, and more.
If you have a say over what encoding or encodings your software will accept, support, and deliver, choose UTF-8 whenever you can. This is absolutely the best default.
Get in the Habit of Documenting Your Encodings
We learned that you must know your data's encoding to work with it properly. While there are tools to help you guess an encoding, you really want to avoid being in that position. Part of making that happen is being a good citizen and documenting your encodings at every step.
If you send an email, make sure it specifies a correct character set. Add a meta tag to Web pages to state the encoding. View the source of this page for an example. Document the encodings accepted and returned by your APIs. This will raise everyone's encoding awareness, which helps us all.
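To make that concrete, here are a couple of places an encoding can be declared, sketched as plain Ruby strings (the exact values are illustrative, not pulled from any particular application):

```ruby
# An email or HTTP header carries the charset in Content-Type:
email_header = "Content-Type: text/plain; charset=UTF-8"

# A Web page states its encoding with a meta tag:
meta_tag = '<meta http-equiv="Content-Type" ' \
           'content="text/html; charset=UTF-8">'
```

The point isn't the exact syntax; it's that every handoff of data has some slot where the encoding can, and should, be spelled out.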
Develop Your Encoding-Safe Senses
You need to get into the habit of thinking, "Is this encoding safe?" When you call a method, ask the question. When you hand your data off to some process, reality check some results.
Have you ever done something like str[1..-2] in Ruby 1.8? I sure have, and it's not safe. You're cutting bytes there, and that may dice a bigger character into pieces. Then your data is junk.
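Modern Ruby slices strings by character, but we can simulate 1.8's byte-based slicing to see the danger (the string here is just an illustration):

```ruby
# Simulate Ruby 1.8's byte-based slicing on UTF-8 data.
# "é" is two bytes in UTF-8 (0xC3 0xA9), so cutting on a byte
# boundary can slice the character in half.
str     = "café"
chopped = str.bytes[0..-2].pack("C*").force_encoding("UTF-8")

chopped.valid_encoding?  # => false -- we cut "é" in two
str[0..-2]               # => "caf" -- 1.9+ slices by character
```

One byte off the end and the result is no longer valid UTF-8, which is exactly the "your data is junk" scenario above.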
This may sound like paranoia, but it's really not as bad as it seems. There tend to just be a few key points where you need to go out of your way to protect the data and it's asking this question repeatedly that teaches you to spot those.
To give an example, while enhancing the standard CSV library for Ruby 1.9's m17n (multilingualization) implementation, I needed to use some user-provided data in a Regexp. That's easy, right?
Regexp.escape(data)
Luckily, my instincts were just good enough to make me wonder: is that safe? I fed some UTF-32 data to Regexp.escape() to find out. Remember, multibyte encodings that still display as seemingly normal data are great for testing edge cases. Ruby broke my data:
p Regexp.escape("+".encode("UTF-32BE"))
"\x00\x00\x00\\+"
Now, this was just a case of Ruby 1.9 still being raw around the edges. It looks like this has been fixed in current builds:
$ ruby_dev -ve 'p Regexp.escape("+".encode("UTF-32BE"))'
ruby 1.9.0 (2008-10-10 revision 0) [i386-darwin9.5.0]
"\x00\x00\x00\\\x00\x00\x00+"
Still, the point stands: sometimes you can't even trust Ruby. Be cautious.
The natural conclusion of this is that you want to know how encodings are handled all through the pipeline your data will pass through. Does your HTML arrange to receive form data in UTF-8? Is Ruby in UTF-8 mode when it receives that data? Does the MySQL table you store that data in have an encoding set to UTF-8? Modern versions of Rails even handle two of those three steps for you. That's why it's important to look into the tools you use.
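As a sketch of guarding one of those boundaries in Ruby, you can verify that incoming data really is valid UTF-8 before trusting it (ensure_utf8 is a hypothetical helper name, not a Rails or standard-library API):

```ruby
# Reject data that claims to be UTF-8 but contains invalid bytes.
def ensure_utf8(data)
  str = data.dup.force_encoding("UTF-8")
  unless str.valid_encoding?
    raise ArgumentError, "expected valid UTF-8 data"
  end
  str
end

ensure_utf8("résumé")                            # passes through untouched
ensure_utf8("\xFF\xFE".b) rescue puts "rejected"  # invalid bytes raise
```

A check like this at the edge of your pipeline means any encoding problem surfaces immediately, instead of as junk in your database later.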
These strategies aren't all you will need, but they are a terrific start. This is not too much to remember and it will greatly increase your awareness of the issues. That's the most important thing.
Comments (4)
-
James Edward Gray II, April 24th, 2009
If you do find yourself in a situation where you don't know a character encoding and you are forced to guess it (again try to avoid this whenever possible), Andrew S. Townley posted a message to Ruby Talk showing how to use the rchardet gem to guess an encoding.
-
Thanks for all of this great info, buddy! I really learned about my problems with Ruby strings here. I'll spread the word =D
-
UTF-8 everywhere?
Nay.
Operating systems do not tend to use these internally.
OS X uses UTF-8, UTF-16, and UTF-32 where appropriate and handles conversion invisibly most of the time, with the ICU library under the hood. I would say it is better advice not to pretend UTF-8 is the new ASCII and just try to learn to do one thing.
It is a much better idea to encourage all coders to do their homework and know that there will be heterogeneous environments.
Files can contain anything, while file systems tend to use some specific encoding for file names, plus some kind of limitations. It IS a good idea for software to be prepared to accept data in multiple encodings and internally convert all of it to a common encoding for use within the app.
-
I stand by my recommendation. UTF-8 is still the best choice.
If your program needs to work with other encodings, transcode to UTF-8 on the way in, work with that one encoding internally, and transcode as needed on the way back out. Handling multiple encodings internally is extremely complex.