What Ruby 1.9 Gives Us
In this final post of the series, I want to revisit our earlier discussion on encoding strategies. Ruby 1.9 adds a lot of power to the handling of character encodings as you have now seen. We should talk a little about how that can change the game.
UTF-8 is Still King
The most important thing to take note of is what hasn't changed with Ruby 1.9. I said a good while back that the best Encoding for general use is UTF-8. That's still very true.
I still strongly recommend that we favor UTF-8 as the one-size-almost-fits-all Encoding. I really believe that we can and should use it exclusively inside our code, transcode data to it on the way in, and transcode output only when we absolutely must. The more of us who do this, the better things will get.
As we've discussed earlier in the series, Ruby 1.9 does add some new features that help our UTF-8-only strategies. For example, you could use the Encoding command-line switches (such as -U) to set up automatic transcoding for all input you read. These shortcuts are great for simple scripting, but I'm going to recommend you just be explicit about your Encodings in any serious code.
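Being explicit usually means naming the Encoding right in your IO calls instead of leaning on a global switch. Here's a minimal sketch (the file name is just an example):

```ruby
# Write a small file so the example is self-contained.
File.write("example_utf8.txt", "Résumé\n", encoding: "UTF-8")

# The "r:UTF-8" mode string sets the external encoding for this one
# handle, so every String read from it is tagged UTF-8.
File.open("example_utf8.txt", "r:UTF-8") do |f|
  line = f.gets
  p line.encoding  # => #<Encoding:UTF-8>
end
```

You can also write a mode like `"r:ISO-8859-1:UTF-8"` to transcode as you read: the first name is the encoding of the data on disk, the second is the encoding you want the Strings in.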
Miscellaneous M17n Details
We've now discussed the core of Ruby 1.9's m17n (multilingualization) engine. String and IO are where you will see the big changes. The new m17n system is a big beast, though, with a lot of little details. Let's talk a little about some side topics that also relate to how we work with character encodings in Ruby 1.9.
More Features of the Encoding Class
You've seen me using Encoding objects all over the place in my explanations of m17n, but we haven't talked much about them. They are very simple, mainly just being a named representation of each Encoding inside Ruby. As such, Encoding is a storage place for some tools you may find handy when working with them.
First, you can retrieve all of the Encoding objects Ruby has loaded in the form of an Array, using the list() method:
$ ruby -e 'puts Encoding.list.first(3), "..."'
ASCII-8BIT
UTF-8
US-ASCII
...
If you're just interested in a specific Encoding, you can find() it by name:
$ ruby -e 'p Encoding.find("UTF-8")'
#<Encoding:UTF-8>
$ ruby -e 'p Encoding.find("No-Such-Encoding")'
-e:1:in `find': unknown encoding name - No-Such-Encoding (ArgumentError)
	from -e:1:in `<main>'
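Each Encoding object also knows its canonical name and the aliases Ruby accepts for it, which is handy when you need to normalize a user-supplied encoding name. A quick sketch:

```ruby
enc = Encoding.find("UTF-8")
p enc.name   # canonical name: "UTF-8"
p enc.names  # canonical name plus any aliases Ruby will accept

# find() accepts aliases too, returning the same object either way.
# ASCII-8BIT and BINARY are two names for the same Encoding:
p Encoding.find("ASCII-8BIT") == Encoding::BINARY  # => true
```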
Ruby 1.9's Three Default Encodings
I suspect early contact with the new m17n (multilingualization) engine is going to come to Rubyists in the form of this error message:
invalid multibyte char (US-ASCII)
Ruby 1.8 didn't care what you stuck in a random String literal, but 1.9 is a touch pickier. I think you'll see that the change is for the better, but we do need to spend some time learning to play by Ruby's new rules.
That takes us to the first of Ruby's three default Encodings.
The Source Encoding
In Ruby's new grown-up world of all-encoded data, each and every String has an Encoding. That means an Encoding must be selected for a String as soon as it is created. One way that a String can be created is for Ruby to execute some code with a String literal in it, like this:
str = "A new String"
That's a pretty simple String, but what if I use a literal like the following instead?
str = "Résumé"
What Encoding is that in? That fundamental question is probably the main reason we all struggle a bit with character encodings. You can't tell just from looking at that data what Encoding it is in. Now, if I showed you the bytes you might be able to make an educated guess, but the data just isn't wearing an Encoding label.
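Ruby 1.9's answer is to tag every String with an Encoding, which you can inspect directly. A literal picks up the source encoding of the file it appears in, which a magic comment can declare:

```ruby
# encoding: utf-8
# The magic comment above declares this file's source encoding,
# so String literals in it are tagged UTF-8.
str = "Résumé"
p str.encoding  # => #<Encoding:UTF-8>
p __ENCODING__  # the source encoding currently in effect
```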
Encoding Conversion With iconv
There's one last standard library we need to discuss for us to have completely covered Ruby 1.8's support for character encodings. The iconv library ships with Ruby, and it can handle an impressive set of character encoding conversions.
This is an important piece of the puzzle. You may have accepted my advice that it's OK to just work with UTF-8 data whenever you have the choice, but the fact is that there's a lot of non-UTF-8 data in the world. Legacy systems may have produced data before UTF-8 was popular, some services may work in different encodings for any number of reasons, and not quite everyone has embraced Unicode fully yet. If you run into data like this, you will need a way to convert it to UTF-8 as you import it and possibly a way to convert it back when you export it. That's exactly what iconv is for.
Instead of jumping right into Ruby's iconv library, let's come at it with a slightly different approach. iconv is actually a C library that performs these conversions, and on most systems where it is installed you will have a command-line interface for it.
The $KCODE Variable and jcode Library
All of the Ruby files I create start with the same shebang line:
#!/usr/bin/env ruby -wKU
It's not really needed for every file since it generally only matters if the file is executed. However, I tend to go ahead and add it to all Ruby files I build for several reasons:
- You never know when a file may be executed (if __FILE__ == $PROGRAM_NAME; end sections are often added to libraries, for example)
- It makes it obvious the file is Ruby code
- It shows the rules this code expects
The rules I mention here, specified by command-line switches, are the main point of interest.
-w turns on Ruby's warnings, which are very handy. I recommend doing that whenever you can. But that doesn't have anything to do with character encodings. -KU sets a magic Ruby variable: $KCODE. You can do the same in your code if you aren't in a position to control the command-line arguments:
$KCODE = "U"
You probably recognize the U as a name for Ruby 1.8's UTF-8 encoding, from my earlier list of encodings. It can also be set to E (for EUC) or S (for Shift JIS), with N (none) as the default. Modern versions of Rails do set $KCODE = "U" for you.
Bytes and Characters in Ruby 1.8
Gregory Brown said, in a training session at the Lone Star Rubyconf, "Ruby 1.8 works in bytes. Ruby 1.9 works in characters." The truth of Ruby 1.9 is maybe a little more complicated and we will discuss all of that eventually, but Greg is dead right about Ruby 1.8.
In Ruby 1.8, a String is always just a collection of bytes.
The important question is, how does that one golden rule relate to all that we've learned about character encodings? Essentially, it puts all the responsibility on you as the developer. Ruby 1.8 leaves it to you to determine what to do with those bytes and it doesn't provide a lot of encoding savvy help. That's why knowing at least the basics of encodings is so important when working with Ruby 1.8.
There are pluses and minuses to every system, and this one is no exception. On the side of pluses, Ruby 1.8 can support pretty much any encoding you can imagine. After all, a character encoding is just some bytes that somehow map to a set of characters, and all Ruby 1.8 Strings are just some bytes. If you say a String holds Latin-1 data and treat it as such, that's fine by Ruby.
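Ruby 1.9 makes the bytes-versus-characters contrast easy to see: the same data answers differently depending on which question you ask. A small illustration:

```ruby
str = "Résumé"  # in UTF-8, each é takes two bytes

p str.length    # => 6, counting characters
p str.bytesize  # => 8, counting bytes

# Forcing the encoding down to binary reinterprets the same bytes,
# which is roughly how Ruby 1.8 saw every String all the time:
binary = str.dup.force_encoding("ASCII-8BIT")
p binary.length # => 8, because each byte now counts as a "character"
```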
General Encoding Strategies
Before we get into specifics, let's try to distill a few best practices for working with encodings. I'm sure you can tell that there's a lot to consider here, so let's focus on a few key points that will help us the most.
Use UTF-8 Everywhere You Can
We know UTF-8 isn't perfect, but it's pretty darn close to perfect. There is no other single encoding you could pick that has the potential to satisfy such a wide audience. It's our best bet. For these reasons, UTF-8 is quickly becoming the preferred encoding for the Web, email, and more.
If you have a say over what encoding or encodings your software will accept, support, and deliver, choose UTF-8 whenever you can. This is absolutely the best default.
Get in the Habit of Documenting Your Encodings
We learned that you must know data's encoding to work with it properly. While there are tools to help you guess an encoding, you really want to avoid being in that position. Part of making that happen is being a good citizen and documenting your encodings at every step.
The Unicode Character Set and Encodings
Since the rise of the various character encodings, there has been a quest to find the one perfect encoding we could all use. It's hard to get everyone to agree about whether or not this has truly been accomplished, but most of us agree that Unicode is as close as it gets.
The goal of Unicode was literally to provide a character set that includes all characters in use today. That's letters and numbers for all languages, all the images needed by pictographic languages, and all symbols. As you can imagine, that's quite a challenging task, but they've done very well. Take a moment to browse all the characters in the current Unicode specification to see for yourself. The Unicode Consortium often reminds us that they still have room for more characters as well, so we will be all set when we start meeting alien races.
Now in order to really understand what Unicode is, I need to clear up a point I've played pretty loose with so far: a character set and a character encoding aren't necessarily the same thing. Unicode is one character set, and has multiple character encodings. Allow me to explain.
What is a Character Encoding?
The first step to understanding character encodings is to talk a little about how computers store character data. I know we would love to believe that when we push the a key on our keyboard, the computer records a little a symbol somewhere, but that's just fantasy.
I imagine most of us know that deep in the heart of computers, pretty much everything eventually comes down to ones and zeros. That means that an a has to be stored as some number. In fact, it is. We can see what number using Ruby 1.8:
$ ruby -ve 'p ?a'
ruby 1.8.6 (2008-08-11 patchlevel 287) [i686-darwin9.4.0]
97
The ?a syntax gives us a specific character, instead of a full String. In Ruby 1.8 it does that by returning the code of that encoded character. You can also get this by indexing one character out of a String:
$ ruby -ve 'p "a"[0]'
ruby 1.8.6 (2008-08-11 patchlevel 287) [i686-darwin9.4.0]
97
Both of these String behaviors were deemed confusing by the Ruby core team and have been changed in Ruby 1.9. They now return one-character Strings. If you want to see the character codes in Ruby 1.9, you can use ord().
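Under Ruby 1.9 and later, the same probes return one-character Strings, and ord() recovers the character code:

```ruby
# In Ruby 1.9+, ?a and "a"[0] both return one-character Strings.
p ?a       # => "a"
p "a"[0]   # => "a"

# ord() gets you back to the character code:
p ?a.ord   # => 97
p "a".ord  # => 97
```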