Base64 vs. HEX for sending binary content over the internet in XML documents

What is the best way to send binary content between systems inside an XML document?

I know what the real difference between Base64 and Hex is. I'm currently using Base64, but I need to include an external public library for it, whereas with HEX I think I could just write a function myself.
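Something like this is what I have in mind for the hex side (just a rough sketch; the class and method names are made up):

```java
import java.nio.charset.StandardCharsets;

public class HexUtil {
    private static final char[] DIGITS = "0123456789ABCDEF".toCharArray();

    // Encode each byte as two hex characters (high nibble first).
    public static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);
        for (byte b : data) {
            sb.append(DIGITS[(b >> 4) & 0x0F]); // high nibble
            sb.append(DIGITS[b & 0x0F]);        // low nibble
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "XML" -> 0x58 0x4D 0x4C -> "584D4C"
        System.out.println(toHex("XML".getBytes(StandardCharsets.UTF_8)));
    }
}
```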


You could just write your own method for Base64 as well... but I'd generally recommend using external, well-tested libraries for both. (It's not like there's any shortage of them.)

The difference between Base64 and hex is really just how bytes are represented. Hex is another way of saying "Base16". Hex will take two characters for each byte - Base64 takes 4 characters for every 3 bytes, so it's more efficient than hex. Assuming you're using UTF-8 to encode the XML document, a 100K file will take 200K to encode in hex, or 133K in Base64. Of course it may well be that you don't care about the space efficiency - in many cases it won't matter. If it does matter, then clearly Base64 is better on that front. (There are alternatives which are even more efficient, but they're not as common.)
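To put concrete numbers on that, here's a quick sketch using java.util.Base64 (built into Java 8+) and a trivial hand-rolled hex helper; it reproduces the 200K vs. ~133K figures above:

```java
import java.util.Base64;

public class EncodingSizeDemo {
    // Hand-rolled hex helper (illustrative, not a library call).
    static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);
        for (byte b : data) sb.append(String.format("%02X", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] payload = new byte[100_000];                         // pretend this is the binary content
        String hex = toHex(payload);                                // 2 characters per byte
        String b64 = Base64.getEncoder().encodeToString(payload);   // 4 characters per 3 bytes
        System.out.println("hex length:    " + hex.length());       // 200000
        System.out.println("base64 length: " + b64.length());       // 133336
    }
}
```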

Base64 has less overhead (Base64 produces 4 characters for every 3 bytes of original data, while hex produces 2 characters for every byte). Hex is more readable: you can look at two characters and immediately know which byte is behind them, whereas with Base64 you have to decode a whole 4-character group, so debugging is easier with hex.
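A tiny sketch of the readability point: the same four bytes printed in both encodings (java.util.Base64 plus a hand-rolled hex loop). In the hex output each byte is still visible as a pair of digits; in the Base64 output the byte boundaries are hidden inside 6-bit groups.

```java
import java.util.Base64;

public class ReadabilityDemo {
    public static void main(String[] args) {
        byte[] data = { 0x00, (byte) 0xFF, 0x10, 0x7F };

        StringBuilder hex = new StringBuilder();
        for (byte b : data) hex.append(String.format("%02X ", b));

        System.out.println("hex:    " + hex.toString().trim());                    // 00 FF 10 7F
        System.out.println("base64: " + Base64.getEncoder().encodeToString(data)); // AP8Qfw==
    }
}
```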

There are only two 'real differences':

  1. The radix. Base64 is base-64, surprise, and hex is base-16.

  2. The encoding: base-64 encodes 3 source bytes into 4 base-64 characters (http://en.wikipedia.org/wiki/Base64#Examples); hex encodes 1 byte into 2 hex characters.

So base64 is more compact than hex.
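The Wikipedia example linked above ("Man" -> "TWFu") is easy to reproduce; a short sketch with java.util.Base64 and a hand-rolled hex loop:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ManExample {
    public static void main(String[] args) {
        byte[] man = "Man".getBytes(StandardCharsets.US_ASCII); // 0x4D 0x61 0x6E

        // 3 source bytes -> 4 Base64 characters
        System.out.println(Base64.getEncoder().encodeToString(man)); // TWFu

        // 1 source byte -> 2 hex characters
        StringBuilder hex = new StringBuilder();
        for (byte b : man) hex.append(String.format("%02X", b));
        System.out.println(hex); // 4D616E
    }
}
```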

Other answers made clear the efficiency difference between base16 and base64.

There is more to base selection than efficiency.

Base64 uses more than just letters and numbers. Different implementations use different punctuation characters for indicating padding and for making up the last two characters of the 64-character set. These can include plus "+" and equals "=", both of which are problematic in HTTP query strings.

So one reason to favour base16 over base64 is that base16 values can be composed directly into HTTP query strings without requiring additional encoding. Is that important to you?
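As a side note, java.util.Base64 (Java 8+) also ships a URL-safe variant that swaps "+" and "/" for "-" and "_" and can drop the "=" padding; a small sketch of the difference:

```java
import java.util.Base64;

public class UrlSafeDemo {
    public static void main(String[] args) {
        byte[] data = { (byte) 0xFB, (byte) 0xEF, (byte) 0xFF, 0x01 };

        // Standard alphabet: may contain '+', '/' and '=' padding.
        System.out.println(Base64.getEncoder().encodeToString(data));                   // ++//AQ==
        // URL-safe variant (RFC 4648): '-' and '_', with padding omitted.
        System.out.println(Base64.getUrlEncoder().withoutPadding().encodeToString(data)); // --__AQ
    }
}
```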

Notice that this is an additional concern, over and above efficiency. Neither base is inherently better or worse; they're just two different points on a scale, at which you'll find different properties that will be more or less attractive in different situations.

For example, consider base32. It's 20% less efficient than base64, but it is still suitable for use in HTTP query strings. Most of its inefficiency comes from being case-insensitive and from avoiding zero "0" and one "1", in order to reduce mistakes when values are reproduced by humans.

So base32 introduces a new concern: ease of reproduction for humans. Is that a concern for you? If it's not, you could go for something like base62, which is still convenient in HTTP query strings, but is case-sensitive and includes zero "0" and one "1".

Hopefully, I've clarified that the selection of your encoding base is a matter of sliding along a scale until you get the best efficiency you can have before sacrificing what's important to you.

Wikipedia has a fun list of numeral systems.

Is size important to you?

Base64 is more space efficient: it uses 4 characters to represent 3 bytes, whereas hex uses 2 characters for each byte. In other words, hex increases the size of the data by 100%, Base64 only by about 33%. For small strings that fit as params in URL requests I wouldn't mind the extra cost/size.

Is ease of use important to you?

Hex is easier to use than Base64 because you don't need to escape it (Base64 output may contain +, = and /) when using the string as a GET parameter in URL requests.
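For illustration, a sketch of what that escaping looks like in practice (the Charset overload of URLEncoder.encode used here requires Java 10+):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class QueryParamDemo {
    public static void main(String[] args) {
        byte[] data = { (byte) 0xFB, (byte) 0xEF, (byte) 0xFF };
        String b64 = Base64.getEncoder().encodeToString(data); // "++//"

        // The Base64 form must be percent-encoded before going into a query string...
        System.out.println(URLEncoder.encode(b64, StandardCharsets.UTF_8)); // %2B%2B%2F%2F

        // ...while the hex form contains only [0-9A-F] and can be used as-is.
        StringBuilder hex = new StringBuilder();
        for (byte b : data) hex.append(String.format("%02X", b));
        System.out.println(hex); // FBEFFF
    }
}
```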

Is widespread use important to you?

I don't have the numbers, but Base64 might be more familiar to the average developer than hex, depending on several factors. I knew about Base64 long before hex (base16).

I was curious how on EARTH base64 can convert 3 input bytes into 4 output bytes for just 33% space growth (whereas hex converts 1 input byte into 2 output bytes for 100% space growth). Why specifically 3 input bytes?

The answer is:

3 bytes = 3 x 8 bits = 24 bits.

Why that magic "24 bits" number? Well, base 64 represents the numbers 0 to 63. How are those represented in binary? With 000000 (0) to 111111 (63).

Bingo! Each base64 character represents 6 bits of input data using a single output byte (a single character such as "Z", etc).

So 24 bits (3 full 8-bit bytes of input) / 6 bits (per base64 character) = 4 bytes of base64 output. That's it!

Or, described another way, every Base64 character (which is 1 byte, i.e. 8 bits) encodes 6 bits of real data. And if we divide 8 bits / 6 bits, we see where the 33% growth mentioned at the top of this post comes from... So yes, Base64 always increases data size by 33% (plus some potential padding by the = characters that are sometimes added at the end of the base64 output).

You may think "Why not base128 (7 bits of input = 8 bits of output), at just 14% size growth when encoding?". The answer for that is that base64 is the best we can find, since the lower 128 ASCII characters aren't all printable. Many are control characters such as NULL etc.

There are obviously ways to create other systems such as perhaps "base81" etc, since you can do anything you want if you create a custom encoding algorithm. But the beauty of base64 is how it encodes data so cleanly in chunks of 6 bits, and how you simply have to "read 3 bytes and output 4" to encode, and "read 4 bytes and output 3" to decode. So that encoding scheme became popular.
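A minimal sketch of that "read 3 bytes, output 4 characters" step, with the standard alphabet hard-coded and no padding handling:

```java
public class ThreeToFour {
    private static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    // Encode exactly one 3-byte group into 4 Base64 characters.
    static String encodeGroup(byte b0, byte b1, byte b2) {
        int bits = ((b0 & 0xFF) << 16) | ((b1 & 0xFF) << 8) | (b2 & 0xFF); // 24 bits total
        return "" + ALPHABET.charAt((bits >> 18) & 0x3F)   // bits 23..18
                  + ALPHABET.charAt((bits >> 12) & 0x3F)   // bits 17..12
                  + ALPHABET.charAt((bits >> 6)  & 0x3F)   // bits 11..6
                  + ALPHABET.charAt(bits & 0x3F);          // bits 5..0
    }

    public static void main(String[] args) {
        System.out.println(encodeGroup((byte) 'M', (byte) 'a', (byte) 'n')); // TWFu
    }
}
```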

Now you are hopefully wiser after having read this.

Fun Update: Speaking of other encoding styles with more characters... It's come to my attention that Ascii85 aka Base85 exists and is slightly more efficient (25% data size growth when encoding as Base85 instead of 33% for Base64): https://en.wikipedia.org/wiki/Ascii85