Why is network byte order defined to be big-endian?

As the title says: why does TCP/IP use big-endian encoding when transmitting data, rather than the alternative little-endian scheme?


RFC 1700 stated that it must be so, and defined network byte order as big-endian:

The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
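The layout the RFC describes (most significant octet first) can be illustrated with a short sketch using Python's `struct` module, where `'!'` denotes network (big-endian) byte order:

```python
import struct

value = 0x0A0B0C0D  # example 32-bit value

# Network byte order ('!'): most significant octet comes first on the wire
big = struct.pack('!I', value)
print(big.hex())  # '0a0b0c0d'

# Little-endian ('<') for comparison: least significant octet first
little = struct.pack('<I', value)
print(little.hex())  # '0d0c0b0a'
```

The same value produces two different octet sequences; network protocols simply standardize on the first.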

The reference they make is to

Cohen, D., "On Holy Wars and a Plea for Peace", IEEE Computer.

The abstract can be found at IEN-137 or on this IEEE page.


Summary:

Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.

It concludes that either scheme would have worked. Neither order is inherently better or worse, and either can be used in place of the other, as long as one order is applied consistently across the entire system/protocol.
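That "agreement matters more than the choice" point is visible in the standard socket API: `htons`/`ntohs` convert between host and network order, swapping bytes on little-endian hosts and doing nothing on big-endian ones. A minimal sketch:

```python
import socket

port = 8080  # 0x1F90

# htons: host -> network (big-endian) order; ntohs: network -> host.
# The round trip is always the identity, regardless of the host's
# native endianness -- both ends only need to agree on the wire format.
assert socket.ntohs(socket.htons(port)) == port

# On a little-endian host htons actually swaps the two bytes
# (0x1F90 -> 0x901F); on a big-endian host it is a no-op.
print(socket.htons(port))
```

As long as every host converts at the boundary, the internal representation each machine uses is irrelevant to the protocol.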