Determining a string's encoding in C#

Is there any way to determine the encoding of a string in C#?

Say I have a filename string, but I don't know whether it is encoded in Unicode UTF-16 or the system default encoding. How can I find out?

231,606 views

It depends on where the string "came from". A .NET string is Unicode (UTF-16). The only way it could be different is if you, say, read the data from a database into a byte array.
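To make the point concrete: once text is raw bytes, the encoding is no longer knowable from the data alone. The sketch below (a standalone illustration, not from the original answer) decodes the same two bytes as UTF-8 and as ISO-8859-1 and gets two different strings:

```csharp
using System;
using System.Text;

class Demo
{
    static void Main()
    {
        // The same bytes, decoded with two different encodings, produce
        // two different strings: the bytes alone don't "know" their encoding.
        byte[] bytes = { 0xC3, 0xA9 }; // "é" encoded in UTF-8

        string asUtf8 = Encoding.UTF8.GetString(bytes);
        string asLatin1 = Encoding.GetEncoding("ISO-8859-1").GetString(bytes);

        Console.WriteLine(asUtf8);   // é
        Console.WriteLine(asLatin1); // Ã©
    }
}
```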

This CodeProject article may be of interest: Detect Encoding for In- and Outgoing Text

Jon Skeet's Strings in C# and .NET is an excellent explanation of .NET strings.

Another option, rather late in coming, sorry:

http://www.architectshack.com/TextFileEncodingDetector.ashx

This small C#-only class uses BOMs if they are present, tries to auto-detect possible Unicode encodings otherwise, and falls back if none of the Unicode encodings is possible or probable.

It sounds like the UTF8Checker referenced above does something similar, but I think this is slightly broader in scope: rather than just UTF-8, it also checks for other possible Unicode encodings (UTF-16 LE or BE) that might be missing a BOM.

Hope this helps somebody!

I know this is a bit late, but to be clear:

A string doesn't really have an encoding... in .NET, a string is a collection of char objects. Essentially, if it's a string, it has already been decoded.

However, if you are reading the contents of a file, which is made of bytes, and wish to convert that to a string, then the file's encoding has to be used.

.NET includes encoding and decoding classes for ASCII, UTF-7, UTF-8, UTF-32, and more.

Most of these encodings contain certain byte order marks that can be used to distinguish which encoding type was used.

The .NET class System.IO.StreamReader is able to determine the encoding used within a stream by reading those byte order marks;

Here is an example:

    /// <summary>
    /// Return the detected encoding and the contents of the file.
    /// </summary>
    /// <param name="fileName"></param>
    /// <param name="contents"></param>
    /// <returns></returns>
    public static Encoding DetectEncoding(String fileName, out String contents)
    {
        // Open the file with a stream reader that detects byte order marks:
        using (StreamReader reader = new StreamReader(fileName, true))
        {
            // Read the contents of the file into a string.
            contents = reader.ReadToEnd();

            // The reader has now determined the encoding:
            return reader.CurrentEncoding;
        }
    }

The code below has the following features:

  1. Detects or attempts to detect UTF-7, UTF-8/16/32 (BOM, no BOM, little- and big-endian)
  2. Falls back to the local default codepage if no Unicode encoding was found.
  3. Detects (with high probability) Unicode files that are missing a BOM/signature
  4. Searches inside the file for charset=xyz and encoding=xyz to help determine the encoding.
  5. To save processing, you can 'taste' the file (check a definable number of bytes).
  6. Returns both the encoding and the decoded text file.
  7. An efficient, purely byte-based solution

As others have said, no solution is perfect (and certainly one can't easily distinguish between the various 8-bit extended-ASCII encodings in use worldwide), but we can get 'good enough', especially if the developer also presents the user with a list of alternative encodings, as shown here: What is the most common encoding of each language?

A full list of encodings can be found using Encoding.GetEncodings();
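That lookup is trivial to use on its own; this short sketch (a standalone illustration) prints every encoding the runtime knows about:

```csharp
using System;
using System.Text;

class ListEncodings
{
    static void Main()
    {
        // Enumerate every encoding registered with the runtime.
        foreach (EncodingInfo info in Encoding.GetEncodings())
            Console.WriteLine($"{info.CodePage}\t{info.Name}\t{info.DisplayName}");
    }
}
```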

// Function to detect the encoding for UTF-7, UTF-8/16/32 (bom, no bom, little
// & big endian), and local default codepage, and potentially other codepages.
// 'taster' = number of bytes to check of the file (to save processing). Higher
// value is slower, but more reliable (especially UTF-8 with special characters
// later on may appear to be ASCII initially). If taster = 0, then taster
// becomes the length of the file (for maximum reliability). 'text' is simply
// the string with the discovered encoding applied to the file.
public Encoding detectTextEncoding(string filename, out String text, int taster = 1000)
{
    byte[] b = File.ReadAllBytes(filename);

    //////////////// First check the low hanging fruit by checking if a
    //////////////// BOM/signature exists (sourced from http://www.unicode.org/faq/utf_bom.html#bom4)
    if (b.Length >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF) { text = Encoding.GetEncoding("utf-32BE").GetString(b, 4, b.Length - 4); return Encoding.GetEncoding("utf-32BE"); }  // UTF-32, big-endian
    else if (b.Length >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00) { text = Encoding.UTF32.GetString(b, 4, b.Length - 4); return Encoding.UTF32; }    // UTF-32, little-endian
    else if (b.Length >= 2 && b[0] == 0xFE && b[1] == 0xFF) { text = Encoding.BigEndianUnicode.GetString(b, 2, b.Length - 2); return Encoding.BigEndianUnicode; }     // UTF-16, big-endian
    else if (b.Length >= 2 && b[0] == 0xFF && b[1] == 0xFE) { text = Encoding.Unicode.GetString(b, 2, b.Length - 2); return Encoding.Unicode; }              // UTF-16, little-endian
    else if (b.Length >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) { text = Encoding.UTF8.GetString(b, 3, b.Length - 3); return Encoding.UTF8; } // UTF-8
    else if (b.Length >= 3 && b[0] == 0x2b && b[1] == 0x2f && b[2] == 0x76) { text = Encoding.UTF7.GetString(b, 3, b.Length - 3); return Encoding.UTF7; } // UTF-7

    //////////// If the code reaches here, no BOM/signature was found, so now
    //////////// we need to 'taste' the file to see if we can manually discover
    //////////// the encoding. A high taster value is desired for UTF-8.
    if (taster == 0 || taster > b.Length) taster = b.Length;    // Taster size can't be bigger than the filesize obviously.

    // Some text files are encoded in UTF8, but have no BOM/signature. Hence
    // the below manually checks for a UTF8 pattern. This code is based off
    // the top answer at: https://stackoverflow.com/questions/6555015/check-for-invalid-utf8
    // For our purposes, an unnecessarily strict (and terser/slower)
    // implementation is shown at: https://stackoverflow.com/questions/1031645/how-to-detect-utf-8-in-plain-c
    // For the below, false positives should be exceedingly rare (and would
    // be either slightly malformed UTF-8 (which would suit our purposes
    // anyway) or 8-bit extended ASCII/UTF-16/32 at a vanishingly long shot).
    int i = 0;
    bool utf8 = false;
    while (i < taster - 4)
    {
        if (b[i] <= 0x7F) { i += 1; continue; }     // If all characters are below 0x80, then it is valid UTF8, but UTF8 is not 'required' (and therefore the text is more desirable to be treated as the default codepage of the computer). Hence, there's no "utf8 = true;" code unlike the next three checks.
        if (b[i] >= 0xC2 && b[i] < 0xE0 && b[i + 1] >= 0x80 && b[i + 1] < 0xC0) { i += 2; utf8 = true; continue; }
        if (b[i] >= 0xE0 && b[i] < 0xF0 && b[i + 1] >= 0x80 && b[i + 1] < 0xC0 && b[i + 2] >= 0x80 && b[i + 2] < 0xC0) { i += 3; utf8 = true; continue; }
        if (b[i] >= 0xF0 && b[i] < 0xF5 && b[i + 1] >= 0x80 && b[i + 1] < 0xC0 && b[i + 2] >= 0x80 && b[i + 2] < 0xC0 && b[i + 3] >= 0x80 && b[i + 3] < 0xC0) { i += 4; utf8 = true; continue; }
        utf8 = false; break;
    }
    if (utf8 == true)
    {
        text = Encoding.UTF8.GetString(b);
        return Encoding.UTF8;
    }

    // The next check is a heuristic attempt to detect UTF-16 without a BOM.
    // We simply look for zeroes in odd or even byte places, and if a certain
    // threshold is reached, the code is 'probably' UTF-16.
    double threshold = 0.1; // proportion of chars step 2 which must be zeroed to be diagnosed as utf-16. 0.1 = 10%
    int count = 0;
    for (int n = 0; n < taster; n += 2) if (b[n] == 0) count++;
    if (((double)count) / taster > threshold) { text = Encoding.BigEndianUnicode.GetString(b); return Encoding.BigEndianUnicode; }
    count = 0;
    for (int n = 1; n < taster; n += 2) if (b[n] == 0) count++;
    if (((double)count) / taster > threshold) { text = Encoding.Unicode.GetString(b); return Encoding.Unicode; } // (little-endian)

    // Finally, a long shot - let's see if we can find "charset=xyz" or
    // "encoding=xyz" to identify the encoding:
    for (int n = 0; n < taster - 9; n++)
    {
        if (
            ((b[n + 0] == 'c' || b[n + 0] == 'C') && (b[n + 1] == 'h' || b[n + 1] == 'H') && (b[n + 2] == 'a' || b[n + 2] == 'A') && (b[n + 3] == 'r' || b[n + 3] == 'R') && (b[n + 4] == 's' || b[n + 4] == 'S') && (b[n + 5] == 'e' || b[n + 5] == 'E') && (b[n + 6] == 't' || b[n + 6] == 'T') && (b[n + 7] == '=')) ||
            ((b[n + 0] == 'e' || b[n + 0] == 'E') && (b[n + 1] == 'n' || b[n + 1] == 'N') && (b[n + 2] == 'c' || b[n + 2] == 'C') && (b[n + 3] == 'o' || b[n + 3] == 'O') && (b[n + 4] == 'd' || b[n + 4] == 'D') && (b[n + 5] == 'i' || b[n + 5] == 'I') && (b[n + 6] == 'n' || b[n + 6] == 'N') && (b[n + 7] == 'g' || b[n + 7] == 'G') && (b[n + 8] == '='))
           )
        {
            if (b[n + 0] == 'c' || b[n + 0] == 'C') n += 8; else n += 9;
            if (b[n] == '"' || b[n] == '\'') n++;
            int oldn = n;
            while (n < taster && (b[n] == '_' || b[n] == '-' || (b[n] >= '0' && b[n] <= '9') || (b[n] >= 'a' && b[n] <= 'z') || (b[n] >= 'A' && b[n] <= 'Z')))
            { n++; }
            byte[] nb = new byte[n - oldn];
            Array.Copy(b, oldn, nb, 0, n - oldn);
            try
            {
                string internalEnc = Encoding.ASCII.GetString(nb);
                text = Encoding.GetEncoding(internalEnc).GetString(b);
                return Encoding.GetEncoding(internalEnc);
            }
            catch { break; }    // If C# doesn't recognize the name of the encoding, break.
        }
    }

    // If all else fails, the encoding is probably (though certainly not
    // definitely) the user's local codepage! One might present to the user a
    // list of alternative encodings as shown here: https://stackoverflow.com/questions/8509339/what-is-the-most-common-encoding-of-each-language
    // A full list can be found using Encoding.GetEncodings();
    text = Encoding.Default.GetString(b);
    return Encoding.Default;
}

My solution is to use the built-in facilities with some fallbacks.

I picked the strategy from an answer to another similar question on Stack Overflow, but I can't find it now.

It checks the BOM first using the built-in logic in StreamReader; if there is one, the encoding will be something other than Encoding.Default, and we should trust that result.

If not, it checks whether the byte sequence is a valid UTF-8 sequence. If it is, it guesses UTF-8 as the encoding; if not, again, the default encoding (Encoding.Default) will be the result.

static Encoding getEncoding(string path)
{
    var stream = new FileStream(path, FileMode.Open);
    var reader = new StreamReader(stream, Encoding.Default, true);
    reader.Read();

    if (reader.CurrentEncoding != Encoding.Default)
    {
        reader.Close();
        return reader.CurrentEncoding;
    }

    stream.Position = 0;

    reader = new StreamReader(stream, new UTF8Encoding(false, true));
    try
    {
        reader.ReadToEnd();
        reader.Close();
        return Encoding.UTF8;
    }
    catch (Exception)
    {
        reader.Close();
        return Encoding.Default;
    }
}

Note: this was an experiment to see how UTF-8 encoding works internally. The solution provided by vilicvane, using a UTF8Encoding object initialised to throw an exception on decoding failure, is much simpler and basically does the same thing.
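That simpler approach can be distilled to a small in-memory helper, without the file-stream plumbing. In this sketch (the IsValidUtf8 name is my own, not from the original answer), a UTF8Encoding constructed with throwOnInvalidBytes set to true raises DecoderFallbackException on malformed input, so validity checking reduces to a try/catch:

```csharp
using System;
using System.Text;

class StrictUtf8Demo
{
    // With throwOnInvalidBytes: true, GetString throws DecoderFallbackException
    // instead of silently inserting replacement characters.
    static bool IsValidUtf8(byte[] bytes)
    {
        var strictUtf8 = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false,
                                          throwOnInvalidBytes: true);
        try
        {
            strictUtf8.GetString(bytes);
            return true;
        }
        catch (DecoderFallbackException)
        {
            return false;
        }
    }

    static void Main()
    {
        Console.WriteLine(IsValidUtf8(new byte[] { 0xC3, 0xA9 })); // True: valid 2-byte sequence
        Console.WriteLine(IsValidUtf8(new byte[] { 0xC3, 0x28 })); // False: 0x28 is not a continuation byte
    }
}
```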


I wrote this code to distinguish between UTF-8 and Windows-1252. It shouldn't be used for gigantic text files, however, since it loads the whole thing into memory and scans it completely. I used it for .srt subtitle files, just to be able to save them back in the encoding in which they were loaded.

The encoding given to the function as ref should be the 8-bit fallback encoding to use in case the file is detected as not being valid UTF-8; generally, on Windows systems, this will be Windows-1252. It doesn't do anything fancy like checking actual valid ASCII ranges, though, and doesn't detect UTF-16 even on a byte order mark.

The theory behind the bitwise detection can be found here: https://ianthehenry.com/2015/1/17/decoding-utf-8/

Basically, the bit range of the first byte determines how many of the bytes after it are part of the UTF-8 entity. Those following bytes are always in the same bit range.

/// <summary>
/// Reads a text file, and detects whether its encoding is valid UTF-8 or ascii.
/// If not, decodes the text using the given fallback encoding.
/// Bit-wise mechanism for detecting valid UTF-8 based on
/// https://ianthehenry.com/2015/1/17/decoding-utf-8/
/// </summary>
/// <param name="docBytes">The bytes read from the file.</param>
/// <param name="encoding">The default encoding to use as fallback if the text is detected not to be pure ascii or UTF-8 compliant. This ref parameter is changed to the detected encoding.</param>
/// <returns>The contents of the read file, as String.</returns>
public static String ReadFileAndGetEncoding(Byte[] docBytes, ref Encoding encoding)
{
    if (encoding == null)
        encoding = Encoding.GetEncoding(1252);
    Int32 len = docBytes.Length;
    // byte order mark for utf-8. Easiest way of detecting encoding.
    if (len > 3 && docBytes[0] == 0xEF && docBytes[1] == 0xBB && docBytes[2] == 0xBF)
    {
        encoding = new UTF8Encoding(true);
        // Note that even when initialising an encoding to have
        // a BOM, it does not cut it off the front of the input.
        return encoding.GetString(docBytes, 3, len - 3);
    }
    Boolean isPureAscii = true;
    Boolean isUtf8Valid = true;
    for (Int32 i = 0; i < len; ++i)
    {
        Int32 skip = TestUtf8(docBytes, i);
        if (skip == 0)
            continue;
        if (isPureAscii)
            isPureAscii = false;
        if (skip < 0)
        {
            isUtf8Valid = false;
            // if invalid utf8 is detected, there's no sense in going on.
            break;
        }
        i += skip;
    }
    if (isPureAscii)
        encoding = new ASCIIEncoding(); // pure 7-bit ascii.
    else if (isUtf8Valid)
        encoding = new UTF8Encoding(false);
    // else, retain the given encoding. This should be an 8-bit encoding like Windows-1252.
    return encoding.GetString(docBytes);
}


/// <summary>
/// Tests if the bytes following the given offset are UTF-8 valid, and
/// returns the amount of bytes to skip ahead to do the next read if it is.
/// If the text is not UTF-8 valid it returns -1.
/// </summary>
/// <param name="binFile">Byte array to test</param>
/// <param name="offset">Offset in the byte array to test.</param>
/// <returns>The amount of bytes to skip ahead for the next read, or -1 if the byte sequence wasn't valid UTF-8</returns>
public static Int32 TestUtf8(Byte[] binFile, Int32 offset)
{
    // 7 bytes (so 6 added bytes) is the maximum the UTF-8 design could support,
    // but in reality it only goes up to 3, meaning the full amount is 4.
    const Int32 maxUtf8Length = 4;
    Byte current = binFile[offset];
    if ((current & 0x80) == 0)
        return 0; // valid 7-bit ascii. Added length is 0 bytes.
    Int32 len = binFile.Length;
    for (Int32 addedlength = 1; addedlength < maxUtf8Length; ++addedlength)
    {
        Int32 fullmask = 0x80;
        Int32 testmask = 0;
        // This code adds shifted bits to get the desired full mask.
        // If the full mask is [111]0 0000, then the test mask will be [110]0 0000. Since this is
        // effectively always the previous step in the iteration I just store it each time.
        for (Int32 i = 0; i <= addedlength; ++i)
        {
            testmask = fullmask;
            fullmask += (0x80 >> (i + 1));
        }
        // figure out bit masks from level
        if ((current & fullmask) == testmask)
        {
            if (offset + addedlength >= len)
                return -1;
            // Lookahead. The pattern of any following bytes is always 10xxxxxx.
            for (Int32 i = 1; i <= addedlength; ++i)
            {
                if ((binFile[offset + i] & 0xC0) != 0x80)
                    return -1;
            }
            return addedlength;
        }
    }
    // Value is greater than the maximum allowed for utf8. Deemed invalid.
    return -1;
}

This FileEncoding NuGet package wraps a C# port of the Mozilla Universal Charset Detector in an extremely simple API:

var encoding = FileEncoding.DetectFileEncoding(txtFile);

I found a new library on GitHub: CharsetDetector/UTF-unknown

A charset detector built in C#, for .NET Core 2-3, .NET Standard 1-2 and .NET 4+.

It is also a port of the Mozilla Universal Charset Detector, based on other repositories.

CharsetDetector/UTF-unknown has a class named CharsetDetector.

CharsetDetector contains some static encoding-detection methods:

  • CharsetDetector.DetectFromFile()
  • CharsetDetector.DetectFromStream()
  • CharsetDetector.DetectFromBytes()

The detection result is an instance of the class DetectionResult; its Detected property is an instance of the class DetectionDetail, which has the following properties:

  • EncodingName
  • Encoding
  • Confidence

Here is a usage example:

// Program.cs
using System;
using System.Text;
using UtfUnknown;

namespace ConsoleExample
{
    public class Program
    {
        public static void Main(string[] args)
        {
            string filename = @"E:\new-file.txt";
            DetectDemo(filename);
        }

        /// <summary>
        /// Command line example: detect the encoding of the given file.
        /// </summary>
        /// <param name="filename">a filename</param>
        public static void DetectDemo(string filename)
        {
            // Detect from File
            DetectionResult result = CharsetDetector.DetectFromFile(filename);
            // Get the best Detection
            DetectionDetail resultDetected = result.Detected;

            // The detected result may be null.
            if (resultDetected != null)
            {
                // Get the alias of the found encoding
                string encodingName = resultDetected.EncodingName;
                // Get the System.Text.Encoding of the found encoding (can be null if not available)
                Encoding encoding = resultDetected.Encoding;
                // Get the confidence of the found encoding (between 0 and 1)
                float confidence = resultDetected.Confidence;

                if (encoding != null)
                {
                    Console.WriteLine($"Detection completed: (unknown)");
                    Console.WriteLine($"EncodingWebName: {encoding.WebName}{Environment.NewLine}Confidence: {confidence}");
                }
                else
                {
                    Console.WriteLine($"Detection completed: (unknown)");
                    Console.WriteLine($"(Encoding is null){Environment.NewLine}EncodingName: {encodingName}{Environment.NewLine}Confidence: {confidence}");
                }
            }
            else
            {
                Console.WriteLine($"Detection failed: (unknown)");
            }
        }
    }
}

Example result screenshot: (screenshot not included)

The approach that finally worked for me was to try the expected candidate encodings and detect invalid characters in the strings created from the byte array with each encoding. If I don't encounter invalid characters, I assume the tested encoding works fine for the tested data.

For me, having only Latin and German special characters to consider in order to determine the proper encoding for the byte array, I try to detect invalid characters in a string with this method:

    /// <summary>
    /// Detect invalid characters in a string; used to detect improper encoding.
    /// </summary>
    /// <param name="s"></param>
    /// <returns></returns>
    public static bool DetectInvalidChars(string s)
    {
        const string specialChars = "\r\n\t .,;:-_!\"'?()[]{}&%$§=*+~#@|<>äöüÄÖÜß/\\^€";
        return s.Any(ch => !(
            specialChars.Contains(ch) ||
            (ch >= '0' && ch <= '9') ||
            (ch >= 'a' && ch <= 'z') ||
            (ch >= 'A' && ch <= 'Z')));
    }

(Note: you may need to adapt the specialChars const string in this code if you have to account for other Latin-based languages)

Then I use it like this (I only expect UTF-8 or the default encoding):

    // Determine the encoding by detecting invalid characters in the string.
    var invoiceXmlText = Encoding.UTF8.GetString(invoiceXmlBytes); // try UTF-8 first
    if (StringFuncs.DetectInvalidChars(invoiceXmlText))
        invoiceXmlText = Encoding.Default.GetString(invoiceXmlBytes); // fall back to the default encoding

As others have mentioned, a string in C# is always encoded as UTF-16LE (System.Text.Encoding.Unicode).

Reading between the lines, I believe what you're actually concerned about is whether the characters in your string are compatible with some other known encoding (i.e., would they "fit" in that other code page?).

In that case, the most correct solution I've found is to attempt the conversion and see whether the string changes. If a character in your string doesn't "fit" into the target encoding, the encoder will substitute some sentinel character for it ('?' is common).


// using System.Text;

// And if you're using the "System.Text.Encoding.CodePages" NuGet package, you
// need to call this once or GetEncoding will raise a NotSupportedException:
// Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

var srcEnc = Encoding.Unicode;
var dstEnc = Encoding.GetEncoding(1252); // 1252 requires the "System.Text.Encoding.CodePages" NuGet package.
string srcText = "Some text you want to check";
string dstText = dstEnc.GetString(Encoding.Convert(srcEnc, dstEnc, srcEnc.GetBytes(srcText)));

// if (srcText == dstText) the srcText "fits" (it's compatible).
// else the srcText doesn't "fit" (it's not compatible)
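The same round-trip idea can be wrapped in a small helper. This sketch (the FitsIn name is my own, not from the original answer) uses Encoding.ASCII as the target so it runs without the CodePages package; substitute Encoding.GetEncoding(1252) for the Windows-1252 case shown above:

```csharp
using System;
using System.Text;

class FitDemo
{
    // Checks whether every character of 'text' survives a round trip
    // through 'target', i.e. whether the string "fits" in that encoding.
    static bool FitsIn(string text, Encoding target)
    {
        byte[] converted = Encoding.Convert(Encoding.Unicode, target,
                                            Encoding.Unicode.GetBytes(text));
        return target.GetString(converted) == text;
    }

    static void Main()
    {
        Console.WriteLine(FitsIn("plain ASCII", Encoding.ASCII)); // True
        Console.WriteLine(FitsIn("h\u00E9llo", Encoding.ASCII));  // False: 'é' becomes '?'
    }
}
```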