Received an invalid column length from the bcp client for colid 6

I want to bulk-upload CSV file data to SQL Server 2005 from C# code, but I am running into the error below -

Received an invalid column length from the bcp client for colid 6.

The error occurs while the bulk copy is writing to the database server.
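For reference, here is a rough sketch of the kind of call that produces this error; the connection string, CSV-loading helper, and table name below are placeholders, not the original code:

// Minimal sketch (illustrative names only): the exception surfaces from
// WriteToServer when a source string value is longer than the destination
// varchar/nvarchar column allows.
DataTable importTable = LoadCsvIntoDataTable("data.csv");    // hypothetical CSV loader

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.TargetTable";   // placeholder table name
        bulkCopy.WriteToServer(importTable);                 // "invalid column length ... colid 6" thrown here
    }
}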


One of the data columns in the Excel file (Column Id 6) has one or more cells whose data exceeds the length of that column's data type in the database.

Verify the data in Excel, and also verify that its format complies with the database table schema.

To avoid this, increase the length of the string data type for that column in the database table.
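As a rough way to verify this from code before calling WriteToServer, you could scan the staged DataTable for over-long values; the maxLengths dictionary of destination column name to maximum length is an assumption here, not something from the original answer:

// Sketch: report source values that are longer than the destination column allows.
// maxLengths (Dictionary<string, int>) maps destination column names to their max lengths.
for (int r = 0; r < importTable.Rows.Count; r++)
{
    foreach (DataColumn col in importTable.Columns)
    {
        object value = importTable.Rows[r][col];
        if (value is string s && maxLengths.TryGetValue(col.ColumnName, out int max) && s.Length > max)
        {
            Console.WriteLine($"Row {r}, column {col.ColumnName}: length {s.Length} exceeds {max}");
        }
    }
}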

Hope this helps.

I know this post is old, but I ran into this same issue and finally figured out a solution to determine which column was causing the problem and report it back as needed. I determined that the colid returned in the SqlException is not zero-based, so you need to subtract 1 from it to get the value. After that, it is used as the index into the private _sortedColumnMappings ArrayList of the SqlBulkCopy instance, not the index of the column mappings that were added to the SqlBulkCopy instance. One thing to note is that SqlBulkCopy will stop on the first error received, so this may not be the only issue, but at least it helps to figure it out.

// Requires: using System.Reflection; and using System.Text.RegularExpressions;
// Note: this digs into private fields of SqlBulkCopy, so it is tied to the
// System.Data.SqlClient implementation it was written against.
try
{
    bulkCopy.WriteToServer(importTable);
    sqlTran.Commit();
}
catch (SqlException ex)
{
    if (ex.Message.Contains("Received an invalid column length from the bcp client for colid"))
    {
        // Pull the colid out of the message; it is 1-based, so subtract 1.
        string pattern = @"\d+";
        Match match = Regex.Match(ex.Message, pattern);
        var index = Convert.ToInt32(match.Value) - 1;

        // Walk SqlBulkCopy's private sorted column mappings to reach the
        // metadata (name and length) of the offending destination column.
        FieldInfo fi = typeof(SqlBulkCopy).GetField("_sortedColumnMappings", BindingFlags.NonPublic | BindingFlags.Instance);
        var sortedColumns = fi.GetValue(bulkCopy);
        var items = (object[])sortedColumns.GetType().GetField("_items", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(sortedColumns);

        FieldInfo itemdata = items[index].GetType().GetField("_metadata", BindingFlags.NonPublic | BindingFlags.Instance);
        var metadata = itemdata.GetValue(items[index]);

        var column = metadata.GetType().GetField("column", BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).GetValue(metadata);
        var length = metadata.GetType().GetField("length", BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).GetValue(metadata);

        // DataFormatException is the poster's own exception type; substitute any exception you prefer.
        throw new DataFormatException(String.Format("Column: {0} contains data with a length greater than: {1}", column, length));
    }

    throw;
}

I faced a similar kind of issue while passing a string to a database table using the SQL BulkCopy option. The string I was passing was 3 characters long, whereas the destination column length was varchar(20). I tried trimming the string before inserting into the DB using the Trim() function, to check whether the issue was due to any leading or trailing space in the string. After trimming the string, it worked fine.

You can try text.Trim()
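A minimal sketch of applying that idea across the whole source table before the bulk copy, assuming the data is staged in a DataTable named importTable:

// Sketch: trim leading/trailing whitespace in every string column
// before handing the DataTable to SqlBulkCopy.
foreach (DataRow row in importTable.Rows)
{
    foreach (DataColumn col in importTable.Columns)
    {
        if (col.DataType == typeof(string) && row[col] != DBNull.Value)
        {
            row[col] = ((string)row[col]).Trim();
        }
    }
}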

Great piece of code, thanks for sharing!

I ended up using reflection to get the actual DataMemberName to throw back to a client on an error (I'm using bulk save in a WCF service). Hopefully someone else will find how I did it useful.

// Requires: using System.Linq; using System.Reflection; using System.Runtime.Serialization;
// Looks up the [DataMember] name for the matching property on the given object,
// falling back to the column name itself when no attribute (or name) is present.
static string GetDataMemberName(string colName, object t)
{
    foreach (PropertyInfo propertyInfo in t.GetType().GetProperties())
    {
        if (propertyInfo.CanRead && propertyInfo.Name == colName)
        {
            var attribute = propertyInfo.GetCustomAttributes(typeof(DataMemberAttribute), false).FirstOrDefault() as DataMemberAttribute;
            if (attribute != null && !string.IsNullOrEmpty(attribute.Name))
                return attribute.Name;
            return colName;
        }
    }
    return colName;
}
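For example, once the reflection snippet earlier in this thread has produced the offending column name, it could be translated back to the contract name roughly like this; MyImportRecord is a placeholder for whatever type carries the [DataMember] attributes, and the cast assumes the value pulled out of the metadata is the column name string:

// Hypothetical usage: report the [DataMember] name instead of the SQL column name.
// FaultException is from System.ServiceModel, since the answer mentions a WCF service.
string offendingColumn = (string)column;                               // value obtained via reflection above
string reportedName = GetDataMemberName(offendingColumn, new MyImportRecord());
throw new FaultException($"Value too long for field '{reportedName}'.");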

Check the size of the columns in the table you are bulk inserting/copying into. The varchar or other string columns might need to be extended, or the value you are inserting needs to be trimmed. The column order should also be the same as in the table.

E.g., increase the size of a varchar column from 30 to 50 =>

ALTER TABLE [dbo].[TableName] ALTER COLUMN [ColumnName] Varchar(50)

I got this error message with a much more recent SSIS version (VS 2015 Enterprise; I think it's SSIS 2016). I will comment here because this is the first reference that comes up when you google this error message.

I think it happens mostly with character columns when the source character size is larger than the target character size. I got this message when I was using an ADO.NET input to MS SQL from a Teradata database. Funny, because the prior OLE DB writes to MS SQL handled all the character conversion perfectly with no coding overrides.

The colid number, and the corresponding Destination Input column # you sometimes get with the colid message, are worthless. It's not the column when you count down from the top of the mapping or anything like that. If I were Microsoft, I'd be embarrassed to give an error message that looks like it's pointing at the problem column when it isn't. I found the problem colid by making an educated guess, changing that input in the mapping to "Ignore", and then rerunning to see if the message went away.

In my case and in my environment, I fixed it by SUBSTR'ing the Teradata input down to the character size of the MS SQL declaration for the output column. Check and make sure your input SUBSTR propagates through all your data conversions and mappings; in my case it didn't, and I had to delete all my Data Conversions and Mappings and start over. Again, funny that OLE DB just handled it while ADO.NET threw the error and needed all this intervention to make it work. In general you should use OLE DB when your target is MS SQL.

I just stumbled upon this and, using @b_stil's snippet, I was able to figure out the culprit column. On further investigation, I figured I needed to trim the column just like @Liji Chandran suggested, but I was using IExcelDataReader and I couldn't figure out an easy way to validate and trim each of my 160 columns.

Then I stumbled upon the ValidatingDataReader class from CSVReader.

The interesting thing about this class is that it gives you the source and destination column data lengths, the culprit row, and even the column value that's causing the error.

All I did was just trim all (nvarchar, varchar, char and nchar) columns.

I just changed my GetValue method to this:

// reader, lookup, currentRecord, tableName, databaseName, serverName and
// ColumnException are members of the ValidatingDataReader class this method lives in.
object IDataRecord.GetValue(int i)
{
    object columnValue = reader.GetValue(i);

    if (i > -1 && i < lookup.Length)
    {
        DataRow columnDef = lookup[i];

        // Only string-typed destination columns need the trim/length check.
        if
        (
            (
                (string)columnDef["DataTypeName"] == "varchar" ||
                (string)columnDef["DataTypeName"] == "nvarchar" ||
                (string)columnDef["DataTypeName"] == "char" ||
                (string)columnDef["DataTypeName"] == "nchar"
            ) &&
            (
                columnValue != null &&
                columnValue != DBNull.Value
            )
        )
        {
            // Trim leading/trailing whitespace before checking the length.
            string stringValue = columnValue.ToString().Trim();
            columnValue = stringValue;

            if (stringValue.Length > (int)columnDef["ColumnSize"])
            {
                string message =
                    "Column value \"" + stringValue.Replace("\"", "\\\"") + "\"" +
                    " with length " + stringValue.Length.ToString("###,##0") +
                    " from source column " + (this as IDataRecord).GetName(i) +
                    " in record " + currentRecord.ToString("###,##0") +
                    " does not fit in destination column " + columnDef["ColumnName"] +
                    " with length " + ((int)columnDef["ColumnSize"]).ToString("###,##0") +
                    " in table " + tableName +
                    " in database " + databaseName +
                    " on server " + serverName + ".";

                if (ColumnException == null)
                {
                    // No handler wired up: fail immediately with the details.
                    throw new Exception(message);
                }
                else
                {
                    // Let the subscriber decide what to do (log, truncate, replace the value, ...).
                    ColumnExceptionEventArgs args = new ColumnExceptionEventArgs();

                    args.DataTypeName = (string)columnDef["DataTypeName"];
                    args.DataType = Type.GetType((string)columnDef["DataType"]);
                    args.Value = columnValue;
                    args.SourceIndex = i;
                    args.SourceColumn = reader.GetName(i);
                    args.DestIndex = (int)columnDef["ColumnOrdinal"];
                    args.DestColumn = (string)columnDef["ColumnName"];
                    args.ColumnSize = (int)columnDef["ColumnSize"];
                    args.RecordIndex = currentRecord;
                    args.TableName = tableName;
                    args.DatabaseName = databaseName;
                    args.ServerName = serverName;
                    args.Message = message;

                    ColumnException(args);

                    columnValue = args.Value;
                }
            }
        }
    }

    return columnValue;
}

Hope this helps someone

I have been using SqlBulkCopy to transfer an Access database to SQL. Simply put, I create each table in SQL, load the corresponding Access table into a DataTable, then SqlBulkCopy the DataTable to the corresponding SQL table. Because I used a standardized function to do this, I did not set up any ColumnMappings. When there are no ColumnMappings, SqlBulkCopy seems to simply map column 0 to column 0, column 1 to column 1, etc.

In my case, I had to make sure that the column order in my Access table matched the column order in my new SQL table EXACTLY. Once I did that, it ran as expected.
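A minimal sketch of making the mapping explicit so that the column order no longer matters; the table and column names here are placeholders:

// Sketch: explicit column mappings remove the dependence on ordinal position.
using (var bulkCopy = new SqlBulkCopy(connection))
{
    bulkCopy.DestinationTableName = "dbo.TargetTable";         // placeholder
    bulkCopy.ColumnMappings.Add("SourceName", "DestName");     // map by name, not by position
    bulkCopy.ColumnMappings.Add("SourceAmount", "DestAmount");
    bulkCopy.WriteToServer(accessTable);                       // accessTable is the DataTable loaded from Access
}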