RandomForestClassifier.fit(): ValueError: could not convert string to float

Given is a simple CSV file:

A,B,C
Hello,Hi,0
Hola,Bueno,1

Obviously the real dataset is far more complex than this, but this one reproduces the error. I'm attempting to build a random forest classifier for it, like so:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

cols = ['A','B','C']
col_types = {'A': str, 'B': str, 'C': int}
test = pd.read_csv('test.csv', dtype=col_types)


train_y = test['C'] == 1
train_x = test[cols]


clf_rf = RandomForestClassifier(n_estimators=50)
clf_rf.fit(train_x, train_y)

But I just get this traceback when invoking fit():

ValueError: could not convert string to float: 'Bueno'

scikit-learn version is 0.16.1.


You can't pass str to your model's fit() method. As mentioned here:

The training input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csc_matrix.

Try transforming your data to float, and give LabelEncoder a try.
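For example, a minimal sketch applying LabelEncoder to the question's test.csv (leaving the target column C out of the features is an assumption on my part; the question's cols included it):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder

test = pd.read_csv('test.csv')

# Encode each string column to integers. A fresh mapping is fitted per
# column; keep one encoder per column if you need to invert it later.
for col in ['A', 'B']:
    test[col] = LabelEncoder().fit_transform(test[col])

train_y = test['C'] == 1
train_x = test[['A', 'B']]

clf_rf = RandomForestClassifier(n_estimators=50)
clf_rf.fit(train_x, train_y)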

You have to do some encoding before using fit(). As noted, fit() does not accept strings, but you can solve this.

There are several classes that can be used :

  • LabelEncoder : turns your strings into incremental integer values
  • OneHotEncoder : uses the one-of-K scheme to transform your strings into binary columns

Personally, I posted almost the same question on Stack Overflow some time ago. I wanted a scalable solution but didn't get any answer. I selected OneHotEncoder, which binarizes all the strings. It is quite effective, but if you have a lot of distinct strings the matrix will grow very quickly and memory will become a constraint.
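For reference, in scikit-learn 0.20 and later OneHotEncoder accepts string columns directly, so the binarization can be done in one step (a sketch; sparse output is the default):

import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([["Hello", "Hi"],
              ["Hola", "Bueno"]])

enc = OneHotEncoder()               # sparse output by default
X_binary = enc.fit_transform(X)     # one binary column per distinct string
print(X_binary.toarray())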

LabelEncoder worked for me (basically you have to encode your data feature-wise; myData is a 2-D array of string dtype):

import numpy as np
from sklearn import preprocessing

# filecsv is the path to your CSV file
myData = np.genfromtxt(filecsv, delimiter=",", dtype="|a20", skip_header=1)

le = preprocessing.LabelEncoder()
for i in range(myData.shape[1]):  # encode each feature column separately
    myData[:, i] = le.fit_transform(myData[:, i])

I had a similar issue and found that pandas.get_dummies() solved the problem. Specifically, it splits out columns of categorical data into sets of boolean columns, one new column for each unique value in each input column. In your case, you would replace train_x = test[cols] with:

train_x = pandas.get_dummies(test[cols])

This transforms the train_x DataFrame into the following form, which RandomForestClassifier can accept:

   C  A_Hello  A_Hola  B_Bueno  B_Hi
0  0        1       0        0     1
1  1        0       1        1     0
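Putting it together, a runnable sketch (assuming the question's test.csv; get_dummies leaves the already-numeric C column unchanged, as shown above):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

cols = ['A', 'B', 'C']
test = pd.read_csv('test.csv')

train_y = test['C'] == 1
train_x = pd.get_dummies(test[cols])   # string columns become boolean dummies
                                       # (you may want to drop the target C here)

clf_rf = RandomForestClassifier(n_estimators=50)
clf_rf.fit(train_x, train_y)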

You cannot pass str values when fitting this kind of classifier.

For example, if you have a feature column named 'grade' which has 3 different grades:

A, B, and C,

you have to transform those strings "A", "B", "C" into a matrix with an encoder, like the following:

A = [1, 0, 0]
B = [0, 1, 0]
C = [0, 0, 1]

because strings have no numerical meaning for the classifier.

In scikit-learn, OneHotEncoder and LabelEncoder are available in the preprocessing module. However, at the time (scikit-learn 0.16.1), OneHotEncoder did not support fit_transform() on strings, and "ValueError: could not convert string to float" could occur during transform. (Since scikit-learn 0.20, OneHotEncoder handles string columns directly.)

You can use LabelEncoder to convert the strings to integer values first; then you can transform those with OneHotEncoder as you wish.
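A sketch of that two-step approach (the Animal values are illustrative; on scikit-learn 0.20+ OneHotEncoder can instead take the strings directly):

import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

animals = np.array(["Dog", "Cat", "Turtle", "Dog"])

# Step 1: strings -> integers (LabelEncoder sorts classes alphabetically).
labels = LabelEncoder().fit_transform(animals)

# Step 2: integers -> one-hot; OneHotEncoder expects a 2-D array.
onehot = OneHotEncoder().fit_transform(labels.reshape(-1, 1))
print(onehot.toarray())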

In a pandas DataFrame, I have to encode all of the columns that have dtype: object. The following code works for me, and I hope it helps you.

from sklearn import preprocessing

le = preprocessing.LabelEncoder()
for column_name in train_data.columns:
    if train_data[column_name].dtype == object:
        train_data[column_name] = le.fit_transform(train_data[column_name])

Since your input is a string, you are getting the ValueError. Use CountVectorizer: it will convert the dataset into a sparse matrix of token counts, which you can then use to train your ML algorithm.
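A sketch of that idea on the question's data (joining the two text columns into one document per row is an assumption; you could also vectorize them separately):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

test = pd.read_csv('test.csv')

docs = test['A'] + ' ' + test['B']                 # one text document per row
train_x = CountVectorizer().fit_transform(docs)    # sparse token-count matrix
train_y = test['C'] == 1

clf_rf = RandomForestClassifier(n_estimators=50)
clf_rf.fit(train_x, train_y)                       # random forests accept sparse input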

Indeed, a one-hot encoder will work just fine here: convert any string or numerical categorical variables you want into 1s and 0s this way, and the random forest should not complain.

Well, there are important differences between how one-hot encoding and label encoding work:

  • Label encoding basically switches your string variables to int. The first class found will be coded as 1, the second as 2, and so on. But this encoding creates an issue.

Let's take the example of a variable Animal = ["Dog", "Cat", "Turtle"].

If you use a label encoder on it, Animal will become [1, 2, 3]. If you feed this to your machine learning model, it will interpret Dog as being closer to Cat than to Turtle (because the distance between 1 and 2 is smaller than the distance between 1 and 3).

Label encoding is actually excellent when you have an ordinal variable.

For example, if you have a variable Age = ["Child", "Teenager", "Young Adult", "Adult", "Old"],

then using label encoding is perfect: Child is closer to Teenager than it is to Young Adult. You have a natural order on your variable (see the sketch below).
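A sketch of encoding such an ordinal variable with an explicit order (using pandas Categorical here, since LabelEncoder would sort the classes alphabetically and scramble the natural order):

import pandas as pd

ages = pd.Series(["Child", "Old", "Teenager", "Adult", "Young Adult"])

order = ["Child", "Teenager", "Young Adult", "Adult", "Old"]
codes = pd.Categorical(ages, categories=order, ordered=True).codes
print(codes)   # [0 4 1 3 2] -- integers that respect the natural order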

  • One-hot encoding (also done by pd.get_dummies) is the best solution when there is no natural order between your variables.

Let's go back to the previous example of Animal = ["Dog", "Cat", "Turtle"].

It will create as many variables as there are classes. In my example, it will create three binary variables: Dog, Cat, and Turtle. Then, if Animal = "Dog", encoding will make it Dog = 1, Cat = 0, Turtle = 0.

Then you can give this to your model, and it will never interpret Dog as being closer to Cat than to Turtle.

But there are also cons to one-hot encoding. If you have a categorical variable with 50 different classes,

e.g. Dog, Cat, Turtle, Fish, Monkey, ...

then it will create 50 binary variables, which can cause complexity issues. In this case, you can create your own classes and manually regroup values,

e.g. regroup Turtle, Fish, Dolphin, and Shark into a single class called Sea Animals, and then apply one-hot encoding, as in the sketch below.
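A sketch of that regrouping before one-hot encoding (the class names and the grouping itself are illustrative):

import pandas as pd

animals = pd.Series(["Dog", "Cat", "Turtle", "Fish", "Dolphin", "Shark"])

# Collapse related classes into one before encoding to limit column growth.
sea_animals = {"Turtle", "Fish", "Dolphin", "Shark"}
grouped = animals.apply(lambda a: "Sea Animal" if a in sea_animals else a)

print(pd.get_dummies(grouped))   # 3 columns instead of 6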