How do I change DataFrame column names in PySpark?

I come from a pandas background, where I'm used to reading data from a CSV file into a DataFrame and then simply renaming the columns to something useful with a single command:

df.columns = new_column_name_list
However, this doesn't work for a PySpark DataFrame created with sqlContext. The only solution I could figure out is:

df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i, k in enumerate(oldSchema.fields):
    k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)

This essentially defines the variable twice: first infer the schema, then rename the columns, and then load the DataFrame again with the updated schema.

Is there a better, more efficient way to do this, like we do in pandas?

My Spark version is 1.5.0.


There are many ways to do this:

  • Option 1. Use selectExpr:

     data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)],
                                       ["Name", "askdaosdka"])
    data.show()
    data.printSchema()
    
    
    # Output
    #+-------+----------+
    #|   Name|askdaosdka|
    #+-------+----------+
    #|Alberto|         2|
    #| Dakota|         2|
    #+-------+----------+
    
    
    #root
    # |-- Name: string (nullable = true)
    # |-- askdaosdka: long (nullable = true)
    
    
    df = data.selectExpr("Name as name", "askdaosdka as age")
    df.show()
    df.printSchema()
    
    
    # Output
    #+-------+---+
    #|   name|age|
    #+-------+---+
    #|Alberto|  2|
    #| Dakota|  2|
    #+-------+---+
    
    
    #root
    # |-- name: string (nullable = true)
    # |-- age: long (nullable = true)
    
  • Option 2. Use withColumnRenamed; note that this approach lets you "overwrite" the same column. In Python 2, use xrange instead of range.

     from functools import reduce
    
    
    oldColumns = data.schema.names
    newColumns = ["name", "age"]
    
    
    df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), range(len(oldColumns)), data)
    df.printSchema()
    df.show()
    
  • Option 3. Use alias (in Scala you can also use as):
     from pyspark.sql.functions import col
    
    
    data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
    data.show()
    
    
    # Output
    #+-------+---+
    #|   name|age|
    #+-------+---+
    #|Alberto|  2|
    #| Dakota|  2|
    #+-------+---+
    
  • Option 4. Use sqlContext.sql, which lets you run SQL queries on DataFrames registered as tables:

     sqlContext.registerDataFrameAsTable(data, "myTable")
    df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
    
    
    df2.show()
    
    
    # Output
    #+-------+---+
    #|   name|age|
    #+-------+---+
    #|Alberto|  2|
    #| Dakota|  2|
    #+-------+---+
    
df = df.withColumnRenamed("colName", "newColName")\
.withColumnRenamed("colName2", "newColName2")

The advantage of this approach: with a long list of columns, you only need to change a few column names. That is very handy in such scenarios, and especially useful when joining tables that have duplicate column names.

If you want to rename a single column and keep the rest as they are:

from pyspark.sql.functions import col
new_df = old_df.select(*[col(s).alias(new_name) if s == column_to_change else s for s in old_df.columns])

If you want to change all column names, try df.toDF(*cols).

For a single-column rename, you can still use toDF(). For example:

df1.selectExpr("SALARY*2").toDF("REVISED_SALARY").show()

We can use col.alias to rename a column:

from pyspark.sql.functions import col
df.select(['vin',col('timeStamp').alias('Date')]).show()

If you want to apply a simple transformation to all column names, this code does the trick (here I replace all spaces with underscores):

new_column_name_list= list(map(lambda x: x.replace(" ", "_"), df.columns))


df = df.toDF(*new_column_name_list)

Thanks to @user8117731 for the toDf trick.

Another way to rename just one column (using import pyspark.sql.functions as F):

df = df.select( '*', F.col('count').alias('new_count') ).drop('count')

df.withColumnRenamed('age', 'age2')

This is the approach I use:

Create a PySpark session:

import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('changeColNames').getOrCreate()

Create a DataFrame:

df = spark.createDataFrame(data = [('Bob', 5.62,'juice'),  ('Sue',0.85,'milk')], schema = ["Name", "Amount","Item"])

View the df with its column names:

df.show()
+----+------+-----+
|Name|Amount| Item|
+----+------+-----+
| Bob|  5.62|juice|
| Sue|  0.85| milk|
+----+------+-----+

Create a list holding the new column names:

newcolnames = ['NameNew','AmountNew','ItemNew']

Rename the columns of the df:

for c, n in zip(df.columns, newcolnames):
    df = df.withColumnRenamed(c, n)

View the df with the new column names:

df.show()
+-------+---------+-------+
|NameNew|AmountNew|ItemNew|
+-------+---------+-------+
|    Bob|     5.62|  juice|
|    Sue|     0.85|   milk|
+-------+---------+-------+
I made an easy-to-use function to rename multiple columns of a PySpark DataFrame, in case anyone wants to use it:

def renameCols(df, old_columns, new_columns):
    for old_col, new_col in zip(old_columns, new_columns):
        df = df.withColumnRenamed(old_col, new_col)
    return df


old_columns = ['old_name1','old_name2']
new_columns = ['new_name1', 'new_name2']
df_renamed = renameCols(df, old_columns, new_columns)

Note that both lists must be the same length.
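The renameCols helper above relies on zip, which silently stops at the shorter list; here is a tiny sketch (names are made up) of an explicit length guard you could add before calling it:

```python
old_columns = ['old_name1', 'old_name2']
new_columns = ['new_name1', 'new_name2']

# zip silently truncates to the shorter list, so check lengths up front
if len(old_columns) != len(new_columns):
    raise ValueError("old_columns and new_columns must have the same length")

pairs = list(zip(old_columns, new_columns))
```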

You can use the following function to rename all the columns of your DataFrame.

def df_col_rename(X, to_rename, replace_with):
    """
    :param X: spark dataframe
    :param to_rename: list of original names
    :param replace_with: list of new names
    :return: dataframe with updated names
    """
    import pyspark.sql.functions as F
    mapping = dict(zip(to_rename, replace_with))
    X = X.select([F.col(c).alias(mapping.get(c, c)) for c in to_rename])
    return X

If you only need to update a few column names, you can reuse the same column name in the replace_with list.

Rename all columns:

df_col_rename(X,['a', 'b', 'c'], ['x', 'y', 'z'])

Rename some columns:

df_col_rename(X,['a', 'b', 'c'], ['a', 'y', 'z'])

We can rename column names in various ways.

First, let's create a simple DataFrame:

df = spark.createDataFrame([("x", 1), ("y", 2)],
                           ["col_1", "col_2"])

Now let's try to rename col_1 to col_3. Below are a few approaches for doing the same.

# Approach - 1 : using withColumnRenamed function.
df.withColumnRenamed("col_1", "col_3").show()


# Approach - 2 : using alias function.
df.select(df["col_1"].alias("col3"), "col_2").show()


# Approach - 3 : using selectExpr function.
df.selectExpr("col_1 as col_3", "col_2").show()


# Rename all columns
# Approach - 4 : using toDF function. Here you need to pass the list of all columns present in DataFrame.
df.toDF("col_3", "col_2").show()

Here is the output:

+-----+-----+
|col_3|col_2|
+-----+-----+
|    x|    1|
|    y|    2|
+-----+-----+

I hope this helps.

You can use several approaches:

  1. df1 = df.withColumn("new_column", col("old_column")).drop("old_column")

  2. df1 = df.withColumn("new_column", col("old_column"))

  3. df1 = df.select(col("old_column").alias("new_column"))

You can use a for loop with zip to pair up each column name from the two lists.

new_name = ["id", "sepal_length_cm", "sepal_width_cm", "petal_length_cm", "petal_width_cm", "species"]


new_df = df
for old, new in zip(df.columns, new_name):
    new_df = new_df.withColumnRenamed(old, new)

I like to use a dict to rename the df.

rename = {'old1': 'new1', 'old2': 'new2'}
for col in df.schema.names:
    # .get(col, col) keeps any column that isn't in the dict unchanged
    df = df.withColumnRenamed(col, rename.get(col, col))

Method 1:

df = df.withColumnRenamed("old_column_name", "new_column_name")

Method 2: if you want to do some computation and rename the resulting value:

df = df.withColumn("new_column_name", F.when(F.col("old_column_name") > 1, F.lit(1)).otherwise(F.col("old_column_name")))
df = df.drop("old_column_name")

from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import col

CreatingDataFrame = [("James", "Sales", "NY", 90000, 34, 10000),
                     ("Michael", "Sales", "NY", 86000, 56, 20000),
                     ("Robert", "Sales", "CA", 81000, 30, 23000),
                     ("Maria", "Finance", "CA", 90000, 24, 23000),
                     ("Raman", "Finance", "CA", 99000, 40, 24000),
                     ("Scott", "Finance", "NY", 83000, 36, 19000),
                     ("Jen", "Finance", "NY", 79000, 53, 15000),
                     ("Jeff", "Marketing", "CA", 80000, 25, 18000),
                     ("Kumar", "Marketing", "NY", 91000, 50, 21000)]

schema = StructType([
    StructField("employee_name", StringType(), True),
    StructField("department", StringType(), True),
    StructField("state", StringType(), True),
    StructField("salary", IntegerType(), True),
    StructField("age", IntegerType(), True),
    StructField("bonus", IntegerType(), True)
])

OurData = spark.createDataFrame(data=CreatingDataFrame, schema=schema)
OurData.show()

GrouppedBonusData = OurData.groupBy("department").sum("bonus")
GrouppedBonusData.show()
GrouppedBonusData.printSchema()

# The aggregated column is named "sum(bonus)"; rename it with alias
BonusColumnRenamed = GrouppedBonusData.select(col("department").alias("department"), col("sum(bonus)").alias("Total_Bonus"))
BonusColumnRenamed.show()

GrouppedBonusData.groupBy("department").count().show()

GrouppedSalaryData = OurData.groupBy("department").sum("salary")
GrouppedSalaryData.show()

SalaryColumnRenamed = GrouppedSalaryData.select(col("department").alias("Department"), col("sum(salary)").alias("Total_Salary"))
SalaryColumnRenamed.show()


You can use 'alias' to change the column name:

col('my_column').alias('new_name')

Another way to use 'alias' (possibly not mentioned yet):

df.my_column.alias('new_name')

Try the approach below. It lets you rename columns across multiple files.

Reference: https://www.linkedin.com/pulse/pyspark-methods-rename-columns-kyle-gibson/

from pyspark.sql.functions import col

df_initial = spark.read.load('com.databricks.spark.csv')

rename_dict = {
    'Alberto': 'Name',
    'Dakota': 'askdaosdka'
}

df_renamed = df_initial \
    .select([col(c).alias(rename_dict.get(c, c)) for c in df_initial.columns])


# The same idea packaged as a function, for use with DataFrame.transform
def renameColumns(df):
    rename_dict = {
        'FName': 'FirstName',
        'LName': 'LastName',
        'DOB': 'BirthDate'
    }
    return df.select([col(c).alias(rename_dict.get(c, c)) for c in df.columns])


df_renamed = spark.read.load('/mnt/datalake/bronze/testData') \
    .transform(renameColumns)

The simplest solution is using withColumnRenamed:

renamed_df = df.withColumnRenamed('name_1', 'New_name_1').withColumnRenamed('name_2', 'New_name_2')
renamed_df.show()

If you want to do it the way we do it in pandas, you can use toDF:

Create an ordered list of the new column names and pass it to toDF:

df_list = ["newName_1", "newName_2", "newName_3", "newName_4"]
renamed_df = df.toDF(*df_list)
renamed_df.show()


Here is an easy way to rename multiple columns with a loop:

cols_to_rename = ["col1","col2","col3"]


for col in cols_to_rename:
    df = df.withColumnRenamed(col, "new_{}".format(col))

List comprehension + f-string:

df = df.toDF(*[f'n_{c}' for c in df.columns])

Simple list comprehension:

df = df.toDF(*[c.lower() for c in df.columns])