I think this is a really good question; in Hive you would use EXPLODE, and I think there is a case to be made that Pandas should include this functionality by default. I would probably explode the list column with a nested generator comprehension, like this:
# i is the (name, opponent) tuple from the df's MultiIndex
pd.DataFrame({
        "name": i[0],
        "opponent": i[1],
        "nearest_neighbor": neighbour
    }
    for i, row in df.iterrows() for neighbour in row.nearest_neighbors
).set_index(["name", "opponent"])
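As an aside, pandas 0.25 and later ship this functionality as a built-in, so if a recent version is available the same result comes straight from the list column (assuming it is named nearest_neighbors, as in the sample data further down):
# requires pandas >= 0.25
df.explode('nearest_neighbors')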
In the code below, I first reset the index to make the row iteration easier.
I create a list of lists where each element of the outer list is a row of the target DataFrame and each element of the inner list is one of the columns. This nested list is then used to construct the desired DataFrame.
I use a lambda function together with a list comprehension to create a row for each element of nearest_neighbors, paired with the relevant name and opponent.
Finally, I create a new DataFrame from this list (using the original column names and setting the index back to name and opponent).
df = (pd.DataFrame({'name': ['A.J. Price'] * 3,
                    'opponent': ['76ers', 'blazers', 'bobcats'],
                    'nearest_neighbors': [['Zach LaVine', 'Jeremy Lin', 'Nate Robinson', 'Isaia']] * 3})
      .set_index(['name', 'opponent']))
>>> df
                                                   nearest_neighbors
name       opponent
A.J. Price 76ers     [Zach LaVine, Jeremy Lin, Nate Robinson, Isaia]
           blazers   [Zach LaVine, Jeremy Lin, Nate Robinson, Isaia]
           bobcats   [Zach LaVine, Jeremy Lin, Nate Robinson, Isaia]
df.reset_index(inplace=True)
rows = []
_ = df.apply(lambda row: [rows.append([row['name'], row['opponent'], nn])
                          for nn in row.nearest_neighbors], axis=1)
df_new = pd.DataFrame(rows, columns=df.columns).set_index(['name', 'opponent'])
>>> df_new
                     nearest_neighbors
name       opponent
A.J. Price 76ers           Zach LaVine
           76ers            Jeremy Lin
           76ers         Nate Robinson
           76ers                 Isaia
           blazers         Zach LaVine
           blazers          Jeremy Lin
           blazers       Nate Robinson
           blazers               Isaia
           bobcats         Zach LaVine
           bobcats          Jeremy Lin
           bobcats       Nate Robinson
           bobcats               Isaia
df = pd.DataFrame({'listcol':[[1,2,3],[4,5,6]]})
# expand df.listcol into its own dataframe
tags = df['listcol'].apply(pd.Series)
# rename each new column with a 'listcol_' prefix
tags = tags.rename(columns = lambda x : 'listcol_' + str(x))
# join the tags dataframe back to the original dataframe
df = pd.concat([df[:], tags[:]], axis=1)
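With the two-row df above, the result should look roughly like this (one new listcol_N column per list position, alongside the original list column):
>>> df
     listcol  listcol_0  listcol_1  listcol_2
0  [1, 2, 3]          1          2          3
1  [4, 5, 6]          4          5          6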
import copy
import pandas


def pandas_explode(df, column_to_explode):
    """
    Similar to Hive's EXPLODE function: take a column with iterable elements and
    flatten the iterable to one element per observation in the output table.

    :param df: A dataframe to explode
    :type df: pandas.DataFrame
    :param column_to_explode: Name of the column whose iterable values will be flattened
    :type column_to_explode: str
    :return: An exploded data frame
    :rtype: pandas.DataFrame
    """
    # Create a list of new observations
    new_observations = list()

    # Iterate through existing observations
    for row in df.to_dict(orient='records'):

        # Take out the exploding iterable
        explode_values = row[column_to_explode]
        del row[column_to_explode]

        # Create a new observation for every entry in the exploding iterable & add all of the other columns
        for explode_value in explode_values:
            # Deep copy existing observation
            new_observation = copy.deepcopy(row)

            # Add one (newly flattened) value from exploding iterable
            new_observation[column_to_explode] = explode_value

            # Add to the list of new observations
            new_observations.append(new_observation)

    # Create a DataFrame
    return_df = pandas.DataFrame(new_observations)

    # Return
    return return_df
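For example, reusing a couple of the sample names from the accepted answer above (just an illustrative check; output spacing is approximate):
df = pandas.DataFrame({'name': ['A.J. Price'] * 2,
                       'opponent': ['76ers', 'blazers'],
                       'nearest_neighbors': [['Zach LaVine', 'Jeremy Lin']] * 2})

pandas_explode(df, 'nearest_neighbors')
#          name opponent nearest_neighbors
# 0  A.J. Price    76ers       Zach LaVine
# 1  A.J. Price    76ers        Jeremy Lin
# 2  A.J. Price  blazers       Zach LaVine
# 3  A.J. Price  blazers        Jeremy Lin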
Here is a potential optimization for larger dataframes: it runs faster when there are many repeated values in the "exploding" field (the larger the dataframe is relative to the number of unique values in that field, the better such an approach performs).
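One way such an optimization could look, as a rough sketch of the idea rather than the original snippet (the helper name pandas_explode_dedup is made up here): explode each unique list only once, then merge the exploded values back onto the full frame.
def pandas_explode_dedup(df, column_to_explode):
    # Represent each list as a tuple so identical lists collapse to one hashable key
    keys = df[column_to_explode].apply(tuple)
    # Explode only the unique lists, once each, into a small lookup table
    unique_keys = keys.drop_duplicates()
    lookup = pandas.DataFrame(
        [(key, value) for key in unique_keys for value in key],
        columns=['_key', column_to_explode])
    # Broadcast the exploded values back onto every original row via a merge on the key
    return (df.drop(columns=[column_to_explode])
              .assign(_key=keys)
              .merge(lookup, on='_key')
              .drop(columns=['_key']))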
Extending Oleg's .iloc answer to automatically flatten all list-columns:
import numpy as np

def extend_iloc(df):
    cols_to_flatten = [colname for colname in df.columns
                       if isinstance(df.iloc[0][colname], list)]
    # Row numbers to repeat (each row is repeated once per element in its lists)
    lens = df[cols_to_flatten[0]].apply(len)
    vals = range(df.shape[0])
    ilocations = np.repeat(vals, lens)
    # Replicate rows, keeping only the non-list columns for now
    with_idxs = [(i, c) for (i, c) in enumerate(df.columns) if c not in cols_to_flatten]
    col_idxs = [i for (i, c) in with_idxs]
    new_df = df.iloc[ilocations, col_idxs].copy()
    # Flatten columns of lists and add them back
    for col_target in cols_to_flatten:
        col_flat = [item for sublist in df[col_target] for item in sublist]
        new_df[col_target] = col_flat
    return new_df
This assumes that each list-column has equal list length.
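For example, with a small made-up frame (assuming the function above is already defined):
import pandas as pd

df = pd.DataFrame({'A': [1, 2],
                   'B': [[10, 20], [30, 40]],
                   'C': [['x', 'y'], ['z', 'w']]})

extend_iloc(df)
#    A   B  C
# 0  1  10  x
# 0  1  20  y
# 1  2  30  z
# 1  2  40  w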
So all of these answers are good, but I wanted something really simple, so here's my contribution:
def explode(series):
    return pd.Series([x for inner_list in series for x in inner_list])
That's it. Just use this when you want a new Series where the lists are 'exploded'. Here's an example where we do value_counts():
In [1]: df = pd.DataFrame({'column': [['a','b','c'],['b','c'],['c']]})

In [2]: df
Out[2]:
      column
0  [a, b, c]
1     [b, c]
2        [c]

In [3]: explode(df['column'])
Out[3]:
0    a
1    b
2    c
3    b
4    c
5    c

In [4]: explode(df['column']).value_counts()
Out[4]:
c    3
b    2
a    1