The solution provided by Joe Steanelli works, but building the column list by hand is inconvenient when dozens or hundreds of columns are involved. Here is how to get the column list for table my_table in schema my_schema.
-- override GROUP_CONCAT limit of 1024 characters to avoid a truncated result
set session group_concat_max_len = 1000000;
select GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'") ORDER BY ORDINAL_POSITION)
from INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_table'
AND TABLE_SCHEMA = 'my_schema';
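The query returns one comma-separated, quoted list such as 'id','name','created_at' (hypothetical column names). Paste that list into the header SELECT of the UNION export; a minimal sketch, assuming my_table has those three columns:
-- header row first, then the data, exported in one statement
SELECT 'id', 'name', 'created_at'
UNION ALL
SELECT id, name, created_at
FROM my_schema.my_table
INTO OUTFILE '/tmp/my_table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';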
-- If your table has many columns, raise the GROUP_CONCAT limit for this session first
SET SESSION group_concat_max_len = 100000000;
-- Prepared statement
SET @SQL = (
    SELECT CONCAT('SELECT * INTO OUTFILE \'YOUR_PATH\' FIELDS TERMINATED BY \',\' OPTIONALLY ENCLOSED BY \'"\' ESCAPED BY \'\' LINES TERMINATED BY \'\\n\' FROM (SELECT ',
                  GROUP_CONCAT(CONCAT("'", COLUMN_NAME, "'") ORDER BY ORDINAL_POSITION),
                  ' UNION SELECT * FROM YOUR_TABLE) AS tmp')
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'YOUR_TABLE' AND TABLE_SCHEMA = 'YOUR_SCHEMA'
);
-- Execute it
PREPARE stmt FROM @SQL;
EXECUTE stmt;
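For reference, the statement that ends up in @SQL has roughly this shape (YOUR_PATH and YOUR_TABLE are the placeholders from the snippet above; col1..col3 stand in for the real column names):
SELECT * INTO OUTFILE 'YOUR_PATH'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '' LINES TERMINATED BY '\n'
FROM (SELECT 'col1','col2','col3' UNION SELECT * FROM YOUR_TABLE) AS tmp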
It just takes a bit of trickery in the ORDER BY clause: use a CASE expression and map the header value to some other value that is guaranteed to sort first in the list (obviously this depends on the type of the field and on whether you are sorting ASC or DESC).
Say you have three fields, name (varchar), is_active (bool) and date_something_happens (date), and you want to sort descending on the second one:
select
'name'
, 'is_active' as is_active
, 'date_something_happens' as date_something_happens
union all
select name, is_active, date_something_happens
from
my_table
order by
(case is_active when 'is_active' then 0 else is_active end) desc
, (case date_something_happens when 'date_something_happens' then '9999-12-30' else date_something_happens end) desc
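The same idea works for ascending sorts: map the header to a value that is guaranteed to sort before every real value. A minimal sketch, assuming you instead wanted to sort ascending on name and that name contains no NULL or empty values:
select
'name'
, 'is_active' as is_active
, 'date_something_happens' as date_something_happens
union all
select name, is_active, date_something_happens
from
my_table
order by
-- the header row is mapped to '', which sorts before any non-empty varchar
(case name when 'name' then '' else name end) asc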
SELECT * FROM (
SELECT 'Column name #1', 'Column name #2', 'Column name ##'
UNION ALL
(
-- complex SELECT statement with WHERE, ORDER BY, GROUP BY etc.
)
) resulting_set
INTO OUTFILE '/path/to/file';
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE 'c:\\datasheet.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n'
/* Change table_name and database_name */
SET @table_name = 'table_name';
SET @table_schema = 'database_name';
SET @default_group_concat_max_len = (SELECT @@group_concat_max_len);
/* Sets Group Concat Max Limit larger for tables with a lot of columns */
SET SESSION group_concat_max_len = 1000000;
SET @col_names = (
SELECT GROUP_CONCAT(QUOTE(`column_name`) ORDER BY ordinal_position) AS columns
FROM information_schema.columns
WHERE table_schema = @table_schema
AND table_name = @table_name);
SET @cols = CONCAT('(SELECT ', @col_names, ')');
SET @query = CONCAT('(SELECT * FROM ', @table_schema, '.', @table_name,
' INTO OUTFILE \'/tmp/your_csv_file.csv\'
FIELDS TERMINATED BY \'\t\' ENCLOSED BY \'\\\'\' ESCAPED BY \'\'
LINES TERMINATED BY \'\n\')');
/* Concatenates column names to query */
SET @sql = CONCAT(@cols, ' UNION ALL ', @query);
/* Resets Group Concat Max Limit back to original value */
SET SESSION group_concat_max_len = @default_group_concat_max_len;
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
I was writing my code in PHP and had some trouble using the CONCAT and UNION functions, and I wasn't using SQL variables either. Anyway, I got it working; here is my code:
//first I connected to the information_schema DB
$headercon=mysqli_connect("localhost", "USERNAME", "PASSWORD", "information_schema");
//took the headers out into a string (I could not get the concat function to work, so I wrote a loop for it)
$headers = '';
$sql = "SELECT column_name AS columns FROM `COLUMNS` WHERE table_schema = 'YOUR_DB_NAME' AND table_name = 'YOUR_TABLE_NAME'";
$result = $headercon->query($sql);
while($row = $result->fetch_row())
{
$headers = $headers . "'" . $row[0] . "', ";
}
$headers = substr("$headers", 0, -2);
// connect to the DB of interest
$con=mysqli_connect("localhost", "USERNAME", "PASSWORD", "YOUR_DB_NAME");
// export the results to csv
$sql4 = "SELECT $headers UNION SELECT * FROM YOUR_TABLE_NAME WHERE ... INTO OUTFILE '/output.csv' FIELDS TERMINATED BY ','";
$result4 = $con->query($sql4);
select * from (
(select column_name
from information_schema.columns
where table_name = 'my_table'
and table_schema = 'my_schema'
order by ordinal_position)
union all
(select * -- potentially complex SELECT statement with WHERE, ORDER BY, GROUP BY etc.
from my_table)) as tbl
into outfile '/path/outfile'
fields terminated by ',' optionally enclosed by '"' escaped by '\\'
lines terminated by '\n';
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT * from (SELECT ColName1, ColName2, ColName3
FROM YourTable order by ColName1 limit 3) a
INTO OUTFILE '/path/outfile';
I ran into a similar issue when executing MySQL queries on large tables in NodeJS. The approach I followed to include the headers in the CSV file is as follows:
1. Prepare the file without headers using an OUTFILE query:
SELECT * INTO OUTFILE [FILE_NAME] FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED
BY '\"' LINES TERMINATED BY '\n' FROM [TABLE_NAME]
2. Fetch the column headers for the table used in step 1:
select GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION) as col_names from
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = [TABLE_NAME] AND TABLE_SCHEMA
= [DATABASE_NAME]
3. Prepend the column headers to the file created in step 1 using the prepend-file npm package.
4. Execution of each step was controlled using promises in NodeJS.
select ('id') as id, ('time') as time, ('unit') as unit
UNION ALL
SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sensor
In the end, UNION attempts of every kind produced errors for me. Writing the CSV turned out to be very simple once I squashed the whole mess into a list of dicts and used csv.DictWriter.
headers_sql = """
SELECT
GROUP_CONCAT(CONCAT(COLUMN_NAME) ORDER BY ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table';
""""
cur = db.cursor(buffered=True)
cur.execute(header_sql)
headers_result = cur.fetchone()
headers_list = headers_result[0].split(",")
rows_sql = """ SELECT * FROM schema.table; """"
data = cur.execute(rows_sql)
data_rows = cur.fetchall()
data_as_list_of_dicts = [dict(zip(headers_list, row)) for row in data_rows]
with open(csv_destination_file, 'w', encoding='utf-8') as destination_file_opened:
dict_writer = csv.DictWriter(destination_file_opened, fieldnames=headers_list)
dict_writer.writeheader() for dict in dict_list:
dict_writer.writerow(dict)
Here is a solution using Python; if you already use other tools, though, there is no need to install Python packages just to read the SQL files.
If you are not familiar with Python, you can run the Python code in a Colab notebook, where all the required packages are already installed. It automates Matt's and Joe's solutions.
First, execute this SQL script to get a CSV containing all the table names:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA='your_schema'
INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/tables.csv';
Then move tables.csv to a suitable directory and, after replacing 'path_to_tables' and 'your_schema', execute the following Python code. It will generate an SQL script that exports all the table headers:
import pandas as pd
import os
tables = pd.read_csv('tables.csv',header = None)[0]
text_file = open("export_headers.sql", "w")
schema = 'your_schema'
sql_output_path = 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/'
for table in tables :
path = os.path.join(sql_output_path,'{}_header.csv'.format(table))
string = "(select GROUP_CONCAT(COLUMN_NAME)\nfrom INFORMATION_SCHEMA.COLUMNS\nWHERE TABLE_NAME = '{}'\nAND TABLE_SCHEMA = '{}'\norder BY ORDINAL_POSITION)\nINTO OUTFILE '{}';".format(table,schema,path)
n = text_file.write(string)
n = text_file.write('\n\n')
text_file.close()
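For a hypothetical table named orders, each entry the loop writes into export_headers.sql looks like this:
(select GROUP_CONCAT(COLUMN_NAME)
from INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'orders'
AND TABLE_SCHEMA = 'your_schema'
order BY ORDINAL_POSITION)
INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/orders_header.csv';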
Then execute this Python code, which will generate an SQL script that exports the values of all the tables:
text_file = open("export_values.sql", "w")
for table in tables :
path = os.path.join(sql_output_path,'{}.csv'.format(table))
string = "SELECT * FROM {}.{}\nINTO OUTFILE '{}';".format(schema,table,path)
n = text_file.write(string)
n = text_file.write('\n\n')
text_file.close()
Execute the two generated SQL scripts, then move the header CSVs and the value CSVs to the directories of your choice.
Then execute this final piece of Python code:
#Respectively: the path to the header csvs, the path to the value csvs, and the path where you want to put the csvs with headers and values combined
headers_path, values_path, tables_path = '', '', ''
for table in tables :
header = pd.read_csv(os.path.join(headers_path,'{}_header.csv'.format(table)))
df = pd.read_csv(os.path.join(values_path,'{}.csv'.format(table)),names = header.columns,sep = '\t')
df.to_csv(os.path.join(tables_path,'{}.csv'.format(table)),index = False)