Importing a MySQL dump into a PostgreSQL database

How do I import an "xxxx.sql" dump from MySQL into a PostgreSQL database?


It is not possible to import a MySQL (binary) dump into PostgreSQL.

If the MySQL dump is in plain SQL format, you will need to edit the file to make the syntax valid for PostgreSQL (e.g. remove the non-standard backtick quoting, remove the ENGINE definition from the CREATE TABLE statements, adjust the data types, and a lot of other things).

Don't expect that to work without editing. Maybe a lot of editing.
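
To give a feel for the kind of edits involved, here is a sketch of a typical statement before and after hand-editing, using a made-up table (your dump will differ):

-- As emitted by mysqldump (MySQL syntax)
CREATE TABLE `users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Hand-edited for PostgreSQL
CREATE TABLE users (
  id serial PRIMARY KEY,
  name varchar(100) DEFAULT NULL,
  created_at timestamp DEFAULT NULL
);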

mysqldump has a compatibility argument, --compatible=name, where "name" can be "oracle" or "postgresql", but that doesn't guarantee compatibility. I think server settings like ANSI_QUOTES have some effect, too.

You'll get more useful help here if you include the complete command you used to create the dump, along with any error messages you got instead of saying just "Nothing worked for me."

The fastest (and most complete) way I found was to use Kettle. This will also generate the needed tables, convert the indexes and everything else. The mysqldump compatibility argument does not work.

The steps:

  1. Download Pentaho ETL from http://kettle.pentaho.org/ (community version)

  2. Unzip and run Pentaho (spoon.sh/spoon.bat depending on unix/windows)

  3. Create a new job

  4. Create a database connection for the MySQL source (Tools -> Wizard -> Create database connection)

  5. Create a database connection for the PostgreSQL source (as above)

  6. Run the Copy Tables wizard (Tools -> Wizard -> Copy Tables)

  7. Run the job

Here is a simple program to create and load all tables in a MySQL database (honey) into PostgreSQL. Type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually; a sketch of that follows the script:

import MySQLdb
import psycopg2
from magic import Connect  # private MySQL connect information; returns a cursor

dbx = Connect()
DB = psycopg2.connect("dbname='honey'")
DC = DB.cursor()

# List all tables in the MySQL database
dbx.execute("show tables from honey")
tables = [row[0] for row in dbx.fetchall()]

for table in tables:
    # Read the MySQL column definitions
    dbx.execute("describe honey.%s" % table)
    rows = dbx.fetchall()

    # Drop any pre-existing PostgreSQL table of the same name
    try:
        DC.execute("drop table %s" % table)
        DB.commit()
    except psycopg2.Error:
        DB.rollback()  # table did not exist yet

    # Build CREATE TABLE with a coarse MySQL -> PostgreSQL type mapping
    cols = []
    for row in rows:
        name, coltype = row[0], row[1]
        if "int" in coltype:
            coltype = "int8"
        if "blob" in coltype:
            coltype = "bytea"
        if "datetime" in coltype:
            coltype = "timestamptz"
        cols.append("%s %s" % (name, coltype))
    psql = "create table %s (%s)" % (table, ", ".join(cols))
    print(psql)
    try:
        DC.execute(psql)
        DB.commit()
    except psycopg2.Error:
        DB.rollback()

    # Copy the data, committing every 1000 rows
    dbx.execute("select * from honey.%s" % table)
    rows = dbx.fetchall()
    n = len(rows)
    t = n
    print(n)
    if n == 0:
        continue  # skip tables with no data

    placeholders = ", ".join(["%s"] * len(rows[0]))
    for row in rows:
        DC.execute("insert into %s values (%s)" % (table, placeholders), row)
        n -= 1
        if n % 1000 == 1:
            DB.commit()
            print(n, t, t - n)  # progress: remaining, total, done
    DB.commit()
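
Since the script skips indexes entirely, once the data is in you will have to recreate them by hand on the PostgreSQL side, along these lines (table and column names are hypothetical):

create index users_name_idx on users (name);
alter table users add primary key (id);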

This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.

This is by far the easiest way of doing it: you just need to install it and then run a simple lisp script (script.lisp) with the following three lines:

/* content of the script.lisp */
LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname;




/*run this in the terminal*/
pgloader script.lisp

And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.

On a side note, make sure you compile pgloader since at the time of this post, the installer has a bug. (version 3.2.0)
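
If you need more control over the migration, the same command file can take options; here is a sketch using options from the pgloader documentation (adjust to taste):

LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname
WITH include drop, create tables, create indexes, reset sequences;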

I have this bash script to migrate the data; it doesn't create the tables, because those are created by migration scripts, so I only need to convert the data. I use a list of tables so as not to import data from the migrations and sessions tables. Here it is, just tested:

#!/bin/sh

# MySQL source connection and the tables to copy
MUSER="root"
MPASS="mysqlpassword"
MDB="origdb"
MTABLES="car dog cat"
# PostgreSQL target
PUSER="postgres"
PDB="destdb"

# Dump only the data (no DDL), as close to PostgreSQL syntax as mysqldump can manage
mysqldump -h 127.0.0.1 -P 6033 -u $MUSER -p$MPASS --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables $MDB $MTABLES > outputfile.sql

# Turn MySQL's LOCK/UNLOCK statements into TRUNCATE ... RESTART IDENTITY CASCADE
sed -i 's/UNLOCK TABLES;//g' outputfile.sql
sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
# MySQL zero-dates have no PostgreSQL equivalent
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
# Prepend settings so PostgreSQL tolerates MySQL-style escapes, and
# temporarily make casts to boolean implicit for the duration of the inserts
sed -i "1i SET standard_conforming_strings = 'off';\n" outputfile.sql
sed -i "1i SET backslash_quote = 'on';\n" outputfile.sql
sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;\n" outputfile.sql
echo "\nupdate pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;\n" >> outputfile.sql

# Load the converted dump
psql -h localhost -d $PDB -U $PUSER -f outputfile.sql

You will get a lot of warnings that you can safely ignore, like this one:

psql:outputfile.sql:82: WARNING:  nonstandard use of escape in a string literal
LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
^
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.

I had to do this recently with a lot of large .sql files, approximately 7 GB in size. Even Vim had trouble editing those. Your best bet is to import the .sql into MySQL and then export it as a CSV, which can then be imported into Postgres.

But the MySQL export to CSV is horrendously slow, as it runs a select * from yourtable query. If you have a large database/table, I would suggest using some other method. One way is to write a script that reads the SQL inserts line by line and uses string manipulation to reformat them into "Postgres-compliant" insert statements, and then executes these statements in Postgres.
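
For reference, a minimal sketch of the CSV round-trip, assuming a table named yourtable, a MySQL user with the FILE privilege, and a matching table already created on the Postgres side:

-- In MySQL: export the table as CSV
SELECT * FROM yourtable
INTO OUTFILE '/tmp/yourtable.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- In psql: load the CSV into the pre-created table
\copy yourtable FROM '/tmp/yourtable.csv' WITH (FORMAT csv)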

As with most database migrations, there isn't really a cut and dried solution.

These are some ideas to keep in mind when doing a migration:

  1. Data types aren't going to match. Some will, some won't. For example, SQL Server bits (boolean) don't have an equivalent in Oracle.
  2. Primary key sequences will be generated differently in each database.
  3. Foreign keys will be pointing to your new sequences.
  4. Indexes will be different and will probably need tweaking.
  5. Any stored procedures will have to be rewritten.
  6. Schemas. MySQL doesn't use them (at least not as of when I last used it), PostgreSQL does. Don't put everything in the public schema. It is a bad practice, but most apps (Django comes to mind) that support MySQL and PostgreSQL will try to make you use the public schema.
  7. Data migration. You are going to have to insert everything from the old database into the new one. This means disabling primary and foreign keys, inserting the data, then enabling them. Also, all of your new sequences will have to be reset to the highest id in each table; if not, the next record that is inserted will fail with a primary key violation (see the sketch after this list).
  8. Rewriting your code to work with the new database. It should work but probably won't.
  9. Don't forget the triggers. I use create and update date triggers on most of my tables. Each DB handles them a little differently.
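
For point 7, resetting a sequence to the highest id in a table looks like this in PostgreSQL (mytable and id are placeholder names):

SELECT setval(pg_get_serial_sequence('mytable', 'id'),
              (SELECT MAX(id) FROM mytable));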

Keep these in mind. The best way is probably to write a conversion utility. Have a happy conversion!

For those Googlers who are in 2015+.
I've wasted all day on this and would like to sum things up.

I've tried all the solutions described in this article by Alexandru Cotioras (which is full of despair). Of all the solutions mentioned there, only one worked for me.

lanyrd/mysql-postgresql-converter @ github.com (Python)

But this alone won't do. When you import your newly converted dump file:

# \i ~/Downloads/mysql-postgresql-converter-master/dump.psql

PostgreSQL will complain about the MySQL types it does not recognize:

psql:/Users/jibiel/Downloads/mysql-postgresql-converter-master/dump.psql:381: ERROR:  type "mediumint" does not exist
LINE 2:     "group_id" mediumint(8)  NOT NULL DEFAULT '0',

So you'll have to fix those types manually as per this table.

In short it is:

tinyint(2) -> smallint
mediumint(7) -> integer
# etc.

You can use regex and any cool editor to get it done.

MacVim + Substitute:

:%s!tinyint(\w\+)!smallint!g
:%s!mediumint(\w\+)!integer!g

Mac OS X

brew update && brew install pgloader


pgloader mysql://user@host/db_name postgresql://user@host/db_name

I could copy tables from MySQL to Postgres using DBCopy Plugin for SQuirreL SQL Client. This was not from a dump, but between live databases.

Use your xxx.sql file to set up a MySQL database and make use of FromMySqlToPostgreSql. It is very easy to use, the configuration is short, and it works like a charm. It imports your database with the primary keys, foreign keys and indices set on the tables. You can even import data alone if you set the appropriate flag in the config file.

The FromMySqlToPostgreSql migration tool by Anatoly Khaytovich provides an accurate migration of table data, indices, PKs, FKs... It makes extensive use of the PostgreSQL COPY protocol.

See here too: PG Wiki Page

You can use pgloader.

sudo apt-get install pgloader

Using:

pgloader mysql://user:pass@host/database postgresql://user:pass@host/database

Mac/Win

Download the Navicat 14-day trial (I don't understand the $1,300 price) - the full enterprise package:

Connect both databases, MySQL and Postgres.

Menu - Tools - Data Transfer

Connect both DBs on this first screen. While still on this screen, open General / Options and, under Options on the right side, check "Continue on error". Note: you probably want to uncheck indexes and keys on the left; you can recreate them easily in Postgres.

At least you'll get your data from MySQL into Postgres!

Hope this helps!

With pgloader

Get a recent version of pgloader; the one provided by Debian Jessie (as of 2019-01-27) is 3.1.0 and won't work since pgloader will error with

Can not find file mysql://...
Can not find file postgres://...

Access to MySQL source

First, make sure you can establish a connection to mysqld on the server running MySQL using

telnet theserverwithmysql 3306

If that fails with

Name or service not known

log in to theserverwithmysql and edit the configuration file of mysqld. If you don't know where the config file is, use find / -name mysqld.cnf.

In my case I had to change this line of mysqld.cnf

# By default we only accept connections from localhost
bind-address    = 127.0.0.1

to

bind-address    = *

Mind that allowing access to your MySQL database from all addresses can pose a security risk, meaning you probably want to change that value back after the database migration.
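
A more conservative alternative is to bind mysqld to one specific interface of the MySQL host instead of all of them (the address below is a placeholder):

bind-address    = 192.0.2.10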

Make the changes to mysqld.cnf effective by restarting mysqld.

Preparing the Postgres target

Assuming you are logged in on the system that runs Postgres, create the database with

createdb databasename

The user for the Postgres database has to have sufficient privileges to create the schema, otherwise you'll run into

permission denied for database databasename

when calling pgloader. I got this error although the user had the right to create databases according to psql > \du.

You can make sure of that in psql:

GRANT ALL PRIVILEGES ON DATABASE databasename TO otherusername;

Again, this might be privilege overkill and thus a security risk if you leave all those privileges with user otherusername.
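
If you'd rather not hand out everything, a narrower grant may already be sufficient, assuming pgloader only needs to connect and create schema objects:

GRANT CONNECT, CREATE ON DATABASE databasename TO otherusername;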

Migrate

Finally, the command

pgloader mysql://theusername:thepassword@theserverwithmysql/databasename postgresql://otherusername@localhost/databasename

executed on the machine running Postgres should produce output that ends with a line like this:

Total import time          ✓     877567   158.1 MB       1m11.230s

If you are using phpMyAdmin, you can export your data as CSV; then it will be easier to import into Postgres.

  1. Take a dump file of the MySQL database.
  2. Use this tool to convert a local MySQL database to a local PostgreSQL database.
  • Clone the repository into a new folder or the root directory:

    1. git clone https://github.com/AnatolyUss/nmig.git
    2. cd nmig
    3. git checkout v5.5.0
    4. Open config/config.json after checkout (e.g. with nano config/config.json).
    5. Add the source and target database details, along with username and password:
    "source": {
    "host": "localhost",
    "port": 3306,
    "database": "test_db",
    "charset": "utf8mb4",
    "supportBigNumbers": true,
    "user": "root",
    "password": "0123456789"
    }
    "target": {
    "host"     : "localhost",
    "port"     : 5432,
    "database" : "test_db",
    "charset"  : "UTF8",
    "user"     : "postgres",
    "password" : "0123456789"
    }
    
    1. After modifying config/config.json, run:
      1. npm install
      2. npm run build
      3. npm start
    2. After these commands finish, you will see that your MySQL database has been transferred to the PostgreSQL database.