Changing a column's data type in Amazon Redshift

How can I change the data type of a column in an Amazon Redshift database?

I am unable to change a column's data type in Redshift; is there any way to modify the data type of a column in Amazon Redshift?


As noted in the ALTER TABLE documentation, you can change the length of VARCHAR columns using:

ALTER TABLE table_name
ALTER COLUMN column_name TYPE new_data_type;
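For instance, widening a hypothetical VARCHAR column (the table and column names below are made up for illustration):

-- Redshift allows increasing the size of a VARCHAR column in place.
ALTER TABLE events ALTER COLUMN event_name TYPE varchar(512);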

For other column types, all I can think of is to add a new column with the correct data type, copy all the data from the old column into the new one, and finally drop the old column.

Use code similar to this:

ALTER TABLE t1 ADD COLUMN new_column ___correct_column_type___;
UPDATE t1 SET new_column = column;
ALTER TABLE t1 DROP COLUMN column;
ALTER TABLE t1 RENAME COLUMN new_column TO column;

There will be a schema change: the newly added column will be last in the table (that may be a problem with the COPY statement; keep in mind that you can define an explicit column order with COPY).
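For example, a COPY with an explicit column list looks roughly like this (the bucket path, IAM role, and column names are placeholders):

-- Explicit column list, so the file's column order need not match the table's
-- new physical column order; bucket and role below are placeholders.
COPY t1 (id, col_a, col_b)
FROM 's3://my-bucket/prefix/part'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;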

ALTER TABLE publisher_catalogs ADD COLUMN new_version integer;


UPDATE publisher_catalogs SET new_version = CAST(version AS integer);
ALTER TABLE publisher_catalogs DROP COLUMN version RESTRICT;
ALTER TABLE publisher_catalogs RENAME COLUMN new_version TO version;

To avoid the schema change mentioned by Tomasz:

BEGIN TRANSACTION;


ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
INSERT INTO <TABLE_NAME> (<COLUMNS>)
SELECT <COLUMNS>
FROM <TABLE_NAME>_OLD;
DROP TABLE <TABLE_NAME>_OLD;


END TRANSACTION;
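Filled in with hypothetical names (a table my_table whose col_b changes from INTEGER to VARCHAR), the template above would look something like this:

BEGIN TRANSACTION;

-- my_table, col_a and col_b are hypothetical names used only for illustration.
ALTER TABLE my_table RENAME TO my_table_old;
CREATE TABLE my_table (
    col_a BIGINT,
    col_b VARCHAR(64)                 -- previously INTEGER
);
INSERT INTO my_table (col_a, col_b)
SELECT col_a, CAST(col_b AS VARCHAR(64))
FROM my_table_old;
DROP TABLE my_table_old;

END TRANSACTION;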

If you don't want to change the column order, an option is to create a temp table, drop and recreate the original table with the desired size, and then bulk-load the data back in.

CREATE TEMP TABLE temp_table AS SELECT * FROM original_table;
DROP TABLE original_table;
CREATE TABLE original_table ...
INSERT INTO original_table SELECT * FROM temp_table;

The only problem with recreating the table is that you will need to grant permissions again, and if the table is very big it will take a while.
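For example, the re-grant step might look like this (the group and user names below are placeholders):

-- Re-apply permissions after recreating the table; names below are placeholders.
GRANT SELECT, INSERT, UPDATE, DELETE ON original_table TO GROUP reporting_users;
ALTER TABLE original_table OWNER TO etl_user;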

Redshift, being a columnar database, doesn't allow you to modify the data type directly; however, below is one approach, though it will change the column order.

Steps (see the sketch after this list):

1. Alter the table to add a new column with the desired data type.
2. Update the new column with the old column's values.
3. Alter the table to drop the old column.
4. Alter the table to rename the new column to the old column's name.
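A minimal sketch of those steps, assuming a hypothetical table orders whose amount column changes from INTEGER to DECIMAL:

-- Hypothetical table and column names; pick the target type you actually need.
ALTER TABLE orders ADD COLUMN amount_new DECIMAL(12,2);
UPDATE orders SET amount_new = CAST(amount AS DECIMAL(12,2));
ALTER TABLE orders DROP COLUMN amount;
ALTER TABLE orders RENAME COLUMN amount_new TO amount;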

If you don't want to alter the order of the columns, then the solution would be to (see the sketch after this list):

1. Create a new temp table with the new column definition.
2. Copy the data from the old table to the new table.
3. Drop the old table.
4. Rename the new table to the old table's name.

One important thing: create the new table using the LIKE command instead of a simple CREATE, so that table attributes such as the distribution and sort keys are carried over.
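A minimal sketch of that approach, assuming a hypothetical table customers whose phone column changes from BIGINT to VARCHAR. The new table's definition is written out explicitly here (with its DISTKEY/SORTKEY repeated) rather than via CREATE TABLE ... LIKE, since LIKE would copy the old column type as well:

-- Hypothetical names; DISTKEY/SORTKEY shown only to illustrate carrying
-- table attributes over to the new table.
CREATE TABLE customers_new (
    id    BIGINT,
    phone VARCHAR(20)                 -- previously BIGINT
)
DISTKEY (id)
SORTKEY (id);

INSERT INTO customers_new (id, phone)
SELECT id, CAST(phone AS VARCHAR(20))
FROM customers;

DROP TABLE customers;
ALTER TABLE customers_new RENAME TO customers;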

For updating values in the same column in Redshift, this would work fine:

UPDATE table_name
SET column_name = 'new_value'
WHERE column_name = 'old_value';

You can have multiple clauses in the WHERE by using AND, so as to remove any ambiguity for SQL.

cheers!!

This method works for converting a (big)int column into a varchar:

-- Create a backup of the original table
create table original_table_backup as select * from original_table;


-- Drop the original table, and then recreate with new desired data types
drop table original_table;


create table original_table (
col1 bigint,
col2 varchar(20) -- changed from bigint
);


-- insert original entries back into the new table
insert into original_table select * from original_table_backup;


-- cleanup
drop table original_table_backup;

(Recent update) It's possible to alter the type for varchar columns in Redshift.

ALTER COLUMN column_name TYPE new_data_type

Example:

CREATE TABLE t1 (c1 varchar(100))


ALTER TABLE t1 ALTER COLUMN c1 TYPE varchar(200)

See the ALTER TABLE documentation for details.

UNLOAD and COPY with a table-rename strategy should be the most efficient way to do this operation if retaining the table structure (row order) is important.

Here is an example that builds on the transaction-based answer above.

BEGIN TRANSACTION;


ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
UNLOAD ('select * from <TABLE_NAME>_OLD') TO 's3://bucket/key/unload_' MANIFEST;
-- COPY reads the manifest written by UNLOAD; both commands also need
-- authorization, e.g. an IAM_ROLE clause (omitted here).
COPY <TABLE_NAME> FROM 's3://bucket/key/unload_manifest' MANIFEST;


END TRANSACTION;

You can use the statements below:

ALTER TABLE <table_name>                        -- e.g. etl_proj_atm.dim_card_type
ALTER COLUMN <column_name> TYPE varchar(30);    -- e.g. card_type