Compressing an existing large database
Since compression affects only rows that are touched after compression is turned on, an existing large database will not be compressed simply by enabling the setting. To create a compressed copy of an existing database, the following options are available.
Export / Import
This approach is based on creating an empty database with compression enabled, running the DDL commands to create the data model, and finally importing the exported data into the new database, which produces a compressed copy of the database. For large databases this is a time-consuming operation, possibly leading to out-of-service time. Also, getting the import order right to avoid foreign key violations requires engineering effort, as illustrated below.
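As a minimal sketch of the import-ordering concern, consider two hypothetical tables where ORDERS references CUSTOMER (the table and column names are illustrative, not part of the product): the referenced table must be created and loaded first.

    CREATE TABLE customer (
        cust_id INTEGER PRIMARY KEY,
        name    VARCHAR(100)
    );
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customer (cust_id)
    );
    -- Import in dependency order: parent rows first, then children.
    INSERT INTO customer VALUES (1, 'Example Corp');
    INSERT INTO orders VALUES (100, 1);
    COMMIT WORK;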
Process using temporary tables
This approach is logically very similar to export/import. It is based on turning compression on and then, for each table, populating a compressed copy by running INSERT INTO <work_table> (SELECT * FROM <orig_table>), dropping the original table, and renaming the work table. Foreign key ordering poses the same challenge here as it does for export/import. This approach also requires reorganizing the database to benefit from the released disk blocks. A sketch for a single table follows.
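A minimal sketch of the process for one hypothetical table, assuming compression has already been enabled so that newly inserted rows are written into compressed blocks (table names are illustrative, and the rename syntax varies by product):

    -- Recreate the structure of the original table under a work name,
    -- using the same DDL as the original (columns shown are examples).
    CREATE TABLE orders_work (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER
    );

    -- Copy the rows; the copies are stored compressed because
    -- compression is already on.
    INSERT INTO orders_work (SELECT * FROM orders);

    -- Replace the original table with the compressed copy. Any foreign
    -- keys referencing ORDERS must be dropped and recreated, which is
    -- the ordering challenge noted above.
    DROP TABLE orders;
    ALTER TABLE orders_work RENAME TO orders;  -- rename syntax is product-specific
    COMMIT WORK;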
Note that:
Even updating all the rows in a noncompressed table after the compression property has been set will not necessarily compress the data. If the updated rows still fit into the same node, there will be no node splits, and therefore no actual compression.
Running backup or HotStandby netcopy does not change the compression status of the disk blocks that are copied: compressed blocks stay compressed, and uncompressed blocks stay uncompressed.
See also
Data compression