Each dw2.large node has 0.16 TB of disk space. Since you said you have a cluster of 10 nodes, the total space is about 1.6 TB. You also mentioned that you have about 1.6 TB of raw (uncompressed) data to load into Redshift.
When you load data into Redshift using the COPY command, Redshift automatically applies compression encodings to your data as it loads it. Once you have loaded a table, you can check the compression encoding of each column with a query:
Select "column", type, encoding from pg_table_def where tablename = 'my_table_name'
First, load your data into a table that does not have a sort key and see what compression is applied. I suggest that you drop and recreate the table every time you load data during your testing, so that the compression encodings are analyzed again on each load. After you load your table using the COPY command, see the link below and run the script there to determine the table size:
http://docs.aws.amazon.com/redshift/latest/dg/c_analyzing-table-design.html
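If you just want a quick per-table number rather than the full per-column script from that page, a minimal sketch is to query SVV_TABLE_INFO, which reports size in 1 MB blocks along with other design statistics (the table name is a placeholder):

-- size is in 1 MB blocks; encoded shows whether compression is applied,
-- unsorted is the percentage of rows not in sort-key order
select "table", size as size_mb, tbl_rows, encoded, unsorted, pct_used
from svv_table_info
where "table" = 'my_table_name';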
This matters because when you add a sort key to your table and load the data, the sort key also takes up some disk space. A table without a sort key therefore needs less disk space than the same table with a sort key, so you need to make sure that compression is actually being applied to the table.
In short, using a sort key needs more storage space. When you do apply a sort key, you should also check whether you are loading the data in sort-key order, so that it is stored on disk already sorted. That way you can avoid running the VACUUM command to sort the table after loading the data.
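Here is a minimal sketch of that approach; the table definition, S3 path, and IAM role are hypothetical placeholders, and the files in S3 are assumed to already be sorted by event_time. After the load, the unsorted value from SVV_TABLE_INFO should be at or near 0, meaning no VACUUM is needed for sorting:

-- table with a sort key (columns are illustrative)
create table my_table_name (
    event_time timestamp,
    user_id    bigint,
    payload    varchar(256)
)
sortkey (event_time);

-- load files that are already ordered by event_time
copy my_table_name
from 's3://my-bucket/data-sorted-by-event-time/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
format as csv;

-- check how much of the table is unsorted (0 = fully sorted, no vacuum needed)
select "table", unsorted
from svv_table_info
where "table" = 'my_table_name';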