Thread: change bigint column to int

change bigint column to int

From
Mariel Cherkassky
Date:
Hi,

I want to change the column type from bigint to int to save space. Are there any risks in doing so? As I see it, I only need to run VACUUM FULL afterwards if I want to return the space to the OS, right?

Re: change bigint column to int

From
Shreeyansh Dba
Date:
Hi Mariel,

After changing the column from bigint to int, its size is reduced.

But this approach has two huge and interrelated problems: locking and time. It locks the table for as long as the command takes to run, and the lock it takes is a heavy ACCESS EXCLUSIVE lock, which shuts everything else out of the table. If your table is small, this is not an issue.
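For reference, the direct in-place conversion under discussion is a single statement like this (table and column names are placeholders):

```sql
-- Rewrites every row of t while holding an ACCESS EXCLUSIVE lock:
ALTER TABLE t ALTER COLUMN c TYPE int;
```

On a large table, that lock is held for the full duration of the rewrite.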




On Thu, Nov 1, 2018 at 3:51 PM Mariel Cherkassky <mariel.cherkassky@gmail.com> wrote:

Re: change bigint column to int

From
Mariel Cherkassky
Date:
You are talking about running ALTER TABLE x ALTER COLUMN ... TYPE int; I agree with you that it will take an AccessExclusiveLock on the entire table. But what if instead I do all of the following steps:
1. Add a new column.
2. Update the values in the new column.
3. Drop constraints on the old column.
4. Drop the old column.
5. Rename the new column to the old name.
6. Add new constraints on the new column.

I think that in this case the only steps that will require an AccessExclusiveLock are steps 5 and 6, while the other steps will only require a RowExclusiveLock.
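Sketched in SQL, assuming a table t with a bigint column c (all names and the example constraint are hypothetical; constraint details will vary):

```sql
ALTER TABLE t ADD COLUMN c_new int;        -- 1: metadata-only, fast

UPDATE t SET c_new = c;                    -- 2: consider batching by PK range
                                           --    to keep transactions short

BEGIN;
ALTER TABLE t DROP CONSTRAINT t_c_check;   -- 3: your constraint names will differ
ALTER TABLE t DROP COLUMN c;               -- 4
ALTER TABLE t RENAME COLUMN c_new TO c;    -- 5
ALTER TABLE t ADD CONSTRAINT t_c_check
  CHECK (c >= 0);                          -- 6: example constraint only
COMMIT;
```

Note that the DDL in steps 3-6 still takes ACCESS EXCLUSIVE locks, but each statement is brief because no data is rewritten.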

What do you think ? 

On Thu, Nov 1, 2018 at 13:08, Shreeyansh Dba <shreeyansh2014@gmail.com> wrote:

Re: change bigint column to int

From
Evan Bauer
Date:
Mariel,

Just run a query and make certain that all values currently stored in the column fit in 4 bytes (-2,147,483,648 to +2,147,483,647) before doing the ALTER.
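That pre-check could be as simple as this (table and column names are placeholders):

```sql
-- Expect zero rows outside the 4-byte int range before running the ALTER:
SELECT count(*) FROM t
WHERE c NOT BETWEEN -2147483648 AND 2147483647;
```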

Cheers,

Evan

Sent from my iPhone

On Nov 1, 2018, at 06:20, Mariel Cherkassky <mariel.cherkassky@gmail.com> wrote:


Re: change bigint column to int

From
Mariel Cherkassky
Date:
I want to do it on very big tables (100M records). Do you think it will be slow? I mean, if the table is under an AccessExclusiveLock for a long time, I can't just run the query and wait until it finishes.

On Thu, Nov 1, 2018 at 13:55, Evan Bauer <evanbauer@mac.com> wrote:

Re: change bigint column to int

From
Evan Bauer
Date:
Mariel,

My suggestion was to make certain that you don't find yourself 90% of the way through the shrink process only to have it unexpectedly fail with an "integer out of range" error like this:

ares_net=# create table test_int (test_big bigint, test_int int);
CREATE TABLE
ares_net=# insert into test_int values (5000000000, 0);
INSERT 0 1
ares_net=# select * from test_int;
  test_big  | test_int 
------------+----------
 5000000000 |        0
(1 row)

ares_net=# update test_int set test_int = test_big;
ERROR:  integer out of range
ares_net=# 

If you already know (because of domain knowledge or because of constraints on how the data was loaded and updated) that all of your values are in the int range, then this check is unnecessary. If you aren't sure, and checking means reading unindexed columns (which would require a full table scan anyway), then do the update in a procedure that verifies each 8-byte value will fit into a 4-byte int, and does something appropriate when it doesn't.
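One way to express that guarded update, reusing the test_int table from the session above (a sketch, not a tested migration; the "appropriate thing" here is simply aborting with a clear message):

```sql
DO $$
DECLARE
  bad bigint;
BEGIN
  -- Count values that cannot fit into a 4-byte int:
  SELECT count(*) INTO bad
  FROM test_int
  WHERE test_big NOT BETWEEN -2147483648 AND 2147483647;
  IF bad > 0 THEN
    RAISE EXCEPTION '% row(s) exceed the int range; fix them first', bad;
  END IF;
  -- Safe to copy now:
  UPDATE test_int SET test_int = test_big;
END $$;
```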

Cheers,

- Evan

Evan Bauer
eb@evanbauer.com
+1 646 641 2973
Skype: evanbauer


On Nov 1, 2018, at 08:54, Mariel Cherkassky <mariel.cherkassky@gmail.com> wrote:



Re: change bigint column to int

From
Mariel Cherkassky
Date:
Yeah, I understood you, Evan. Thanks!
I was asking about any other risks, assuming all values in my column do fit into an int column.

On Thu, Nov 1, 2018 at 15:52, Evan Bauer <evanbauer@mac.com> wrote: