Re: Loading 500m json files to database - Mailing list pgsql-general

From Rob Sargent
Subject Re: Loading 500m json files to database
Date
Msg-id acf05c41-3f9d-64fd-3823-bfd09bbeb009@gmail.com
In response to Loading 500m json files to database  (pinker <pinker@onet.eu>)
List pgsql-general

On 3/23/20 4:24 AM, pinker wrote:
> Hi, do you maybe have an idea how to make the loading process faster?
>
> I have 500 million json files (1 json per file) that I need to load into
> the db.
> My test set is "only" 1 million files.
>
> What I came up with now is:
>
> time for i in datafiles/*; do
>    psql -c "\copy json_parts(json_data) FROM $i"&
> done
>
> which is the fastest so far, but it's not what I expected. Loading 1m files
> takes me ~3h, so loading 500 times more is just unacceptable.
>
> some facts:
> * the target db is in the cloud, so there is no option to do tricks like
> turning fsync off
> * version postgres 11
> * I can spin up a huge postgres instance if necessary in terms of cpu/ram
> * I already tried hash partitioning (to write to 10 different tables
> instead of 1)
>
>
> Any ideas?
>
Most advanced languages have a bulk copy implementation.  I've found this to be blindingly fast when the receiving
table has no indices or constraints.  It's not clear how large your files are, but you might take this time to
"normalize" them: extract any id, datatype, etc. into table attributes.
 

> --
> Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>



