On 8/9/19 8:14 AM, Shital A wrote:
>
>
> On Fri, 9 Aug 2019, 20:08 Adrian Klaver, <adrian.klaver@aklaver.com
> <mailto:adrian.klaver@aklaver.com>> wrote:
>
> On 8/9/19 4:12 AM, Shital A wrote:
> > Hello
> >
> > Postgresql 9.6
> >
> > Need to generate 1 GB of test data in very little time. I found some
> > techniques online, but they take around 40 mins for 400GB. Is there a
> > quicker way?
>
> 1) Postgres version?
>
> 2) Data going into single table or multiple tables?
>
> 3) What is the time frame you are trying to achieve?
>
> 4) What techniques have you tried?
>
> 5) If you need only 1GB why the 400GB number?
>
>
> >
> >
> > Thanks.
> >
>
>
> --
> Adrian Klaver
> adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>
>
>
>
> Hello,
>
> Sorry, 400GB was a typo. It's 400 MB. Details are below:
>
> 1) Postgres version?
> 9.6
>
> 2) Data going into single table or multiple tables?
> A single table with multiple columns.
>
> 3) What is the time frame you are trying to achieve?
> As quick as possible.
>
> 4) What techniques have you tried?
> INSERT INTO with a WITH statement, inserting 2,000,000 rows at a time.
> This takes 40 mins.
Might take a look at COPY:
https://www.postgresql.org/docs/11/sql-copy.html
or its psql equivalent:
\copy.
COPY runs as the server user and sees files relative to the server location.
\copy runs as the psql (client) user and reads files on the client side.
In either case suitability will depend on where you are sourcing the
test data from and what format it is in to begin with.
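If the data is already in a delimited file on the client machine, a minimal
\copy invocation looks like this (the table name, columns, and file path are
placeholders, not from the original thread):

```sql
-- Hypothetical table; adjust columns to match the actual data.
CREATE TABLE test_data (id int, val text);

-- Load a client-side CSV; \copy streams it through the psql connection,
-- so the file does not need to be readable by the server process.
\copy test_data FROM '/path/to/test_data.csv' WITH (FORMAT csv)
```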
If you want a more sophisticated wrapper over the above then:
https://ossc-db.github.io/pg_bulkload/pg_bulkload.html
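If the goal is simply bulk test data and the exact content does not matter,
it can also be generated entirely server-side with generate_series, avoiding
file loading altogether. A sketch (table name and value expressions are made
up for illustration):

```sql
-- Hypothetical table for generated data.
CREATE TABLE test_data (id int, val text, created timestamptz);

-- One INSERT ... SELECT; no data crosses the client/server wire.
INSERT INTO test_data
SELECT g,
       md5(g::text),                           -- arbitrary filler text
       now() - (g || ' seconds')::interval     -- spread of timestamps
FROM generate_series(1, 2000000) AS g;
```

Check the resulting size with \l+ or pg_total_relation_size() and scale the
series bounds (or widen the rows) until you reach the 1 GB target.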
>
> 5) If you need only 1GB why the 400GB number?
> That was 400 MB. I'm checking the size using the \l+ option.
>
>
> Thanks.
>
>
--
Adrian Klaver
adrian.klaver@aklaver.com