On Sun, Nov 25, 2018 at 03:37:46PM +0200, Mariel Cherkassky wrote:
> I run it from inside the machine on the local database.
> For example :
>
> db=# create table rule_test as select generate_series(1,100000000);
> SELECT 100000000
> db=# explain analyze select generate_series from rule_test order by
> generate_series asc;
So it's returning 100M rows to the client, which nominally means moving
400MB (100M int4 values at 4 bytes each).
And psql is formatting all of that output as text.
I did a test with 10M rows:
[pryzbyj@database ~]$ command time -v psql postgres -c 'SELECT * FROM rule_test' |wc -c&
        Command being timed: "psql postgres -c SELECT * FROM rule_test"
        User time (seconds): 11.52
        Percent of CPU this job got: 78%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:17.25
        Maximum resident set size (kbytes): 396244
        ...
170000053
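To separate the per-value text formatting from the raw transfer cost, one
option (a sketch, not part of the test above) is to stream the same table
with COPY in binary format and discard the output:

[pryzbyj@database ~]$ command time -v psql postgres \
        -c 'COPY rule_test TO STDOUT (FORMAT binary)' >/dev/null

If that run is much cheaper on the client, the time is going into
formatting and printing the rows, not into executing the query.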
Explain analyze takes 0.8sec, but returning the query results uses 11sec of
CPU time on the *client*, needs 400MB RAM (the ints are now represented as
strings instead of machine types), and writes 170MB to stdout.  Scale that
to your 100M-row query and psql alone wants roughly 4GB.  Also, if the
output is being piped to less, the data is buffered there as well, so the
query is perhaps using 4GB RAM in psql + 4GB in less.
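If the point is to read the rows rather than time them, psql's FETCH_COUNT
variable keeps the client's memory bounded by fetching the result through
a cursor in batches instead of buffering the whole thing (a sketch, batch
size chosen arbitrarily):

db=# \set FETCH_COUNT 10000
db=# select generate_series from rule_test order by generate_series asc;

Note FETCH_COUNT only helps for queries psql can run in a cursor; the
server may still materialize the sorted result, but psql won't.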
Is the server swapping?  Check the "si" and "so" columns in the output of
"vmstat -w 1": sustained nonzero values there mean pages are being swapped
in and out.
Justin