Greg Hennessy <greg.hennessy@gmail.com> writes:
>> Postgres has chosen to use only a small fraction of the CPUs I have on
>> my machine. Given the query returns an answer in about 8 seconds, it may be
>> that PostgreSQL has allocated the proper number of workers. But if I wanted
>> to try to tweak some config parameters to see if using more workers
>> would give me an answer faster, I don't seem to see any obvious knobs
>> to turn. Are there parameters that I can adjust to see if I can increase
>> throughput? Would adjusting parallel_setup_cost or parallel_tuple_cost
>> be likely to help?
See the bit about
* Select the number of workers based on the log of the size of
* the relation. This probably needs to be a good deal more
* sophisticated, but we need something here for now.
in compute_parallel_worker(). You can move things at the margins by
changing min_parallel_table_scan_size, but that logarithmic behavior
will constrain the number of workers pretty quickly. You'd have to
change that code to assign a whole bunch of workers to one scan.
(No, I don't know why it's done like that. There might be related
discussion in our archives, but finding it could be difficult.)
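
For reference, the scaling works roughly like this. Below is a minimal
standalone sketch of the loop in compute_parallel_worker(), not the
actual source: it assumes sizes are measured in 8kB pages and ignores
the index-size path, the overflow guard, and the final clamp to
max_parallel_workers_per_gather that the real code applies.

#include <stdio.h>

/*
 * Sketch only: one extra worker each time the table triples in size
 * past min_parallel_table_scan_size (default 8MB = 1024 pages).
 */
static int
sketch_parallel_workers(long heap_pages, long min_scan_pages)
{
	long		threshold = (min_scan_pages > 1) ? min_scan_pages : 1;
	int			workers = 1;

	while (heap_pages >= threshold * 3)
	{
		workers++;
		threshold *= 3;
	}
	return workers;
}

int
main(void)
{
	/* table sizes in 8kB pages: 8MB, 8GB, 800GB */
	long		sizes[] = {1024, 1048576, 104857600};

	for (int i = 0; i < 3; i++)
		printf("%10ld pages -> %d workers\n",
			   sizes[i], sketch_parallel_workers(sizes[i], 1024));
	return 0;
}

By that arithmetic an 8GB table rates about seven workers and even an
800GB one only about eleven, before the max_parallel_workers_per_gather
cap is applied. Dropping min_parallel_table_scan_size from the default
8MB to 1MB only buys log3(8), i.e. about two more workers; hence the
"at the margins" remark above.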
regards, tom lane