Thread: proposal: Allocate work_mem From Pool

proposal: Allocate work_mem From Pool

From
"Joseph D Wagner"
Date:
I'm new here, so forgive me if this is a bad idea or my lack of knowledge on
how to optimize PostgreSQL.

I find PostgreSQL to be great with a large number of small transactions,
which covers most use cases.  However, my experience has not been so great
on the opposite end -- a small number of large transactions, i.e. Big Data.

I had to increase work_mem to 3GB to stop my queries from spilling to disk.
However, that's risky because it's 3GB per operation, not per
query/connection; it could easily spiral out of control.
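Since work_mem is a per-operation cap (each sort or hash node in each concurrent query can use up to that much), the worst case multiplies quickly.  A back-of-the-envelope sketch, where the node and query counts are illustrative assumptions rather than figures from this thread:

```python
# work_mem is a limit per sort/hash operation, not per query or connection.
# Only work_mem (3 GB) and total RAM (64 GB) come from this thread; the
# node and query counts below are illustrative assumptions.
work_mem_gb = 3          # configured work_mem
sort_hash_nodes = 4      # e.g. a plan with two sorts and two hash joins
concurrent_queries = 5   # a "small number of large transactions"

worst_case_gb = work_mem_gb * sort_hash_nodes * concurrent_queries
print(worst_case_gb)     # 60 -- nearly the whole 64 GB box
```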

I think it would be better if work_mem were allocated from a pool of memory
as needed and returned to the pool when no longer needed.  The pool could
optionally be allocated from huge pages.  It would allow large and mixed
workloads the flexibility of grabbing more memory as needed without spilling
to disk while simultaneously being more deterministic about the maximum that
will be used.

Thoughts?

Thank you for your time.

Joseph D. Wagner

My specifics:
 - 64 GB box
 - 16 GB shared buffers, although queries only use about 12 GB of that
 - 16 GB effective cache
 - 2-3 GB used by the OS and apps
 - the rest is available for PostgreSQL queries/connections/whatever as
needed




Re: proposal: Allocate work_mem From Pool

From
Justin Pryzby
Date:
On Sun, Jul 10, 2022 at 08:45:38PM -0700, Joseph D Wagner wrote:

> However, that's risky because it's 3GB per operation, not per
> query/connection; it could easily spiral out of control.

This is a well-known deficiency.
I suggest digging up the old threads and looking into them.
It's also useful to include links to the prior discussion.

> I think it would be better if work_mem was allocated from a pool of memory

I think this has been proposed before, and the issue/objection with this idea
is probably that query plans will be inconsistent, and end up being
sub-optimal.

work_mem is considered at planning time, but I think you're only considering
its application at execution time.  A query that was planned with the
configured work_mem but can't obtain the expected amount at execution time
might perform poorly.  Maybe it should be replanned with lower work_mem, but
that would lose the arms-length relationship between the planner and executor.
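To illustrate the planning-time dependency (a hypothetical sketch; the table and columns are made up), the same query can get a different plan depending on the work_mem visible when it is planned:

```sql
-- Hypothetical table; the plans noted in comments are typical, not guaranteed.
SET work_mem = '3GB';
EXPLAIN SELECT customer_id, sum(amount) FROM big_orders GROUP BY customer_id;
-- likely a HashAggregate, since the hash table is expected to fit in memory

SET work_mem = '4MB';
EXPLAIN SELECT customer_id, sum(amount) FROM big_orders GROUP BY customer_id;
-- may switch to GroupAggregate over a Sort that spills to disk
```

If the executor later received less memory than the planner assumed, the plan chosen under '3GB' would behave like neither of these.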

Should an expensive query wait a bit to try to get more work_mem?
What do you do if 3 expensive queries are all waiting ?

-- 
Justin



RE: proposal: Allocate work_mem From Pool

From
"Joseph D Wagner"
Date:
>> I think it would be better if work_mem was allocated from a pool
>> of  memory

> I think this has been proposed before, and the issue/objection
> with this idea is probably that query plans will be inconsistent,
> and end up being sub-optimal.

> work_mem is considered at planning time, but I think you're
> only considering its application at execution time.  A query
> that was planned with the configured work_mem but can't
> obtain the expected amount at execution time might perform
> poorly.  Maybe it should be replanned with lower work_mem,
> but that would lose the arms-length relationship between the
> planner and executor.

> Should an expensive query wait a bit to try to get more
> work_mem? What do you do if 3 expensive queries are all
> waiting ?

Before I try to answer that, I need to know how the scheduler works.

Let's say there's a max of 8 worker processes, and 12 queries trying to run.
When does query #9 run?  After the first of 1-8 completes, simple FIFO?
Or something else?

Also, how long does a query hold a worker process?  All the way to
completion?  Or does it perform some unit of work and rotate to
another query?

Joseph D Wagner

P.S.  If there's a link to all this somewhere, please let me know.
Parsing through years of email archives is not always user friendly or
helpful.




Re: proposal: Allocate work_mem From Pool

From
John Naylor
Date:

On Tue, Jul 12, 2022 at 5:55 PM Joseph D Wagner <joe@josephdwagner.info> wrote:
> Before I try to answer that, I need to know how the scheduler works.

As I understand the term used, there is no scheduler inside Postgres for user connections -- they're handled by the OS kernel. That's probably why it'd be a difficult project to be smart about memory -- step one might be to invent a scheduler. (The autovacuum launcher and checkpointer, etc have their own logic about when to do things, but while running they too are just OS processes scheduled by the kernel.)

--
John Naylor
EDB: http://www.enterprisedb.com

Re: proposal: Allocate work_mem From Pool

From
Justin Pryzby
Date:
On Tue, Jul 12, 2022 at 03:55:39AM -0700, Joseph D Wagner wrote:
> Before I try to answer that, I need to know how the scheduler works.
> 
> Let's say there's a max of 8 worker processes, and 12 queries trying to run.
> When does query #9 run?  After the first of 1-8 completes, simple FIFO?
> Or something else?
> 
> Also, how long does a query hold a worker process?  All the way to
> completion?  Or does it perform some unit of work and rotate to
> another query?

I think what you're referring to as a worker process is what postgres refers to
as a "client backend" (and not a "parallel worker", even though that sounds
more similar to your phrase).
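
One way to see the distinction, assuming PostgreSQL v10 or later (which added the backend_type column to pg_stat_activity):

```sql
-- "client backend" rows are user connections running queries;
-- "parallel worker" rows exist only while a parallel query is running.
SELECT backend_type, count(*)
FROM pg_stat_activity
GROUP BY backend_type;
```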

> P.S.  If there's a link to all this somewhere, please let me know.
> Parsing through years of email archives is not always user friendly or
> helpful.

Looking at historic communication is probably the easy part.
Here's some to start you out.
https://www.postgresql.org/message-id/flat/4d39869f4bdc42b3a43004e3685ac45d%40index.de

-- 
Justin



Re: proposal: Allocate work_mem From Pool

From
Joseph D Wagner
Date:
>> Before I try to answer that, I need to know how the scheduler works.

> As I understand the term used, there is no scheduler inside Postgres
> for user connections -- they're handled by the OS kernel.

Then, I'm probably using the wrong term. Right now, I have
max_worker_processes set to 16. What happens when query #17
wants some work done? What do you call the thing that handles
that? What is its algorithm for allocating work to the processes?
Or am I completely misunderstanding the role worker processes
play in execution?

Joseph Wagner



Re: proposal: Allocate work_mem From Pool

From
Justin Pryzby
Date:
On Tue, Jul 12, 2022 at 08:49:10PM -0700, Joseph D Wagner wrote:
> > > Before I try to answer that, I need to know how the scheduler works.
> 
> > As I understand the term used, there is no scheduler inside Postgres
> > for user connections -- they're handled by the OS kernel.
> 
> Then, I'm probably using the wrong term. Right now, I have
> max_worker_processes set to 16. What happens when query #17
> wants some work done? What do you call the thing that handles
> that? What is its algorithm for allocating work to the processes?
> Or am I completely misunderstanding the role worker processes
> play in execution?

max_connections limits the number of client connections (queries).
Background workers are a relatively new thing - they didn't exist until v9.3.

There is no scheduler, unless you run a connection pooler between the
application and the DB.  Which you should probably do, since you've set
work_mem measured in GB on a server with 10s of GB of RAM, and while using
partitioning.
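
For example, a minimal PgBouncer sketch (all values illustrative, not a recommendation) that caps the number of live backends, and therefore the number of concurrent work_mem consumers:

```ini
; pgbouncer.ini -- illustrative values only
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; release the server connection after each transaction
max_client_conn = 200     ; clients the pooler will accept
default_pool_size = 8     ; actual server backends per database/user pair
```

With 8 backends and work_mem = 3GB, the worst case is bounded by 8 times the per-query operation count, rather than by max_connections.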

https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS
|max_connections (integer)
|
|    Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as determined during initdb). This parameter can only be set at server start.


https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-WORKER-PROCESSES
|max_worker_processes (integer)
|    Sets the maximum number of background processes that the system can support. This parameter can only be set at server start. The default is 8.
|
|    When running a standby server, you must set this parameter to the same or higher value than on the primary server. Otherwise, queries will not be allowed in the standby server.
|
|    When changing this value, consider also adjusting max_parallel_workers, max_parallel_maintenance_workers, and max_parallel_workers_per_gather.

https://www.postgresql.org/docs/current/connect-estab.html
|PostgreSQL implements a “process per user” client/server model. In this model, every client process connects to exactly one backend process. As we do not know ahead of time how many connections will be made, we have to use a “supervisor process” that spawns a new backend process every time a connection is requested. This supervisor process is called postmaster and listens at a specified TCP/IP port for incoming connections. Whenever it detects a request for a connection, it spawns a new backend process. Those backend processes communicate with each other and with other processes of the instance using semaphores and shared memory to ensure data integrity throughout concurrent data access.
|
|The client process can be any program that understands the PostgreSQL protocol described in Chapter 53. Many clients are based on the C-language library libpq, but several independent implementations of the protocol exist, such as the Java JDBC driver.
 
|
|Once a connection is established, the client process can send a query to the backend process it's connected to. The query is transmitted using plain text, i.e., there is no parsing done in the client. The backend process parses the query, creates an execution plan, executes the plan, and returns the retrieved rows to the client by transmitting them over the established connection.


https://www.postgresql.org/docs/current/tutorial-arch.html
| The PostgreSQL server can handle multiple concurrent connections from clients. To achieve this it starts (“forks”) a new process for each connection. From that point on, the client and the new server process communicate without intervention by the original postgres process. Thus, the supervisor server process is always running, waiting for client connections, whereas client and associated server processes come and go. (All of this is of course invisible to the user. We only mention it here for completeness.)
 

https://wiki.postgresql.org/wiki/FAQ#How_does_PostgreSQL_use_CPU_resources.3F
|The PostgreSQL server is process-based (not threaded). Each database session connects to a single PostgreSQL operating system (OS) process. Multiple sessions are automatically spread across all available CPUs by the OS. The OS also uses CPUs to handle disk I/O and run other non-database tasks. Client applications can use threads, each of which connects to a separate database process. Since version 9.6, portions of some queries can be run in parallel, in separate OS processes, allowing use of multiple CPU cores. Parallel queries are enabled by default in version 10 (max_parallel_workers_per_gather), with additional parallelism expected in future releases.
 

BTW, since this is amply documented, I have to point out that it's not on-topic
for the development list.

-- 
Justin