Re: limiting resources to users - Mailing list pgsql-general

From: Greg Smith
Subject: Re: limiting resources to users
Date:
Msg-id: 4B148E72.3070007@2ndquadrant.com
In response to: Re: limiting resources to users  (Craig Ringer <craig@postnewspapers.com.au>)
Responses: Re: limiting resources to users  (Craig Ringer <craig@postnewspapers.com.au>)
List: pgsql-general
Craig Ringer wrote:
> I assume you look up the associated backend by looking up the source
> IP and port of the client with `netstat', `lsof', etc, and matching
> that to pg_stat_activity?
There's a bunch of ways I've seen this done:

1) If you spawn the psql process from bash in the background with "&", you
can find its pid with "$!" and then chain through the process tree with ps
and pg_stat_activity as needed to figure out the backend pid (rough sketch
after this list).
2) If you know the query being run and it's unique (often the case with
daily batch jobs, for example), you can search for it directly in the
query text pg_stat_activity shows (see the second sketch below).
3) Sometimes the only queries you want to re-nice are local, while
everything else is remote.  You can narrow down the candidate pids that way.
4) Massage data from netstat, lsof, or similar tools to figure out which
process you want.
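
To make (1) concrete, here's a rough, untested sketch that borrows a bit
from (4) as well.  It assumes a TCP connection to 127.0.0.1; the batch.sql
file and the nice level are made up, and procpid is the 8.4 column name
(later releases just call it pid):

psql -h 127.0.0.1 -f batch.sql &
CLIENT_PID=$!
sleep 1   # give the client a moment to actually connect

# Local TCP port the psql client is using for its connection
PORT=$(lsof -nP -a -i tcp -p "$CLIENT_PID" -F n |
  sed -n 's/^n[^:]*:\([0-9]*\)->.*/\1/p' | head -1)

# Ask the server which backend is on the other end of that port
BACKEND_PID=$(psql -At -c \
  "SELECT procpid FROM pg_stat_activity WHERE client_port = $PORT")

# Has to run as the backend's owner (usually postgres) or root
renice -n 10 -p "$BACKEND_PID"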
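
For (2) and (3) the lookup is just a filter on pg_stat_activity; something
along these lines, where the query text is only an example (again it's
procpid/current_query on 8.4, pid/query in later releases):

# Find the backend by its unique query text, optionally restricted to
# local (Unix socket) connections, i.e. client_addr IS NULL
BACKEND_PID=$(psql -At -c "
  SELECT procpid
    FROM pg_stat_activity
   WHERE current_query LIKE 'INSERT INTO nightly_summary%'
     AND client_addr IS NULL")
renice -n 10 -p "$BACKEND_PID"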

> It makes me wonder if it'd be handy to have a command-line option for
> psql that caused it to spit the backend pid out on stderr.
Inspired by this idea, I just thought of yet another approach.  Put this
at the beginning of something you want to track:

COPY (SELECT pg_backend_pid()) TO '/place/to/save/pid';

Not so useful if more than one instance of the query is running at once,
but in the "nice a batch job" context it might be usable.

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com

