Thread: How to configure a read-only database server and session? (Was: read-only UNLOGGED tables)

I have a single-disk virtual Linux system and a read-only dataset
which is exposed to the internet and completely replaced from time to
time.

I compiled the following steps to secure and speed up such a
PostgreSQL/PostGIS instance:

1. Re-configure the PostgreSQL server as follows:

  a. Disable the autovacuum daemon.
  b. Set these postgresql.conf parameters:
    fsync=off
    synchronous_commit=off
    full_page_writes=off

2. Restart the server, log in as db admin, create a database, create an app user.

3. Load the dataset:
  a. with owner 'app user' in schema PUBLIC;
  b. create indexes;
  c. issue a VACUUM ANALYZE command on user tables.

4. Create a 'read-only' user (login role) with only read access to
user-defined tables:
    GRANT SELECT ... TO read_only_user

5. Optimize and secure session by following parameters:

    SET transaction_read_only TO TRUE;
    SET TRANSACTION READ ONLY;

6. Go to step 3 in case of new data or a hard disk crash.
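Steps 4 and 5 could be sketched concretely like this (database and role
names are placeholders; GRANT ... ON ALL TABLES requires PostgreSQL 9.0
or later, and the ALTER ROLE at the end makes the read-only default
apply to every session of that role rather than having to SET it each
time):

```sql
-- Placeholder names: database "gisdb", role "read_only_user".
CREATE ROLE read_only_user LOGIN PASSWORD 'change_me';
GRANT CONNECT ON DATABASE gisdb TO read_only_user;
GRANT USAGE ON SCHEMA public TO read_only_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only_user;
-- Advisory only, not a security boundary (the role can SET it back):
ALTER ROLE read_only_user SET default_transaction_read_only = on;
```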

Questions:
=> Any comments on securing such a PostgreSQL instance further?
   All user tables reside in schema PUBLIC, Ok?

=> Any comments on making this PostgreSQL instance 'robust'?
   E.g. which situations (except for hard disk crashes) can leave a
read-only dataset in an inconsistent state where the PostgreSQL server
can't restart? An immediate shutdown?

=> Any comments on speeding up/optimizing such a read-only dataset?
   What about wal_level and archive_mode?

Yours, Stefan

On 04/18/2011 04:04 AM, Stefan Keller wrote:

> 5. Optimize and secure session by following parameters:
>
>      SET transaction_read_only TO TRUE;
>      SET TRANSACTION READ ONLY;

AFAIK, neither of those has any effect on security. They're purely
advisory hints to Pg.

Personally I think it'd be cool if read-only transactions were denied
the use of INSERT/UPDATE/DELETE, any "untrusted" PLs, and any
INSERT/UPDATE/DELETE via SPI from PLs.  But "would be cool" isn't "want
to try to implement it" and I'm sure if it were easy, it'd have already
been done.

>     All user tables reside in schema PUBLIC, Ok?

Yep. Make sure you don't grant CREATE on public to the target user, only
grant USAGE, and revoke all from public.

> =>  Any comments on making this PostgreSQL instance 'robust'?
>     E.g. which situations (except for harddisk crashes) can leave a
> read-only dataset in an inconsistent state where PostgreSQL server
> can't restart? An immediate shutdown?

AFAIK:

- Kill -9 of a backend
- segfault / sigabrt / sigbus / etc of a backend
   (backends may try to handle some of these, but continuing would
    normally be unsafe; I doubt they do, and I haven't checked)
- Kill -9 of the postmaster
- OS crash or unexpected reset
- .... probably other things

> =>  Any comments on speeding up/optimizing such a read-only dataset?

Depending on dataset size and access patterns, it could be worth
pinning a few indexes in a tablespace that lives on a ramdisk. Usually
Pg's and the OS's cache management will do the job well, but if you know
more than them - say, that this index will always be really hot, or that
certain queries are more important than others and must be more
responsive - you can play with that sort of thing.
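That could be sketched like this (tablespace name, mount point, and
index name are all hypothetical; note that anything placed in a
ramdisk tablespace vanishes on reboot, so recreating it would have to
become part of step 3 above):

```sql
-- Assumption: a tmpfs is mounted at /mnt/pgram, owned by the postgres OS user.
CREATE TABLESPACE ram_ts LOCATION '/mnt/pgram';
-- Move a known-hot index (hypothetical name) into it:
ALTER INDEX hot_geom_idx SET TABLESPACE ram_ts;
```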

--
Craig Ringer


> -----Original Message-----
> From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-
> owner@postgresql.org] On Behalf Of Stefan Keller
> Sent: Sunday, April 17, 2011 2:04 PM
> To: pgsql-general List
> Subject: [GENERAL] How to configure a read-only database server and
> session? (Was: read-only UNLOGGED tables)
>
> [...]
> 3. Load dataset...:
>   a. with owner 'app.-user' in schema PUBLIC;
>   b. create indexes;
>   c. issue a VACUUM ANALYZE command on user tables.


Might consider setting your indexes to fillfactor 100 if you have not already. Be aware of what this will mean when
you "load dataset" in the future for a refresh.



- Mark