Thread: Updated proposal for read-only queries on PITR slaves (SoC 2007)

Updated proposal for read-only queries on PITR slaves (SoC 2007)

From: "Florian G. Pflug"
Hi

I've updated (or rather rewritten) my proposal for implementing
read-only queries on PITR slaves as a "Summer of Code 2007" project.

I've added a more detailed description of how I plan to implement
a read-only mode suitable for PITR slaves, and put in a few
possible enhancements to the "Big, Global R/W lock" idea for
serializing WAL replay and queries.

I'm looking forward to any kind of suggestions, ideas, or
criticism - I'd like my proposal to be as detailed as
possible before I submit it to SoC, so that if
I get a chance to work on it, I can be reasonably sure
that people here are happy with the way I approach the problem.

greetings, Florian Pflug

Implementing support for read-only queries on PITR slaves
=========================================================

Submitter: Florian Pflug <fgp@phlo.org>

Abstract:
---------
The support for PITR (Point-In-Time-Recovery) in postgres can be used to build
a simple form of master-slave replication. Currently, no queries can be
executed on the slave, though - it only replays WAL (Write-Ahead-Log) segments
it receives from the master. I want to implement support for running read-only
queries on such a PITR slave, making PITR useful not only for disaster
recovery, but also for load balancing.

Coarse overview of the proposed implementation:
-----------------------------------------------
Currently, postgres does WAL replay solely during the startup of the database,
before all subsystems are fully initialized, and before backends are allowed to
connect. To support read-only queries on PITR slaves, while still guaranteeing
that the database is in a consistent state, the WAL replay will be split into
two parts. The first will replay only enough WAL to guarantee a consistent
state, and will run during startup. If read-only mode is disabled, the next
step will be run immediately after the first. If, however, read-only mode is
enabled, then the database will be brought online in read-only mode after
completing recovery, and the second step will be launched as a separate
process. Clients are allowed to connect, and to execute read-only queries, as
soon as the database is online, even though WAL replay is still being done in
the background.
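
To illustrate, here is a rough sketch of the proposed startup sequence. This
is not actual postgres code - all function names are made up for
illustration:

    void
    StartupXLOG_Proposed(void)
    {
        /* Step 1: replay just enough WAL to reach a consistent state */
        ReplayWALUntilConsistent();

        if (!recovery_allow_readonly)
        {
            /* Behaviour as today: finish all replay before going live */
            ReplayRemainingWAL();
            EnterNormalOperation();
        }
        else
        {
            /* Proposed: go live read-only, replay rest in the background */
            EnterReadOnlyOperation();           /* backends may connect now */
            LaunchBackgroundWALReplayProcess(); /* step two, concurrently   */
        }
    }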

Implementation of a read-only mode suitable for PITR slaves
-----------------------------------------------------------
Since replication via PITR runs asynchronously, and runs one-way (master-slave),
queries running on the slave are of course not allowed to insert, update or
delete data, nor to change the schema in any way. But there are still write
operations in the datadir that they _are_ allowed to do. Those are:
.) Creating temporary files for on-disk sorting and for spilling out
   tuplestores
.) Setting XMIN_COMMITTED and XMAX_COMMITTED hint bits on heap tuples
.) Setting LP_DELETE on index tuples
 
Note that creating temporary tables is not allowed. This is necessary
because temporary tables have associated entries in pg_class,
which obviously can't be created on PITR slaves.

Postgres already supports "set transaction read only" during normal operation.
On a read-only PITR slave every transaction will automatically be flagged
read-only, which results in nice error messages (like "ERROR: transaction is
read-only") if a user tries to execute inserts/updates/deletes or
schema-changing operations. Also, any command that has to be executed outside
of a transaction block (VACUUM) is disallowed on PITR slaves. As an additional
protection, a global variable read_only_mode is introduced. If in read-only
mode, this is set to true for all backends except the WAL replay process, and
the following checks are added:
.) MarkBufferDirty() is changed to throw an error if read_only_mode == 1.
   Hint bit updates already use SetBufferCommitInfoNeedsSave() instead of
   MarkBufferDirty(), which suits us just fine.
.) XLogInsert() and XLogWrite() throw an error if read_only_mode == 1.
.) SlruPhysicalWritePage() and SimpleLruWritePage() throw an error if
   read_only_mode == 1. This prevents creating or changing multixact,
   subtrans and clog entries.
.) EndPrepare() and FinishPreparedTransaction() throw an error if
   read_only_mode == 1. This prevents preparing transactions, and
   committing/rolling back prepared transactions.
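
As an illustration, the check added to MarkBufferDirty() might look roughly
like this (a sketch only - the exact error code and message wording are
still open; read_only_mode is the global variable proposed above):

    void
    MarkBufferDirty(Buffer buffer)
    {
        /* refuse to dirty any page while in read-only mode */
        if (read_only_mode)
            ereport(ERROR,
                    (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
                     errmsg("cannot modify data while in read-only mode")));

        /* ... existing code that actually marks the buffer dirty ... */
    }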
 
Those checks serve as a safety measure against holes in the already existing
read-only transaction logic. Note that read-only transactions won't generate
clog updates, because those are already skipped for transactions that neither
wrote xlog entries, updated temporary tables, nor deleted files.

The following holes currently exist in the read-only transaction logic. Fixing
those is not critical - the low-level checks outlined above catch them all -
but would allow displaying better error messages:
.) nextval(), setval()
.) CLUSTER
.) NOTIFY
Disallowing those in all read-only transactions (not only on PITR slaves) seems
sensible, but it might create compatibility problems.
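
For instance, closing the nextval()/setval() hole should amount to little
more than the following check (a sketch; XactReadOnly is the existing flag
set by "set transaction read only"):

    /* somewhere early in nextval()/setval() processing */
    if (XactReadOnly)
        ereport(ERROR,
                (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
                 errmsg("cannot update sequences in a read-only transaction")));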

Allowing read-only queries and WAL archive replay to run side-by-side
---------------------------------------------------------------------
Of all the interlocks postgres uses to ensure that data is not removed
from under a transaction's feet, three are relevant for PITR slaves.

*) Locks on relations. A SELECT takes an AccessShare lock on every referenced
   relation, thereby locking out concurrent DROP, CLUSTER, ... commands. This
   is ineffective on PITR slaves, because there is no trace in the WAL that
   a lock has been granted.
 

*) VACUUM and GetOldestXmin(). VACUUM makes sure not to remove tuples still
   needed by some transactions by comparing their xmin and xmax to
   GetOldestXmin(). But the value returned by GetOldestXmin() on the *master*
   has no chance to take queries on the *slave* into account which may
   eventually run when the WAL records generated by VACUUM are replayed.

*) The xmin, xmax and list of currently running xids in SnapshotData ensure
   that a single statement or a whole transaction sees a constant view of the
   data, even if other transactions commit while the execution of the
   statement or transaction is still in progress. Creating such a snapshot on
   the slave is tricky, because the WAL contains no information about the
   transactions that were running at a specific point. If a transaction runs
   for a long time without doing updates or deletes, its xid will not show up
   in the WAL during that time.
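
For reference, the relevant fields of SnapshotData (simplified - this is
what a slave backend would have to reconstruct without any help from the
master):

    typedef struct SnapshotData
    {
        TransactionId  xmin;  /* all xids < xmin are known to be finished */
        TransactionId  xmax;  /* all xids >= xmax are treated as running  */
        uint32         xcnt;  /* number of in-progress xids in xip[]      */
        TransactionId *xip;   /* xids in progress between xmin and xmax   */
    } SnapshotData;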
 

A read-only slave would still replay a part of the WAL during startup (before
queries are executed) - but only enough WAL to guarantee that the data is
consistent. The condition for consistency is exactly the same as the one
currently used to decide whether there was enough WAL to make a
filesystem-level backup consistent or not.

There are three ways to overcome the problems stated above, presented here in
order of increasing complexity:

1) Don't run WAL replay and queries concurrently - stop WAL replay before
   starting a transaction. This allows the transaction to just use an "empty"
   snapshot, meaning that just the information from the clog is used to
   determine visibility. A global lock would be acquired in write mode by the
   WAL replaying process before replaying a chunk of WAL records. The same
   lock would be acquired in read mode by a backend before starting a
   transaction. Since there is no need for a real snapshot, and since a
   read-only transaction's xid never hits the disk, read-only transactions
   could just use a constant dummy xid. To be on the safe side, the chunks
   after which the WAL replaying process releases and reacquires the lock
   would be chosen such that at the end of each chunk all *_safe_restartpoint
   functions return true. (A sketch of this locking protocol follows the
   list below.)

2) Only serialize WAL replay and queries if data is actually removed.
   This is a refined version of (1) where the global lock is only acquired
   before actually removing data. Inserting tuples into the heap or an index
   should be safe. (Note: The HOT patch might make this more difficult, but
   that will be judged when there is consensus on that patch.) The exception
   to this rule are the system catalogs, since those are accessed using
   SnapshotNow - but since system catalogs have fixed oids, it seems possible
   to check for that during WAL replay.
 

3) Log information about granted locks and currently running transactions
   into the WAL.
   Upon granting a lock on a relation that would conflict with AccessShare,
   an xlog record is written containing the oid of the relation. The
   checkpoint record is extended to contain a list of transactions running
   on the master at the time of the checkpoint. This allows the slave to
   imitate the locking that was going on on the master, and also to maintain
   a list of "concurrent" transactions (in the sense that they were current
   on the master when the WAL records being replayed were written).

   A backend on the slave can then use this list of transactions to construct
   a snapshot, and it is guaranteed that the WAL replay pauses if the changes
   it is about to make would conflict with a read-only query.

   Since replaying the locking will open up the possibility of deadlocks on
   the slave, it will be necessary to guarantee that it is never the WAL
   replayer that is aborted, but rather one of the other backends.
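
As announced above, here is a sketch of the locking protocol from (1). I'm
assuming a new LWLock (called ReplayLock below - the name is made up), used
via the existing LWLockAcquire()/LWLockRelease() calls; the helper functions
are made up too:

    /* WAL replay process: take the lock exclusively per chunk of WAL */
    LWLockAcquire(ReplayLock, LW_EXCLUSIVE);
    while (!ChunkBoundaryReached())     /* stop at a point where all      */
        ApplyNextWALRecord();           /* *_safe_restartpoint hold true  */
    LWLockRelease(ReplayLock);

    /* Backend: hold the lock shared for the whole transaction, so WAL */
    /* replay cannot change the data underneath a running query        */
    LWLockAcquire(ReplayLock, LW_SHARED);
    ExecuteReadOnlyTransaction();
    LWLockRelease(ReplayLock);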
 

User-Interface
--------------
A new GUC "recovery_allow_readonly" will be introduced. If set to false, postgres
will behave exactly as it does now. If set to true, postgres will allow read-only
queries while replaying WAL records.
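
Setting up a slave would then be as simple as (the GUC name being this
proposal's suggestion):

    # postgresql.conf on the PITR slave
    recovery_allow_readonly = true   # allow read-only queries during replay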

Another possibility would be to move this setting into recovery.conf. The
problem with this approach is that the recovery.conf file is deleted after
the information it contains is incorporated into pg_control. Thus, the
read-only setting would need to be stored in pg_control too, making it
impossible for the user to change it later (e.g., after interrupting and
restarting WAL replay, which is possible with 8.2).

Steps taken during the implementation
-------------------------------------
I will start working on the read-only query support - although I'll only
handle the degree of "read-onlyness" needed for PITR slaves, not that needed
for postgres running on a read-only datadir. Then I'll implement solution (1)
of "Allowing read-only queries and WAL archive replay to run side-by-side",
even though this solution will show limited performance. Once this is done,
I'll get my patch into a state where it is considered acceptable for
inclusion into the core. After those goals are achieved, I'll try to improve
the performance by relaxing the locking requirements according to either (2),
(3) or something completely different, depending on input from the community.

Costs, Benefits, Open Issues
----------------------------
Costs:
*) Point (3) of "Allowing read-only queries and WAL archive replay to run
   side-by-side" would slightly enlarge the WAL. One would need to measure
   the impact, but since a query that does locking will probably also change
   data, it can be assumed that the increase in WAL traffic will hardly be
   noticeable.
 
*) The changes necessary to support read-only queries touch quite a few
   functions. But only a simple "if readonly then throw error" has to be
   added. This could even be wrapped inside a macro or function.
 
*) The WAL replaying code will have to be reorganized - but changes to this
   part are unavoidable when implementing this feature.
 

Benefits:
*) Can be used for master-slave replication. The master database doesn't need
   to be modified in any way (apart from defining an archive_command).
   This makes this kind of master-slave replication easier to set up and
   maintain than trigger-based solutions.
 
*) Automatically replicates every type of database object, without any
   special code needed per object. This is another advantage over
   trigger-based solutions.
 
*) Can be used to run long-running queries (like reporting, or pg_dump)
   without preventing the vacuuming of other tables on the master.
 

Limitations:
*) Point (1) of "Allowing read-only queries and WAL archive replay to run
   side-by-side" severely limits the query load you may put on the slave
   before it starts falling further and further behind the master. Points (2)
   and (3) are meant to address this, but it isn't yet clear how to implement
   those.
 
*) Postgres wouldn't automatically switch into read-write mode when the
   replaying process finishes. Thus, failing over to the slave requires
   a postgres restart.
 

Open Questions/Problems:
*) How should the flat files be dealt with? Currently, they are updated
   after WAL replay finishes, which is not acceptable on the slave.
   I will have to find out if the WAL already contains enough information
   to be more clever, or if this information can be added easily. If
   both fail, they could be recreated periodically (say, at every
   RestartPoint).



Re: Updated proposal for read-only queries on PITR slaves (SoC 2007)

From: "Simon Riggs"
On Thu, 2007-03-01 at 15:45 +0100, Florian G. Pflug wrote:

> I'm looking forward to any kind of suggestions, ideas, or
> criticism - I'd like my proposal to be as detailed as
> possible before I submit it to SoC, so that if
> I get a chance to work on it, I can be reasonably sure
> that people here are happy with the way I approach the problem.

I'm happy with your approach to the problem:

- your thinking is detailed, written down and clear
- you cover various options, not just your favourite
- you're doing it on list

So I'll support your SoC submission.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com




Re: Updated proposal for read-only queries on PITR slaves (SoC 2007)

From: Jim Nasby
On Mar 1, 2007, at 8:45 AM, Florian G. Pflug wrote:
> Another possibility would be to move this setting into recovery.conf. The
> problem with this approach is that the recovery.conf file is deleted after
> the information it contains is incorporated into pg_control. Thus, the
> read-only setting would need to be stored in pg_control too, making it
> impossible for the user to change it later (e.g., after interrupting and
> restarting WAL replay, which is possible with 8.2)

I think it would be best to very clearly divide setting up a cluster
as a read-only slave from doing an actual recovery. One obvious way
to do this would be to require that all read-only GUCs live
in postgresql.conf and not recovery.conf. There are probably some other
more elegant solutions as well.
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)




Re: Updated proposal for read-only queries on PITR slaves (SoC 2007)

From: "Florian G. Pflug"
Jim Nasby wrote:
> On Mar 1, 2007, at 8:45 AM, Florian G. Pflug wrote:
>> Another possibility would be to move this setting into recovery.conf. The
>> problem with this approach is that the recovery.conf file is deleted after
>> the information it contains is incorporated into pg_control. Thus, the
>> read-only setting would need to be stored in pg_control too, making it
>> impossible for the user to change it later (e.g., after interrupting and
>> restarting WAL replay, which is possible with 8.2)
> 
> I think it would be best to very clearly divide setting up a cluster as
> a read-only slave from doing an actual recovery. One obvious way to do
> this would be to require that all read-only GUCs live in
> postgresql.conf and not recovery.conf. There are probably some other more
> elegant solutions as well.

The main argument for putting this into recovery.conf is that it changes
the behaviour only during recovery, much like restore_command is
part of recovery.conf. But I agree that overall postgresql.conf
seems saner.

greetings, Florian Pflug