Thread: WAL directory size calculation

WAL directory size calculation

From
Moreno Andreo
Date:
Hi folks! :-)
I'm about to bring up my brand new production server and I was wondering
if it's possible to calculate (approximately) the WAL directory size.
I have to choose what's better in terms of cost vs. performance (we are
on Google Cloud Platform) between a ramdisk and a separate persistent
disk. Obviously a ramdisk will be many times faster than a disk, but
having a, say, 512 GB ramdisk will be a little too expensive :-)
I've read somewhere that the formula should be 16 MB * 3 *
checkpoint_segments. But won't it be different depending on the
/wal_level/ we set? And won't it also depend on the volume of
transactions in the cluster?
And, in place of /checkpoint_segments/, which is no longer used in 9.5,
what should I use? /max_wal_size/?
Aside from this, I have 350 DBs that sum up to a bit more than 1 TB, and
I plan to use wal_level=archive because I plan to have a backup server
with barman.

Using the above formula I have:
     16 MB * 3 * 1 GB
which leads to ... uh ... 48000 TB?

Any ideas?
Thanks
Moreno.-
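[Aside: the rule of thumb in the pre-9.5 documentation was a segment-file count, not a product of sizes. A sketch of that arithmetic, treating the exact formula as something to verify against the manual for your version:]

```python
# Rule of thumb from the pre-9.5 docs (verify against the manual):
# pg_xlog normally holds no more than
#   (2 + checkpoint_completion_target) * checkpoint_segments + 1
# segment files of 16 MB each.
SEGMENT_MB = 16

def wal_dir_estimate_mb(checkpoint_segments: int,
                        checkpoint_completion_target: float = 0.5) -> float:
    files = (2 + checkpoint_completion_target) * checkpoint_segments + 1
    return files * SEGMENT_MB

print(wal_dir_estimate_mb(3))  # with the old defaults: 136.0 MB, not terabytes
```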



Re: WAL directory size calculation

From
"David G. Johnston"
Date:
On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
I've read somewhere that the formula should be 16 MB * 3 * checkpoint_segment in size.
​[...]​
 
Using the above formula I have:
    16 MB * 3 * 1 GB
that leads to to ... uh .. 48000 TB?

You seem to be mis-remembering the formula.

checkpoint_segments is a quantity (i.e., unit-less), not a size.  Saying it's "1GB" makes no sense.

I'm also doubting you multiply it by 3 - add three to it maybe...

Each segment takes 16MB.  The total space required is that times whatever maximum count of segments you expect to have.
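[In other words, a toy sketch of the multiplication David describes; the segment count of 64 below is only an example, not a recommendation:]

```python
# WAL space = 16 MB per segment file, times the maximum number of
# segment files you expect to accumulate (a count, not a size).
SEGMENT_MB = 16

def wal_dir_size_mb(max_segments: int) -> int:
    return SEGMENT_MB * max_segments

print(wal_dir_size_mb(64))  # 64 segments -> 1024 MB, i.e. about 1 GB
```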

I'd suggest waiting for better responses on these lists or searching out prior conversations on this topic - of which I am sure there are many.

David J.

Re: WAL directory size calculation

From
Andreas Kretschmer
Date:
Am 28.07.2016 um 15:25 schrieb Moreno Andreo:
> Hi folks! :-)
> I'm about to bring up my brand new production server and I was
> wondering if it's possible to calculate (approx.) the WAL directory size.
> I have to choose what's better in terms of cost vs. performance (we
> are on Google Cloud Platform) between a ramdisk or a separate
> persistent disk. Obviously ramdisk will be times faster disk, but
> having a, say, 512 GB ramdisk will be a little too expensive :-)
>

Don't use a RAM-Disk for WAL! DON'T! NEVER!

You really need only a few GB if you are using replication. Without
replication, 1 GB would be fine (perhaps even with replication). But it
must be reliable!

Andreas


Re: WAL directory size calculation

From
Moreno Andreo
Date:
Il 28/07/2016 15:54, Andreas Kretschmer ha scritto:
> Am 28.07.2016 um 15:25 schrieb Moreno Andreo:
>> Hi folks! :-)
>> I'm about to bring up my brand new production server and I was
>> wondering if it's possible to calculate (approx.) the WAL directory
>> size.
>> I have to choose what's better in terms of cost vs. performance (we
>> are on Google Cloud Platform) between a ramdisk or a separate
>> persistent disk. Obviously ramdisk will be times faster disk, but
>> having a, say, 512 GB ramdisk will be a little too expensive :-)
>>
>
> Don't use a RAM-Disk for WAL! DON'T! NEVER!
OK OK OK, got the point...... :-)

>
> You really need only some GB, if you are using Replication. Without
> Replication 1 GB would be fine, even with replication. But it must be
> realible!
I'll get what seems to be the smallest persistent disk available on GCP,
10 GB. Thanks!

>
> Andreas
>
>




Re: WAL directory size calculation

From
Moreno Andreo
Date:
Il 28/07/2016 15:33, David G. Johnston ha scritto:
> On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
>> I've read somewhere that the formula should be 16 MB * 3 *
>> checkpoint_segment in size.
>> [...]
>> Using the above formula I have:
>>     16 MB * 3 * 1 GB
>> that leads to ... uh .. 48000 TB?
>
> You seem to be mis-remembering the formula.
The result is too big to be sane... so there *must* be something wrong.

> checkpoint_segments is a quantity (i.e., unit-less), not a size.
> Saying it's "1GB" makes no sense.
Yes, my "1 GB" is related to max_wal_size, which in 9.5 took over from
checkpoint_segments.

> I'm also doubting you multiply it by 3 - add three to it maybe...
>
> Each segment takes 16MB.  The total space required is that times
> whatever maximum count of segments you expect to have.
That's the variable. How many segments do I expect to have? (Jumping
between the 9.1 (current) and 9.5 (new) documentation, I have a bit of
confusion in my head...)

> I'd suggest waiting for better responses on these lists ...[snip]...
Got one, in fact. The statement to *absolutely* not use a ramdisk for
WAL files resolved almost all of my doubts.

> David J.
Thanks
Moreno

Re: WAL directory size calculation

From
Francisco Olarte
Date:
On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
> ramdisk will be a little too expensive :-)

Besides defeating the purpose of WAL, if you are going to use
non-persistent storage for WAL you could just as well use
wal_level=minimal, fsync=off and friends.
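[For reference, the "and friends" here would be settings along these lines: a sketch for a throwaway, durability-free instance only, never for production, since any crash can lose or corrupt data:]

```
wal_level = minimal          # keep only the WAL needed for crash recovery
fsync = off                  # don't force WAL to stable storage
synchronous_commit = off     # commits may be lost on crash
full_page_writes = off       # no torn-page protection
```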

> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
> to use wal_level=archive because I plan to have a backup server with barman.

Is this why you plan on using RAM for WAL (assuming fast copies to the
archive and relying on it for recovery)?

Francisco Olarte.


Re: WAL directory size calculation

From
Jan Wieck
Date:


On Thu, Jul 28, 2016 at 9:54 AM, Andreas Kretschmer <andreas@a-kretschmer.de> wrote:
Without Replication 1 GB would be fine, even with replication. But it must be reliable!


The required size of WAL depends on your intended checkpoint_timeout versus the
amount of WAL generated by data turnover. A rather small 40GB database, churning
TPC-C style transactions at a rate of 1,000 TPS, can easily generate 60MB of WAL
per second (if configured wrong). Keeping the WAL size at or below 1GB would
require a checkpoint to complete every 17 seconds. In this case,
max_wal_size=1GB is a very wrong config option.

One problem here is that the more frequently checkpoints occur, the more
full-page writes are required. That drives up the amount of WAL, requiring
checkpoints even more frequently when max_wal_size is the limiting factor.
This is a classic downward-spiral scenario.

At 1,000 TPS, the above benchmark levels out (after about 1-2 hours) at around
60-64GB of WAL space used (with max_wal_size = 96GB and checkpoint_timeout =
20min). The total amount of WAL actually produced goes down significantly (due
to reduced full-page writes) and transaction response time improves on average
as well as in stddev. The whole DB looks more like it is cruising than fighting.

This example isn't a big database (40-80GB) or anything exotic. Just a
write-heavy OLTP load.
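[In postgresql.conf terms, the two settings Jan mentions for his benchmark would be:]

```
max_wal_size = 96GB          # room for WAL to accumulate between checkpoints
checkpoint_timeout = 20min   # let time, not size, trigger checkpoints
```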


Regards, Jan





 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



--
Jan Wieck
Senior Postgres Architect

Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
Il 28/07/2016 20:45, Francisco Olarte ha scritto:
> On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
>> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
>> ramdisk will be a little too expensive :-)
> Besides defeating the purpose of WAL, if you are going to use non
> persistent storage for WAL you could as well use minimal level,
> fsync=off and friends.
After Andreas' post and thinking about it a while, I came to the decision
that it's better not to use RAM but another persistent disk, because there
can be an instant between when a WAL record is written and when it's
fsync'ed, and if a failure happens in that instant the data not yet
fsync'ed is lost. Am I right?
>
>> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
>> to use wal_level=archive because I plan to have a backup server with barman.
> Is this why you plan using RAM for WAL ( assuming fast copies to the
> archive and relying on it for recovery ) ?
Yes, but having to deal with the risk of data loss, I think I'll go with a
bigger persistent disk, have bigger checkpoint intervals, and end up with a
longer recovery time, but the main thing is *no data loss*.
>
> Francisco Olarte.
>
>
Thanks

Moreno.




Re: [SPAM] Re: WAL directory size calculation

From
John R Pierce
Date:

Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
to use wal_level=archive because I plan to have a backup server with barman.

With that many databases, that many objects, and undoubtedly many client connections, I'd want to spread that across a cluster of smaller servers.

just sayin...


-- 
john r pierce, recycling bits in santa cruz

Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
Il 29/07/2016 10:43, John R Pierce ha scritto:
>> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB,
>> and plan to use wal_level=archive because I plan to have a backup
>> server with barman.
>
> With that many databases with that so many objects
350 DBs with about 130 tables and a bunch of sequences each, for the
sake of precision. With extensive use of BLOBs.

> and undoubtable client connections,
Yes, that's another big problem... we normally run between 500 and 700
concurrent connections... I had to set max_connections=1000; the whole
thing grew faster than we were prepared for...

> I'd want to spread that across a cluster of smaller servers.
That will be step 2... while the migration is running (and it will run
for some months, since we have to plan migration with users) I'll test
putting another one or two machines in a cluster, make some test cases,
and when ready, databases will be migrated to the other machines, too.
I posted a question about this some months ago, and I was told that one
solution would be to set the servers to be master for some databases and
slave for others, so we can have better load balancing (instead of
having all writes on the sole master, we split them among all masters
depending on which database is getting the write command, especially
when having to write BLOBs that can be some megabytes in size).
I don't know how to achieve this yet, but I will find a way somewhere.

> just sayin...
Ideas are always precious and welcome.

> --
> john r pierce, recycling bits in santa cruz

Re: [SPAM] Re: WAL directory size calculation

From
"FarjadFarid\(ChkNet\)"
Date:

 

The question to ask is what benefit you would gain by saving a BLOB object in a database rather than on, say, a flat file server or as a URL on an FTP server? Especially larger ones.

 

BLOBs cause a lot of problems for all DBs unless the DB engine can understand their structure and process them. It is not worth the effort.

It can hit DB performance in indexing, backups, migrations and load balancing.

 

Hope this helps.

 

 

 


Re: [SPAM] Re: WAL directory size calculation

From
"FarjadFarid\(ChkNet\)"
Date:

Another option which is growing in popularity is a distributed in-memory cache. There are quite a few companies providing such technology.

 

Pricing can range from free to quite expensive.

 

One recommendation with these technologies is to test them under heavy load conditions.

 

Good luck.

 

 

 

 

 

Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
Il 29/07/2016 11:44, FarjadFarid(ChkNet) ha scritto:
> The question to ask is what benefit you would gain by saving a BLOB
> object in a database rather than on, say, a flat file server or as a
> URL on an FTP server? Especially larger ones.
Privacy. Blobs are stored encrypted, since they are health-related
images or documents.
You would be right if all of this data were resident only on our server
(which can only be accessed by the application), but every user has a
small PG cluster on his PC, with his patients' data and images, that
replicates continuously with our server.
Our application runs on Windows. Getting at patient data on another
user's machine (say, someone who stole the computer) is a bit tricky,
because you have to know how to bypass authentication in Postgres, and
even after that you have to know where to search, what to search for,
and sometimes what the encodings mean.
Imagine if we had a folder containing all the images... double click
and open...

Another point is a bit of self-defense. Our users are far from being
smart computer users, and in the past we had some cases in which
someone, trying to clean up a filled-up disk, deleted a directory under
his Paradox database (!!!) and then asked us why the app was not
loading anymore...

> BLOBs cause a lot of problems for all DBs unless the DB engine can
> understand their structure and process them. It is not worth the
> effort.
> It can hit DB performance in indexing, backups, migrations and load
> balancing.
Regarding backups, I disagree. Files related to a database must be
consistent with the database itself, so a backup must save both the
database and the images. AFAIK there's not a big difference between
backing up image files and BLOBs in a database.
I agree about load balancing, but only in the case of a bulk load of
several megabytes. (Our actual server got overloaded 2 months ago when
a client we were activating sent a transaction inserting 50 blobs
totaling about 300 megabytes.)
Re: [SPAM] Re: WAL directory size calculation

From
"FarjadFarid\(ChkNet\)"
Date:

 

Actually you can increase the overall performance of your system several-fold by distributing the source of the data, with encryption. CDN services have used this old technique consistently all the time.

If you add a URL to an FTP server with an SSL certificate, your backup will be much quicker, and if someone steals the computer the images are still encrypted as before. It is just the source the data comes from that changes.

 

Of course for small amounts of data, say an encrypted user name, password or ID credential, a DB engine is still best. But for larger files you could benefit substantially from looking at hybrid solutions.

 

Check out companies like www.maytech.com (not related to me at all); they have a secure network used by the NHS (UK).

 

Their FTP service has user name/password protection which can be customised for different customers. They also have servers distributed around the world.

 

Hope this helps.

 


Re: [SPAM] Re: WAL directory size calculation

From
"FarjadFarid\(ChkNet\)"
Date:

Sorry, the URL should have been https://www.maytech.net/

 

Of course there are other companies in this space.

 

From: Moreno Andreo [mailto:moreno.andreo@evolu-s.it]
Sent: 29 July 2016 12:08
To: FarjadFarid(ChkNet) <farjad.farid@checknetworks.com>; pgsql-general@postgresql.org
Subject: Re: [SPAM] Re: [GENERAL] WAL directory size calculation

 

On 29/07/2016 11:44, FarjadFarid(ChkNet) wrote:

 

The question to ask is: what benefit do you gain by storing a BLOB in a database rather than on, say, a flat file server, or as a URL on an FTP server? Especially larger ones.

Privacy. Blobs are stored encrypted, since they are health-related images or documents.
You would be right if all of this data resided only on our server (which can only be accessed by the application), but every user has a small PG cluster on his PC, with his patients' data and images, that replicates continuously with our server.
Our application runs on Windows. Getting at patient data as another user (say, someone who stole the computer) is a bit tricky, because you have to know how to bypass authentication in Postgres, and even after that you have to know where to search, what to search for, and sometimes what the encodings mean.
Imagine if we had a folder containing all the images... double click and open...

Another point is a bit of self-defense. Our users are far from being savvy computer users, and in the past we had cases where someone, trying to clean up a filled-up disk, deleted a directory under his Paradox database (!!!) and then asked us why the app wasn't loading anymore....

BLOBs cause a lot of problems for all DBs. Unless the DB engine can understand their structure and process them, it is not worth the effort.

It can hit DB performance in indexing, backups, migrations and load balancing.

Regarding backups I disagree. Files related to the database must be consistent with the database itself, so a backup must save both the database and the images. AFAIK there's not a big difference between backing up image files and backing up BLOBs in a database.
I agree about load balancing, but only in the case of a bulk load of several megabytes. (Our current server was overloaded 2 months ago when a client we were activating sent a transaction inserting 50 blobs totaling about 300 megabytes.)

Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
On 29/07/2016 15:13, FarjadFarid(ChkNet) wrote:

> If you add a URL to an FTP server with an SSL certificate, your backup
> will be much quicker, and if someone stole the computer the images are
> still encrypted as before. It is just the source the data comes from
> that changes.

... and what if, while working, the Internet connection drops? Or my
office is not covered by broadband at all (and, still in 2016, there
are so many places in Italy not covered by broadband... no ADSL, no
WiMAX, low-performing mobile)?
The local copies of the databases that we synchronize exist to let
users work even when no Internet connection is available (since
they're doctors, they need their data available almost all the time).

This architecture is by design. Some years ago, when we started
designing our software, we ran into this situation, and the question
was "Why don't we have just a remote server that users connect to,
instead of replicas at their sites?" That would ease updates,
troubleshooting, almost everything.
After a while, the answer we arrived at is exactly as above. As long
as we have slow and unreliable Internet connections (fiber coverage is
growing, but it's still very sparse), we can't count on them, and we
can't rely only on a remote server.

Thanks
Moreno

Re: [SPAM] Re: WAL directory size calculation

From
"David G. Johnston"
Date:
On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
Regarding backups I disagree. Files related to the database must be consistent with the database itself, so a backup must save both the database and the images.

I'd suggest you consider defining such binary data as immutable.  Then the only problem you have to worry about is existence; versioning consistency goes away.  You need only focus on versioning the associations, which remains in the database and is very lightweight.  It is then a separate matter to ensure that all the documents you require are accessible given the identifying information stored in the database and linked to the primary records via those versioned associations.
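A minimal sketch of that design, using Python's built-in sqlite3 so it is self-contained (all table and column names here are made up for illustration; the same schema works in PostgreSQL):

```python
import sqlite3

# Immutable documents identified by content hash; only the lightweight
# association rows carry versioning, and those stay inside the database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE document (
        content_hash TEXT PRIMARY KEY,  -- identity of the immutable blob
        location     TEXT NOT NULL      -- where the bytes live (path, URL, ...)
    );
    CREATE TABLE patient_document (
        patient_id   INTEGER NOT NULL,
        content_hash TEXT NOT NULL REFERENCES document(content_hash),
        valid_from   TEXT NOT NULL,     -- versioned association
        valid_to     TEXT               -- NULL = current association
    );
""")
conn.execute("INSERT INTO document VALUES ('abc123', '/blobs/abc123')")
conn.execute("INSERT INTO patient_document VALUES (1, 'abc123', '2016-07-29', NULL)")

# Backup consistency now reduces to an existence check: does every
# content_hash referenced by the database exist in the blob store?
row = conn.execute(
    "SELECT content_hash FROM patient_document "
    "WHERE patient_id = 1 AND valid_to IS NULL").fetchone()
print(row[0])  # abc123
```

Because the documents never change, backing up the blob store and the database at different times cannot produce a mismatched version, only a possibly missing file, which is easy to detect.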

David J.
 

Re: [SPAM] Re: WAL directory size calculation

From
Francisco Olarte
Date:
Hi:

On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
<moreno.andreo@evolu-s.it> wrote:
> After Andreas post and thinking about it a while, I went to the decision
> that it's better not to use RAM but another persistent disk, because there
> can be an instant between when a WAL is written and it's fsync'ed, and if a
> failure happens in this instant the amount of data not fsync'ed is lost. Am
> I right?

With the usual configuration, fsync on, etc.. what postgres does is to
write and sync THE WAL before commit, but it does not sync the table
pages. Should anything bad (tm) happen it can replay the synced wal to
recover. If you use a ram disk for WAL and have a large enough ram
cache you can lose a lot of data, not just from the last sync. At the
worst point you could start a transaction, create a database, fill it
and commit and have everything in the ram-wal and the hd cache, then
crash and have nothing on reboot.

Francisco Olarte.


Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
On 29/07/2016 15:30, David G. Johnston wrote:

> I'd suggest you consider that such binary data be defined as
> immutable.  Then the only problem you have to worry about is
> existence; versioning consistency goes away. [...]

I think you are right on this point. There are only some kinds of
bytea that are not immutable, and that's where we store bytea instead
of images (many of those fields have already been converted to the
text type, though).
Moreno

Re: [SPAM] Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
On 29/07/2016 17:26, Francisco Olarte wrote:
> Hi:
>
> On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
> <moreno.andreo@evolu-s.it> wrote:
>> After Andreas post and thinking about it a while, I went to the decision
>> that it's better not to use RAM but another persistent disk, because there
>> can be an instant between when a WAL is written and it's fsync'ed, and if a
>> failure happens in this instant the amount of data not fsync'ed is lost. Am
>> I right?
> With the usual configuration, fsync on, etc.. what postgres does is to
> write and sync THE WAL before commit, but it does not sync the table
> pages. Should anything bad (tm) happen it can replay the synced wal to
> recover. If you use a ram disk for WAL and have a large enough ram
> cache you can lose a lot of data, not just from the last sync. At the
> worst point you could start a transaction, create a database, fill it
> and commit and have everything in the ram-wal and the hd cache, then
> crash and have nothing on reboot.
>
> Francisco Olarte.
>
>
This is another good point.

I'm ending up with a 10 GB SSD dedicated to WAL files. I'm starting with
a small slice of my clients for now, to test the production environment,
and as traffic grows I'll see whether my choice was good or needs to be
improved.

Should I keep fsync off? I'd think it would be better to leave it on, right?





Re: [SPAM] Re: [SPAM] Re: WAL directory size calculation

From
Michael Paquier
Date:
On Wed, Aug 3, 2016 at 8:07 PM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
> Should I keep fsync off? I'd think it would be better leaving it on, right?

From the docs: https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS
While turning off fsync is often a performance benefit, this can
result in unrecoverable data corruption in the event of a power
failure or system crash. Thus it is only advisable to turn off fsync
if you can easily recreate your entire database from external data.

So if you care about your data, it should be set to on in any case, as
should full_page_writes.
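For reference, the settings in question as they might appear in postgresql.conf (a sketch showing the safe defaults, not tuned advice; synchronous_commit is included here as the usual knob to relax instead of fsync):

```
# durability settings -- keep these on for data you cannot recreate
fsync = on
full_page_writes = on
# if commit latency hurts, relax this instead: it risks losing the most
# recent commits after a crash, but never corrupts the database the way
# fsync = off can
synchronous_commit = on
```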
--
Michael


Re: [SPAM] Re: [SPAM] Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
On 03/08/2016 14:12, Michael Paquier wrote:
> On Wed, Aug 3, 2016 at 8:07 PM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
>> Should I keep fsync off? I'd think it would be better leaving it on, right?
> From the docs: https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS
> While turning off fsync is often a performance benefit, this can
> result in unrecoverable data corruption in the event of a power
> failure or system crash. Thus it is only advisable to turn off fsync
> if you can easily recreate your entire database from external data.
>
> So if you care about your data, that's to set in any case to on, as is
> full_page_writes.

Got it.

Thanks!

Moreno




Re: WAL directory size calculation

From
Jeff Janes
Date:
On Thu, Jul 28, 2016 at 6:33 AM, David G. Johnston
<david.g.johnston@gmail.com> wrote:
> On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo <moreno.andreo@evolu-s.it>
> wrote:
>>
>> I've read somewhere that the formula should be 16 MB * 3 *
>> checkpoint_segment in size.
>
> [...]
>
>>
>> Using the above formula I have:
>>     16 MB * 3 * 1 GB
>> that leads to to ... uh .. 48000 TB?
>
>
> You seem to be mis-remembering the formula.
>
> checkpoint_segments is a quantity (i.e., unit-less), not a size.  Saying
> it's "1GB" makes no sense.

Right, that doesn't make any sense.

> I'm also doubting you multiply it by 3 - add three to it maybe...

No, multiplying by 3 is roughly correct.  A log file can't be
recycled/removed until there have been 2 checkpoints started and
completed.  And if you have checkpoint_completion_target set to a high
value, the 2nd checkpoint will complete just before the 3rd one is
triggered to start.  So that means you have 3 * checkpoint_segments
worth of segments.
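That arithmetic can be sketched as follows (the checkpoint_segments value here is hypothetical, not one from this thread):

```python
# Rough steady-state size of pg_xlog under the pre-9.5 formula.
SEGMENT_MB = 16            # WAL segments are a fixed 16 MB each
checkpoint_segments = 32   # hypothetical setting; a unit-less count, NOT a size

approx_mb = 3 * checkpoint_segments * SEGMENT_MB
print(f"approximate pg_xlog size: {approx_mb} MB")  # 1536 MB
```

The point is only that the result is some tens of segments times 16 MB, never anywhere near terabytes.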

Cheers,

Jeff


Re: WAL directory size calculation

From
Jeff Janes
Date:
On Thu, Jul 28, 2016 at 6:25 AM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
> Hi folks! :-)
> I'm about to bring up my brand new production server and I was wondering if
> it's possible to calculate (approx.) the WAL directory size.
> I have to choose what's better in terms of cost vs. performance (we are on
> Google Cloud Platform) between a ramdisk or a separate persistent disk.

As others have said, there is almost no point in putting WAL on a
ramdisk.  It will not be there exactly at the time you need it.

> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
> ramdisk will be a little too expensive :-)
> I've read somewhere that the formula should be 16 MB * 3 *
> checkpoint_segment in size. But won't it be different depending on the type
> of /wal_level/ we set? And won't it also be based on the volume of
> transactions in the cluster?

Not in usual cases.  If you have more volume, then checkpoint_segments
will get exceeded more frequently and you will have more frequent
checkpoints.  As long as your system can actually keep up with the
checkpoints, then the more frequent checkpoints will cancel the higher
volume, leaving you with the same steady-state number of segments.


> And, in place of not-anymore-used-in-9.5 /checkpoint_segment/ what should I
> use? /max_wal_size/?

max_wal_size doesn't just replace "checkpoint_segments" in the formula.
It replaces the entire formula itself.  That was the reason for
introducing it.

> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
> to use wal_level=archive because I plan to have a backup server with barman.
>
> Using the above formula I have:
>     16 MB * 3 * 1 GB

If you are getting the "1 GB" from max_wal_size, then see above.

Note that max_wal_size is not a hard limit.  It will be exceeded if
your system can't keep up with the checkpoint schedule.  Or if
archive_command can't keep up.
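To make the "1 GB" concrete: since max_wal_size is itself the (soft) ceiling on pg_xlog, the only arithmetic left is dividing it by the segment size (a sketch; 1 GB is the default value):

```python
# max_wal_size bounds pg_xlog directly; no multiply-by-3 formula applies.
SEGMENT_MB = 16
max_wal_size_mb = 1024  # the default, i.e. the "1 GB" above

implied_segments = max_wal_size_mb // SEGMENT_MB
print(f"max_wal_size = {max_wal_size_mb} MB allows roughly {implied_segments} segments")
```

So a 1 GB max_wal_size means on the order of 64 segments on disk, plus whatever overshoot occurs when checkpoints or archiving fall behind.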

Cheers,

Jeff


Re: [SPAM] Re: WAL directory size calculation

From
Moreno Andreo
Date:
On 03/08/2016 18:01, Jeff Janes wrote:
> On Thu, Jul 28, 2016 at 6:25 AM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:
>> Hi folks! :-)
>> I'm about to bring up my brand new production server and I was wondering if
>> it's possible to calculate (approx.) the WAL directory size.
>> I have to choose what's better in terms of cost vs. performance (we are on
>> Google Cloud Platform) between a ramdisk or a separate persistent disk.
> As others have said, there is almost no point in putting WAL on a
> ramdisk.  It will not be there exactly at the time you need it.
OK, got it, as I already stated. That was just a bad thought :-)
>
>> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
>> ramdisk will be a little too expensive :-)
>> I've read somewhere that the formula should be 16 MB * 3 *
>> checkpoint_segment in size. But won't it be different depending on the type
>> of /wal_level/ we set? And won't it also be based on the volume of
>> transactions in the cluster?
> Not in usual cases.  If you have more volume, then checkpoint_segment
> will get exceeded more frequently and you will have more frequent
> checkpoints.  As long as your system can actually keep up with the
> checkpoints, then the more frequent checkpoints will cancel the higher
> volume, leaving you with the same steady-state number of segments.

So if I want checkpoints to happen less frequently, the solution is a
bigger checkpoint_segments (or max_wal_size), so the value gets exceeded
less often?

>> And, in place of not-anymore-used-in-9.5 /checkpoint_segment/ what should I
>> use? /max_wal_size/?
> max_wal_size doesn't just replace "checkpoint_segment" in the formula.
> It replaces the entire
> formula itself.  That was the reason for introducing it.

Another point cleared up. I didn't get this from the docs; I'll go and
read them again.

>
>> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
>> to use wal_level=archive because I plan to have a backup server with barman.
>>
>> Using the above formula I have:
>>      16 MB * 3 * 1 GB
> If you are getting the "1 GB" from max_wal_size, then see above.

Exactly. I think it's its default value, since I didn't change it.

> Note that max_wal_size is not a hard limit.  It will be exceeded if
> your system can't keep up with the checkpoint schedule.  Or if
> archive_command can't keep up.
Got it.
Thanks
Moreno
>
> Cheers,
>
> Jeff
>
>




Re: [SPAM] Re: [SPAM] Re: WAL directory size calculation

From
Francisco Olarte
Date:
Hi Moreno:

On Wed, Aug 3, 2016 at 1:07 PM, Moreno Andreo <moreno.andreo@evolu-s.it> wrote:

It's already been answered, but as it seems to be answering a chunk of
my mail...

> Should I keep fsync off? I'd think it would be better leaving it on, right?

Yes. If you have to ask whether fsync should be on, it should.

I mean, you only turn it off when you are absolutely sure of what you
are doing; fsync off goes against the D in ACID.

You normally turn it off only in specific cases. As an example, we have
a special postgresql.conf for full cluster restores, with fsync=off.
When we need it we stop the cluster, boot it with that config, restore,
stop it again and reboot with the normal fsync=on config. In this case
we do not mind losing data, as we are doing a full restore anyway.

But normally, it's a bad idea. As a classic photo caption says,
fsync=off => DBAs running with scissors.
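The restore-only override described above might look like this (a sketch; the thread mentions only fsync=off, so the second line is an assumption about a commonly paired setting):

```
# postgresql.conf overrides -- ONLY for a full-cluster restore that can
# be redone from scratch; never serve production traffic with these
fsync = off
full_page_writes = off   # assumption: often disabled together with fsync
```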

Francisco Olarte.