Re: Measuring database IO for AWS RDS costings - Mailing list pgsql-admin

From Guillaume Lelarge
Subject Re: Measuring database IO for AWS RDS costings
Date
Msg-id CAECtzeUA5Gwk9km+6Erm3RD0kSdmtsWv8-2H-b3L3dFPprHJXA@mail.gmail.com
In response to Measuring database IO for AWS RDS costings  (David Osborne <david@qcode.co.uk>)
Responses Re: Measuring database IO for AWS RDS costings
List pgsql-admin

On 13 August 2014 15:47, "David Osborne" <david@qcode.co.uk> wrote:
>
>
> We have a test Postgresql AWS RDS instance running with a view to transferring our Live physical Postgresql workload to AWS.
>
> Apart from the cost of the instance, AWS also bill for IO requests per month.
>
> We are trying to work out how to estimate the IO costs our Live workload would attract.
> So if we can confirm that metrics x+y, measured from within our test Postgresql instance on RDS, map to z billable IO requests, then we can measure the same metrics on our Live Postgresql server and estimate costs.
>
> I believe that in the AWS world an IO request is counted for each 16kB read from or written to disk.
> How would I go about measuring 16kB blocks read or written to disk from within Postgresql?
>
> I was hopeful about pg_stat_database, which has blks_read (which I believe are 8kB blocks), but there doesn't seem to be an equivalent blks_written?
>

You're right that they are 8kB blocks (by default). But that's not reads from disk; it's more "PostgreSQL asks the OS to give it the blocks". They may come from the disk, but they may also come from the OS disk cache. You can't find actual disk reads and writes from within PostgreSQL.
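
If a rough upper bound is enough, the read-side counters can still be pulled from within PostgreSQL. A minimal sketch, assuming one billable RDS IO request really is 16kB and (pessimistically) that every blks_read actually reached the disk:

    -- Per-database read counters from the statistics collector.
    -- blks_read counts 8kB buffer reads requested from the OS, so this is
    -- an upper bound on physical reads, not a measurement of them.
    SELECT datname,
           blks_read,                                -- 8kB blocks asked from the OS
           blks_hit,                                 -- blocks served from shared_buffers
           ceil(blks_read * 8.0 / 16) AS est_read_io_requests
    FROM pg_stat_database
    WHERE datname = current_database();

blks_hit is only there to show how much traffic never leaves shared_buffers; it shouldn't be counted towards IO.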
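
For writes there is no per-database equivalent, but pg_stat_bgwriter has cluster-wide totals of 8kB buffers written out to the OS. A sketch under the same assumptions; again these are buffer writes handed to the OS, not physical disk writes, and the OS may merge or split them before they become billable requests:

    -- Cluster-wide write counters (8kB buffers handed to the OS).
    SELECT buffers_checkpoint,    -- written during checkpoints
           buffers_clean,         -- written by the background writer
           buffers_backend,       -- written directly by backends
           ceil((buffers_checkpoint + buffers_clean + buffers_backend) * 8.0 / 16)
             AS est_write_io_requests,
           stats_reset            -- counters are cumulative since this time
    FROM pg_stat_bgwriter;

For the Live server, something like iostat at the OS level is probably a closer proxy for what AWS would actually bill.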
