Thread: postgres performance tuning
Dear Friends,

I have a requirement for running more than 15,000 queries per second. Can you please tell me which Postgres parameters need to be changed to achieve this? I already have 17GB RAM and a dual-core processor, and this machine is dedicated to database operation.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/postgres-performance-tunning-tp3307846p3307846.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
> Dear Friends,
> I have a requirement for running more than 15000 queries per second.
> Can you please tell which Postgres parameters need to be changed to achieve this?
> Already I have 17GB RAM and a dual core processor and this machine is dedicated for database operation.

That depends on your queries: for simple things like "SELECT * FROM table WHERE primary_key = constant", no problem, a desktop dual core will do it... So, please provide more details...
selvi88 wrote:
> I have a requirement for running more than 15000 queries per second.
> Can you please tell which Postgres parameters need to be changed to achieve this?
> Already I have 17GB RAM and a dual core processor and this machine is dedicated for database operation.

You can find a parameter tuning guide at http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server that may help you out.

You are unlikely to hit 15K queries/second with a dual-core processor. When I run really trivial queries using the pgbench program to simulate database activity, that normally gives about 7K queries/second/core. My dual-core laptop will do 13K/second, for example. And real-world queries tend to be a bit more intensive than that. I would normally expect that a quad-core system would be needed to reach 15K even with trivial queries; my quad-core server at home will do 28K queries/second running pgbench. If your individual cores are really fast, you might just make it.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
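Greg's per-core figure makes the core count easy to sanity-check. A back-of-envelope sketch in Python (the ~7K/core number is his pgbench figure for trivial queries, quoted above; real workloads will need more headroom):

```python
import math

# ~7K trivial queries/second/core is the pgbench figure cited above;
# treat it as a rough upper bound, not a guarantee.
QUERIES_PER_SEC_PER_CORE = 7000

def cores_needed(target_qps, per_core_qps=QUERIES_PER_SEC_PER_CORE):
    """Minimum whole cores needed to sustain target_qps, ignoring all overheads."""
    return math.ceil(target_qps / per_core_qps)

print(cores_needed(15000))  # a dual core falls short of 15K even for trivial queries
```

By this estimate the original poster's 15K target needs at least three cores before considering writes or non-trivial queries, which matches the advice in the thread to plan on a quad-core machine.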
On Thu, Dec 16, 2010 at 14:33, selvi88 <selvi.dct@gmail.com> wrote:
> I have a requirement for running more than 15000 queries per second.
> Can you please tell which Postgres parameters need to be changed to achieve this?

You have not told us anything about what sort of queries they are or what you're trying to do. PostgreSQL is not the solution to all database problems. If all you have is a dual-core machine, then other software can possibly make better use of the available hardware.

First of all, if they're mostly read-only queries, you should use a caching layer (like memcache) in front of PostgreSQL. And you can use replication to spread the load across multiple machines (but you will get some latency until the updates fully propagate to slaves).

If they're write queries, memory databases (like Redis), or disk databases specifically optimized for writes (like Cassandra), might be more applicable. Alternatively, if you can tolerate some latency, use message queuing middleware like RabbitMQ to queue up a larger batch and send updates to PostgreSQL in bulk.

As for optimizing PostgreSQL itself, if you have high connection churn then you will need connection pooling middleware in front -- such as pgbouncer or pgpool. But avoiding reconnections is a better idea. Also, use prepared queries to avoid parsing overhead on every query.

Obviously all of these choices involve tradeoffs and caveats, in terms of safety, consistency, latency and application complexity.

Regards,
Marti
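For the connection-pooling suggestion, a minimal pgbouncer configuration might look like the sketch below. This is an illustrative fragment only: the database name, file paths and pool sizes are placeholders to adapt, not values from this thread.

```ini
; pgbouncer.ini -- minimal sketch (placeholder names and sizes)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; reuse server connections per transaction
default_pool_size = 20    ; actual backend connections per database
max_client_conn = 1000    ; many clients share the small backend pool
```

The point of the sketch is the ratio: thousands of client connections can be funneled into a pool of a few dozen real backends, which is exactly the churn-avoidance Marti describes.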
On Thu, Dec 16, 2010 at 7:33 AM, selvi88 <selvi.dct@gmail.com> wrote:
>
> Dear Friends,
> I have a requirement for running more than 15000 queries per second.
> Can you please tell which Postgres parameters need to be changed to achieve this?
> Already I have 17GB RAM and a dual core processor and this machine is
> dedicated for database operation.

15k tps is doable on cheap hardware if the queries are read-only and trivial. If you are writing, you are going to need some fancy storage. Each disk drive can do about 100-300 tps depending on the speed of the drive and other factors (you can enhance this significantly by relaxing sync requirements). A single dual core is not going to cut it though -- you should bank on 4 cores at least.

Please describe the problem you are trying to solve in more detail. 15k tps can be trivially done, or could require a massive engineering effort. It really depends on what you are trying to do.

merlin
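The 100-300 tps per drive figure follows from rotation speed: with synchronous commit and no write cache, a drive can complete at most roughly one commit flush per rotation. A quick sketch of that arithmetic (the one-flush-per-rotation bound is the standard rule of thumb, ignoring caching and command queueing):

```python
# Rough arithmetic behind "100-300 tps per drive": each synchronous commit
# must reach the platter, and an uncached drive manages at most about one
# flush per rotation.
def max_sync_commits_per_sec(rpm):
    """Upper bound on fsync'd commits/second for one spinning drive."""
    return rpm / 60  # rotations per second

print(max_sync_commits_per_sec(7200))   # ~120 for a desktop drive
print(max_sync_commits_per_sec(15000))  # ~250 for a 15k SAS drive
```

This is why relaxing sync requirements, or putting a battery-backed write cache in front of the disks, changes write throughput so dramatically: it decouples commits from physical rotations.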
My requirement is that more than 15 thousand queries will run; it will be 5000 updates, 5000 inserts, and the rest selects. Each query will be executed in its own psql client (say, for 15000 queries, 15000 psql connections will be made). Since the connections are so many, the performance is low; I have tested it with the pgbench tool.

Configuration:
RAM : 17.9GB
CPU : 64-bit, 2 cores, 5346 bogomips each

Postgres configuration:
shared memory required (shmmax) : 1720320000 bytes
wal_buffers : 1024kB
maintenance_work_mem : 1024MB
effective_cache_size : 9216MB
work_mem : 32MB
shared_buffers : 1536MB
Thanks for your suggestion; I have already gone through that URL, and with its help I was able to make my configuration work for 5K queries/second. The parameters I changed were shared_buffers, work_mem, maintenance_work_mem and effective_cache_size. Still I was not able to reach my target.

Can you kindly tell me your Postgres configuration, so I can get some ideas from it?
On Fri, Dec 17, 2010 at 07:48, selvi88 <selvi.dct@gmail.com> wrote:
> My requirement is that more than 15 thousand queries will run;
> it will be 5000 updates, 5000 inserts, and the rest selects.
What IO system are you running Postgres on? With that kind of write load you should be really focusing on your storage solution.
> Each query will be executed in its own psql client (say, for 15000 queries,
> 15000 psql connections will be made).
You will benefit from a connection pooler. Try fiddling with max_connections (and the matching pool size) till you hit a sweet spot. Probably you should start with 20 connections and go up till you see your tps decrease.
Still, without deeply looking into your storage I wonder if you'll ever reach your TPS objective.
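The "sweet spot" search above can be scripted with pgbench itself. A sketch only, not runnable standalone: it assumes a live server and a benchmark database already initialized with `pgbench -i` (here named `bench`, a placeholder); `-c` is client count, `-j` worker threads, `-T` duration in seconds, `-S` select-only.

```shell
# Sweep client counts and watch where tps peaks, then falls off.
for c in 10 20 50 100 200; do
  echo "clients=$c"
  pgbench -c $c -j 2 -T 30 -S bench | grep tps
done
```

Throughput typically rises with client count until the cores and disks saturate, then degrades as context switching and lock contention take over; the peak is the pool size to configure.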
On Sat, Dec 18, 2010 at 2:34 AM, selvi88 <selvi.dct@gmail.com> wrote:
>
> Thanks for your suggestion; I have already gone through that URL, and with
> its help I was able to make my configuration work for 5K queries/second.
> The parameters I changed were shared_buffers, work_mem, maintenance_work_mem
> and effective_cache_size.
> Still I was not able to reach my target.
>
> Can you kindly tell me your Postgres configuration, so I can get some ideas
> from it?

I already posted this on Stack Overflow, but you'll need more machine than you have to do that. Specifically, you'll need either to go to SSDs (which have issues with data corruption on power loss unless you spend the bucks on ones with a supercapacitor, to make sure their write caches can flush on power loss) or a half dozen to a dozen or so spinning 15k drives with a battery-backed controller. I can sustain about 5,000 transactions per second on a machine with 8 cores (2 years old) and 14 15k Seagate hard drives.
Scott Marlowe wrote:
> I can sustain about 5,000 transactions per second on a machine with 8
> cores (2 years old) and 14 15k Seagate hard drives.

Right. You can hit 2,000 to 3,000/second with a relatively inexpensive system, so long as you have a battery-backed RAID controller and a few hard drives. Doing 5K writes/second is going to take a giant pile of hard drives or SSDs to pull off. There is no possible way to meet the performance objectives here without a lot more cores in the server and some pretty beefy storage too.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
On Mon, Dec 20, 2010 at 10:49 AM, Greg Smith <greg@2ndquadrant.com> wrote:
> Scott Marlowe wrote:
>>
>> I can sustain about 5,000 transactions per second on a machine with 8
>> cores (2 years old) and 14 15k Seagate hard drives.
>>
> Right. You can hit 2,000 to 3,000/second with a relatively inexpensive system,
> so long as you have a battery-backed RAID controller and a few hard drives.
> Doing 5K writes/second is going to take a giant pile of hard drives or SSDs
> to pull off. There is no possible way to meet the performance objectives
> here without a lot more cores in the server and some pretty beefy storage
> too.

And it gets expensive fast as you need more and more tps capability. The machines listed up there were $10k two years ago. Their replacements are $25k machines with 48 cores, 128G RAM and 34 15k hard drives, and they get about 8k tps. Note that due to the nature of these machines' jobs they are NOT tuned heavily towards tps in real life, but more for handling a bunch of little reads and a few big writes and reads simultaneously. The new machines are much more than 30 or 40% faster in real-world testing; for our workload they're about 10x as fast, since we were CPU-bound before with 8 cores.
On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> Scott Marlowe wrote:
>> I can sustain about 5,000 transactions per second on a machine with 8
>> cores (2 years old) and 14 15k Seagate hard drives.
>
> Right. You can hit 2,000 to 3,000/second with a relatively inexpensive system,
> so long as you have a battery-backed RAID controller and a few hard drives.
> Doing 5K writes/second is going to take a giant pile of hard drives or SSDs
> to pull off. There is no possible way to meet the performance objectives
> here without a lot more cores in the server and some pretty beefy storage
> too.

Is this with synchronous_commit on, or off?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, Jan 6, 2011 at 2:31 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith <greg@2ndquadrant.com> wrote:
>> Scott Marlowe wrote:
>>> I can sustain about 5,000 transactions per second on a machine with 8
>>> cores (2 years old) and 14 15k Seagate hard drives.
>>
>> Right. You can hit 2,000 to 3,000/second with a relatively inexpensive system,
>> so long as you have a battery-backed RAID controller and a few hard drives.
>> Doing 5K writes/second is going to take a giant pile of hard drives or SSDs
>> to pull off. There is no possible way to meet the performance objectives
>> here without a lot more cores in the server and some pretty beefy storage
>> too.
>
> Is this with synchronous_commit on, or off?

Off. It doesn't seem to make a lot of difference once you're running on a good battery-backed caching RAID controller.
On Thu, Jan 6, 2011 at 2:41 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Jan 6, 2011 at 2:31 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> Is this with synchronous_commit on, or off?
>
> Off. It doesn't seem to make a lot of difference once you're running
> on a good battery-backed caching RAID controller.

Sorry, that's ON not OFF. Turning it off doesn't seem to ...
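For reference, the setting under discussion lives in postgresql.conf. A sketch of the two variants (the trade-off noted in the comments is general PostgreSQL behavior, not a claim about this thread's hardware):

```ini
# postgresql.conf
synchronous_commit = on    # default: commit returns only after WAL is flushed
#synchronous_commit = off  # commit returns before the flush; after a crash the
                           # last few transactions may be lost, but the database
                           # stays consistent (no corruption)
```

As Scott's correction notes, with a battery-backed write cache absorbing the flushes, the two settings performed about the same for him; the setting matters most when commits must wait on bare disks.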