stalls gave - Search results in mailing lists

2024-01-26 04:22:58 | Re: I don't understand that EXPLAIN PLAN timings (Jean-Christophe Boggio)

installed: PostgreSQL 15.5 (Ubuntu 15.5-1.pgdg23.10+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0, 64-bit and PostgreSQL 16.1 (Ubuntu 16.1-1.pgdg23.10

2021-08-22 19:07:43 | RE: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4 (ldh@laurent-hasson.com)

install I gave last measurements for has all stock settings. Thank you, Laurent. One more

2016-12-01 23:39:15 | Substantial different index use between 9.5 and 9.6 (Bill Measday)

Installed using EnterpriseDB. Both instances are on the same server, postgresql.conf for both are the same except max_locks_per_transaction = 200 in 9.6 (caused insertion errors otherwise). On 9.6, Postgis
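The setting change described in that thread corresponds to a one-line postgresql.conf edit; a minimal sketch (the value 200 comes from the thread, the comment is an assumption about the motivating error):

```
# postgresql.conf fragment
# Default is 64; raised here to avoid "out of shared memory" /
# lock-table overflow errors during the bulk insertions mentioned above.
max_locks_per_transaction = 200
```

Note this parameter requires a server restart to take effect.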

2012-02-27 04:08:44 | Re: Very long deletion time on a 200 GB database (Reuven M. Lerner)

gave us the performance that we needed. (One of the developers kept asking me how it can possibly take so long to delete 200 GB, when he can delete files of that size in much

2011-08-08 23:06:37 | benchmark woes and XFS options (mark)

stalled - so my bad there) if someone is a mod and it's still in the wait queue feel free to remove them. Short version: my zcav and dd tests look to get CPU bound

2011-02-16 13:11:59 | Re: high user cpu, massive SELECTs, no io waiting problem (Thomas Pöhler)

gave me: version PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit; checkpoint_segments = 40; custom_variable_classes = pg_stat_statements; effective_cache_size = 48335MB; escape

2010-10-22 12:11:05 | Re: Periodically slow inserts (Gael Le Mignot)

gave me hints and feedback. I managed to solve the problem. My understanding of what was happening is the following: the GIN index (as explained in [1]) stores temporary lists, and when they grow
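The GIN pending-list behavior referenced in that snippet can usually be controlled through the index's `fastupdate` storage parameter; a minimal sketch, assuming hypothetical names `my_gin_idx` and `my_table`:

```sql
-- Disable the pending list so new entries go straight into the main
-- GIN structure (each insert is slower, but the periodic bulk flush
-- that causes the stall goes away):
ALTER INDEX my_gin_idx SET (fastupdate = off);

-- Alternatively, keep fastupdate on and flush the pending list at a
-- quiet time: a plain VACUUM merges it into the main index.
VACUUM my_table;
```

Which option is better depends on whether steady insert latency or peak insert throughput matters more for the workload.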

2010-04-12 07:23:43 | significant slow down with various LIMIT (Helio Campos Mello de Andrade)

gave in the message following: This one scanned the t_route table until it found four rows that matched. It apparently didn't need to look at very many rows to find the four matches

2010-03-21 15:04:05 | Re: pg_dump far too slow (Bob Lunney)

stall, try putting something like buffer(1) in the pipeline ... it doesn't generally come with Linux, but you can download source or create your own very easily ... all it needs to do is asynchronously
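A sketch of that suggestion, assuming a hypothetical database name `mydb` and using the widely packaged mbuffer(1) as a stand-in for buffer(1): the in-pipeline buffer reads ahead asynchronously, so pg_dump is not stalled waiting on the slower compression/disk stage.

```shell
# Buffer decouples pg_dump's output rate from gzip's input rate;
# -m 256M sets the buffer size, -q suppresses progress output.
pg_dump mydb | mbuffer -q -m 256M | gzip > mydb.dump.gz
```

Any tool that buffers a pipe asynchronously (mbuffer, pv, or a home-grown buffer(1)) serves the same purpose here.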

2009-10-04 23:21:56 | Maybe OT, not sure Re: Best suiting OS (Mark Mielke)

installs on the same hardware do not know how to use NCQ. You might be fine with CentOS 5.2 on your modern hardware - but I suspect that your CentOS 5.2 is not making the absolute

2009-02-04 16:34:35 | Re: suggestions for postgresql setup on Dell 2950 , PERC6i controller (Gary Doades)

installed Ubuntu server on 2 Dell 2950s with 8GB RAM and six 2.5 inch 15K rpm SAS disks in a single RAID10. I only got chance to run bonnie++ on them a few times

2008-05-14 08:04:34 | Re: postgres overall performance seems to degrade when large SELECT are requested (PFC)

install Gb ethernet - run the report on the database server (no bandwidth problems...) - rewrite the reporting tool to use SQL aggregates to transfer less data over the network - or use a cursor to fetch your

2008-04-09 17:58:34 | large tables and simple "= constant" queries using indexes (John Beaver)

gave up after more than 4 hours of waiting for it to finish indexing ----Table stats---- - 15 million rows; I'm expecting to have four or five times this number eventually. - 1.5 gigs of hard

2006-10-25 17:08:02 | Re: commit so slow program looks frozen (Carlo Stonebanks)

gave its cycles over. The Windows task manager shows the postgresql processes that (I assume) are associated with the stalled

2006-10-05 02:07:24 | Re: UPDATE becomes mired / win32 (Steve Peterson)

gave this a shot. It didn't have an impact on the results. The behavior also persists across a dump/reload of the table into a new install