Wood, Dan wrote:
> Whatever you do make sure to also test 250 clients running lock.sql. Even with the community's fix plus YiWen's fix
> I can still get duplicate rows. What works for "in-block" HOT chains may not work when spanning blocks.
Good idea. You can achieve a similar effect by adding a filler column
and reducing fillfactor.
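
As a sketch of that suggestion (the table and column names here are hypothetical, since lock.sql is not shown): a wide filler column plus a low fillfactor leaves room for only a few tuples per page, so update chains are forced onto other blocks much sooner.

```sql
-- Hypothetical setup: encourage update chains to span blocks by
-- making each tuple wide and leaving most of each page unused.
CREATE TABLE t (
    id      int,
    payload text,
    filler  char(400)          -- padding so only a few tuples fit per page
) WITH (fillfactor = 10);      -- leave ~90% of each page free

INSERT INTO t SELECT 3, 'x', repeat('f', 400);
```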
> Once nearly all 250 clients have done their updates and everybody is
> waiting to vacuum, which one by one will take a while, I usually just
> "pkill -9 psql". After that I have many duplicate "id=3" rows.
Odd ...
> On top of that I think we might have a lock leak. After the pkill I
> tried to rerun setup.sql to drop/create the table and it hangs. I see
> an autovacuum process starting and exiting every couple of seconds.
> Only by killing and restarting PG can I drop the table.
Please do try to figure this one out. It'd be a separate problem,
worthy of its own thread.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers