Re: Select for update, locks and transaction levels - Mailing list pgsql-general

From NTPT
Subject Re: Select for update, locks and transaction levels
Date
Msg-id 002801c44b23$08eaa330$8e42b13e@wbp1
In response to Re: Select for update, locks and transaction levels  (Keary Suska <hierophant@pcisys.net>)
List pgsql-general
Maybe filesystem fragmentation is a problem?

It is often said that fragmentation is not a problem on a multiuser system (for
example on an ext2 filesystem), because many users and tasks share the disk I/O
subsystem, so there is little benefit in keeping the disk defragmented.


But...


In my situation I run PostgreSQL and PHP as an Apache module. I made a backup and
ran an ext2 defragmentation program on the related partitions (i.e. /home and /var,
where the PHP files and the database cluster live).

The result? About a 40% (!) performance boost...



----- Original Message -----
From: "Keary Suska" <hierophant@pcisys.net>
To: "Postgres General" <pgsql-general@postgresql.org>
Sent: Thursday, February 19, 2004 8:52 PM
Subject: Re: [GENERAL] Select for update, locks and transaction levels


> on 2/16/04 10:51 AM, nick.barr@webbased.co.uk purportedly said:
>
> > I am trying to gather stats about how many times a resource in our web
> > app is viewed, i.e. just a COUNT. There are potentially millions of
> > resources within the system.
> >
> > I thought of two methods:
> >
> > 1.    An extra column in the resource table which contains a count.
>
> Not a good idea if you expect a high concurrency rate--you will create a
> superfluous bottleneck in your app.
>
> > 2.    A separate table that contains a count using an algorithm similar
> > to the method presented here:
> >
> > http://archives.postgresql.org/pgsql-performance/2004-01/msg00059.php
> >
> > a.    Each time a resource is viewed a new row is inserted with a count
> > of 1.
> > b.    Each time the view count is needed, rows from the table are SUMmed
> > together.
> > c.    A compression script runs regularly to group and sum the rows
> > together.
>
> I am assuming that you are concerned about storage size, which is why you
> want to "compress". You are probably better off (both by performance and
> storage) with something like the following approach:
>
> CREATE TABLE view_res (
>     res_id int8,
>     stamp  timestamp
> ) WITHOUT OIDS;
>
> CREATE TABLE view_res_arch (
>     res_id int8,
>     cycle  date,
>     hits   int8
> );
>
> By using a timestamp instead of count you can archive using a date/time
> range and avoid any concurrency/locking issues:
>
> INSERT INTO view_res_arch (res_id, cycle, hits)
> SELECT res_id, '2003-12-31', COUNT(res_id) FROM view_res
> WHERE stamp >= '2003-12-01' AND stamp <= '2003-12-31 23:59:59'
> GROUP BY res_id;
>
> then:
>
> DELETE FROM view_res
> WHERE stamp >= '2003-12-01' AND stamp <= '2003-12-31 23:59:59'
>
> With this kind of approach you have historicity and extensibility, so you
> could, for example, show historical trends with only minor modifications.
>
> Best regards,
>
> Keary Suska
> Esoteritech, Inc.
> "Leveraging Open Source for a better Internet"
>
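
Regarding option 1 above (the extra count column): a minimal sketch, with purely
illustrative table and column names, of where the bottleneck would come from. Every
view of the same resource updates the same row, and the row lock taken by the UPDATE
is held until the transaction commits, so concurrent viewers of a popular resource
queue up behind one another.

-- Hypothetical option-1 schema: one counter column kept on the resource row.
CREATE TABLE resource (
    res_id     int8 PRIMARY KEY,
    view_count int8 NOT NULL DEFAULT 0
);

-- Run once per page view; two sessions incrementing the same res_id
-- serialize on the row lock until the earlier transaction commits.
UPDATE resource SET view_count = view_count + 1 WHERE res_id = 42;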

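And a rough sketch of how the view_res / view_res_arch tables from the quoted message
might be used, with illustrative values. One detail worth watching: a closed range
ending at '2003-12-31 23:59:59' can strand rows stamped in the last second of the
month, because the next cycle would start at '2004-01-01'; a half-open range avoids
that, and wrapping the archive INSERT and the purge DELETE in one transaction keeps
the two consistent.

-- Each page view appends one row; inserts do not contend with each other.
INSERT INTO view_res (res_id, stamp) VALUES (42, now());

-- Monthly roll-up over the half-open range [2003-12-01, 2004-01-01).
BEGIN;

INSERT INTO view_res_arch (res_id, cycle, hits)
SELECT res_id, '2003-12-31', COUNT(*)
FROM view_res
WHERE stamp >= '2003-12-01' AND stamp < '2004-01-01'
GROUP BY res_id;

DELETE FROM view_res
WHERE stamp >= '2003-12-01' AND stamp < '2004-01-01';

COMMIT;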

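Finally, a sketch of the read side under the same assumptions: the current total for
a resource is the archived hits plus whatever has not been rolled up yet, and the
historical trend mentioned at the end of the quoted message falls out of grouping the
archive table by cycle.

-- Current total for one resource: archived hits plus not-yet-archived rows.
SELECT (SELECT COALESCE(SUM(hits), 0) FROM view_res_arch WHERE res_id = 42)
     + (SELECT COUNT(*) FROM view_res WHERE res_id = 42) AS total_views;

-- Historical trend for one resource, one row per archive cycle.
SELECT cycle, SUM(hits) AS hits
FROM view_res_arch
WHERE res_id = 42
GROUP BY cycle
ORDER BY cycle;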