On Tue, Jun 23, 2009 at 10:12 PM, Andrew Smith <laconical@gmail.com> wrote:
> This temp table will probably contain up to 10000 records, each of
> which could be changing every second (data is coming from a real-time
> monitoring system). On top of this, I've then got the ASP.NET app
> reading the updated data values every second or so (the operators want
> to see the data as soon as it changes). I was going to do some
> performance testing to see how well it would work, but thought I'd ask
> the question here first: I know that the number of records isn't a
> problem, but how about the frequency of updates/reads? Is 10000
> updates/reads a second considered a lot in the PostgreSQL world, or
> will it do it easily?
Maybe. Rows that are updated often are not generally pgsql's strong
suit, but you can make this work if all of the following hold:

- you're running 8.3 or above;
- your fill factor is low enough that there's empty space for the updates;
- the fields you are updating are not indexed;
- your vacuuming is aggressive enough;
- you restrict your updates to JUST real updates (i.e.
  update ... set a=1 where a<>1); and
- your IO subsystem has enough raw horsepower.

But only benchmarking will tell you whether you can do it with your
current hardware and setup.
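To make the knobs concrete, here's a rough sketch of the fill factor,
per-table autovacuum, and "real updates only" points above. Table and
column names are made up for illustration; tune the numbers to your own
workload:

```sql
-- Leave 50% of each heap page free so an updated row can stay on the
-- same page (lets 8.3's HOT updates kick in); the default is 100.
CREATE TABLE readings (
    sensor_id integer PRIMARY KEY,
    val       numeric NOT NULL
) WITH (fillfactor = 50);

-- Aggressive per-table autovacuum: vacuum once ~1% of rows are dead
-- instead of the global default of 20%.
ALTER TABLE readings SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 100
);

-- Only write a new row version when the value actually changed.
UPDATE readings
   SET val = 42.0
 WHERE sensor_id = 17
   AND val <> 42.0;
```

Note that sensor_id is indexed (it's the primary key) but never
updated, so the "updated fields are not indexed" condition still holds
and HOT can avoid index maintenance on each update.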