Thread: PostgreSQL + IDS load/latency help

PostgreSQL + IDS load/latency help

From: "Loki"

Lists:

We have a PostgreSQL database containing over 500,000
events. When the database held only a small number of
events, retrieving a specific event took just a few
seconds.

However, as the database has grown to its current size, it
now takes 15-20 seconds to get results back from a query.
We have tried everything, including VACUUM, which someone
else recommended to me.
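
For anyone reproducing this: a plain VACUUM reclaims dead
rows but does not refresh the planner's statistics, while
VACUUM ANALYZE does both. A minimal sketch, assuming an
"event" table along the lines of the stock Snort schema
(ours is custom, so names may differ):

    -- Reclaim dead row versions and update planner statistics
    VACUUM ANALYZE event;

    -- Or run it database-wide
    VACUUM ANALYZE;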

I noticed while running top on the system that the
PostgreSQL process eventually eats up ALL available RAM
(half a gigabyte) and forces the system into swap. After
rebooting the machine, query times drop back down to a
reasonable level (obviously because it hasn't eaten up all
the memory yet).
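
In case it helps anyone diagnose this: PostgreSQL's memory
ceiling is usually set in postgresql.conf rather than by the
data size. The figures below are illustrative guesses for a
512 MB box, not our actual settings (I can't log in to
check), and assume a 7.x release, where the per-sort
parameter is called sort_mem:

    # postgresql.conf -- illustrative values for ~512 MB of RAM
    shared_buffers = 4096    # 8 kB pages, about 32 MB of shared cache
    sort_mem = 4096          # kB per sort; multiplies across backends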

Am I correct in thinking that 500,000 events shouldn't be a
problem? Is there anyone out there running more than
500,000 events in their DB?

Can anyone recommend anything to try to rectify the
situation? Taking 15-20 seconds to get query results back
from the database is unacceptable and cannot work. Please
advise.

* We are running Snort 1.9.1 storing events to a custom
PostgreSQL database. Dumping the database and creating a
new one every so often is also not an option. Can anyone
provide assistance?

For those needing more details on our setup: I am not
currently able to log into the system, but if you email me
I can provide more details offline.

Sincerely,

Eric Hines
Internet Warfare and Intelligence
Fate Research Labs
http://www.fatelabs.com


Re: PostgreSQL + IDS load/latency help

From: "Gavin M. Roy"

Have you used EXPLAIN on your queries to make sure they are
not using a seq scan? I've got tens of millions of records
in my master database and get one-second response times
with the right indexes created.
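
Roughly what I mean, sketched against the stock Snort
schema (your custom table and column names will differ, so
treat these identifiers as placeholders):

    -- Look at the plan for a typical lookup
    EXPLAIN SELECT * FROM event WHERE timestamp > '2003-04-01';

    -- If the plan says "Seq Scan on event", add a matching index
    CREATE INDEX event_timestamp_idx ON event (timestamp);

    -- Refresh statistics so the planner will consider the new index
    ANALYZE event;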

Gavin

Loki wrote:

> Lists:
>
> We have a PostgreSQL database containing over 500,000
> events. [...] Taking 15-20 seconds to get query results
> back from the database is unacceptable and cannot work.
> Please advise.


Re: PostgreSQL + IDS load/latency help

From: Dennis Gearon

What do you mean by 'events'?

Loki wrote:

> Lists:
>
> We have a PostgreSQL database containing over 500,000
> events. [...] We are running Snort 1.9.1 storing events
> to a custom PostgreSQL database.