Thread: Setting "nice" values

Setting "nice" values

From
Madison Kelly
Date:
Hi all,

   I've got a script (perl, in case it matters) that I need to run once
a month to prepare statements. This script queries and updates the
database a *lot*. I am not concerned with the performance of the SQL
calls so much as I am about the impact it has on the server's load.

   Is there a way to limit query speed (ie: set a low 'nice' value on
a query)? This might be an odd question, or I could be asking the
question the wrong way, but hopefully you get the idea. :)

Thanks!

Madi


Re: Setting "nice" values

From
Scott Marlowe
Date:
On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:
> Hi all,
>
>    I've got a script (perl, in case it matters) that I need to run once
> a month to prepare statements. This script queries and updates the
> database a *lot*. I am not concerned with the performance of the SQL
> calls so much as I am about the impact it has on the server's load.
>
>    Is there a way to limit query speed (ie: set a low 'nice' value on
> a query)? This might be an odd question, or I could be asking the
> question the wrong way, but hopefully you get the idea. :)

While you can safely set the priority lower on the calling perl script,
setting db backend priorities lower can result in problems caused by
"priority inversion"  Look up that phrase on the pgsql admin, perform,
general, or hackers lists for an explanation, or go here:

http://en.wikipedia.org/wiki/Priority_inversion

I have a simple script that grabs raw data from an oracle db and shoves
it into a postgresql database for reporting purposes.  Every 100 rows I
put into postgresql, I usleep 10 or so and the load caused by that
script on both systems is minimal.  You might try something like that.
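
In perl it would look something like this (an untested sketch -- the
table, the input file, the batch size and the sleep interval are all
made up, so tune to taste):

  #!/usr/bin/perl
  # Untested sketch: throttle a bulk load by committing and sleeping
  # every N rows.  Assumes DBI/DBD::Pg and a comma-separated input file;
  # the table, columns and file name are placeholders.
  use strict;
  use warnings;
  use DBI;
  use Time::HiRes qw(usleep);

  # optionally drop this script's own CPU priority
  # (0 = PRIO_PROCESS, 0 = this process, 19 = nicest)
  setpriority(0, 0, 19);

  my $dbh = DBI->connect('dbi:Pg:dbname=reports', 'user', 'pass',
                         { AutoCommit => 0, RaiseError => 1 });
  my $ins = $dbh->prepare(
      'INSERT INTO monthly_stats (account_id, total) VALUES (?, ?)');

  open my $fh, '<', 'monthly_stats.csv' or die "can't open input: $!";
  my $rows = 0;
  while (my $line = <$fh>) {
      chomp $line;
      my ($account_id, $total) = split /,/, $line;
      $ins->execute($account_id, $total);
      if (++$rows % 100 == 0) {      # every 100 rows...
          $dbh->commit;              # ...keep transactions short
          usleep(10_000);            # ...and back off for ~10ms
      }
  }
  $dbh->commit;
  $dbh->disconnect;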

Re: Setting "nice" values

From
Madison Kelly
Date:
Scott Marlowe wrote:
> On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:
>> Hi all,
>>
>>    I've got a script (perl, in case it matters) that I need to run once
>> a month to prepare statements. This script queries and updates the
>> database a *lot*. I am not concerned with the performance of the SQL
>> calls so much as I am about the impact it has on the server's load.
>>
>>    Is there a way to limit query speed (ie: set a low 'nice' value on
>> a query)? This might be an odd question, or I could be asking the
>> question the wrong way, but hopefully you get the idea. :)
>
> While you can safely set the priority lower on the calling perl script,
> setting db backend priorities lower can result in problems caused by
> "priority inversion"  Look up that phrase on the pgsql admin, perform,
> general, or hackers lists for an explanation, or go here:
>
> http://en.wikipedia.org/wiki/Priority_inversion
>
> I have a simple script that grabs raw data from an oracle db and shoves
> it into a postgresql database for reporting purposes.  Every 100 rows I
> put into postgresql, I usleep 10 or so and the load caused by that
> script on both systems is minimal.  You might try something like that.

Will the priority of the script pass down to the pgsql queries it calls?
I figured (likely incorrectly) that because the queries were executed by
the psql server the queries ran with the server's priority. If this
isn't the case, then perfect. :)

Thanks for the tip, too, it's something I will try.

Madi

Re: Setting "nice" values

From
Scott Marlowe
Date:
On Thu, 2006-11-02 at 09:25, Madison Kelly wrote:
> Scott Marlowe wrote:
> > On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:
> >> Hi all,
> >>
> >>    I've got a script (perl, in case it matters) that I need to run once
> >> a month to prepare statements. This script queries and updates the
> >> database a *lot*. I am not concerned with the performance of the SQL
> >> calls so much as I am about the impact it has on the server's load.
> >>
> >>    Is there a way to limit query speed (ie: set a low 'nice' value on
> >> a query)? This might be an odd question, or I could be asking the
> >> question the wrong way, but hopefully you get the idea. :)
> >
> > While you can safely set the priority lower on the calling perl script,
> > setting db backend priorities lower can result in problems caused by
> > "priority inversion"  Look up that phrase on the pgsql admin, perform,
> > general, or hackers lists for an explanation, or go here:
> >
> > http://en.wikipedia.org/wiki/Priority_inversion
> >
> > I have a simple script that grabs raw data from an oracle db and shoves
> > it into a postgresql database for reporting purposes.  Every 100 rows I
> > put into postgresql, I usleep 10 or so and the load caused by that
> > script on both systems is minimal.  You might try something like that.
>
> Will the priority of the script pass down to the pgsql queries it calls?
> I figured (likely incorrectly) that because the queries were executed by
> the psql server the queries ran with the server's priority. If this
> isn't the case, then perfect. :)

nope, the priorities don't pass down.  you connect via a client lib to
the server, which spawns a backend process that does the work for you.
The backend process inherits its priority from the postmaster that
spawns it, and they all run at the same priority.

> Thanks for the tip, too, it's something I will try.

Sometimes it's the simple solutions that work best.  :)  Welcome to the
world of pgsql, btw...

Re: Setting "nice" values

From
Tobias Brox
Date:
[Madison Kelly - Thu at 10:25:07AM -0500]
> Will the priority of the script pass down to the pgsql queries it calls?
> I figured (likely incorrectly) that because the queries were executed by
> the psql server the queries ran with the server's priority.

I think you are right, and in any case, I don't think the niceness
value will help much if the bottleneck is iowait.

In our application, I've made a special function for doing
low-priority transactions which I believe is quite smart - though maybe
not always.  Before introducing this logic, we observed we had a tipping
point: with too many queries coming in, the database couldn't swallow
them fast enough, and the server just jammed up trying to work on too
many queries at once, returning the results far too slowly.

In the config file, I now have those two flags set:

 stats_start_collector = on
 stats_command_string = on

This will unfortunately cause some CPU load, but the benefit is great
- one can actually check what the server is working on at any time:

  select * from pg_stat_activity

With those, it is possible to check a special view, pg_stat_activity -
it will contain all the queries the database is working on right now.
My idea is to peek into this view - if there are no active queries,
the database is idle, and it's safe to start our low-priority
transaction.  If this view is full of stuff, one should certainly not
run any low-priority transactions, but rather sleep a bit and try again
later.

 select count(*) from pg_stat_activity
  where not current_query like '<IDLE>%' and query_start + ? < now()

The algorithm takes four parameters: the time value to put in above,
the maximum number of queries allowed to run, the sleep time between
each attempt, and the number of attempts to make before giving up.
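
In perl, the check could look something like this (a rough sketch only,
not our production code -- the DBI plumbing, names and example values
are made up):

  #!/usr/bin/perl
  # Rough sketch of the polling idea above, not production code.
  # Assumes DBI/DBD::Pg and that stats_command_string is on so that
  # pg_stat_activity.current_query is populated.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                         { AutoCommit => 1, RaiseError => 1 });

  # The four knobs: minimum query age, max queries allowed to be running,
  # seconds to sleep between attempts, and attempts before giving up.
  sub wait_until_idle {
      my ($dbh, $min_age, $max_running, $sleep_secs, $max_attempts) = @_;
      my $sth = $dbh->prepare(q{
          SELECT count(*) FROM pg_stat_activity
           WHERE current_query NOT LIKE '<IDLE>%'
             AND query_start + CAST(? AS interval) < now()
      });
      for (1 .. $max_attempts) {
          $sth->execute($min_age);
          my ($busy) = $sth->fetchrow_array;
          return 1 if $busy <= $max_running;   # quiet enough, go ahead
          sleep $sleep_secs;                   # busy, back off and retry
      }
      return 0;   # still busy after all attempts; caller should give up
  }

  # e.g. run only if at most 2 queries have been going for over 1 second,
  # checking every 5 seconds, at most 10 times
  if (wait_until_idle($dbh, '1 second', 2, 5, 10)) {
      # ... run the low-priority transaction here ...
  }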


So here are the cons and drawbacks:

 con: Given small queries and small transactions, one can tune this in
      such a way that the low priority queries (almost) never cause
      significant delay for the higher priority queries.

 con: can be used to keep users of an interactive query
      application from causing disturbances on the production database.

 con: can be used to pause low-priority batch jobs so that they execute
      only when the server is idle.

 drawback: unsuitable for long-running queries and transactions

 drawback: with fixed values in the parameters above, one risks that
           the queries never get run if the server is sufficiently stressed.

 drawback: the stats collection requires some CPU

 drawback: the "select * from pg_stats_activity" query requires some CPU

 drawback: the pg_stat_activity view is constant within the
           transaction, so one has to roll back if there is activity
           (this is however not really a bad thing, because one
           certainly shouldn't leave an idle transaction around if the
           database is stressed).

Re: Setting "nice" values

From
Andreas Kostyrka
Date:
Am Donnerstag, den 02.11.2006, 09:41 -0600 schrieb Scott Marlowe:
> Sometimes it's the simple solutions that work best.  :)  Welcome to the
> world of pgsql, btw...

OTOH, there are also non-simple solutions to this, which might make
sense anyway: Install slony, and run your queries against a readonly
replica of your data.

Andreas



Re: Setting "nice" values

From
Jim Nasby
Date:
On Nov 2, 2006, at 9:14 AM, Madison Kelly wrote:
>   I've got a script (perl, in case it matters) that I need to run once
> a month to prepare statements. This script queries and updates the
> database a *lot*. I am not concerned with the performance of the SQL
> calls so much as I am about the impact it has on the server's load.
>
>   Is there a way to limit query speed (ie: set a low 'nice' value on
> a query)? This might be an odd question, or I could be asking the
> question the wrong way, but hopefully you get the idea. :)

The BizGres folks have been working on resource queuing, which will
eventually do what you want. Take a look at the BizGres mailing list
archives for more info.
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)



Re: Setting "nice" values

From
Madison Kelly
Date:
Tobias Brox wrote:
> [Madison Kelly - Thu at 10:25:07AM -0500]
>> Will the priority of the script pass down to the pgsql queries it calls?
>> I figured (likely incorrectly) that because the queries were executed by
>> the psql server the queries ran with the server's priority.
>
> I think you are right, and in any case, I don't think the niceness
> value will help much if the bottleneck is iowait.
>
> In our application, I've made a special function for doing
> low-priority transactions which I believe is quite smart - though maybe
> not always.  Before introducing this logic, we observed we had a tipping
> point: with too many queries coming in, the database couldn't swallow
> them fast enough, and the server just jammed up trying to work on too
> many queries at once, returning the results far too slowly.
>
> In the config file, I now have those two flags set:
>
>  stats_start_collector = on
>  stats_command_string = on
>
> This will unfortunately cause some CPU load, but the benefit is great
> - one can actually check what the server is working on at any time:
>
>   select * from pg_stat_activity
>
> With those, it is possible to check a special view, pg_stat_activity -
> it will contain all the queries the database is working on right now.
> My idea is to peek into this view - if there are no active queries,
> the database is idle, and it's safe to start our low-priority
> transaction.  If this view is full of stuff, one should certainly not
> run any low-priority transactions, but rather sleep a bit and try again
> later.
>
>  select count(*) from pg_stat_activity
>   where not current_query like '<IDLE>%' and query_start + ? < now()
>
> The algorithm takes four parameters: the time value to put in above,
> the maximum number of queries allowed to run, the sleep time between
> each attempt, and the number of attempts to make before giving up.
>
>
> So here are the cons and drawbacks:
>
>  con: Given small queries and small transactions, one can tune this in
>       such a way that the low priority queries (almost) never cause
>       significant delay for the higher priority queries.
>
>  con: can be used to keep users of an interactive query
>       application from causing disturbances on the production database.
>
>  con: can be used to pause low-priority batch jobs so that they execute
>       only when the server is idle.
>
>  drawback: unsuitable for long-running queries and transactions
>
>  drawback: with fixed values in the parameters above, one risks that
>            the queries never get run if the server is sufficiently stressed.
>
>  drawback: the stats collection requires some CPU
>
>  drawback: the "select * from pg_stats_activity" query requires some CPU
>
>  drawback: the pg_stat_activity view is constant within the
>            transaction, so one has to roll back if there is activity
>            (this is however not really a bad thing, because one
>            certainly shouldn't leave an idle transaction around if the
>            database is stressed).

I can see how this would be very useful (and may make use of it later!).
For the current job at hand though, at full tilt it can take a few hours
to run, which puts it into your "drawback" section. The server in
question is also almost always under load of some sort, too.

A great tip and one I am sure to make use of later, thanks!

Madi

Re: Setting "nice" values

From
Madison Kelly
Date:
Scott Marlowe wrote:
> nope, the priorities don't pass down.  you connect via a client lib to
> the server, which spawns a backend process that does the work for you.
> The backend process inherits its priority from the postmaster that
> spawns it, and they all run at the same priority.

Shoot, but I figured as much. :)

>> Thanks for the tip, too, it's something I will try.
>
> Sometimes it's the simple solutions that work best.  :)  Welcome to the
> world of pgsql, btw...

Heh, if only I were new to pgsql I wouldn't feel silly for asking so many
questions :P. By the same token though, I enjoy PgSQL/Linux/FOSS in
general *because* there never seems to be a shortage of things to learn.

Thanks!

Madi

Re: Setting "nice" values

From
Madison Kelly
Date:
Andreas Kostyrka wrote:
> Am Donnerstag, den 02.11.2006, 09:41 -0600 schrieb Scott Marlowe:
>> Sometimes it's the simple solutions that work best.  :)  Welcome to the
>> world of pgsql, btw...
>
> OTOH, there are also non-simple solutions to this, which might make
> sense anyway: Install slony, and run your queries against a readonly
> replica of your data.

Bingo! This seems like exactly what we can/should do, and it will likely
help with other jobs we run, too.

I feel a little silly for not having thought of this myself... Guess I
was too focused on niceness :). Thanks!

Madi

Re: Setting "nice" values

From
Tobias Brox
Date:
[Madison Kelly - Mon at 08:10:12AM -0500]
> to run, which puts it into your "drawback" section. The server in
> question is also almost always under load of some sort, too.
>
> A great tip and one I am sure to make use of later, thanks!

I must have been sleepy, listing up "cons" vs "drawbacks" ;-)

Anyway, the central question is not the size of the job, but the size of
the transactions within the job - if the job consists of many
transactions, "my" test can be run before every transaction.  Having
transactions lasting for hours is a very bad thing to do, anyway.

Re: Setting "nice" values

From
Madison Kelly
Date:
Tobias Brox wrote:
> [Madison Kelly - Mon at 08:10:12AM -0500]
>> to run, which puts it into your "drawback" section. The server in
>> question is also almost always under load of some sort, too.
>>
>> A great tip and one I am sure to make use of later, thanks!
>
> I must have been sleepy, listing up "cons" vs "drawbacks" ;-)

:) I noticed, but figured out what you meant (I certainly make similar flubs!).

> Anyway, the central question is not the size of the job, but the size of
> the transactions within the job - if the job consists of many
> transactions, "my" test can be run before every transaction.  Having
> transactions lasting for hours is a very bad thing to do, anyway.

Ah, sorry, long single queries are what you meant. I have inherited this
code so I am not sure how long a given query takes, though they do use a
lot of joins and such, so I suspect it isn't quick, indexes aside. When
I get some time (and get the backup server running) I plan to play with
this. Currently the DB is on a production server so I am hesitant to
poke around just now. Once I get the backup server though, I will play
with your suggestions. I am quite curious to see how it will work out.

Thanks again!

Madi


Re: Setting "nice" values

From
Tobias Brox
Date:
[Madison Kelly - Mon at 08:48:19AM -0500]
> Ah, sorry, long single queries are what you meant.

No - long running single transactions :-)  If it's only read-only
queries, one will probably benefit by having one transaction for every
query.


Re: Setting "nice" values

From
Madison Kelly
Date:
Tobias Brox wrote:
> [Madison Kelly - Mon at 08:48:19AM -0500]
>> Ah, sorry, long single queries are what you meant.
>
> No - long running single transactions :-)  If it's only read-only
> queries, one will probably benefit by having one transaction for every
> query.
>

In this case, what happens is one kinda ugly big query is read
into a hash, and then looped through (usually ~10,000 rows). On each
loop another, slightly less ugly query is performed based on the first
query's values now in the hash (these queries being where throttling
might help). Then, after the second query is parsed, a PDF file is created
(also a big source of slowness). It isn't entirely read-only though,
because as the PDFs are created a flag is updated in the given record's
row. So yeah, I need to experiment some. :)
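
Roughly, the loop looks something like the outline below (table names
and the PDF step are placeholders, not the real code):

  # Outline only: table/column names and generate_pdf() are placeholders.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect('dbi:Pg:dbname=billing', 'user', 'pass',
                         { AutoCommit => 1, RaiseError => 1 });

  sub generate_pdf { }   # stands in for the (slow) real PDF-generation step

  # the big ugly first query, read into memory (usually ~10,000 rows)
  my $rows = $dbh->selectall_arrayref(
      'SELECT id, client_id FROM statements WHERE pdf_done = false',
      { Slice => {} });

  for my $row (@$rows) {
      # this is where throttling could go: a short usleep, or the
      # pg_stat_activity check Tobias described, before each iteration

      # the slightly-less-ugly per-row query
      my $detail = $dbh->selectall_arrayref(
          'SELECT * FROM transactions WHERE client_id = ?',
          { Slice => {} }, $row->{client_id});

      generate_pdf($row, $detail);

      # the one write: flag the record once its PDF exists
      $dbh->do('UPDATE statements SET pdf_done = true WHERE id = ?',
               undef, $row->{id});
  }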

Madi

Easy read-heavy benchmark kicking around?

From
Brian Hurt
Date:
I'm having a spot of trouble with our storage device vendor.  Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14Mbyte/sec), and we're trying to get them to fix it.  Unfortunately,
they're using the fact that bonnie++ is an open source benchmark to
weasel out of doing anything - they can't fix it unless I can show an
impact in PostgreSQL.

So the question is: is there an easy to install and run, read-heavy
benchmark out there that I can wave at them to get them to fix the
problem?  I have a second database running on a single SATA drive, so I
can use that as a comparison point- "look, we're getting 1/3rd the read
speed of a single SATA drive- this sucks!"

Any advice?

Brian


Re: Easy read-heavy benchmark kicking around?

From
"Merlin Moncure"
Date:
On 11/6/06, Brian Hurt <bhurt@janestcapital.com> wrote:
> I'm having a spot of trouble with our storage device vendor.  Read
> performance (as measured by both bonnie++ and hdparm -t) is abysmal
> (~14Mbyte/sec), and we're trying to get them to fix it.  Unfortunately,
> they're using the fact that bonnie++ is an open source benchmark to
> weasel out of doing anything - they can't fix it unless I can show an
> impact in PostgreSQL.
>
> So the question is: is there an easy to install and run, read-heavy
> benchmark out there that I can wave at them to get them to fix the
> problem?  I have a second database running on a single SATA drive, so I
> can use that as a comparison point- "look, we're getting 1/3rd the read
> speed of a single SATA drive- this sucks!"

hitachi?

my experience with storage vendors is when they say things like that
they know full well their device completely sucks and are just
stalling so that you give up.

merlin

Re: Easy read-heavy benchmark kicking around?

From
Scott Marlowe
Date:
On Mon, 2006-11-06 at 15:09, Merlin Moncure wrote:
> On 11/6/06, Brian Hurt <bhurt@janestcapital.com> wrote:
> > I'm having a spot of trouble with our storage device vendor.  Read
> > performance (as measured by both bonnie++ and hdparm -t) is abysmal
> > (~14Mbyte/sec), and we're trying to get them to fix it.  Unfortunately,
> > they're using the fact that bonnie++ is an open source benchmark to
> > weasel out of doing anything - they can't fix it unless I can show an
> > impact in PostgreSQL.
> >
> > So the question is: is there an easy to install and run, read-heavy
> > benchmark out there that I can wave at them to get them to fix the
> > problem?  I have a second database running on a single SATA drive, so I
> > can use that as a comparison point- "look, we're getting 1/3rd the read
> > speed of a single SATA drive- this sucks!"
>
> hitachi?
>
> my experience with storage vendors is when they say things like that
> they know full well their device completely sucks and are just
> stalling so that you give up.

Man, if I were the OP I'd be naming names, and letting the idiots at
INSERT MAJOR VENDOR HERE know that I was naming names to the whole of
the postgresql community and open source as well to make the point that
if they look down on open source so much, then open source should look
down on them.

PostgreSQL is open source software, BSD and Linux are open source / free
software.  bonnie++'s licensing shouldn't matter one whit, and I'd let
everyone know how shittily I was being treated by this vendor until
they fixed their crap or took it back.

Note that if you're using fibre channel etc... the problem might well be
in your own hardware / device drivers.  There are a lot of really crappy
FC and related cards out there.

Re: Easy read-heavy benchmark kicking around?

From
Mark Kirkwood
Date:
Brian Hurt wrote:
> I'm having a spot of trouble with our storage device vendor.  Read
> performance (as measured by both bonnie++ and hdparm -t) is abysmal
> (~14Mbyte/sec), and we're trying to get them to fix it.  Unfortunately,
> they're using the fact that bonnie++ is an open source benchmark to
> weasel out of doing anything - they can't fix it unless I can show an
> impact in PostgreSQL.
>
> So the question is: is there an easy to install and run, read-heavy
> benchmark out there that I can wave at them to get them to fix the
> problem?  I have a second database running on a single SATA drive, so I
> can use that as a comparison point- "look, we're getting 1/3rd the read
> speed of a single SATA drive- this sucks!"
>

You could use the lineitem table from the TPC-H dataset
(http://www.tpc.org/tpch/default.asp).

Generate the dataset for a scale factor that makes lineitem about 2x
your RAM, load the table and do:

SELECT count(*) FROM lineitem

Running vmstat or iostat while this is happening should display your meager
throughput well enough to get your vendor's attention (I'm checking this
on a fairly old 4 disk system of mine as I type this - I'm seeing about
90Mb/s...)
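
If you want a single MB/s figure out of that seqscan instead of
eyeballing iostat, a rough sketch like this works (assumes DBI/DBD::Pg
and a server version that has pg_relation_size()):

  #!/usr/bin/perl
  # Rough sketch: time a sequential scan of lineitem and report MB/s.
  # Assumes DBI/DBD::Pg, a loaded TPC-H lineitem table, and a server
  # with pg_relation_size().
  use strict;
  use warnings;
  use DBI;
  use Time::HiRes qw(time);

  my $dbh = DBI->connect('dbi:Pg:dbname=tpch', 'user', 'pass',
                         { RaiseError => 1 });

  my ($bytes) = $dbh->selectrow_array("SELECT pg_relation_size('lineitem')");

  my $t0 = time();
  my ($count) = $dbh->selectrow_array('SELECT count(*) FROM lineitem');
  my $elapsed = time() - $t0;

  printf "%d rows, %.0f MB in %.1f s => %.1f MB/s\n",
         $count, $bytes / 2**20, $elapsed, $bytes / 2**20 / $elapsed;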

best wishes

Mark

Re: Easy read-heavy benchmark kicking around?

From
Dimitri Fontaine
Date:
Hi,

Le lundi 6 novembre 2006 21:47, Brian Hurt a écrit :
> So the question is: is there an easy to install and run, read-heavy
> benchmark out there that I can wave at them to get them to fix the
> problem?  I have a second database running on a single SATA drive, so I
> can use that as a comparison point- "look, we're getting 1/3rd the read
> speed of a single SATA drive- this sucks!"
>
> Any advice?

Tsung is an easy-to-use, open-source, multi-protocol distributed load testing
tool. The simplest way to use it to stress-test your machine would be
to use pgfouine to set up a test from the postgresql logs:
  http://tsung.erlang-projects.org/
  http://pgfouine.projects.postgresql.org/tsung.html

Tsung will easily simulate a lot (more than 1000) of concurrent users playing
your custom-defined load scenario. It will then give you some plots for
analyzing the results; for disk I/O, though, you'll have to use some other
tool(s) while the tests are running.

Regards,
--
Dimitri Fontaine
http://www.dalibo.com/


Re: Easy read-heavy benchmark kicking around?

From
Markus Schaber
Date:
Hi, Brian,

Brian Hurt wrote:

> So the question is: is there an easy to install and run, read-heavy
> benchmark out there that I can wave at them to get them to fix the
> problem?

For sequential read performance, use dd. Most variants of dd I've seen
output some timing information, and if not, do a "time dd
if=/your/device of=/dev/null bs=1M" on the partition.

Markus
--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org

Re: Easy read-heavy benchmark kicking around?

From
"Merlin Moncure"
Date:
On 11/8/06, Markus Schaber <schabi@logix-tt.com> wrote:
> Hi, Brian,
>
> Brian Hurt wrote:
>
> > So the question is: is there an easy to install and run, read-heavy
> > benchmark out there that I can wave at them to get them to fix the
> > problem?
>
> For sequential read performance, use dd. Most variants of dd I've seen
> output some timing information, and if not, do a "time dd
> if=/your/device of=/dev/null bs=1M" on the partition.

we had a similar problem with a hitachi san, the ams200.  Their
performance group refused to admit the fact that 50mb/sec dd test was
a valid performance benchmark and needed to be addressed.  Yes, that
was a HITACHI SAN, the AMS200, which hitachi's performance group
claimed was 'acceptable performance'.  This was the advice we got
after swapping out all the hardware and buying an entitlement to
redhat enterprise which we had to do to get them to talk to us.

oh, the unit also lost a controller after about a week of
operation...the unit being a HITACHI SAN, the AMS200.

any questions?

merlin

p.s. we have had good experiences with the adtx.

Re: Easy read-heavy benchmark kicking around?

From
Cosimo Streppone
Date:
Merlin Moncure wrote:

> On 11/8/06, Markus Schaber <schabi@logix-tt.com> wrote:
>> Hi, Brian,
>>
>> Brian Hurt wrote:
>>
>> > So the question is: is there an easy to install and run, read-heavy
>> > benchmark out there that I can wave at them to get them to fix the
>> > problem?
>>
>> For sequential read performance, use dd. Most variants of dd I've seen
>> output some timing information, and if not, do a "time dd
>> if=/your/device of=/dev/null bs=1M" on the partition.
>
> we had a similar problem with a hitachi san, the ams200.  Their
> performance group refused to admit the fact that 50mb/sec dd test was
> a valid performance benchmark and needed to be addressed.
 >
 > [...]
 >
> oh, the unit also lost a controller after about a week of
> operation...the unit being a HITACHI SAN, the AMS200.
>
> any questions?

Yes, one.
What was that unit?

;-)

--
Cosimo

Re: Easy read-heavy benchmark kicking around?

From
"Luke Lonergan"
Date:
Similar experiences with HP and their SmartArray 5i controller on Linux.
The answer was: "this controller has won awards for performance! It can't be
slow!", so we made them test it in their own labs an prove just how awfully
slow it was.  In the case of the 5i, it became apparent that HP had no
internal expertise on Linux and their controllers, the driver was built by a
third party that they didn't support and their performance people didn't
deal with the 5i at all.

In the end, all manner of benchmarking after you've purchased is no
substitute for the up-front question: do you have documentation of the
performance of your RAID controller on [Linux, Solaris, ...]?

I would like everyone who purchases IBM, Dell, HP or Sun to demand that
documentation - then perhaps we'd see higher quality drivers and hardware
result.

- Luke


On 11/8/06 8:34 AM, "Merlin Moncure" <mmoncure@gmail.com> wrote:

> On 11/8/06, Markus Schaber <schabi@logix-tt.com> wrote:
>> Hi, Brian,
>>
>> Brian Hurt wrote:
>>
>>> So the question is: is there an easy to install and run, read-heavy
>>> benchmark out there that I can wave at them to get them to fix the
>>> problem?
>>
>> For sequential read performance, use dd. Most variants of dd I've seen
>> output some timing information, and if not, do a "time dd
>> if=/your/device of=/dev/null bs=1M" on the partition.
>
> we had a similar problem with a hitachi san, the ams200.  Their
> performance group refused to admit the fact that 50mb/sec dd test was
> a valid performance benchmark and needed to be addressed.  Yes, that
> was a HITACHI SAN, the AMS200, which hitachi's performance group
> claimed was 'acceptable performance'.  This was the advice we got
> after swapping out all the hardware and buying an entitlement to
> redhat enterprise which we had to do to get them to talk to us.
>
> oh, the unit also lost a controller after about a week of
> operation...the unit being a HITACHI SAN, the AMS200.
>
> any questions?
>
> merlin
>
> p.s. we have had good experiences with the adtx.
>