Thread: Slow planning time

Slow planning time

From
Scott Neville
Date:
Hi,

We have a database that for some reason has started to be really slow at planning all queries.  The database has been
running version 9.4.2 since July 28th (it was freshly installed then - compiled from source).  The response time is
fairly sporadic, but the quickest plan time I have seen (on any query) using explain analyze is 39ms with an execution
time of 1ms; however, we have slow query logging on and we are getting queries taking over 6000 ms in the planning
stage with then only a few ms to execute.  There is nothing complex about the queries, so even something like this:

select max(datetime) from audit;

(where datetime is an indexed field) takes 200ms to plan and 0.5ms to execute.
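For anyone following along, the split between the two phases is visible directly in EXPLAIN ANALYZE output on 9.4, which prints them as separate summary lines:

```sql
-- EXPLAIN ANALYZE in 9.4 ends its output with two summary lines,
-- "Planning time: ..." and "Execution time: ...", which is the easiest
-- way to confirm the time really is going into planning.
EXPLAIN ANALYZE SELECT max(datetime) FROM audit;
```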

The databases are involved in a replication chain so I have

M1 -> S1 -> S2

I have restarted S2 and S1 and this appears to have made the problem go away (though for how long....).  S1 has a
replication slot listed on M1.

The only other thing to note is that while all of the tables are big, most of them are not crazy (the one that is
most commonly selected from has 718,000 rows in it), though there are some very big tables which are reaching
325,000,000 rows.  There is quite a lot of change too: I estimate about 7,500,000 row changes a day on average, but
this is also very focused (about 7 million of the changes happen on two tables, yet all tables suffer from slow query
planning).  Most of these changes occur overnight, where bulk changes occur, then the rest happens in a more steady
stream through the day.  I could understand it more if the execution time was slow, but it's the planning time.

Auto-vacuum is turned on and set to 1 worker; in addition to this we have a process that runs every night and runs
"vacuum analyze" on as many tables as it can in a 2 hour period (starting with the oldest vacuumed first).

Just wondering if anyone has any thoughts as to why planning takes so long and anything I can do to address the issue.

Thanks

Scott

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DISCLAIMER: This email message and any attachments is for the sole
use of the intended recipient(s) and may contain confidential and
privileged information.  Any unauthorised review, use, disclosure
or distribution is prohibited. If you are not the intended recipient,
please contact the sender by reply email and destroy all copies of
the original message.

The views expressed in this message may not necessarily reflect the
views of Bluestar Software Ltd.

Bluestar Software Ltd, Registered in England
Company Registration No. 03537860, VAT No. 709 2751 29
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~






Re: Slow planning time

From
John Scalia
Date:
Scott,

A couple of things to try. First off, have you performed any vacuum full sequences on your database? By that I mean a sequence of vacuum, vacuum full, and another vacuum. And do you know if your autovacuums have ever completed? Depending on the number of tables, you probably need more than 1 worker anyway. With 7.5 million updates/day, you probably really need to tune your AV settings as well. If you look around the web, there is a query that builds a view indicating the AV status and need for every table in a database; that was quite useful in tuning my own AV settings. Finally, you could have memory settings way too low. Could you post your shared_buffers and other memory values from your postgresql.conf file?
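A minimal sketch of that kind of check, built only on the standard pg_stat_user_tables view (the threshold shown is the stock default of 50 dead tuples plus 20% of the table; adjust it if you have changed autovacuum_vacuum_threshold or autovacuum_vacuum_scale_factor):

```sql
-- Per-table dead-tuple counts vs. the default autovacuum trigger point,
-- plus the last time autovacuum/autoanalyze actually completed.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(50 + 0.2 * n_live_tup)         AS av_threshold,
       n_dead_tup > (50 + 0.2 * n_live_tup) AS needs_vacuum,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```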
--
Jay

On Wed, Dec 23, 2015 at 6:00 AM, Scott Neville <scott.neville@bluestar-software.co.uk> wrote:
Hi,

We have a database that for some reason has started to be really slow at planning all queries.  The database is running version 9.4.2 since July 28th (it was freshly installed then - compiled from source).  The response time is fairly sporadic, but the quickest plan time I have seen (on any query) using explain analyze is 39ms with an execution time of 1ms, however we have slow query logging on and we are getting queries taking over 6000 ms in the planning stage with then only a few ms to execute.  There is nothing complex about the queries so even something like this:

select max(datetime) from audit;

(where datetime is an indexed field takes 200ms to plan and 0.5ms to execute).

The databases are involved in a replication chain so I have

M1 -> S1 -> S2

I have restarted S2 and S1 and this appears to have made the problem go away (though for how long....).  S1 has a replication slot listed on M1.

The only other thing to note is that while all of the tables are big, but most of them are not crazy (the one that is most commonly selected from has 718,000 rows in it), there are some very big tables which are reaching 325,000,000 rows.  There is quite a lot of change too I estimate about 7,500,000 row changes a day on average, but this is also very focused (about 7 million of the changes happen on two tables, yet all tables suffer from slow query planning).  Most of these changes occur overnight where bulk changes occur then the rest happens in a more steady stream through the day.  I could understand it more if the execution time was slow, but its the planning time.

Auto-vacuum is turned on and set to 1 worker, in addition to this we have a process that runs every night and runs "vacuum analyze" on as many tables as it can in a 2 hour period (starting with the oldest vacuumed first).

Just wondering if anyone has any thoughts as to why planning takes so long and anything I can do to address the issue.

Thanks

Scott

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: Slow planning time

From
Scott Neville
Date:
Hi,

Thanks for that. I think the memory settings are OK; I have:

shared_buffers = 32GB
temp_buffers = 32MB
work_mem = 16MB
maintenance_work_mem = 256MB
effective_cache_size = 96GB

With regard to the updates, they are done in bulk (one statement), with a vacuum done straight afterwards, so it's:

delete from table where date < 3 months ago

3.5 million rows changed

vacuum table.
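Spelled out as actual statements (the table and column names here are stand-ins for the real ones):

```sql
-- Bulk purge followed by an immediate manual vacuum.  Note that plain
-- VACUUM only marks the dead rows reusable; it does not shrink the file
-- on disk, and it does not refresh planner statistics unless ANALYZE
-- is added.
DELETE FROM audit WHERE datetime < now() - interval '3 months';
-- (~3.5 million rows deleted)
VACUUM audit;
```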

The planning time is also slow on tables that had an insert of a few 100 rows when the database went live and which
have not subsequently been changed.

Hope that helps.

Scott

On Wednesday 23 Dec 2015 06:55:07 John Scalia wrote:
> Scott,
>
> A couple of things to try... First, off, have you performed any vacuum full
> sequences on your database? Basically, I mean a sequence of vacuum, vacuum
> full, and another vacuum. And do you know if you autovacuum's have ever
> completed? Depending on the number of tables, you probably need more than 1
> worker anyway. With 7.5 million updates/day, you probably really need to
> tune your AV settings as well. If you look around the web, I found a query
> that builds a view indicating the AV status and need for every table in a
> database. That was quite useful in tuning my own AV settings. Finally, you
> could have memory settings way too low. Could you post your shared_buffers
> and other memory values from your postgresql.conf file?
> --
> Jay
>
> On Wed, Dec 23, 2015 at 6:00 AM, Scott Neville <
> scott.neville@bluestar-software.co.uk> wrote:
>
> > Hi,
> >
> > We have a database that for some reason has started to be really slow at
> > planning all queries.  The database is running version 9.4.2 since July
> > 28th (it was freshly installed then - compiled from source).  The response
> > time is fairly sporadic, but the quickest plan time I have seen (on any
> > query) using explain analyze is 39ms with an execution time of 1ms, however
> > we have slow query logging on and we are getting queries taking over 6000
> > ms in the planning stage with then only a few ms to execute.  There is
> > nothing complex about the queries so even something like this:
> >
> > select max(datetime) from audit;
> >
> > (where datetime is an indexed field takes 200ms to plan and 0.5ms to
> > execute).
> >
> > The databases are involved in a replication chain so I have
> >
> > M1 -> S1 -> S2
> >
> > I have restarted S2 and S1 and this appears to have made the problem go
> > away (though for how long....).  S1 has a replication slot listed on M1.
> >
> > The only other thing to note is that while all of the tables are big, but
> > most of them are not crazy (the one that is most commonly selected from has
> > 718,000 rows in it), there are some very big tables which are reaching
> > 325,000,000 rows.  There is quite a lot of change too I estimate about
> > 7,500,000 row changes a day on average, but this is also very focused
> > (about 7 million of the changes happen on two tables, yet all tables suffer
> > from slow query planning).  Most of these changes occur overnight where
> > bulk changes occur then the rest happens in a more steady stream through
> > the day.  I could understand it more if the execution time was slow, but
> > its the planning time.
> >
> > Auto-vacuum is turned on and set to 1 worker, in addition to this we have
> > a process that runs every night and runs "vacuum analyze" on as many tables
> > as it can in a 2 hour period (starting with the oldest vacuumed first).
> >
> > Just wondering if anyone has any thoughts as to why planning takes so long
> > and anything I can do to address the issue.
> >
> > Thanks
> >
> > Scott
> >
--
Scott Neville
Software Developer, Bluestar Software
Telephone: +44 (0)1256 882695
Web site: www.bluestar-software.co.uk
Facebook: www.facebook.com/bluestarsoftware
Email: scott.neville@bluestar-software.co.uk







Re: Slow planning time

From
John Scalia
Date:
Yep, your memory settings look like they might be OK. Now, when you say a bulk update with a vacuum done afterwards, is this a regular vacuum or a vacuum full? You do know that a plain vacuum, as opposed to a vacuum full, doesn't actually shrink the table by removing the deleted tuples? Only a vacuum full does that. Of course, vacuum full locks the entire table, so use it with caution. Also, after a full vacuum, I believe you should run an analyze (or vacuum analyze), as that will rebuild the planner statistics on the newly shrunk table.
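The sequence being suggested, sketched against a hypothetical bloated table:

```sql
VACUUM audit;       -- make dead tuples reusable (no exclusive lock, no shrink)
VACUUM FULL audit;  -- rewrite the table at its minimum size; takes an
                    -- ACCESS EXCLUSIVE lock, blocking all access meanwhile
ANALYZE audit;      -- rebuild planner statistics for the rewritten table
```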

As far as your last comment, on the small unchanged tables being slow to plan: was planning slow on those tables before, or did you only measure it now? Could these simply be cases where you're missing indexes or have corrupted ones? What does explain tell you about the execution strategy? Do you see a lot of sequential scans or index scans? Could you try dropping an index and recreating it, in case you have a corrupted index? I know this last suggestion is a very rare occurrence, but it has happened to me.

On Wed, Dec 23, 2015 at 7:22 AM, Scott Neville <scott.neville@bluestar-software.co.uk> wrote:
Hi,

Thanks for that I think the memory settings are OK, I have:

shared_buffered = 32GB
temp_buffers = 32MB
work_mem = 16MB
maintenance_work_mem = 256MB
effective_cache_size = 96GB

With regard to the updates they are done in bulk (one statement), with a vacuum done straight afterwards, so its

delete from table where date < 3 months ago

3.5 million rows changed

vacuum table.

The planning time is also slow on tables that had an insert of a few 100 rows when the database went live which have not subsequently been changed.

Hope that helps.

Scott

On Wednesday 23 Dec 2015 06:55:07 John Scalia wrote:
> Scott,
>
> A couple of things to try... First, off, have you performed any vacuum full
> sequences on your database? Basically, I mean a sequence of vacuum, vacuum
> full, and another vacuum. And do you know if you autovacuum's have ever
> completed? Depending on the number of tables, you probably need more than 1
> worker anyway. With 7.5 million updates/day, you probably really need to
> tune your AV settings as well. If you look around the web, I found a query
> that builds a view indicating the AV status and need for every table in a
> database. That was quite useful in tuning my own AV settings. Finally, you
> could have memory settings way too low. Could you post your shared_buffers
> and other memory values from your postgresql.conf file?
> --
> Jay
>
> On Wed, Dec 23, 2015 at 6:00 AM, Scott Neville <
> scott.neville@bluestar-software.co.uk> wrote:
>
> > Hi,
> >
> > We have a database that for some reason has started to be really slow at
> > planning all queries.  The database is running version 9.4.2 since July
> > 28th (it was freshly installed then - compiled from source).  The response
> > time is fairly sporadic, but the quickest plan time I have seen (on any
> > query) using explain analyze is 39ms with an execution time of 1ms, however
> > we have slow query logging on and we are getting queries taking over 6000
> > ms in the planning stage with then only a few ms to execute.  There is
> > nothing complex about the queries so even something like this:
> >
> > select max(datetime) from audit;
> >
> > (where datetime is an indexed field takes 200ms to plan and 0.5ms to
> > execute).
> >
> > The databases are involved in a replication chain so I have
> >
> > M1 -> S1 -> S2
> >
> > I have restarted S2 and S1 and this appears to have made the problem go
> > away (though for how long....).  S1 has a replication slot listed on M1.
> >
> > The only other thing to note is that while all of the tables are big, but
> > most of them are not crazy (the one that is most commonly selected from has
> > 718,000 rows in it), there are some very big tables which are reaching
> > 325,000,000 rows.  There is quite a lot of change too I estimate about
> > 7,500,000 row changes a day on average, but this is also very focused
> > (about 7 million of the changes happen on two tables, yet all tables suffer
> > from slow query planning).  Most of these changes occur overnight where
> > bulk changes occur then the rest happens in a more steady stream through
> > the day.  I could understand it more if the execution time was slow, but
> > its the planning time.
> >
> > Auto-vacuum is turned on and set to 1 worker, in addition to this we have
> > a process that runs every night and runs "vacuum analyze" on as many tables
> > as it can in a 2 hour period (starting with the oldest vacuumed first).
> >
> > Just wondering if anyone has any thoughts as to why planning takes so long
> > and anything I can do to address the issue.
> >
> > Thanks
> >
> > Scott
> >
--
Scott Neville
Software Developer, Bluestar Software
Telephone: +44 (0)1256 882695
Web site: www.bluestar-software.co.uk
Facebook: www.facebook.com/bluestarsoftware
Email: scott.neville@bluestar-software.co.uk








Re: Slow planning time

From
Kevin Grittner
Date:
On Wed, Dec 23, 2015 at 6:00 AM, Scott Neville
<scott.neville@bluestar-software.co.uk> wrote:

> We have a database that for some reason has started to be really
> slow at planning all queries.  The database is running version
> 9.4.2 since July 28th (it was freshly installed then - compiled
> from source).

http://www.postgresql.org/support/versioning/

9.4.2 contains bugs that can eat your data and leave you with a
corrupted (and possibly unusable) database without warning.  You
should always be ready to recover from your backups, but it's worth
taking extra care about your backups when you choose to run with
such serious known bugs.

> The response time is fairly sporadic, but the
> quickest plan time I have seen (on any query) using explain analyze
> is 39ms with an execution time of 1ms, however we have slow query
> logging on and we are getting queries taking over 6000 ms in the
> planning stage with then only a few ms to execute.  There is
> nothing complex about the queries so even something like this:
>
> select max(datetime) from audit;
>
> (where datetime is an indexed field takes 200ms to plan and 0.5ms
> to execute).
>
> The databases are involved in a replication chain so I have
>
> M1 -> S1 -> S2
>
> I have restarted S2 and S1 and this appears to have made the
> problem go away (though for how long....).  S1 has a replication
> slot listed on M1.
>
> The only other thing to note is that while all of the tables are
> big, but most of them are not crazy (the one that is most commonly
> selected from has 718,000 rows in it), there are some very big
> tables which are reaching 325,000,000 rows.  There is quite a lot
> of change too I estimate about 7,500,000 row changes a day on
> average, but this is also very focused (about 7 million of the
> changes happen on two tables, yet all tables suffer from slow query
> planning).  Most of these changes occur overnight where bulk
> changes occur then the rest happens in a more steady stream through
> the day.  I could understand it more if the execution time was
> slow, but its the planning time.
>
> Auto-vacuum is turned on and set to 1 worker, in addition to this
> we have a process that runs every night and runs "vacuum analyze"
> on as many tables as it can in a 2 hour period (starting with the
> oldest vacuumed first).
>
> Just wondering if anyone has any thoughts as to why planning
> takes so long and anything I can do to address the issue.

The most likely explanation, based on the above, is that your
vacuum/analyze regimen is not sufficient to keep up with the
modifications.  You don't give a description of the hardware or
show most of your configuration settings (max_connections would be
particularly interesting), so you may well have other problems; but
I would start by setting all autovacuum settings to their defaults
except for these overrides and running a VACUUM ANALYZE command
with a superuser login (even if it takes days) while you let the
other load run:

autovacuum_max_workers = 10
autovacuum_vacuum_cost_limit = 1000
autovacuum_work_mem = '1GB'
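If it helps, those overrides can be applied on 9.4 with ALTER SYSTEM rather than by editing postgresql.conf by hand (the cost-limit parameter's full name is autovacuum_vacuum_cost_limit; note that autovacuum_max_workers only takes effect after a server restart, while the other two only need a reload):

```sql
ALTER SYSTEM SET autovacuum_max_workers = 10;          -- requires restart
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;  -- reload is enough
ALTER SYSTEM SET autovacuum_work_mem = '1GB';          -- reload is enough
SELECT pg_reload_conf();
```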

Once the VACUUM ANALYZE command completes, autovacuum stands a good
chance of keeping up.  If you see all 10 autovacuum processes busy
for more than an hour or two at a time, you might want to increase
autovacuum_vacuum_cost_limit.

Once you have a vacuum regimen that is keeping up, you may want to
run a query to show bloat levels and take extreme measures such as
VACUUM FULL to fix that; but that is pretty pointless without
having a process to prevent a recurrence of the bloat.

If you are still having problems, please read this and post the
suggested information:

https://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Slow planning time

From
Tom Lane
Date:
Scott Neville <scott.neville@bluestar-software.co.uk> writes:
> We have a database that for some reason has started to be really slow at planning all queries.  The database is
> running version 9.4.2 since July 28th (it was freshly installed then - compiled from source).  The response time is
> fairly sporadic, but the quickest plan time I have seen (on any query) using explain analyze is 39ms with an
> execution time of 1ms, however we have slow query logging on and we are getting queries taking over 6000 ms in the
> planning stage with then only a few ms to execute.  There is nothing complex about the queries so even something like this:
> select max(datetime) from audit;
> (where datetime is an indexed field takes 200ms to plan and 0.5ms to execute).

> The databases are involved in a replication chain so I have
> M1 -> S1 -> S2
> I have restarted S2 and S1 and this appears to have made the problem go away (though for how long....).  S1 has a
> replication slot listed on M1.

Please clarify: the slowness occurs on the slaves but not the master?

I am suspicious that the problem has to do with bloat in pg_statistic,
which I will bet that your homegrown vacuuming protocol isn't covering
adequately.  I concur with Kevin's nearby advice that you'd be better
off to forget that and use 10 or so autovacuum workers; you can use
autovacuum_vacuum_cost_limit to throttle their I/O impact, and still be a
lot better off than with just 1 worker.
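A quick way to test that suspicion (pg_statistic for a cluster of ~25 tables should be small; vacuuming a system catalog requires superuser):

```sql
-- How big has the statistics catalog the planner reads grown?
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_statistic'));
-- If it is unexpectedly large, a superuser can vacuum it directly:
VACUUM pg_statistic;
```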

There is probably something else going on that's replication-specific,
but I'm not sufficiently up on that aspect of things to theorize.

            regards, tom lane


Re: Slow planning time

From
"Scott Neville"
Date:
Hi,

To clarify: the slow planning time was on all nodes until I restarted the
slave nodes; the slave nodes are now performing better.  The master,
however, is still performing slowly.

I can increase the number of auto-vacuum workers, but this cluster only has
about 25 tables in it (as I mentioned before, some have quite a lot of rows).
 So to my mind it appeared to be better to have one auto-vacuum process
getting a whole table done as fast as possible, and to limit the impact by
only having one.  You are also correct that the tool we have deliberately
only runs vacuum analyze on user tables.  It was assumed that this would be
enough to keep user tables tidy (and to try and target the vacuum of these
tables at lower usage times) and auto-vacuum in the database could take care
of the rest.  Is this not correct?

Thanks

Scott



On Wed, 23 Dec 2015 10:24:24 -0500
  Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Scott Neville <scott.neville@bluestar-software.co.uk> writes:
>> We have a database that for some reason has started to be really
>>slow at planning all queries.  The database is running version 9.4.2
>>since July 28th (it was freshly installed then - compiled from
>>source).  The response time is fairly sporadic, but the quickest plan
>>time I have seen (on any query) using explain analyze is 39ms with an
>>execution time of 1ms, however we have slow query logging on and we
>>are getting queries taking over 6000 ms in the planning stage with
>>then only a few ms to execute.  There is nothing complex about the
>>queries so even something like this:
>> select max(datetime) from audit;
>> (where datetime is an indexed field takes 200ms to plan and 0.5ms to
>>execute).
>
>> The databases are involved in a replication chain so I have
>> M1 -> S1 -> S2
>> I have restarted S2 and S1 and this appears to have made the problem
>>go away (though for how long....).  S1 has a replication slot listed
>>on M1.
>
> Please clarify: the slowness occurs on the slaves but not the master?
>
> I am suspicious that the problem has to do with bloat in pg_statistic,
> which I will bet that your homegrown vacuuming protocol isn't covering
> adequately.  I concur with Kevin's nearby advice that you'd be better
> off to forget that and use 10 or so autovacuum workers; you can use
> autovacuum_cost_limit to throttle their I/O impact, and still be a
> lot better off than with just 1 worker.
>
> There is probably something else going on that's replication-specific,
> but I'm not sufficiently up on that aspect of things to theorize.
>
>             regards, tom lane

---
Scott Neville
Software Developer, Bluestar Software
Telephone: +44 (0)1256 882695
Web site: www.bluestar-software.co.uk
Facebook: www.facebook.com/bluestarsoftware
Email: scott.neville@bluestar-software.co.uk
