Thread: EXTERNAL storage and substring on long strings

EXTERNAL storage and substring on long strings

From: Scott Cain

Hello,

A few days ago, I asked for advice on speeding up substring queries on
the GENERAL mailing list.  Joe Conway helpfully pointed out the ALTER
TABLE STORAGE EXTERNAL documentation.  After doing the alter,
the queries got slower!  Here is the background:

A freshly loaded database is VACUUM ANALYZEd and I run this query:

explain analyze select substring(residues from 1000000 for 20000)
from feature where feature_id=1;

where feature is a table with ~3 million rows and residues is a text
column.  For the majority of the rows of feature, residues is null; for
a large minority, it is shortish strings (a few thousand characters);
and for 6 rows, residues contains very long strings of ~20 million
characters (chromosome DNA sequence from fruit flies).

Here's the result from EXPLAIN ANALYZE:
 Index Scan using feature_pkey on feature  (cost=0.00..3.01 rows=1
width=152) (actual time=388.88..388.89 rows=1 loops=1)
   Index Cond: (feature_id = 1)
 Total runtime: 389.00 msec
(3 rows)

Now, I'll change the storage:

alter table feature alter column residues set storage external;

To make sure that really happens, I run an update on feature:

update feature set residues = residues where feature_id<8;

and then VACUUM ANALYZE again.  I run the same EXPLAIN ANALYZE query as
above and get this output:

 Index Scan using feature_pkey on feature  (cost=0.00..3.01 rows=1
width=153) (actual time=954.13..954.14 rows=1 loops=1)
   Index Cond: (feature_id = 1)
 Total runtime: 954.26 msec
(3 rows)

Whoa!  That's not what I expected: the time to do the query got more
than twice as long.  So I think, maybe it was just an unlucky section,
and overall performance will be much better.  So I write a perl script
to do substring queries over all of my chromosomes at various positions
and lengths (20,000 queries total).  For comparison, I also ran the same
script, extracting the chromosomes via SQL and doing the substring in
perl.  Here's what happened:

substr in perl       0.014sec/query
EXTERNAL storage     0.0052sec/query
default storage      0.0040sec/query

So, what am I missing?  Why doesn't EXTERNAL storage improve substring
performance as it says it should in
http://www.postgresql.org/docs/7.3/interactive/sql-altertable.html ?

I am using an IDE drive on a laptop, running PostgreSQL 7.3.2 on Red Hat
Linux 7.3 with 512M RAM.

Thanks,
Scott

--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: EXTERNAL storage and substring on long strings

From: Scott Cain

On Thu, 2003-07-31 at 15:44, Tom Lane wrote:
> Scott Cain <cain@cshl.org> writes:
> > explain analyze select substring(residues from 1000000 for 20000)
> > from feature where feature_id=1;
>
> > where feature is a table with ~3 million rows and residues is a text
> > column.  For the majority of the rows of feature, residues is null; for
> > a large minority, it is shortish strings (a few thousand characters);
> > and for 6 rows, residues contains very long strings of ~20 million
> > characters (chromosome DNA sequence from fruit flies).
>
> I think the reason uncompressed storage loses here is that the runtime
> is dominated by the shortish strings, and you have to do more I/O to get
> at those if they're uncompressed, negating any advantage from not having
> to fetch all of the longish strings.

I'm not sure I understand what that paragraph means, but it sounds like,
if PG is working the way it is supposed to, tough for me, right?
>
> Or it could be that there's a bug preventing John Gray's substring-slice
> optimization from getting used.  The only good way to tell that I can
> think of is to rebuild PG with profiling enabled and try to profile the
> execution both ways.  Are you up for that?

I am not against recompiling.  I am currently using an RPM version, but
I could probably recompile; the compilation is probably straightforward
(adding something like `--with_profiling` to ./configure), but how
straightforward is actually doing the profiling?  Is there a document
somewhere that lays it out?
>
> (BTW, if you are using a multibyte database encoding, then that's your
> problem right there --- the optimization is practically useless unless
> character and byte indexes are the same.)

I shouldn't be, but since it is an RPM, I can't be sure.  It sure would
be silly since the strings consist only of [ATGCN].

Thanks,
Scott

--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: EXTERNAL storage and substring on long strings

From: Scott Cain

On Thu, 2003-07-31 at 16:32, Tom Lane wrote:
> Scott Cain <cain@cshl.org> writes:
> >> (BTW, if you are using a multibyte database encoding, then that's your
> >> problem right there --- the optimization is practically useless unless
> >> character and byte indexes are the same.)
>
> > I shouldn't be, but since it is an RPM, I can't be sure.
>
> Look at "psql -l" to see what encoding it reports for your database.
>
I see, encoding is a per-database option.  Since I've never set it, all
my databases use sql_ascii.
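
For anyone who wants to double-check against the actual data, a quick
sanity test (just a sketch) is to confirm that character and byte
indexes line up for a sample row:

select length(residues) = octet_length(residues)
from feature where feature_id = 1;

With sql_ascii and [ATGCN] data, that should return true.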
--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: EXTERNAL storage and substring on long strings

From: Joe Conway

Scott Cain wrote:
>  Index Scan using feature_pkey on feature  (cost=0.00..3.01 rows=1
> width=153) (actual time=954.13..954.14 rows=1 loops=1)
>    Index Cond: (feature_id = 1)
>  Total runtime: 954.26 msec
> (3 rows)
>
> Whoa!  That's not what I expected: the time to do the query got more
> than twice as long.  So I think, maybe it was just an unlucky section,
> and overall performance will be much better.  So I write a perl script
> to do substring queries over all of my chromosomes at various positions
> and lengths (20,000 queries total).  For comparison, I also ran the same
> script, extracting the chromosomes via SQL and doing the substring in
> perl.  Here's what happened:

Hmmm, what happens if you compare with a shorter substring, e.g.:

explain analyze select substring(residues from 1000000 for 2000)
from feature where feature_id=1;

I'm just guessing, but it might be that the extra I/O time to read 20K
of uncompressed text versus the smaller compressed text is enough to
swamp the time saved from not needing to uncompress.

Any other ideas out there?

Joe


Re: EXTERNAL storage and substring on long strings

From: Tom Lane

Scott Cain <cain@cshl.org> writes:
>> (BTW, if you are using a multibyte database encoding, then that's your
>> problem right there --- the optimization is practically useless unless
>> character and byte indexes are the same.)

> I shouldn't be, but since it is an RPM, I can't be sure.

Look at "psql -l" to see what encoding it reports for your database.

            regards, tom lane

Re: EXTERNAL storage and substring on long strings

From: Tom Lane

Scott Cain <cain@cshl.org> writes:
> explain analyze select substring(residues from 1000000 for 20000)
> from feature where feature_id=1;

> where feature is a table with ~3 million rows and residues is a text
> column.  For the majority of the rows of feature, residues is null; for
> a large minority, it is shortish strings (a few thousand characters);
> and for 6 rows, residues contains very long strings of ~20 million
> characters (chromosome DNA sequence from fruit flies).

I think the reason uncompressed storage loses here is that the runtime
is dominated by the shortish strings, and you have to do more I/O to get
at those if they're uncompressed, negating any advantage from not having
to fetch all of the longish strings.

Or it could be that there's a bug preventing John Gray's substring-slice
optimization from getting used.  The only good way to tell that I can
think of is to rebuild PG with profiling enabled and try to profile the
execution both ways.  Are you up for that?

(BTW, if you are using a multibyte database encoding, then that's your
problem right there --- the optimization is practically useless unless
character and byte indexes are the same.)

            regards, tom lane

Re: EXTERNAL storage and substring on long strings

From: Joe Conway

Scott Cain wrote:
> So it is possible that if I had a fast SCSI drive, the performance might
> be better?

Faster drives are always better ;-)

Did you try the comparison with shorter substrings? Also, maybe not
related to your specific question, but have you tuned any other
postgresql.conf settings?

Joe




Re: EXTERNAL storage and substring on long strings

From: Tom Lane

Scott Cain <cain@cshl.org> writes:
> I see, encoding is a per-database option.  Since I've never set it, all
> my databases use sql_ascii.

Okay, then you've dodged the obvious bullet; time to try profiling I
guess.  The way I usually do it is (given a clean, configured source
tree):

    cd src/backend
    gmake PROFILE="-pg -DLINUX_PROFILE" all
    install resulting postgres executable

(The -DLINUX_PROFILE is unnecessary on non-Linux machines, but AFAIK it
won't hurt anything either.)  Once you have this installed, each session
will end by dumping a gmon.out profile file into the $PGDATA/base/nnn
directory for its database.  After you've done a test run, you do

    gprof path/to/postgres/executable path/to/gmon.out >outputfile

and voila, you have a profile.

It's a good idea to make sure that you accumulate a fair amount of CPU
time in a test session, since the profile depends on statistical
sampling.  I like to have about a minute of accumulated runtime before
trusting the results.  Repeat the same query multiple times if needed.
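
One easy way to repeat it without a driver script is a throwaway
plpgsql loop (a sketch --- the function name and iteration count here
are arbitrary):

    create or replace function profile_burn(int) returns int as '
    declare
        i int := 0;
    begin
        while i < $1 loop
            -- run the query and discard the result; we only want CPU time
            perform substring(residues from 1000000 for 20000)
                from feature where feature_id = 1;
            i := i + 1;
        end loop;
        return i;
    end;
    ' language 'plpgsql';

    select profile_burn(100);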

            regards, tom lane

Re: EXTERNAL storage and substring on long strings

From: Joe Conway

Scott Cain wrote:
> I am not against recompiling.  I am currently using an RPM version, but
> I could probably recompile; the compilation is probably straightforward
> (adding something like `--with_profiling` to ./configure), but how
> straightforward is actually doing the profiling?  Is there a document
> somewhere that lays it out?
>

Try:
rpm --rebuild --define 'beta 1' postgresql-7.3.4-1PGDG.src.rpm

This will get you Postgres with --enable-cassert and --enable-debug, and
it will leave the binaries unstripped. Install the new RPMs.

Then start up psql in one terminal, followed by gdb in another. Attach
to the postgres backend pid and set a breakpoint at
toast_fetch_datum_slice. Then continue the gdb session, and run your SQL
statement in the psql session. Something like:

session 1:
   psql mydatabase

session 2:
   ps -ef | grep postgres
   (note the pid on the postgres backend, *not* the psql session)
   gdb /usr/bin/postgres
   attach <pid-of-backend>
   break toast_fetch_datum_slice
   continue

session 1:
   select substring(residues from 1000000 for 20000) from feature where
   feature_id=1;

session 2:
   did we hit the breakpoint in toast_fetch_datum_slice?

HTH,

Joe


Re: EXTERNAL storage and substring on long strings

From: Scott Cain

Joe,

I'm working on the comparison--I think the best way to do it is to
reload the original data into a new database and compare them, so it
will take a while.

I have tuned postgresql.conf according to the page that everybody around
here seems to cite. I'll probably post back tomorrow with another set of
results.

Also, the perl script that did several queries used lengths of 5,000,
10,000, and 40,000 because those are the typical lengths I would use
(occasionally shorter).

Thanks,
Scott

On Thu, 2003-07-31 at 16:49, Joe Conway wrote:
> Scott Cain wrote:
> > So it is possible that if I had a fast SCSI drive, the performance might
> > be better?
>
> Faster drives are always better ;-)
>
> Did you try the comparison with shorter substrings? Also, maybe not
> related to your specific question, but have you tuned any other
> postgresql.conf settings?
>
> Joe
>
>
--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: EXTERNAL storage and substring on long strings

From: Ron Johnson

On Thu, 2003-07-31 at 15:31, Joe Conway wrote:
> Scott Cain wrote:
> >  Index Scan using feature_pkey on feature  (cost=0.00..3.01 rows=1
> > width=153) (actual time=954.13..954.14 rows=1 loops=1)
> >    Index Cond: (feature_id = 1)
> >  Total runtime: 954.26 msec
> > (3 rows)
> >
> > Whoa!  That's not what I expected: the time to do the query got more
> > than twice as long.  So I think, maybe it was just an unlucky section,
> > and overall performance will be much better.  So I write a perl script
> > to do substring queries over all of my chromosomes at various positions
> > and lengths (20,000 queries total).  For comparison, I also ran the same
> > script, extracting the chromosomes via SQL and doing the substring in
> > perl.  Here's what happened:
>
> Hmmm, what happens if you compare with a shorter substring, e.g.:
>
> explain analyze select substring(residues from 1000000 for 2000)
> from feature where feature_id=1;
>
> I'm just guessing, but it might be that the extra I/O time to read 20K
> of uncompressed text versus the smaller compressed text is enough to
> swamp the time saved from not needing to uncompress.

Are you asking, "Can his CPU decompress faster than his disks can
read?"

--
+-----------------------------------------------------------------+
| Ron Johnson, Jr.        Home: ron.l.johnson@cox.net             |
| Jefferson, LA  USA                                              |
|                                                                 |
| "I'm not a vegetarian because I love animals, I'm a vegetarian  |
|  because I hate vegetables!"                                    |
|    unknown                                                      |
+-----------------------------------------------------------------+



Re: EXTERNAL storage and substring on long strings

From: Scott Cain

So it is possible that if I had a fast SCSI drive, the performance might
be better?

On Thu, 2003-07-31 at 16:31, Joe Conway wrote:
> Scott Cain wrote:
> >  Index Scan using feature_pkey on feature  (cost=0.00..3.01 rows=1
> > width=153) (actual time=954.13..954.14 rows=1 loops=1)
> >    Index Cond: (feature_id = 1)
> >  Total runtime: 954.26 msec
> > (3 rows)
> >
> > Whoa!  That's not what I expected: the time to do the query got more
> > than twice as long.  So I think, maybe it was just an unlucky section,
> > and overall performance will be much better.  So I write a perl script
> > to do substring queries over all of my chromosomes at various positions
> > and lengths (20,000 queries total).  For comparison, I also ran the same
> > script, extracting the chromosomes via SQL and doing the substring in
> > perl.  Here's what happened:
>
> Hmmm, what happens if you compare with a shorter substring, e.g.:
>
> explain analyze select substring(residues from 1000000 for 2000)
> from feature where feature_id=1;
>
> I'm just guessing, but it might be that the extra I/O time to read 20K
> of uncompressed text versus the smaller compressed text is enough to
> swamp the time saved from not needing to uncompress.
>
> Any other ideas out there?
>
> Joe
--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: EXTERNAL storage and substring on long strings

From: Scott Cain

Hello,

Note: there is a SQL question way at the bottom of this narrative :-)

Last week I asked about doing substring operations on very long strings
(>10 million characters).  I was given a suggestion to use EXTERNAL
storage on the column via the ALTER TABLE ... SET STORAGE command.  In
one test case, the performance of substring actually got worse using
EXTERNAL storage.

In an effort to find the best way to do this operation, I decided to
look at what is my "worst case" scenario: the DNA sequence for human
chromosome 1, which is about 250 million characters long (previous
strings were about 20 million characters long).  I wrote a perl script
to do several substring operations over this very long string, with
substring lengths varying between 1000 and 40,000 characters spread out
over various locations along the string.  While EXTERNAL storage won in
this case, it was a hollow victory: 38 seconds per operation versus 40
seconds, both of which are way too long for an interactive
application.

Time for a new method.  A suggestion from my boss was to "shred" the DNA
into smallish chunks, with a column giving each chunk's offset from the
beginning of the string so that the string can be reassembled when
needed.  Here is the test table:

string=> \d dna
      Table "public.dna"
 Column  |  Type   | Modifiers
---------+---------+-----------
 foffset | integer |
 pdna    | text    |
Indexes: foffset_idx btree (foffset)
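
In case it is useful, here is a sketch of how the chunks could be
generated in SQL alone (the names and the 2000-character chunk size are
illustrative only; it assumes the full sequence sits in a one-row table
longdna(dna text), and it pulls the whole string into memory, so treat
it as a sketch rather than a tuned loader):

CREATE OR REPLACE FUNCTION shred_dna (integer) RETURNS INTEGER AS '
DECLARE
    chunk ALIAS FOR $1;   -- chunk size in characters
    src   TEXT;
    len   INTEGER;
    pos   INTEGER := 1;
BEGIN
    SELECT INTO src dna FROM longdna;
    len := length(src);
    WHILE pos <= len LOOP
        -- foffset records the 0-based offset of each chunk
        INSERT INTO dna (foffset, pdna)
            VALUES (pos - 1, substring(src FROM pos FOR chunk));
        pos := pos + chunk;
    END LOOP;
    RETURN len;
END;
' LANGUAGE 'plpgsql';

SELECT shred_dna(2000);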

In practice, there would also be a foreign key column to give the
identifier of the dna.  Then I wrote the following function (here's the
SQL part promised above):

CREATE OR REPLACE FUNCTION dna_string (integer, integer) RETURNS TEXT AS '
DECLARE
    smin ALIAS FOR $1;              -- start position of desired substring
    smax ALIAS FOR $2;              -- end position of desired substring
    longdna         TEXT := '''';   -- reassembled stretch of chunks
    dna_row         dna%ROWTYPE;
    dnastring       TEXT;
    firstchunk      INTEGER;
    lastchunk       INTEGER;
    in_longdnastart INTEGER;
    in_longdnalen   INTEGER;
    chunksize       INTEGER;
BEGIN
    -- infer the chunk size from the smallest nonzero offset
    SELECT INTO chunksize min(foffset) FROM dna WHERE foffset>0;

    -- offsets of the first and last chunks overlapping [smin, smax]
    -- (integer division rounds down to a chunk boundary)
    firstchunk :=  chunksize*(smin/chunksize);
    lastchunk  :=  chunksize*(smax/chunksize);

    -- start and length of the target within the reassembled stretch
    in_longdnastart := smin % chunksize;
    in_longdnalen   := smax - smin + 1;

    -- concatenate only the chunks that cover the requested range
    FOR dna_row IN
        SELECT * FROM dna
        WHERE foffset >= firstchunk AND foffset <= lastchunk
        ORDER BY foffset
        LOOP

        longdna := longdna || dna_row.pdna;
    END LOOP;

    dnastring := substring(longdna FROM in_longdnastart FOR in_longdnalen);

    RETURN dnastring;
END;
' LANGUAGE 'plpgsql';
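
For example, fetching a 20,000-character window starting at position
1,000,000 (the same shape as my earlier benchmark queries; smax - smin + 1
is the substring length) looks like:

select dna_string(1000000, 1019999);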

So here's the question: I've never written a plpgsql function before, so
I don't have much experience with it; is there anything obviously wrong
with this function, or are there things that could be done better?  At
least this appears to work and is much faster, completing substring
operations like above in about 0.27 secs (that's about two orders of
magnitude improvement!)

Thanks,
Scott


--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Tom Lane

Scott Cain <cain@cshl.org> writes:
> At least this appears to work and is much faster, completing substring
> operations like above in about 0.27 secs (that's about two orders of
> magnitude improvement!)

I find it really, really hard to believe that a crude reimplementation
in plpgsql of the TOAST concept could beat the built-in implementation
at all, let alone beat it by two orders of magnitude.

Either there's something unrealistic about your testing of the
dna_string function, or your original tests are not causing TOAST to be
invoked in the expected way, or there's a bug we need to fix.  I'd
really like to see some profiling of the poor-performing
external-storage case, so we can figure out what's going on.

            regards, tom lane

Re: [SQL] EXTERNAL storage and substring on long strings

From: Richard Huxton

On Monday 04 August 2003 16:25, Scott Cain wrote:
[snip]
> In an effort to find the best way to do this operation, I decided to
> look at what is my "worst case" scenario: the DNA sequence for human
> chromosome 1, which is about 250 million characters long (previous
> strings were about 20 million characters long).  I wrote a perl script
> to do several substring operations over this very long string, with
> substring lengths varying between 1000 and 40,000 characters spread out
> over various locations along the string.  While EXTERNAL storage won in
> this case, it was a hollow victory: 38 seconds per operation versus 40
> seconds, both of which are way too long for an interactive
> application.
>
> Time for a new method.  A suggestion from my boss was to "shred" the DNA
> into smallish chunks and a column giving offsets from the beginning of
> the string, so that it can be reassembled when needed. Here is the test
> table:
>
> string=> \d dna
>       Table "public.dna"
>  Column  |  Type   | Modifiers
> ---------+---------+-----------
>  foffset | integer |
>  pdna    | text    |
> Indexes: foffset_idx btree (foffset)

[snipped plpgsql function which stitches chunks together and then substrings]

> So here's the question: I've never written a plpgsql function before, so
> I don't have much experience with it; is there anything obviously wrong
> with this function, or are there things that could be done better?  At
> least this appears to work and is much faster, completing substring
> operations like above in about 0.27 secs (that's about two orders of
> magnitude improvement!)

You might want some checks to make sure that smin < smax, otherwise looks like
it does the job in a good clean fashion.

Glad to hear it's going to solve your problems. Two things you might want to
bear in mind:
1. There's probably a "sweet spot" where the chunk size interacts well with
your data, usage patterns, and PG's backend to give you peak performance.
You'll have to test.
2. If you want to search for a sequence you'll need to deal with the case
where it starts in one chunk and ends in another.

--
  Richard Huxton
  Archonet Ltd

Re: [SQL] EXTERNAL storage and substring on long strings

From: Scott Cain

On Mon, 2003-08-04 at 11:55, Richard Huxton wrote:
> On Monday 04 August 2003 16:25, Scott Cain wrote:
> [snip]
> > [snip]
>
> You might want some checks to make sure that smin < smax, otherwise looks like
> it does the job in a good clean fashion.

Good point--smin < smax generally by virtue of the application using the
database, but I shouldn't assume that will always be the case.
>
> Glad to hear it's going to solve your problems. Two things you might want to
> bear in mind:
> 1. There's probably a "sweet spot" where the chunk size interacts well with
> your data, usage patterns, and PG's backend to give you peak performance.
> You'll have to test.

Yes, I had a feeling that was probably the case-- since this is an open
source project, I will need to write directions for installers on
picking a reasonable chunk size.

> 2. If you want to search for a sequence you'll need to deal with the case
> where it starts in one chunk and ends in another.

I forgot about searching--I suspect that application is why I faced
opposition for shredding in my schema development group.  Maybe I should
push that off to the file system and use grep (or BLAST).  Otherwise, I
could write a function that would search the chunks first, then after
failing to find the substring in those, I could start sewing the chunks
together to look for the query string.  That could get ugly (and
slow--but if the user knows that and expects it to be slow, I'm ok with
that).

Thanks,
Scott

--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Shridhar Daithankar

On 4 Aug 2003 at 12:14, Scott Cain wrote:
> I forgot about searching--I suspect that application is why I faced
> opposition for shredding in my schema development group.  Maybe I should
> push that off to the file system and use grep (or BLAST).  Otherwise, I
> could write a function that would search the chunks first, then after
> failing to find the substring in those, I could start sewing the chunks
> together to look for the query string.  That could get ugly (and
> slow--but if the user knows that and expects it to be slow, I'm ok with
> that).

I assume your DNA sequence is compacted.  Your best bet would be to
fetch the sequences from the database and run BLAST on them in client
memory.  There's no point duplicating BLAST functionality.  Last I
tried, it beat every text-searching technique where heuristics are
involved.

Bye
 Shridhar

--
There are two types of Linux developers - those who can spell, and those
who can't.  There is a constant pitched battle between the two.  (From one
of the post-1.1.54 kernel update messages posted to c.o.l.a)


Re: [SQL] EXTERNAL storage and substring on long strings

From: Scott Cain

On Mon, 2003-08-04 at 11:53, Tom Lane wrote:
> Scott Cain <cain@cshl.org> writes:
> > At least this appears to work and is much faster, completing substring
> > operations like above in about 0.27 secs (that's about two orders of
> > magnitude improvement!)
>
> I find it really, really hard to believe that a crude reimplementation
> in plpgsql of the TOAST concept could beat the built-in implementation
> at all, let alone beat it by two orders of magnitude.
>
> Either there's something unrealistic about your testing of the
> dna_string function, or your original tests are not causing TOAST to be
> invoked in the expected way, or there's a bug we need to fix.  I'd
> really like to see some profiling of the poor-performing
> external-storage case, so we can figure out what's going on.
>
I was really hoping for a "Good job and glad to hear it" from you :-)

I don't think there is anything unrealistic about my function or its
testing, as it is very much along the lines of the types of things we do
now.  I will really try to do some profiling this week to help figure
out what is going on.

Scott

--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Matt Clark

> > 2. If you want to search for a sequence you'll need to deal with the case
> > where it starts in one chunk and ends in another.
>
> I forgot about searching--I suspect that application is why I faced
> opposition for shredding in my schema development group.  Maybe I should
> push that off to the file system and use grep (or BLAST).  Otherwise, I
> could write a function that would search the chunks first, then after
> failing to find the substring in those, I could start sewing the chunks
> together to look for the query string.  That could get ugly (and
> slow--but if the user knows that and expects it to be slow, I'm ok with
> that).

If you know the max length of the sequences being searched for, and this
is much less than the chunk size, then you could simply have the chunks
overlap by that much, thus guaranteeing every substring will be found in
its entirety in at least one chunk.
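
Searching then stays a simple per-chunk scan.  As a sketch (hypothetical
pattern; assumes foffset is each chunk's 0-based start and the overlap is
at least the pattern length):

select distinct foffset + position('GATTACA' in pdna) - 1 as abs_offset
from dna
where pdna like '%GATTACA%';

(distinct, because a hit that falls in the overlap region shows up in two
chunks.)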



Re: [SQL] EXTERNAL storage and substring on long strings

From: Joe Conway

Scott Cain wrote:
> On Mon, 2003-08-04 at 11:53, Tom Lane wrote:
>>I find it really, really hard to believe that a crude reimplementation
>>in plpgsql of the TOAST concept could beat the built-in implementation
>>at all, let alone beat it by two orders of magnitude.
>>
>>Either there's something unrealistic about your testing of the
>>dna_string function, or your original tests are not causing TOAST to be
>>invoked in the expected way, or there's a bug we need to fix.  I'd
>>really like to see some profiling of the poor-performing
>>external-storage case, so we can figure out what's going on.
>
> I was really hoping for a "Good job and glad to hear it" from you :-)
>
> I don't think there is anything unrealistic about my function or its
> testing, as it is very much along the lines of the types of things we do
> now.  I will really try to do some profiling this week to help figure
> out what is going on.

Is there a sample table schema and dataset available (external-storage
case) that we can play with?

Joe


Re: [SQL] EXTERNAL storage and substring on long strings

From: Scott Cain

Joe,

Good idea, since I may not get around to profiling it this week.  I
created a dump of the data set I was working with.  It is available at
http://www.gmod.org/string_dump.bz2

Thanks,
Scott


On Mon, 2003-08-04 at 16:29, Joe Conway wrote:
> Is there a sample table schema and dataset available (external-storage
> case) that we can play with?
>
> Joe
--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Scott Cain

Oh, and I forgot to mention: it's highly compressed (bzip2 -9) and is
109M.

Scott

On Tue, 2003-08-05 at 11:01, Scott Cain wrote:
> Joe,
>
> Good idea, since I may not get around to profiling it this week.  I
> created a dump of the data set I was working with.  It is available at
> http://www.gmod.org/string_dump.bz2
>
> Thanks,
> Scott
>
>
> On Mon, 2003-08-04 at 16:29, Joe Conway wrote:
> > Is there a sample table schema and dataset available (external-storage
> > case) that we can play with?
> >
> > Joe
--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Joe Conway

Scott Cain wrote:
> Oh, and I forgot to mention: it's highly compressed (bzip2 -9) and is
> 109M.

Thanks. I'll grab a copy from home later today and see if I can find
some time to poke at it.

Joe




Re: EXTERNAL storage and substring on long strings

From: Tom Lane

Scott Cain <cain@cshl.org> writes:
> A few days ago, I asked for advice on speeding up substring queries on
> the GENERAL mailing list.  Joe Conway helpfully pointed out the ALTER
> TABLE STORAGE EXTERNAL documentation.  After doing the alter,
> the queries got slower!  Here is the background:

Ah-hah, I've sussed it ... you didn't actually change the storage
representation.  You wrote:

> Now, I'll change the storage:
>    alter table feature alter column residues set storage external;
> To make sure that really happens, I run an update on feature:
>    update feature set residues = residues where feature_id<8;
> and then VACUUM ANALYZE again.

This sounds good --- in fact, I think we all just accepted it when we
read it --- but in fact *that update didn't decompress the toasted data*.
The tuple toaster sees that the same toasted value is being stored back
into the row, and so it just re-uses the existing toasted data; it does
not stop to notice that the column storage preference has changed.

To actually get the storage to change, you need to feed the value
through some function or operator that will decompress it.  Then it
won't get recompressed when it's stored.  One easy way (since this
is a text column) is

    update feature set residues = residues || '' where feature_id<8;

To verify that something really happened, try doing VACUUM VERBOSE on
the table before and after.  The quoted number of tuples in the toast
table should rise substantially.
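
Another way to watch this (a sketch; the toast table's name varies per
installation, so look it up via reltoastrelid) is to check the toast
relation's stats directly:

    select t.relname, t.relpages, t.reltuples
    from pg_class c join pg_class t on t.oid = c.reltoastrelid
    where c.relname = 'feature';

After the rewrite and a VACUUM, relpages and reltuples for the toast
table should grow, since the uncompressed data needs more chunks.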

I did the following comparisons on the test data you made available,
using two tables in which one has default storage and one has "external"
(not compressed) storage:

scott=# \timing
Timing is on.
scott=# select length (dna) from edna;
  length
-----------
 245203899
(1 row)

Time: 1.05 ms
scott=# select length (dna) from ddna;
  length
-----------
 245203899
(1 row)

Time: 1.11 ms
scott=# select length(substring(dna from 1000000 for 20000)) from edna;
 length
--------
  20000
(1 row)

Time: 30.43 ms
scott=# select length(substring(dna from 1000000 for 20000)) from ddna;
 length
--------
  20000
(1 row)

Time: 37383.02 ms
scott=#

So it looks like the external-storage optimization for substring() does
work as expected, once you get the data into the right format ...

            regards, tom lane

Re: EXTERNAL storage and substring on long strings

From: Joe Conway

Tom Lane wrote:
> Scott Cain <cain@cshl.org> writes:
>
>>A few days ago, I asked for advice on speeding up substring queries on
>>the GENERAL mailing list.  Joe Conway helpfully pointed out the ALTER
>>TABLE STORAGE EXTERNAL documentation.  After doing the alter,
>>the queries got slower!  Here is the background:
>
> Ah-hah, I've sussed it ... you didn't actually change the storage
> representation.  You wrote:

Yeah, I came to the same conclusion this morning (update longdna set dna
= dna || '';), but it still seems that the chunked table is very
slightly faster than the substring on the externally stored column:

dna=# explain analyze select pdna from dna where foffset > 6000000 and
foffset < 6024000;
                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------
  Index Scan using foffset_idx on dna  (cost=0.00..4.22 rows=14
width=32) (actual time=0.06..0.16 rows=11 loops=1)
    Index Cond: ((foffset > 6000000) AND (foffset < 6024000))
  Total runtime: 0.27 msec
(3 rows)

dna=# explain analyze select pdna from dna where foffset > 6000000 and
foffset < 6024000;
                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------
  Index Scan using foffset_idx on dna  (cost=0.00..4.22 rows=14
width=32) (actual time=0.07..0.16 rows=11 loops=1)
    Index Cond: ((foffset > 6000000) AND (foffset < 6024000))
  Total runtime: 0.25 msec
(3 rows)

dna=# explain analyze select substr(dna,6002000,20000) from longdna;
                                            QUERY PLAN
------------------------------------------------------------------------------------------------
  Seq Scan on longdna  (cost=0.00..1.01 rows=1 width=32) (actual
time=0.46..0.47 rows=1 loops=1)
  Total runtime: 0.58 msec
(2 rows)

dna=# explain analyze select substr(dna,6002000,20000) from longdna;
                                            QUERY PLAN
------------------------------------------------------------------------------------------------
  Seq Scan on longdna  (cost=0.00..1.01 rows=1 width=32) (actual
time=0.23..0.24 rows=1 loops=1)
  Total runtime: 0.29 msec
(2 rows)

I ran each command twice after starting psql to observe the effects of
caching.

However, with the provided sample data, longdna has only one row, and dna
has 122,540 rows, all of which are chunks of the one longdna row. I
would tend to think that if you had 1000 or so longdna records indexed
on some id column, versus 122,540,000 dna chunks indexed on both an id
and segment column, the substring from longdna would win.

Joe


Re: EXTERNAL storage and substring on long strings

From: Tom Lane

Joe Conway <mail@joeconway.com> writes:
> Tom Lane wrote:
>> Ah-hah, I've sussed it ... you didn't actually change the storage
>> representation.  You wrote:

> Yeah, I came to the same conclusion this morning (update longdna set dna
> = dna || '';), but it still seems that the chunked table is very
> slightly faster than the substring on the  externally stored column:

> dna=# explain analyze select pdna from dna where foffset > 6000000 and
> foffset < 6024000;
>                                                      QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------
>   Index Scan using foffset_idx on dna  (cost=0.00..4.22 rows=14
> width=32) (actual time=0.07..0.16 rows=11 loops=1)
>     Index Cond: ((foffset > 6000000) AND (foffset < 6024000))
>   Total runtime: 0.25 msec
> (3 rows)

> dna=# explain analyze select substr(dna,6002000,20000) from longdna;
>                                             QUERY PLAN
> ------------------------------------------------------------------------------------------------
>   Seq Scan on longdna  (cost=0.00..1.01 rows=1 width=32) (actual
> time=0.23..0.24 rows=1 loops=1)
>   Total runtime: 0.29 msec
> (2 rows)

This isn't a totally fair comparison, though, since the second case is
actually doing the work of assembling the chunks into a single string,
while the first is not.  Data-copying alone would probably account for
the difference.

I would expect that the two would come out to essentially the same cost
when fairly compared, since the dna table is nothing more nor less than
a hand implementation of the TOAST concept.  The toaster's internal
fetching of toasted data segments ought to be equivalent to the above
indexscan.  The toaster would have a considerable edge on Scott's
implementation when it came to assembling the chunks, since it's working
in C and not in plpgsql, but the table access costs ought to be just
about the same.
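
For instance, making both sides hand back the assembled 20000-character
string --- say, with Scott's dna_string function from upthread (offsets
chosen to match Joe's example) --- would look something like:

    explain analyze select dna_string(6002000, 6021999);
    explain analyze select substr(dna, 6002000, 20000) from longdna;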

            regards, tom lane

Re: EXTERNAL storage and substring on long strings

From: Scott Cain

> snipped much discussion about EXTERNAL storage and substring speed

Joe and Tom,

Thanks for all of your help; I went back to my (nearly) production
database, executed `update feature set residues = residues || '';`,
and then redid my benchmark.  Here's a summary of the results:

substr in perl                      0.83sec/op
substring on default text column    0.24sec/op
substring on EXTERNAL column        0.0069sec/op

There's that 2 orders of magnitude improvement I was looking for!

Thanks again,
Scott

--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Jan Wieck

Tom Lane wrote:
> Scott Cain <cain@cshl.org> writes:
>> At least this appears to work and is much faster, completing substring
>> operations like above in about 0.27 secs (that's about two orders of
>> magnitude improvement!)
>
> I find it really, really hard to believe that a crude reimplementation
> in plpgsql of the TOAST concept could beat the built-in implementation
> at all, let alone beat it by two orders of magnitude.
>
> Either there's something unrealistic about your testing of the
> dna_string function, or your original tests are not causing TOAST to be
> invoked in the expected way, or there's a bug we need to fix.  I'd
> really like to see some profiling of the poor-performing
> external-storage case, so we can figure out what's going on.

Doesn't look that unrealistic to me. A plain text based substring
function will reassemble the whole beast first before cutting out the
wanted part. His manually chunked version will read only those chunks
needed. Considering that he's talking about retrieving a few thousand
chars from a string hundreds of MB in size ...


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #


Re: [SQL] EXTERNAL storage and substring on long strings

From: Tom Lane

Jan Wieck <JanWieck@Yahoo.com> writes:
> Doesn't look that unrealistic to me. A plain text based substring
> function will reassemble the whole beast first before cutting out the
> wanted part. His manually chunked version will read only those chunks
> needed.

So does substr(), if applied to EXTERNAL (non-compressed) toasted text.
See John Gray's improvements a release or two back.

            regards, tom lane

Re: [SQL] EXTERNAL storage and substring on long strings

From: Scott Cain

Agreed.  When I actually Did It Right (tm), EXTERNAL storage gave
performance similar to (probably better than) my shredding method,
without all the hoops to break up and reassemble the string.

Scott

On Thu, 2003-08-14 at 14:00, Tom Lane wrote:
> Jan Wieck <JanWieck@Yahoo.com> writes:
> > Doesn't look that unrealistic to me. A plain text based substring
> > function will reassemble the whole beast first before cutting out the
> > wanted part. His manually chunked version will read only those chunks
> > needed.
>
> So does substr(), if applied to EXTERNAL (non-compressed) toasted text.
> See John Gray's improvements a release or two back.
>
>             regards, tom lane
--
------------------------------------------------------------------------
Scott Cain, Ph. D.                                         cain@cshl.org
GMOD Coordinator (http://www.gmod.org/)                     216-392-3087
Cold Spring Harbor Laboratory


Re: [SQL] EXTERNAL storage and substring on long strings

From: Jan Wieck

Tom Lane wrote:

> Jan Wieck <JanWieck@Yahoo.com> writes:
>> Doesn't look that unrealistic to me. A plain text based substring
>> function will reassemble the whole beast first before cutting out the
>> wanted part. His manually chunked version will read only those chunks
>> needed.
>
> So does substr(), if applied to EXTERNAL (non-compressed) toasted text.
> See John Gray's improvements a release or two back.

Duh ... of course, EXTERNAL is uncompressed ... where do I have my head?


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #