Thread: [Q]updating multiple rows with Different values

[Q]updating multiple rows with Different values

From
"V S P"
Date:
Hello,
searched documentation, FAQ and mailing list archives
(mailing list archive search is voluminous :-) )

but could not find an answer:

I would like to be able to update
several rows to different values at the same time

In Oracle this is called an array update, or
'collect' update, or 'bulk' update -- but those
keywords did not turn up anything for PostgreSQL.

for example tbl_1 has two columns id and col1


update tbl_1  set
   col1=3  where id=25,
   col1=5  where id=26


I am using PHP PDO (and hoping that if there is a mechanism
within postgresql to do that PDO will support it as well).


Thank you in advance,
VSP

--
  V S P
  toreason@fastmail.fm

--
http://www.fastmail.fm - Access all of your messages and folders
                          wherever you are


Re: [Q]updating multiple rows with Different values

From
Andreas Kretschmer
Date:
V S P <toreason@fastmail.fm> wrote:

> Hello,
> searched documentation, FAQ and mailing list archives
> (mailing list archive search is voluminous :-) )
>
> but could not find an answer:
>
> I would like to be able to update
> several rows to different values at the same time
>
> In Oracle this is called an array update, or
> 'collect' update, or 'bulk' update -- but those
> keywords did not turn up anything for PostgreSQL.

test=# create table foo (id int, val int);
CREATE TABLE
Time: 0.837 ms
test=*# insert into foo values (1,1);
INSERT 0 1
Time: 0.434 ms
test=*# insert into foo values (2,2);
INSERT 0 1
Time: 0.298 ms
test=*# update foo set val = case when id=1 then 10 when id=2 then 20
end;
UPDATE 2
Time: 0.424 ms
test=*# select * from foo;
 id | val
----+-----
  1 |  10
  2 |  20
(2 rows)


Is this okay for you?
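
One caveat about the CASE form above: a CASE expression with no ELSE
branch evaluates to NULL, and with no WHERE clause the UPDATE rewrites
every row in the table.  A minimal safer sketch (same foo table as in
the session above):

UPDATE foo
   SET val = CASE WHEN id = 1 THEN 10
                  WHEN id = 2 THEN 20
                  ELSE val   -- leave unmatched rows unchanged
             END
 WHERE id IN (1, 2);         -- and touch only the targeted rows

With only two rows in foo the original session happens to be safe, but
on a larger table the ELSE/WHERE pair matters.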



Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.                              (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°

Re: [Q]updating multiple rows with Different values

From
Gerhard Heift
Date:
On Sat, Nov 22, 2008 at 10:04:48PM -0500, V S P wrote:
> Hello,
> searched documentation, FAQ and mailing list archives
> (mailing list archive search is voluminous :-) )
>
> but could not find an answer:
>
> I would like to be able to update
> several rows to different values at the same time
>
> In Oracle this is called an array update, or
> 'collect' update, or 'bulk' update -- but those
> keywords did not turn up anything for PostgreSQL.
>
> for example tbl_1 has two columns id and col1
>
>
> update tbl_1  set
>    col1=3  where id=25,
>    col1=5  where id=26

Something like this?

UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
    (25, 3),
    (26, 5)
) AS t(id, col1)
WHERE tbl_1.id = t.id;
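
One refinement worth noting with this VALUES form (a common gotcha, not
raised in the thread): the column types of a VALUES list are inferred
from the literals, and values bound as parameters may arrive with type
unknown.  Casting the first row pins down the types for the whole list,
e.g. if col1 were text:

UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
    (25::int, 'three'::text),   -- casts on the first row set the types
    (26, 'five')                -- later rows are coerced to match
) AS t(id, col1)
WHERE tbl_1.id = t.id;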

> I am using PHP PDO (and hoping that if there is a mechanism
> within postgresql to do that PDO will support it as well).
>
> Thank you in advance,
> VSP

Regards,
  Gerhard


Re: [Q]updating multiple rows with Different values

From
"V S P"
Date:
Thank you very much --
this is exactly what I was looking for.

The other example provided,
'  case when id=1 then 10  ',
will work as well.


Now just one more question:
I will not have a lot of values to update (fewer than 1000
at a time) -- but the values for col1 will be text that is
up to 64K.  So I will not be able to construct SQL strings
and just send them (because that would probably exceed the character
limits for SQL statements).

Instead, what I plan to do is to generate an SQL string as a prepared
statement in PDO, and then bind values to it, so I will have

UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
     (:id1, :val1),
     (:id2, :val2),
     (:id3, :val3)
 ) AS t(id, col1)
 WHERE tbl_1.id = t.id;

$count = 1;
foreach ($upd_arr as $upd_row)
{
  $stmt->bindValue(':id'.$count, $upd_row->id);
  $stmt->bindValue(':val'.$count, $upd_row->val);
  $count = $count + 1;
}

(Note the placeholders are not quoted -- quoted, ':val1' would be a
string literal rather than a parameter -- and each pair is ordered to
match t(id, col1).)


Is this, approximately, how I should be doing the update?
Is there a limit on the total size of the statement
when it gets out of PDO and into Postgres?

If yes, what is it?
I will just split the loop into chunks;
I just wanted to know.


Thank you again for such a quick help.





On Sun, 23 Nov 2008 10:11:56 +0100, "Gerhard Heift"
<ml-postgresql-20081012-3518@gheift.de> said:
> On Sat, Nov 22, 2008 at 10:04:48PM -0500, V S P wrote:
> > Hello,
> > searched documentation, FAQ and mailing list archives
> > (mailing list archive search is voluminous :-) )
> >
> > but could not find an answer:
> >
> > I would like to be able to update
> > several rows to different values at the same time
> >
> > In Oracle this is called an array update, or
> > 'collect' update, or 'bulk' update -- but those
> > keywords did not turn up anything for PostgreSQL.
> >
> > for example tbl_1 has two columns id and col1
> >
> >
> > update tbl_1  set
> >    col1=3  where id=25,
> >    col1=5  where id=26
>
> Something like this?
>
> UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
>     (25, 3),
>     (26, 5)
> ) AS t(id, col1)
> WHERE tbl_1.id = t.id;
>
> > I am using PHP PDO (and hoping that if there is a mechanism
> > within postgresql to do that PDO will support it as well).
> >
> > Thank you in advance,
> > VSP
>
> Regards,
>   Gerhard
--
  V S P
  toreason@fastmail.fm

--
http://www.fastmail.fm - Or how I learned to stop worrying and
                          love email again


Re: [Q]updating multiple rows with Different values

From
"Scott Marlowe"
Date:
On Sun, Nov 23, 2008 at 8:43 AM, V S P <toreason@fastmail.fm> wrote:
> Thank you very much
> this is exactly what I am looking for
>
> As well as the example provided
> '  case when id=1 then 10  '
>
> - it will work as well.
>
>
> Now just one more question:
> I will not have a lot of values to update (fewer than 1000
> at a time) -- but the values for col1 will be text that is
> up to 64K.  So I will not be able to construct SQL strings
> and just send them (because it will probably exceed the character
> limits for the SQL statements).

There are no character limits for SQL statements in pgsql.

literal limits in 8.3

From
Sam Mason
Date:
On Sun, Nov 23, 2008 at 12:08:30PM -0700, Scott Marlowe wrote:
> There are no character limits for sql statements in pgsql

That's what I thought!

However, I've just tried today and am getting some strange results.  The
strange results are that above a certain length PG says that it's put a
string in OK but there's nothing there when I look back afterward.  The
code I'm tickling this with is:

  #include <stdio.h>
  #include <stdlib.h>
  int main(int argc, char ** argv)
  {
    int i = 0, x = atoi(argv[1]);
    char letters[] = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";
    printf("INSERT INTO test (col) VALUES ('");
    while (i < x*1024*1024) {
      int n = printf("%s", letters);
      if (n < 0) return 1;  /* printf returns a negative value on error */
      i += n;
    }
    printf("');\n");
    return 0;
  }

I ran the following in psql first:

  CREATE TABLE test (col TEXT);

Then a series of:

  ./test 32 | psql
  ./test 64 | psql
  ./test 128 | psql

the test is a simple:

  SELECT length(col) FROM test;

in psql.  I get a length of zero back (and the string equals '') for the
strange rows.  The execution of "test" also completes far too quickly
when things go strange.

One computer (still 8.3.3 I think) goes strange at 256MB and another
(8.3.4) goes strange at 512MB.  Any idea what's going on?


  Sam

Re: literal limits in 8.3

From
Tom Lane
Date:
Sam Mason <sam@samason.me.uk> writes:
> However, I've just tried today and am getting some strange results.  The
> strange results are that above a certain length PG says that it's put a
> string in OK but there's nothing there when I look back afterward.

I get "out of memory" complaints from psql when I try your test case.

            regards, tom lane

Re: literal limits in 8.3

From
Sam Mason
Date:
On Mon, Nov 24, 2008 at 09:06:14AM -0500, Tom Lane wrote:
> Sam Mason <sam@samason.me.uk> writes:
> > However, I've just tried today and am getting some strange results.  The
> > strange results are that above a certain length PG says that it's put a
> > string in OK but there's nothing there when I look back afterward.
>
> I get "out of memory" complaints from psql when I try your test case.

Hum, strange.

It's a normal 32-bit Intel Debian system, nothing much special done
to increase the kernel/user split or anything like that as far as I
remember on this box.  If I try with larger sizes it falls over with
"out of memory", but up until around 755MB (760MB fails) it gives back
"INSERT 0 1" which I've always read as inserting a row.  A select on the
table gives this inserted row containing a zero length string.


  Sam

Re: literal limits in 8.3

From
Tom Lane
Date:
Sam Mason <sam@samason.me.uk> writes:
> It's a normal 32-bit Intel Debian system, nothing much special done
> to increase the kernel/user split or anything like that as far as I
> remember on this box.  If I try with larger sizes it falls over with
> "out of memory", but up until around 755MB (760MB fails) it gives back
> "INSERT 0 1" which I've always read as inserting a row.  A select on the
> table gives this inserted row containing a zero length string.

Well, I can't reproduce that here.  Something strange about your
configuration maybe?

            regards, tom lane

Re: literal limits in 8.3

From
Sam Mason
Date:
On Mon, Nov 24, 2008 at 10:45:42AM -0500, Tom Lane wrote:
> Sam Mason <sam@samason.me.uk> writes:
> > It's a normal 32-bit Intel Debian system, nothing much special done
> > to increase the kernel/user split or anything like that as far as I
> > remember on this box.  If I try with larger sizes it falls over with
> > "out of memory", but up until around 755MB (760MB fails) it gives back
> > "INSERT 0 1" which I've always read as inserting a row.  A select on the
> > table gives this inserted row containing a zero length string.
>
> Well, I can't reproduce that here.  Something strange about your
> configuration maybe?

Not that I know of.  I've just created a test cluster to make sure and I
get the same behaviour.  Minimal set of commands are:

  LANG=C /usr/lib/postgresql/8.3/bin/initdb pg83
  vi pg83/postgresql.conf

changed port to 5444 to prevent conflicts, everything else default.

  /usr/lib/postgresql/8.3/bin/postgres -D pg83 -k /tmp

in a separate shell:

  echo "create database smason;" | psql -p 5444 -h /tmp template1
  echo "create table test ( col text );" | psql -p 5444 -h /tmp

  ./test 64 | psql -p 5444 -h /tmp
results in:
  INSERT 0 1
  Time: 3354.269 ms

  ./test 512 | psql -p 5444 -h /tmp
results in:
  INSERT 0 1
  Time: 50.452 ms

  echo "select length(col) from test;" | psql -p 5444 -h /tmp
results in:
    length
  ----------
   67108864
          0

Anything else to try?  All PG packages are from Debian backports with
version "8.3.4-2~bpo40+1"; is anybody else seeing this?

Hum, maybe I should try building from source and see if it's something
to do with the Debian packages. [/me twiddles thumbs while code builds]
... done ... I get the same result for both 8.3.4 and 8.3.5.  I'm kind
of stuck for ideas now!


  Sam

Re: literal limits in 8.3

From
Tom Lane
Date:
Sam Mason <sam@samason.me.uk> writes:
> On Mon, Nov 24, 2008 at 10:45:42AM -0500, Tom Lane wrote:
>> Well, I can't reproduce that here.  Something strange about your
>> configuration maybe?

> Not that I know of.  I've just created a test cluster to make sure and I
> get the same behaviour.

Hmm ... the third machine I tried was able to reproduce the problem.

What it boils down to is lack of error checking in psql (not the
backend).  Specifically, we fail to enlarge the output buffer for
psqlscan.l, which causes appendBinaryPQExpBuffer to silently not insert
the chunk it's currently being passed.  Which you might think would be
some random subset of the input string, leading to a parse error on
the backend side --- but no, this is the output of a lexical scan which
means what is dropped is exactly the contents of the multi-megabyte
string literal, not less or more.  And then later insertions work fine
since *they* aren't provoking an out-of-memory problem.  So eventually
the backend receives
    INSERT INTO test (col) VALUES ('');
which of course it finds nothing wrong with.

This is sort of a PITA to fix :-(.  The easiest solution from the point
of view of psql would be to have realloc failure just print "out of
memory" and exit(1), but pqexpbuffer.c is part of libpq and so it's not
too reasonable for it to do that.  And we have also got to think about
the prospect of similarly non-random lossage in other uses of
PQexpbuffer, anyhow.

The least API-damaging solution I can think of is to add an error
indicator flag to PQexpbuffer, comparable to ferror() on stdio files.
Callers would have to check this after loading up a PQexpbuffer if
they wanted to be sure there was no memory overrun.  But that seems
pretty fragile, and it wouldn't be back-patchable.

A variant on that is to clear the buffer and insert "out of memory"
in this scenario, but that's not too pleasant either.

Better ideas anyone?

            regards, tom lane

Re: literal limits in 8.3

From
"Greg Sabino Mullane"
Date:

> Hmm ... the third machine I tried was able to reproduce the problem.
>
> What it boils down to is lack of error checking in psql (not the
> backend).

What is it about certain boxes that causes the failure, but not on others?


--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200811241506
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



Re: literal limits in 8.3

From
Tom Lane
Date:
"Greg Sabino Mullane" <greg@turnstep.com> writes:
>> What it boils down to is lack of error checking in psql (not the
>> backend).

> What is it about certain boxes that causes the failure, but not on others?

It's a matter of having the out-of-memory condition occur just at the
wrong step, ie, the output from psql's lexical scan (as opposed to the
input, or when trying to construct the Query message to send to the
backend).  So it would depend on factors like 32-vs-64-bit and what
you had ulimit set to.

            regards, tom lane