Thread: proposal: lob conversion functionality
Hello

I had to enhance my older project, where XML documents are parsed and
created on the server side - in PL/pgSQL and PL/Perl procedures. We would
like to use the LO API for client-server communication, but we have to
parse/serialize the LO on the server side.

I found that there is no simple API for working with LOs from PL without
access to the file system. I had to use ugly hacks:

CREATE OR REPLACE FUNCTION parser.save_as_lob(bytea)
RETURNS oid AS $$
DECLARE
  _loid oid;
  _substr bytea;
BEGIN
  _loid := lo_creat(-1);
  FOR i IN 0..length($1)/2048
  LOOP
    _substr := substring($1 FROM i * 2048 + 1 FOR 2048);
    IF _substr <> '' THEN
      INSERT INTO pg_largeobject(loid, pageno, data)
        VALUES(_loid, i, _substr);
    END IF;
  END LOOP;
  EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO ohs', _loid);
  RETURN _loid;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STRICT SET search_path = 'pg_catalog';

and

CREATE OR REPLACE FUNCTION fbuilder.attachment_to_xml(attachment oid)
RETURNS xml AS $$
DECLARE
  b_cum bytea = '';
  b bytea;
BEGIN
  FOR b IN SELECT l.data
             FROM pg_largeobject l
            WHERE l.loid = attachment_to_xml.attachment
            ORDER BY l.pageno
  LOOP
    b_cum := b_cum || b;
  END LOOP;
  IF NOT FOUND THEN
    RETURN NULL;
  ELSE
    RETURN xmlelement(NAME "attachment",
                      encode(b_cum, 'base64'));
  END IF;
END;
$$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path = 'pg_catalog';

These functions could be simplified if we supported some functions like
encode, decode for LOs.

So my proposal is to create the functions:

* lo_encode(loid oid) .. returns bytea
* lo_encode(loid oid, encoding text) .. returns text
* lo_make(loid oid, data bytea)
* lo_make(loid oid, data text, encoding text)

This would simplify all transformations between LO and VARLENA. A known
limit is 1G for varlena, but that is still relatively high.

Notes, comments?

Regards

Pavel
Pavel Stehule <pavel.stehule@gmail.com> writes:
> I found that there is no simple API for working with LOs from PL without
> access to the file system.

What? See lo_open(), loread(), lowrite(), etc.

> These functions could be simplified if we supported some functions like
> encode, decode for LOs.

I do not see any good reason to tie encode/decode to LOs.

regards, tom lane
2013/8/10 Tom Lane <tgl@sss.pgh.pa.us>:
> Pavel Stehule <pavel.stehule@gmail.com> writes:
>> I found that there is no simple API for working with LOs from PL without
>> access to the file system.
>
> What? See lo_open(), loread(), lowrite(), etc.

yes, and there are three problems with these functions:

a) probably (I didn't find it) undocumented

b) the design with a LO handle is a little bit PL/pgSQL unfriendly:

CREATE OR REPLACE FUNCTION parser.save_as_lob(bytea)
RETURNS oid AS $$
DECLARE
  loid oid;
  fd integer;
  bytes integer;
BEGIN
  loid := lo_creat(-1);
  fd := lo_open(loid, 131072);
  bytes := lowrite(fd, $1);
  IF (bytes != LENGTH($1)) THEN
    RAISE EXCEPTION 'Not all data copied to blob';
  END IF;
  PERFORM lo_close(fd);
  RETURN loid;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STRICT SET search_path = 'pg_catalog';

CREATE OR REPLACE FUNCTION fbuilder.attachment_to_bytea(attachment oid)
RETURNS bytea AS $$
DECLARE
  fd integer;
  size integer;
BEGIN
  fd := lo_open(attachment, 262144);
  size := lo_lseek(fd, 0, 2);
  PERFORM lo_lseek(fd, 0, 0);
  RETURN loread(fd, size);
EXCEPTION WHEN undefined_object THEN
  PERFORM lo_close(fd);
  RETURN NULL;
END;
$$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path = 'pg_catalog';

I had to use a lot of magic constants, and getting the size is not
straightforward either. I believe that almost all reads will be complete
reads, so that should be supported (maybe loread(fd, -1)).

c) probably there is a bug - it doesn't handle errors as expected:

postgres=# select fbuilder.attachment_to_xml(0);
WARNING: Snapshot reference leak: Snapshot 0x978f6f0 still referenced
 attachment_to_xml
───────────────────
 [null]
(1 row)

Time: 0.809 ms

>> These functions could be simplified if we supported some functions like
>> encode, decode for LOs.
>
> I do not see any good reason to tie encode/decode to LOs.

It would save one transformation - but that is not too important and can
easily be done with the current bytea API.

> regards, tom lane
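For reference, the "magic constants" above are the INV_WRITE and INV_READ open-mode flags from libpq-fs.h (131072 = 0x20000 and 262144 = 0x40000). A sketch of spelling them symbolically in SQL instead of as bare integers, using PostgreSQL's hex bit-string syntax:

```sql
-- INV_WRITE = 0x20000 = 131072, INV_READ = 0x40000 = 262144 (libpq-fs.h)
SELECT x'20000'::int AS inv_write,   -- the lo_open write flag
       x'40000'::int AS inv_read;    -- the lo_open read flag

-- so inside PL/pgSQL one could write, e.g.:
--   fd := lo_open(loid, x'20000'::int | x'40000'::int);  -- read/write
```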
Hello

2013/8/12 Pavel Stehule <pavel.stehule@gmail.com>:
> 2013/8/10 Tom Lane <tgl@sss.pgh.pa.us>:
>> Pavel Stehule <pavel.stehule@gmail.com> writes:
>>> I found that there is no simple API for working with LOs from PL without
>>> access to the file system.
>>
>> What? See lo_open(), loread(), lowrite(), etc.

so the simplified functionality should have slightly different names than
the original LO API:

/* save bytea to a LO (with the possibility to specify a target loid) */
FUNCTION make_lo(src bytea, loid oid DEFAULT -1) RETURNS oid

/* load from a LO to bytea */
FUNCTION load_lo(loid) RETURNS bytea

This API is simple and friendly to PL languages; for more complex and
specific work there is still the older server-side LO API.

Regards

Pavel

> [...]
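The proposed pair would collapse both helper functions quoted earlier into one-liners. A sketch, assuming the proposed make_lo/load_lo signatures land as described (the loid value is illustrative):

```sql
-- store a bytea as a new large object; returns the new loid
SELECT make_lo(decode('ffffff00', 'hex'));

-- read a whole large object back as bytea and wrap it for XML
-- (replace 24630 with a real loid)
SELECT xmlelement(NAME "attachment",
                  encode(load_lo(24630), 'base64'));
```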
On 8/12/13 1:08 PM, Pavel Stehule wrote:
> IF (bytes != LENGTH($1)) THEN
>   RAISE EXCEPTION 'Not all data copied to blob';
> END IF;
> PERFORM lo_close(fd);

FWIW, it's probably better to close before raising the exception...

--
Jim C. Nasby, Data Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net
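In PL/pgSQL terms, the suggestion amounts to something like this (a sketch based on the save_as_lob function quoted above):

```sql
IF (bytes != LENGTH($1)) THEN
    PERFORM lo_close(fd);   -- release the descriptor before raising
    RAISE EXCEPTION 'Not all data copied to blob';
END IF;
PERFORM lo_close(fd);       -- normal path: close after a successful write
```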
2013/8/21 Jim Nasby <jim@nasby.net>
> FWIW, it's probably better to close before raising the exception...
it should be done automatically - the LO handle is released at transaction end
Pavel
+1
We badly need large object to bytea conversion functions.
Once I had to use the ugly pg_read_file() to get some text into PG; I tried to use a large object but found it is useless without a function to convert a large object to bytea.
Jov
blog: http:amutu.com/blog
2013/8/10 Pavel Stehule <pavel.stehule@gmail.com>
> [original proposal quoted in full; trimmed]
Hello
here is a patch

postgres=# select make_lo(decode('ffffff00','hex'));
make_lo
─────────
24629
(1 row)
Time: 40.724 ms
postgres=# select load_lo(24628);
load_lo
────────────
\xffffff00
(1 row)
postgres=# \lo_import ~/avatar.png
lo_import 24630
postgres=# select md5(load_lo(24630));
md5
──────────────────────────────────
513f60836f3b625713acaf1c19b6ea78
(1 row)
postgres=# \q
bash-4.1$ md5sum ~/avatar.png
513f60836f3b625713acaf1c19b6ea78 /home/pavel/avatar.png
2013/8/22 Jov <amutu@amutu.com>
> +1
> badly need the large object and bytea convert function.
> [...]
2013/8/25 Pavel Stehule <pavel.stehule@gmail.com>
> Hello
> here is a patch
> [...]

it introduces the load_lo and make_lo functions
My motivation for implementing this patch was the somewhat difficult conversion of a blob to a base64 string.

I have a communication system based on XML documents. These documents are created in PG with the XML functions. There was a customer request to support binary (image) attachments. The LO API is really very good for transporting binary data from the client side to the server side, but the next processing step was less nice - the LO -> bytea transformation is not well documented, and it is necessary to use magic integer constants. With these two functions this transformation is very simple:
just
select xmlforest(encode(load_lo(24630), 'base64') AS attachment);
Regards
Pavel
Regards

Pavel Stehule

2013/8/22 Jov <amutu@amutu.com>
> [...]
Hi Pavel,
I have reviewed your patch.

-- Patch applied cleanly (using patch -p1)
-- make & make install work fine
-- make check looks good

I did a code walk-through and it looks good. I also did some manual testing
and haven't found any issue with the implementation.

The patch introduces two new API functions, load_lo() and make_lo(), for
loading and saving from/to large objects. When it comes to dropping a LO
created using make_lo(), this still depends on the older API lo_unlink().
I think we should add that to the documentation for clarification.

As a user of the LO functions, when I started testing this new API the first
question that came to mind was why a delete_lo() or destroy_lo() API is
missing. Later I realized that the older lo_unlink() API is needed for that
functionality. So I feel it is good to document that. Do let me know what
you think.
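For documentation purposes, the full lifecycle combining the new functions with the pre-existing lo_unlink() might read something like this (a sketch; the loid value is illustrative and would normally come from make_lo's return value):

```sql
-- create a large object from bytea (new API)
SELECT make_lo(decode('deadbeef', 'hex'));  -- returns a loid, say 24631

-- read it back as bytea (new API)
SELECT load_lo(24631);

-- dropping it still uses the older server-side API
SELECT lo_unlink(24631);
```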
Otherwise patch looks nice and clean.
Regards,
Rushabh Lathia
On Sun, Aug 25, 2013 at 8:31 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> [...]
Rushabh Lathia
2013/9/19 Rushabh Lathia <rushabh.lathia@gmail.com>
> I think we should add that to the documentation for clarification. [...]
good idea
I'll send an updated patch this evening

> Otherwise patch looks nice and clean.
Thank you :)
Regards
Pavel
> [...]
Hello
here is the patch

2013/9/19 Pavel Stehule <pavel.stehule@gmail.com>
> [...]
On Thu, Sep 19, 2013 at 10:19 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
Hellohere is patch
Looks good.
Marking it as Ready for Committer.
PavelRegards2013/9/19 Pavel Stehule <pavel.stehule@gmail.com>2013/9/19 Rushabh Lathia <rushabh.lathia@gmail.com>Hi Pavel,I have reviewed you patch.-- Patch got applied cleanly (using patch -p1)-- Make & Make install works fine-- make check looks goodI done code-walk and it looks good. Also did some manual testing and haven'tfound any issue with the implementation.Patch introduced two new API load_lo() and make_lo() for loading and savingfrom/to large objects Functions. When it comes to drop an lo object createdusing make_lo() this still depend on older API lo_unlink(). I think we shouldadd that into documentation for the clerification.As a user to lo object function when I started testing this new API, firstquestion came to mind is why delete_lo() or destroy_lo() API is missing.Later I realize that need to use lo_unlink() older API for that functionality.So I feel its good to document that. Do let you know what you think ?good ideaI'll send a updated patch eveningOtherwise patch looks nice and clean.Thank you :)RegardsPavelRegards,Rushabh Lathia--On Sun, Aug 25, 2013 at 8:31 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:Pavel StehuleRegardsit introduce a load_lo and make_lo functionsHellohere is a patch
postgres=# select make_lo(decode('ffffff00','hex'));
make_lo
─────────
24629
(1 row)
Time: 40.724 ms
postgres=# select load_lo(24628);
load_lo
────────────
\xffffff00
(1 row)
postgres=# \lo_import ~/avatar.png
lo_import 24630
postgres=# select md5(load_lo(24630));
md5
──────────────────────────────────
513f60836f3b625713acaf1c19b6ea78
(1 row)
postgres=# \q
bash-4.1$ md5sum ~/avatar.png
513f60836f3b625713acaf1c19b6ea78 /home/pavel/avatar.png2013/8/22 Jov <amutu@amutu.com>+1badly need the large object and bytea convert function.Once I have to use the ugly pg_read_file() to put some text to pg,I tried to use large object but find it is useless without function to convert large object to bytea.Jovblog: http:amutu.com/blog2013/8/10 Pavel Stehule <pavel.stehule@gmail.com>Hello
I had to enhance my older project, where XML documents are parsed and
created on server side - in PLpgSQL and PLPerl procedures. We would to
use a LO API for client server communication, but we have to
parse/serialize LO on server side.
I found so there are no simple API for working with LO from PL without
access to file system. I had to use a ugly hacks:
CREATE OR REPLACE FUNCTION parser.save_as_lob(bytea)
RETURNS oid AS $$
DECLARE
_loid oid;
_substr bytea;
BEGIN
_loid := lo_creat(-1);
FOR i IN 0..length($1)/2048
LOOP
_substr := substring($1 FROM i * 2048 + 1 FOR 2048);
IF _substr <> '' THEN
INSERT INTO pg_largeobject(loid, pageno, data)
VALUES(_loid, i, _substr);
END IF;
END LOOP;
EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO ohs', _loid);
RETURN _loid;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STRICT SET search_path = 'pg_catalog';
and
CREATE OR REPLACE FUNCTION fbuilder.attachment_to_xml(attachment oid)
RETURNS xml AS $$
DECLARE
b_cum bytea = '';
b bytea;
BEGIN
FOR b IN SELECT l.data
FROM pg_largeobject l
WHERE l.loid = attachment_to_xml.attachment
ORDER BY l.pageno
LOOP
b_cum := b_cum || b;
END LOOP;
IF NOT FOUND THEN
RETURN NULL;
ELSE
RETURN xmlelement(NAME "attachment",
encode(b_cum, 'base64'));
END IF;
END;
$$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path = 'pg_catalog';
These functions can be simplified if we supports some functions like
encode, decode for LO
So my proposal is creating functions:
* lo_encode(loid oid) .. returns bytea
* lo_encode(loid oid, encoding text) .. returns text
* lo_make(loid oid, data bytea)
* lo_make(loid oid, data text, encoding text)
This can simplify all transformation between LO and VARLENA. Known
limit is 1G for varlena, but it is still relative enough high.
Notes, comments?
Regards
Pavel
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 12.08.2013 21:08, Pavel Stehule wrote:
> 2013/8/10 Tom Lane <tgl@sss.pgh.pa.us>:
>> Pavel Stehule <pavel.stehule@gmail.com> writes:
>>> I found so there are no simple API for working with LO from PL without
>>> access to file system.
>>
>> What? See lo_open(), loread(), lowrite(), etc.
>
> yes, so there are three problems with these functions:
>
> a) probably (I didn't find) undocumented

It's there, although it's a bit difficult to find by searching. See:
http://www.postgresql.org/docs/devel/static/lo-funcs.html

I don't actually agree with this phrase on that page:

> The ones that are actually useful to call via SQL commands are
> lo_creat, lo_create, lo_unlink, lo_import, and lo_export

Calling lo_open, loread and lowrite seems equally useful to me.

> b) design with lo handler is little bit PL/pgSQL unfriendly.

It's a bit awkward, I agree.

> c) probably there is a bug - it doesn't expect handling errors
>
> postgres=# select fbuilder.attachment_to_xml(0);
> WARNING: Snapshot reference leak: Snapshot 0x978f6f0 still referenced
> attachment_to_xml
> ───────────────────
> [null]
> (1 row)

Yeah, that's a server-side bug. inv_open() registers the snapshot before
checking if the large object exists. If it doesn't, the already-registered
snapshot is not unregistered, hence the warning.

I've committed the attached fix for that bug.

- Heikki
2013/9/30 Heikki Linnakangas <hlinnakangas@vmware.com>

> Yeah, that's a server-side bug. inv_open() registers the snapshot before
> checking if the large object exists. If it doesn't, the already-registered
> snapshot is not unregistered, hence the warning.
>
> I've committed the attached fix for that bug.
>
> - Heikki

nice - I was afraid it was my bug

thank you

Pavel
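For reference, this is roughly what the lo_open()/loread() route that Tom
points at looks like from PL/pgSQL - workable, but illustrating the
awkwardness discussed above (262144 is the INV_READ flag value; the 8 kB
chunk size is arbitrary):

```sql
CREATE OR REPLACE FUNCTION read_lo(p_loid oid) RETURNS bytea AS $$
DECLARE
  fd integer;
  chunk bytea;
  result bytea := '';
BEGIN
  fd := lo_open(p_loid, 262144);   -- 262144 = 0x40000 = INV_READ
  LOOP
    chunk := loread(fd, 8192);     -- read in 8 kB chunks
    EXIT WHEN length(chunk) = 0;   -- empty chunk means end of object
    result := result || chunk;
  END LOOP;
  PERFORM lo_close(fd);
  RETURN result;
END;
$$ LANGUAGE plpgsql;
```

The descriptor returned by lo_open() is only valid for the duration of the
transaction, which is fine here because the whole loop runs inside one.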
On Mon, Sep 30, 2013 at 01:06:15PM +0300, Heikki Linnakangas wrote:
> On 12.08.2013 21:08, Pavel Stehule wrote:
>> b) design with lo handler is little bit PL/pgSQL unfriendly.
>
> It's a bit awkward, I agree.

All the other large object functions are named like lo*, so I think new ones
should also be lo* rather than *_lo.  One of the key benefits of large
objects, compared to a bytea column in some table, is their efficiency when
reading or writing only a subset of the object.  However, the proposed
functions only deal with the large object as a whole.  We could easily fix
that.  Consider this list of new functions in their place:

lo_create(oid, bytea) RETURNS oid       -- new LO with content (similar to make_lo)
lo_get(oid) RETURNS bytea               -- read entire LO (same as load_lo)
lo_get(oid, bigint, int) RETURNS bytea  -- read from offset for length
lo_put(oid, bigint, bytea) RETURNS void -- write data at offset

Anything we do here effectively provides wrappers around the existing
functions tailored toward the needs of libpq.  A key outstanding question is
whether doing so provides a compelling increment in usability.

On the plus side, adding such functions resolves the weirdness of having a
variety of database object that is easy to access from libpq but awkward to
access from plain SQL.  On the minus side, this could easily live as an
extension module.  I have not used the large object facility to any
significant degree, but I generally feel this is helpful enough to justify
core inclusion.  Any other opinions on the general suitability or on the
specifics of the API offered?

Thanks,
nm

--
Noah Misch
EnterpriseDB                 http://www.enterprisedb.com
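The subset-oriented variants are the interesting part of this proposal. Under
the signatures listed above, usage would look roughly like this (hypothetical
until committed; the OID and offsets are illustrative, and whether an OID
argument of 0 means "assign an unused OID" is assumed to follow the existing
lo_create() behavior):

```sql
-- create a LO directly from bytea (proposed signature)
SELECT lo_create(0, '\x0123456789abcdef'::bytea);

-- read 4 bytes starting at byte offset 2 (illustrative oid 16385)
SELECT lo_get(16385, 2, 4);

-- overwrite 2 bytes at offset 0
SELECT lo_put(16385, 0, '\xffff'::bytea);
```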
2013/10/21 Noah Misch <noah@leadboat.com>

> All the other large object functions are named like lo*, so I think new ones
> should also be lo* rather than *_lo.  Consider this list of new functions in
> their place:
>
> lo_create(oid, bytea) RETURNS oid       -- new LO with content (similar to make_lo)
> lo_get(oid) RETURNS bytea               -- read entire LO (same as load_lo)
> lo_get(oid, bigint, int) RETURNS bytea  -- read from offset for length
> lo_put(oid, bigint, bytea) RETURNS void -- write data at offset
>
> Anything we do here effectively provides wrappers around the existing
> functions tailored toward the needs of libpq.
fast reply - I'll reply again later from my office.
I don't think the new functions (for the bytea type) make sense in libpq.
From C it is usually better to use the native C interface than to force a
conversion to bytea. The libpq interface could probably be modernized, but it
is complete and sufficient now, and I have no motivation to enhance that API.
In fact, the proposed bytea functions not being in libpq is the reason I used
a different naming convention. My main motivation was access to LOs from PL
without obscure patterns - mainly from PL/pgSQL. For other languages such
access is already available - but there, maybe the better direction is a
direct mapping to the related primitives, based on what the host environment
offers.
This evening I'll look at your proposal with some more time.
Regards
Pavel
On 10/20/2013 07:52 PM, Noah Misch wrote:
> Consider this list of new functions in their place:
>
> lo_create(oid, bytea) RETURNS oid       -- new LO with content (similar to make_lo)
> lo_get(oid) RETURNS bytea               -- read entire LO (same as load_lo)
> lo_get(oid, bigint, int) RETURNS bytea  -- read from offset for length
> lo_put(oid, bigint, bytea) RETURNS void -- write data at offset
>
> Anything we do here effectively provides wrappers around the existing
> functions tailored toward the needs of libpq.  A key outstanding question is
> whether doing so provides a compelling increment in usability.  On the plus
> side, adding such functions resolves the weirdness of having a variety of
> database object that is easy to access from libpq but awkward to access from
> plain SQL.  On the minus side, this could easily live as an extension module.
> I have not used the large object facility to any significant degree, but I
> generally feel this is helpful enough to justify core inclusion.  Any other
> opinions on the general suitability or on the specifics of the API offered?

I am currently working with a client on a largeish LO migration. I would
certainly have appreciated having lo_get(oid) available - I wrote something
in plpgsql that did almost exactly what Pavel's code does. Your additional
lo_get(oid, offset, length) and lo_put(oid, offset, bytea) seem sane enough.
So +1 from me for adding all these.

If we're going to be doing work in this area, let me note that I'm not sure
the decision in commit c0d5be5d6a736d2ee8141e920bc3de8e001bf6d9 to have
pg_dump write a separate archive entry for each LO was the wisest decision we
ever made. My client is currently migrating to use of bytea instead of LOs
partly because we didn't want to think about the issue of having hundreds of
millions of archive entries in a dump. When that process is complete we'll
upgrade. :-)

cheers

andrew
Hello

2013/10/21 Andrew Dunstan <andrew@dunslane.net>

> On 10/20/2013 07:52 PM, Noah Misch wrote:
>> Consider this list of new functions in their place:
>>
>> lo_create(oid, bytea) RETURNS oid       -- new LO with content (similar to make_lo)
>> lo_get(oid) RETURNS bytea               -- read entire LO (same as load_lo)
>> lo_get(oid, bigint, int) RETURNS bytea  -- read from offset for length
>> lo_put(oid, bigint, bytea) RETURNS void -- write data at offset

should be - it is more consistent with the current API than my proposal.

>> Any other opinions on the general suitability or on the specifics of the
>> API offered?

I am for including it in core - we have no built-in SQL functions that allow
simple and fast access on the binary level. Also, these functions complete
the lo functionality.

Other questions - should these functions be propagated to libpq? And who will
write the patch? You or me?

Regards

Pavel
On Mon, Oct 21, 2013 at 08:10:24PM +0200, Pavel Stehule wrote:
> > On 10/20/2013 07:52 PM, Noah Misch wrote:
> >> Anything we do here effectively provides wrappers around the existing
> >> functions tailored toward the needs of libpq.

To clarify the above statement: the existing lo* SQL functions are designed
to fit the needs of the libpq APIs that call those SQL functions internally.
The additions we're discussing are SQL functions designed to fit the needs of
user-written SQL statements.

> I am for including to core - we have no buildin SQL functions that allows
> access simple and fast access on binary level. Next - these functions
> completes lo functionality.
>
> Other questions - should be these functions propagated to libpq?

No; I agree that the existing libpq large object API is adequate.

> and who will write patch? You or me?

If you're prepared to change the function names and add the subset-oriented
functions, I would appreciate that.

Thanks,
nm

--
Noah Misch
EnterpriseDB                 http://www.enterprisedb.com
2013/10/21 Noah Misch <noah@leadboat.com>

> > Other questions - should be these functions propagated to libpq?
>
> No; I agree that the existing libpq large object API is adequate.

ok

> > and who will write patch? You or me?
>
> If you're prepared to change the function names and add the subset-oriented
> functions, I would appreciate that.

I'll try to prepare the patch in the next two days

Regards

Pavel
Hello

here is the patch

2013/10/21 Noah Misch <noah@leadboat.com>

> If you're prepared to change the function names and add the subset-oriented
> functions, I would appreciate that.

Regards

Pavel
On 22.10.2013 13:55, Pavel Stehule wrote:
> 2013/10/21 Noah Misch <noah@leadboat.com>
>> If you're prepared to change the function names and add the subset-oriented
>> functions, I would appreciate that.
>
> here is patch

lobj.sgml still refers to the old names.

- Heikki
fixed documentation

Regards

2013/10/24 Heikki Linnakangas <hlinnakangas@vmware.com>

> On 22.10.2013 13:55, Pavel Stehule wrote:
>> here is patch
>
> lobj.sgml still refers to the old names.
>
> - Heikki
On Fri, Oct 25, 2013 at 03:35:05PM +0200, Pavel Stehule wrote:
> 2013/10/24 Heikki Linnakangas <hlinnakangas@vmware.com>
> > On 22.10.2013 13:55, Pavel Stehule wrote:
> >> 2013/10/21 Noah Misch <noah@leadboat.com>
> >>> If you're prepared to change the function names and add the
> >>> subset-oriented functions, I would appreciate that.
> >>>
> >>> here is patch
> >
> > lobj.sgml still refer to the old names.
>
> fixed documentation

Thanks.  I made these noteworthy changes:

1. Fix lo_get(oid) on a LO larger than INT_MAX bytes: raise an error rather
than performing a modulo operation on the size.

2. Remove the undocumented ability to pass a negative length to request all
bytes up to the end of the LO.  substr() also rejects negative lengths.  Note
that one can get the same effect by passing any length >MaxAllocSize.

3. Documentation reshuffling.  I placed all the documentation for these
functions in the large objects chapter, and I emphasized the new functions
over the prospect of calling the older functions (whose primary role is to
support client interfaces) from SQL.

If this still looks reasonable, I will commit it.

--
Noah Misch
EnterpriseDB                 http://www.enterprisedb.com
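In other words, with negative lengths rejected, a "read from offset to end"
is expressed by passing an over-large length, per Noah's note above (the OID
is illustrative):

```sql
-- read everything from byte offset 100 to the end of the LO: any length
-- greater than the remaining bytes (up to MaxAllocSize) gives the same result
SELECT lo_get(16385, 100, 1073741823);
```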
2013/10/26 Noah Misch <noah@leadboat.com>

> On Fri, Oct 25, 2013 at 03:35:05PM +0200, Pavel Stehule wrote:
>> fixed documentation
>
> Thanks.  I made these noteworthy changes:
>
> 1. Fix lo_get(oid) on a LO larger than INT_MAX bytes: raise an error rather
> than performing a modulo operation on the size.
>
> 2. Remove the undocumented ability to pass a negative length to request all
> bytes up to the end of the LO.  substr() also rejects negative lengths.
> Note that one can get the same effect by passing any length >MaxAllocSize.
>
> 3. Documentation reshuffling.  I placed all the documentation for these
> functions in the large objects chapter, and I emphasized the new functions
> over the prospect of calling the older functions (whose primary role is to
> support client interfaces) from SQL.
>
> If this still looks reasonable, I will commit it.

it is ok

Regards

Pavel