Thread: [HACKERS] Logical Replication and Character encoding
Hi hackers,

I tried a committed Logical Replication environment. I found that replication between databases of different encodings did not convert the encoding in character-type columns. Is this behavior correct?

I expected that the character 0xe6bca2 (UTF-8) would be converted to the equivalent character 0xb4c1 (EUC_JP). The example below replicates from a UTF-8 database to a EUC_JP database. You can see that the character 0xe6bca2 (UTF-8) is stored intact in the SUBSCRIPTION-side database.

- PUBLICATION side (encoding=UTF-8)

postgres=> \l postgres
                           List of databases
   Name   |  Owner   | Encoding | Collate | Ctype | Access privileges
----------+----------+----------+---------+-------+-------------------
 postgres | postgres | UTF8     | C       | C     |
(1 row)

postgres=> CREATE TABLE encode1(col1 NUMERIC PRIMARY KEY, col2 VARCHAR(10)) ;
CREATE TABLE
postgres=> CREATE PUBLICATION pub1 FOR TABLE encode1 ;
CREATE PUBLICATION
postgres=> INSERT INTO encode1 VALUES (1, '漢') ;  -- UTF-8 character 0xe6bca2
INSERT 0 1

- SUBSCRIPTION side (encoding=EUC_JP)

postgres=> \l postgres
                           List of databases
   Name   |  Owner   | Encoding | Collate | Ctype | Access privileges
----------+----------+----------+---------+-------+-------------------
 postgres | postgres | EUC_JP   | C       | C     |
(1 row)

postgres=> CREATE TABLE encode1(col1 NUMERIC PRIMARY KEY, col2 VARCHAR(10)) ;
CREATE TABLE
postgres=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres host=localhost port=5432' PUBLICATION pub1 ;
NOTICE:  created replication slot "sub1" on publisher
CREATE SUBSCRIPTION
postgres=# SELECT * FROM encode1 ;
ERROR:  invalid byte sequence for encoding "EUC_JP": 0xa2
postgres=# SELECT heap_page_items(get_raw_page('encode1', 0)) ;
                          heap_page_items
-------------------------------------------------------------------
 (1,8152,1,33,565,0,0,"(0,1)",2,2306,24,,,"\\x0b0080010009e6bca2")   <- stored UTF-8 char 0xe6bca2
(1 row)

Snapshot: https://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz (2017-01-31 00:29:07)
Operating System: Red Hat Enterprise Linux 7 Update 2 (x86-64)

Regards.
Hello,

At Tue, 31 Jan 2017 12:46:18 +0000, "Shinoda, Noriyoshi" <noriyoshi.shinoda@hpe.com> wrote in <AT5PR84MB0084FAE5976D89CDE9733093EE4A0@AT5PR84MB0084.NAMPRD84.PROD.OUTLOOK.COM>
> I tried a committed Logical Replication environment. I found
> that replication between databases of different encodings did
> not convert encodings in character type columns. Is this
> behavior correct?

The output plugin for subscriptions is pgoutput, and it currently doesn't consider encoding, but that could easily be added if the desired encoding is made known to it.

The easiest (but somewhat fragile) way I can think of is:

- The subscriber connects with a client_encoding specification, and the
  output plugin pgoutput decides whether it accepts that encoding or not.
  If the subscriber doesn't specify one, pgoutput sends data without
  conversion.

The attached small patch does this and works with the following CREATE SUBSCRIPTION.

CREATE SUBSCRIPTION sub1 CONNECTION 'host=/tmp port=5432 dbname=postgres client_encoding=EUC_JP' PUBLICATION pub1;

Also, we may have an explicit negotiation on, for example, CREATE_REPLICATION_SLOT:

'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput ENCODING EUC_JP'

Or the output plugin may take options:

'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput OPTIONS(encoding EUC_JP)'

Any opinions?

--
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 1f30de6..6a235d7 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -16,6 +16,7 @@
 #include "catalog/pg_namespace.h"
 #include "catalog/pg_type.h"
 #include "libpq/pqformat.h"
+#include "mb/pg_wchar.h"
 #include "replication/logicalproto.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
@@ -442,6 +443,9 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		pq_sendbyte(out, 't');	/* 'text' data follows */
 
 		outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);
+		if (pg_get_client_encoding() != GetDatabaseEncoding())
+			outputstr = pg_server_to_client(outputstr, strlen(outputstr));
+
 		len = strlen(outputstr) + 1;	/* null terminated */
 		pq_sendint(out, len, 4);	/* length */
 		appendBinaryStringInfo(out, outputstr, len); /* data */
At Wed, 01 Feb 2017 12:05:40 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20170201.120540.183393194.horiguchi.kyotaro@lab.ntt.co.jp>
> Hello,
>
> At Tue, 31 Jan 2017 12:46:18 +0000, "Shinoda, Noriyoshi" <noriyoshi.shinoda@hpe.com> wrote in <AT5PR84MB0084FAE5976D89CDE9733093EE4A0@AT5PR84MB0084.NAMPRD84.PROD.OUTLOOK.COM>
> > I tried a committed Logical Replication environment. I found
> > that replication between databases of different encodings did
> > not convert encodings in character type columns. Is this
> > behavior correct?
>
> The output plugin for subscriptions is pgoutput, and it currently
> doesn't consider encoding, but that could easily be added if the
> desired encoding is made known to it.
>
> The easiest (but somewhat fragile) way I can think of is:
>
> - The subscriber connects with a client_encoding specification, and the
>   output plugin pgoutput decides whether it accepts that encoding or not.
>   If the subscriber doesn't specify one, pgoutput sends data without
>   conversion.
>
> The attached small patch does this and works with the following
> CREATE SUBSCRIPTION.

Oops. It forgot to handle conversion failure. That is fixed in the attached patch.

> CREATE SUBSCRIPTION sub1 CONNECTION 'host=/tmp port=5432 dbname=postgres client_encoding=EUC_JP' PUBLICATION pub1;
>
> Also, we may have an explicit negotiation on, for example,
> CREATE_REPLICATION_SLOT:
>
> 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput ENCODING EUC_JP'
>
> Or the output plugin may take options:
>
> 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput OPTIONS(encoding EUC_JP)'
>
> Any opinions?

--
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 1f30de6..3c457a2 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -16,6 +16,7 @@
 #include "catalog/pg_namespace.h"
 #include "catalog/pg_type.h"
 #include "libpq/pqformat.h"
+#include "mb/pg_wchar.h"
 #include "replication/logicalproto.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
@@ -442,6 +443,20 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		pq_sendbyte(out, 't');	/* 'text' data follows */
 
 		outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);
+		if (pg_get_client_encoding() != GetDatabaseEncoding())
+		{
+			char	   *p;
+
+			p = pg_server_to_client(outputstr, strlen(outputstr));
+
+			/* Send the converted string only when conversion succeeded */
+			if (p)
+			{
+				pfree(outputstr);
+				outputstr = p;
+			}
+		}
+
 		len = strlen(outputstr) + 1;	/* null terminated */
 		pq_sendint(out, len, 4);	/* length */
 		appendBinaryStringInfo(out, outputstr, len); /* data */
At Wed, 01 Feb 2017 12:13:04 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20170201.121304.267734380.horiguchi.kyotaro@lab.ntt.co.jp> > > > I tried a committed Logical Replication environment. I found > > > that replication between databases of different encodings did > > > not convert encodings in character type columns. Is this > > > behavior correct? > > > > The output plugin for subscription is pgoutput and it currently > > doesn't consider encoding but would easiliy be added if desired > > encoding is informed. > > > > The easiest (but somewhat seems fragile) way I can guess is, > > > > - Subscriber connects with client_encoding specification and the > > output plugin pgoutput decide whether it accepts the encoding > > or not. If the subscriber doesn't, pgoutput send data without > > conversion. > > > > The attached small patch does this and works with the following > > CREATE SUBSCRIPTION. > > Oops. It forgets to care conversion failure. It is amended in the > attached patch. > > > CREATE SUBSCRIPTION sub1 CONNECTION 'host=/tmp port=5432 dbname=postgres client_encoding=EUC_JP' PUBLICATION pub1; > > > > > > Also we may have explicit negotiation on, for example, > > CREATE_REPLICATION_SLOT. > > > > 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput ENCODING EUC_JP' > > > > Or output plugin may take options. > > > > 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput OPTIONS(encoding EUC_JP)' > > > > > > Any opinions? This patch chokes replication when the publisher finds an inconvertible character in a tuple to be sent. For the case, dropping-then-recreating subscription is necessary to go forward. regards, -- Kyotaro Horiguchi NTT Open Source Software Center
Thank you for creating patches. I strongly hope that your patch will be merged into the new version. Since all databases are not yet based on UTF - 8, I think conversion of character codes is still necessary. -----Original Message----- From: Kyotaro HORIGUCHI [mailto:horiguchi.kyotaro@lab.ntt.co.jp] Sent: Wednesday, February 01, 2017 3:31 PM To: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com> Cc: pgsql-hackers@postgresql.org Subject: Re: [HACKERS] Logical Replication and Character encoding At Wed, 01 Feb 2017 12:13:04 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20170201.121304.267734380.horiguchi.kyotaro@lab.ntt.co.jp> > > > I tried a committed Logical Replication environment. I found > > > that replication between databases of different encodings did not > > > convert encodings in character type columns. Is this behavior > > > correct? > > > > The output plugin for subscription is pgoutput and it currently > > doesn't consider encoding but would easiliy be added if desired > > encoding is informed. > > > > The easiest (but somewhat seems fragile) way I can guess is, > > > > - Subscriber connects with client_encoding specification and the > > output plugin pgoutput decide whether it accepts the encoding > > or not. If the subscriber doesn't, pgoutput send data without > > conversion. > > > > The attached small patch does this and works with the following > > CREATE SUBSCRIPTION. > > Oops. It forgets to care conversion failure. It is amended in the > attached patch. > > > CREATE SUBSCRIPTION sub1 CONNECTION 'host=/tmp port=5432 > > dbname=postgres client_encoding=EUC_JP' PUBLICATION pub1; > > > > > > Also we may have explicit negotiation on, for example, > > CREATE_REPLICATION_SLOT. > > > > 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput ENCODING EUC_JP' > > > > Or output plugin may take options. > > > > 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput OPTIONS(encoding EUC_JP)' > > > > > > Any opinions? This patch chokes replication when the publisher finds an inconvertible character in a tuple to be sent. For the case, dropping-then-recreatingsubscription is necessary to go forward. regards, -- Kyotaro Horiguchi NTT Open Source Software Center
Hello. This version makes subscriber automatically set its database encoding to clinet_encoding (if not explicitly set). And changed the behavior when pg_server_to_client returns NULL to ERROR from sending original string. At Wed, 1 Feb 2017 08:39:41 +0000, "Shinoda, Noriyoshi" <noriyoshi.shinoda@hpe.com> wrote in <AT5PR84MB0084A18D3BF1D93B862E95E4EE4D0@AT5PR84MB0084.NAMPRD84.PROD.OUTLOOK.COM> > Thank you for creating patches. > I strongly hope that your patch will be merged into the new > version. Since all databases are not yet based on UTF - 8, I > think conversion of character codes is still necessary. Thanks. > -----Original Message----- > From: Kyotaro HORIGUCHI [mailto:horiguchi.kyotaro@lab.ntt.co.jp] > Sent: Wednesday, February 01, 2017 3:31 PM > To: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com> > Cc: pgsql-hackers@postgresql.org > Subject: Re: [HACKERS] Logical Replication and Character encoding > > At Wed, 01 Feb 2017 12:13:04 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in<20170201.121304.267734380.horiguchi.kyotaro@lab.ntt.co.jp> > > > > I tried a committed Logical Replication environment. I found > > > > that replication between databases of different encodings did not > > > > convert encodings in character type columns. Is this behavior > > > > correct? > > > > > > The output plugin for subscription is pgoutput and it currently > > > doesn't consider encoding but would easiliy be added if desired > > > encoding is informed. > > > > > > The easiest (but somewhat seems fragile) way I can guess is, > > > > > > - Subscriber connects with client_encoding specification and the > > > output plugin pgoutput decide whether it accepts the encoding > > > or not. If the subscriber doesn't, pgoutput send data without > > > conversion. > > > > > > The attached small patch does this and works with the following > > > CREATE SUBSCRIPTION. > > > > Oops. It forgets to care conversion failure. It is amended in the > > attached patch. > > > > > CREATE SUBSCRIPTION sub1 CONNECTION 'host=/tmp port=5432 > > > dbname=postgres client_encoding=EUC_JP' PUBLICATION pub1; > > > > > > > > > Also we may have explicit negotiation on, for example, > > > CREATE_REPLICATION_SLOT. > > > > > > 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput ENCODING EUC_JP' > > > > > > Or output plugin may take options. > > > > > > 'CREATE_REPLICATION_SLOT sub1 LOGICAL pgoutput OPTIONS(encoding EUC_JP)' > > > > > > > > > Any opinions? > > This patch chokes replication when the publisher finds an inconvertible character in a tuple to be sent. For the case,dropping-then-recreating subscription is necessary to go forward. I think this behavior is right and inevitable in order to protect subscribers from broken strings. regards, -- Kyotaro Horiguchi NTT Open Source Software Center
On 01-02-2017 00:05, Kyotaro HORIGUCHI wrote: > - Subscriber connects with client_encoding specification and the > output plugin pgoutput decide whether it accepts the encoding > or not. If the subscriber doesn't, pgoutput send data without > conversion. > I don't think storage without conversion is an acceptable approach. We should provide options to users such as ignore tuple or NULL for column(s) with conversion problem. I wouldn't consider storage data without conversion because it silently show incorrect data and we historically aren't flexible with conversion routines. -- Euler Taveira Timbira - http://www.timbira.com.br/ PostgreSQL: Consultoria, Desenvolvimento, Suporte24x7 e Treinamento
On 2 February 2017 at 11:45, Euler Taveira <euler@timbira.com.br> wrote: > I don't think storage without conversion is an acceptable approach. We > should provide options to users such as ignore tuple or NULL for > column(s) with conversion problem. I wouldn't consider storage data > without conversion because it silently show incorrect data and we > historically aren't flexible with conversion routines. pglogical and BDR both require identical encoding; they test for this during startup and refuse to replicate if the encoding differs. For the first pass at core I suggest a variant on that policy: require source and destination encoding to be the same. This should probably be the first change, since it definitively prevents the issue from arising. If time permits we could also allow destination encoding to be UTF-8 (including codepage 65001) with any source encoding. This requires encoding conversion to be performed, of course. The downside is that this will impact users who use a common subset of two encodings. This is most common for Windows-1252 <-> ISO-8859-15 (or -1 if you're old-school) but also arises anywhere the common 7 bit subset is used. Until we can define an encoding exception policy though, I think we should defer supporting those and make them a "later" problem. -- Craig Ringer http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Training & Services
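For illustration, such a "refuse to replicate on mismatch" policy boils down to a single comparison on the publisher; a minimal sketch (hypothetical C code, assuming the subscriber advertises its database encoding as client_encoding — not the actual pglogical/BDR or pgoutput source) could look like this:

#include "postgres.h"

#include "mb/pg_wchar.h"

/*
 * Hypothetical startup-time guard (sketch only): refuse to start logical
 * decoding when the encoding requested by the subscriber differs from the
 * publisher's database encoding.
 */
static void
check_encodings_match(void)
{
	int			client_encoding = pg_get_client_encoding();
	int			server_encoding = GetDatabaseEncoding();

	if (client_encoding != server_encoding)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("replication requires identical encodings on both sides (publisher: %s, subscriber: %s)",
						pg_encoding_to_char(server_encoding),
						pg_encoding_to_char(client_encoding))));
}

Calling such a check once when decoding starts is enough to implement the "first pass" policy described above; allowing a UTF-8 destination later would simply relax the comparison.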
Hello,

At Fri, 3 Feb 2017 09:16:47 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in <CAMsr+YGqn2PjJBCY+RjWWTJ4BZ=FhG0REXq1E1nd8_kjaDGoEQ@mail.gmail.com>
> On 2 February 2017 at 11:45, Euler Taveira <euler@timbira.com.br> wrote:
>
> > I don't think storage without conversion is an acceptable approach. We
> > should provide options to users such as ignore tuple or NULL for
> > column(s) with conversion problem. I wouldn't consider storage data
> > without conversion because it silently show incorrect data and we
> > historically aren't flexible with conversion routines.

It is possible technically. But changing the behavior of a subscription and/or publication requires a change of SQL syntax. It seems a bit too late for proposing such a new feature.

IMHO unintentional silent data loss must not happen, so the default behavior on conversion failure cannot be anything other than stopping replication.

> pglogical and BDR both require identical encoding; they test for this
> during startup and refuse to replicate if the encoding differs.
>
> For the first pass at core I suggest a variant on that policy: require
> source and destination encoding to be the same. This should probably
> be the first change, since it definitively prevents the issue from
> arising.

If the check is performed by BDR, pglogical itself seems to be allowed to convert strings when both ends have different encodings in a non-BDR environment. Is this right?

> If time permits we could also allow destination encoding to be UTF-8
> (including codepage 65001) with any source encoding. This requires
> encoding conversion to be performed, of course.

Does this mean that BDR might work in a heterogeneous encoding environment? But anyway, some encodings (like SJIS) have characters with the same destination in their mapping, so BDR doesn't seem to work with such conversions. So each encoding should perhaps have a property indicating its usability in a BDR environment, but...

On the other hand, no problem is seen in encoding conversions in non-BDR environments (except the behavior on failure).

> The downside is that this will impact users who use a common subset of
> two encodings. This is most common for Windows-1252 <-> ISO-8859-15
> (or -1 if you're old-school) but also arises anywhere the common 7 bit
> subset is used. Until we can define an encoding exception policy
> though, I think we should defer supporting those and make them a
> "later" problem.

If the conversion is rejected for now, we should check the encoding identity instead.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
On 3 Feb. 2017 15:47, "Kyotaro HORIGUCHI" <horiguchi.kyotaro@lab.ntt.co.jp> wrote:

Hello,

At Fri, 3 Feb 2017 09:16:47 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in <CAMsr+YGqn2PjJBCY+RjWWTJ4BZ=FhG0REXq1E1nd8_kjaDGoEQ@mail.gmail.com>
> On 2 February 2017 at 11:45, Euler Taveira <euler@timbira.com.br> wrote:
>
> > I don't think storage without conversion is an acceptable approach. We
> > should provide options to users such as ignore tuple or NULL for
> > column(s) with conversion problem. I wouldn't consider storage data
> > without conversion because it silently show incorrect data and we
> > historically aren't flexible with conversion routines.

It is possible technically. But changing the behavior of a
subscription and/or publication requires a change of SQL syntax. It
seems a bit too late for proposing such a new feature.

IMHO unintentional silent data loss must not happen, so the
default behavior on conversion failure cannot be anything other than
stopping replication.

Agree. Which is why we should default to disallowing mismatched upstream and downstream encodings. At least to start with.

> pglogical and BDR both require identical encoding; they test for this
> during startup and refuse to replicate if the encoding differs.
>
> For the first pass at core I suggest a variant on that policy: require
> source and destination encoding to be the same. This should probably
> be the first change, since it definitively prevents the issue from
> arising.

If the check is performed by BDR, pglogical itself seems to be
allowed to convert strings when both ends have different
encodings in a non-BDR environment. Is this right?

Hm. Maybe it's changed since I last looked. We started off disallowing mismatched encodings anyway.

Note that I'm referring to pglogical, the tool, not "in core logical replication for postgres".

> If time permits we could also allow destination encoding to be UTF-8
> (including codepage 65001) with any source encoding. This requires
> encoding conversion to be performed, of course.

Does this mean that BDR might work in a heterogeneous encoding
environment?

No. Here the "we" was meant to be PG core for V10 or later.

But anyway, some encodings (like SJIS) have
characters with the same destination in their mapping, so BDR
doesn't seem to work with such conversions. So each encoding
should perhaps have a property indicating its usability in a BDR
environment, but...

PG doesn't allow SJIS as a db encoding. So it doesn't matter here.

> The downside is that this will impact users who use a common subset of
> two encodings. This is most common for Windows-1252 <-> ISO-8859-15
> (or -1 if you're old-school) but also arises anywhere the common 7 bit
> subset is used. Until we can define an encoding exception policy
> though, I think we should defer supporting those and make them a
> "later" problem.

If the conversion is rejected for now, we should check the
encoding identity instead.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
Hello,

At Fri, 3 Feb 2017 13:47:54 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in <CAMsr+YFqNLAvdjmgWOMWM9X=FfzcFOL4pLxBeaqtjySbe_UeTg@mail.gmail.com>
> On 3 Feb. 2017 15:47, "Kyotaro HORIGUCHI" <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
>
> Hello,
>
> At Fri, 3 Feb 2017 09:16:47 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in <CAMsr+YGqn2PjJBCY+RjWWTJ4BZ=FhG0REXq1E1nd8_kjaDGoEQ@mail.gmail.com>
> > On 2 February 2017 at 11:45, Euler Taveira <euler@timbira.com.br> wrote:
> >
> > > I don't think storage without conversion is an acceptable approach. We
> > > should provide options to users such as ignore tuple or NULL for
> > > column(s) with conversion problem. I wouldn't consider storage data
> > > without conversion because it silently show incorrect data and we
> > > historically aren't flexible with conversion routines.
>
> It is possible technically. But changing the behavior of a
> subscription and/or publication requires a change of SQL syntax. It
> seems a bit too late for proposing such a new feature.
>
> IMHO unintentional silent data loss must not happen, so the
> default behavior on conversion failure cannot be anything other than
> stopping replication.
>
> Agree. Which is why we should default to disallowing mismatched upstream
> and downstream encodings. At least to start with.
>
> > pglogical and BDR both require identical encoding; they test for this
> > during startup and refuse to replicate if the encoding differs.
> >
> > For the first pass at core I suggest a variant on that policy: require
> > source and destination encoding to be the same. This should probably
> > be the first change, since it definitively prevents the issue from
> > arising.
>
> If the check is performed by BDR, pglogical itself seems to be
> allowed to convert strings when both ends have different
> encodings in a non-BDR environment. Is this right?
>
> Hm. Maybe it's changed since I last looked. We started off disallowing
> mismatched encodings anyway.
>
> Note that I'm referring to pglogical, the tool, not "in core logical
> replication for postgres"

Ouch! I'm sorry for my mistake. What I meant was not pglogical but pgoutput, the default output plugin. But what the patch modifies is logical/proto.c. My correct question was the following:

| Does pglogical require the core to reject connections
| to a server with a non-identical encoding? Or does it check encodings
| by itself and disconnect by itself?

> > If time permits we could also allow destination encoding to be UTF-8
> > (including codepage 65001) with any source encoding. This requires
> > encoding conversion to be performed, of course.
>
> Does this mean that BDR might work in a heterogeneous encoding
> environment?
>
> No. Here the "we" was meant to be PG core for V10 or later.
>
> But anyway, some encodings (like SJIS) have
> characters with the same destination in their mapping, so BDR
> doesn't seem to work with such conversions. So each encoding
> should perhaps have a property indicating its usability in a BDR
> environment, but...
>
> PG doesn't allow SJIS as a db encoding. So it doesn't matter here.

Oops. I suppose EUC_JP also has such characters, but I'm not sure now.

> > The downside is that this will impact users who use a common subset of
> > two encodings. This is most common for Windows-1252 <-> ISO-8859-15
> > (or -1 if you're old-school) but also arises anywhere the common 7 bit
> > subset is used. Until we can define an encoding exception policy
> > though, I think we should defer supporting those and make them a
> > "later" problem.
>
> If the conversion is rejected for now, we should check the
> encoding identity instead.

OK, I'll go in this direction for the next patch.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
On 01/02/17 04:05, Kyotaro HORIGUCHI wrote: > Hello, > > At Tue, 31 Jan 2017 12:46:18 +0000, "Shinoda, Noriyoshi" <noriyoshi.shinoda@hpe.com> wrote in <AT5PR84MB0084FAE5976D89CDE9733093EE4A0@AT5PR84MB0084.NAMPRD84.PROD.OUTLOOK.COM> >> I tried a committed Logical Replication environment. I found >> that replication between databases of different encodings did >> not convert encodings in character type columns. Is this >> behavior correct? > > The output plugin for subscription is pgoutput and it currently > doesn't consider encoding but would easiliy be added if desired > encoding is informed. > > The easiest (but somewhat seems fragile) way I can guess is, > > - Subscriber connects with client_encoding specification and the > output plugin pgoutput decide whether it accepts the encoding > or not. If the subscriber doesn't, pgoutput send data without > conversion. > Hmm I wonder if we should just make the subscriber send the client_encoding always (based on server encoding of the subscriber). That should solve the issue in combination with your patch no? -- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
At Sat, 4 Feb 2017 21:27:32 +0100, Petr Jelinek <petr.jelinek@2ndquadrant.com> wrote in <bcc7f7e9-f558-b19e-b544-000ba7cf286c@2ndquadrant.com>
> Hmm I wonder if we should just make the subscriber send the
> client_encoding always (based on server encoding of the subscriber).
> That should solve the issue in combination with your patch no?

Yeah, right. I considered that a subscriber might want to set its own value for that, but that is useless.

The attached patch does the following things to simply prevent making a logical replication connection between databases with inconsistent encodings:

- added client_encoding, set to the subscriber's (or standby's) encoding, at the end of the options in libpqrcv_connect.

- CheckLogicalDecodingRequirements refuses the connection for a request with inconsistent encodings:

> ERROR: logical replication requires consistent encodings on both side (publisher = UTF8, subscriber = EUC_JP)

We could check this earlier, even covering physical replication, but I think this is a matter for logical replication.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
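The connection-side half of that idea is small; a rough sketch under assumed surroundings (a hypothetical helper, not the attached patch itself) is the following. The key point is passing the subscriber's own database encoding, via GetDatabaseEncodingName(), as client_encoding:

#include "postgres.h"

#include "libpq-fe.h"
#include "mb/pg_wchar.h"

/*
 * Sketch only (assumed shape, not the attached patch): connect to the
 * publisher for logical replication, passing the subscriber's own database
 * encoding as client_encoding so the publisher converts outgoing text.
 */
static PGconn *
connect_to_publisher(const char *conninfo, const char *appname)
{
	const char *keys[5];
	const char *vals[5];
	int			i = 0;

	keys[i] = "dbname";					/* expanded connection string */
	vals[i] = conninfo;
	i++;
	keys[i] = "replication";
	vals[i] = "database";				/* logical replication mode */
	i++;
	keys[i] = "fallback_application_name";
	vals[i] = appname;
	i++;
	keys[i] = "client_encoding";
	vals[i] = GetDatabaseEncodingName();	/* e.g. "EUC_JP" on the subscriber */
	i++;
	keys[i] = NULL;
	vals[i] = NULL;

	return PQconnectdbParams(keys, vals, /* expand_dbname */ 1);
}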
On 14/02/17 03:23, Kyotaro HORIGUCHI wrote: > At Sat, 4 Feb 2017 21:27:32 +0100, Petr Jelinek <petr.jelinek@2ndquadrant.com> wrote in <bcc7f7e9-f558-b19e-b544-000ba7cf286c@2ndquadrant.com> >> Hmm I wonder if we should just make the subscriber send the >> client_encoding always (based on server encoding of the subscriber). >> That should solve the issue in combination with your patch no? > > Yeah, right. I considered that a subscriber might want to set its > own value for that but that is useless. > > The attached patch does the following things to just prevent > making a logical replication connection between databases with > inconsistent encodings. > > - added client_encoding with subscriber(or standby)'s encoding at > the last of options in libpqrcv_connect. > > - CheckLogicalDecodingRequirements refuses connection for a > request with inconsistent encodings. > >> ERROR: logical replication requires consistent encodings on both side (publisher = UTF8, subscriber = EUC_JP) > I am not quite convinced that this should be handled by logical decoding itself. It's quite possible to have output plugins that will handle this correctly for their use-cases (by doing similar conversion you did in the original patch) so they should not be prevented doing so. So it's probably better to check this in the plugin. I do like the idea of just using client_encoding in libpqrcv_connect though. -- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On 2/15/17 17:55, Petr Jelinek wrote:
> I am not quite convinced that this should be handled by logical decoding
> itself. It's quite possible to have output plugins that will handle this
> correctly for their use-cases (by doing similar conversion you did in
> the original patch) so they should not be prevented doing so.
> So it's probably better to check this in the plugin.
>
> I do like the idea of just using client_encoding in libpqrcv_connect though.

Well, it is sort of a libpq connection, and a proper libpq client should set the client encoding, and a proper libpq server should do encoding conversion accordingly. If we just play along with this, it all works correctly.

Other output plugins are free to ignore the encoding settings (just like libpq can send binary data in some cases).

The attached patch puts it all together.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
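The mechanism referred to above is the ordinary pqformat conversion path: anything sent through the standard send routines is converted from the server encoding to the negotiated client_encoding. Simplified, pq_sendstring() behaves roughly like the sketch below (a paraphrase for illustration, not the literal source):

#include "postgres.h"

#include "lib/stringinfo.h"
#include "mb/pg_wchar.h"

/*
 * Simplified paraphrase of what pq_sendstring() does: convert the text to
 * the client encoding if it differs from the server encoding, then append
 * the (possibly re-sized) null-terminated result to the output buffer.
 */
static void
send_text_converted(StringInfo buf, const char *str)
{
	int			slen = strlen(str);
	char	   *p = pg_server_to_client(str, slen);	/* no-op if encodings match */

	if (p != str)
	{
		slen = strlen(p);						/* length may have changed */
		appendBinaryStringInfo(buf, p, slen + 1);	/* include the '\0' */
		pfree(p);
	}
	else
		appendBinaryStringInfo(buf, str, slen + 1);
}

So once the walreceiver connection sets client_encoding, any output plugin that sticks to these routines gets conversion "for free".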
On 2/17/17 10:14, Peter Eisentraut wrote: > Well, it is sort of a libpq connection, and a proper libpq client should > set the client encoding, and a proper libpq server should do encoding > conversion accordingly. If we just play along with this, it all works > correctly. > > Other output plugins are free to ignore the encoding settings (just like > libpq can send binary data in some cases). > > The attached patch puts it all together. committed -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]
>Sent: Friday, February 24, 2017 1:32 AM
>To: Petr Jelinek <petr.jelinek@2ndquadrant.com>; Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
>Cc: craig@2ndquadrant.com; Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>; pgsql-hackers@postgresql.org
>Subject: Re: [HACKERS] Logical Replication and Character encoding
>
>On 2/17/17 10:14, Peter Eisentraut wrote:
>> Well, it is sort of a libpq connection, and a proper libpq client
>> should set the client encoding, and a proper libpq server should do
>> encoding conversion accordingly. If we just play along with this, it
>> all works correctly.
>>
>> Other output plugins are free to ignore the encoding settings (just
>> like libpq can send binary data in some cases).
>>
>> The attached patch puts it all together.
>
>committed

Hi,

Thank you very much for making a new patch. I tried the newly committed version. In the case of a PUBLICATION (EUC_JP) and SUBSCRIPTION (UTF-8) environment, it worked as expected. Great!

However, in the case of a PUBLICATION (UTF-8) and SUBSCRIPTION (EUC_JP) environment, the following error was output and the process went down.

- PUBLICATION (UTF-8)

postgres=> INSERT INTO encode1 VALUES (1, 'ascii') ;
INSERT 0 1
postgres=> INSERT INTO encode1 VALUES (2, '漢') ;  -- expect UTF-8 character 0xE6BCA2 to be converted to EUC_JP 0xB4C1
INSERT 0 1

- SUBSCRIPTION (EUC_JP)

postgres=> SELECT * FROM encode1;
 c1 |  c2
----+-------
  1 | ascii
(1 row)

$ tail data.euc/pg_log/postgresql.log
LOG:  starting logical replication worker for subscription "sub1"
LOG:  logical replication apply for subscription "sub1" has started
ERROR:  insufficient data left in message
LOG:  worker process: logical replication worker for subscription 16439 (PID 22583) exited with exit code 1

Snapshot: https://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.gz (2017-02-24 00:28:58)
Operating System: Red Hat Enterprise Linux 7 Update 2 (x86-64)

Regards.
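For reference, the mismatch behind "insufficient data left in message" can be worked out by hand for the '漢' row (assuming the conversion itself succeeds):

    length field written by the publisher:   strlen('漢' in UTF-8) + 1 = 3 + 1 = 4
    bytes actually sent by pq_sendstring():  the converted EUC_JP form 0xb4c1 plus '\0' = 3
    result: the apply worker expects 4 bytes of data but only 3 follow,
            hence "insufficient data left in message"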
Sorry for the absence.

At Fri, 24 Feb 2017 02:43:14 +0000, "Shinoda, Noriyoshi" <noriyoshi.shinoda@hpe.com> wrote in <AT5PR84MB00847ABEA48EAE9A97D51157EE520@AT5PR84MB0084.NAMPRD84.PROD.OUTLOOK.COM>
> >From: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]
> >Sent: Friday, February 24, 2017 1:32 AM
> >To: Petr Jelinek <petr.jelinek@2ndquadrant.com>; Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
> >Cc: craig@2ndquadrant.com; Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>; pgsql-hackers@postgresql.org
> >Subject: Re: [HACKERS] Logical Replication and Character encoding
> >
> >On 2/17/17 10:14, Peter Eisentraut wrote:
> >> Well, it is sort of a libpq connection, and a proper libpq client
> >> should set the client encoding, and a proper libpq server should do
> >> encoding conversion accordingly. If we just play along with this, it
> >> all works correctly.
> >>
> >> Other output plugins are free to ignore the encoding settings (just
> >> like libpq can send binary data in some cases).
> >>
> >> The attached patch puts it all together.
> >
> >committed
..
> However, in the case of a PUBLICATION (UTF-8) and SUBSCRIPTION (EUC_JP) environment, the following error was output and the process went down.
...
> LOG:  starting logical replication worker for subscription "sub1"
> LOG:  logical replication apply for subscription "sub1" has started
> ERROR:  insufficient data left in message
> LOG:  worker process: logical replication worker for subscription 16439 (PID 22583) exited with exit code 1

Yeah, the patch sends the converted string but with the length of the original string. Encoding conversion usually changes the length of a string, and I doubt that the reverse case was working correctly. As a result, pq_sendstring is not usable for this case, since we don't have the true length of the string to be sent. So my first patch did the same thing using pg_server_to_client() explicitly.

That said, I think the more important point is that the consensus on the policy for logical replication between databases with different encodings was to refuse the connection, because conversion surely breaks BDR usage for some combinations of encodings.

Anyway, the attached patch fixes the current encoding bug in logical replication.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index bc6e9b5..da81a2d 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -16,6 +16,7 @@
 #include "catalog/pg_namespace.h"
 #include "catalog/pg_type.h"
 #include "libpq/pqformat.h"
+#include "mb/pg_wchar.h"
 #include "replication/logicalproto.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
@@ -442,9 +443,13 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		pq_sendbyte(out, 't');	/* 'text' data follows */
 
 		outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);
+
+		if (pg_get_client_encoding() != GetDatabaseEncoding())
+			outputstr = pg_server_to_client(outputstr, strlen(outputstr));
+
 		len = strlen(outputstr) + 1;	/* null terminated */
 		pq_sendint(out, len, 4);	/* length */
-		pq_sendstring(out, outputstr);	/* data */
+		appendBinaryStringInfo(out, outputstr, len); /* data */
 
 		pfree(outputstr);
On 2/27/17 00:23, Kyotaro HORIGUCHI wrote:
> Yeah, the patch sends the converted string but with the length of the
> original string. Encoding conversion usually changes the length of a
> string, and I doubt that the reverse case was working correctly.

I think we shouldn't send the length value at all. This might have been a leftover from an earlier version of the patch.

See attached patch that removes the length value.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
On 03/03/17 20:37, Peter Eisentraut wrote: > On 2/27/17 00:23, Kyotaro HORIGUCHI wrote: >> Yeah, the patch sends converted string with the length of the >> orignal length. Usually encoding conversion changes the length of >> a string. I doubt that the reverse case was working correctly. > > I think we shouldn't send the length value at all. This might have been > a leftover from an earlier version of the patch. > > See attached patch that removes the length value. > Well the length is necessary to be able to add binary format support in future so it's definitely not an omission. -- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On 3/3/17 14:51, Petr Jelinek wrote: > On 03/03/17 20:37, Peter Eisentraut wrote: >> On 2/27/17 00:23, Kyotaro HORIGUCHI wrote: >>> Yeah, the patch sends converted string with the length of the >>> orignal length. Usually encoding conversion changes the length of >>> a string. I doubt that the reverse case was working correctly. >> >> I think we shouldn't send the length value at all. This might have been >> a leftover from an earlier version of the patch. >> >> See attached patch that removes the length value. >> > > Well the length is necessary to be able to add binary format support in > future so it's definitely not an omission. Right. So it appears the right function to use here is pq_sendcountedtext(). However, this sends the strings without null termination, so we'd have to do extra processing on the receiving end. Or we could add an option to pq_sendcountedtext() to make it do what we want. I'd rather stick to standard pqformat.c functions for handling the protocol. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
At Fri, 3 Mar 2017 21:31:24 -0500, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <88397afa-a8ec-8d8a-1c94-94a4795a3870@2ndquadrant.com>
> On 3/3/17 14:51, Petr Jelinek wrote:
> > On 03/03/17 20:37, Peter Eisentraut wrote:
> >> On 2/27/17 00:23, Kyotaro HORIGUCHI wrote:
> >>> Yeah, the patch sends the converted string but with the length of the
> >>> original string. Encoding conversion usually changes the length of a
> >>> string, and I doubt that the reverse case was working correctly.
> >>
> >> I think we shouldn't send the length value at all. This might have been
> >> a leftover from an earlier version of the patch.
> >>
> >> See attached patch that removes the length value.
> >
> > Well the length is necessary to be able to add binary format support in
> > future so it's definitely not an omission.
>
> Right. So it appears the right function to use here is
> pq_sendcountedtext(). However, this sends the strings without null
> termination, so we'd have to do extra processing on the receiving end.
> Or we could add an option to pq_sendcountedtext() to make it do what we
> want. I'd rather stick to standard pqformat.c functions for handling
> the protocol.

It seems reasonable. I changed the prototype of pq_sendcountedtext as follows:

| extern void pq_sendcountedtext(StringInfo buf, const char *str, int slen,
|                                bool countincludesself, bool binary);

I think 'binary' is fine as the parameter name here. The patch size increased a bit.

By the way, I noticed that postmaster launches the logical replication launcher even if wal_level < logical. Is that right?

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/access/common/printsimple.c b/src/backend/access/common/printsimple.c
index 5fe1c72..62fa70d 100644
--- a/src/backend/access/common/printsimple.c
+++ b/src/backend/access/common/printsimple.c
@@ -96,7 +96,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 				pq_sendcountedtext(&buf, VARDATA_ANY(t),
 								   VARSIZE_ANY_EXHDR(t),
-								   false);
+								   false, false);
 			}
 			break;
@@ -106,7 +106,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 				char		str[12];	/* sign, 10 digits and '\0' */
 				pg_ltoa(num, str);
-				pq_sendcountedtext(&buf, str, strlen(str), false);
+				pq_sendcountedtext(&buf, str, strlen(str), false, false);
 			}
 			break;
@@ -116,7 +116,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 				char		str[23];	/* sign, 21 digits and '\0' */
 				pg_lltoa(num, str);
-				pq_sendcountedtext(&buf, str, strlen(str), false);
+				pq_sendcountedtext(&buf, str, strlen(str), false, false);
 			}
 			break;
diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c
index a2ca2d7..1ebc50f 100644
--- a/src/backend/access/common/printtup.c
+++ b/src/backend/access/common/printtup.c
@@ -353,7 +353,8 @@ printtup(TupleTableSlot *slot, DestReceiver *self)
 			char	   *outputstr;
 			outputstr = OutputFunctionCall(&thisState->finfo, attr);
-			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), false);
+			pq_sendcountedtext(&buf, outputstr, strlen(outputstr),
+							   false, false);
 		}
 		else
 		{
@@ -442,7 +443,7 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self)
 			Assert(thisState->format == 0);
 			outputstr = OutputFunctionCall(&thisState->finfo, attr);
-			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), true);
+			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), true, false);
 		}
 
 	pq_endmessage(&buf);
diff --git a/src/backend/libpq/pqformat.c b/src/backend/libpq/pqformat.c
index c8cf67c..f799130 100644
--- a/src/backend/libpq/pqformat.c
+++ b/src/backend/libpq/pqformat.c
@@ -129,23 +129,24 @@ pq_sendbytes(StringInfo buf, const char *data, int datalen)
  */
 void
 pq_sendcountedtext(StringInfo buf, const char *str, int slen,
-				   bool countincludesself)
+				   bool countincludesself, bool binary)
 {
-	int			extra = countincludesself ? 4 : 0;
+	int			extra_self = countincludesself ? 4 : 0;
+	int			extra_binary = binary ? 1 : 0;
 	char	   *p;
 
 	p = pg_server_to_client(str, slen);
 	if (p != str)				/* actual conversion has been done? */
 	{
 		slen = strlen(p);
-		pq_sendint(buf, slen + extra, 4);
-		appendBinaryStringInfo(buf, p, slen);
+		pq_sendint(buf, slen + extra_self + extra_binary, 4);
+		appendBinaryStringInfo(buf, p, slen + extra_binary);
 		pfree(p);
 	}
 	else
 	{
-		pq_sendint(buf, slen + extra, 4);
-		appendBinaryStringInfo(buf, str, slen);
+		pq_sendint(buf, slen + extra_self + extra_binary, 4);
+		appendBinaryStringInfo(buf, str, slen + extra_binary);
 	}
 }
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index bc6e9b5..c5e4214 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -417,7 +417,6 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		Form_pg_type typclass;
 		Form_pg_attribute att = desc->attrs[i];
 		char	   *outputstr;
-		int			len;
 
 		/* skip dropped columns */
 		if (att->attisdropped)
@@ -442,9 +441,12 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		pq_sendbyte(out, 't');	/* 'text' data follows */
 
 		outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);
-		len = strlen(outputstr) + 1;	/* null terminated */
-		pq_sendint(out, len, 4);	/* length */
-		pq_sendstring(out, outputstr);	/* data */
+
+		/*
+		 * Here, the string is sent as binary data so the length field counts
+		 * the terminating NULL.
+		 */
+		pq_sendcountedtext(out, outputstr, strlen(outputstr), false, true);
 
 		pfree(outputstr);
diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c
index 2efed95..0ca9522 100644
--- a/src/backend/tcop/fastpath.c
+++ b/src/backend/tcop/fastpath.c
@@ -160,7 +160,8 @@ SendFunctionResult(Datum retval, bool isnull, Oid rettype, int16 format)
 			getTypeOutputInfo(rettype, &typoutput, &typisvarlena);
 			outputstr = OidOutputFunctionCall(typoutput, retval);
-			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), false);
+			pq_sendcountedtext(&buf, outputstr, strlen(outputstr),
+							   false, false);
 			pfree(outputstr);
 		}
 		else if (format == 1)
diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h
index 4df87ec..29cba1a 100644
--- a/src/include/libpq/pqformat.h
+++ b/src/include/libpq/pqformat.h
@@ -19,7 +19,7 @@ extern void pq_beginmessage(StringInfo buf, char msgtype);
 extern void pq_sendbyte(StringInfo buf, int byt);
 extern void pq_sendbytes(StringInfo buf, const char *data, int datalen);
 extern void pq_sendcountedtext(StringInfo buf, const char *str, int slen,
-				   bool countincludesself);
+				   bool countincludesself, bool binary);
 extern void pq_sendtext(StringInfo buf, const char *str, int slen);
 extern void pq_sendstring(StringInfo buf, const char *str);
 extern void pq_send_ascii_string(StringInfo buf, const char *str);
On 06/03/17 11:06, Kyotaro HORIGUCHI wrote: > At Fri, 3 Mar 2017 21:31:24 -0500, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <88397afa-a8ec-8d8a-1c94-94a4795a3870@2ndquadrant.com> >> On 3/3/17 14:51, Petr Jelinek wrote: >>> On 03/03/17 20:37, Peter Eisentraut wrote: >>>> On 2/27/17 00:23, Kyotaro HORIGUCHI wrote: >>>>> Yeah, the patch sends converted string with the length of the >>>>> orignal length. Usually encoding conversion changes the length of >>>>> a string. I doubt that the reverse case was working correctly. >>>> >>>> I think we shouldn't send the length value at all. This might have been >>>> a leftover from an earlier version of the patch. >>>> >>>> See attached patch that removes the length value. >>>> >>> >>> Well the length is necessary to be able to add binary format support in >>> future so it's definitely not an omission. >> >> Right. So it appears the right function to use here is >> pq_sendcountedtext(). However, this sends the strings without null >> termination, so we'd have to do extra processing on the receiving end. >> Or we could add an option to pq_sendcountedtext() to make it do what we >> want. I'd rather stick to standard pqformat.c functions for handling >> the protocol. > > It seems reasonable. I changed the prototype of > pg_sendcountedtext as the following, > > | extern void pq_sendcountedtext(StringInfo buf, const char *str, int slen, > | bool countincludesself, bool binary); > > I think 'binary' seems fine for the parameter here. The patch > size increased a bit. > Hmm I find it bit weird that binary true means NULL terminated while false means not NULL terminated, I would think that the behaviour would be exactly opposite given that text strings are the ones where NULL termination matters? > By the way, I noticed that postmaster launches logical > replication launcher even if wal_level < logical. Is it right? Yes, that came up couple of times in various threads. The logical replication launcher is needed only to start subscriptions and subscriptions don't need wal_level to be logical, only publication needs that. -- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
At Mon, 6 Mar 2017 11:13:48 +0100, Petr Jelinek <petr.jelinek@2ndquadrant.com> wrote in <a7c0cc59-9c21-238c-b6a7-d6eb519ad9ee@2ndquadrant.com>
> >>> Well the length is necessary to be able to add binary format support in
> >>> future so it's definitely not an omission.
> >>
> >> Right. So it appears the right function to use here is
> >> pq_sendcountedtext(). However, this sends the strings without null
> >> termination, so we'd have to do extra processing on the receiving end.
> >> Or we could add an option to pq_sendcountedtext() to make it do what we
> >> want. I'd rather stick to standard pqformat.c functions for handling
> >> the protocol.
> >
> > It seems reasonable. I changed the prototype of pq_sendcountedtext as
> > follows:
> >
> > | extern void pq_sendcountedtext(StringInfo buf, const char *str, int slen,
> > |                                bool countincludesself, bool binary);
> >
> > I think 'binary' is fine as the parameter name here. The patch size
> > increased a bit.
>
> Hmm I find it a bit weird that binary true means NULL terminated while
> false means not NULL terminated; I would think that the behaviour would
> be exactly opposite, given that text strings are the ones where NULL
> termination matters?

You're right. I renamed it to a more straightforward name. In addition to that, it contained a stupid bug that sent a garbage byte when a non-null-terminated string was not converted. The comment is also edited to reflect the new behavior.

> > By the way, I noticed that postmaster launches the logical
> > replication launcher even if wal_level < logical. Is that right?
>
> Yes, that came up couple of times in various threads. The logical
> replication launcher is needed only to start subscriptions and
> subscriptions don't need wal_level to be logical, only publication needs
> that.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/access/common/printsimple.c b/src/backend/access/common/printsimple.c
index 5fe1c72..62fa70d 100644
--- a/src/backend/access/common/printsimple.c
+++ b/src/backend/access/common/printsimple.c
@@ -96,7 +96,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 				pq_sendcountedtext(&buf, VARDATA_ANY(t),
 								   VARSIZE_ANY_EXHDR(t),
-								   false);
+								   false, false);
 			}
 			break;
@@ -106,7 +106,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 				char		str[12];	/* sign, 10 digits and '\0' */
 				pg_ltoa(num, str);
-				pq_sendcountedtext(&buf, str, strlen(str), false);
+				pq_sendcountedtext(&buf, str, strlen(str), false, false);
 			}
 			break;
@@ -116,7 +116,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 				char		str[23];	/* sign, 21 digits and '\0' */
 				pg_lltoa(num, str);
-				pq_sendcountedtext(&buf, str, strlen(str), false);
+				pq_sendcountedtext(&buf, str, strlen(str), false, false);
 			}
 			break;
diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c
index a2ca2d7..1ebc50f 100644
--- a/src/backend/access/common/printtup.c
+++ b/src/backend/access/common/printtup.c
@@ -353,7 +353,8 @@ printtup(TupleTableSlot *slot, DestReceiver *self)
 			char	   *outputstr;
 			outputstr = OutputFunctionCall(&thisState->finfo, attr);
-			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), false);
+			pq_sendcountedtext(&buf, outputstr, strlen(outputstr),
+							   false, false);
 		}
 		else
 		{
@@ -442,7 +443,7 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self)
 			Assert(thisState->format == 0);
 			outputstr = OutputFunctionCall(&thisState->finfo, attr);
-			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), true);
+			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), true, false);
 		}
 
 	pq_endmessage(&buf);
diff --git a/src/backend/libpq/pqformat.c b/src/backend/libpq/pqformat.c
index c8cf67c..a83c73e 100644
--- a/src/backend/libpq/pqformat.c
+++ b/src/backend/libpq/pqformat.c
@@ -123,30 +123,39 @@ pq_sendbytes(StringInfo buf, const char *data, int datalen)
  * The data sent to the frontend by this routine is a 4-byte count field
  * followed by the string.  The count includes itself or not, as per the
  * countincludesself flag (pre-3.0 protocol requires it to include itself).
- * The passed text string need not be null-terminated, and the data sent
- * to the frontend isn't either.
+ * The passed text string need not be null-terminated but should not contain
+ * a null byte.  sendterminator instructs it to send a null terminator.
  * --------------------------------
  */
 void
 pq_sendcountedtext(StringInfo buf, const char *str, int slen,
-				   bool countincludesself)
+				   bool countincludesself, bool sendterminator)
 {
-	int			extra = countincludesself ? 4 : 0;
+	int			extra_self = countincludesself ? 4 : 0;
+	int			extra_term = sendterminator ? 1 : 0;
 	char	   *p;
 
 	p = pg_server_to_client(str, slen);
 	if (p != str)				/* actual conversion has been done? */
 	{
 		slen = strlen(p);
-		pq_sendint(buf, slen + extra, 4);
+		pq_sendint(buf, slen + extra_self + extra_term, 4);
 		appendBinaryStringInfo(buf, p, slen);
 		pfree(p);
 	}
 	else
 	{
-		pq_sendint(buf, slen + extra, 4);
+		pq_sendint(buf, slen + extra_self + extra_term, 4);
 		appendBinaryStringInfo(buf, str, slen);
+	}
+
+	/*
+	 * str may not be null-terminated.  Send the terminator separately from
+	 * the given string.
+	 */
+	if (sendterminator)
+		appendBinaryStringInfo(buf, "", 1);
 }
 
 /* --------------------------------
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index bc6e9b5..c5e4214 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -417,7 +417,6 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		Form_pg_type typclass;
 		Form_pg_attribute att = desc->attrs[i];
 		char	   *outputstr;
-		int			len;
 
 		/* skip dropped columns */
 		if (att->attisdropped)
@@ -442,9 +441,12 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple)
 		pq_sendbyte(out, 't');	/* 'text' data follows */
 
 		outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);
-		len = strlen(outputstr) + 1;	/* null terminated */
-		pq_sendint(out, len, 4);	/* length */
-		pq_sendstring(out, outputstr);	/* data */
+
+		/*
+		 * Here, the string is sent as binary data so the length field counts
+		 * the terminating NULL.
+		 */
+		pq_sendcountedtext(out, outputstr, strlen(outputstr), false, true);
 
 		pfree(outputstr);
diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c
index 2efed95..0ca9522 100644
--- a/src/backend/tcop/fastpath.c
+++ b/src/backend/tcop/fastpath.c
@@ -160,7 +160,8 @@ SendFunctionResult(Datum retval, bool isnull, Oid rettype, int16 format)
 			getTypeOutputInfo(rettype, &typoutput, &typisvarlena);
 			outputstr = OidOutputFunctionCall(typoutput, retval);
-			pq_sendcountedtext(&buf, outputstr, strlen(outputstr), false);
+			pq_sendcountedtext(&buf, outputstr, strlen(outputstr),
+							   false, false);
 			pfree(outputstr);
 		}
 		else if (format == 1)
diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h
index 4df87ec..e724b04 100644
--- a/src/include/libpq/pqformat.h
+++ b/src/include/libpq/pqformat.h
@@ -19,7 +19,7 @@ extern void pq_beginmessage(StringInfo buf, char msgtype);
 extern void pq_sendbyte(StringInfo buf, int byt);
 extern void pq_sendbytes(StringInfo buf, const char *data, int datalen);
 extern void pq_sendcountedtext(StringInfo buf, const char *str, int slen,
-				   bool countincludesself);
+				   bool countincludesself, bool sendterminator);
 extern void pq_sendtext(StringInfo buf, const char *str, int slen);
 extern void pq_sendstring(StringInfo buf, const char *str);
 extern void pq_send_ascii_string(StringInfo buf, const char *str);
Mmm. I shot the previous mail halfway. At Mon, 6 Mar 2017 11:13:48 +0100, Petr Jelinek <petr.jelinek@2ndquadrant.com> wrote in <a7c0cc59-9c21-238c-b6a7-d6eb519ad9ee@2ndquadrant.com> > > By the way, I noticed that postmaster launches logical > > replication launcher even if wal_level < logical. Is it right? > > Yes, that came up couple of times in various threads. The logical > replication launcher is needed only to start subscriptions and > subscriptions don't need wal_level to be logical, only publication needs > that. I understand, thanks. Anyway the message seems to have hidden from startup message:) -- Kyotaro Horiguchi NTT Open Source Software Center
After further thinking, I prefer the alternative approach of using pq_sendcountedtext() as is and adding the trailing zero byte on the receiving side. This is a more localized change, and it keeps the logical replication protocol consistent with the main FE/BE protocol. (Also, we don't need to send a useless byte around.)

Patch attached, and also a test case.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
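Roughly, the receiving-side counterpart of this approach looks like the following sketch (a hypothetical helper loosely modelled on the tuple-reading code in proto.c; the attached patch itself is not reproduced here):

#include "postgres.h"

#include "libpq/pqformat.h"

/*
 * Sketch only: the publisher now sends text values with pq_sendcountedtext()
 * and no trailing '\0', so the subscriber reads the 4-byte length, copies
 * that many bytes, and terminates the string locally.
 */
static char *
read_counted_text(StringInfo in)
{
	int			len = pq_getmsgint(in, 4);	/* value length */
	char	   *value = palloc(len + 1);

	pq_copymsgbytes(in, value, len);		/* data, no terminator on the wire */
	value[len] = '\0';						/* terminate locally */

	return value;
}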
At Wed, 5 Apr 2017 11:33:51 -0400, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <5401fef6-c0c0-7e8a-d8b1-169e30cbd854@2ndquadrant.com>
> After further thinking, I prefer the alternative approach of using
> pq_sendcountedtext() as is and adding the trailing zero byte on the
> receiving side. This is a more localized change, and it keeps the logical
> replication protocol consistent with the main FE/BE protocol. (Also, we
> don't need to send a useless byte around.)

I'm not sure about the significance of the trailing zero in the logical replication protocol. Anyway, the patch works.

> Patch attached, and also a test case.

The problem was revealed when a string is shortened by encoding conversion, and the test covers that situation.

- The patches apply cleanly on master.
- The patch works for the UTF-8 => EUC_JP case.
- The test seems proper.

By the way, an untranslatable character in the publisher's table stops the walsender with the following error:

> ERROR:  character with byte sequence 0xe6 0xbc 0xa2 in encoding "UTF8" has no equivalent in encoding "LATIN1"
> STATEMENT:  COPY public.t TO STDOUT
> LOG:  could not send data to client: Broken pipe
> FATAL:  connection to client lost

The walreceiver stops on the opposite side with the following complaint:

> ERROR:  could not receive data from WAL stream: ERROR:  character with byte sequence 0xe6 0xbc 0xa2 in encoding "UTF8" has no equivalent in encoding "LATIN1"
> CONTEXT:  COPY t, line 1: ""
> LOG:  worker process: logical replication worker for subscription 16391 sync 16384 (PID 26915) exited with exit code 1

After this, the walreceiver repeats reconnecting to the master with no wait. Maybe the walreceiver had better refrain from reconnecting after certain kinds of failure, but that is not an urgent issue.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
On 4/5/17 21:32, Kyotaro HORIGUCHI wrote: > At Wed, 5 Apr 2017 11:33:51 -0400, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <5401fef6-c0c0-7e8a-d8b1-169e30cbd854@2ndquadrant.com> >> After further thinking, I prefer the alternative approach of using >> pq_sendcountedtext() as is and sticking the trailing zero byte on on the >> receiving side. This is a more localized change, and keeps the logical >> replication protocol consistent with the main FE/BE protocol. (Also, we >> don't need to send a useless byte around.) > > I'm not sure about the significance of the trailing zero in the > the logical replication protocol. It doesn't matter. It could be "nice" to have it because then the receiving side doesn't need to add it if it wants to assemble a C string. But for the reasons above, it doesn't seem worth doing that. > walreceiver stops on the opposite side with the following > complaint. > >> ERROR: could not receive data from WAL stream: ERROR: character with byte sequence 0xe6 0xbc 0xa2 in encoding "UTF8"has no equivalent in encoding "LATIN1" >> CONTEXT: COPY t, line 1: "" >> LOG: worker process: logical replication worker for subscription 16391 sync 16384 (PID 26915) exited with exit code 1 > > After this, walreceiver repeats reconnecting to master with no > wait. Maybe walreceiver had better refrain from reconnection > after certain kinds of faiure but it is not an urgent issue. This is controlled by wal_retrieve_retry_interval. I see what you are saying about treating certain errors as potentially permanent, but I think a lot more work would need to be done to be able to manage that. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 4/6/17 11:47, Peter Eisentraut wrote: > On 4/5/17 21:32, Kyotaro HORIGUCHI wrote: >> At Wed, 5 Apr 2017 11:33:51 -0400, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <5401fef6-c0c0-7e8a-d8b1-169e30cbd854@2ndquadrant.com> >>> After further thinking, I prefer the alternative approach of using >>> pq_sendcountedtext() as is and sticking the trailing zero byte on on the >>> receiving side. This is a more localized change, and keeps the logical >>> replication protocol consistent with the main FE/BE protocol. (Also, we >>> don't need to send a useless byte around.) >> >> I'm not sure about the significance of the trailing zero in the >> the logical replication protocol. > > It doesn't matter. It could be "nice" to have it because then the > receiving side doesn't need to add it if it wants to assemble a C > string. But for the reasons above, it doesn't seem worth doing that. committed -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
At Thu, 6 Apr 2017 14:43:56 -0400, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <330a093a-d155-c866-cbf2-8f36fdf51e95@2ndquadrant.com> > On 4/6/17 11:47, Peter Eisentraut wrote: > > On 4/5/17 21:32, Kyotaro HORIGUCHI wrote: > >> At Wed, 5 Apr 2017 11:33:51 -0400, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <5401fef6-c0c0-7e8a-d8b1-169e30cbd854@2ndquadrant.com> > >>> After further thinking, I prefer the alternative approach of using > >>> pq_sendcountedtext() as is and sticking the trailing zero byte on on the > >>> receiving side. This is a more localized change, and keeps the logical > >>> replication protocol consistent with the main FE/BE protocol. (Also, we > >>> don't need to send a useless byte around.) > >> > >> I'm not sure about the significance of the trailing zero in the > >> the logical replication protocol. > > > > It doesn't matter. It could be "nice" to have it because then the > > receiving side doesn't need to add it if it wants to assemble a C > > string. But for the reasons above, it doesn't seem worth doing that. Ok, it's fine for me. > committed Thanks! regards, -- Kyotaro Horiguchi NTT Open Source Software Center