Thread: [JDBC] ArrayIndexOutOfBoundsException from ByteConverter.int8 when resolving generated keys

Hei,


PROBLEM: 
After inserting records into a table via a PreparedStatement with batch inserts, when we try to read the generated key values from the generated keys result set, we sometimes get an ArrayIndexOutOfBoundsException from the JDBC driver code while it is converting bytes into a long. This happens seemingly at random, in roughly 5% of the calls to this method in production.


EXCEPTION:
Sometimes it's this:
java.lang.ArrayIndexOutOfBoundsException: 5
        at org.postgresql.util.ByteConverter.int8(ByteConverter.java:27) ~[postgresql-42.0.0.jar:42.0.0]
        at org.postgresql.jdbc.PgResultSet.getLong(PgResultSet.java:2101) ~[postgresql-42.0.0.jar:42.0.0]
        at org.postgresql.jdbc.PgResultSet.getLong(PgResultSet.java:2505) ~[postgresql-42.0.0.jar:42.0.0]
        ... our code ...

Other times it's just this (without a stack trace!?):
java.lang.ArrayIndexOutOfBoundsException: null

Before upgrading to 42.0.0 we used 9.4.1212, and then the line numbers were a little different:
java.lang.ArrayIndexOutOfBoundsException: 5
        at org.postgresql.util.ByteConverter.int8(ByteConverter.java:27) ~[postgresql-9.4.1212.jar:9.4.1212]
        at org.postgresql.jdbc.PgResultSet.getLong(PgResultSet.java:2060) ~[postgresql-9.4.1212.jar:9.4.1212]
        at org.postgresql.jdbc.PgResultSet.getLong(PgResultSet.java:2486) ~[postgresql-9.4.1212.jar:9.4.1212]
        ... our code ...


CODE:
public void insert(List<Record> records) {
    String sql = "INSERT INTO portal.record (company_id, status, type, oop_expense, created_by) VALUES (?, ?, ?, ?, ?)";

    Connection connection = DataSourceUtils.getConnection(getJdbcTemplate().getDataSource());
    try (PreparedStatement ps = connection.prepareStatement(sql, new String[] {"id"})) {
        for (Record record : records) {
            ps.setLong(1, record.getCompanyId());
            ps.setString(2, record.getStatus());
            ps.setString(3, record.getType());
            ps.setBoolean(4, record.getOopExpense());
            ps.setString(5, "user123");

            ps.addBatch();
        }

        int[] ints = ps.executeBatch();
        ResultSet generatedKeys = ps.getGeneratedKeys();
        for (int i = 0; i < ints.length; i++) {
            if (ints[i] == 1) {
                if (!generatedKeys.next()) {
                    throw new RuntimeException("Returned keys count from INSERT does not match record count!");
                }
                Record record = records.get(i);
                record.setId(generatedKeys.getLong("id"));   // THIS IS WHERE THE EXCEPTION IS THROWN SOMETIMES
            }
        }
        generatedKeys.close();
    } catch (SQLException e) {
        throw new RuntimeException("Failed to insert records!", e);
    } finally {
        DataSourceUtils.releaseConnection(connection, getJdbcTemplate().getDataSource());
    }
}


CREATE TABLE portal.record
(
  id bigserial NOT NULL,
  company_id bigint NOT NULL,
  status character varying(100) NOT NULL,
  type character varying(100),
  created_by character varying(100) NOT NULL,
  created_date timestamp without time zone NOT NULL DEFAULT now(),
  updated_by character varying(100),
  updated_date timestamp without time zone,
  oop_expense boolean,
  CONSTRAINT pk_record_id PRIMARY KEY (id),
  CONSTRAINT fk_record_company FOREIGN KEY (company_id)
      REFERENCES portal.company (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);

The id column's sequence is currently at 21540.


BACKGROUND:
Production system
Java 8, Spring Framework, PostgreSQL JDBC driver 42.0.0
PostgreSQL server 9.6.1
We are using similar batch inserts in a few other methods but haven't received exceptions from those.


OTHER:
if (oid == Oid.INT8) {
    return ByteConverter.int8(this_row[col], 0);
}
seems unnecessary, because the readLongValue call on the next line would make the same call anyway?


Best regards,
Erko

Hi Erko,

Do you have any idea how many records are inserted in a batch? If so, is there any correlation?

As to your last point: thanks, that code can be changed.


Thank you for getting back so quickly,

1) I know that this problem has occurred multiple times when inserting only 1 record. The end users *think* they have seen it when handling multiple records too, but unfortunately I can't be sure, because the transactions are rolled back and we don't log the "potentially inserted data". I will improve logging to track the number of rows in each batch, and the next time the exception happens I can give more information.

2) There was a similar problem discussed in 2012, which ended with an explanation: https://www.postgresql.org/message-id/alpine.BSO.2.00.1211081338240.29600%40leary.csoft.net
Was this ever fixed?

Br,
Erko



Erko,

On 10 May 2017 at 07:12, Erko Hansar <erko.hansar@gmail.com> wrote:
> 1) I know that this problem has occurred multiple times when inserting only 1 record. The end users *think* they have seen it when handling multiple records too, but unfortunately I can't be sure, because the transactions are rolled back and we don't log the "potentially inserted data". I will improve logging to track the number of rows in each batch, and the next time the exception happens I can give more information.

Thanks, that would be useful.

> 2) There was a similar problem discussed in 2012, which ended with an explanation: https://www.postgresql.org/message-id/alpine.BSO.2.00.1211081338240.29600%40leary.csoft.net
> Was this ever fixed?

I expect so, but I have to do some digging.



Erko>2) There was a similar problem discussed in 2012, which ended with an explanation: https://www.postgresql.org/message-id/alpine.BSO.2.00.1211081338240.29600%40leary.csoft.net

I though "text-binary" flip might cause issue here. bigserial is an alias for int8, and int8 binary representation is 8 bytes always.

Erko, if you file a pull request at https://github.com/pgjdbc/pgjdbc with a test case, it will get executed and we'll know whether the issue is still there and/or which DB versions are affected.

Vladimir

Here's the case:
1) Execute the "insert" via the non-batch API multiple times. This causes pgjdbc to server-prepare the insert and use binary transfer for the generated keys.
Note: so far so good; this is perfectly fine.
2) Execute the same SQL via the batch API. Here things break. Batch insert wants the generated keys in text format (the QUERY_NO_BINARY_TRANSFER flag exists exactly for that case), and it does ask the backend to send the data in text format. However, the query metadata on the pgjdbc side still says "output columns are in binary format", so the generated-keys result set tries to read the columns as binary and fails. A minimal repro sketch follows.
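
A hedged repro sketch of that sequence (my code, not from the report: the URL, credentials, and column values are made up, it assumes portal.company has a row with id 1, and six executions crosses pgjdbc's default prepareThreshold of 5):

import java.sql.*;

public class GeneratedKeysRepro {
    static void bind(PreparedStatement ps) throws SQLException {
        ps.setLong(1, 1L);
        ps.setString(2, "NEW");
        ps.setString(3, "TEST");
        ps.setBoolean(4, false);
        ps.setString(5, "user123");
    }

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/test"; // hypothetical
        String sql = "INSERT INTO portal.record (company_id, status, type, oop_expense, created_by)"
                   + " VALUES (?, ?, ?, ?, ?)";
        try (Connection con = DriverManager.getConnection(url, "test", "test");
             PreparedStatement ps = con.prepareStatement(sql, new String[] {"id"})) {
            // Step 1: non-batch executes; past the threshold the statement is
            // server-prepared and the generated keys switch to binary transfer.
            for (int i = 0; i < 6; i++) {
                bind(ps);
                ps.executeUpdate();
                try (ResultSet rs = ps.getGeneratedKeys()) {
                    rs.next();
                    rs.getLong("id"); // fine: wire format and metadata agree
                }
            }
            // Step 2: the same statement via the batch API, which requests
            // text-format keys while the cached metadata still says binary.
            bind(ps);
            ps.addBatch();
            ps.executeBatch();
            try (ResultSet rs = ps.getGeneratedKeys()) {
                rs.next();
                rs.getLong("id"); // may throw ArrayIndexOutOfBoundsException
            }
        }
    }
}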

So possible workarounds are:
WA1) Add a special marker (i.e. a comment) to the batch SQL. That makes sure the batch insert uses its own statement.
WA2) Mark INT8 as text-only, via something like the binaryTransferDisable=INT8 connection property. Sketches of both follow.
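
Hedged sketches of both workarounds (class name, URL, and marker text are illustrative only):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class WorkaroundSketch {
    // WA1: the comment makes the batch SQL textually different from the
    // non-batch SQL, so the batch path gets its own server-side statement.
    static final String BATCH_SQL =
            "/* batch */ INSERT INTO portal.record (company_id, status, type, oop_expense, created_by)"
            + " VALUES (?, ?, ?, ?, ?)";

    // WA2: disable binary transfer for int8, so bigserial keys always come back
    // in text format. The property accepts type OIDs (int8 is OID 20) or names,
    // depending on driver version.
    static Connection openTextOnlyInt8() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test?binaryTransferDisable=INT8", // hypothetical URL
                "test", "test");
    }
}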

Vladimir


On 10 May 2017 at 16:35, Vladimir Sitnikov <sitnikov.vladimir@gmail.com> wrote:
> Here's the case:
> 1) Execute the "insert" via the non-batch API multiple times. This causes pgjdbc to server-prepare the insert and use binary transfer for the generated keys.
> 2) Execute the same SQL via the batch API. Here things break. Batch insert wants the generated keys in text format (the QUERY_NO_BINARY_TRANSFER flag exists exactly for that case), and it does ask the backend to send the data in text format. However, the query metadata on the pgjdbc side still says "output columns are in binary format", so the generated-keys result set tries to read the columns as binary and fails.

I would have thought this would have been solved by now :(

> WA1) Add a special marker (i.e. a comment) to the batch SQL. That makes sure the batch insert uses its own statement.

This is ugly.

> WA2) Mark INT8 as text-only, via something like the binaryTransferDisable=INT8 connection property.

This is probably preferable.

Today's stats:
08:30:36 Inserting 1 records >> EXCEPTION
08:30:47 Inserting 1 records >> EXCEPTION
09:13:59 Inserting 7 records >> OK
09:14:17 Inserting 18 records >> OK
09:14:27 Inserting 2 records >> OK
09:14:52 Inserting 4 records >> OK
09:15:02 Inserting 4 records >> OK
09:15:13 Inserting 2 records >> OK
09:15:38 Inserting 8 records >> OK
09:26:05 Inserting 1 records >> EXCEPTION
09:42:18 Inserting 5 records >> OK
09:42:28 Inserting 4 records >> OK
10:43:08 Inserting 2 records >> OK
10:43:56 Inserting 13 records >> OK
15:30:14 Inserting 1 records >> OK
15:31:27 Inserting 1 records >> OK
15:31:42 Inserting 1 records >> OK
15:31:44 Inserting 1 records >> OK
15:32:07 Inserting 2 records >> OK

So there is no clear pattern.

WA3) rewrite my code to stop using batch inserts and do single inserts instead, because it doesn't really matter if the code takes 30 ms instead of 15 ms to run
WA4) before the batch insert, select X keys from the sequence and then insert with those specific keys, instead of reading generated keys in the batch methods (rough sketch below)
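
A rough sketch of WA4 (my hypothetical helper; assumes the default portal.record_id_seq sequence name that the bigserial column creates):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PrefetchIdsSketch {
    // Fetch the next n ids from the id column's backing sequence in one round
    // trip; the batch INSERT then lists the id column explicitly, e.g.
    // INSERT INTO portal.record (id, company_id, status, type, oop_expense, created_by)
    // VALUES (?, ?, ?, ?, ?, ?)
    // so getGeneratedKeys() is no longer needed.
    static List<Long> nextIds(Connection con, int n) throws SQLException {
        List<Long> ids = new ArrayList<>(n);
        String sql = "SELECT nextval('portal.record_id_seq') FROM generate_series(1, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, n);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong(1));
                }
            }
        }
        return ids;
    }
}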

Anyway, is this a hard thing to fix on the JDBC side?

BR,
Erko
