Getting "could not read block" error when creating an index on a function. - Mailing list pgsql-general

From Demitri Muna
Subject Getting "could not read block" error when creating an index on a function.
Date
Msg-id E88B1929-C94D-45C1-B11F-0B48B39D64B7@demitri.com
Responses Re: Getting "could not read block" error when creating an index on a function.  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Hello,

I’m getting stuck on a problem I was hoping I could get some help with. I’m trying to create an index on the results of a function that touches two tables, like this, and I get the following error:

CREATE INDEX my_idx ON mytable (first_time(id));
ERROR:  could not read block 0 in file "base/16386/42868": read only 0 of 8192 bytes
CONTEXT:  SQL function "first_time" during startup
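For context, the function has roughly this shape (a hypothetical reconstruction with invented table and column names, since the real definition isn’t shown here; note that an expression used in CREATE INDEX must be marked IMMUTABLE):

```sql
-- Hypothetical sketch only: an SQL function reading from two tables,
-- returning the earliest timestamp associated with an id.
-- Table and column names are invented for illustration.
CREATE FUNCTION first_time(obj_id integer) RETURNS timestamptz AS $$
    SELECT min(o.observed_at)
    FROM observations o
    JOIN detections d ON d.observation_id = o.id
    WHERE d.object_id = obj_id;
$$ LANGUAGE sql IMMUTABLE;  -- IMMUTABLE is required for use in an index
```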

Every time I run this, the last number in the block path increments by one, e.g.

ERROR:  could not read block 0 in file "base/16386/42869": read only 0 of 8192 bytes
ERROR:  could not read block 0 in file "base/16386/42870": read only 0 of 8192 bytes
ERROR:  could not read block 0 in file "base/16386/42871": read only 0 of 8192 bytes

The database is sitting on two tablespaces (pg_default and 'data2'). When I try to use the second, I get:

CREATE INDEX my_idx ON mytable (first_time(id)) TABLESPACE data2;
ERROR:  could not read block 0 in file "pg_tblspc/17007/PG_13_202007201/16386/42870": read only 0 of 8192 bytes
CONTEXT:  SQL function "first_time" during startup

with the last number similarly incrementing upon repeated attempts.

Relevant details:

* PostgreSQL version 13.1 running on Ubuntu 20.04.1 on an AWS instance using 2 x 8TB EBS storage.
* The database is ~15TB in size.
* I am not worried about data loss; the database can be considered read-only and I have all of the files needed to recreate any table.
* I am hoping not to recreate the whole database from scratch, since doing so and creating the required indices will take more than a week.
* I used these settings while importing the files to improve import speed, since I was not worried about data loss (all turned back on after the import):

autovacuum = off
synchronous_commit = off
fsync = off
full_page_writes = off

* I will not do the above again. :)
* The postmaster server crashed at least twice during the process due to running out of disk space.
* Creating any number of new indices on bare columns is no problem.
* I DROPped and recreated the functions with no change.
* This statement returns no results (but maybe am looking at the wrong thing):

select n.nspname AS schema, c.relname AS relation
from pg_class c
inner join pg_namespace n on (c.relnamespace = n.oid)
where c.relfilenode = '16386';
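Another way I tried to map the failing file back to a relation (a sketch; the first argument 0 denotes the database's default tablespace, and the filenode should be taken from the latest error message, since it increments on each attempt):

```sql
-- pg_filenode_relation(tablespace, filenode) returns the relation
-- (as regclass) backed by that file, or NULL if none exists.
-- 0 = the database's default tablespace; 42868 is the filenode
-- from one of the error messages above.
SELECT pg_filenode_relation(0, 42868);
```

A NULL result here would be consistent with the query above returning no rows: no surviving relation owns that file.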

From reading about this error (missing block files), it suggests I have some database corruption, which is fine since I can easily delete and recreate anything problematic. I’ve deleted the indices related to the function and recreated them, but the same error remains. Accessing the related tables seems OK and I don’t get any errors, but with that much data I can’t guarantee everything is intact.

Any help would be appreciated!

Cheers,
Demitri



