Re: FileFallocate misbehaving on XFS - Mailing list pgsql-hackers
From: Michael Harris
Subject: Re: FileFallocate misbehaving on XFS
Msg-id: CADofcAX8eRgGHgRkC8RHmr1fAmaCEXg5xKAgfPFkRi9Nn-L4Lg@mail.gmail.com
In response to: Re: FileFallocate misbehaving on XFS (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers

Hi Andres

On Wed, 11 Dec 2024 at 03:09, Andres Freund <andres@anarazel.de> wrote:
> I think it's implied, but I just want to be sure: This was one of the affected
> systems?

Yes, correct.

> Any chance to get df output? I'm mainly curious about the number of used
> inodes.

Sorry, I could swear I had included that already! Here it is:

# df /var/opt
Filesystem               1K-blocks       Used  Available Use% Mounted on
/dev/mapper/ippvg-ipplv 4197492228 3803866716  393625512  91% /var/opt

# df -i /var/opt
Filesystem                 Inodes   IUsed     IFree IUse% Mounted on
/dev/mapper/ippvg-ipplv 419954240 1568137 418386103    1% /var/opt

> Could you show the mount options that end up being used?
> grep /var/opt /proc/mounts

/dev/mapper/ippvg-ipplv /var/opt xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

These seem to be the defaults.

> I assume you have never set XFS options for the PG directory or files within
> it?

Correct.

> Could you show
>   xfs_io -r -c lsattr -c stat -c statfs /path/to/directory/with/enospc

-p--------------X pg_tblspc/16402/PG_16_202307071/49163/1132925906.4
fd.path = "pg_tblspc/16402/PG_16_202307071/49163/1132925906.4"
fd.flags = non-sync,non-direct,read-only
stat.ino = 4320612794
stat.type = regular file
stat.size = 201211904
stat.blocks = 393000
fsxattr.xflags = 0x80000002 [-p--------------X]
fsxattr.projid = 0
fsxattr.extsize = 0
fsxattr.cowextsize = 0
fsxattr.nextents = 165
fsxattr.naextents = 0
dioattr.mem = 0x200
dioattr.miniosz = 512
dioattr.maxiosz = 2147483136
fd.path = "pg_tblspc/16402/PG_16_202307071/49163/1132925906.4"
statfs.f_bsize = 4096
statfs.f_blocks = 1049373057
statfs.f_bavail = 98406378
statfs.f_files = 419954240
statfs.f_ffree = 418386103
statfs.f_flags = 0x1020
geom.bsize = 4096
geom.agcount = 4
geom.agblocks = 262471424
geom.datablocks = 1049885696
geom.rtblocks = 0
geom.rtextents = 0
geom.rtextsize = 1
geom.sunit = 0
geom.swidth = 0
counts.freedata = 98406378
counts.freertx = 0
counts.freeino = 864183
counts.allocino = 2432320

> I'd try monitoring the per-ag free space over time and see if the the ENOSPC
> issue is correlated with one AG getting full. 'freesp' is probably too
> expensive for that, but it looks like
>   xfs_db -r -c agresv /dev/nvme6n1
> should work?
>
> Actually that output might be interesting to see, even when you don't hit the
> issue.

I will see if I can set that up.

> How many partitions are there for each of the tables? Mainly wondering because
> of the number of inodes being used.

It is configurable and varies from site to site. It could range from 7 up to
maybe 60.

> Are all of the active tables within one database? That could be relevant due
> to per-directory behaviour of free space allocation.

Each pg instance may have one or more application databases. Typically data is
being written into all of them (although sometimes a database will be archived,
with no new data going into it).

You might be onto something, though. The system I got the above output from is
only experiencing this issue in one directory. That might not mean very much:
it only has 2 databases, and one of them looks like it is not receiving
imports. But another system I can access has multiple databases with ongoing
imports, yet all the errors bar one relate to a single directory.

I will collect some data from that system and post it shortly.

Cheers
Mike
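
PS: For the per-AG monitoring, I'm thinking of a simple loop along these lines
(untested sketch; the device path is just our volume, and the interval and log
location are arbitrary):

    # log per-AG reservation / free space every 5 minutes
    while true; do
        date
        xfs_db -r -c agresv /dev/mapper/ippvg-ipplv
        sleep 300
    done >> /var/tmp/agresv.log 2>&1

That should let me line up the timestamps against the ENOSPC errors in the
postgres logs.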