Thread: Doubts after evaluating Xlogdump

Doubts after evaluating Xlogdump

From: Vinay Gupta
Date:
Hi,

Is there a better way to understand the contents of pg_xlog files? I am using xlogdump to analyse my xlog contents. Sometimes, during particular hours, my database server generates almost 3X the normal number of pg_xlog files.

--- Size of 1 hour of xlog files (620 files)
 
[root@db xlogdump1]# du -sh /tmp/pgxlogdump/
9.9G    /tmp/pgxlogdump/
[root@db1002 xlogdump1]# 

--- Parsing Xlog files

[root@db1002 xlogdump1]# /usr/pgsql-9.2/bin/xlogdump  /tmp/pgxlogdump/* > xlog_analysis24feb2014_18.txt
Unable to read continuation page?
Unable to read continuation page?
... (and so on)

--- Final output file size after parsing

[root@db xlogdump1]# du -sh xlog_analysis24feb2014_18.txt
3.6G    xlog_analysis24feb2014_18.txt

I don't think it parsed all the files completely, because the output file size looks too small.

So, to analyse the output file, I use the command below:

cat xlog_analysis24feb2014_18.txt |awk  '{print $8 "," $7}'  | sort | uniq -dc | sort


That gives me output like the following:
 195068 s/d/r:1663/1228184/1363132,clean:
 200548 s/d/r:1663/1228184/1363155,bkpblock[1]:
 203964 s/d/r:1663/112743/5263051,insert:
 205003 s/d/r:1663/112743/5263054,insert_leaf:
 205673 s/d/r:1663/112743/5263053,insert_leaf:
 208101 s/d/r:1663/112743/5263208,insert:
 269840 ,vacuum:
 210556 s/d/r:1663/112743/5263212,insert_leaf:
 250651 s/d/r:1663/1228184/1522128,update:
 250867 s/d/r:1663/1228184/1522132,insert_leaf:
 253874 s/d/r:1663/1228184/1291583,insert_leaf:
 253887 s/d/r:1663/1228184/1291579,update:
 280151 s/d/r:1663/112743/919064,insert_leaf:
 281578 s/d/r:1663/112743/919027,update:
 293516 s/d/r:1663/1228184/1363132,bkpblock[1]:
 351713 s/d/r:1663/1228184/1363648,hot_update:
 361884 s/d/r:1663/1228184/5263067,insert_leaf:
 382961 s/d/r:1663/1228184/1363654,bkpblock[1]:
 392147 s/d/r:1663/1228184/1363652,bkpblock[1]:
 392957 s/d/r:1663/1228184/5263063,insert:
 393086 s/d/r:1663/1228184/1522150,update:
 412499 s/d/r:1663/1228184/5263286,insert_leaf:
 437279 s/d/r:1663/1228184/5263282,insert:
 927606 s/d/r:1663/1228184/1363648,update:
 932525 s/d/r:1663/1228184/1363652,insert_leaf:
 932668 s/d/r:1663/1228184/1363654,insert_leaf:
1078769 s/d/r:1663/1228184/5263108,insert_leaf:
1155642 s/d/r:1663/1228184/5263105,insert:
1181241 s/d/r:1663/1228184/5263107,insert_leaf:

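[Editor's note: a rough alternative aggregation, just a sketch; it assumes the s/d/r triple is in field 8 of the xlogdump output, as in the command above, and that the output file name is unchanged. Summing all record types per relation can make the top contributors easier to spot, since uniq -dc skips lines that occur only once and the trailing sort above is lexical rather than numeric:

# Sketch: total WAL record count per s/d/r triple, largest counts last
awk '{count[$8]++} END {for (k in count) print count[k], k}' xlog_analysis24feb2014_18.txt | sort -n | tail -20
]
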
Is there any better way I can analyse who is contributing the most to my pg_xlog file generation?

Also, how can I link my insert_leaf records to my tables/indexes?

1078769 s/d/r:1663/1228184/5263108,insert_leaf:



Thanks

Re: Doubts after evaluating Xlogdump

From: Jeff Janes
Date:
On Thu, Feb 27, 2014 at 7:39 AM, Vinay Gupta <vdadagupta@gmail.com> wrote:
 
1078769 s/d/r:1663/1228184/5263108,insert_leaf:
1155642 s/d/r:1663/1228184/5263105,insert:
1181241 s/d/r:1663/1228184/5263107,insert_leaf:

Is there any better way I can analyse who is contributing the most to my pg_xlog file generation?

Also, how can I link my insert_leaf records to my tables/indexes?

1078769 s/d/r:1663/1228184/5263108,insert_leaf:

Use the --oid2name flag for xlogdump, or just manually use oid2name.  You have to make sure to specify the correct port, etc., to use that.
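[Editor's note: a sketch of the lookup; the database name, port, and connection details below are placeholders, and it assumes the last number in the s/d/r triple is a relfilenode inside the database whose OID is the middle number (1228184 here).

-- after connecting to the database with OID 1228184:
SELECT relname, relkind FROM pg_class WHERE relfilenode = 5263108;

or, with the contrib oid2name tool, something like:

oid2name -d yourdb -p 5432 -i -f 5263108

Since insert_leaf is a btree record type, those relfilenodes will usually resolve to indexes rather than tables; you can then join back to the underlying table via pg_index if needed.]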

Cheers,

Jeff