Egor Rogov's Blog

Our blog is where our Postgres Pro experts share their knowledge with the community. Blog posts cover a variety of topics, including Postgres internals, extensions and monitoring solutions.

Recent posts

May 13   •   PostgreSQL
In previous articles we discussed query execution stages and statistics. Last time, I started on data access methods, namely Sequential scan. Today we will cover Index Scan. This article requires a basic understanding of the index method interface. If words like "operator class" and "access method properties" don't ring a bell, check out my article on indexes from a while back for a refresher.

Plain Index Scan

Indexes return row version IDs (tuple IDs, or TIDs for short), which can be handled in one of two ways. The first one is Index scan. Most (but not all) index methods have the INDEX SCAN property and support this approach. The operation is represented in the plan with an Index Scan node ...
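For a concrete picture, here is a minimal sketch of what an index scan looks like in a plan. It assumes the demo database used throughout the series (the bookings table and its primary key); the exact output may differ:

    EXPLAIN (COSTS OFF)
    SELECT * FROM bookings WHERE book_ref = '12345X';
    -- Index Scan using bookings_pkey on bookings
    --   Index Cond: (book_ref = '12345X'::bpchar)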
March 31   •   PostgreSQL
In previous articles we discussed how the system plans a query execution and how it collects statistics to select the best plan. The following articles, starting with this one, will focus on what a plan actually is, what it consists of and how it is executed. In this article, I will demonstrate how the planner calculates execution costs. I will also discuss access methods and how they affect these costs, and use the sequential scan method as an illustration. Lastly, I will talk about parallel execution in PostgreSQL, how it works and when to use it. I will use several seemingly complicated math formulas later in the article. You don't have to memorize any of them to get to the bottom of how the planner works; they are merely there to show where I get my numbers from.

Pluggable storage engines

PostgreSQL's approach to storing data on disk will not be optimal for every possible type of load. Thankfully, you have options. Delivering on its promise of extensibility, PostgreSQL 12 and higher supports custom table access methods (storage engines), although it ships only with the stock one, heap: ...
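The excerpt breaks off at the query; the installed table access methods can be listed from the system catalog like this (a minimal sketch; on a stock installation only heap is present):

    SELECT amname, amhandler
    FROM pg_am
    WHERE amtype = 't';
    --  amname |      amhandler
    -- --------+----------------------
    --  heap   | heap_tableam_handler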
March 10   •   PostgreSQL
Despite the ongoing tragic events, we continue the series. In the last article we reviewed the stages of query execution. Before we move on to plan node operations (data access and join methods), let's discuss the bread and butter of the cost optimizer: statistics. As usual, I use the demo database for all my examples. You can download it and follow along. You will see a lot of execution plans here today. We will discuss how the plans work in more detail in later articles. For now, just pay attention to the numbers that you see in the first line of each plan, next to the word rows. These are row count estimates, or cardinality. ...
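As a minimal sketch of where to look, the estimate appears in the rows field of the top plan line (the flights table is from the demo database; the actual numbers will vary):

    EXPLAIN SELECT * FROM flights;
    -- Seq Scan on flights  (cost=0.00..C rows=R width=W)
    -- "rows=R" is the planner's cardinality estimate for this node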
February 18   •   PostgreSQL
Hello! I'm kicking off another article series about the internals of PostgreSQL. This one will focus on query planning and execution mechanics. This series will cover: query execution stages (this article), statistics, sequential and index scans, nested-loop, hash, and merge joins. Many thanks to Alexander Meleshko for the translation of this series into English. This article borrows from our course QPT Query Optimization (available in English soon), but focuses mostly on the internal mechanisms of query execution, leaving the optimization aspect aside. Please also note that this article series is written with PostgreSQL 14 in mind.

Simple query protocol

The fundamental purpose of the PostgreSQL client-server protocol is twofold: it sends SQL queries to the server, and it receives the entire execution result in response. The query received by the server for execution goes through several stages.

Parsing

First, the query text is parsed, so that the server understands exactly what needs to be done. The lexer is responsible for recognizing lexemes in the query string (such as SQL keywords, string and numeric literals, etc.), and the parser makes sure that the resulting set of lexemes is grammatically valid. The parser and lexer are implemented using the standard tools Bison and Flex. The parsed query is represented as an abstract syntax tree. ...
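If you want to see the abstract syntax tree for yourself, PostgreSQL can dump it to the server log; a minimal sketch:

    SET debug_print_parse = on;  -- dump the parse tree of each query to the server log
    SELECT 1;
    -- the server log now contains the query's parse tree in node-dump form
    RESET debug_print_parse;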
July 28, 2021   •   PostgreSQL
To remind you, we've already talked about relation-level locks, row-level locks, locks on other objects (including predicate locks) and interrelationships of different types of locks. The following discussion of locks in RAM finishes this series of articles. We will consider spinlocks, lightweight locks and buffer pins, as well as wait event monitoring tools and sampling. ...
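As a preview of the monitoring tools, the current wait event of every backend is visible in pg_stat_activity; a minimal sketch:

    SELECT pid, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE wait_event IS NOT NULL;
    -- wait_event_type values such as LWLock and BufferPin correspond
    -- to the lock kinds discussed in this article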
July 15, 2021   •   PostgreSQL
We've already discussed some object-level locks (specifically, relation-level locks), as well as row-level locks with their connection to object-level locks, and also explored wait queues, which are not always fair. We have a hodgepodge this time. We'll start with deadlocks (actually, I planned to discuss them last time, but that article was excessively long in itself), then briefly review the remaining object-level locks, and finally discuss predicate locks.
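The classic recipe for a deadlock is two transactions locking the same rows in opposite order; a sketch using a hypothetical accounts table:

    -- Session 1:
    BEGIN;
    UPDATE accounts SET amount = amount - 100 WHERE id = 1;

    -- Session 2:
    BEGIN;
    UPDATE accounts SET amount = amount - 100 WHERE id = 2;
    UPDATE accounts SET amount = amount + 100 WHERE id = 1;  -- blocks, waiting for session 1

    -- Session 1 again:
    UPDATE accounts SET amount = amount + 100 WHERE id = 2;  -- ERROR: deadlock detected
    -- The server aborts one of the transactions so the other can proceed.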
July 1, 2021   •   PostgreSQL
Last time, we discussed object-level locks and in particular relation-level locks. In this article, we will see how row-level locks are organized in PostgreSQL and how they are used together with object-level locks. We will also talk about wait queues and about those who jump the queue.
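Row-level locks are taken implicitly by UPDATE and DELETE, and explicitly by the locking clauses of SELECT; a minimal sketch with a hypothetical accounts table:

    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE;  -- strongest row-level mode
    -- weaker modes: FOR NO KEY UPDATE, FOR SHARE, FOR KEY SHARE;
    -- adding SKIP LOCKED makes the query skip rows locked by others
    COMMIT;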
June 17, 2021   •   PostgreSQL
The previous two series of articles covered isolation and multiversion concurrency control and logging. In this series, we will discuss locks. This series will consist of four articles:
Relation-level locks (this article).
Row-level locks.
Locks on other objects and predicate locks.
Locks in RAM.
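As a preview, relation-level locks can be observed in the pg_locks view; a minimal sketch with a hypothetical accounts table:

    BEGIN;
    LOCK TABLE accounts IN SHARE MODE;
    SELECT locktype, mode, granted
    FROM pg_locks
    WHERE relation = 'accounts'::regclass;
    --  locktype |   mode    | granted
    -- ----------+-----------+---------
    --  relation | ShareLock | t
    COMMIT;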
May 13, 2021   •   PostgreSQL
So, we got acquainted with the structure of the buffer cache and in that context concluded that if all RAM contents are lost due to a failure, the write-ahead log (WAL) is required for recovery. The size of the necessary WAL files and the recovery time are kept limited thanks to the checkpoint performed from time to time. In the previous articles we already reviewed quite a few important settings that relate to WAL in one way or another. In this article (the last in this series) we will discuss the aspects of WAL setup that remain unaddressed: WAL levels and their purpose, as well as the reliability and performance of write-ahead logging.
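The settings in question are easy to inspect on a live server; a minimal sketch:

    SHOW wal_level;           -- minimal, replica (the default) or logical
    SHOW synchronous_commit;  -- trades durability for commit latency
    SHOW fsync;               -- should stay on for any production server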
April 23, 2021   •   PostgreSQL
We already got acquainted with the structure of the buffer cache — one of the main objects in shared memory — and concluded that to recover after a failure, when all RAM contents are lost, the write-ahead log (WAL) must be maintained. The problem left unaddressed, where we left off last time, is that we do not know where to start playing back WAL records during recovery. To begin from the beginning, as the King in Lewis Carroll's Alice advised, is not an option: it is impossible to keep all the WAL records since server start — that would potentially require both a huge amount of storage and an equally huge recovery time. We need a point that gradually moves forward and from which we can start the recovery (and before which, accordingly, all the previous WAL records can be safely removed). And this is the checkpoint, to be discussed below.
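Checkpoint frequency is driven by a couple of settings, and a checkpoint can also be requested manually; a minimal sketch:

    SHOW checkpoint_timeout;  -- maximum time between checkpoints (default 5min)
    SHOW max_wal_size;        -- WAL volume that triggers an extra checkpoint (default 1GB)
    CHECKPOINT;               -- force an immediate checkpoint (requires superuser rights)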
April 15, 2021   •   PostgreSQL
Last time we got acquainted with the structure of an important component of shared memory — the buffer cache. The risk of losing information from RAM is the main reason why we need techniques to recover data after a failure. Now we will discuss these techniques.
April 1, 2021   •   PostgreSQL
The previous series addressed isolation and multiversion concurrency control, and now we start a new series: on write-ahead logging. To remind you, the material is based on training courses on administration that Pavel Luzanov and I are creating (see the "Training courses" section of our website), but does not repeat them verbatim and is intended for careful reading and self-experimenting. This series will consist of four parts:
Buffer cache (this article).
Write-ahead log — how it is structured and used for data recovery.
Checkpoint and background writer — why we need them and how we set them up.
WAL setup and tuning — levels and the problems they solve, reliability, and performance.
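The buffer cache itself can be examined with the standard pg_buffercache extension; a minimal sketch showing which relations occupy the most buffers:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
         JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 5;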
March 19, 2021   •   PostgreSQL
We started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions. Then we covered different vacuuming techniques: in-page vacuum (along with HOT updates), vacuum and autovacuum. Now we've reached the last topic of this series. We will talk about transaction ID wraparound and freezing.

Transaction ID wraparound

PostgreSQL uses 32-bit transaction IDs. This is a pretty large number (about 4 billion), but with an intensive server workload it may well get exhausted. For example: at a rate of 1000 transactions per second, this will happen after about a month and a half of continuous work. But we've mentioned that multiversion concurrency control relies on sequential numbering, which means that of two transactions the one with the smaller number can be considered to have started earlier. Therefore, it is clearly not an option to just reset the counter and start the numbering from scratch. ...
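How close each database is to wraparound can be checked by the age of its oldest unfrozen transaction ID; a minimal sketch:

    SELECT datname, age(datfrozenxid)
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;
    -- age() counts transactions since datfrozenxid; autovacuum starts
    -- aggressive freezing once it exceeds autovacuum_freeze_max_age
    -- (200 million by default), well before the wraparound limit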
February 9, 2021   •   PostgreSQL
To remind you, we started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions. Then we explored in-page vacuum (and HOT updates) and vacuum. Now we'll look into autovacuum.

Autovacuum

We've already mentioned that normally (i.e., when nothing holds the transaction horizon for a long time) VACUUM does its job. The problem is how often to call it. If we vacuum a changing table too rarely, its size will grow more than desired. Besides, the next vacuum operation may require several passes through the indexes if too many changes have accumulated. If we vacuum the table too often, the server will constantly do maintenance rather than useful work — and this is no good either. Note that launching VACUUM on a schedule by no means resolves the issue, because the workload can change over time. If the table starts to change more intensively, it must be vacuumed more often. Autovacuum is exactly the technique that enables us to launch vacuuming depending on how intensively the table changes. ...
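By default a table is autovacuumed once its dead tuples exceed a threshold derived from two parameters, and both can be overridden per table; a sketch with a hypothetical accounts table:

    -- a table is vacuumed when dead tuples exceed approximately:
    --   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
    SHOW autovacuum_vacuum_threshold;     -- default 50
    SHOW autovacuum_vacuum_scale_factor;  -- default 0.2

    ALTER TABLE accounts SET (
        autovacuum_vacuum_scale_factor = 0.01  -- vacuum this table more eagerly
    );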
January 18, 2021   •   PostgreSQL
We started with problems related to isolation, made a digression about low-level data structure, then discussed row versions and observed how data snapshots are obtained from row versions. Last time we talked about HOT updates and in-page vacuuming, and today we'll proceed to the well-known vacuum vulgaris. Really, so much has already been written about it that I can hardly add anything new, but the beauty of a full picture requires sacrifice. So bear with me.

Vacuum

What does vacuum do? In-page vacuum works fast, but frees only part of the space. It works within a single table page and does not touch indexes. The basic, "normal" vacuum is done using the VACUUM command, and we will call it just "vacuum" (leaving "autovacuum" for a separate discussion). So, vacuum processes the entire table. It vacuums away not only dead tuples, but also references to them from all indexes. Vacuuming runs concurrently with other activity in the system. The table and indexes can be used in the regular way for both reads and updates (however, concurrent execution of commands such as CREATE INDEX, ALTER TABLE and some others is impossible). Only those table pages where some activity took place are scanned. To detect them, the visibility map is used (to remind you, the map tracks pages that contain only old enough tuples, which are certain to be visible in all data snapshots). Only the pages not tracked by the visibility map are processed, and the map itself gets updated. The free space map also gets updated in the process to reflect the extra free space in the pages. ...
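To watch this in action, VACUUM has a verbose mode, and the visibility map can be inspected with the pg_visibility extension; a sketch with a hypothetical accounts table:

    VACUUM (VERBOSE) accounts;  -- reports removed tuples, index passes and skipped pages

    CREATE EXTENSION IF NOT EXISTS pg_visibility;
    SELECT blkno, all_visible, all_frozen
    FROM pg_visibility_map('accounts')
    LIMIT 5;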