• | PGConf.Online 2021 wrap-up: 57 talks, 30 international speakers, 2000+ registrants!

    PGConf.Online wrapped up 10 days ago, and we want to share some important stats with you. We were glad to welcome 2000+ registrants this year, with 700-900 of them joining us online each day! The attendance-to-registration rate across all three days was nearly 75%, which is amazing!

  • | Celebrating Women Who Code in Postgres Pro

    In early March, ahead of International Women’s Day, we asked our female tech professionals a few questions and were pleasantly surprised by some of their answers :) Let’s celebrate the women who code here at Postgres Pro and have a look at what they are up to!

     

  • | Postgres Professional is now sponsoring Psycopg development!

    Postgres Professional has become one of the main sponsors backing the development of the Psycopg library, the most popular PostgreSQL adapter for the Python programming language.

  • | Postgres Professional at FOSDEM 2021

    It’s that time of the year again, and FOSDEM is coming! In 2021, Postgres Professional is very well represented in the PostgreSQL devroom, with 5 talks accepted by the committee. Let’s take a look at all the presentations to be given by our team at this conference.

All news

  • | WAL in PostgreSQL: 4. Setup and Tuning

    So far, we have gotten acquainted with the structure of the buffer cache and concluded that if all the RAM contents are lost due to a failure, the write-ahead log (WAL) is required for recovery. The size of the necessary WAL files and the recovery time are kept bounded thanks to the checkpoint performed from time to time.

    In the previous articles we already reviewed quite a few important settings related to WAL in one way or another. In this article (the last in the series) we will discuss the aspects of WAL setup not yet addressed: WAL levels and their purpose, as well as the reliability and performance of write-ahead logging.

    WAL levels

    The main task of WAL is to ensure recovery after a failure. But since we have to maintain the log anyway, we can also adapt it to other tasks by adding some more information to it. There are several logging levels. The wal_level parameter specifies the level, and each successive level includes everything that gets into WAL at the preceding level, plus something new.
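
    As a quick illustration (a minimal sketch; in recent PostgreSQL versions the possible values are minimal, replica, and logical), the current level can be inspected from SQL, while changing it requires editing postgresql.conf and restarting the server:

        -- Show the current WAL level (minimal, replica, or logical)
        SHOW wal_level;

        -- To change it, set e.g. "wal_level = replica" in postgresql.conf
        -- and restart the server; this parameter cannot be changed on the fly.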

    ...

  • | WAL in PostgreSQL: 3. Checkpoint

    We have already gotten acquainted with the structure of the buffer cache — one of the main objects in shared memory — and concluded that to recover after a failure in which all the RAM contents are lost, the write-ahead log (WAL) must be maintained.

    The problem yet unaddressed, where we left off last time, is that we do not know where to start replaying WAL records during recovery. To "begin at the beginning," as the King in Lewis Carroll's Alice advised, is not an option: it is impossible to keep all the WAL records since the server start — that would potentially require a huge amount of storage and an equally huge recovery time. We need a point that gradually moves forward and from which we can start recovery (and, accordingly, safely remove all the earlier WAL records). And this is the checkpoint, to be discussed below.

    Checkpoint

    What properties must the checkpoint have? We must be sure that all the WAL records starting at the checkpoint will be applied to pages that have been flushed to disk. If this were not the case, during recovery we could read from disk a version of a page that is too old, apply a WAL record to it, and thereby irreversibly corrupt the data.
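
    As a small aside (a hedged sketch, not part of the article itself): the position of the latest completed checkpoint is stored in the pg_control file and, since PostgreSQL 9.6, can be inspected from SQL; a checkpoint can also be forced manually:

        -- Location of the latest checkpoint and of its redo point
        SELECT checkpoint_lsn, redo_lsn FROM pg_control_checkpoint();

        -- Force an immediate checkpoint (superuser only)
        CHECKPOINT;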

    ...

  • | WAL in PostgreSQL: 2. Write-Ahead Log

    Last time we got acquainted with the structure of an important component of shared memory — the buffer cache. The risk of losing information from RAM is the main reason why we need techniques to recover data after a failure. Now we will discuss these techniques.

    The log

    Sadly, there is no such thing as a miracle: to survive the loss of information in RAM, everything needed must be duly saved to disk (or other nonvolatile media).

    Therefore, the following approach is used: along with changing the data, we maintain a log of these changes. Whenever we change something on a page in the buffer cache, we create a record of this change in the log. The record contains the minimum information sufficient to redo the change if the need arises.

    For this to work, the log record must reach disk before the changed page does. And this explains the name: write-ahead log (WAL).

    In case of a failure, the data on disk turn out to be inconsistent: some pages were written earlier and others later. But the WAL remains, and we can read it and redo the operations that were performed before the failure but whose results did not make it to disk.
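
    A minimal sketch of this in action (the table name here is hypothetical, and pg_current_wal_lsn() is available since PostgreSQL 10): every change advances the current WAL write position, showing that a log record is emitted long before the page itself is flushed:

        SELECT pg_current_wal_lsn();                              -- current WAL write position
        UPDATE accounts SET balance = balance + 1 WHERE id = 1;   -- hypothetical table
        SELECT pg_current_wal_lsn();                              -- position has advanced: a WAL record was written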

    ...

Blog

  • mamonsu

    An active monitoring agent for Postgres Pro. Based on Zabbix, mamonsu provides an extensible cross-platform solution that can collect and visualize multiple Postgres Pro and system metrics.

  • JsQuery

    JsQuery is a language for querying the jsonb data type, introduced in PostgreSQL 9.4. Its primary goal is to provide additional functionality for jsonb that is currently missing in PostgreSQL, such as a simple and effective way to search in nested objects and arrays, and more comparison operators with index support (see the sketch after this list).

  • pg_probackup

    pg_probackup is a utility for managing backup and recovery of PostgreSQL database clusters. It is designed to perform periodic backups of a PostgreSQL instance that enable you to restore the server in case of a failure.

  • pg_variables

    Functions for defining and using variables in client sessions.
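
    A hedged sketch of how pg_variables is typically used (function names taken from the extension's documentation; verify them against your version):

        CREATE EXTENSION pg_variables;
        SELECT pgv_set('vars', 'counter', 42);            -- store an integer variable in the session
        SELECT pgv_get('vars', 'counter', NULL::int);     -- read it back: 42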
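
    And, as mentioned above, a minimal sketch of the jsquery query language (the @@ operator matches a jsonb value against a jsquery expression; see the extension's documentation for the full operator set):

        CREATE EXTENSION jsquery;
        SELECT '{"a": {"b": 1}}'::jsonb @@ 'a.b = 1'::jsquery;   -- true: searches inside the nested object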

Postgres Extensions

  • Page-level data compression

    PostgreSQL currently uses PGLZ compression of individual values. That’s good, but sometimes significant compression can only be achieved by compressing multiple values together. This is why we’re considering page-level data compression.

  • JIT compilation of queries

    The current approach to query execution is essentially interpretation of the plan tree. JIT compilation of queries means compiling the plan tree into binary code. Such compilation can remove multiple levels of indirection, thereby accelerating queries.

  • Multi-master cluster with sharding

    A multi-master cluster with sharding, providing read/write scalability as well as high availability, is obviously one of the most wanted DBMS features. Experience shows that we should move step by step in order to have it in core one day. Postgres Professional joins community efforts in this direction.

Roadmap



Tasks

Postgres Professional

Postgres Professional is the Russian PostgreSQL company, founded by Russian PostgreSQL contributors. The company has 50+ employees, among them three PostgreSQL Major Contributors.

Postgres Professional is an active member of the international PostgreSQL community; its developers contributed 93 patches to the latest release, PostgreSQL 10.0.

Our company has successfully delivered large PostgreSQL projects, including database migration projects for well-known Russian and international companies. We provide industrial PostgreSQL services: vendor technical support, migration, development of custom extensions and core patches, migration-related consulting, training, and certification.