All news

  • Indexes in PostgreSQL — 10 (Bloom)

    In the previous articles, we discussed the PostgreSQL indexing engine and the interface of access methods, as well as B-trees, GiST, SP-GiST, GIN, RUM, and BRIN. We still need to look at Bloom indexes.

    Bloom

    General concept

    A classical Bloom filter is a data structure that enables us to quickly check whether an element belongs to a set. The filter is highly compact, but allows false positives: it can mistakenly report an element as a member of the set (a false positive), but it can never report an actual member as absent (a false negative is impossible).

    The filter is an array of m bits (also called a signature), initially filled with zeros. k different hash functions are chosen; they map any element of the set to k bits of the signature. To add an element to the set, each of these bits in the signature is set to one. Consequently, if all the bits corresponding to an element are set to one, the element may be a member of the set; if at least one bit is zero, the element is certainly not in the set.
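
    A toy illustration, not how the bloom extension actually implements its filters: the query below derives k = 2 bit positions in an m = 16-bit signature from the built-in hashtext() function, simulating the second hash function by salting the input (an assumption made purely for this example).

        -- Which bits of a 16-bit signature would each element set?
        SELECT val,
               abs(hashtext(val))         % 16 AS bit_1,   -- first hash function
               abs(hashtext(val || '#2')) % 16 AS bit_2    -- second "hash function" (salted input)
        FROM (VALUES ('alpha'), ('bravo'), ('charlie')) AS s(val);

    An element may be present only if both of its bits are set in the signature; if either bit is zero, the element is definitely absent.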

    In the case of a DBMS, we actually have N separate filters, one built for each index row. As a rule, several fields are included in the index, and it is the values of these fields that compose the set of elements for each row.

    By choosing the length of the signature m, we can find a trade-off between the index size and the probability of false positives. The application area for the Bloom index is large, considerably "wide" tables that are queried using filters on any of the fields. This access method, like BRIN, can be regarded as an accelerator of sequential scan: all the matches found by the index must be rechecked against the table, but there is a chance to avoid considering most of the rows at all.
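
    As a rough sketch of how this looks in practice (the table t and its columns are hypothetical), the signature length and the number of bits per column value are specified when the index is created:

        CREATE EXTENSION bloom;

        -- length is the signature size in bits, colN is the number of bits
        -- set in the signature for each value of the N-th indexed column
        CREATE INDEX t_bloom_idx ON t USING bloom (a, b, c, d, e, f)
            WITH (length = 80, col1 = 2, col2 = 2, col3 = 2,
                  col4 = 2, col5 = 2, col6 = 2);

        -- one index serves equality filters on any combination of the columns
        SELECT * FROM t WHERE b = 1 AND e = 42;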

  • Indexes in PostgreSQL — 9 (BRIN)

    In the previous articles, we discussed the PostgreSQL indexing engine, the interface of access methods, and the following methods: B-trees, GiST, SP-GiST, GIN, and RUM. The topic of this article is BRIN indexes.

    BRIN

    General concept

    Unlike the indexes we have already got acquainted with, the idea of BRIN is not to quickly find matching rows, but to avoid scanning through definitely unsuitable ones. A BRIN index is always inexact: it does not contain TIDs of table rows at all.

    Put simply, BRIN works well for columns whose values correlate with their physical location in the table, that is, when a query without an ORDER BY clause returns the column values virtually in increasing or decreasing order (and there are no indexes on that column).
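
    One way to check whether a column is a good candidate is the correlation statistic collected by ANALYZE; the table and column names below are hypothetical, and values close to 1 or -1 indicate that the physical order of rows closely follows the column values:

        SELECT tablename, attname, correlation
        FROM pg_stats
        WHERE tablename = 'measurements' AND attname = 'recorded_at';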

    This access method was created within the scope of Axle, the European project for extremely large analytical databases, with an eye on tables several terabytes or dozens of terabytes in size. An important feature of BRIN that enables us to create indexes on such tables is its small size and minimal maintenance overhead.

    This works as follows. The table is split into ranges several pages (or blocks, which is the same) in size; hence the name: Block Range Index, BRIN. For each range, the index stores summary information on the data it contains. As a rule, these are the minimal and maximal values, but other kinds of summaries are possible, as shown further. Assume a query contains a condition on a column: if the sought values do not fall within the stored interval, the whole range can be skipped; if they do, all rows in all blocks of the range have to be scanned to choose the matching ones.
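
    A minimal sketch, again with a hypothetical table: the range size is controlled by the pages_per_range storage parameter (128 pages by default):

        CREATE INDEX measurements_brin_idx
            ON measurements USING brin (recorded_at)
            WITH (pages_per_range = 16);

        -- ranges whose [min, max] summary does not cover the interval are skipped;
        -- the remaining ranges are read in full and their rows rechecked
        SELECT * FROM measurements
        WHERE recorded_at BETWEEN '2024-01-01' AND '2024-01-02';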

    It would not be a mistake to treat BRIN not as an index, but as an accelerator of sequential scan. We can regard it as an alternative to partitioning if we consider each range to be a "virtual" partition.

    Now let's discuss the structure of the index in more detail.

  • Indexes in PostgreSQL — 8 (RUM)

    We have already discussed the PostgreSQL indexing engine, the interface of access methods, and the main access methods: hash indexes, B-trees, GiST, SP-GiST, and GIN. In this article, we will see how gin turns into rum.

    RUM

    Although the authors claim that gin is a powerful genie, the theme of drinks has eventually won: the next-generation GIN has been called RUM.

    This access method expands the concept that underlies GIN and enables us to perform full-text search even faster. It is the only method in this series that is not included in the standard PostgreSQL distribution; it is an external extension. Several installation options are available:

    • Take "yum" or "apt" package from the PGDG repository. For example, if you installed PostgreSQL from "postgresql-10" package, also install "postgresql-10-rum".
    • Build from source code on github and install on your own (the instruction is there as well).
    • Use as a part of Postgres Pro Enterprise (or at least read the documentation from there).

    Limitations of GIN

    What limitations of GIN does RUM enable us to overcome?

    First, "tsvector" data type contains not only lexemes, but also information on their positions inside the document. As we observed last time, GIN index does not store this information. For this reason, operations to search for phrases, which appeared in version 9.6, are supported by GIN index inefficiently and have to access the original data for recheck.

    Second, search systems usually return the results sorted by relevance (whatever that means). We can use the ranking functions "ts_rank" and "ts_rank_cd" to this end, but they have to be computed for each row of the result, which is certainly slow.

    To a first approximation, the RUM access method can be considered as GIN that additionally stores positional information and can return results in the needed order (much like GiST can return nearest neighbors). Let's move step by step.
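
    A minimal usage sketch, assuming a hypothetical docs table with a tsvector column tsv: the <=> operator provided by the extension returns the distance between a tsvector and a tsquery, so the index itself can deliver rows in relevance order:

        CREATE EXTENSION rum;

        CREATE INDEX docs_rum_idx ON docs USING rum (tsv rum_tsvector_ops);

        SELECT id
        FROM docs
        WHERE tsv @@ to_tsquery('english', 'index & search')
        ORDER BY tsv <=> to_tsquery('english', 'index & search')
        LIMIT 10;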

Blog

  • JsQuery

    JsQuery is a language for querying the jsonb data type, which was introduced in PostgreSQL release 9.4. Its primary goal is to provide additional functionality currently missing from jsonb in PostgreSQL, such as a simple and effective way to search in nested objects and arrays, and more comparison operators with index support.
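
    A brief sketch of what a jsquery predicate looks like (the sample document and index below are illustrative only):

        CREATE EXTENSION jsquery;

        -- path expressions with comparisons; # matches any array element
        SELECT '{"age": 25, "skills": ["sql", "go"]}'::jsonb @@
               'age > 20 AND skills.# = "sql"'::jsquery;

        -- the extension also provides GIN operator classes for indexed search, e.g.
        -- CREATE INDEX ON tbl USING gin (jsonb_col jsonb_path_value_ops);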

  • mamonsu

    An active monitoring agent for Postgres Pro. Based on Zabbix, mamonsu provides an extensible cross-platform solution that can collect and visualize multiple Postgres Pro and system metrics.

  • pg_probackup

    pg_probackup is a utility to manage backup and recovery of PostgreSQL database clusters. It is designed to perform periodic backups of a PostgreSQL instance that enable you to restore the server in case of a failure.

  • pg_variables

    Functions for defining and using variables in client sessions.
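
    A minimal sketch of session-variable usage, based on the pgv_set/pgv_get functions described in the extension's documentation (the package and variable names here are arbitrary):

        CREATE EXTENSION pg_variables;

        SELECT pgv_set('app', 'user_id', 42);          -- remember a value for this session
        SELECT pgv_get('app', 'user_id', NULL::int);   -- returns 42 in the same session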

Postgres Extensions

  • Query plans management

    Business-critical applications require continuous monitoring and adjustment of query plans. The problem is aggravated by the fact that the actual resource consumption of query execution can differ significantly even with slight changes in the query plan cost. Thus, there is a permanent risk of a bad plan being selected, and of overall system performance degrading, after new statistics are collected. DBAs want to protect themselves from this risk by freezing the plans of their most critical queries.

  • Connection pooling

    Working with a large number of simultaneous connections almost inevitably leads to the use of connection pools. Unfortunately, existing pooling solutions severely limit access to PostgreSQL features.

  • Multi-master cluster with sharding

    A multi-master cluster with sharding, which provides read/write scalability as well as high availability, is obviously one of the most wanted DBMS features. Experience shows that we should move in a step-by-step manner in order to have it in core one day. Postgres Professional joins the community efforts in this direction.

Roadmap

Tasks

Postgres Professional

Postgres Professional is the Russian PostgreSQL company founded by Russian PostgreSQL contributors. The company has 50+ employees, among them three Major PostgreSQL Contributors.

Postgres Professional is an active member of the international PostgreSQL community; its developers contributed 93 patches to the latest release, PostgreSQL 10.0.

Our company has successfully completed large PostgreSQL projects, including database migration projects for well-known Russian and international companies. We provide industrial PostgreSQL services: vendor technical support, migration, development of custom extensions and core patches, migration-related consulting, training, and certification.