F.24. in_memory

The in_memory extension enables you to store data in Postgres Pro shared memory using in-memory tables implemented via foreign data wrappers (FDW).

Note

This extension cannot be used together with autonomous transactions.

In-memory tables are index-organized — table rows are stored in leaf pages of the B-tree index defined on the primary key for the table. This solution offers the following benefits:

  • Fast random access on the primary key. This can result in significant performance benefits when working with data that requires a very high read and write access rate, especially on multi-core systems.

  • Efficient space usage. There is no primary key duplication, as all data is stored directly in the index.

In-memory tables support transactions, including savepoints. However, the data in such tables is stored only while the server is running. Once the server is shut down, all in-memory data gets truncated. When using in-memory tables, you should also take into account the following restrictions:

  • Persistence, WAL, and data replication are currently not supported for in-memory tables.

  • Secondary indexes are not supported.

  • Isolation levels up to REPEATABLE READ are supported. If the SERIALIZABLE isolation level is requested, REPEATABLE READ is used instead.

  • In-memory tables do not support TOAST or any other mechanism for storing big tuples. Since the in-memory page size is 1 kB, and the B-tree index requires at least three tuples in a page, the maximum row length is limited to 304 bytes.

  • When a row is deleted from an in-memory table, the corresponding data page is not freed. See Section F.24.2.3 for details.

F.24.1. Installation and Setup

To enable in-memory tables for your cluster:

  1. Make sure that the postgres_fdw module is enabled.

  2. Add the in_memory value to the shared_preload_libraries variable in the postgresql.conf file:

    shared_preload_libraries = 'in_memory'
    

  3. Create the in_memory extension using the following statement:

    CREATE EXTENSION in_memory;
    

As a result, the in_memory foreign server is created, and a separate shared memory pool with pre-created pages for storing in-memory data is allocated for in-memory tables. Compared to hard drives or SSDs, effective memory access requires smaller locality, so the in-memory page size is only 1 kB. Once the extension is created, you can start using in-memory tables as explained in Section F.24.2.
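To confirm that the setup succeeded, you can check the server configuration and the standard foreign-server catalog (a quick sanity check; the exact output depends on your configuration):

```sql
SHOW shared_preload_libraries;   -- the list should include 'in_memory'

SELECT srvname                   -- the in_memory foreign server should be listed
FROM pg_foreign_server
WHERE srvname = 'in_memory';
```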

Tip

If required, you can increase the memory size allocated for in-memory tables. For details, see Section F.24.2.6.

F.24.2. Usage

F.24.2.1. Creating In-Memory Tables

To add an in-memory table to your database, create a foreign table on the in_memory server, using the regular CREATE FOREIGN TABLE syntax. By default, a unique B-tree index is built on the first column, in ascending order. If required, you can use the INDICES option in the OPTIONS clause to define a different B-tree index structure, as follows:

OPTIONS ( INDICES '[ UNIQUE ] {column [ COLLATE collation ] [ASC | DESC] } [, ... ]' )

where column is a column to include into the B-tree index, and collation is the name of the collation to use for this column. You can specify up to eight arbitrary columns separated by commas, with sorting options (COLLATE, ASC/DESC) defined for each of these columns. The UNIQUE keyword declares the created index unique, reiterating the default behavior.

All columns to be used in the B-tree index must be of a type for which the default B-tree operator class is available. For details on operator classes, see Section 11.9.

Examples

Create an in-memory table blog_views to store statistics on blog views based on blog post IDs, with the unique B-tree index built on the first column, in ascending order:

CREATE FOREIGN TABLE blog_views
(
    id int8 NOT NULL,
    views bigint NOT NULL
) SERVER in_memory
OPTIONS (INDICES 'UNIQUE (id)');

Alternatively, define the B-tree index on the id and author columns, with the author values sorted in ascending order using the "ru_RU" collation:

CREATE FOREIGN TABLE blog_views
(
    id int8 NOT NULL,
    author text,
    views bigint NOT NULL
) SERVER in_memory
OPTIONS (INDICES '(id, author COLLATE "ru_RU" ASC)');

F.24.2.2. Running Queries on In-Memory Tables

Once an in-memory table is created, you can run all the main DML operations on this table: SELECT, INSERT, UPDATE, DELETE.

If you use the primary key as the scan qualifier when running queries, a key lookup or range scan is performed. Otherwise, a full index scan is required.

Examples

Fill the blog_views table with zero view counts for the first ten blog posts:

postgres=# INSERT INTO blog_views (id, views) SELECT id, 0 FROM generate_series(1, 10) AS id;
INSERT 0 10

Increment the view count for a couple of posts and display the result:

postgres=# UPDATE blog_views SET views = views + 1 WHERE id = 1;
UPDATE 1
postgres=# UPDATE blog_views SET views = views + 1 WHERE id = 1;
UPDATE 1
postgres=# UPDATE blog_views SET views = views + 1 WHERE id = 2;
UPDATE 1
postgres=# SELECT * FROM blog_views WHERE id = 1 OR id = 2;
 id | views
----+-------
  1 |     2
  2 |     1
(2 rows)

Check planning and execution costs for a query that only requires a primary key lookup:

postgres=# EXPLAIN ANALYZE SELECT * FROM blog_views WHERE id = 1;
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Foreign Scan on blog_views  (cost=0.02..0.03 rows=1 width=16) (actual time=0.013..0.014 rows=1 loops=1)
   Pk conds: (id = 1)
 Planning time: 0.060 ms
 Execution time: 0.035 ms
(4 rows)

Check the costs of calculating the sum of all views, which requires a full index scan:

postgres=# EXPLAIN ANALYZE SELECT SUM(views) FROM blog_views;
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1.62..1.63 rows=1 width=32) (actual time=0.323..0.323 rows=1 loops=1)
   ->  Foreign Scan on blog_views  (cost=0.02..1.30 rows=128 width=8) (actual time=0.005..0.168 rows=10 loops=1)
 Planning time: 0.113 ms
 Execution time: 0.353 ms
(4 rows)

F.24.2.3. Deleting In-Memory Data

When a row is deleted from an in-memory table, data pages are not freed. To free the pages occupied by in-memory tables, you can:

  • Delete the table using the DROP FOREIGN TABLE command.

  • Truncate the table using the TRUNCATE command.

The RESTART IDENTITY, CONTINUE IDENTITY, CASCADE, and RESTRICT options of the TRUNCATE command are not supported for in-memory tables.
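For example, to release the pages held by the blog_views table from the earlier examples:

```sql
-- Frees the table's data pages but keeps the table definition:
TRUNCATE blog_views;

-- Frees the pages and removes the table entirely:
DROP FOREIGN TABLE blog_views;
```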

F.24.2.4. Writing Data to In-Memory Tables on Hot Standby

In some cases, it can be useful to perform write operations on hot standby servers. For example, suppose you need to collect statistics on queries run on hot standby. Since in-memory tables are writable, you can use them for such purposes.

To set up in-memory tables for write queries on a standby, create the required in-memory tables on the master server as explained in Section F.24.2.1. Once their definitions have been replicated, you can start writing data to these tables on the hot standby.

Important

If the hot standby server is restarted, all the data stored in its in-memory tables gets truncated. You can continue writing data to the same in-memory tables, but all the previously stored data will be lost.
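A minimal sketch of this workflow, using a hypothetical standby_query_stats table for collecting per-query counters:

```sql
-- On the master: create the table; its definition is replicated to the standby.
CREATE FOREIGN TABLE standby_query_stats
(
    query_id int8 NOT NULL,
    calls    bigint NOT NULL
) SERVER in_memory
OPTIONS (INDICES 'UNIQUE (query_id)');

-- On the hot standby: writes to the in-memory table succeed even though
-- the server is read-only for regular tables.
INSERT INTO standby_query_stats VALUES (1, 1);
UPDATE standby_query_stats SET calls = calls + 1 WHERE query_id = 1;
```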

F.24.2.5. Getting Statistics on In-Memory Tables

To get statistics on in-memory pages available in your cluster, run the in_memory_page_stats function, which returns the numbers of used and free in-memory pages, as well as the total number of pages allocated for in-memory tables. For example:

postgres=# SELECT * FROM in_memory.in_memory_page_stats();
 busy_pages | free_pages | all_pages
------------+------------+-----------
        576 |       7616 |      8192
(1 row)

F.24.2.6. Fine-Tuning Memory Settings

F.24.2.6.1. Increasing Shared Memory Pool for In-Memory Tables

In-memory tables are stored in a separate shared memory segment. Its size is defined by the in_memory.shared_pool_size parameter. By default, this memory segment is limited to 8 MB.

If the data to be stored in in-memory tables exceeds the size of the allocated memory segment, the following error occurs:

ERROR: failed to get a new page: shared pool size is exceeded

To avoid such issues, you can increase the in_memory.shared_pool_size value, or limit the size of the stored data. Changing the shared pool size requires a server restart.
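For example, in postgresql.conf (the 32MB value is only illustrative; size the pool to fit your data):

```
in_memory.shared_pool_size = '32MB'    # takes effect after a server restart
```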

F.24.2.6.2. Managing the Undo Log

To enable multi-version concurrency control (MVCC), the in_memory module uses the undo log — a shared-memory ring buffer that stores the previous versions of data entries and pages. The size of the undo log is defined by the in_memory.undo_size parameter and is limited to 1 MB by default. If a buffer overflow occurs before a transaction is complete, the following error is returned:

ERROR: failed to add undo record: undo size is exceeded

To avoid this issue, you can increase the in_memory.undo_size value, or split the transactions into smaller ones.
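For example, in postgresql.conf (the 4MB value is only illustrative):

```
in_memory.undo_size = '4MB'    # takes effect after a server restart
```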

If the required version of the entry or page has already been overwritten in the undo log when it is accessed for read, the following error occurs:

ERROR: snapshot is outdated

In this case, you can:

  • Increase the in_memory.undo_size value. Changing this parameter requires a server restart.

  • Ensure that the undo log is not truncated while the snapshot is in use. To achieve this, you can use the READ COMMITTED isolation level, or split a complex query into several smaller ones.

F.24.3. Reference

F.24.3.1. Configuration Variables

in_memory.shared_pool_size (integer)

Defines the size of the shared memory segment allocated for in-memory tables.

Default: 8MB

in_memory.undo_size (integer)

Defines the size of the undo log.

Default: 1MB

F.24.3.2. Functions

in_memory.in_memory_page_stats()

Displays statistics on pages of in-memory tables:

  • busy_pages — the number of in-memory pages containing data.

  • free_pages — the number of empty in-memory pages. This number includes all the initially allocated pages to which no data has been written yet, as well as the pages from which all data has been deleted.

  • all_pages — the total number of in-memory pages allocated on this server.

F.24.4. Authors

Postgres Professional, Moscow, Russia