mark@mark.mielke.cc wrote:
> As a thought experiment, I'm not seeing the benefit. I think if you
> could prove a benefit, then any proof you provided could be used to
> improve the already existing caching layers, and would apply equally
> to read-only or read-write pages. For example, why not be able to
> hint to PostgreSQL that a disk-based table should be considered a
> priority to keep in RAM. That way, PostgreSQL would avoid pushing
> pages from this table out.
>
If memcached (or pgmemcached implemented via triggers) can show a speed
improvement by caching specific data in RAM, even with the network
overhead, then it stands to reason that a RAM-based cache built directly
into Postgres, with tighter integration, could overcome the issues
pgmemcached has. So I threw some ideas out there to get others thinking
along these lines, to see if we can come up with a way to improve on or
integrate this principle.
My original thought was to integrate it at the SQL level, letting the
database structure define what we want cached in RAM, which is similar
to what happens today when using pgmemcached.
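To make the current trigger-based approach concrete, here is a rough sketch of keeping a memcached copy of a table in sync from triggers. It assumes pgmemcache-style functions (memcache_set, memcache_delete) are installed, and the table/column names (mytable, id, value) are just placeholders:

```sql
-- Sketch only: mirror rows into memcached via triggers.
-- Assumes pgmemcache-style functions are available; exact
-- names and signatures may differ between versions.
CREATE OR REPLACE FUNCTION sync_cache() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- drop the cached copy when the row goes away
        PERFORM memcache_delete('mytable:' || OLD.id);
        RETURN OLD;
    ELSE
        -- insert/update: overwrite the cached copy
        PERFORM memcache_set('mytable:' || NEW.id, NEW.value);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_cache
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE sync_cache();
```

The point is that all of this bookkeeping lives outside the server's own caching layers, which is exactly the integration gap I'd like to close.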
Expanding CREATE TABLE to specify that a table gets priority in the
cache, or to allocate x amount of cache to table y, could be a better
approach than saying "keep all of this table in RAM".
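As a straw man, the syntax could look something like this. This is purely invented for illustration; neither option exists, and the names (cache_priority, cache_reserve) are made up:

```sql
-- Hypothetical syntax, illustration only:
-- give a table priority when competing for buffer cache
CREATE TABLE hot_lookup (id int PRIMARY KEY, name text)
    WITH (cache_priority = high);

-- or reserve a fixed slice of cache for an existing table
ALTER TABLE hot_lookup SET (cache_reserve = '256MB');
```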
I think the main benefit of my first ideas would come from the later
examples I gave, where creating a memory tablespace with slaves would
allow the use of extra machines, effectively increasing the RAM
available beyond the current Postgres setup.
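The memory tablespace idea might read something like the following. Again this is invented syntax just to show the shape of it; the MEMORY and ON SLAVES clauses do not exist:

```sql
-- Hypothetical: a tablespace backed by RAM, spread across slave machines
CREATE TABLESPACE fastspace MEMORY '2GB' ON SLAVES ('slave1', 'slave2');

CREATE TABLE session_data (sid text PRIMARY KEY, payload text)
    TABLESPACE fastspace;
```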
Maybe implementing this idea as an extension of the current Postgres
caching would be a better approach than the memory tablespaces idea:
integrate a version of pgmemcached as an option within the existing
caching layers, so it is enabled at the config level instead of in the
structure design. Although defining tables that get priority, or
allocated space, in the RAM cache would fit well with that too.