Thread: Slicing TOAST
I'm proposing this now as a possible GSoC project:

In 1-byte character encodings (i.e. not UTF-8), SUBSTR() is optimised to seek straight to the exact slice when retrieving a large toasted value. This reduces I/O considerably when you have large toasted values, since the fetch is O(1) rather than O(N). This is possible because the slicing of toasted values is predictable in single-byte encodings.

It would be useful to have a predictable function perform the slicing, so we could use that knowledge later to optimise searches in a wider range of situations. More specifically, since UTF-8 is so common, it would allow optimisations in that encoding for common kinds of data: text, XML, JSON.

e.g. if we knew that an XML document has a required element called TITLE that occurs exactly once and always in the first slice, that would be useful information for search functions. (Not sure, but it may be possible to assign non-consecutive slice numbers to allow variable-length data mid-way through a column value, if needed.)

e.g. in UTF-8 free text we could put 500 characters in each slice, so that even though a slice could then be anywhere between 500 and 2000 bytes, it would still fit just fine.

e.g. for arrays, if we put say 200 elements per slice, then accessing a particular element would require only one slice retrieval.

Doing this would *possibly* reduce packing density, but not certainly so. And it would greatly improve access times to large structured toast values.

The implementation would be a slicing function that gets called iteratively on a column value until it returns no further slices.

There is no proposal here for the search functions themselves. It would be up to each search function to confirm the details of the slicing function before relying on that knowledge in a search. We'd need a way to check that a function's inputs matched the slicing of the column, i.e. some requirement on the function's input that could be checked against metadata on the column - presumably some decoration of the function's input parameters, which sounds like rather too much difficulty.

So the proposal is to provide the slicing/chunking function at the datatype level, not the column level. The user would create a binary-compatible type that is effectively XML or whatever, just with extra constraints on usage for slicing. Now that I come to write down the syntax, I'll call it a splitter function, since "slicer" and "chunker" sound silly to me:

    CREATE TYPE my_xml LIKE xml SPLITTER my_toast_function;

Search functions would then be designed to take such datatypes as input and could rely with certainty on the toast slicing algorithm when retrieving data. Doing it this way means that different XML or JSON schemas could have specific search functions optimised for them.

To be clear: I'm proposing this as a possible GSoC project; I don't propose to actively work on it myself.

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
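For concreteness, here is a rough sketch of how the proposal might surface at the SQL level. None of this runs today: the SPLITTER clause is exactly the extension proposed above, and the function signature, the names my_xml_splitter and my_xml_title, and the docs table are all hypothetical, shown only under the assumption that a splitter is called repeatedly and returns one slice per call.

```sql
-- Hypothetical sketch: the SPLITTER clause and these functions do not exist
-- in PostgreSQL; they only illustrate the proposal above.

-- A splitter is called iteratively on a value: each call returns the next
-- slice to store as a TOAST chunk, and NULL once no further slices remain.
-- (This exact signature is one possible shape, not part of the proposal.)
CREATE FUNCTION my_xml_splitter(doc xml, slice_no integer)
    RETURNS bytea
    AS 'my_xml', 'my_xml_splitter'
    LANGUAGE C IMMUTABLE STRICT;

-- A binary-compatible type that carries the slicing guarantee, using the
-- proposed syntax.
CREATE TYPE my_xml LIKE xml SPLITTER my_xml_splitter;

-- A search function written against my_xml could then assume, for example,
-- that the required TITLE element always lands in the first slice and fetch
-- only that chunk, instead of detoasting the whole document.
CREATE TABLE docs (id integer PRIMARY KEY, body my_xml);

SELECT id FROM docs WHERE my_xml_title(body) = 'Slicing TOAST';
```

For comparison, the one place where the backend can already exploit predictable slicing is substr()/substring() on values stored uncompressed (STORAGE EXTERNAL) in single-byte encodings, where character offsets map directly onto TOAST chunk offsets.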
On 05/14/2013 10:05 AM, Simon Riggs wrote:
[...]
> Doing this would *possibly* reduce packing density, but not certainly
> so. But it would greatly improve access times to large structured
> toast values.

On the contrary - it could enable us to pack the chunks better, fitting more on a page, especially for compressible data :)

That is, first chunk into N bytes, then compress each chunk.

-----------------
Hannu
On 14 May 2013 08:05, Simon Riggs <simon@2ndquadrant.com> wrote:
> I'm proposing this now as a possible GSoC project:

Unfortunately, the deadline for project submissions by students was 3rd May. If this isn't worked on before next year, it can of course be put forward as an idea for GSoC 2014.

--
Thom
On 14 May 2013 18:21, Thom Brown <thom@linux.com> wrote:
> On 14 May 2013 08:05, Simon Riggs <simon@2ndquadrant.com> wrote:
>> I'm proposing this now as a possible GSoC project:
>
> Unfortunately, the deadline for project submissions by students was
> 3rd May. If this isn't worked on before next year, it can of course
> be put forward as an idea for GSoC 2014.

Having reviewed the list of project ideas, I thought I'd submit an alternative, in case we have difficulties.

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
> I'm proposing this now as a possible GSoC project; I don't propose to
> actively work on it myself.

The deadline for submitting GSoC projects (by students) was a week ago. So is this a project suggestion for next year ...?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
On 14.05.2013 21:36, Josh Berkus wrote:
>> I'm proposing this now as a possible GSoC project; I don't propose to
>> actively work on it myself.
>
> The deadline for submitting GSoC projects (by students) was a week ago.
> So is this a project suggestion for next year ...?

I've been thinking we should already start collecting ideas for next year, and keep collecting them throughout the year. I know I come up with some ideas every now and then, but when it's time for another GSoC, I can't remember any of them.

I just created a GSoC 2014 ideas page on the wiki for collecting these: https://wiki.postgresql.org/wiki/GSoC_2014. Let's keep the ideas coming throughout the year.

- Heikki
On 14 May 2013 19:47, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
> On 14.05.2013 21:36, Josh Berkus wrote:
>>> I'm proposing this now as a possible GSoC project; I don't propose to
>>> actively work on it myself.
>>
>> The deadline for submitting GSoC projects (by students) was a week ago.
>> So is this a project suggestion for next year ...?
>
> I've been thinking we should already start collecting ideas for next year,
> and keep collecting them throughout the year. I know I come up with some
> ideas every now and then, but when it's time for another GSoC, I can't
> remember any of them.
>
> I just created a GSoC 2014 ideas page on the wiki for collecting these:
> https://wiki.postgresql.org/wiki/GSoC_2014. Let's keep the ideas coming
> throughout the year.

Thanks Heikki, that's a capital idea.

--
Thom
On Tue, May 14, 2013 at 11:47 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
> I've been thinking we should already start collecting ideas for next year,
> and keep collecting them throughout the year. I know I come up with some
> ideas every now and then, but when it's time for another GSoC, I can't
> remember any of them.

It seems like the PostgreSQL wiki Todo list has a lot of deadwood. I wouldn't tell a novice hacker to go and pick something from there. Maintaining a list of good beginner projects is actually a pretty hard undertaking.

One thing I've heard multiple times in the past is that the archetypal beginner project is to add some feature to psql. Well, psql is fairly feature-complete these days, so finding something to do there that's likely to be accepted is probably not that easy.

--
Peter Geoghegan
On 14 May 2013 20:04, Peter Geoghegan <pg@heroku.com> wrote:
> On Tue, May 14, 2013 at 11:47 AM, Heikki Linnakangas
> <hlinnakangas@vmware.com> wrote:
>> I've been thinking we should already start collecting ideas for next year,
>> and keep collecting them throughout the year. I know I come up with some
>> ideas every now and then, but when it's time for another GSoC, I can't
>> remember any of them.
>
> It seems like the PostgreSQL wiki Todo list has a lot of deadwood. I
> wouldn't tell a novice hacker to go and pick something from there.
> Maintaining a list of good beginner projects is actually a pretty hard
> undertaking.

I think that's why Heikki is proposing we collect GSoC-friendly ideas separately and put them on the list for next year.

--
Thom
Hello, Heikki.

You wrote:

HL> On 14.05.2013 21:36, Josh Berkus wrote:
>>> I'm proposing this now as a possible GSoC project; I don't propose to
>>> actively work on it myself.
>>
>> The deadline for submitting GSoC projects (by students) was a week ago.
>> So is this a project suggestion for next year ...?

HL> I've been thinking we should already start collecting ideas for next
HL> year, and keep collecting them throughout the year. I know I come up
HL> with some ideas every now and then, but when it's time for another
HL> GSoC, I can't remember any of them.

HL> I just created a GSoC 2014 ideas page on the wiki for collecting these:
HL> https://wiki.postgresql.org/wiki/GSoC_2014. Let's keep the ideas coming
HL> throughout the year.

Good idea! It reminds me of a feature Pavel Stehule proposed a while ago here: http://www.postgresql.org/message-id/BANLkTini+ChGKfnyjkF1rsHSQ2kMktSDjg@mail.gmail.com

It's about streaming functionality for the BYTEA type, though I think streaming should also be added to TEXT and to VARCHAR without a length specifier. As Pavel stated, "a very large bytea is limited by query size - processing a long query needs too much RAM". That is exactly the problem that came up recently in a project for one of my clients: he stored images in bytea and sent them in text format through PQexec, which, as you know, doubles or triples the size of the data.

Some more details from Pavel:

<quote>
There are a few disadvantages of LO compared to bytea, so there are requests for a "smarter" API for bytea. A significant problem is the different implementation of LO for people who have to port applications to PostgreSQL from Oracle or DB2. There are some JDBC issues too. For me, the main disadvantage of LO is the single shared space for all large objects. Bytea removes this disadvantage, but it is slower for lengths > 20 MB. It would be really practical to be able to insert some large fields in a second, non-SQL stream. The same applies when a large bytea is read.
</quote>

I'm not sure the whole project is small enough for GSoC, but I suppose it could be split into pieces.

PS: Should we start a separate thread for collecting proposals? It took me an hour of reading before I found the mention of the GSoC 2014 wiki page.

--
With best wishes,
Pavel
mailto:pavel@gf.microolap.com
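To make the pain point concrete, here is a small sketch of the chunked access people fall back on today in the absence of a streaming interface for bytea. The images table, its payload column, and id 42 are made up for illustration; substr() on bytea is real, but each slice still costs a separate query, and there is no equivalent workaround for writing, since the whole value has to appear in a single INSERT or UPDATE.

```sql
-- Hypothetical table, purely for illustration.
CREATE TABLE images (id integer PRIMARY KEY, payload bytea);

-- Fetching the whole value materialises it in one result set (and, in text
-- format over PQexec, inflates it further on the wire). The usual workaround
-- is to read it slice by slice in a client-side loop:
SELECT substr(payload, 1,           1048576) FROM images WHERE id = 42;  -- bytes 1 .. 1 MB
SELECT substr(payload, 1 + 1048576, 1048576) FROM images WHERE id = 42;  -- next 1 MB
-- ... one query per chunk, until substr() returns an empty slice.

-- Writing has no such workaround: the complete value must be supplied in a
-- single statement, which is exactly the query-size / RAM problem above.
INSERT INTO images (id, payload) VALUES (42, '\xdeadbeef');  -- imagine many MB of hex here
```

The large object facility already allows chunked reads and writes (lo_open, loread, lowrite), which is why it keeps coming up as the point of comparison, but it carries the portability and "one space for all" drawbacks Pavel describes.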