Thank you: an anti-question (or a Pg love letter)

From: Steve Midgley <science@misuse.org>
Date: 4 April 2015, 12:01
Hi,

I used to participate on this list a while back (2008?). I migrated off to other stuff, but I'm back doing some Pg work recently. I don't have a question, but I wanted to share my experience, since I think the core members of this community don't get thanked enough.

Today I had a burly (for me) problem: transforming some text into json fields for ~1M rows. With middleware doing the schema transforms, I was getting maybe 100 rows per second. Ugh. I could go back to the original couchdb project, but double ugh. So I rewrote it in native Postgres SQL:
update raw_documents set
    identity       = json_extract_path(raw_data::json, 'identity'::text),
    keys           = json_extract_path(raw_data::json, 'keys'::text),
    payload_schema = json_extract_path(raw_data::json, 'payload_schema'::text);
I don't even know how many rows per second that was, because it came back in less than 5 minutes - maybe 3k rows per second?
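
For anyone who wants to try the same pattern, here is a minimal, self-contained sketch that runs in psql. The column types and sample row are assumptions for illustration (the real raw_documents schema isn't shown here); the point is that a single set-based UPDATE does all the extraction server-side instead of one middleware round trip per row:

-- Assumed schema: raw_data holds the original document text (valid JSON),
-- and the other columns receive the extracted top-level keys.
create table raw_documents (
    id             bigserial primary key,
    raw_data       text,
    identity       json,
    keys           json,
    payload_schema json
);

-- A sample row to run against.
insert into raw_documents (raw_data) values
  ('{"identity": {"id": 1}, "keys": ["a", "b"], "payload_schema": {"version": 2}}');

-- One set-based UPDATE extracts all three keys for every row in a single pass.
update raw_documents set
    identity       = json_extract_path(raw_data::json, 'identity'),
    keys           = json_extract_path(raw_data::json, 'keys'),
    payload_schema = json_extract_path(raw_data::json, 'payload_schema');

Turning on \timing in psql before the UPDATE reports the elapsed time, which is enough to turn ~1M rows into an actual rows-per-second figure.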

I just wanted to write to say thank you to everyone who builds, supports, and participates in the Postgres community. I'm done dinking around with different tool chains. If I have to persist something again, I'm doing it with Postgres; I don't care if it's json, xml, cols/rows or blobs. If I can't save it to the filesystem, it's going in Pg. :)

Per the subject, this is an "anti-question" -- just sharing on Friday afternoon that Postgres is working perfectly - just as it should: fast, reliable, easy, conformant. Have a nice weekend and thank you!

Steve


Re: Thank you: an anti-question (or a Pg love letter)

From: Andrej
Date: 6 April 2015, 9:21 PM
Thanks for sharing :)

Pure awesomeness!

On 4 April 2015 at 12:01, Steve Midgley <science@misuse.org> wrote:
> Hi,
>
> I used to participate on this list awhile back (2008?). I migrated off to
> other stuff, but I'm back doing some Pg work recently. I don't have a
> question, but I wanted to share my experiences, since I think the core
> members of this community don't get thanked enough.
>
> Today I had a burly (for me) problem involving transforming some text into
> some json fields for ~1M rows. Running with middleware doing the schema
> transforms I was getting maybe 100 rows per second. Ugh. I could go back to
> the original couchdb project, but double ugh. So I re-wrote with native
> Postgres SQL:
>
> update raw_documents set
> identity = json_extract_path(raw_data::json, 'identity'::text),
> keys = json_extract_path(raw_data::json, 'keys'::text),
> payload_schema = json_extract_path(raw_data::json, 'payload_schema'::text)
>
> I don't even know how many rows per second b/c it came back in less than 5
> minutes - maybe 3k rows per second?
>
> I just wanted to write to say thank you to everyone who builds, supports and
> participates in the Postgres community. I'm done dinking around with
> different tool chains. If I have to persist something again, I'm doing it
> with Postgres, I don't care if it's json, xml, cols/rows or blobs. If I
> can't put save it to the filesystem, it's going in Pg. :)
>
> Per the subject, this is an "anti-question" -- just sharing on Friday
> afternoon that Postgres is working perfectly - just as it should: fast,
> reliable, easy, conformant. Have a nice weekend and thank you!
>
> Steve
>
>



-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.georgedillon.com/web/html_email_is_evil.shtml



Re: Thank you: an anti-question (or a Pg love letter)

From: john
Date:
+1
Johnf

On 04/06/2015 09:21 PM, Andrej wrote:
> Thanks for sharing :)
>
> Pure awesomeness!
>
> On 4 April 2015 at 12:01, Steve Midgley <science@misuse.org> wrote:
>> Hi,
>>
>> I used to participate on this list awhile back (2008?). I migrated off to
>> other stuff, but I'm back doing some Pg work recently. I don't have a
>> question, but I wanted to share my experiences, since I think the core
>> members of this community don't get thanked enough.
>>
>> Today I had a burly (for me) problem involving transforming some text into
>> some json fields for ~1M rows. Running with middleware doing the schema
>> transforms I was getting maybe 100 rows per second. Ugh. I could go back to
>> the original couchdb project, but double ugh. So I re-wrote with native
>> Postgres SQL:
>>
>> update raw_documents set
>> identity = json_extract_path(raw_data::json, 'identity'::text),
>> keys = json_extract_path(raw_data::json, 'keys'::text),
>> payload_schema = json_extract_path(raw_data::json, 'payload_schema'::text)
>>
>> I don't even know how many rows per second b/c it came back in less than 5
>> minutes - maybe 3k rows per second?
>>
>> I just wanted to write to say thank you to everyone who builds, supports and
>> participates in the Postgres community. I'm done dinking around with
>> different tool chains. If I have to persist something again, I'm doing it
>> with Postgres, I don't care if it's json, xml, cols/rows or blobs. If I
>> can't put save it to the filesystem, it's going in Pg. :)
>>
>> Per the subject, this is an "anti-question" -- just sharing on Friday
>> afternoon that Postgres is working perfectly - just as it should: fast,
>> reliable, easy, conformant. Have a nice weekend and thank you!
>>
>> Steve
>>
>>
>
>