Hi Josh,
We've got installations with hundreds of concurrent heads-down users
doing very transaction-intensive things, with very high volumes of
human-entered Sales Orders (thousands a day), machine-managed EDI
(thousands of invoices, shipping notices, purchase orders, etc.), and
tens of thousands of complex general ledger transactions a day,
including serial or lot-controlled inventory movement across multiple
locations. Some users have millions of SKUs, plus hundreds of thousands of
contacts and accounts (customers, vendors, etc.) in the CRM subsystem.
To the best of my knowledge, we have never had performance problems
caused by PostgreSQL or our use of stored procedures to handle the
business logic. When we do have issues, they are easily addressed by
adding indexes or fixing inefficiencies in what client-side code we do
still have. Happily, living in the open source world, those issues are
typically brought to our attention (and solved) rather quickly :)
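For the curious, the kind of fix I mean is usually a one-liner. A quick hypothetical sketch (the table and column names here are made up for illustration, not actual xTuple schema):

```sql
-- Suppose a sales order lookup is slow; EXPLAIN ANALYZE shows a
-- sequential scan over a large table:
EXPLAIN ANALYZE SELECT * FROM salesorder WHERE order_number = 'SO-12345';

-- Adding an index on the filtered column typically turns that
-- into a fast index scan:
CREATE INDEX salesorder_order_number_idx ON salesorder (order_number);
```

Nothing exotic, which is kind of the point.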
Cheers,
Ned
On 10/4/2011 12:42 PM, Joshua Kramer wrote:
>
>> I see this as a wake up call that our advocacy needs to focus on the
>> case studies, like that of Urban Airship, to demonstrate how to scale
>> infrastructure with Postgres. Keeping this information either secret
>> or difficult to find results in throwing out or scaling back use of
>> Postgres.
>
> Hey, Ned Lilly - are you on this list? Do you have any examples of
> highly scaled xTuple installations? (For those who are unaware, xTuple
> is an open source ERP solution based on a Qt frontend. All of the
> business logic resides in Postgres stored procedures.)
>
> http://www.xtuple.org
>
> Cheers,
> -JK
>
--
Ned Lilly
President and CEO
xTuple
119 West York Street
Norfolk, VA 23510
tel. 757.461.3022 x101
email: ned@xtuple.com
www.xtuple.com