Re: [GENERAL] Are we losing momentum? Answer: Heck No! - Mailing list pgsql-advocacy

From Corey W. Gibbs
Subject Re: [GENERAL] Are we losing momentum? Answer: Heck No!
Msg-id 01C30323.FF8C9D90.cgibbs@westmarkproducts.com
List pgsql-advocacy
I asked this question last year and here are the responses I received.

Good Morning Everyone,
I have a general question about who is using Postgresql.  This is not a
marketing survey and any information I collect will only be used by me.

<long drawn-out story snipped, which, by the way, ended with the user switching to PG; gee, the app runs much faster now>

Brian Heaton wrote:
Corey,

    My firm is currently using Postgres as the back-end of a military
network monitoring app.  This will end up being deployed in tactical
vehicles.  Our databases tend to have 1 huge table (5-10M rows), 2-3
medium tables (50-100K rows), and 2 smaller tables (5-10K rows).  Our UI
is currently in Java using JDBC (of course).  We also interface directly
in C from a couple of utility and reporting apps.

            THX/BDH
Brian Hirt wrote:

For what it's worth:

Our company runs MobyGames (http://www.mobygames.com) a project similar
to IMDB, but for video and computer games.  We exclusively use
postgres.  We've been using it since December of 1998 (pg 6.5.3) and have
been very happy with it.  The database is relatively small, around 1.5GB
in about 200 tables.  All of our pages are dynamically created, and we
serve up about 1,000,000 pages a day (each page usually causes at least
20-30 queries against the database).  Most of the database activity is
select queries; only about 0.5MB-1.0MB of additional content is
added a day. The database runs on a single box and has performed well.
When there have been problems with postgres, the developers have been
very proactive about finding a solution, and the problems have always
been resolved within a day or two.  From extensive past experience with
both Oracle and Sybase, I can say that's great.

--brian hirt



Kym Farnik wrote:
Hi - We use various SQL DBMSs including Postgres.
The choice of DBMS depends on customer needs.  RD
are an Online Application Development company.

We have positioned Postgres for the 'entry level'
customer.   This is a little misleading as some
of those customers have quite large databases.

By comparison, our Govt accounts use Oracle (it's the
DBMS of choice for the South Australian Govt).
Some of our larger customers also use Oracle.
One customer in the advertising/image processing
industry has a projected storage requirement of
6 petabytes.  They are using Oracle on Solaris. :-)

On the other hand, our CMS product uses Postgres, as
do companies like Balloon Aloft (www.balloonaloft.com.au).

To quote our marketing stuff...

Introduction
------------
Recall Design use PostgreSQL (http://www.postgresql.org) as a Database
Management System (DBMS) for web application projects.   PostgreSQL is a
free, open source DBMS product.   This article discusses the advantages of
using PostgreSQL over commercial databases such as Oracle and Microsoft SQL
Server.

Recall Design use Oracle for large and/or multi-server applications.
Commercial DBMSs, such as Oracle, are used where specific features, such as
Spatial, are required.   We design our systems so that customers have the
option of migrating their DBMS from PostgreSQL to a commercial DBMS such as
Oracle.   This allows customers to start low-cost with the option to expand
as required.

...  More stuff from the Postgres site follows (with GNU legals)


Jeff Fitzmyers wrote:
One thing that has not been mentioned is the ability to start companies
with a very small budget.

I am developing the webified office backend on an oldish Mac OS X laptop
with postgres / php / apache. I am Mr Mom, and the laptop allows me to
work instantly with partners, clients and the main website. The
flexibility is fantastic.

Ever tried to put Oracle on a laptop?? A coworker has, and for some
reason 5 high-end laptops kept him busy for a few days with
Oracle, Java, configuration, etc. I think it took much longer than an
hour just to load Oracle. The first time I set up the Mac it took 30
minutes to get everything going with no problems.

I met a few of the developers at a past Linux expo. They seemed very
nice and very capable. I am very pleased with the development pace and
focus of postgres. Each new release is like Christmas :-) Plus the
postgres lists are great sources of education!

Thanks, Jeff Fitzmyers


Jeff Self wrote:
I understand where you are coming from. I worked for a city government
up until a year ago. I built our intranet using Linux on a discarded
server with apache and postgreSQL. But they didn't care about the fact
that it was free. They wanted all data to be stored on the mainframe. I
got tired of the scene and I left to join Great Bridge. We know the rest
of this story.

I'm now back in city government, although with a different city. They
are much more open to creativity here and are allowing me to develop on
Linux running postgreSQL. I'm in the process of developing a Job
Information System for our Personnel department, whom I work directly
for, that will use Apache, PostgreSQL, JSP's, and some Perl. So I'm a
happy camper now.

Put together a proposal for them. In one column, list the costs for
installing PostgreSQL on your existing Linux servers. In the other
column, list the cost of a server running Windows XP/2000 with MS SQL
Server. Don't forget to include the cost of licenses for all 15 users.
Also throw in Visual Studio .NET, which was just announced the other
day; I believe it's around $1000 per user. Let them decide.

Steve Wolfe wrote:
Since I've posted a number of times to this list, it's no big secret
that www.iboats.com is powered by Postgres.  It's been rock-solid for us,
and served us very well.  Our data directory is about 1.5 gigs in size,
spread out over a few hundred tables, some very small, some very large.
We do all of our programming in Perl.  Investors have never heard of
Postgres, and sometimes mention getting Oracle, so we tell them "Terrific,
if you want us to get Oracle, we can do that.  We'll just need an extra
half-million dollars to do it with."   Reality then slaps them in the
face.....

Tony wrote:
The regional foundation for contemporary art in Pays de la Loire, France

Contact database of about 30K people - mailing
Works database with about 700 works of Art - conservation, expo
planning...
Library database with about 6000 books

Clients are all Macs. The reason for leaving the world of closed source
was the cost per seat for client licences. There are 10 people using the
database. Interface is www, jdbc, jsp

The public bit of the works database will be linked to the web site, as
will all of the library database.


Leif Jensen wrote:
Hi,
   I think we have a technically interesting product:
 The application:
    Logging time & attendance for employees, production time (incl.
machinery) for invoicing customers and efficiency reports, project times
also for customer invoicing, salary calculations including all kinds of
weird employee-contract specifics, and of course a lot of reports.
  The system:
    A little over 80 tables with an awful lot of foreign keys, originally
with referential integrity. Time-stamp input (logging events) ranges from
a few hundred a day to several thousand a day (not that much ;-). Rather
heavy access in generating reports, though, since there is a lot of
cross-referencing of tables. In-house this is running on PostgreSQL
7.1.2/3 on Linux (Slackware 8.0), AMD K7 500MHz, 512MB RAM. The database
is only around 50MB, with one table having ~20MB. The data collection
(time events like job start, job stop, break start, break stop) is done
on a small 'terminal' specially designed for the purpose. These terminals
are connected on a two-wire network to a special controller,
communicating with a computer using RS232. The interface program (called
the OnLine program) is programmed in C++ and can run on both Windows and
Linux. In the in-house system the OnLine program runs directly on the
database server, and it connects to the database using ODBC, even on
Linux.

  A little history:
    Our project started in the early days of M$ Access (Access 2.0), when
everyone thought this was the way to go :-(, at least in my surroundings,
my company and our customers. The first project didn't go too well; the
system was certainly too complex for Access 2.0 and Windows 3.11. Only
with the transition to Access 97 did the system start to be usable.
However, it was still not performing very well and could only be used by
small companies. At this time we started using Informix as the backend,
running on Linux. These were certainly early days for Informix on Linux.
It worked, but was difficult to administer and hard for 'novices' like us
to get working well. The main problem was the ODBC driver on Windows, and
we tried 3 different brands (including Informix's), in several different
versions. All of them needed a lot of modification in the Access
frontend. Access is certainly not SQL 'clean' and it is very hard to
figure out what the JetEngine is doing. However, we got it working, but
performance was poor; some reports could take a couple of days (yes, more
than 24 hours!!!), and when does a Windows machine run for that long? ;-)

   I had been playing with PostgreSQL on my own for some years, and
finally last spring we decided to make the move and transfer all data to
PostgreSQL 7.1.2. As you all know, installing and getting Postgres
running is VERY easy, and everything, including transferring the data (I
needed to write a few scripts to do it and do a lot of testing), took
only a few days. With everything in PG, the interesting part was to test
performance. First, of course, the postgres ODBC driver was easy to set
up and worked on the first shot. And now the performance: reports
formerly taking those days were now done in a few hours, and with a bit
of tweaking we got it down to about half an hour, and we really didn't
optimize it (no stored procedures or such). Some simpler reports (with
almost the same results as the heavy ones) I did for our intranet,
showing up in a split second. The system has now been running in-house
for almost a year: no break-downs, no down time on the database. No NT
restart every now and then. (We have another in-house application running
on WinNT/M$ SQL Server that needs to be restarted every 2 weeks, even
with 1.5GB RAM.)

    Additional:
      Have a look at OpenACS (http://www.openacs.org). This is the ACS
system moved to PostgreSQL!! A very interesting project. There are also
references to sites/people using PG.

    Greetings,

 Leif


Andrew Gould wrote:
My office performs financial and clinical data analysis to find
opportunities to improve operations and the quality of patient care.  We
used PostgreSQL 7.1.3 on FreeBSD to create a relational data model
version of most of our Decision Support System and integrated data from
additional data sources.  We also have data for all inpatients discharged
from nonrural hospitals in Texas during 1999 and 2000.  We use the state
data to derive benchmarks and apply the benchmarks to internal data.  The
database for internal data is currently 3GB.  The database for the state
data is 14GB.

I am currently preparing to move the data from several MS Access database
applications to PostgreSQL databases.  The users will never know anything
changed.

Since the hospital is mostly a Windows shop, we use MS Access 97 and 2000
as front-ends via ODBC drivers.

I have set up phpPgAdmin (Apache web server with PHP4) so that I can answer
simple questions from any executive's office in the system.

I have a Python script that obtains a current list of PostgreSQL
databases. It renames existing .gz dump files to .gz.old. It then vacuums
all databases and uses pg_dump and gzip to back them up into individual
.gz files. The script is run by cron to ensure that even new databases
are backed up automatically on a weekly basis.

Andrew Gould

Nick Frankhauser wrote:
We're not in production yet, but our application needs to scale up by about
70MB per year for each customer we add. All of our customers have about 10
years' worth of history to start with, so I figured them to be roughly 1GB
each initially. Since our "short list" is for about 35 customers, I mocked
up a test database with 35GB of test data & had some family members pound
the web site with queries for a few hours. The response time was very
reasonable. Our demo site has a much smaller database behind it, but the
data was generated by the same random routines that created the large
database, so it shows roughly what sort of application we're running
(http://www.doxpop.com). Performance seems to be about on par with
SQL-server & Oracle, and I've never crashed the database unless I'm abusing
root privilege while stupid.

Performance and reliability are just not a problem, and you can find
them in many products. I think the more important issue is support, and
that's where the open-source community leaves the commercial sector in
the dust.

Here is my support experience:

When I used MS SQL-server and Oracle in my last job, if I logged a support
call, I'd be lucky to get a response within a day. Of course there is no
support outside of normal office hours unless you pony up big money. If I
had an interesting problem, it could take days to get escalated up to the
people who understood & enjoyed challenging problems. And of course even
"standard" support was pretty pricey.

When I was just starting out with PostgreSQL, I *really* screwed up my
database with some dumb last-minute changes at 11:30 PM the night before a
sales demo, I compounded the problem by moving my WAL files & generally
doing many of the things you shouldn't do. I posted frantic requests for
help, and received the help I needed at about 2AM. By 3AM, I had received
clarification after a second round of questions and by 5AM I was ready for
the demo. Around 6AM, two of the developer/guru people had lent their
expertise as well. Not only did I get good support in the middle of the
night, I also got the personal attention of two developers during the time
that most support folks are still stumbling around in search of caffeine. I
don't think you can buy that kind of support anywhere.

PostgreSQL is a part of our competitive advantage. Of course we try to give
back to the community by spending a little time each day being a part of
that unusual 365 X 24 support staff on lists like this, but the time spent
is minor compared to the savings, and our participation makes us better
administrators.



Holger Marzen wrote:
On Fri, 15 Feb 2002, Corey W. Gibbs wrote:

> any other server." "Opensource software isn't going any where." "Can we
> depend on it?" are common questions and statements I have heard.

Can we depend on it? That is the silliest question ever, but hardly
anyone seems to know why.

The important thing about software "in production" is not the price.
There is nothing wrong with paying good money for good software. But
software that comes without source code is not good software. Why?
Because the manufacturer drops support for every version within a few
years. And then you have software running that no one can support.

You could say: "OK, so we spend a lot of money every year again and
upgrade to the latest version. We even accept the downtime." Yes, if you
are lucky. But the manufacturer will eventually merge with a competitor
or simply vanish. Bang!

> I am not trying to start a ruckus or a flamewar, but I would like to know
> who's using Postgres out there.  What's the application?  How big are your
> databases?  Are you using Visual Basic or C to connect to it through ODBC
> or are you using a Web interface?

We use PostgreSQL as a database for web servers: raw data to generate
network statistics from (about 160,000 rows, growing) and user databases
for access privileges. I am very happy that I found mod_auth_pgsql, so
PostgreSQL tables can be used with .htaccess. Great!
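For anyone curious what that looks like, a minimal mod_auth_pgsql setup in a .htaccess file is roughly the following. The directive names are from memory of the module and vary between its versions, and the database, table, and column names here are invented, so treat this as a sketch and check the module's own documentation:

```apache
# Protect a directory with users/passwords held in PostgreSQL
# (database, table, and field names below are illustrative).
AuthName "Network statistics"
AuthType Basic
Auth_PG_host localhost
Auth_PG_database webauth
Auth_PG_pwd_table users        # table holding login/password pairs
Auth_PG_uid_field login
Auth_PG_pwd_field passwd
Auth_PG_encrypted on           # passwords stored crypt()ed
require valid-user
```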

Many people use MySQL for these purposes (and it's OK for simple
applications). But why use a lightweight database if I can enjoy
transactions, triggers and so on with the full-function PostgreSQL?

--

Neal Lindsay wrote:
[snip]
> I am not trying to start a ruckus or a flamewar, but I would like to
> know  who's using Postgres out there.
[snap]

<ruckus>

We use it at the small consulting company I work for to track time
billed to jobs.  The current front end is in Access 97 with the backend
in PG 7.1.3 (7 tables).  I developed it partway in 100% Access and
transferred my tables to a PG backend before I deployed it.  Tastes
great, less filling.  Never had a stability problem.  I am currently
working on a more feature-full version with PG 7.2 on the back and PHP
web forms on the front (25+ tables).  Access (+ VBA) is like a lot of
Microsoft products: they make easy things easy and slightly hard things
darn near impossible.  I like a lot of abstraction on top of my DB, so
Access wasn't cutting it.  If the way you store it is very similar to the
way you see it, though (and you don't mind the licensing), Access is
pretty nice.  Not for the backend, though.  You (and probably everybody
else here) already know, but it bears repeating: Access is not a good
multi-user database backend.

</ruckus>

Neal Lindsay


Raymond O'Donnell wrote:

Ireland, for whom I've developed a number of web applications of varying
scale and complexity. The web server is a windows machine (we've just
upgraded from NT4 to 2000) from which COM objects and ASP script talk via
ODBC to a Linux machine running PostgreSQL.

I'm also currently developing an application for a language school; this
is written in Delphi and runs on Windows client machines from which,
again, it talks via ODBC to a Linux server running PostgreSQL.

Andrew Sullivan wrote:
We're running the first gTLD since .com, .org, and .net, and we're doing
it with Postgres.  Not Oracle.  Not DB2.  Not Sybase.  And not MS SQL
Server.

And you know what?  The Oracle developers can't believe how fast it is.
Plus we're saving thousands in license fees.  It does everything we want
and more, and it does it fast.  It's stable, and a breeze to administer.

> Are you using Visual Basic or C to connect to it through ODBC
> or are you using a Web interface?

We're using JDBC.

Andy Samuel wrote:
I use PostgreSQL with Kylix + ZeosDBO for a Point of Sale application
for my client.  It has been great!  But the size of the database is not
big. I'm currently developing a Hotel Information System with Delphi +
ZeosDBO + PostgreSQL (on Linux). If you search the email archive, you'll
find some people use it with HUGE amounts of data.

Shane Dawalt wrote:
I'm a network engineer at Wright State University (free software is good
:-).  I have used PostgreSQL for production work since version 6.2.3
(1996/7).  I use it for two primary operations. I use Perl most of the
time with the DBI and DBD::Pg modules.  They work well.  I have also used
PHP within my Apache web server to access the database, which also works
very well.

1) We have a large modem bank of around 253 modems. All modems log RADIUS
authentication messages as well as activity logs. Each night I process
the RADIUS logs from the modem bank servers for the previous day. A Perl
script summarizes the info and stuffs it into the database. Modem
sessions are stored for 1 year.  I currently have about 1.9 million
records in the database taking about 500 megabytes.  (I'm re-coding the
thing to reduce this space ... I was stupid when I first wrote it.)  I
have other Perl apps that do a second-by-second accounting of all modems.
They generate graphs or text output depending on which manager reads it.

2) We have a large number of network ports on our campus, with over 5,000
active network ports for faculty/staff alone. We needed a way to enforce
our network policies, which disallow users from setting up their own
repeater/switched/wireless devices (security issues). I have written a
database and several Perl apps that use SNMP to interrogate all of our
Cisco switching devices for Ethernet addresses, which are updated in a
large database. Queries are then run against the database to find people
who are potentially violating our policies, and reports are generated.
The software has the ability to shut down the associated network ports
automatically, though this feature has not been enabled just yet. (I'm
still in bug-squash mode.)

  These apps and the database are being hosted on a Digital 433 MHz
personal workstation with a single processor.  It works well, but would
work better if not for its measly 128 MBytes of RAM. They are rather
database-communication intensive. If I wrote some additional PL/pgSQL
functions within the database server itself, then a lot of the
communication would vanish, since most of the work would be performed on
the server side rather than at the client side.  This is for a rainy day,
though.
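The nightly summarisation step in point 1 is easy to sketch. Shane's real script is in Perl against actual RADIUS detail files; the toy log format here (one `user start_epoch stop_epoch` line per session) and the column layout are my inventions, just to show the shape of the job in Python:

```python
"""Sketch of a nightly RADIUS-log summarisation: collapse the previous
day's session records into per-user totals, ready to insert into a
PostgreSQL summary table.  Log format and columns are illustrative."""
from collections import defaultdict


def summarize(lines):
    """Each input line: 'user start_epoch stop_epoch'.  Returns a dict
    of user -> (session_count, total_seconds)."""
    totals = defaultdict(lambda: [0, 0])
    for line in lines:
        user, start, stop = line.split()
        totals[user][0] += 1                       # one more session
        totals[user][1] += int(stop) - int(start)  # connect time
    return {user: tuple(t) for user, t in totals.items()}


def insert_rows(summary, day):
    """Build parameterised rows for a DBI/psycopg-style executemany()
    into a (day, login, sessions, seconds) summary table."""
    return [(day, user, count, seconds)
            for user, (count, seconds) in sorted(summary.items())]
```

The per-user rows can then be bulk-inserted with a single parameterised statement rather than one round trip per log line.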


Bill Gribble wrote:
My company is using postgres in several related applications in retail
point of sale and inventory management.

Our point of sale system, OpenCheckout, uses postgres as its backend.
The size of the databases varies according to the retail install, but
for a recent trade show demo we loaded up a craft and hobby industry
database of UPC codes and item information that contained about 800,000
items.   With that size database, random lookups on an indexed field
(the UPC code) were reasonably quick.  We haven't extensively tested
with large numbers of users but our early results are positive.

We are also using postgres as a server for a fixed asset tracking system
we are working on.  Inventory management and computer service people
with wireless handhelds (compaq ipaqs running Linux) connect to a
postgres server to get network configuration, service history, and
hardware information from computers, switches, and even network jack
plates keyed on a barcoded property tag.  The user just scans the tag
with the integrated barcode scanner and can view or edit lots of
different kinds of information.

And we use the same handheld system to interface to our point of sale
inventory database, for receiving people in the warehouse to scan
incoming items into the database or for reordering people wandering the
aisles of the store.  Postgres lets us tie all this together pretty
easily.

Sad to say :) we use SQLite when we have to go off the network and
operate disconnected with the handheld units.  The ipaq just doesn't
have enough horsepower and storage space (32M of non-volatile storage,
64M RAM) to run postgres locally plus all our software.  We keep an
audit trail table and replay it when we can get wireless access to the
postgres server again.
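That audit-trail-and-replay pattern can be sketched in a few lines. The schema below and the `execute` callback standing in for a cursor on the central Postgres server are assumptions for illustration (the actual tools are in Scheme): while disconnected, each write is logged in a local SQLite table; when the link returns, the trail is replayed in order and cleared.

```python
"""Sketch of an offline audit trail: log writes locally in SQLite,
replay them against the central server when connectivity returns.
Schema and replay callback are illustrative assumptions."""
import json
import sqlite3


def open_audit(path=":memory:"):
    """Open (or create) the local audit-trail database."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS audit_log ("
               "id INTEGER PRIMARY KEY AUTOINCREMENT, "
               "stmt TEXT NOT NULL, params TEXT NOT NULL)")
    return db


def record(db, stmt, params=()):
    """Log one write while offline (parameters kept as JSON)."""
    db.execute("INSERT INTO audit_log (stmt, params) VALUES (?, ?)",
               (stmt, json.dumps(list(params))))
    db.commit()


def replay(db, execute):
    """Push the trail to the server in insertion order, then clear it.
    `execute(stmt, params)` stands in for a cursor on the central
    Postgres server.  Returns the number of statements replayed."""
    rows = db.execute(
        "SELECT id, stmt, params FROM audit_log ORDER BY id").fetchall()
    for _id, stmt, params in rows:
        execute(stmt, json.loads(params))
    db.execute("DELETE FROM audit_log")
    db.commit()
    return len(rows)
```

A real version would also need conflict handling if two handhelds can touch the same rows while offline; the sketch assumes non-overlapping writes.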

We access the database in a variety of ways.  Most of our tools are
written in Scheme and use a Scheme wrapper for the libpq libraries.  For
the accounting components we use a middleware layer based on the
'gnucash' accounting engine, which provides a uniform financial
transaction API.  The actual POS front end is written in Java (so it can
use the JavaPOS point of sale hardware driver standard) and gets many of
its configuration parameters from the same database using JDBC.


Fernando San Martin Woerner Wrote:
Corey

    I was in your shoes 3 years ago. Right now I'm using Postgres in
place of MS Access, from VB, with no problem; in fact there are a lot of
things that are better than using Access. I work in a medium-size
construction company in Chile; we have a business of US$20 million a
year. Our database is accessed over the Internet through 56k phone
connections, all client software is programmed in VB using ODBC drivers,
and everything is OK. If you need a GUI for Windows you have PgExplorer
or pgAdmin, and they are very good.

    So MS SQL Server is not a good option: first, the price is expensive;
second, you need some MS OS to run it, and there you lose your
reliability, performance and security. Postgres is easy to program and
there's a lot of documentation and information, plus you can get help
from the pgsql mailing lists, which is better than some technical
support.


Try it. I did, and it was a very good experience.

regards

