Does the native driver do a round trip for each record fetched, or can it fetch records in batches?
For example, in the Oracle native driver (for Python, in my case), setting the cursor arraysize makes a huge performance difference when pulling back large datasets.
Pulling back 800k+ records through a cursor on a remote machine with the default arraysize took far too long (3 hours before I canceled it).
Upping the arraysize to 800 dropped that to around 40 minutes, including loading each record into a local Postgres via a function call (the target has a more complex database structure to handle).
This is on low-end test hardware.
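For reference, the tuning in question is just an attribute on the cursor. A minimal sketch with cx_Oracle (the connection details and table name are placeholders, not our real setup):

```python
import cx_Oracle

# Placeholder connection string
conn = cx_Oracle.connect("user/password@remote-host/service_name")
cur = conn.cursor()

# arraysize controls how many rows the driver pulls per network round trip;
# raising it cuts the number of round trips on a large result set.
cur.arraysize = 800

cur.execute("SELECT * FROM tracking_data")  # hypothetical source table

rows_fetched = 0
for row in cur:           # iteration still yields one row at a time,
    rows_fetched += 1     # but the driver fetches in batches of arraysize
print(rows_fetched)
```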
This is a relevant issue for us, as we will be developing a new front end for our application, and we still haven't finalized the architecture.
The backend built to date uses Python/Postgres. Python/Flask is one option, possibly serving the data to Android/web via JSON/REST.
Another option is to query directly from node.js and get JSON or native query results from the database (we make extensive use of functions/stored procedures).
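To make the Flask option concrete, here is a rough sketch of the kind of endpoint we have in mind, assuming psycopg2 and a hypothetical get_device_tracks() function in Postgres (all names are placeholders, not our actual schema):

```python
from flask import Flask, jsonify
import psycopg2
import psycopg2.extras

app = Flask(__name__)

def get_conn():
    # Placeholder connection string
    return psycopg2.connect("dbname=tracking user=app host=localhost")

@app.route("/devices/<int:device_id>/tracks")
def device_tracks(device_id):
    conn = get_conn()
    try:
        # RealDictCursor returns rows as dicts, which jsonify can serialize
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            # Hypothetical stored function wrapping the more complex schema
            cur.execute("SELECT * FROM get_device_tracks(%s)", (device_id,))
            return jsonify(cur.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    app.run()
```

The node.js option would be roughly the same shape, just querying the stored functions directly from the web tier instead of going through a Python service.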
Our application is data-intensive, involving a lot of geotracking data across hundreds of devices at its core, and then quite a bit of geo/mapping/analytics around that.