The best advice I can give is to go back to basics: tools like sar and top, and the logs. Changing random settings on both the client and the server is just guessing. I find it unlikely that the changes you made (the JDBC fetch size and shared_buffers) actually had the effects you observed. First determine whether the bottleneck is I/O, CPU, or network. Put all your settings back the way they were; if the DB did not change, then look at the OS and the network.
On machine 1 - a table that contains between 12 and 18 million rows.
On machine 2 - a Java app that runs SELECT * on the table and writes the rows into a Lucene index.
We originally used a fetchSize of 10,000, and reading everything and writing it all back out as the Lucene index took around 38 minutes for 12 million rows, about 50 minutes for ~16 million.
One day it started taking 4 hours. If something changed, we don't know what it was.
We tracked it down to this: after 10 million or so rows, the fetch of the next 10,000 rows from the DB goes from about 1 second to 30 seconds, and stays there.
After two devs and a DBA spent a week trying to solve this, we eventually "solved" it by upping the fetch size in the JDBC call to 50,000.
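For context, the fetch-size change amounts to the following sketch (table and class names here are made up, not our real code). One PostgreSQL-specific detail worth knowing: with the PostgreSQL JDBC driver, a nonzero fetch size only takes effect when autocommit is off; otherwise the driver pulls the entire result set into memory regardless of the value.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class Reindexer {
    // The fetch size we experimented with: 10_000 originally, later 50_000.
    static final int FETCH_SIZE = 50_000;

    // Streams the table in FETCH_SIZE chunks. Autocommit must be off for
    // the PostgreSQL driver to honor setFetchSize and use a cursor.
    static void reindex(Connection conn) throws Exception {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            st.setFetchSize(FETCH_SIZE);
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // write the row into the Lucene index here
                }
            }
        }
        conn.commit();
    }
}
```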
It performed well enough again for a few weeks,
then... one day... it started taking 4 hours again.
We tried upping shared_buffers from 16GB to 20GB.
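Concretely, that was a one-line edit in postgresql.conf (the new value only takes effect after a server restart):

```
# postgresql.conf
shared_buffers = 20GB    # was 16GB; requires a server restart to take effect
```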