So how big is your data set? Is it much bigger, about the same, or much smaller than your shared_buffers?

The total DB size is 1.9 TB.

The problem with a giant shared_buffers is that PostgreSQL has a very large set of data to keep track of when it comes time to write it out, evict pages, and so on. Generally speaking, the kernel is better optimized for caching huge data sets. For instance, I can get very good performance on a machine with a multi-terabyte data set, running with 512GB of RAM and only 10GB of shared_buffers, letting the kernel use the rest as cache. Making shared_buffers bigger doesn't make things faster past a few gigabytes, even with 24 cores and 10 SSDs in RAID-5 (we could go faster with RAID-10, but we need the space). 51GB is a huge shared_buffers setting. I'd want to see benchmarks showing it's faster than 1 or 2GB; I'm betting it won't be.
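For illustration, a minimal postgresql.conf sketch along the lines of the example above (10GB of shared_buffers on a 512GB box). The effective_cache_size line is thrown in as the usual companion setting, a hint to the planner about how much the kernel page cache is likely holding; the exact values are illustrative, not a recommendation:

    shared_buffers = 10GB           # keep PostgreSQL's own buffer cache modest
    effective_cache_size = 400GB    # planner hint: the kernel page cache does the heavy lifting

To actually compare settings, something like pgbench -c 16 -j 4 -T 300 run against the same workload at 2GB and again at 51GB of shared_buffers would settle the question.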
Also, are you using a pooler? I'm guessing not. Note that your connections go from 30 or so to over 140 during a spike. A machine, based on the number of concurrent IOPS it can handle and so on, has a throughput curve that climbs until you reach a certain peak number of active connections. On a machine like yours I'd expect that peak to be somewhere between 4 and 20. Restricting active connections to a number in that range gives you the best throughput and keeps the machine from slowly tipping over as you push further and further past its peak.
pgbouncer is super easy to set up, and it can handle huge numbers of idle connections (10,000 or more) while keeping the database running at its fastest. My advice? Get a pooler in there.
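A minimal pgbouncer.ini sketch along those lines, assuming transaction pooling and a server-side pool in that 4-to-20 range; the host, port, database name, auth file path, and pool size here are placeholders, not values from this thread:

    [databases]
    ; placeholder entry; real host/port/dbname go here
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling: a server connection is held only for the duration of a transaction
    pool_mode = transaction
    ; actual connections opened to PostgreSQL, kept near the machine's sweet spot
    default_pool_size = 16
    ; client connections pgbouncer is willing to hold, most of them idle
    max_client_conn = 10000

Applications then connect to port 6432 instead of 5432; pgbouncer holds the thousands of idle client connections and only lets a small, fast pool through to the database.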
I'm not using a pooler, but I'll have a look at it.