From my simple, engineering-hat perspective, I feel this question is difficult to answer without hard numbers, including:
1) The amount of data stored in the DB, including indexes
2) Expected peak number and type of DB operations per second
3) Maximum latency the operations can tolerate
4) Number and type of CPUs on each workstation
5) Available RAM on each workstation
6) Data storage utilization (MB/s, IOPS)
7) Data storage capacity and type (SSD/spinning disk/network)
8) Type of network (number of networks and bandwidth)
9) Network bandwidth utilization
10) Swap space utilization during operation
From these numbers, the approach and architecture can be determined.
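To make that concrete, here is a back-of-envelope sketch in Python of the kind of check I mean. All the numbers are hypothetical placeholders; substitute your own measurements from the list above:

```python
# Back-of-envelope sizing check. Every value below is a made-up example;
# replace with measured numbers from your own system.

data_gb = 40              # data + indexes on disk
peak_ops_per_sec = 2_000  # expected peak mixed read/write ops
working_set_frac = 0.8    # fraction of data touched regularly (assumption)
ram_gb = 64               # RAM available on the workstation
ram_reserved_gb = 8       # headroom for OS, application, connections, etc.
disk_iops = 5_000         # measured or rated IOPS of the storage

cache_gb = ram_gb - ram_reserved_gb
working_set_gb = data_gb * working_set_frac

if working_set_gb <= cache_gb:
    print(f"Working set ({working_set_gb:.0f} GB) fits in cache ({cache_gb} GB): "
          "tune caches to keep the data in RAM.")
else:
    print(f"Working set ({working_set_gb:.0f} GB) exceeds cache ({cache_gb} GB): "
          "consider more RAM or a dedicated SSD.")

# Rule-of-10 headroom: plan for an order of magnitude above today's peak.
if peak_ops_per_sec * 10 > disk_iops:
    print("Storage IOPS leave little headroom at 10x peak load.")
```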
For example, if the available RAM has sufficient capacity, the file system cache and the database cache (e.g., Postgres's shared_buffers) can be tuned to keep all the data in RAM, which will likely improve write and query performance significantly. Another option is a dedicated SSD for data and/or swap space.
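As a rough illustration, assuming a Postgres instance, a sketch like this could derive starting values from the RAM figure; the 25% and 75% ratios are common rules of thumb, not gospel, and the reserved-RAM figure is an assumption:

```python
def suggest_pg_cache_settings(ram_gb: int, reserved_gb: int = 4) -> dict:
    """Rough starting values for Postgres cache tuning (rules of thumb only)."""
    usable = ram_gb - reserved_gb
    return {
        # ~25% of RAM for Postgres's own buffer cache is a common starting point
        "shared_buffers": f"{max(1, ram_gb // 4)}GB",
        # effective_cache_size is a planner hint: roughly the RAM available as
        # OS file system cache plus shared_buffers
        "effective_cache_size": f"{max(1, int(usable * 0.75))}GB",
    }

print(suggest_pg_cache_settings(ram_gb=64))
# {'shared_buffers': '16GB', 'effective_cache_size': '45GB'}
```

The real values should be validated against the measured workload; these are only a starting point for experiments.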
I prefer analytical "rules of 10" like these as a starting point to guide the choice between alternatives.