It really depends on how much of those 64 vCPUs and 250 GB RAM your SQL Server actually uses today, and whether you're running on a physical box or a virtual machine.
PostgreSQL doesn't have a 1:1 sizing formula relative to SQL Server. I'd first look at real CPU/memory usage, the workload pattern (OLTP vs. reporting), and how connections and queries behave. I'd also factor in how cleanly the data types and queries map across, because good type choices and query rewrites can significantly reduce resource usage.
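As a rough sketch of what I mean by type mapping (the table here is hypothetical, and the right choices depend on your schema), the common SQL Server types have straightforward PostgreSQL equivalents:

```sql
-- Hypothetical table illustrating common MSSQL -> PostgreSQL type mappings:
--   NVARCHAR(n)/NVARCHAR(MAX) -> text or varchar(n)  (PostgreSQL text is already Unicode)
--   DATETIME/DATETIME2        -> timestamp or timestamptz
--   BIT                       -> boolean
--   UNIQUEIDENTIFIER          -> uuid
CREATE TABLE orders (
    id        uuid PRIMARY KEY,           -- was UNIQUEIDENTIFIER
    customer  text NOT NULL,              -- was NVARCHAR(200)
    created   timestamptz DEFAULT now(),  -- was DATETIME2
    shipped   boolean DEFAULT false       -- was BIT
);
```

Getting these right up front tends to shrink both storage and per-query memory compared with a literal 1:1 port.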
If it’s a physical server, I’d start with similar hardware for PostgreSQL and then tune Postgres parameters (shared_buffers, work_mem, etc.) based on monitoring.
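For a machine of that size, a common starting point (these are rules of thumb to tune from, not PostgreSQL recommendations, and the exact numbers here are assumptions) would look something like:

```
# postgresql.conf -- starting values for ~250 GB RAM, to be adjusted from monitoring
shared_buffers = 64GB           # often ~25% of RAM as a first guess
effective_cache_size = 180GB    # rough estimate of RAM available as OS/Postgres cache
work_mem = 64MB                 # per sort/hash node per query; raise carefully if
                                # you have many concurrent connections
maintenance_work_mem = 2GB      # used by VACUUM, CREATE INDEX, etc.
max_connections = 300           # prefer a connection pooler (e.g. pgbouncer)
                                # over thousands of backends
```

The point is to start somewhere sane and let real monitoring drive the adjustments, not to copy these values blindly.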
If it’s a VM, I’d provision a bit more capacity than the current SQL Server allocation to give some headroom for tuning and unexpected overhead, and then right-size after observing the real load in PostgreSQL.
Once the migration is done and in steady use, we can monitor CPU, memory, and I/O in PostgreSQL and then optimise or scale down/up based on real metrics instead of guessing up front.
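For that monitoring step, the built-in statistics views usually cover the basics; for example, a quick buffer-cache hit ratio per database (sustained low values after warm-up suggest memory pressure):

```sql
-- Cache hit ratio from pg_stat_database; persistently low percentages
-- after warm-up may indicate shared_buffers/RAM pressure.
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
           AS cache_hit_pct
FROM pg_stat_database
WHERE datname NOT LIKE 'template%';
```

For per-query CPU and I/O, the pg_stat_statements extension is worth enabling from day one.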
On Mon, 2025-12-01 at 08:46 +0530, Raj wrote:
> I am migrating from MSSQL to POSTGRESQL. In MSSQL, I am using 64 vCPU and 250GB RAM.
> Now how much we can give in postgres?
If these specifications worked for you with Microsoft SQL Server, use the same with PostgreSQL. If you can, don't use Windows.