New server tuning

There are a few ways to combine all of this information into a process for tuning a
new server. Which is best depends on what else you expect the server to be
doing, and on how much you want to measure and adjust yourself versus relying on
rule-of-thumb estimates.

Dedicated server guidelines

Initial server tuning can be turned into a fairly mechanical process:

  1. Adjust the logging defaults in postgresql.conf to be more verbose.
  2. Determine how large to set shared_buffers. Start at 25 percent of system
    memory. Consider adjusting upward if you're on a recent PostgreSQL
    version with spread checkpoints and know your workload benefits from
    giving memory directly to the buffer cache. If you're on a platform where this
    parameter is not so useful, limit its value or adjust downward accordingly.
  3. Estimate your maximum connections generously, as this is a hard limit;
    clients will be refused connection once it's reached.
  4. Start the server with these initial parameters. Note how much memory is still
    available for the OS filesystem cache.
  5. Adjust effective_cache_size based on shared_buffers plus the OS cache.
  6. Divide the OS cache size by max_connections, then by two. This gives you
    an idea of a maximum reasonable setting for work_mem. If your application is
    not dependent on sort performance, a much lower value than that would be
    more appropriate.
  7. Set maintenance_work_mem to around 50 MB per GB of RAM.
  8. Increase checkpoint_segments to at least 10. If you have server-class
    hardware with a battery-backed write cache, a setting of 32 would be a
    better default.
  9. If you're using a platform where the default wal_sync_method is unsafe,
    change it to one that is.
  10. Increase wal_buffers to 16 MB.
  11. For PostgreSQL versions before 8.4, consider increases to both
    default_statistics_target (to 100, the modern default) and
    max_fsm_pages based on what you know about the database workload.

Once you've set up some number of servers running your type of applications, you
should have a better idea what kind of starting values make sense to begin with. The
values for checkpoint_segments and work_mem in particular can end up being very
different from what's suggested here.
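The memory arithmetic in the steps above can be sketched as a quick calculator. This is only a rough aid, not part of any PostgreSQL tool: the function name and example inputs are hypothetical, and the OS cache (and therefore effective_cache_size) is approximated from total RAM rather than measured on a running server as step 4 suggests:

```python
GB = 1024  # work in MB throughout


def initial_settings(total_ram_mb, max_connections):
    """Rough starting values following the dedicated-server steps."""
    shared_buffers = total_ram_mb // 4            # step 2: 25% of RAM
    # Crude stand-in for step 4: assume everything not given to
    # shared_buffers ends up as OS filesystem cache.
    os_cache = total_ram_mb - shared_buffers
    effective_cache_size = shared_buffers + os_cache  # step 5
    work_mem = os_cache // max_connections // 2   # step 6: an upper bound
    maintenance_work_mem = 50 * (total_ram_mb // GB)  # step 7: 50 MB per GB
    return {
        "shared_buffers": f"{shared_buffers}MB",
        "effective_cache_size": f"{effective_cache_size}MB",
        "work_mem": f"{work_mem}MB",
        "maintenance_work_mem": f"{maintenance_work_mem}MB",
        "wal_buffers": "16MB",                    # step 10
        "checkpoint_segments": 32,                # step 8, battery-backed cache
    }


# Example: an 8 GB server allowing 100 connections
for name, value in initial_settings(8 * GB, 100).items():
    print(f"{name} = {value}")
```

Remember that the work_mem figure this produces is a ceiling, not a recommendation; applications that don't lean on sort performance should start much lower.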

Shared server guidelines

If your database server is sharing hardware with another application, particularly the
common situation where a database-driven application is installed on the same system,
you cannot be nearly as aggressive in your tuning as described in the last section. An
exact procedure is harder to outline. What you should try to do is pick values for the
memory-related parameters on the low side of recommended practice.
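As a purely illustrative starting point (every number here is hypothetical, not a measurement or a recommendation), the memory settings in postgresql.conf on a shared 8 GB host might begin well below the dedicated-server formulas:

```
# Hypothetical low-side starting values for a shared 8 GB host
shared_buffers = 512MB          # well under the 25% dedicated guideline
work_mem = 4MB                  # near the default; raise only after measuring
maintenance_work_mem = 128MB
effective_cache_size = 2GB      # reflects the database's smaller cache share
```

From there, the iterative measurement described below decides whether the database deserves a larger share.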

The other suggestions in the above section should still hold - using larger values for
checkpoint_segments and considering the appropriate choice of wal_sync_method,
for example, are no different on a shared system than on a dedicated one.

Next, simulate your application running with a full-sized workload, then
measure available RAM to see if more might be suitable to allocate toward the
database. This may be an iterative process, and it certainly should be matched with
application-level benchmarking if possible. There's no sense in giving memory to
the database on a shared system if the application, or another layer of caching such
as at the connection pooler level, would use it more effectively. That same idea - get
reasonable starting settings and tune iteratively based on monitoring - works well
for a dedicated server, too.