The 5th dimension of MongoDB: Adjusting Eviction Threads

In certain situations, MongoDB cannot keep up with flushing the dirty cache. In these cases, where pressure on the cache is high, it may be necessary to adjust the cache settings.

And this is where the problem starts. First, the MongoDB documentation is not clear about this. Worse, as of this writing (version 4.4), there is a long-standing issue with the checkpoint process. MongoDB performs a checkpoint every 60 seconds, and there are not many settings that give the DBA fine-grained control over this process.

What can the WiredTiger documentation tell us? Let’s see:

The eviction_target configuration value (default 80%) is the level at which WiredTiger attempts to keep the overall cache usage. Eviction worker threads are active when the cache contains at least this much content, expressed as a percentage of the total cache size.


The eviction_trigger configuration value (default 95%) is the level at which application threads start to perform the eviction. This will throttle application operations, increasing operation latency, usually resulting in the cache usage staying at this level when there is more cache pressure than eviction worker threads can handle in the background.


Operations will stall when the cache reaches 100% of the cache size. The application may want to change these settings from their defaults to either increase the range in which worker threads operate before application threads are throttled, or to use a larger proportion of RAM if eviction worker threads have no difficulty handling the cache pressure generated by the application.


The eviction_dirty_target (default 5%) and eviction_dirty_trigger (default 20%) operate in a similar way to the overall targets but only apply to dirty data in the cache. In particular, application threads will be throttled if the percentage of dirty data reaches the eviction_dirty_trigger. Any page that has been modified since it was read from disk is considered dirty.


By default, WiredTiger cache eviction is handled by a single, separate thread. In a large, busy cache, a single thread will be insufficient (especially when the eviction thread must wait for I/O). The eviction=(threads_min) and eviction=(threads_max) configuration values can be used to configure the minimum and maximum number of additional threads WiredTiger will create to keep up with the application eviction load.

Now, let’s take a closer look at two very important settings: eviction_target and eviction_trigger.


These parameters are expressed as a percentage of the total WiredTiger cache and control the overall cache usage. For example:

Consider a server with 200 GB of RAM and the WiredTiger cache set to 100 GB. The eviction worker threads will try to keep cache usage at around 80 GB (eviction_target, 80%). If the pressure is too high and cache usage climbs to 95 GB (eviction_trigger, 95%), application/client threads are drafted to help perform eviction before being allowed to do their own work, at the cost of increased latency for the clients.
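To make the arithmetic concrete, here is a small Python sketch (illustrative only; the function and key names are mine, not WiredTiger's) that turns the percentage settings into the cache levels at which eviction behavior changes:

```python
# Illustrative calculation of WiredTiger eviction thresholds.
# Percentage defaults match the WiredTiger documentation quoted above;
# the 100 GB cache size matches the example in the text.

def eviction_thresholds(cache_gb, target_pct=80, trigger_pct=95,
                        dirty_target_pct=5, dirty_trigger_pct=20):
    """Return the cache levels (in GB) at which eviction behavior changes."""
    return {
        # eviction worker threads become active at eviction_target
        "workers_start": cache_gb * target_pct / 100,
        # application threads are drafted into eviction at eviction_trigger
        "app_threads_evict": cache_gb * trigger_pct / 100,
        # same pair of thresholds, but for dirty data only
        "dirty_workers_start": cache_gb * dirty_target_pct / 100,
        "dirty_app_threads": cache_gb * dirty_trigger_pct / 100,
    }

print(eviction_thresholds(100))
```

With the defaults and a 100 GB cache, workers start evicting at 80 GB, application threads join in at 95 GB, and the dirty-data thresholds are 5 GB and 20 GB respectively.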

So if the checkpoint is intense and the disk can’t keep up with it, application performance will suffer.

To reduce the impact on the application and the stress of disk spikes, we have two alternatives:

The most obvious is increasing the IOPS of the disk, although this option may increase costs.

The other option is tuning the MongoDB eviction parameters to make disk usage more linear instead of spiky. The proposal below reduces the allowed percentage of dirty pages to flatten disk usage.

To change the settings on the fly, we can run the following command:

db.adminCommand( { "setParameter": 1, "wiredTigerEngineRuntimeConfig": "eviction=(threads_min=4,threads_max=4),checkpoint=(wait=60),eviction_dirty_trigger=5,eviction_dirty_target=1,eviction_trigger=95,eviction_target=80"})

As a reminder, any change may have undesired side effects. The following command rolls the settings back to the defaults:

db.adminCommand( { "setParameter": 1, "wiredTigerEngineRuntimeConfig": "eviction=(threads_min=1,threads_max=4),checkpoint=(wait=60),eviction_dirty_trigger=20,eviction_dirty_target=5,eviction_trigger=95,eviction_target=80"})

To make the change persistent, add the following to the MongoDB configuration file:

configString: "eviction=(threads_min=4,threads_max=4),checkpoint=(wait=60),eviction_dirty_trigger=5,eviction_dirty_target=1,eviction_trigger=95,eviction_target=80"
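For reference, in a YAML mongod.conf the configString option lives under storage.wiredTiger.engineConfig; a sketch of the relevant excerpt (adjust the values to your own tuning) looks like this:

```yaml
# mongod.conf (excerpt) -- engineConfig.configString is passed to WiredTiger at startup
storage:
  wiredTiger:
    engineConfig:
      configString: "eviction=(threads_min=4,threads_max=4),checkpoint=(wait=60),eviction_dirty_trigger=5,eviction_dirty_target=1,eviction_trigger=95,eviction_target=80"
```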


That’s it! Hope it helped!

4 thoughts on “The 5th dimension of MongoDB: Adjusting Eviction Threads”

  1. Darshan j

    Setting the dirty eviction target to 1 is not preferable for a write-intensive application. If the dirty cache reaches 1%, eviction starts, and if it reaches 5%, application threads get involved in eviction.

  2. Andre Piwoni

    I agree with Darshan. Dropping eviction_dirty_trigger from 20 to 5 will involve application threads sooner, so if your load is really intensive it is not going to be beneficial. Vinnie’s load may have been less intensive, which allowed worker threads to keep the threshold below 5. That wasn’t my case for 1.3 billion inserts with SSD.


    • Vinnie (post author)

      Hi Andre,

      Thanks for reaching out! Indeed, the idea I proposed is not going to offer maximum throughput. The intention is to offer a more stable throughput, avoiding the behavior Ivan showed in his blog post. What kind of issues are you facing in your environment?

      I have been doing additional testing and realized that not all evicted pages are flushed; that task falls to the checkpoint, which runs on the checkpoint interval (60 seconds). These new findings will result in a new blog post.
