# Kafka performance: RAM

Published 2022-12-03

This tutorial explains the impact of RAM on Kafka server performance.

Here are some key points to remember about RAM and Kafka cluster performance:

  • ZooKeeper uses the JVM heap, and 4GB of RAM is typically sufficient. Too small a heap results in high CPU usage due to constant garbage collection, while too large a heap may result in long garbage collection pauses and loss of connectivity within the ZooKeeper cluster.
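
Example (a minimal sketch, assuming ZooKeeper is started with the zookeeper-server-start.sh script bundled with Kafka, which also honors the KAFKA_HEAP_OPTS variable):

export KAFKA_HEAP_OPTS="-Xmx4g"
bin/zookeeper-server-start.sh config/zookeeper.properties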

  • Kafka brokers use both the JVM heap and the OS page cache:

  • The JVM heap: used for the replication of partitions between brokers and for log compaction. For small to medium-sized deployments, a 4GB heap is usually sufficient.

  • The OS page cache: consumers read data from here. The required page cache size depends on the retention period, the number of messages per second, and the size of the messages.
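
For example, a rough back-of-the-envelope estimate (the numbers are purely illustrative): to keep the last 30 seconds of traffic hot in the page cache with 10,000 messages per second of about 1KB each, you need roughly:

# illustrative: 10,000 msg/s * 1 KB * 30 s, converted to MB
echo $(( 10000 * 1024 * 30 / 1024 / 1024 )) MB   # ≈ 292 MB of page cache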

  • In production, Kafka machines usually have 32GB or 64GB of RAM.

  • The Kafka JVM heap is set with the KAFKA_HEAP_OPTS environment variable. The heap size must be monitored over time and may need to be increased as the broker hosts more partitions.

Example:

export KAFKA_HEAP_OPTS="-Xmx4g"

Don't set the -Xms (initial heap size) parameter: let the heap grow naturally.
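
The heap usage of a running broker can be watched over time, for example with jstat (a sketch; the pgrep pattern assumes the broker was started with the standard scripts, whose main class is kafka.Kafka):

# print JVM garbage-collection and heap utilization every 5 seconds
jstat -gcutil $(pgrep -f kafka.Kafka) 5000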

  • The OS page cache is not sized manually: the OS automatically uses the remaining free RAM as page cache.
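
You can check how much RAM the OS is currently using as page cache with free (the buff/cache column):

free -h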

  • Don't forget to minimize swapping: set vm.swappiness to 1 (the default on most Linux distributions is 60). On recent kernels a value of 0 disables swapping entirely, which can trigger the OOM killer, so 1 is the safer choice.

# apply immediately (does not survive a reboot)
sudo sysctl vm.swappiness=1
# persist the setting across reboots
echo 'vm.swappiness=1' | sudo tee --append /etc/sysctl.conf
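
To verify the current value after the change:

sysctl vm.swappiness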