Installing required tools
First of all, to achieve our goal we’ll have to install libhugetlbfs. This powerful library allows us to back the text and data segments, malloc() calls, and shared memory with huge pages.
On RHEL-like Linux distros (CentOS, Scientific Linux, PacketLinux, etc.) we can easily install libhugetlbfs by executing:
# sudo yum -y install libhugetlbfs-utils libhugetlbfs
libhugetlbfs is the library that provides easy access to huge pages of memory for any application
libhugetlbfs-utils is a set of user-space tools for configuring and managing the huge page environment
Configuring Huge Pages
Once the required RPMs are installed, we can proceed with configuring huge pages on our Linux system.
Using the old raw kernel interfaces
On RHEL 4 or 5 based distros:
Before we start, let’s check the current huge page configuration of the system. If the system has not yet been configured to use huge pages, the output of the following command should be 0 (zero):
# grep HugePages_Total /proc/meminfo
HugePages_Total:       0
Now, let’s check the size of a huge page; we need this in order to calculate the number of huge pages we’ll need for our application. To manage huge pages I recommend using the root account.
# grep Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB
The output (2048 kB) shows that the size of a single huge page on this system is 2MB, which is pretty much the default setup for RHEL-based Linux. Now, if we need a 4GB huge page pool, then 4096MB / 2MB = 2048 huge pages need to be allocated.
To allocate our 2048 Huge Pages we can use:
# echo 2048 > /proc/sys/vm/nr_hugepages
Please note: before allocating a large number of huge pages on a system that is running virtual machines or other memory-hungry applications, make sure to shut down those applications first, otherwise the command above may take a long time to complete (the kernel has to find enough contiguous free memory).
This allocates the pages immediately, but only until the next reboot. Equivalently, we can use:
# sysctl -w vm.nr_hugepages=2048
To make the system allocate our 4GB huge page pool at every reboot, add the line vm.nr_hugepages = 2048 to /etc/sysctl.conf.
Now let’s check if the system has been configured correctly:
# grep HugePages_Total /proc/meminfo
We should now see the 2048 pages we allocated:
HugePages_Total:    2048
To check how many pages are free:
# grep HugePages_Free /proc/meminfo
And the output will depend on how many pages are left free at the moment we run the command.
Using the modern hugeadm tool
Direct access to the huge page pool (the raw kernel interfaces) has been deprecated in favor of the hugeadm utility, so in the next steps we are going to use hugeadm to configure the huge page pool (use this method if you have a RHEL 6 or later based distro, such as CentOS 6.x/7.x, PacketLinux 1.x/2.x, and so on).
To list all huge page pools available on a system (and display their min and max values) we can use:
# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152        0        0        0        *
To set the 2MB pool minimum to 512 pages:
# hugeadm --pool-pages-min 2MB:512
And to set the maximum to our 2048 pages:
# hugeadm --pool-pages-max 2MB:2048
At this point pool-list should display something like:
# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      512      512     2048        *
To use libhugetlbfs features, hugetlbfs must be mounted. Each hugetlbfs mount point is associated with a page size. To choose the page size, use the pagesize mount option; if this option is omitted, the default huge page size is used.
Completing huge page configuration
To mount the default huge page size:
# mkdir -p /mnt/hugetlbfs
# mount -t hugetlbfs none /mnt/hugetlbfs
To mount 64KB pages (if the system hardware supports it):
# mkdir -p /mnt/hugetlbfs-64K
# mount -t hugetlbfs none -o pagesize=64k /mnt/hugetlbfs-64K
If the application runs under a non-root account (probably most of the time), then permissions on the mount point need to be set appropriately. For example, if the user postfix will run the application that we’ll force to use huge pages, then:
Either we can use:
# mount -t hugetlbfs none /mnt/hugetlbfs -o uid=postfix -o gid=postfix
Or we can mount as root and then:
# chown postfix:postfix /mnt/hugetlbfs
At this point we can also set an optimal value for Shared Memory Max (shmmax) in /proc/sys/kernel/shmmax, which should be at least the size of the largest shared memory segment you want to be able to use. To do this we can again use hugeadm:
# hugeadm --set-recommended-shmmax
And now we can ask for a report of the complete configuration by executing:
# hugeadm --explain
Total System Memory: 31792 MB

Mount Point          Options
/mnt/hugetlbfs       rw,seclabel,relatime

Huge page pools:
      Size  Minimum  Current  Maximum  Default
   2097152      512      512     2048        *

Huge page sizes with configured pools:
2097152

The recommended shmmax for your currently allocated huge pages is 4294967296 bytes.
To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
  kernel.shmmax = 4294967296

To make your hugetlb_shm_group settings persistent, add the following line to /etc/sysctl.conf:
  vm.hugetlb_shm_group = 0

Note: Permanent swap space should be preferred when dynamic huge page pools are used.
OK, we are ready to proceed with the next steps. Follow the output instructions above if you want to make your changes permanent on your system. (I recommend doing this only AFTER you’ve done some good testing with your application.)
At this point it’s time to start running your application with libhugetlbfs so that it will use huge pages.
The general syntax to do this is:
LD_PRELOAD=libhugetlbfs.so HUGETLB_MORECORE=yes <command>
Where <command> is the name of your application (including file path if different from local directory).
So, for example, to run VIM (vi) using hugetlb we can type:
LD_PRELOAD=libhugetlbfs.so HUGETLB_MORECORE=yes vi ./example.txt
That’s it, thanks for reading and, if you enjoyed this post, please support my blog by visiting my on-line hacking and engineering merchandise shop on redbubble.com by clicking here, thank you!