I really wanted to run the latest MariaDB with LZ4 page compression; it is a game changer for many types of large databases I deal with. There isn't a package for CentOS in the trusted repos that includes any of the new algorithms, just zlib. So I compiled it manually, in a repeatable way that follows best practices, and it's powering this site. Now I can use InnoDB page compression with lzo, lzma, bzip2, snappy, or my favorite, LZ4. This seemed like a good chance to post a howto and show that there is a lot you can do by compiling software yourself, breaking the one-click package-management shackles. That said, as soon as there is a stable CentOS package that supports the new algorithms, I'll switch to it. This is a step-by-step guide that I ran through a couple of times from scratch, plus a lot of misc notes for myself.
CentOS 7 64bit
This howto builds RPM packages so you can install the latest MariaDB with yum, only these are supercharged, bleeding edge, and compiled and tested on the machine you build them on. These directions are for a new CentOS 7.9 64-bit machine; it's best to start clean with a fresh instance. I'm doing this on Rackspace OpenStack. The same instructions can be tweaked for other CentOS versions and other operating systems.
Prepare Build Environment
First, you must have sudo installed and configured for the non-privileged user that will actually do the builds. Never compile as root; better yet, never use root directly at all.
Enable Yum Repos
Just google each of these and follow the instructions they recommend, such as verifying the RPMs cryptographically.
- mariadb: Use the MariaDB Repositories Configuration tool to generate the file for
/etc/yum.repos.d/mariadb.repo
- ius:
curl -O http://dl.iuscommunity.org/pub/ius/stable/Redhat/7/x86_64/ius-release-1.0-15.ius.el7.noarch.rpm
sudo rpm -ivh ius-release-1.0-15.ius.el7.noarch.rpm
- epel:
sudo yum install epel-release
- rpmforge:
curl -O http://repository.it4i.cz/mirrors/repoforge/redhat/el7/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
sudo rpm -ivh rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
Install Yum Packages
These are needed for various build tools and dependencies. I also recommend installing ccache for faster builds.
sudo yum install autoconf automake binutils bison bison-devel boost boost-devel btrfs-progs-devel bzip2-devel bzr cmake cmake3 cmake3-data cpp cross-binutils-common cross-gcc-common eventlog eventlog-devel eventlog-static extra-cmake-modules gcc gcc-c++ git glib glib2-devel glibc-devel glibc-headers glibc-static glibc-utils glib-devel gmp-devel gperf gperftools jemalloc jemalloc-devel kernel-devel kernel-headers kernel-tools libaio libaio-devel libdb libdb-devel libdbi libedit libedit-devel libmemcached libmemcached-devel libstemmer libstemmer-devel lrzip lua-devel lz4 lz4-devel lz4-static lzip lzo-devel lzo-minilzo lzop memcached memcached-devel msgpack msgpack-devel ncurses ncurses-base ncurses-devel ncurses-libs ncurses-static ncurses-term neon nss-devel nss-util-devel openssl-devel openssl-static pbzip2 pcre pcre-devel pcre-static readline readline-devel recode rpm-build rpm-devel rpmdevtools snappy-devel socat sqlite-devel thrift thrift-devel tokyocabinet unixODBC unixODBC-devel valgrind valgrind-devel xz-devel xz-lzma-compat zlib-devel
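If you installed ccache, make sure its compiler shims come first on PATH before you start compiling. A small sketch, assuming the CentOS ccache package layout (it places its gcc/g++ shims in /usr/lib64/ccache):

```shell
# Put ccache's compiler shims ahead of the real gcc/g++ on PATH so repeated
# builds hit the compile cache. (Assumption: ccache installed via yum, which
# creates /usr/lib64/ccache on CentOS 7.)
export PATH="/usr/lib64/ccache:${PATH}"

# Show which directory now wins the compiler lookup:
echo "${PATH%%:*}"
```

Add the export to your shell profile if you plan to rebuild often.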
Now upgrade your entire system, then reboot to start from a clean slate. After rebooting, stop any CPU/IO/memory-intensive services or programs to make the compilation much faster: things like crond, mysql, httpd, nginx, php-fpm, memcached, redis, postfix, cloud backup tools, datadog, nagios, and similar daemons. This might be a good time to see if you can permanently disable the ones you don't need; the more RAM left for the InnoDB buffer pool, the better.
Create Build Directories
First lets create the directories we will be using to build mariadb. We are doing an out-of-source build, which has many benefits, such as making it easy to repeat this process for upgrades.
Building “out-of-source” provides additional benefits. For example, it allows building both Release and Debug configurations from a single source tree, or building the same source with different versions of the same compiler or with different compilers altogether. It also keeps the source tree free of the objects and binaries produced during the make.
- Create the root, src, and build folders; these need to be chmod 755 and owned by the non-root user you are using with sudo:
sudo mkdir -pv /opt/mariadb10/{src,build}
sudo chown -R $USER: /opt/mariadb10
- Clone the mariadb git repo (takes a while):
git clone https://github.com/MariaDB/server.git /opt/mariadb10/src/
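Note that HEAD of that repo is bleeding-edge development. If you would rather build a release, check out its tag first (the tag name below is just an example; list what's actually available with git tag):

```
cd /opt/mariadb10/src
git tag -l 'mariadb-10.4*'    # list release tags
git checkout mariadb-10.4.12  # example: pin the build to a released version
```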
Install Testing Tools
These tools will be used for make test, an important step that will alert you to any problems you will want or need to fix before installing.
- Go to C mode:
export LC_ALL=C LANG=C
- Install yum packages:
sudo yum install moreutils perl-CPAN innotop mysqltuner mysqlreport iotop mytop
- Install Logger:
cpan Log::Log4perl
- Install cpanm:
cpan App::cpanminus
- Install More:
cpanm -S Test::More
Build MariaDB
Now that you are warmed up in the CLI, it's time to do the actual build. If all goes smoothly it should take about 30 minutes to 1 hour (mostly compiling). During these steps you may need to debug some failed make tests, or track down and install missing packages reported by cmake. You can also take some time to read up on all the possible configuration options in BUILD-CMAKE in the src/ directory. I ended up re-running cmake with different options and improvements several times before the final build/make.
- C mode for speed:
export LC_ALL=C LANG=C
- Cd into your empty build directory:
cd /opt/mariadb10/build/
- Run cmake:
cmake ../src -DBUILD_CONFIG=mysql_release -DRPM=centos7
- Check cmake output and install any missing libraries and rerun cmake if desired:
grep NOTFOUND CMakeCache.txt
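To turn that grep output into a tidy list of the library names to chase down, you can cut off everything after the entry name. Illustrative sketch below: the heredoc is a miniature stand-in for the real CMakeCache.txt in /opt/mariadb10/build/, and the entry names in it are made up; the grep|cut pipeline at the end is the part you would actually run.

```shell
# Miniature fake CMakeCache.txt (illustration only; entries are examples).
tmp="$(mktemp -d)"
cat > "$tmp/CMakeCache.txt" <<'EOF'
LZ4_LIBS:FILEPATH=/usr/lib64/liblz4.so
SNAPPY_LIBRARIES:FILEPATH=SNAPPY_LIBRARIES-NOTFOUND
JUDY_INCLUDE_DIR:PATH=JUDY_INCLUDE_DIR-NOTFOUND
EOF

# Print just the names of cache entries cmake could not resolve:
grep NOTFOUND "$tmp/CMakeCache.txt" | cut -d: -f1
```

Each name maps to a -devel package you can install before re-running cmake.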
- Verify LZ4 is available:
grep HAVE_LZ4 CMakeCache.txt
should show output similar to:

WITH_INNODB_LZ4:STRING=AUTO
HAVE_LZ4_H:INTERNAL=1
HAVE_LZ4_SHARED_LIB:INTERNAL=1
- Find the number of processors (using cat /proc/cpuinfo) and use it as the value of
make -j[n]
below for faster makes.
- Make:
make -j4
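If you'd rather not hardcode the job count, you can derive it from the core count. A sketch, assuming GNU coreutils' nproc (standard on CentOS 7):

```shell
# Derive the parallel-make job count from the number of online processors.
JOBS="$(nproc)"
echo "building with: make -j${JOBS}"
```

Then run make -j"${JOBS}" in the build directory.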
- Create centos RPMS for installing with yum:
make package
- Run tests:
make test
You should try to fix any failures before installing with yum. Example output:

Running tests...
Test project /opt/mariadb10/build
      Start  1: pcre_test
 1/60 Test  #1: pcre_test ........................   Passed    0.37 sec
      Start  2: pcre_grep_test
 2/60 Test  #2: pcre_grep_test ...................   Passed    0.56 sec
...
100% tests passed, 0 tests failed out of 60

Total Test time (real) =  55.33 sec
- Good practice: shut down any running mysql servers, and create a full backup of the entire datadir
/var/lib/mysql
like this:
sudo rsync -alvPh --delete /var/lib/mysql/ /var/lib/mysql.bk/
and while you are at it, back up your /etc/ directory as well.
- Now uninstall any existing mariadb packages, and take note of any dependencies that also get removed; after you install the new RPMs with yum, go back and reinstall those dependencies.
- FINAL INSTALL:
sudo yum localinstall MariaDB-10.4.12-centos7-x86_64-server.rpm MariaDB-10.4.12-centos7-x86_64-client.rpm MariaDB-10.4.12-centos7-x86_64-common.rpm MariaDB-10.4.12-centos7-x86_64-shared.rpm
Post-Build
Now just configure the server normally, get it running with systemd, and you are good to go and free to experiment and learn how to use the new algorithms and mariadb features.
Verify Compression Support
$ mysql -Ntbe 'SHOW VARIABLES WHERE Variable_name LIKE "have_%" OR Variable_name LIKE "%_compression_%"'
+------------------------------------------+----------+
| have_compress                            | YES      |
| have_crypt                               | YES      |
| have_dynamic_loading                     | YES      |
| have_geometry                            | YES      |
| have_openssl                             | YES      |
| have_profiling                           | YES      |
| have_query_cache                         | YES      |
| have_rtree_keys                          | YES      |
| have_ssl                                 | DISABLED |
| have_symlink                             | YES      |
| innodb_compression_algorithm             | lz4      |
| innodb_compression_failure_threshold_pct | 5        |
| innodb_compression_level                 | 3        |
| innodb_compression_pad_pct_max           | 50       |
+------------------------------------------+----------+
Exclude from Yum
Exclude MariaDB from yum by adding this to /etc/yum.conf, so that yum-cron won't inadvertently replace your custom install with an updated mainline version from a repo. If that version lacks support for the compression algorithms, it would prevent your mysql from starting and be a pain.
exclude=MariaDB*
Rebuilding and Updating
Note that unlike autotools, cmake configures and builds incrementally. You can modify one configuration option and cmake will rebuild only the part of the tree affected by it. For example, when you run
cmake -DWITH_EMBEDDED_SERVER=1
in the already-built tree, it will cause libmysqld to be built, but no other configuration options will be changed or reset to their default values. Alternatively, you can simply delete the CMakeCache.txt file (the file where cmake stores the current build configuration) and rebuild everything from scratch.
Choosing compression algorithm
You specify which compression algorithm to use with the --innodb-compression-algorithm= startup option for MariaDB. The options are:
| Option | Description |
|---|---|
| none | Default. Data is not compressed. |
| zlib | Pages are compressed with the bundled zlib compression method. |
| lz4 | Pages are compressed using the https://code.google.com/p/lz4/ compression method. |
| lzo | Pages are compressed using the http://www.oberhumer.com/opensource/lzo/ compression method. |
| lzma | Pages are compressed using the http://tukaani.org/xz/ compression method. |
| bzip2 | Pages are compressed using the http://www.bzip.org/ compression method. |
| snappy | Pages are compressed using http://google.github.io/snappy/ (used by MariaDB ColumnStore and MongoDB). |
The compression method can be changed whenever needed. Currently the setting is global (i.e. you can't specify a compression method per table).
set global innodb_compression_algorithm=lz4;
From this point on, page-compressed tables will use the lz4 compression method. This setting does not change pages that were already compressed with a different method, because MariaDB supports uncompressed pages, pages compressed with e.g. lzo, and pages compressed with e.g. lz4 within the same tablespace. This is possible because every page in an InnoDB tablespace records its compression method in its page header metadata.
Choosing compression level
You specify the default compression level with the --innodb-compression-level= startup option for MariaDB. Values are 0-9; the default is 6. Note that not all compression methods allow choosing a compression level, and in those cases the value is ignored.
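Both settings can also be changed at runtime from a privileged mysql session, no restart required. A quick sketch:

```sql
-- Change algorithm and level on the fly, then verify they took effect.
SET GLOBAL innodb_compression_algorithm = 'lz4';
SET GLOBAL innodb_compression_level = 3;
SHOW GLOBAL VARIABLES LIKE 'innodb_compression_%';
```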
InnoDB/XtraDB Page Compression
Page compression is an alternative way to compress your tables, similar to but distinct from the InnoDB COMPRESSED storage format. In page compression, only uncompressed pages are stored in the buffer pool. This approach differs significantly from legacy InnoDB compressed tables using innodb-file-per-table=1. Page compression can be used on any file system but is most beneficial on SSDs and Non-Volatile Memory (NVM) devices like the FusionIO atomic-series. The design also works with doublewrite enabled, but the best performance is reached if doublewrite is disabled (innodb-doublewrite=0) and atomic writes are enabled (innodb-use-atomic-writes=1). This naturally requires that the file system and storage device support atomic writes.
Server.cnf configuration
Some innodb settings are required in order to use a custom compression algorithm: you must have innodb_file_per_table enabled, and you must use the Barracuda file format.
innodb_strict_mode = 1
innodb_file_format = Barracuda
innodb_file_per_table = ON

# zlib, lz4, lzo, lzma, bzip2, snappy
innodb_compression_algorithm = lz4
innodb_compression_level = 3

#innodb_stats_on_metadata = OFF
#innodb_mtflush_threads = 2
#innodb_use_mtflush = 1
#innodb_doublewrite = 0
#innodb_use_atomic_writes = 1
#innodb_use_fallocate = ON
#innodb_use_trim = ON
Creating compressed tables
Only tables that are explicitly marked as compressed are actually compressed. You can create a page-compressed table with:
create table users(user_id int not null, name varchar(200), primary key(user_id)) engine=innodb page_compressed=1;
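One way to confirm the option stuck is the CREATE_OPTIONS column in information_schema (the table name matches the example above):

```sql
SELECT table_name, create_options
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'users';
-- create_options should mention page_compressed=1
```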
Enable Page Compression for existing tables
This will output the SQL needed to convert all the tables in a database to page-compressed mode; it can be fed right into mysql.
mysqlshow db | sed '/^| [^ ]/!d; s/^| \([^ ]*\).*$/ALTER TABLE `\1` ENGINE=InnoDB PAGE_COMPRESSED=1;/'
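An illustrative dry run of that pipeline against canned mysqlshow-style output (the table names users and sessions are made up), so you can inspect the ALTER statements before piping them into mysql for real:

```shell
# Fake `mysqlshow db` output. mysqlshow centers the column header, so the
# sed filter (which requires a non-space right after "| ") skips it and the
# +---+ border lines, keeping only the table-name rows.
printf '%s\n' \
  '+------------+' \
  '|   Tables   |' \
  '+------------+' \
  '| users      |' \
  '| sessions   |' \
  '+------------+' |
sed '/^| [^ ]/!d; s/^| \([^ ]*\).*$/ALTER TABLE `\1` ENGINE=InnoDB PAGE_COMPRESSED=1;/'
```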
Monitoring compression
SHOW STATUS contains new status variables that can be used to monitor compression:
Status variable name | Values | Description |
---|---|---|
Innodb_page_compression_saved | 0 | Bytes saved by compression |
Innodb_num_pages_page_compressed | 0 | Number of pages compressed |
Innodb_num_page_compressed_trim_op | 0 | Number of trim operations |
Innodb_num_page_compressed_trim_op_saved | 0 | Number of trim operations saved |
Innodb_num_pages_page_decompressed | 0 | Number of pages decompressed |
Innodb_num_pages_page_compression_error | 0 | Number of compression errors |
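For example, to watch the counters above while a workload runs, from any mysql session:

```sql
-- Wildcard matches all six page-compression counters listed above.
SHOW GLOBAL STATUS LIKE 'Innodb_%page_compress%';
```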