engrenage-passion

Newsfeeds

Planet MySQL

Planet MySQL - https://planet.mysql.com
  • MySQL: size of your tables – tricks and tips
    Many of you already know how to retrieve the size of your dataset, schemas and tables in MySQL. To summarize, below are the different queries you can run.

    Dataset Size

    In the past I was using an ad-hoc query for this, but now that the sys schema is installed by default, I encourage you to use the formatting functions it provides. The query to calculate the dataset size is now:

      SELECT sys.format_bytes(sum(data_length)) DATA,
             sys.format_bytes(sum(index_length)) INDEXES,
             sys.format_bytes(sum(data_length + index_length)) 'TOTAL SIZE'
        FROM information_schema.TABLES
       ORDER BY data_length + index_length;

    Engines Used and Size

    For a list of all engines used:

      SELECT count(*) as '# TABLES',
             sys.format_bytes(sum(data_length)) DATA,
             sys.format_bytes(sum(index_length)) INDEXES,
             sys.format_bytes(sum(data_length + index_length)) 'TOTAL SIZE',
             engine `ENGINE`
        FROM information_schema.TABLES
       WHERE TABLE_SCHEMA NOT IN ('sys', 'mysql', 'information_schema',
                                  'performance_schema', 'mysql_innodb_cluster_metadata')
       GROUP BY engine;

    Schemas Size

    Now let's find out which schemas are the largest:

      SELECT TABLE_SCHEMA,
             sum(table_rows) `ROWS`,
             sys.format_bytes(sum(data_length)) DATA,
             sys.format_bytes(sum(index_length)) IDX,
             sys.format_bytes(sum(data_length) + sum(index_length)) 'TOTAL SIZE',
             round(sum(index_length) / sum(data_length), 2) IDXFRAC
        FROM information_schema.TABLES
       GROUP BY table_schema
       ORDER BY sum(data_length) DESC;

    Top 10 Tables by Size

    And finally, a query to get the list of the 10 largest tables:

      SELECT CONCAT(table_schema, '.', table_name) as 'TABLE',
             ENGINE,
             CONCAT(ROUND(table_rows / 1000000, 2), 'M') `ROWS`,
             sys.format_bytes(data_length) DATA,
             sys.format_bytes(index_length) IDX,
             sys.format_bytes(data_length + index_length) 'TOTAL SIZE',
             round(index_length / data_length, 2) IDXFRAC
        FROM information_schema.TABLES
       ORDER BY data_length + index_length DESC
       LIMIT 10;

    You can of course modify the query to retrieve the size of any given table. That was the theory, and it's always good to revisit those queries from time to time. But... can we trust these results?

    In fact, sometimes this can be very tricky. Let's check this example:

      mysql> SELECT COUNT(*) AS TotalTableCount, table_schema,
          ->        CONCAT(ROUND(SUM(table_rows)/1000000,2),'M') AS TotalRowCount,
          ->        CONCAT(ROUND(SUM(data_length)/(1024*1024*1024),2),'G') AS TotalTableSize,
          ->        CONCAT(ROUND(SUM(index_length)/(1024*1024*1024),2),'G') AS TotalTableIndex,
          ->        CONCAT(ROUND(SUM(data_length+index_length)/(1024*1024*1024),2),'G') TotalSize
          ->   FROM information_schema.TABLES
          ->  GROUP BY table_schema
          ->  ORDER BY SUM(data_length+index_length) DESC LIMIT 1;
      +-----------------+--------------+---------------+----------------+-----------------+-----------+
      | TotalTableCount | TABLE_SCHEMA | TotalRowCount | TotalTableSize | TotalTableIndex | TotalSize |
      +-----------------+--------------+---------------+----------------+-----------------+-----------+
      |              15 | wp_lefred    | 0.02M         | 5.41G          | 0.00G           | 5.41G     |
      +-----------------+--------------+---------------+----------------+-----------------+-----------+

    This seems to be a very large table! Let's verify this:

      mysql> select * from information_schema.TABLES
          ->  where table_schema='wp_lefred' and table_name = 'wp_options'\G
      *************************** 1. row ***************************
        TABLE_CATALOG: def
         TABLE_SCHEMA: wp_lefred
           TABLE_NAME: wp_options
           TABLE_TYPE: BASE TABLE
               ENGINE: InnoDB
              VERSION: 10
           ROW_FORMAT: Dynamic
           TABLE_ROWS: 3398
       AVG_ROW_LENGTH: 1701997
          DATA_LENGTH: 5783388160
      MAX_DATA_LENGTH: 0
         INDEX_LENGTH: 442368
            DATA_FREE: 5242880
       AUTO_INCREMENT: 1763952
          CREATE_TIME: 2018-09-18 00:29:16
          UPDATE_TIME: 2018-09-17 23:44:40
           CHECK_TIME: NULL
      TABLE_COLLATION: utf8mb4_unicode_ci
             CHECKSUM: NULL
       CREATE_OPTIONS:
        TABLE_COMMENT:
      1 row in set (0.01 sec)

    We can see that the average row length is pretty big. So let's verify on disk:

      [root@vps21575 database]# ls -lh wp_lefred/wp_options.ibd
      -rw-r----- 1 mysql mysql 11M Sep 18 00:31 wp_lefred/wp_options.ibd

    11M?! The table is 11M on disk, but Information_Schema thinks it's 5.41G! Quite a big difference!

    This happens because InnoDB computes these statistics from a very small sample of pages by default. So if you have a lot of records with a very variable size, as is the case with this WordPress table, it can be safer to increase the number of pages used to generate those statistics:

      mysql> set global innodb_stats_transient_sample_pages = 100;
      Query OK, 0 rows affected (0.00 sec)

      mysql> analyze table wp_lefred.wp_options;
      +----------------------+---------+----------+----------+
      | Table                | Op      | Msg_type | Msg_text |
      +----------------------+---------+----------+----------+
      | wp_lefred.wp_options | analyze | status   | OK       |
      +----------------------+---------+----------+----------+
      1 row in set (0.05 sec)

    Let's check the table statistics now:

      *************************** 1. row ***************************
        TABLE_CATALOG: def
         TABLE_SCHEMA: wp_lefred
           TABLE_NAME: wp_options
           TABLE_TYPE: BASE TABLE
               ENGINE: InnoDB
              VERSION: 10
           ROW_FORMAT: Dynamic
           TABLE_ROWS: 3075
       AVG_ROW_LENGTH: 1198
          DATA_LENGTH: 3686400
      MAX_DATA_LENGTH: 0
         INDEX_LENGTH: 360448
            DATA_FREE: 4194304
       AUTO_INCREMENT: 1764098
          CREATE_TIME: 2018-09-18 00:34:07
          UPDATE_TIME: 2018-09-18 00:32:55
           CHECK_TIME: NULL
      TABLE_COLLATION: utf8mb4_unicode_ci
             CHECKSUM: NULL
       CREATE_OPTIONS:
        TABLE_COMMENT:
      1 row in set (0.01 sec)

    We can see that the average row length is much smaller now (and it could be smaller still with an even bigger sample). Let's verify:

      mysql> SELECT CONCAT(table_schema, '.', table_name) as 'TABLE', ENGINE,
          ->        CONCAT(ROUND(table_rows / 1000000, 2), 'M') `ROWS`,
          ->        sys.format_bytes(data_length) DATA,
          ->        sys.format_bytes(index_length) IDX,
          ->        sys.format_bytes(data_length + index_length) 'TOTAL SIZE',
          ->        round(index_length / data_length,2) IDXFRAC
          ->   FROM information_schema.TABLES
          ->  WHERE table_schema='wp_lefred' and table_name = 'wp_options';
      +----------------------+--------+-------+----------+------------+------------+---------+
      | TABLE                | ENGINE | ROWS  | DATA     | IDX        | TOTAL SIZE | IDXFRAC |
      +----------------------+--------+-------+----------+------------+------------+---------+
      | wp_lefred.wp_options | InnoDB | 0.00M | 3.52 MiB | 352.00 KiB | 3.86 MiB   |    0.10 |
      +----------------------+--------+-------+----------+------------+------------+---------+

    In fact, this table uses a LONGTEXT column that can be filled with a lot of data or almost nothing.
    We can verify that for this particular table we have some very large values:

      mysql> select CHAR_LENGTH(option_value), count(*) from wp_lefred.wp_options
          ->  group by 1 order by 1 desc limit 10;
      +---------------------------+----------+
      | CHAR_LENGTH(option_value) | count(*) |
      +---------------------------+----------+
      |                    245613 |        1 |
      |                    243545 |        2 |
      |                    153482 |        1 |
      |                    104060 |        1 |
      |                     92871 |        1 |
      |                     70468 |        1 |
      |                     60890 |        1 |
      |                     41116 |        1 |
      |                     33619 |        5 |
      |                     33015 |        2 |
      +---------------------------+----------+

    Even if the majority of the records are much smaller:

      mysql> select CHAR_LENGTH(option_value), count(*) from wp_lefred.wp_options
          ->  group by 1 order by 2 desc limit 10;
      +---------------------------+----------+
      | CHAR_LENGTH(option_value) | count(*) |
      +---------------------------+----------+
      |                        10 |     1485 |
      |                        45 |      547 |
      |                        81 |      170 |
      |                         6 |      167 |
      |                         1 |       84 |
      |                        83 |       75 |
      |                        82 |       65 |
      |                        84 |       60 |
      |                        80 |       44 |
      |                        30 |       42 |
      +---------------------------+----------+

    Conclusion

    In general, Information_Schema provides a good overview of table sizes, but please always verify the size on disk to see whether it matches: when a table contains records of widely variable size, those statistics are often incorrect because the InnoDB page sample used is too small. And don't forget that on disk, tablespaces can also be fragmented!
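    If you don't have shell access to run ls on the data directory, a minimal alternative sketch is to ask InnoDB itself for the actual tablespace file size instead of trusting the optimizer statistics. This assumes MySQL 5.7 with innodb_file_per_table enabled; on MySQL 8.0 the view is named information_schema.INNODB_TABLESPACES instead.

      -- Sketch: compare the statistics-based estimate with the real file size
      -- reported by InnoDB (MySQL 5.7 naming; use INNODB_TABLESPACES on 8.0).
      SELECT NAME,
             sys.format_bytes(FILE_SIZE)      AS 'FILE SIZE ON DISK',
             sys.format_bytes(ALLOCATED_SIZE) AS 'ALLOCATED'
        FROM information_schema.INNODB_SYS_TABLESPACES
       WHERE NAME = 'wp_lefred/wp_options';

    Comparing FILE_SIZE with ALLOCATED_SIZE also gives a hint about fragmentation or sparse files, which the table statistics cannot show.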

  • Bloom filter and cuckoo filter
    The multi-level cuckoo filter (MLCF) in SlimDB builds on the cuckoo filter (CF), so I read the cuckoo filter paper. The big deal about the cuckoo filter is that it supports delete and a bloom filter does not. As far as I know, the MLCF is updated when sorted runs arrive at and depart from a level -- so delete is required. A bloom filter in an LSM is per sorted run and delete is not required, because the filter is created when the sorted run is written and dropped when the sorted run is unlinked.

    I learned of the blocked bloom filter from the cuckoo filter paper (see here or here). RocksDB uses this but I didn't know it had a name. Its benefit is to reduce the number of cache misses per probe. I was curious about the cost and, while the math is complicated, the paper estimates a 10% increase in the false positive rate for a bloom filter with 8 bits/key and a 512-bit block, which is similar to a typical setup for RocksDB.

    Space Efficiency

    I am always interested in things that use less space for filters and block indexes with an LSM, so I spent time reading the paper. It is a great paper and I hope that more people read it. The cuckoo filter (CF) paper claims better space-efficiency than a bloom filter, and the claim is repeated in the SlimDB paper as:

      However, by selecting an appropriate fingerprint size f and bucket size b, it can be shown that the cuckoo filter is more space-efficient than the Bloom filter when the target false positive rate is smaller than 3%

    The tl;dr for me is that the space savings from a cuckoo filter are significant when the false positive rate (FPR) is sufficiently small. But when the target FPR is 1%, a cuckoo filter uses about the same amount of space as a bloom filter.

    The paper has a lot of interesting math that I was able to follow. It provides formulas for the number of bits/key for a bloom filter, a cuckoo filter and a semisorted cuckoo filter. The semisorted filter uses 1 less bit/key than a regular cuckoo filter. The formulas, assuming E is the target false positive rate, b=4, and A is the load factor:

      bloom filter:             ceil(1.44 * log2(1/E))
      cuckoo filter:            ceil(log2(1/E) + log2(2b)) / A  ==  ceil(log2(1/E) + 3) / A
      semisorted cuckoo filter: ceil(log2(1/E) + 2) / A

    The target load factor is 0.95 (A = 0.95) and that comes at a cost in CPU overhead when creating the CF. Assuming A=0.95, a bloom filter uses 10 bits/key, a cuckoo filter uses 10.53 and a semisorted cuckoo filter uses 9.47 (a worked check is included at the end of this post). So the cuckoo filter uses either 5% more or 5% less space than a bloom filter when the target FPR is 1%, which is a different perspective from the quote listed above. Perhaps my math is wrong and I am happy for an astute reader to explain that.

    When the target FPR is 0.1%, a bloom filter uses 15 bits/key, a cuckoo filter uses 13.7 and a semisorted cuckoo filter uses 12.7. The savings from a cuckoo filter are larger here, but the common configuration for a bloom filter in an LSM has been to target a 1% FPR. I won't claim that we have proven that FPR=1% is the best rate, and recent research on Monkey has shown that we can do better when allocating space to bloom filters.

    The first graph shows the number of bits/key as a function of the FPR for a bloom filter (BF) and a cuckoo filter (CF). The second graph shows the ratio of bits/key from the BF versus bits/key from the CF. The results for the semisorted CF, which uses 1 less bit/key, are not included. For the second graph, a CF uses less space than a BF when the value is greater than one. The graphs cover FPR from 0.00001 to 0.09, which is 0.001% to 9%.
    R code to generate the graphs is here.

    CPU Efficiency

    The paper has more detail on CPU efficiency in table 3, figure 5 and figure 7. Table 3 has the speed to create a filter, but that filter is much larger (192MB) than a typical per-run filter with an LSM, and there will be more memory-system stalls in that case. Regardless, the blocked bloom filter has the least CPU overhead during construction.

    Figure 5 shows the lookup performance as a function of the hit rate. Fortunately, performance doesn't vary much with the hit rate. The cuckoo filter is faster than the blocked bloom filter, and the blocked bloom filter is faster than the semisorted cuckoo filter.

    Figure 7 shows the insert performance as a function of the cuckoo filter load factor. The CPU overhead per insert grows significantly when the load factor exceeds 80%.
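    As a worked check of the space-efficiency numbers quoted above, here is the arithmetic for the 1% target FPR, plugging E = 0.01 (so log2(1/E) is about 6.64), b = 4 and A = 0.95 into the formulas from the paper:

      bloom filter:             ceil(1.44 * 6.64)     = ceil(9.57) = 10            bits/key
      cuckoo filter:            ceil(6.64 + 3) / 0.95 = 10 / 0.95  approx. = 10.53 bits/key
      semisorted cuckoo filter: ceil(6.64 + 2) / 0.95 =  9 / 0.95  approx. =  9.47 bits/key

    These match the 10, 10.53 and 9.47 bits/key figures given in the post, so the "about 5% more or 5% less than a bloom filter at FPR=1%" observation follows directly from the formulas.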

  • Percona XtraDB Cluster 5.6.41-28.28 Is Now Available
    Percona announces the release of Percona XtraDB Cluster 5.6.41-28.28 (PXC) on September 18, 2018. Binaries are available from the downloads section or our software repositories. Percona XtraDB Cluster 5.6.41-28.28 is now the current release, based on the following:

      • Percona Server for MySQL 5.6.41
      • Codership WSREP API release 5.6.41
      • Codership Galera library 3.24

    Fixed Bugs

      • PXC-1017: The Memcached API is now disabled if the node is acting as a cluster node, because InnoDB Memcached access is not replicated by Galera.
      • PXC-2164: SST script compatibility with SELinux was improved by forcing it to look only for the port associated with the said process.
      • PXC-2155: Temporary folders created during SST execution are now deleted on cleanup.
      • PXC-2199: The TOI replication protocol was fixed to prevent unexpected GTID generation caused by a DROP TRIGGER IF EXISTS statement that MySQL logs as successful due to its IF EXISTS clause.

    Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

    The post Percona XtraDB Cluster 5.6.41-28.28 Is Now Available appeared first on Percona Database Performance Blog.

  • [Solved] How to install MySQL Server on CentOS 7?
    Recently, while setting up MySQL Enterprise Server, I found that there is too much information available on the internet, and it is very difficult for a newbie to extract what is needed for a direct implementation. So I decided to write a quick reference guide for setting up the server, covering it end to end, from planning to production to maintenance. This is the first post in that direction; in it we discuss installing MySQL Enterprise Server on a CentOS 7 machine. Note that the steps are the same for both the Enterprise and Community editions; only the binary files differ and are downloaded from different repositories. If you are looking to install MySQL on the Windows operating system, please visit this page: https://www.rathishkumar.in/2016/01/how-to-install-mysql-server-on-windows.html.

    I am assuming the hardware and the operating system are installed and configured as per the requirements. Let us begin with the installation.

    Removing MariaDB:

    CentOS comes with MariaDB as its default database. If you try to install MySQL on top of it, you will encounter an error message stating that the MySQL library files conflict with the MariaDB library files. Remove MariaDB to avoid errors and to have a clean installation. Use the statements below to remove MariaDB completely (run them with sudo if you are not logged in as a superuser):

      sudo yum remove MariaDB-server
      sudo yum remove MariaDB-client   (these two can be combined in a single step)
      sudo rm -rf /var/lib/mysql
      sudo rm /etc/my.cnf

    Downloading RPM files:

    MySQL installation files (RPM packages on CentOS 7) can be downloaded from the MySQL yum repository. For MySQL Community Edition there is a clear, step-by-step guide available on the MySQL website: https://dev.mysql.com/doc/mysql-yum-repo-quick-guide/en/. The only missing step is downloading the MySQL yum repository to your local machine (this might look like a very simple step, but for most newbies it is very helpful):

      wget https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm

    For MySQL Enterprise Edition, the binary files can be downloaded from Oracle Software Delivery Cloud (http://edelivery.oracle.com) for the latest version; for previous versions visit My Oracle Support (https://support.oracle.com/). As mentioned earlier, there is a clear, step-by-step guide available on the MySQL website for the Community Edition, so I will continue with installing the Enterprise Edition; the steps are almost the same.

    Choosing the RPM file:

    For MySQL Community Edition, all the RPM files are included in the downloaded yum repository, but for the Enterprise Edition these files are downloaded separately. (For system administration purposes, all of them can be kept in a local MySQL repository.) For newbies it may be confusing to understand the different RPM files and their contents, so I am concentrating only on the files required for a stand-alone MySQL instance. If there is a requirement for embedded MySQL, or if you are developing plugins for MySQL, you can install the other files; it depends entirely on your requirements. The following list describes the required files and where to install them:

      • mysql-commercial-server-5.7.23-1.1.el7.x86_64.rpm: MySQL Server and related utilities to run and administer a MySQL server. (On Server)
      • mysql-commercial-client-5.7.23-1.1.el7.x86_64.rpm: Standard MySQL clients and administration tools. (On Server & On Client)
      • mysql-commercial-common-5.7.23-1.1.el7.x86_64.rpm: Common files needed by the MySQL client library, the MySQL database server, and the MySQL embedded server. (On Server)
      • mysql-commercial-libs-5.7.23-1.1.el7.x86_64.rpm: Shared libraries for MySQL client applications. (On Server)

    Installing MySQL:

    Install the MySQL binary files in the following order (common, libs, client, server) to avoid dependency errors. The following statements install MySQL on the local machine:

      sudo yum localinstall mysql-commercial-common-5.7.23-1.1.el7.x86_64.rpm
      sudo yum localinstall mysql-commercial-libs-5.7.23-1.1.el7.x86_64.rpm
      sudo yum localinstall mysql-commercial-client-5.7.23-1.1.el7.x86_64.rpm
      sudo yum localinstall mysql-commercial-server-5.7.23-1.1.el7.x86_64.rpm

    Starting the MySQL Service:

    On CentOS 7, the mysql service can be started as follows:

      sudo systemctl start mysqld.service
      sudo systemctl status mysqld.service

    Login to MySQL Server for the first time:

    Once the service is started, the superuser account 'root'@'localhost' is created and its temporary password is stored in the error log file (by default /var/log/mysqld.log). The password can be retrieved with the following command:

      sudo grep 'temporary password' /var/log/mysqld.log

    As soon as you log in to MySQL with the temporary password, you need to reset the root password; until then, you cannot run any queries on the MySQL server. You can reset the root account password by running the commands below:

      mysql -u root -h localhost -p
      alter user 'root'@'localhost' identified by 'NewPassword';

    You can verify the MySQL status and edition by checking the server version and license variables; see the example at the end of this post. (The sample output referenced here was for MySQL 8.0 Community Edition, GPL license, running on a Windows machine.)

    Notes:

    MySQL conflicts with MariaDB: in case of a conflict with MariaDB, you will see an error message such as:

      file /usr/share/mysql/xxx from install of MySQL-server-xxx conflicts with file from package mariadb-libs-xxx

    To resolve this error, remove the MariaDB server and its related files from the CentOS server (refer to the section "Removing MariaDB" above).

    Can't connect to MySQL server: the MySQL server is installed but you are unable to connect from a client. Check this page for possible causes and solutions: https://www.rathishkumar.in/2017/08/solved-cannot-connect-to-mysql-server.html

    Please let me know in the comments section if you are facing any other errors. I hope this post is helpful.
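    As referenced above, here is a minimal sketch of the verification step. It uses only standard system variables, so it should behave the same on Community and Enterprise editions; the values shown in the comments are typical examples, not guaranteed output.

      -- Sketch: check the server version and license/edition after installation.
      SELECT VERSION();                        -- e.g. 5.7.23-commercial or 8.0.12
      SHOW VARIABLES LIKE 'version_comment';   -- e.g. "MySQL Community Server (GPL)"
      SHOW VARIABLES LIKE 'license';           -- "GPL" for Community, "Commercial" for Enterprise
      STATUS;                                  -- mysql client command summarising connection and server info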

  • Configuring and Managing SSL On Your MySQL Server
    In this blog post, we review some of the important aspects of configuring and managing SSL in MySQL hosting. These include the default configuration, disabling SSL, and enabling and enforcing SSL on a MySQL server. Our observations are based on the community version of MySQL 5.7.21.

    Default SSL Configuration in MySQL

    By default, MySQL server always installs and enables SSL configuration. However, clients are not forced to connect using SSL: a client can choose to connect with or without SSL, as the server allows both types of connections. Let's see how to verify this default behavior of MySQL server. When SSL is installed and enabled on MySQL server by default, we will typically see the following:

      • Presence of *.pem files in the MySQL data directory. These are the various client and server certificates and keys that are in use for SSL, as described here.
      • A note in the mysqld error log file during server start, such as:

          [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.

      • The value of the 'have_ssl' variable will be YES:

          mysql> show variables like 'have_ssl';
          +---------------+-------+
          | Variable_name | Value |
          +---------------+-------+
          | have_ssl      | YES   |
          +---------------+-------+

    With respect to the MySQL client, by default it always tries to establish an encrypted network connection with the server, and if that fails, it falls back to unencrypted mode. So, we connect to the MySQL server using the command:

      mysql -h <hostname> -u <username> -p

    We can check whether the current client connection is encrypted or not using the status command:

      mysql> status
      --------------
      mysql  Ver 14.14 Distrib 5.7.21, for Linux (x86_64) using  EditLine wrapper
      Connection id:          75
      Current database:
      Current user:           root@127.0.0.1
      SSL:                    Cipher in use is DHE-RSA-AES256-SHA
      Current pager:          stdout
      Using outfile:          ''
      Using delimiter:        ;
      Server version:         5.7.21-log MySQL Community Server (GPL)
      Protocol version:       10
      Connection:             127.0.0.1 via TCP/IP
      ...

    The SSL field above indicates that the connection is encrypted. We can, however, ask the MySQL client to connect without SSL by using the command:

      mysql -h <hostname> -u <username> -p --ssl-mode=DISABLED

      mysql> status
      --------------
      Connection id:          93
      Current database:
      Current user:           sgroot@127.0.0.1
      SSL:                    Not in use
      Current pager:          stdout
      Using outfile:          ''
      Using delimiter:        ;
      Server version:         5.7.21-log MySQL Community Server (GPL)
      Protocol version:       10
      Connection:             127.0.0.1 via TCP/IP
      ...

    We can see that even though SSL is enabled on the server, we are able to connect to it without SSL.

    Disabling SSL in MySQL

    If your requirement is to completely turn off SSL on the MySQL server, instead of the default 'enabled, but optional' mode, we can do the following:

      • Delete the *.pem certificate and key files in the MySQL data directory.
      • Start MySQL with the SSL option turned off. This can be done by adding the line entry ssl=0 in the my.cnf file.

    We can then observe that:

      • There will NOT be any note in the mysqld logs such as:

          [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.

      • The value of the 'have_ssl' variable will be DISABLED:

          mysql> show variables like 'have_ssl';
          +---------------+----------+
          | Variable_name | Value    |
          +---------------+----------+
          | have_ssl      | DISABLED |
          +---------------+----------+

    Enforcing SSL in MySQL

    We saw that although SSL is enabled by default on the MySQL server, it is not enforced and we were still able to connect without SSL. Now, by setting the require_secure_transport system variable, we can enforce that the server accepts only SSL connections. This can be verified by trying to connect to the MySQL server with the command:

      mysql -h <hostname> -u sgroot -p --ssl-mode=DISABLED

    We can see that the connection is refused with the following error message from the server:

      ERROR 3159 (HY000): Connections using insecure transport are prohibited while --require_secure_transport=ON.

    SSL Considerations for Replication Channels

    By default, in a MySQL replication setup, the slaves connect to the master without encryption. Hence, to connect to a master in a secure way for replication traffic, slaves must use MASTER_SSL=1 as part of the 'CHANGE MASTER TO' command which specifies the parameters for connecting to the master. Please note that this option is also mandatory in case your master is configured to enforce SSL connections using require_secure_transport.
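    To tie the last two sections together, here is a minimal sketch of the statements involved. The host, user and password values are placeholders, and on a production server require_secure_transport should also be set in my.cnf so it survives a restart.

      -- On the server: accept only encrypted client connections.
      SET GLOBAL require_secure_transport = ON;

      -- On a slave: point it at the master over an encrypted connection
      -- (placeholder host, user and password).
      STOP SLAVE;
      CHANGE MASTER TO
          MASTER_HOST = 'master.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '<password>',
          MASTER_SSL = 1;
      START SLAVE;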
