
Newsfeeds

Planet MySQL - https://planet.mysql.com
  • Webinar Tues 6/26: MariaDB Server 10.3
    Please join Percona's Chief Evangelist, Colin Charles, on Tuesday, June 26th, 2018, as he presents MariaDB Server 10.3 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4). Register Now.

    MariaDB Server 10.3 is out. It has some interesting features around system-versioned tables, Oracle compatibility, column compression, an integrated SPIDER engine, as well as MyRocks. Learn about what's new, how you can use it, and how it differs from MySQL.

    About the speaker: Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, had worked at MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He is well known within open source communities in APAC and has spoken at many conferences. An experienced technologist whose work spans nearly two decades in the community, he pays attention to emerging technologies from an integration standpoint and delivers talks and tutorials at many industry-wide conferences. Interests: application development, systems administration, database development, migration, and Web-based technologies. Considered an expert in Linux and Mac OS X usage, administration, and roll-outs. Specialties: MariaDB, MySQL, Linux, Open Source, Community, and speaking and writing to technical audiences as well as business stakeholders.

    The post Webinar Tues 6/26: MariaDB Server 10.3 appeared first on Percona Database Performance Blog.

  • Percona XtraBackup 2.4.12 Is Now Available
    Percona announces the GA release of Percona XtraBackup 2.4.12 on June 22, 2018. You can download it from our download site and from the apt and yum repositories. Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

    New features and improvements:
      • Percona XtraBackup now prints the arguments it was invoked with to standard output. Bug fixed PXB-1494.

    Bugs fixed:
      • xtrabackup --copy-back did not read which encryption plugin to use from the plugin-load setting of the my.cnf configuration file. Bug fixed PXB-1544.
      • xbstream exited with a zero return code when it failed to create one or more target files, instead of returning error code 1. Bug fixed PXB-1542.
      • When it encountered a zero-sized keyring file, Percona XtraBackup removed and immediately recreated it, which could confuse external software watching that file for changes. Bug fixed PXB-1540.
      • xtrabackup_checkpoints files were encrypted during a backup, which made taking incremental backups more difficult. Bug fixed PXB-202.

    Other bugs fixed: PXB-1526 "Test kill_long_selects.sh failing with MySQL 5.7.21".

    Release notes with all the improvements for version 2.4.12 are available in our online documentation. Please report any bugs to the issue tracker. The post Percona XtraBackup 2.4.12 Is Now Available appeared first on Percona Database Performance Blog.
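    For context on how the tool described above is typically driven, here is a minimal sketch of a backup / prepare / copy-back cycle scripted from Python. The paths, credentials, and wrapper function are hypothetical; only --backup, --prepare, --copy-back, and --target-dir come from standard XtraBackup usage (the release notes above mention --copy-back).

        # Minimal sketch: full backup, prepare, and restore with Percona XtraBackup 2.4,
        # driven from Python. Paths and credentials are placeholders for illustration.
        import subprocess

        BACKUP_DIR = "/backups/full-2018-06-22"   # hypothetical target directory

        def run(args):
            """Run a command and raise if it exits with a non-zero return code."""
            print("running:", " ".join(args))
            subprocess.run(args, check=True)

        # 1. Take a backup of the running server without blocking user queries.
        run(["xtrabackup", "--backup",
             "--user=backup", "--password=secret",    # hypothetical credentials
             "--target-dir=" + BACKUP_DIR])

        # 2. Prepare the backup so the data files are consistent.
        run(["xtrabackup", "--prepare", "--target-dir=" + BACKUP_DIR])

        # 3. Restore on a stopped server with an empty datadir; --copy-back reads
        #    its settings (including plugin-load, per the fix above) from my.cnf.
        run(["xtrabackup", "--copy-back", "--target-dir=" + BACKUP_DIR])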

  • UNDERSTANDING MySQL CLIENT / SERVER PROTOCOL USING PYTHON AND WIRESHARK – PART 1
    The MySQL Client / Server protocol is used in many areas, for example:
      • MySQL Connectors such as Connector/C, Connector/J, and so on
      • MySQL Proxy
      • Replication between master and slave

    What is the MySQL Client / Server protocol? It is a set of accepted conventions (rules) through which the client and server talk to and understand each other. The client connects to the server over a TCP connection using a socket, sends packets to the server, and receives packets from it. The connection has two phases (illustrated with a diagram in the original post):
      • Connection phase
      • Command phase

    STRUCTURE OF PACKETS

    Each packet is built from the protocol's basic data types. The maximum length of a packet is 16MB; if a payload is longer than 16MB, it is split into several 16MB chunks. First of all, let's look at the protocol data types. The MySQL Client / Server protocol has two groups of data types: integer types and string types (see the official documentation: https://dev.mysql.com/doc/internals/en/basic-types.html).

    INTEGER TYPES

    Integer types are divided into two groups:
      • Fixed-length integer types
      • Length-encoded integer types

    A fixed-length integer consumes 1, 2, 3, 4, 6 or 8 bytes. For example, to encode the number 3 as a 3-byte fixed-length integer we write it in hex as 03 00 00; as a 2-byte fixed-length integer it is written as 03 00.

    A length-encoded integer consumes 1, 3, 4 or 9 bytes and is prefixed by 1 byte. To detect the length of the integer we have to check that first byte:
      • If the first byte is less than 0xfb (< 251), the value is the first byte itself (it is stored as a 1-byte integer).
      • If the first byte is equal to 0xfc (== 252), the value is stored in the following 2 bytes.
      • If the first byte is equal to 0xfd (== 253), the value is stored in the following 3 bytes.
      • If the first byte is equal to 0xfe (== 254), the value is stored in the following 8 bytes.
      • If the first byte is equal to 0xfb, there is no need to read further bytes: it represents the MySQL NULL value. If it is equal to 0xff, the value is undefined.

    For example, to convert fd 03 00 00 ... into a normal integer, we read the first byte, which is 0xfd. According to the rules above we then read the next 3 bytes and convert them into a normal integer, whose value is 3 in the decimal number system. So the value of this length-encoded integer is 3.

    STRING TYPES

    String types are also divided into several kinds:
      • String<Fix> – Fixed-length strings; they have a known, hardcoded length.
      • String<NULL> – Null-terminated strings; they end with a 0x00 byte.
      • String<Var> – Variable-length strings; a fixed-length integer precedes the string, from which we can calculate its actual length.
      • String<Lenenc> – Length-encoded strings; a length-encoded integer precedes the string, from which we can calculate its actual length.
      • String<EOF> – If a string is the last component of a packet, its length can be calculated from the overall packet length minus the current position.

    SNIFF WITH WIRESHARK

    Let's start Wireshark to sniff the network and filter MySQL packets by IP (in my case the server IP is 54.235.111.67). Then let's try to connect to the MySQL server with the MySQL native client from our local machine:

        >> mysql -u[username] -p[password] -h[host ip] -P3306

    As you can see, after the TCP connection to the server we receive several MySQL packets from the server. The first of them is the greeting packet. Let's dig into this packet and describe each field.
    The first 3 bytes are the packet length, the next 1 byte is the packet number, and the rest of the bytes are the payload of the Greeting packet of the MySQL Client / Server protocol. Let's describe each field of the greeting packet:
      • Protocol number – Int<1>
      • Server version – String<NULL>
      • Thread id – Int<4>
      • Salt1 – String<NULL>
      • Server capabilities – Int<2>
      • Server language – Int<1>
      • Server status – Int<2>
      • Extended server capabilities – Int<2>
      • Authentication plugin length – Int<1>
      • Reserved bytes – 10 bytes
      • Salt2 – String<NULL>
      • Authentication plugin string – String<EOF>

    The server language is an integer; a collation table (shown in the original post) maps each integer value to a language. In my case the server language is 0x08 (8 in decimal), and from that table the equivalent of 8 is latin1_swedish_ci, so the default language of the server is latin1_swedish_ci. Server capabilities and server status are also integers, but by reading each bit of these integers we learn about the server's capabilities and status (the original post illustrates the capability and status bits).

    Using the greeting packet, the client prepares a Login Request packet to send to the server for authentication. Let's look at the login request packet:
      • First 3 bytes – payload length
      • Next 1 byte – packet number
      • Client capabilities – Int<2> (same layout as the server capabilities)
      • Extended client capabilities – Int<2> (same layout as the extended server capabilities)
      • Max packet – Int<4> (describes the maximum packet length)
      • Charset – Int<1> (in my case it is 0x21, 33 in decimal, which the collation table shows is utf8_general_ci; we switch from the server's default latin1_swedish_ci to utf8_general_ci)
      • Username – String<NULL>
      • Password – String<Var>
      • Client auth plugin string – String<NULL>

    As you can see, the password is not sent in clear text. To scramble the password we will use the SHA-1 algorithm, together with the salt1 and salt2 strings from the Greeting packet sent by the server. If we are authenticated successfully we then get an OK packet from the server; otherwise we get an ERR packet. The OK packet looks like this:
      • 3 bytes – packet length
      • 1 byte – packet number
      • Affected rows – Int<1>
      • Server status – Int<2>
      • Warnings – Int<2>

    That's all: we have finished exploring the Connection phase. Now let's start writing our Python code.
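    As a bridge to that code, here is a minimal Python sketch of the ideas above: it reads one packet (3-byte little-endian length, 1-byte sequence number, payload), pulls the first few fields out of the greeting packet, and implements the length-encoded integer rules. The host and port are placeholders; the full login and command handling is left for the next part.

        # Minimal sketch: connect to MySQL, read the greeting packet, and decode
        # length-encoded integers as described above. HOST/PORT are placeholders.
        import socket

        HOST, PORT = "127.0.0.1", 3306    # hypothetical server address

        def read_packet(sock):
            """Read one MySQL packet: 3-byte payload length, 1-byte sequence id, payload."""
            header = sock.recv(4)
            length = int.from_bytes(header[:3], "little")
            seq = header[3]
            payload = b""
            while len(payload) < length:
                payload += sock.recv(length - len(payload))
            return seq, payload

        def read_lenenc_int(data, pos):
            """Decode a length-encoded integer at pos; return (value, next position)."""
            first = data[pos]
            if first < 0xfb:                     # value is the byte itself
                return first, pos + 1
            if first == 0xfb:                    # MySQL NULL
                return None, pos + 1
            if first == 0xfc:                    # 2-byte integer follows
                return int.from_bytes(data[pos + 1:pos + 3], "little"), pos + 3
            if first == 0xfd:                    # 3-byte integer follows
                return int.from_bytes(data[pos + 1:pos + 4], "little"), pos + 4
            if first == 0xfe:                    # 8-byte integer follows
                return int.from_bytes(data[pos + 1:pos + 9], "little"), pos + 9
            raise ValueError("0xff marks an undefined value / error")

        # The example from the text: fd 03 00 00 decodes to 3.
        print(read_lenenc_int(bytes.fromhex("fd030000"), 0))   # -> (3, 4)

        with socket.create_connection((HOST, PORT)) as sock:
            seq, payload = read_packet(sock)     # the server speaks first: greeting packet
            protocol_version = payload[0]                                   # Int<1>
            end = payload.index(b"\x00", 1)
            server_version = payload[1:end].decode()                        # String<NULL>
            thread_id = int.from_bytes(payload[end + 1:end + 5], "little")  # Int<4>
            print(protocol_version, server_version, thread_id)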

  • On InnoDB Data Compression in MySQL
    Another story that I prepared back in April for my meeting with one of our customers in London was a "compression story". We spent a lot of time on it in several support issues in the past, with only limited success.

    In the case of InnoDB tables, there are actually two ways to compress data (besides relying on filesystem compression or compressing individual columns on the server or application side). Historically, the first one was introduced by the Barracuda InnoDB file format and the ROW_FORMAT=COMPRESSED option it supported. A notable number of related bugs have been reported over time, and it may not be that easy to identify them all (you can find the current list of bugs tagged with "compression" here). I've picked the following bugs for my "story":
      • Bug #88220 - "compressing and uncompressing InnoDB tables seems to be inconsistent". Over the years Simon Mudd, Monty Solomon (see the related Bug #70534 - "Removing table compression leaves compressed keys") and other community members reported several bugs related to inconsistencies and surprises with the key_block_size option. It is used for both the MyISAM and InnoDB storage engines (for compressed tables), and it seems nobody is going to fix the remaining problems until they are gone with the MyISAM engine.
      • Bug #69588 - "MyISAM to InnoDB compressed slower than MyISAM to InnoDB, Then InnoDB to Compressed". Just a detail to take into account, noted 5 years ago by Joffrey MICHAIE, verified almost 4 years ago, and then getting zero public attention from Oracle engineers.
      • Bug #62431 - "What is needed to make innodb compression work for 32KB pages?". Nothing can be done according to the manual: "In particular, ROW_FORMAT=COMPRESSED in the Barracuda file format assumes that the page size is at most 16KB and uses 14-bit pointers."
      • Bug #78827 - "Speedup replication of compressed tables". Come on, Daniël van Eeden, nobody cares that "Replication and InnoDB compressed tables are not efficiently working together." The bug is still "Open".
      • Bug #75110 - "Massive, to-be-compressed not committed InnoDB table is total database downtime". This problem was reported by Jouni Järvinen back in 2014. Surely this is not a bug, but it seems nobody has even tried to speed up compression in any way on multiple cores.
      • Bug #84439 - "Table of row size of ~800 bytes does not compress with KEY_BLOCK_SIZE=1". It was reported by Jean-François Gagné, who asked at least for a reasonable error message. Nothing has happened since verification.
      • Bug #77089 - "Misleading innochecksum error for compressed tables with key_block_size=16". This problem was reported by Laurynas Biveinis more than three years ago, immediately verified, and then got zero attention.

    (Photo caption from the original post: "The boats above do not use the space for mooring efficiently. They need better compression.")

    Transparent Page Compression for InnoDB tables was added later and looked promising. If you are lucky enough to use a filesystem with sparse file and hole punching support and a proper OS or kernel version, then you can expect notable savings of disk space with very few additional keystrokes (like COMPRESSION="zlib") when defining the table. Different compression libraries are supported. Moreover (see here), only uncompressed pages are stored in memory in this case, which improves the efficiency of buffer pool usage. This sounded promising originally, but there are still bugs to consider:
      • Bug #78277 - "InnoDB deadlock, thread stuck on kernel calls from transparent page compression".
        This bug alone (reported by Mark Callaghan back in 2015) may be a reason NOT to use the feature in production, as soon as you hit it (and chances are high). There are many interesting comments saying there are environments where the feature works as fast as expected, but I think this summary is good enough for most users: "[19 Oct 2015 15:56] Mark Callaghan ... Slow on XFS, slow on ext4, btrfs core team tells me it will be slow there. But we can celebrate that it isn't slow on NVMFS - closed source, not GA, can't even find out where to buy it, not aware of anyone running it." The bug is still "Open".
      • Bug #81145 - "Sparse file and punch hole compression not working on Windows". Not that I care about Windows that much, but still. The bug has been "Verified" for 2 years.
      • Bug #87723 - "mysqlbackup cannot work with mysql5.7 using innodb page-level compression". Now this is awesome! Oracle's own MySQL Enterprise Backup does NOT support the feature. Clearly they cared about making it useful... As a side note, the same problem affects Percona's xtrabackup (see PXB-1394). MariaDB resolved the problem (and several related ones like MDEV-13023) with the mariabackup tool.
      • Bug #87603 - "compression/tablespace ignored in create/alter table when not using InnoDB". The COMPRESSION='...' option is accepted for MyISAM tables as well, and this again leads to problems when switching to another storage engine, as Tomislav Plavcic noted.
      • Bug #78672 - "assert fails in fil_io during linkbench with transparent innodb compression". This crash (assertion failure) was noted by Mark Callaghan back in 2015. It may not crash anymore since 5.7.10 according to the last comment, but nobody cares to close the bug or add anything useful. The bug is still "Verified".

    That's almost all I prepared for my "compression story". It had to be a sad one. What about the moral of the story? For me it is the following:
      • Classical InnoDB compression (ROW_FORMAT=COMPRESSED) has limited efficiency and has not received any attention from developers recently. If you hit a problem with this feature, you have to live with it.
      • Transparent page compression for InnoDB seems originally to have been more of a proof of concept in MySQL that may not work well in production on commodity hardware and software, and it was not integrated with backup tools. MariaDB improved it and added support for backing up page-compressed tables efficiently with the same familiar xtrabackup-based approach, but there are still open problems to resolve (see MDEV-15527 and MDEV-15528, which I also picked for my "story").
      • It seems (based on a review of public sources, at least) that neither compression option has received much attention from Oracle developers recently. If you check the new features of MySQL 8.0 GA here, you may notice that the zlib version is updated, compressed temporary InnoDB tables are no longer supported and... that's all about compression for InnoDB!

    This story could probably be shortened to a single link to the summary post by Mark Callaghan from Facebook (who has studied the efficiency of data compression by various engines a lot, among other performance metrics), or to the simple statement that if you want data to be compressed efficiently on the server side, do NOT use the current InnoDB implementations and use the RocksDB engine instead (with MariaDB or Percona Server if you also need other modern features). But I cannot write any story about MySQL without referring to some bugs, and this is how I ended up with the above.

    What if you just switched to MySQL 8.0 GA and badly need some new features from it?
    Then just wait for a miracle to happen (and hope Percona will make it one day :)
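    For reference, both approaches discussed above are enabled per table through DDL options. Below is a minimal sketch using the mysql-connector-python driver (an assumption; any client or the mysql CLI works the same way), with hypothetical connection details and table names: one table uses classic ROW_FORMAT=COMPRESSED with a KEY_BLOCK_SIZE, the other uses transparent page compression via COMPRESSION='zlib'.

        # Minimal sketch of the two InnoDB compression options discussed above.
        # Connection details and table names are hypothetical.
        import mysql.connector

        conn = mysql.connector.connect(host="127.0.0.1", user="root",
                                       password="secret", database="test")
        cur = conn.cursor()

        # Classic compression: ROW_FORMAT=COMPRESSED with KEY_BLOCK_SIZE
        # (pages compressed to 8KB here; requires innodb_file_per_table=ON).
        cur.execute("""
            CREATE TABLE t_row_compressed (
                id INT PRIMARY KEY,
                payload TEXT
            ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8
        """)

        # Transparent page compression: requires a filesystem with sparse files
        # and hole punching (and, as the bugs above show, check your backup tooling).
        cur.execute("""
            CREATE TABLE t_page_compressed (
                id INT PRIMARY KEY,
                payload TEXT
            ) ENGINE=InnoDB COMPRESSION='zlib'
        """)

        cur.close()
        conn.close()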

  • MySQL 8.0 InnoDB Cluster – the quick hands-on manual
    I'm just back from a trip to Barcelona, where I presented MySQL 8.0 InnoDB Cluster (at dataops and at the Barcelona MySQL Meetup), and the majority of the feedback was great, but I also got some comments on the demos I showed. The first one (an image in the original post) was a joke of course (maybe it's true for some); people found it very easy and they liked it. But the second one was that everything I showed wasn't easy to find: some people who had already played with the solution didn't succeed in creating a cluster so easily... not because they had errors or encountered bugs, but simply because they didn't know how to do it.

    The goal of this blog post is to illustrate how to create a MySQL 8.0 InnoDB Cluster very quickly using only the new MySQL Shell!

    Initial environment

    We have 3 MySQL 8.0.11 instances running: mysql1, mysql2 and mysql3. They can all communicate with each other, and they have a user with all privileges created on each of them: clusteradmin. Nothing else has been changed, and no configuration file has been modified. The only important thing is that if you created the admin user on them individually, you should run RESET MASTER on all the instances.

    Quick Cluster Creation

    Let's connect to one instance (it doesn't matter which one) and create the cluster. I will list all the commands needed to create the cluster very quickly, and in a next post I will explain them in more detail.

    Connection using the Shell:

        MySQL JS> \c clusteradmin@mysql1
        Creating a session to 'clusteradmin@mysql1'
        Enter password: ****
        Fetching schema names for autocompletion... Press ^C to stop.
        Your MySQL connection id is 10 (X protocol)
        Server version: 8.0.11 MySQL Community Server - GPL
        No default schema selected; type \use <schema> to set one.
        MySQL [mysql1+ ssl] JS>

    MySQL Configuration:

        MySQL [mysql1+ ssl] JS> dba.configureInstance('clusteradmin@mysql1',{'restart': true})

    You will be prompted to validate the changes, and if a restart of MySQL is required, it will happen (with a default MySQL installation, 3 settings will be changed and a restart will be required). You also need to configure all the other nodes that you want to be part of the cluster:

        MySQL [mysql1+ ssl] JS> dba.configureInstance('clusteradmin@mysql2',{'restart': true})
        MySQL [mysql1+ ssl] JS> dba.configureInstance('clusteradmin@mysql3',{'restart': true})

    Cluster Creation

    When the servers have restarted, just connect again to one of them using the Shell and create the cluster:

        MySQL [mysql1+ ssl] JS> \c clusteradmin@mysql1
        MySQL [mysql1+ ssl] JS> cluster=dba.createCluster('MyCluster')

    Now you already have a cluster, but with only one member. You can verify this using the status() method of the cluster object we created:

        MySQL [mysql1+ ssl] JS> cluster.status()
        {
            "clusterName": "MyCluster",
            "defaultReplicaSet": {
                "name": "default",
                "primary": "mysql1:3306",
                "ssl": "REQUIRED",
                "status": "OK_NO_TOLERANCE",
                "statusText": "Cluster is NOT tolerant to any failures.",
                "topology": {
                    "mysql1:3306": {
                        "address": "mysql1:3306",
                        "mode": "R/W",
                        "readReplicas": {},
                        "role": "HA",
                        "status": "ONLINE"
                    }
                }
            },
            "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
        }

    Add the other members

    Now it's time to add the 2nd and 3rd members to the new cluster:

        MySQL [mysql1+ ssl] JS> cluster.addInstance('clusteradmin@mysql2:3306')
        MySQL [mysql1+ ssl] JS> cluster.addInstance('clusteradmin@mysql3:3306')

    Please note that here the port for the MySQL standard protocol (3306) is required.
    Check the cluster

    We can verify our cluster using the status() method again:

        MySQL [mysql1+ ssl] JS> cluster.status()
        {
            "clusterName": "MyCluster",
            "defaultReplicaSet": {
                "name": "default",
                "primary": "mysql1:3306",
                "ssl": "REQUIRED",
                "status": "OK",
                "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
                "topology": {
                    "mysql1:3306": {
                        "address": "mysql1:3306",
                        "mode": "R/W",
                        "readReplicas": {},
                        "role": "HA",
                        "status": "ONLINE"
                    },
                    "mysql2:3306": {
                        "address": "mysql2:3306",
                        "mode": "R/O",
                        "readReplicas": {},
                        "role": "HA",
                        "status": "ONLINE"
                    },
                    "mysql3:3306": {
                        "address": "mysql3:3306",
                        "mode": "R/O",
                        "readReplicas": {},
                        "role": "HA",
                        "status": "ONLINE"
                    }
                }
            },
            "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
        }

    And voilà! We have a 3-node MySQL 8.0 InnoDB Cluster!

    Minimal Commands Summary
      • connect to mysql1: \c clusteradmin@mysql1
      • configure mysql1: dba.configureInstance('clusteradmin@mysql1',{'restart': true})
      • configure mysql2: dba.configureInstance('clusteradmin@mysql2',{'restart': true})
      • configure mysql3: dba.configureInstance('clusteradmin@mysql3',{'restart': true})
      • connect to mysql1 (after restart): \c clusteradmin@mysql1
      • create cluster: dba.createCluster('MyCluster')
      • add mysql2 to the cluster: cluster.addInstance('clusteradmin@mysql2:3306')
      • add mysql3 to the cluster: cluster.addInstance('clusteradmin@mysql3:3306')
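    For readers who prefer Python, the same minimal flow can be sketched in MySQL Shell's Python mode (\py) instead of JavaScript. This is a hedged sketch: it assumes the Shell's snake_case Python API (dba.configure_instance, dba.create_cluster, cluster.add_instance) and reuses the host names and clusteradmin account from above.

        # Inside mysqlsh started in Python mode (mysqlsh --py).
        # Hosts and the clusteradmin account are the ones assumed in this post.
        shell.connect('clusteradmin@mysql1')

        # Configure each instance; MySQL is restarted when settings change.
        for host in ('mysql1', 'mysql2', 'mysql3'):
            dba.configure_instance('clusteradmin@' + host, {'restart': True})

        # Reconnect after the restarts, create the cluster, and add the members.
        shell.connect('clusteradmin@mysql1')
        cluster = dba.create_cluster('MyCluster')
        cluster.add_instance('clusteradmin@mysql2:3306')
        cluster.add_instance('clusteradmin@mysql3:3306')
        print(cluster.status())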
