Planet MySQL

  • Picking the right clustering for MySQL: Cloud-only services or flexible Tungsten clusters?
    As businesses head into the cloud, it is tempting to reach for the first product that offers to make database operation as simple as punching a few buttons on a menu.  However, there’s a big difference between firing up cloud database services such as Amazon RDS for testing or development and finding a solution that can handle hundreds of millions of transactions daily. This webinar explores how

  • MySQL Connector/Python v2.0.0 alpha
    A new major version of Connector/Python is available: v2.0.0 alpha has been released and is available for download! As with any alpha software, it's probably not a good idea to throw it into production just yet. Our manual has the full change log, but here's an overview of the most important changes in this release.

    Some incompatibilities

    The world evolves, at least the software does, and Python is no different. I'm not as bold as the guys at Django, who dropped support for Python v2.6 with the Django v1.7 release; I'm leaving it in because I'm nice. Supported Python versions: 2.6, 2.7, 3.3 and 3.4. We no longer support Python 3.1 and 3.2. One of the reasons is that we consolidated the code bases for both major Python versions, and the unicode literal syntax brought back in 3.3 was a blessing (for example, u'パイソン').

    Raw cursors return bytearray objects

    Since we consolidated the code bases for Python 2 and 3, we needed to make the behaviour as similar as possible between the two. That's not easy with Unicode strings, but with bytes we have the bytearray type. Raw cursors will return bytearray objects instead of strings in Python 2 and bytes in Python 3. If you want the previous behaviour back, you can inherit from MySQLCursorRaw and change some methods. Please leave comments if you'd like an example of this.

    LOAD DATA LOCAL INFILE on by default

    In Connector/Python v1.x you needed to set the client flags to enable LOAD DATA LOCAL INFILE on the client side. Here's an example:

        # Connector/Python v1.2
        import mysql.connector
        from mysql.connector import ClientFlag
        cnx = mysql.connector.connect(.. , client_flags=[ClientFlag.LOCAL_FILES])

    In Connector/Python v2.0 it is on by default. However, some people might not like that, so there is a switch to disable it:

        # Connector/Python v2.0
        import mysql.connector
        cnx = mysql.connector.connect(.. , allow_local_infile=False)

    Note that you still need to make sure that the MySQL server is configured to allow this statement.
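    The post offers an example of restoring the old raw-cursor behaviour on request; here is one hedged sketch of the idea (the helper name is mine, not part of Connector/Python): post-process each raw row, decoding bytearray/bytes values back to str:

```python
# Hypothetical helper: decode the bytearray values a v2.0 raw cursor
# yields back into text, leaving non-byte values (e.g. None) untouched.
def decode_raw_row(row, encoding="utf-8"):
    return tuple(
        v.decode(encoding) if isinstance(v, (bytearray, bytes)) else v
        for v in row
    )

# A row as a v2.0 raw cursor might return it:
raw_row = (bytearray(b"user"), bytearray(b"11"), None)
print(decode_raw_row(raw_row))  # ('user', '11', None)
```

    The alternative the post itself suggests is subclassing MySQLCursorRaw and overriding its fetch methods to apply the same decoding transparently.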
    New cursors: dict and namedtuple

    At last, we have cursors which return rows as dictionaries or named tuples. PEP-249 does not define these, since not all database systems return columns in a consistent case-sensitive or case-insensitive way. But this is MySQL. Here is an example of using a cursor that returns dictionaries:

        query = (
            "SELECT TABLE_NAME, TABLE_ROWS "
            "FROM INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_SCHEMA='mysql' ORDER BY TABLE_NAME"
        )
        cur = cnx.cursor(dictionary=True)
        cur.execute(query)
        for row in cur:
            print("{TABLE_NAME:>30s} {TABLE_ROWS}".format(**row))

    That's far less code for something simple. Each row would look like this:

        {u'TABLE_NAME': u'user', u'TABLE_ROWS': 11}

    If you like named tuples better, you can do the same by simply passing the named_tuple argument:

        cur = cnx.cursor(named_tuple=True)
        cur.execute(query)
        for row in cur:
            if row.TABLE_ROWS > 0:
                print("{name:>30s} {rows}".format(
                    name=row.TABLE_NAME, rows=row.TABLE_ROWS))

    You can also combine either with the raw=True argument to get raw cursors.

    Option files support added

    Option files can now be read, so you don't have to repeat all these connection arguments everywhere in your source code. There are lots of ways to do this, but we needed to read and support the MySQL option files read by the client tools and server.

        import mysql.connector
        cnx = mysql.connector.connect(option_files='/etc/mysql/connectors.cnf')

    By default we do not read any file. You have to explicitly specify which files to read, and in which order. The option groups that are read are client and connector_python. You can also override this and specify particular group(s) using the option_groups argument.

    And more..

    The packaging and other supporting files have been removed. If you'd like to see the WiX files (I dare you) for the Windows packaging, you can (still) check the v1.x branches. We moved them out because these modules and files are (or can be) fixed and developed by other teams.
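    To illustrate the option file format (the file content and path below are made up for the example): a basic MySQL option file is plain INI syntax, so the standard-library configparser can show what the connector would pick up from the client and connector_python groups:

```python
import configparser
import os
import tempfile

# Hypothetical option file content; [client] and [connector_python]
# are the groups Connector/Python reads by default.
OPTION_FILE = """\
[client]
user = app
database = mysql

[connector_python]
allow_local_infile = False
"""

path = os.path.join(tempfile.mkdtemp(), "connectors.cnf")
with open(path, "w") as f:
    f.write(OPTION_FILE)

# Parse it the way any INI reader would, to inspect the groups.
cfg = configparser.ConfigParser()
cfg.read(path)
print(cfg["client"]["user"])                          # app
print(cfg["connector_python"]["allow_local_infile"])  # False
```

    Note this only previews the file's contents; the connector itself also understands MySQL-specific details that a plain INI parser does not.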
    And watch out for v2.0.1!

    Useful links:
        Documentation:
        Release Notes:
        Downloads:
        Feedback, bugs and feature requests:
        Forum:

    Note that v2.0.0-alpha is not available through PyPI, and no alpha/beta/RC release will be.

  • Monitoring MySQL flow control in Percona XtraDB Cluster 5.6
    Monitoring flow control in a Galera cluster is very important: if you do not, you will not understand why writes may sometimes be stalled. Percona XtraDB Cluster 5.6 provides two status variables for such monitoring: wsrep_flow_control_paused and wsrep_flow_control_paused_ns. Which one should you use?

    What is flow control?

    Flow control does not exist with regular MySQL replication, only with Galera replication. It is simply the mechanism nodes use when they are not able to keep up with the write load: to keep replication synchronous, the node that is starting to lag instructs the other nodes that writes should be paused for some time so it does not fall too far behind. If you are not familiar with this notion, you should read this blogpost.

    Triggering flow control and graphing it

    For this test, we'll use a 3-node Percona XtraDB Cluster 5.6 cluster. On node 3, we will adjust gcs.fc_limit so that flow control is triggered very quickly, and then we will lock the node:

        pxc3> set global wsrep_provider_options="gcs.fc_limit=1";
        pxc3> flush tables with read lock;

    Now we will use sysbench to insert rows on node 1:

        $ sysbench --test=oltp --oltp-table-size=50000 --mysql-user=root --mysql-socket=/tmp/pxc1.sock prepare

    Because of flow control, writes will be stalled and sysbench will hang. So after some time, we will release the lock on node 3:

        pxc3> unlock tables;

    During the whole process, wsrep_flow_control_paused and wsrep_flow_control_paused_ns are recorded every second with mysqladmin ext -i1. We can then graph the evolution of both variables. While we can clearly see when flow control was triggered on both graphs, it is much easier to tell when flow control stopped with wsrep_flow_control_paused_ns. This would be even more obvious if there had been several timeframes when flow control was in effect.

    Conclusion

    Monitoring a server is obviously necessary if you want to be able to catch issues, but you need to look at the right metrics.
    So don't be scared if you see that wsrep_flow_control_paused is not 0: it simply means that flow control has been triggered at some point since the server started. If you want to know what is happening right now, prefer wsrep_flow_control_paused_ns.

    The post Monitoring MySQL flow control in Percona XtraDB Cluster 5.6 appeared first on MySQL Performance Blog.
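    Because wsrep_flow_control_paused_ns is a cumulative counter of nanoseconds spent paused, two samples of it let you compute the fraction of an interval spent in flow control. A small sketch with made-up sample values:

```python
# Pause ratio between two samples of wsrep_flow_control_paused_ns
# taken interval_seconds apart: the counter is cumulative nanoseconds,
# so the delta divided by the interval length gives the fraction of
# time the node spent paused by flow control.
def pause_ratio(prev_ns, curr_ns, interval_seconds):
    return (curr_ns - prev_ns) / (interval_seconds * 1e9)

# Hypothetical samples taken 1 second apart:
prev, curr = 5_000_000_000, 5_400_000_000
print(pause_ratio(prev, curr, 1))  # 0.4 -> paused 40% of the interval
```

    Graphing this ratio per sampling interval makes each flow-control episode stand out, which is exactly why the _ns variable is easier to interpret than the lifetime average.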

  • Prepared Statement Samples
    One of our most useful features is Top Queries, which allows users to examine various metrics for families of similar queries. Once a query family is selected, we can drill down into its query samples. Individual samples are shown on a scatterplot, and selecting one displays its captured SQL, latency, errors, and more. We collect query samples by decoding the MySQL protocol over TCP and extracting the query text; we also capture errors, warnings, and more by inspecting traffic over the wire.

    Until now, query samples excluded prepared statements. This is because prepared statements are not sent over the wire as strings of SQL text. Statement preparation and execution are quite different from regular query execution, but in the end we generate a good approximation of what the statement SQL would look like and display it on the samples page.

    Regenerating the statement text happens in three steps. First, we grab the statement text from a COM_STMT_PREPARE command sent from a client to the MySQL server. The server then sends the client a COM_STMT_PREPARE response with a statement ID. When the statement is to be executed, the client sends a COM_STMT_EXECUTE command to the server with the statement ID and its associated parameters, which are sent in the binary protocol format. When our agent sees a COM_STMT_EXECUTE, it decodes the parameters and does a string replace in the original statement text to approximate the SQL query text.

    Regenerating SQL from prepared statement executions is not perfect, but it should be very good. We understand the argument types, escape strings and text when necessary, and represent parameters in hex when we have to. You'll notice that the metric parameter is a quoted string, not a literal number. That's because we actually send that parameter in ASCII format, as text. The protocol doesn't lie!
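    The string-replace step described above can be sketched roughly as follows (a simplified illustration under my own assumptions, not the agent's actual code): substitute each placeholder in the prepared text with an escaped literal, quoting strings, hex-encoding bytes, and printing numbers as-is:

```python
# Simplified sketch of rebuilding statement text from a prepared
# statement and its execute-time parameters. Real binary-protocol
# decoding is more involved; this only shows the substitution step.
def render_statement(sql, params):
    parts = sql.split("?")
    if len(parts) - 1 != len(params):
        raise ValueError("parameter count does not match placeholders")
    out = [parts[0]]
    for param, tail in zip(params, parts[1:]):
        if param is None:
            literal = "NULL"
        elif isinstance(param, str):
            # Quote text, doubling embedded single quotes.
            literal = "'" + param.replace("'", "''") + "'"
        elif isinstance(param, (bytes, bytearray)):
            literal = "0x" + bytes(param).hex()  # hex when we have to
        else:
            literal = str(param)
        out.append(literal)
        out.append(tail)
    return "".join(out)

print(render_statement(
    "SELECT * FROM metrics WHERE host = ? AND value > ?",
    ("db1", 42)))
# SELECT * FROM metrics WHERE host = 'db1' AND value > 42
```

    A sketch like this also shows why a text-typed parameter comes out quoted even when it looks numeric: the substitution reflects the type the client actually sent.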
    Of course, we're showing a sample of SQL that never executed as displayed, which is kind of disingenuous, isn't it? But if you think about it for a moment, this is exactly what you get from sources like the slow query log, which also shows SQL reconstructed from the prepared statement with parameters substituted in. The main difference is that the slow query log doesn't tell you it's showing SQL that never actually executed, whereas here the Action column is execute, which tells you it's a prepared statement, not a simple query.

    Astute readers will have noticed the EXPLAIN tab to the right of the SQL sample. That's an upcoming feature; stay tuned for more on that! If you are interested in seeing how this can benefit your company, please click below for a demo or here for a free trial.

  • Putting MySQL Fabric to Use: July 30 webinar
    Martin and I have recently been blogging together about MySQL Fabric (in case you've missed this, you can find the first post of the series here), and on July 30th we're going to be presenting a webinar on this topic titled "Putting MySQL Fabric to Use."

    The focus of the webinar is to help you get started quickly on this technology, so we'll include very few slides (mostly just a diagram or two) and then jump straight into shared-screen mode, with lots of live console and source code examples.

    In order to make the best use of time, we won't show you how to install and configure MySQL Fabric. However, we can point you to a few resources to help you get ready and even follow our examples as we go:

      • The official manual is an obvious starting point.
      • Our second post in the series includes configuration instructions.
      • This git repo contains the test environment we'll use to run our demos. Specifically, we'll use the sharding branch, so if you intend to follow our examples as we go, we recommend checking that one out.

    If you're interested, you can register for this webinar here, and if there's something specific you'd like to see (we had a request for PHP examples in the comments to our last post), feel free to post that as a comment. We can't promise we'll be able to cover all requests during the webinar, but we'll incorporate examples into the repo as time allows.

    Hope to see you then!

    The post Putting MySQL Fabric to Use: July 30 webinar appeared first on MySQL Performance Blog.



Copyright © 2001 - 2013 K-Factor Technologies, Inc.

