MySQL Community Reception at Oracle OpenWorld
If you are in San Francisco on October 27, 2015, please come along to our MySQL team reception at Jillian's at Metreon. You do not need an Oracle OpenWorld pass to attend; all we ask is that you please register in advance.
We look forward to seeing you there!
Pythian Achieves Amazon Web Services Partner Network DevOps Competency
Pythian DevOps services accelerate AWS adoption and delivery of innovative customer experiences

LAS VEGAS, NEVADA – AWS RE:INVENT – October 6, 2015 – Pythian, a global IT services company specializing in helping companies adopt disruptive technologies to optimize revenue-generating systems, today announced it has achieved the Amazon Web Services (AWS) Partner Network (APN) DevOps Competency. As an addition to Pythian’s existing Advanced Consulting Partner status in the APN, the DevOps Competency further validates the company’s deep capabilities and demonstrated performance in helping customers transform their business to be more efficient and agile leveraging the AWS platform and DevOps principles, particularly around continuous integration, continuous delivery, and configuration management.

Pythian DevOps services use a documented maturity model and client-centric strategic planning process to introduce cultural changes, best practices, and new technologies to companies seeking to deliver high quality software more rapidly. The Pythian DevOps offering leverages enhanced teamwork models, innovative process methodologies and repetitive task automation. It also helps clients improve collaboration between business, development, and operations, and establishes and optimizes agile application life cycles for competitive advantage.

“Pythian DevOps aligns client and sector-specific needs with world-class strategy and implementation, maximizing business value and extraction for the modern software-based business,” said Aaron Lee, vice president, transformation services at Pythian.

Organizations like Shutterfly, an Internet-based publishing service, have already benefited from Pythian DevOps-related expertise including cloud database replication, sharding, monitoring, and tooling.
Pythian brought Shutterfly deep MySQL and distributed systems experience, along with a DevOps team that provided expertise in configuration management, orchestration, and operational visibility, all critical to helping the company integrate a new application on time. By taking advantage of AWS, Amazon Relational Database Service (Amazon RDS), Chef, and MySQL, Shutterfly experienced zero downtime during launch and improved application stability, flexibility, and scalability for bursts and web-scale growth.

“Pythian DevOps and managed services was a good investment because we got the environment we needed, when we needed it. In the context of the project, it was a high-value solution that fit our budget,” said Charles Howard, DBA manager of Internet operations at Shutterfly.

Pythian’s expert DevOps team is known for getting results with minimal to no disruption to the client’s operations. Pythian DevOps clients enjoy increased efficiencies, best-in-class security, and lower operational costs, along with the ability to deliver more innovative, higher-quality services.

“Our commitment to excellence means Pythian hires only the top 5 percent of global technical talent, so clients know their critical AWS cloud-based systems are being managed by the industry’s leading technology professionals. Selection for the APN DevOps Competency validates what our clients value in us – a unique approach that includes a dedicated team, transparency, and a secure framework,” said Vanessa Simmons, director of business development at Pythian.

Pythian was also recently named an RDS Database Migration Partner to help customers with database migration projects using the RDS Migration Tool.

For more information on Pythian DevOps services, visit the Pythian booth #1255 this week, October 6-9th, at AWS re:Invent.
Learn more about Pythian DevOps services or view a summary of the Pythian DevOps service offering.

About Pythian

Founded in 1997, Pythian is a global IT services company that helps companies compete by adopting disruptive technologies such as cloud, big data, advanced analytics, and DevOps to advance innovation and increase agility. Specializing in designing, implementing, and managing systems that directly contribute to revenue growth and business success, Pythian’s highly skilled technical teams work as an integrated extension of our clients’ organizations to deliver continuous transformation and uninterrupted operational excellence.

Press Contact
Lynda Partner
Email: email@example.com

The post Pythian Achieves Amazon Web Services Partner Network DevOps Competency appeared first on Pythian.
Advanced MySQL Server Auditing
We remember when we first started auditing MySQL servers; there were very few tools available. In one of our early big gigs, we were battling serious performance issues for a client. At the time, tuning-primer.sh was about the only tool available that could be used to diagnose performance bottlenecks. Fortunately, with a lot of manual interpretation of the raw data it presented, we were able to find the issues with the server and suggest how to resolve them. For that we are very thankful. It was a first step in analyzing MySQL status variables, minimizing the number of formulas to learn and calculate by hand. Obviously, doing it by hand takes forever!
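To give a flavor of the arithmetic that tuning-primer.sh saved us from doing by hand, here is a minimal sketch of two of the classic ratios derived from SHOW GLOBAL STATUS counters. The numbers below are made-up sample values for illustration, not output from any real server:

```python
# Illustrative only: sample SHOW GLOBAL STATUS counters (made-up values).
status = {
    "Innodb_buffer_pool_reads": 1_000,          # reads that missed the buffer pool
    "Innodb_buffer_pool_read_requests": 500_000,
    "Created_tmp_disk_tables": 200,
    "Created_tmp_tables": 10_000,
}

# InnoDB buffer pool hit rate: fraction of read requests served from memory.
bp_hit_rate = 1 - (status["Innodb_buffer_pool_reads"]
                   / status["Innodb_buffer_pool_read_requests"])

# Share of implicit temporary tables that spilled to disk.
tmp_disk_ratio = (status["Created_tmp_disk_tables"]
                  / status["Created_tmp_tables"])

print(f"buffer pool hit rate: {bp_hit_rate:.2%}")    # 99.80%
print(f"tmp tables on disk:   {tmp_disk_ratio:.2%}")  # 2.00%
```

Multiply that by a few dozen rules, and by every server you manage, and the appeal of automating the analysis becomes obvious.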
Now fast-forward to today. Unfortunately, not much has changed. Many DBAs and developers are still using open source tools such as tuning-primer, mysqltuner.pl, mysqlreport, and so on. Don’t get us wrong; those have been great tools, but there is now another tool that far exceeds what they can do. Analyst for MySQL can do practically everything that those tools can do and more. And it can save you a lot of time in the process! And who doesn’t need to save time, especially when managing hundreds of MySQL servers?
We developed Analyst originally as a GUI version of tuning-primer.sh. Right from the beginning, we added the capability to add more rules. Over time, we added several more rules, hundreds in fact! This extended the power of Analyst to include auditing for security, clustering, replication, best practices, and more.
While we were at it, we thought “Wouldn’t it be great to see the data in graphical form in charts and graphs?” And “Oh, wouldn’t it be wonderful if it scored the server in different areas such as performance, security, and so on?”
From there, we realized that we were spending hours doing all of this manually. Hours that we had to bill to our customers. While that was great for us, we knew that this tool would save us a lot of time; we could then pass those savings on to our clients, which in turn allowed us to pick up a lot more work. Everyone wants to be more efficient.
Each time we did a server audit, we manually put together a large report for the client (or a manager) to show them what action items needed to be taken to bring their server into spec and optimize it for performance, improve security, and so on. It didn’t take long to see the value in making Analyst able to export reports that we could then simply email to managers and clients. And, wow, has that made life easier!
While we are ever indebted to tuning-primer and the other open source scripts we have used, we think it is time for a new and improved tool for the MySQL community. Take a look at a comparison of features below:
Comparison of Free Scripts to Analyst for MySQL
Compares features of free open source scripts, such as tuning-primer.sh, mysqltuner.pl, and mysqlreport, to Analyst for MySQL. Please note some features are limited to the paid versions.
Scripts | Analyst for MySQL
No Agent Required
No Installation Required on Server
Remote SSH Connectivity
Graphical User Interface (GUI)
Operating Systems Supported
OS Metric Auditing
Easy Team Collaboration
Additional Export Formats
Graphs & Charts
Advisor Rule Upgrades
Custom Advisor Rules
Technical Support Available
Download a FREE copy of Analyst for MySQL. We think you will be pleasantly surprised by what it can do! If you need even more power, the Professional and Enterprise editions extend the functionality further.
We would love to hear what you think by leaving comments below. If you have used Analyst for MySQL and found it to be helpful, we would love to hear about that as well! Customer feedback drives us to keep making additional improvements and enhancements, as well as adding new features.
Download Analyst for MySQL FREE (Windows, Mac, & Linux)
Two Common Reasons for Replication Lag
As a MySQL support engineer, I see this so often that I felt it would help to write a post about it.
Customers contact us to ask about replication lag – i.e., a slave is very far behind the master and not catching up. (“Very far” meaning hours behind.)
The most common reason I encounter is databases having InnoDB tables without explicit primary keys. Especially if you are using row-based replication (“RBR”), you want explicit primary keys on all your tables; otherwise, the slave will scan the entire table for each row that is updated (see Bug #53375). Maybe I’m a relational purist, but why would you want tables without explicit primary keys, anyway? (On the other, less-purist hand, for performance reasons, sometimes a short surrogate PK may be preferred to a lengthy logical one.)
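One common way to find the offending tables is a query against information_schema. This is a widely used pattern rather than an official recipe, so adjust the excluded schemas to your environment:

```sql
-- List base tables that have no PRIMARY KEY constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;
```

Any table this returns is a candidate for adding an explicit primary key before it bites you on a row-based slave.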
The other common reason is that the slave is single-threaded, and its single-threaded performance can’t keep up with the multi-threaded master. In this case, if multiple databases are being updated, enabling the multi-threaded slave can help. (See the manual for more.)
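As a sketch, on a MySQL 5.6 slave the multi-threaded applier is enabled with slave_parallel_workers. Keep in mind that in 5.6, transactions are only applied in parallel when they touch different databases, which is why this helps most when multiple databases are being updated:

```ini
# my.cnf on the slave (MySQL 5.6 and later); 4 workers is an example value.
[mysqld]
slave_parallel_workers = 4
```

The setting can also be changed at runtime with STOP SLAVE SQL_THREAD; SET GLOBAL slave_parallel_workers = 4; START SLAVE SQL_THREAD;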
Nothing new or groundbreaking here, but I hope this helps someone who is googling “MySQL replication lag”.
TwinDB Really Loves Backups
A week or two ago, one of my former colleagues at Percona, Jervin Real, gave a talk titled Evolving Backups Strategy, Deploying pyxbackup at Percona Live 2015 in Amsterdam. I think Jervin raised some very good points about where MySQL backup solutions in general fall short. There are definitely a lot of tools and scripts out there that claim to do MySQL backups correctly but don’t actually do it correctly. What I am more interested in, though, is measuring TwinDB against the points that Jervin highlighted to see if TwinDB falls short too.
We distribute the TwinDB agent as a package that can be installed using the standard OS package management system: for example, using YUM on CentOS, RHEL, and Amazon Linux, or using APT on Debian and Ubuntu. Since the package is installed using standard packaging systems, any dependencies are resolved in the standard way. The packages are publicly available in the TwinDB repository:
https://packagecloud.io/twindb/main for CentOS, RHEL, Debian and Ubuntu
https://packagecloud.io/twindb/amzn_main for Amazon Linux
Note that we had to package the TwinDB agent separately for Amazon Linux because, since version 2015.03, it is a special case: it can be treated as neither RHEL 6 nor RHEL 7. One issue we hit was with the perl-DBD-MySQL package, which is a dependency of the Percona XtraBackup package. We had to repackage perl-DBD-MySQL; details on why and how can be found in one of our earlier blog posts, Xtrabackup and MySQL 5.6 on Amazon instance.
Furthermore, we have tried to make the instructions for installing the agent very clear.
The TwinDB agent itself is written in Python. Python as a language is portable and runs on many Unix variants, as mentioned in the General Python FAQ. To further test portability, we also built a Continuous Integration/Continuous Deployment pipeline that runs automated integration tests on every platform we support.
You can rest assured that we take portability very seriously and that nothing gets rolled out without being automatically tested.
I will be publishing a separate post detailing how we implemented the Continuous Integration/Continuous Deployment pipeline.
A Tool for non-DBAs
The main reason we developed the TwinDB agent was to simplify the backup process. We did not want to build something that was complex to use or that required the user to be a DBA. That is why, in most cases, no configuration is needed on the part of the user. Settings like the backup schedule and retention policy have sane defaults that work in most cases. A configuration can be tied to a group of MySQL servers or to a single MySQL server, and if you want to change the backup schedule, it is very simple to do from the GUI.
You do not need to SSH into the MySQL server to do any kind of configuration of the TwinDB agent. The agent is also smart enough to detect the replication topology and choose an appropriate slave to take backups from. If the replication topology changes, the agent detects that and automatically chooses a different slave to back up.
No babysitting needed
As you will have noticed from the description above, in the majority of cases the agent is autonomous. The only times you really need to interact with it are, for example, through the GUI when you have to make configuration changes, or when you need to restore backups. And then you have TwinDB support itself, whenever and wherever you need it.
Remote streaming of backups
The TwinDB agent stores backups on dedicated storage space rather than on the same server running MySQL. You do not even have to manage the storage space; TwinDB manages it for you and provides two options:
TwinDB-hosted scalable and secure storage in the cloud
On-premises secure hosting on your hardware, be it baremetal or your private cloud
There are many reasons why you would back up a MySQL server, but the most important one is to be able to restore the backup as and when needed. We at TwinDB understand that and have simplified the restore process so that you can restore a backup with just a click.
Off-site backups have traditionally been a starting point for data protection. However, there are risks involved at multiple stages: transporting the backups and storing them. Therefore, from our perspective, encryption of data-in-flight and encryption of data-at-rest are equally important, as together they enable both safe transport and safe storage of backed-up data. The next logical question is: how exactly is encryption done? TwinDB uses asymmetric encryption based on OpenPGP. The GPG key is auto-generated on the MySQL server being backed up by the agent, and the users own the GPG private key. Furthermore, encryption is always on. So whether you use the TwinDB agent as an off-site backup solution or back up to on-premises storage, you can rest assured that the backups are always encrypted.
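Conceptually, an OpenPGP-encrypted backup pipeline looks something like the sketch below. This is an illustration of the scheme rather than TwinDB’s actual commands, and the paths and key ID are hypothetical:

```shell
# Encrypt the backup stream with the server's public key; only the
# holder of the matching private key can decrypt it later.
innobackupex --stream=xbstream /tmp \
  | gpg --encrypt --recipient backups@db1.example.com \
  > /backups/backup.xbstream.gpg

# Restore side: decrypt with the private key, then unpack the stream.
gpg --decrypt /backups/backup.xbstream.gpg | xbstream -x -C /var/restore
```

Because only the public key lives on the backed-up server, a compromise of the storage or the transport channel does not expose the data.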
Full and incremental backups
The TwinDB agent supports both full and incremental backups, and no additional configuration is needed to enable incremental backups. The GUI allows you to configure the backup schedule, including how often full and incremental backups are taken.
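Under the hood, an incremental cycle with Percona XtraBackup (which, as noted above, the agent depends on) looks roughly like the following. This is a hand-written sketch with hypothetical paths; the agent drives the tool for you:

```shell
# Full backup as the base.
innobackupex --no-timestamp /backups/full

# Incremental backup: copies only the InnoDB pages changed since the base.
innobackupex --incremental --incremental-basedir=/backups/full \
             --no-timestamp /backups/inc1
```

Because an incremental copies only changed pages, it is much smaller and faster than a full backup, at the cost of a slightly more involved restore.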
We are working on a feature that will allow in-flight backup validation. Our extensive experience with the InnoDB Data Recovery Tool and with data recovery itself gives us a head start on how we want to implement backup validation. In-flight validation will provide immediate feedback, in contrast to the backup validation techniques generally used today. This is all the more beneficial for users with large datasets.
At the moment, though, our backup validation strategy revolves around regular backup restores.
Reading through all that I have written, I hope you get a better picture of how the TwinDB agent stacks up against other backup solutions. I certainly got a very clear picture when comparing it against the points that my former colleague Jervin raised. I think TwinDB is a perfect example of how a backup solution should be implemented, and furthermore, it just works.
The post TwinDB Really Loves Backups appeared first on Backup and Data Recovery for MySQL.