Amazon RDS Migration Tool
Amazon has just released their RDS Migration Tool, and Pythian has recently undertaken training to use it for our clients. I wanted to share my initial thoughts on the tool, give some background on its internals, and provide a walk-through of the functionality it will be most commonly used for.

There are many factors to consider when evaluating cloud service providers, including cost, performance, and high availability and disaster recovery options. One of the most critical and overlooked elements of any cloud offering, though, is the ease of migration. Often, weeks are spent evaluating all of the options, only to discover after the choice is made that the migration will take hours of expensive downtime to complete, and that there is no good rollback option in case of failure.

To reduce the friction inherent in the move to a DBaaS offering, Amazon has developed an RDS Migration Tool. This is an in-depth look at this new tool, which will be available after September 1, 2015. Contact Pythian to start a database migration.

With the introduction of the RDS Migration Tool, Amazon has provided a powerful engine capable of handling much more than basic migration tasks. It works natively with Oracle, SQL Server, Sybase, MySQL, PostgreSQL, Redshift (target only), and Aurora (target only), and provides an ODBC connector for all other source systems. The engine is powerful enough to handle fairly complex transformations and replication topologies; however, it is a migration tool and isn't intended for long-term use.

Architecture

Amazon's RDS Migration Tool architecture is very simple. It consists of your source system, an AWS VM with the Migration Tool installed on it, and the target RDS instance.

Each migration is broken up into Tasks.
Within a Task, a source and target database are defined, along with the ability to transform the data, filter the tables or data being moved, and perform complex transformations. Tasks can be scheduled to run at particular times, can be paused and resumed, and can alert on success or failure. It's important to note that if a task is paused while a table is loading, that table will be reloaded completely from the beginning when the task resumes.

Within a running task, the following high-level steps are performed:

• Data is pulled from the source using a single thread per table
• Data is converted into a generic data type
• All transformations are applied
• Data is re-converted into the target system's data type and inserted
• After the initial load, if specified, the tool monitors for updates to data and applies them in near real-time

While processing the data, each table has a single thread reading from it, and any updates are captured using the source system's native change data capture utility. Changes are not applied until after the initial load is completed. This is done to avoid overloading the source system, where it's assumed client applications will still be running.

Performance Considerations

There are several factors which might limit the performance seen when migrating a database.

Network Bandwidth

Network bandwidth is probably the biggest contributor to performance issues across data centers, and there is no magic button when moving to RDS.
If the database is simply too big or too busy for the network to handle the data being sent across, then other options may need to be explored or used in conjunction with this tool. Some workarounds to consider when network performance is slow include:

• Set up AWS Direct Connect
• Use a bulk-load utility, and then use the tool to catch up on transactions
• Only migrate data from a particular point in time

RDS Migration Tool Server CPU

The migration tool converts all data into a common data type before performing any transformations, then converts it into the target database's data type. This is obviously very heavy on the server's CPU, and this is where the main performance bottlenecks on the server are seen.

Capacity of Source Database

This tool uses a single SELECT statement to migrate the data, and then returns for any changed data after the initial bulk load is completed. On a busy system, this can be a lot of undo and redo data to migrate, and the source system needs to be watched closely to ensure the log files don't grow out of control.

Capacity of Target Database

In the best-case scenario, this will be the limiter, as it means all other systems are moving very fast. Amazon recommends disabling backups for the RDS system while the migration is running to minimize logging.

Walkthrough

The following walkthrough looks at the below capabilities of this tool in version 1.2:

• Bulk data migration to and from the client's environment and Amazon RDS
• Near real-time updates to data after the initial load is completed
• The ability to transform data or add auditing information on the fly
• Filtering capabilities at the table or schema level

You will need to have set up network access to your databases for the RDS Migration Tool.

1. After confirming access with your account manager, access the tool by opening the AWS console, selecting EC2, and choosing AMIs.

2. Select the correct AMI and build your new VM. Amazon recommends an M4.large or M4.xlarge.

3.
After building the new VM, you will need to install the connectors for your database engine. In this example, we'll be using Oracle Instant Client 126.96.36.199 and MySQL ODBC Connector 5.2.7. For the SQL Server client tools, you will need to stop the Migration services before installing.

4. Access the Migration Tool:

Within VM: http://localhost/AmazonRDSMigrationConsole/
Public URL: https://[VM-DNS]/AmazonRDSMigrationConsole/
The username/password is the Administrator login to the VM.

5. The first screen after logging in displays all of your current tasks and their statuses.

6. Clicking on the Tasks menu in the upper-left corner will bring up a drop-down menu to access Global Settings. From here, you can set Notifications, Error Handling, Logging, etc.

7. Back on the Tasks menu, click the Manage Databases button to add the source and target databases. As mentioned earlier, this walkthrough will be an Oracle to Aurora migration. Aurora targets are treated as MySQL databases for the purposes of this tool.

8. After defining your connections, close the Manage Databases pop-up and select New Task. Here, you can define whether the task will perform a bulk load of your data and/or whether it will attempt to apply changes made.

9. After closing the New Task window, simply drag & drop the source and target connectors into the task.

10. By selecting Task Settings, you can now define task-level settings such as the number of threads, whether to truncate or append data, and how a restart is handled when the task is paused. You can also override the global error handling and logging settings here. The best-practice recommendation is to find the largest LOB value in your source database and set that as the max LOB size in the task. Setting this value allows the task to optimize LOB handling, and will give the best performance.

11. Select the Table Selection button to choose which tables will be migrated. The tool uses wildcard searches to allow any combination of tables to exclude or include.
For example, you can:

• Include all tables in the database
• Include all tables in a schema or set of schemas
• Exclude individual tables and bring over all remaining tables
• Include individual tables and exclude all remaining tables

The tool has an Expand List button which will display all tables that will be migrated. In this screenshot, all tables in the MUSER08 schema that start with T1 will be migrated, while all tables that start with T2 will be excluded EXCEPT for the T20, T21, T22, and T23 tables.

12. After defining which tables will be migrated, select an individual table and choose the Table Settings button. Here you can add transformations for the individual tables, add new columns or remove existing ones, and filter the data that is brought over. In this screenshot, the T1 table records will only be brought over if the ID is greater than or equal to 50 and the C1 column is LIKE 'Migrated%'.

13. Select the Global Transformations button. Like the table selection screen, this uses wildcards to define which tables the transformations will be applied to. You can:

• Rename the schema
• Rename the table
• Rename columns
• Add new columns
• Drop existing columns
• Change the column data types

In this screenshot, a new column named MigratedDateTime will be created on all tables and populated with the current DateTime value.

14. Finally, save the task and choose Run. This will kick off the migration process and bring up the Monitoring window. From here, you can see the current task's status, notifications, and errors, as well as get an idea of the remaining time.
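The include/exclude resolution in the screenshot above can be sketched in a few lines. This is a hypothetical helper, not the tool's actual code; it assumes shell-style wildcards and that an explicit include of a table overrides a wildcard exclude, which reproduces the T1/T2/T20-T23 example:

```python
from fnmatch import fnmatchcase

def select_tables(tables, include_patterns, exclude_patterns):
    """Resolve include/exclude wildcard rules in the spirit of the task's
    Table Selection screen (hypothetical helper; the tool's exact
    precedence rules may differ)."""
    selected = []
    for t in tables:
        included = any(fnmatchcase(t, p) for p in include_patterns)
        excluded = any(fnmatchcase(t, p) for p in exclude_patterns)
        # An explicit include of the exact table name wins over a
        # wildcard exclude (how the T20-T23 exception works here).
        explicit = any(p == t for p in include_patterns)
        if (included and not excluded) or explicit:
            selected.append(t)
    return sorted(selected)

tables = ["MUSER08.T1", "MUSER08.T15", "MUSER08.T2", "MUSER08.T20",
          "MUSER08.T21", "MUSER08.T22", "MUSER08.T23", "MUSER08.T30"]
includes = ["MUSER08.T1*", "MUSER08.T20", "MUSER08.T21",
            "MUSER08.T22", "MUSER08.T23"]
excludes = ["MUSER08.T2*"]
print(select_tables(tables, includes, excludes))
```

Only the T1-prefixed tables and the four explicitly included T2x tables survive the filter.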
Starting with MySQL 5.7.8, we are shipping a new client utility called mysqlpump that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects and table data. The goal of mysqlpump is to have a modern utility that is extendable and has native support for parallelization. We felt that the best way to achieve this was to write an entirely new tool where we would be free to break compatibility with mysqldump and where we would not be forced to implement some of the legacy functionality that it provides. mysqlpump executes all of the operations related to dumping multiple databases and the objects inside them in parallel in order to drastically reduce the total time needed to perform a logical backup.
Here is the command you would use to do a basic backup of all databases (by default mysqlpump will dump all databases):

mysqlpump --user=root --password > full_backup.sql
This is equivalent to explicitly requesting all databases:

mysqlpump --all-databases --user=root --password > full_backup.sql
Note: mysqlpump will not dump the following special internal databases by default: PERFORMANCE_SCHEMA, INFORMATION_SCHEMA, SYS_SCHEMA, ndbinfo. mysqlpump will also not dump the following system tables from the mysql schema unless they are explicitly requested using the --include-tables option: user, db, tables_priv, columns_priv, procs_priv, proxies_priv, event, proc, apply_status, schema, general_log, slow_log.
mysqlpump divides the dump process into several sub-tasks and then adds these sub-tasks to a multi-threaded queue. This queue is then processed by N threads (2 by default) in order to allow the work to be done in parallel. Each thread makes a connection to the MySQL server to retrieve all necessary information and then begins its work. The number of threads used can be configured using the --default-parallelism and --parallel-schemas options.
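The queueing scheme described above can be sketched as follows. The `plan_queues` helper is hypothetical (mysqlpump's internal scheduler is not a public API); it only illustrates how repeated --parallel-schemas options plus --default-parallelism could partition schemas into independently sized thread pools:

```python
from concurrent.futures import ThreadPoolExecutor

def plan_queues(schemas, parallel_schemas, default_parallelism=2):
    """Map schemas to worker queues. `parallel_schemas` is a list of
    (threads, [schemas]) entries mirroring repeated
    --parallel-schemas=N:db1,db2 options; schemas not named anywhere
    fall into the default queue. (Illustrative sketch only.)"""
    queues = []
    assigned = set()
    for threads, names in parallel_schemas:
        queues.append((threads or default_parallelism, list(names)))
        assigned.update(names)
    rest = [s for s in schemas if s not in assigned]
    queues.append((default_parallelism, rest))
    return queues

def dump_schema(name):
    return f"-- dump of {name}"  # stand-in for the real dump work

schemas = ["db1", "db2", "db3", "accounts", "inventory"]
# Equivalent in spirit to: --parallel-schemas=5:db1,db2 --default-parallelism=3
queues = plan_queues(schemas, [(5, ["db1", "db2"])], default_parallelism=3)
for threads, names in queues:
    with ThreadPoolExecutor(max_workers=threads) as pool:
        print(threads, list(pool.map(dump_schema, names)))
```

Each queue gets its own thread pool, so a slow schema in one queue does not block progress in another.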
To backup all databases with 4 threads:

mysqlpump --user=root --password --default-parallelism=4 > full_backup.sql

To create 2 worker queues, 1 queue to process databases db1,db2 and 1 queue to process all others, with 3 threads per queue (note that by default 2 threads will be created per queue in order to complete the dump tasks):

mysqlpump --user=root --password --parallel-schemas=db1,db2 --default-parallelism=3 > full_backup.sql

To spawn 5 threads to work on the first queue, in which db1,db2 will be processed, and 3 threads to work on the default queue for the rest of the databases:

mysqlpump --user=root --password --parallel-schemas=5:db1,db2 --default-parallelism=3 > full_backup.sql

To spawn 5 threads to work on the first queue, in which db1,db2 will be processed, 2 threads to work on the second queue to process db3,db4, and 3 threads to work on the default queue used for all other databases:

mysqlpump --user=root --password --parallel-schemas=5:db1,db2 --parallel-schemas=2:db3,db4 --default-parallelism=3 > full_backup.sql

To backup only the 'accounts' and 'inventory' databases/schemas:

mysqlpump --databases accounts inventory --user=root --password > partial_backup.sql

If we want to export only the metadata information and skip all the data associated with the tables, we can use the --skip-dump-rows option:

mysqlpump --databases accounts inventory --skip-dump-rows --user=root --password > partial_backup.sql

This example would only dump databases/schemas db1 and db2:

mysqlpump --user=root --password --include-databases=db1,db2 --result-file=db1_db2_backup.sql

This example would dump all databases/schemas with a name starting with 'db':

mysqlpump --user=root --password --include-databases=db% --result-file=all_db_backup.sql

This example would dump all databases/schemas except db1 and db2:

mysqlpump --user=root --password --exclude-databases=db1,db2 --result-file=partial_backup.sql

This example would dump all tables from all databases/schemas except a table named 't' that may be present
within any database/schema:

mysqlpump --user=root --password --exclude-tables=t --result-file=partial_backup.sql

This example would dump all tables from all databases/schemas except table names matching the '__user' wildcard pattern, where '_' matches any single character, in any database/schema (dbuser, STuser, 45user, etc.):

mysqlpump --user=root --password --exclude-tables=__user --result-file=partial_backup.sql

This example would dump only events named 'ev2' and routines named 'p1' from all databases/schemas, excluding the mysql system database/schema:

mysqlpump --user=root --password --include-events=ev2 --include-routines=p1 --exclude-databases=mysql --result-file=partial_backup.sql

This example would only dump the users present in the mysql.user table:

mysqlpump --user=root --password --exclude-databases=% --users

This example would dump all users present in the mysql.user table except the root user(s):

mysqlpump --user=root --password --users --exclude-users=root
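The include/exclude options above use SQL LIKE-style wildcards ('%' for any run of characters, '_' for exactly one) rather than full regular expressions. A small sketch of how a pattern like '__user' maps onto a regular expression:

```python
import re

def like_to_regex(pattern):
    """Translate the %/_ wildcards used by mysqlpump's include/exclude
    options into an anchored regular expression:
    '%' matches any run of characters, '_' exactly one character."""
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return re.compile("^" + "".join(out) + "$")

rx = like_to_regex("__user")
print([n for n in ["dbuser", "STuser", "45user", "user", "myuser"]
       if rx.match(n)])
```

The pattern requires exactly two leading characters, so 'user' itself is not matched while 'dbuser', 'STuser', and '45user' are.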
By default mysqlpump will also show the dump progress. For example:

mysqlpump --user=root --password > full_backup.sql
Dump progress: 0/1 tables, 6/6 rows
Dump progress: 4/6 tables, 53964/1867889 rows
Dump progress: 4/6 tables, 109464/1867889 rows
Dump progress: 4/6 tables, 173714/1867889 rows
Dump progress: 4/6 tables, 252464/1867889 rows
Dump progress: 4/6 tables, 316464/1867889 rows
Dump progress: 4/6 tables, 332964/1867889 rows
Dump completed in 40554 milliseconds
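If you are driving mysqlpump from a script, the progress lines shown above are easy to turn into a completion percentage. A small illustrative parser (the line format is taken from the output above; any other line returns None):

```python
import re

def parse_progress(line):
    """Parse a 'Dump progress: T/TT tables, R/RR rows' line into a
    row-based completion percentage; returns None for other lines."""
    m = re.match(r"Dump progress: (\d+)/(\d+) tables, (\d+)/(\d+) rows", line)
    if not m:
        return None
    _done_tables, _total_tables, done_rows, total_rows = map(int, m.groups())
    return round(100.0 * done_rows / total_rows, 1)

print(parse_progress("Dump progress: 4/6 tables, 332964/1867889 rows"))
```

Note the percentage is only approximate: the row totals are estimates taken from table statistics, not exact counts.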
Importing and restoring the data is easy. To import the generated backup file into a new instance of MySQL, you can simply use the mysql command-line client:

mysql -uroot -p < partial_backup.sql
mysqlpump currently tries to do as much work in parallel as possible and the dump threads lack a synchronization point before they start backing up the data. This makes it currently unsafe as a general purpose backup replacement (i.e. fully replacing mysqldump). We will be adding consistent backup related features and much more in upcoming versions.
For additional details on the development work so far, please see WL#7755.
We look forward to hearing from MySQL users as you begin trying out this new tool for creating logical dumps and backups! If you encounter any problems with this new tool—or you have ideas for new features—please let us know here in the comments, open a bug report / feature request at bugs.mysql.com, or open a support ticket.
As always, THANK YOU for using MySQL!
Percona Server 5.5.45-37.4 is now available
Percona is glad to announce the release of Percona Server 5.5.45-37.4 on September 2, 2015. Based on MySQL 5.5.45, including all the bug fixes in it, Percona Server 5.5.45-37.4 is now the current stable release in the 5.5 series. Percona Server is open-source and free. Details of the release can be found in the 5.5.45-37.4 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

Querying the INFORMATION_SCHEMA GLOBAL_TEMPORARY_TABLES table would crash threads working with internal temporary tables used by ALTER TABLE. Bug fixed #1113388.

The FLUSH INDEX_STATISTICS/FLUSH CHANGED_PAGE_BITMAPS and FLUSH USER_STATISTICS/RESET CHANGED_PAGE_BITMAPS pairs of commands were inadvertently joined, i.e. issuing either command had the effect of both. The first pair, besides flushing both index statistics and changed page bitmaps, had the effect of FLUSH INDEX_STATISTICS requiring SUPER instead of RELOAD privilege. The second pair resulted in FLUSH USER_STATISTICS destroying changed page bitmaps. Bug fixed #1472251.

If a new connection thread was created while a SHOW PROCESSLIST command or an INFORMATION_SCHEMA.PROCESSLIST query was in progress, it could have a negative TIME_MS value returned in the PROCESSLIST output. Bug fixed #1379582.

Other bugs fixed: #768038 and #1472256.

Release notes for Percona Server 5.5.45-37.4 are available in our online documentation. Bugs can be reported on the Launchpad bug tracker.
Orchestrator visual cheatsheet, TL;DR the "smart" way
Orchestrator is really growing. And the number of users (DBAs, sys admins) using it is growing, which gives me a lot of immediate feedback in the form of: "Look, there are just too many options to move slaves around! Which ones should we use?"
TL;DR look at the two visualized commands below
They are enough
The "smart" commands to end all commands
So all relocation commands are important and give you fine-grained, pin-pointed control over the method of topology refactoring. However, most of the time you just want to move those servers around. This is why there's a new "smart" mode which supports these two commands, which you should be happy to use:
relocate: move a single slave to another position
relocate-slaves: move all/some slaves of some server to another position.
What makes these commands smart? You can move slaves around from anywhere to anywhere, and orchestrator figures out the best execution path. If possible, it uses GTID. Not possible? Is Pseudo-GTID available? Great, use Pseudo-GTID. Oh, are there binlog servers involved? Really simple, use them. None of the above? Orchestrator will use "standard" binlog file:pos math (with limitations). Orchestrator will even figure out if multiple steps are necessary and will combine any of the above.
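That decision cascade can be sketched as a simple chain of checks. The attribute names below are hypothetical placeholders, not orchestrator's real API, and the real tool also combines several methods over multiple steps:

```python
def choose_relocation_method(slave, target):
    """Pick a relocation method in the order described above:
    GTID, then Pseudo-GTID, then binlog servers, then plain
    binlog file:pos math. (Illustrative sketch; attribute names
    are made up for this example.)"""
    if slave.get("gtid") and target.get("gtid"):
        return "gtid"
    if slave.get("pseudo_gtid"):
        return "pseudo-gtid"
    if target.get("is_binlog_server"):
        return "binlog-server"
    return "file:pos"

# A slave without GTID but with Pseudo-GTID injected:
print(choose_relocation_method({"gtid": False, "pseudo_gtid": True}, {}))
```

The point of the "smart" commands is exactly that this cascade runs inside orchestrator, so the operator never has to pick the method by hand.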
So you don't have to remember all the possible ways and options. The visual cheatsheet now boils down to these two:
Let's take a slightly deeper look
Moves a single slave X from any point to replicate from some other server Z
As usual, orchestrator first confirms that X can replicate from Z (Z has log-slave-updates or is a binlog server; binlog format compatible, etc.)
With GTID/Pseudo-GTID, move from any point to any point
With binlog servers, move around the binlog server environment (at this point you are not using binlog servers, so ignore)
With normal replication, requires an "atomic" operation: either move the slave one level up, or make it replicate from a sibling.
You can relocate under the same master, effectively repointing the slave back to its existing position. This serves to re-resolve the master hostname, to reset relay logs, or to verify that the slave is aligned with the master.
Or combination of the above
Moves multiple slaves of server X to replicate from some other server W
By default moves all slaves of X, where possible
Each slave verified to be able to replicate from W. Those that can't are left behind.
Can filter using regular expression via --pattern=some?[reg]ex on slave hostnames
Can relocate under same master, effectively repointing all slaves (see above explanation)
Can relocate below one of the slaves of X itself. If Y is itself a slave of X and you're executing:
orchestrator -c relocate-slaves -i X -d Y
then Y is excluded from the list of relocated slaves. This effectively means "make Y local master of its current siblings". Very cool stuff.
When binlog servers involved, simple math-less repointing takes place
When GTID involved, let MySQL/MariaDB (both supported) do the math on a per-server basis
When Pseudo-GTID involved, greatly optimize by dividing into equivalence classes and only doing the math on a representative of each class.
Or combination of the above
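The Pseudo-GTID optimization mentioned above amounts to grouping slaves by their current binlog coordinates, doing the expensive coordinate math once per group on a representative, and repointing every member of the group the same way. A simplified illustration:

```python
from collections import defaultdict

def equivalence_classes(slaves):
    """Group slaves by their current binlog coordinates. With
    Pseudo-GTID, the coordinate math only needs to run once per
    class, on a representative; every member of the class can then
    be repointed identically. (Simplified sketch.)"""
    classes = defaultdict(list)
    for name, coords in slaves.items():
        classes[coords].append(name)
    return {coords: sorted(names) for coords, names in classes.items()}

slaves = {
    "s1": ("mysql-bin.000007", 1090),
    "s2": ("mysql-bin.000007", 1090),  # same coordinates as s1
    "s3": ("mysql-bin.000009", 200),
}
print(equivalence_classes(slaves))
```

With many slaves parked at identical coordinates (common after a master crash), this turns N expensive searches into just a handful.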
What about the other commands?
The above covers such commands as move-up, move-below, repoint, repoint-slaves, match-below, multi-match-slaves, regroup-slaves and more. It does not cover enslave-master and make-co-master which are a bit different.
My guess is you can cover 98% of your operations with relocate and relocate-slaves. Otherwise, just run orchestrator with no arguments or options to get a full-blown breakdown of the available commands.
relocate-slaves is achieved by dragging the slaves of an instance on top of a new master, as follows:
Find latest orchestrator release at https://github.com/outbrain/orchestrator/releases
How MySQL-Sandbox is tested, and tests MySQL in the process
MySQL-Sandbox is a great tool for testing a new release, and in fact this is what I do when a new MySQL tarball becomes available. I don't think many people are aware of the full testing capabilities of the sandbox, though.

When you think about testing, you may just think of creating a sandbox with the new tarball and then hammering it with your pet procedure. That works, of course, as the main purpose of MySQL-Sandbox is to allow you to do just that. There is, however, a full test suite that can tell you in a short while whether your tarball is compatible with the past or not.

This procedure is quite strict. It has happened several times that I caught a bug in a new release of MySQL, or Percona Server, or MariaDB, just by running this suite.

How MySQL-Sandbox gets tested

Before describing how to test, I would like to show what I do. When a new version of MySQL-Sandbox is ready (and I happen to have time, because, hey! I also have a day job!) I subject it to the full test suite, which is about 500 different tests (the number may vary depending on the operating system and the MySQL version being tested). Then I repeat the full test for every version that I have stored in my test machines. That is, from version 5.0 to 5.7, passing through Percona Server and MariaDB forks, I test about a dozen versions: 5.0.96, 5.1.73, 5.5.41, 88.8.131.52.184.108.40.206.8, 5.7.9, mariadb 10.0.20, mariadb 10.1.6, mariadb 5.5.40, percona server 5.5.43, percona server 5.6.25.

The above versions change when new releases are available. I repeat the same test on two different machines, covering OSX and Linux, for a grand total of ≈ 12,000 tests before I upload the tarball to CPAN.

What does the MySQL-Sandbox test suite do

There are 20 files in the ./t directory, each one starting a set of thematic tests:

t/01_modules.t tests the MySQL Sandbox module.
t/02_test_binaries.t tests the completeness of the MySQL-Sandbox deployment.
t/03_test_sandbox.t is the longest and most comprehensive test in the suite.
It installs every type of sandbox, and runs basic functional tests for each. It also includes an expensive test for sandbox installation with pattern recognition. In total, it runs for more than 10 minutes. This is where new versions may fail if they are not fully compatible with previous ones.
t/04_test_sbtool.t is a test for the companion tool, which can do many operations, including installing and testing hierarchical replication.
t/05_test_smoke.t tests a sandbox with the basic functionalities that were considered important to test manually when I was working at MySQL. Tired of testing it manually, I scripted the procedure. It may not be so vital now, but it does not hurt to run it.
t/06_test_user_defined.t is a demo test for the simplest user-defined tests implemented with the sandbox.
t/07_test_user_defined.t does the same as the above, but tests replication instead of a single server.
t/08_test_single_port_checking.t tests that we can install a sandbox multiple times, with automatic port choice.
t/09_test_multiple_port_checking.t tests group sandboxes for the above feature.
t/10_check_start_restart.t tests that we can restart sandboxed servers with options set on the fly.
t/11_replication_parameters.t tests the installation of replication groups with user parameters that affect one or more nodes.
t/12_custom_user_pwd.t tests the ability to define your own passwords instead of using the defaults.
t/13_innodb_plugin_install.t tests the installation of the InnoDB plugin (it runs only for version 5.1).
t/14_semi_synch_plugin_install.t tests the installation and functioning of the semi-synchronous plugin (requires version 5.5+).
t/15_user_privileges.t tests that the sandbox users have the privileges that were expected.
t/16_replication_options.t is similar to test #11, but tests a different set of options.
t/17_replication_flow.t tests that regular and circular replication groups can transfer data as expected.
t/18_force_creation.t tests that we can create a sandbox by overwriting
an existing one.
t/19_replication_gtid.t tests replication with GTID.
t/20_add_option.t tests the ability to restart a server with a permanent new option.

How to test a new tarball with MySQL-Sandbox

You do not need to go through the full testing of every version. That's done to make sure that MySQL-Sandbox does not have regressions and works as expected with all the versions. But if your purpose is to make sure that your new tarball is ready to be used, a single pass of the test suite will do. Here are the steps:

Get the tarball. For this demonstration, we will use the latest Percona Server: Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl100.tar.gz

We will need to download the MySQL-Sandbox code. It is not enough to have it installed, as we will need visibility of and access to the test files:

$ git clone https://github.com/datacharmer/mysql-sandbox.git
$ cd mysql-sandbox

We need the tarball to be extracted in the default directory ($HOME/opt/mysql):

$ make_sandbox --export_binaries --add_prefix=Perc /path/to/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl100.tar.gz

This step moves the tarball under $HOME/opt/mysql, creates a directory named Perc5.6.25, and installs a sandbox from that new directory. The previous step is only needed for two things: creating the directory in the right place, and making sure the tarball can be installed. Sometimes this step fails because of some surprising incompatibility. At this point, we can remove the new sandbox:

$ sbtool -o delete -s ~/sandboxes/msb_Perc5_6_25

Now we are ready, and we can start the test:

$ perl Makefile.PL
$ make
$ export TEST_VERSION=Perc5.6.25
$ make test

This will run for quite a while. Depending on the power of your server, it can take from 20 to 40 minutes.

$ make test
PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/01_modules.t ...................... ok
t/02_test_binaries.t ................ ok
t/03_test_sandbox.t ................. Testing . Please wait.
This will take a few minutes
t/03_test_sandbox.t ................. ok
t/04_test_sbtool.t .................. ok
t/05_test_smoke.t ................... ok
t/06_test_user_defined.t ............ ok
t/07_test_user_defined.t ............ ok
t/08_test_single_port_checking.t .... ok
t/09_test_multiple_port_checking.t .. ok
t/10_check_start_restart.t .......... ok
t/11_replication_parameters.t ....... ok
t/12_custom_user_pwd.t .............. ok
t/13_innodb_plugin_install.t ........ # Skipping version 5.6.26 for this test. It is not in the required range (5.1.6 - 5.1.99)
t/13_innodb_plugin_install.t ........ ok
t/14_semi_synch_plugin_install.t .... ok
t/15_user_privileges.t .............. ok
t/16_replication_options.t .......... ok
t/17_replication_flow.t ............. ok
t/18_force_creation.t ............... ok
t/19_replication_gtid.t ............. ok
t/20_add_option.t ................... ok
All tests successful.
Files=20, Tests=495, 1560 wallclock secs ( 0.22 usr 0.03 sys + 480.89 cusr 73.55 csys = 554.69 CPU)
Result: PASS

CAVEATS: When you run this test, you must be aware of three things:

If you have sandboxes running in $HOME/sandboxes, they will be stopped when the test starts. This is a necessity to avoid clashes, as the test needs to start every possible combination.

If your sandboxes were running in a non-standard location, you need to stop them manually, as they may make the test fail. The test will not care if the main server runs (on port 3306): it will not clash, and it will not attempt to shut it down.

If you stop the test during the execution, you may end up with orphaned servers, which are installed under ./t/test_sb. If that happens, you will need to stop and eventually remove them manually.

Running only a subset of the tests

If you want to test only something specific, you can do that by invoking one of the tests mentioned above.
For example, if you want to test only basic replication, you can do this:

$ perl Makefile.PL
$ make
$ export TEST_VERSION=Perc5.6.25
$ prove -b -v t/17_replication_flow.t
t/17_replication_flow.t ..
1..24
installing and starting master
installing slave 1
installing slave 2
starting slave 1. sandbox server started
starting slave 2. sandbox server started
initializing slave 1
initializing slave 2
replication directory installed in $HOME/sandboxes/repl_deployment
ok - Replication directory created
# Master log: mysql-bin.000001 - Position: 3667 - Rows: 2
# Testing slave #1
ok - Slave #1 acknowledged reception of transactions from master
ok - Slave #1 IO thread is running
ok - Slave #1 SQL thread is running
ok - Table t1 found on slave #1
ok - Table t1 has 2 rows on #1
# Testing slave #2
ok - Slave #2 acknowledged reception of transactions from master
ok - Slave #2 IO thread is running
ok - Slave #2 SQL thread is running
ok - Table t1 found on slave #2
ok - Table t1 has 2 rows on #2
# TESTS : 10
# FAILED: 0 ( 0.0%)
# PASSED: 10 (100.0%)
ok - Replication test was successful
# executing "stop" on $HOME/sandboxes/repl_deployment
executing "stop" on slave 1
executing "stop" on slave 2
executing "stop" on master
# executing "clear" on $HOME/sandboxes/repl_deployment
executing "clear" on slave 1
executing "clear" on slave 2
executing "clear" on master
sandbox at has been removed
ok - Regular replication directory repl_deployment removed
installing node 1
installing node 2
installing node 3
# server: 1:
# server: 2:
# server: 3:
# server: 1:
# server: 2:
# server: 3:
# server: 1:
# server: 2:
# server: 3:
Circular replication activated
group directory installed in $HOME/sandboxes/repl_deployment
ok - circular replication installed
# Master log: mysql-bin.000001 - Position: 1126 - Rows: 2
# Testing slave #1
ok - Slave #1 IO thread is running
ok - Slave #1 SQL thread is running
ok - Table t1 found on slave #1
ok - Table t1 has 2 rows on #1
# Testing slave #2
ok - Slave #2 IO thread is running
ok - Slave #2 SQL thread is running
ok - Table t1
found on slave #2
ok - Table t1 has 2 rows on #2
# TESTS : 8
# FAILED: 0 ( 0.0%)
# PASSED: 8 (100.0%)
ok - Replication test was successful
# executing "stop" on $HOME/sandboxes/repl_deployment
# server: 1:
# server: 2:
# server: 3:
executing "stop" on node 1
executing "stop" on node 2
executing "stop" on node 3
# executing "clear" on $HOME/sandboxes/repl_deployment
# server: 1:
# server: 2:
# server: 3:
executing "clear" on node 1
executing "clear" on node 2
executing "clear" on node 3
sandbox at has been removed
ok - Circular replication directory repl_deployment removed
ok
All tests successful.
Files=1, Tests=24, 69 wallclock secs ( 0.05 usr 0.01 sys + 3.24 cusr 2.57 csys = 5.87 CPU)
Result: PASS

You can repeat the procedure for every file in ./t/*.t

Writing your own tests

As a parting thought, let me mention again that you can create your own user-defined tests using the sandbox's simple hooks. Here is a sample user-defined test that you can run using test_sandbox.

There are two kinds of tests: shell and sql. The test type is defined by a keyword followed by a colon.

The 'shell' test requires a 'command', which is passed to a shell. The 'expected' label is a string that you expect to find within the shell output. If you don't expect anything, you can just say "expected = OK", meaning that you will be satisfied with a ZERO exit code reported by the operating system.
The 'msg' is the description of the test that is shown to you when the test runs.

shell:
command = make_sandbox $TEST_VERSION -- --no_confirm --sandbox_directory=msb_XXXX
expected = sandbox server started
msg = sandbox creation

The 'sql' test requires a 'path', which is the place where the test engine expects to find a 'use' script. The 'query' is passed to the above-mentioned script and the output is captured for further processing. The 'expected' parameter is a string that you want to find in the query output. The 'msg' parameter is like the one used with the 'shell' test.

sql:
path = $SANDBOX_HOME/msb_XXXX
query = select 10 * 10
expected = 100
msg = checking database response

All strings starting with a $ are expanded to their corresponding environment variables. For example, if $SANDBOX_HOME is /home/sb/tests, the line below will expand to

command = /home/sb/tests/msb_5_1_30/stop

It is a good idea to finish every test with a cleanup. Here, we simply stop the server:

shell:
command = $SANDBOX_HOME/msb_XXXX/stop
expected = OK
msg = stopped

To run this example, you have two options:

Run it directly with test_sandbox:

$ test_sandbox --versions=Perc5.6.25 --tests=user --user_tests=name_of_your_file.sb

Or create a harness like the ones in the test suite. See for example t/06_test_user_defined.t, which invokes the test proper, check_single_server.sb:

$ export TEST_VERSION=Perc5.6.25
$ cat t/06_test_user_defined.t
use lib './t';
use Test_Helper;
test_sandbox( 'test_sandbox --user_test=./t/check_single_server.sb', 3);

In this code, the '3' after the test name is the number of tests expected to run. If this paradigm is too simple (and I know that sometimes it is) you can write your own plugins in Perl, using as examples the ones in the suite.
For example:

$ cat t/08_test_single_port_checking.t
use lib './t';
use Test_Helper;
test_sandbox( 'test_sandbox --tests=user --user_test=./t/single_port_checking.sb.pl', 6);

The Perl plugins require, of course, some knowledge of Perl, but they allow greater flexibility to create your own checks.
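The .sb format shown earlier is simple enough that a minimal parser fits in a few lines. This is an illustrative sketch, not test_sandbox's real parser (which also expands $VARIABLES and understands more labels):

```python
def parse_sb_tests(text):
    """Minimal parser for the user-defined test format shown above:
    'shell:' or 'sql:' introduces a test, followed by 'key = value'
    lines that attach to the most recent test. (Sketch only.)"""
    tests, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line in ("shell:", "sql:"):
            current = {"type": line[:-1]}
            tests.append(current)
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")
            current[key.strip()] = value.strip()
    return tests

sample = """shell:
command = $SANDBOX_HOME/msb_XXXX/stop
expected = OK
msg = stopped
"""
print(parse_sb_tests(sample))
```

Parsing the file into a list of dicts like this makes it easy to see how the test engine can then dispatch each entry to either a shell call or a query against the sandbox's 'use' script.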