Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • Using Perl to Send Tweets Stored in a MySQL Database to Twitter
    Using twitter can sometimes feel like driving downtown, screaming what you want to say out the window, and hoping someone hears you. There might be tens of thousands of people downtown, but your message will only be heard by a few. Because of this, your best bet is to repeat your message as often as possible. Twitter is free, and if you want to reach as many people as possible, it is another great tool for getting your message out there. But sending tweets on a scheduled basis can be a pain. There are client programs available which allow you to schedule your tweets (Hootsuite is one I have used in the past). You can load your tweets in the morning and have the application tweet for you all day long. But you still have to load the application with your tweets, one by one.

    A friend of mine asked me if there was a way to send the same 200 tweets over and over again, spaced out every 20 minutes or so. He has a consulting business and just wants to build up a list of twitter followers by tweeting inspirational quotes. If he tweets for twenty hours a day and sends three quotes an hour, it will take him a little more than three days to burn through his 200 quotes. And he can always add more quotes or space out the tweets as necessary. I decided to write a Perl script to do this for him.

    To start, we will need a MySQL database to store the tweets. I use the MySQL Workbench product as my client application for connecting to MySQL. From within Workbench, I can create my tweet database:

    CREATE DATABASE `tweets` /*!40100 DEFAULT CHARACTER SET latin1 */;

    I will then need a table inside my database to store my tweets:

    CREATE TABLE `tweets` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `tweet` char(140) DEFAULT NULL,
      `last_tweet_date` datetime NOT NULL DEFAULT '2015-01-01 00:00:00',
      `tweet_count` int(5) DEFAULT NULL,
      `tweet_length` int(3) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=100001 DEFAULT CHARSET=latin1;

    The tweet messages will be stored in the tweet column, and the date the tweet was last sent will be time-stamped in the last_tweet_date column. When we perform a search to find the next tweet, we simply sort by last_tweet_date and then id, and limit the output to one tweet. After we send the tweet, we update the last_tweet_date column, which sends that tweet to the end of the line. The script also increments the tweet_count value (the number of times the tweet has been sent) and records the length of the tweet in tweet_length. I do not do any error checking in the script to make sure the tweet was sent, but errors are printed.
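    Since the rotation logic boils down to two SQL statements, here they are on their own. This is a minimal sketch of what the full script below does; the script builds its own timestamp string where this sketch simply uses NOW(), and the id value of 1 is a stand-in:

    -- pick the tweet that has waited the longest
    SELECT id, tweet, last_tweet_date, tweet_count
      FROM tweets
     ORDER BY last_tweet_date, id
     LIMIT 1;

    -- after sending, stamp it with the current time so it moves to the back of the queue
    UPDATE tweets
       SET last_tweet_date = NOW(),
           tweet_count     = tweet_count + 1
     WHERE id = 1;  -- replace 1 with the id returned by the SELECT above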
    We now need to insert some tweets into our table. Since my friend is going to be sending inspirational quotes, I found a few I can import. To make importing easier, all single quote marks (') are replaced by the carat symbol (^); I can then swap these symbols back inside the Perl script. You could instead put a backslash (\) before each single quote, but I prefer a single-character substitution so I know exactly how long the tweet will be. I also use the tilde (~) to designate a carriage return in the tweet. The Perl script replaces each tilde with a carriage return (\n); two tildes give me two carriage returns and a blank line.

    insert into tweets (tweet) VALUES('I^m not afraid. -Luke~~You will be. -Yoda~~http://SomeWebSiteHere.com');
    insert into tweets (tweet) VALUES('Do or do not. There is no try.~~-Yoda~~http://SomeWebSiteHere.com');
    insert into tweets (tweet) VALUES('No, I am your father.~~-Darth~~http://SomeWebSiteHere.com');

    I also created a history table to store the tweet identification numbers. Each tweet is assigned a unique number by twitter, and this number is how you access the tweet later. I save this information so I can delete the tweets afterwards using this number; a short script for deleting tweets appears near the end of this post.

    CREATE TABLE `history` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `tweet` char(140) DEFAULT NULL,
      `tweet_id` varchar(30) DEFAULT NULL,
      `tweet_update` datetime DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1000032 DEFAULT CHARSET=latin1;

    You will need to register your application with twitter via apps.twitter.com and obtain the following:

    consumer_key
    consumer_secret
    access_token
    access_token_secret

    You will also need to register your mobile phone in order to link your twitter account to your application. I have not figured out how to use this script with someone else's account, as the instructions for scripting Perl for use with twitter are not very thorough. I will try to add this at a later date.

    Now that you have your application information and all of your tables created with data inserted, here is the Perl script for sending tweets. (You will need to install the necessary Perl modules and fill in your own application credentials.)

    #!/usr/bin/perl

    use Net::Twitter::Lite::WithAPIv1_1;
    use DBI;
    use DBD::mysql;
    use Scalar::Util qw(blessed);   # blessed() is used in the error handling below

    my $Database = "tweets";

    # Twitter application credentials from apps.twitter.com
    # (replace these placeholder values with your own)
    my $consumer_key        = "your_consumer_key";
    my $consumer_secret     = "your_consumer_secret";
    my $access_token        = "your_access_token";
    my $access_token_secret = "your_access_token_secret";

    # ----------------------------------------------------------------------------
    # build the current date/time string - this has to be near the top,
    # as other parts of the script rely on these values
    # ----------------------------------------------------------------------------
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;

    $year = $year + 1900;
    $mon  = $mon + 1;

    # add a leading zero if the value is less than 10
    if ($sec < 10)   { $sec = "0$sec"; }
    if ($min < 10)   { $min = "0$min"; }
    if ($hour < 10)  { $hour = "0$hour"; }
    if ($mday < 10)  { $mday = "0$mday"; }
    if ($mon < 10)   { $mon = "0$mon"; }
    if ($year < 10)  { $year = "0$year"; }
    if ($wday < 10)  { $wday = "0$wday"; }
    if ($yday < 10)  { $yday = "0$yday"; }
    if ($isdst < 10) { $isdst = "0$isdst"; }

    $DateTime = "$year-$mon-$mday $hour:$min:$sec";

    # ----------------------------------------------------------------------------
    # retrieve the next tweet from the database
    # ----------------------------------------------------------------------------
    $dbh = ConnectToMySql($Database);

    $query = "select id, tweet, last_tweet_date, tweet_count FROM tweets order by last_tweet_date, id limit 1";
    $sth = $dbh->prepare($query);
    $sth->execute();

    while (@data = $sth->fetchrow_array()) {
        $id              = $data[0];
        $tweet           = $data[1];
        $last_tweet_date = $data[2];
        $tweet_count     = $data[3];
    }

    $tweet_original = $tweet;

    # ----------------------------------------------------------------------------
    # find tildes (~) and substitute a carriage return
    # find carats (^) and substitute a single quote
    # ----------------------------------------------------------------------------
    $tweet =~ s/~/\n/g;
    $tweet =~ s/\^/\'/g;

    # ----------------------------------------------------------------------------
    # check the length of the tweet
    # ----------------------------------------------------------------------------
    $tweet_length = length($tweet);

    if (length($tweet) > 140) {
        print "Error - tweet is longer than 140 characters\n";
        exit;
    }

    # add to the tweet count
    $tweet_count++;
    # ----------------------------------------------------------------------------
    # send the tweet
    # ----------------------------------------------------------------------------
    my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
        traits              => [qw/API::RESTv1_1/],
        consumer_key        => "$consumer_key",
        consumer_secret     => "$consumer_secret",
        access_token        => "$access_token",
        access_token_secret => "$access_token_secret",
        ssl                 => 1
    );

    my $results = eval { $nt->update("$tweet") };

    if ( my $err = $@ ) {
        die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

        warn "HTTP Response Code: ", $err->code, "\n",
             "HTTP Message......: ", $err->message, "\n",
             "Twitter error.....: ", $err->error, "\n";
    }

    # ----------------------------------------------------------------------------
    # update mysql with the new date/time for last_tweet_date
    # ----------------------------------------------------------------------------
    $dbh = ConnectToMySql($Database);

    $query = "UPDATE tweets SET last_tweet_date = '$DateTime', tweet_count = '$tweet_count', tweet_length = '$tweet_length' where id = '$id'";
    $sth = $dbh->prepare($query);
    $sth->execute();

    # ----------------------------------------------------------------------------
    # get the status id of the last tweet
    # ----------------------------------------------------------------------------
    my $statuses = $nt->user_timeline({ user => "2044_The_Book", count => 1 });

    for my $status ( @$statuses ) {
        $tweet_id = "$status->{id}";
    }

    if ( my $err = $@ ) {
        die $@ unless blessed $err && $err->isa('Net::Twitter::Lite::Error');

        warn "HTTP Response Code: ", $err->code, "\n",
             "HTTP Message......: ", $err->message, "\n",
             "Twitter error.....: ", $err->error, "\n";
    }

    # ----------------------------------------------------------------------------
    # put the placeholder characters back (reverse of the earlier substitutions)
    # ----------------------------------------------------------------------------
    $tweet =~ s/\n/~/g;
    $tweet =~ s/\'/^/g;

    # ----------------------------------------------------------------------------
    # record the original tweet and its twitter id in the history table
    # ----------------------------------------------------------------------------
    $dbh = ConnectToMySql($Database);

    $query = "insert into history (tweet, tweet_id, tweet_update) values ('$tweet_original','$tweet_id','$DateTime')";
    $sth = $dbh->prepare($query);
    $sth->execute();

    #----------------------------------------------------------------------
    sub ConnectToMySql {
    #----------------------------------------------------------------------
        my ($db) = @_;

        # the login credentials are stored in a text file one directory up;
        # the values read from the file override the $db argument
        open(PW, "<../accessTweets") || die "Can't access login credentials";
        $db        = <PW>;
        my $host   = <PW>;
        my $userid = <PW>;
        my $passwd = <PW>;

        chomp($db);
        chomp($host);
        chomp($userid);
        chomp($passwd);

        my $connectionInfo = "dbi:mysql:database=$db;host=$host;port=3306";
        close(PW);

        # make the connection to the database
        my $l_dbh = DBI->connect($connectionInfo, $userid, $passwd);

        return $l_dbh;
    }

    In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory up from where my Perl script is located (../accessTweets). This file contains the following information, one item per line:

    database_name
    hostname or IP
    MySQL user name
    password

    You can instead hard-code this information inside the script if you prefer.

    If you want to delete your tweets, you can create a script that reads the tweet ids stored in your history table and deletes the tweets one at a time. Here is an example without the database connections:

    #!/usr/bin/perl

    use Net::Twitter::Lite::WithAPIv1_1;
    use DBI;
    use DBD::mysql;

    # ----------------------------------------------------------------------------
    # delete a tweet
    # ----------------------------------------------------------------------------

    # replace the values of $consumer_key, $consumer_secret, $access_token and
    # $access_token_secret with the values for your application, and set
    # $tweet_id to the id of the tweet you want to delete

    my $nt = Net::Twitter::Lite::WithAPIv1_1->new(
        consumer_key        => "$consumer_key",
        consumer_secret     => "$consumer_secret",
        access_token        => "$access_token",
        access_token_secret => "$access_token_secret",
        ssl                 => 1,
    );

    my $statuses = $nt->destroy_status({ id => "$tweet_id" });

    exit;

    Be sure to replace the value of $tweet_id with the value from the tweet you want to delete.
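    If you need a list of ids to feed into that delete script, the history table already has them. This query (just a sketch against the history table created earlier) lists the stored tweet ids, newest first:

    SELECT tweet_id, tweet, tweet_update
      FROM history
     ORDER BY tweet_update DESC;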
    That's it for now. Hopefully this was fun, interesting, and even useful! Thank you for using MySQL!

    Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world's most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

  • Fedora 22 is out, and we’re ready
    Fedora 22 arrived yesterday. With a cutting edge GCC (5.1), the new DNF package management system, and improved tooling for server administration, we congratulate the Fedora community on yet another innovative release. We’re following up from our side, and as of yesterday our repos offer Fedora 22 packages of these products: MySQL Server 5.6 (currently […]

  • MySQL 5.7 key features
    The other day I was discussing new features of MySQL 5.7 with a Percona Support customer. After that conversation, I thought it would be a good idea to compile a list of important features of MySQL 5.7. The latest MySQL 5.7.6 release candidate (RC) is out and is packed with nice features. Here is a list of some MySQL 5.7 key features.

    Replication Enhancements:

    One of the top features in MySQL 5.7 is multi-source replication. With multi-source replication you can point multiple masters at a single slave, lifting the old limitation of one master per slave. My colleague wrote a nice blog post on multi-source replication that you will find useful.

    SHOW SLAVE STATUS is non-blocking as of MySQL 5.7: it returns immediately instead of waiting for STOP SLAVE to finish, which can otherwise be blocked by a long-running query in the replication SQL thread. As a side note, a lock-free SHOW SLAVE STATUS was first implemented in Percona Server 5.5.

    You can now get all of the SHOW SLAVE STATUS information from Performance Schema tables. More details are in the manual.

    With the new CHANGE REPLICATION FILTER command, you can modify replication filter rules without restarting the MySQL servers.

    Since MySQL 5.7 you can run CHANGE MASTER TO without first stopping the slave with STOP SLAVE. For further details, check the manual.

    There is also a new mode of parallel replication. With the new implementation, the slave can apply transactions in parallel even within a single database/schema. Check slave_parallel_type for details.

    Global Transaction Identifiers (GTIDs) automatically track the replication position in the replication stream, and since MySQL 5.7 gtid_mode is a dynamic variable, which means you can enable or disable GTIDs in a replication topology without synchronizing and restarting the entire set of MySQL servers. As a side note, an online GTID deployment feature was added in Percona Server 5.6; with it you can deploy GTIDs on existing replication setups without setting the master read_only and stopping all slaves in the replication chain. My colleague Stephane wrote a nice blog post on performing the online migration without master downtime.

    InnoDB Enhancements:

    You can now resize the InnoDB buffer pool online. Since MySQL 5.7 innodb_buffer_pool_size is a dynamic variable, which makes it possible to resize the buffer pool without restarting the MySQL server.

    From MySQL 5.7, online ALTER TABLE also supports a RENAME INDEX clause to rename an index. The change takes place without a table-copy operation.

    InnoDB supports the Transportable Tablespace feature for partitioned InnoDB tables. I wrote a blog post on Transportable Tablespaces that you will find useful.

    The innochecksum utility is enhanced with new options. I also wrote a recent blog post on this same topic.

    As of MySQL 5.7, InnoDB supports spatial indexes, and adding them is an online DDL operation, i.e. ALTER TABLE .. ALGORITHM=INPLACE.

    InnoDB buffer pool dump/reload operations are improved. A new system variable, innodb_buffer_pool_dump_pct, lets you specify the percentage of most recently used pages in each buffer pool to read out and dump.

    Triggers:

    In line with the SQL standard, MySQL 5.7 now supports multiple triggers per table for the same trigger event (DML) and timing (BEFORE, AFTER), e.g. multiple triggers on the INSERT action.

    Performance Improvements:

    Bulk data loading is improved in InnoDB in MySQL 5.7: InnoDB performs a bulk load when creating or rebuilding indexes. This method, known as a sorted index build, speeds up index creation and also applies to FULLTEXT indexes.

    Currently there is a single page cleaner thread responsible for flushing dirty pages from the buffer pool(s). In MySQL 5.7, InnoDB parallel flushing was implemented to improve this: a separate background thread handles the flush list and LRU list of each buffer pool instance. It is worth mentioning that two-threaded flushing was implemented in Percona Server 5.6.

    Optimizer Improvements:

    EXPLAIN FOR CONNECTION lets you run EXPLAIN on a statement that is already executing in another connection. This may yield important information for query optimization.

    In MySQL 5.7 the optimizer avoids creating a temporary table for the result of UNION ALL queries, which helps reduce disk I/O and disk space when a UNION yields a large result set. I found Morgan Tocker's post on this informative.

    The JSON format for EXPLAIN, first introduced in MySQL 5.6, produces extended information. It is enhanced in 5.7 to print the total query cost, which makes it easier to see the difference between good and bad execution plans.

    MySQL 5.7 also supports generated columns, also known as virtual columns, as a new feature. My colleague Alexander explained this really well in his blog post.

    MySQL Test Suite Enhancements:

    The MySQL test suite now uses InnoDB as its default storage engine. Along with that, many new tests were added and existing tests enhanced, including a test suite for replication with GTIDs.

    Security Enhancements:

    Since MySQL 5.7 there is a password expiration policy in place. Any user that connects to a MySQL server goes through a password expiration life cycle and must change the password. More details are in the manual.

    Database administrators can now lock and unlock user accounts. Check the details here.

    As of MySQL 5.7, installation creates only one root@localhost user account, with a random password that is marked as expired. Installation no longer creates anonymous-user accounts, and there is no test database. MySQL generates the root password during data directory initialization, marks it as expired, and writes a message displaying the password to stdout.
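    To make some of these features more concrete, here is a short, illustrative sketch of the corresponding 5.7 statements. The object names used here (table t1 with created_at and qty columns, table orders, the trigger names, the databases in the replication filter, the 'app'@'localhost' account and connection id 42) are hypothetical and only for illustration:

    -- Replication: change filter rules without restarting the server
    -- (the slave SQL thread must be stopped while the filter is changed)
    STOP SLAVE SQL_THREAD;
    CHANGE REPLICATION FILTER REPLICATE_DO_DB = (sales, inventory);
    START SLAVE SQL_THREAD;

    -- InnoDB: resize the buffer pool online (value in bytes)
    SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;

    -- InnoDB: rename an index in place, without a table copy
    ALTER TABLE t1 RENAME INDEX idx_old TO idx_new, ALGORITHM=INPLACE, LOCK=NONE;

    -- Triggers: two BEFORE INSERT triggers on the same table, ordered with FOLLOWS
    CREATE TRIGGER trg_set_created BEFORE INSERT ON t1
      FOR EACH ROW SET NEW.created_at = NOW();
    CREATE TRIGGER trg_clamp_qty BEFORE INSERT ON t1
      FOR EACH ROW FOLLOWS trg_set_created SET NEW.qty = GREATEST(NEW.qty, 0);

    -- Optimizer: explain a statement that is already running in connection 42
    EXPLAIN FORMAT=JSON FOR CONNECTION 42;

    -- Generated columns: a column computed from other columns
    CREATE TABLE orders (
      price DECIMAL(10,2),
      qty   INT,
      total DECIMAL(12,2) GENERATED ALWAYS AS (price * qty) STORED
    );

    -- Security: lock an account and force a password change at the next login
    ALTER USER 'app'@'localhost' ACCOUNT LOCK;
    ALTER USER 'app'@'localhost' PASSWORD EXPIRE;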
    Conclusion:

    This is only a short list of the new features in MySQL 5.7. Please feel free to add your favorite features in the comments section. Along with the new features, there are quite a few deprecated and removed features in MySQL 5.7; you can get the full list from the manual.

    The post MySQL 5.7 key features appeared first on MySQL Performance Blog.

  • ClusterControl 1.2.10 Released
    The Severalnines team is pleased to announce the release of ClusterControl 1.2.10. This release contains key new features along with performance improvements and bug fixes. We have outlined some of the key new features below.

    Highlights of ClusterControl 1.2.10 include:

    - ClusterControl DSL (Domain Specific Language)
    - Integrated Developer Studio (Developer IDE)
    - Database Advisors/JS bundle
    - On-premise deployment of MySQL / MariaDB Galera Cluster (new implementation)
    - Detection of long-running and deadlocked transactions (Galera)
    - Detection of the most advanced (last committed) node in case of cluster failure (Galera)
    - Registration of manually added nodes with ClusterControl
    - Failover and slave promotion in MySQL 5.6 Replication setups
    - General front-end optimizations

    For additional details about the release: 1.2.10 ChangeLog, Installation Instructions, Documentation, ClusterControl Administration Guide, Upgrade Instructions, Advisor Bundle on Github.

    ClusterControl DSL (Domain Specific Language): We are excited to announce our new, powerful ClusterControl DSL, which allows you to extend the functionality of your ClusterControl platform by creating Advisors, Auto Tuners or "mini programs". The DSL syntax is based on JavaScript, with extensions to provide access to ClusterControl's internal data structures and functions. The DSL allows you to execute SQL statements, run shell commands/programs across all your cluster hosts, and retrieve results to be processed for advisors/alerts or any other actions.

    Integrated Developer Studio (Developer IDE): The ClusterControl Dev Studio provides a simple and elegant development environment to quickly create, edit, compile, run, test, debug and schedule your JS programs. This is pretty cool: you are able to develop database advisors or mini programs that automate database tasks from within your web browser.

    Advisors/JS Bundle: Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration, storage space, etc. They can be anything from simple configuration advice, warnings on thresholds, or more complex rules for predictions or cluster-wide automation tasks based on the state of your servers or databases. In general, advisors perform more detailed analysis, and produce more comprehensive recommendations, than alerts.

    s9s-advisor-bundle on Github: We ship a set of basic advisors that are open source under an MIT licence and which include rules and alerts on security settings, system checks (NUMA, disk, CPU), queries, InnoDB, connections, Performance Schema, Galera configuration, NDB memory usage, and so on. The advisors can be downloaded from Github. Through the Developer Studio, it is easy to import ClusterControl JS bundles written by our partners or community users, or export your own for others to try out.

    On-premise Deployment of MySQL/MariaDB Galera Cluster: We have rewritten the on-premises deployment functionality for Galera clusters. You can now easily deploy a Galera cluster with up to 9 DB nodes.

    Detection of long running and deadlocked transactions: Deadlocks, also called deadly embrace, happen when two or more transactions permanently block each other. These can cause quite a number of problems, especially in a synchronous cluster like Galera. It is now possible to view these through the web UI.

    Galera Recovery - Detection of the most advanced (last committed) node: In the unfortunate case of a cluster-wide crash where the cluster is not restarting, you might need to bootstrap the cluster using the node with the most recent data. The admin can now get information about the most advanced node, and use that to bootstrap the cluster.
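    Outside of the ClusterControl UI, both of these checks can be approximated with plain SQL. This is only a rough, generic sketch (not how ClusterControl implements it internally), using the standard information_schema.INNODB_TRX table and the Galera wsrep_last_committed status variable:

    -- list open InnoDB transactions, oldest first, to spot long-running ones
    SELECT trx_mysql_thread_id, trx_state, trx_started,
           TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds,
           trx_query
      FROM information_schema.INNODB_TRX
     ORDER BY trx_started;

    -- on each Galera node that is still reachable, the node reporting the highest
    -- committed seqno has the most recent data and is the one to bootstrap from
    SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';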
    Registration of manually added nodes with ClusterControl: In some cases, an admin might be using other automation tools, e.g. Chef or Puppet, to add nodes to an existing cluster. In that case, it is now easy to register these new nodes with ClusterControl so they show up in the UI.

    Failover and Slave Promotion in MySQL 5.6 Replication Setups: For MySQL Replication setups, you can now promote a slave to a master from the UI. It requires that you are on MySQL 5.6 and use GTID.

    We encourage you to provide feedback and testing. If you'd like a demo, feel free to request one.

    With over 7,000 users to date, ClusterControl is the leading, platform-independent automation and management solution for MySQL, MariaDB, MongoDB and PostgreSQL.

    Thank you for your ongoing support, and happy clustering! For additional tips & tricks, follow our blog: http://www.severalnines.com/blog/.

  • MariaDB 10.0.19 Overview and Highlights
    MariaDB 10.0.19 was recently released, and is available for download here: https://downloads.mariadb.org/mariadb/10.0.19/

    This is the tenth GA release of MariaDB 10.0, and 20th overall release of MariaDB 10.0. This was a quick release in order to get a fix for a mysql_upgrade bug (MDEV-8115) introduced in 10.0.18, so there is that, and only 9 other bug fixes. Here are the main items of note:

    - Fixed the server crash caused by mysql_upgrade (MDEV-8115)
    - Connect upgraded to 1.03.0007

    Due to the mysql_upgrade bug fix as well as all of the fixes in MariaDB 10.0.18 (including 5 security fixes), I would definitely recommend upgrading to this if you are running a prior version of MariaDB 10.0, especially 10.0.18.

    You can read more about the 10.0.19 release here: https://mariadb.com/kb/en/mariadb-10019-release-notes/

    And if interested, you can review the full list of changes in 10.0.19 (changelogs) here: https://mariadb.com/kb/en/mariadb-10019-changelog/

    Hope this helps.





