/SW/graphics:
Fixing Simple Photo Issues at the Command Line
(Without having to start up a huge program like GIMP or Photoshop.)
These utilities are also drawn from the ImageMagick[1] graphics suite. ImageMagick is available for Linux, Windows, and the Mac[2]. Some examples:
identify image1.jpg (print an image's format, dimensions, and other basic metadata)
convert -rotate 90 image1.jpg image2.jpg (rotate an image 90 degrees clockwise, writing the result to a new file)
for i in *.jpg ; do convert $i -resize 800x800 ../convert-folder/$i ; done
This may take a while if you have a lot of photos. In this example, photos are resized to a maximum width or height of 800 pixels, with no change in aspect ratio.
mogrify -resize 600x500 -quality 70 Gabe.jpg
Note that the key difference between mogrify and convert is that mogrify overwrites the original file, while convert writes to a different file.
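A safe pattern, then, is to let convert write a copy while you experiment, and reach for mogrify only once you are sure (Gabe-small.jpg is just an arbitrary output name):

convert Gabe.jpg -resize 600x500 -quality 70 Gabe-small.jpg   # original left untouched
mogrify -resize 600x500 -quality 70 Gabe.jpg                  # overwrites Gabe.jpg in place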
convert input.jpg output.png (the output format is inferred from the file extension)
convert -font helvetica -fill white -pointsize 36 \
-draw 'text 10,50 "This is the annotation text"' \
floriade.jpg comment.jpg
for img in *.jpg ; do convert -sample 25%x25% $img thumb-$img ; done (generate quarter-size thumbnails of every JPEG in the current directory)
nice ionice -c3 find . -type f -name "*.jpg" -exec mogrify -resize 200x200 -quality 70 {} \;
(Note that "nice" and "ionice" lower the priority of the operations' claim on CPU and disk resources, respectively. Especially useful on a production server.)
convert file.jpg file.pdf
(Note: convert converts to PDF just fine, but in the other direction quality is problematic (sometimes? always?). pdftoppm works perfectly in that direction.)
pdftoppm -r 300 -jpeg file.pdf file
I have not yet found an ImageMagick manual that has the gift of making difficult tasks seem easy. Here[3][4][5] are some suggestions for manuals.
[1] http://www.imagemagick.org/script/index.php
[2] http://www.imagemagick.org/script/binary-releases.php
[3] http://www.imagemagick.org/script/command-line-tools.php
[4] http://linux.about.com/library/cmd/blcmdl1_ImageMagick.htm
[5] http://www.imagemagick.org/Usage/
posted at: 03:59 | path: /SW/graphics | permanent link to this entry
/Linux/Android:
Android with Minimal Google Malware
Why? I already have such a setup, with Google Pinyin installed for Chinese input. Shortly after configuring Google Pinyin as my main keyboard, I got a popup reporting that Google Pinyin had tried to access my address book. Why? Or how about this[1]: Google breaks networking for anyone in an iffy network environment (that would include all of China) just so Android can throw a tantrum when it cannot call back to Google's servers. Why is that necessary? Sadly, even with minimal Google malware, the latter issue means that in China the newest version of CyanogenMod (CM) I can run is CM11.
Down to business: the following is an outline of the necessary steps, if anyone needs more detail let me know and I will try to fill in the gaps.
First get yourself a CM-compatible phone from [2], and follow the installation instructions MINUS the bit about installing Google Play and friends. You do not need Google Play, and you REALLY do not want it, because of all the other Google crap it drags along. You just need to flash the bare CM image. Note that if you use a recent version of Debian or Ubuntu, all the necessary tools for flashing CM are already in the repos.
The first thing to install on your newly-flashed CM phone is FDroid[3]. (You can install FDroid with adb, one of the tools you just used to flash the CM image.) FDroid only contains open source software, so first install as much of the software you want / need as you can from the FDroid repositories. (FDroid is also not blocked within China.) The stuff you cannot find in FDroid we will get next, by a backdoor method that avoids running Google Play on your phone (which *is* blocked in China, anyway).
On your (recent Debian/Ubuntu) machine install fdroidserver, apache, and pip. Also install gplaycli as follows:
pip install gplaycli
(Note that gplaycli is often installable by other methods, but in my experience it is extremely sensitive to dependency versions, and probably will not work. Use pip.)
Set up your FDroid repository[4]. (I run this on one of my desktop Linux machines.)
mkdir -p /srv/fdroid/repo
cd /var/www/
ln -s /srv/fdroid
cd /srv/fdroid/
fdroid init
The FDroid app automatically looks for a repository at /fdroid/repo/ on any configured server. The directory structure above, with the symlink in /var/www/, provides exactly that.
I have a script that updates the repository as follows:
#!/bin/sh
echo "Update custom FDroid Google Play mirror:"
chown -R user: /srv/fdroid
chmod -R 755 /srv/fdroid
chmod 600 /srv/fdroid/config.py
proxychains /usr/local/bin/gplaycli -u /srv/fdroid/repo --progress --verbose
cd /srv/fdroid/
fdroid update --create-metadata
gplaycli updates all the Android apps (apk's) in the repo from Google Play. (Note: since Google Play is blocked in China, I use proxychains to proxy this action through a server outside China.) Then fdroid updates the repo metadata to reflect any changes.
Applications get into the repo initially either by copying an existing apk file in your possession into the repo, or by searching for and downloading the apk with gplaycli.
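For the latter, something like the following should work (a sketch: the -s / -d / -f flags match the gplaycli version I used, and com.example.app is a hypothetical package name):

gplaycli -s "name of app"                          # search Google Play
gplaycli -d com.example.app -f /srv/fdroid/repo/   # download the apk into the repo
fdroid update --create-metadata                    # refresh the repo metadata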
Now on your Android, add the FDroid server you just created as a repository. For this machine on my local network, for example, I just added an IP address like this:
http://192.168.8.107/
After that, if port 80 is open on your server machine and all the permissions are correct, the next time you update the repositories in your Android FDroid app it should offer the apk's from your new repository.
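Before touching the phone, you can sanity-check the server from any machine on the LAN (index.jar is the classic repo index name; an assumption here is that newer fdroidserver versions also generate index-v1.jar alongside it):

curl -I http://192.168.8.107/fdroid/repo/index.jar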
Something really worth noting about CM: in Settings under "Privacy", have a look at "Privacy Guard". Any app selected in there (which should be most of them, certainly among the non-open-source apps) is blocked from accessing the address book or call logs. That is how I know Google Pinyin tried to access my address book: CM told me.
[1] https://code.google.com/p/android/issues/detail?id=81843
[2] http://wiki.cyanogenmod.org/w/Devices
[3] https://f-droid.org/
[4] https://f-droid.org/wiki/page/Setup_an_FDroid_App_Repo
posted at: 06:38 | path: /Linux/Android | permanent link to this entry
/Admin/SSL:
Free (Website) SSL with LetsEncrypt
Last I checked, reading about LetsEncrypt[1] can make one a bit dizzy, but if you follow these steps it is really very straightforward. First request a certificate for your (sub)domain, pointing the client at the webroot Apache serves:

letsencrypt certonly --webroot -d sub.domain.net --webroot-path=/var/www/html/

Then point your Apache SSL configuration at the resulting certificate and key:

SSLCertificateFile /etc/letsencrypt/live/sub.domain.net/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/sub.domain.net/privkey.pem

And renew periodically, mailing yourself the result:

letsencrypt renew --webroot --webroot-path=/var/www/html/ | mail -s 'renew LetsEncrypt SSL' name@email.com
On the subject of renewals: as I recall, every issued certificate expires after three months and becomes eligible for renewal after two months, so attempting renewal every two weeks seems just about right.
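A root crontab entry attempting renewal on the 1st and 15th of each month might look like this (a sketch; the schedule and mail address are assumptions):

30 4 1,15 * * letsencrypt renew --webroot --webroot-path=/var/www/html/ 2>&1 | mail -s 'renew LetsEncrypt SSL' name@email.com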
[1] https://letsencrypt.org/
posted at: 07:54 | path: /Admin/SSL | permanent link to this entry
/Admin/VPN-options/Tinc:
Simple Tinc VPN
Tinc[1] is a rare animal: an actual peer-to-peer VPN that (for *NIX users) is easy to set up, and not widely used, and so (as far as I am aware) not blocked by anyone, including the GFW (Great Firewall of China).
My main OS is Debian, so this example of a very simple tinc configuration will follow Debian standards in getting two Tinc VPN nodes talking to one another -- typically, one would be your Desktop, and the other would be a server with a public IP address running Squid.
apt-get install tinc

This is all that /etc/tinc contains after install:
/etc/tinc/nets.boot
Tinc can run multiple daemons, each handling a separate tinc network on a separate subnet. To have each tinc network started automatically, simply add its network name to the list in nets.boot, as shown below.
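For the example network configured below (myvpn), that is just:

echo myvpn >> /etc/tinc/nets.boot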
Each tinc network is represented by a separate directory under /etc/tinc/. Each Tinc network also requires a hosts subdirectory where the public keys for other peers in this network are held. For the simplest possible configuration, here are the main decisions to make:
Let's first configure your desktop:
Create the requisite directory structure:
mkdir -p /etc/tinc/myvpn/hosts

And create the two configuration files for this network:
vi /etc/tinc/myvpn/tinc-up

containing something like this:

modprobe tun
ifconfig myvpn 10.9.3.1 netmask 255.255.0.0

where 10.9.3.1 is the private tinc IP address of the node you are currently configuring. Next,

vi /etc/tinc/myvpn/tinc.conf

containing this:
Name = mydesk
Device = /dev/net/tun
Port = 19001
ConnectTo = myremote
The Port line is optional; if omitted, tinc will listen on the default port 655.
Create your keys for the myvpn network (each separate tinc network/subnet has different keys) for the desktop node by running this on it:
tincd -K -n myvpn

(If things are correctly configured you should be able to just accept the defaults.) This is the pair of keys you just created:

/etc/tinc/myvpn/rsa_key.priv
/etc/tinc/myvpn/hosts/mydesk

The former is your private key, the latter is your public key. Now edit the public key file:

vi /etc/tinc/myvpn/hosts/mydesk

DO NOT modify the key, but add this config block ABOVE the key:
Subnet = 10.9.3.1/32
Address = x.x.x.x 19001
The top line is the VPN-private IP of the node; the bottom line is the real-world (usually public) IP and port where OTHER peers in this tinc network will find the machine. You will be sharing this file with all the other peers in this network, and this config block tells them where to find this node.
IMPORTANT, EASILY OVERLOOKED STEP: fix permissions:
chown -R root: /etc/tinc/
chmod a+rx /etc/tinc/myvpn/tinc-*

The tinc-* files are scripts that must be executable, otherwise your configuration will subtly break. Now start tinc:
systemctl start tinc.service
If all goes well, ifconfig should show a myvpn device with IP 10.9.3.1.
Configure the remote machine:
It is the same as the above desktop config, with these exceptions:

Name = myremote (not mydesk) in tinc.conf, and no ConnectTo line, since the desktop initiates the connection.
A different private tinc IP (10.9.3.2, say) in tinc-up and in the Subnet line of its hosts file, /etc/tinc/myvpn/hosts/myremote.
A real, reachable public IP on the Address line of hosts/myremote, as this is the address the other peers connect to.
Putting it together:
Once tinc is running on the server, copy the public tinc key of each machine into the tinc hosts directory of the other machine.
Make sure port 19001 is open in the firewall on the myremote end.
Restart tinc on both ends:
systemctl restart tinc.service
and you should now be able to ping the tinc IP of the other machine from both ends.
This delivers to you a secure connection between your desktop and a remote machine. If you would like to proxy browser traffic from mydesk through myremote, just install squid on myremote and enable connections from the tinc subnet, as sketched below. In your browser, set the proxy IP to the myremote tinc IP and the port to 3128 (the default squid port), with a proxy type of HTTP (squid is an HTTP proxy; it does not speak SOCKS v5).
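The squid side can be as simple as two lines in /etc/squid/squid.conf, placed above squid's default deny rule (a sketch: the acl name is arbitrary, and the subnet matches the tinc-up netmask used above):

acl tincnet src 10.9.0.0/16
http_access allow tincnet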
[1] http://www.tinc-vpn.org/
posted at: 08:17 | path: /Admin/VPN-options/Tinc | permanent link to this entry
/SW/business/WordPress:
Drupal 6 to WordPress 4 Migration
With many thanks to this[1] post for doing most of the work. Note that I have a simpler situation with only one user and no URL aliases.
Prepare your freshly installed WordPress 4.4 database by truncating the default posts / pages / comments created at installation:
TRUNCATE TABLE wordpress.wp_comments;
TRUNCATE TABLE wordpress.wp_links;
TRUNCATE TABLE wordpress.wp_postmeta;
TRUNCATE TABLE wordpress.wp_posts;
TRUNCATE TABLE wordpress.wp_term_relationships;
TRUNCATE TABLE wordpress.wp_term_taxonomy;
TRUNCATE TABLE wordpress.wp_terms;
Migrate tags:
INSERT INTO wordpress.wp_terms (term_id, name, slug, term_group) SELECT d.tid, d.name, REPLACE(LOWER(d.name), ' ', '-'), 0 FROM drupal.term_data d INNER JOIN drupal.term_hierarchy h USING(tid);
INSERT INTO wordpress.wp_term_taxonomy (term_id, taxonomy, description, parent) SELECT d.tid `term_id`, 'category' `taxonomy`, d.description `description`, h.parent `parent` FROM drupal.term_data d INNER JOIN drupal.term_hierarchy h USING(tid);
Copy the Drupal posts over to your WordPress database:
INSERT INTO wordpress.wp_posts
  (id, post_author, post_date, post_content, post_title, post_excerpt,
   post_modified, post_type, post_status, to_ping, pinged, post_content_filtered)
SELECT DISTINCT
  n.nid `id`, n.uid `post_author`, FROM_UNIXTIME(n.created) `post_date`,
  r.body `post_content`, n.title `post_title`, r.teaser `post_excerpt`,
  FROM_UNIXTIME(n.changed) `post_modified`, n.type `post_type`,
  IF(n.status = 1, 'publish', 'private') `post_status`, '', '', ''
FROM drupal.node n, drupal.node_revisions r
WHERE n.vid = r.vid;
UPDATE wordpress.wp_posts SET post_type = 'post' WHERE post_type <> 'page' AND post_type <> 'post';
Update post to tag / category relationship:
INSERT INTO wordpress.wp_term_relationships (object_id, term_taxonomy_id) SELECT nid, tid FROM drupal.term_node;
Update tags / category post count:
UPDATE wordpress.wp_term_taxonomy tt SET `count` = (SELECT COUNT(tr.object_id) FROM wordpress.wp_term_relationships tr WHERE tr.term_taxonomy_id = tt.term_taxonomy_id);
The following code is supposed to help fix taxonomy:
UPDATE IGNORE wordpress.wp_term_relationships, wordpress.wp_term_taxonomy SET wordpress.wp_term_relationships.term_taxonomy_id = wordpress.wp_term_taxonomy.term_taxonomy_id WHERE wordpress.wp_term_relationships.term_taxonomy_id = wordpress.wp_term_taxonomy.term_id;
Insert comments to posts:
INSERT INTO wordpress.wp_comments (comment_post_ID, comment_date, comment_content, comment_parent, comment_author, comment_author_email, comment_author_url, comment_approved) SELECT DISTINCT nid, FROM_UNIXTIME(timestamp), comment, thread, name, mail, homepage, ((status + 1) % 2) FROM drupal.comments;
Update post comments count:
UPDATE wordpress.wp_posts SET `comment_count` = (SELECT COUNT(`comment_post_id`) FROM wordpress.wp_comments WHERE wordpress.wp_posts.`id` = wordpress.wp_comments.`comment_post_id`);
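As a quick sanity check at this point, the post counts in the two databases should roughly match (adjust credentials as needed):

mysql -uroot -p -e "SELECT COUNT(*) FROM drupal.node; SELECT COUNT(*) FROM wordpress.wp_posts;"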
If you do not want the Drupal teasers carried over as WordPress post excerpts, you can omit copying the teaser / post_excerpt field above. Or, after the fact:
UPDATE wp_posts SET post_excerpt = NULL WHERE post_excerpt is not null;
My tag category links did not display out of the box, and this fixed that:
In the backend, under Settings --> Permalinks, set a custom value ("topics", for instance) for the category base.
And finally, the comments that I imported are not displaying properly yet. At this point I do not care enough to try to fix it, so caveat emptor.
[1] http://www.jamediasolutions.com/blog/migrating-drupal-to-wordpress.html
posted at: 03:49 | path: /SW/business/WordPress | permanent link to this entry
/Admin/databases/MySQL:
Master-Slave Replication: First Steps
MASTER CONFIGURATION:
There are a lot of search engine hits for this subject; personally I liked Rackspace's contribution[1] the best, and I am going to improvise off of that document. I am also going to assume there is an existing MySQL server that will be the master we replicate. First of all, the new master will need a 'slave' user:
grant replication slave on *.* TO slave_user@'ip-address' identified by 'password';
flush privileges;
And on a Debian / Ubuntu server, make these changes to /etc/mysql/my.cnf:
# bind-address = 127.0.0.1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 15
max_binlog_size = 200M
binlog_ignore_db = mysql
We comment out bind-address to permit non-localhost connections to the MySQL master from the slave. Both master and slave need a defined server-id, and they need to be different. log_bin is where the master records the transactions that the slave will later pick up. The rest should be self-explanatory, except to say that in my setup binlog_ignore_db seems to be ignored; I wish it were not, but so far there have been no major consequences. Then restart MySQL:
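The service name varies a bit by distribution (mysql on Debian / Ubuntu; elsewhere it may be mysqld or mariadb):

systemctl restart mysql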
COPY DATABASES:
Before replication can be started, the databases on both ends need to be exactly the same. On the master:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
The first line puts all master databases into read-only mode, and the second line will print out the file name and position (an integer) at which the binlog's record of writes to the database stopped. It is very important to record these two values, as they will be needed later on the slave. Now dump all the databases except (optionally) mysql and (not optionally) information_schema and performance_schema (the latter two are internal MySQL things that do not replicate). First get a list of all databases:
mysql -uroot -p -e 'show databases' --skip-column-names | tr '\n' ' '
Edit the above list to remove mysql, information_schema and performance_schema, and then dump all databases:
mysqldump -uroot -p --databases list-of-databases | gzip > alldbs.sql.gz
Don't forget to release the read lock on the master and resume normal operation!!!:
UNLOCK TABLES;
Copy alldbs.sql.gz to the slave server.
SLAVE SETUP:
Install mysql-server on the slave, and make these changes to /etc/mysql/my.cnf:
# bind-address = 127.0.0.1
tmpdir = /var/tmp
server-id = 2
and restart MySQL. Allowing non-localhost connections on the slave is actually not necessary for replication, but will be needed later by the tools we use for integrity checking and repairs. The tmpdir must be preserved through reboots, so we have moved it from /tmp to /var/tmp (and installed tmpreaper to keep it clean). Now import the dump of databases from the master:
zcat alldbs.sql.gz | mysql -uroot -p
And start replication:
CHANGE MASTER TO MASTER_HOST='master-ip-address', MASTER_USER='slave_user', MASTER_PASSWORD='password', MASTER_LOG_FILE='filename', MASTER_LOG_POS=123456;
start slave;
where MASTER_LOG_FILE and MASTER_LOG_POS are the values that you recorded on the master when you locked the databases and issued a "SHOW MASTER STATUS" command, and slave_user is the user you created earlier on the master. Now check replication status:
show slave status\G
The field of particular interest is Seconds_Behind_Master. If things are working properly, that integer should shrink quite rapidly as the slave catches up with the master, eventually settling very close to zero, if not at zero. I am almost always seeing zero with my setup.
Something you will want to verify after master and slave are synced is behavior across a slave reboot: you should find that Seconds_Behind_Master quickly returns to zero and replication continues uninterrupted.
[1] http://www.rackspace.com/knowledge_center/article/set-up-mysql-master-slave-replication
posted at: 00:48 | path: /Admin/databases/MySQL | permanent link to this entry
/Admin/databases/MySQL:
MySQL User Management
To create a typical, "ordinary" MySQL account with full access to one database:
mysql> GRANT ALL on databasename.* TO 'typicaluser'@'localhost' IDENTIFIED BY 'thispassword' REQUIRE SSL;
Note that REQUIRE SSL is optional (and requires that SSL first be set up, which it is not by default). "localhost" may be replaced by an IP address, or by '%' to permit access from anywhere. The '*' may be replaced by a specific table name, to grant access to only that table.
To create a root-style superuser, either grant full privileges on all databases, or grant the administrative SUPER privilege:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'privileged'@'51.148.174.80' REQUIRE SSL WITH GRANT OPTION;
mysql> GRANT SUPER ON *.* TO user@'localhost' IDENTIFIED BY 'password';
To see any given user's privileges:
show grants for user@localhost;
Theoretically one can rescind any of those grants with a REVOKE statement. Follow any and all of this privilege modification stuff by:
mysql> FLUSH PRIVILEGES;
And finally, to remove a user:
mysql> DROP USER 'privileged'@'51.148.174.80';
Note that many (including myself, in the past) would say that deleting the user's record from the mysql.user table is how one goes about this. In fact, if you do that and then click the "Privileges" tab in phpMyAdmin, you will probably find the user still listed. Use DROP USER instead, to delete all mention of the user from all the privilege tables.
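To confirm the user really is gone from the privilege tables:

mysql -uroot -p -e "SELECT user, host FROM mysql.user;"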
posted at: 02:21 | path: /Admin/databases/MySQL | permanent link to this entry
/Admin/commandLine/network:
Port/Firewall Testing with Netcat and Socat
To avoid application-related issues and just do a bare-bones test of whether or not a port is open, use netcat or socat. On the server end, turn off the application that should be listening on port 9092, then:
FOR NETCAT:
Start listening on the server side:
netcat -l -p 9092 (traditional netcat)
netcat -l 9092 (OpenBSD netcat, the default on newer Debian / Ubuntu)
Attempt to connect to the port from the client on the other side of the firewall:
nc IPADDRESS 9092
If everything is working, anything you type on either end should be mirrored on the other end.
FOR SOCAT:
Start listening on the server side:
socat TCP4-LISTEN:9092,reuseaddr,fork gopen:/tmp/capture,seek-end=0,append
Now send some text from the client end to the listener:
date | socat STDIO tcp:localhost:9092
In this example the client is on the same machine; replace "localhost" with an IP address or hostname if not. All text sent from the client side will be appended to the file /tmp/capture.
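On the server end you can watch the text arrive in real time:

tail -f /tmp/capture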
posted at: 02:44 | path: /Admin/commandLine/network | permanent link to this entry
/Admin/Apache/PHP:
Best Practices for Custom PHP Parameters
First, do not edit php.ini; instead add a file to
/etc/php5/conf.d/
called something like php_ini_local.ini. Place the parameters you would like to customize, such as upload_max_filesize, in there, and they will take precedence over the values in the php.ini file(s). This way php.ini will also upgrade gracefully to newer versions in the future, without manual intervention.
To test parameter changes (before and after) the simple way, use PHP CLI in a terminal:
php -a
(an interactive shell; test your settings, then Ctrl-d to exit)
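For example, to print a single parameter non-interactively:

php -r 'echo ini_get("upload_max_filesize"), "\n";'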
posted at: 03:47 | path: /Admin/Apache/PHP | permanent link to this entry
/Admin/virtualization/virtualBox:
Increase or Decrease Storage Space for a VirtualBox VM
Increasing is nearly trivial:
VBoxManage modifyhd virtualBoxUbuntu.vdi --resize 15000
for a 15G drive. Then, inside the running VM, use gparted to increase the main partition size to use all of the newly added space.
To decrease: apparently one cannot reduce the upper limit; one can only shrink the space actually used by the image, by freeing up all unused space.
To free up space[1], delete unwanted files in the VM, install the zerofree package, then reboot into recovery mode and mount the root (main) partition read-only (ro):
mount -n -o remount,ro -t ext3 /dev/sda1 /
Then zero out the unused space in the partition:
zerofree -v /dev/sda1
And compact the image:
VBoxManage modifyhd virtualBoxUbuntu.vdi --compact
You should find the file system that the vdi is located in now has a lot more free space.
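A quick before-and-after check of the size of the image file itself:

ls -lh virtualBoxUbuntu.vdi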
[1] http://dantwining.co.uk/2011/07/18/how-to-shrink-a-dynamically-expanding-guest-virtualbox-image/
posted at: 02:38 | path: /Admin/virtualization/virtualBox | permanent link to this entry
/Admin/databases/Oracle:
Resetting Administrative Passwords in Oracle
Login to the server as the "oracle" user (typically).
Find the sqlplus binary, and login as SYS, for instance:
/home/oracle/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus "/ as sysdba"
Then change the SYSTEM password:
SQL> show user
SQL> passw system
SQL> quit

Then login as SYSTEM:

/home/oracle/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus "system/password"
and change the SYS password:
SQL> show user
SQL> passw sys
SQL> quit
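To verify, log straight back in with the new password ("newpassword" below stands for whatever you just set):

/home/oracle/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus "sys/newpassword as sysdba"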
References:
[1] https://rolfje.wordpress.com/2007/01/16/lost-oracle-sys-and-system-password/
posted at: 02:11 | path: /Admin/databases/Oracle | permanent link to this entry
/Admin/databases/MySQL:
MySQL Replication Error Recovery
SIMPLE ERRORS:
Sometimes, when replaying the binary log, the slave will come across something that stops it cold, and it will go no further. One way to get around this is to start over and completely reinstall. Another way is:
mysql> STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;
The result of the above is to simply skip the problematic instruction in the log, and go on. This might result in a table that is slightly out of sync, which can be dealt with by other means (see later in this post).
If the above does not work, and specifically if you see this error:
Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
the "Could not parse relay log event entry" is suggestive of a solution[1]: wipe out the relay log and re-create it from wherever the slave is currently at in the master log:
mysql> STOP SLAVE; CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000012', MASTER_LOG_POS = 148376500; START SLAVE;
where
MASTER_LOG_POS is Exec_Master_Log_Pos, and
MASTER_LOG_FILE is Relay_Master_Log_File
in the current output of "SHOW SLAVE STATUS".
Another possibility[2] is to temporarily ignore replication errors. This is especially useful after the disorderly shutdown of a slave, which is reputed to throw the relay log position out of whack, resulting in a replay of already-run transactions, with many inserts then throwing duplicate key errors. I.e., add this to the slave's my.cnf:
slave-skip-errors = all
and let the slave run for a few minutes before removing the setting again, as sketched below.
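In practice the cycle looks something like this (a sketch; the five-minute wait is arbitrary):

systemctl restart mysql   # with slave-skip-errors = all in my.cnf
sleep 300                 # give the slave a few minutes to skip past the bad transactions
# now remove slave-skip-errors from my.cnf, then:
systemctl restart mysql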
[1] https://stackoverflow.com/questions/12097696/mysql-replication-fails-with-error-could-not-parse-relay-log-event-entry
[2] https://www.percona.com/blog/2009/03/04/making-replication-a-bit-more-reliable/
posted at: 04:46 | path: /Admin/databases/MySQL | permanent link to this entry
/Admin/databases/MySQL:
Integrity Checking / Syncing MySQL Tables / Databases Using Percona Tools
Install the Percona Toolkit:
apt-get install percona-toolkit
in order to gain access to pt-table-sync and pt-table-checksum. Then check for tables that are out of sync with:
/usr/bin/pt-table-checksum --quiet --ignore-databases mysql,performance_schema,information_schema -umaster_user -ppassword
(Note: if databases that are not being replicated are not excluded, pt-table-checksum can hang indefinitely.)
Then on the MySQL master, force a table on the slave into the exact state of that on the master with:
pt-table-sync --execute h=localhost,D=databasename,t=tablename h=slaveIP -umaster_user -ppassword
Or force a database on the slave into the exact state of that on the master with:
pt-table-sync --execute h=localhost h=10.9.93.1 --databases databasename -umaster_user -ppassword
posted at: 04:44 | path: /Admin/databases/MySQL | permanent link to this entry
/Admin/commandLine/misc:
Send E-mail From the Command Line
echo "This will go into the body of the mail." | mail -s "Hello world" you@youremail.comOr, if you wish to send to a remote SMTP server, install the heirloom-mailx package and:
export smtp=host:port
mailx -s "some subject" address@example.com
some random body text
Ctrl-d (to send)
This also works well in a cron job.
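For example, a hypothetical root crontab entry that mails a disk usage report every morning:

0 6 * * * df -h | mail -s "disk report" you@youremail.com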
posted at: 21:57 | path: /Admin/commandLine/misc | permanent link to this entry
/Admin/commandLine/network:
Configuring and Testing Proxies on the Command Line
Much *nix software will obey the proxy environment variables:
export http_proxy=http://1.2.3.4:3128/
export https_proxy=http://1.2.3.4:3128/
for a squid proxy on IP 1.2.3.4, for instance. (Use "unset" to turn these environment variables off later.) You can then test with and without the proxy, for example:
w3m http://google.com/
w3m -no-proxy http://google.com/
Or you can test directly with telnet:
telnet 1.2.3.4 3128
which in the case of squid will, on a successful connection, end with the following output:
Escape character is '^]'.
And then enter the URL you wish to access over the proxy:
GET http://www.google.com/ HTTP/1.1
followed by this line:
Host: www.google.com
followed by two returns, after which the site HTML will be dumped in your terminal.
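curl can run the same test in one line, which is handier for scripting (-x sets the proxy, -I fetches only the headers):

curl -x http://1.2.3.4:3128 -I http://www.google.com/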
posted at: 04:16 | path: /Admin/commandLine/network | permanent link to this entry