Last I checked, reading about LetsEncrypt can make one a bit dizzy, but if you follow these steps it is really very straightforward:
letsencrypt certonly --webroot -d sub.domain.net --webroot-path=/var/www/html/
letsencrypt renew --webroot --webroot-path=/var/www/html/ | mail -s 'renew LetsEncrypt SSL' email@example.com
On the subject of renewals: as I recall, every issued certificate expires after three months and becomes eligible for renewal after two months, so attempting renewal every two weeks seems just about right.
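Renewal is easily automated with cron; a sketch crontab entry, reusing the renew-and-mail pipeline from above (the schedule, running on the 1st and 15th of each month, is just one reasonable choice, and the email address is a placeholder):

```
# /etc/crontab -- attempt renewal twice a month and mail the result
0 3 1,15 * * root letsencrypt renew --webroot --webroot-path=/var/www/html/ 2>&1 | mail -s 'renew LetsEncrypt SSL' email@example.com
```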
Tinc is a rare animal: an actual peer-to-peer VPN that (for *NIX users) is easy to set up, not widely used, and so (as far as I am aware) not blocked by anyone, including the GFW (Great Firewall of China).
My main OS is Debian, so this example of a very simple tinc configuration will follow Debian standards in getting two Tinc VPN nodes talking to one another -- typically, one would be your Desktop, and the other would be a server with a public IP address running Squid.
apt-get install tinc

This is all that /etc/tinc contains after install: a single file, nets.boot.
Tinc can run multiple daemons, each handling a separate Tinc network on a separate subnet. To have each tinc network started automatically, simply add its network name, one per line, to nets.boot.
Each tinc network is represented by a separate directory under /etc/tinc/. Each Tinc network also requires a hosts subdirectory where the public keys for other peers in this network are held. For the simplest possible configuration, the main decisions to make are: a network name (here myvpn), a name for each node (here mydesk and myremote), a private subnet (here 10.99.0.0/16), and a port for each node to listen on (here 19001).
Let's first configure your desktop:
Create the requisite directory structure:
mkdir -p /etc/tinc/myvpn/hosts

And create the two configuration files for this network:
vi /etc/tinc/myvpn/tinc-up

containing something like this:

#!/bin/sh
modprobe tun
ifconfig myvpn 10.99.3.1 netmask 255.255.0.0

10.99.3.1 is the private tinc IP address of the node you are currently configuring. And:
vi /etc/tinc/myvpn/tinc.conf

containing this:
Name = mydesk
Device = /dev/net/tun
Port = 19001
ConnectTo = myremote
The Port line is optional, if omitted tinc will listen on the default port 655.
Create your keys for the myvpn network (each separate tinc network/subnet has different keys) for the desktop node by running this on it:
tincd -K -n myvpn

(If things are correctly configured you should be able to just accept the defaults.) This is the pair of keys you just created:
/etc/tinc/myvpn/rsa_key.priv
/etc/tinc/myvpn/hosts/mydesk

The former is your private key, the latter holds your public key. Now edit the public key:
vi /etc/tinc/myvpn/hosts/mydesk

DO NOT modify the key, but add this config block ABOVE the key:
Subnet = 10.99.3.1/32
Address = x.x.x.x 19001
The top line is the private VPN IP of the node; the bottom line is the real-world (typically public) IP and port where OTHER peers in this tinc network will find the machine. You will be sharing this file with all other peers in this network, and this config block tells them where to find this node.
IMPORTANT, EASILY OVERLOOKED STEP: fix permissions:
chown -R root: /etc/tinc/
chmod a+rx /etc/tinc/myvpn/tinc-*

The tinc-* files are scripts that must be executable, otherwise your configuration will subtly break. Now start tinc:
systemctl start tinc.service
If all goes well, ifconfig should show a myvpn device with IP 10.99.3.1.
Configure the remote machine:
It is the same as the above desktop config, with these exceptions: Name = myremote in tinc.conf, no ConnectTo line (the desktop initiates the connection), and a different tinc IP (say 10.99.3.2) in tinc-up and in the Subnet line of hosts/myremote.
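As a concrete sketch (the names, port, and subnet are the running assumptions of this example; the 10.99.3.2 address is my choice for the second node, and x.x.x.x stands for the server's actual public IP), the remote side's files might look like:

```
# /etc/tinc/myvpn/tinc.conf on myremote -- no ConnectTo needed,
# since mydesk initiates the connection
Name = myremote
Device = /dev/net/tun
Port = 19001

# /etc/tinc/myvpn/tinc-up on myremote
modprobe tun
ifconfig myvpn 10.99.3.2 netmask 255.255.0.0

# config block above the key in /etc/tinc/myvpn/hosts/myremote
Subnet = 10.99.3.2/32
Address = x.x.x.x 19001
```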
Putting it together:
Once tinc is running on the server, copy the public tinc key of each machine into the tinc hosts directory of the other machine.
Make sure port 19001 is open in the firewall on the myremote end.
Restart tinc on both ends:
systemctl restart tinc.service
and you should now be able to ping the tinc IP of the other machine from both ends.
This gives you a secure connection between your desktop and a remote machine. If you would like to proxy browser traffic from mydesk through myremote, just install squid on myremote and enable connections from the tinc subnet. In your browser, set the proxy IP to the myremote tinc IP, port 3128 (the default squid port), and configure it as an HTTP proxy (squid is an HTTP proxy, not SOCKS).
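Enabling connections from the tinc subnet in squid boils down to an ACL; a sketch for /etc/squid/squid.conf, assuming the 10.99.0.0/16 tinc subnet used in this example (the ACL name is arbitrary):

```
# allow clients arriving over the tinc VPN to use the proxy
acl tincnet src 10.99.0.0/16
http_access allow tincnet
```

The http_access allow line must appear before squid's final deny all rule.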
There are a lot of search engine hits for this subject; I liked Rackspace's contribution the best personally, and I am going to improvise off of that document. I am going to assume there is an existing MySQL server that will be the master that we will replicate. First of all, the master will need a 'slave' user:
grant replication slave on *.* TO slave_user@'ip-address' identified by 'password';
And on a Debian / Ubuntu server, make these changes to /etc/mysql/my.cnf:
# bind-address = 127.0.0.1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 15
max_binlog_size = 200M
binlog_ignore_db = mysql
Then restart MySQL. We comment out bind-address to permit non-localhost connections to the MySQL master from the slave. Both master and slave need a defined server-id, and they need to be different. log_bin is where the master records transactions that the slave will later pickup. The rest should be self-explanatory, except to say that in my setup binlog_ignore_db seems to be ignored. I wish it was not, but so far no major consequences.
Before replication can be started, the databases on both ends need to be exactly the same. On the master:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
The first line puts all master databases into read-only mode, and the second line will print out the file name and position (an integer) at which the binlog's record of writes to the database stopped. It is very important to record these two values, as they will be needed later on the slave. Now dump all the databases except (optionally) mysql and (not optionally) information_schema and performance_schema (the latter two are internal MySQL things that do not replicate). First get a list of all databases:
mysql -uroot -p -e 'show databases' --skip-column-names | tr '\n' ' '
Edit the above list to remove mysql, information_schema and performance_schema, and then dump all databases:
mysqldump -uroot -p --databases list-of-databases | gzip > alldbs.sql.gz
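Rather than editing the list by hand, the system schemas can be filtered out with grep. A sketch, demonstrated here on canned printf output so it can be run anywhere; in real use, replace the printf with the mysql command from above:

```shell
# Remove the non-replicating system schemas from a database list.
printf 'information_schema\nmydb\nmysql\nperformance_schema\nwordpress\n' |
  grep -Ev '^(mysql|information_schema|performance_schema)$' |
  tr '\n' ' '
# prints: mydb wordpress
```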
Don't forget to release the read lock on the master and resume normal operation!!!

UNLOCK TABLES;
Copy alldbs.sql.gz to the slave server.
Install mysql-server on the slave, and make these changes to /etc/mysql/my.cnf:
# bind-address = 127.0.0.1
tmpdir = /var/tmp
server-id = 2
and restart MySQL. Allowing non-localhost connections on the slave is actually not necessary for replication, but will be necessary later for the tools we will use for integrity checking and repairs. The tmpdir must be preserved through reboots, so we have moved it from /tmp to /var/tmp (and installed tmpreaper to keep it clean). Now import the dump of databases from the master:
zcat alldbs.sql.gz | mysql -uroot -p
And start replication:
CHANGE MASTER TO MASTER_HOST='master-ip-address', MASTER_USER='slave_user', MASTER_PASSWORD='password', MASTER_LOG_FILE='filename', MASTER_LOG_POS=123456;
START SLAVE;
where MASTER_LOG_FILE and MASTER_LOG_POS are the values that you recorded on the master when you locked the databases and issued a "SHOW MASTER STATUS" command, and slave_user is the user you created earlier on the master. Now check replication status:
show slave status\G
The field of particular interest is Seconds_Behind_Master. If things are working properly that integer should become smaller quite rapidly as the slave catches up with the master. Eventually that integer should get down very close to zero, if not zero. I am almost always seeing zero with my setup.
Something you will want to verify after master and slave are synced is a slave reboot. You should find that after a reboot Seconds_Behind_Master quickly returns to zero and replication continues uninterrupted.
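Replication lag also lends itself to scripted monitoring: the Seconds_Behind_Master field can be pulled out with awk. A sketch on canned output; in real use, the printf would be replaced by the mysql client running "show slave status\G":

```shell
# Extract the Seconds_Behind_Master value from 'show slave status\G' output.
printf '        Seconds_Behind_Master: 0\n' |
  awk -F': *' '/Seconds_Behind_Master/ {print $2}'
# prints: 0
```

A cron job could compare that number against a threshold and mail a warning.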
To create a typical, "ordinary" MySQL account with full access to one database:
mysql> GRANT ALL on databasename.* TO 'typicaluser'@'localhost' IDENTIFIED BY 'thispassword' REQUIRE SSL;
Note that the "REQUIRE SSL" is optional (and requires that SSL be first setup, which it is not by default). "localhost" may be replaced by an IP address, or by a '%' to permit access from anywhere. The '*' may be replaced by a specific table name, to grant access to only one table.
To create a root / superuser, use one of these methods (the first grants full privileges on everything, plus the ability to grant to others; the second grants only the single SUPER administrative privilege):
mysql> GRANT ALL PRIVILEGES ON *.* TO 'privileged'@'188.8.131.52' REQUIRE SSL WITH GRANT OPTION;
mysql> GRANT SUPER ON *.* TO user@'localhost' IDENTIFIED BY 'password';
To see any given user's privileges:
show grants for user@localhost;
Theoretically one can rescind any of those grants with a REVOKE statement. Follow any and all of this privilege modification stuff by:
mysql> FLUSH PRIVILEGES;
And finally, to remove a user:
mysql> DROP USER 'privileged'@'184.108.40.206';
Note that many (including myself in the past) would say that deleting the user record from the mysql.user table is how one goes about this. In fact, if you do this, and then click the "Privileges" tab in phpmyadmin, you will probably find that user still listed. Use "DROP USER" instead, to delete all mention of this user from all privilege tables.
To avoid application-related issues and just do a bare-bones test of whether or not a port is open, use netcat or socat. On the server end, turn off the application that should be listening on port 9092, then:
Start listening on the server side:
netcat -l -p 9092 (traditional netcat)
netcat -l 9092 (OpenBSD netcat)
Attempt to connect to the port from the client on the other side of the firewall:
nc IPADDRESS 9092
If everything is working, anything you type on either end should be mirrored on the other end.
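As an aside, for a scripted yes/no answer bash can attempt the TCP connect itself through its /dev/tcp pseudo-device (a bash-ism, not POSIX sh; the host and port here are placeholders):

```shell
# Try to open a TCP connection; report whether anything is listening.
if bash -c 'exec 3<>/dev/tcp/127.0.0.1/9092' 2>/dev/null; then
  echo "port open"
else
  echo "port closed"
fi
```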
Start listening on the server side:
socat TCP4-LISTEN:9092,reuseaddr,fork gopen:/tmp/capture,seek-end=0,append
Now send some text from the client end to the listener:
date | socat STDIO tcp:localhost:9092
In this example, the client is on the same machine, replace "localhost" with an IP or DNS if not. All text sent from the client side will be appended to the file /tmp/capture.
First, do not edit php.ini; instead add a file to the PHP conf.d directory, called something like php_ini_local.ini, for instance. Place parameters you would like to customize, like upload_max_filesize, in there, and they will take precedence over those in the php.ini file(s). And this way php.ini will in future also upgrade gracefully to newer versions without manual intervention.
To test parameter changes (before and after) the simple way, use the PHP CLI in a terminal:

# php -a
php > echo ini_get('upload_max_filesize');

and exit the interactive shell with Ctrl-d.
Increasing is nearly trivial:
VBoxManage modifyhd virtualBoxUbuntu.vdi --resize 15000
for a 15G drive. Then inside the running VM use gparted to increase the main partition size to use all the newly added space.
To decrease, one can only reduce the amount of variable space used by the image by freeing up all unused space, apparently one cannot reduce the upper limit.
To free up space, delete unwanted files on the VM, install the zerofree package, then reboot into recovery mode and mount the root (main) partition ro:
mount -n -o remount,ro -t ext3 /dev/sda1 /
Then zero out the unused space in the partition:
zerofree -v /dev/sda1
And compact the image:
VBoxManage modifyhd virtualBoxUbuntu.vdi --compact
You should find the file system that the vdi is located in now has a lot more free space.
Login to the server as the "oracle" user (typically).
Find the sqlplus binary, and login as SYS, for instance:
/home/oracle/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus "/ as sysdba"
Then change the SYSTEM password:

SQL> show user
SQL> passw system

Then login as SYSTEM and change the SYS password:

SQL> show user
SQL> passw sys
Sometimes when replaying the binary log, the slave will come across something that stops it cold, and it will go no further. One way to get around this is to start over and completely reinstall. Another way is:
mysql> STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;
The result of the above is to simply skip the problematic instruction in the log, and go on. This might result in a table that is slightly out of sync, which can be dealt with by other means (see later in this post).
If the above does not work, and specifically if you see this error:
Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
the "Could not parse relay log event entry" is suggestive of a solution: wipe out the relay log and re-create it from wherever the slave is currently at in the master log:
mysql> STOP SLAVE; CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000012', MASTER_LOG_POS = 148376500; START SLAVE;
MASTER_LOG_POS is Exec_Master_Log_Pos, and
MASTER_LOG_FILE is Relay_Master_Log_File
in the current output of "SHOW SLAVE STATUS".
Another possibility, especially after the disorderly shutdown of a slave (which is reputed to throw the position in the relay log out of whack, resulting in a replay of already-run transactions, with many inserts then throwing duplicate key errors), is to temporarily ignore replication errors. I.e., add this to my.cnf on the slave:
slave-skip-errors = all
and restart MySQL. Let the slave run for a few minutes, then remove the option and restart MySQL again.
Install the Percona Toolkit:
apt-get install percona-toolkit
in order to gain access to pt-table-sync and pt-table-checksum. Then check for tables that are out of sync with:
/usr/bin/pt-table-checksum --quiet --ignore-databases mysql,performance_schema,information_schema -umaster_user -ppassword
(Note: if databases that are not being replicated are not excluded from pt-table-checksum, it can hang up indefinitely.)
Then on the MySQL master, force a table on the slave into the exact state of that on the master with:
pt-table-sync --execute h=localhost,D=databasename,t=tablename h=slaveIP -umaster_user -ppassword
Or force a database on the slave into the exact state of that on the master with:
pt-table-sync --execute h=localhost h=10.9.93.1 --databases databasename -umaster_user -ppassword
echo "This will go into the body of the mail." | mail -s "Hello world" firstname.lastname@example.org

Or, if you wish to send via a remote SMTP server, install the heirloom-mailx package and:
mailx -s "some subject" email@example.com
some random body text

End the body with Ctrl-d. (With heirloom-mailx, the remote SMTP server is set via "set smtp=smtp://your.smtp.server" in ~/.mailrc.)
This also works well in a cron job.
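For instance, a crontab entry mailing the output of a nightly job (the script path is a hypothetical placeholder):

```
# /etc/crontab -- mail the output of a nightly job
30 2 * * * root /usr/local/bin/nightly-backup.sh 2>&1 | mail -s 'nightly backup' email@example.com
```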
Much *nix software will obey the proxy environment variables:

export http_proxy=http://220.127.116.11:3128/
export https_proxy=http://220.127.116.11:3128/

for a squid proxy on IP 220.127.116.11, for instance. (Use "unset" to turn these environment variables off later.) You can then test with and without the proxy thusly, for example:
w3m -no-proxy http://google.com/
Or you can test directly with telnet:
telnet 18.104.22.168 3128
which in the case of squid will result in output ending with:

Escape character is '^]'.
And then enter the URL you wish to access over the proxy:
GET http://www.google.com/ HTTP/1.1

followed by this line:

Host: www.google.com

followed by two returns, and the site html will be dumped in your terminal.
In Oracle, there is an old way to import ("imp") and a new way ("impdp"). Here I have examples of both.
There is a one-to-one relationship between users and schemas (a schema being roughly what other systems call a "database"), but not all users have schemas. The first step in creating a new schema/database is to create a new user (the C## prefix is required):
SQL> create user C##anewuser identified by anewuser;
Grant some minimal privileges:
SQL> grant connect, create session, imp_full_database to C##anewuser;
SQL> alter user C##anewuser quota 200M on users;
Prepare to import the dump file by changing to the Oracle bin directory:

cd /home/oracle/app/oracle/product/12.1.0/dbhome_1/bin
Import an old-style dump file:
./imp \"/ as sysdba\" file=/home/oracle/app/oracle/admin/orcl/dpdump/dumpfile.dmp fromuser=olduser touser=C##anewuser log=/tmp/dumpfile.log;
Import a new-style dump file:
./impdp C##newuser/password dumpfile=expdp_dumpfile.dmp remap_schema=olduser:C##newuser log=expdp_dumpfile.log full=y;
Some gotchas..... Both of the above require knowing the old schema name, which is not terribly easy to figure out. Try leaving out the fromuser/touser/remap_schema stuff on the first pass and just look at the error messages in the log. Sometimes an import is looking for a tablespace that does not exist. Create the stupid thing:
SQL> CREATE TABLESPACE ONBASETEMP DATAFILE 'ONBASETEMP.dat' SIZE 40M ONLINE;
and then re-run the import.
Now verify that the new schema shows up in the list of existing schemas:
SQL> select distinct owner from dba_objects;
Verify tables exist in newly imported schema:
SQL> SELECT DISTINCT OWNER, OBJECT_NAME FROM DBA_OBJECTS WHERE OBJECT_TYPE = 'TABLE' AND OWNER = 'C##anewuser';
Verify a table has columns:
SQL> describe C##anewuser.sometablename;
Verify a table has contents:
SQL> select * from C##anewuser.sometablename;
Find the new disk:

lsblk

Initialize the disk (here /dev/sdb) for LVM use:

pvcreate /dev/sdb

You can now see the new physical LVM volume with pvs. Add the physical volume to an existing volume group:
vgextend vg_oracle /dev/sdb
vgs should now show that the vg_oracle volume group now has a bunch of new free space (VFree). This space can be added to an existing logical volume with lvextend, or it can be used to create new logical volumes.
Some good advice in an age of people, companies, and governments avaricious to acquire / store / use / sell your personal information: use encryption wherever possible when communicating over networks.
Here is a nice concise guide to the basics of getting SSL working on MySQL.
First login to MySQL and check for SSL support:
# mysql -p
mysql> show variables like '%ssl%';
You should see "DISABLED" at this point, since you have not set it up yet. (If the response says anything other than "DISABLED" or "YES", then your MySQL server has probably been compiled without SSL support. Not a problem on Debian....)
Then Enable SSL Support in the Server:
FOR MySQL 5.5 YOU MUST USE A VERSION OF OPENSSL LESS THAN 1.0 TO CREATE THE FOLLOWING CERTIFICATES. Otherwise, when you try to login with the MySQL client using SSL, you will see this kind of error:
# mysql -uuser -ppassword --ssl-ca=/etc/mysql/ca-cert.pem
ERROR 2026 (HY000): SSL connection error: protocol version mismatch
I found an Ubuntu Lucid server which had a sufficiently old version of openssl to do the job.
First create the CA certificate:
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3601 -key ca-key.pem > ca-cert.pem
Now create the server certificate:
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout mysql-server-key.pem > mysql-server-req.pem
openssl x509 -req -in mysql-server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > mysql-server-cert.pem
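Before wiring these files into MySQL it is worth sanity-checking that the server certificate really chains back to the CA. A sketch in a throwaway directory, using the same commands as above; the -subj arguments are added here only to suppress the interactive prompts:

```shell
# Re-create a scratch CA and server cert, then verify the chain.
tmp=$(mktemp -d)
cd "$tmp"
openssl genrsa 2048 > ca-key.pem 2>/dev/null
openssl req -new -x509 -nodes -days 3601 -key ca-key.pem -subj '/CN=scratch-CA' > ca-cert.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout mysql-server-key.pem -subj '/CN=mysql-server' > mysql-server-req.pem 2>/dev/null
openssl x509 -req -in mysql-server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > mysql-server-cert.pem 2>/dev/null
openssl verify -CAfile ca-cert.pem mysql-server-cert.pem
# prints: mysql-server-cert.pem: OK
```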
Now fix up the permissions of the SSL certs if necessary, copy them to /etc/mysql/, and add this to the [mysqld] block of your /etc/mysql/my.cnf:

ssl-ca=/etc/mysql/ca-cert.pem
ssl-cert=/etc/mysql/mysql-server-cert.pem
ssl-key=/etc/mysql/mysql-server-key.pem
Note that client certificates are not necessary unless you WANT the server to authenticate the client. Also note that on Debian MySQL logging seems to go to syslog, not to the visible /var/log/mysql* log files.
After restarting MySQL,
mysql> show variables like 'have_ssl';
should result in a "YES".
Now Get MySQL clients Working:
Test a client using SSL on MySQL localhost. Create a temporary user for the test:
mysql> GRANT ALL on databasename.* TO 'ssluser'@'localhost' IDENTIFIED BY 'thispassword' REQUIRE SSL;

From a terminal on the MySQL server, try logging in with this user:
mysql -ussluser -p --ssl-ca=/etc/mysql/ca-cert.pem
Once logged in, issue this MySQL command:
mysql> SHOW STATUS LIKE 'Ssl_cipher';
If you get anything other than a blank in the 'Value' column, SSL is working! Delete the test user:
mysql> DROP USER 'ssluser'@'localhost';

And still on the MySQL server, create a user for remote access, from a specific IP address only:
mysql> GRANT ALL on databasename.* TO 'SSLremote'@'22.214.171.124' IDENTIFIED BY 'thispassword' REQUIRE SSL;

On the remote client (IP address 126.96.36.199), presumably your desktop, try to login over SSL:
mysql -uSSLremote -pthispassword -hwww.mysqlserverhost.com --ssl-ca=/home/user/ca-cert.pem
If it works, mission accomplished!