Tinc is a rare animal: an actual peer-to-peer VPN that (for *NIX users) is easy to set up, and not widely used, so (as far as I am aware) it is not blocked by anyone, including the GFW (Great Firewall of China).
My main OS is Debian, so this example of a very simple tinc configuration will follow Debian standards in getting two Tinc VPN nodes talking to one another -- typically, one would be your Desktop, and the other would be a server with a public IP address running Squid.
apt-get install tinc

This is all that /etc/tinc contains after install:
Tinc can run multiple daemons, each handling a separate Tinc network on a separate subnet. To have each tinc network started automatically, simply add its network name (one per line) to /etc/tinc/nets.boot.
Each tinc network is represented by a separate directory under /etc/tinc/. Each Tinc network also requires a hosts subdirectory where the public keys for other peers in this network are held. For the simplest possible configuration, the main decisions to make are: a name for the network (here, myvpn), a name for each node (here, mydesk and myremote), a private VPN IP address for each node, and the port each node will listen on.
Let's first configure your desktop:
Create the requisite directory structure:
mkdir -p /etc/tinc/myvpn/hosts

And create the two configuration files for this network:
vi /etc/tinc/myvpn/tinc-up

containing something like this:

modprobe tun
ifconfig myvpn 10.9.3.1 netmask 255.255.0.0

where 10.9.3.1 is the private tinc IP address of the node you are currently configuring. And

vi /etc/tinc/myvpn/tinc.conf

containing this:
Name = mydesk
Device = /dev/net/tun
Port = 19001
ConnectTo = myremote
The Port line is optional; if omitted, tinc will listen on the default port 655.
Create your keys for the myvpn network (each separate tinc network/subnet has different keys) for the desktop node by running this on it:
tincd -K -n myvpn

(If things are correctly configured you should be able to just accept the defaults.) This is the pair of keys you just created:

/etc/tinc/myvpn/rsa_key.priv
/etc/tinc/myvpn/hosts/mydesk

The former is your private key, the latter is your public key. Now edit the public key:

vi /etc/tinc/myvpn/hosts/mydesk

DO NOT modify the key, but add this config block ABOVE the key:
Subnet = 10.9.3.1/32
Address = x.x.x.x 19001
The top line is the VPN private IP of the node; the bottom line is the real-world IP (usually, but not necessarily, public) and port where OTHER peers in this tinc network will find the machine. You will be sharing this file with all other peers in this network, and this config block tells them where to find this node.
IMPORTANT, EASILY OVERLOOKED STEP: fix permissions:
chown -R root: /etc/tinc

The tinc-* files are scripts that must be executable, otherwise your configuration will subtly break:

chmod a+rx /etc/tinc/myvpn/tinc-*

Now start tinc:

systemctl start tinc.service
If all goes well, ifconfig should show a myvpn device with IP 10.9.3.1.
Configure the remote machine:
It is the same as the above desktop config, with these exceptions:
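For example, the server's tinc.conf might look like this (a sketch; the name myremote matches the desktop's ConnectTo line, and the VPN IP 10.9.3.2 used below is an assumption, any free address on the subnet will do):

```
# /etc/tinc/myvpn/tinc.conf on the server.
# No ConnectTo line is needed: the desktop initiates the connection.
Name = myremote
Device = /dev/net/tun
Port = 19001
```

The server's tinc-up uses its own VPN IP (ifconfig myvpn 10.9.3.2 netmask 255.255.0.0), and the config block at the top of /etc/tinc/myvpn/hosts/myremote would read Subnet = 10.9.3.2/32, with an Address line giving the server's public IP.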
Putting it together:
Once tinc is running on the server, copy the public tinc key of each machine into the tinc hosts directory of the other machine.
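Assuming you have SSH access between the two machines, the exchange is a pair of scp commands (the hostname here is a placeholder):

```shell
# push the desktop's public key to the server:
scp /etc/tinc/myvpn/hosts/mydesk root@myremote.example.com:/etc/tinc/myvpn/hosts/
# pull the server's public key back to the desktop:
scp root@myremote.example.com:/etc/tinc/myvpn/hosts/myremote /etc/tinc/myvpn/hosts/
```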
Make sure port 19001 is open in the firewall on the myremote end.
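With plain iptables, that might look like the following (a sketch; note that tinc uses both TCP and UDP on its listening port):

```shell
# allow incoming tinc connections on the configured port
iptables -A INPUT -p tcp --dport 19001 -j ACCEPT
iptables -A INPUT -p udp --dport 19001 -j ACCEPT
```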
Restart tinc on both ends:
systemctl restart tinc.service
and you should now be able to ping the tinc IP of the other machine from both ends.
This delivers to you a secure connection between the desktop and a remote machine. If you would like to proxy browser traffic from mydesk through myremote, just install squid on myremote and enable connections from the tinc subnet. In your browser, set the proxy IP to the myremote tinc IP, port 3128 (the default squid port), and select a proxy type of HTTP (squid is an HTTP proxy, not a SOCKS proxy).
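Enabling connections from the tinc subnet is a two-line addition to /etc/squid/squid.conf (the acl name is arbitrary, and the subnet must match your tinc network):

```
# allow clients on the tinc VPN to use this proxy
acl tincnet src 10.9.0.0/16
http_access allow tincnet
```

The http_access line must appear before squid's final "http_access deny all" rule, or it will never be reached.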
Thanks to Jon for reminding me that there is something better than flaky public proxies and the over-taxed Tor network. Tor is still better if you want end-to-end security and anonymity, but if you just want a secure hop out of the local censored network and after that you do not care, renting a cheap server (as little as $8/month at vpslink, 100G of bandwidth included) is a simple and easy option.
Assuming your remote server is called hostname.com, setting up an encrypted tunnel is as simple as executing this in a local terminal (no root needed, since 1080 is an unprivileged port):
ssh -v -CND 1080 firstname.lastname@example.org
Note that for my own Debian server on the other end of the SSH proxy tunnel, I have found that "username" cannot be "root". Most likely sshd on the server is restricting port forwarding for the root account (for example via AllowTcpForwarding or PermitOpen in a Match block), and it is definitely counter-intuitive: if I tunnel to the root account on my server, when I try to use the tunnel to browse to a website it does not work and the following error is reported:
channel 1: open failed: administratively prohibited: open failed
If I tunnel to an ordinary user account on my server, I get no error and everything works. Go figure.....
To semi-automate this I created an alias in my ~/.bashrc:
alias tunnel="autossh -M 0 -v -CND 1080 email@example.com"
Thereafter, in any terminal, just invoke "tunnel" to create the encrypted tunnel. (To eliminate the password prompt, setup passwordless authentication.)
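Passwordless authentication is the standard key setup (the address below is the placeholder used above):

```shell
# generate a keypair locally, accepting the defaults
# (press Enter at the passphrase prompts for a passwordless key)
ssh-keygen
# install the public key on the server
ssh-copy-id email@example.com
```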
Any browser can use this proxy, by pointing its proxy setting at localhost and port 1080, with SOCKS 5 turned on. The Firefox FoxyProxy plugin makes this infinitely more flexible by allowing the simultaneous configuration of multiple proxies, and providing fine-grained control over which websites are accessed using which proxies.
Once FoxyProxy is installed into Firefox, you have the option of selecting any one proxy (or none) for all of your surfing, or associating certain websites with certain proxies and running FoxyProxy in "Patterns" mode. Since youtube often gets itself blocked, a suitable pattern for youtube would be a wildcard such as *youtube.com*.
While you are at it, install privoxy and make it your default proxy for websites that have not been diverted to Tor or your just created personal proxy. Privoxy blocks a lot of advertisements and information gathering by nosy websites.
Finally, note that
ssh -v -CND 1080 firstname.lastname@example.org
will only allow connections from the localhost. To allow connections from other computers over your local network, start it like this for example:
ssh -v -CND [192.168.8.58]:1080 email@example.com
This will allow any connections to port 1080 on the machine's exterior network interface. To start this as a persistent service at boot, add the following line to /etc/rc.local:
su username -c 'autossh -M 0 -v -CND [192.168.8.58]:1080 firstname.lastname@example.org'&
where username is the account you wish the service to run under.
For anyone with (non-root) SSH access to remote servers, sshuttle provides a very simple alternative to the key-juggling headache of configuring VPN. All you need is root locally (as it needs to modify iptables) and python installed on the opposite end. On the Virtual Machine I am using for some Twitter-related software development, I just turned off OpenVPN and replaced it with the following:
sshuttle --dns -vvr email@example.com 0/0 -x 192.168.8.0/24
and behavior seems to be the same, i.e. everything (including DNS) is sent through the SSH tunnel, except traffic going to the local 192.168.8.0/24 subnet. But it seems there is no automatic restart if the connection is broken; sshuttle just errors out. Enter restartd, with this line in /etc/restartd.conf:
shuttleTunnel "sshuttle" "sshuttle --dns -r firstname.lastname@example.org 0/0 -x 192.168.8.0/24" " "
(Obviously I am using an SSH key here for passwordless server login....)
It is worth reading the sshuttle documentation to understand that this tool is designed to preserve packet loss, and thus TCP's automatic speed throttling, across the tunnel (avoiding the tcp-over-tcp problem), thus improving overall performance.
Aka. How to bore through the Great Firewall if you do not want to use an SSH tunnel.
The OpenVPN documentation covers all of this, but it is not altogether straightforward to read, and it is missing some necessary detail.
This will get OpenVPN basically working and connected for you. First create your keys:
cd /usr/share/doc/openvpn/examples/easy-rsa/2.0
. ./vars
./clean-all
./build-ca
./build-key server
./build-key x60s
./build-dh

(x60s here is the client certificate name; it must match the cert/key names in client.conf below.)
Here I would add a warning: when creating your certificates above, you must enter something at the "Common Name" prompt. My first time I just accepted all the defaults and got a connection error when I tried to start OpenVPN on my client. The second time, with a "Common Name" on the server certificates, everything just worked.
Then distribute the keys to server and client(s) per the references.
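For example, from the easy-rsa directory where the keys were written, distribution might look like this (the client hostname is a placeholder; the client certificate name matches the client.conf shown below):

```shell
# the client needs the CA certificate plus its own cert and key
scp keys/ca.crt keys/x60s.crt keys/x60s.key root@client.example.org:/etc/openvpn/
# the server needs the CA cert, its own cert/key, and the DH parameters
cp keys/ca.crt keys/server.crt keys/server.key keys/dh1024.pem /etc/openvpn/
```

Keep the .key files readable by root only (chmod 600).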
This will get the VPN client to the VPN server. However, we want to route *all* network traffic on the client through VPN and out to the internet. There are two components to this: some additional OpenVPN configuration, and some routing configuration on the server. On my VPN server, this is my /etc/openvpn/server.conf:
port 1348
proto udp
dev tap
ca ca.crt
cert server.crt
key server.key  # This file should be kept secret
dh dh1024.pem
server 10.10.10.0 255.255.255.0  # vpn subnet
ifconfig-pool-persist ipp.txt
keepalive 10 120
comp-lzo
user nobody
;group nobody
persist-key
persist-tun
verb 10
mute 20
client-to-client
client-config-dir ccd
push "route 188.8.131.52 255.255.0.0"
;push "route 192.168.1.0 255.255.255.0"  # home subnet
push "redirect-gateway def1"
push "dhcp-option DNS 10.10.10.1"
This should be the same as the reference server.conf, except that the "group nobody" line did not work on my Ubuntu Lucid server for some reason, and the last two "push" lines are what is needed on the server end to tell connecting clients to redirect all network traffic to the VPN. (Though I am not convinced that last dhcp-option line is having any effect on my Debian box at the moment....) Also note that if your network interface is configured by DHCP, the connection may die periodically because the client is unable to communicate with your DHCP server.
On my VPN client, this is my /etc/openvpn/client.conf:
client
dev tap
proto udp
remote hostname.com 1348  # your server's address and the port configured above
resolv-retry infinite  # this is necessary for DynDNS
nobind
user nobody
;group nobody
persist-key
persist-tun
ca ca.crt
cert x60s.crt
key x60s.key
comp-lzo
verb 4
mute 20
which is the same as the reference client.conf, except for commenting out "group nobody", which did not work on my Debian Testing client machine.
Now for routing on the server. What a PITA. I tried at some length to get raw iptables to do the job, but in the end turned to my old faithful, firehol. Here is my /etc/firehol/firehol.conf on my server, which enables the routing of traffic between tap0 (VPN) and eth0 (internet):
version 5

# interface eth0 internet
interface eth0 internet
    protection strong 10/sec 10
    server "https http icmp ssh" accept
    server openvpn accept
    server ident reject with tcp-reset
    client all accept

interface tap0 vpn
    server all accept
    client all accept

router internet2vpn inface eth0 outface tap0
    masquerade reverse
    client all accept
    server ident reject with tcp-reset
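For anyone who wants to persist with raw iptables, the equivalent of that router section is roughly the following (a sketch, not a tested drop-in; interface names match the firehol config above):

```shell
# let the kernel forward packets between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# NAT VPN clients out through the internet interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# forward traffic from the VPN to the internet, and replies back
iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o tap0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```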
(After the fact, I ran into this very interesting post....)
My SSH Socks5 proxy works great, especially with the addition of autossh, but unlike most web browsers and Pidgin, many applications (particularly on the command line) just do not have proxy support built in.
Proxychains is a wrapper that redirects all network traffic through a designated proxy. To get it working is very simple. After installing, I made this change to the bottom of /etc/proxychains.conf:
# defaults set to "tor"
# socks4 127.0.0.1 9050
socks5 127.0.0.1 1082
i.e. I commented out the default Tor proxy and added my local SSH socks5 proxy, which I have placed on port 1082.

Then, for instance, to send my gpodder podcatcher through the SSH tunnel, I just start gpodder in a terminal as follows:

proxychains gpodder
All of gpodder's network traffic (DNS queries included) then goes out via SSH through my out-of-country server. And now I have restored access to many blocked podcasts, PGP key servers, and no doubt many other things as they come up. I have been looking for something like this for years.
Sometimes the bandwidth is so bad here (or is it the "Great Firewall" deliberately trying to break my connection?) that my SSH tunnel frequently fails. Very inconvenient: I do not notice until I need it, and then I have to do a manual restart and wait for it to connect (and that wait can sometimes be significant when bandwidth sucks...).
I have setup an alias in my .bashrc as follows:
alias tunnel="autossh -M 0 -v -CND 1082 email@example.com"
To start the tunnel at the beginning of the day, I just type "tunnel" in any terminal. And whenever the ssh connection is broken, autossh automatically (and apparently intelligently) restarts it. So far, so good.