First, do not edit php.ini. Instead, add a file to the PHP conf.d directory (/etc/php5/conf.d/ on Debian/Ubuntu) called something like php_ini_local.ini. Place the parameters you would like to customize, such as upload_max_filesize, in there, and they will take precedence over those in the php.ini file(s). This way php.ini will also upgrade gracefully to newer versions in the future without manual intervention.
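As a sketch, such an override file might look like this (the values shown are illustrative, not recommendations):

```ini
; /etc/php5/conf.d/php_ini_local.ini -- local overrides
; Files in conf.d are read after php.ini, so these values win.
upload_max_filesize = 50M
post_max_size = 55M
```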
To test parameter changes (before and after) the simple way, use the PHP CLI in a terminal:

# php -a
php > echo ini_get('upload_max_filesize');

(exit the interactive shell with Ctrl-d)
Note that one can monitor the effect of these changes using Google's PageSpeed Insights page at https://developers.google.com/speed/pagespeed/insights
One of the cheapest things you can do server-side is leverage browser caching with mod_expires. Even if a file is already in the browser cache, the browser will still query the server on every hit to see if the file has changed. No, the file is not re-transferred if it has not changed, but yes, there is still a demand made of the server. Unless you turn on mod_expires, which tells the browser in advance how long the file is good for. And thereafter, until the file expires, the browser will not bother the server to check if it has changed. Other than turning it on, there is little to configure with mod_expires except possibly to specify the expiry times in your VirtualHost or in a global config file in the conf.d directory, for example:
ExpiresActive On
ExpiresDefault "access plus 1 week"
There are three commonly mentioned possibilities for PHP opcode caching: eAccelerator, APC, and XCache. I do not see much to choose between them performance-wise, but APC seems to be a very native PHP project, so I will go with that:
apt-get install php-apc
The first thing you will want to do is copy apc.php into some web root and have a look. Apparently the most important thing to check there is "Cache full count", which you want to keep pretty close to zero. To get there, I increased my cache size in /etc/php5/conf.d/apc.ini to:
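The actual value is not preserved here; purely as an illustration (the number is an assumption, not the post's setting):

```ini
; /etc/php5/conf.d/apc.ini
extension=apc.so
; older APC releases expect a bare number of megabytes, e.g. apc.shm_size=64
apc.shm_size=64M
```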
Get and install your preferred version of the module from , e.g.:
dpkg -i mod-pagespeed-stable_current_amd64.deb
Add this line to the VirtualHost:
php_value error_reporting 0
I am running my personal server on a rather small Rackspace Cloud machine with 256M of memory. A default LAMP install (plus a couple of Python web apps) seems to occasionally choke and force a reboot (my websites are usually not THAT busy). Let's try to avoid that:
The first thing I did was install a PHP opcode cache called php5-xcache that I have used before. It is a zero-configuration slam dunk, and my spidey sense suggests that it produces a tangible speed improvement.
All of the reading I am doing (see references below) prominently mentions unloading unused Apache modules so as to reduce the memory footprint of Apache processes, so this seems like the first place to start. This is a minimal list of Apache modules I can get by on at the moment (not requiring hacking my default Ubuntu server Apache configuration too much):
alias.conf alias.load authz_host.load cgi.load deflate.conf deflate.load dir.conf dir.load headers.load mime.conf mime.load php5.conf php5.load rewrite.load ssl.conf ssl.load status.conf status.load wsgi.conf wsgi.load
Now into the murky world of Apache MPMs (Multi-Processing Modules). Reading the "Compile-Time Configuration Issues" section in , one is left with the definite impression that the worker MPM would be a better choice than the prefork MPM in a limited-memory situation. So why does prefork always seem to be installed? A little experiment tells all:
apt-get install apache2-mpm-worker
will result in the removal not only of apache2-mpm-prefork, but also of all PHP modules!  has the answer:
"Apache PHP module is reputed to be unstable in multi-threaded environments"
 goes on to describe a somewhat complicated procedure for making the multi-threaded apache2-mpm-worker module work with PHP. Maybe later, if all else fails. Or maybe switch to lighttpd as a web server. Now back to the stodgy old Apache prefork MPM....
 gives some nice details on how one might go about optimizing the prefork MPM configuration. top is telling me that my Apache processes are currently consuming roughly 30+M per process. My current default configuration is this:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0

which I am going to crank WAY down to this:

StartServers 1
MinSpareServers 1
MaxSpareServers 2
MaxClients 5
MaxRequestsPerChild 200
Interesting that the default value of MaxRequestsPerChild was zero (i.e. never kill a child process). Apparently the point of setting this value to non-zero is to periodically force Apache processes to be killed and recreated, to combat memory bloat (apparently they do not release memory?). It is also worth noting that my server's CPU usage tends to hang out near zero, while memory is always maxed out and swapping is a regular cause of problems. In top I have seen a single Apache process top out at as high as 50% of memory. I think killing processes frequently to reduce memory overhead is probably the way to go.
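The arithmetic behind a MaxClients of 5 can be sketched like this; the reserved figure is my assumption, the other two numbers come from the text above:

```shell
# Choose MaxClients so the worst-case Apache footprint fits in spare RAM.
total_mem_mb=256      # machine RAM
reserved_mb=100       # assumption: OS, database, Python apps
per_process_mb=30     # per-process size observed in top

echo $(( (total_mem_mb - reserved_mb) / per_process_mb ))   # prints 5
```

With MaxRequestsPerChild 200 on top of that, any process that does bloat is recycled after at most 200 requests.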
And finally,  recommends setting a really small KeepAliveTimeout of 2. And  recommends keeping "Timeout" small, since I have so few processes and do not want them all to be waiting for a timeout at the same time. So I went for a Timeout of only 10 seconds.
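In the Apache configuration those two settings are just:

```apache
KeepAliveTimeout 2
Timeout 10
```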
These  would indicate that mod_wsgi is the best way to serve Django sites using Apache.
mod_wsgi does not seem to exist in the CentOS 5 repositories. Using this as my guide, I installed from source as follows:
cd /usr/lib/python2.4/config
ln -s ../../../lib64/libpython2.4.so .
cd
wget http://modwsgi.googlecode.com/files/mod_wsgi-3.3.tar.gz
tar -xf mod_wsgi-3.3.tar.gz
cd mod_wsgi-3.3
yum install httpd-devel
./configure --with-python=/usr/bin/python2.4
make
make install
That all seemed to go well, and now I see this file: /usr/lib64/httpd/modules/mod_wsgi.so
Turn this module on in Apache, by adding the following lines to /etc/httpd/conf/httpd.conf:
LoadModule wsgi_module /usr/lib64/httpd/modules/mod_wsgi.so
AddHandler wsgi-script .wsgi
After "/etc/init.d/httpd restart" apache is still working. A very good sign.....
It is worth noting that the reference I am using for this also installed Python 2.5 from source at the start of the whole process; CentOS 5 only has Python 2.4. The reference did not justify why this was done, so let's just cross our fingers and hope it will not be necessary.
Now let's see if we can get Apache to serve up my helloWorld Django site. This seems to be the most authoritative document I can find on the subject.
cd /var/www/html/django/chinawanderer

Create file /var/www/html/django/chinawanderer/apache/django.wsgi which contains the following:
import os, sys

sys.path.append('/var/www/html/django')
sys.path.append('/var/www/html/django/chinawanderer')
os.environ['DJANGO_SETTINGS_MODULE'] = 'chinawanderer.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

Add this to /etc/httpd/conf/httpd.conf:

WSGIScriptAlias / /var/www/html/django/chinawanderer/apache/django.wsgi

<Directory /var/www/html/django/chinawanderer/apache>
Order deny,allow
Allow from all
</Directory>

And it works. Django in action: http://domain.com/django/chinawanderer/
(Note: php.ini changes apply to *all* PHP apps on the server.)
If you are getting the error:
Upload larger than maximum POST size (post_max_size variable in .htaccess or php.ini)
There is no simple one-parameter solution. I found some good posts.
I made the following changes in /etc/php5/apache2/php.ini:
post_max_size = 300M
upload_max_filesize = 300M
max_execution_time = 3600
max_input_time = 3600
which should give a maximum file upload limit of 300M and a timeout of one hour.
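As an aside, PHP's shorthand byte notation (the M in 300M) multiplies by powers of 1024; a small shell sketch of the same conversion, handy for sanity-checking limits (the function name is mine):

```shell
# Convert PHP shorthand size notation (300M, 8K, 1G, ...) to bytes.
php_size_to_bytes() {
  local v=$1
  local n=${v%[KkMmGg]}
  case $v in
    *[Gg]) echo $(( n * 1024 * 1024 * 1024 )) ;;
    *[Mm]) echo $(( n * 1024 * 1024 )) ;;
    *[Kk]) echo $(( n * 1024 )) ;;
    *)     echo $(( v )) ;;
  esac
}

php_size_to_bytes 300M   # prints 314572800
```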
I expect to be accessing some services on my laptop via Apache over an untrusted network in the near future, so I need to turn on SSL.
As usual, turn on the Apache SSL module:
cd /etc/apache2/mods-enabled/
ln -s ../mods-available/ssl.conf .
ln -s ../mods-available/ssl.load .

and the default SSL configuration:

cd /etc/apache2/sites-enabled/
ln -s ../sites-available/default-ssl 001-default-ssl
Now restart Apache and it just works!? Apparently there is a "snakeoil" certificate already in place to get the job done. Trivial. Thank you, Debian Apache packagers.
Generally speaking, it would appear that a vanilla single root SSL certificate, self-signed or otherwise, is only good for exactly one domain that corresponds exactly to the "common name" used in creating the certificate.
Some vendors sell something called a "wildcard" certificate, where the common name on the certificate takes the form "*.domain.com", which can be used to secure multiple sub-domains. Such a "wildcard" certificate, not surprisingly, seems to be considerably more expensive than a single root certificate. Apache even provides a built-in mechanism, using a document root wildcard, for mapping each sub-domain to a different document root.
Some vendors like Godaddy sell multiple domain certificates, which seem to offer a discount compared to purchasing the same number of single root certificates.
A good source for a free certificate is cacert.org, which will sign a certificate for a domain if your e-mail address is in the whois record for the domain (this is an automated process on their end; they verify your identity by sending you a link in an e-mail). The Apache website has a nice concise explanation of how to create a server key and certificate signing request for cacert.org (or anyone else).
Basically the process is:

openssl req -nodes -new -keyout try.key -out try.csr

to generate the key and certificate signing request, and

openssl req -noout -text -in try.csr

to inspect the resulting request.
cacert.org certificates seem to be good for six months. They send you an e-mail in advance of expiry.
For a particular SSL-enabled Apache virtual host, force users to always use https by placing a redirect in the http virtual host, e.g.:

<VirtualHost *:80>
DocumentRoot /var/www/vsc/apps
ServerName apps.vancouversolidcomputing.com
ServerAlias apps.vancouversolidcomputing.com
ServerAdmin firstname.lastname@example.org
CustomLog /var/log/apache2/access.log combined
Redirect / https://apps.vancouversolidcomputing.com/
</VirtualHost>
As I noted in an earlier post, name-based virtual hosting "seemed" to be working. "Seemed". In fact, the virtual hosts were finding the correct web root and loading the correct site, but browsers were consistently giving an error to the effect that the domain name in the certificate and the domain name the browser was pointed to were not the same.
Someone on the cacert.org e-mail list set me straight:
From: Pete Stephenson
To: email@example.com
Subject: Re: Certificate somehow associated with wrong sub-domain?

Both subdomains share the same IP address. SSL is IP-based, rather than name-based. Specifically, when a client connects to a server, it establishes the SSL connection prior to sending the HTTP Host header, so the server has no idea which specific certificate to send. Depending on the server, it may send the first certificate mentioned in the configuration file or do something else entirely.

You can solve this by adding multiple SubjectAltNames to a certificate (e.g. you'd have a SAN for apps.vancouversolidcomputing.com and another one for vsc.vancouversolidcomputing.com all in a single certificate) and telling your server to use the same certificate for both subdomains.

More details, including a handy shell script which can generate the required CSR (some options, like the RSA key length, are manually configurable in the shell script; it doesn't prompt the user for the key length), are available here: http://wiki.cacert.org/wiki/VhostTaskForce

Cheers!
-Pete
So what I take from this is:
This page talks about the issue in general, and the various somewhat fuzzy and partially supported options -- "Currently the different browsers, servers and CAs all implement different and incompatible ways to use SSL certificates for several VHosts on the same server" -- this situation has not been entirely standardized yet!
This page seems to recommend the cacert.org way to setup Apache with the right kind of multiple SubjectAltName certificate, complete with a script for generating an appropriate Certificate Request and associated key. I used the script to generate the request, and sure enough:
# openssl req -noout -text -in vancouversolidcomputing_csr.pem
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: CN=www.vancouversolidcomputing.com
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:www.vancouversolidcomputing.com, DNS:vancouversolidcomputing.com, DNS:printshopdemo.vancouversolidcomputing.com, DNS:vsc.vancouversolidcomputing.com, DNS:solid.vancouversolidcomputing.com, DNS:apps.vancouversolidcomputing.com, DNS:ofri.vancouversolidcomputing.com
out comes a Certificate Request with multiple SubjectAltNames.
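For the record, the same kind of SAN request can be produced without the script, using a small openssl config file; the file name and the two SANs here are illustrative:

```ini
# san.cnf
[ req ]
distinguished_name = req_dn
req_extensions = v3_req
prompt = no

[ req_dn ]
CN = www.vancouversolidcomputing.com

[ v3_req ]
subjectAltName = DNS:www.vancouversolidcomputing.com, DNS:vancouversolidcomputing.com
```

Then generate the key and request with: openssl req -nodes -new -config san.cnf -keyout san.key -out san.csr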
I then replaced *all* certificates in my Apache virtual hosts with this new certificate, i.e.
in each virtual host block for each sub-domain / web root.
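The replaced directives would look something like this in each block (the file names are my assumption; the originals are not preserved here):

```apache
SSLCertificateFile /etc/apache2/ssl/vancouversolidcomputing_cert.pem
SSLCertificateKeyFile /etc/apache2/ssl/vancouversolidcomputing_key.pem
```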
The certificate now works flawlessly in Iceape (which apparently ships with the cacert.org Certificate Authority information), while Internet Explorer still complains about an untrusted Certificate Authority. Neither complains about domain names not matching, which was happening before.
 contained several other directives in each of the SSL virtual host blocks:
SSLProtocol all -SSLv2
but I have so far found these unnecessary.
Turn on the SSL module:
cd /etc/apache2/mods-enabled/
ln -s ../mods-available/ssl.conf .
ln -s ../mods-available/ssl.load .
/etc/init.d/apache2 restart
In Debian, /etc/apache2/ports.conf should already have logic to listen on the default SSL port 443 if the SSL module is loaded.
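On Apache 2.2-era Debian that logic looks roughly like this (check your own file; exact contents vary by release):

```apache
<IfModule mod_ssl.c>
    Listen 443
</IfModule>
```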
Now create a self-signed certificate (tinyca is a nice simple GUI that will do the job). Enter minimal information, set the expiration date nice and long, and export the newly generated cert and key to files, being careful to export the key WITHOUT a password (otherwise you will have to provide the password every time Apache is restarted).
Copy the exported certificate files to your server, into directory /etc/apache2/ssl. Now create an SSL block in the Apache Virtual Host where you would like SSL. The *:80 block will respond to normal http requests, and the *:443 block will respond to https (SSL) requests:
<VirtualHost *:80>
DocumentRoot /var/www/webroot
ServerName subdomain.domain.com
ServerAlias subdomain.domain.com
ServerAdmin firstname.lastname@example.org
CustomLog /var/log/apache2/access.log combined
</VirtualHost>

NameVirtualHost *:443
<VirtualHost *:443>
DocumentRoot /var/www/webroot
ServerName subdomain.domain.com
ServerAlias subdomain.domain.com
ServerAdmin email@example.com
CustomLog /var/log/apache2/access.log combined
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/cert.pem
SSLCertificateKeyFile /etc/apache2/ssl/key.pem
</VirtualHost>
I am not sure why the 443 block requires a NameVirtualHost line and the 80 block does not. Interestingly enough, this says "Name-based virtual hosting cannot be used with SSL secure servers because of the nature of the SSL protocol", which might have something to do with it? But despite this I currently HAVE got name-based virtual hosting working on SSL, unless there is something I do not understand here.
Here is a useful reference, in addition to the installed apache docs.