Apache
Revision as of 17:22, 3 February 2024
Virtual Hosting
touch /etc/httpd/conf.d/virtual.conf
Virtual Host Configuration File:
/etc/httpd/conf.d/virtual.conf:
HTTP Virtual Host
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName t0e.org
    ServerAlias www.t0e.org
    DocumentRoot /www
    ServerAdmin admin@t0e.org
    ErrorLog logs/t0e.org-error_log
    CustomLog logs/t0e.org-access_log common
</VirtualHost>

<VirtualHost *:80>
    ServerName www2.t0e.org
    ...
</VirtualHost>
HTTPS Virtual Host:
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName t0e.org
    ServerAlias www.t0e.org

    # SSL Enabled:
    SSLEngine on
    SSLCertificateFile /etc/t0e.org.pem

    # If using a godaddy certificate chain:
    #SSLCertificateFile /etc/contractpal.net.pem
    #SSLCertificateKeyFile /etc/contractpal.net.pem
    #SSLCertificateChainFile /etc/contractpal.net.pem

    # Forwarding Traffic
    # RewriteEngine on
    # RewriteRule ^(/.*) http://localhost:8080/$1 [P]

    DocumentRoot /www
    ServerAdmin admin@t0e.org
    ErrorLog logs/t0e.org-error_log
    CustomLog logs/t0e.org-access_log common
</VirtualHost>
References:
- A Day in the Life of #Apache: Running Name-Based SSL Virtual Hosts in Apache
- Apache: Name-based Virtual Host Support
- Getting SSL to work across virtual hosts in Apache
CentOS common configuration changes
Keep alives:
# KeepAlive Off
KeepAlive On
Hide Apache version:
# ServerTokens Full
ServerTokens Prod

# ServerSignature On
ServerSignature Off

# in /etc/php.ini
expose_php = Off
Forwarding / Rewriting / Proxying Traffic
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName t0e.org
    ServerAlias www.t0e.org

    # Forwarding Traffic
    RewriteEngine on
    RewriteRule ^(/.*) http://localhost:8080/$1 [P]

    ServerAdmin admin@t0e.org
    ErrorLog logs/t0e.org-error_log
    CustomLog logs/t0e.org-access_log common
</VirtualHost>
Apache module mod_rewrite - RewriteRule - http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html#RewriteRule
a2enmod rewrite
a2enmod proxy_http
Module mod_rewrite - URL Rewriting Engine:
- RewriteEngine
- RewriteCond
- RewriteRule
Apache 1.3 - URL Rewriting Guide
- ProxyPass
---
<VirtualHost *:80>
    ServerName home.burgenerfamily.com
    DocumentRoot /www/home

    <Directory /www/home>
        Options FollowSymLinks
        RewriteEngine On
        # allow existing files and directories to pass through
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        #RewriteRule ^static - [L,NC]
        #RewriteCond %{REQUEST_FILENAME} !^favicon.ico$
        #RewriteCond %{REQUEST_FILENAME} !^static/
        #RewriteRule ^(.+)$ /index.php?title=$1 [PT,L,QSA]
        RewriteRule ^(.*)$ /index.php/$1 [L,QSA]
    </Directory>

    ErrorLog logs/home.burgenerfamily.com-error_log
    CustomLog logs/home.burgenerfamily.com-access_log common
</VirtualHost>
Proxy Pass
Enable Proxy: [1]
sudo a2enmod proxy
sudo a2enmod proxy_http
systemctl restart apache2
Similar to forwarding, but it also takes care of redirects:
ProxyPass /mirror/foo/ http://foo.com/
ProxyPassReverse /mirror/foo/ http://foo.com/
Suppose the local server has address http://wibble.org/; then
ProxyPass /mirror/foo/ http://foo.com/
ProxyPassReverse /mirror/foo/ http://foo.com/
will not only cause a local request for http://wibble.org/mirror/foo/bar to be internally converted into a proxy request to http://foo.com/bar (the functionality ProxyPass provides here). It also takes care of redirects the server foo.com sends: when http://foo.com/bar is redirected by that server to http://foo.com/quux, Apache adjusts this to http://wibble.org/mirror/foo/quux before forwarding the HTTP redirect response to the client.
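The Location-header rewriting that ProxyPassReverse performs can be sketched in a few lines of Python (the URLs mirror the wibble.org/foo.com example above; `rewrite_location` is a hypothetical helper for illustration, not part of Apache):

```python
# Sketch of what ProxyPassReverse does with a backend redirect:
# if the backend's Location header starts with the proxied URL,
# substitute the public mount point before relaying it.

def rewrite_location(location, backend="http://foo.com/",
                     public="http://wibble.org/mirror/foo/"):
    """Mimic ProxyPassReverse: map backend URLs back to the public path."""
    if location.startswith(backend):
        return public + location[len(backend):]
    return location  # unrelated redirects pass through untouched

print(rewrite_location("http://foo.com/quux"))
# -> http://wibble.org/mirror/foo/quux
```

Without this adjustment the client would follow the redirect straight to the backend address and bypass the proxy.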
<VirtualHost *:80>
    ServerName wiki.oeey.com

    # Reverse Proxy
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    ServerAdmin website@oeey.com
    ErrorLog logs/wiki-error_log
    CustomLog logs/wiki-access_log common
</VirtualHost>
Proxy Pass to Jenkins: [2]
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName jenkins.oeey.com

    # Forwarding Traffic
    #RewriteEngine on
    #RewriteRule ^(/.*) http://localhost:8080/$1 [P]

    ProxyPass / http://localhost:8080/ nocanon
    ProxyPassReverse / http://localhost:8080/
    ProxyRequests Off
    ProxyPreserveHost On
    AllowEncodedSlashes NoDecode

    # Local reverse proxy authorization override
    # Most unix distributions deny proxy by default
    # (ie /etc/apache2/mods-enabled/proxy.conf in Ubuntu)
    <Proxy http://localhost:8080/*>
        Order deny,allow
        Allow from all
    </Proxy>

    ServerAdmin admin@oeey.com
    ErrorLog logs/jenkins.oeey.com-error_log
    CustomLog logs/jenkins.oeey.com-access_log common
</VirtualHost>
Proxy Pass HTTPS
Proxy Pass to Jenkins:
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName jenkins.oeey.com

    # Enable SSL Support
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/localhost.crt
    SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

    # Forwarding Traffic
    #RewriteEngine on
    #RewriteRule ^(/.*) http://localhost:8080/$1 [P]

    ProxyPass / http://localhost:8080/ nocanon
    ProxyPassReverse / http://localhost:8080/
    ProxyPassReverse / http://jenkins.oeey.com/
    ProxyRequests Off
    ProxyPreserveHost On
    AllowEncodedSlashes NoDecode
    # Tell the backend the original request was HTTPS
    RequestHeader set X-Forwarded-Proto "https"

    # Local reverse proxy authorization override
    # Most unix distributions deny proxy by default
    # (ie /etc/apache2/mods-enabled/proxy.conf in Ubuntu)
    <Proxy http://localhost:8080/jenkins*>
        Order deny,allow
        Allow from all
    </Proxy>

    ServerAdmin admin@oeey.com
    ErrorLog logs/jenkins.oeey.com-error_log
    CustomLog logs/jenkins.oeey.com-access_log common
</VirtualHost>
Redirect
Redirect with URI intact:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName test.oeey.com
    Redirect permanent / http://www.oeey.com/
</VirtualHost>
Redirect users to HTTPS with URI intact:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName test.oeey.com
    Redirect permanent / https://test.oeey.com/
</VirtualHost>
This can be wrapped in a Location block if you want:
<Location />
    Redirect permanent / https://test.oeey.com/
</Location>
Redirect using the RewriteEngine, with URI intact:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName test.oeey.com

    # Redirect HTTP to HTTPS
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>
Redirect removing URI:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName test.oeey.com
    RedirectMatch "(.*)" "http://www.oeey.com/"
</VirtualHost>
- Alias
- Redirect
- RedirectPermanent
Default Directory Index
To set the default pages to try to load (e.g. index.html, index.php):
DirectoryIndex index.html index.php
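DirectoryIndex tries each name in order and serves the first one that exists in the directory; the lookup amounts to this (the filenames below are illustrative):

```python
# First-match lookup, as DirectoryIndex does it.
candidates = ["index.html", "index.php"]   # order from the directive
present = {"index.php", "style.css"}       # hypothetical directory listing

chosen = next((name for name in candidates if name in present), None)
print(chosen)  # -> index.php
```

If none of the candidates exist, Apache falls back to a directory listing (if Indexes is enabled) or a 403.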
Directory Index
To show full file names:
<Directory />
    Options Indexes
    IndexOptions NameWidth=*
</Directory>
Sample:
<Directory "/www">
    Options Indexes FollowSymLinks
    IndexOptions FancyIndexing NameWidth=*
    AllowOverride None
    Require all granted
</Directory>
Hide Apache and PHP Banner
Hide PHP version (X-Powered-By) - Edit php.ini: [3]
expose_php = Off
Hide Apache version: [4]
# ServerSignature On
ServerSignature Off

# ServerTokens Full
ServerTokens Prod
References:
- Hide Apache ServerSignature/ServerTokens/PHP X-Powered-By - http://www.if-not-true-then-false.com/2009/howto-hide-and-modify-apache-server-information-serversignature-and-servertokens-and-hide-php-version-x-powered-by/
.htaccess
Override configuration settings on a per-directory (and subdirectory) level with:
.htaccess
Allow override of all configuration options:
<Directory />
    AllowOverride All
</Directory>
Example directory browsing in .htaccess:
Options +Indexes
Allow from networks
<Directory "...">
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.0/255.255.255.0 10.10.0.0/16
</Directory>
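The Allow line above is plain network matching; the same check can be sketched with Python's ipaddress module (the test addresses are made up):

```python
import ipaddress

# The two networks from the Allow directive above.
allowed_nets = [
    ipaddress.ip_network("127.0.0.0/255.255.255.0"),  # same as 127.0.0.0/24
    ipaddress.ip_network("10.10.0.0/16"),
]

def allowed(addr):
    """Return True if addr falls inside any allowed network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in allowed_nets)

print(allowed("10.10.3.4"), allowed("192.168.1.1"))  # -> True False
```

Note that Deny,Allow order means the Allow list wins for matching clients even though `Deny from all` comes first.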
Log Files
ErrorLog logs/test.com-error_log
CustomLog logs/test.com-access_log common
Make sure the soft link exists:
ln -s /var/log/httpd /etc/httpd/logs      # RHEL based
ln -s /var/log/apache2 /etc/apache2/logs  # Debian based
Log HTTPS X-Forwarded-For
Log the X-Forwarded-For header in Apache:
LogFormat "[X-Forwarded-For: %{X-Forwarded-For}i] %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "[X-Forwarded-For: %{X-Forwarded-For}i] %h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
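A quick sanity check of what the modified "common" format produces, parsed with a regex (the sample log line and client addresses are fabricated for illustration):

```python
import re

# Matches the "common" format above with the X-Forwarded-For prefix.
LOG_RE = re.compile(
    r'\[X-Forwarded-For: (?P<xff>[^\]]*)\] '
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
)

sample = ('[X-Forwarded-For: 203.0.113.9] 10.0.0.5 - - '
          '[03/Feb/2024:17:22:00 +0000] "GET / HTTP/1.1" 200 512')

m = LOG_RE.match(sample)
print(m.group('xff'), m.group('status'))  # -> 203.0.113.9 200
```

Behind a proxy, %h is the proxy's address; the X-Forwarded-For field is where the real client address ends up.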
PHP
Install PHP with Apache: (Ubuntu)
#apt-get install apache2 php5 libapache2-mod-php5
apt-get install apache2 php libapache2-mod-php
Apache SSL
Apache can do virtual SSL hosting!
Install mod_ssl: [5]
# rhel
yum install mod_ssl
# debian
sudo a2enmod ssl
sudo a2ensite default-ssl
/etc/httpd/conf.d/ssl.conf:
LoadModule ssl_module modules/mod_ssl.so
Listen 443
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl .crl
SSLPassPhraseDialog builtin
SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout 300
SSLMutex default
SSLRandomSeed startup file:/dev/urandom 256
SSLRandomSeed connect builtin
SSLCryptoDevice builtin
Note: Delete the default VirtualHost section so we can add our own.
# SSL Configuration - from ssl.conf virtual host example
SSLEngine On
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
CustomLog logs/ssl_request_log \
    "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
# SSL configuration
SSLEngine On
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile /etc/apache2/ssl/cert.crt
SSLCertificateKeyFile /etc/apache2/ssl/cert.key
SSLCertificateChainFile /etc/apache2/ssl/sf_bundle.crt
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName default.t0e.org

    # Redirect HTTP to HTTPS
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName kb.t0e.org

    # SSL Enabled: (only once in config file)
    SSLEngine on
    SSLCertificateFile /etc/t0e.org.pem
    SSLCertificateKeyFile /etc/t0e.org.pem
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW

    DocumentRoot /www
    ServerAdmin admin@kb.t0e.org.com
    ErrorLog logs/kb.t0e.org-error_log
    CustomLog logs/kb.t0e.org-access_log common
</VirtualHost>

<VirtualHost *:443>
    ServerName wiki.t0e.org
    DocumentRoot /www
    ServerAdmin admin@wiki.t0e.org.com
    ErrorLog logs/wiki.t0e.org-error_log
    CustomLog logs/wiki.t0e.org-access_log common
</VirtualHost>
Note: With SSL, Apache will always use the certificate of the first (default) virtual host, so it only makes sense to define the certificate there.
References:
- Apache SSL
- Apache SSL/TLS Encryption Documentation
- mod_ssl
- Article: Apache 2 with SSL/TLS: Step-by-Step, Part 1
- Van's Apache SSL/TLS mini-HOWTO
- Tomcat SSL Configuration
- Apache: SSL/TLS Strong Encryption: FAQ
- Getting SSL to work across virtual hosts in Apache
- Building a Secure RedHat Apache Server HOWTO
- CommonMisconfigurations - Httpd Wiki - http://wiki.apache.org/httpd/CommonMisconfigurations
Certificate Chain
If you have a chain, create the pem as "key, cert, chain" and set Apache config like so:
SSLCertificateFile /etc/ssl.pem
SSLCertificateKeyFile /etc/ssl.pem
SSLCertificateChainFile /etc/ssl.pem
or just
SSLCertificateFile /etc/ssl.pem
SSLCertificateChainFile /etc/ssl.pem
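A combined pem like this is just concatenated PEM blocks; a quick way to see what a bundle contains is to list its BEGIN markers (the sample below is a fake skeleton, not a real key or certificate):

```python
# List the BEGIN markers in a combined PEM bundle.
sample_pem = """\
-----BEGIN RSA PRIVATE KEY-----
...base64 key data...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
...base64 server cert...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...base64 chain cert...
-----END CERTIFICATE-----
"""

blocks = [line[len("-----BEGIN "):-len("-----")]
          for line in sample_pem.splitlines()
          if line.startswith("-----BEGIN ")]
print(blocks)  # -> ['RSA PRIVATE KEY', 'CERTIFICATE', 'CERTIFICATE']
```

For the "key, cert, chain" layout described above, you should see the private key block first, then the server certificate, then the intermediates.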
Ubuntu SnakeOil Cert
Install the snakeoil fake cert [1]
sudo apt-get install ssl-cert
sudo make-ssl-cert generate-default-snakeoil --force-overwrite
/etc/ssl/certs/ssl-cert-snakeoil.pem
/etc/ssl/private/ssl-cert-snakeoil.key
Linux System Password Protection
Using system accounts for HTTP authentication - http://wiki.zs64.net/Using_system_accounts_for_HTTP_authentication
httpd.conf
Edit httpd.conf to add these lines to the general configuration:
LoadModule auth_external_module libexec/apache2/mod_auth_external.so
(Should have been added by installing the port already.)
Add these lines to each virtual host where you'd like to use system accounts for authentication:
AddExternalAuth pwauth /usr/local/bin/pwauth
SetExternalAuthMethod pwauth pipe
Add these lines to each Directory or Location section that you want protected by authentication:
AuthType Basic
AuthExternal pwauth
require valid-user
Password Protection with .htaccess and htpasswd
- Comprehensive guide to .htaccess: Password protection
- Using .htaccess Files with Apache
- Password Protection by Rod
- .htaccess Password Protection in PHP
Sample .htaccess file:
AuthUserFile /var/www/htpasswd
AuthGroupFile /dev/null
AuthName "Secret Place"
AuthType Basic
Require valid-user
#Require user <user>
#Require group <group>
To use with httpd.conf or virtual.conf, wrap in "<Directory>" tags:
<Directory />
    AuthUserFile ...
    ...
</Directory>
Note: in httpd.conf (or virtual.conf) you can also surround the directives with <Location />...</Location> instead.
To add password file:
htpasswd -c /var/www/htpasswd admin
htpasswd /var/www/htpasswd user1
# if you want to create a blank username with password:
htpasswd /path/htpasswd
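Credentials protected this way travel as a Basic Authorization header, which is just base64 of "user:password" (the admin/secret pair below is illustrative):

```python
import base64

# What the browser sends after the htpasswd prompt:
# Authorization: Basic <base64 of "user:password">
user, password = "admin", "secret"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
print("Authorization: Basic " + token)
# -> Authorization: Basic YWRtaW46c2VjcmV0
```

This is also why Basic auth should only be used over HTTPS: the header is encoded, not encrypted.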
To use .htaccess:
<Directory />
    Options FollowSymLinks
    AllowOverride AuthConfig
</Directory>
Or use the .htaccess generator.
Apache HTTP Server - Require Directive:
Require user userid [userid] ...
- Only the named users can access the resource.
Require group group-name [group-name] ...
- Only users in the named groups can access the resource.
Require valid-user
- All valid users can access the resource.
AuthType Basic
AuthName "Restricted Resource"
AuthUserFile /web/users
AuthGroupFile /web/groups
Require group admin
web based htpasswd manager
htpasswd.cgi by David Efflandt
Perl CGI Script to Modify User Passwords:
This allows users to manage / change their own passwords.
Use the Perl CGI script htpasswd.pl [cache]
- Edit the location of Perl, i.e. /usr/bin/perl (not /usr/local/bin/perl)
- Edit the script to specify location of the password file i.e. /var/www/PasswordDir/.htpasswd
- SELinux users must add the correct attribute i.e. chcon -R -h -t httpd_sys_content_t /var/www/PasswordDir
- The password file must be located in a directory where CGI is allowed to modify files.
File: httpd.conf (portion):
...
<Directory "/var/www/PasswordDir">
    Options -Indexes
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>
...
Directory Browsing
DocumentRoot /www/files

<Directory />
    Options FollowSymLinks Indexes
    AllowOverride None
</Directory>
Note: Apache does not allow mixing prefixed options (e.g. '+Indexes') with unprefixed ones in a single Options directive; use one style or the other.
In a .htaccess:
Options +Indexes
Long file names: [6]
IndexOptions NameWidth=*
For root indexes, welcome.conf may override the listing with the welcome screen. Make the following change: [7]
<LocationMatch "^/$">
    #Options -Indexes
    Options Indexes
    ErrorDocument 403 /error/noindex.html
</LocationMatch>
cgi
Script Alias folder:
ScriptAlias /cgi-bin/ "C:/Program Files/Apache Group/Apache/cgi-bin/"
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
My favorite option: make all .cgi files executable. (This will need to be surrounded with '<Location />' or '<Directory />' tags.)
AddHandler cgi-script .cgi
Options +ExecCGI
Note: Make sure to 'chmod +x' the files.
Example:
<VirtualHost *:80>
    ServerName hg.t0e.org
    DirectoryIndex hgweb.cgi
    DocumentRoot /www/hg

    <Location />
        AddHandler cgi-script .cgi
        Options +ExecCGI
    </Location>

    ServerAdmin admin@hg.t0e.org
    ErrorLog logs/hg.t0e.org-error_log
    CustomLog logs/hg.t0e.org-access_log common
</VirtualHost>
--
Only files without an extension:
<VirtualHost *:80>
    ServerName pi.t0e.org
    DocumentRoot /www/pi

    <Directory />
        AllowOverride All
    </Directory>

    DirectoryIndex index index.php index.html index.htm

    <Files ~ "^[a-zA-Z]*$">
        SetHandler cgi-script
        Options +ExecCGI
    </Files>
</VirtualHost>
CentOS Default
#
# ScriptAliases are essentially the same as Aliases, except that
# documents in the realname directory are treated as applications and
# run by the server when requested rather than as documents sent to the client.
# The same rules about trailing "/" apply to ScriptAlias directives as to
# Alias.
#
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

#
# "/var/www/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>
Python CGI
mod_proxy
- http://www.apachetutor.org/admin/reverseproxies
- http://www.linuxfocus.org/English/March2000/article147.html
- http://www.linuxhelp.net/guides/apacheproxy/
- http://webauth.stanford.edu/manual/mod/mod_proxy.html
- http://www.apachetutor.org/admin/reverseproxies
- http://confluence.atlassian.com/display/DOC/Using+Apache+with+mod_proxy
- http://www.serverwatch.com/tutorials/article.php/3290851
- http://apache.webthing.com/mod_proxy_html/
- http://www.ibm.com/developerworks/linux/library/wa-lampsec/index.html
prefork vs worker
Prefork is the most commonly used MPM and the safer choice for a web server, especially when modules (e.g. some PHP extensions) are not thread-safe; worker uses less memory by serving requests with threads.
To see which MPM is compiled in, look for prefork.c or worker.c:
# httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c
Statistics
Connections
Count active connections:
netstat -ant | grep ESTABLISHED | grep :80 | wc -l
From PHP:
<?php
$number_of_users = shell_exec('netstat -ant | grep ESTABLISHED | grep :80 | wc -l');
echo $number_of_users;
?>
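The same count can be done without shelling out to grep; a sketch in Python over captured netstat output (the sample lines are fabricated):

```python
# Count ESTABLISHED connections involving port 80 in `netstat -ant` output.
sample = """\
Active Internet connections (servers and established)
tcp  0  0 10.0.0.5:80   203.0.113.9:51514  ESTABLISHED
tcp  0  0 10.0.0.5:80   203.0.113.7:40222  ESTABLISHED
tcp  0  0 10.0.0.5:22   203.0.113.7:40223  ESTABLISHED
tcp  0  0 0.0.0.0:80    0.0.0.0:*          LISTEN
"""

count = sum(
    1 for line in sample.splitlines()
    if "ESTABLISHED" in line and ":80 " in line
)
print(count)  # -> 2
```

Like the `grep :80` pipeline it mirrors, the substring test matches ":80" in any column, so it is a quick estimate rather than a precise local-port filter.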
References:
mod_status
mod_status - Apache HTTP Server
Summary
The Status module allows a server administrator to find out how well the server is performing. An HTML page is presented that gives the current server statistics in an easily readable form. If required, this page can be made to automatically refresh (given a compatible browser). Another page gives a simple machine-readable list of the current server state.
The details given are:
- The number of workers serving requests
- The number of idle workers
- The status of each worker, the number of requests that worker has performed and the total number of bytes served by the worker (*)
- A total number of accesses and byte count served (*)
- The time the server was started/restarted and the time it has been running for
- Averages giving the number of requests per second, the number of bytes served per second and the average number of bytes per request (*)
- The current percentage CPU used by each worker and in total by Apache (*)
- The current hosts and requests being processed (*)
The lines marked "(*)" are only available if ExtendedStatus is On.
LoadModule status_module modules/mod_status.so

#
# ExtendedStatus controls whether Apache will generate "full" status
# information (ExtendedStatus On) or just basic information (ExtendedStatus
# Off) when the "server-status" handler is called. The default is Off.
#
#ExtendedStatus On

#
# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
# Change the ".example.com" to match your domain to enable.
#
#<Location /server-status>
#    SetHandler server-status
#    Order deny,allow
#    Deny from all
#    Allow from .example.com
#</Location>
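The machine-readable page lives at /server-status?auto; a sketch of pulling a few fields out of it (the sample payload is made up, and real output varies by Apache version and ExtendedStatus setting):

```python
# Parse the "key: value" lines that /server-status?auto returns.
sample = """\
Total Accesses: 845
Total kBytes: 4096
Uptime: 1200
ReqPerSec: .704167
BusyWorkers: 5
IdleWorkers: 10
"""

status = {}
for line in sample.splitlines():
    key, _, value = line.partition(": ")
    status[key] = value

print(status["BusyWorkers"], status["IdleWorkers"])  # -> 5 10
```

This format is what monitoring tools typically scrape instead of the HTML page.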
References:
- mod_status - Apache HTTP Server
- Find out how many connections are on an apache server : apache, connections, many, find, server
TRACE and TRACK
An ISO audit may result in this "Medium" risk being detected:
- "HTTP Methods Allowed. It is observed that HTTP methods GET HEAD TRACE OPTIONS are allowed on this host. Disable all unnecessary methods."
- "HTTP TRACE / TRACK Methods Allowed. The remote webserver supports the TRACE and/or TRACK methods. TRACE and TRACK are HTTP methods that are used to debug web server connections. Disable these methods."
Disabling TRACE
Apache: Disable TRACE and TRACK methods | Racker Hacker:
Lots of PCI Compliance and vulnerability scan vendors will complain about TRACE and TRACK methods being enabled on your server. Since most providers run Nessus, you'll see this fairly often. Here are the rewrite rules to add:
RewriteEngine on
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
RewriteRule .* - [F]
These directives will need to be added to each VirtualHost.
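The RewriteCond above is an ordinary regex anchored to the start of the request method; a quick check of which methods it forbids (a sketch only, Apache's own matching is done in C):

```python
import re

# Same pattern as the RewriteCond: anchored, so only methods that
# *start* with TRACE or TRACK are forbidden.
blocked = re.compile(r"^(TRACE|TRACK)")

for method in ["GET", "POST", "TRACE", "TRACK", "OPTIONS"]:
    print(method, "-> 403" if blocked.match(method) else "-> allowed")
```

GET, POST and OPTIONS pass through; TRACE and TRACK hit the [F] (Forbidden) rule.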
For Apache version 1.3.34 (or later 1.3.x versions), or Apache 2.0.55 (or later), this has been made easy. Just add the line:
TraceEnable off
Disabling the HTTP TRACE method:
The HTTP TRACE request method causes the data received by IBM HTTP Server from the client to be sent back to the client, as in the following example:
$ telnet 127.0.0.1 8080
Trying...
Connected to 127.0.0.1.
Escape character is '^]'.
TRACE / HTTP/1.0
Host: foo
A: b
C: d

HTTP/1.1 200 OK
Date: Mon, 04 Oct 2004 14:07:59 GMT
Server: IBM_HTTP_SERVER
Connection: close
Content-Type: message/http

TRACE / HTTP/1.0
A: b
C: d
Host: foo
Connection closed.
The TRACE capability could be used by vulnerable or malicious applications to trick a web browser into issuing a TRACE request against an arbitrary site and then send the response to the TRACE to a third party using web browser features.
Making the required configuration changes:
<VirtualHost www.example.com>
    ...
    # disable TRACE in the www.example.com virtual host
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]
</VirtualHost>
Using SSL:
$ /usr/linux/bin/openssl s_client -connect 127.0.0.1:8444
For more information...
More background information on the concerns with TRACE is provided at http://www.apacheweek.com/issues/03-01-24#news.
Other:
- Apache Tips: Disable the HTTP TRACE method | MDLog:/sysadmin - http://www.ducea.com/2007/10/22/apache-tips-disable-the-http-trace-method/
TraceEnable
Apache Core Features - http://httpd.apache.org/docs/1.3/mod/core.html#traceenable
TraceEnable:
Syntax:        TraceEnable [on|off|extended]
Default:       TraceEnable on
Context:       server config
Status:        core (Windows, NetWare)
Compatibility: Available only in Apache 1.3.34, 2.0.55 and later
This directive overrides the behavior of TRACE for both the core server and mod_proxy. The default TraceEnable on permits TRACE requests per RFC 2616, which disallows any request body to accompany the request. TraceEnable off causes the core server and mod_proxy to return a 405 (Method Not Allowed) error to the client.
Finally, for testing and diagnostic purposes only, request bodies may be allowed using the non-compliant TraceEnable extended directive. The core (as an origin server) will restrict the request body to 64k (plus 8k for chunk headers if Transfer-Encoding: chunked is used). The core will reflect the full headers and all chunk headers with the request body. As a proxy server, the request body is not restricted to 64k. At this time the Apache 1.3 mod_proxy does not permit chunked request bodies for any request, including the extended TRACE request.
Disabling TRACK
The HTTP TRACK method
The TRACK method is a type of request supported by Microsoft web servers. It is not RFC compliant and is not supported directly by IBM HTTP Server. The method may be utilized as part of a cross-site scripting attack. See Vulnerability Note VU#288308 for more information.
Even though IBM HTTP Server does not support the TRACK method natively, it is possible for plug-in modules to provide support for it. To disable this capability for plug-in modules, in addition to disabling the TRACE method, add these two additional directives after the existing RewriteCond and RewriteRule directives which are used to disable TRACE:
RewriteCond %{REQUEST_METHOD} ^TRACK
RewriteRule .* - [F]
Here is a full example showing the directives to disable both TRACE and TRACK:
<VirtualHost www.example.com>
    ...
    # disable TRACE and TRACK in the www.example.com virtual host
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]
    RewriteCond %{REQUEST_METHOD} ^TRACK
    RewriteRule .* - [F]
</VirtualHost>
Attack
Cross-Site Tracing issues
Earlier this week a paper was published, "Cross-Site Tracing", which gave details of how the TRACE HTTP request could be used in cross-site scripting attacks. Unfortunately this issue has not been very well understood by the media and has received an unwarranted amount of attention.
When an HTTP TRACE request is sent to a web server that supports it, that server will respond echoing the data that is passed to it, including any HTTP headers. The paper explains that some browsers can be scripted to perform a TRACE request. A browser with this functionality could be made to issue a TRACE request against an arbitrary site and pass the results on elsewhere. Since browsers will only send authentication details and cookies to the sites that issue them this means a user having a browser with this functionality could be tricked into sending their cookies or authentication details for arbitrary sites to an attacker.
For example, if you visited a page that an attacker has carefully crafted, the page could cause your browser to bounce a TRACE request against some site for which you have authentication cookies. The result of the TRACE will be a copy of what was sent to the site, which will therefore include those cookies or authentication data. The carefully crafted page can then pass that information on to the attacker.
TRACE requests can be disabled by making a change to the Apache server configuration. Unfortunately it is not possible to do this using the Limit directive since the processing for the TRACE request skips this authorisation checking. Instead the following lines can be added which make use of the mod_rewrite module.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
Although the particular attack highlighted made use of the TRACE functionality to grab authentication details, this isn't a vulnerability in TRACE, or in the Apache web server. The same browser functionality that permits the published attack can be used for different attacks even if TRACE is disabled on the remote web server. For example an attacker could create a carefully crafted page that when visited submits a hidden request to some arbitrary site through your browser, grabs the result and passes it to the attacker.
Performance Tuning
Apache and Firewall Performance Tips from the Xenu.net Masters
CentOS January 2009 Thread
-------- Original Message --------
Subject: [CentOS] Apache Server Tuning for Performance
Date: Wed, 21 Jan 2009 00:09:38 +0530
From: linux-crazy <hicheerup@gmail.com>
Reply-To: CentOS mailing list <centos@centos.org>
To: CentOS mailing list <centos@centos.org>

Hi all,

I am facing performance issues with our web servers, which work properly for 250 concurrent requests and then stop responding when there are more than 250 requests. The current configuration parameters are as follows:

apachectl -version
Server version: Apache/2.0.52
Server built: Jan 30 2007 09:56:16

Kernel: 2.6.9-55.ELsmp #1 SMP Fri Apr 20 16:36:54 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

Server Hardware: 4 GB main memory, Dual-Core Intel 5160 processors.

httpd.conf:

Timeout 300
KeepAlive On
MaxKeepAliveRequests 1000
KeepAliveTimeout 150

## Server-Pool Size Regulation (MPM specific)

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
    StartServers 8
    MinSpareServers 5
    MaxSpareServers 20
    ServerLimit 251
    MaxClients 251
    MaxRequestsPerChild 4000
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
    StartServers 2
    MaxClients 150
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxRequestsPerChild 0
</IfModule>

I want to know the difference between the worker MPM and the prefork MPM, how to find out which one my Apache server will use, and which is recommended for a highly loaded server. If someone can provide a link that best explains the comparison of the two, that would also be very useful.

Can anyone guide me on the tuning to be done for maximum utilization of the resources and better performance of the servers?

Regards,
lingu
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Only use the prefork MPM (worker is for Win32, AFAIK). If you set ServerLimit to 251, that's the limit. Set ServerLimit and MaxClients to the same value (larger than 250), OK? And take a couple of minutes to study the Apache documentation. It's actually quite good and translated into many languages... BUT: what your server is able to handle also depends on what you actually serve (PHP/JSP/Servlets/Perl/whatever). This is usually not a problem that is easily described and solved in two sentences. Rainer
Increase your MaxClients configuration. Also enable, if you haven't already, the server-status module and monitor the server via http://<your server name>/server-status; it will show how many workers are busy and what they are doing. As for which to use: prefork is the old forked method of doing things, the other uses threads. Depending on what kind of application you're running on top of Apache, if it is not thread safe (at some point many PHP modules were not thread safe), you may want to use prefork. Otherwise you can use the threading model. If you're not sure, I'd say stick to prefork to be safe until you can determine for sure that threading is safe. nate
Most list members would likely advise sticking with the prefork configuration. Without knowing what kind of applications you are running on your webserver, I wouldn't suggest changing it. Merely increasing the number of workers might make performance worse. Use ps or top to figure out how much memory each apache worker is using. Then decide how much ram you want to dedicate on your server to Apache, without going into swap. (Over-allocating and then paging out memory will only make performance much worse.) For example, if I have 2G of ram, I want 1.5G for apache workers, and my average apache worker size (resident memory) is 65MB, then I have room for 23 workers: (1024 * 1.5) / 65. (There are more accurate ways to calculate this usage, like taking shared memory into account.) Upgrading the ram in your web server is a pretty fast interim solution. Consider your application performance, too. The longer a request in your application takes, the more workers are in use on your web server, taking up more memory. If you have long-running queries in your database, take care of those first. Good luck, Jed
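The worker-count arithmetic in that post can be checked directly (numbers are from the example: 2 GB total, 1.5 GB budgeted for Apache, 65 MB resident per worker):

```python
# RAM budget for Apache workers, ignoring shared memory.
total_budget_mb = 1024 * 1.5   # 1.5 GB expressed in MB
worker_rss_mb = 65             # average resident size per worker

workers = int(total_budget_mb // worker_rss_mb)
print(workers)  # -> 23
```

Multiply the result against MaxClients before raising that limit: if MaxClients exceeds what RAM can hold, the box swaps and throughput collapses.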
Jed Reynolds wrote: > > Merely increasing the number of workers might make performance worse. > > Use ps or top to figure out how much each apache worker is using. Then > decide how much ram you want to dedicate on your server to Apache, > without going into swap. (Over-allocating and then paging out memory > will only make performance much worse.) For example, if I have 2G or > ram, and I want 1.5 for apache workers, my average apache worker size > (resident memory) is 65MB, then I have room for 23 workers. (1024 * 1.5 > ) / 65. (There are more accurate ways to calculate this usage, like > taking shared memory into account.) Pay attention to shared memory when doing this. A freshly-forked process shares virtually all RAM with its parent. How much and how quickly this changes varies wildly with the application type and activity that causes the child's data to become unique. With some types of applications (especially mod_perl) you may want to tune down the number of hits each child services to increase memory sharing. Also, if you are running external cgi programs you must take them into account. > Upgrading the ram in your web server is a pretty fast interim solution. > > Consider your application performance, too. The longer a request in your > application takes, the more workers are in use on your web server, > taking up more memory. If you have long-running queries in your > database, take care of those first. You may also have to turn off or tune down the HTTP 1.1 connection keepalives, trading the time it takes to establish a new connection for the RAM it takes to keep the associated process waiting for another request from the same client. -- Les Mikesell lesmikesell@gmail.com
Also take a look at Apache alternatives like lighttpd and nginx. In certain cases they do miracles.

Jure Pečar
I don't know about lighttpd (our results were mixed), but nginx is really _very_ fast. Last time I checked, it seemed to be the fastest way to accelerate web page delivery on generic hardware with open-source software. But it's not a feature monster like Apache, so you still need that.

Rainer
Linux-crazy wrote on Wed, 21 Jan 2009 00:09:38 +0530:
> KeepAliveTimeout 150

My god, reduce this to 10 or 5.

> ServerLimit 251
> MaxClients 251

There's your limit. However, you should check with your hardware if upping it is really desirable. With 4 GB I think you won't be able to handle much more anyway.

Kai
Kai Schaetzl wrote:
> There's your limit. However, you should check with your hardware if upping
> it is really desirable. With 4 GB I think you won't be able to handle much
> more anyway.

That really depends. If you only shove out static pages and have one or two or three odd cgis on the machine, you can flatten down the httpd binary quite a bit by throwing out unneeded modules.

500 to 750 clients shouldn't be that much of a problem then ...

Cheers,
Ralph
Ralph Angenendt wrote on Wed, 21 Jan 2009 10:52:59 +0100:
> That really depends. If you only shove out static pages and have one or
> two or three odd cgis on the machine, you can flatten down the httpd
> binary quite a bit by throwing out unneeded modules.
>
> 500 to 750 clients shouldn't be that much of a problem then ...

Sure it depends ;-) I do serve dynamic pages, but by removing the really unnecessary modules I made my httpds much faster (especially on pipelined image downloads), and they only have some 10 MB (RES) per worker.

But I would expect that they are running the default set of modules.

Kai
Kai Schaetzl wrote:
> Ralph Angenendt wrote on Wed, 21 Jan 2009 10:52:59 +0100:
>> That really depends. If you only shove out static pages and have one or
>> two or three odd cgis on the machine, you can flatten down the httpd
>> binary quite a bit by throwing out unneeded modules.
>>
>> 500 to 750 clients shouldn't be that much of a problem then ...
>
> Sure it depends ;-) I do serve dynamic pages but by removing the really
> unnecessary modules I made my httpds much faster (especially on pipelined
> image downloads) and they only have some 10 MB (RES) per worker.
>
> But I would expect that they are running the default set of modules.
>
> Kai

If you can separate out the images to a specific URL (img.domain.com) and put them all in the same directory, you can use nginx to serve them. Of course, the actual transfer speed will not increase much, but latency will go down to the absolute minimum. And latency is what makes a page appear "fast" or "slow" to customers.
Ralph Angenendt wrote:
> Kai Schaetzl wrote:
>> There's your limit. However, you should check with your hardware if upping
>> it is really desirable. With 4 GB I think you won't be able to handle much
>> more anyway.
>
> That really depends. If you only shove out static pages and have one or
> two or three odd cgis on the machine, you can flatten down the httpd
> binary quite a bit by throwing out unneeded modules.
>
> 500 to 750 clients shouldn't be that much of a problem then ...

Yeah, gotta monitor it. I just checked in on some servers I ran at my last company, and the front-end proxies (99% mod_proxy) seem to peak out, traffic-wise, at about 230 workers. For no other reason than because I could, each proxy has 8 Apache instances running (4 for HTTP, 4 for HTTPS). CPU usage peaks at around 3% (dual proc, single core), memory usage around 800MB, with about 100 idle workers. Keepalive was set to 300 seconds due to poor application design.

One set of back-end Apache servers peaks, traffic-wise, at around 100 active workers, though memory usage was higher, around 2GB, because of mod_fcgid running Ruby on Rails. CPU usage seems to be at around 25% (dual proc, quad core) for that one application. Each back-end application had its own dedicated Apache instance (11 apps) for maximum stability and best performance monitoring, though the bulk of them ran on the same physical hardware. Traffic routing was handled by a combination of F5 load balancers and the Apache servers previously mentioned (F5 iRules were too slow and F5's TMM was not scalable at the time).

nate
> > > KeepAliveTimeout 150
> >
> > reduce this to 10 or 5.
>
> Kai
>
> --
> Kai Schätzl, Berlin, Germany

Kai, what do you think about the general "Timeout"? It is set to 300. I've never much thought about it, yet should we consider possibly reducing that one too?

- rh
RobertH wrote on Wed, 21 Jan 2009 11:26:41 -0800:
> What do you think about the general "Timeout"?
>
> It is set to 300. I've never much thought about it, yet should we
> consider possibly reducing that one too?

I'm using 120, but I don't think reducing this value has much impact.

Kai
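The thread's tuning advice, collected into a single httpd.conf fragment; the values are the ones suggested in the posts above, so treat them as starting points rather than recommendations for your workload:

```apache
# Keep keepalives on, but with a short timeout (the thread suggests
# 5-10 seconds instead of the original 150)
KeepAlive On
KeepAliveTimeout 5

# Kai uses 120 for the general Timeout; reducing the default 300
# reportedly has little impact either way
Timeout 120

# The effective concurrency cap discussed in the thread; raise only
# after checking per-worker memory use against available RAM
ServerLimit 251
MaxClients 251
```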
Load Balancer
Apache Load Balancer
mod_proxy_balancer - Apache HTTP Server - http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html
 <Proxy balancer://mycluster>
     BalancerMember http://192.168.1.50:80
     BalancerMember http://192.168.1.51:80
 </Proxy>
 ProxyPass /test balancer://mycluster
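The balancer above only works if the proxy modules are loaded. On a 2.2-era install that means LoadModule lines like these in httpd.conf (the `modules/` paths are the usual distribution defaults and may differ on your system):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
```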
User Directories
Default:
UserDir public_html
Results in requests for http://example.com/~username/ being served from:
~/public_html/
To disable user directories:
UserDir disabled
Ubuntu user dir config:
 UserDir public_html
 UserDir disabled root
 <Directory /home/*/public_html>
     AllowOverride FileInfo AuthConfig Limit Indexes
     Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
     <Limit GET POST OPTIONS>
         Order allow,deny
         Allow from all
     </Limit>
     <LimitExcept GET POST OPTIONS>
         Order deny,allow
         Deny from all
     </LimitExcept>
 </Directory>
References:
- Per-user web directories - Apache HTTP Server - http://httpd.apache.org/docs/2.2/howto/public_html.html
- mod_userdir - Apache HTTP Server - http://httpd.apache.org/docs/2.2/mod/mod_userdir.html#userdir
Minimal Memory Instance
Assuming the prefork MPM is in use (check with 'httpd -l', or 'apache2 -l' on Debian/Ubuntu):
 KeepAlive off
 <IfModule mpm_prefork_module>
     StartServers          2
     MinSpareServers       2
     MaxSpareServers       5
     ServerLimit          20
     MaxClients           20
     MaxRequestsPerChild  10000
 </IfModule>
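To pick sane ServerLimit/MaxClients values for a memory-constrained box, it helps to measure the average resident size of the running workers first. A sketch, assuming the process is named apache2 (Debian/Ubuntu; CentOS names it httpd):

```shell
#!/bin/sh
# Average resident set size (RSS) of all Apache worker processes, in MB.
# The process name "apache2" is an assumption; use "httpd" on CentOS.
ps -o rss= -C apache2 |
  awk '{ sum += $1; n++ }
       END { if (n) printf "%d workers, avg %.1f MB RSS\n", n, sum / n / 1024 }'
```

Divide the memory you can spare for Apache by that average to get a MaxClients ceiling that keeps the box out of swap.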
References:
- Running Apache On A Memory-Constrained VPS | Kalzumeus Software - http://www.kalzumeus.com/2010/06/19/running-apache-on-a-memory-constrained-vps/
---
TODO See:
- http://wiki.vpslink.com/Low_memory_MySQL_/_Apache_configurations
- http://www.narga.net/optimizing-apachephpmysql-low-memory-server/
Issues
Could not reliably determine domain name
Warning:
Could not reliably determine the server's fully qualified domain name
Solution:
- Force ServerName in apache configs [8]
ServerName localhost
client denied by server configuration
browser:
403 Forbidden - You don't have permission to access / on this server.
Error:
 ==> httpd/default-error_log <==
 [Mon Sep 01 21:42:02.273169 2014] [authz_core:error] [pid 11221] [client 174.52.47.54:50675] AH01630: client denied by server configuration: /www/
Add Directory allow permissions (Apache 2.2 syntax): [9]
 DocumentRoot /www
 <Directory /www>
     Order allow,deny
     Allow from all
 </Directory>
On Apache 2.4 and later, this worked better for me (Require replaces the old Order/Allow directives): [10] [11]
 DocumentRoot /www
 <Directory /www>
     Require all granted
 </Directory>
Actually this just worked on the latest Ubuntu system:
 <Directory /www/>
     Options Indexes FollowSymLinks
     AllowOverride None
     Require all granted
 </Directory>
References:
- ClientDeniedByServerConfiguration - Httpd Wiki - https://wiki.apache.org/httpd/ClientDeniedByServerConfiguration
permissions missing component path
 ==> httpd/default-error_log <==
 [Mon Sep 01 21:50:09.256120 2014] [core:error] [pid 11511] (13)Permission denied: [client 174.52.47.54:50744] AH00035: access to /index.html denied (filesystem path '/www/index.html') because search permissions are missing on a component of the path
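This error means some directory between / and /www/index.html lacks the execute (search) bit for the Apache user. namei (from util-linux) prints the mode of every path component so you can spot the offender; the chmod below is one possible fix, assuming world-search on the docroot is acceptable under your security policy (the /www path is taken from the log above):

```shell
# Show the permissions of every component of the failing path
namei -m /www/index.html

# Grant the search (execute) bit so httpd can traverse the directory;
# repeat for any parent directory that namei shows without an 'x'
sudo chmod o+x /www
```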