NGINX mail proxy server. iRedMail installation. Installing and configuring PHP

This article will explain how to configure NGINX Plus or NGINX Open Source as a proxy for a mail server or an external mail service.

Introduction

NGINX can proxy the IMAP, POP3, and SMTP protocols to one of the upstream mail servers that host mail accounts, and can thus be used as a single endpoint for email clients. This brings a number of benefits, such as:

  • easy scaling of the number of mail servers
  • choosing a mail server based on different rules, for example, choosing the nearest server based on a client's IP address
  • distributing the load among mail servers

Prerequisites

    NGINX Plus (already includes the Mail modules necessary to proxy email traffic) or NGINX Open Source compiled with the Mail modules: the --with-mail parameter enables email proxy functionality, and --with-mail_ssl_module enables SSL/TLS support:

    $ ./configure --with-mail --with-mail_ssl_module --with-openssl=[ DIR] /openssl-1.1.1

    IMAP, POP3 and/or SMTP mail servers or an external mail service

Configuring SMTP/IMAP/POP3 Mail Proxy Servers

In the NGINX configuration file:

    Define the mail context at the top level of the configuration:

    mail {
        #...
    }

    Set the name of the mail server with the server_name directive:

    mail {
        server_name mail.example.com;
        #...
    }

    Point NGINX at the HTTP authentication server with the auth_http directive:

    mail {
        server_name mail.example.com;
        auth_http   localhost:9000/cgi-bin/nginxauth.cgi;
        #...
    }

    Optionally, specify whether to inform a user about errors from the authentication server with the proxy_pass_error_message directive. This may be handy when a mailbox runs out of memory:

    mail {
        server_name mail.example.com;
        auth_http   localhost:9000/cgi-bin/nginxauth.cgi;
        proxy_pass_error_message on;
        #...
    }

    Configure each SMTP, IMAP, or POP3 server with a server block. For each server, specify:

    • the port number that corresponds to the specified protocol with the listen directive
    • the protocol with the protocol directive (if not specified, it will be automatically detected from the port specified in the listen directive)
    • permitted authentication methods with the imap_auth, pop3_auth, and smtp_auth directives:

    server {
        listen    25;
        protocol  smtp;
        smtp_auth login plain cram-md5;
    }
    server {
        listen    110;
        protocol  pop3;
        pop3_auth plain apop cram-md5;
    }
    server {
        listen   143;
        protocol imap;
    }

Setting up Authentication for a Mail Proxy

Each POP3/IMAP/SMTP request from the client is first authenticated against an external HTTP authentication server or an authentication script. An authentication server is mandatory for the NGINX mail proxy. You can write the server yourself, following the NGINX authentication protocol, which is based on HTTP.

If authentication is successful, the authentication server chooses an upstream server and redirects the request. In this case, its response contains the following lines:

HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: # the server name or IP address of the upstream server that will be used for mail processing
Auth-Port:   # the port of the upstream server

If authentication fails, the authentication server returns an error message. In this case, its response contains the following lines:

HTTP/1.0 200 OK
Auth-Status: # an error message to be returned to the client, for example "Invalid login or password"
Auth-Wait:   # the number of remaining authentication attempts until the connection is closed

Note that in both cases the response will contain HTTP/1.0 200 OK which might be confusing.

For more examples of requests to and responses from the authentication server, see the ngx_mail_auth_http_module page in the NGINX reference documentation.
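As an illustration only, the protocol above can be implemented with a tiny HTTP service. Below is a minimal sketch in Python; the credential table and upstream address are made-up values, and a real deployment would check the password against an actual user store:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical credential store and upstream mail server, for illustration.
USERS = {"user1@example.com": "secret"}
UPSTREAM = ("192.168.1.22", 110)

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # nginx passes the client's credentials in these request headers.
        user = self.headers.get("Auth-User", "")
        password = self.headers.get("Auth-Pass", "")
        self.send_response(200)  # note: 200 OK in both cases
        if USERS.get(user) == password:
            self.send_header("Auth-Status", "OK")
            self.send_header("Auth-Server", UPSTREAM[0])
            self.send_header("Auth-Port", str(UPSTREAM[1]))
        else:
            self.send_header("Auth-Status", "Invalid login or password")
            self.send_header("Auth-Wait", "3")  # remaining attempts
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port=9000):
    """Run the endpoint that the auth_http directive points at."""
    HTTPServer(("127.0.0.1", port), AuthHandler).serve_forever()
```

With `auth_http localhost:9000/cgi-bin/nginxauth.cgi;`, nginx would query this service on every client login and connect to the returned Auth-Server/Auth-Port on success.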

Setting up SSL/TLS for a Mail Proxy

Using POP3/SMTP/IMAP over SSL/TLS ensures that the data passed between a client and a mail server is secured.

To enable SSL/TLS for the mail proxy:

    Make sure your NGINX is configured with SSL/TLS support by typing the nginx -V command on the command line and looking for --with-mail_ssl_module in the output:

    $ nginx -V
    configure arguments: ... --with-mail_ssl_module ...

    Make sure you have obtained server certificates and a private key and put them on the server. A certificate can be obtained from a trusted certificate authority (CA) or generated using an SSL library such as OpenSSL.

    Enable SSL with the ssl directive:

    ssl on;

    Or, to let clients upgrade a plain-text connection to a secure one on demand, enable STARTTLS with the starttls directive:

    starttls on;

    Add SSL certificates: specify the path to the certificates (which must be in the PEM format) with the ssl_certificate directive, and specify the path to the private key in the ssl_certificate_key directive:

    mail {
        #...
        ssl_certificate     /etc/ssl/certs/server.crt;
        ssl_certificate_key /etc/ssl/certs/server.key;
    }

    You can restrict connections to strong versions and ciphers of SSL/TLS with the ssl_protocols and ssl_ciphers directives, or set your own preferred protocols and ciphers:

    mail {
        #...
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers   HIGH:!aNULL:!MD5;
    }

Optimizing SSL/TLS for Mail Proxy

These hints will help you make your NGINX mail proxy faster and more secure:

    Set the number of worker processes equal to the number of processors with the worker_processes directive, placed at the same level as the mail context:

    worker_processes auto;

    mail {
        #...
    }

    Enable the shared session cache and set the session lifetime with the ssl_session_cache and ssl_session_timeout directives. A complete configuration:

    worker_processes auto;

    mail {
        server_name mail.example.com;
        auth_http   localhost:9000/cgi-bin/nginxauth.cgi;
        proxy_pass_error_message on;

        ssl                 on;
        ssl_certificate     /etc/ssl/certs/server.crt;
        ssl_certificate_key /etc/ssl/certs/server.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers         HIGH:!aNULL:!MD5;
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;

        server {
            listen    25;
            protocol  smtp;
            smtp_auth login plain cram-md5;
        }
        server {
            listen    110;
            protocol  pop3;
            pop3_auth plain apop cram-md5;
        }
        server {
            listen   143;
            protocol imap;
        }
    }

    In this example, there are three email proxy servers: SMTP, POP3 and IMAP. Each of the servers is configured with SSL and STARTTLS support. SSL session parameters will be cached.

    The proxy server uses the HTTP authentication server – its configuration is beyond the scope of this article. All error messages from the server will be returned to clients.

NGINX can be used not only as a web server or HTTP proxy, but also for proxying mail via the SMTP, IMAP, and POP3 protocols. This setup provides:

  • A single entry point for a scalable mail system.
  • Load balancing between all mail servers.

This article describes installation on a Linux operating system. As the mail service to which requests are forwarded, you can use Postfix, Exim, Dovecot, Exchange, the iRedMail bundle, and more.

Principle of operation

NGINX accepts client connections and sends an authentication request to the web server. Depending on the result of the login and password check, the authentication server returns a response with several headers.

In case of success:

HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: # address of the mail server to use
Auth-Port:   # port of the mail server

Thus, we determine the server and port of the mail server based on the authentication result. This opens up a lot of possibilities, given appropriate knowledge of a programming language.

In case of failure:

HTTP/1.0 200 OK
Auth-Status: # an error message, for example "Invalid login or password"
Auth-Wait:   # the number of remaining authentication attempts

Depending on the authentication result and the returned headers, the client is either rejected or redirected to the mail server we need.

Server preparation

Let's make some changes to the server security settings.

SELinux

Disable SELinux if you are using CentOS, or if this security system is enabled on your Ubuntu:

vi /etc/selinux/config

SELINUX=disabled

For the change to take effect without a reboot, also run: setenforce 0

Firewall

If we use firewalld (the default on CentOS):

firewall-cmd --permanent --add-port=25/tcp --add-port=110/tcp --add-port=143/tcp

firewall-cmd --reload

If we use iptables (the default on Ubuntu):

iptables -A INPUT -p tcp --dport 25 -j ACCEPT

iptables -A INPUT -p tcp --dport 110 -j ACCEPT

iptables -A INPUT -p tcp --dport 143 -j ACCEPT

apt-get install iptables-persistent

iptables-save > /etc/iptables/rules.v4

* in this example we allowed SMTP (25), POP3 (110), and IMAP (143).

Installing NGINX

Depending on the operating system, installing NGINX is slightly different.

For CentOS:

yum install nginx

For Ubuntu:

apt install nginx

We allow autostart of the service and start it:

systemctl enable nginx

systemctl start nginx

If NGINX is already installed on the system, check which modules it was built with:

nginx -V

We will get the list of options the web server was built with; among them we should see --with-mail. If the required module is missing, you need to update or rebuild nginx.

Setting up NGINX

Open the nginx configuration file and add the mail section:

vi /etc/nginx/nginx.conf

mail {
    server_name mail.domain.local;
    auth_http   localhost:80/auth.php;

    proxy_pass_error_message on;

    server {
        listen 25;
        protocol smtp;
        smtp_auth login plain cram-md5;
    }

    server {
        listen 110;
        protocol pop3;
        pop3_auth plain apop cram-md5;
    }

    server {
        listen 143;
        protocol imap;
    }
}

* where:

  • server_name - the name of the mail server that will be shown in the SMTP greeting.
  • auth_http - the web server and URL for the authentication request.
  • proxy_pass_error_message - allows or denies showing a message to the client on unsuccessful authentication.
  • listen - the port on which requests are listened for.
  • protocol - the application protocol for which the corresponding port is listened.
  • smtp_auth - available authentication methods for SMTP.
  • pop3_auth - available authentication methods for POP3.

In the http section, add a server block:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    ...

    location ~ \.php$ {
        set $root_path /usr/share/nginx/html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $root_path$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_param DOCUMENT_ROOT $root_path;
    }
    ...
}

Restart the nginx server:

systemctl restart nginx

Installing and configuring PHP

To perform authentication using PHP, the following packages must be installed on the system.

For CentOS:

yum install php php-fpm

For Ubuntu:

apt-get install php php-fpm

Start PHP-FPM:

systemctl enable php-fpm

systemctl start php-fpm

Authentication

The login and password are checked by a script whose path is set by the auth_http option. In our example, this is a PHP script.

An example of an official template for a login and password verification script:

vi /usr/share/nginx/html/auth.php

* this script accepts any login and password and redirects requests to the servers 192.168.1.22 and 192.168.1.33. To set the authentication algorithm, edit lines 61-64. Lines 73-77 are responsible for returning the servers to which the redirect goes: in this example, if the login begins with the characters "a", "c", "f", or "g", the redirect goes to the server mailhost01; otherwise, to mailhost02. The mapping of server names to IP addresses can be set on lines 31 and 32; otherwise, the lookup will go by domain name.
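The routing rule described above does not have to live in PHP. Sketched in Python under the same assumptions (the host names mailhost01/mailhost02 and the IP addresses are the example's values), it boils down to:

```python
# Mapping of server names to IP addresses (lines 31-32 of the template).
MAIL_HOSTS = {"mailhost01": "192.168.1.22", "mailhost02": "192.168.1.33"}

def choose_upstream(login: str) -> str:
    """Pick the upstream mail server for a login (lines 73-77 of the template)."""
    # Logins starting with "a", "c", "f" or "g" go to mailhost01,
    # everything else goes to mailhost02.
    name = "mailhost01" if login[:1].lower() in ("a", "c", "f", "g") else "mailhost02"
    return MAIL_HOSTS[name]
```

The value this function returns is what the script would place in the Auth-Server response header.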

Mail server setup

The data exchanged between the NGINX proxy and the mail server travels in clear text. It is therefore necessary to add an exception allowing authentication by the PLAIN mechanism. For example, to configure Dovecot, do the following:

vi /etc/dovecot/conf.d/10-auth.conf

Add lines:

remote 192.168.1.11 {
    disable_plaintext_auth = no
}

* in this example, we allowed PLAIN authentication requests from the server 192.168.1.11.

We also check the ssl setting:

* if ssl is set to required, the check will not work, since on the one hand the server would allow clear-text requests, but on the other it would require ssl encryption.

Restart Dovecot service:

systemctl restart dovecot

Client setup

You can proceed to check our proxy settings. To do this, in the mail client settings, specify the address or name of the nginx server as the IMAP/POP3/SMTP server, for example:

* in this example, the mail client is configured to connect to the server 192.168.1.11 on open ports 143 (IMAP) and 25 (SMTP).

Encryption

Now let's set up an SSL connection. Nginx must be built with the mail_ssl_module module; check with the command:

nginx -V

If the required module is missing, rebuild nginx.

After that, edit our configuration file:

vi /etc/nginx/nginx.conf

mail {
    server_name mail.domain.local;
    auth_http   localhost/auth.php;

    proxy_pass_error_message on;

    ssl                 on;
    ssl_certificate     /etc/ssl/nginx/public.crt;
    ssl_certificate_key /etc/ssl/nginx/private.key;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen 110;
        protocol pop3;
        pop3_auth plain apop cram-md5;
    }

    server {
        listen 143;
        protocol imap;
    }
}

Possible problem:

Reason: the SELinux security system is triggered.

Solution: disable or configure SELinux.

iRedMail is an open-source mail server bundle. The bundle is based on the Postfix SMTP server (Mail Transfer Agent, or MTA for short). It also includes: Dovecot, SpamAssassin, Greylist, ClamAV, SOGo, Roundcube, NetData, and NGINX.

  • Dovecot - IMAP/POP3 server.
  • SpamAssassin - spam filtering tool.
  • Greylist - anti-spam tool based on grey lists.
  • ClamAV - antivirus.
  • Roundcube and SOGo - web clients for working with e-mail.
  • NetData - a program for monitoring server operation in real time.
  • Nginx - web server.

Supported operating systems: CentOS 7, Debian 9, Ubuntu 16.04/18.04, FreeBSD 11/12, and OpenBSD 6.4.

iRedMail has paid and free versions, which differ in the functionality of iRedAdmin, the bundle's own web interface. In the free version you can only create domains, user mailboxes, and administrator mailboxes. If you need to create an alias, you won't be able to do it in the free version via iRedAdmin. Luckily, there is a free solution called PostfixAdmin that allows you to do this. PostfixAdmin integrates easily with iRedMail and works great with it.

Installation

To install, we need one of the operating systems listed above. I will be using Ubuntu Server 18.04. You must also have a purchased domain name and a configured DNS zone. If you use the DNS server of your domain registrar, then you need to make two records in the domain zone management section: A and MX. You can also use your own DNS by setting up delegation in your domain name registrar's personal account.

Setting up a domain zone when using a DNS registrar

Note! DNS settings take from a few hours up to one week to come into effect. Until they do, the mail server will not work correctly.

To install, download the current version from the iRedMail website. At the moment it is 0.9.9.

# wget https://bitbucket.org/zhb/iredmail/downloads/iRedMail-0.9.9.tar.bz2

Then unpack the downloaded archive.

# tar xjf iRedMail-0.9.9.tar.bz2

Unpacking the archive

And go to the created folder.

# cd iRedMail-0.9.9

Folder with iRedMail installer

Checking the contents of a folder

Folder content

And run the iRedMail installation script.

# bash iRedMail.sh

The installation of the mail system will start. During the installation process, you will need to answer a number of questions. We agree to start the installation.

Installation start

Selecting an installation directory

Now you need to select a web server. The choice is not great, so we choose NGINX.

Selecting a web server

Now you need to select the database server that will be installed and configured to work with the mail system. Choose MariaDB.

Selecting a database server

Set the root password for the database.

Creating a database root password

Now we specify our mail domain.

Creating a mail domain

Then create a password for the admin mailbox postmaster@domain.ru.

Create a mail administrator password

Selecting Web Components

We confirm the specified settings.

Confirmation of settings

Installation started.

Installation

Upon completion of the installation, confirm the creation of the iptables rule for SSH and restart the firewall. iRedMail works with iptables. On Ubuntu, the most commonly used firewall management utility is UFW. If for one reason or another you need it, install UFW (apt install ufw) and add rules (for example: ufw allow "Nginx Full" or ufw allow Postfix) so that UFW does not block the mail server. You can view the list of available rules by running: ufw app list. Then turn UFW on: ufw enable.

Create an iptables rule

Firewall restart

This completes iRedMail installation. The system gave us web interface addresses and login credentials. To enable all components of the mail system, you must restart the server.

End of installation

We reboot.

# reboot

Setting

First you need to make sure everything works. We try to enter the iRedAdmin control panel at https://domain/iredadmin. Login: postmaster@domain.ru, with the password created during installation. There is a Russian-language interface.

As you can see, everything works. While logging into iRedAdmin, you most likely received a security error related to the certificate. This is because iRedMail ships with a self-signed certificate, which the browser does not trust. To fix this, you need to install a valid SSL certificate. If you have purchased one, you can install it. In this example, I will install a free SSL certificate from Let's Encrypt.

Installing a Let's Encrypt SSL Certificate

We will install the certificate using the certbot utility. Let's add a repository first.

# add-apt-repository ppa:certbot/certbot

Then install certbot itself with the necessary components.

# apt install python-certbot-nginx

We receive a certificate.

# certbot --nginx -d domain.ru

After running the command, the system will ask you to enter an email address; enter it. After that, you will most likely get an error saying that it is not possible to find the server block for which the certificate was generated. In this case, this is normal, since we don't have a matching server block. The main thing for us is to get the certificate.

Obtaining a certificate

As you can see, the certificate was successfully obtained, and the system showed us the paths to the certificate itself and to the key. They are just what we need. In total, we received four files, which are stored in the "/etc/letsencrypt/live/domain" folder. Now we need to tell the web server about our certificate, that is, replace the bundled certificate with the one we just received. To do this, we need to edit just one file.

# nano /etc/nginx/templates/ssl.tmpl

And change the last two lines in it.

Replacing the SSL certificate

We change the paths in the file to the paths that the system told us when we received the certificate.

Replacing an SSL Certificate

And restart NGINX.

# service nginx restart

Now let's try to log in to iRedAdmin again.

SSL certificate verification

There is no more certificate error; the certificate is valid. You can click on the lock and see its properties. When the certificate expires, certbot should renew it automatically.

Now let's talk about the Dovecot and Postfix certificate. To do this, we will edit two configuration files. We do:

# nano /etc/dovecot/dovecot.conf

Finding a block:

#SSL: Global settings.

And we change the certificate registered there to ours.

Certificate replacement for Dovecot

Also pay attention to the "ssl_protocols" line. Its value must include "!SSLv3", otherwise you will get the error "Warning: SSLv2 not supported by OpenSSL. Please consider removing it from ssl_protocols" when restarting Dovecot.

# nano /etc/postfix/main.cf

Finding a block:

# SSL key, certificate, CA

And we change the paths in it to the paths to the files of our certificate.

Certificate replacement for Postfix

This completes the installation of the certificate. It is necessary to restart Dovecot and Postfix, but it is better to reboot the server.

# service dovecot restart

# reboot

Installing PHPMyAdmin

This step is optional, but I recommend completing it and installing PHPMyAdmin to make working with the databases easier.

# apt install phpmyadmin

The installer will ask which web server to configure PHPMyAdmin for; since NGINX is not in the list, just press TAB and move on.

Installing PHPMyAdmin

After the installation is complete, for phpMyAdmin to work, you need to make a symlink in the directory that NGINX serves by default.

# ln -s /usr/share/phpmyadmin /var/www/html

And try to go to https://domain/phpmyadmin/

PHPMyAdmin is working. The connection is protected by the certificate; there are no errors. Moving on, we create a MySQL (MariaDB) database administrator.

# mysql

And we get into the MariaDB management console. Next, we execute the commands one by one:

MariaDB > CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';
MariaDB > GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
MariaDB > FLUSH PRIVILEGES;

Creating a MySQL user

Everything is OK, you are logged in. PHPMyAdmin is ready to go.

Installing PostfixAdmin

In principle, PostfixAdmin, like PHPMyAdmin, does not have to be installed. The mail server will work just fine without these components. But then you won't be able to create mail aliases. If you don't need them, feel free to skip these sections. If you do need aliases, you have two options: buy the paid version of iRedAdmin, or install PostfixAdmin. Of course, you can also do without additional software by manually writing aliases into the database, but this is not always convenient and is not suitable for everyone. I recommend using PostfixAdmin; we will now look at its installation and integration with iRedMail. We start the installation:

# apt install postfixadmin

We agree and create a password for the system database of the program.

Installing PostfixAdmin

Installing PostfixAdmin

We make a symlink by analogy with the installation of PHPMyAdmin.

# ln -s /usr/share/postfixadmin /var/www/html

We make the user the web server runs as the owner of the directory. In our case, NGINX runs as the www-data user.

# chown -R www-data /usr/share/postfixadmin

Now we need to edit the PostfixAdmin configuration file and add information about the database that iRedAdmin uses. By default, this database is called vmail; if you go into PHPMyAdmin, you can see it there. So, for PostfixAdmin to be able to make changes to the database, we specify it in the PostfixAdmin configuration.

# nano /etc/postfixadmin/config.inc.php

Finding the lines:

$CONF["database_type"] = $dbtype;
$CONF["database_host"] = $dbserver;
$CONF["database_user"] = $dbuser;
$CONF["database_password"] = $dbpass;
$CONF["database_name"] = $dbname;

And bring them to this form:

$CONF["database_type"] = "mysqli"; # Database type
$CONF["database_host"] = "localhost"; # Database server host
$CONF["database_user"] = "admin"; # Login with write access to the vmail database. You can use the previously created admin
$CONF["database_password"] = "password"; # Password of the user specified above
$CONF["database_name"] = "vmail"; # iRedMail database name

Entering database information

If you plan to use the SOGo web mail client, there is one more step: change the PostfixAdmin password-hashing scheme in the $CONF["encrypt"] parameter from "md5crypt" to "dovecot:SHA512-CRYPT". If you do not do this, then when a user created in PostfixAdmin tries to log in to SOGo, they will receive an invalid login or password error.

Changing the Encryption Type

Now, in order to complete the installation successfully and without errors, you need to run a few queries against the database. It is convenient to do this through PHPMyAdmin. Select the vmail database and go to the SQL tab. In the window, enter:

DROP INDEX domain on mailbox;
DROP INDEX domain on alias;
ALTER TABLE alias ADD COLUMN `goto` text NOT NULL;

Database query

And press "Go". Now everything is ready; you can go to the PostfixAdmin web interface and complete the installation. To do this, type in the browser: https://domain/postfixadmin/setup.php.

The following should appear:

Installing PostfixAdmin

If everything is done according to the instructions, there should be no errors. If errors do appear, they must be fixed, otherwise the system will not let you continue. Set the installation password and click "Generate password hash". The system will generate a password hash, which must be inserted into the $CONF["setup_password"] parameter.

Completing the installation of PostfixAdmin

Changing configuration file settings

Now we enter the password we just created and create the PostfixAdmin administrator. It is better not to create an administrator with the postmaster login, as there may be problems logging into the iRedAdmin administration panel.

Creating a PostfixAdmin Administrator

Everything, the administrator is created. You can sign in.

Please note that, from a security point of view, it is better to rename or delete the setup.php file in the postfixadmin directory.

Go to https://domain/postfixadmin/ and enter the newly created credentials. In PostfixAdmin, as in iRedAdmin, a Russian-language interface is available; it can be selected at login.

We are trying to create a user mailbox.

Enable/Disable iRedMail Modules

iRedMail modules are managed by iRedAPD. It has a configuration file listing the active modules. If you do not need a particular module, you can remove it from the configuration file and it will stop working. We do:

# nano /opt/iredapd/settings.py

Find the "plugins" line and remove the components you do not need from it. I will remove the "greylisting" component. Of course, it protects against spam quite effectively, but wanted letters often fail to get through.

Greylisting is an automatic spam-protection technique based on analyzing the behavior of the sender's server. When greylisting is enabled, the server refuses to accept a letter from an unknown address the first time, reporting a temporary error. The sending server must then retry later; spammers usually don't. If the letter is sent again, the sender is added to the list for 30 days, and subsequent mail is accepted on the first attempt. Whether to use this module is up to you.
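The decision logic just described can be sketched as follows. This is a simplified illustration: the 5-minute retry delay is an assumed value, and real greylisting implementations differ in timings and persistence:

```python
import time

RETRY_DELAY = 300              # assumed minimal retry delay, in seconds
ENTRY_TTL = 30 * 24 * 3600     # remembered senders are kept for 30 days

_first_seen = {}  # (sender, recipient, client_ip) -> timestamp of first attempt

def greylist_accepts(sender, recipient, client_ip, now=None):
    """Return True to accept the message, False to reply with a temporary error."""
    now = time.time() if now is None else now
    key = (sender, recipient, client_ip)
    first = _first_seen.get(key)
    if first is None or now - first > ENTRY_TTL:
        _first_seen[key] = now   # unknown sender: remember it and defer delivery
        return False
    # A legitimate server retries after the delay; spammers usually never do.
    return now - first >= RETRY_DELAY
```

A first attempt is always deferred; a retry arriving after the delay is accepted, which is exactly why one-shot spam senders get filtered out.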

Enabling/Disabling mail modules

After making changes, iRedAPD must be restarted.

# service iredapd restart

Mail server testing

This completes the iRedMail mail server setup. You can proceed to the final stage: testing. Let's create two mailboxes, one through iRedAdmin and the second through PostfixAdmin, and send a letter from one mailbox to the other and back. In iRedAdmin we create the mailbox user1@domain.ru; in PostfixAdmin, user2@domain.ru.

Creating a user in iRedAdmin

Creating a user in PostfixAdmin

Check that users have been created.

If you pay attention to the "To" column in the list of PostfixAdmin mailboxes, you can see the difference between mailboxes created in iRedAdmin and in PostfixAdmin. Mailboxes created in iRedAdmin are marked as "Forward only", and those created in PostfixAdmin as "Mailbox". For a long time I could not understand why this happens and what the difference is, until I finally noticed one thing: mailboxes in iRedAdmin are created without aliases, while mailboxes in PostfixAdmin get an alias to themselves.

And if these aliases are deleted, the mailboxes will be displayed like those created in iRedAdmin, as "Forward only".

Removing aliases

Aliases have been removed. Check PostfixAdmin.

As you can see, all the boxes have become "Forward only". Likewise, if you create an alias to itself for a mailbox created in iRedAdmin, it will become "Mailbox". In principle, this does not affect how mail works. The only limitation is that you will not be able to create an alias on a mailbox created in PostfixAdmin; instead of creating an alias, you will need to edit the existing one. Speaking of aliases: in the new version of iRedMail, you need to make a change to one of the Postfix maps that handles aliases, otherwise the created aliases will not work. For this, the file /etc/postfix/mysql/virtual_alias_maps.cf must be corrected:

We do:

# nano /etc/postfix/mysql/virtual_alias_maps.cf

And we fix it.

Setting up aliases

Restart Postfix:

# service postfix restart

After that everything should work.

Now let's start checking the mail. We will log into the user1 mailbox via Roundcube, and into user2 via SOGo, then send a letter from user1 to user2 and back.

Sending an email with Roundcube

Receiving an email in SOGo

Sending an email to SOGo

Receiving email in Roundcube

Everything works without any problems. Delivery of the letter takes from two to five seconds. In the same way, letters are perfectly delivered to Yandex and mail.ru servers (checked).

Now let's check the aliases. Let's create a mailbox user3 and make an alias from mailbox user1 to mailbox user2. Then we send a letter from user3 to user1. In this case, the letter should arrive in mailbox user2.

Create an alias

Sending an email from user3's mailbox to user1's mailbox

Receiving a letter on user2's mailbox

With the work of aliases, too, everything is in order.

Let's test the mail server with a local mail client, using Mozilla Thunderbird as an example. Let's create two more users: client1 and client2. We will connect one mailbox via IMAP and the other via POP3, and send a letter from one mailbox to the other.

Connecting via IMAP

POP3 connection

We send a letter from Client 1 to Client 2.

Sending from Client 1

Receive on Client 2

And in reverse order.

Sending from Client 2

Receive on Client 1

Everything is working.

If you go to: https://domain/netdata, then you can observe the graphs of the state of the system.

Conclusion

This completes the installation, configuration, and testing of the iRedMail mail system. As a result, we got a completely free, full-fledged mail server with a valid SSL certificate, two different web-based mail clients, two control panels, as well as anti-spam and anti-virus built into the mail flow. If you wish, instead of web mail clients you can use local mail clients such as Microsoft Outlook or Mozilla Thunderbird. If you do not plan to use the web mail clients, you can skip installing them so as not to load the server, or install just the one you like best. I personally prefer SOGo, because its interface is optimized for mobile devices, making it very convenient to read email from a smartphone. The same applies to NetData and iRedAdmin: if you do not plan to use them, it is better not to install them. This mail system is not very demanding on resources; all of this runs on a VPS with 1024 MB of RAM and one virtual processor. If you have any questions about this mail system, write in the comments.

P.S. During testing of this product on various operating systems with 1 GB of RAM (Ubuntu, Debian, CentOS), it turned out that 1 GB is not enough for ClamAV to work. In almost all cases, with 1 GB of memory the antivirus reported a database error. On Debian and Ubuntu, the antivirus simply did not scan the mail passing through the server; otherwise everything worked fine. On CentOS, the situation was somewhat different: the clamd service completely hung the system, making normal server operation impossible. When trying to log in to the web interfaces, NGINX periodically returned 502 and 504 errors, and mail was sent only intermittently. With RAM increased to 2 GB, in all cases there were no problems with the antivirus or the server as a whole: ClamAV scanned the mail passing through the mail server and logged it, and an attempt to send a virus in an attachment was blocked. Memory consumption was approximately 1.2-1.7 GB.

Nginx is a small, very fast, and quite functional web server and mail proxy server developed by Igor Sysoev (rambler.ru). Thanks to its very low consumption of system resources, its speed, and its configuration flexibility, the nginx web server is often used as a front end to heavier servers, such as Apache, in high-load projects. The classic option is the combination Nginx - Apache - FastCGI. Working in such a scheme, nginx accepts all requests coming in via HTTP and, depending on the configuration and the request itself, decides whether to process the request itself and give the client a ready response, or to send the request for processing to one of the backends (Apache or FastCGI).

As you know, the Apache server handles each request in a separate process (or thread), which, it must be said, consumes a rather large amount of system resources. If there are 10-20 such processes it is no big deal, but when there are 100-500 or more of them, the system stops being fun.

Let's imagine a situation. Suppose 300 HTTP requests arrive at Apache: 150 clients sit on fast leased lines, and the other 150 on relatively slow Internet channels, even if not on modems. What happens? Apache creates a process (thread) for each of these 300 connections and generates the content quickly. The 150 fast clients immediately take the results of their requests, the processes that served them are killed, and the resources are released. The 150 slow clients, however, take the results of their requests slowly because of their narrow Internet channels, so 150 Apache processes hang around in the system waiting for the clients to pick up the content the web server has generated, devouring a lot of system resources. The situation is hypothetical, of course, but I think the point is clear. The Nginx - Apache combination fixes it. After reading the entire request from the client, nginx passes it on to Apache, which generates the content, returns the ready response to nginx as quickly as possible, and can then kill the process with a clear conscience and release the resources it occupied. Nginx, having received the result of the request from Apache, writes it to a buffer, or even to a file on disk, and can then feed it to slow clients for as long as necessary, while its worker processes consume so few resources that "it's even ridiculous to talk about it". :) Such a scheme saves system resources significantly; I repeat, nginx worker processes consume a meager amount of resources, which is all the more relevant for large projects.
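To make the arithmetic concrete, here is a toy Python model of that scenario (my own illustration; the timings are made-up assumptions, not measurements): without a buffering proxy, an Apache process stays busy until the client finishes downloading, while behind nginx it is busy only while generating the response.

```python
# Toy model of the 300-request scenario: 150 fast clients, 150 slow ones.
# All timings are invented for illustration.
GENERATE_MS = 100    # time Apache needs to generate a response (assumed)
FAST_DL_MS = 100     # download time over a fast link (assumed)
SLOW_DL_MS = 30_000  # download time over a slow link (assumed)
FAST_CLIENTS = 150
SLOW_CLIENTS = 150

def busy_process_seconds(buffered: bool) -> float:
    """Total Apache process-seconds spent serving all 300 requests."""
    if buffered:
        # nginx buffers the response, so each Apache process is freed
        # as soon as the content is generated
        total_ms = (FAST_CLIENTS + SLOW_CLIENTS) * GENERATE_MS
    else:
        # without a proxy, each process also waits out the client's download
        total_ms = (FAST_CLIENTS * (GENERATE_MS + FAST_DL_MS)
                    + SLOW_CLIENTS * (GENERATE_MS + SLOW_DL_MS))
    return total_ms / 1000

print(busy_process_seconds(buffered=False))  # 4545.0
print(busy_process_seconds(buffered=True))   # 30.0
```

Under these made-up numbers the buffering proxy cuts the backend's busy time by two orders of magnitude, which is the whole point of the Nginx - Apache combination.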

And this is only a small part of what nginx can do; don't forget about its ability to cache data and work with memcached. Here is a list of the main features of the nginx web server.

Functionality of the Nginx server as an HTTP server

  • Static content handling, index files, directory listing, open file descriptor cache;
  • Accelerated proxying with caching, load balancing and failover;
  • Accelerated support for FastCGI servers with caching, load balancing and fault tolerance;
  • Modular structure, support for various filters (SSI, XSLT, GZIP, resume, chunked responses);
  • Support for SSL and TLS SNI extensions;
  • IP-based or name-based virtual servers;
  • Working with KeepAlive and pipelined connections;
  • The ability to configure any timeouts, as well as the number and size of buffers, on a par with the Apache server;
  • Performing various actions depending on the client's address;
  • Changing the URI using regular expressions;
  • Special error pages for 4xx and 5xx;
  • Access restriction based on client address or password;
  • Setting log file formats, log rotation;
  • Limiting the speed of response to the client;
  • Limiting the number of simultaneous connections and requests;
  • Support for PUT, DELETE, MKCOL, COPY and MOVE methods;
  • Changing settings and updating the server without stopping work;
  • built-in Perl;

The functionality of the Nginx server as a mail proxy server

  • Forwarding to IMAP/POP3 backend using an external HTTP authentication server;
  • Checking the user's SMTP on an external HTTP authentication server and forwarding to an internal SMTP server;
  • Support for the following authentication methods:
    • POP3 - USER/PASS, APOP, AUTH LOGIN/PLAIN/CRAM-MD5;
    • IMAP - LOGIN, AUTH LOGIN/PLAIN/CRAM-MD5;
    • SMTP - AUTH LOGIN/PLAIN/CRAM-MD5;
  • SSL support;
  • support for STARTTLS and STLS;
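As a side illustration (mine, not the article's), the CRAM-MD5 method listed above is easy to demystify: the client answers the server's base64-encoded challenge with an HMAC-MD5 of the challenge keyed by the password, hex-encoded, prefixed with the username, and base64-encoded again. A minimal Python sketch with made-up credentials and challenge:

```python
import base64
import hmac

def cram_md5_response(username: str, password: str, b64_challenge: str) -> str:
    """Client reply for AUTH CRAM-MD5: base64("user hex(hmac_md5(password, challenge))")."""
    challenge = base64.b64decode(b64_challenge)
    digest = hmac.new(password.encode(), challenge, "md5").hexdigest()
    return base64.b64encode(f"{username} {digest}".encode()).decode()

# Hypothetical challenge, as a POP3/IMAP/SMTP server would send it
chal = base64.b64encode(b"<1896.697170952@mail.example.com>").decode()
print(cram_md5_response("alice", "secret", chal))
```

The password itself never crosses the wire, which is why all three protocols above offer this method alongside plain LOGIN/PLAIN.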

Operating systems and platforms supported by the Nginx web server

  • FreeBSD 3 through 8, i386 and amd64 platforms;
  • Linux 2.2 through 2.6, i386 platform; Linux 2.6, amd64;
  • Solaris 9, i386 and sun4u platforms; Solaris 10, i386, amd64 and sun4v platforms;
  • MacOS X, ppc and i386 platforms;
  • Windows XP and Windows Server 2003 (at the moment in beta);

Nginx server architecture and scalability

  • The main (master) process, several (configured in the configuration file) worker processes running under an unprivileged user;
  • Support for the following connection handling methods:
    • select is the standard method. The corresponding nginx module is built automatically if no more efficient method is found on the platform. You can force this module on or off with the --with-select_module or --without-select_module configuration options.
    • poll is the standard method. The corresponding nginx module is built automatically if no more efficient method is found on the platform. You can force this module on or off with the --with-poll_module or --without-poll_module configuration options.
    • kqueue is an efficient method used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0, and MacOS X. It may cause a kernel panic on dual-processor MacOS X machines.
    • epoll is an efficient method used in Linux 2.6+. Some distributions, such as SuSE 8.2, have patches to support epoll in the 2.4 kernel.
    • rtsig - real-time signals, an efficient method used on Linux 2.2.19+. By default the queue is limited to 1024 signals for the entire system. This is not enough for heavily loaded servers; the queue size must be increased with the /proc/sys/kernel/rtsig-max kernel parameter. However, as of Linux 2.6.6-mm2 this parameter is gone; instead each process has its own signal queue, whose size is determined by RLIMIT_SIGPENDING.
    • When the queue overflows, nginx resets it and handles connections with the poll method until the situation returns to normal.
    • /dev/poll is an efficient method, supported on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.
    • eventport - event ports, an efficient method used on Solaris 10. A patch must be installed before use to avoid a kernel panic.
  • Using the features of the kqueue method, such as EV_CLEAR, EV_DISABLE (to temporarily disable the event), NOTE_LOWAT, EV_EOF, number of available data, error codes;
  • Work with sendfile (FreeBSD 3.1+, Linux 2.2+, Mac OS X 10.5+), sendfile64 (Linux 2.4.21+) and sendfilev (Solaris 8 7/01+);
  • Support for accept filters (FreeBSD 4.1+) and TCP_DEFER_ACCEPT (Linux 2.4+);
  • 10,000 idle HTTP keep-alive connections consume approximately 2.5M of memory;
  • The minimum number of data copy operations;

Nginx is rapidly gaining popularity, evolving from a mere static-content accelerator in front of Apache into a full-featured and advanced web server that is increasingly used on its own. In this article we will look at interesting and non-standard scenarios for using nginx that will let you get the most out of the web server.

Mail proxy

Let's start with the most obvious: nginx's ability to act as a mail proxy. This feature has been in nginx from the start, yet for some reason it is used in production extremely rarely; some people do not even know it exists. Be that as it may, nginx supports proxying the POP3, IMAP and SMTP protocols with various authentication methods, including SSL and STARTTLS, and does it very quickly.

Why is this needed? There are at least two uses for this functionality. First, nginx can act as a shield against annoying spammers trying to push junk mail through our SMTP server. Usually spammers do not create many problems, since they quickly bounce off at the authentication stage, but when there are really a lot of them, nginx will help save CPU resources. Second, nginx can redirect users to one of several POP3/IMAP mail servers. Of course, another mail proxy could handle this too, but why pile up extra servers if nginx is already installed on the frontend to serve static files over HTTP, for example?

The mail proxy in nginx is somewhat unusual. It adds an extra layer of authentication implemented over HTTP, and only if the user passes this barrier is he let through further. This functionality is provided by a page or script to which nginx passes the user's credentials and which returns either a standard OK or a refusal reason (such as "Invalid login or password"). nginx calls the script with the following headers:

Authentication script input:

HTTP_AUTH_USER: user
HTTP_AUTH_PASS: password
HTTP_AUTH_PROTOCOL: mail protocol (IMAP, POP3 or SMTP)

And it returns like this:

Authentication script output:

HTTP_AUTH_STATUS: OK or failure reason
HTTP_AUTH_SERVER: real mail server to redirect to
HTTP_AUTH_PORT: server port

A remarkable feature of this approach is that it can be used not for authentication as such, but to scatter users across different internal servers depending on the username or the current load on the mail servers, or even to organize the simplest round-robin load balancing. However, if you just need to hand users off to a single internal mail server, you can use a stub implemented in nginx itself instead of a real script. For example, the simplest SMTP and IMAP proxy in the nginx config looks like this:

# vi /etc/nginx/nginx.conf
mail {
    # Address of the authentication script
    auth_http localhost:8080/auth;
    # Disable the XCLIENT command, some mail servers do not understand it
    xclient off;
    # IMAP server
    server {
        listen 143;
        protocol imap;
        proxy on;
    }
    # SMTP server
    server {
        listen 25;
        protocol smtp;
        proxy on;
    }
}

# vi /etc/nginx/nginx.conf
http {
    # Map the protocol passed in the HTTP_AUTH_PROTOCOL header
    # to the correct mail server port
    map $http_auth_protocol $mailport {
        default 25;
        smtp    25;
        imap    143;
    }
    # Implementation of the authentication "script": always returns OK
    # and redirects the user to the internal mail server, choosing the
    # right port with the mapping above
    server {
        listen 8080;
        location /auth {
            add_header "Auth-Status" "OK";
            add_header "Auth-Server" "192.168.0.1";
            add_header "Auth-Port" $mailport;
            return 200;
        }
    }
}

That's all. This configuration transparently redirects users to the internal mail server without the overhead of a script that is unnecessary in this case. With a script, this configuration can be extended considerably: you can set up load balancing, check users against an LDAP database, and perform other operations. Writing such a script is beyond the scope of this article, but it is very easy to implement even with only a rudimentary knowledge of PHP or Python.
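For example, here is a minimal sketch of such an authentication service in Python (my own assumption of what it could look like, not code from the article; the backend addresses are hypothetical). nginx passes the credentials in the Auth-User, Auth-Pass and Auth-Protocol request headers and expects Auth-Status, Auth-Server and Auth-Port in the reply; this stub accepts anyone with a non-empty login and balances them round-robin across two backends:

```python
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical internal mail servers and their per-protocol ports
BACKENDS = [
    ("192.168.0.1", {"smtp": 25, "imap": 143, "pop3": 110}),
    ("192.168.0.2", {"smtp": 25, "imap": 143, "pop3": 110}),
]
_rr = itertools.cycle(range(len(BACKENDS)))

def pick_backend(protocol):
    """Round-robin choice of (server, port) for the given mail protocol."""
    host, ports = BACKENDS[next(_rr)]
    return host, ports[protocol]

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user = self.headers.get("Auth-User", "")
        password = self.headers.get("Auth-Pass", "")
        proto = self.headers.get("Auth-Protocol", "smtp").lower()
        self.send_response(200)
        if not user or not password:
            # A real check (LDAP, SQL, ...) would go here
            self.send_header("Auth-Status", "Invalid login or password")
        else:
            host, port = pick_backend(proto)
            self.send_header("Auth-Status", "OK")
            self.send_header("Auth-Server", host)
            self.send_header("Auth-Port", str(port))
        self.end_headers()

# To serve nginx's auth_http requests on localhost:8080, uncomment:
# HTTPServer(("127.0.0.1", 8080), AuthHandler).serve_forever()
```

Point auth_http at this service instead of the nginx stub and the proxy gains load balancing essentially for free.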

Video streaming

Setting up ordinary nginx-based video hosting is easy: just upload the transcoded video to a directory accessible to the server, add it to the config, and configure a Flash or HTML5 player to take the video from that directory. However, if you want continuous video broadcasting from some external source or a webcam, this scheme will not work, and you will have to look at dedicated streaming protocols.

There are several protocols that solve this problem; the most efficient and best supported of them is RTMP. The only trouble is that almost all RTMP server implementations have problems. The official Adobe Flash Media Server is paid. Red5 and Wowza are written in Java and therefore do not provide the desired performance, and another implementation, Erlyvideo, is written in Erlang, which is good for a cluster setup but not as efficient on a single server.

I suggest another approach: use the RTMP module for nginx. It has excellent performance and also lets one server serve both the site's web interface and the video stream. The only problem is that this module is unofficial, so you will have to build nginx with its support yourself. Fortunately, the build is done in the standard way:

$ sudo apt-get remove nginx
$ cd /tmp
$ wget http://bit.ly/VyK0lU -O nginx-rtmp.zip
$ unzip nginx-rtmp.zip
$ wget http://nginx.org/download/nginx-1.2.6.tar.gz
$ tar -xzf nginx-1.2.6.tar.gz
$ cd nginx-1.2.6
$ ./configure --add-module=/tmp/nginx-rtmp-module-master
$ make
$ sudo make install

Now the module needs to be configured. This is done, as usual, through the nginx config:

rtmp {
    # Activate the broadcast server on port 1935 at site/rtmp
    server {
        listen 1935;
        application rtmp {
            live on;
        }
    }
}

The RTMP module does not work in a multi-worker configuration, so the number of nginx worker processes will have to be reduced to one (later I will tell you how to get around this problem):

worker_processes 1;

Now we can save the file and have nginx re-read the configuration. The nginx setup is complete, but we do not yet have the video stream itself, so we need to get it somewhere. For example, let it be the file video.avi from the current directory. To turn it into a stream and feed it to our RTMP broadcaster, let's use the good old FFmpeg:

# ffmpeg -re -i ~/video.avi -c copy -f flv rtmp://localhost/rtmp/stream

If the video file is not in H264 format, it should be recoded. This can be done on the fly using the same FFmpeg:

# ffmpeg -re -i ~/video.avi -c:v libx264 -c:a libfaac -ar 44100 -ac 2 -f flv rtmp://localhost/rtmp/stream

The stream can also be captured directly from the webcam:

# ffmpeg -f video4linux2 -i /dev/video0 -c:v libx264 -an -f flv rtmp://localhost/rtmp/stream

To view the stream on the client side, you can use any RTMP-enabled player, such as mplayer:

$ mplayer rtmp://example.com/rtmp/stream

Or embed the player directly into the web page, which is served by the same nginx (an example from the official documentation):

The simplest RTMP web player

There are only two important lines in the player setup: "file: "stream"", which names the RTMP stream, and "streamer: "rtmp://localhost/rtmp"", which specifies the address of the RTMP streamer. For most tasks these settings will be enough. Several different streams can be launched at one address, and nginx will efficiently multiplex them between clients. But that is not all the RTMP module can do. With its help you can, for example, relay a video stream from another server. FFmpeg is not needed for this at all; just add the following lines to the config:

# vi /etc/nginx/nginx.conf
application rtmp {
    live on;
    pull rtmp://rtmp.example.com;
}

If you need to create multiple streams in different qualities, you can call the FFmpeg transcoder directly from nginx:

# vi /etc/nginx/nginx.conf
application rtmp {
    live on;
    exec ffmpeg -i rtmp://localhost/rtmp/$name
        -c:v flv -c:a copy -s 320x240
        -f flv rtmp://localhost/rtmp-320x240/$name;
}
application rtmp-320x240 {
    live on;
}

With this configuration we get two broadcasters at once: one available at rtmp://site/rtmp, and a second one, broadcasting at 320 x 240 quality, at rtmp://site/rtmp-320x240. On the site you can then put a Flash player and quality-selection buttons that hand the player one or the other broadcaster address.

And finally, an example of broadcasting music to the network:

while true; do
    ffmpeg -re -i "`find /var/music -type f -name "*.mp3" | sort -R | head -n 1`" \
        -vn -c:a libfaac -ar 44100 -ac 2 -f flv rtmp://localhost/rtmp/stream;
done

Git proxy

The Git version control system can provide access to repositories not only over the Git and SSH protocols, but also over HTTP. Once upon a time the HTTP access implementation was primitive and could not provide full-fledged work with a repository. Since version 1.6.6 the situation has changed, and today this protocol can be used, for example, to bypass firewall restrictions on either side of the connection, or to build your own Git hosting with a web interface.

Unfortunately, the official documentation only describes setting up Git access with the Apache web server, but since the implementation itself is an external application with a standard CGI interface, it can be attached to almost any other server, including lighttpd and, of course, nginx. Nothing is required besides the server itself, an installed Git, and the small FastCGI server fcgiwrap, which is needed because nginx cannot talk CGI directly but can call scripts over the FastCGI protocol.

The whole scheme works as follows. The fcgiwrap server hangs in the background and waits for a request to execute a CGI application. nginx, in turn, is configured to request execution of the git-http-backend CGI binary over the FastCGI interface each time the address we specify is accessed. On receiving a request, fcgiwrap executes git-http-backend with the CGI arguments passed in by the Git client and returns the result.

To implement such a scheme, we first install fcgiwrap:

$ sudo apt-get install fcgiwrap

You do not need to configure it; all parameters are passed via the FastCGI protocol, and it starts automatically. It remains only to configure nginx. To do this, create the file /etc/nginx/sites-enabled/git (if there is no such directory, you can write to the main config) and put the following in it:

# vi /etc/nginx/sites-enabled/git
server {
    # Listen on port 8080
    listen 8080;
    # Address of our server (don't forget to add a DNS entry)
    server_name git.example.ru;
    # Logs
    access_log /var/log/nginx/git-http-backend.access.log;
    error_log /var/log/nginx/git-http-backend.error.log;
    # Base address for anonymous access
    location / {
        # When a push is attempted, send the user to the private address
        if ($arg_service ~* "git-receive-pack") {
            rewrite ^ /private$uri last;
        }
        include /etc/nginx/fastcgi_params;
        # Address of our git-http-backend
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        # Git repository address
        fastcgi_param GIT_PROJECT_ROOT /srv/git;
        # File address
        fastcgi_param PATH_INFO $uri;
        # Address of the fcgiwrap server
        fastcgi_pass 127.0.0.1:9001;
    }
    # Address for write access
    location ~ /private(/.*)$ {
        # User permissions
        auth_basic "git anonymous read-only, authenticated write";
        # HTTP authentication based on htpasswd
        auth_basic_user_file /etc/nginx/htpasswd;
        # FastCGI settings
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        fastcgi_param GIT_PROJECT_ROOT /srv/git;
        fastcgi_param PATH_INFO $1;
        fastcgi_pass 127.0.0.1:9001;
    }
}

This config assumes three important things:

  1. The repositories will live in /srv/git, so set the appropriate permissions:
     $ sudo chown -R www-data:www-data /srv/git
  2. The repository itself must be readable anonymously and allow uploads over HTTP:
     $ cd /srv/git
     $ git config core.sharedrepository true
     $ git config http.receivepack true
  3. Authentication uses an htpasswd file; create it and add users to it:
     $ sudo apt-get install apache2-utils
     $ htpasswd -c /etc/nginx/htpasswd user1
     $ htpasswd /etc/nginx/htpasswd user2
     ...

That's all, restart nginx:

$ sudo service nginx restart

Microcaching

Let's imagine a dynamic, frequently updated site that suddenly starts receiving a very heavy load (say, it lands on the front page of one of the largest news sites) and stops coping with serving content. Competent optimization and implementation of a correct caching scheme take a long time, but the problem needs solving now. What can we do?

There are several ways to get out of this situation with minimal losses, but the most interesting idea came from Fenn Bailey (fennb.com): simply put nginx in front of the server and make it cache all transmitted content, but with a lifetime of just one second. The trick is that the hundreds or thousands of visitors per second will, in effect, generate only one request per second to the backend, with most of them receiving a cached page. And hardly anyone will notice the difference, because even on a dynamic site one second usually means nothing.
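The arithmetic behind this is easy to check with a toy simulation (my own illustration, not from the article): with a one-second cache lifetime, only the first request after each expiry actually reaches the backend.

```python
def backend_hits(request_times, ttl=1.0):
    """Count requests that miss a cache whose entries live for `ttl` seconds."""
    hits = 0
    cached_until = float("-inf")
    for t in sorted(request_times):
        if t >= cached_until:        # cached copy expired: one backend request
            hits += 1
            cached_until = t + ttl   # the fresh response is cached for `ttl`
    return hits

# 1000 clients per second for 10 seconds = 10000 requests...
reqs = [s + i / 1000 for s in range(10) for i in range(1000)]
print(len(reqs), "requests ->", backend_hits(reqs), "backend hits")
```

Ten thousand requests collapse into roughly one backend request per second, which is exactly the effect microcaching relies on.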

The config implementing this idea does not look complicated at all:

# vi /etc/nginx/sites-enabled/cache-proxy
# Configure the cache
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:5m max_size=1000m;
server {
    listen 80;
    server_name example.com;
    # Cached address
    location / {
        # Cache enabled by default
        set $no_cache "";
        # Disable cache for all methods except GET and HEAD
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }
        # If the client uploads content to the site (no_cache = 1), make sure
        # the data given to him is not cached for two seconds, so that he can
        # see the result of the upload
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }
        # Enable/disable the cache depending on the no_cache variable
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        # Proxy requests to the real server
        proxy_pass http://appserver.example.ru;
        proxy_cache microcache;
        proxy_cache_key $scheme$host$request_method$request_uri;
        proxy_cache_valid 200 1s;
        # Protection against the thundering herd problem
        proxy_cache_use_stale updating;
        # Add standard headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Don't cache files larger than 1 MB
        proxy_max_temp_file_size 1M;
    }
}

A special place in this config belongs to the line "proxy_cache_use_stale updating;", without which we would get periodic bursts of load on the backend server from requests arriving during a cache update. Otherwise everything is standard and should be clear without further explanation.

Bringing the proxy closer to the target audience

Despite the widespread global increase in Internet speeds, the physical remoteness of a server from its target audience still plays a role. If a Russian site runs on a server located somewhere in America, access to it will a priori be slower than from a Russian server with the same channel width (all other factors being equal, of course). On the other hand, hosting servers abroad is often more profitable, including in terms of maintenance. So to gain the benefit of faster response times, you have to resort to a trick.

One possible option: place the main production server in the West, and deploy a not-too-resource-hungry frontend serving static files in Russia. This wins on speed without serious costs. The nginx config for the frontend in this case is the simple proxy implementation familiar to all of us:

# vi /etc/nginx/sites-enabled/proxy
# Store the cache for 30 days in 100 GB of storage
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:32m inactive=30d max_size=100g;
server {
    listen 80;
    server_name example.com;
    # Actually, our proxy
    location ~* \.(jpg|jpeg|gif|png|ico|css|midi|wav|bmp|js|swf|flv|avi|djvu|mp3)$ {
        # Backend address
        proxy_pass http://back.example.com:80;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffer_size 16k;
        proxy_buffers 32 16k;
        proxy_cache static;
        proxy_cache_valid 30d;
        proxy_ignore_headers "Cache-Control" "Expires";
        proxy_cache_key "$uri$is_args$args";
        proxy_cache_lock on;
    }
}

Conclusions

Today nginx can solve many different tasks, many of which have nothing to do with a web server or the HTTP protocol at all. A mail proxy, a streaming server, and a Git interface are just some of them.