
Linux - How to build NGINX from source and optimize it for security and performance, including TLS.

Section 1: Building and installing NGINX from source

This is an extensive tutorial I put together. It goes through how to build and install NGINX, and how to properly configure TLS certificates for your domain name. The reason you want to build it from source is that you get the latest stable version that way. Otherwise you risk getting an older version, which is often what's included in the apt repository.

NGINX is popular software that can be used for a lot of things: as a web server, a reverse proxy or a load balancer, just to name a few. I recommend using it because it adds an extra layer of security to your server applications. It's also compatible with pretty much any type of software, whether that's an OT server, a Node.js web application or a regular website. What we want to use it as is a reverse proxy. NGINX will sit as a middleman between the user and your OT server or website. In other words, all communication sent to/from your server will be routed through NGINX. This is good because we can filter out bad actors, optimize websites for performance and ensure security headers are applied.


In this tutorial I will cover everything you need to know about NGINX. We're going to be building it from scratch so that you get the latest stable version of it. I'll also be going over how you can configure it to be as secure and performant as possible. We're going to do all of this on Linux, namely a Debian 12 server. You can follow along on other operating systems as well but there might be differences in how to do things.



Important note: I'd also like to mention that the main developer of NGINX, Maxim Dounin, recently left the NGINX project and created his own fork called FreeNGINX due to some corporate drama. It might be a good option to switch over to FreeNGINX. He has done most of the development work on NGINX over the years; he essentially is NGINX. His FreeNGINX fork is the same as regular NGINX, except it will continue to get his updates. In this tutorial, however, I'm going to be using NGINX (and not FreeNGINX). If you want to read more about FreeNGINX, you can read his statement. The process (building and securing) is exactly the same.



Now let's start by looking at the NGINX website to figure out what the latest stable version is. Since the goal is a secure and performant NGINX installation, we do not want to use any beta version of the software. At the time of this post, NGINX 1.26.0 is the latest stable version. Now sign in to your Linux server and we'll start building. We need some build tools for this, so run the following to install all the essentials and download the latest stable version of NGINX.

Bash:
sudo apt update
sudo apt dist-upgrade
sudo apt install build-essential libpcre3-dev libssl-dev zlib1g-dev libgd-dev
wget http://nginx.org/download/nginx-1.26.0.tar.gz
tar -xzvf nginx-1.26.0.tar.gz
cd nginx-1.26.0

Before we build and install NGINX, we have to set up a new system user on the server. Since this will run as a service, we do not want our regular user account to run it. Instead we want to limit the access rights and set up a dedicated user whose only purpose is to run the NGINX server. We will simply call the new user and its group "nginx". Run the following to add the new user:

Bash:
sudo adduser --system --no-create-home --shell /bin/false --disabled-login --group nginx
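
If you want to double-check that the account was created as intended, you can look it up afterwards (purely an optional verification step):

Bash:
getent passwd nginx
id nginx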

Now let's configure the NGINX build using all recommended modules. NGINX allows us to customize the build and include/exclude certain modules (features). Below, I've compiled a list of all the modules you may need for a web server and OT server. Run the following command to configure the build:

Bash:
./configure --prefix=/var/www/html --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/etc/nginx/nginx.pid --lock-path=/etc/nginx/nginx.lock --user=nginx --group=nginx --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_mp4_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_secure_link_module --with-http_slice_module --with-http_stub_status_module --http-log-path=/var/log/nginx/access.log --with-stream --with-stream_ssl_module --with-stream_realip_module --with-compat --with-pcre-jit

Now it's time to compile and install NGINX. Adjust the number after "-j" to match your CPU core count. I'm using "-j4" since my server has 4 CPU cores; if you have 8, for example, use "-j8" to make it build faster.

Bash:
make -j4
sudo make install
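
Once the install finishes, you can confirm the binary landed where we expect and was built with the modules above; the -V flag prints the version together with the configure arguments used:

Bash:
/usr/sbin/nginx -v
/usr/sbin/nginx -V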

To finalize the installation of NGINX, let's set up a systemd service so that we can easily start/stop/restart the server whenever we need to. We'll create a new system service called "nginx" like this:

Bash:
sudo nano /etc/systemd/system/nginx.service

Then paste the following into the NGINX service configuration. You can save the file using Ctrl+O → Enter → Ctrl+X.

Code:
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/etc/nginx/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Now all that's left is to refresh the systemd services and start NGINX.

Bash:
sudo systemctl daemon-reload
sudo systemctl start nginx
sudo systemctl status nginx

If you ever want to stop NGINX, simply run "sudo systemctl stop nginx"; use "start" or "restart" in the same way to start or restart it.
Since we started NGINX, you should now be able to see an NGINX landing page if you visit your server's IP address (or domain name, if you have one).
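
If you prefer checking from the command line (for example on a headless server), a quick request against localhost should return the default page headers; this assumes the default configuration is still listening on port 80:

Bash:
curl -I http://127.0.0.1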


You have now installed the latest version of NGINX. Next, let's configure it for the best security and performance. By default, it doesn't come with many security features enabled, and if you have a domain name linked to your server, it's important that we properly set up TLS certificates in NGINX. So for now, let's stop the NGINX service until it has been configured.

In Section 5 below I will talk about security and performance in NGINX.

Bash:
sudo systemctl stop nginx



Section 2: Installing and configuring TLS certificate in NGINX

This section of the tutorial only applies if you have purchased a domain name and plan to put NGINX in front of your web server (for example your OT server website). If you do not have a domain name, or do not plan to use NGINX for your website, skip this section. We will start by configuring TLS at our domain registrar.

The domain registrar I recommend most is Cloudflare, because they offer a lot of security and performance features, such as anti-DDoS protection, and they do not add any extra fees on top of domain prices. So if you haven't already, I urge you to use Cloudflare for your domain names. When you run a public website these days, you have to use a TLS certificate to protect the communication on it. TLS is the successor to SSL (the "https" you see in the address bar). There are several versions of TLS; what I recommend is to only allow TLS 1.2 and above. This means only reasonably modern devices will be able to connect: users visiting with, for example, Windows XP or a very old browser simply won't be able to use the site, because their device does not support TLS 1.2 or above.

Once you have purchased your domain in Cloudflare, head to the "SSL/TLS → Overview" page in your Cloudflare dashboard and make sure you have selected to use "Full (Strict)" TLS encryption.


Then head to the "SSL/TLS → Edge Certificates" page and set the following settings to improve the overall security of your domain.

Code:
Always Use HTTPS: enabled
HTTP Strict Transport Security (HSTS):
    - Enable HSTS (Strict-Transport-Security): enabled
    - Max Age Header (max-age): 12 months
    - Apply HSTS policy to subdomains (includeSubDomains): enabled
    - Preload: enabled
    - No-Sniff Header: enabled
Minimum TLS Version: TLS 1.2
Opportunistic Encryption: enabled
TLS 1.3: enabled
Automatic HTTPS Rewrites: enabled

Lastly, set up DNS records so that your domain name points to your server's IP address. You do that by going to the "DNS → Records" page in the Cloudflare dashboard. Create two new DNS records using the following information:

Code:
Type: A
Name: @
IPv4 address: <ipAddress of your server>
Proxy status: enabled (proxied)

Type: A
Name: www
IPv4 address: <ipAddress of your server>
Proxy status: enabled (proxied)

We are now done configuring TLS in Cloudflare. Now let's configure TLS in NGINX. The way TLS works is that certificates are handed out by so-called Certificate Authorities (CA). However, Cloudflare and most other domain registrars are not CAs; instead, they act as a middleman between you and a CA. To make this work properly, we have to set up a so-called Origin Server Certificate. It is essentially a certificate that allows Cloudflare to create TLS certificates on your behalf, like a contract. Some domain registrars (including Cloudflare) offer to create Origin Server Certificates via their website, but to make sure we are the only ones holding the private key, we will create one ourselves.

This part is going to be a bit confusing, but in order to create the Origin Server Certificate, we have to generate a so-called Certificate Signing Request (CSR) file and give it to Cloudflare. In short: first we create a CSR file linked to our own private key and upload it to Cloudflare; Cloudflare then generates an Origin Server Certificate for us; we upload that file to our server and end up with a TLS certificate. This is the best approach if you want full control over the TLS of your website. It involves more steps than strictly necessary to get TLS, but if you want to do it right, this is how it's done.

So let's start by creating a new CSR file on our server. We will place it inside "/etc/ssl".
Change "<domain>" to your domain name in the command below.

Bash:
cd /etc/ssl
sudo openssl req -newkey rsa:2048 -keyout <domain>.key -out <domain>.csr

You will be asked to create a password for this CSR file. Make sure you write it down and store it somewhere safe. It will then ask a few questions about your country and such; you can skip those by just pressing Enter. However, when it asks for a "Common Name", you must type the following (changing <domain> to your domain name):

Code:
*.<domain>

This means that we will enable TLS not only on our domain name, but on all of its subdomains as well.
After that it will ask about "Email address", "Challenge password" and "Optional company name". Leave all of them blank and skip them by pressing Enter.
You should now have two new files created, named after your domain:
  • A private key file (".key")
  • A CSR file (".csr")
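
If you want to sanity-check the files before moving on, openssl can print the contents of the CSR and verify the passphrase-protected private key; this is just an optional verification step and will prompt for the password you chose:

Bash:
sudo openssl req -in /etc/ssl/<domain>.csr -noout -text
sudo openssl rsa -in /etc/ssl/<domain>.key -check -noout
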
We now have to create a file on the server containing the password you set up for your CSR file, so that the NGINX service will be able to read it. Let's create a new file called "cert" inside the "/opt" folder.

Bash:
sudo nano /opt/cert

Paste your CSR file password into the file and save it (Ctrl+O → Enter → Ctrl+X). Then we need to give the nginx user permission to read this file. Run the following:

Bash:
sudo chown nginx:nginx /opt/cert
sudo chmod 600 /opt/cert
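
The point of this file is that NGINX can read the key's passphrase from it at startup instead of prompting for it. One way to reference it in the configuration is the ssl_password_file directive; a minimal sketch (the exact placement depends on the template you use):

Code:
# lets NGINX decrypt the private key without an interactive prompt
ssl_password_file /opt/cert;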

At this point, I recommend making backups of your CSR and private key file in case you ever lose them. They can be used to change the TLS certificate of your domain, so it's important that you do not lose them.

We are now ready to upload the CSR file to Cloudflare. In turn, they will generate an Origin Server Certificate file for us. Head to the Cloudflare dashboard, open the "SSL/TLS → Origin Server" page and click "Create Certificate". Select the option "Use my private key and CSR" and paste the contents of your CSR file into the form. Then, in the "Hostnames" field, enter both of these (changing "<domain>" to your domain):

Code:
*.<domain>
<domain>

Select how long your Origin Server Certificate should be valid for. This is how long you are giving Cloudflare permission to generate and renew TLS certificates on your behalf. I'm personally going with 15 years, for convenience's sake. Then click "Create" and your Origin Server Certificate will be generated. Copy all of its content and save it on your computer, somewhere safe.

Now all you need to do is install the Origin Server Certificate on the server. Let's create its file on the server (changing "<domain>" to your domain name):

Bash:
sudo nano /etc/ssl/<domain>.pem

Paste the content from the Cloudflare website into the file and save it (Ctrl+O → Enter → Ctrl+X). You should now have three files: a private key (".key"), a CSR file (".csr") and an Origin Server Certificate (".pem"), all named after your domain name, e.g. "example.com.key", "example.com.csr" and "example.com.pem". You can verify that you have them by running the following:

Bash:
ls -l /etc/ssl
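
Optionally, you can also verify that the certificate you pasted really belongs to your private key by comparing their moduli; the two checksums should be identical (you will be asked for the key's passphrase):

Bash:
sudo openssl x509 -noout -modulus -in /etc/ssl/<domain>.pem | openssl md5
sudo openssl rsa -noout -modulus -in /etc/ssl/<domain>.key | openssl md5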


The last thing to do here is to allow NGINX to access these files. We only want the "nginx" user to be able to read them, and nothing else on your server. Change "<domain>" to your domain name.

Bash:
sudo chown nginx:nginx /etc/ssl/<domain>.*
sudo chmod 600 /etc/ssl/<domain>.key
sudo chmod 644 /etc/ssl/<domain>.csr
sudo chmod 644 /etc/ssl/<domain>.pem



Section 3: Generating a Diffie-Hellman key exchange

This section only applies to users that did Section 2 in this tutorial.

A Diffie-Hellman (DH) key exchange is used during the TLS handshake to agree on the session keys that encrypt your traffic, without ever sending those keys over the wire. While the TLS certificate proves your website's identity, strong DH parameters make the key exchange itself harder to break and help provide forward secrecy. Unfortunately, not many people know about this and its importance for security. So if you want the highest grade of encryption for your website's communication, generate your own DH parameters as a complement to your TLS certificate.

To generate the DH parameters, run the following command and let it finish. It can take anywhere from 5 minutes to an hour, depending on your server's hardware. We will later make sure our NGINX configuration loads this file.

Bash:
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
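
The configuration template in Section 5 is expected to load this file; if you build your own configuration instead, the directive that references it looks like this (a sketch, assuming the output path used above):

Code:
# inside the http or server block, next to the other ssl_* directives
ssl_dhparam /etc/ssl/certs/dhparam.pem;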



Section 4: Creating a CAA DNS record

This section only applies to users that did Section 2 in this tutorial.

I already explained what a Certificate Authority (CA) is. A CAA DNS record is essentially a whitelist of the CAs that are allowed to issue TLS certificates for your domain. There are in fact bad CAs on the market (thankfully they get shut down rather quickly). To only allow "letsencrypt.org" to issue TLS certificates for our domain, we can set up a CAA DNS record in Cloudflare. This means we will only allow Let's Encrypt as a CA.

Open up the Cloudflare dashboard and create a new DNS record inside the "DNS → Records" page with the following information:

Code:
Type: CAA
Name: @
Flags: 0
TTL: Auto
Tag: Only allow specific hostnames
CA domain name: letsencrypt.org
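
Once the record has propagated, you can confirm it from your server (dig is part of the dnsutils package on Debian; replace <domain> with your domain name):

Bash:
dig CAA <domain> +short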




Section 5: Optimizing NGINX for security and performance

This is the most important section of this tutorial. NGINX lets us customize a lot of settings; there are over 100 different ones to adjust if you want to properly secure it and get full performance out of it. All of this configuration is done inside the NGINX configuration file on your server, which you can open like this:

Bash:
sudo nano /etc/nginx/nginx.conf
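
To give a sense of what gets tuned in there, below is a small excerpt with the kind of TLS and hardening directives involved. This is only a sketch for orientation; the full, commented configuration comes in the attached template mentioned below, and its exact values may differ:

Code:
# http block excerpts (sketch)
server_tokens off;                       # hide the NGINX version in error pages and headers
ssl_protocols TLSv1.2 TLSv1.3;           # only allow modern TLS versions
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;  # the DH parameters from Section 3
ssl_session_cache shared:SSL:10m;        # reuse TLS sessions to cut handshake overhead
gzip on;                                 # compress responses
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;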

Because there are so many settings, I have uploaded an optimized NGINX configuration template ("nginx.txt") attached to this tutorial. I have placed comments on everything in that file. Please read it through, as you will have to fill in your domain name in a few places (lines 203-204). Once you have replaced the default NGINX configuration with the new one and saved it (Ctrl+O → Enter → Ctrl+X), it's time to enable NGINX to autostart on server boot and to start the service:

Bash:
sudo systemctl enable nginx
sudo systemctl start nginx

You have now properly set up NGINX and the TLS certificates for your website and server application. Note that this setup also puts Cloudflare as a CDN in front of your website, to further enhance performance and security. So whenever a user visits your website, the request is routed like this:

User → Cloudflare CDN → NGINX Reverse Proxy → Backend

Create as many "server" blocks as you want inside the NGINX server configuration. I included one for a website as an example, running on port 8080 (change it to whatever port your website app is running on); a sketch of such a block is shown below. You can also add one for your OT server application on port 7171. Just make sure to restart the NGINX service whenever you make any changes:

Bash:
sudo systemctl restart nginx
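
As an illustration, here is a hedged sketch of what such a "server" block can look like for a website running locally on port 8080. The names and paths are examples only; adapt them to the attached template and to your own domain:

Code:
server {
    listen 443 ssl;
    http2 on;
    server_name example.com www.example.com;

    # The Origin Server Certificate, key and passphrase file from Section 2
    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;
    ssl_password_file   /opt/cert;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}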

You can then test your TLS certificate on the SSL Labs website. By following this tutorial you will end up with an A+ score, which is the highest rating. So not only do you get a very secure and performant NGINX installation, you also get the highest grade of TLS possible, for free. There is no need to purchase TLS certificates from DigiCert for hundreds of euros just to get a better rating than what 99% of the world uses. This is by far the best way to do it and should be done for any website that you host, regardless of its importance.
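
You can also do a quick check from the terminal that HTTPS and the security headers are being served as expected (assuming your domain is already proxied through Cloudflare):

Bash:
curl -sI https://<domain> | grep -iE 'strict-transport-security|x-content-type-options|server'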

If you have any questions, please post them down below.
 

To add to what I already mentioned about FreeNGINX (maintained by Maxim Dounin): it might be a good idea to switch over to it. He's been doing a better job than the people remaining at NGINX, and I'll personally probably be moving over to FreeNGINX in the coming weeks. If you liked the way NGINX has worked in recent years, it was because of him. Other core developers are starting to move over to it as well.
 
I see it's an nginx tutorial, but you mentioned CloudFlare, and there are 3 important things to configure when you use CloudFlare with an OTS website:

1. Configure the firewall (e.g. using iptables) to block access to the server's HTTP/HTTPS ports from any IPs other than CloudFlare's server IPs. Run this in a Linux terminal as root:
Bash:
for i in `curl https://www.cloudflare.com/ips-v4`; do iptables -I INPUT -p tcp -m multiport --dports http,https -s $i -j ACCEPT; done
iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
Otherwise someone can still attack your web server by using your server's IP directly instead of your domain (which is protected by CloudFlare), and CloudFlare's protection won't work.

2. Disable IPv6 access to the website in CloudFlare [I know it's controversial advice]
You will probably add your server's IPv4 as the server address in CloudFlare and not even think about IPv6.
Maybe someday IPv6 will be the standard, but right now - after 15 years! - only about 45% of Google users connect using IPv6 ( IPv6 – Google (https://www.google.com/intl/en/ipv6/statistics.html) ).
In the case of OTS users, they all use IPv4, as the Tibia client - at least old clients - works only with IPv4.
So IPv6 is pretty useless for your website, but it makes it very easy to attack: it's easy to get a lot of IPv6 addresses and bypass any 'requests per second per IP' limits.
The problem is that CloudFlare activates IPv6 compatibility by default: a user can access your domain in CloudFlare using IPv6 and CloudFlare will proxy that request to your IPv4 nginx server. How to disable IPv6 compatibility in CloudFlare: Disabling IPv6 on Cloudflare - Media Realm (https://www.mediarealm.com.au/articles/cloudflare-disable-ipv6/)

3. Change the log format in nginx.conf to log:
  • country of the user - reported by CloudFlare in "$HTTP_CF_IPCOUNTRY" - e.g. PL
  • domain used by the user ("$http_host") e.g. paypal.otupdate.ovh
  • PHP execution time ("$upstream_response_time") e.g. 0.003

Replace:
Code:
    # Logging paths
    access_log /var/log/nginx/access.log;
with:
Code:
    log_format cloudflarelog '$remote_addr "$HTTP_CF_IPCOUNTRY" "$http_host" [$time_local] '
                             '"$request" $status $body_bytes_sent "$upstream_response_time" '
                             '"$http_referer" "$http_user_agent"';
    # Logging paths
    access_log /var/log/nginx/access.log cloudflarelog;
Why should you log these things:
- When someone uses a cheap 'stresser' to attack your website, you will be able to find out which countries the attacking servers are in - you can then block those countries in the CloudFlare panel (often: RU, IN, PH, TH, CN, T1 - Russia, India, Philippines, Thailand, China or TOR [T1 in the CF panel]).

- I don't know if it's still possible with the Edge Certificates + SSL configuration described above, but probably it is: someone can add your server IP to his own domain in his CloudFlare panel and disable all protection on that domain. He then attacks his own domain and CF will proxy all the attackers to your server, but with CloudFlare server IPs as the connecting IPs, so the firewall (iptables) configuration won't stop it.
You can't fully stop this without a very expensive CF service (Argo or something like that - paid per number of requests), but you can check the domain sent in the request and refuse to pass requests to your web server/PHP if the domain is not yours (a sketch of this is shown after this list). It will heavily reduce CPU usage.
An obvious protection against it is to use an extra IP for the website - not the same one as for the OTS - and keep it secret. Then no one will be able to add your VPS/dedicated server IP in his CloudFlare panel. Most datacenters sell extra IPs (aka 'floating IP' aka 'failover IP') for VPSes/dedicated servers.
Not tested yet: use IPv6 for nginx and CloudFlare. Most servers get IPv4 + IPv6, so you could use IPv4 for the OTS (old Tibia clients only work with IPv4) and IPv6 for the website.

- When the web server starts using a lot of CPU, you will be able to check which PHP script is being attacked (or is buggy) by looking at the PHP execution time.
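
To illustrate the "do not pass requests for other domains" idea mentioned above, here is a hedged sketch of a catch-all server block that simply closes the connection (444 is an NGINX-specific status that drops the connection without a response) whenever the requested Host does not match one of your own server_name entries, so PHP is never touched:

Code:
# default catch-all: requests with unknown Host headers end here
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;

    # a certificate is still needed to complete the TLS handshake on 443;
    # reusing the origin certificate from the tutorial is one option
    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;
    ssl_password_file   /opt/cert;

    return 444;
}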
 

Great advice, I just added the Cloudflare IPv4 and IPv6 lists to my "ufw" rules (I don't use iptables personally). If anyone else wants to do it, these are the rules. And of course, remove the general port 80 and 443 allow rules I added in my other Linux security guide (in case anyone followed it).

Bash:
sudo ufw allow proto tcp from 173.245.48.0/20 to any port 80,443
sudo ufw allow proto tcp from 103.21.244.0/22 to any port 80,443
sudo ufw allow proto tcp from 103.22.200.0/22 to any port 80,443
sudo ufw allow proto tcp from 103.31.4.0/22 to any port 80,443
sudo ufw allow proto tcp from 141.101.64.0/18 to any port 80,443
sudo ufw allow proto tcp from 108.162.192.0/18 to any port 80,443
sudo ufw allow proto tcp from 190.93.240.0/20 to any port 80,443
sudo ufw allow proto tcp from 188.114.96.0/20 to any port 80,443
sudo ufw allow proto tcp from 197.234.240.0/22 to any port 80,443
sudo ufw allow proto tcp from 198.41.128.0/17 to any port 80,443
sudo ufw allow proto tcp from 162.158.0.0/15 to any port 80,443
sudo ufw allow proto tcp from 104.16.0.0/13 to any port 80,443
sudo ufw allow proto tcp from 104.24.0.0/14 to any port 80,443
sudo ufw allow proto tcp from 172.64.0.0/13 to any port 80,443
sudo ufw allow proto tcp from 131.0.72.0/22 to any port 80,443
sudo ufw allow proto tcp from 2400:cb00::/32 to any port 80,443
sudo ufw allow proto tcp from 2606:4700::/32 to any port 80,443
sudo ufw allow proto tcp from 2803:f800::/32 to any port 80,443
sudo ufw allow proto tcp from 2405:b500::/32 to any port 80,443
sudo ufw allow proto tcp from 2405:8100::/32 to any port 80,443
sudo ufw allow proto tcp from 2a06:98c0::/29 to any port 80,443
sudo ufw allow proto tcp from 2c0f:f248::/32 to any port 80,443
sudo ufw deny proto tcp from any to any port 80,443
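
Cloudflare occasionally changes these ranges, so instead of hard-coding them you can also pull the current lists the same way the iptables example above does; a hedged sketch (the two URLs are Cloudflare's official IP lists):

Bash:
for ip in $(curl -s https://www.cloudflare.com/ips-v4) $(curl -s https://www.cloudflare.com/ips-v6); do
    sudo ufw allow proto tcp from "$ip" to any port 80,443
done
sudo ufw deny proto tcp from any to any port 80,443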
 
Otherwise someone can still attack your web server by using your server's IP directly instead of your domain (which is protected by CloudFlare), and CloudFlare's protection won't work.
There is one more problem related to this, and to the solution for problem '3'.
Even if you use a different IPv4 for CloudFlare - other than the OTS IP used for otservlist - someone can still 'find out' your web server's real IP just by sending an HTTP request for xxx.com to every IPv4 address - only your server will answer with the acc. maker homepage. There are about 4 billion IPv4 addresses, but for attackers it takes just a few minutes to send a request to all of them.

So even if you use a different IPv4 for the website, you must still block access from IPs other than CloudFlare's, to prevent such an 'IPv4 scan' from finding your web server's IP address.
 

I realized that with your log format, you are now getting the Cloudflare IP address in the log, not the client's IP address. To fix this, use the variable "$http_cf_connecting_ip" instead of "$remote_addr".

Also, to make the log format easier to work with, you can try using JSON instead. There are so many tools that can easily parse JSON and even make nice charts for viewing the data. After your reply I started looking into customizing the log format, and it's something I wish I had paid more attention to earlier. Lots of things can be customized!

The Cloudflare country code seems to be enabled by default too (in the Cloudflare dashboard), so that's nice!

I made my logging format like this:

Code:
log_format logger-json escape=json '{"source": "nginx", "date": "$time_iso8601", "clientIpAddress": "$http_cf_connecting_ip", "clientCountryCode": "$http_cf_ipcountry", "requestProtocol": "$server_protocol", "requestMethod": "$request_method", "responseStatusCode": $status, "host": "$http_host", "requestURI": "$request_uri", "contentType": "$content_type", "requestLength": "$request_length", "requestDuration": "$request_time"}';

Or in a more readable format (don't use this inside the config file):

Code:
log_format logger-json escape=json '{
    "source": "nginx",
    "date": "$time_iso8601",
    "clientIpAddress": "$http_cf_connecting_ip",
    "clientCountryCode": "$http_cf_ipcountry",
    "requestProtocol": "$server_protocol",
    "requestMethod": "$request_method",
    "responseStatusCode": $status,
    "host": "$http_host",
    "requestURI": "$request_uri",
    "contentType": "$content_type",
    "requestLength": "$request_length",
    "requestDuration": "$request_time"
}';

If you use a lot of JSON logs from different services, add a custom field like "source": "nginx" to make it easier to identify this as an NGINX log entry. The only downside I found with this is that it doesn't append them nicely in a list with commas. So you'll have to extract all lines and add a "," at the end of each and wrap everything in an array "[]" to be able to iterate through it with parsers. But that's quick and easy with grep or sed.
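
As an alternative to grep/sed, if jq happens to be installed it can slurp line-delimited JSON straight into an array and query it in one go; a small hedged example (field names as defined in the log_format above):

Bash:
# wrap all entries in an array and count requests per country
jq -s 'group_by(.clientCountryCode) | map({country: .[0].clientCountryCode, requests: length})' /var/log/nginx/access.log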

List of all NGINX variables: Alphabetical index of variables (https://nginx.org/en/docs/varindex.html)
You can add so much more to it. UserAgent might be useful too.

And just as I was writing this, a bot visited my site. This is what the log entry looks like:

{"source": "nginx", "date": "2024-05-31T01:26:56+02:00", "clientIpAddress": "89.248.xxx.xxx", "clientCountryCode": "NL", "requestProtocol": "HTTP/2.0", "requestMethod": "GET", "responseStatusCode": 404, "host": "xxxxx.com", "requestURI": "/wp-json/litespeed/v1/cdn_status", "contentType": "", "requestLength": "490", "requestDuration": "0.003"}
 