
Friday, October 13, 2023

NGINX Reverse Proxy for an IPv6-Only Server Farm

Introduction

This article presents a very simple way to offer dual-stack web access to an IPv6-only server farm using NGINX. The continued growth of the Internet and the gradual adoption of the IPv6 protocol make it essential to ensure connectivity and accessibility for clients using both IPv4 and IPv6. We will explain how to configure NGINX as a reverse proxy that listens on both IPv4 and IPv6 addresses and correctly routes incoming requests to backend servers that have only IPv6 addresses. Among many other benefits, the approach described in this article is an important step toward the preservation of IPv4 addresses.



What is a Reverse Proxy?

In [1], Cloudflare defines a Reverse Proxy Server as follows: “A reverse proxy is a server that sits in front of web servers and forwards client (e.g. web browser) requests to those web servers. Reverse proxies are typically implemented to help increase security, performance, and reliability. In order to better understand how a reverse proxy works and the benefits it can provide, let’s first define what a proxy server is.”


What is a Proxy Server?

In [1], Cloudflare also provides the following definition for a proxy server: “A forward proxy, often called a proxy, proxy server, or web proxy, is a server that sits in front of a group of client machines. When those computers make requests to sites and services on the Internet, the proxy server intercepts those requests and then communicates with web servers on behalf of those clients, like a middleman.”



What are the benefits of a Reverse Proxy?

• A reverse proxy can offer IPv4 or transparent IPv6 access to clients serviced from an IPv6-only server farm (which is what we will focus on).
• Scalability: A reverse proxy allows adding or removing backend servers as needed without affecting end users. This makes it easier for applications to scale out, allowing them to handle a larger number of concurrent users and requests.
• Static content caching: NGINX can cache static content such as images, CSS files, and JavaScript, thus reducing the load on backend servers and increasing content delivery speed. This decreases page load times and the required bandwidth.
• Security: NGINX acts as a point of entry to the application, providing an additional layer of security. It can perform functions such as request filtering, DDoS attack prevention, SQL injection protection, and client authentication. NGINX can also enable the use of SSL/TLS encryption for communication between clients and the backend server.
• Advanced routing: A reverse proxy allows performing advanced routing based on various criteria, such as domain name, URL, or HTTP headers. This is useful when we need to direct traffic to different backend servers based on specific attributes of the requests.
• Consolidation of services: NGINX can act as a single point of entry for various backend services. This simplifies the infrastructure by consolidating multiple services on a single server, thus simplifying management and maintenance.
• Enhanced performance: NGINX is lightweight and resource efficient by design. Its streamlined architecture and ability to handle large numbers of concurrent connections make it a popular choice for improving web application performance.
• Load balancing: A reverse proxy such as NGINX can distribute incoming traffic across several backend servers (see the sketch after this list). This helps balance the load and guarantees that no server is overloaded, which improves an application's performance and responsiveness.
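To make the load-balancing case concrete, here is a minimal sketch (the upstream name "backend_pool", the domain pool.example.com, and the backend addresses are illustrative only and are not part of the configuration used later in this article):

upstream backend_pool {
    # two hypothetical IPv6-only backends; NGINX balances between them round-robin by default
    server [2001:db8:123::201]:80;
    server [2001:db8:123::202]:80;
}

server {
    listen 80;
    listen [::]:80;
    server_name pool.example.com;

    location / {
        proxy_pass http://backend_pool;
    }
}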



Topology


What is our Goal Today?

The edge server (Reverse Proxy Server) will be able to receive IPv4 and IPv6 HTTP requests, and depending on the website a user wishes to visit (domain), will forward the request to the right server. This is what will happen in our example: 

The client visits:        The request is sent to:
server-a.com        →     2001:db8:123::101
server-b.com        →     2001:db8:123::102
server-c.com        →     2001:db8:123::103



Requirements

• Linux with NGINX on the Reverse Proxy Server
• Superuser access
• A web server on each of the servers in the farm
• IPv4 and IPv6 Internet connectivity
• Internal IPv6 connectivity


Let's get started



1) Install NGINX on all servers

#apt update
#apt install nginx

2) Create the websites in the NGINX reverse proxy 

File /etc/nginx/sites-available/server-a.com

server {
    listen 80;
    listen [::]:80;

    server_name server-a.com;

    location / {
        proxy_pass http://[2001:db8:123::101];
    }
}

File /etc/nginx/sites-available/server-b.com

server {
    listen 80;
    listen [::]:80;

    server_name server-b.com;

    location / {
        proxy_pass http://[2001:db8:123::102];
    }
}






File /etc/nginx/sites-available/server-c.com

server {
    listen 80;
    listen [::]:80;

    server_name server-c.com;

    location / {
        proxy_pass http://[2001:db8:123::103];
    }
}

3) Create symbolic links to enable the sites configured above:


root@ProxyReverseSRV:/etc/nginx/sites-enabled# ln -s /etc/nginx/sites-available/server-a.com /etc/nginx/sites-enabled/server-a.com

root@ProxyReverseSRV:/etc/nginx/sites-enabled# ln -s /etc/nginx/sites-available/server-b.com /etc/nginx/sites-enabled/server-b.com

root@ProxyReverseSRV:/etc/nginx/sites-enabled# ln -s /etc/nginx/sites-available/server-c.com /etc/nginx/sites-enabled/server-c.com



4) Remember to restart NGINX

$sudo systemctl restart nginx
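It is also good practice to let NGINX validate the configuration files before (or after) any change; this is standard NGINX usage, not something specific to this setup:

$sudo nginx -t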



About the logs

Logs are extremely important for any company or ISP that wishes to review incoming connections. 

By default, NGINX will use its own IP address for outgoing connections, which results in the loss of the address of the client that originated the HTTP request. But don't worry. NGINX has the solution: proxy_set_header. This requires configuring both the end server and the Reverse Proxy server. 

1) On the Reverse Proxy Server, we must add these headers to the website configuration.

# Example of an nginx reverse proxy location that allows logging the client's
# original address and port number

location /examples {
    proxy_pass http://[2001:db8:123::103];
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

2) On the end server, add the following in the http section of the /etc/nginx/nginx.conf file:

set_real_ip_from 2001:db8:123::100; #replace the IP address with that of the proxy 
real_ip_header X-Forwarded-For; 
real_ip_recursive on; 

Example: 
http { 
   … 
   set_real_ip_from 2001:db8:123::100; 
   real_ip_header X-Forwarded-For;
   real_ip_recursive on;
   … 
  } 

With these settings, the receiving server will trust the X-Forwarded-For header sent by the proxy at 2001:db8:123::100 and will log the client's original source IP to /var/log/nginx/access.log.
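If you also want the forwarded address to appear explicitly in the log line, a minimal sketch of a custom log format is shown below (the format name "proxied" is just an illustrative choice); it would go in the http section of the end server:

log_format proxied '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log proxied;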



Summary

The proposed design makes it possible to run a 100% IPv6-only web server farm that is reachable from both the IPv4 and IPv6 worlds in a very simple, scalable, and efficient manner. This brings several benefits, including having to manage only one TCP/IP stack on the backend, simplicity, security, and even saving IPv4 addresses.


References


• [1] https://www.cloudflare.com/es-es/learning/cdn/glossary/reverse-proxy/
• https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04
• GitHub. LACNIC Blog Post Help Files for the entire project: https://github.com/LACNIC/BlogPostHelpFiles/tree/main/2023_Ofreciendo_conectividad_Dual_Stack_a_servidores_Web_en_una_granja_de_servidores_100_IPv6_Only

    Monday, February 29, 2016

    Read a BGP live stream from CAIDA

    Objective
    Read a BGP live stream from CAIDA and insert the prefixes into a BGP session

    What do we need
    bgpreader, from the bgpstream core package provided by CAIDA
    bgp_simple.pl, obtained from GitHub

    Overview
    We will read the live BGP stream feed using bgpreader; its standard output will be redirected to a pipe file (created with mkfifo), from which a Perl script called bgp_simple.pl will read. This same script will establish the BGP session against a BGP speaker and announce the prefixes received in the stream.

    LAB Topology
    The configuration was tested with both Cisco and Quagga.
    The BGP speaker (Cisco/Quagga) has the IPv4 address 192.168.1.1.
    The BGP Simple Linux box has the IP address 192.168.1.2.

    How does it work?
    bgpreader can write its output in the -m format used by libbgpdump (by RIPE NCC), which is the very same format bgp_simple.pl accepts on stdin. That is why myroutes is a pipe file (created with mkfifo).
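    As a generic illustration of how a named pipe behaves (not specific to bgpreader; the file name demo_pipe is arbitrary):

    mkfifo demo_pipe
    echo "hello" > demo_pipe &    # the writer blocks until a reader opens the pipe
    cat demo_pipe                 # the reader prints whatever the writer sent
    rm demo_pipe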

    Steps:  

    INSTALL BGP READER - UBUNTU 15.04

    First install some general packages:
    apt-get install apt-file libsqlite3-dev libsqlite3 libmysqlclient-dev libmysqlclient
    apt-get install libcurl-dev libcurl  autoconf git libssl-dev
    apt-get install build-essential zlib1g-dev libbz2-dev
    apt-get install libtool git
    apt-get install zlib1g-dev

    Also install wandio (version 1.0.3 at the time of writing):

    git clone https://github.com/alistairking/wandio
    cd wandio
    ./bootstrap.sh
    ./configure && make && make install

    To test wandio:
    wandiocat http://www.apple.com/library/test/success.html

    Download the bgpreader tarball from:
    https://bgpstream.caida.org/download

    #ldconfig (run this before testing)

    #mkfifo myroutes

    To test bgpreader:
    ./bgpreader -p caida-bmp -w 1453912260 -m
    (wait a few seconds and you will start to see output)

    # git clone https://github.com/xdel/bgpsimple


    Finally run everything
    In two separate terminals (or any other way you would like to do it):

    ./bgpreader -p caida-bmp -w 1453912260 -m > /usr/src/bgpsimple/myroutes
    ./bgp_simple.pl -myas 65000 -myip 192.168.1.2 -peerip 192.168.1.1 -peeras 65000 -p myroutes

    One more time, what will happen behind the scenes?
    bgpreader will read an online feed from a project called caida-bmp, starting at timestamp 1453912260 (Jan 27 2016, 16:31), in the "-m" format, i.e., the libbgpdump format (see references). Its standard output will be sent to the file /usr/src/bgpsimple/myroutes, which is a "pipe file". At the same time, bgp_simple.pl will create an iBGP session against peer 192.168.1.1 / AS65000 (a BGP speaker such as Quagga or Cisco). bgp_simple.pl will read the myroutes file and send whatever it sees in that file through the iBGP session.
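    If you prefer to launch both ends from a single script rather than two terminals, a minimal sketch (assuming the paths used above; adjust the path to bgpreader for your system) could be:

    #!/bin/sh
    cd /usr/src/bgpsimple
    # start the stream reader in the background, writing into the named pipe
    /path/to/bgpreader -p caida-bmp -w 1453912260 -m > myroutes &
    # feed the pipe into the iBGP session towards the BGP speaker
    ./bgp_simple.pl -myas 65000 -myip 192.168.1.2 -peerip 192.168.1.1 -peeras 65000 -p myroutes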

    Important information
    - The BGP session won't be established until there is something in the myroutes file
    - eBGP multi-hop sessions are also allowed
    - You have to wait a short time (a few seconds) before bgpreader actually starts to see something and bgp_simple.pl starts announcing to the BGP peer

    References / More information:
    - Part of the work was based on:
    http://evilrouters.net/2009/08/21/getting-bgp-routes-into-dynamips-with-video/

    - Caida BGP Stream:
    https://bgpstream.caida.org/

    - bgpreader info:
    https://bgpstream.caida.org/docs/tools/bgpreader

    - RIPE NCC libbgpdump:
    http://www.ris.ripe.net/source/bgpdump/

    - Introduction of "Named Pipes" (pipe files in Linux):
    http://www.linuxjournal.com/article/2156

    Saturday, April 13, 2013

    In case of emergency, break glass. Console cable inside!

    (I really don't know if this image/joke existed before; it came to my mind while attending to a failure last week.)



    Monday, January 28, 2013

    Installing Linux in a Sun Fire Server

    Introduction:
    The following describes the procedure for installing the Debian GNU/Linux operating system on Sun Fire V210 hardware.


    First:
    * As we know, this hardware has no VGA video output or PS/2 keyboard port. It does, however, have a serial port for management.
    * This server has a 64-bit SPARC architecture.

    Procedure:

    1.- Download the image.
    You can download the distro from http://cdimage.debian.org/debian-cd/6.0.3/sparc/iso-cd/
    You will need at least CD number 1.
    The downloaded ISO: http://cdimage.debian.org/debian-cd/6.0.3/sparc/iso-cd/debian-6.0.3-sparc-CD-1.iso


    Please remember to burn the image at a low speed; this helps avoid problems.
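    One way to do this from a Linux workstation (just an illustration; the device name /dev/sr0 is an assumption and may differ on your system) is:

    wodim -v dev=/dev/sr0 speed=4 debian-6.0.3-sparc-CD-1.iso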



    2.- Insert the Debian CD into the server's CD/DVD-ROM drive.
    3.- Now we must establish a serial (COM) connection to the Sun Fire V210 server.
    We can use HyperTerminal, Minicom, or even PuTTYtel.

    To create the serial connection you will need the following parameters:
    9600, 8, N, 1 (the default).

    The cable is a rollover cable (NOT a crossover). The console cables typically used with Cisco equipment will work.
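    Since the post mentions Minicom, a minimal example of opening the session from a Linux box could be the following (the device name /dev/ttyS0 is an assumption and depends on your serial adapter):

    minicom -D /dev/ttyS0 -b 9600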

    4.- Start the server. The challenge is to make it boot from the CD-ROM drive.
    To do this, do the following: while the server is starting, press the 'STOP + A' sequence.
    On a conventional keyboard, this sequence is equivalent to 'CTRL + SHIFT + BREAK' or 'CTRL + BREAK'.
    In doing so, you will get the ok prompt; when that happens, do the following:
    ok printenv auto-boot?    (to see the state of the auto-boot? flag)
    ok setenv auto-boot? false    (to set the auto-boot? flag to false)
    ok reset-all    (to reboot the system)
    5.- When the computer restarts, press 'STOP + A' again, and at the ok prompt instruct it to boot from the CD-ROM:
    ok boot cdrom

    6.- From that moment, the server should begin to boot from the CD-ROM drive. I recommend using the terminal in full-screen mode so you can follow the installation as if a monitor were connected to the server.

    7.- From this point on, the standard Debian installation procedure follows (users, partitions, repositories, etc.).

    Key points:
    - Download the ISO image for 64-bit SPARC.
    - The 'STOP + A' sequence, which can also be 'CTRL + BREAK' or 'CTRL + SHIFT + BREAK'.
    - Put the terminal in full-screen mode.
    - If at any time the serial connection is lost (common with PuTTYtel), simply close and reopen the serial connection and press any key to resume the installation.
    - The system boots from the CD-ROM when, at the ok prompt, you type 'boot cdrom' (very important!).
    - To make the server boot automatically again, type the following at the ok prompt:

    ok setenv auto-boot? true
    ok setenv boot-device disk


     
    I hope it is useful.






    Manual based on documentation by Professor Jose Gregorio Cotua.