Thursday, February 1, 2024

A Much-Needed BGP RFC: AS Path Prepending

Introduction

The Border Gateway Protocol (BGP) plays a critical role in building and maintaining Internet routing tables, so much so that it is considered the “glue” that holds the Internet together. In this context, a long-standing and very popular technique known as ‘AS Path Prepending’ has become a key strategy for influencing route selection and steering an AS’s inbound and outbound traffic.

In this document, we will navigate through the IETF draft titled “AS Path Prepending” [1], which includes several ideas and concepts that are of great value to the community.


About draft-ietf-grow-as-path-prepending

This draft has been under discussion within the Global Routing Operations (GROW) Working Group since 2020 and is currently on version 10. The document has seven co-authors: M. McBride, D. Madory, J. Tantsura, R. Raszuk, H. Li, J. Heitz, and G. Mishra. It has received predominantly supportive feedback on the discussion list (including mine). You can read the draft at [1].


What is AS Path Prepending?

AS Path Prepending is a technique that involves repeatedly adding one’s own autonomous system number (ASN) to the list of ASs in a BGP route path (AS_PATH). Its goal is to influence route selection by making certain paths less attractive for inbound/outbound traffic. In other words, it consists of adding our autonomous system to the AS_PATH and thus artificially “lengthening the path” to a prefix on the Internet.




In the figure above, without prepends, Router A prefers to reach C via B. However, when three prepends are added on B, Router A decides to reach C via D.
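As an illustration, below is a minimal sketch in FRR-style syntax of how router B could add three prepends toward A. The ASNs and the neighbor address are hypothetical, since the figure does not specify them:

router bgp 65002
 neighbor 2001:db8:12::1 remote-as 65001
 !
 address-family ipv6 unicast
  neighbor 2001:db8:12::1 activate
  ! apply the prepending policy to routes advertised to router A
  neighbor 2001:db8:12::1 route-map PREPEND-OUT out
 exit-address-family
exit
!
route-map PREPEND-OUT permit 10
 ! add our own ASN three extra times to the AS_PATH
 set as-path prepend 65002 65002 65002

With this policy, every prefix advertised to that neighbor carries three additional copies of AS 65002, making the path through B three entries longer than it would otherwise be and therefore less attractive.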


Why is AS Path Prepending used and what is it used for?

AS Path Prepending is used for multiple reasons, but the main one is undoubtedly traffic engineering, i.e., influencing an AS’s inbound and outbound traffic. The AS typically wishes to achieve one of the following goals:

  • to distribute traffic among two or more upstream providers, or
  • to keep an upstream provider as a backup.

Whatever the case, the goal is traffic engineering.


To prepend or not to prepend, that is the question

Prepending is a bit like NAT in that it is often a necessary evil. As we will explain, its excessive and sometimes unnecessary use can become a vulnerability with significant implications for network stability.


What’s wrong with using AS Path Prepending?

We all know that AS Path Prepending is a very common technique to influence BGP decisions. However, its excessive, incorrect, and sometimes unnecessary use can have negative consequences. For example:


  • Creation of suboptimal traffic flows. In other words, we may achieve our goal of distributing traffic across our immediate links; however, beyond our immediate upstreams, traffic is not optimized to reach our autonomous system, and vice versa.
  • Prefix de-aggregation. When implementing traffic engineering, it is very common to de-aggregate prefixes, which affects the Internet ecosystem.
  • Greater exposure to route leaks. Under normal circumstances, the AS_PATH of our legitimate advertisements would be shorter than that of a leaked route. However, if we artificially lengthen the path by prepending, the leaked routes may end up with a shorter AS_PATH than our legitimate announcements, which would then be less preferred, opening the door to traffic misdirection, route hijacking, and other attacks.
  • Memory consumption. As expected, these prepended ASNs are carried in updates and stored by BGP speakers, thus increasing their memory usage. To this I would add that prepending also introduces a small additional CPU cost for processing each prefix.


Given that AS Path Prepending is no longer recommended, what alternatives are available?

There are many techniques for performing traffic engineering in BGP. I will mention some that appear in the draft:


  • Leveraging BGP communities. In addition to the well-known BGP communities, I recommend talking to your BGP peers to optimize traffic. There are numerous BGP communities implemented by providers that might well benefit your setup.
  • Announcing more specific routes to your main upstreams.
  • Manipulating the Origin code. Remember that this attribute is also part of the BGP best-path selection algorithm.
  • Using the Multi Exit Discriminator (MED), a non-transitive attribute that can be used with excellent results to manipulate inbound traffic when we have several links to the same provider.
  • Using Local-Preference, another non-transitive attribute, perfect for influencing the traffic that leaves our autonomous system (a brief sketch of the last two points follows this list).
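As an illustration of the last two points, here is a minimal, hypothetical FRR-style sketch. The ASNs, addresses, and values are examples only and are not taken from the draft:

router bgp 65010
 neighbor 2001:db8:a::1 remote-as 65020
 neighbor 2001:db8:b::1 remote-as 65020
 !
 address-family ipv6 unicast
  neighbor 2001:db8:a::1 activate
  neighbor 2001:db8:b::1 activate
  ! Local-Preference: prefer routes learned over the first link for outbound traffic
  neighbor 2001:db8:a::1 route-map PREFER-IN in
  ! MED: advertise a lower metric on the first link and a higher one on the second,
  ! so the provider sends inbound traffic through the first link
  neighbor 2001:db8:a::1 route-map LOW-MED-OUT out
  neighbor 2001:db8:b::1 route-map HIGH-MED-OUT out
 exit-address-family
exit
!
route-map PREFER-IN permit 10
 set local-preference 200
!
route-map LOW-MED-OUT permit 10
 set metric 50
!
route-map HIGH-MED-OUT permit 10
 set metric 500

Remember that, by default, MED is only compared between routes received from the same neighboring AS, which is why it is most useful when we have several links to the same provider.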


This is all well and good, but I still need to use AS Path Prepending. Any suggestions?

The draft mentions the best current practices for using AS Path Prepending, which I will summarize below:

  1. Only use AS Path Prepending if it is absolutely necessary.
  2. Because of other traffic manipulation techniques, AS Path Prepending may not produce significant changes in traffic distribution. This is why it is important to talk to our peers and confirm whether they will honor the prepends.
  3. Use local-preference on our network.
  4. Don’t prepend ASNs that you don’t own.
  5. Don’t prepend if you are connected to a single ISP using a single link, i.e., single homed (this one is not included in the draft).
  6. If we prepend a prefix, it might not be necessary to use that prepend for all our peers.
  7. There is no need to use more than five prepends. The reason is that more than 90% of paths are five ASs or fewer in length.



Final Considerations

The use of AS Path Prepending is a valuable strategy, but it should be applied only when necessary and with caution, following best practices. Excessive use of prepends may cause unforeseen events that affect our autonomous system from both a traffic and a security perspective.

We invite you to read the full draft (available at [1]) and to join the discussion on the LACNOG mailing list.

We also encourage you to comment on this post to let us know if you are prepending your ASN, as well as why and what you are using this for.


References:

[1] https://datatracker.ietf.org/doc/draft-ietf-grow-as-path-prepending/

Monday, January 8, 2024

Two very short jokes, one TCP and the other UDP

TCP:

 “You wanna hear a TCP joke?
 You wanna hear a TCP joke?
 You wanna hear a TCP joke?
 You wanna hear a TCP joke?

 [...]”


UDP:

I’d tell you a UDP joke, but you might not get it. 



Tuesday, December 5, 2023

BGP: IPv6 Only example between OpenBGPD and FRR

FRR:

show run

frr# sh run

Building configuration...

Current configuration:
!
frr version 8.1
frr defaults traditional
hostname frr
log syslog informational
service integrated-vtysh-config
!
interface l0
 ipv6 address 2001:db8::1/128
exit
!
router bgp 65001
 bgp router-id 1.1.1.1
 no bgp ebgp-requires-policy
 neighbor 2001:db8:12::2 remote-as 65002
 !
 address-family ipv6 unicast
  redistribute connected
  neighbor 2001:db8:12::2 activate
  neighbor 2001:db8:12::2 soft-reconfiguration inbound
 exit-address-family
exit
!



OpenBGPD

File: /etc/bgpd.conf

# macros
ASN="65002"

fib-update yes
log updates

# global configuration
AS $ASN
router-id 2.2.2.2

network 2001:db8::2/128
network inet6 connected

neighbor 2001:db8:12::1 {
    descr "epa"
    remote-as 65001
    announce IPv6 unicast
}

deny from any
deny to any
allow from 2001:db8:12::1
allow to 2001:db8:12::1

#

(please note the blank line between the last line and the second-to-last line)
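A quick way to verify that the IPv6 session is established on each side (standard show commands, output omitted):

On FRR:

frr# show bgp ipv6 unicast summary

On OpenBGPD:

# bgpctl show summary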

Monday, December 4, 2023

How to create an IPv6 route to null/blackhole in Linux

 Case:

    How to create an IPv6 route to null/blackhole in Linux

Command:

   ip -6 route add blackhole fd00:12:34::0/48
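To confirm that the route was installed, and to remove it later, the usual iproute2 commands can be used:

   ip -6 route show type blackhole
   ip -6 route del blackhole fd00:12:34::0/48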




I hope it is useful

Sunday, October 29, 2023

How to temporarily disable IPv4 on an interface within Linux

Case:

   We want to disable IPv4 on an interface


Solution:

   sudo ip -4 addr flush dev enp0s1


Explanation:

   The above command removes all IPv4 addresses from interface enp0s1. Important: this change is only temporary; the addresses will return when the interface is reconfigured (for example, by DHCP or after a reboot).
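A quick way to confirm that the interface no longer has any IPv4 addresses (the command should return no output once the flush has been applied):

   ip -4 addr show dev enp0s1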

Friday, October 13, 2023

How to uninstall brew on macOS

  Option 1: 

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"

 Option 2: 
NONINTERACTIVE=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"


Taken from: https://github.com/homebrew/install#uninstall-homebrew


NGINX Reverse Proxy for an IPv6-Only Server Farm

Introduction

This work presents a very simple way to offer dual-stack web access to an IPv6-only server farm using NGINX. The continued growth of the Internet and the gradual adoption of the IPv6 protocol make it essential to ensure connectivity and accessibility for clients using both IPv4 and IPv6. We will explain how to configure NGINX to support dual-stack web access: specifically, how to configure NGINX as a reverse proxy that listens on both IPv4 and IPv6 addresses, and how to correctly route incoming requests to backend servers that only have IPv6 addresses. By the way, among many other benefits, what we discuss in this article is an important step towards conserving IPv4 addresses.



What is a Reverse Proxy?

In [1], Cloudflare defines a Reverse Proxy Server as follows: “A reverse proxy is a server that sits in front of web servers and forwards client (e.g. web browser) requests to those web servers. Reverse proxies are typically implemented to help increase security, performance, and reliability. In order to better understand how a reverse proxy works and the benefits it can provide, let’s first define what a proxy server is.”


What is a Proxy Server?

In [1], Cloudflare also provides the following definition for a proxy server: “A forward proxy, often called a proxy, proxy server, or web proxy, is a server that sits in front of a group of client machines. When those computers make requests to sites and services on the Internet, the proxy server intercepts those requests and then communicates with web servers on behalf of those clients, like a middleman.”



What are the benefits of a Reverse Proxy?

  • A reverse proxy can offer IPv4 or transparent IPv6 to clients serviced from an IPv6-only server farm (which is what we will focus on).
  • Scalability: The use of a reverse proxy allows adding or removing backend servers as needed without affecting end users. This makes it easier for applications to scale out, allowing them to handle a larger number of concurrent users and requests.
  • Static content caching: NGINX can cache static content such as images, CSS files, and JavaScript, thus reducing the load on backend servers and increasing content delivery speed. This decreases page load times and the required bandwidth.
  • Security: NGINX acts as a point of entry to the application, providing an additional layer of security. It can perform functions such as request filtering, DDoS attack prevention, SQL injection protection, and client authentication. NGINX can also enable the use of SSL/TLS encryption for communication between clients and the backend server.
  • Advanced routing: A reverse proxy allows performing advanced routing based on various criteria, such as domain name, URL, or HTTP headers. This is useful when we need to direct traffic to different backend servers based on the specific attributes of the requests.
  • Consolidation of services: NGINX can act as a single point of entry for various backend services. This simplifies the infrastructure by consolidating multiple services on a single server, thus simplifying management and maintenance.
  • Enhanced performance: NGINX is lightweight and resource efficient by design. Its streamlined architecture and ability to handle large numbers of concurrent connections make it a popular choice for improving web app performance.
  • Load balancing: A reverse proxy such as NGINX can distribute incoming traffic across several backend servers. This helps balance the load and guarantees that no server is overloaded, which improves an application's performance and responsiveness (a brief sketch follows this list).
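As a brief illustration of the load-balancing benefit, the following is a minimal, hypothetical NGINX sketch that distributes requests across two IPv6-only backends (the addresses are reused from this article's examples; the upstream name "farm" is arbitrary):

upstream farm {
    server [2001:db8:123::101]:80;
    server [2001:db8:123::102]:80;
}

server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://farm;
    }
}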



Topology


What is our Goal Today?

The edge server (Reverse Proxy Server) will be able to receive IPv4 and IPv6 HTTP requests, and depending on the website a user wishes to visit (domain), will forward the request to the right server. This is what will happen in our example: 

The client visits → The request is sent to:

server-a.com → 2001:db8:123::1 

server-b.com → 2001:db8:123::2 

server-c.com → 2001:db8:123::3 

server-a.com → 2001:db8:123::101 

server-b.com → 2001:db8:123::102 

server-c.com → 2001:db8:123::103



Requirements

  • Linux with NGINX on the Reverse Proxy Server
  • Superuser access
  • A web server on each of the servers in the farm
  • IPv4 and IPv6 Internet connectivity
  • Internal IPv6 connectivity


Let's get started



1) Install NGINX on all servers

# apt update
# apt install nginx

2) Create the websites in the NGINX reverse proxy 

File /etc/nginx/sites-available/server-a.com

server {
    listen 80;
    listen [::]:80;

    server_name server-a.com;

    location / {
        proxy_pass http://[2001:db8:123::101];
    }
}


File /etc/nginx/sites-available/server-b.com

server {
    listen 80;
    listen [::]:80;

    server_name server-b.com;

    location / {
        proxy_pass http://[2001:db8:123::102];
    }
}



File /etc/nginx/sites-available/server-c.com

server {
    listen 80;
    listen [::]:80;

    server_name server-c.com;

    location / {
        proxy_pass http://[2001:db8:123::103];
    }
}

3) Create symbolic links to enable the sites configured above:


root@ProxyReverseSRV:/etc/nginx/sites-enabled# ln -s /etc/nginx/sites-available/server-a.com /etc/nginx/sites-enabled/server-a.com

root@ProxyReverseSRV:/etc/nginx/sites-enabled# ln -s /etc/nginx/sites-available/server-b.com /etc/nginx/sites-enabled/server-b.com

root@ProxyReverseSRV:/etc/nginx/sites-enabled# ln -s /etc/nginx/sites-available/server-c.com /etc/nginx/sites-enabled/server-c.com



4) Remember to restart NGINX

$ sudo systemctl restart nginx
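Before restarting, it is a good idea to validate the configuration; afterwards, a quick test from any client confirms that each domain is being proxied (the address below is only a placeholder for the reverse proxy's public IP):

$ sudo nginx -t
$ curl -H "Host: server-a.com" http://198.51.100.10/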



About the logs

Logs are extremely important for any company or ISP that wishes to review incoming connections. 

By default, NGINX will use its own IP address for outgoing connections, which results in the loss of the address of the client that originated the HTTP request. But don't worry. NGINX has the solution: proxy_set_header. This requires configuring both the end server and the Reverse Proxy server. 

1) On the Reverse Proxy Server, we must add these headers to each website's configuration.
# Example of nginx reverse proxy that allows logging the client's 
# original address and port number 

location /examples { 
   proxy_pass http://[2001:db8:123::103]; 
   proxy_buffering off; 
   proxy_set_header X-Real-IP $remote_addr; 
   proxy_set_header X-Forwarded-Host $host; 
   proxy_set_header X-Forwarded-Port $server_port; 
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
 } 

2) On the end server, add the following in the http section of the /etc/nginx/nginx.conf file:

set_real_ip_from 2001:db8:123::100; #replace the IP address with that of the proxy 
real_ip_header X-Forwarded-For; 
real_ip_recursive on; 

Example: 
http { 
   … 
   set_real_ip_from 2001:db8:123::100; 
   real_ip_header X-Forwarded-For;
   real_ip_recursive on;
   …
}

With these settings, the end server will trust the X-Forwarded-For header received from the proxy at 2001:db8:123::100 and will log the client's real source IP in /var/log/nginx/access.log.



Summary

The proposed design allows managing a 100% IPv6-only web server farm with access to both the IPv4 and the IPv6 worlds in a very simple, scalable, and efficient manner. This results in various benefits, including having to manage only one TCP/IP stack, simplicity, security, and even saving IPv4 addresses.


References


    • [1] https://www.cloudflare.com/es-es/learning/cdn/glossary/reverse-proxy/
    • https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04
    • GitHub. LACNIC Blog Post Help Files for the entire project: https://github.com/LACNIC/BlogPostHelpFiles/tree/main/2023_Ofreciendo_conectividad_Dual_Stack_a_servidores_Web_en_una_granja_de_servidores_100_IPv6_Only