Thursday, December 1, 2022

An Interesting Change Is Coming to BGP

A route leak is defined as the propagation of routing announcement(s) beyond their intended scope (RFC 7908). But why do route leaks occur? The reasons are varied and include errors (typos when entering a number), ignorance, lack of filters, social engineering, and others.


Although there are several ways to prevent route leaks and, in fact, their number has decreased over the past three years (thanks to RPKI, IRR, and other mechanisms), I will try to explain what I believe BGP configurations will look like in the future. To do so, I will talk about RFC 9234, Route Leak Prevention and Detection Using Roles in UPDATE and OPEN Messages. The part I would like to highlight is role detection, as after this RFC we will assign roles in our BGP configurations.


To understand what we want to achieve, let’s recall some typical situations for an ISP: 


a new customer comes along with whom we will speak BGP,

a connection to an IXP,

the ISP buys capacity from a new upstream provider,

a new private peering agreement.

In all these cases, decisions need to be made. There are multiple ways to configure BGP, including route maps, AS filters, prefix lists, communities, ACLs, and others. We may even be using more than one of these options.
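
For context, here is a minimal sketch of the traditional approach, in FRR-style syntax (the prefix, AS numbers, and neighbor address are hypothetical): an inbound prefix list applied through a route map on a customer session.

   ! Hypothetical example: only accept the customer's own prefix
   ip prefix-list CUSTOMER-64501-IN seq 5 permit 203.0.113.0/24
   !
   route-map FROM-CUSTOMER-64501 permit 10
    match ip address prefix-list CUSTOMER-64501-IN
   !
   router bgp 64500
    neighbor 192.0.2.1 remote-as 64501
    address-family ipv4 unicast
     neighbor 192.0.2.1 route-map FROM-CUSTOMER-64501 in

Each of these objects has to be written and kept up to date by hand, which is exactly where mistakes (and leaks) creep in.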


This is where RFC 9234 enters the picture: the document defines roles in the BGP OPEN message, i.e., it establishes an agreement about the relationship on each BGP session between autonomous systems. For example, let’s say that I am a router and I tell the router I am speaking with that I am a “customer”; in turn, the other router can state on its side of the BGP session, “I am your provider.” Based on this exchange, the configurations (i.e., filters) will be automatic, which should help reduce route leaks.


The role is then negotiated as a capability in the BGP OPEN message.


The RFC defines five roles:


Provider – sender is a transit provider to neighbor;

Customer – sender is a transit customer of neighbor;

RS – sender is a Route Server, usually at an Internet exchange point (IX);

RS-client – sender is client of an RS;

Peer – sender and neighbor are peers.

How are these roles configured?

If, for example, in a BGP session between two ASes the local AS takes the Provider role, the remote AS must take the Customer role, and vice versa. Likewise, if the local AS acts as a Route Server (RS), the remote AS must act as an RS-Client, and vice versa. Finally, both the local and the remote AS can take the Peer role (see table).

Local AS role – Remote AS role
Provider – Customer
Customer – Provider
RS – RS-Client
RS-Client – RS
Peer – Peer

An example is included below.




BGP Capabilities

BGP capabilities are what a router advertises to its BGP peers to tell them which features it supports so that, where possible, those capabilities can be negotiated with its neighbors. A BGP router determines the capabilities supported by its peer by examining the list of capabilities in the OPEN message. This is similar to a meeting between two multilingual individuals, one of whom speaks English, Spanish, and Portuguese, while the other speaks French, Chinese, and English. The common language between them is English, so they will communicate in that language. But they will not do so in French, as only one of them speaks that language. This is basically what has allowed BGP to grow so much with only a minor impact on our networks, as it incorporates these backward-compatibility notions that work seamlessly.


This RFC has added a new capability: the BGP Role capability (capability code 9), which carries the sender’s role in the OPEN message.




Does this code work? Absolutely. Here’s an example in FRR:
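
A minimal sketch of what such a configuration might look like in recent FRR releases that implement RFC 9234 (the addresses and AS numbers are hypothetical, and the exact syntax may vary by version):

   router bgp 64500
    ! This session goes to our transit provider, so our own role is "customer".
    ! FRR announces the Role capability with this value in the OPEN message.
    neighbor 192.0.2.1 remote-as 64510
    neighbor 192.0.2.1 local-role customer

If the provider configures the mirror role (local-role provider) on its side, the roles pair up correctly and the session carries the corresponding leak-prevention semantics.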



Strict Mode

Capabilities are generally negotiated between the BGP speakers, and only the capabilities supported by both speakers are used. If the Strict Mode option is configured, however, the session will only be established if both routers support and announce the Role capability.
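
In FRR, this would presumably be expressed by appending the strict-mode keyword to the same command (again, a sketch under the assumption that the syntax matches recent releases; the address is hypothetical):

   router bgp 64500
    ! Refuse to bring the session up unless the neighbor also announces its role
    neighbor 192.0.2.1 local-role customer strict-mode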


In conclusion, I believe the approach described in RFC 9234 will be the future of BGP configuration worldwide, greatly reducing route leaks and improper Internet advertisements. It will make BGP configuration easier and serve as a complement to RPKI and IRR, allowing for cleaner routing tables.


Click here to watch the full presentation offered during LACNIC 38 LACNOG 2022.


https://news.lacnic.net/en/events/an-interesting-change-is-coming-to-bgp



Tuesday, August 9, 2022

DNSSEC Deployment in the Region – Statistics and Measurements

By Hugo Salgado, Research and Development at NIC Chile; Dario Gomez, Technology Consultant; and Alejandro Acosta, R&D Coordinator at LACNIC


Introduction

In this article we would like to talk about some recent studies we have conducted on a topic we are very passionate about: DNSSEC. Note that we are using the plural term “studies”, as there are two studies on DNSSEC that we began at the same time… please continue reading to find out what these two studies are about!


About DNSSEC

DNSSEC incorporates additional security into the DNS protocol, as it allows checking the integrity and authenticity of the data, preventing spoofing and tampering attacks through the use of asymmetric cryptography, better known as public/private key cryptography. By using these keys and digital signatures based on public-key cryptography, it is possible to determine whether a response has been altered, which makes it possible to guarantee the integrity and authenticity of the message. If these signatures are checked and they do not match, it means that the chain of trust has been broken and the response cannot be validated as legitimate.


Whether you have DNSSEC depends on your ISP (Internet service provider), which is responsible for configuring the protocol on its resolvers. There are several tools you can use to find out if you have DNSSEC, such as the following:


https://dnssec-analyzer.verisignlabs.com/


https://dnsviz.net/
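
You can also run a quick check from the command line with dig (a rough sketch; example.com is used here simply as a zone that is DNSSEC-signed, and the output depends on the resolver you are using). A validating resolver sets the ad (Authenticated Data) flag in its answer:

   # Ask for DNSSEC data; if your resolver validates, the reply header includes the "ad" flag
   dig +dnssec example.com A

   # Look only at the header flags (e.g., "flags: qr rd ra ad")
   dig +dnssec example.com A | grep flags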


As many already know, the DNSSEC protocol has been growing a lot in recent years. Four aspects have marked the growth of DNSSEC deployment:


    DNSSEC is enabled by default on some recursive servers (BIND).

    Great progress has been achieved in making it easier for major registrars to enable DNSSEC on the different domains.

    All major recursive DNS servers perform DNSSEC validation (Google Public DNS, Cloudflare, Quad9, etc.).

    Apple very recently announced that iOS 16 and macOS Ventura will allow DNSSEC validation at the stub resolver.


DNSSEC has always been very important for LACNIC, and we have organized many events and activities around this topic. However, to date we had never conducted our own study on the matter.


What is the study about?

LACNIC’s R&D department wanted to carry out a study to understand the status and progress of DNSSEC deployment in the region.


Data sources

We have two very reliable data sources:


    We used RIPE ATLAS probes https://atlas.ripe.net/

    We performed traffic captures (tcpdump) on authoritative servers, which were then anonymized


Dates:


We began gathering data in November 2021. Currently, data is gathered automatically, and weekly and monthly reports are produced. [1]


How can one identify if a server is performing DNSSEC?


We need to look at two different things:


Atlas probes:


DNS resolution requests for a domain name that has intentional errors in its signatures and therefore cannot be validated with DNSSEC are sent to all available probes in Latin America and the Caribbean. An error response (SERVFAIL) means that the resolver used by that probe is using DNSSEC correctly. If, on the other hand, a response is obtained (NOERROR), the resolver is not performing any validation. Note that, interestingly, the goal is to obtain a negative response from the DNS server – this is the key to knowing whether the recursive server validates DNSSEC. Now, an example. If you visit dnssec-failed.org and can open the page, it means that your recursive DNS is not performing DNSSEC validation – you shouldn’t be able to open the page! :-)
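
The same check can be reproduced by hand with dig (an illustrative sketch; dnssec-failed.org is the deliberately broken domain mentioned above, and the exact output depends on your resolver):

   # Against a validating resolver this should fail with status: SERVFAIL
   dig dnssec-failed.org A

   # Adding +cd (checking disabled) asks the resolver to skip validation,
   # so the same name resolves with status: NOERROR
   dig +cd dnssec-failed.org A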


Traffic captures (tcpdump):


Before sharing what we do with traffic captures, we will first expand a bit on the concept of DNSSEC. Just as DNS has traditional records (A, AAAA, MX, etc.), new records were added for DNSSEC: DS, RRSIG, NSEC, NSEC3, and DNSKEY. In other words, a recursive DNS server can query the AAAA record to learn the IPv6 address of a name and also the Delegation Signer (DS) record to check the authenticity of the child zones. The key to this study is that servers that don’t perform DNSSEC validation don’t query DNSSEC records!
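
As an illustration (any signed zone would do; example.com is used here for convenience), these are the kinds of queries that only DNSSEC-aware resolvers send:

   dig DS example.com +short       # delegation signer, published in the parent zone
   dig DNSKEY example.com +short   # public keys of the zone
   dig +dnssec A example.com       # +dnssec also returns the RRSIG signatures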


Based on the above, when making the capture, tcpdump is asked to take the entire packet (flag -s 0). Thus, we have all its content, from Layer 3 to Layer 7. While processing the packet, we look for specific DNSSEC records (again: DS, RRSIG, NSEC, NSEC3, and DNSKEY). If we can obtain any of these records, then the recursive server is indeed performing DNSSEC validation, otherwise it is not.
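
For readers who want to try something similar on their own captures, here is a simplified one-liner (not the scripts used in the study; the file name is hypothetical) that uses tshark to list source addresses querying DNSSEC record types (DS=43, RRSIG=46, NSEC=47, DNSKEY=48, NSEC3=50):

   tshark -r findingdnssecresolvers-example.pcap \
     -Y "dns.flags.response == 0 && (dns.qry.type == 43 || dns.qry.type == 46 || dns.qry.type == 47 || dns.qry.type == 48 || dns.qry.type == 50)" \
     -T fields -e ip.src -e dns.qry.name -e dns.qry.type | sort -u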


Where is this capture performed?


The capture is performed specifically in one of the instances of the reverse DNS D server (D.IP6-SERVERS.ARPA). The following command is used:

   /usr/sbin/tcpdump -i $INTERFAZ -c $CANTIDAD -w findingdnssecresolvers-$TODAY.pcap -s 0 dst port $PORT and ( dst host $IP1 or dst host $IP2 )


Processing the data

First, processing the data comprises several steps, all of which are performed entirely using open-source software, specifically, Bash, Perl, and Python3 on Linux.


Second, let’s not forget that there are two sources of information: traffic captures (PCAPs) and Atlas probes. Below is the methodology followed in each case.


    Processing of PCAPs: After obtaining the PCAPs, a series of steps are performed, including the following:

        Processing of PCAP files in Python3 using the pyshark library.

        Cleaning of the data (discarding malformed, damaged, or otherwise unprocessable packets)

        Elimination of duplicate addresses

        Anonymization of the data

        Generation of results

        Generation of charts and open data


    Processing of the information obtained in RIPE Atlas:


Measurements are performed monthly on the RIPE Atlas platform, using its command-line API. The results are then collected and processed using a series of Perl scripts and, finally, plotted using the Google Charts API. In addition, we always make the data available in open data format.


Let’s keep in mind that, in order to determine whether a probe is using a validating resolver, the measurement uses a domain name with intentionally incorrect signatures. Because the name is not valid according to DNSSEC, a validating resolver should return an error when the probe attempts to resolve that name. Conversely, if a positive response is obtained for the name, the resolver is not validating, as it has ignored the incorrect signature.


Results

The chart below shows the number of servers that were studied, specifying those that use DNSSEC and those that do not. The blue and red lines represent servers for which DNSSEC is and is not enabled, respectively.







Number of DNS servers studied

Chart No. 1


As the image shows, on 2 June 2022 there were more recursive servers that did not perform DNSSEC validation than servers that did. Between 33,000 and 55,000 IP addresses were analyzed each week. In general, the proportions hold steady at approximately 55% of servers that do not use the protocol and 45% of positive samples.







Number of IPv6 DNS servers studied

Chart No. 2


Chart No. 2 shows the history of DNSSEC queries over IPv6. It is worth noting that for various sampling periods there were more DNSSEC queries over IPv6 than over IPv4. Undoubtedly, the goal is for the red line to gradually decrease and for the blue line to gradually increase.


Ranking of countries with the highest DNSSEC validation rates


Using the RIPE Atlas measurement platform, it is possible to measure each country’s ability to validate DNSSEC. Each measurement can be grouped by country to create a ranking:














DNSSEC ranking by country

Chart No. 3


Ranking based on average DNSSEC validation rates from networks in the countries of Latin America and the Caribbean corresponding to May 2022.


The numbers inside the bars show the number of participating ASes for each country. Countries where we measured only one AS were not included.


Summary

Based on the study of traffic captures, with data obtained over a period of eight months, the graph suggests a slow decrease in the number of servers that do not validate DNSSEC. There also appears to be greater DNSSEC deployment among IPv6 servers than among IPv4 servers.


An analysis of the data obtained using Atlas probes is expected to suggest greater deployment of DNSSEC validation than other, more generic data sources, as these probes are usually hosted on more advanced networks or by users who deliberately enable DNSSEC. However, it represents something of an “upper limit” of DNSSEC penetration and is also an important indicator of its evolution over time.


Open data

As usual, we at LACNIC want to make our information available so that anyone who wishes to do so can use it in their work:


https://stats.labs.lacnic.net/DNSSEC/opendata/

https://mvuy27.labs.lacnic.net/datos/


This data is being provided in the spirit of “Time Series Data.” In other words, we are making available data collected over time, which makes it very easy to follow how our statistics fluctuate and to identify increases and/or drops in DNSSEC deployment by country, region, etc.


As always when we work on this type of project, we welcome suggestions for the improvement of both the implementation and the visualization of the information obtained.


References:

[1] https://stats.labs.lacnic.net/DNSSEC/dnssecstats.html

https://mvuy27.labs.lacnic.net/datos/dnssec-ranking-latest.html

Wednesday, July 13, 2022

Fixing/Solved "Unable to parse package file" after apt

Problem: 
   We get an error after executing any apt command in Linux. 

Solution 
   The solution is very easy, although I spent many hours finding it. 
   You just have to delete the file mentioned in the error. In my case I got: "E: Unable to parse package file /var/lib/apt/extended_states (1)" 
   I simply deleted the file /var/lib/apt/extended_states; apt recreates it on the next run, and the only thing lost is the record of which packages were marked as automatically installed. 

 Example: 
   #sudo rm /var/lib/apt/extended_states 

 That's it

Thursday, February 24, 2022

Why Is IPv6 So Important for the Development of the Metaverse?

 By Gabriel E. Levy B. and Alejandro Acosta

Over almost half a century, the TCP/IP protocols have helped connect billions of people.

Since the creation of the Internet, they have been the universal standards used to transmit information, making it possible for the Internet to function[3].

The acronym ‘IP’ can refer to two different but interrelated concepts. The first is a protocol (the Internet Protocol) whose main function is bidirectional (source and destination) data transmission based on the Open Systems Interconnection (OSI) standard[4]. The second possible reference when talking about IP involves the assignment of addresses in the form of numbers known as ‘IP addresses’. An IP address is a logical and hierarchical identifier assigned to a network device that uses the Internet Protocol (IP). It corresponds to the network layer, or layer 3, of the OSI model.

IPv4 refers to the fourth version of the Internet Protocol, a standard for the interconnection of Internet-based networks that was implemented in 1983 for the operation of ARPANET and its subsequent migration to the Internet[5].

IPv4 uses 32-bit addresses, the equivalent of about 4.3 billion unique addresses, a figure that back in the 80s appeared to be inexhaustible. However, due to the enormous and unforeseen growth of the Internet, by 2011 the unexpected had happened: IPv4 addresses had been exhausted[6].

IPv6 as a solution to the bottleneck

To address the lack of available address resources, the engineering groups responsible for the Internet have resorted to multiple solutions, ranging from the creation of private subnets so that multiple users can connect using a single address, to the creation of a new protocol called IPv6 which promises to be the definitive solution to the problem and which was officially launched on 6 June 2012[7]:

“Anticipating IPv4 address exhaustion and seeking to provide a long-term solution to the problem, the organization that promotes and develops Internet standards (the Internet Engineering Task Force, or IETF) designed a new version of the Internet Protocol, specifically version 6 (IPv6), which provides almost limitless availability based on the use of 128-bit addresses, the equivalent of approximately 340 undecillion addresses”[8].

It should be noted, however, that the creation of the IPv6 protocol did not anticipate a migration or shift from one protocol to another. Instead, a mechanism was designed to allow both protocols to coexist for a time.

To ensure that the transition would be transparent to users and to allow a reasonable amount of time for vendors to incorporate the new technology and for Internet providers to implement the new version of the protocol in their own networks, along with the IPv6 protocol itself, the organization responsible for Internet protocol standardization — the IETF — designed a series of transition and coexistence mechanisms.

“Imagine this is a weight scale where the plate carrying the heaviest weight currently represents IPv4 traffic. However, little by little and thanks to this coexistence, as more content and services become available over IPv6, the weight of the scale will shift to the other plate, until IPv6 traffic outweighs IPv4. This is what we call the transition”[9].

“If both protocols (IPv4 and IPv6) are available, IPv6 is preferred by design. This is why the balance shifts naturally, depending on multiple factors and without us being able to determine how long IPv4 will continue to exist and in what proportion. If one had a crystal ball, one might say that IPv6 will become the dominant protocol in three or four years, and that IPv4 will disappear from the Internet — or at least from many parts of the Internet — in the same timeframe”[10].

Without IPv6 there may be no Metaverse

Metaverses or metauniverses are environments where humans interact socially and economically through their avatars in cyberspace, which is an amplified metaphor for the real world, except that there are no physical or economic limitations[11].

“You can think about the Metaverse as an embodied Internet, where instead of just viewing content — you are in it. And you feel present with other people as if you were in other places, having different experiences that you couldn’t necessarily have on a 2D app or webpage.” Mark Zuckerberg, Facebook CEO[12].

The Metaverse necessarily runs on the Internet, which in turn uses the Internet Protocol (IP) to function.

The Metaverse is a type of simulation where avatars allow users to have much more immersive and realistic connections by displaying a virtual universe that runs online. This is why it is necessary to ensure that the Metaverse is immersive, multisensory, interactive, that it runs in real time, that it allows each user to be precisely differentiated, and that it deploys simultaneous and complex graphic tools, among many other elements. All this would be impossible to guarantee with the IPv4 protocol, as the number of IP addresses would not be enough to cover each connection, nor would it be possible to guarantee that technologies such as NAT would function properly.

Key elements:

  • IPv6 is the only protocol that can guarantee enough IP resources to support the Metaverse.
  • IPv6 avoids the use of NAT mechanisms that would create technological difficulties for the deployment of the Metaverse.
  • IPv6 links have lower RTT delay than IPv4 links, and this allows avatar representations, including holograms, to be displayed synchronously.
  • Considering the huge amount of data involved in the deployment of the Metaverse, it is necessary to ensure the least possible data loss. This is why IPv6 is the best option, as evidence shows that data loss is 20% lower when using IPv6 than when using IPv4[13].

The role of small ISPs

Considering that small ISPs are responsible for the connectivity of millions of people across Latin America and that, as previously noted, they are largely responsible for reducing the digital divide[14], it is very important for these operators to accelerate their migration/transition to IPv6, not only to remain competitive in relation to larger operators, but also to be able to guarantee their users that technologies such as the Metaverse will work on their devices without major technological headaches.

In conclusion, while the true scope of the Metaverse remains to be seen, its deployment, implementation, and widespread adoption will be possible thanks to the IPv6 protocol, a technology that has provided a solution to the availability of IP resources, avoiding the cumbersome network address translation (NAT) process, improving response times, reducing RTT delay, and decreasing packet losses, while at the same time allowing simultaneous utilization by an enormous number of users.

All of the above leads us to conclude that the Metaverse would not be possible without IPv6.

Disclaimer: This article contains a review and analysis undertaken in the context of the digital transformation within the information society and is duly supported by reliable and verified academic and/or journalistic sources, which have been delimited and published.

The information contained in this journalistic and opinion piece does not necessarily represent the position of Andinalink or of the organizations commercially related with Andinalink.

[1] Andinalink article: Metaversos y el Internet del Futuro (Metaverses and the Internet of the Future)
[2] Andinalink article: Metaversos: Expectativas VS Realidad (Metaverses: Expectations vs Reality)
[3] In the article titled: El agotamiento del protocolo IP (The Exhaustion of the IP Protocol) we explain the characteristics of the TCP protocol. 
[4] Standard reference document on the OSI connectivity model
[5] The article titled: ¿Fue creada Arpanet para soportar una guerra nuclear? (Was Arpanet Created to Withstand a Nuclear War?) details the features and history of Arpanet.
[6] LACNIC document on the Phases of IPv4 Exhaustion
[7] Document published by the IETF about World IPv6 Launch on its sixth anniversary
[8] Guide for the Transition to IPv6 published by the Colombian Ministry of Information and Communications Technology
[9] Guide for the Transition to IPv6 published by the Colombian Ministry of Information and Communications Technology
[10] Guide for the Transition to IPv6 published by the Colombian Ministry of Information and Communications Technology
[11] Andinalink article on Metaverses
[12] Mark in the Metaverse: Facebook’s CEO on why the social network is becoming ‘a metaverse company: The Verge Podcast
[13] Analysis by Alejandro Acosta (LACNIC) of the impact of IPv6 on tactile systems
[14] Andinalink article: Los WISP disminuyen la brecha digital (WISPs Reduce the Digital Divide)