Wednesday, February 12, 2025

The History behind Netmasks.

Introduction

Do you remember when you were learning about netmasks? You probably thought that they were useless, that you wouldn’t need them, and wondered why they had invented something so insane. In addition to putting a smile on your face, I hope to convince you of their importance within the gigantic Internet ecosystem.


Goal

This blog post summarizes the history and milestones behind the concept of netmasks in the world of IPv4. The story begins in a world where classes didn’t exist (flat addressing), passes through the classful era, and concludes with a totally classless Internet (CIDR). The information is based on excerpts from RFCs 790, 1338, and 1519, as well as on threads from the ‘Internet-history’ mailing list.


Do you know what a netmask is?

If you’re reading this document, I assume you do :-) but here’s a mini explanation: a netmask is used to divide an IP address into a network part and a host part; in other words, it specifies how the address space is partitioned into subnets.
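To make this concrete, here is a minimal Python sketch (using the standard ipaddress module; the address and mask are examples of my choosing) showing how a mask splits an address into its two parts:

import ipaddress

# The network part is the bitwise AND of address and mask;
# the host part is what remains when the mask is inverted.
addr = ipaddress.IPv4Address("192.0.2.130")
mask = ipaddress.IPv4Address("255.255.255.0")

network = ipaddress.IPv4Address(int(addr) & int(mask))
host = ipaddress.IPv4Address(int(addr) & ~int(mask) & 0xFFFFFFFF)

print(network)  # 192.0.2.0   -> network part
print(host)     # 0.0.0.130   -> host part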


What is the purpose of netmasks?

Routing: Routers use netmasks to determine the network part of a destination IP address and route packets correctly.


Subnetting: Netmasks are used to divide a network into smaller networks.


Aggregation: Netmasks allow several smaller prefixes to be summarized into a single larger one (see the sketch after this list).
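As a rough illustration of the last two points, here is a small Python sketch using the standard ipaddress module (the prefixes are documentation examples, not real allocations):

import ipaddress

# Subnetting: split one network into smaller ones.
net = ipaddress.ip_network("198.51.100.0/24")
print(list(net.subnets(new_prefix=26)))
# -> four /26 subnets: 198.51.100.0/26 ... 198.51.100.192/26

# Aggregation: summarize contiguous prefixes into a larger one.
parts = [ipaddress.ip_network("203.0.113.0/25"),
         ipaddress.ip_network("203.0.113.128/25")]
print(list(ipaddress.collapse_addresses(parts)))
# -> [203.0.113.0/24]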


Have netmasks always existed?

Interestingly, netmasks haven’t always existed. In the beginning, IP networks were flat, and it was always assumed that 8 bits were used for the network and 24 bits for the host. In other words, the first octet represented the network, while the remaining three octets corresponded to the host. It is also worth noting that many years ago they were also referred to as bitmasks or simply masks (the latter term is still widely used today).


This means that classes (A, B, C, D) have not always existed

Classes were not introduced until Jon Postel’s RFC was published (September 1981); in other words, there was a time before both classful and classless addressing. The introduction of the classful system was driven by the need to accommodate networks of different sizes, as the original 8-bit network ID allowed for only 256 networks. While the classful system addressed the limitations of a flat address space, it faced scalability limitations of its own. In the classful world, the netmask was implicit.
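As an illustration of how implicit the mask was, here is a minimal Python sketch (the function name is my own) that derives the classful mask from the leading bits of the first octet, just as early routers did:

def implicit_mask(first_octet: int) -> str:
    """Return the implicit classful netmask for an address
    whose first octet is the given value (pre-CIDR rules)."""
    if first_octet < 128:      # 0xxxxxxx -> Class A
        return "255.0.0.0"
    elif first_octet < 192:    # 10xxxxxx -> Class B
        return "255.255.0.0"
    elif first_octet < 224:    # 110xxxxx -> Class C
        return "255.255.255.0"
    else:                      # 1110xxxx -> Class D (multicast), no mask
        return "n/a"

print(implicit_mask(10))    # 255.0.0.0
print(implicit_mask(172))   # 255.255.0.0
print(implicit_mask(198))   # 255.255.255.0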


Classes did not solve every issue

Although the classful system represented an improvement over the original (flat) design, it was not efficient. The fixed sizes of the network and host portions of IP addresses led to wasteful allocation and, in turn, exhaustion of the IP address space, particularly with the growing number of networks larger than a Class C but smaller than a Class B. This resulted in the development of Classless Inter-Domain Routing (CIDR), which relies on Variable Length Subnet Masks (VLSM).
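To illustrate what VLSM makes possible, here is a small Python sketch that carves unequal-sized subnets out of a single block (the addresses are documentation examples):

import ipaddress

# VLSM: one /24 split into a /25 and two /26s,
# instead of the fixed sizes imposed by classes.
block = ipaddress.ip_network("192.0.2.0/24")
half, rest = block.subnets(new_prefix=25)
quarter, small = rest.subnets(new_prefix=26)
print(half, quarter, small)
# -> 192.0.2.0/25 192.0.2.128/26 192.0.2.192/26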


Excerpt from RFC 790


Did you know that netmasks were not always written with contiguous bits “on” from left to right?

In the beginning, netmasks didn’t have to be “lit” or “turned on” bit by bit from left to right. This means that masks such as 255.255.192.128 were entirely valid. This configuration was accepted by routers (IMPs, the first routers) and various operating systems, including the BSDs and SunOS. In other words, until the early 1990s, it was still possible for netmasks to have non-contiguous bits.
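A quick way to see the difference: when a contiguous mask is inverted, only trailing ones remain. The following Python sketch (the helper name is mine) tests exactly that:

def is_contiguous(mask: str) -> bool:
    """True if the mask's set bits run contiguously from the left."""
    m = int.from_bytes(bytes(int(o) for o in mask.split(".")), "big")
    inv = ~m & 0xFFFFFFFF
    # A contiguous mask inverts to 0...01...1, so inv + 1
    # shares no bits with inv (it is a power of two).
    return (inv & (inv + 1)) == 0

print(is_contiguous("255.255.192.0"))    # True  (a /18)
print(is_contiguous("255.255.192.128"))  # False (non-contiguous)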


Why was it decided that it would be mandatory for bits to be turned on from left to right?

There were several reasons for this decision, the main one related to routing and the well-known concept of “longest match”, whereby routers select the route with the longest prefix that matches the packet’s destination address. If the mask bits are not contiguous, the computational complexity of this lookup becomes very high. In short: efficiency.
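With contiguous masks, longest match boils down to comparing prefix lengths among the candidate routes, as this minimal Python sketch shows (the routing table is made up):

import ipaddress

routes = [ipaddress.ip_network("10.0.0.0/8"),
          ipaddress.ip_network("10.1.0.0/16"),
          ipaddress.ip_network("10.1.2.0/24")]

dst = ipaddress.ip_address("10.1.2.3")
# Pick the matching route with the longest prefix.
best = max((r for r in routes if dst in r), key=lambda r: r.prefixlen)
print(best)  # 10.1.2.0/24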


Back then, IPv4 exhaustion was already underway

IPv4 resource exhaustion is not a recent phenomenon. In fact, item #1 of the first section of RFC 1338 mentions the exhaustion of the Class B network address space: Class C is too small for many organizations, while Class B is too large to be widely allocated, which concentrated demand on Class B addresses. Furthermore, item #3 of the same RFC mentions the “Eventual exhaustion of the 32-bit IP address space” (1992).


CIDR tackles the solutions of the past, which had become the problems of their time

The introduction of classes led to the creation of more networks, which meant an increase in prefixes and, consequently, higher memory and CPU consumption. Thus, in September 1993, RFC 1519 introduced the concept of CIDR, which brought solutions to several challenges, including the ability to perform supernetting (i.e., turning off bits from right to left) in an attempt to reduce the number of network prefixes. It should be noted that RFC 1338 had already introduced similar concepts.


Finally, prefix notation (/nn) also appeared thanks to CIDR and was possible because the “on” and “off” bits of the netmask were contiguous.
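A quick Python illustration of that equivalence between /nn notation and a contiguous dotted mask:

import ipaddress

# The prefix length is simply the count of leading 1 bits in the mask.
net = ipaddress.ip_network("203.0.113.0/255.255.255.0")
print(net.prefixlen)  # 24
print(ipaddress.ip_network("203.0.113.0/24").netmask)  # 255.255.255.0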


In summary, the primary goals of CIDR were to slow the growth of routing tables and improve efficiency in the use of IP address space.




Timeline




Conclusions

The concept of the netmask has evolved significantly since its origin: from not existing at all under flat addressing, to a rigid classful model, and finally to a flexible model with CIDR. Initially, classful networks and non-contiguous masks created inefficiency and scalability issues as the Internet expanded.

A key change was the requirement of contiguous “on” bits, as this simplified the route selection process and allowed routers to operate more efficiently.

This document highlights the key milestones and motivations behind the evolution of IP addressing and underscores the importance of understanding the historical context to fully appreciate the Internet’s current architecture.


References

RFC 790: https://datatracker.ietf.org/doc/html/rfc790

RFC 1338: https://datatracker.ietf.org/doc/html/rfc1338

RFC 1380: https://datatracker.ietf.org/doc/html/rfc1380

RFC 1519: https://datatracker.ietf.org/doc/html/rfc1519

ISOC “Internet History” mailing list, thread with the subject “The netmask”: https://elists.isoc.org/pipermail/internet-history/2025-January/010060.html

Thursday, August 22, 2024

A Practical Improvement in DNS Transport over UDP over IPv6

By Hugo Salgado and Alejandro Acosta


Introduction and problem statement

In this document we want to discuss an existing IETF draft (a working document that may become a standard) that caught our attention. This draft involves two fascinating universes: IPv6 and DNS. It introduces some best practices for carrying DNS over IPv6.


Its title is “DNS over IPv6 Best Practices” and it can be found in the IETF Datatracker.


What is the document about and what problem does it seek to solve?

The document describes an approach to how the Domain Name System (DNS) protocol should be carried over IPv6 [RFC8200].

Some operational issues have been identified in carrying DNS packets over IPv6 and the draft seeks to address them.


Technical context

The IPv6 protocol requires a minimum link MTU of 1280 octets. According to Section 5, “Packet Size Issues”, of RFC 8200, every link in the Internet must have an MTU of 1280 octets or greater; if a link cannot convey a 1280-octet packet in one piece, link-specific fragmentation and reassembly must be provided at a layer below IPv6.
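A quick bit of arithmetic (assuming a plain IPv6 header with no extension headers) shows how much room this leaves for a DNS message carried over UDP:

IPV6_MIN_MTU = 1280  # RFC 8200 minimum link MTU
IPV6_HEADER = 40     # fixed IPv6 header, no extension headers assumed
UDP_HEADER = 8

# Largest DNS payload that fits in one unfragmented packet
# on a minimal-MTU path:
print(IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER)  # 1232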


Successful operation of PMTUD in an example adapted to 1280-byte MTU

Image source: https://www.slideshare.net/slideshow/naveguemos-por-internet-con-ipv6/34651833#2


Using Path MTU Discovery (PMTUD) and IPv6 fragmentation (performed by the source only) allows larger packets to be sent. However, operational experience shows that sending large DNS packets over UDP over IPv6 results in high loss rates. Some studies (quite a few years old, but useful for context) found that around 10% of IPv6 routers drop all IPv6 fragments, and 40% block “Packet Too Big” messages, making client negotiation impossible (M. de Boer and J. Bosma, “Discovering Path MTU black holes on the Internet using RIPE Atlas”).

Most modern transport protocols like TCP [TCP] and QUIC [QUIC] include packet segmentation techniques that allow them to send larger data streams over IPv6.


A bit of history

The Domain Name System (DNS) was originally defined in RFC1034 and RFC1035. It was designed to run over several different transport protocols, including UDP and TCP, and has more recently been extended to run over QUIC. These transport protocols can be run over both IPv4 and IPv6.

When DNS was designed, the size of DNS packets carried over UDP was limited to 512 bytes. If a message was longer than 512 bytes, it was truncated and the Truncation (TC) bit was set to indicate that the response was incomplete, allowing the client to retry with TCP.
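As a rough sketch of this classic fallback, here is what it looks like with the third-party dnspython library (my choice of tool and resolver address, not something the original specifications mandate):

import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("example.com", "TXT")
resp = dns.query.udp(query, "2001:4860:4860::8888", timeout=3)

# If the answer came back truncated (TC bit set), retry over TCP.
if resp.flags & dns.flags.TC:
    resp = dns.query.tcp(query, "2001:4860:4860::8888", timeout=3)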

With this original behavior, UDP over IPv6 did not exceed the IPv6 link MTU (maximum transmission unit), avoiding operational issues due to fragmentation. However, with the introduction of the EDNS0 extension (RFC6891), the maximum was extended to a theoretical 64 KB. This caused some responses to far exceed the old 512-byte UDP limit, producing sizes larger than the Path MTU and triggering TCP connections, which impacted the scalability of DNS servers.
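With EDNS0, a client advertises the UDP payload size it is willing to receive. A minimal dnspython sketch (again my own example; 1232 bytes is the value that fits within the IPv6 minimum MTU) looks like this:

import dns.message

# Advertise an EDNS0 buffer size that stays within the
# IPv6 minimum MTU: 1280 - 40 (IPv6) - 8 (UDP) = 1232 bytes.
query = dns.message.make_query("example.com", "DNSKEY")
query.use_edns(edns=0, payload=1232)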


Encapsulating a DNS packet in an Ethernet frame


Let’s talk about DNS over IPv6

DNS over IPv6 is designed to run over UDP or other transport protocols such as TCP or QUIC. UDP provides only source and destination ports, a length field, and a simple checksum; it is a connectionless protocol. In contrast, TCP and QUIC offer additional features such as packet segmentation, reliability, error correction, and connection state.

DNS over UDP over IPv6 is suitable for small packet sizes, but becomes less reliable with larger sizes, particularly when IPv6 datagram fragmentation is required.

On the other hand, DNS over TCP or QUIC over IPv6 works well with all packet sizes. However, running a stateful protocol such as TCP or QUIC places greater demands on the DNS server’s resources (and on other equipment such as firewalls, DPIs, and IDSs), which can potentially impact scalability. This may be a reasonable tradeoff for servers that need to send larger DNS response packets.

For DNS over UDP, the draft recommends limiting the size of DNS packets over IPv6 to 1280 octets. This avoids the need for IPv6 fragmentation or Path MTU Discovery, which ensures greater reliability.

Most DNS queries and responses will fit within this packet size limit and can therefore be sent over UDP. Larger DNS packets should not be sent over UDP; instead, they should be sent over TCP or QUIC, as described in the next section.


DNS over TCP and QUIC

When larger DNS packets need to be carried, it is recommended to run DNS over TCP or QUIC. These protocols handle segmentation and reliably adjust their segment size for different link and path MTU values, which makes them much more reliable than using UDP with IPv6 fragmentation.

Section 4.2.2 of [RFC1035] describes the use of TCP for carrying DNS messages, while [RFC9250] explains how to implement DNS over QUIC to provide transport confidentiality. Additionally, operational requirements for DNS over TCP are described in [RFC9210].


Security

Switching from UDP to TCP/QUIC for large responses means that the DNS server must maintain additional state for each query received over TCP/QUIC. This consumes additional resources on the servers and affects the scalability of the DNS system. It may also leave the servers vulnerable to Denial of Service (DoS) attacks.


Is this the correct solution?

While we believe this solution will bring many benefits to the IPv6 and DNS ecosystem, it is a temporary operational fix and does not solve the root problem.

We believe the correct solution is ensuring that source fragmentation works, that PMTUD is not broken along the way, and that security devices allow fragmentation headers. This requires changes across various Internet actors, which may take a long time, but that doesn’t mean that we should abandon our efforts or stop educating others about the importance of doing the right thing.


Sunday, June 9, 2024

Cisco hidden command: bgp bestpath as-path multipath-relax

Hidden command

  bgp bestpath as-path multipath-relax


What is this for?

By default, Cisco does not load-balance traffic across BGP paths that traverse different ASes; this command enables it. Important: you must also configure the maximum-paths command.


Example:

router bgp 65001
 bgp router-id 1.1.1.1
 bgp log-neighbor-changes
 bgp bestpath as-path multipath-relax
 neighbor 2001:DB8:12::2 remote-as 65002
 neighbor 2001:DB8:12:10::2 remote-as 65002
 neighbor 2001:DB8:13:11::3 remote-as 65003
 !
 address-family ipv4
  no neighbor 2001:DB8:12::2 activate
  no neighbor 2001:DB8:12:10::2 activate
  no neighbor 2001:DB8:13:11::3 activate
 exit-address-family
 !
 address-family ipv6
  maximum-paths 3
  neighbor 2001:DB8:12::2 activate
  neighbor 2001:DB8:12:10::2 activate
  neighbor 2001:DB8:13:11::3 activate
 exit-address-family


Output after implementation (routes flagged “m” are the additional multipaths):

     Network          Next Hop            Metric LocPrf Weight Path
 *m  2001:DB8::4/128  2001:DB8:12:10::2
                                                              0 65002 65004 ?
 *>                   2001:DB8:12::2                         0 65002 65004 ?
 *m                   2001:DB8:13:11::3
                                                              0 65003 65004 ?
 *m  2001:DB8:24:11::/64
                       2001:DB8:12:10::2
                                                              0 65002 65004 ?
 *>                   2001:DB8:12::2                         0 65002 65004 ?
 *m                   2001:DB8:13:11::3
                                                              0 65003 65004 ?
 *m  2001:DB8:34::/64 2001:DB8:12:10::2
                                                              0 65002 65004 ?
 *>                   2001:DB8:12::2                         0 65002 65004 ?
 *m                   2001:DB8:13:11::3
                                                              0 65003 65004 ?

Friday, June 7, 2024

Video: IPv6 LAC Race - May 2014 - Jun 2024

Do you want to know how IPv6 adoption has evolved in LAC? This one-minute video has the answer. #barchartrace #ipv6





Sunday, June 2, 2024

Solved: "The following security updates require Ubuntu Pro with 'esm-apps' enabled"

Situation

When performing certain operations in Ubuntu using apt or do-release-upgrade, you receive the message:

"The following security updates require Ubuntu Pro with 'esm-apps' enabled"


Solution

 The solution that worked for me was to run this:


cd /etc/apt/sources.list.d

# Temporarily disable all extra source lists so the upgrader ignores them
for i in *.list; do mv ${i} ${i}.disabled; done

apt clean

apt autoclean

sudo do-release-upgrade



Reference

https://askubuntu.com/questions/1085295/error-while-trying-to-upgrade-from-ubuntu-18-04-to-18-10-please-install-all-av 



Monday, April 29, 2024

Solved: Error: eth0 interface name is not allowed for R2 node when network mode is not set to none in containerlab

 Problem:

Containerlab returns an error similar to:

Error: eth0 interface name is not allowed for R2 node when network mode is not set to none


Solution:

In the .yml topology file, under the node indicated in the error, specify:


network-mode: none


Example:

topology:
  kinds:
    linux:
      image: quay.io/frrouting/frr:8.4.1
  nodes:
    R1:
      kind: linux
      image: quay.io/frrouting/frr:8.4.1
      network-mode: none


Redeploy the topology with clab dep -t file.yml and that's it!


Good luck!