Tuesday, June 10, 2025

7 Challenges IPv6 Faced and How They Were Overcome

Did you ever think IPv6 wouldn’t really catch on? If so, this post is for you.

Over the past 20 years, IPv6 has faced multiple obstacles that have led many to question its future. From the outset, it encountered serious technical challenges: it wasn’t compatible with IPv4, many older devices didn’t support it, and, as is often the case, there was considerable resistance from operators and companies. On top of that, several myths—such as the idea that IPv6 was too complex or less secure—also worked against it.


But time and technology did their thing. Thanks to transition mechanisms, better routing practices, and the development of more advanced hardware, IPv6 proved not only that it could scale (we’re talking about 340 undecillion available addresses!), but also that it’s more efficient and secure than the old IPv4 protocol.


Today, IPv6 is no longer a promise: it’s a reality. It powers 5G, the future 6G, the large-scale Internet of Things, and the hyperconnected cloud. And it also solves problems we’ve been struggling with for years, such as address exhaustion and network fragmentation.


In this article, we’ll debunk some of the most common myths—like the idea that IPv6 slows down performance or doesn’t work well with legacy systems—and show, through data and real-world examples, why migrating to IPv6 is not only possible, but necessary if you want your network to be ready for the future.


1. Improved Packet Switching at the Hardware Level

Over the last 15 years, application-specific integrated circuits (ASICs) for networks have evolved from limited support to native and optimized IPv6 implementation. Before 2010, IPv6 processing relied on general-purpose CPUs, which led to high latency and low performance. Between 2010 and 2015, manufacturers such as Cisco and Broadcom integrated hardware-based IPv6 forwarding tables (TCAM), NDP/ICMPv6 support, and efficient lookup in chips such as the Cisco Nexus 7000 and Broadcom StrataXGS. By 2015-2020, ASICs had matured with scalable routing tables, IPv6 extension offloading (headers, tunneling), and integration with SDN/NFV, exemplified by Broadcom Tomahawk and Cisco Silicon One.


Since 2020, ASIC design has prioritized IPv6, introducing advanced capabilities such as accelerated IPv6 Segment Routing (SRv6), native security (hardware-based IPsec), and optimization for IoT/5G. Chips like Broadcom Jericho 2 (2020), Marvell Octeon 10 (2022), and Intel Tofino 3 (2023) support millions of IPv6 routes and programmable processing (P4), cementing IPv6 as the standard in modern networks. This evolution reflects the transition of IPv6 from a software add-on to a critical network hardware component.

Timeline     IPv6 Support in ASICs                       Limitations
Pre-2010     Minimal or software-based                   High CPU cost, low efficiency
2010-2015    First implementations in TCAMs/ASICs        Limited IPv6 tables
2015-2020    Maturity in enterprise routers/switches
2020-Today   Native IPv6, optimized for cloud/5G/SRv6


Summary Comparison of ASIC Evolution


2. The Chicken-and-Egg Dilemma in IPv6

In the context of IPv6, the chicken-and-egg dilemma refers to the problem of promoting adoption of this new version of the Internet Protocol. It’s like launching a new type of phone that no one buys because there are no apps for it, while developers don’t build these apps because there aren’t enough users.


On the one hand, content providers (such as streaming platforms and websites) need enough IPv6 users to justify investing in infrastructure and optimization for the protocol. On the other, end users need access to content over IPv6 to feel motivated to transition away from IPv4. Without a solid commitment from both sides, a vicious cycle is created: the lack of users limits available content, and the lack of content discourages users from adopting IPv6.


In 2025, the situation is quite different: many of the world’s leading Content Delivery Networks (CDNs) and websites have supported IPv6 for years. As a result, customers who haven’t yet deployed IPv6 often experience slightly worse connectivity than those who have.


A driver of IPv6 adoption among providers has been the gaming industry and its community: major gaming consoles have broadly supported IPv6 for years, Xbox since 2013 and PlayStation since 2020.

CDN/Website                Year of IPv6 Adoption   Notes
Cloudflare                 2011                    First CDN to support IPv6 globally
Google (Search, YouTube)   2012                    Gradual rollout
Facebook/Instagram         2013                    Full adoption in 2014
Wikipedia                  2013                    One of the first sites to adopt IPv6
Akamai                     2014                    Gradual support by region
Netflix                    2015                    Prioritizes IPv6 to reduce latency
Amazon CloudFront          2016                    Full support in edge locations
Apple (App Store)          2016                    Mandatory requirement for iOS apps
Microsoft Azure CDN        2017
Fastly                     2018                    Native support in their entire network


IPv6 Content Adoption Table. Source: DeepSeek (May 2025)


3. Prefix Delegation Routing

Something very interesting happened 10-15 years ago: many Internet Service Providers (ISPs) faced multiple issues when implementing DHCPv6-PD (Prefix Delegation). Routing issues were often encountered. For example, a host or remote network (CPE) would successfully receive an IPv6 prefix, but the routes needed to reach that prefix were not configured at the ISP. It was as if the mailman knew your address but didn’t have a map to find his way to your home.


Today, ISPs have upgraded their infrastructure to automatically handle prefix delegation routing, while modern routers—both residential and enterprise—include robust support for DHCPv6-PD. Now, when a client receives an IPv6 block, the ISP immediately propagates the necessary routes, and the local router automatically configures the internal subnets. This has made IPv6 prefix delegation as reliable as traditional IPv4 DHCP, eliminating one of the early pain points of the transition.
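The subnetting step the CPE performs after receiving a delegated prefix can be sketched with Python’s standard ipaddress module (the /56 below is a hypothetical prefix from the documentation range, not an example from any particular ISP):

```python
import ipaddress

# Hypothetical delegated prefix: the ISP hands the CPE a /56 via
# DHCPv6-PD, and the CPE carves it into /64 LAN subnets.
delegated = ipaddress.ip_network("2001:db8:ab00::/56")

# A /56 yields 2^(64-56) = 256 possible /64 subnets.
lan_subnets = list(delegated.subnets(new_prefix=64))

print(len(lan_subnets))    # 256
print(lan_subnets[0])      # 2001:db8:ab00::/64
print(lan_subnets[1])      # 2001:db8:ab00:1::/64
```

The missing piece in the early days was not this arithmetic, but the ISP-side route pointing the whole /56 back at the CPE.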

Aspect              10 years ago                                        Today (2024)
Prefix assignment   DHCPv6-PD without core routing                      DHCPv6-PD + automatic BGP/IGP advertisement
CPE behavior        Received the prefix but didn't configure routes     Automatically configures LAN + routes
Connectivity        Outbound traffic only (inbound traffic was lost)    Fully bidirectional (inbound/outbound)
Workarounds         NAT66, manual tunnels, redistribute connected       Native routing with no workarounds


Comparison: 2014 vs. 2024


4. Training and Collective Learning

Two decades ago, adopting IPv6 represented both a technical and an educational challenge. Documentation was scarce, scattered, and often overly technical, which meant that many network administrators had to learn by trial and error. Early courses focused mainly on theoretical protocol specifications, providing little practical guidance for actual implementation in operational networks. This lack of quality training resources initially slowed IPv6 adoption, particularly in enterprise environments and small operators.


Today, various organizations, including LACNIC, have made a massive effort to educate and thus reduce the entry barriers for IPv6. Examples include the LACNIC Campus, which offers courses on IPv6 ranging from basic to advanced levels, along with blog posts, videos, podcasts, and other educational materials.


Initiatives like LACNOG (and other regional NOGs) as well as LACNIC’s now-classic hands-on IPv6 workshops have also contributed to creating spaces for training and technical discussions on IPv6 implementation in real-world networks.


In addition, many private companies have included IPv6 as a mandatory topic in their training and certification programs, covering it in both coursework and exams.


The technical community has also played its part: hundreds of individuals, from students to senior network engineers, regularly share resources on social media platforms, producing articles, videos, blog posts, and technical notes that help close knowledge gaps and strengthen collective learning around IPv6.


5. Application Support

Twenty years ago, IPv6 support in applications was a lottery. On an IPv6-only network, any application that assumed IPv4 addresses would inevitably fail, and dual-stack networks brought complications of their own. Many developers assumed that IPv4 would be available in all networks, and operating systems shipped outdated libraries that didn’t support the new protocol. This created an absurd situation: even when a user or organization had a perfectly configured IPv6 network, their everyday tools—such as their email clients—would simply stop working.


Today, the situation has changed dramatically. Major platforms such as Apple’s App Store (since 2016) and Google Play now require new apps to be IPv6-compatible (although the latter doesn’t explicitly state this). At the same time, mechanisms such as Happy Eyeballs support the transition to IPv6 at the software level in a transparent manner. Major programming libraries (such as Python, Java, and Node.js) have included native support for IPv6 for years, eliminating excuses for developers. Companies like Microsoft, Google, and Cloudflare have led this change, demonstrating that IPv6 can outperform IPv4. What was once a challenge has become a competitive advantage: applications that are early adopters of IPv6 benefit from lower latency, better security, and access to the next generation of connected users.
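The Happy Eyeballs idea can be illustrated with a minimal Python sketch of its address-ordering step. This is only the sorting part: real implementations, per RFC 8305, also race connection attempts in parallel with small delays.

```python
import socket
from itertools import zip_longest

def interleave_by_family(addrinfos):
    """Order resolved (family, address) pairs IPv6-first, alternating
    address families, as in the sorting step of Happy Eyeballs."""
    v6 = [ai for ai in addrinfos if ai[0] == socket.AF_INET6]
    v4 = [ai for ai in addrinfos if ai[0] == socket.AF_INET]
    ordered = []
    for pair in zip_longest(v6, v4):
        ordered.extend(ai for ai in pair if ai is not None)
    return ordered

# Hypothetical resolver results (documentation addresses).
infos = [(socket.AF_INET, "192.0.2.1"), (socket.AF_INET6, "2001:db8::1")]
print(interleave_by_family(infos))
```

A client would then attempt connections in this order, so a working IPv6 path wins without penalizing users stuck on IPv4.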


6. Fragmentation and MTU (Maximum Transmission Unit)

Unlike IPv4, IPv6 eliminates fragmentation at intermediate routers. This means that packets must adhere to the MTU (Maximum Transmission Unit) across the entire path from source to destination. While this design decision improves overall network efficiency, in the early years of IPv6 deployment (10, 15, or 20 years ago), it caused quite a few headaches: many devices implemented the Path MTU Discovery (PMTUD) mechanism incorrectly, resulting in loss of connectivity in certain common situations.


Specifically, older routers and unpatched operating systems were unable to properly handle ICMPv6 “Packet Too Big” messages, which are essential for the sender to adjust the size of the packets. As a result, communication broke down on networks where the MTU was lower than expected.


Today, modern operating systems and network equipment handle PMTUD correctly, responding to and dynamically adjusting packet size based on ICMPv6 messages. Thanks to these improvements, these issues are much less common, and networks run with greater stability and efficiency under IPv6.
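The sender-side reaction to a “Packet Too Big” message boils down to a couple of lines of logic, sketched below as an illustrative simplification (not any stack’s actual code):

```python
IPV6_MIN_MTU = 1280  # minimum link MTU required by RFC 8200

def update_path_mtu(current_pmtu: int, reported_mtu: int) -> int:
    """Adjust the cached Path MTU after an ICMPv6 'Packet Too Big':
    shrink to the reported value, but never below the IPv6 minimum."""
    return max(IPV6_MIN_MTU, min(current_pmtu, reported_mtu))

print(update_path_mtu(1500, 1400))  # 1400
print(update_path_mtu(1500, 1000))  # 1280 (clamped to the minimum)
```

The early breakage came precisely from devices that filtered these ICMPv6 messages, so this adjustment never happened and oversized packets were silently dropped.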


7. DNS Configuration via RA (RDNSS)

In the early years of IPv6 (up until around 2010), configuring DNS servers on network clients was a more complex and indirect process than it is today. Routers sent Router Advertisement (RA) messages with the O (Other Configuration) flag set, forcing clients to make an additional request via DHCPv6 to obtain DNS server information. Inherited from the IPv4 world, this approach had several drawbacks: higher configuration latency, dependency on an additional service, and greater complexity for simple networks or devices with limited resources, such as many IoT endpoints.


This limitation was addressed with the introduction of the RDNSS (Recursive DNS Server) option in ICMPv6 RA messages, formalized in RFC 6106 (2010). From then on, routers could directly advertise DNS servers to clients, drastically simplifying the autoconfiguration process.


Although initially met with some resistance from operating system and router vendors, support for RDNSS became popular between 2015 and 2017: Windows 10, Linux (with systemd-networkd), iOS 9+, and most enterprise routers had already implemented it.


Today, this functionality is almost universally available on modern devices and is considered a best practice in IPv6 networks, eliminating the need to use DHCPv6 only for DNS and enabling much simpler plug-and-play deployments.
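For the curious, the RDNSS option itself is a simple binary structure: type 25, a length in 8-octet units, a reserved field, a lifetime, and one or more 16-byte server addresses. A rough stdlib Python sketch, using a documentation address as a hypothetical DNS server:

```python
import socket
import struct

RDNSS_TYPE = 25  # ND option type for Recursive DNS Server (RFC 8106)

def build_rdnss_option(servers, lifetime):
    """Pack an RDNSS Router Advertisement option: Type, Length
    (in 8-octet units), Reserved, Lifetime, then 16-byte addresses."""
    addrs = b"".join(socket.inet_pton(socket.AF_INET6, s) for s in servers)
    length = 1 + 2 * len(servers)  # 8-byte header + 16 bytes per server
    return struct.pack("!BBHI", RDNSS_TYPE, length, 0, lifetime) + addrs

opt = build_rdnss_option(["2001:db8::53"], lifetime=1800)
print(len(opt))  # 24 bytes for a single server
```

In practice this option simply rides along in the same RA that clients already receive for address autoconfiguration, which is why it removes the extra DHCPv6 round trip.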



Conclusions

In retrospect, after more than two decades of evolution, IPv6 has overcome obstacles that once seemed insurmountable, transforming from a largely theoretical protocol into the backbone of the modern Internet.


As a result of collaboration between the industry and organizations such as the IETF, standardized, efficient, and widely adopted solutions have now been found for the technical challenges that once created uncertainty. Myths such as its alleged “complexity” or “incompatibility” have been dispelled by concrete evidence of improved performance, greater security, and true scalability.


IPv6 is the present. With nearly 50% of global traffic now running over IPv6 (and almost 40% in Latin America and the Caribbean), native support across CDNs, operating systems, and applications, and a key role in technologies such as 5G, IoT, and hyperconnected cloud, full transition is only a matter of time. Continuing to delay IPv6 adoption doesn’t only mean missing out on technical advantages: it means moving towards obsolescence.


The lesson is clear: IPv6 adoption is a strategic imperative. Failing to implement IPv6 means risking isolation from an Internet that has already taken the next step.

References:

    RFC 6106: https://datatracker.ietf.org/doc/html/rfc6106

    LACNIC Statistics: https://stats.labs.lacnic.net/IPv6/graph-access.html

    https://developer.apple.com/support/downloads/terms/app-review-guidelines/App-Review-Guidelines-20250430-English-UK.pdf

Wednesday, February 12, 2025

The History behind Netmasks.

Introduction

Do you remember when you were learning about netmasks? You probably thought that they were useless, that you wouldn’t need them, and wondered why they had invented something so insane. In addition to putting a smile on your face, I hope to convince you of their importance within the gigantic Internet ecosystem.


Goal

This blog post summarizes the history and milestones behind the concept of netmasks in the world of IPv4. This story begins in a world where classes didn’t exist (flat addressing), it then goes through a classful era and concludes with a totally classless Internet (CIDR). The information is based on excerpts from RFCs 790, 1338, and 1519, as well as on ‘Internet-history’ mailing list threads.


Do you know what a netmask is?

If you’re reading this document, I assume you do :-) but here’s a mini explanation: a netmask identifies and divides an IP address into a network portion and a host portion; in other words, it defines how addresses are partitioned into networks and subnets.


What is the purpose of netmasks?

Routing: Netmasks are used by routers to determine the network part of an IP address and route packets correctly.


Subnetting: Netmasks are used to create smaller networks.


Aggregation: Netmasks allow creating larger prefixes.
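All three uses can be demonstrated with Python’s standard ipaddress module (the addresses below are from the documentation range):

```python
import ipaddress

# Subnetting: split a /24 into four /26 networks.
net = ipaddress.ip_network("192.0.2.0/24")
subnets = list(net.subnets(new_prefix=26))
print([str(s) for s in subnets])
# ['192.0.2.0/26', '192.0.2.64/26', '192.0.2.128/26', '192.0.2.192/26']

# Aggregation (supernetting): the opposite direction, a larger prefix.
print(net.supernet(new_prefix=23))  # 192.0.2.0/23

# Routing: the mask decides which network a given address belongs to.
print(ipaddress.ip_interface("192.0.2.77/26").network)  # 192.0.2.64/26
```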


Have netmasks always existed?

Interestingly, netmasks haven’t always existed. In the beginning, IP networks were flat, and it was always assumed 8 bits were used for the network and 24 bits for the host. In other words, the first octet represented the network, while the remaining three octets corresponded to the host. It is also worth noting that many years ago, they were also referred to as bitmasks or simply masks — the latter term is still widely used today.


This means that classes (A, B, C, D) have not always existed

Classes were not introduced until Jon Postel’s RFC was published (September 1981), in other words, there was a time before classless and classful addressing. The introduction of the classful system was driven by the need to accommodate networks of different sizes, as the original 8-bit network ID was insufficient (256 networks). While the classful system attempted to address the limitations of a flat address space, it also faced scalability limitations. In the classful world, the netmask was implicit.


Classes did not solve every issue

Although the classful system represented an improvement over the original (flat) design, it was not efficient. The fixed size of the network and host portions of IP addresses led to exhaustion of the IP address space, particularly with the growing number of networks larger than a Class C but smaller than a Class B. This resulted in the development of Classless Interdomain Routing (CIDR), which uses Variable Length Subnet Masks (VLSM).


Excerpt from RFC 790


Did you know that netmasks were not always written with contiguous bits “on” from left to right?

In the beginning, netmasks didn’t have to be “lit” or “turned on” bit by bit from left to right. This means that masks such as 255.255.192.128 were entirely valid. This configuration was accepted by routers (IMPs, the first routers) and various operating systems, including BSDs and SunOSs. In other words, until the early 1990s, it was still possible for netmasks to have non-contiguous bits.
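Checking whether a mask is contiguous is a small bit-twiddling exercise; here is a quick Python sketch applied to the example mask above:

```python
import socket
import struct

def is_contiguous(mask: str) -> bool:
    """A netmask is contiguous if its binary form is a run of 1s
    followed only by 0s, i.e., the inverted mask is 2^k - 1."""
    m = struct.unpack("!I", socket.inet_aton(mask))[0]
    inv = ~m & 0xFFFFFFFF
    return (inv & (inv + 1)) == 0

print(is_contiguous("255.255.192.0"))    # True
print(is_contiguous("255.255.192.128"))  # False: non-contiguous bits
```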


Why was it decided that it would be mandatory for bits to be turned on from left to right?

There were several reasons for this decision, the main one relating to routing and the well-known concept of “longest match” where routers select the route with the longest subnet mask that matches the packet’s destination address. If the bits are not contiguous, the computational complexity is very high. In short, efficiency.
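A toy longest-match lookup in Python makes the rule concrete (illustrative only; real routers do this in hardware with tries or TCAM):

```python
import ipaddress

def longest_match(destination, routes):
    """Return the most specific route (longest prefix) that contains
    the destination address: the classic longest-match rule."""
    dest = ipaddress.ip_address(destination)
    matches = [ipaddress.ip_network(r) for r in routes
               if dest in ipaddress.ip_network(r)]
    return max(matches, key=lambda n: n.prefixlen, default=None)

routes = ["0.0.0.0/0", "10.0.0.0/8", "10.1.0.0/16"]
print(longest_match("10.1.2.3", routes))   # 10.1.0.0/16
print(longest_match("192.0.2.1", routes))  # 0.0.0.0/0
```

With contiguous masks, “most specific” reduces to comparing a single integer (the prefix length), which is what makes the lookup cheap.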


Back then, IPv4 exhaustion was already underway

IPv4 resource exhaustion is not a recent phenomenon. In fact, item #1 of the first section of RFC 1338 mentions the exhaustion of the Class B network address space, noting that a Class C is too small for many organizations, while a Class B is too large to be widely allocated. This put pressure on the Class B address space, which was being rapidly depleted. Furthermore, item #3 of the same RFC mentions the “Eventual exhaustion of the 32-bit IP address space” (1992).


CIDR tackles the solutions of the past, which had become the problems of their day

The creation of classes led to the creation of more networks, which meant an increase in prefixes and consequently a higher consumption of memory and CPU. Thus, in September 1993, RFC 1519 introduced the concept of CIDR, which brought with it solutions to different challenges, including the ability to perform supernetting (i.e., being able to turn off bits from right to left) and attempting to reduce the number of network prefixes. It should be noted that RFC 1338 also maintained similar concepts.


Finally, prefix notation (/nn) also appeared thanks to CIDR and was possible because the “on” and “off” bits of the netmask were contiguous.


In summary, the primary goals of CIDR were to slow the growth of routing tables and improve efficiency in the use of IP address space.




Timeline




Conclusions

The concept of the netmask has evolved significantly since its origin, from not existing in a flat addressing scheme, to a rigid and then to a flexible model with CIDR. Initially, classful networks and non-contiguous masks created inefficiency and scalability issues as the Internet expanded.

A key change was the requirement of contiguous “on” bits, as this simplified the route selection process and allowed routers to operate more efficiently.

This document highlights the key milestones and motivations behind the evolution of IP addressing and underscores the importance of understanding the historical context to fully appreciate the Internet’s current architecture.


References

RFC https://datatracker.ietf.org/doc/html/rfc1338

RFC https://datatracker.ietf.org/doc/html/rfc1380

RFC https://datatracker.ietf.org/doc/html/rfc1519

RFC https://datatracker.ietf.org/doc/html/rfc790

ISOC “Internet History” mailing list, thread with the subject “The netmask”: https://elists.isoc.org/pipermail/internet-history/2025-January/010060.html

Thursday, August 22, 2024

A Practical Improvement in DNS Transport over UDP over IPv6

By Hugo Salgado and Alejandro Acosta


Introduction and problem statement

In this document we want to discuss an existing IETF draft (a working document that may become a standard) that caught our attention. This draft involves two fascinating universes: IPv6 and DNS. It introduces some best practices for carrying DNS over IPv6.


Its title is “DNS over IPv6 Best Practices” and it can be found here.


What is the document about and what problem does it seek to solve?

The document describes an approach to how the Domain Name System (DNS) protocol should be carried over IPv6 [RFC8200].

Some operational issues have been identified in carrying DNS packets over IPv6 and the draft seeks to address them.


Technical context

The IPv6 protocol requires a minimum link MTU of 1280 octets. According to Section 5 “Packet Size Issues” of RFC8200, every link in the Internet must have an MTU of 1280 octets or greater. If a link cannot convey a 1280-octet packet in one piece, link-specific fragmentation and reassembly must be provided at a layer below IPv6.


Successful operation of PMTUD in an example adapted to 1280-byte MTU

Image source: https://www.slideshare.net/slideshow/naveguemos-por-internet-con-ipv6/34651833#2


Using Path MTU Discovery (PMTUD) and IPv6 fragmentation (performed only at the source) allows larger packets to be sent. However, operational experience shows that sending large DNS packets over UDP over IPv6 results in high loss rates. Some studies (quite a few years old, but useful for context) found that around 10% of IPv6 routers drop all IPv6 fragments, and 40% block “Packet Too Big” messages, making client negotiation impossible (M. de Boer and J. Bosma, “Discovering Path MTU black holes on the Internet using RIPE Atlas”).

Most modern transport protocols like TCP [TCP] and QUIC [QUIC] include packet segmentation techniques that allow them to send larger data streams over IPv6.


A bit of history

The Domain Name System (DNS) was originally defined in RFC1034 and RFC1035. It was designed to run over several different transport protocols, including UDP and TCP, and has more recently been extended to run over QUIC. These transport protocols can be run over both IPv4 and IPv6.

When DNS was designed, the size of DNS packets carried over UDP was limited to 512 bytes. If a message was longer than 512 bytes, it was truncated and the Truncation (TC) bit was set to indicate that the response was incomplete, allowing the client to retry with TCP.
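The TC bit lives in the flags word of the 12-byte DNS header; a small Python sketch of how a client might check it (the header below is a hypothetical example built by hand, not captured traffic):

```python
import struct

def is_truncated(dns_header: bytes) -> bool:
    """Check the TC (Truncation) bit in a 12-byte DNS header.
    Flags are the second 16-bit word; TC is bit 0x0200."""
    flags = struct.unpack("!H", dns_header[2:4])[0]
    return bool(flags & 0x0200)

# Hypothetical response header: ID=0x1234, flags with QR and TC set.
header = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)
print(is_truncated(header))  # True -> the client should retry over TCP
```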

With this original behavior, UDP over IPv6 did not exceed the IPv6 link MTU (maximum transmission unit), avoiding operational issues due to fragmentation. However, with the introduction of the EDNS0 extension (RFC 6891), the maximum was extended to a theoretical 64 KB. Some responses therefore grew well past 512 bytes, exceeding the Path MTU and triggering fragmentation or fallback to TCP, which impacted the scalability of DNS servers.


Encapsulating a DNS packet in an Ethernet frame


Let’s talk about DNS over IPv6

DNS over IPv6 is designed to run over UDP or other transport protocols such as TCP or QUIC. UDP provides only source and destination ports, a length field, and a simple checksum; it is a connectionless protocol. In contrast, TCP and QUIC offer additional features such as packet segmentation, reliability, error correction, and connection state.

DNS over UDP over IPv6 is suitable for small packet sizes, but becomes less reliable with larger sizes, particularly when IPv6 datagram fragmentation is required.

On the other hand, DNS over TCP or QUIC over IPv6 works well with all packet sizes. However, running a stateful protocol such as TCP or QUIC places greater demands on the DNS server’s resources (and on other equipment such as firewalls, DPIs, and IDS), which can potentially impact scalability. This may be a reasonable tradeoff for servers that need to send larger DNS response packets.

The draft’s suggestion for DNS over UDP recommends limiting the size of DNS over UDP packets over IPv6 to 1280 octets. This avoids the need for IPv6 fragmentation or Path MTU Discovery, which ensures greater reliability.
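The arithmetic behind the 1280-octet recommendation is simple: subtracting the IPv6 and UDP headers leaves 1232 bytes for the DNS message itself. A quick sketch (the pick_transport helper is our own illustration, not from the draft):

```python
IPV6_MIN_MTU = 1280  # minimum link MTU required by RFC 8200
IPV6_HEADER = 40
UDP_HEADER = 8

# Largest DNS message that fits in one unfragmented datagram
# at the IPv6 minimum MTU.
MAX_DNS_UDP_PAYLOAD = IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER
print(MAX_DNS_UDP_PAYLOAD)  # 1232

def pick_transport(response_size: int) -> str:
    """Small responses fit in UDP; larger ones go over TCP or QUIC."""
    return "udp" if response_size <= MAX_DNS_UDP_PAYLOAD else "tcp-or-quic"

print(pick_transport(512))   # udp
print(pick_transport(1400))  # tcp-or-quic
```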

Most DNS queries and responses will fit within this packet size limit and can therefore be sent over UDP. Larger DNS packets should not be sent over UDP; instead, they should be sent over TCP or QUIC, as described in the next section.


DNS over TCP and QUIC

When larger DNS packets need to be carried, it is recommended to run DNS over TCP or QUIC. These protocols handle segmentation and reliably adjust their segment size for different link and path MTU values, which makes them much more reliable than using UDP with IPv6 fragmentation.

Section 4.2.2 of [RFC1035] describes the use of TCP for carrying DNS messages, while [RFC9250] explains how to implement DNS over QUIC to provide transport confidentiality. Additionally, operation requirements for DNS over TCP are described in [RFC9210].


Security

Switching from UDP to TCP/QUIC for large responses means that the DNS server must maintain additional state for each query received over TCP/QUIC. This consumes additional resources on the servers and affects the scalability of the DNS system. It may also leave servers more vulnerable to Denial of Service (DoS) attacks.


Is this the correct solution?

While we believe this solution will bring many benefits to the IPv6 and DNS ecosystem, it is a temporary operational fix and does not solve the root problem.

We believe the correct solution is ensuring that source fragmentation works, that PMTUD is not broken along the way, and that security devices allow fragmentation headers. This requires changes across various Internet actors, which may take a long time, but that doesn’t mean that we should abandon our efforts or stop educating others about the importance of doing the right thing.


Sunday, June 9, 2024

Cisco hidden command: bgp bestpath as-path multipath-relax

Hidden command

  bgp bestpath as-path multipath-relax


What is this for?

By default, Cisco routers will not load-balance (multipath) across BGP routes learned from different autonomous systems, even when the AS-path lengths are equal; this command relaxes that check. Important: you must also configure the maximum-paths command.


Example:

router bgp 65001
 bgp router-id 1.1.1.1
 bgp log-neighbor-changes
 bgp bestpath as-path multipath-relax
 neighbor 2001:DB8:12::2 remote-as 65002
 neighbor 2001:DB8:12:10::2 remote-as 65002
 neighbor 2001:DB8:13:11::3 remote-as 65003
 !
 address-family ipv4
  no neighbor 2001:DB8:12::2 activate
  no neighbor 2001:DB8:12:10::2 activate
  no neighbor 2001:DB8:13:11::3 activate
 exit-address-family
 !
 address-family ipv6
  maximum-paths 3
  neighbor 2001:DB8:12::2 activate
  neighbor 2001:DB8:12:10::2 activate
  neighbor 2001:DB8:13:11::3 activate
 exit-address-family


Output after implementation:

     Network          Next Hop            Metric LocPrf Weight Path
 *m  2001:DB8::4/128  2001:DB8:12:10::2
                                                      0 65002 65004 ?
 *>                   2001:DB8:12::2                  0 65002 65004 ?
 *m                   2001:DB8:13:11::3
                                                      0 65003 65004 ?
 *m  2001:DB8:24:11::/64
                      2001:DB8:12:10::2
                                                      0 65002 65004 ?
 *>                   2001:DB8:12::2                  0 65002 65004 ?
 *m                   2001:DB8:13:11::3
                                                      0 65003 65004 ?
 *m  2001:DB8:34::/64 2001:DB8:12:10::2
                                                      0 65002 65004 ?
 *>                   2001:DB8:12::2                  0 65002 65004 ?
 *m                   2001:DB8:13:11::3
                                                      0 65003 65004 ?

Friday, June 7, 2024

Video: IPv6 LAC Race - May 2014 - Jun 2024

Do you want to know how IPv6 has evolved in LAC? This one-minute video has your answer. #barchartrace #ipv6





Sunday, June 2, 2024

Solved: "The following security updates require Ubuntu Pro with 'esm-apps' enabled"

Situation

When performing certain operations in Ubuntu using apt/do-release-upgrade, you receive the message:

"The following security updates require Ubuntu Pro with 'esm-apps' enabled"


Solution

 The solution that worked for me was to run this:


cd /etc/apt/sources.list.d

for i in *.list; do mv ${i} ${i}.disabled; done

apt clean

apt autoclean

sudo do-release-upgrade



Reference

https://askubuntu.com/questions/1085295/error-while-trying-to-upgrade-from-ubuntu-18-04-to-18-10-please-install-all-av