
Wednesday, February 12, 2025

The History behind Netmasks.

Introduction

Do you remember when you were learning about netmasks? You probably thought that they were useless, that you wouldn’t need them, and wondered why they had invented something so insane. In addition to putting a smile on your face, I hope to convince you of their importance within the gigantic Internet ecosystem.


Goal

This blog post summarizes the history and milestones behind the concept of netmasks in the world of IPv4. The story begins in a world where classes didn’t exist (flat addressing), moves through the classful era, and concludes with a totally classless Internet (CIDR). The information is based on excerpts from RFCs 790, 1338, and 1519, as well as on ‘Internet-history’ mailing list threads.


Do you know what a netmask is?

If you’re reading this document, I assume you do :-) but here’s a mini explanation: a netmask is used to identify and divide an IP address into a network address and a host address, in other words, it specifies the subnet partitioning system.
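
To make this concrete, here is a minimal Python sketch (standard ipaddress module; the address and mask are just examples) showing that the split is a simple bitwise AND:

import ipaddress

# The mask's "on" bits select the network part; its "off" bits select
# the host part. The addresses here are illustrative.
addr = ipaddress.ip_address("192.168.30.77")
mask = ipaddress.ip_address("255.255.255.0")

network = ipaddress.ip_address(int(addr) & int(mask))                # 192.168.30.0
host = ipaddress.ip_address(int(addr) & ~int(mask) & 0xFFFFFFFF)     # 0.0.0.77
print(network, host)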


What is the purpose of netmasks?

Routing: Netmasks are used by routers to determine the network part of an IP address and route packets correctly.


Subnetting: Netmasks are used to create smaller networks.


Aggregation: Netmasks make it possible to combine several contiguous prefixes into a single, shorter prefix covering a larger block.


Have netmasks always existed?

Interestingly, netmasks haven’t always existed. In the beginning, IP networks were flat, and it was always assumed 8 bits were used for the network and 24 bits for the host. In other words, the first octet represented the network, while the remaining three octets corresponded to the host. It is also worth noting that many years ago, they were also referred to as bitmasks or simply masks — the latter term is still widely used today.


This means that classes (A, B, C, D) have not always existed

Classes were not introduced until Jon Postel’s RFC 790 was published (September 1981); in other words, there was a time before the very distinction between classful and classless addressing existed. The introduction of the classful system was driven by the need to accommodate networks of different sizes, as the original 8-bit network ID was insufficient (only 256 possible networks). While the classful system addressed the limitations of a flat address space, it soon ran into scalability limitations of its own. In the classful world, the netmask was implicit in the address itself.
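
As an illustration, here is a small Python sketch of how the implicit mask follows from the leading bits of the first octet (the classification rules come from RFC 790/791; the function itself is just for illustration):

# Illustrative only: the class, and thus the implicit mask, follows from
# the leading bits of the first octet.
def implicit_prefixlen(first_octet: int) -> int:
    if first_octet < 128:    # 0xxxxxxx -> Class A, mask 255.0.0.0
        return 8
    if first_octet < 192:    # 10xxxxxx -> Class B, mask 255.255.0.0
        return 16
    if first_octet < 224:    # 110xxxxx -> Class C, mask 255.255.255.0
        return 24
    raise ValueError("Class D/E: no implicit network/host split")

print(implicit_prefixlen(10))   # 8
print(implicit_prefixlen(172))  # 16
print(implicit_prefixlen(200))  # 24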


Classes did not solve every issue

Although the classful system represented an improvement over the original (flat) design, it was not efficient. The fixed sizes of the network and host portions of IP addresses led to wasteful allocation and accelerated the exhaustion of the IP address space, particularly with the growing number of networks larger than a Class C but smaller than a Class B. This resulted in the development of Classless Interdomain Routing (CIDR), which relies on Variable Length Subnet Masks (VLSM).


Excerpt from RFC 790


Did you know that netmasks were not always written with contiguous bits “on” from left to right?

In the beginning, netmasks didn’t have to be “lit” or “turned on” bit by bit from left to right. This means that masks such as 255.255.192.128 were entirely valid. This configuration was accepted by routers (IMPs, the first routers) and various operating systems, including BSDs and SunOSs. In other words, until the early 1990s, it was still possible for netmasks to have non-contiguous bits.
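
For the curious, here is a minimal Python sketch of a contiguity check (my own illustration, not from any RFC): a contiguous mask, bitwise-inverted, has the form 0...01...1, which a classic bit trick detects:

import ipaddress

def is_contiguous(mask: str) -> bool:
    # A contiguous mask (1...10...0) inverts to 0...01...1, and a value
    # of that form ANDed with itself-plus-one is always zero.
    inverted = int(ipaddress.IPv4Address(mask)) ^ 0xFFFFFFFF
    return inverted & (inverted + 1) == 0

print(is_contiguous("255.255.255.0"))    # True: a /24
print(is_contiguous("255.255.192.128"))  # False: non-contiguous, once legal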


Why was it decided that it would be mandatory for bits to be turned on from left to right?

There were several reasons for this decision, the main one relating to routing and the well-known concept of “longest match”, where routers select the route with the longest subnet mask that matches the packet’s destination address. If the bits are not contiguous, masks can no longer be ranked by length and lookups become computationally expensive. In short: efficiency.
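
Here is a toy Python sketch of longest match (the routing table and next hops are made up; real routers use tries or TCAMs, but the selection rule is the same):

import ipaddress

# Toy routing table: prefix -> next hop (all values illustrative).
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "via router A",
    ipaddress.ip_network("10.1.0.0/16"): "via router B",
    ipaddress.ip_network("10.1.2.0/24"): "via router C",
}

def longest_match(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    # Contiguous masks let us rank candidates by a single number: prefixlen.
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(longest_match("10.1.2.99"))  # "via router C": the /24 wins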


Back then, IPv4 exhaustion was already underway

IPv4 resource exhaustion is not a recent phenomenon. In fact, item #1 of the first section of RFC 1338 mentions the exhaustion of the Class B network address space, noting that Class C is too small for many organizations, while Class B is too large to be widely allocated. This put pressure on the Class B address space, which was being rapidly depleted. Furthermore, item #3 of the same RFC mentions the “Eventual exhaustion of the 32-bit IP address space” (1992).


CIDR tackles the solutions of the past, which had become the problems of the present

The creation of classes led to the creation of more networks, which meant an increase in prefixes and consequently a higher consumption of memory and CPU. Thus, in September 1993, RFC 1519 introduced the concept of CIDR, which brought with it solutions to different challenges, including the ability to perform supernetting (i.e., being able to turn off bits from right to left) in an attempt to reduce the number of network prefixes. It should be noted that RFC 1338 (June 1992), which RFC 1519 supersedes, had already introduced similar concepts.
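
As a quick worked example of supernetting (the prefixes are illustrative), Python’s standard ipaddress module can collapse four contiguous /24s into a single /22:

import ipaddress

# Four contiguous /24s can be announced as one /22, shrinking the table.
nets = [ipaddress.ip_network("10.0.%d.0/24" % i) for i in range(4)]
print(list(ipaddress.collapse_addresses(nets)))
# [IPv4Network('10.0.0.0/22')] -- one prefix instead of four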


Finally, prefix notation (/nn) also appeared thanks to CIDR and was possible because the “on” and “off” bits of the netmask were contiguous.
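
A small sketch of that equivalence, again with Python’s ipaddress module: once the mask is contiguous, the prefix length is simply the count of “on” bits.

import ipaddress

net = ipaddress.ip_network("192.168.30.0/255.255.255.0")  # dotted-mask form
print(net.with_prefixlen)                            # 192.168.30.0/24
print(net.prefixlen == bin(0xFFFFFF00).count("1"))   # True: /24 = 24 bits "on"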


In summary, the primary goals of CIDR were to slow the growth of routing tables and improve efficiency in the use of IP address space.




Timeline




Conclusions

The concept of the netmask has evolved significantly since its origin: from not existing in a flat addressing scheme, to a rigid classful model, and finally to a flexible classless model with CIDR. Initially, classful networks and non-contiguous masks created inefficiency and scalability issues as the Internet expanded.

A key change was the requirement of contiguous “on” bits, as this simplified the route selection process and allowed routers to operate more efficiently.

This document highlights the key milestones and motivations behind the evolution of IP addressing and underscores the importance of understanding the historical context to fully appreciate the Internet’s current architecture.


References

RFC 790: https://datatracker.ietf.org/doc/html/rfc790

RFC 1338: https://datatracker.ietf.org/doc/html/rfc1338

RFC 1380: https://datatracker.ietf.org/doc/html/rfc1380

RFC 1519: https://datatracker.ietf.org/doc/html/rfc1519

ISOC ‘Internet-history’ mailing list, thread with the subject “The netmask”: https://elists.isoc.org/pipermail/internet-history/2025-January/010060.html

Wednesday, February 22, 2023

The Game of Dominoes and TCP/IP

As many of us do, I have more than one passion: family, work, and sports, a category in which I include the beautiful game of dominoes. I reached my best level in this game about 25 years ago, when I participated in several tournaments (a few of which I won) and, as the icing on the cake, I took sixth place in a national tournament. I should also mention that I come from a family with a certain tradition of domino fans, including two uncles, my father, and my brother.

Playing dominoes is one of the most beautiful things that family and close (and not so close) friends can share. But how is the game of dominoes similar to the TCP/IP protocol? Some of you are probably thinking “Alejandro has gone crazy.” Perhaps I haven’t gone crazy but likely already was.

But I digress… I will show you that TCP/IP and dominoes do have a lot in common.

IBM [1] defines the TCP/IP protocols as follows:

“Protocols are sets of rules for message formats and procedures that allow machines and application programs to exchange information. These rules must be followed by each machine involved in the communication in order for the receiving host to be able to understand the message.”

Sounds interesting… but you probably still don’t see what this has to do with dominoes and think it’s crazy. Do not despair, we’ll get there in a moment!

This is what ChatGPT has to say about the game of dominoes:

“The game of dominoes is a board game in which players place tiles with numbers on both ends on a board. The object of the game is to place all the tiles before the other players. In the game of dominoes, communication between players is based on strategy and planning. Players must communicate which tiles they have and which tiles they can play, and they must work together to find the best way to place the tiles on the board. Players must also pay attention to the other players’ moves and adapt their strategy accordingly. In short, communication during a game of dominoes is essential to a team’s success and to winning the game.”

Now, let’s think at the macro level. At this point, we can see that both involve elements that must be sent/played, and which must maintain an order to achieve successful communication. Likewise, both require strategy and planning to achieve the objective: one connects tiles, the other connects devices. Am I starting to convince you?

Now let’s talk about communication and partnership dominoes, a game where players are paired into teams. This is at the heart of the topic that I want to get to.

Regardless of the style of dominoes (Cuban, Latino, Mexican, Chilean, etc.), partnership dominoes is a communication game and is no different from a data network. A player needs to communicate with their partner (A → B, B → A) to specify which tiles they have or don’t have.

But how do partners communicate if one of the rules is that you are not allowed to talk? Therein lies the greatness of any good player: just like TCP/IP —and any other communication protocol— there are certain rules that must be followed.

After three decades of experience in both worlds, I will share with you what I believe are the main moves in the partnership dominoes ecosystem and their counterpart in TCP/IP:

The pose

In a game of dominoes, the “pose” —the first tile to be placed on the board— is identical to a TCP SYN Packet, more specifically, a TCP Fast Open SYN with payload. This is a beautiful comparison, as in partnership dominoes the pose *always* transmits information, usually related to the suit (number) one most desires. TCP Fast Open —a thing of beauty from the world of TCP— is defined in RFC 7413 and its main objective is to be able to send information in the first packet with which all TCP communication begins (SYN).
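
As a hedged illustration (Linux-only; the address is made up, and the remote end must also support TFO), sending data in the SYN looks roughly like this in Python:

import socket

# Minimal TCP Fast Open sketch (Linux): MSG_FASTOPEN asks the kernel to
# carry the payload inside the SYN. Requires kernel support and
# net.ipv4.tcp_fastopen enabled on both ends; 192.0.2.10 is illustrative.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.sendto(b"hello in the SYN", socket.MSG_FASTOPEN, ("192.0.2.10", 80))
print(s.recv(4096))
s.close()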

The pauses (double ACK)

Extra acknowledgments may look unnecessary on some networks, yet they can be worth the trouble. A player’s “pause” explicitly reveals that the player has more than one tile of the suit (number) they have played. With this, the player is efficiently communicating information to their partner, who should be aware of these pauses, much like those servers and networks where devices are configured to send more than one ACK in TCP (acknowledgment).

Passing (packet dropped)

In TCP, when a packet loss is detected, the sender halves its congestion window and enters the “Congestion Avoidance” phase, so transmission speed drops by 50%. Nothing could be more similar to passing in a game of dominoes. Of course, here a player goes into a panic, especially when they have a lot to transmit.
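
A toy model of that reaction (illustrative numbers only; real TCP also has slow start, fast recovery, and other subtleties):

# Toy model of TCP Reno's reaction to loss: halve the congestion window,
# then grow linearly again during congestion avoidance.
cwnd = 16.0
for rtt in range(6):
    if rtt == 2:        # pretend a drop is detected on the third round trip
        cwnd /= 2       # multiplicative decrease: -50%
    else:
        cwnd += 1       # additive increase: +1 MSS per RTT
    print("RTT", rtt, "cwnd", cwnd)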

The version

In the world of IP, we are used to IPv4 and IPv6; in dominoes, the only difference is that versions range from 0 (blank) to 6. (Yes, I know, double-9 domino sets have the numbers 0 to 9; every rule has its exception ;-))

False thinking

TCP Half-Open. Remember? This is when a three-way handshake (SYN, SYN+ACK, ACK) cannot be completed (but careful! this is the modern use of the term and, to be honest, it does not follow RFC 793, where “half-open” describes a connection one end has dropped without the other knowing). It is also commonly associated with DoS attacks (SYN floods).

Load size (total length)

As we draw our tiles, how many points do we add?

Source and destination address

Here we are specifically in the Layer-3 world. This is a very interesting case where, just as in a communication network, one host communicates with another. The same thing happens in dominoes: while some “packets” are aimed at your partner, others may be aimed at your opponents, as needed (for example, if you are trying to find a specific suit).

Thinking about each move

Bufferbloat is a very particular situation, yet it occurs quite frequently. Basically, it happens when devices (mainly middleboxes: routers, firewalls, or switches) add delay (excess buffering) when forwarding packets. This creates unnecessary latency and jitter. If you manage a network, please be sure to check whether you are suffering from bufferbloat.

Block the game/hand

In dominoes, this is the moment when it is no longer possible to make a play. In TCP/IP, it means that the network is down, so packets cannot be sent.

Consequences of good and poor communication

Just as in any network protocol, in partnership dominoes both proper and poor communication have their consequences. In a game of dominoes, if communication is good, it will most likely lead to victory. If communication is deficient, the team will lose the game. In TCP/IP, if communication is good, the connection will be properly established and the data will be delivered. If the communication is poor, the data will be corrupted and/or the connection will not be established.

Head and tail

In dominoes, a “head and tail” (capicúa) is when your last tile, showing two different numbers, can be played at either end of the board. What can this be compared to? Well, it can be compared to Multipath TCP (MPTCP), defined in RFC 8684, which allows a single connection to operate over different paths. This is why MPTCP provides redundancy and efficiency in terms of bandwidth consumption.
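
A hedged sketch of opening an MPTCP connection on Linux 5.6+ (the address is illustrative, and older Python builds may not expose the constant, hence the fallback):

import socket

# Open a Multipath TCP socket instead of a plain TCP one.
# IPPROTO_MPTCP is protocol number 262 on Linux.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("192.0.2.10", 80))  # illustrative address
s.close()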

Have I not convinced you yet? OK, I have one more chance to see if I can do it.

The table below shows a layer-to-layer comparison of TCP/IP and dominoes.

TCP/IP model (+ user layer)     Domino model
User                            Player
Application                     Set up the play
Transport                       Select the tile
Network                         Pick up the tiles
Link                            Tiles
Physical                        Place the tile on the table

In the TCP/IP model, a packet is built from the upper to the lower layers (it is then injected into the network, etc.), it is received by the destination host, and processed “in reverse”, from the physical to the application layer. The exact same thing happens in the domino model: the player sets up their move, selects their tile, picks it up, and then injects it into the game.

Conclusion

Comparing the game of partnership dominoes and the TCP/IP communication protocol may seem strange at first. However, a closer look reveals the similarities. In dominoes, there are two players who act as senders and receivers of information, as they each have their own set of tiles and must communicate with each other to decide which tile to play each turn. In the same way, in TCP/IP there are devices that act as senders and receivers of information.

These comparisons illustrate how similarities can often be found between seemingly dissimilar systems, and how understanding those similarities can help us grasp complex concepts and see things from a different perspective.

Friday, June 25, 2021

IPv6 Deployment - LACNIC region

The video below shows the IPv6 deployment for the LACNIC region using a Bar Chart Race format.




Made with https://flourish.studio/

Thursday, August 24, 2017

Google DNS --- Figuring out which DNS Cluster you are using

(This is -almost- a copy/paste of an email sent by Erik Sundberg to the nanog mailing list on August 23, 2017. It is being posted here with his explicit permission.)
I sent this out on the outage list, with lots of good feedback sent to me, so I figured it would be useful to share the information on nanog as well. A couple of months ago I had to troubleshoot a Google DNS issue with Google’s NOC. Below is some helpful information on how to determine which DNS cluster you are going to. Let’s remember that Google runs DNS anycast for DNS queries to 8.8.8.8 and 8.8.4.4. Anycast routes your DNS queries to the closest DNS cluster based on the best route / lowest metric to 8.8.8.8/8.8.4.4. Google has deployed multiple DNS clusters across the world and each DNS cluster has multiple servers. So a DNS query in Chicago will go to a different DNS cluster than queries from a device in Atlanta or New York.

How to get a list of Google DNS clusters:
dig -t TXT +short locations.publicdns.goog. @8.8.8.8

How to print this list in a table format. 
-- Script from: https://developers.google.com/speed/public-dns/faq -- 

#!/bin/bash
# Prints "prefix... location" lines from Google's locations TXT record.
# Adding '"' to IFS makes the shell strip the quotes around each TXT string.
IFS="\"$IFS"
for LOC in $(dig -t TXT +short locations.publicdns.goog. @8.8.8.8)
do
  case $LOC in
    '') : ;;                           # skip empty fields
    *.*|*:*) printf '%s ' ${LOC} ;;    # IPv4/IPv6 prefix: print, stay on the line
    *) printf '%s\n' ${LOC} ;;         # POP code: print and end the line
  esac
done
---------------

This will give you a list like the one below: all of the IP networks that Google uses for their DNS clusters, with their associated locations.
74.125.18.0/26 iad
74.125.18.64/26 iad
74.125.18.128/26 syd
74.125.18.192/26 lhr
74.125.19.0/24 mrn
74.125.41.0/24 tpe
74.125.42.0/24 atl
74.125.44.0/24 mrn
74.125.45.0/24 tul
74.125.46.0/24 lpp
74.125.47.0/24 bru
74.125.72.0/24 cbf
74.125.73.0/24 bru
74.125.74.0/24 lpp
74.125.75.0/24 chs
74.125.76.0/24 cbf
74.125.77.0/24 chs
74.125.79.0/24 lpp
74.125.80.0/24 dls
74.125.81.0/24 dub
74.125.92.0/24 mrn
74.125.93.0/24 cbf
74.125.112.0/24 lpp
74.125.113.0/24 cbf
74.125.115.0/24 tul
74.125.176.0/24 mrn
74.125.177.0/24 atl
74.125.179.0/24 cbf
74.125.181.0/24 bru
74.125.182.0/24 cbf
74.125.183.0/24 cbf
74.125.184.0/24 chs
74.125.186.0/24 dls
74.125.187.0/24 dls
74.125.190.0/24 sin
74.125.191.0/24 tul
172.217.32.0/26 lhr
172.217.32.64/26 lhr
172.217.32.128/26 sin
172.217.33.0/26 syd
172.217.33.64/26 syd
172.217.33.128/26 fra
172.217.33.192/26 fra
172.217.34.0/26 fra
172.217.34.64/26 bom
172.217.34.192/26 bom
172.217.35.0/24 gru
172.217.36.0/24 atl
172.217.37.0/24 gru
173.194.90.0/24 cbf
173.194.91.0/24 scl
173.194.93.0/24 tpe
173.194.94.0/24 cbf
173.194.95.0/24 tul
173.194.97.0/24 chs
173.194.98.0/24 lpp
173.194.99.0/24 tul
173.194.100.0/24 mrn
173.194.101.0/24 tul
173.194.102.0/24 atl
173.194.103.0/24 cbf
173.194.168.0/26 nrt
173.194.168.64/26 nrt
173.194.168.128/26 nrt
173.194.168.192/26 iad
173.194.169.0/24 grq
173.194.170.0/24 grq
173.194.171.0/24 tpe
2404:6800:4000::/48 bom
2404:6800:4003::/48 sin
2404:6800:4006::/48 syd
2404:6800:4008::/48 tpe
2404:6800:400b::/48 nrt
2607:f8b0:4001::/48 cbf
2607:f8b0:4002::/48 atl
2607:f8b0:4003::/48 tul
2607:f8b0:4004::/48 iad
2607:f8b0:400c::/48 chs
2607:f8b0:400d::/48 mrn
2607:f8b0:400e::/48 dls
2800:3f0:4001::/48 gru
2800:3f0:4003::/48 scl
2a00:1450:4001::/48 fra
2a00:1450:4009::/48 lhr
2a00:1450:400b::/48 dub
2a00:1450:400c::/48 bru
2a00:1450:4010::/48 lpp
2a00:1450:4013::/48 grq

There are:
IPv4 networks: 68
IPv6 networks: 20
DNS clusters identified by POP code: 20

Below, each POP code is mapped to a city, state, or country. Not all of these are Google’s core datacenters; some of them are edge points of presence (POPs). See https://peering.google.com/#/infrastructure and https://www.google.com/about/datacenters/inside/locations/


Most of these are airport codes; I did my best to get the locations correct.
iad         Washington, DC
syd         Sydney, Australia
lhr         London, UK
mrn         Lenoir, NC
tpe         Taiwan
atl         Atlanta, GA
tul         Tulsa, OK
lpp         Finland
bru         Brussels, Belgium
cbf         Council Bluffs, IA
chs         Charleston, SC
dls         The Dalles, Oregon
dub         Dublin, Ireland
sin         Singapore
fra         Frankfurt, Germany
bom         Mumbai, India
gru         Sao Paulo, Brazil
scl         Santiago, Chile
nrt         Tokyo, Japan
grq         Groningen, Netherlands

Which Google DNS server cluster am I using? I am testing this from Chicago, IL.

# dig o-o.myaddr.l.google.com -t txt +short @8.8.8.8
"173.194.94.135"                        <<< the DNS server IP; per the list above, this is cbf (Council Bluffs, IA)
"edns0-client-subnet 207.xxx.xxx.0/24"  <<< your source IP block

Side note: the Google DNS servers will not respond to DNS queries sent to a cluster member’s IP; they will only respond to queries sent to 8.8.8.8 and 8.8.4.4. So the following will not work:

dig google.com @173.194.94.135

Now, to see the DNS cluster load balancing in action: I ran a dig query from our Telx/Digital Realty POP in Atlanta, GA (we do peer with Google at this location). I repeated the query about 10 times and received the following unique DNS cluster member IPs as responses.

dig o-o.myaddr.l.google.com -t txt +short @8.8.8.8
"74.125.42.138"
"173.194.102.132"
"74.125.177.5"
"74.125.177.74"
"74.125.177.71"
"74.125.177.4"

All of these are Google DNS networks in Atlanta:

74.125.42.0/24        atl
74.125.177.0/24       atl
172.217.36.0/24       atl
173.194.102.0/24      atl
2607:f8b0:4002::/48   atl


I just thought this would be helpful when troubleshooting Google DNS issues.




Wednesday, February 17, 2016

Animation: The sad tale of the ISP that did not deploy IPv6


Hello,
  The following animation is based on the story called: "The sad tale of the ISP that didn't deploy IPv6" [1]. Hope you enjoy it:


[1] http://portalipv6.lacnic.net/en/the-sad-tale-of-the-isp-that-didnt-deploy-ipv6/



Tuesday, May 26, 2015

IPv6 Song presented during Lacnic23 (Lima, Peru) - IPv6 Latin American Forum





(note that you can turn on captioning if you wish)

Antonio Esguerra: Head Engineer
Michael Schulze: Co-producer
Eidan Molina: Co-producer. Composer.
Music and Lyrics by Eidan Molina
Production group: Fifth Floor Studios
Idea: Alejandro Acosta

Monday, January 26, 2015

The sad tale of the ISP that didn’t deploy IPv6

Once upon a time in the not so distant past, a large ISP dominated a country’s telecommunications market and felt powerful and without competition. Whenever someone needed to log on to the Internet they would use their services. Everyone envied their market penetration.

This large ISP, however, had never wanted to deploy IPv6 because they thought their stock of IP addresses was enough and saw no indicator telling them that they needed the new protocol.

During the course of those years, another smaller ISP began implementing IPv6 and slowly began to grow, as they realized that the protocol did indeed make a difference in the eyes of their clients and that it was helping them win over new users.

The small ISP’s market penetration continued to grow, as did their earnings and general respect for their services. As they grew, it became easier for them to obtain better equipment, traffic and interconnection prices. Everything was going very well. The small ISP couldn’t believe that something as simple as deploying IPv6 could be paying off so spectacularly. Their customers told them their needs included running VPNs and holding conference calls with partners in other parts of the world, and that their subsidiaries, customers and business partners in Europe and Asia had already adopted IPv6.

Despite being so powerful, the large ISP began experiencing internal problems that were neither billing nor money related. Sales staff complained that they were having trouble closing many deals because customers had started asking for IPv6 and, although their ISP was so large and important, they simply did not have IPv6 to offer. Both corporate customers and residential users were asking for IPv6; even major state tenders were requiring IPv6.

When this started happening, the Sales Manager complained to the Products, Engineering and Operations departments. The latter were left speechless and some employees were let go by the company. In the end, Sales did not care where the fault lay – they were simply unable to gain new customers. Realizing that they were losing customers, some of the salespeople accepted job offers at the small ISP who was looking to grow their staff as they could now afford the best sales force. Then the same thing happened with the larger ISP’s network manager, an expert who knew a lot about IPv6 but who had been unable to overcome the company’s bureaucracy and bring the new protocol into production. Logically, the network manager was followed by his trusted server administrator and head of security. The large ISP couldn’t believe what was happening right before their very eyes. The sales force hired by the smaller ISP (those who used to work for the large ISP) brought with them their huge customer base, all of them potential prospects.

A stampede of the large ISP’s clients was on the way. The months went by and the smaller ISP was no longer simply offering Internet access – its Data Center had grown, major companies brought in new cache servers and much more. They were now offering co-location, hosting, virtual hosting, voice and video, among many other services.

When the large provider decided to deploy IPv6, it had to do so very quickly. Things went wrong; many errors were made. In addition, certain consultants and companies took advantage of their problems and charged higher rush fees. Network downtime increased, as did the number of calls to the call center. The large ISP’s reputation started to crumble.

As expected, in the end, everyone who was part of this story – clients and providers alike – ended up deploying IPv6. Some ended up happier than others, but everyone adopted IPv6 on their networks.

Monday, November 17, 2014

$GENERATE A records using BIND. Match forward and rDNS

Hi,
This post is very short but perhaps very useful. There is less documentation about this on the Internet than one would expect.

Objective: 
a) Set up matching reverse DNS (rDNS) and forward DNS for a /24 in BIND9 using $GENERATE.

Requirements: 
- A  /24 network (of course, you can adapt the example to other networks)
- BIND9
- We will use A and PTR records

Example: 
Network: 192.168.30.0/24
Domain: example.com

Let's make the rDNS for 192.168.30.X resolve to X.client.example.com.
Similarly, X.client.example.com will resolve to 192.168.30.X.

It would be like this:
192.168.30.1 ---> 1.client.example.com
192.168.30.2 ---> 2.client.example.com
192.168.30.3 ---> 3.client.example.com
1.client.example.com ---> 192.168.30.1
2.client.example.com ---> 192.168.30.2
3.client.example.com ---> 192.168.30.3
(Etc)

Steps: 
We create the reverse zone in /etc/bind/named.conf.

a) The reverse zone:

zone "30.168.192.in-addr.arpa" {
type master;
file "30.168.192.in-addr.arpa.db";
allow-query {any; };
};


After that, in the file 30.168.192.in-addr.arpa.db, place the following:

$TTL    86400 ; 24 hours, could have been written as 24h or 1d
@  1D  IN        SOA localhost.     hostmaster.example.com. (
                              2002022401 ; serial
                              3H ; refresh
                              15 ; retry
                              1w ; expire
                              3h ; minimum
                             )
; Name servers for the zone - both out-of-zone - no A RRs required
                        NS      localhost.

$GENERATE 1-255 $ PTR $.client.example.com.
; expands to: 1 PTR 1.client.example.com., 2 PTR 2.client.example.com., etc.


b) For the forward DNS, do the following in the client.example.com zone file:

$TTL    86400 ; 24 hours, could have been written as 24h or 1d
@  1D  IN        SOA localhost.     hostmaster.example.com. (
                              2002022401 ; serial
                              3H ; refresh
                              15 ; retry
                              1w ; expire
                              3h ; minimum
                             )
; Name servers for the zone - both out-of-zone - no A RRs required
                        NS      localhost.

$GENERATE 1-255 $ A 192.168.30.$
; expands to: 1 A 192.168.30.1, 2 A 192.168.30.2, etc. (relative to client.example.com)
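
Optionally, before testing, you can sanity-check both zone files with named-checkzone (shipped with BIND). The forward zone file name below is an assumption; use whatever file name your zone statement points to:

#named-checkzone 30.168.192.in-addr.arpa 30.168.192.in-addr.arpa.db
#named-checkzone client.example.com client.example.com.db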



Testing: 
#dig -x 192.168.30.3 (reverse DNS)
#dig 3.client.example.com (forward DNS)



P.S. As usual, there is more than one way of doing this kind of thing.

Saturday, October 18, 2014

Is there any relationship regarding the deployment of IPv6 between entities with ASNs of 16 and 32 bits?

Hi,
  Today this came to my mind and, fortunately, I knew the data existed, so locating the answer was more or less simple. I've been reviewing some data from APNIC (*1) for several weeks (always focused on LACNIC's coverage area).
  APNIC stores a percentage value per ASN called ipv6capable; basically, it is a number that indicates what percentage of an ASN is IPv6 capable. They have another variable called ipv6preferred, but we will not talk about it right now.
  Going back to my micro study: I took a sample on three days: October 11, 12, and 15, 2014.

My results (for ipv6capable) were:

Day October 15 (Wednesday)
1.34% in ASNs of 16 bits
1.12% in ASNs of 32 bits
1.26% in all ASNs
----
Day October 11 (Saturday)
1.34% in 16-bit ASN
1.15% in 32-bit ASN
1.27% all ASN
---
Day October 12 (Sunday)
1.35% in 16-bit
1.12% in 32-bit
1.27% all ASN (ASN 3313)
----

I evaluated about 2,050 16-bit ASNs and 1,200 32-bit ASNs. The ipv6capable difference between 16- and 32-bit ASNs is about 0.2 percentage points; it is certainly a very small number, but when it comes to traffic and users on the Internet, these numbers represent significant volume.
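
If you want to reproduce the split, the distinction is purely numeric: 16-bit ASNs are those up to 65535. Below is a minimal Python sketch that assumes a two-column CSV of "asn,ipv6capable" rows (the real APNIC file format may differ):

import csv

# Hypothetical input: one row per ASN with its ipv6capable percentage,
# e.g. "AS3333,2.5". The real APNIC data format may differ.
def averages(path):
    sums = {16: [0.0, 0], 32: [0.0, 0]}
    with open(path) as f:
        for asn, capable in csv.reader(f):
            bits = 16 if int(asn.lstrip("AS")) <= 65535 else 32
            sums[bits][0] += float(capable)
            sums[bits][1] += 1
    for bits, (total, count) in sums.items():
        print(f"{bits}-bit ASNs: {total / count:.2f}% ipv6capable ({count} ASNs)")

averages("ipv6capable.csv")  # hypothetical file name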

My humble finding from this study is simple: there seems to be a *very* slight preference for deploying IPv6 among 16-bit ASNs.

Regards,

(*1) http://labs.apnic.net/ipv6-measurement/AS/