Thursday, September 3, 2020

Solved: Closing connection because of an I/O error in FRR - at least in Ubuntu

 If you are getting this message in FRR:

Closing connection because of an I/O error in FRR


The solution is straightforward: you have to compile FRR with this flag:

--enable-systemd


So, it would be something like:

./configure \
    --prefix=/usr \
    --includedir=\${prefix}/include \
    --enable-exampledir=\${prefix}/share/doc/frr/examples \
    --bindir=\${prefix}/bin \
    --sbindir=\${prefix}/lib/frr \
    --libdir=\${prefix}/lib/frr \
    --libexecdir=\${prefix}/lib/frr \
    --localstatedir=/var/run/frr \
    --sysconfdir=/etc/frr \
    --with-moduledir=\${prefix}/lib/frr/modules \
    --with-libyang-pluginsdir=\${prefix}/lib/frr/libyang_plugins \
    --enable-configfile-mask=0640 \
    --enable-logfile-mask=0640 \
    --enable-snmp=agentx \
    --enable-multipath=64 \
    --enable-user=frr \
    --enable-group=frr \
    --enable-vty-group=frrvty \
    --with-pkg-git-version \
    --enable-systemd \
    --with-pkg-extra-version=-MyOwnFRRVersion



You can follow the official build instructions and add the flag described above:

http://docs.frrouting.org/projects/dev-guide/en/latest/building-frr-for-ubuntu2004.html

Tuesday, September 1, 2020

Tiny script: how to get the RRSIG DNS records with python3

 (I'm sure there are many other, perhaps more elegant, ways to do this.)


import dns.message
import dns.name
import dns.query
import dns.rdatatype

domain = 'lacnic.net'
domain = dns.name.from_text(domain)

# Ask Google's public resolver for ANY record type over TCP
request = dns.message.make_query(domain, dns.rdatatype.ANY)
response = dns.query.tcp(request, '8.8.8.8')
#print(response)

# Keep only the RRSIG lines from the textual form of the response
for item in str(response).splitlines():
  if 'RRSIG' in item:
    print(item)
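
As one of those possibly more elegant alternatives, here is a short sketch that walks the parsed answer section instead of grepping the text output (same query, same resolver; it assumes a reasonably recent dnspython):

import dns.message
import dns.name
import dns.query
import dns.rdatatype

domain = dns.name.from_text('lacnic.net')
request = dns.message.make_query(domain, dns.rdatatype.ANY)
response = dns.query.tcp(request, '8.8.8.8')

# Each RRset in the answer section carries its record type, so we can filter
# on dns.rdatatype.RRSIG directly instead of matching text lines
for rrset in response.answer:
    if rrset.rdtype == dns.rdatatype.RRSIG:
        print(rrset)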


Monday, August 10, 2020

My review of coin.space (in summary: not recommended)

 Hello there,

  I know many of you are looking for good BTC wallets, good cryptocurrency fees and so on.

  Unfortunately I cannot tell you which one is the best or the cheapest, but I can tell you about a really BAD one: coin.space

  This site (coin.space) starts by charging “normal” fees for sending money; however, they then increase the fee ONLY for certain accounts, not all of them.

  I will tell you a very simple test I performed from the very same computer, at the very same time.

  • I opened two different browsers (Chrome & Firefox)
  • In one browser I tried to send money using an “old coin.space” account
  • In the other browser I tried to send money using a very new coin.space account
  • Trying to send about USD 50 from the “old account”, it wanted to charge a fee of about USD 12
  • Trying to send about USD 50 from the “new account”, it wanted to charge a fee of about USD 4


  What do you think? Sorry, coin.space, but I had to publish this.


Thanks


Friday, September 6, 2019

The sad tale of the ISP that didn’t deploy IPv6

Once upon a time in the not so distant past, a large ISP dominated a country's telecommunications market and felt powerful and without competition. Whenever someone needed to log on to the Internet they would use their services. Everyone envied their market penetration.

This large ISP, however, had never wanted to deploy IPv6 because they thought their stock of IP addresses was enough and saw no indicator telling them that they needed the new protocol.

During the course of those years, another smaller ISP began implementing IPv6 and slowly began to grow, as they realized that the protocol did indeed make a difference in the eyes of their clients and that it was helping them win over new users.

The small ISP's market penetration continued to grow, as did their earnings and general respect for their services. As they grew, it became easier for them to obtain better equipment, traffic and interconnection prices. Everything was going very well. The small ISP couldn't believe that something as simple as deploying IPv6 could be paying off so spectacularly. Their customers told them their needs included running VPNs and holding conference calls with partners in other parts of the world, and that their subsidiaries, customers and business partners in Europe and Asia had already adopted IPv6.

Despite being so powerful, the large ISP began experiencing internal problems that were neither billing nor money related. Sales staff complained that they were having trouble closing many deals because customers had started asking for IPv6 and, although their ISP was so large and important, they simply did not have IPv6 to offer. Both corporate customers and residential users were asking for IPv6; even major state tenders were requiring IPv6.

When this started happening, the Sales Manager complained to the Products, Engineering and Operations departments. The latter were left speechless and some employees were let go by the company. In the end, Sales did not care where the fault lay; they were simply unable to gain new customers. Realizing that they were losing customers, some of the salespeople accepted job offers at the small ISP, which was looking to grow its staff as it could now afford the best sales force.

Then the same thing happened with the larger ISP's network manager, an expert who knew a lot about IPv6 but who had been unable to overcome the company's bureaucracy and bring the new protocol into production. Logically, the network manager was followed by his trusted server administrator and head of security. The large ISP couldn't believe what was happening right before their very eyes. The sales force hired by the smaller ISP (those who used to work for the large ISP) brought with them their huge customer base, all of them potential prospects.

A stampede of the large ISP's clients was on the way. The months went by and the smaller ISP was no longer simply offering Internet access: its data center had grown, major companies brought in new cache servers and much more. They were now offering co-location, hosting, virtual hosting, voice and video, among many other services.

When the large provider decided to deploy IPv6, it had to do so very quickly. Things went wrong; many errors were made. In addition, certain consultants and companies took advantage of their problems and charged higher rush fees. Network downtime increased, as did the number of calls to the call center. The large ISP's reputation started to crumble.


As expected, in the end, everyone who was part of this story, clients and providers alike, ended up deploying IPv6. Some ended up happier than others, but everyone adopted IPv6 on their networks.

By Alejandro Acosta

Monday, July 8, 2019

BGP: To filter or not to filter by prefix size. That is the question


Introduction


To write this post, the R&D team and the WARP team joined forces after analyzing several security incidents related to BGP, network accessibility, network hijacks and network visibility.


As you probably know, in the BGP world there are dozens of ways to filter prefixes. The goal of this post is to share some recommendations that will help you run a more stable network, keep your prefixes as visible as possible and, of course, reduce the number of calls to the NOC.


Scenario

Many ISPs around the world cannot (or do not wish to) receive the full routing table (DFZ), which at the time of writing is about 750,000 IPv4 prefixes.


This could be due to some of the following causes (among others):

  • The routers do not have enough RAM to learn all the prefixes (please also note that there may be several BGP sessions running at the same time)
  • The network admin wants to save CPU cycles on their devices
  • The upstream provider is not announcing the full routing table
  • The network admin wishes to keep the network simple and easy

In any case, at the end of the day, the router is not learning the full routing table.



Problem

Not learning the full routing table can cause a number of partial inconveniences that end up producing connectivity problems, user complaints, issues accessing some web sites, and more.



Why?

Please try to imagine the following case:



  1. I have a router (property of EXAMPLE) on the Internet that is ONLY learning a partial DFZ
  2. The router mentioned in "1" is only learning "big" networks, /21 and shorter; that is, the router learns /21, /20, /19, /18, etc. (of course, we are talking about IPv4)
  3. Based on the configuration indicated in "2", the router is not going to learn prefixes such as /22, /23 or /24
  4. So far so good. However, somewhere on the Internet, someone hijacks a /21 belonging to the company ACME (hijack, bad configuration, whatever)
  5. ACME decides to advertise more specific prefixes, so it takes its /21 and announces eight /24 prefixes to the DFZ
  6. Because of the filters configured by EXAMPLE, it will never learn the legitimate /24 prefixes advertised by ACME
  7. EXAMPLE will keep using the hijacked /21, which will obviously cause connectivity problems for the legitimate owner of the network (the longest-prefix-match sketch after this list shows why)
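
To make the reasoning explicit, here is a small, purely illustrative Python sketch of the longest-prefix-match rule that routers apply (the addresses come from the documentation range, not ACME's real prefixes):

import ipaddress

# Hypothetical prefixes, for illustration only
hijacked_covering = ipaddress.ip_network('203.0.112.0/21')  # the hijacked /21
legit_specific = ipaddress.ip_network('203.0.113.0/24')     # one of ACME's eight /24s
destination = ipaddress.ip_address('203.0.113.10')

def best_route(table, dest):
    """Longest-prefix match: pick the most specific route containing dest."""
    matches = [net for net in table if dest in net]
    return max(matches, key=lambda net: net.prefixlen, default=None)

# With the /24 present, the legitimate more-specific route wins
print(best_route([hijacked_covering, legit_specific], destination))  # 203.0.113.0/24

# If EXAMPLE filters out the /24, only the hijacked /21 is left
print(best_route([hijacked_covering], destination))                  # 203.0.112.0/21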




Topology

The following diagram represents the scenario described above in graphic form to make it easier to understand.








Recommendation

The following recommendations apply only to networks that CANNOT learn the full DFZ. Once again, they come from studying many cases of connectivity problems and network hijacks:


  • Do not filter out the more specific networks. In other words, it is better to learn more specific prefixes such as /24, /23 and /22 (in the IPv4 world)
  • Filter using AS_PATH instead (for example, accepting paths 2, 3 or 4 ASes deep)
  • Please create your ROAs and use RPKI



Some examples (Cisco-like)

1. Learning only /22, /23 or /24:


router bgp 65002
  neighbor 10.0.0.1 remote-as 65001
  neighbor 10.0.0.1 route-map FILTRO-IN in
!
ip prefix-list SMALLNETWORKS seq 5 permit 0.0.0.0/0 ge 22 le 24
!
route-map FILTRO-IN permit 10
  match ip address prefix-list SMALLNETWORKS
!



2. Learning only prefixes whose AS path is at most two ASes long:

router bgp 65001
  neighbor 10.0.0.2 remote-as 65002
  neighbor 10.0.0.2 route-map ASFILTER-IN in


!
ip as-path access-list 5 permit ^[0-9]+_$
ip as-path access-list 5 permit ^[0-9]+ [0-9]+_$
!
route-map ASFILTER-IN permit 10
  match as-path 5
!
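
If it helps to reason about what those two as-path entries accept, here is a rough, illustrative Python approximation. Note that the IOS `_` metacharacter matches a space, comma or the start/end of the path; for these two patterns it effectively acts as an end anchor, which is how it is translated below:

import re

# Rough Python translation of the two as-path access-list entries above:
#   '^[0-9]+_$'         -> a path containing exactly one AS (our direct neighbor)
#   '^[0-9]+ [0-9]+_$'  -> a path containing exactly two ASes
patterns = [r'^[0-9]+$', r'^[0-9]+ [0-9]+$']

sample_paths = [
    '65002',              # originated by our neighbor  -> accepted
    '65002 65010',        # one AS beyond the neighbor  -> accepted
    '65002 65010 65020',  # two ASes beyond the neighbor -> filtered out
]

for path in sample_paths:
    accepted = any(re.match(p, path) for p in patterns)
    print(f'{path!r}: {"accepted" if accepted else "filtered"}')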



More information

Prefix hijack demonstration 1 / 2 (in Spanish):
https://www.youtube.com/watch?v=X5RNSs8y8Ao&t=39s


Prefix hijack demonstration 2/2 (in Spanish):
https://www.youtube.com/watch?v=m51WtuEZOKI


BGP Prefix-Based Outbound Route Filtering
http://www.cisco.com/c/en/us/td/docs/ios/12_2s/feature/guide/fsbgporf.html


BGP configuration example with two different service providers (multihoming) (in Spanish)
http://www.cisco.com/cisco/web/support/LA/7/75/75930_27.html


Resource Certification (RPKI) (in Spanish)
http://www.lacnic.net/web/lacnic/certificacion-de-recursos-rpki


General information about Resource Certification (RPKI) (in Spanish)
http://www.lacnic.net/web/lacnic/informacion-general-rpki




Authors:

Dario Gomez (https://twitter.com/daro_ua)

Alejandro Acosta (https://twitter.com/ITandNetworking)

Friday, July 13, 2018

How to run the Flask framework so it listens on both IPv4 and IPv6 (dual stack)

Issue:
  How to run the Flask framework so it listens on both IPv4 and IPv6 (dual stack)

Answer:
  app.run(host='::', port=5005)
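
For context, a minimal, self-contained sketch (the route and port are just placeholders) would look like this:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello over IPv4 or IPv6\n'

if __name__ == '__main__':
    # Binding the development server to the IPv6 unspecified address '::'
    # makes it accept both IPv6 and IPv4 connections (dual stack) on most
    # systems, as long as the OS maps IPv4 into IPv6 sockets (IPV6_V6ONLY
    # disabled, the default on typical Linux setups).
    app.run(host='::', port=5005)

You can then check both stacks with, for example, curl -4 http://127.0.0.1:5005/ and curl -6 'http://[::1]:5005/'.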




Thursday, August 24, 2017

Google DNS --- Figuring out which DNS Cluster you are using

(This is -almost- a copy/paste of an email sent by Erik Sundberg to the NANOG mailing list on August 23, 2017. It is being posted here with his explicit permission.)

I sent this out on the outage list, with lots of good feedback sent to me, so I figured it would be useful to share the information on NANOG as well. A couple of months ago I had to troubleshoot a Google DNS issue with Google's NOC. Below is some helpful information on how to determine which DNS cluster you are going to. Remember that Google runs DNS anycast for queries to 8.8.8.8 and 8.8.4.4; anycast routes your DNS queries to the closest DNS cluster based on the best route / lowest metric to 8.8.8.8/8.8.4.4. Google has deployed multiple DNS clusters across the world and each DNS cluster has multiple servers, so a DNS query in Chicago will go to a different DNS cluster than queries from a device in Atlanta or New York.

How to get a list of Google DNS clusters:
dig -t TXT +short locations.publicdns.goog. @8.8.8.8

How to print this list in a table format. 
-- Script from: https://developers.google.com/speed/public-dns/faq -- 

#!/bin/bash
# Add the double-quote character to IFS so the quotes around each TXT
# string are stripped during word splitting.
IFS="\"$IFS"
for LOC in $(dig -t TXT +short locations.publicdns.goog. @8.8.8.8)
do
  case $LOC in
    '') : ;;                          # skip empty tokens
    *.*|*:*) printf '%s ' ${LOC} ;;   # an IPv4/IPv6 prefix: print and stay on the same line
    *) printf '%s\n' ${LOC} ;;        # a location code: print and end the line
  esac
done
---------------

This will give you a list like the one below: all of the IP networks that Google uses for their DNS clusters and their associated locations.
74.125.18.0/26 iad
74.125.18.64/26 iad
74.125.18.128/26 syd
74.125.18.192/26 lhr
74.125.19.0/24 mrn
74.125.41.0/24 tpe
74.125.42.0/24 atl
74.125.44.0/24 mrn
74.125.45.0/24 tul
74.125.46.0/24 lpp
74.125.47.0/24 bru
74.125.72.0/24 cbf
74.125.73.0/24 bru
74.125.74.0/24 lpp
74.125.75.0/24 chs
74.125.76.0/24 cbf
74.125.77.0/24 chs
74.125.79.0/24 lpp
74.125.80.0/24 dls
74.125.81.0/24 dub
74.125.92.0/24 mrn
74.125.93.0/24 cbf
74.125.112.0/24 lpp
74.125.113.0/24 cbf
74.125.115.0/24 tul
74.125.176.0/24 mrn
74.125.177.0/24 atl
74.125.179.0/24 cbf
74.125.181.0/24 bru
74.125.182.0/24 cbf
74.125.183.0/24 cbf
74.125.184.0/24 chs
74.125.186.0/24 dls
74.125.187.0/24 dls
74.125.190.0/24 sin
74.125.191.0/24 tul
172.217.32.0/26 lhr
172.217.32.64/26 lhr
172.217.32.128/26 sin
172.217.33.0/26 syd
172.217.33.64/26 syd
172.217.33.128/26 fra
172.217.33.192/26 fra
172.217.34.0/26 fra
172.217.34.64/26 bom
172.217.34.192/26 bom
172.217.35.0/24 gru
172.217.36.0/24 atl
172.217.37.0/24 gru
173.194.90.0/24 cbf
173.194.91.0/24 scl
173.194.93.0/24 tpe
173.194.94.0/24 cbf
173.194.95.0/24 tul
173.194.97.0/24 chs
173.194.98.0/24 lpp
173.194.99.0/24 tul
173.194.100.0/24 mrn
173.194.101.0/24 tul
173.194.102.0/24 atl
173.194.103.0/24 cbf
173.194.168.0/26 nrt
173.194.168.64/26 nrt
173.194.168.128/26 nrt
173.194.168.192/26 iad
173.194.169.0/24 grq
173.194.170.0/24 grq
173.194.171.0/24 tpe
2404:6800:4000::/48 bom
2404:6800:4003::/48 sin
2404:6800:4006::/48 syd
2404:6800:4008::/48 tpe
2404:6800:400b::/48 nrt
2607:f8b0:4001::/48 cbf
2607:f8b0:4002::/48 atl
2607:f8b0:4003::/48 tul
2607:f8b0:4004::/48 iad
2607:f8b0:400c::/48 chs
2607:f8b0:400d::/48 mrn
2607:f8b0:400e::/48 dls
2800:3f0:4001::/48 gru
2800:3f0:4003::/48 scl
2a00:1450:4001::/48 fra
2a00:1450:4009::/48 lhr
2a00:1450:400b::/48 dub
2a00:1450:400c::/48 bru
2a00:1450:4010::/48 lpp
2a00:1450:4013::/48 grq

In total there are:
IPv4 networks: 68
IPv6 networks: 20
DNS clusters identified by POP code: 20

The POP codes map to a city, state or country. Not all of these locations are Google's core datacenters; some of them are edge points of presence (POPs). See
https://peering.google.com/#/infrastructure and
https://www.google.com/about/datacenters/inside/locations/


Most of these are airport codes; I did my best to get the locations correct.
iad    Washington, DC
syd    Sydney, Australia
lhr    London, UK
mrn    Lenoir, NC
tpe    Taipei, Taiwan
atl    Atlanta, GA
tul    Tulsa, OK
lpp    Finland
bru    Brussels, Belgium
cbf    Council Bluffs, IA
chs    Charleston, SC
dls    The Dalles, Oregon
dub    Dublin, Ireland
sin    Singapore
fra    Frankfurt, Germany
bom    Mumbai, India
gru    Sao Paulo, Brazil
scl    Santiago, Chile
nrt    Tokyo, Japan
grq    Groningen, Netherlands

Which Google DNS server cluster am I using? I am testing this from Chicago, IL:

# dig o-o.myaddr.l.google.com -t txt +short @8.8.8.8
"173.194.94.135"                        <<< DNS server IP; look it up in the list above to get the cluster (cbf, Council Bluffs, IA)
"edns0-client-subnet 207.xxx.xxx.0/24"  <<< your source IP block

Side note: the Google DNS servers will not respond to DNS queries sent to a cluster member's IP; they only respond to DNS queries sent to 8.8.8.8 and 8.8.4.4. So the following will not work:

dig google.com @173.194.94.135

Now to see the DNS cluster load balancing in action. I am doing a dig query from our Telx/Digital Realty POP in Atlanta, GA (we do peer with Google at this location). I ran the dig query about 10 times and received the following unique DNS cluster member IPs as responses.

dig o-o.myaddr.l.google.com -t txt +short @8.8.8.8
"74.125.42.138"
"173.194.102.132"
"74.125.177.5"
"74.125.177.74"
"74.125.177.71"
"74.125.177.4"

All of these are Google DNS networks in Atlanta:

74.125.42.0/24       atl
74.125.177.0/24      atl
172.217.36.0/24      atl
173.194.102.0/24     atl
2607:f8b0:4002::/48  atl
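
If you prefer to do the lookup and the matching in one go, here is a rough dnspython sketch of the same procedure (assuming dnspython 2.x and that each TXT string in locations.publicdns.goog is a "prefix pop" pair, as the output above suggests):

import ipaddress

import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ['8.8.8.8']

# Build a prefix -> POP-code map from Google's published locations record
prefix_to_pop = {}
for rdata in resolver.resolve('locations.publicdns.goog.', 'TXT'):
    for chunk in rdata.strings:
        prefix, pop = chunk.decode().split()
        prefix_to_pop[ipaddress.ip_network(prefix)] = pop

# Ask which cluster member answered our query (skip the edns0-client-subnet string)
server_ip = None
for rdata in resolver.resolve('o-o.myaddr.l.google.com.', 'TXT'):
    for chunk in rdata.strings:
        try:
            server_ip = ipaddress.ip_address(chunk.decode())
        except ValueError:
            pass

# Match the answering server against the cluster prefixes
for net, pop in prefix_to_pop.items():
    if server_ip is not None and server_ip in net:
        print(f'{server_ip} belongs to {net}, cluster {pop}')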



I just thought this would be helpful when troubleshooting Google DNS issues.


