I have written about automation with DNA-C before, and how it becomes really efficient once you use some external scripting. Sure, you can get a lot out of the web GUI, but you are still locked into a situation where you have to act as human middleware. This post is about how to utilize tools such as Python and Git to save time and improve quality.
Regardless of whether you are one person working with different DNA-C installations, or many people working with one, you have an operational model that demands some thinking – what is our source of truth?
DNA-C includes many things out of the box. Semi-automation, PnP and SWIM are extremely valuable. But if you combine this with some external scripting, that's when you really unleash the beast!
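As a minimal sketch of what such external scripting can look like, the snippet below renders device configuration from a version-controlled data structure – the "source of truth" that can live in Git. The data model and template are hypothetical examples, not part of any DNA-C API:

```python
# Keep the intended state in a version-controlled data structure
# (the "source of truth") and render device configuration from it.
# VLANS and TEMPLATE below are hypothetical placeholders.

VLANS = [
    {"id": 10, "name": "USERS"},
    {"id": 20, "name": "VOICE"},
]

TEMPLATE = "vlan {id}\n name {name}"

def render_vlan_config(vlans):
    """Render IOS-style VLAN configuration lines from the data model."""
    return "\n".join(TEMPLATE.format(**vlan) for vlan in vlans)

print(render_vlan_config(VLANS))
```

The rendered output can then be pushed through whatever provisioning mechanism you use, with the Git history doubling as an audit trail.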
MPLS TE, or Traffic Engineering, is quite an interesting topic. A lot of discussions regarding Segment Routing end up discussing how it can be used for TE purposes, but a lot can be done already in a standard MPLS setup using LDP and RSVP. Let's dig in!
When reading about DHCP snooping at CCNA or CCNP level, it's not very hard to get. On an access layer switch, we don't trust any port by default, and only trusted ports permit DHCP server messages. That's not a big deal, and not what this post is going to be about. But what happens in other scenarios?
There are many scenarios where DHCP snooping can cause troublesome behavior if you don't know what really happens under the hood of the switch.
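The basic access-layer case described above can be sketched with a hedged IOS config fragment (VLAN number and interface are placeholders):

```
ip dhcp snooping
ip dhcp snooping vlan 10
!
interface GigabitEthernet0/1
 description Uplink toward the DHCP server
 ip dhcp snooping trust
```

Everything that is not explicitly trusted stays untrusted, which is exactly where the surprising scenarios start.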
Did you know that OSPF can redistribute routes with their original next-hop addresses?
In the topology above, the leftmost router (let's call it Router 1) injects network 172.16.1.0/24 into EIGRP, and the ASBR redistributes it into OSPF. The ASBR will, as we know, create a type 5 LSA for the external network. But did you know that it can preserve the original next-hop address from EIGRP? The feature is called the "OSPF Type 5 LSA forwarding address" feature and lets us preserve the next-hop address in some topologies in order to save one unnecessary hop.
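As a hedged sketch, the redistribution itself is plain (process and AS numbers are placeholders); what decides whether the forwarding address gets filled in are the conditions noted in the comments:

```
router ospf 1
 redistribute eigrp 100 subnets
!
! The forwarding address in the type 5 LSA is set to the original
! next hop (instead of 0.0.0.0) roughly when: OSPF is enabled, and
! not passive, on the ASBR interface toward that next hop, and the
! OSPF network type on that interface is broadcast or NBMA.
```

When those conditions are not met, the forwarding address stays 0.0.0.0 and traffic is forwarded via the ASBR itself.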
One of the main issues with sparse-mode multicast is electing a rendezvous point (RP) and spreading information about where it is. I'm not going into any details about why we need an RP in this post, but I will examine Cisco's proprietary approach to this problem – Auto-RP.
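As a hedged sketch of the two Auto-RP roles (interface and scope values are placeholders):

```
! Candidate RP: announces itself on the Auto-RP announce
! group 224.0.1.39
ip pim send-rp-announce Loopback0 scope 16
!
! Mapping agent: listens to announcements and advertises the
! elected group-to-RP mapping on the discovery group 224.0.1.40
ip pim send-rp-discovery Loopback0 scope 16
```

Both roles can sit on the same router or on different ones; all other PIM routers simply listen to 224.0.1.40 to learn the mapping.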
IPSEC = Black magic?
IPSEC tunnels may seem like some kind of black magic that firewalls just happen to figure out. Especially when you start looking at IPSEC configuration at the CLI level. I expect the topic to come up at the CCIE lab, probably with several twists and definitely some DMVPN as well.
Let's jump right into a task. Refer to the topology depicted below:
R1 = Route reflector, reflecting VPNv4 routes between PE routers
R2 and R3 = PE routers
R4 and R5 = CE routers
The CE routers run OSPF with the PE routers, and an MPLS L3VPN is set up, redistributing OSPF<->BGP VPNv4.
A backdoor link is set up directly between the CE routers for redundancy.
Task: The backdoor link is very poor; make sure it's used for backup only, and that the MPLS link is used primarily.
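A hedged sketch of one classic solution (addresses, process numbers and VRF name are placeholders): over the backdoor link the OSPF routes are intra-area, while routes carried through the L3VPN come back as inter-area routes from the MPLS superbackbone, so the CEs prefer the backdoor regardless of cost. An OSPF sham-link between the PE routers makes the MPLS path intra-area too, so ordinary cost comparison can be used to prefer it:

```
! On R2 (mirrored on R3, with source/destination swapped);
! endpoints are /32 loopbacks inside the VRF
router ospf 10 vrf CUSTOMER
 area 0 sham-link 10.0.0.2 10.0.0.3 cost 10
```

With the sham-link in place, a high OSPF cost on the backdoor interface keeps it as backup only.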
I just ran across some timers and realized I need to sort things out for myself to be able to repeat before the lab exam.
What timers do we need to know with BGP and what do they do?
- BGP Scanner
- BGP I/O
- BGP keepalive/holdtime interval
BGP scanner is a function that runs per BGP process. It walks through all prefixes in the BGP table and checks the NEXT_HOP reachability for each prefix to verify that it's still valid. It also runs conditional advertisement, route injection and route dampening, and it imports new routes into the BGP table from the RIB via network statements and redistribute commands.
BGP I/O handles BGP update and keepalive messages and is configured per neighbor. Since it tells the router how often it should send update messages to a neighbor, it implicitly controls prefix batching: with a higher update timer, potentially more prefixes are sent in the same update; with an update timer of 0, each prefix update is sent in an individual update message and is not batched at all.
The update interval is configured per neighbor with the "advertisement-interval" command.
The keepalive interval and holdtime are configured for the entire BGP process and are used to verify whether a BGP session is alive or dead. Default values are 60 and 180 seconds.
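The timers above can be sketched in IOS configuration (AS number and neighbor address are placeholders):

```
router bgp 65000
 ! keepalive 30 s, holdtime 90 s for the whole process
 timers bgp 30 90
 ! minimum interval between update messages toward this neighbor
 neighbor 192.0.2.1 advertisement-interval 5
```

Note that a holdtime negotiated with a neighbor may differ from the configured value, since the lower of the two peers' values wins.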
Did you know that you can load-balance over unequal-cost paths with BGP, just as we can with EIGRP?
It is true – as long as bestpath selection is equal for all attributes up to and including MED: weight, local preference, AS path, origin and MED. The neighbors also have to be external. In EIGRP we have to rely on the feasibility check to be sure that we avoid routing loops; in BGP we rely on the AS-path check. So for external routes with equal attributes, including MED, we can install multiple paths and load balance between them.
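On Cisco IOS this behavior is associated with the BGP Link Bandwidth (dmzlink-bw) feature; a hedged sketch, with AS number and neighbor addresses as placeholders:

```
router bgp 65000
 address-family ipv4
  ! allow more than one external path to be installed
  maximum-paths 2
  ! enable unequal-cost balancing based on link bandwidth
  bgp dmzlink-bw
  neighbor 192.0.2.1 dmzlink-bw
  neighbor 192.0.2.5 dmzlink-bw
```

Traffic is then shared between the installed paths in proportion to the link bandwidth attached to each external neighbor.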