ARP – 0x806 0x0
PVST+ – 0x010B 0x0
PVST – any any lsap 0x4242 0x0
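For reference, a sketch of how these values could be plugged into a MAC ACL to filter these protocols (ACL name and interface are my own, assuming a Catalyst platform that supports ethertype/lsap matching):

```
mac access-list extended BLOCK-L2-PROTOS
 deny any any 0x806 0x0        ! ARP
 deny any any 0x010B 0x0       ! PVST+ (SNAP)
 deny any any lsap 0x4242 0x0  ! PVST (802.2 SAP)
 permit any any
!
interface FastEthernet0/13
 mac access-group BLOCK-L2-PROTOS in
```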
Frame-Relay – still pretty easy, even PPP over FR
Layer 2 – Diagrams suck, and are wrong. Had to configure SVIs where it showed interfaces.
OSPF – Don’t forget the warning-only keyword when limiting LSAs (max-lsa) if it doesn’t tell me to specifically throttle stuff. Otherwise, OSPF is still easy.
RIP – unicast updates are cute, still easy
EIGRP – see notes on RIP
Redistribution – fairly easy scenario, and fairly simple to see that R2 is going to have suboptimal routing without adjusting OSPF external AD
BGP – still fairly easy, but got caught up a bit by the sync requirement. Would have lost points in the real thing for forgetting about bgp backdoor on R6, and should have kicked SW1’s BGP internal AD down instead of bumping RIP’s up. Don’t think it matters either way; both would still make BGP routes more preferred than anything. Glad I was correctly able to deduce that this was the router that needed sync disabled.
IPv6 – tunnelling is still a bitch. Need to spend some time learning to identify which fucking tunnelling tech to use. Also need to remember the bgp redistribute-internal command when I’m kicking routes into IGP from BGP (think this applies to ipv4 as well).
QoS – Sick of legacy QoS shit.
NAT – got this, except ID’d the wrong interface as outside for the translation. Had a feeling that was wrong, but should have tried it before I looked at the answer key
Multicast – have to remember the viability of tunnels, but a little pissed off at the rules being broken. Rules said don’t make any new interfaces, and solution has it making interfaces. Regardless, have to remember the usefulness of tunnels in overcoming RPF issues.
Before actually applying NetFlow to any interfaces, you must define the characteristics of the flow you want to capture.
ip flow-capture – This command defines additional characteristics of the flow to capture, such as VLAN ID tags, ICMP type codes, MAC addresses, packet lengths, TTL, etc.
ip flow-export – defines the version of Netflow, the destination of where to export, the source interface of the export, the interface name of the flow, and the BGP origin AS if available
ip flow-cache – defines how many entries to put into the cache
Once you’ve defined the characteristics of the flow, you put it into play by defining whether to capture flows inbound or outbound, this is done with interface level commands:
ip flow ingress – captures all traffic coming into the interface. The flow is captured before anything is applied to it (ACL, rate-limiting, NAT, encryption, etc). Ingress Netflow cannot look inside MPLS packets
ip flow egress – captures traffic that is transiting the router outbound, but not traffic that is generated locally. Can see MPLS packets, as they’re initially sent out untagged. egress netflow is captured *after* packet manipulation (ACL, rate-limiting, NAT, encryption, etc) is applied. There is a performance hit for egress Netflow
show ip cache verbose flow – this shows the flows in the router’s cache. If you generate the traffic you’re looking for, it should show up here.
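Putting the pieces above together, a minimal sketch (the collector address, port, and interface are made up):

```
ip flow-export destination 10.0.0.100 9996
ip flow-export source Loopback0
ip flow-export version 9
ip flow-cache entries 4096
!
interface FastEthernet0/0
 ip flow ingress
 ip flow egress
!
! verify with: show ip cache verbose flow
```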
An aggregation cache is a separate cache from the main cache that aggregates information. This is based off entries in the routing table. Basically, this lets you see how much traffic is either going to or coming from a destined attribute, instead of specific hosts.
ip flow-aggregation cache <type> – type is the type of aggregation you want to do. You can collect data based on ASN, prefixes, destination-prefix, source-prefix, tos, ports, etc
Once the type is selected, you’re in cache configuration mode, which allows you to set your parameters like export (can be a different host and port from the main cache config) and version. Version must be NetFlow 8 or 9, as earlier versions do not support aggregation flows.

In the case of prefixes, you can define a minimum mask to aggregate on. I.e., if your routing table has a 10.0.0.0/8 summary, and you want to see individual prefix flows, and you define the minimum mask as /24, then the aggregation cache will see traffic to 10.12.4.9 as 10.12.4.0/24. Longest mask always wins. For example, define the minimum mask as /8, and let’s say you’re sending traffic to 188.8.131.52/16. Then the /16 will get installed in the aggregation table. Long story short: if the minimum mask is longer than the prefix in the routing table, the minimum mask is enforced. If the minimum mask is shorter than the prefix in the routing table, then the routing prefix mask is used.
Most important thing about configuring the aggregation-cache – MAKE SURE TO ENABLE IT. It is *not* turned on by default at creation.
show ip cache flow aggregation <type>
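A sketch of a destination-prefix aggregation cache (collector address made up), including the easy-to-forget enable step:

```
ip flow-aggregation cache destination-prefix
 cache entries 2048
 mask destination minimum 24
 export destination 10.0.0.100 9996
 export version 9
 enabled
!
! verify with: show ip cache flow aggregation destination-prefix
```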
Flow Sampling allows you to sample a random amount of packets instead of every packet, which lowers overhead and resource consumption quite a bit.
It is configured by first defining a flow sampler map with the global flow-sampler-map <name> command
This will put you in sampler config mode, which has two commands, exit and mode.
To sample 1 of every 20 packets, you would configure it as such:
mode random one-out-of 20
To apply it, you have two choices. Both choices will interfere with the previously defined interface command on ip flow, so you may need to remove them first.
1st Choice: Directly on the interface with the flow-sampler command. It takes the map name as a parameter, and egress as the final parameter. It can only be applied to egress traffic, not ingress, and you will need to disable ip flow egress, as it will override the sampler
2nd Choice: service-policy. Define a class map to match the traffic you want to sample, or use class-default in the policy-map for all traffic. Under the class, call for netflow-sampler <MAP NAME>, then apply the service-policy to the interface as normal. Again, you may need to disable the direct ip flow commands on the interface, as they will override the service-policy
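A sketch of both application methods described above (map and policy names are mine):

```
flow-sampler-map SAMPLE20
 mode random one-out-of 20
!
! 1st choice: directly on the interface
interface FastEthernet0/0
 flow-sampler SAMPLE20
!
! 2nd choice: via a service-policy
policy-map NETFLOW-SAMPLE
 class class-default
  netflow-sampler SAMPLE20
!
interface FastEthernet0/1
 service-policy input NETFLOW-SAMPLE
```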
1. Create an ACL with the routes you need to summarize
2. Apply the ACL to an unused interface, and no shut the interface
3. Run the following command
SW1#sh platform tcam table acl detail | i l3Source
l3Source: 01.01.00.00 FF.FF.FC.00
Numbers are in hex; the second set is the mask. Convert it to dotted decimal and that gives you the summary mask. In the output above, FF.FF.FC.00 = 255.255.252.0, so the summary is 1.1.0.0/22.
Tip courtesy of Narbik
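A sketch of steps 1 and 2 (networks, interface, and addressing are my own; the idea is that the switch's TCAM merge collapses the ACL entries, and the l3Source value/mask pair it shows is your summary):

```
ip access-list standard SUMMARIZE
 permit 1.1.0.0
 permit 1.1.1.0
 permit 1.1.2.0
 permit 1.1.3.0
!
interface FastEthernet0/24
 no switchport
 ip address 192.168.99.1 255.255.255.0
 ip access-group SUMMARIZE in
 no shutdown
!
! then: show platform tcam table acl detail | i l3Source
```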
backup interface is tied to the line protocol of the primary interface
backup interface is configured on the primary interface, not the secondary, with the backup interface <interface to function as secondary> command
if line protocol on s0/0/0 goes down, s0/1/0 comes up
admin down of the primary link DOES NOT activate the backup link
on frame-relay, LMI is what causes the line protocol to go down.
changing local lmi-type may allow you to test whether the backup link works
Line protocol doesn’t always indicate end-to-end connectivity. For example, when devices are not physically directly connected (i.e., a switch, frame relay switch, transport gear, etc. is in the middle), one side’s interface can go down while the other’s does not, so line protocol stays up.
Enhanced Object Tracking is necessary to track reachability in this case
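A minimal sketch of the backup interface setup from the notes above (addresses and delay values are mine):

```
interface Serial0/0/0
 ip address 10.1.1.1 255.255.255.0
 backup interface Serial0/1/0
 backup delay 5 10   ! 5s before activating backup, 10s before deactivating
!
interface Serial0/1/0
 ip address 10.1.2.1 255.255.255.0
```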
Interface needs the following configuration
no ip address
pppoe-client dial-pool-number <dialer pool #>
Create dialer interface
int dialer 1
dialer pool <dialer pool #>
ip address <dhcp or ip>
ip mtu 1492 (may not be necessary, unless you need to run something like ospf)
no ip address
pppoe enable group
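Putting the client side together, a sketch (pool number and interfaces are mine; I’ve added encapsulation ppp, which the notes don’t mention but the dialer typically needs):

```
interface FastEthernet0/0
 no ip address
 pppoe enable group global
 pppoe-client dial-pool-number 1
!
interface Dialer1
 dialer pool 1
 encapsulation ppp
 ip address negotiated
 ip mtu 1492
```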
Water Sleeps, but Enemy never rests.
Play the game, or the game plays you
You’re too smart for your own good. Shut the fuck up.
Don’t get caught