EdgeOS and Netflow

Update: Since I wrote this blog post in 2016, I’ve turned off netflow on my router. Why? Because I upgraded to gigabit fiber, and with netflow enabled my throughput was throttled to around 150 Mb/s. Not good! Giving up netflow is a small price to pay for full gigabit speeds. If you do enable netflow, keep this in mind.
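
For what it’s worth, turning it back off was just a matter of deleting the flow-accounting tree in configure mode; something like the following should do it (standard EdgeOS configure-mode commands, matching the config shown later in this post):

configure
delete system flow-accounting
commit
save
exit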

I’ve written a lot about getting stuff working on my Ubiquiti EdgeOS router. Recently, I got the idea in my head to enable netflow on the router to do some traffic analysis. My router does support exporting netflow data, so I thought it would be fairly simple to set up. In the end, it wasn’t too hard, but it did take some research and at least one dumb mistake.

Setting up netflow on the router wasn’t too hard at all. Below is the config I ultimately enabled:

system {
	flow-accounting {
		ingress-capture post-dnat
		interface eth2
		netflow {
			enable-egress {
				engine-id 2
			}
			engine-id 1
			server 192.168.2.12 {
				port 2055
			}
			timeout {
				expiry-interval 60
				flow-generic 60
				icmp 60
				max-active-life 604800
				tcp-fin 60
				tcp-generic 60
				tcp-rst 60
				udp 60
			}
			version 9
		}
		syslog-facility daemon
	}
}
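
If you’d rather build that up from scratch than edit the config tree directly, it maps onto configure-mode set commands roughly like this (double-check the option names against tab completion on your firmware version):

configure
set system flow-accounting ingress-capture post-dnat
set system flow-accounting interface eth2
set system flow-accounting netflow engine-id 1
set system flow-accounting netflow enable-egress engine-id 2
set system flow-accounting netflow server 192.168.2.12 port 2055
set system flow-accounting netflow version 9
set system flow-accounting netflow timeout expiry-interval 60
set system flow-accounting netflow timeout flow-generic 60
set system flow-accounting netflow timeout icmp 60
set system flow-accounting netflow timeout max-active-life 604800
set system flow-accounting netflow timeout tcp-fin 60
set system flow-accounting netflow timeout tcp-generic 60
set system flow-accounting netflow timeout tcp-rst 60
set system flow-accounting netflow timeout udp 60
set system flow-accounting syslog-facility daemon
commit
save
exit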

The timeout settings are all set to 60 seconds, which is far lower than the defaults. Lowering them makes the data less choppy: the default generic flow timeout is an hour, so a flow lasting less than an hour may not be exported at all until it ends, at which point a large chunk of incoming/outgoing data suddenly shows up all at once. Finally, the exported format is Cisco NetFlow v9.

Some of those settings required tweaking, particularly the post-dnat capture and the timeouts, but it wasn’t hard to set up at all. The first issue cropped up when it came time to find a collector for my CentOS server. The favorite on every list, ntop, is not free, so scratch that. I then turned to pmacct, specifically nfacctd as the collector, which I couldn’t get working for reasons that will soon become obvious. Still stymied, I turned to NFDUMP and nfcapd, which once again would not work. I got very frustrated and set it aside for a couple of weeks, coming back to it later.
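
For reference, a minimal nfcapd invocation looks something like the following; the capture directory here is just an example path:

# run nfcapd as a daemon, listening for NetFlow on UDP 2055,
# writing capture files under /var/cache/nfdump (example path)
nfcapd -D -p 2055 -l /var/cache/nfdump

# later, summarize everything collected so far
nfdump -R /var/cache/nfdump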

What frustrated me was the fact that I could see data packets arriving on port 2055 on my server. Doing a trusty tcpdump in very verbose mode showed netflow data arriving, and I even pulled out RFC3954 and groveled over the UDP payloads to validate that yes, this was in fact netflow v9 data in the packets. So then why weren’t any of the tools actually receiving data?
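
By “trusty tcpdump” I mean something along these lines, with eth0 standing in for whatever your server’s interface is actually called:

# show NetFlow packets arriving on UDP 2055, verbose, no name resolution
tcpdump -i eth0 -nn -vv udp port 2055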

It was at this point that the “duh” moment hit. IPTABLES. I have iptables as a host-based firewall on the CentOS server, and of course I did not have a rule allowing UDP traffic to port 2055 through. Once I added that rule, I started getting data via nfcapd. I learned a very important lesson: tcpdump sees traffic before iptables does, so even if you see data arriving via tcpdump, that doesn’t mean it is getting past your firewall. D’oh!
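
The fix was a rule along these lines (this assumes the classic iptables service on CentOS 6; a firewalld-based CentOS 7 box would use firewall-cmd instead):

# allow incoming NetFlow export traffic on UDP 2055
iptables -I INPUT -p udp --dport 2055 -j ACCEPT

# persist the rule across reboots (CentOS 6 style)
service iptables save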

At this point I was finally receiving data, and as readers of this blog know, once I have data, I usually want to put it into Splunk. Tune in later for that misadventure!
