[Oisf-users] Questions on suricata configuration

unite unite at openmailbox.org
Tue Jan 20 14:22:20 UTC 2015


Hi, Andreas!

On 2015-01-20 15:01, Andreas Herz wrote:
> On 19/01/15 at 17:06, unite wrote:
>> 2. Second one is about NFQ modes. If I understood correctly, the
>> default nfq mode is "accept". So, after passing through suricata, a
>> packet should be accepted or dropped, so suricata won't pass it back
>> to iptables. However, when I test, it in fact does pass it back. My
>> iptables rules are:
>> iptables -A FORWARD -s 172.25.25.0/24 -j NFQUEUE --queue-num 0
>> (172.25.25.0/24 is my test net with "malicious" host, it is excluded 
>> from
>> HOME_NET variable)
> 
> How do you check that it comes back?
> I have the same setup with -j NFQUEUE and the suricata inline mode is
> the last step the packets take (unless you did configure something else
> in suricata).
> 
>> I've enabled the rule which alerts of too big ICMP packets and from
>> "malicious" host try to ping the host in my another network - 
>> 10.0.0.0/24.
>> Alerts are generated, I can see them in fast.log and also on snorby 
>> WebUI.
>> Packets still pass. Then I add the following rule:
>> iptables -A FORWARD -p icmp -j DROP (so it is the second one in the 
>> iptables
>> chain).
> 
> Does it just alert or did you convert the rules to use "drop" instead 
> of
> "alert"?
> 
>> And those ICMP packets are being dropped. New alerts keep being 
>> generated in
>> suri, so I believe the traffic passes through it, and then gets back 
>> to
>> iptables which is dropping them. If I delete this second rule traffic 
>> passes
>> again.
> 
> Can you paste your suricata config? Are you sure that these are the 
> same
> packets that go through suricata?
> 
>> The question: isn't the default NFQ mode "accept"? Or is the
>> behaviour I see expected and I just didn't get the point in the
>> suricata.yaml guide?
> 
> If you didn't change anything in the config it should be fine, yes.
> 
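For reference, the nfq section of suricata.yaml looks like this (a sketch of the 2.0.x defaults; the commented values are taken from the shipped example config). As far as I understand netfilter, in "accept" mode Suricata issues an NF_ACCEPT verdict, which resumes traversal at the next rule in the same iptables chain rather than skipping the rest of the chain:

```yaml
# suricata.yaml - NFQ section (defaults sketched for 2.0.x)
nfq:
   mode: accept      # accept (default) | repeat | route
#  repeat-mark: 1    # only used with mode: repeat
#  repeat-mask: 1
#  route-queue: 2    # only used with mode: route
```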
I've double checked everything one more time. Results are:

a) If my iptables rules are like this and the rule is set to "alert":
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in  out  source          destination
    0     0 NFQUEUE  all  --  *   *    172.25.25.0/24  0.0.0.0/0    NFQUEUE num 0
when a ping passes through, the alert is generated (fast.log) and the
ping is forwarded as it should be - OK.

b) If my iptables rules are like this and the rule is set to "alert":
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in  out  source          destination
    1    84 NFQUEUE  all  --  *   *    172.25.25.0/24  0.0.0.0/0    NFQUEUE num 0
    1    84 DROP     icmp --  *   *    0.0.0.0/0       0.0.0.0/0
The alert is generated (in fast.log; drop.log is empty) and the packet
is dropped. I'm pretty sure it is the same packet - the packet/byte
counters of both rules increment simultaneously and by the same values,
and my testing host is the only one in the testing network, so no one
can generate the traffic except me.

c) If my iptables rules are like this and the rule is set to "drop":
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in  out  source          destination
    0     0 NFQUEUE  all  --  *   *    172.25.25.0/24  0.0.0.0/0    NFQUEUE num 0
when a ping passes through, the alert is generated (in both fast.log
and drop.log) and the ping is dropped as it should be - OK.

d) If my iptables rules are like this and the rule is set to "drop":
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target   prot opt in  out  source          destination
    2   168 NFQUEUE  all  --  *   *    172.25.25.0/24  0.0.0.0/0    NFQUEUE num 0
    0     0 DROP     icmp --  *   *    0.0.0.0/0       0.0.0.0/0
The alert is generated (in both fast.log and drop.log), the packet gets
dropped as it should, and the second iptables rule (the ICMP DROP)
doesn't increment its counters - so the packet is dropped by Suricata
itself.
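(In the cases above, "the rule" means the large-ICMP signature with its
action keyword switched; a made-up sketch, not the actual ET rule, sid
or options:)

```
alert icmp any any -> $HOME_NET any (msg:"Large ICMP packet"; dsize:>800; sid:1000001; rev:1;)
drop  icmp any any -> $HOME_NET any (msg:"Large ICMP packet"; dsize:>800; sid:1000001; rev:1;)
```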

My config at the moment is not far from the default - here is the diff
between them:
sudo diff suricata-2.0.5/suricata.yaml /etc/suricata/suricata.yaml
16c16
< #max-pending-packets: 1024
---
> max-pending-packets: 8192
87c87
<       enabled: yes
---
>       enabled: no
237c237
<       enabled: no
---
>       enabled: yes
405c405
<   - profile: medium
---
>   - profile: high
572c572
<   timeout: 60
---
>   timeout: 30
612a613
>   prune-flows: 5
647,649c648,650
<     new: 60
<     established: 3600
<     closed: 120
---
>     new: 30
>     established: 300
>     closed: 60
651,652c652,653
<     emergency-established: 300
<     emergency-closed: 20
---
>     emergency-established: 100
>     emergency-closed: 10
787c788
<       enabled: no
---
>       enabled: yes
988c989
<     HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
---
>     HOME_NET: "[192.168.0.0/16,10.0.0.0/8]"
1049c1050
<   windows: [0.0.0.0/0]
---
>   windows: [10.0.0.0/8,0.0.0.0/0]
1053c1054
<   linux: [10.0.0.0/8, 192.168.1.100, "8762:2352:6241:7245:E000:0000:0000:0000"]
---
>   linux: [192.168.1.100, "8762:2352:6241:7245:E000:0000:0000:0000"]


>> 3. Is there a way to update the rules "on-the-fly" so, for example, to
>> enable/disable/update some rules and get them used by suricata without
>> restarting the engine itself?
> 
> Yes, you need to enable "live rule reload" and then use USR2 signal to
> trigger it. Keep in mind that there is an actual memory issue:
> 
> https://redmine.openinfosecfoundation.org/issues/1358
> 
Thanks. Given that bug, will it be OK to just restart suricata by
killing its PID with kill $pid-of-suricata and then starting it again?
Or might that cause some bad consequences?
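For reference, the live reload Andreas mentions is enabled in
suricata.yaml (a sketch, assuming 2.0.x; the option sits in the
detect-engine list):

```yaml
# suricata.yaml - detect-engine section (sketch for 2.0.x)
detect-engine:
  - rule-reload: true
```

With that in place, sudo kill -USR2 $(pidof suricata) re-reads the rule
files without restarting the engine.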

>> 7. Is the hardware I mentioned above suitable for checking 100Mb/s of
>> traffic? Or do I need a more powerful machine?
> 
> That depends on the ruleset, the traffic and on the system itself
> (running other processes that need a lot of cpu?).
> But 100Mbit/s (is what you meant I guess) should be no issue in most
> cases, I have slower systems which are capable of checking >100Mbit/s.

I don't think that I will be too paranoid :) Thanks.

-- 
With kind regards,
Alex


