[Oisf-users] HTTP Logging Update
Peter Manev
petermanev at gmail.com
Thu Jun 5 19:42:21 UTC 2014
On Thu, Jun 5, 2014 at 9:14 PM, Adnan Baykal <abaykal at gmail.com> wrote:
> I am using Suricata 2.0, but I will update the config and try again.
>
If I remember correctly, 2.0 had a bug where you needed to have both
eve.json (http) and http.log enabled in order to get HTTP logs written.
2.0.1 has that fixed.
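If it helps, a rough sketch of the relevant entries in the outputs
section of suricata.yaml for 2.0.x (key names can differ slightly
between versions - e.g. eve-log used "type: file" in 2.0):

  - eve-log:
      enabled: yes
      type: file
      filename: eve.json
      types:
        - http
  - http-log:
      enabled: yes
      filename: http.log
      append: yes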
P.S.
Please click "reply all" so your replies stay on the list :)
>
>
> On Thu, Jun 5, 2014 at 2:57 PM, Peter Manev <petermanev at gmail.com> wrote:
>>
>> On Thu, Jun 5, 2014 at 8:15 PM, Adnan Baykal <abaykal at gmail.com> wrote:
>> > here is some more info:
>> >
>> >   - http-log:
>> >       enabled: yes
>> >       filename: http.log
>> >       append: yes
>> >       #extended: yes     # enable this for extended logging information
>> >       custom: yes        # enable the custom logging format (defined by customformat)
>> >       customformat: "%a [**] %{User-agent}i"
>> >       #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
>> >
>> >
>> > detect-engine:
>> >   - profile: custom
>> >   - custom-values:
>> >       toclient-src-groups: 200
>> >       toclient-dst-groups: 200
>> >       toclient-sp-groups: 200
>> >       toclient-dp-groups: 300
>> >       toserver-src-groups: 200
>> >       toserver-dst-groups: 400
>> >       toserver-sp-groups: 200
>> >       toserver-dp-groups: 250
>> >   - sgh-mpm-context: auto
>> >   - inspection-recursion-limit: 3000
>> >
>> >
>> > flow:
>> >   memcap: 1gb
>> >   hash-size: 1048576
>> >   prealloc: 1048576
>> >   emergency-recovery: 30
>> >
>> > # This option controls the use of vlan ids in the flow (and defrag)
>> > # hashing. Normally this should be enabled, but in some (broken)
>> > # setups where both sides of a flow are not tagged with the same vlan
>> > # tag, we can ignore the vlan id's in the flow hashing.
>> > vlan:
>> >   use-for-tracking: false
>> >
>> >
>> > flow-timeouts:
>> >   default:
>> >     new: 5
>> >     established: 5
>> >     closed: 0
>> >     emergency-new: 5
>> >     emergency-established: 5
>> >     emergency-closed: 0
>> >   tcp:
>> >     new: 5
>> >     established: 100
>> >     closed: 10
>> >     emergency-new: 1
>> >     emergency-established: 5
>> >     emergency-closed: 5
>> >   udp:
>> >     new: 5
>> >     established: 5
>> >     emergency-new: 5
>> >     emergency-established: 5
>> >   icmp:
>> >     new: 5
>> >     established: 5
>> >     emergency-new: 5
>> >     emergency-established: 5
>> >
>> > stream:
>> >   memcap: 4gb
>> >   checksum-validation: no  # reject wrong csums
>> >   inline: no               # auto will use inline mode in IPS mode, yes or no set it statically
>>
>> ####
>>
>> >   max-sessions: 20000000
>> >   prealloc-sessions: 10000000
>>
>> ####
>>
>> Judging by the lines marked above, you are using an old config or an old
>> version of Suricata.
>> Could you try the latest Suricata with the latest config, enable eve.json
>> (http only), and see if it makes a difference?
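>>
>> Something along these lines in the outputs section (a sketch; adjust to
>> whatever default yaml ships with the release you install):
>>
>>   - eve-log:
>>       enabled: yes
>>       filename: eve.json
>>       types:
>>         - http:
>>             extended: yes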
>>
>>
>> >
>> >   #midstream: true
>> >   #asyn-oneside: true
>> >   reassembly:
>> >     memcap: 12gb
>> >     depth: 1mb  # reassemble 1mb into a stream
>> >     toserver-chunk-size: 2560
>> >     toclient-chunk-size: 2560
>> >     randomize-chunk-size: yes
>> >     #randomize-chunk-range: 10
>> >     #raw: yes
>> >
>> > Here is the relevant portion of the stats file:
>> >
>> > capture.kernel_packets | RxPFRp2p17 | 6073789
>> > capture.kernel_drops | RxPFRp2p17 | 0
>> > dns.memuse | RxPFRp2p17 | 848709
>> > dns.memcap_state | RxPFRp2p17 | 0
>> > dns.memcap_global | RxPFRp2p17 | 0
>> > decoder.pkts | RxPFRp2p17 | 6073790
>> > decoder.bytes | RxPFRp2p17 | 2412501092
>> > decoder.invalid | RxPFRp2p17 | 0
>> > decoder.ipv4 | RxPFRp2p17 | 6074122
>> > decoder.ipv6 | RxPFRp2p17 | 276
>> > decoder.ethernet | RxPFRp2p17 | 6073790
>> > decoder.raw | RxPFRp2p17 | 0
>> > decoder.sll | RxPFRp2p17 | 0
>> > decoder.tcp | RxPFRp2p17 | 4933100
>> > decoder.udp | RxPFRp2p17 | 595580
>> > decoder.sctp | RxPFRp2p17 | 0
>> > decoder.icmpv4 | RxPFRp2p17 | 2734
>> > decoder.icmpv6 | RxPFRp2p17 | 218
>> > decoder.ppp | RxPFRp2p17 | 0
>> > decoder.pppoe | RxPFRp2p17 | 0
>> > decoder.gre | RxPFRp2p17 | 354
>> > decoder.vlan | RxPFRp2p17 | 0
>> > decoder.vlan_qinq | RxPFRp2p17 | 0
>> > decoder.teredo | RxPFRp2p17 | 61
>> > decoder.ipv4_in_ipv6 | RxPFRp2p17 | 0
>> > decoder.ipv6_in_ipv6 | RxPFRp2p17 | 0
>> > decoder.avg_pkt_size | RxPFRp2p17 | 397
>> > decoder.max_pkt_size | RxPFRp2p17 | 1514
>> > defrag.ipv4.fragments | RxPFRp2p17 | 18078
>> > defrag.ipv4.reassembled | RxPFRp2p17 | 0
>> > defrag.ipv4.timeouts | RxPFRp2p17 | 0
>> > defrag.ipv6.fragments | RxPFRp2p17 | 0
>> > defrag.ipv6.reassembled | RxPFRp2p17 | 0
>> > defrag.ipv6.timeouts | RxPFRp2p17 | 0
>> > defrag.max_frag_hits | RxPFRp2p17 | 0
>> > tcp.sessions | RxPFRp2p17 | 110314
>> > tcp.ssn_memcap_drop | RxPFRp2p17 | 0
>> > tcp.pseudo | RxPFRp2p17 | 0
>> > tcp.invalid_checksum | RxPFRp2p17 | 0
>> > tcp.no_flow | RxPFRp2p17 | 0
>> > tcp.reused_ssn | RxPFRp2p17 | 0
>> > tcp.memuse | RxPFRp2p17 | 5348928
>> > tcp.syn | RxPFRp2p17 | 112183
>> > tcp.synack | RxPFRp2p17 | 15048
>> > tcp.rst | RxPFRp2p17 | 34856
>> > tcp.segment_memcap_drop | RxPFRp2p17 | 0
>> > tcp.stream_depth_reached | RxPFRp2p17 | 0
>> > tcp.reassembly_memuse | RxPFRp2p17 | 0
>> > tcp.reassembly_gap | RxPFRp2p17 | 0
>> > http.memuse | RxPFRp2p17 | 0
>> > http.memcap | RxPFRp2p17 | 0
>> > detect.alert | RxPFRp2p17 | 0
>> >
>> > capture.kernel_packets | RxPFRp2p18 | 5863379
>> > capture.kernel_drops | RxPFRp2p18 | 0
>> > dns.memuse | RxPFRp2p18 | 849275
>> > dns.memcap_state | RxPFRp2p18 | 0
>> > dns.memcap_global | RxPFRp2p18 | 0
>> > decoder.pkts | RxPFRp2p18 | 5863380
>> > decoder.bytes | RxPFRp2p18 | 2460791034
>> > decoder.invalid | RxPFRp2p18 | 0
>> > decoder.ipv4 | RxPFRp2p18 | 5863428
>> > decoder.ipv6 | RxPFRp2p18 | 236
>> > decoder.ethernet | RxPFRp2p18 | 5863380
>> > decoder.raw | RxPFRp2p18 | 0
>> > decoder.sll | RxPFRp2p18 | 0
>> > decoder.tcp | RxPFRp2p18 | 4880656
>> > decoder.udp | RxPFRp2p18 | 538940
>> > decoder.sctp | RxPFRp2p18 | 0
>> > decoder.icmpv4 | RxPFRp2p18 | 2987
>> > decoder.icmpv6 | RxPFRp2p18 | 201
>> > decoder.ppp | RxPFRp2p18 | 0
>> > decoder.pppoe | RxPFRp2p18 | 0
>> > decoder.gre | RxPFRp2p18 | 48
>> > decoder.vlan | RxPFRp2p18 | 0
>> > decoder.vlan_qinq | RxPFRp2p18 | 0
>> > decoder.teredo | RxPFRp2p18 | 35
>> > decoder.ipv4_in_ipv6 | RxPFRp2p18 | 0
>> > decoder.ipv6_in_ipv6 | RxPFRp2p18 | 0
>> > decoder.avg_pkt_size | RxPFRp2p18 | 419
>> > decoder.max_pkt_size | RxPFRp2p18 | 1514
>> > defrag.ipv4.fragments | RxPFRp2p18 | 17064
>> > defrag.ipv4.reassembled | RxPFRp2p18 | 0
>> > defrag.ipv4.timeouts | RxPFRp2p18 | 0
>> > defrag.ipv6.fragments | RxPFRp2p18 | 0
>> > defrag.ipv6.reassembled | RxPFRp2p18 | 0
>> > defrag.ipv6.timeouts | RxPFRp2p18 | 0
>> > defrag.max_frag_hits | RxPFRp2p18 | 0
>> > tcp.sessions | RxPFRp2p18 | 110186
>> > tcp.ssn_memcap_drop | RxPFRp2p18 | 0
>> > tcp.pseudo | RxPFRp2p18 | 0
>> > tcp.invalid_checksum | RxPFRp2p18 | 0
>> > tcp.no_flow | RxPFRp2p18 | 0
>> > tcp.reused_ssn | RxPFRp2p18 | 0
>> > tcp.memuse | RxPFRp2p18 | 5348928
>> > tcp.syn | RxPFRp2p18 | 112081
>> > tcp.synack | RxPFRp2p18 | 15365
>> > tcp.rst | RxPFRp2p18 | 34375
>> > tcp.segment_memcap_drop | RxPFRp2p18 | 0
>> > tcp.stream_depth_reached | RxPFRp2p18 | 0
>> > tcp.reassembly_memuse | RxPFRp2p18 | 0
>> > tcp.reassembly_gap | RxPFRp2p18 | 0
>> > http.memuse | RxPFRp2p18 | 0
>> > http.memcap | RxPFRp2p18 | 0
>> > detect.alert | RxPFRp2p18 | 0
>> >
>> > flow_mgr.closed_pruned | FlowManagerThread | 1151189
>> > flow_mgr.new_pruned | FlowManagerThread | 1106070
>> > flow_mgr.est_pruned | FlowManagerThread | 0
>> > flow.memuse | FlowManagerThread | 400679248
>> > flow.spare | FlowManagerThread | 1058085
>> > flow.emerg_mode_entered | FlowManagerThread | 0
>> > flow.emerg_mode_over | FlowManagerThread | 0
>> >
>> > On Thu, Jun 5, 2014 at 2:14 PM, Adnan Baykal <abaykal at gmail.com> wrote:
>> >>
>> >> Checksums are disabled.
>> >>
>> >>
>> >> On Thu, Jun 5, 2014 at 2:10 PM, Peter Manev <petermanev at gmail.com>
>> >> wrote:
>> >>>
>> >>> On Thu, Jun 5, 2014 at 8:06 PM, Adnan Baykal <abaykal at gmail.com>
>> >>> wrote:
>> >>> > Disabled vlan use-for-tracking as well.
>> >>> >
>> >>> >
>> >>>
>> >>> What is your setting for checksums in suricata.yaml - enabled or
>> >>> disabled?
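>> >>>
>> >>> (Meaning stream.checksum-validation - a sketch of the knob I have in
>> >>> mind; depending on the capture method there may also be a
>> >>> checksum-checks option in the capture section:
>> >>>
>> >>>   stream:
>> >>>     checksum-validation: no  # mirror ports often deliver bad checksums
>> >>> )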
>> >>>
>> >>>
>> >>> >
>> >>> > On Thu, Jun 5, 2014 at 2:04 PM, Peter Manev <petermanev at gmail.com>
>> >>> > wrote:
>> >>> >>
>> >>> >> On Thu, Jun 5, 2014 at 5:30 PM, Adnan Baykal <abaykal at gmail.com>
>> >>> >> wrote:
>> >>> >> > Here is what I did. I found a top talker - video streaming - and
>> >>> >> > put a bpf filter in place to filter it out (not (host 1.2.3.4)).
>> >>> >> > I am not dropping as many packets any more (about 3%-4%).
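>> >>> >> > (For reference, how the filter is attached depends on the capture
>> >>> >> > method - with pcap capture it can go straight on the command line,
>> >>> >> > e.g.
>> >>> >> >
>> >>> >> >   suricata -c suricata.yaml -i p2p17 'not host 1.2.3.4'
>> >>> >> >
>> >>> >> > with p2p17 standing in for the interface name; some capture
>> >>> >> > sections in the yaml also take a per-interface bpf-filter option.)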
>> >>> >> >
>> >>> >> > However, I still see an extremely low number of http entries in
>> >>> >> > the http log, and I don't see anything at all when I take out the
>> >>> >> > midstream and async entries from the yaml file.
>> >>> >> >
>> >>> >>
>> >>> >> Do you have VLANs on the mirror port?
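>> >>> >>
>> >>> >> (A quick way to check is tcpdump - substitute your interface name:
>> >>> >>
>> >>> >>   tcpdump -ni p2p17 -e -c 20 vlan
>> >>> >>
>> >>> >> If tagged frames are present, "vlan <id>" shows up in the output.)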
>> >>> >>
>> >>> >>
>> >>> >> >
>> >>> >> >
>> >>> >> > On Wed, Jun 4, 2014 at 8:14 PM, Adnan Baykal <abaykal at gmail.com>
>> >>> >> > wrote:
>> >>> >> >>
>> >>> >> >> Mbit
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> On Wed, Jun 4, 2014 at 4:38 PM, Peter Manev
>> >>> >> >> <petermanev at gmail.com>
>> >>> >> >> wrote:
>> >>> >> >>>
>> >>> >> >>> On Wed, Jun 4, 2014 at 10:33 PM, Adnan Baykal
>> >>> >> >>> <abaykal at gmail.com>
>> >>> >> >>> wrote:
>> >>> >> >>> > I do load about 7K rules. I need to go back to my sensor, but
>> >>> >> >>> > it is probably around 800MB/s.
>> >>> >> >>> >
>> >>> >> >>> >
>> >>> >> >>>
>> >>> >> >>> Just to confirm - is that 800 Mbit or MByte?
>> >>> >> >>>
>> >>> >> >>>
>> >>> >> >>> > On Wed, Jun 4, 2014 at 4:17 PM, Peter Manev
>> >>> >> >>> > <petermanev at gmail.com>
>> >>> >> >>> > wrote:
>> >>> >> >>> >>
>> >>> >> >>> >> On Wed, Jun 4, 2014 at 10:08 PM, Adnan Baykal
>> >>> >> >>> >> <abaykal at gmail.com>
>> >>> >> >>> >> wrote:
>> >>> >> >>> >> > I have been getting no HTTP logging at all on one of my
>> >>> >> >>> >> > sensors, and I have posted several questions about it to
>> >>> >> >>> >> > this list. Mind you, this sensor does drop a significant
>> >>> >> >>> >> > amount of traffic (about 50%), and I do understand that a
>> >>> >> >>> >> > lot of http traffic will be missed because of the drops,
>> >>> >> >>> >> > but having no entries at all in the http.log file was
>> >>> >> >>> >> > concerning. I thought I would at least see some entries.
>> >>> >> >>> >> >
>> >>> >> >>> >> > This morning, I found a setting:
>> >>> >> >>> >> >
>> >>> >> >>> >> >   midstream: true      # do not allow midstream session pickups
>> >>> >> >>> >> >   async_oneside: true  # do not enable async stream handling
>> >>> >> >>> >> > With the above settings applied to the stream section, I
>> >>> >> >>> >> > get a limited HTTP log. My question is: can this change in
>> >>> >> >>> >> > behavior be explained by dropped packets? Does this change
>> >>> >> >>> >> > further support the theory that this box is significantly
>> >>> >> >>> >> > undersized, and that a bigger box would operate normally
>> >>> >> >>> >> > with full http traffic?
>> >>> >> >>> >> >
>> >>> >> >>> >> > I am in the process of upgrading this sensor to a 32GB,
>> >>> >> >>> >> > 20-core system (it is currently 16GB, 8-core).
>> >>> >> >>> >> >
>> >>> >> >>> >> > Thanks,
>> >>> >> >>> >> >
>> >>> >> >>> >> > --Adnan
>> >>> >> >>> >> >
>> >>> >> >>> >> >
>> >>> >> >>> >> > _______________________________________________
>> >>> >> >>> >> > Suricata IDS Users mailing list:
>> >>> >> >>> >> > oisf-users at openinfosecfoundation.org
>> >>> >> >>> >> > Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
>> >>> >> >>> >> > List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>> >>> >> >>> >> > OISF: http://www.openinfosecfoundation.org/
>> >>> >> >>> >>
>> >>> >> >>> >> In general, if you have a significant percentage of drops,
>> >>> >> >>> >> you will be missing a lot of logs.
>> >>> >> >>> >> How much traffic do you inspect with that setup? (And how
>> >>> >> >>> >> many rules do you load?)
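>> >>> >> >>> >> A quick way to eyeball the drop ratio from stats.log:
>> >>> >> >>> >>
>> >>> >> >>> >>   grep -E "capture.kernel_(packets|drops)" stats.log | tail -4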
>> >>> >> >>> >>
>> >>> >> >>> >>
>> >>> >> >>> >> --
>> >>> >> >>> >> Regards,
>> >>> >> >>> >> Peter Manev
>> >>> >> >>> >
>> >>> >> >>> >
>> >>> >> >>>
>> >>> >> >>>
>> >>> >> >>>
>> >>> >> >>> --
>> >>> >> >>> Regards,
>> >>> >> >>> Peter Manev
>> >>> >> >>
>> >>> >> >>
>> >>> >> >
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> --
>> >>> >> Regards,
>> >>> >> Peter Manev
>> >>> >
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Regards,
>> >>> Peter Manev
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Regards,
>> Peter Manev
>
>
--
Regards,
Peter Manev