[Oisf-users] Tuning Suricata (2.0beta1) -- no rules and lots of packet loss
Tritium Cat
tritium.cat at gmail.com
Wed Aug 21 17:14:02 UTC 2013
That is what the comma represents in the ixgbe arguments: it reflects the
setting for each port of an Intel card.
Thus, for a system with two cards and a total of four ports, modprobe
ixgbe RSS=16,16,16,16 would enable 16 queues for each port, MQ=1,1,1,1
means multi-queue for each port (the default), and FdirMode=3,3,3,3 means
mode 3 for each port.
Unless I am horribly confused.
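For what it's worth, a sketch of persisting those per-port options. The option string is the one from above; the real file would be /etc/modprobe.d/ixgbe.conf, but a temp file is used here so the example is self-contained and touches nothing:

```shell
# Persist the per-port ixgbe options for a hypothetical 2-card / 4-port box.
# On a real sensor this line belongs in /etc/modprobe.d/ixgbe.conf.
conf=$(mktemp)
echo 'options ixgbe RSS=16,16,16,16 MQ=1,1,1,1 FdirMode=3,3,3,3' > "$conf"

# After reloading the driver (rmmod ixgbe && modprobe ixgbe), the values
# actually in effect can be read back from sysfs, e.g.:
#   cat /sys/module/ixgbe/parameters/RSS
grep '^options ixgbe' "$conf"
rm -f "$conf"
```

This only persists the options; the driver still has to be reloaded (or the box rebooted) before the new queue counts take effect.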
--TC
On Wed, Aug 21, 2013 at 10:11 AM, Tritium Cat <tritium.cat at gmail.com> wrote:
> Hello. Yes I am aware of that. You've not read the entire thread; I was
> using more than one card.
>
> --TC
>
>
> > On Wed, Aug 21, 2013 at 10:00 AM, vpiserchia at gmail.com
> > <vpiserchia at gmail.com> wrote:
>
>> Hello,
>>
>> Intel cards based on 82598/82599 can support up to 16 RSS queues only.
>>
>> for example read this:
>>
>> http://www.gossamer-threads.com/lists/ntop/misc/30009
>>
>> regards
>> -v
>>
>> On 08/21/2013 06:52 PM, Tritium Cat wrote:
>> > No, it doesn't work, at least in the sense of only 1% packet loss being
>> > considered a success. Something odd with the Intel cards is preventing
>> > more than 16 hardware queues from being used: the system only shows
>> > activity on 16 cores in workers mode, and all other CPUs are 100% idle.
>> > The RSS parameter to the ixgbe module needs to be set for each port,
>> > although it claims to automatically use the number of cores or the
>> > number of ports, whichever is greater. Also, again, about FdirMode=3:
>> > I don't think it applies here.
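One quick way to confirm how many queues a port actually got is to count its per-queue IRQs in /proc/interrupts. A sketch against a canned excerpt; the interface name eth4 and the exact line format are assumptions, so on a real sensor the heredoc would be replaced by the real file:

```shell
# Count the per-queue (TxRx) IRQs the ixgbe driver registered for one port.
# A canned /proc/interrupts excerpt is used here for illustration; on the
# sensor, run instead:  grep -c 'eth4-TxRx-' /proc/interrupts
grep -c 'eth4-TxRx-' <<'EOF'
 60:  123456  PCI-MSI-edge  eth4-TxRx-0
 61:  123457  PCI-MSI-edge  eth4-TxRx-1
 62:  123458  PCI-MSI-edge  eth4-TxRx-2
EOF
```

If the count stops at 16 regardless of the RSS setting, that matches the behavior described above.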
>> >
>> > I've since removed the additional cards and am just experimenting with
>> > one. autofp mode isn't working as I'd expect either.
>> >
>> > Adjusting the MTU did reduce memory consumption. I suppose it is meant
>> > to reflect the average path MTU of the flows, not the link connected to
>> > the sensor. The documentation could be clearer about this, as that part
>> > seems to imply something different (yes, reading more about MTU and IDS
>> > from various sources makes it clear). Also regarding documentation: the
>> > af-packet section should be updated where the zero-copy ring size
>> > conflicts with buffer_size. Values that are commented out are assumed
>> > to be defaults, as in many other configuration scenarios; I'm glad you
>> > pointed this out, as it is definitely not apparent from just looking
>> > at the configuration.
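To make the ring-size vs. buffer_size point concrete, here is a minimal sketch of the relevant af-packet options, assuming a 2.0-era suricata.yaml; the interface name and values are illustrative, not recommendations:

```yaml
af-packet:
  - interface: eth4
    threads: 16
    cluster-id: 99
    cluster-type: cluster_flow
    use-mmap: yes        # zero-copy: memory use is governed by ring-size
    ring-size: 200000    # frames per thread in the mmap ring
    # buffer-size applies to the non-mmap socket path only. Leaving it
    # commented out does NOT mean the ring derives a default from it.
    # buffer-size: 32768
```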
>> >
>> > I'm going to go away now to read code and experiment more.
>> >
>> > --TC
>> >
>> >
>> > autofp Example:
>> >
>> > capture.kernel_packets | RxAFPeth41 | 7117283101
>> > capture.kernel_drops | RxAFPeth41 | 4885784393
>> > capture.kernel_packets | RxAFPeth42 | 7290835993
>> > capture.kernel_drops | RxAFPeth42 | 5061427961
>> > capture.kernel_packets | RxAFPeth43 | 7213432976
>> > capture.kernel_drops | RxAFPeth43 | 4941736439
>> > capture.kernel_packets | RxAFPeth44 | 7273721753
>> > capture.kernel_drops | RxAFPeth44 | 5046375696
>> > capture.kernel_packets | RxAFPeth45 | 7702660203
>> > capture.kernel_drops | RxAFPeth45 | 5473406098
>> > capture.kernel_packets | RxAFPeth46 | 6526210366
>> > capture.kernel_drops | RxAFPeth46 | 4280571057
>> > capture.kernel_packets | RxAFPeth47 | 7473635100
>> > capture.kernel_drops | RxAFPeth47 | 5264888903
>> > capture.kernel_packets | RxAFPeth48 | 8001217687
>> > capture.kernel_drops | RxAFPeth48 | 5781338601
>> > capture.kernel_packets | RxAFPeth49 | 7935510106
>> > capture.kernel_drops | RxAFPeth49 | 5684606164
>> > capture.kernel_packets | RxAFPeth410 | 6672471328
>> > capture.kernel_drops | RxAFPeth410 | 4480440331
>> > capture.kernel_packets | RxAFPeth411 | 4012330752
>> > capture.kernel_drops | RxAFPeth411 | 2650530005
>> > capture.kernel_packets | RxAFPeth412 | 6938284654
>> > capture.kernel_drops | RxAFPeth412 | 4686886437
>> > capture.kernel_packets | RxAFPeth413 | 7368646714
>> > capture.kernel_drops | RxAFPeth413 | 5117305059
>> > capture.kernel_packets | RxAFPeth414 | 5284771030
>> > capture.kernel_drops | RxAFPeth414 | 3751148947
>> > capture.kernel_packets | RxAFPeth415 | 7373582300
>> > capture.kernel_drops | RxAFPeth415 | 5176332364
>> > capture.kernel_packets | RxAFPeth416 | 7114510564
>> > capture.kernel_drops | RxAFPeth416 | 4903112771
>> > capture.kernel_packets | RxAFPeth417 | 68112
>> > capture.kernel_drops | RxAFPeth417 | 0
>> > capture.kernel_packets | RxAFPeth418 | 80839
>> > capture.kernel_drops | RxAFPeth418 | 0
>> > capture.kernel_packets | RxAFPeth419 | 77292
>> > capture.kernel_drops | RxAFPeth419 | 0
>> > capture.kernel_packets | RxAFPeth420 | 90287
>> > capture.kernel_drops | RxAFPeth420 | 0
>> > capture.kernel_packets | RxAFPeth421 | 78012
>> > capture.kernel_drops | RxAFPeth421 | 0
>> > capture.kernel_packets | RxAFPeth422 | 74278
>> > capture.kernel_drops | RxAFPeth422 | 0
>> > capture.kernel_packets | RxAFPeth423 | 79919
>> > capture.kernel_drops | RxAFPeth423 | 0
>> > capture.kernel_packets | RxAFPeth424 | 84155
>> > capture.kernel_drops | RxAFPeth424 | 0
>> > capture.kernel_packets | RxAFPeth425 | 84760
>> > capture.kernel_drops | RxAFPeth425 | 0
>> > capture.kernel_packets | RxAFPeth426 | 85328
>> > capture.kernel_drops | RxAFPeth426 | 0
>> > capture.kernel_packets | RxAFPeth427 | 81765
>> > capture.kernel_drops | RxAFPeth427 | 0
>> > capture.kernel_packets | RxAFPeth428 | 83583
>> > capture.kernel_drops | RxAFPeth428 | 0
>> > capture.kernel_packets | RxAFPeth429 | 91101
>> > capture.kernel_drops | RxAFPeth429 | 0
>> > capture.kernel_packets | RxAFPeth430 | 104013
>> > capture.kernel_drops | RxAFPeth430 | 0
>> > capture.kernel_packets | RxAFPeth431 | 92905
>> > capture.kernel_drops | RxAFPeth431 | 0
>> > capture.kernel_packets | RxAFPeth432 | 98068
>> > capture.kernel_drops | RxAFPeth432 | 0
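The busy threads above are dropping roughly two thirds of their packets. A small awk sketch (assuming the stats lines keep the pipe-separated format shown) computes the per-thread drop rate; one counter pair from above is inlined as sample input:

```shell
# Compute per-thread kernel drop rate from Suricata capture counters.
# Sample input is inlined here; on a live sensor, feed it stats.log lines.
awk -F'|' '
  /capture\.kernel_packets/ { gsub(/ /, "", $2); pkts[$2] = $3 + 0 }
  /capture\.kernel_drops/   { gsub(/ /, "", $2)
                              if (pkts[$2] > 0)
                                printf "%s %.1f%%\n", $2, 100 * $3 / pkts[$2] }
' <<'EOF'
capture.kernel_packets | RxAFPeth41 | 7117283101
capture.kernel_drops | RxAFPeth41 | 4885784393
EOF
```

Run over the full dump, this makes the imbalance obvious at a glance: the first 16 threads sit near 68-71% loss while the rest see almost no traffic.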
>> >
>> >
>> >
>> > On Sun, Aug 18, 2013 at 10:43 PM, Cooper F. Nelson
>> > <cnelson at ucsd.edu> wrote:
>> >
>> > No problem and please let us know if the 'worker' mode config works for
>> > you. I'm planning on building a 40gig sensor and it would help if I
>> > knew how it performed with multiple NICs.
>> >
>> > -Coop
>> >
>> > On 8/16/2013 5:36 PM, Tritium Cat wrote:
>> >> Cooper,
>> >
>> >> Thanks again for the explanations and supporting information.
>> >
>> >> --TC
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
>> > Site: http://suricata-ids.org | Support:
>> http://suricata-ids.org/support/
>> > List:
>> https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>> > OISF: http://www.openinfosecfoundation.org/
>> >
>>
>>
>