[Oisf-users] Suricata 4.0.3 with Napatech problems

Peter Manev petermanev at gmail.com
Wed Jan 17 11:36:17 UTC 2018


On Tue, Jan 16, 2018 at 4:12 PM, Steve Castellarin
<steve.castellarin at gmail.com> wrote:
> Hey Peter, I didn't know if you had a chance to look at the stats log and
> configuration file I sent.  So far, running 3.1.1 with the updated Napatech
> drivers, my system is running without any issues.
>

The toughest part of the troubleshooting is that I don't have the setup
to reproduce this.
I didn't see anything in the stats log that could lead me to a
definitive conclusion.
Can you please open a bug report on our Redmine with the details from
this mail thread?

Would it be possible to share the suricata.yaml (privately works too,
if you prefer; please remove all networks)?
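
The part I would like to compare first is the napatech capture section.
As a rough sketch (placeholder values, not your settings), that section
of a 4.0.x suricata.yaml looks something like:

  napatech:
    # Host Buffer Allowance: -1 = off, 1-100 = percentage of the host
    # buffer that can be held back
    hba: -1
    # either listen on all streams configured in the Napatech service,
    # or list the stream ids explicitly
    use-all-streams: yes
    streams: [0, 1, 2, 3]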

Thank you

> On Thu, Jan 11, 2018 at 12:54 PM, Steve Castellarin
> <steve.castellarin at gmail.com> wrote:
>>
>> Here is the zipped stats.log.  I restarted the Napatech drivers before
>> running Suricata 4.0.3 to clear out any previous drop counters, etc.
>>
>> The first time I saw a packet drop was at the 12:20:51 mark, where you'll
>> see "nt12.drop" increment.  During this time one of the CPUs acting as a
>> "worker" was at 100%.  Those drops stopped at the 12:20:58 mark, where
>> "nt12.drop" levels off at 13803.  The big issue was triggered at the
>> 12:27:05 mark in the file, where one worker CPU was stuck at 100%, followed
>> by packet drops in host buffer "nt3.drop".  Then a second "worker" CPU hit
>> 100% and packet drops appeared in buffer "nt2.drop" at 12:27:33.  I finally
>> killed Suricata just before 12:27:54, where you see all host buffers
>> beginning to drop packets.
>>
>> I'm also including the output from the "suricata --dump-config" command.
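>>
>> In case it helps when reading it: the stats.log polling interval comes
>> from the stats section of suricata.yaml.  A generic sketch of the
>> relevant bits (defaults, not my exact file):
>>
>>   stats:
>>     enabled: yes
>>     # seconds between counter dumps to stats.log
>>     interval: 8
>>
>>   outputs:
>>     - stats:
>>         enabled: yes
>>         filename: stats.log
>>         append: yes
>>         totals: yes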
>>
>> On Thu, Jan 11, 2018 at 11:40 AM, Peter Manev <petermanev at gmail.com>
>> wrote:
>>>
>>> On Thu, Jan 11, 2018 at 8:02 AM, Steve Castellarin
>>> <steve.castellarin at gmail.com> wrote:
>>> > Peter, yes that is correct.  I worked for almost a couple of weeks with
>>> > Napatech support and they believe the Napatech setup (ntservice.ini and
>>> > custom NTPL script) is working as it should.
>>> >
>>>
>>> Ok.
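>>>
>>> For reference, by "Napatech setup" I mean the host buffer definition in
>>> ntservice.ini plus the NTPL script that spreads flows over the streams.
>>> Roughly this shape (a sketch with placeholder sizes and stream counts,
>>> not your actual files):
>>>
>>>   # ntservice.ini (fragment)
>>>   HostBuffersRx = [16,16,0]   # [number of buffers, size in MB, NUMA node]
>>>
>>>   # NTPL (fragment)
>>>   Delete = All
>>>   HashMode = Hash5TupleSorted
>>>   Assign[streamid=(0..15)] = all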
>>>
>>> One major difference between Suricata 3.x and 4.0.x in terms of
>>> Napatech is that the Napatech code was updated - some fixes and
>>> updated counters.
>>> There were a bunch of upgrades in Suricata itself too.
>>> Is it possible to send over a stats.log from when the issue starts
>>> occurring?
>>>
>>>
>>> > On Thu, Jan 11, 2018 at 9:52 AM, Peter Manev <petermanev at gmail.com>
>>> > wrote:
>>> >>
>>> >>
>>> >> On 11 Jan 2018, at 07:19, Steve Castellarin
>>> >> <steve.castellarin at gmail.com>
>>> >> wrote:
>>> >>
>>> >> After my last email yesterday I decided to go back to our 3.1.1
>>> >> install of Suricata, with the upgraded Napatech version.  Since then
>>> >> I've seen no packets dropped with sustained bandwidth of between 1 and
>>> >> 1.7 Gbps.  So I'm not sure what is going on with my
>>> >> configuration/setup of Suricata 4.0.3.
>>> >>
>>> >>
>>> >>
>>> >> So the only thing that you changed is the upgrade of the Napatech
>>> >> drivers?
>>> >> The Suricata config stayed the same - you just upgraded to 4.0.3 (from
>>> >> 3.1.1) and the observed effect was that, after a while, all (or most)
>>> >> CPUs get pegged at 100% - is that correct?
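>>> >>
>>> >> If it is the worker threads that peg, the cpu-affinity settings are
>>> >> worth a second look.  A rough sketch of that part of suricata.yaml
>>> >> (placeholder core numbers, not your layout):
>>> >>
>>> >>   threading:
>>> >>     set-cpu-affinity: yes
>>> >>     cpu-affinity:
>>> >>       - management-cpu-set:
>>> >>           cpu: [ 0 ]
>>> >>       - worker-cpu-set:
>>> >>           cpu: [ "1-15" ]
>>> >>           mode: "exclusive"
>>> >>           prio:
>>> >>             default: "high"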
>>> >>
>>> >>
>>> >> On Wed, Jan 10, 2018 at 4:46 PM, Steve Castellarin
>>> >> <steve.castellarin at gmail.com> wrote:
>>> >>>
>>> >>> Hey Peter, no, there are no error messages.
>>> >>>
>>> >>> On Jan 10, 2018 4:37 PM, "Peter Manev" <petermanev at gmail.com> wrote:
>>> >>>
>>> >>> On Wed, Jan 10, 2018 at 11:29 AM, Steve Castellarin
>>> >>> <steve.castellarin at gmail.com> wrote:
>>> >>> > Hey Peter,
>>> >>>
>>> >>> Are there any error messages in suricata.log when that happens?
>>> >>>
>>> >>> Thank you
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Regards,
>>> >>> Peter Manev
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Peter Manev
>>
>>
>



-- 
Regards,
Peter Manev


