[Oisf-users] Quick Question: Accurate/Full File Extraction for Submitting to Sandboxes
Peter Manev
petermanev at gmail.com
Sun Jan 6 09:23:25 UTC 2013
On Sun, Jan 6, 2013 at 4:04 AM, Kevin Ross <kevross33 at googlemail.com> wrote:
> Hi,
>
> I was wondering what people's experience is of accurately (and
> automatically) extracting files from network traffic (or PCAPs) and then
> automatically analysing them to work out whether they are suspicious
> before submitting them to a sandbox like Cuckoobox?
>
> My current method for getting an accurate file is really just processing
> Suricata metadata files in order to re-download any interesting files, which
> obviously causes issues for stealth, and it does not work if the download
> location was generated for one-time use or expects certain conditions to be
> met before allowing the download. Suricata's file extraction generally ends
> up storing only very small parts of the original files to disk, unless I am
> doing something wrong?
>
> If it can't be done accurately from live network traffic, is it possible to
> get the files from PCAPs in an accurate way, suitable for at least static
> analysis and ideally reliable enough that the samples will run on a device?
> I do have full packet capture, although not for a huge length of time -
> perhaps a day of traffic. Does anyone else have solutions they have built
> for this? I know it must be possible, as various network malware detection
> companies (FireEye, Damballa, HBGary etc.) take files from the network,
> although I am unsure how they accomplish this accurately enough to allow
> for proper execution.
>
> Thanks for any tips or thoughts.
> Kevin
>
>
>
> _______________________________________________
> Suricata IDS Users mailing list: oisf-users at openinfosecfoundation.org
> Site: http://suricata-ids.org | Support: http://suricata-ids.org/support/
> List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
> OISF: http://www.openinfosecfoundation.org/
>
Hi Kevin,
I think the file extraction is accurate.
Have you seen/followed the guide here:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/File_Extraction
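In case it helps, here is a minimal sketch of the relevant suricata.yaml
section from that guide, plus a store-everything rule (the exact option
names may differ slightly between versions, so do check the wiki page for
your release):

    file-store:
      enabled: yes    # turn on file extraction to disk
      log-dir: files  # directory (under the default log dir) for extracted files

and then a rule along these lines to actually store the files:

    alert http any any -> any any (msg:"FILE store all"; filestore; sid:1; rev:1;)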
I think you need to tweak the libhtp settings too:
> *libhtp.default-config.request-body-limit* / *
> libhtp.server-config.<config>.request-body-limit* controls how much of
> the HTTP request body is tracked for inspection by the http_client_body
> keyword, but also used to limit file inspection. A value of 0 means
> unlimited.
>
since the file extraction is done from HTTP...
i.e.:
> libhtp:
>   default-config:
>     personality: IDS
>     # Can be specified in kb, mb, gb. Just a number indicates
>     # it's in bytes.
>     request-body-limit: 0
>     response-body-limit: 0
>
>
Setting it to unlimited (0) could be memory intensive, depending on how much
traffic you inspect - but you could certainly set it to a smaller value
instead, for example 12mb.
That would allow you to extract any file up to 12 MB in size over HTTP.
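In other words, in the config above you would change the limits to something
like this (12mb is just my example value - pick whatever fits your memory
budget):

    libhtp:
      default-config:
        request-body-limit: 12mb
        response-body-limit: 12mb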
You could generally read through that guide as well, if you would like:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/MD5
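If you end up using the MD5 feature from that guide - handy for checking
extracted files against hash lists before sending them to a sandbox - it is,
as far as I recall, just an extra flag in the same file-store section
(again, please verify against the wiki for your version):

    file-store:
      enabled: yes
      force-md5: yes  # compute and log an MD5 for every extracted file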
Thank you
--
Regards,
Peter Manev