Squid & Squivi2



DBGTMaster
16.01.07, 14:56
Hello,

I'm setting up a proxy, and it is already up and running. Now I wanted to add a virus scanner to Squid, and came across Squivi2. Unfortunately it just won't scan for viruses. As the virus scanner I'm using clamav.


OK, let me walk through everything I have configured so far:

First I installed all the prerequisites listed in the PDF.
Then I configured squivi:


#
# configuration file for squivi2 - SSch September 2004
#



###############
# This file is part of SquiVi2.
#
# SquiVi2 is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# SquiVi2 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with SquiVi2; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# Copyright 2003, 2004 Steffen Schoch (sschoch@users.sourceforge.net)
###############



###############
# path to conf and squiviscan
basedir = /srv/www/cgi-bin
confpath = ${basedir}/etc/squivi.conf
squiviscan = ${basedir}/bin/squiviscan.pl


###############
# squid related settings
squiduser = root # user running squid
squidgroup = root # group running squid


###############
# downloaddir

#
# annotation: in that dir files will be saved for the download, as well as
# a catalog for internal use. This happens with the uid squid is running
# as. Remember that this user needs write access to that directory! Also
# remember that your webserver needs write access to the .authdata file!
#
downloaddir = ${basedir}/download


###############
# User Output
showoutput = 1 # if set to 1 a status page will display the current progress
longoutput = 1 # short or long report
redirectftp = 1 # redirect ftp to http - this is a workaround for a problem concerning MS-IE
usenewwin = 1 # opens a new window (only if redirectftp = 1)
backnewwin = 0 # if usenewwin = 1, history.back is called after n seconds (0 to disable, recommended)
# There could be problems on some browser/website combinations: it could build a loop
# if history.back points to a redirection
refreshtime = 5 # the sleep time until the next refresh of the report
authlifetime = 600 # max lifetime of auth information
webbase = http://192.168.15.15/cgi-bin/htdocs # where is your Webserver
squiviauth = ${webbase}/squiviauth.cgi # URL to squiviauth.cgi
squivicgi = ${webbase}/squivi.cgi # URL to squivi.cgi
squivicss = ${webbase}/squivi.css # URL to squivi.css
squivilogo = ${webbase}/squivi.png # URL to squivi.png, the logo for SquiVi2
squivilogourl = http://squivi2.sf.net/ # URL logo points to
lang = de # which language should be used in the frontend - you find the possibilities in the lang dir


###############
# wget settings (the defaults should do for most of you)
wgetbin = /usr/bin/wget
wgetusedata = 1 # set to 1 if you want wget to send your referer, UserAgent, cookies and so on...
waitfordata = 90 # how long to wait for request data (e.g. cookies, useragent, ...)
wgetcookie = --cookies=off --header='Cookie: <cookie>' # wget arg for sending cookies (<cookie> will be replaced)
wgetuseragent = -U '<useragent> SquiVi2/squivi2.sf.net' # wget arg for sending the user agent (<useragent> will be replaced)
wgetreferer = --referer='<referer>' # wget arg for sending the referer url (<referer> will be replaced)
#wgetuseragent = -U "Mozilla/8 (compatible; SquiVi2; squivi2.sf.net)" # this one is my favorite!
wgetauthpass = --http-user='<user>' --http-passwd='<pass>' # wget arg for sending auth information
# (<user>, <pass> will be replaced)
# other args for wget
wgetargs = -N -S -t 1 --passive-ftp
wgetargs = --retr-symlinks
# if you want to reduce the bandwidth
#wgetargs = --limit-rate=7k
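The repeated `wgetargs` lines above presumably accumulate into a single wget command line (an assumption about how SquiVi joins them, not documented here); a quick shell sketch assembles the resulting invocation for inspection, with `http://example.com/file.zip` as a placeholder URL:

```shell
# Hypothetical: how the two wgetargs lines above might combine. wget is not
# actually run here; we only assemble the command line to look at it.
args="-N -S -t 1 --passive-ftp"   # first wgetargs line
args="$args --retr-symlinks"      # second wgetargs line is appended
echo "wget $args http://example.com/file.zip"
```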


#
# annotation: if you have performance problems with MS-IE (it is fast up to
# 99 percent of your download but then you have to wait...), set the squid
# option 'client_persistent_connections' to off. That's the bug explained in
# the apache documentation (keep-alive and IE).
#


###############
# Logging - (syslog)
logfacility = LOG_USER
loglevel = 9


###############
# Limitations for downloads (only for compressed data)
# 0 = infinite (use with care!)
#
limitlevel = 10 # how many levels deep to search for nested compressed files
limitarchiv = 50 # max number of compressed files
limitsize = 1 # max size of all files (incl. compressed data)
limitfiles = 1000 # max number of files


###############
# Definition Packer - here you can define your own decompression tools

# extension = file extension, e.g. .zip, .tar.gz, ...
# bin = absolute path to the binary
# args = optional args

<packer tar>
extension = tar
bin = /bin/tar
args = xvf
</packer>
<packer tgz>
extension = tgz
bin = /bin/tar
args = xvfz
</packer>
<packer zip>
extension = zip
bin = /usr/bin/unzip
args = -o
</packer>
<packer rar>
extension = rar
bin = /usr/bin/unrar
</packer>
<packer gzip>
extension = gz
bin = /bin/gzip
args = -dv
</packer>
<packer bzip2>
extension = bz2
bin = /usr/bin/bzip2
args = -dv
</packer>
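Each packer block above maps a file extension to a decompression command; assuming SquiVi invokes the binary as `bin args archive`, the `<packer tar>` entry corresponds to an invocation like the one below, demonstrated with a throwaway archive:

```shell
# Build a small tar archive, then extract it the way the <packer tar>
# definition would: /bin/tar xvf <file>
tmp=$(mktemp -d) && cd "$tmp"
echo hello > inner.txt
tar cf download.tar inner.txt
rm inner.txt
/bin/tar xvf download.tar   # verbose extract, as configured above
cat inner.txt               # the extracted file is back
```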


###############
# Redirector settings
defaultaction = noscan # the default action (should stay noscan until you know what you're doing)


#
# template-noscan - things which will not be scanned
#
#<noscan template>
# url = a regex
# method = a regex
# auth = a regex
# host = a regex
# action = noscan|redirect http://irgendwo.de/test.cgi?url=<url>&host=<host>&auth=<auth>&method=<method>
#</noscan>
#
# annotation:
# - above you can see a template definition
# - with noscan you can define which files should not be scanned
# - it can be useful to know how the squid redirector works
# - the following config should be applicable for most users
# - if you use redirect as action, the <..> fields will be replaced with the original values
# - more than one line will be connected using AND
# - use ! for NOT
# - pcre is used for patterns
# - if you need an OR, build it by using | in your regex
# - your definitions must be unique! SquiVi may use a different order of your definitions
#

#
# Down here you see my example configuration. It should already fit your needs;
# no changes are necessary - for now...
#

# scanning for 'connect' doesn't make sense...
<noscan method>
method = ^connect$
action = noscan
</noscan>
# no virus is known for a text file
<noscan text>
url = \.(s?html?|txt)$
action = noscan
</noscan>
# JavaScript, CSS, Java etc.
<noscan js_css>
url = \.(class|js|css|jar)$
action = noscan
</noscan>
# Images
<noscan images>
url = \.(gif|jpe?g|png|bmp|ico|tiff?)$
action = noscan
</noscan>
# workaround for pages like google.com - .com isn't a com file!
<noscan startpage>
url = /$|//[^/]+$
action = noscan
</noscan>
# if there is no dot in the url (http://web.de/nodotinhere)
<noscan nodotinurl>
url = //[^/]+[^.]+$
action = noscan
</noscan>
# PDF and PS
<noscan pdfps>
url = \.(pdf|ps)$
action = noscan
</noscan>
# crl-files
<noscan crl>
url = \.crl$
action = noscan
</noscan>
# media (flash could use args)
<noscan media>
url = \.((swf|mp3|wav|avi|mpg|mpeg)$)|(swf\?)
action = noscan
</noscan>
# DLLs (some servers have a strange behavior if you scan dlls...)
<noscan dll>
url = \.dll\??
action = noscan
</noscan>
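Since the patterns are pcre, each noscan rule can be sanity-checked offline before restarting squid; for the simple patterns above, `grep -E` behaves the same way. A quick check using the images rule:

```shell
# Test the 'images' noscan pattern against two sample URLs.
pat='\.(gif|jpe?g|png|bmp|ico|tiff?)$'
echo 'http://example.com/logo.png'  | grep -Eq "$pat" && echo "noscan: matches"
echo 'http://example.com/setup.exe' | grep -Eq "$pat" || echo "scan: no match"
```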


#
# If no noscan section matches, the following scan sections will be tested.
# Please note that you can't define an action here - it is always scan!
#


# you can't receive mime information from an FTP server...
<scan ftp>
url = ^ftp://
url = \.(exe|zip|com|doc|xls|xlt|mdb|pps|ppt|rar|gz|tgz|bz2)$
</scan>
# scan http, unless it's javascript
<scan httpscan>
url = ^http
mime = ^application
mime = ! ^application\/x-javascript
# use this line if you don't want to check mime-types but still want to use basic auth!
# mime = .*
</scan>

#
# annotation: SquiVi2 can handle http status code 401 auth required. Please
# remember that LWP only supports basic auth. Therefore SquiVi2 supports
# only basic auth, too! To use auth, include a mime directive in your scan
# sections!
#


#
# Finally you can define the virus scanners which should be used. You can
# use as many scanners as you like.
#
# annotation: please remember that the scanner is started with the uid of
# squid. Be sure that the scanner works well with that uid. I had some
# problems with bdc and clamav: after each update of their pattern files I
# had to grant all users read access on these files.
#


# BitDefender
#<scanner bdc>
# bin = /usr/bin/bdc
# args = --all --list
# exitnovirus = 0
#</scanner>

# ClamAV
<scanner clamscan>
bin = /usr/bin/clamscan
args = --verbose --recursive
exitnovirus = 0
</scanner>

# H+BEDV Datentechnik
#<scanner antivir>
# bin = /usr/bin/antivir
# args = -s --allfiles
# exitnovirus = 0
#</scanner>

# NAI uvscan
#<scanner uvscan>
# bin = /usr/local/uvscan/uvscan
# args = -r -v --summary
# exitnovirus = 0
#</scanner>
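On the `exitnovirus` setting: clamscan exits with status 0 when no virus is found and 1 when a virus is detected, so `exitnovirus = 0` means any other exit status counts as an infection. A stand-in scanner illustrates the comparison (the real test would run clamscan against an EICAR test file):

```shell
# Stand-in for a scanner run: return 1, as clamscan does on a hit.
fake_scan() { return 1; }
exitnovirus=0
fake_scan
if [ $? -ne "$exitnovirus" ]; then
    echo "virus found - block the download"
fi
```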

OK, after that I added the following line to the squid config (/etc/squid/squid.conf):


redirect_program /srv/www/cgi-bin/bin/squivi.pl -c /srv/www/cgi-bin/etc/squivi.conf

Then I restarted the proxy with "rcsquid restart" and entered it in my browser. So far everything works.
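One way to narrow the problem down: a squid redirector can be exercised outside of squid. Squid writes one request per line to the helper's stdin in the form `URL client_ip/fqdn user method` and expects one line back (a rewritten URL, or an empty line for "don't touch"). The stand-in loop below shows the protocol; feeding the same input line to `/srv/www/cgi-bin/bin/squivi.pl -c /srv/www/cgi-bin/etc/squivi.conf` by hand would reveal whether it answers at all:

```shell
# Simulate squid's side of the redirector protocol with a pass-through
# helper that just echoes the URL back unchanged.
printf 'http://example.com/eicar.com 192.168.15.20/- - GET\n' |
while read -r url rest; do
    echo "$url"   # a real helper would emit its rewrite here
done
```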

But when I download a test virus, I can download it just fine; nothing gets blocked. When I check the file with "clamscan", it is detected as a virus.

So what have I configured incorrectly?

Regards