cron acting up
Hi folks,
since a crash, cron.daily has been throwing errors at me every day.
Subject: Cron <root@sysiphus> root test -e /usr/sbin/anacron || run-parts --report /etc/cron.daily
Date: Thu, 23 Oct 2003 06:28:23 +0200
/bin/sh: line 1: root: command not found
/etc/cron.daily/exim:
gzip: /var/log/exim/mainlog.0: No such file or directory
mv: cannot stat `/var/log/exim/mainlog.0.gz': No such file or directory
mv: cannot stat `/var/log/exim/mainlog.new': No such file or directory
/etc/cron.daily/logrotate:
error: error reading top line of /var/lib/logrotate/status
run-parts: /etc/cron.daily/logrotate exited with return code 1
/etc/cron.daily/standard:
mv: cannot stat `/var/log/setuid.new.tmp': No such file or directory
/etc/cron.daily/standard:
mv: cannot stat `./setuid.changes.3.gz': No such file or directory
gzip: ./setuid.changes.0.gz: No such file or directory
gzip: ./setuid.changes.0.gz: No such file or directory
gzip: ./setuid.changes.0: No such file or directory
mv: cannot stat `./setuid.changes.0.gz': No such file or directory
Those are the two issues; can anyone tell me how to get rid of them?
Regards
Profbunny
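Errors like these after a crash usually point at stale rotation state rather than broken scripts. A minimal recovery sketch follows; it runs against a scratch root so it is safe to try anywhere, and on the real machine you would drop the `$ROOT` prefix and run it as root (paths assume the Debian defaults):

```shell
# Recovery sketch for the crash leftovers. Uses a scratch ROOT so it can
# be tested safely; set ROOT="" and run as root to apply it for real.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/lib/logrotate" "$ROOT/var/log/exim"
printf 'garbage' > "$ROOT/var/lib/logrotate/status"   # simulate the corrupt file

# 1) "error reading top line of /var/lib/logrotate/status":
#    the status file is disposable; logrotate rewrites it on the next run.
rm -f "$ROOT/var/lib/logrotate/status"

# 2) gzip/mv complaints about missing mainlog.0 / setuid.changes:
#    recreate the base files so the rotation chain has something to work on.
touch "$ROOT/var/log/exim/mainlog" "$ROOT/var/log/setuid.changes"

ls "$ROOT/var/log/exim"   # mainlog
```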
Hi,
crontab corrupted? Post the entries.
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file.
# This file also has a username field, that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
25 6 * * * root test -e /usr/sbin/anacron || run-parts --report /etc/cron.daily
47 6 * * 7 root test -e /usr/sbin/anacron || run-parts --report /etc/cron.weekly
52 6 1 * * root test -e /usr/sbin/anacron || run-parts --report /etc/cron.monthly
27 4 1 6 * root /usr/local/f-prot/tools/check-updates.pl -cron
here's the crontab, maybe that helps you.
Profbunny
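One observation on the very first error line: "root: command not found" is what you get when a system-crontab entry, user field and all, is handed to /bin/sh, for instance because the line was pasted into a per-user crontab (which has no user field). That is easy to reproduce:

```shell
# Hand a system-style crontab command, with the "root" user field still in
# front, to sh the way cron runs a per-user entry; sh treats "root" as the
# command name.
msg=$(sh -c 'root test -e /usr/sbin/anacron' 2>&1) || true
echo "$msg"   # e.g. "sh: 1: root: not found"
```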
Hi,
actually looks the way it should. And what's in /etc/cron.daily/exim, /etc/cron.daily/logrotate and /etc/cron.daily/standard?
profbunny@sysiphus:/etc/cron.daily$ cat logrotate
#!/bin/sh
/usr/sbin/logrotate /etc/logrotate.conf
profbunny@sysiphus:/etc/cron.daily$ cat standard
#!/bin/sh
# /etc/cron.daily/standard: standard daily maintenance script
# Written by Ian A. Murdock <imurdock@gnu.ai.mit.edu>
# Modified by Ian Jackson <ijackson@nyx.cs.du.edu>
# Modified by Steve Greenland <stevegr@debian.org>
bak=/var/backups
LOCKFILE=/var/lock/cron.daily
umask 022
#
# Avoid running more than one at a time -- could happen if the
# checksecurity script lands on a network drive.
#
if [ -x /usr/bin/lockfile-create ] ; then
    lockfile-create $LOCKFILE
    if [ $? -ne 0 ] ; then
        cat <<EOF
Unable to run /etc/cron.daily/standard because lockfile $LOCKFILE
acquisition failed. This probably means that the previous days
instance is still running. Please check and correct if necessary.
EOF
        exit 1
    fi
    # Keep lockfile fresh
    lockfile-touch $LOCKFILE &
    LOCKTOUCHPID="$!"
fi
#
# Backup key system files
#
if cd $bak ; then
    cmp -s passwd.bak /etc/passwd || (cp -p /etc/passwd passwd.bak &&
        chmod 600 passwd.bak)
    cmp -s group.bak /etc/group || (cp -p /etc/group group.bak &&
        chmod 600 group.bak)
    if [ -f /etc/shadow ] ; then
        cmp -s shadow.bak /etc/shadow || (cp -p /etc/shadow shadow.bak &&
            chmod 600 shadow.bak)
    fi
    if [ -f /etc/gshadow ] ; then
        cmp -s gshadow.bak /etc/gshadow || (cp -p /etc/gshadow gshadow.bak &&
            chmod 600 gshadow.bak)
    fi
fi
if cd $bak ; then
    if ! cmp -s dpkg.status.0 /var/lib/dpkg/status ; then
        cp -p /var/lib/dpkg/status dpkg.status
        savelog -c 7 dpkg.status >/dev/null
    fi
fi
cd /var/log
umask 027
savelog -c 7 -m 640 -u root -g adm setuid.changes >/dev/null
checksecurity >setuid.changes
#
# Check to see if any files are in lost+found directories and warn admin
#
# Get a list of the (potential) ext2 l+f directories
lflist=`df -P --type=ext2 |awk '$6 == "/" {$6 = ""} /\/dev\// {printf "%s/lost+found ", $6}'`
# In each directory, look for files
for lfdir in $lflist ; do
    if [ -d "$lfdir" ] ; then
        more_lost_found=`ls -1 "$lfdir" | grep -v 'lost+found$' | sed 's/^/ /'`
        if [ -n "$more_lost_found" ] ; then
            lost_found="$lost_found
$lfdir:
$more_lost_found"
            # NOTE: above weird line breaks in string are intentional!
        fi
    fi
done
if [ -n "$lost_found" ]; then
    cat << EOF
Files were found in lost+found directories. This is probably
the result of a crash or bad shutdown, or possibly of a disk
problem. These files may contain important information. You
should examine them, and move them out of lost+found or delete
them if they are not important.

The following files were found:
$lost_found
EOF
fi
#
# Clean up lockfile
#
if [ -x /usr/bin/lockfile-create ] ; then
    kill $LOCKTOUCHPID
    lockfile-remove $LOCKFILE
fi
profbunny@sysiphus:/etc/cron.daily$ cat exim
#!/bin/sh
# Only do anything if exim is actually installed
if [ ! -x /usr/sbin/exim ]; then
    exit 0
fi
# Uncomment the following lines to get daily e-mail reports
#if [ -x /usr/sbin/eximstats ]; then
# eximstats </var/log/exim/mainlog \
# | mail postmaster -s"$(hostname) Daily email activity report"
#fi
# Cycle logs
if [ -x /usr/bin/savelog ]; then
    for i in mainlog rejectlog paniclog; do
        if [ -s /var/log/exim/$i ]; then
            savelog -p -c 10 /var/log/exim/$i >/dev/null
        fi
    done
fi
if [ -x /usr/sbin/exim_tidydb ]; then
    exim_tidydb /var/spool/exim retry >/dev/null
    exim_tidydb /var/spool/exim wait-remote_smtp >/dev/null
fi
lrwxrwxrwx 1 root root 4 26. Feb 2003 /bin/sh -> bash
permissions on the scripts are fine; run manually, none of the scripts produces any errors.
regards
profbunny
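"Fine by hand, fails under cron" is classically an environment problem: cron starts the job with an almost empty environment and only the PATH set in /etc/crontab, not your login shell's. Replaying that environment often reproduces such failures (shown here with a harmless echo; substitute the failing script):

```shell
# Run a command under (roughly) cron's environment: env -i clears the
# inherited one, so only the PATH/SHELL from the /etc/crontab above are set.
out=$(env -i PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin \
             SHELL=/bin/sh \
             sh -c 'echo "PATH as cron sees it: $PATH"')
echo "$out"
```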
Sorry,
somehow I lost track of this thread.
Well, nothing in your posts jumps out at me as wrong. The first error, "/bin/sh: line 1: root: command not found", would normally point to a configuration mistake (namely a user name written in where a command belongs), but that doesn't seem to be the case here.
A different question: do you really need those scripts? If not, delete the lot and set it up fresh.