Trash cleaners


Overview

The trash cleaning system at UCSC has evolved from a simple one-line cron job that removed older files from the /trash/ directory into a complex set of interlocking scripts. This discussion outlines the procedures and lock files that keep the system running safely.

Recovery from problems

WARNING: Do not casually test commands on this system. The trash filesystem can contain literally millions of files, and even a simple ls command can seriously degrade the performance of the system. Be very careful how you work on this vital system.

Log in to hgnfs1 and check whether any processes are currently running:

ps -ef | grep -i qateam

It may be that a previous instance simply hasn't completed yet. Let it finish; you do not want to interrupt this system.

If there is nothing running, check the most recent log file to see if there is any message about the problem in:

/export/userdata/rrLog/YYYY/MM/cleanerLog.YYYY-MM-DDTHH.txt
/export/userdata/betaLog/YYYY/MM/cleanerLog.YYYY-MM-DDTHH.txt
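
For example, to look at the tail of the most recent RR cleaner log (a minimal sketch, assuming the current month's log directory exists):

  cd /export/userdata/rrLog/$(date +%Y/%m)
  tail -50 $(ls -rt cleanerLog.*.txt | tail -1)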

Alternatively, the temporary files under construction in /var/tmp/ may contain the error message from a failed command. Typical file names you may find there:

-rw-rw-rw- 1 85743056 Jul 18 10:46 refreshList.O18591
-rw-rw-rw- 1  1663224 Jul 18 10:46 sessionFiles.g18585
-rw-rw-rw- 1      935 Jul 18 10:46 saveList.g18588
-rw-rw-rw- 1  1963973 Jul 18 10:46 alreadySaved.d18582
-rw-rw-rw- 1 31782116 Jul 18 11:00 trash.atime.S24127
-rw-rw-rw- 1 25326398 Jul 18 11:01 one.hour.S24127
-rw-rw-rw- 1  9604147 Jul 18 11:01 eight.hour.S24127

You will always find these two files here:

-rw-rw-rw- 1 5133861 Jul 18 11:01 rr.8hour.egrep
-rw-rw-rw- 1  650758 Jul 18 11:02 rr.72hour.egrep

They are left here for perusal: they are the listings of the files that were removed during the previous cycle of the system. If you see only these two files here, the system should have completed successfully; when it fails, it leaves some of the other temporary files behind. These removed file listings are also archived as logs in:

/export/userdata/rrLog/removed/YYYY/MM/
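
To gauge how much the previous cycle removed, you can count lines in those listings (assuming, as appears to be the case, one removed path per line):

  wc -l /var/tmp/rr.8hour.egrep /var/tmp/rr.72hour.egrep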

When any of these scripts encounters problems and does not remove its lock files, the system remains off until the lock files can be manually removed. Email is sent to hiram, galt, chmalee, braney, and jgarcia when the scripts are in this state, as a reminder to check them. The log files should be examined to see whether there is any real problem. The usual case is that a bottleneck somewhere caused one script to fail and the others merely ran into themselves. In this case, go to the directory /home/qateam/trashCleaners/hgwbeta and create this file:

  cd ~/trashCleaners/hgwbeta
  date > force.run

This will cause the system to run during the next cycle.
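
Before forcing a run, it can help to confirm that nothing is actually running and that a leftover lock file is what is keeping the system off (the lock file path is described in the RR cleaner section below):

  ps -ef | grep -i qateam                # confirm nothing is running
  ls -og /export/userdata/cleaner.pid    # is a leftover lock file present?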

Primary trash directory

The current trash directory NFS server is hgnfs1.

You can log in to that machine as the qateam user.

A cron job running under the root user calls the scripts in the qateam directory. It is currently running twice a day, at 04:10 and 16:10. The cluster admins maintain this root crontab entry; it is a single command:

 /home/qateam/trashCleaners/hgwbeta/trashCleanMonitor.sh searchAndDestroy

This hgwbeta/trashCleanMonitor.sh script cleans trash files for hgwbeta custom tracks, then calls the primary RR trashCleanMonitor.sh to do the big job of cleaning the RR custom tracks.
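
Given the times above, the crontab entry presumably looks something like this (a sketch; the cluster admins' actual entry may differ):

  10 4,16 * * * /home/qateam/trashCleaners/hgwbeta/trashCleanMonitor.sh searchAndDestroy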

WARNING: Do not casually test commands on this system. The trash filesystem can contain literally millions of files, and even a simple ls command can seriously degrade the performance of the system. Be very careful how you work on this vital system.

Cleaner lock file

The trashCleanMonitor.sh script uses a lock file to prevent it from overrunning an already running instance of these scripts. When this lock file exists, the system will not start a new instance of the cleaners, and email is sent to hiram, galt, chmalee, braney, and jgarcia as an alert that the cleaners are overrunning themselves. They normally will not overrun themselves if everything is OK. If a previous instance failed, the lock file remains in place to keep the cleaners off until the error is recognized and dealt with. The complete cleaner system must finish successfully for the lock file to be removed.
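
The guard logic is the classic lock file pattern. A minimal illustrative sketch (not the actual script; the alert address is a placeholder):

  lockFile=/export/userdata/cleaner.pid
  if [ -f "${lockFile}" ]; then
    # a previous instance is still running, or failed and left its lock
    echo "trash cleaner lock file exists: ${lockFile}" \
      | mail -s 'trash cleaner overrun' alert@example.com
    exit 1
  fi
  echo $$ > "${lockFile}"
  # ... the complete cleaning work runs here ...
  rm -f "${lockFile}"    # removed only upon successful completion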

hgwbeta cleaner

This first script, hgwbeta/trashCleanMonitor.sh, has become very simple with recent (2019) updates to the custom track database system. It calls the trashCleaner.csh script, which used to have the job of moving files that belonged to sessions; this is no longer necessary, and that script has become a no-op.

There is a log created by this process in:

/export/userdata/betaLog/YYYY/MM/cleanerLog.YYYY-MM-DDTHH.txt

where YYYY is the year, MM the month, DD the day, and HH the hour at the time the script runs.
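
The log path can be constructed directly with date formatting, for example (a sketch):

  logFile="/export/userdata/betaLog/$(date +%Y/%m)/cleanerLog.$(date +%Y-%m-%dT%H).txt"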

Upon successful completion of the hgwbeta/trashCleaner.csh script, the monitor script runs an exec command for the primary RR cleaning script:

exec /home/qateam/trashCleaners/rr/trashCleanMonitor.sh searchAndDestroy

The RR cleaner

The same monitor and caller script setup is used for the RR cleaner. The primary script:

/home/qateam/trashCleaners/rr/trashCleanMonitor.sh

requires the lock file created by the beta cleaner to exist: it will not run if the lock file /export/userdata/cleaner.pid is missing.

This called script:

/home/qateam/trashCleaners/rr/trashCleaner.csh

performs the job of running dbTrash to clean up the customTrash database tables with a 72 hour timeout limit.
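
The dbTrash invocation is presumably of roughly this form (a sketch; the exact options are an assumption, check the dbTrash usage message for the real ones):

  dbTrash -age=72 -drop -verbose=2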

After the custom trash database tables are cleaned, the removal of trash files begins. For performance purposes, the scanning of files and times in /export/trash/ needs to be done with a minimum of impact to the filesystem. A single find -type f command is run on the /export/trash/ filesystem by a called script:

/home/qateam/cronScripts/trashMonV3.sh

That file list is used by a perl script to discover the last access times of the files in trash via the perl stat function:

/home/qateam/dataAnalysis/betterTrashMonitor/fileStatsFromFind.pl

This method has been tested and shown to work very quickly even on very large file listings.
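
The overall pattern is one find pass followed by batched last-access lookups over the resulting list. A minimal shell sketch of the same idea (not the production scripts, and not something to run casually on the live trash filesystem):

  # the single expensive pass over the filesystem
  find /export/trash -type f > /var/tmp/trash.fileList.$$
  # batched last-access lookups: epoch seconds, then path
  xargs -d '\n' stat --format='%X %n' \
    < /var/tmp/trash.fileList.$$ > /var/tmp/trash.atime.$$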

Those measuring scripts, as a side effect, maintain logs of data sizes for everything in trash. Those logs are accumulating in:

/home/qateam/trashLog/YYYY/MM/YYYY-MM-DD.HH:MM:SS

The result of the scanning scripts is a file listing, with last access times in epoch seconds, left as temporary files in /var/tmp/.

A simple awk over that last access time listing, comparing against the threshold expiration time, produces the list of files to remove from the trash directory. Two different expiration times are in effect for different sections of the trash directory: short lived files that are one-time use only by the browser are removed after one hour of expiration time, while custom track trash files and other files associated with browser generated data that can be used repeatedly by a user session are expired on a 72 hour expiration timeout.
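
The selection step amounts to something like the following (a sketch, assuming the listing format of '<atimeSeconds> <path>' per line from the stat step above):

  now=$(date +%s)
  # paths whose last access is more than 72 hours ago
  awk -v now="${now}" -v limit=$((72 * 3600)) \
      'now - $1 > limit { sub(/^[0-9]+ /, ""); print }' \
    /var/tmp/trash.atime.$$ > /var/tmp/remove.72hour.list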

The RR trashCleaner.csh script accumulates log files into:

/export/userdata/rrLog/YYYY/MM/cleanerLog.YYYY-MM-DDTHH.txt

When this script completes successfully, it removes the lock file: /export/userdata/cleaner.pid

The caller trashCleanMonitor.sh verifies a successful return code from trashCleaner.csh and a SUCCESS message in the cleanerLog file. If anything is failing, email is sent to hiram, galt, pauline, and braney.

Trash measurement

To keep track of use statistics on the trash filesystem, the script mentioned above:

/home/qateam/cronScripts/trashMonV3.sh

is used by the trash cleaners and is also run on its own to periodically measure the trash filesystem.

Since the trash cleaners are only running once every 4 hours, this measurement script is run during hours when the cleaners are not running. It is on the crontab of the qateam user on hgnfs1:

42 1,2,5,6,9,10,13,14,17,18,21,22 * * * nice -n 19 ~/cronScripts/measureTrash.sh

This measureTrash.sh script calls /home/qateam/cronScripts/trashMonV3.sh and removes the temporary access time file created in /var/tmp/.

It also honors the lock file used by the trash cleaners, to avoid overrunning their use of the measurement system: /export/userdata/cleaner.pid
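
Putting those behaviors together, measureTrash.sh presumably amounts to something like this (an illustrative sketch, not the actual script; the temporary file name pattern is an assumption):

  #!/bin/sh
  # do not measure while the trash cleaners are running
  [ -f /export/userdata/cleaner.pid ] && exit 0
  /home/qateam/cronScripts/trashMonV3.sh
  # remove the temporary access time listing left in /var/tmp/
  rm -f /var/tmp/trash.atime.*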

The trashMonV3.sh script also has a lock file to prevent it from overrunning itself:

/var/tmp/qaTeamTrashMonitor.pid

There is an additional measurement script running that has nothing to do with the trash cleaning:

2,7,12,17,22,27,32,37,42,47,52,57 * * * * /home/qateam/cronScripts/ctFileMon.sh

It can run every five minutes because it uses a side effect of the stat command: when run on a directory name, the indicated size is actually the file count in the directory. This side effect is not available on all types of filesystems; it just happens to work here. These measurements accumulate in log files in:

/home/qateam/trashLog/ct/YYYY/ctFileCount.YYYY-MM.txt
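
The trick itself looks like this (a sketch; it assumes a GNU stat and that the custom track trash directory is /export/trash/ct):

  # on this filesystem, a directory's reported size is its entry count
  stat --format='%s' /export/trash/ct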

customdb

The custom track database server is the customdb machine. You can log in there as the qateam user.

This MySQL server has a couple of cron jobs, running as the qateam user, to help keep the customTrash database clean.

The customTrash database accumulates lost tables from failed custom track loads on the RR system. Their meta information never gets added to the metaInfo table in customTrash, so they are not cleaned out by the above mentioned dbTrash command in the trash cleaner system running on hgnfs1. The cron job running here:

53 1,3,5,7,9,11,13,15,17,19,21,23 * * * /data/home/qateam/customTrash/cleanLostTables.sh

finds these lost tables by comparing the file listing of MySQL table files in:

/data/mysql/customTrash/

with the information in the metaInfo table. Files found that do not have metaInfo entries are candidates for removal. They are only candidates because they are not removed immediately; instead, they are timed out from their last access time, in case they are in process and may yet become legitimate tables. The expire time is 72 hours. The cleanLostTables.sh script uses a perl script to do the file finding and the comparison with metaInfo:

/data/home/qateam/customTrash/lostTables.pl -age=72
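
The core comparison could be sketched in shell like this (an illustration only; the metaInfo column name and the hgsql invocation are assumptions, and the real work is done by lostTables.pl):

  # table names present on disk
  ls /data/mysql/customTrash | sed -e 's/\.[^.]*$//' | sort -u > /var/tmp/onDisk.list
  # table names recorded in metaInfo (column name assumed)
  hgsql -N -e 'select name from metaInfo' customTrash | sort -u > /var/tmp/inMeta.list
  # on disk but not in metaInfo: lost table candidates
  comm -23 /var/tmp/onDisk.list /var/tmp/inMeta.list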

Log files of this cleaning activity are maintained in:

/data/home/qateam/customTrash/log/YYYY/MM/

euroNode

The same system is in place on the euroNode machine. The script is called from the root crontab:

/home/qateam/trashCleaners/euroNode/trashCleanMonitor.sh

Lock files are maintained in:

/data2/userdata/cleaner.pid
/var/tmp/qaTeamTrashMonitor.pid

I don't think I have yet turned on the special lost table cleaner on the euroNode, as mentioned above in the customdb section. It would be good to check whether the number of tables in the customTrash database there is constantly growing.

hgwdev

The same system is in place on the hgwdev machine. The script is called from the root crontab:

/cluster/home/qateam/trashCleaners/hgwdev/trashCleanMonitor.sh

with logs accumulating in:

/data/apache/userdata/log/YYYY/MM/cleanerLog.YYYY-MM-DDTHH.txt

Lock file:

/data/apache/userdata/cleaner.pid

hgwalpha

The same system is in place on the hgwalpha machine. The script is called from the root crontab:

/cluster/home/qateam/trashCleaners/hgwalpha/trashCleanMonitor.sh

with logs accumulating in:

/data/apache/userdata/hgwalphaLog/YYYY/MM/cleanerLog.YYYY-MM-DDTHH.txt

Lock file:

/data/apache/userdata/cleaner.pid

Log analysis

There is a vast network of cron jobs running under Hiram's account on hgwdev that processes the logs produced by all these trash cleaners and measurement scripts. It constructs the bigBed and bigWig files, saved in a session, that display updating tracks in the browser showing all this activity, plus even more from the processed Apache logs and MySQL server process list measurements.