CGI Build Process
Revision as of 17:59, 6 June 2017
This page explains the process we use for building and releasing our CGIs. This is done on a three-week schedule.
- Before week 1, the source code is said to be in the "preview1" state.
- During the first week, any changes by QA or developers are added to the tree.
- After week 1, the source code is said to be in the "preview2" state.
- During the second week, just as in week 1, any changes by QA or developers are added to the tree.
- After week 2, the final build is compiled and copied to a sandbox.
- During the third week, development continues: all changes are added and compiled on genome-test, but only bugfixes (build patches) are added to the final build sandbox ("git cherry-pick").
- After week 3, the now-bugfixed final build from week two is copied from its sandbox to the public site.
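The week-three cherry-pick flow can be sketched with a throwaway repository (all names below, including the v271_branch branch and the file names, are hypothetical stand-ins, not the actual kent tree):

```shell
# Build a toy repo with a frozen release branch and an active master branch.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch v271_branch                    # the frozen final-build branch
echo feature >> file.txt && git commit -qam "new feature"  # week-3 development
echo bugfix > fix.txt && git add fix.txt && git commit -qm "bugfix"
fix=$(git rev-parse HEAD)                 # hash of the bugfix commit
# Apply only the bugfix to the release branch, leaving the feature out.
git checkout -q v271_branch
git cherry-pick -x "$fix"
git show --stat --oneline HEAD            # only fix.txt changed on the branch
```

After the cherry-pick, the release branch has the bugfix but file.txt still holds only the frozen "base" content; the week-3 feature stays on master.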
The build after week 2 is built into sandboxes located here:
hgwdev:/usr/local/apache/cgi-bin hgwdev:/usr/local/apache/htdocs-beta
Older hgwdev builds are periodically relocated here:
/hive/groups/browser/build
Setting Up the Environment for the Build
NOTE: The actions in this section are a one-time only set up performed by the new "build-meister".
Becoming the Build-Meister
All build scripts are now run by the "build" user. This user should already have its environment properly configured. However, the build-meister will need to be able to log in (through ssh) as the build user, and the build user will need to know where to send mail.
- Set up .ssh/authorized_keys so that you can log in as the build user. Seek assistance from cluster-admin if you need it.
- Set the build user's BUILDMEISTER environment variable to the user name of the new build-meister:
hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> edit .tcshrc        # use your preferred editor
# alter the following line:
< setenv BUILDMEISTER tdreszer
> setenv BUILDMEISTER chinli
Remember you will need to log out and log back in for the changes to take effect.
- Make sure the build user's cron jobs send you mail. Unlike the various build scripts, which can use the BUILDMEISTER environment variable to find you, cron runs without access to that variable. Instead, you should add yourself to cron's MAILTO variable:
hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> crontab -l > cron.txt
<build@hgwdev> edit cron.txt       # use your preferred editor
< MAILTO=rhead,tdreszer,braney,cricket
> MAILTO=rhead,chinli,braney,cricket
# and
< MAILTO=tdreszer,braney
> MAILTO=chinli,braney
<build@hgwdev> crontab cron.txt
<build@hgwdev> crontab -l          # verify what you have done
Optionally set up your own build environment
NOTE: The remainder of this section is historical. Nevertheless, it is worth keeping these details here, especially if the build-meister wishes to try experimental changes under their own identity.
- Before running build scripts as yourself, you will need to set up the following in your login file:

hgwdev> cd ~                       # go to your home directory as yourself
hgwdev> edit .tcshrc               # use your preferred editor
# add (or update) the following lines:
> umask 002
> source /cluster/bin/build/scripts/buildEnv.csh
# To be able to run the Java robot programs, add the following to the top of your path setting:
> set path = ( /usr/java/default/bin $path ... )
# optionally add helper aliases:
> # wb gets you to the scripts dir.
> alias wb 'cd $WEEKLYBLD'
> # cd $hg gets you to the latest build sandbox
> if ( "$HOST" == "hgwdev" ) then
>   setenv hg $BUILDDIR/v${BRANCHNN}_branch/kent/src/hg
> endif
Remember you will need to log out and log back in for the changes to take effect.
NOTE: For those more comfortable with other shells (e.g. bash), it should be possible to run build scripts from another shell. However, the main limitation is the buildEnv.csh file, which is edited and checked in every week, then sourced by .tcshrc. Without changes it cannot be sourced by .bashrc.
- Set up autologin among the general cluster machines:

# On your local cse box (i.e. screech, pfft, whatever)
screech> ssh-keygen -t dsa         # use enter for all defaults
screech> cd ~/.ssh
# add yourself to the authorized keys
screech> cp id_dsa.pub authorized_keys
Also put these in your home directory on hgwdev:

screech> scp -r .ssh/ hgwdev:
# Permissions on .ssh should be 700.
# Permissions on files in .ssh/ should be 600 or 640.
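A quick way to verify those permissions afterwards (a minimal local sketch; `stat -c` is the GNU coreutils form, and the stand-in directory here is a temp dir, not your real home):

```shell
# Create a stand-in .ssh directory and check its modes the way sshd expects them.
dir=$(mktemp -d)/.ssh
mkdir -p "$dir" && chmod 700 "$dir"
touch "$dir/authorized_keys" && chmod 600 "$dir/authorized_keys"
stat -c '%a %n' "$dir" "$dir/authorized_keys"
```

sshd will silently refuse key logins when these modes are too permissive, so this check is worth a few seconds.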
- Set up autologin for hgdownload and hgdownload-sd by copying your public key to the list of authorized keys on those machines. You may need assistance from someone already authorized to login to hgdownload and hgdownload-sd:
hgwdev> edit ~/.ssh/id_dsa.pub     # copy the public key into the clipboard
# then log into hgdownload as user qateam
hgwdev> ssh qateam@hgdownload
hgdownload> cd ~/.ssh
hgdownload> edit authorized_keys   # paste the key into the authorized_keys file
- You will also need a copy of .hg.conf.beta in your $HOME directory. This should be obtained from /cluster/home/build/.hg.conf.beta.
- Build symlinks. These are critical for building 64-bit utilities:

hgwdev> cd ~/bin
# Make sure you have $MACHTYPE directories
hgwdev> mkdir x86_64
# Create a symlink for each $MACHTYPE
hgwdev> ln -s /cluster/bin/x86_64 x86_64.cluster
The symtrick.csh script uses these automatically. If a script crashes and leaves the symlinks in an incorrect state, use unsymtrick.csh to restore them. Build scripts check to see whether unsymtrick.csh should be executed.
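The effect of the symlink trick can be sketched with throwaway directories (all paths below are hypothetical stand-ins for the real ~/bin and /cluster/bin layout, and `ln -sfn` here stands in for whatever symtrick.csh actually does):

```shell
# Simulate swapping ~/bin/x86_64 between local and cluster binary directories.
home=$(mktemp -d)
mkdir -p "$home/bin/x86_64.local" "$home/cluster-bin-x86_64"
ln -s "$home/cluster-bin-x86_64" "$home/bin/x86_64.cluster"
# "symtrick": point x86_64 at the cluster binaries
ln -sfn "$home/bin/x86_64.cluster" "$home/bin/x86_64"
readlink "$home/bin/x86_64"
# "unsymtrick": restore x86_64 to the local directory
ln -sfn "$home/bin/x86_64.local" "$home/bin/x86_64"
readlink "$home/bin/x86_64"
```

A crash between the two swaps is exactly the "incorrect state" the text warns about: the link is left pointing at the cluster directory, and re-running the restore step fixes it.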
Preview1 Day Build : Day 1
This is day 1 in the schedule.
Run Git Reports
- Connect as "build" to hgwdev. Then go to the weekly build dir on dev:

hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> cd $WEEKLYBLD
- make sure you are on master branch
<build@hgwdev> git checkout master
- edit buildEnv.csh: change the 5th line then the 4th line
<build@hgwdev> edit buildEnv.csh   # use your preferred editor
< setenv LASTREVIEWDAY 2012-06-12  # v269 preview
> setenv LASTREVIEWDAY 2012-07-03  # v270 preview
# and
< setenv REVIEWDAY 2012-07-03      # v270 preview
> setenv REVIEWDAY 2012-07-24      # v271 preview
- re-source buildEnv.csh and check that vars are correct
<build@hgwdev> source buildEnv.csh # or just restart your shell windows
<build@hgwdev> env | egrep "VIEWDAY"
- commit the changes to this file to Git:
<build@hgwdev> git pull
<build@hgwdev> @ NEXTNN = ( $BRANCHNN + 1 ) ; git commit -m "v$NEXTNN preview1" buildEnv.csh
<build@hgwdev> git push
- run doNewReview.csh
<build@hgwdev> screen              # use screen if you wish
<build@hgwdev> ./doNewReview.csh   # review the variables
- run for real and direct output to a log file (this takes about 2 minutes - it runs git reports by ssh'ing to hgwdev)
<build@hgwdev> time ./doNewReview.csh real >& logs/v${NEXTNN}.doNewRev.log
<build@hgwdev> ctrl-a, d           # to detach from screen
<build@hgwdev> tail -f logs/v${NEXTNN}.doNewRev.log   # see what happens
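After the run, a quick scan of the log for failure keywords can be sketched locally (a hedged example with a fabricated log; the patterns are illustrative, not what doNewReview.csh is documented to emit):

```shell
# Fabricate a small log, then scan it for common failure keywords.
log=$(mktemp)
printf 'building git reports...\nreports done: 42 commits\n' > "$log"
if grep -qiE 'error|fail' "$log"; then
    echo "log needs a closer look"
else
    echo "log looks clean"
fi
```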
Check the reports
- The reports are automatically built by the script into this location. Briefly review them as a sanity check.
Generate review pairings
(Ann takes care of this)
- Assign code-review partners in redmine.
Preview2 Day Build : Day 8
This is day 8 in the schedule.
Run Git Reports
- Connect as "build" to hgwdev. Then go to the weekly build dir on dev:

hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> cd $WEEKLYBLD
- make sure you are on master branch
<build@hgwdev> git checkout master # or just: git branch
- edit buildEnv.csh: change the 7th line then the 6th line
<build@hgwdev> edit buildEnv.csh   # use your preferred editor
< setenv LASTREVIEW2DAY 2012-06-19 # v269 preview2
> setenv LASTREVIEW2DAY 2012-07-10 # v270 preview2
# and
< setenv REVIEW2DAY 2012-07-10     # v270 preview2
> setenv REVIEW2DAY 2012-07-31     # v271 preview2
- re-source buildEnv.csh and check that vars are correct
<build@hgwdev> source buildEnv.csh # or just restart your shell windows
<build@hgwdev> env | grep 2DAY
- commit the changes to this file to Git:
<build@hgwdev> git pull; @ NEXTNN = ( $BRANCHNN + 1 ) ; git commit -m "v$NEXTNN preview2" buildEnv.csh
<build@hgwdev> git push
- run doNewReview2.csh
<build@hgwdev> screen              # use screen if you wish
<build@hgwdev> ./doNewReview2.csh  # review the variables
- run for real and direct output to a log file (this takes about 2 minutes - it runs git reports by ssh'ing to hgwdev)
<build@hgwdev> ./doNewReview2.csh real >& logs/v$NEXTNN.doNewRev2.log
<build@hgwdev> ctrl-a, d           # to detach from screen
<build@hgwdev> tail -f logs/v$NEXTNN.doNewRev2.log    # see what happens
Check the reports
- The reports are automatically built by the script into this location. Briefly review them as a sanity check.
Generate review pairings
(Ann takes care of this)
- Assign code-review partners in redmine.
Final Build : Day 15
This is day 15 in the schedule.
Do the Build
- Connect as "build" to hgwdev. Then go to the weekly build dir on dev:

hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> cd $WEEKLYBLD
- make sure you are on master branch and there is no uncommitted script change
<build@hgwdev> git checkout master
<build@hgwdev> git status
- edit the buildEnv.csh file
<build@hgwdev> edit buildEnv.csh   # use your preferred editor
< setenv LASTWEEK 2012-06-26       # v269 final
> setenv LASTWEEK 2012-07-17       # v270 final
# and
< setenv TODAY 2012-07-17          # v270 final
> setenv TODAY 2012-08-07          # v271 final
# and the big one:
< setenv BRANCHNN 270
> setenv BRANCHNN 271
- re-source buildEnv.csh and check that vars are correct
<build@hgwdev> source buildEnv.csh # or just restart your shell windows
<build@hgwdev> env | egrep "DAY|NN|WEEK"
- commit the changes to this file to Git:
<build@hgwdev> git pull; git commit -m "v$BRANCHNN final build" buildEnv.csh
<build@hgwdev> git push
- run doNewBranch.csh
<build@hgwdev> screen              # NOTE: screen is recommended this time!
<build@hgwdev> ./doNewBranch.csh   # review the variables
- run for real, send the output to a file and review while it is written (takes ~1 hour)
<build@hgwdev> ./doNewBranch.csh real >& logs/v${BRANCHNN}.doNewBranch.log
<build@hgwdev> ctrl-a, d           # to detach from screen
<build@hgwdev> tail -f logs/v${BRANCHNN}.doNewBranch.log  # follow what happens
- look for files that tell you it was successful (script will report whether these files were created):
<build@hgwdev> ls -l /cluster/bin/build/scripts/GitReports.ok
- Check timestamp of CGIs in hgwdev:/usr/local/apache/cgi-bin and the version number in the browser title header.
- If you get errors, it might be because the script is wrong rather than an actual build error. For example, to check for errors the 'make' log file is grepped for 'error|warn', so any new C file with "error" or "warn" in its name will show up as an error whether or not it compiled cleanly. You may need to change the script to remove references to such files, e.g. edit buildBeta.csh to ignore references to files like gbWarn.c and gbWarn.o in the log:
<build@hgwdev> edit buildBeta.csh
...
make beta >& make.beta.log
# These flags and programs will trip the error detection
sed -i -e "s/-DJK_WARN//g" make.beta.log
sed -i -e "s/-Werror//g" make.beta.log
#-- report any compiler warnings, fix any errors (shouldn't be any)
#-- to check for errors:
set res = `/bin/egrep -i "error|warn" make.beta.log | /bin/grep -v "gbWarn.o -c gbWarn.c" | /bin/grep -v "gbExtFile.o gbWarn.o gbMiscDiff.o"`
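The false positive described above can be reproduced locally (a sketch with a fabricated one-line make log; only the gbWarn filenames come from the text, the gcc line itself is invented):

```shell
# A clean compile line whose filename still matches the error pattern.
log=$(mktemp)
echo 'gcc -O2 -o gbWarn.o -c gbWarn.c' > "$log"
# Naive scan: flags the line even though nothing actually failed.
naive=$(grep -ciE 'error|warn' "$log")
# Scan with the exclusion, as buildBeta.csh does: no hits remain.
filtered=$(grep -iE 'error|warn' "$log" | grep -vc 'gbWarn.o -c gbWarn.c' || true)
echo "naive=$naive filtered=$filtered"   # prints: naive=1 filtered=0
```

The case-insensitive 'warn' pattern matches the "Warn" inside the filename, which is exactly why the exclusion greps are needed.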
- What the doNewBranch.csh script does:
- edits the versionInfo.h file
- makes tags (takes 1 minute)
- builds Git reports (takes 1 minute)
- does the build (takes 5-10 minutes)
- builds utils (of secondary importance)
- builds CGIs (most important)
Check the reports
- The reports are automatically built by the script into this location.
- Briefly review them as a sanity check.
Run the Robots
- [build-meister] run doRobots.csh, and watch the log if you are interested (most log messages go to the logs/ dir mentioned below)
ssh -X build@hgwdev
<build@hgwdev> screen              # start up a new screen
<build@hgwdev> ./doRobots.csh >& logs/v$BRANCHNN.robots.log
<build@hgwdev> ctrl-a, d           # detach from screen
<build@hgwdev> tail -f logs/v$BRANCHNN.robots.log
- What the doRobots.csh script does:
- runs robots one at a time
- hgNear (20 min)
- hgTables (several hours)
- TrackCheck (several hours)
- LiftOverTest (quick)
- [push shepherd] Review the error logs for the robots:
error logs located here: hgwdev:/cluster/bin/build/scripts/logs
- hgNear -- sends email with results
- hgTables -- sends email with results
- TrackCheck -- must check by hand: grep -i "error" logs/v$BRANCHNN.TrackCheck.log (TrackCheck person does this)
- LiftOverTest -- must check by hand: cat logs/v$BRANCHNN.LiftOverTest.log
NOTE: These robot tests take more than 6 hours; do not wait for them. Do the rest of the steps, such as GBiB, in the meantime.
Genome Browser in a Box
- The build account on hgwdev operates this procedure:

ssh -X build@hgwdev
<build@hgwdev> cd $WEEKLYBLD
- The scripted procedure does not function correctly because the updateBrowser.sh script now performs OS updates, which require one or more reboots to complete. The manual procedure is:
- Start the browser box: VBoxHeadless -s browserbox &
- Log in to the box: ssh box
- Wait for rsync updates to finish *and* any dpkg unattended upgrades
- Look for 'sleep' commands, dpkg, and sync: ps -ef | egrep -i "sleep|dpkg|sync"
- The output should be empty except for this command itself
- su to the root account
- Run the update script: ./updateBrowser.sh hgwdev hiram beta
- The system may reboot with OS upgrades; after the reboot, run the same updateBrowser.sh again
- After updateBrowser.sh has run successfully to completion without a reboot, you can continue with the packaging:
- time ./boxRelease.csh beta >& logs/v${BRANCHNN}.boxRelease.log
- cp -p /usr/local/apache/htdocs/gbib/gbibBeta.zip /hive/groups/browser/vBox/gbibV${BRANCHNN}.zip
- DOES NOT FUNCTION CORRECTLY: The buildGbibAndZip.csh script runs the commands as the build user.
It starts the browserbox VM if it is not already running. It updates the box, during which it rsyncs from gbib to hgwdev using a temporary public key. It builds a release gbib.zip and also saves it, labeled with the current version, to a backup location.
time ./buildGbibAndZip.csh >& logs/v${BRANCHNN}.boxRelease.log
(takes 20 minutes)
Examine log for errors:
less logs/v${BRANCHNN}.boxRelease.log
If it gets a lot of errors, this is often because the VM updates and reboots itself, which kills the ssh connection that buildGbibAndZip.csh is trying to use. The script does not do a great job of detecting this problem; however, it often fails within 80 seconds. You can just re-run buildGbibAndZip.csh as above and check that it succeeded. It should take about 20 minutes when working normally.
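Since the script fails early and is safe to re-run, a simple retry loop is one option. A hedged sketch follows; `flaky_build` is a purely hypothetical stand-in that mimics the "killed ssh connection" failure mode, not the real buildGbibAndZip.csh:

```shell
# A stand-in command that fails on the first call and succeeds on the second.
marker=$(mktemp -u)
flaky_build() {
    if [ ! -e "$marker" ]; then
        touch "$marker"
        return 1        # first run: simulated early failure
    fi
    echo "build ok"
}
# Retry up to 3 times with a pause between attempts.
for attempt in 1 2 3; do
    if flaky_build; then
        echo "succeeded on attempt $attempt"
        break
    fi
    sleep 1
done
```

In practice each attempt would also want the log check described above before declaring success.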
Update and Restart qateam beta GBiB
As user "build" we have access to $WEEKLYBLD and $BRANCHNN env variables.
ssh -X build@hgwdev # this may have already been done
This script may or may not update the qateam browser box.
Run a script to automatically stop the old browserboxbeta, unregister it, move the old one to a backup name, unzip a fresh copy from browserbox, rename it to browserboxbeta, re-register it, and restart it.
ssh -X qateam@hgwdev
$WEEKLYBLD/updateBrowserboxbeta.csh $BRANCHNN >& logs/v${BRANCHNN}.updateBrowserboxbeta.log
I use the following manual steps:
ssh qateam@hgwdev
# To see what may be running:
VBoxManage list runningvms
"browserbox" {8e474be8-2808-466a-930d-5e6670ca1cb1}
# or, view all VMs:
VBoxManage list vms
"browserboxalpha" {9442ad82-5672-4ea1-aeab-b40fc9ae691f}
"browserboxbeta" {8e474be8-2808-466a-930d-5e6670ca1cb1}
# To stop the betabox:
VBoxManage controlvm browserboxbeta acpipowerbutton
# To unregister the betabox:
VBoxManage unregistervm browserboxbeta
# Reorganize directories:
cd "VirtualBox VMs"
mv browserboxbeta browserboxbeta.v${BRANCHNN}   # BRANCHNN does not exist on qateam account
mkdir browserboxbeta
cd browserboxbeta
unzip /usr/local/apache/htdocs/gbib/gbibBeta.zip
# Change the ports to use and the VM image name:
sed -e 's/1234/1236/; s/1235/1237/; s/browserbox/browserboxbeta/;' \
    browserbox.vbox > browserboxbeta.vbox
# To register this new image:
cd
VBoxManage registervm `pwd`/"VirtualBox VMs/browserboxbeta/browserboxbeta.vbox"
# To start this betabox:
nice -n +19 VBoxHeadless -s browserboxbeta &
# To log in to the betabox (wait a few moments for it to get fully started):
ssh -p 1237 browser@localhost
- Test the web server and login account:
Does it seem to return the index.html ok?
ssh -X qateam@hgwdev wget http://localhost:1236/index.html -O /dev/stdout
Do you see the correct CGI version?
ssh -X qateam@hgwdev "wget http://localhost:1236/cgi-bin/hgTracks -O /dev/stdout | grep '<TITLE'"
Can you login to the vm interactively?
ssh -X qateam@hgwdev
ssh boxBeta                        # use password 'browser' to login; uses .ssh/config to get the port
exit                               # exit from vm
exit                               # exit from qateam
If you want to be sure that the vm has been correctly updated and has the right version, or to confirm that the latest patch is working there, you can browse the browserboxbeta vm, which runs on hgwdev on port 1236, via an SSH tunnel from your local machine on port 9991:
Open a new terminal window on your local machine.
Windows:
"C:\Program Files (x86)\PuTTY\plink.exe" -N -L 127.0.0.1:9991:127.0.0.1:1236 %USERNAME%@hgwdev.cse.ucsc.edu
Linux:
ssh -N -L 127.0.0.1:9991:127.0.0.1:1236 $USER@hgwdev.cse.ucsc.edu
Open web browser:
http://127.0.0.1:9991/
On your local machine terminal window, press control-c to terminate ssh or plink.
Generate the code summaries and review pairings
(Ann takes care of this)
- Assign code-review partners in redmine.
- Summarize the code changes that were committed during the past week. Solicit input from the engineers.
- Update these pages with the summary:
- Send an email to browser-staff with links to the summaries.
Test on hgwdev
- Wait to hear from QA about how their CGIs look on hgwbeta. QA members should update the CGI build chatter ticket in Redmine with a "done testing" message or, if applicable, a "not following issues for this release" message. Each member of the QA team has testing responsibilities.
Make changes to code base as necessary
This happens on days 15, 16, 17, and 18 in the schedule.
- If there are problems with the build a developer will fix the code. This fix needs to be patched into the build on hgwdev. This page explains how to do a Cherry Pick on hgwdev.
Fixing problems in the Build
This usually happens between days 16 and 19.
QA advises buildmeister to cherry pick
- see these instructions.
Push the CGIs
This is day 22 in the schedule.
The day before the push (day 21 in the schedule) send email notice
Send email to all of browser-staff (which includes cluster-admin) letting them know that tomorrow is a push day. Something along these lines:
Just a heads up that tomorrow is a CGI push day. If you have big code changes included in this release please be available in case something goes wrong with the push of your changes. QA typically starts the push around 1:30pm.
Push to hgw0 only
- hgw0 is identical to the RR machines but not actually in the RR (i.e. changes there are not seen by the public).
- QA will send an email to push-request the morning of the push letting the pushers know that today is a CGI push day (this is their notice to be vigilant about pushing quickly).
- QA will ask for push of CGIs from hgwdev to hgw0 only. If there is a NEW CGI or file(s) going out this week, be sure to make a prominent note of it in your push request. The admins push from a script, and they will need to add your new CGI to the script. (the build-meister should not be cc'd on this email.)
As of March 2017, here's a list of the CGIs and data files we push. Note: CGIs and data files may have been added since this list was created -- this is meant to be a starting point.
cartDump cartReset das hgApi hgBeacon hgBlat hgc hgConvert hgCustom hgEncodeApi hgEncodeDataVersions hgEncodeVocab hgFileSearch hgFileUi hgGateway hgGene hgGenome hgGtexTrackSettings hgHubConnect hgIntegrator hgLiftOver hgLogin hgMenubar hgMirror hgNear hgPal hgPcr hgPublicSessions hgRenderTracks hgSession hgSuggest hgTables hgTracks hgTrackUi hgUserSuggestion hgVai hgVisiGene phyloPng
and these configuration files:
/usr/local/apache/cgi-bin/all.joiner
/usr/local/apache/cgi-bin/extTools.ra
/usr/local/apache/cgi-bin/greatData/*
/usr/local/apache/cgi-bin/hgCgiData/*
/usr/local/apache/cgi-bin/hgGeneData/*
/usr/local/apache/cgi-bin/hgNearData/*
/usr/local/apache/cgi-bin/hgcData/*
/usr/local/apache/cgi-bin/loader/*
/usr/local/apache/cgi-bin/lsSnpPdbChimera.py
/usr/local/apache/cgi-bin/visiGeneData/*
For these directories we request an rsync --delete (from hgwbeta to the RR)
/usr/local/apache/htdocs/js/*
/usr/local/apache/htdocs/style/*
- Run TrackCheck on hgw0. This is the responsibility of the QA person who tests hgTracks.
- make a props file which specifies the machine/db to check. Set zoomCount=1 and it will only check the default position for each assembly. Example props file for hgw0:
machine mysqlbeta.soe.ucsc.edu   # This is where TrackCheck checks for active databases (active=1).
                                 # These databases may not be on the RR, and it will give errors which can be ignored.
server hgw0.soe.ucsc.edu         # This is the machine that you are testing.
quick false
dbSpec all                       # You can list just one database here.
table all                        # You can list one table if need be.
zoomCount 1                      # If the number is greater than one it will check links at higher zoom levels.
- run it from hgwdev: nohup TrackCheck hgw0.props > & $WEEKLYBLD/logs/TrackCheck-hgw0.07-13-2006
- run in the background if desired by typing Ctrl-Z then "bg", to check status type "jobs" or "ps -ef | grep TrackCheck".
- examine the file for errors.
- Monitor the Apache error log (QA does this); see examples here:
hgw0:/usr/local/apache/logs/error_log
To watch the log without line wraps, type "less -S error_log". Typing capital "F" will allow you to follow incoming errors. When errors arise, you can type Ctrl-C and use the right arrow to scroll the window over to see the entire message. *Update*: To view the error log *without* Hiram's CGI_TIME entries (for background info see: http://redmine.soe.ucsc.edu/issues/10081):
$ tail -f error_log | grep -v CGI_TIME
- Wait to hear from QA about how their CGIs look on hgw0. Each member of the QA team has testing responsibilities. Check also that TrackCheck ran successfully.
Push to hgwN only
- hgwN is one of the RR machines, hgw1-6. Each build, rotate to the next machine in numeric order (i.e. hgw1, then hgw2, etc.) so that one machine is not worked harder than the others.
- Once the new CGIs are on hgwN the push shepherd will watch the error logs for a short while to make sure no new errors occur under load.
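The rotation above is just modular arithmetic over hgw1-hgw6. A throwaway sketch (LAST is a hypothetical record of the machine used last build, not a variable the build scripts define):

```shell
# Given the machine used last build, pick the next one in the hgw1..hgw6 cycle.
LAST=6
NEXT=$(( LAST % 6 + 1 ))
echo "this build goes to hgw$NEXT"   # hgw6 wraps around to hgw1
```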
Push to the rest of the RR and hgwbeta-public and euronode
- QA will ask for push from hgwbeta to the rest of the hgwN machines, as well as hgwbeta-public and euronode. The js and style directory files should also go to /usr/local/apache/htdocs/js-public/* (or style-public/*) on hgwbeta ONLY in order to keep the javascript the same on the RR and hgwbeta-public. So, in addition to asking for the rsync --delete of the directories from hgwbeta to the RR machines, we also need to ask for an rsync --delete:
from (on hgwbeta):
  /usr/local/apache/htdocs/js/*
  /usr/local/apache/htdocs/style/*
to (on hgwbeta ONLY):
  /usr/local/apache/htdocs/js-public/*
  /usr/local/apache/htdocs/style-public/*
- QA will send email to the build-meister to let him/her know that the CGIs are on the RR.
Remember to keep track of new features
Anyone can add to this list at any time, but if no notes for this release have been made on the new features page, now is a good time to add some.
Final Build Wrap-up
This is day 23 in the schedule.
The buildmeister should do these steps once QA has notified you that all RR machines have been updated.
Normally this is run once at the end of the cycle. However, occasionally it is necessary to patch a build after it is already released on the RR. Depending upon the extent of the patch, it may be desirable or even necessary to rerun the wrap-up. All of these scripts can be safely rerun until the next build is made (until BRANCHNN is updated in buildEnv.csh).
- Connect as "build" on hgwdev this time. Then go to the weekly build dir on dev:

hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> cd $WEEKLYBLD
- build and push hgcentral IF there are any changes
<build@hgwdev> ./buildHgCentralSql.csh >& logs/v${BRANCHNN}.buildHgCentralSql.log
<build@hgwdev> cat logs/v${BRANCHNN}.buildHgCentralSql.log
<build@hgwdev> ./buildHgCentralSql.csh real >>& logs/v${BRANCHNN}.buildHgCentralSql.log
<build@hgwdev> echo $status
- check that the hgcentral.sql has been updated:
http://hgdownload.soe.ucsc.edu/admin/ http://hgdownload-sd.soe.ucsc.edu/admin/
- Now connect to
hgwdev
.
hgwdev> ssh -X build@hgwdev        # the optional '-X' allows X-windows support
<build@hgwdev> cd $WEEKLYBLD
<build@hgwdev> screen              # if desired
- build 'userApps' target (various utilities) on hgwdev and scp them to hgdownload and hgdownload-sd
<build@hgwdev> time ./doHgDownloadUtils.csh >& logs/v${BRANCHNN}.doHgDownloadUtils.log
<build@hgwdev> echo $status
(takes 12 minutes)
- Check the dates on the utils to verify they were updated:
http://hgdownload.soe.ucsc.edu/admin/exe/linux.x86_64 http://hgdownload-sd.soe.ucsc.edu/admin/exe/linux.x86_64
- update the beta tag to match the release:
<build@hgwdev> cd $WEEKLYBLD
<build@hgwdev> env                 # (just to make sure it looks right)
<build@hgwdev> ./tagBeta.csh >& logs/v${BRANCHNN}.tagBeta.log
<build@hgwdev> cat logs/v${BRANCHNN}.tagBeta.log
<build@hgwdev> ./tagBeta.csh real >>& logs/v${BRANCHNN}.tagBeta.log
<build@hgwdev> echo $status
- tag the official release
<build@hgwdev> cd $WEEKLYBLD
<build@hgwdev> git fetch
<build@hgwdev> git tag | grep "v${BRANCHNN}_branch"
# Note: use .1 or .2 or whatever is the next unused subversion number
<build@hgwdev> git push origin origin/v${BRANCHNN}_branch:refs/tags/v${BRANCHNN}_branch.1
<build@hgwdev> git fetch
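Finding the next unused subversion number can be done by inspecting the existing tags. A sketch against a fabricated tag list (in practice you would pipe `git tag | grep ...` instead of printf; the branch number 271 is just an example):

```shell
# Existing tags for a hypothetical branch 271; find the highest .N and add 1.
tags='v271_branch.1
v271_branch.2'
last=$(printf '%s\n' "$tags" | sed 's/.*\.//' | sort -n | tail -1)
next=$(( last + 1 ))
echo "next tag: v271_branch.$next"
```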
- zip the source code
<build@hgwdev> cd $WEEKLYBLD
<build@hgwdev> time ./doZip.csh >& logs/v${BRANCHNN}.doZip.log   # (this is automatically pushed to hgdownload)
<build@hgwdev> echo $status
(takes 4 minutes)
- check that the source code .zip files were updated:
http://hgdownload.soe.ucsc.edu/admin/ http://hgdownload-sd.soe.ucsc.edu/admin/
- WAIT 10 minutes, then run ./userApps.sh, which packages up the userApps/ directory with its source and pushes it to hgdownload and hgdownload-sd htdocs/admin/exe/:
time ./userApps.sh >& logs/v${BRANCHNN}.userApps.log
(takes 1 minute)
- Check the dates on the userApps src.tgz to verify they were updated:
http://hgdownload.soe.ucsc.edu/admin/exe http://hgdownload-sd.soe.ucsc.edu/admin/exe
- request push to the genome browser store from hgwdev
Push
hgwdev:/usr/local/apache/htdocs/gbib/gbibBeta.zip genome-store:/var/www/browserShop/media/products/gbib.zip
- request push of incremental push updates to hgdownload from hgwdev
Push
hgwdev:/usr/local/apache/htdocs/gbib/push/ hgdownload:/mirrordata/gbib/push/
When I asked the admins about pushing this, they did not remember ever doing it. It could probably be done with rsync and qateam in a script anyway. However, I see no evidence of any build script that updates that gbib/push directory. Do you see any new updates that need pushing? It looks like there is nothing to do.
ssh build@hgwdev find /usr/local/apache/htdocs/gbib/push/ | xargs ls -ldtr
You can see it also with a browser:
http://hgdownload.cse.ucsc.edu/gbib/
Max says that there has not been much activity here lately, but that he expects to use it to put the R statistical package on gbibs when Kate releases her GTex stuff.
- WAIT a day for the nightly rsync to happen from the RR for cgi-bin/ and htdocs/ hierarchies to hgdownload
- confirm cgi-bin/ and htdocs/ on hgdownload are up to date:
ftp://hgdownload.cse.ucsc.edu/apache/cgi-bin/ ftp://hgdownload.cse.ucsc.edu/apache/htdocs-rr/
Note that when you rsync from hgdownload, htdocs/ is mapped to htdocs-rr/ internally.
- send email to genome-mirror@soe.ucsc.edu.
Include this link to latest source: http://hgdownload.soe.ucsc.edu/admin/jksrc.zip. Use the last email as a template (see https://www.soe.ucsc.edu/pipermail/genome-mirror). If you push the hgcentral.sql, make sure to mention this has also changed in the email.
Example:
To: genome-mirror@soe.ucsc.edu
Subject: v292 Genome Browser Available

Good Afternoon Genome Browser Mirror Site Operators:

The version v292 source is now available at:
  http://hgdownload.soe.ucsc.edu/admin/jksrc.zip
or labelled with source number:
  http://hgdownload.soe.ucsc.edu/admin/jksrc.v292.zip

The version v292 CGI binaries can be found at:
  rsync -avP rsync://hgdownload.cse.ucsc.edu/cgi-bin/ ${WEBROOT}/cgi-bin/
or:
  ftp://hgdownload.cse.ucsc.edu/apache/cgi-bin/

A license is required for commercial download and/or installation of the
Genome Browser binaries and source code. No license is needed for academic,
nonprofit, and personal use.

Summaries of changes can be found here:
  http://genecats.soe.ucsc.edu/builds/versions.html

The following CGIs were updated:
  cartDump cartReset das hgApi hgBlat hgConvert hgCustom hgEncodeApi
  hgEncodeDataVersions hgEncodeVocab hgFileSearch hgFileUi hgGateway hgGene
  hgGenome hgHubConnect hgLiftOver hgLogin hgNear hgPal hgPcr hgRenderTracks
  hgSession hgSuggest hgTables hgTrackUi hgTracks hgUserSuggestion hgVai
  hgVisiGene hgc phyloPng

and these configuration files:
  /usr/local/apache/cgi-bin/all.joiner
  /usr/local/apache/cgi-bin/encode/cv.ra
  /usr/local/apache/cgi-bin/greatData/*
  /usr/local/apache/cgi-bin/hgCgiData/*
  /usr/local/apache/cgi-bin/hgGeneData/*
  /usr/local/apache/cgi-bin/hgNearData/*
  /usr/local/apache/cgi-bin/hgcData/*
  /usr/local/apache/cgi-bin/loader/*
  /usr/local/apache/cgi-bin/lsSnpPdbChimera.py
  /usr/local/apache/cgi-bin/visiGeneData/*

Please rsync --delete these directories:
  /usr/local/apache/htdocs/js/*
  /usr/local/apache/htdocs/style/*

Please rsync this directory:
  /usr/local/apache/htdocs/images/*

The script in the source tree: src/product/scripts/updateHtml.sh
can be used to update your htdocs directory.

A new hgcentral.sql file is now present at:
  http://hgdownload.cse.ucsc.edu/admin/

If you have any questions or concerns, please feel free to write back to
this mail list.

Thanks, {buildmeister}