DbSNP Track Notes


This page is intended to be a start-up guide for developers about to build UCSC's SNP track, based on NCBI's dbSNP. It provides some background information about dbSNP and our build process -- good stuff to know in case you need to update our code to keep up with the dbSNP developers' changes.

This page was first written during the construction of the hg18 snp128 track, based on dbSNP version 128, circa Jan. 2008. If you are working on snp136 in 2011, and this page has not been updated since then, practice skepticism.


NCBI dbSNP

NCBI produces numbered releases of dbSNP about twice a year. dbSNP includes an enormous relational database, and large specially formatted fasta files: for each SNP, there is a detailed fasta header line, followed by the left flanking sequence, a single IUPAC ambiguous base representing the SNP on a line by itself, and the right flanking sequence. We download the fasta files and a subset of the dbSNP database, and then extract the pieces used by our SNP track (more below).
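A record in the specially formatted fasta files therefore looks schematically like this (the rs number and header fields below are illustrative, not copied from a real record):

```
>gnl|dbSNP|rs12345 ...detailed header fields: position, class, alleles, etc...
GATCCTTAGCAATGGC          (left flanking sequence)
Y
CATTGGACCTTAGGAT          (right flanking sequence)
```

Here Y is the IUPAC ambiguity code for C/T, marking the SNP itself on a line by itself between the flanks.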

a bit more about their build process in general

links to NCBI docs, dbsnp-announce email list, ftp dirs

Subset of NCBI fields used to build snpNNN track

snpNNN field   NCBI dbSNP table(s) / file
------------   ------------------------------------------------------
chrom          ContigLoc / ContigInfo / liftUp
chromStart     ContigLoc / liftUp; check vs. phys_pos_from
chromEnd       ContigLoc / liftUp
name           "rs" + numeric snp_id that joins all the other sources
score          0
strand         ContigLoc.orientation
refNCBI        ContigLoc.allele
refUCSC        ContigLoc.allele if insertion, otherwise from genomic
observed       fasta headers
molType        fasta headers
class          fasta headers
valid          SNP
avHet          SNP
avHetSE        SNP
func           ContigLocusId
locType        ContigLoc
weight         MapInfo


UCSC snpNNN track overview

UCSC's track/table corresponding to dbSNP release NNN is snpNNN; the shortLabel is "SNPs (NNN)".

Track tables and files

db tables:

  • core track: snpNNN, snpNNNSeq, snpNNNExceptions, snpNNNExceptionDesc
  • orthologous alleles (human only): snpNNNorthoPanTro2RheMac2

gbdb files:

  • /gbdb/DB/snp/snpNNN.fa

hgdownload files (masked sequences -- for human only):

  • /usr/local/apache/htdocs/goldenPath/DB/snpNNNMask/*

Genome Browser track code

In all of these files, look for snp125*, not the corresponding snp* (older track) functions.

  • inc/snp125Ui.h, lib/snp125Ui.c
  • hgTrackUi/hgTrackUi.c
  • hgTracks/variation.c
  • hgc/hgc.c

could say a lot more here about the UI filters, special names when orthos exist, trackDb settings, hgc details...

Overview of track build process

This is planned for automation; to see how it was done for hg18, search for snp128 in makeDb/doc/hg18.txt.

The process of building the core SNP track follows these basic steps:

  1. Download fasta files and subset of database table dumps from dbSNP
  2. Create a temporary db on a workhorse machine and load (subset of) NCBI tables
  3. Extract the relevant fields of NCBI tables and fasta headers into files sorted and indexed by SNP ID.
  4. Use the SNP ID to join the separate files into a single file of NCBI's encoding of SNP data. Use liftUp to translate from contig coords to chrom coords.
  5. Translate NCBI's encoding of SNP data into UCSC's representation, and check for inconsistencies or other problems with the data.
    • If necessary, work with NCBI to resolve any major issues discovered above.
    • If necessary, update the Genome Browser CGIs to handle new values (e.g. new function annotations).
  6. Install sequence file in /gbdb and load database tables.

For human, we also generate masked SNP sequences and orthologous SNP mappings; QA can get started on the core tables while those are in progress.

The first several steps are straightforward and scripted using good old unix commands like awk, sort and join, as well as hgsql to pull named fields from the NCBI tables. The translation and encoding step is performed by kent/src/hg/snp/snpLoad/snpNcbiToUcsc.c.
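The sort-and-join step can be sketched as follows. The file names and column layouts here are hypothetical (the real field lists come from hgsql queries against the temporary NCBI db); each input file is assumed to be tab-separated with the numeric snp_id in column 1.

```shell
# Sort each per-source extract on snp_id.  join compares keys as strings,
# so a plain lexicographic sort is required, not a numeric one:
sort -k1,1 contigLocFields.txt   > contigLocFields.sorted
sort -k1,1 fastaHeaderFields.txt > fastaHeaderFields.sorted

# Join the per-source files into one row per SNP, keyed on snp_id:
join -t "$(printf '\t')" contigLocFields.sorted fastaHeaderFields.sorted \
    > ucscNcbiSnp.txt
```

In the real build there are more than two source files, so the join is repeated pairwise, but the pattern is the same.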

snpNcbiToUcsc

The most complex part of the process, and the most likely to require development work, is the translation of NCBI encodings into UCSC's format and consistency checks performed by snpNcbiToUcsc. NCBI has made some changes and extensions to dbSNP in the past several revisions, and that can be expected to continue, so our code (both snpNcbiToUcsc and the CGIs that it feeds) must keep up.

Prior to snp128, about 20 programs in hg/snp/snpLoad/ were used to collect, translate and check the data (see snp126 construction in hg18.txt). snpNcbiToUcsc was written to replace all of them (except the parts that were replaced by hgsql, awk, sort and join), in order to simplify maintenance of the code. Side benefits include speedup (single pass over all 12M rows, takes 3.5min), improved checking of formats using the regex library, and auto-generation of snpNNN.sql and snpNNNExceptionDesc.tab.

/* ATTENTION DEVELOPERS
 *
 * snpNcbiToUcsc should fail if NCBI makes any significant changes to dbSNP.
 * If it fails, or if it skips any SNPs due to errors (other than missing
 * observed / deleted SNP), please investigate.  Will the change in dbSNP 
 * require changes to our CGIs in addition to snpNcbiToUcsc?
 *
 * snpNcbiToUcsc.c has a lot of comments.  Please read them, and please
 * update them when making changes!
 */

Reformatting / adjustments to the data

NCBI uses a 0-based, fully closed coordinate system. In most cases, this can be translated to our 0-based, half-open system by adding 1 to the end coordinate. However, NCBI represents genomic insertion points as two bases long, with the insertion point between the bases. To convert those to zero-length points in our coordinate system, we increment the start and leave the end alone.
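The translation above can be sketched in awk. The input layout is hypothetical (tab-separated: snp_id, loc_type, start, end in NCBI's 0-based fully-closed coords), and the assumption that loc_type 3 marks an insertion point ("between" bases) should be checked against the dbSNP docs for the release you are building.

```shell
# Convert NCBI 0-based fully-closed coords to UCSC 0-based half-open coords:
awk -F'\t' 'BEGIN{OFS="\t"} {
  if ($2 == 3) { $3 += 1 }   # insertion: becomes a zero-length point
  else         { $4 += 1 }   # anything else: closed end -> half-open end
  print
}' ncbiCoords.txt > ucscCoords.txt
```

For example, a normal SNP at NCBI (start=100, end=100) becomes (100, 101), while an insertion spanning NCBI (200, 201) becomes the zero-length point (201, 201).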

For several fields, we translate NCBI's numeric encodings into string values (represented as sets or enums in the snpNNN database table). Many of these are recognizable as names that NCBI uses in its dump files (*.bcp.gz) or used to use in ASN.1, but not always, especially for locType. There is some history there, and I have chosen to keep the same string values in snp128 and later that were used in snp125-127.

Checks for errors or oddities

snpNcbiToUcsc handles unexpected conditions in several ways depending on severity:

  • errAbort for problems that indicate wrong data file or need to update software
  • write line to snpNNNErrors.bed file, and omit row from snpNNN.bed, for serious data inconsistencies
  • write line to snpNNNExceptions.bed file for minor data inconsistencies or other conditions we want to mention in the Annotations section of the hgc details page

If there is an errAbort or error output, it probably means that dbSNP has changed something about how it encodes its data, not necessarily that there is a serious error in the data -- but always investigate to make sure.

Exceptions

snpNcbiToUcsc checks for ~18 unusual conditions, most (but not all) of which imply that the SNP might not be perfectly mapped to the genome. These are referred to as exceptions in the code/database and "Annotations" in hgc. When an exception is found, a line of bed4+ is written out to snpNNNExceptions.bed: chrom, start, end, rsId, and exception name. snpNcbiToUcsc tallies the counts of each type of exception, and upon completion, it writes out snpNNNExceptionDesc.tab; each row has exception name, count, and a description that appears in hgc. The types of checks (each type of check might cover several different specific exceptions) are described in trackDb/snpNNN.html.
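A quick post-hoc sanity check on the exceptions file can be done from the shell; this is a sketch, assuming the bed4+ layout above (exception name in column 5) and a hypothetical snp128 file name.

```shell
# Tally exception names in the bed4+ file, most frequent first;
# the counts should roughly match those in snpNNNExceptionDesc.tab:
cut -f5 snp128Exceptions.bed | sort | uniq -c | sort -rn
```

Comparing this tally against the previous release's is a cheap way to spot a check that suddenly fires far more (or less) often than before.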

describe exceptions -- rationale, implications etc.

Reporting problems to NCBI

other sanity checks

comparison to previous version

summary?

Maybe something like blood test results, where you see each measurement plus its normal range, and flag anything out of range? Heather has various hints in the code for how many of each type there should be. This could probably be done by the post-processing.

after loading the SNP track:

make masked sequences

update orthos

both of those (especially orthos) are quite long & involved processes, probably worthy of separate automation and doc.

stats?

In addition to all of the howto stuff... actual snp128 stats! :) Maybe on a separate page; might be useful for reporting to NCBI.