ANGSD: Analysis of next generation Sequencing Data
RealSFS
* The program was called emOptim2 in earlier versions; it has since been renamed to the more appropriate 'realSFS'.
This program estimates the (multi-dimensional) SFS based on a .saf file generated with ./angsd [options] -doSaf.
See also SFS Estimation and 2d SFS Estimation.
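A minimal sketch of a typical workflow is shown below; the input files (bam.filelist with one BAM path per line, and the ancestral fasta anc.fa) are hypothetical, and the genotype-likelihood model and filters should be adjusted to your data.
#generate site allele frequency likelihoods (.saf files) for one population
./angsd -bam bam.filelist -anc anc.fa -GL 1 -doSaf 1 -out pop1
#estimate the 1d SFS from the resulting index file
./realSFS pop1.saf.idx > pop1.sfs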
Brief overview
#1d sfs
./realSFS afile.saf.idx [-start FNAME -P nThreads -tole tole -maxIter -nSites]
#2d sfs
./realSFS pop1.saf.idx pop2.saf.idx [-start FNAME -P nThreads -tole tole -maxIter -nSites]
#3d sfs
./realSFS pop1.saf.idx pop2.saf.idx pop3.saf.idx [-start FNAME -P nThreads -tole tole -maxIter -nSites]
The saf files are generated using ./angsd -doSaf.
- -start A file containing a log-scaled estimate of the SFS, used as the starting point for the EM optimisation.
- -tole When the difference between successive likelihood values in the EM algorithm drops below this value, the optimisation will stop.
- -P Number of threads to allocate to the program.
- -nSites Limit the optimisation to a region of this size. If nothing is supplied, the program will use the entire saf file.
- -maxIter Maximum number of iterations in the EM algorithm.
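For example, several of these options can be combined as in the sketch below; the filenames and values are hypothetical, and previous.logsfs is assumed to contain a log-scaled SFS in the format expected by -start.
#1d SFS with 4 threads, a stricter convergence tolerance, a cap on EM iterations and a starting guess
./realSFS pop1.saf.idx -P 4 -tole 1e-8 -maxIter 500 -start previous.logsfs > pop1.sfs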
1d SFS
realSFS sfstest.saf.idx -P 4 >sfs.em
The realSFS program will read in a block of the genome (from the .saf file) and estimate the SFS for that region.
The size of the block can be chosen with the -nSites argument; otherwise the program will try to read in the entire saf file.
If your .saf file contains more sites than -nSites (you can check the number of sites in the .saf.pos file), the program will loop over the genome and output the results for each block, so each line of the output file is an SFS for one region.
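A sketch of block-wise estimation (hypothetical filenames and block size):
#the saf file holds more sites than -nSites, so realSFS loops over the genome and writes one SFS per block
./realSFS sfstest.saf.idx -nSites 5000000 -P 4 > sfs_blocks.em
#one output line per block, i.e. one SFS estimate per block
wc -l sfs_blocks.em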
2d SFS
./realSFS pop1.saf.idx pop2.saf.idx [-start FNAME -P nThreads -tole tole -maxIter -nSites]
Output
The main results are printed to stdout; these are the expected values. For the 2d SFS the result is a single line: assuming n categories in population 1 and m categories in population 2, the first m values are the SFS for the first category in population 1, the next m values for the second category, and so on.
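As an illustration, the single output line can be reshaped into an n x m matrix; the sketch below assumes two hypothetical populations of 4 and 3 diploid individuals, giving n=9 and m=7 allele frequency categories.
./realSFS pop1.saf.idx pop2.saf.idx > 2dsfs.em
#reshape the 9*7=63 values into 9 rows of 7 columns (population 1 along rows, population 2 along columns)
xargs -n 7 < 2dsfs.em > 2dsfs.matrix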
NB
Use as many sites as possible for more reliable estimates.
-nSites
The -nSites option sets the maximum number of sites that will be used for the optimisation. Using more sites will give you more reliable estimates. If you don't specify anything, the program will try to load all sites into memory.
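If memory is a concern, one option (sketched below with an arbitrary value and a hypothetical filename) is to bound the number of sites loaded per optimisation run.
#limit each optimisation to at most 20 million sites to bound memory usage; one SFS is written per block
./realSFS pop1.saf.idx -nSites 20000000 > pop1.blocks.sfs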