PosteriorSamples

class PosteriorSamples(tree, runNum, program='p4', mbBaseName=None, directory='.', verbose=1)

A container for MCMC samples from files.
This is useful if you want to do, e.g., posterior predictive simulations based on the posterior distribution from an already completed run. It reads in the tree and parameter files, and puts the two together to make p4 Tree objects with a model attached. For each sample, you would then attach a data object and do your simulation.
After instantiation, the only method is getSample(sampNum), which gets the zero-based sampNum'th sample.
This handles output from p4 or MrBayes. It will only do the MrBayes models that p4 can do as well, and only one run at a time. It can handle models on partitioned data, and p4 tree-heterogeneous models.
Args
- tree
A p4 tree, with model attached. It must also have taxNames attached, in their proper order.
- runNum
The run that you want to use. It is zero-based for p4 files, and 1-based for MrBayes files. E.g. for the p4 file mcmc_trees_0.nex you would say runNum=0, and for the MrBayes file foo.run2.p you would say runNum=2.
- program
'p4' or 'mrbayes'
- mbBaseName
If you are reading MrBayes files, you need to supply the 'baseName' for the files. E.g. if the run1 tree file is foo.run1.t and the run1 parameters file is foo.run1.p, you would supply mbBaseName='foo'.
- directory
The directory where the tree files and parameter files are located. By default, it is ‘.’, which is the current directory.
- verbose
An integer, from 0 to 3. Zero is quiet; the bigger the number, the chattier it gets.
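The run-number and file-name conventions above can be sketched with plain Python string formatting. This is illustrative only, not part of the PosteriorSamples API; the file-name patterns are the ones given in the argument descriptions.

```python
# Illustrative: how runNum maps to file names (patterns taken from the
# argument descriptions above; not part of the PosteriorSamples API).

# p4 output files are zero-based:
runNum = 0
p4TreeFile = "mcmc_trees_%i.nex" % runNum

# MrBayes output files are one-based, and need mbBaseName:
mbBaseName = 'foo'
mbRunNum = 2
mbParamFile = "%s.run%i.p" % (mbBaseName, mbRunNum)

print(p4TreeFile)   # mcmc_trees_0.nex
print(mbParamFile)  # foo.run2.p
```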
Example::

    # We need a tree with an attached model.  Let's say that we have a
    # one-partition model, with a GTR+G model.  Do the usual p4 model setup.
    read('myTree.nex')
    t = var.trees[0]

    # Read in some data.
    read('myData.nex')
    d = Data()
    t.data = d

    t.newComp(free=1, spec='empirical')
    t.newRMatrix(free=1, spec='ones')
    t.setNGammaCat(nGammaCat=4)
    t.newGdasrv(free=1, val=0.5)
    t.setPInvar(free=0, val=0.0)

    # Check to make sure it's all good to go.
    # (This step is needed for empirical comps, at least.)
    t.data = d
    t.calcLogLike()

    # Instantiate.
    ps = PosteriorSamples(t, runNum=1, program='mrbayes',
                          mbBaseName='mbout', verbose=2)

    # Iterate over samples, saving your favourite test quantity
    # from simulations as you go.
    myStats = []
    for sampNum in range(500, 1000):
        t2 = ps.getSample(sampNum)
        t2.data = d
        t2.simulate()
        myStats.append(t2.data.simpleBigXSquared()[0])
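After a loop like the one in the example, the collected myStats can be compared with the same statistic calculated on the observed data, giving a posterior predictive tail-area probability. A minimal sketch in plain Python, using hypothetical stand-in numbers (obsStat here is an assumed observed simpleBigXSquared value, not p4 output):

```python
# Hypothetical posterior predictive check.  The numbers below are
# stand-ins for simulated and observed test quantities, not p4 output.
myStats = [12.3, 15.1, 9.8, 14.2, 11.0, 13.7, 10.5, 16.0]
obsStat = 14.0  # assumed observed simpleBigXSquared value

# Tail-area probability: the fraction of simulated statistics that are
# at least as extreme as the observed one.
pValue = sum(1 for s in myStats if s >= obsStat) / len(myStats)
print(pValue)  # small values suggest poor model fit
```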