Algorithm Theory

From QuB
Prev: Data Theory Outline Next: Interface Theory



All Algorithms

Data Source

All of QuB's algorithms operate on the Data Source. The Data Source, at the upper right, can be:

[Image: DataSource.png -- the Data Source selector]
sel 
selection -- the highlighted data in the current file. Specifically, what's selected in the upper (lo-res) pane of the Data window. To use the hi-res selection, right-click it and choose "Expand".
list 
selection list -- all selections in the active "List" in the current file
file 
the entire current file
file list 
some or all of the open data files

To include files in the file list, go to an algorithm's properties (right-click the button) and put checkmarks next to each file. Many algorithms work on only one file; they interpret "file list" as "file."

When the Data Overlap window has an active list, Data Source:list refers to the portion of the checked traces within the lower (hi-res) pane.

Output and Results

Most algorithms write a log of their progress in the Report window (View -> Report). Many publish results in the Results window (View -> Results). Results are stored alongside the data file in the Session File.

Stopping Tasks

When an algorithm is running, there is a progress bar in the upper-right corner. To stop all algorithms, click "Stop tasks". To stop only one task, choose "Tasks" from the "View" menu, select a task and click Stop.

Likelihood Optimizers

The algorithms which idealize data or solve for rate constants share a common structure: they compute the likelihood (probability) of the data given a model, then search for the model parameters that maximize that likelihood. The gradient is the vector of partial derivatives of the likelihood with respect to each parameter; it is used to find more likely parameters. A gradient of all zeros indicates a (local) maximum, so if an algorithm stops with a large gradient, the model may be wrong.
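In symbols, with model parameters θ = (θ₁, …, θₙ), this is standard maximum-likelihood notation, not anything specific to QuB:

```latex
\mathrm{LL}(\theta) = \log P(\mathrm{data} \mid \theta), \qquad
\nabla \mathrm{LL} = \left( \frac{\partial \mathrm{LL}}{\partial \theta_1},
  \ldots, \frac{\partial \mathrm{LL}}{\partial \theta_n} \right), \qquad
\nabla \mathrm{LL} = 0 \ \text{at a (local) maximum.}
```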

QuB actually works with the log likelihood (LL). The log of a probability (0 < p < 1) is negative, yet QuB's LL is often large and positive, because some of the terms are computed from probability densities, which can exceed 1. This does not affect which parameters are maximal.
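To illustrate why the LL can be positive, here is a generic Gaussian example (not QuB's actual computation): a narrow density takes values far above 1, so the log of each term is positive.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of a normal distribution; unlike a probability,
    a density can exceed 1 when sigma is small."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# With sigma = 0.01, the density near the mean is about 40,
# so each log term -- and hence the total LL -- is positive.
samples = [0.0, 0.005, -0.003]
ll = sum(math.log(gaussian_pdf(x, 0.0, 0.01)) for x in samples)
```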

Options

Max iterations 
at most how many times to repeat (calculate LL and gradient, modify parameters)
LL conv 
stop if LL increases by less than this much
Grad conv 
stop if all gradients are less than this much
Search limit 
keep each parameter within [initial / search limit, initial * search limit]
Restarts 
if optimization stops because it hit Max iterations, restart it this many times (this often works better than simply increasing Max iterations)
Max step 
how much to change parameters each iteration. 1.0 is the natural step size; smaller numbers may be more reliable for sensitive models, but will converge more slowly.
Run mode 
optimize (maximize LL) or check (compute LL with current parameters)
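A minimal sketch of how these options could interact in a gradient-ascent loop. This is illustrative Python, not QuB's internals; the `ll_and_grad` callback and all names are assumptions that merely mirror the dialog labels above.

```python
def optimize(ll_and_grad, initial, max_iter=50, ll_conv=1e-6, grad_conv=1e-6,
             search_limit=1000.0, max_step=1.0, restarts=2):
    """Hypothetical gradient-ascent loop wiring the options together.

    ll_and_grad(params) must return (LL, gradient); this callback is an
    assumption for illustration only.
    """
    # Search limit: keep each parameter within [initial/limit, initial*limit]
    # (assumes positive initial parameters, e.g. rate constants).
    lo = [p / search_limit for p in initial]
    hi = [p * search_limit for p in initial]

    def clamp(ps):
        return [min(max(p, l), h) for p, l, h in zip(ps, lo, hi)]

    params = clamp(list(initial))
    ll, grad = ll_and_grad(params)
    for _attempt in range(restarts + 1):   # Restarts: re-run if iterations ran out
        converged = False
        for _ in range(max_iter):          # Max iterations
            if all(abs(g) < grad_conv for g in grad):  # Grad conv: ~zero gradient
                converged = True
                break
            # Max step scales the move along the gradient (1.0 = natural step).
            trial = clamp([p + max_step * g for p, g in zip(params, grad)])
            new_ll, new_grad = ll_and_grad(trial)
            improvement = new_ll - ll
            params, ll, grad = trial, new_ll, new_grad
            if improvement < ll_conv:      # LL conv: negligible improvement
                converged = True
                break
        if converged:
            break
    return params, ll
```

With Run mode set to "check", the analogous operation is a single call to `ll_and_grad` with the current parameters, with no search at all.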

