Try saving the data as text. Matlab, QUB and QUB Express can all read and write comma- and tab-separated text files. (In Matlab, use the dlmread and dlmwrite functions.) QUB interprets a blank line as a segment break.
QUB and QUB Express can also import data from the following formats: ABF (Axon/Molecular Devices binary), Acquire (Bruxton), PUL (Patchmaster), TACIDL (TacFit transition table textual output), LDT (legacy QUB format) and DWT (QUB idealized dwell times), along with our native binary format, QDF. We can also read raw integer and floating-point binary files, if you know the correct sampling rate and scaling factors.
I remember you mentioning once that Vs in these formulas is defined as:
Vs (a.k.a. k1) is the voltage sensitivity (with 0.01 giving an e-fold increase in the rate for a voltage change of 100 mV)
I was wondering whether you have somewhere (in the QuB page?) a way to define the Vs in these formulas as dependent on the partial charge instead. Is there any conversion equation to use here? I am thinking that if I am going to use the combined formula below for k0 to give the "averaged" value, it'd be more useful to be able to define it as a function of the partial charge, which is what most people use.
Here is a derivation of Vs from forms of the Arrhenius equation:
k = (k0 * conc) * exp(Vs * mV)
k = (k0' * conc * kB*T / h) * exp(-( charge*mV / (kB*T) ))
Vs * mV = -( charge*mV / (kB*T) )
Vs = -charge / (kB*T)
The units of charge are electrons. The units of T are degrees Kelvin. kB = 8.617x10^-2 meV/K, so kB*T = 25.9 meV. At 300K:
Vs = -charge / 25.9
Vs can be positive or negative depending on its excitatory or inhibitory effect.
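A quick sanity check of this conversion in code (a hypothetical helper, using the kB value above, not part of QuB):

```python
def charge_to_Vs(charge, temp_K=300.0):
    # Vs = -charge / (kB * T), with kB = 8.617e-2 meV/K,
    # so Vs is "per mV" and charge is in units of electron charge.
    kB_meV_per_K = 8.617e-2
    return -charge / (kB_meV_per_K * temp_K)
```

For a partial charge of 1 at 300K this gives about -0.0387, i.e. an e-fold change in the rate per ~26 mV.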
Temperature dependence is not directly supported yet. We may add it to the model in the future, but you can emulate it with today's model. Whereas our rates are calculated
k = k0*P*exp(k1*Q),
temperature dependence is calculated
k = k0 * T * exp(k1 / T)
In other words, P=T and Q=1/T. You can emulate temperature dependence by making a rate both P-sensitive to temperature and Q-sensitive to (1/temperature). For ramp simulations, this means adding a d/a channel in the acquisition protocol, from (1/T0) to (1/T1) and assigning it (channel 2) to "Voltage." Assign the original temperature signal to "Ligand." For "Mac Rates" analysis of recorded data, you have to save the data as a text file, load it into a spreadsheet, add a column calculated as e.g. "=1/B$1" (if temperature is in column B), then open it back up in qub and assign the new column (a/d channel) to "Voltage."
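If you'd rather not use a spreadsheet, the same column can be added with a short script. This is just a sketch, assuming a tab-separated text file with temperature in the second column (column B):

```python
import csv

def add_inverse_temp_column(in_path, out_path, temp_col=1):
    # Read a tab-separated data file, append a 1/T column, and write it out,
    # so the new column (a/d channel) can be assigned to "Voltage" in qub.
    # Blank lines (QUB segment breaks) are passed through untouched.
    with open(in_path, newline='') as fin, open(out_path, 'w', newline='') as fout:
        reader = csv.reader(fin, delimiter='\t')
        writer = csv.writer(fout, delimiter='\t')
        for row in reader:
            if row:
                row = row + ['%g' % (1.0 / float(row[temp_col]))]
            writer.writerow(row)
```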
No. However, the linear relationship (Eyring/Arrhenius) gives rise to sigmoidal dose-response curves:
This is the classic "missed events" problem -- an event is shorter than the rise time, so it is mis-identified or missed entirely. The MIL rates optimizer avoids this problem by applying a "dead time" (td) to the model and the events. It erases any events shorter than td, and optimizes the likelihood given that all short events are missed. Set td longer than the longest bad event. You can set td in the upper right. (click "td" to switch between [samples] and [ms].) You still see the erroneous short events, but they don't go into the MIL log-likelihood.
When two states are not connected, it is still possible to go from one to the other after one sample's time. The distribution of events is exponential, and some are much faster than the sampling rate; there could be several transitions from one data point to the next. This may be rather unlikely, but the total likelihood also includes the Gaussian probability of the current being of that color. In my experience the Gaussian probabilities dominate the result.
It may help to exaggerate the amplitude distributions -- increase the std.dev of closed, and maybe decrease the std.dev of the other colors. Also, slower rate constants will decrease the probability of jumping between unconnected states. In Idealize Properties, turn off "re-estimate" so it has to use your estimates.
After running MIL on data for individual concentrations of ligand (dose dependence), I find that although the forward rates (ligand-dependent steps) increase, they are not proportional to the magnitude of the change in ligand concentration. The backward rates, due to ligand dissociation, also change to a small extent (in an ideal world they should not).
Therefore I wish to enforce some kinetic constraints, but I am confused about which one to use. What I need to know is the function of 'fix rate', 'scale rate', 'fix exp', 'scale exp', etc., and which one is most suitable for me.
I know this is a subjective question but any help is appreciated! Also, is there anywhere else where I can read up on this to get more clarity on the subject?
One classic use of constraints is to model multiple independent binding sites. For example, three independent binding sites might look like this:
     3*a          2*a           a            c
C0 <------> C1 <------> C2 <------> C3 <------> O
      b          2*b          3*b           d
When three sites are available (C0) the chance of one binding is three times as much as when there is only one site available (C2). To maintain these proportions, add these constraints:
scale rates 1 2 2 3
scale rates 3 2 2 1
scale rates 1 2 3 4
scale rates 4 3 2 1
Each constraint reduces the number of free parameters by one, so there will be 8 - 4 = 4 free parameters, as expected (a, b, c, and d).
If for any reason you do not want to optimize a particular rate constant, you can hold it constant. To keep the rate from state 4 to state 5 constant, add the constraint:
fix rate 4 5
Exponential constraints (k1 -- voltage sensitivity) work the same way.
If your model has cycles, e.g. state 1 connected to state 2 connected to state 3 connected to state 1; you can require detailed balance by adding the constraint:
balance loop 1 2 3
QuB will keep the product of the forward rate constants equal to the product of the reverse rates. That would be consistent with equilibrium -- a loop driven by outside energy would not be balanced.
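To see what the balance constraint checks, here is a small standalone sketch (not QuB's internal code) that tests detailed balance around a loop of states:

```python
def loop_balanced(Q, loop, tol=1e-9):
    # Detailed balance around a cycle: the product of rate constants going
    # clockwise equals the product going counter-clockwise.
    # Q is a matrix of rate constants; loop is a list of state indices,
    # e.g. [0, 1, 2] for a cycle through states 1, 2, 3.
    fwd = bwd = 1.0
    n = len(loop)
    for i in range(n):
        a, b = loop[i], loop[(i + 1) % n]
        fwd *= Q[a][b]       # clockwise product
        bwd *= Q[b][a]       # counter-clockwise product
    return abs(fwd - bwd) <= tol * max(abs(fwd), abs(bwd))
```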
All these constraints represent assumptions. You can optimize with and without, and see if the constraint makes the likelihood worse.
In your situation, might there be more than one binding site? If we condense the five-state model above to three states, both k1->2 and k2->1 will be a messy function of a and b, with non-linear dependence on concentration. It could provide evidence for or against different connection schemes if the backward rates vary more or less with concentration.
Another way to limit your search, though not a "constraint," is to run MIL with data source: file list. Mark any concentration-dependent rates (check P). Right-click the MIL button, choose "file list," put a check next to any open data file to be included in the fit, and enter the Ligand concentration for each. MIL will find the best overall rate constants. Instead of a slightly varying backwards rate constant, you will see one, sort of a maximum likelihood average.
QUB Express can handle concentration steps within one file, if you record a trace (signal, a/d channel) of concentration alongside the current.
For more details, see appendix 2 of this paper: www.ucl.ac.uk/Pharmacology/dc-bits/colquhoun-biophysj-04.pdf
Following their approach, we can write each constraint system as a matrix equation Ax=B. Now form the matrix C by concatenating columns of A and B: C = [A | B]. Two constraint systems C1 and C2 are equivalent if C1 can be transformed to C2 by elementary row operations: en.wikipedia.org/wiki/Elementary_row_operations
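A sketch of that equivalence test with numpy: two systems are equivalent exactly when the augmented matrices [A | B] span the same row space, which can be checked by comparing ranks (the function name is mine, not QuB's):

```python
import numpy as np

def same_constraints(A1, B1, A2, B2, tol=1e-9):
    # Two linear constraint systems A x = B are equivalent iff the
    # augmented matrices C = [A | B] have the same row space, i.e. each
    # row of one is a linear combination of the rows of the other.
    C1 = np.hstack([A1, np.reshape(B1, (-1, 1))])
    C2 = np.hstack([A2, np.reshape(B2, (-1, 1))])
    r1 = np.linalg.matrix_rank(C1, tol)
    r2 = np.linalg.matrix_rank(C2, tol)
    r12 = np.linalg.matrix_rank(np.vstack([C1, C2]), tol)
    return r1 == r2 == r12
```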
If we have a model with two concentration-dependent binding steps (the receptor has two identical binding sites), how do we get k1 and k-2 to be multiplied by 2 (the statistical factor)? (Naturally k1 and k2 also have to be multiplied by the agonist concentration, but I think we are already doing that right.)
How can you link two rate constants so that they always stay equal, but both vary during the MIL fitting routine?
I have managed to successfully analyze my data from different experimental conditions using QuB. These results are definitely superior to those generated by commercially available software such as Tac and TacFit. I have compared them over many stretches of raw data.
The reason QuB appeals to me is that it's so useful for objective fitting of dwell-time histograms and direct modelling of data. Now, RyR is a channel which has not yet been fully characterized in terms of its detailed gating mechanisms. We know it's a homotetramer and has 4 symmetrical ligand binding sites. With this knowledge, how should I proceed with the modeling?
From the preliminary work I have done, there are 3-4 open and closed states each, but how will I know which transitions (connections between states) are the most appropriate? I know that this will have to be based on the physical reality of how the channels gate in practice, but is there any objective way of doing this?
Identical binding sites are usually modeled as a chain:
    4k           3k           2k            k
1 <-------> 2 <-------> 3 <-------> 4 <-------> 5
    k'           2k'          3k'           4k'
with all the binding rates proportional to each other, and maybe also with the unbinding rates. k12 = 4*k45 because there are four times as many available binding sites. Once you have set up these ratios, you can use Kinetic Constraints (Model properties) to "scale rate" between k12 and k23, k12 and k34, k12 and k45, k21 and k54, etc. This will preserve their ratios during MIL, and if you check the box "enforce constraints" it will preserve the ratios when you edit by hand.
It is also valid to optimize without constraints. If the unconstrained one has significantly higher likelihood, it suggests the binding sites are not independent. It's also valid to try different models -- especially if you have recordings at various ligand concentrations -- rates that aren't involved in binding shouldn't change too much across concentrations, if the connections in the model are right.
(I'm not an experimental scientist but...) People say, the fewer states the better. If you could e.g. saturate it so the ligands stay bound, and the histograms indicate one open and closed state each, then you could first solve the gating rates and go from there.
In QuB, microscopic reversibility is optional. By default it is ignored -- the rate constants can take any values. In model properties, kinetic constraints, there is a button to "Balance loops," which identifies the fundamental cycles in the model (>= 3 states) and constrains them so their product clockwise equals their product counter-clockwise. (In QUB Express, this button is in the Models table.)
With the constraint(s) added, you can right-click in the background of the model and choose "Enforce constraints." This changes the rate constants as needed to fit the constraints. To have the constraints enforced every time you edit a rate constant, check the box "Enforce constraints" in Model properties, under Rates. Algorithms such as MIL and Mac Rates will always honor the constraints.
Unfortunately, we can't constrain a rate to stay within certain bounds. The only constraints are Fix (hold constant) and Scale (constant ratio between two rates), along with detailed balance for loops.
It may help to fix all but one rate constant, so the program is fitting only one rate constant at a time. In QUB Express, right-click the rate constant, choose Add Constraint -> Fix rate. Or you could constrain the ratios between the three opening rates (scale a::b, then scale b::c, and a::c is implied). Then remove the constraints one by one.
Yes. Use the Model Merge function. It takes two or more source models, and generates the cartesian product. An example:
A <--> B   merged with   X <--> Y <--> Z   gives:

AX <--> AY <--> AZ
 |       |       |
BX <--> BY <--> BZ
Model Merge also generates constraints in the product model, so that k(AX, AY) == k(BX, BY), and k(AX, BX) == k(AY, BY), etc. You can selectively remove these constraints to allow for cooperative behavior.
Of course, it gets more confusing with several models.
QuB Classic: click "Model Merge" or "mmerge" on the right, under "Modeling" (you may have to click the header "Modeling" to make the Model Merge button visible), or alternatively use the Actions menu -> Modeling -> Model Merge. To remove constraints, right-click in the background of the model -> Properties..., Kinetic Constraints tab.
QUB Express: in the Models panel, click the menu icon next to the File menu ("Modeling Tools") -> Model Merge. To remove constraints, go to the Tables panel, Kinetic Constraints tab.
Regarding the missed events, how exactly does QuB deal with them? Can you lead me to a part in the website that refers to this? I've always understood that setting a Td (dead time) for the recording is enough to deal with it, but just wanted to make sure this is correct.
I have read somewhere that QuB uses a corrected form of the Roux and Sauve (1985) method to correct for missed events. Is this so?
For missed events, we say that events shorter than Tdead are not reliably detected. We join each short event with the prior event (and with the next event too, if it is of the same class), to get a record with no events shorter than Tdead. Then we compute the log-likelihood of the model generating the events, given that all short events were missed. The mathematics are based on Roux and Sauve, and are given in the appendix of the original MIL paper (Qin, Auerbach, Sachs 1996), linked from here:
although the formula has changed slightly, to the one given here:
(scroll down to QtoQe)
Basically, MIL derives modified rate constants, which govern the process as observed (with no short events). So, missed events correction is a property of MIL (MIL Rates). Each time you press the MIL button, it creates a copy of the idealization and erases its missed events, as above. It doesn't erase them on screen, so you can try a different Tdead without re-idealizing. In the Results window, MIL's histograms come from the corrected copy, and the PDF curves come from the modified rates.
"Idealize" has an option "apply dead time to idealization" which erases the events from the on-screen record. This may be useful, e.g. if the short events are all noise spikes, but if you later make the dead time smaller, MIL will give the wrong answer.
"Idealize" has another option "apply dead time to statistics" which computes Results such as lifetime (P open) using the corrected record.
QuB can do this. You record each voltage level in a separate file, open all of them, and idealize each one separately, then fit together.
Classic: add voltage sensitivity to the model, by editing a rate and checking the box "Q". "k1" is the voltage sensitivity, which can be a negative number. To fit all the files together, right-click MIL, choose Data Source: file list, put a check next to each file name ("Use"), and enter each file's voltage in the table. You could also use the file list for different concentrations of Ligand, using the "P" checkbox.
Express: Right-click a rate and choose "Voltage-sensitive", then enter a reasonable starting guess for k1, the voltage sensitivity constant (maybe +/- .05?). To fit all the files together, go to the Modeling:MILRates panel and select "multi-file". This will use data from all open files with the specified group number. Assign group numbers in the "Data" table.
I have a few single-channel recordings with three concentrations (0, 1 and 10 uM agonist/antagonist) from the same patch (i.e., 2 minutes without modulator, 2 minutes with 1 uM and 2 minutes with 10uM). I was wondering whether it is possible, with QuB, to fit simultaneously these three recordings with the same model to estimate the effect of the different modulator's concentrations on the rate constants, considering that the channel gates without modulator. And if it is, how is this done?
I actually have the values for the voltage sensitivities, which I calculated using the "Q" checkbox with recordings at different voltages under otherwise identical conditions. Could I then, since it's going to be the same model for the agonist fitting, introduce the values for Q and k1 from those calculations and, once I've identified the rates that are agonist-dependent, fit the recordings with different agonist concentrations (same voltage)? Would this allow the separation of k0 and k1?
Open all three, right-click MIL, and choose "file list"; check the box next to each file, and enter its "Ligand" in the table:
In the Modeling:MILRates panel, choose "Multi-data", and assign the "Group" of open data files in the Data table.
Of course, you have to know or guess which rate(s) are sensitive to Ligand. You'd mark them by editing that rate constant and checking the box "P". If you want, you can edit the name of the ligand, or use the default name "Ligand".
There is a trick to find which rates are sensitive to ligand: for each rate, check the box "Q" and rename "Voltage" to "LogLigand". Then in MIL properties, in the file list (QUB Express: in the Constants table), enter the natural log of the ligand concentration, e.g. log(10) ~= 2.3, log(1) = 0, log(0) = ? ~= -5? When you run MIL, the exponential rate constant k1 should get bigger for sensitive rates, and approach zero for insensitive rates. Then, when you've found which rates are sensitive, turn off the "Q"s and use "P" where appropriate.
The effect of voltage is the same on all three files, so any variations are due to Ligand (or LogLigand). Paths that are not sensitive, or which have k1 ~ 0, have the same kinetics in all files. Since voltage is constant, MIL can't separate k0 and k1; it puts the combined rate constant into k0. To estimate k0 and k1 separately, you'd need recordings at multiple voltages.
With the LogLigand trick, MIL should find that the agonist-dependent paths have k1=1.0, since
k = k0 * P = k0 * exp(1.0 * log(P)) = k0 * exp(k1 * Q), with k1 = 1.0 and Q = log(P)
I would expect k1 to take values other than 0 and 1, if the model has the wrong number of states, or the wrong connections.
The model will need multiple paths to an open state, and presumably multiple open states. To start, you could designate one path for voltage gating, and leave "Q" un-checked.
MIL can actually model agonist, antagonist, and voltage, all in one model. Just use different "Ligand" names for each reagent. The limitation (for now) is that, within one file, all variables must be constant, and entered into the file list.
Once you've identified the agonist-dependent rate constant(s), you can introduce the Q and k1 from before. However, at constant voltage, fitting can only make them worse (it can get the same effect by adjusting k0 or k1). To constrain a "Q" rate so the fitter holds it constant:
I am a PhD student working on single channel biophysics of the human cardiac ryanodine receptor Ca2+ release channel and feel that I would greatly benefit from using QuB to analyse my data. Under my experimental conditions, there are very few and brief channel openings and the 50% threshold method of event detection (Tac/TacFit) is not of much use and therefore QuB programs for idealization can come in handy.
I have managed to have a play around with QuB using my data traces and have somewhat idealized my data, but I'm not very sure about what I'm doing. I find the online instruction manual a bit too vast and beyond my league! I would be immensely grateful if I could have a gist of the essential steps that I should follow from the start till model building (and what the program does in the background).
QuB has been used successfully to analyse very similar data in your lab and I'm very interested in knowing how they were done.
1. Purohit et al., PNAS, Jan 6, 2009, vol. 106, 115-120.
2. Grosman et al., JGP, May 2000, vol. 115, 621-635.
They have a couple core techniques:
1. Circular re-estimation of idealization and rate constants:
To start the process, highlight a short section with a few events, right click in the model background and "Grab all amps." Then right-click "Idealize", make sure SKM and Re-estimation are turned on, and run it. Then alternate between A and B until everything is stable:
2. Burst/cluster modeling
Assuming you have more than one black state, one of them is longer-lived than the rest (it has small rate constants exiting). We'll call this the "interburst" state, and list the bursts of events which are separated by interburst events. Actually, we can't tell which state any specific event is in, so we pick a Tcrit, and events longer than Tcrit are assumed to be interburst. MIL reports a Tcrit, and draws it as a red line on the histograms. See also this answer.
To make the list of bursts, click "ChopIdl", enter Tcrit, and run. Now you have a "selection list" of bursts; to analyze just the bursts choose Data Source: list. Some possibilities:
3. "rate-equilibrium free energy relationship", DNA point mutations, etc... the more outside theory and information you can bring to bear, the more informative and reliable QuB's answers are.
Also see here.
QuB can estimate kinetic and conductance parameters of a markov model, from whole-cell (macroscopic) current recorded with any stimulation protocol (ligand(s), voltage, pressure), as long as you have the stimulus recorded as an analog signal along with the current. The algorithm is given here:
Milescu, L.S., G. Akk, and F. Sachs. 2005. Maximum likelihood estimation of ion channel kinetics from macroscopic currents. Biophysical J. 88(4):2494-2515.
I recommend trying QUB Express first; here are some videos of whole-cell analysis:
To analyze all files together, to get a single set of rate constants, use the MacRates panel's "multi-data" option (QUB Express) [QuB Classic: "file list"], which sums the log likelihood of all files with a certain Group number. Open all of the files first, then assign their Group number in the Data table.
It generates successive events at infinite resolution, with a dwell length that is exponentially distributed with mean 1/(sum of exit rates), followed by a transition to another state, picked at random, weighted by exit rate. Rate constants are recalculated whenever the stimulus changes. Blatz and Magleby were focused on reconstructing an imperfect filtered, thresholded event record, whereas QuB just samples the record and adds Gaussian noise.
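For the curious, the event-based method can be sketched in a few lines (a simplified stand-in, not QuB's simulator; constant stimulus, no sampling or noise):

```python
import numpy as np

def simulate_events(Q, t_total, start=0, seed=0):
    # Event-based simulation at "infinite" time resolution: each dwell is
    # exponential with mean 1/(sum of exit rates), and the next state is
    # picked at random, weighted by exit rate. Q is a matrix of rate
    # constants in 1/s (diagonal entries are ignored here).
    rng = np.random.default_rng(seed)
    n = len(Q)
    t, s = 0.0, start
    events = []                                   # (state, dwell) pairs
    while t < t_total:
        others = [j for j in range(n) if j != s]
        exit_rates = np.array([Q[s][j] for j in others], dtype=float)
        total = exit_rates.sum()
        dwell = rng.exponential(1.0 / total)
        events.append((s, dwell))
        t += dwell
        s = others[rng.choice(len(others), p=exit_rates / total)]
    return events
```

For a two-state model with rates 10/s and 5/s, the simulated mean dwells come out near the expected 0.1 s and 0.2 s.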
When a rate is extremely fast compared to the sampling frequency, that method can be very slow due to simulating millions of short events per sample. When we detect a fast rate, we switch to an A matrix method, using the sampled transition probability matrix A=e^(Q*dt). Starting in state s, we pick the next state randomly, weighted by the entries in row s of A.
In QuB Classic, there is an option for "macroscopic" simulation, which simulates mean current from the sum of exponentials.
There are two other significant differences:
Thanks to open-source licensing, we have incorporated the HJCFIT likelihood function as an option in QUB Online, so you can compare them directly. (In "Single Molecule" mode, click "Optimize", "Show advanced settings", "use HJCFIT likelihood when available".) *When available: if your model has more than two colors (black/red), we'll have to use MIL.
I'm trying to do a dwell-time analysis on a recording. I've idealized it and done a MIL rate estimation, which has produced dwell-time histograms and what I take to be the probability density functions superimposed on top.
What is the equation of the pdfs? I see the Tau and Amp values for each curve but I'm trying to work out what equation they correspond to.
I need to find out what the mean open and closed times are, and assume that I will be able to recover them from the equations. I also want the equations so that I can plot the fits in other software.
I see from the MIL web page that PDF(t) = -Amp/Tau*exp(-t/Tau), but I don't see how a single exponential term can fit the peak on my histogram.
This page derives MIL's pdfs from Q, the matrix of rate constants. Q[i,j] is the probability per second of a transition from state i to state j; Q[i,i] is chosen so each row sums to 0. Essentially, each plotted pdf value is the integral of the exponential distribution over the width of the histogram bin.
The pdf for one class (open or closed) is the sum of exponential components, one component per state. If you have a two-state model, C <-----> O, the open pdf has one component, and its Tau is the mean open time. (The closed pdf also has one component.) It sounds like your histogram has more than one peak, so you might rather run MIL with two open states, or fit with two exponential components, to get a mean short-opening time and a mean long-opening time.
Be aware that MIL's histograms and pdfs are both modified by missed-events correction. Events shorter than td (upper-right) are joined with their prior neighbor for binning purposes, and the pdfs are derived from "effective" rate constants (given that short events are not detected).
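If you want to plot the pdf or compute mean dwell times in other software, a minimal sketch from the Amp/Tau pairs (assuming the Amps sum to 1; the function names are mine):

```python
import numpy as np

def dwell_pdf(t, amps, taus):
    # Mixture-of-exponentials dwell-time density:
    #   pdf(t) = sum_i (Amp_i / Tau_i) * exp(-t / Tau_i)
    # where the Amp_i are the component areas and sum to 1.
    t = np.asarray(t, dtype=float)
    return sum(a / tau * np.exp(-t / tau) for a, tau in zip(amps, taus))

def mean_dwell(amps, taus):
    # The mean dwell time of the mixture: the Amp-weighted sum of Taus.
    return sum(a * tau for a, tau in zip(amps, taus))
```

For example, with Amps (0.3, 0.7) and Taus (1, 5) ms, the mean dwell time is 0.3*1 + 0.7*5 = 3.8 ms.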
For traditional histograms and fitting, try the Histogram window (View -> Histogram). The third pane (green) makes duration histograms. These histograms also come from the Data Source, so make sure to choose "file" if you want a whole-file histogram. To pick closed (class 0) or open (class 1), right-click for properties. Then click the green button to update the histogram.
To fit the histogram to exponential pdfs, right-click the histogram, choose "Curve fitting." For "Curve," choose "Exponential (log x bin)." To fit the parameters "Amp" and "Tau," check those boxes, and un-check the parameters "LogBase" (since the histograms use log base 10), "Base" (no baseline offset), and "x0" (no x offset), and make sure they're 10, 0, and 0 respectively. Then click "Fit."
After closing the fit window, you can copy or save the histogram as a table of numbers, by right-clicking it and choosing one of the "Copy..." or "Extract..." items.
To run a script at startup:
$ qub-express -s my_startup_script.py arg1 arg2
The arguments are available in global var "args":
>>> print args
['arg1', 'arg2']
If the script needs to wait, e.g. for a file to finish loading, there are a few options:
Instead of calling
>>> myfunc(arg1, arg2)
write
>>> gobject.idle_add(myfunc, arg1, arg2)
or
>>> gobject.timeout_add(1000, myfunc, arg1, arg2)  # in milliseconds
>>> doze(1.0) # in seconds
In general, the way to save a figure in QUB Express is via its Notebook menu. I just fixed some bugs in the Data's notebook, so make sure to 'git pull' or update to version >= 0.12.1.
To save the onscreen data and model-fit curve, click the notebook at bottom of data, and choose Save -> Data Table, or if you prefer, Save -> Pick Columns... (there's an option to add the column selection permanently to the Notebook menu).
To save a large or arbitrary selection of data with its model-fit, use Tools:Extract.
Both of these should be recordable in the Admin:Scripts panel.
I thought I'd throw out an idea I've had for a while: I think it would be helpful for QUB Express to be able to display both the recorded current and simulated current in terms of conductance or % activation (relative to the reversal potential and total conductance), rather than just current.
I think these plots would really help in understanding how the model is behaving, make it easier to determine how many channels or how much conductance is required, and even help pick up on problematic portions of recordings. A few weeks ago I had a recording with a section displaying negative conductance due to a subtraction error; this error would have been obvious when looking at a conductance graph.
Along these same lines a state occupancy graph would also be helpful in understanding how the model is behaving. I want to track the specific state occupancies so I was looking for a way to access the Pt matrix. They're calculated in qubx_model.cpp but I'm not sure how to access the model object they're being stored in.
In the global namespace (qubx.global_namespace), the onscreen qubx.model.QubModel object is QubX.Models.file (the qubx.modelGTK.ModelView object is QubX.Models.view). The model-related tables are QubX.Models.file.states, QubX.Models.file.classes, QubX.Models.file.rates, and QubX.Models.file.constraints_kin. They are indexed by [row_index, field_name], e.g.
k0 = QubX.Models.file.rates[0, 'k0']
See qubx.table for more. Matrix-related functions in qubx.model include:
ModelToP0(QubModel): returns the vector[Nstate] of constant entry probabilities from the States table ("Pr"), normalized to sum to 1.0, as a numpy.ndarray.
ModelToQ(QubModel, stimulus=None): returns the numpy.matrix (Nstate x Nstate) of rate constants, with diagonals such that each row sums to zero. Off diagonal entries are calculated k = k0 * Ligand * exp(k1 * Voltage); if any rates are ligand- or voltage-sensitive, provide optional arg stimulus: a dict(stimulus name -> value), or it will simply use k = k0.
QtoA(Q, dt): returns the sampled transition probability matrix (Nstate x Nstate) after dt seconds.
QtoPe(Q): returns the numpy.array (Nstate) of equilibrium occupancy probabilities.
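As an illustration of what QtoPe computes (this is a standalone numpy sketch, not QUB's implementation), the equilibrium occupancies are the left null vector of Q, normalized to sum to 1:

```python
import numpy as np

def Q_to_Pe(Q):
    # Equilibrium occupancies: the vector p with p @ Q = 0 and sum(p) = 1.
    # Solved by least squares with the normalization appended as an
    # extra equation.
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

For a two-state model with opening rate 10/s and closing rate 5/s, this gives occupancies (1/3, 2/3), as expected from the ratio of the rates.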
You can add an artificial signal computed from existing signals:
Current / (Voltage - (-60))
You can edit the "computed as" field in the Signals table at any time. To remove the computed signal, delete that row from the Signals table.
Alternatively, you can define an artificial signal using Tools:Extract, Signals: Custom. Pro: faster, since the function is calculated once and saved to disk. Con: you must save a copy (extract) of the original data file.
At minimum, the function must operate on individual floating point numbers. It can optionally achieve higher performance by operating on arrays as well. The reason relates to our use of numpy arrays, which keep numbers in native "unboxed" form. It can be very slow to convert each array element to a python number object, multiply by other python numbers, then write the result into an array. Numpy avoids this by providing vectorized functions implemented in native code, e.g.
my_array *= 2
can be 100x faster than
for i, x in enumerate(my_array):
    my_array[i] = 2*x
In addition, numpy functions such as log, exp, pow, sin, etc... can take numbers or arrays as arguments. See http://docs.scipy.org/doc/numpy/reference/ufuncs.html. As a result,
def f(x):
    return 2*numpy.exp(-x / t)
is valid when x is a number or when x is a numpy.array. (If x is an array, it returns an array with the same dimensions, computed element-by-element.)
For more complicated functions, you may have to test whether x is an array:
is_array = hasattr(x, 'shape')
If your function does not work with arrays (raises an exception), I believe the program falls back and calls it on each element.
I think the net charge is the integral of current over time: charge = integral from a to b of I dt. Or in discrete computer terms: charge = sum(I[i] * delta_t). Using the mean value theorem for integrals, or the fact that mean(I) = (1/N)*sum(I[i]), we can substitute to get charge = N*delta_t*mean(I) = duration(in seconds)*mean(I).
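In numpy terms (a hypothetical helper, assuming a uniformly sampled current trace):

```python
import numpy as np

def net_charge(current, sampling_hz):
    # Net charge = integral of current over time ~= sum(I[i]) * delta_t.
    # By the definition of the mean, this equals duration * mean(I).
    dt = 1.0 / sampling_hz
    return np.sum(current) * dt
```

For a constant 2.0 pA over 1 second, this returns 2.0 (pA*s), matching duration*mean.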
You can customize QUB Express's measurement script to calculate this and other quantities:
measured['%s Mean' % signal.name] = samples_included.mean()
measured['Net %s' % signal.name] = measured['Duration']/1000 * measured['%s Mean' % signal.name]
Now if you have a signal named "Current", "Net Current" should be included in all of QUB Express's measurements in the future.
Of course, it's important to set the baseline correctly first.
In QuB Classic, it's not automatic. You can read the "Avg" and "Length" from the data tooltip, then type them into the Calculator field of the Report window; e.g. with Avg=2.0 and Length=1.0: type "1.0/1000 * 2.0" to get the answer.
Short answer: No. Anything you can do with QUB Express, you can do with QUB. However, QUB Express might require fewer clicks. See this video:
(I show all the steps to make the same simulation in QUB Express and QUB)
You can use the Python Scripts window for custom analysis of idealized data. Here is a basic example which lists all events from the Data Source:
idl = QUB.getChosenIdealization()
for dataset in qubtree.children(idl, 'DataSet'):
    for segment in qubtree.children(dataset, 'Segment'):
        classes = segment['Classes'].data
        durations = segment['Durations'].data
        n = segment['DwellCount'].data
        for i in xrange(n):
            QUB.Report("%i: class %i for %f ms" % (i, classes[i], durations[i]))
The output is shown in the Report window. (The Report window is faster at handling many lines of output, so it's preferable to use QUB.Report instead of the builtin print function.)
You might report only lines which match your criteria, e.g.
...
for i in xrange(n):
    if ( (1 <= i < (n-1))            # not the first or last event
         and (classes[i] == 0)
         and (classes[i-1] == 1)
         and (classes[i+1] == 2) ):
        QUB.Report("%i\t%f" % (classes[i], durations[i]))   # class<tab>duration
More extensive scripting, such as modifying the idealization, can also be done.
When you Idealize or "chop" idealization, it generates measurements into a table e.g. Segments or List. Then you can plot some per-segment value such as Popen (occupancy probability of the open class) vs. segment index, and apply color(s) to that plot, in order to exclude outliers or break the data into multiple populations.
Using QuB (classic):
To analyze just the segments you've colored red in "Results: Select", right-click the red square on the left edge and choose "Make selection list..." It will make a list of only the red segments. Then choose Data Source: list and continue analyzing with the new red list.
Using QUB Express:
"Select" is renamed Figures: Charts. In version 0.9, it can plot info from the Segments or List table. (Chop is in the Lists menu, at upper-right of the Data.) After coloring points from the Segments table, use Tools: Extract with Source:other, Segments: group 1, to create a new data file with only the red segments, then continue analyzing the new file.
If you've used Figures:Charts to color some points from the List table, look at List in the table view, and click "Copy sels to list," which will make a new list with only the red selections. Then use Tools:Extract with Source:list to create a new data file, and continue analyzing that file.
I'm having an issue modelling a set of traces. The activation spike is quite short lived, 0.005 s after the voltage step the current has already come down by half, yet the QUB approximation of the voltage doesn't change until 0.01 s after the voltage step, missing almost all of the current. As a result the model has no stimulus to respond to and is unable to get a fit.
Does anyone know why this might be happening? Is there any way to change this behaviour and simulate the voltage more accurately?
First, install XCode from the App Store. Then open the Terminal and install Homebrew:
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Homebrew's default gtk+ works with an X server, but we want native Quartz graphics, so we use specific taps:
$ brew install marcsowen/gtkquartz/pygtk gsl homebrew/python/scipy boost
As of this writing, the marcsowen/gtkquartz tap is not compatible with osx 10.10. To make it compatible, go to /usr/local/Library/Taps/marcsowen/homebrew-gtkquartz/ and edit two files: In gtk+.rb, change the 'url' on line 5 to version 2.24.25, and in pango.rb, change the 'url' on line 5 to version 1.36.8. Also delete the 6th line of both files (the line starts with 'sha256'). Then remove and reinstall both recipes:
$ brew remove gtk+ pango && brew install marcsowen/gtkquartz/gtk+ marcsowen/gtkquartz/pango
To select the right compiler, add these to the ".profile" file in your home directory:
export QUBX_CCP="/usr/local/bin/g++-4.9"
export QUBX_PYTHON="/usr/local/bin/python2.7"
and restart your terminal, or type "source ~/.profile". The system should be ready for QUB Express:
$ cd qub-express
$ make clean
$ make all
To make an app (bundle) and disk image in the dist folder:
$ make bundle
$ make dmg
I wanted to know if I could derive some sort of thermodynamic expression from my kinetic schemes (from rate constants). Will the Energy landscapes function in QUB Classic help me do this? If so, what is the concept behind it? I am dealing with a C-----O scheme and sometimes a C1------O1---------O2-----------C2 gating scheme. All are equilibrium reactions and no ligand binding/unbinding is involved (purely gating).
The energy landscapes look OK qualitatively but can I get any quantitative meaning out of them? Is it in any way related to REFER?
It is the same mathematics used in REFER, e.g.
How to Turn the Reaction Coordinate into Time Anthony Auerbach
So the parabolic shape is somewhat arbitrary. The most certain thing is the relative height of the energy wells (states). As given here:
The difference in free energy from state i to state j is delG = -RT ln(Kij/Kji). To compute barrier height, we have to split the exiting rate constant K into K = A exp(-Ea/RT), so Ea = RT( ln(A) - ln(K) ). In QuB, "A" is the "reference rate", which you choose (arbitrarily) to be larger than any possible K.
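As a quick numeric check of these two formulas (the value of RT and the function names are our choices, not QuB's):

```python
import math

RT = 0.593  # kcal/mol at roughly 298 K; pick units to taste

def delta_G(k_ij, k_ji):
    # free energy difference from state i to state j: delG = -RT ln(Kij/Kji)
    return -RT * math.log(k_ij / k_ji)

def barrier_height(k, A):
    # Ea from K = A exp(-Ea/RT), with A the arbitrary reference rate (A > K)
    return RT * (math.log(A) - math.log(k))
```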
A talk on REFER (Rate-equilibrium free energy relationship): videocast.nih.gov/summary.asp?Live=2397
There is no definitive answer, just a bunch of possibilities. You can refer to Biophysical Journal, Volume 92, Issue 2, 15 January 2007, Page 696 or PNAS vol. 102 no. 5 Anthony Auerbach, 1408-1412 for a non-standard interpretation of phi.
A rate constant is A*exp(-dG+/kT). dG+ is the height of the TS barrier (usually assumed to be the point of the parabola intersection); k and T have their usual meanings. A is the 'prefactor', and is the bugger. There are several things in there, none of which are measurable. It's a black box, so people usually assume that it is constant, usually 1, and just ignore it. In the above 2 references I decided to put ALL of the rate constant change into A and leave dG+ constant. It is a possibility, but cannot be proved.
Here is a guide:
by Dr. Prasad Purohit
I have used QuB for the first time in several years on a new Fujitsu notebook running Windows 7 64-bit. Whenever I now try to simulate single channel currents, the programme generates what look like idealised traces. The sampling rate was set at 10 kHz.
This looks like a scaling issue. QuB stores data as integers, so it's important to choose a good "scaling" factor:
stored_int = round(pA * scaling)
To adjust the scaling, right-click "Simulate" and choose "new data file." After clicking OK (Simulate), scaling is in the next dialog (new data file options).
Our newer program, QUB Express, uses floating-point formats to avoid most scaling issues.
You get confluent eigenvalues when, for example, you have multiple rates with identical values. That's the most common situation I encountered. It mostly happens because we generally initialize models that way (e.g., all rates are 1000 s^-1).
The std values are only meaningful in the last iteration, when they are actually calculated. Otherwise, they are junk in memory, and reported as NAN by the Format routine. At least I think so.
Is there a way to fix the bin-width and bin-position when analysing the amplitudes? Is there a way to extract the not normalized histogram?
I need to compare events which contain several / different sublevels per event, thus I want to compare the different histograms. The same scaling would help a lot.
The "Fix Noise" option is not used anymore in IDL/Base. Instead, one can apply amplitude or noise constraints using the constraint editor in the model.
To access the constraint editor, RightClick on the model, choose "Properties", and select the "Amplitude Constraints" panel.
The amplitude and noise of the closed states ("black") are regarded as a reference. The amplitude and noise of open states (other than "black") can be seen as absolute values, or as "excess" relative to the reference values. Thus, the constraints can be formulated in terms of absolute, or excess values. By definition, the excess amplitude and noise of the closed states are both zero.
The constraints available in the editor have the following meaning:
Fix exc Amp/Var:
Fix diff exc Amp/Var:
Scale exc Amp/Var:
Scale diff exc Amp/Var:
For example, when idealizing data with 3 subconductance levels that are assumed to be of equal amplitude, one can use a set of two "Scale diff exc Amp" constraints:
2 1 1 0
3 2 2 1
With these constraints, the re-estimation of amplitudes will make sure that the difference between "black" and "red" (the first level, 0-1) is the same as between "red" and "blue" (the second level, 1-2), and the same as between "blue" and "green" (the fully open level, 2-3).
Here is a trick: change the colors in the model so all of the in-burst states are red, and all the inactive states are black. Now the simulation will be "open" for the whole burst, and you can look at the open-duration histogram.
When you simulate, make sure to check "Idealized data: Class". Then choose Data Source: file, and go to the Histogram window. Right-click the third panel and choose "properties." Select class "1" (sorry, in this window 0==closed/black, 1==open/red) and adjust Bin count and log [log10 x axis]. Then close the properties and click the green button to draw the histogram.
Right-click the histogram to copy to the clipboard, or to Extract it to a text file or excel spreadsheet.
When you're working with real data, you can use a similar trick. First idealize. The next steps will modify the idealization, so you might want to save and work on a copy. Next, find the bursts using ChopIdl, e.g. terminating with a dwell in class 1 longer than Tcrit ms. (To get an estimate of Tcrit, run MIL with a 2-state model and "show iterations.") Now there is a selection list of "chopped events"; choose Data Source: List, click "EditIdl", and "change class 1 events to class 2."
At this point the idealized line is "open" for the whole burst. Use the Histogram window, as above, to see the open-duration histogram.
First, Idealize your data.
Then, create a list of bursts. To create it by hand, use the Actions under "Lists": First "Add List", then for each burst, highlight it and "Add Sel".
To quickly create a list of bursts, use ChopIdl. It will list all of the data regions whose closed events are shorter than Tcrit.
When you have a list of bursts, choose Data Source: list, and run "Stat" (Actions -> Modeling). In the results window, go to the "Segments" tab, and look at the column "interburst". Alternatively, go to the "Select" tab, and "show var..." -> 'interburst'. Interburst is the number of milliseconds since the end of the last burst. If there was no prior burst in the same data segment, 'interburst' is the number of milliseconds since the beginning of the data segment.
I generally find the idealization by SKM to be quite good.
However, sometimes SKM doesn't detect an "event" that I might want to call an event, or detects an event, but gets the duration of the event wrong. I would like to be able to manually insert new events and be able to alter the duration/amplitude of events detected by SKM. Does QUB allow you to make these kinds of manipulations easily?
Two basic operations are in the data's right-click menu:
Join Idl -- extend the event which is at the beginning of the selection across the entire selection
Delete Idl -- remove idealization from the selection
You can do more with the "Edit Idl" button (under Preprocessing on the right). For example, to insert several open events quickly:
I think maybe your data lost some precision on the way into QuB. Was the input file also text? When you open a text file, one of the numbers it asks for is "scaling." For historical reasons, QuB represents data internally as integers, and converts them to real numbers by
real = int / scaling
So when you enter the scaling factor, it effectively quantizes the data to increments of (1/scaling). Next time you open the input file in QuB, click into the "scaling" box; a calculator will appear with the maximum possible value and the resolution. Hopefully if you change the resolution to 0.01, Extract will work properly. Please let us know if it doesn't.
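The round trip through the integer representation can be sketched as follows (store and load are hypothetical helper names, not QuB functions):

```python
def store(pA, scaling):
    # QuB's internal representation: stored_int = round(pA * scaling)
    return int(round(pA * scaling))

def load(stored_int, scaling):
    # real = int / scaling, so values are quantized to 1/scaling increments
    return stored_int / float(scaling)
```

With scaling = 100 the resolution is 0.01 pA; any finer detail in the input is lost at store time.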
Extract the simulated recording ('Ext' button) into ASC format without checking any of the options of the 'output format' box and checking the box 'Join Segments' ('Only active channel' also checked) - in the window that appears next I select '4 bytes' and check 'Float' (although this is only for displaying the ASC data in QUB again, I think). This generates a TXT file.
Open the TXT file using the pClamp software - the program I use is called Clampfit and the version is 9.2.0.09 (I hope all this doesn't change with the version). In the window that appears next I check 'Gap-free' and 'Add a time column', assigning the correct sampling interval value. This imports the recording into Clampfit so that we can see the same recording we had in QUB now in Clampfit's window.
Last, save the recording as (File/Save as... option) an ABF file.
To build a list of clusters:
To join them together in a new data file:
To see amplitude and duration histograms for the listed clusters:
For duration histograms, you have to Idealize the data first (click Modeling:Idealize).
We noticed that the idealization is exquisitely sensitive to the amplitudes and standard deviations selected in the model and can change the number of states required. We have come across several different methods for choosing amps/std dev to put in our model and were wondering which one is best.
We can use:
What do you recommend?
As you have seen, we have written copy and paste for some types of files, but not all of them. The answer, for sampled data like IBW, is to change the file into a supported format like QDF. In Qub, "Extract" the whole file to QDF format, and copy/paste using the new file.
For idealized data you'll need a text editor like notepad. In Qub, do "File->Idealized Data->Save...", then open that file using notepad.
The scaling factor has units of integers per volt. QUB calculates this from your acquisition setup -- the integer range, e.g. 16 bits = 32768, divided by the voltage range, e.g. +10 to -10 = 20.
In addition, there is a second scaling factor, for each A/D channel. This one has units of volts-per-unit, and is sometimes called the "calibration factor." So for example, if a 3V signal from the amplifier means 1pA, the calibration factor is 3.
You can change scaling and calibration factors after the fact, in data properties. You can set up calibration factors for future experiments in Options -> Acquisition Channels, or with Options -> Calibration Assistant.
When you create a new file for acquisition, it totally ignores the user-input scaling and writes the correct value for your acq setup. After that, you can change it and QUB won't know better. I think all files acquired on the same system should have the same scaling (not necessarily the same calibration), so it should be easy to check and fix.
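Putting the two factors together (the function name is ours, not part of QUB):

```python
def stored_to_physical(stored_int, scaling, calibration):
    # scaling: integers per volt (e.g. 32768 / 20 for 16 bits over +/-10 V);
    # calibration: volts per physical unit (e.g. 3 V per pA)
    volts = stored_int / float(scaling)
    return volts / calibration
```

For example, with a 16-bit converter over +/-10 V, scaling = 32768 / 20.0 = 1638.4, and a stored integer near 4915 corresponds to 3 V, i.e. 1 pA at a calibration of 3.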
The beta distribution is here:
I can happily add equations to Fitness in general, but the beta distribution requires implementation of the gamma function. When I enter "gamma(a)" into the curve box... it is in red, and I am presuming not recognised. I have installed Python 2.7 which includes that function in its library... but it is still not recognised?
from scipy.special import gamma
from math import *

def after_initialize():
    pass

from scipy.special import gamma
After editing the script, we can type in the Beta function:
gamma(a+b)/(gamma(a)*gamma(b)) * (x**(a-1)) * ((1-x)**(b-1))
Note: they later modified the gamma function, "to add some Gaussian noise and normalise for the 'width' of the data (Beta distributions only run from 0 1, I think)":
ampC*gamma(a+b)/(gamma(a)*gamma(b)) * ((x/width)**(a-1)) * ((1-(x/width))**(b-1)) +ampB * exp( -(x/width - Mean)**2 / (2 * Std**2) )
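Outside of Fitness, the basic Beta formula can be checked in plain Python, since the standard library's math.gamma is available in Python 2.7 and later:

```python
from math import gamma

def beta_pdf(x, a, b):
    # the Beta density entered into the Fitness curve box
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)
```

A sanity check is that the density integrates to 1 over (0, 1) for any a, b > 0.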
We can write a function to scale one x value:
def scale_x(x): return (x-Fitness.data.xMin) / (Fitness.data.xMax - Fitness.data.xMin)
From the Python menu, choose Environment, and paste the function onto the end of the startup script. Then press Reload. Now, in your formula, replace all the "x" with "scale_x(x)", for example: "a*x**2 + b*x + c" becomes "a*scale_x(x)**2 + b*scale_x(x) + c".
Of course, it will be slower, because it will call scale_x for each data point, each iteration.
I've been trying to replicate the model's behaviour under Matlab and am having trouble understanding how the models run under certain conditions.
My understanding is that the rate matrix Q is multiplied by the state probability vector P to get the change in distribution:
dP/dt = P*Q
However, if one has k0=10 and k1=0, then q_ij = 10*e^0 = 10. With any q_ij > 10 it's easy to get an element of P > 1, even with dt=0.001; and if k1=1 it doesn't take much to get a large q_ij again.
So how does QUB deal with these scenarios, or am I missing a step somewhere?
Essentially, the rate constants have units of probability per second, so they must be integrated to yield transition probability. See equations 2-4 in this paper:
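One way to see this (a sketch, not QUB's internal code; the 2-state model and the rates k_co, k_oc are made up): integrating dP/dt = P*Q over a sample interval gives P(t+dt) = P(t)*exp(Q*dt), whose entries are genuine probabilities in [0, 1] no matter how large the rates are.

```python
import numpy

def expm(M):
    # matrix exponential via eigendecomposition (fine for a diagonalizable Q)
    w, v = numpy.linalg.eig(M)
    return (v * numpy.exp(w)).dot(numpy.linalg.inv(v)).real

# hypothetical 2-state model C <-> O with rates in s^-1
k_co, k_oc = 100.0, 50.0
Q = numpy.array([[-k_co,  k_co],
                 [ k_oc, -k_oc]])

dt = 0.001                       # seconds
A = expm(Q * dt)                 # transition probabilities over one dt
P0 = numpy.array([1.0, 0.0])     # start in C
P1 = P0.dot(A)                   # still a probability vector
```

Each row of A sums to 1 because each row of Q sums to 0, which is what the Euler step P + dt*P*Q only approximates.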
I was wondering if there is a way to import a "table" of data analysed by a "commercial" program. I'm very interested in making all my analysis with QUB, but I have several single channel data files done under this commercial program.
The program is TacFit by Bruxton Corporation. I have several data files analyzed with this program, but now I want to make all the analysis with QUB, so I'm looking for a way to save or export these tables of durations and states. I can save data as a table like this, but that's all:
WAVES /D Sweep, Transition, PreAmplitude, PostAmplitude, Level, TagValue
BEGIN
2 47.872479 2.4789430E-012 7.4789430E-012 1 0.000000E+000
2 47.872848 7.4789430E-012 2.4789430E-012 0 0.000000E+000
2 47.874586 2.4789430E-012 7.4789430E-012 1 0.000000E+000
2 47.874683 7.4789430E-012 2.4789430E-012 0 0.000000E+000
2 47.988804 2.4789430E-012 7.4789430E-012 1 0.000000E+000
2 47.988911 7.4789430E-012 2.4789430E-012 0 0.000000E+000
2 48.008335 2.4789430E-012 7.4789430E-012 1 0.000000E+000
2 48.008452 7.4789430E-012 2.4789430E-012 0 0.000000E+000
2 48.014088 2.4789430E-012 7.4789430E-012 1 0.000000E+000
END
Since QuB Classic 126.96.36.199 and QUB Express 0.9.0, we can read intervals/dwells/events from TacFit tables. Some notes:
Hi, I want to know how I could test, for a set of kinetic models, which of them better explains my experimental data by evaluating whether there is any correlation between openings and closings. I mean, if I have data which could be fitted by these two models:
Model 1: C1<--->C2<--->O1 or Model 2: C1<--->O1<--->C2
Is there any way to distinguish between them, for instance, an analysis to test the "correlation" between C1 and O1?
I was wondering how much is a good number to set in MIL for "Max iter."?
I got a complex model with 3 closed and 3 open states without agonist or voltage stimulus. When I set the Max iter number to 10 and run Mil, I get strange constants with very high SD. But if I increase the Max iter higher than 10 (50 or 100), the program suggests very strange rate constants for some transitions (1e-5 or slower) or too fast (1e+6 or faster) for other transitions. How could I interpret this?
This is a difficult question...
The simplest answer is to go to the open state's properties, and make Std smaller, then re-Idealize.
The rate constants also have an effect on idealization. In particular, fast opening and closing rates cause it to detect more short events, so you could try smaller rate constants. Some people idealize first with a simple model, use MIL to find a more likely model, then use that model to re-idealize.
The re-estimation options in Idealize can also help. "Fix kinetics" locks in your rate constants. "Fix std", class 2, will lock in your custom open std.
There are yet more options in Idl/Base (idealize with optional baseline correction) -- try various "re-estimation" and "most likely sequence" options, and be aware the "Fix" settings have no effect.
Due to finite rise times, events shorter than some threshold can't be detected reliably. The MIL likelihood algorithm uses a "dead time" -- events shorter than Tdead (td) are discarded, and the math is adjusted. This happens invisibly each time you click MIL Rates, so you can adjust td (at upper right) and see how MIL behaves with different dead times. You will still see the short events on screen, unless you lock in a particular dead time and permanently discard the short events, by Idealizing with the option "apply dead time to idealization." So if the false events are shorter than the dead time, which is typically 2 to 3 samples, then you can ignore them because MIL ignores them too.
Of course if they are rare enough you can 'J'oin them, but that can be a lot of work.
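To see what applying a dead time amounts to, here is a simplified sketch (the function name and merging rules are ours; QuB's missed-event handling is more careful than this):

```python
def apply_dead_time(classes, durations, td):
    # drop events shorter than td by absorbing them into the previous
    # event, then merge same-class neighbors; a simplified sketch only
    out_c, out_d = [], []
    for c, d in zip(classes, durations):
        if d < td and out_c:
            out_d[-1] += d            # too short: absorb into previous event
        elif out_c and out_c[-1] == c:
            out_d[-1] += d            # merge same-class neighbors
        else:
            out_c.append(c)
            out_d.append(d)
    return out_c, out_d
```

Total duration is conserved; only events shorter than td disappear from the dwell list.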
QuB can't do column arithmetic or save as ABF. However, it can read a text file from Origin, detect events (idealize), and do further analyses such as histogram fitting and maximum likelihood modeling.
I seem to have 4 distinguishable closed states consistently in my data but not sure which Tcrit to choose.
I have gone through individual bursts and found a ball-park value of Tcrit; this value seems to be close to the one I get for the 2nd to 3rd transition in MIL results (the Tcrit value in the second column).
Again, I had ignored any burst which had less than 3 open events (Chop idl), but I feel this is flawed, as even one (apparent) event may consist of bursting activity which is not picked up due to limitations in the rise time of the amplifier, sampling rate, filtering, etc. - do you think this is right?
It depends on which rate constants you are interested in. If it's just the one fast transition between closed and open, you can choose the shortest Tcrit, and model the bursts with a 2-state model. Or you can pick a longer Tcrit, and use a model with more states.
You're probably right about keeping the bursts with only a few events. We have a lot of tools in QuB for discarding various kinds of data, but it's up to the scientist to decide when it's appropriate, and why.
What we wanted to do was to select "length" (after chopping the file) and use k-means to determine the number of groups that burst lengths fell into and then determine the mean of each group. We estimated that there would be 4-5 burst length groups per condition. To be sure we used the "k-means random" function with k=7 and tested all <= k for 25 iterations. We analyzed the output values and chose 5 groups (k=5). This felt like a somewhat arbitrary choice though. What do the numbers in the blue report represent after k-means is run? There is always a large number (sum squared deviation?) and then a bunch of numbers separated by commas in brackets.
Once we chose 5 groups, we re-ran the file with k=5 for 25 iterations and recorded the mean for each group (we are fitting >1200 lengths). What we noticed however was that if we re-ran the k-means at k=5 for another 25 iterations that it produced different numbers. We tried again, and again a different result. We also tried this for other variables such as "openings per burst" (nevent2) and noticed a similar amount of variability. Are we misusing k-means? I was under the impression that k-means output the "best" grouping after all the iterations are complete. Is this the case?
The numbers in the blue report are sum squared deviation, followed by the color of each datapoint in the best trial. I don't know how to definitively choose K, but I have heard the suggestion to plot K vs SSD and look for the inflection point.
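As an illustration, here is a minimal 1-D k-means with random restarts (our own sketch, not QuB's code); note that fixing the random seed makes the result repeatable, which is one way to understand the run-to-run variability you saw:

```python
import random

def kmeans_1d(values, k, n_restarts=25, n_iter=50, seed=0):
    # Lloyd's algorithm on 1-D values with random restarts; returns
    # (best SSD, sorted centers)
    rng = random.Random(seed)
    best_ssd, best_centers = float('inf'), None
    for _ in range(n_restarts):
        centers = rng.sample(list(values), k)     # random initial centers
        for _ in range(n_iter):
            groups = [[] for _ in range(k)]
            for v in values:
                j = min(range(k), key=lambda j: (v - centers[j]) ** 2)
                groups[j].append(v)
            centers = [sum(g) / len(g) if g else centers[j]
                       for j, g in enumerate(groups)]
        ssd = sum(min((v - c) ** 2 for c in centers) for v in values)
        if ssd < best_ssd:
            best_ssd, best_centers = ssd, sorted(centers)
    return best_ssd, best_centers
```

With a different seed (or none), different restarts can win, so the reported centers can vary between runs even on the same data.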
Two sources of variability in the output:
Fitting data with conductance states that are close in amplitude is difficult. It looks like the states differ by less than 1 pA.
When fitting "difficult" data, it's important to take care in assigning amplitudes and standard deviations for the states (both matter for SKM). You can use the amp tool to assign amplitudes (I think it works for more than one conductance), or do it manually with the grab function. It is very important to run amp or use grab at the same digital filter (or unfiltered) you run SKM at to idealize the data.
I would try to collect data at 2.5 times the frequency of the physical filter, ie filter at 10 kHz and record at 25 kHz, or filter at 5 kHz and record at 12.5 kHz, or filter at 2 kHz and record at 5 kHz.
Set the dead time to 2.5 samples.
Filter at low frequency for display purposes only. Select a conductance state and "grab" unfiltered (or at the highest number digital filter possible), repeat for all your states.
When you run SKM, also run it unfiltered, or run both grab and SKM at the highest number filter possible for good results. Make sure the idealization approximately follows what you would do by eye.
When running MIL, it works best in practice to start with simple models and build to more complex models. Start with C-O1-O2-O3 with your three conductance states first. Then build your more complex model one state at a time. Make sure all the components are visible in the histogram at each step.
SKM also depends on the model and the rates that are in it. Once you have a reasonable idealization and fit, it may be useful to reSKM and MIL.
I'm not an expert on fitting binding, but I think it works best if you fit across concentrations, is this what you're doing? For the purposes of fitting at a single concentration, just set the rates to scale appropriately for your model. When I fit binding, it can be difficult to keep track of the units, make sure the units are correct for the concentration to give appropriate units on the rates.
I have a question about Tcrit. On this data file I am running MIL Rates on my data. When I add another black closed state (5 total) there is no Tcrit reported in the blue report window. When I remove a black closed state (4 total) a Tcrit is reported (as seen in this file). What do you suggest doing in this situation to solve for Tcrit?
Could anyone please explain how the MIL program estimates the Tcrit and why it gives us more than one value?
In one paper in which QuB was used, it says 'Tcrit was estimated as the intersection of areas between the second and the third closed state durations' - now is there a particular reason for choosing the 2nd and 3rd closed durations?
MIL finds a Tcrit for each pair of adjacent closed components. So, with three closed states giving rise to three exponential components, it finds Tcrit between the first two, and Tcrit between the last two. Tcrit is the point at which, if you cut off the tails of both distributions, you'd be cutting off the same number of events. The solution is approximate, using gradient descent, and occasionally it doesn't find the solution and leaves it blank.
In that paper, the longest Tcrit was chosen in order to exclude desensitized states and focus on the faster kinetics.
Here's our formula to solve for Tcrit:
Amp1 * exp( - Tcrit / Tau1 ) = Amp2 * (1 - exp( - Tcrit / Tau2 ))
The reason is that each PDF component is computed as
f(t) = (Amp / Tau) * exp( -t / Tau )
and we want the area under component 1, to the right of Tcrit:
F(t) = -Amp * exp( -t / Tau ), from Tcrit to infinity: F(infinity) - F(Tcrit) = 0 - (-Amp * exp( -Tcrit / Tau )) = Amp * exp( -Tcrit / Tau )
to equal the area under component 2, to the left of Tcrit:
F(t) = -Amp * exp( -t / Tau), from 0 to Tcrit: F(Tcrit) - F(0) = -Amp * exp(-Tcrit / Tau) - (-Amp) = -Amp * (exp(-Tcrit / Tau) - 1) = Amp * (1 - exp(-Tcrit / Tau))
Theoretically then, the number of component 2 events mistaken for component 1 should equal the number of component 1 events mistaken for component 2.
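The Tcrit equation above has no closed-form solution in general, but the left side decreases and the right side increases in Tcrit, so a simple bisection finds the unique crossing. Here is a sketch (not QuB's solver, which uses gradient descent as noted above):

```python
import math

def solve_tcrit(amp1, tau1, amp2, tau2, t_hi=1e6):
    # bisection on f(t) = amp1*exp(-t/tau1) - amp2*(1 - exp(-t/tau2));
    # f(0) = amp1 > 0, f(t) -> -amp2 as t grows, and f is strictly
    # decreasing, so the root is bracketed and unique
    lo, hi = 0.0, t_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f = amp1 * math.exp(-mid / tau1) - amp2 * (1.0 - math.exp(-mid / tau2))
        if f > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a check, with Amp1 = Amp2 and Tau1 = Tau2 = Tau the equation reduces to exp(-Tcrit/Tau) = 1/2, i.e. Tcrit = Tau * ln 2.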
Some of these formulas are written more clearly at the bottom of the following page:
LL is roughly proportional to the number of events. I think they can look at the change in LL because their experiments generally have a similar number of events. I went over and asked; around 5,000 events; i.e. the threshold in terms of LL/event is around 0.002. As you guessed, this number is arbitrary, based on their experience. To get an idea of how sensitive LL/event is with one of your models, try simulating some data, then running MIL with and without extra states.
The text report window shows several things including the previous LL. To my chagrin, it seems LL/event doesn't show enough significant digits -- maybe next month, along with a better Tcrit solver. And we're kind of out of room to show e.g. delta LL/event.
Since you asked, there is a more fundamental mystery in the LL from MIL: "log likelihood" is log probability; that is, e^LL should be between 0 and 1. Instead, it's calculated by multiplying probability density matrices (Qin et al 1996) whose elements are proportional to the rate constants. So we get a "likelihood" proportional to K^(num_events), for an average rate constant K; and LL ~= num_events * log(K). Perhaps the salient quantity is (LL/event) / log(mean K)?
I first simulated 10 segments of data at equilibrium as indicated in the tutorial "Simulation-Simple", then I tried the tutorial "Basic MIL" of "Single Channel Kinetics" by building the model C-O-C. Afterwards, I repeated MIL with a different model: C-C-O.
Now, if I wouldn't know from which starting model I obtained the simulated data file, how could I discriminate between the two tested models C-O-C (actually the right one) and C-C-O and tell which model fits data better?
Another question: why are the shapes of the dwell closed times histograms drawn in the "Results" window and in the "Histogram" window (in both cases "count/total vs Duration(log10)") clearly different?
There are three possible reasons the histograms are different.
The question about how to discriminate between models is more difficult. I think it has been proven that you can't discriminate between two models at equilibrium if they have the same number and colors of states. So you need to get it out of equilibrium, or use some other trick.
If the channel binds to a ligand or something like ATP, one of the rates is a binding step that should be proportional to ligand concentration. Try recording the channel at different concentrations: the correct model should be the same at all concentrations, except for one rate that varies proportional to concentration.
Other things you can vary include voltage, pressure, and even point mutations. Also, you can use a stimulus to drive the channel into a particular state, then mark that as the start state in the model. If the models have loops (cycles) you can "Balance all loops" under model: properties: kinetic constraints.
I would recommend looking through our list of Research Using QuB
to see how other people have done it.
The simplest, if the amplitudes stay stable, is to type or "grab" them by hand, then use amplitude constraints to fix each class's amp (and maybe std too).
QuB 1.5 has a button "Edit Idl" under Preprocessing, which for example can change class 3 events to class 2. If you see two indistinguishable classes in the idealization, you could use EditIdl to merge them into one.
Or, idealize with a 2-color model, 3-color model, 4-color, ... and pick the largest n with unique classes.
Also, you can repair idealized data after the fact by selecting a piece, right-clicking, and "Fill Idl" -- it sets the whole selection to the class of your choice. Then to compute updated Results (occupancy, etc.), click "Stat" under Modeling.
I've never undertaken kinetics analysis of single-channel data, so please excuse my (most likely) naive questions. I would like to look at changes in open and close times for some glycine receptor data. My patches generally have double openings present and I have tried to excise these from the data using the delete function. When I used the erase function, I found that the "erased events" returned whenever I saved the data file.
My first question is that when the data is idealized the amplitudes of the openings appear to be underestimated. Glycine receptors do exhibit sub-conductance states. How do I model these correctly?
Secondly, when the data is idealized, brief closures (ie those that don't return completely to baseline) are ignored. As a result, the closed times appear to be longer than they are. Is it possible to set some sort of threshold to recognize these brief closures?
Sorry about the delete function. I've heard it doesn't always work right. Instead, try "delete idl" (leaves a gap in the idealization) or "join idl" (replaces selected idealization with the level at the beginning of the selection, e.g. closed).
Sublevels and brief closings can hopefully both be detected with a bigger model. For sublevels, add a state with a new color, e.g. blue, select a piece of data at the sublevel, right-click the blue state and "grab." Connect it as you see it in the data; i.e. between closed and open, if it happens during the transition. For brief closings, add a closed state, connected to the open state, with a really fast exit rate. The idealization is somewhat sensitive to rate constants, so it might get better as you tweak them. Also experiment with the idealization options: "fix kinetics" (rates), "fix amp", "fix std", and "re-estimate" (if re-estimate is un-checked, all rates, amps and stds are fixed).
Some missed events are okay. They're inevitable due to finite rise time in your equipment, so the rate constant optimizers MIL and MSL discard all events shorter than the cutoff "td" (dead time), in the upper-right corner. Then they adjust mathematically for their absence. The adjustment is valid as long as the missed events take up a small fraction of total time, and the rate constants are slower than 1/td.
If you really want to get the brief ones, though, you could try modeling them as their own color of substate, then use the "Edit Idl" button to change e.g. class 4 events into class 1.
QuB can save a table of each event with its mean current (as a text file):
File -> Idealized Data -> Save idealized with event stats...
[QUB Express: Tools:Idealization:Save...]
Hopefully you have a stats program which can take it from there.