Back to the Top
The following message was posted to: PharmPK
Hello All,
Please advise:
In LBAs, should a blank be included when plotting the standard curve?
With Regards
Rahul
[LBA - ligand binding assay]
Back to the Top
Dear Rahul:
I would say yes, include a blank for any assay. You need to
determine the SD over its entire operating range, including the
machine noise at the blank. Otherwise you cannot give correct weight
to the measurements, which should be by the reciprocal of the variance
of the assay at any point, not by the CV%.
Very best regards,
Roger Jelliffe
Back to the Top
The following message was posted to: PharmPK
As of yet, never.
Back to the Top
The following message was posted to: PharmPK
No.
Blanks help you to check whether there is interference.
Back to the Top
The following message was posted to: PharmPK
Hi Rahul
Yes, it is required for the assay acceptance criteria, but for plotting the
calibration curve itself a blank is not required; with the calibration curve
you can calculate the concentration of the unknown samples.
A blank is required for the acceptance criteria of the calibration curve and
to know the % interference.
Hope this helps you
with regards
laxman
LifeSan Clinical Research.
Mumbai, INDIA
Back to the Top
Yes, a blank should be included to know the instrumental response.
Dr Zafar
Back to the Top
Dear Laxman and Rahul:
I hear various guidelines stated for assay acceptance
criteria. However, I have not yet heard the reasons behind such
guidelines. What actually are the acceptance criteria, and what are
the reasons for setting them? I would very much like to hear them.
Why, for example, is a blank SD required for assay acceptance
criteria, but not for the calibration curve alone? What actually is
the % interference? How is it defined, and why so?
Very best regards,
Roger Jelliffe
Back to the Top
The following message was posted to: PharmPK
Hi,
The question is whether a blank should be included in the standard curve. To
my understanding, the answer is that a blank should not be included.
However, single blanks and double blanks are required for other
purposes. Blanks in multiple matrix lots are also needed for assay
validation.
Xiaodong
Back to the Top
The following message was posted to: PharmPK
Hi Roger,
If you go through the US FDA bioanalytical method validation guidance, you
will find the following points:
Calibration/Standard Curve
A calibration (standard) curve is the relationship between instrument
response and known concentrations of the analyte. A sufficient number
of standards should be used to adequately define the relationship
between concentration and response. The number of standards used in
constructing a calibration curve will be a function of the anticipated
range of analytical values and the nature of the analyte/response
relationship.
A calibration curve should consist of a blank sample (matrix sample
processed without internal standard), a zero sample (matrix sample
processed with internal standard), and six to eight non-zero samples
covering the expected range, including LLOQ.
1. Lower Limit of Quantification (LLOQ)
The lowest standard on the calibration curve should be accepted as the
limit of quantification if the following conditions are met:
1) The analyte response at the LLOQ should be at least 5 times the
blank response (interference).
2) Analyte peak (response) should be identifiable, discrete, and
reproducible with a precision of 20% and accuracy of 80-120%.
2. Calibration Curve/Standard Curve/Concentration-Response
The following conditions should be met in developing a calibration
curve:
1) 20% deviation of the LLOQ from nominal concentration
2) 15% deviation of standards other than LLOQ from nominal concentration
At least four out of six non-zero standards should meet the above
criteria, including the LLOQ and the calibration standard at the
highest concentration.
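As a rough illustration of how these criteria might be checked in practice,
here is a minimal Python sketch; the back-calculated standard values are
hypothetical, while the 20%/15% limits and the four-of-six rule follow the
guidance text quoted above.

```python
# Hypothetical back-calculated calibration standards: (nominal, measured).
# The first entry is the LLOQ, the last is the highest standard.
standards = [(1.0, 1.15), (5.0, 4.6), (10.0, 10.8),
             (50.0, 48.0), (100.0, 109.0), (250.0, 244.0)]

def within_limit(nominal, measured, limit_pct):
    """True if the back-calculated value deviates from nominal by <= limit_pct."""
    return abs(measured - nominal) / nominal * 100.0 <= limit_pct

# 20% deviation allowed at the LLOQ, 15% at all other levels.
passes = [within_limit(nom, meas, 20.0 if i == 0 else 15.0)
          for i, (nom, meas) in enumerate(standards)]

# At least 4 of 6 non-zero standards must pass, including the LLOQ
# and the highest standard.
curve_accepted = sum(passes) >= 4 and passes[0] and passes[-1]
print(passes, curve_accepted)
```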
Hope this will help you.
Regards
laxman
LifeSan Clinical Research
Mumbai, INDIA
Back to the Top
Dear Laxman, Rahul, and all:
Once again, I hear guidelines and criteria, but NOT the
scientific reasons behind them. I hear the same old stuff. But really,
WHY do we do the things the guidelines tell us to do? What are the
REASONS behind the guidelines? Are we really all brainwashed into
being such unthinking sheep or robots? What does it take for the Pope
to "bless" any assay? I do not hear any real science behind the
criteria, such as selecting a 20% CV, for example, for assay
"acceptability".
There is a quite different idea which I think we should
consider here. It is not what a lab should do to be considered
"acceptable". Instead, the idea is "what can a lab do to deliver the
best, most precise, and most scientific service to the people who
order the tests". Things have changed a lot in the last few years.
To pontificate about whether or not a 20% CV is "acceptable"
as a measure of assay precision is simply to ignore the problem of
what to do with data that are truly below such a LLOQ or LLOD. And in
general, let us ask what is the best measure of assay precision, and
WHY? How can we possibly just ignore low data? How would you feel if
you were the patient, and some lab reported your viral load as "less
than 50 copies"? Suppose that were the cutoff because the CV was 20%
of that value? Would you feel good about it? Or would you wonder if
the truth were
45 +/- 10, or 5 +/- 10, or 1 +/- 10, or maybe even zero +/- 10?
It makes a BIG difference. The lab community CANNOT walk
away from this issue because it usually happens to someone else.
Similarly, what about a serum drug concentration that is below LLOQ?
It CANNOT be ignored by censoring the data and walking away from the
problem. There is a better way now.
The REAL issue here is the credibility, meaning, or
significance, of any single assay result. It is not a yes/no issue of
"acceptability or not". Instead, it is an issue of appropriately (in
the best way possible) evaluating the RELATIVE credibility of any
assay result, high, medium, low, or very low.
A measurement that is twice as precise as another one ought
to receive twice the weight of the other. If you look at the
statistics books, (for example, DeGroot M. Probability and Statistics,
2nd ed, Addison-Wesley, 1989, pp. 420-423) you will find that a good
quantitative measure of the relative credibility of a data point is
the reciprocal of the variance with which a measurement was made. This
is a reflection of the SD (not the CV%) of that measurement.
Obviously, we will not make each assay measurement in
replicate. That is just not a cost-effective move. However, in our
quality control procedures, we can ask ourselves "what is the SD of
the assay over its working range?" This is easy to do, and it has
already almost been done in our usual QC procedures. Using
representative replicate samples, find the general relationship
between the measured concentration and the SD with which it is
measured. Simple to do. You really have already almost done it.
One version is as follows. Once the assay has been deemed to
be "acceptably precise," put some representative samples through it
and see what you get. For example (see the sketch after these steps),
1. Take a blank sample. Divide it into 5 aliquots.
Measure each one, in such a way that the measurement can be either
positive or negative around zero. The mean should be zero, and the SD
will be whatever it is. Determine the SD. The CV% will obviously be
infinite. However, the SD is always capable of being found.
1a. Do NOT do this in duplicate. It is much more difficult to get
a good estimate of the SD and the variance (or of the CV%) than it is
of a mean. You ideally should do as many as you can. However,
somewhere between 5 and 10 replicates per sample is usually enough to
get a reasonable estimate of the SD and the variance. So use 5
replicates per sample.
2. Do the same thing for a low sample. In the same way,
divide it into 5 aliquots. Measure each one. Determine the mean and SD.
3. Do the same thing for a mid-range sample.
4. Do the same thing for a high sample.
5. Do the same thing for an extremely high sample, at
the top of the range.
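A minimal numerical sketch of steps 1 through 5 follows; the replicate values
are hypothetical, and numpy is assumed to be available.

```python
import numpy as np

# Hypothetical replicate measurements (5 aliquots per sample) spanning the
# working range of the assay: blank, low, mid, high, and very high.
replicates = {
    "blank":     [-0.4, 0.3, -0.1, 0.5, -0.2],   # may be negative around zero
    "low":       [4.1, 4.6, 3.8, 4.4, 4.0],
    "mid":       [49.0, 51.5, 50.2, 48.7, 50.9],
    "high":      [198.0, 204.0, 201.5, 196.0, 203.0],
    "very high": [492.0, 508.0, 499.0, 503.0, 495.0],
}

for name, values in replicates.items():
    mean = np.mean(values)
    sd = np.std(values, ddof=1)    # sample SD from the 5 aliquots
    print(f"{name:10s} mean = {mean:7.2f}   SD = {sd:5.2f}")
```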
Now you have data on the general relationship between the sample
means and their SD's over the working range of the assay. Now you can
use this relationship to get a good estimate of the SD with which any
single sample that comes through the assay system is measured. You can
do this with any software that fits a polynomial relationship to such
data. For example, the assay SD can be expressed as
SD = A0 + A1*C + A2*C^2 + A3*C^3, where
A0, A1, A2, and A3 are the four coefficients to be determined by the
software,
C is the measured concentration itself,
C^2 is the measured concentration squared, and
C^3 is the measured concentration cubed.
Using this formula, it is easy to get a good estimate of the SD with
which any single sample is measured. Now take the SD, square it to get
the variance, and take the reciprocal of the variance (Var) to get the
best estimate of the relative credibility of any assay measurement. 1/
Var is a well-known and widely accepted quantitative measure of the
credibility of any measurement. This polynomial relation can be stored
in pharmacokinetic software to determine the correct weight to give to
any assay measurement when fitting the data. 1/Var is the best measure
of the relative credibility of any assay measurement.
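As one way of carrying out the polynomial fit and the weighting just
described, here is a small sketch; the (mean, SD) pairs are hypothetical, and
numpy.polyfit is used only as a convenient polynomial fitter.

```python
import numpy as np

# Hypothetical (mean concentration, SD) pairs, as would come out of the
# replicate measurements sketched above.
means = np.array([0.0, 4.2, 50.1, 200.5, 499.4])
sds   = np.array([0.35, 0.32, 1.15, 3.40, 6.50])

# Fit SD = A0 + A1*C + A2*C^2 + A3*C^3.
# np.polyfit returns the coefficients in the order [A3, A2, A1, A0].
coeffs = np.polyfit(means, sds, deg=3)

def credibility_weight(measured_conc):
    """Estimated SD, variance, and 1/variance weight of a single measurement."""
    sd = float(np.polyval(coeffs, measured_conc))
    var = sd ** 2
    return sd, var, 1.0 / var

# Example: a single sample measured at 25 concentration units.
sd, var, weight = credibility_weight(25.0)
print(f"SD = {sd:.3f}   variance = {var:.3f}   weight (1/Var) = {weight:.4f}")
```

The stored coefficients are what a pharmacokinetic fitting program would need
in order to weight each concentration measurement by its estimated 1/Var.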
Why do this? Because we cannot any longer simply put our heads in the
sand and ignore the fact that people are using lab assay data in a
quantitative sense. Those doing Therapeutic Drug Monitoring use serum
drug concentration measurements to model the pharmacokinetic (PK)
behavior of drugs in patients and in research subjects. They need to
give correct weight to the data in their fitting procedures, so they
can make correct PK models whose parameter values are estimated with
the greatest precision.
Why else do this? BECAUSE NOW THERE IS NO NEED TO CENSOR LOW DATA
FROM ANY ASSAY. When done this way, nothing blows up and becomes
infinite as the measurement approaches zero. The SD and the weight are
ALWAYS FINITE. The precision of any assay can be determined all the
way down to, and including, the blank. There is NO LLOQ or LLOD. They
become totally unnecessary, and can be discarded.
These are the REASONS for the procedure I advocate. This is the
scientific basis for not using CV%, and for using the assay SD,
instead, to get 1/var and to assign the correct quantitative measure
of credibility to each assay measurement. I would MUCH rather know
that my PCR was 2 +/- 10 instead of simply "less than 50". How about
you guys? This is what I think has real scientific reasoning behind
it. It is also quite cost-effective.
What do you think?
All the best,
Roger Jelliffe
Back to the Top
The following message was posted to: PharmPK
David
I totally agree with you. Of course the best way to present a result is to present the measured value and then the uncertainty of that measurement.
The only thing I see is that sometimes it may be difficult to implement. For example, regarding HPLC analysis, how would you measure a blank? If you don't see any peaks, would you always consider it a zero? What is the error in this case? You could see an error when you spike the matrix with something, but, correct me if I'm wrong, the SD for zero will always be zero. Or is there a way to extrapolate an SD for zero?
And how do you quantify samples below your lowest concentration of the calibration curve? Is it correct to use a curve to quantify a value outside of it? In that case, could you do it above the highest point of the curve? Couldn't you be at risk of a non-linear behavior above a certain level?
Just curious about these things. I'm not really an expert, but I've seen this discussion coming so many times, and I've always had these doubts.
Thanks for the great discussion.
André Mateus
Back to the Top
It is all very well to speculate upon this, but precision is only
half the equation; the other half is accuracy. And limits on
accuracy constrain the setting of the LLOQ as much as limits on precision do,
whatever parameter is used.
Back to the Top
Blank responses are used to assess noise. In instrumental methods
the response is collected at the retention time of the peak; the
period is generally the peak width of the low standard. If no peak
is present in the blank, the LOD is calculated as 3 times the noise.
If a peak is present, the LOD is calculated at a greater value, such as
10 times the noise. A similar approach is used in ligand binding
assays, but the multiplier is generally smaller. The LLOQ is generally
established at some remove from the LOD. If a lot of risk can be tolerated,
the LLOQ is moved towards the LOD; if risk is to be minimized, the
LLOQ is kept further from the LOD. The risk is failure of the
LLOQ, for either accuracy or precision, resulting in reassay of the
samples if they are available.
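A rough numerical sketch of that rule of thumb; the noise value, the 10x
multiplier for a dirty blank, and the LLOQ safety factor are all illustrative
assumptions, not fixed rules.

```python
# Illustrative only: noise measured in the blank at the retention time
# of the peak, in instrument response units.
blank_noise = 0.8
peak_in_blank = False          # is an interfering peak seen in the blank?

# LOD from noise: 3 x noise for a clean blank, a larger multiplier
# (here 10 x noise) when an interfering peak is present.
lod = (10 if peak_in_blank else 3) * blank_noise

# The LLOQ is then set some distance above the LOD; the hypothetical
# safety factor below reflects how much risk of LLOQ failure (and
# reassay of samples) can be tolerated.
lloq = 3 * lod
print(f"LOD = {lod:.2f}   LLOQ = {lloq:.2f}")
```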
Blanks are used not only to assess background noise but also to assess
endogenous interference with either the analyte or the internal standard
(if one is used), and carryover.
Back to the Top
Dear Ed, Roger, et al.,
I could do with some further clarification. The ongoing, and recurring, discussion seems to focus on the LLOQ and the use of 1/Var vs. CV%, etc. (i.e. the usual suspects). I can see the point of using 1/Var to construct, say, a calibration curve with proper (correct?) weighting. This is then the model that is built with the calibration standards (levels), i.e. the training set. It is then used to predict the unknowns, i.e. the samples. I guess what I want from this prediction is: 1) the predicted value, 2) the accuracy, and 3) the precision. I find CV% more intuitive than 1/Var for the measurement and prediction precision, but perhaps I need re-training/decontamination of the brain. Roger, you use the term "credibility". Is this synonymous with precision or with accuracy? From what I gather you mean "precision"; please correct me if wrong. But as Ed pointed out, accuracy is the other side of the coin.
Let me illustrate my hypothetical problem with an example: an unknown sample has a predicted value of 10 and with a polynomial (as described by Roger) the (estimated) SD is 8, so the CV% is 80% and 1/Var is 1/64. Another sample is predicted to be 100 with an SD of 20; CV% is 20% and 1/Var is 1/400. Which sample prediction is more credible (perhaps an irrelevant question)? Which prediction is more accurate? I don't know, as I can't tell from these numbers.
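Frederik's arithmetic, expressed as a small sketch:

```python
# The two hypothetical samples above: (predicted value, estimated SD).
samples = [(10.0, 8.0), (100.0, 20.0)]

for value, sd in samples:
    cv_pct = sd / value * 100.0     # precision expressed as CV%
    weight = 1.0 / sd ** 2          # credibility expressed as 1/variance
    print(f"value = {value:6.1f}   CV% = {cv_pct:5.1f}   1/Var = {weight:.4f}")
```

By CV% the second sample looks far more precise, yet by 1/Var the first one
carries the larger weight (1/64 versus 1/400); and, as noted, neither number
by itself says anything about accuracy.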
Best regards,
Frederik Pruijn
Back to the Top
The following message was posted to: PharmPK
It only gets worse. With a movement towards reporting total error, the
deviations from either precision or accuracy become obscured. Since the
total error is %CV plus %Bias, it is also further removed from Roger's
suggestion.
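For concreteness, the total-error figure described here could be computed as
follows; the run values are hypothetical.

```python
# Total error as described above: %CV plus %Bias, for a hypothetical run.
def total_error_pct(cv_pct, bias_pct):
    return cv_pct + abs(bias_pct)

print(total_error_pct(12.0, 5.0))   # 12% CV and 5% bias give a 17% total error
```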