Back to the Top
Dear all,
I hope that someone has experience with BLQ values in BE studies.
The problem that has arisen is a situation where I have two or even three
BLQ values in a row, nested between two values above the
quantification limit.
I don't have a defined way to handle these data.
I would really like to hear some suggestions on how to solve this
without introducing bias into my calculations.
With regards,
iva
Back to the Top
The following message was posted to: PharmPK
Iva,
There are two avoidable sources of bias:
1. Do not let your analytical chemists withhold the
measurements they made that are below the quantification limit. There is
no reason not to use these values. It is just silliness that chemists fail
to give you the measurements because of an arbitrary cut-off that has no
real meaning for pharmacokinetic analysis. Omitting these values will
always cause bias.
2. Substituting zero for the values that the chemist hides from you
is even worse than only using values that are not BLQ. Any value after a
measurable value is certainly not zero. It may not be easily measured,
but it is not zero. Assuming it is zero is certain to be wrong.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Dear Iva,
Replacing the concentration value by the LLOQ value or 0.5x LLOQ is a
possible approach. You have to define this in your SOP for PK
evaluation.
Kind regards
Thomas
Back to the Top
The following message was posted to: PharmPK
Dear Iva,
The measured concentration value is always the best estimate, regardless
of whether it is above or below LOQ. Therefore, replacing it with any
other value will always be worse than using the actual measured value.
Remember, values below LOQ are not invalid (a common misconception) -
they are simply likely to have more bias than an arbitrary, pre-defined
threshold. But they are still the best estimates we have.
Andrew
Andrew Volosov, PhD
Director, DMPK
Inotek Pharmaceuticals
Back to the Top
The following message was posted to: PharmPK
Andrew,
You wrote:
> The measured concentration value is always the best estimate, regardless
> of whether it is above or below LOQ.
Excellent! Every PK person should print out that sentence and send a
copy to their chemical analyst to study.
You then wrote:
> Remember, values below LOQ are not invalid (a common misconception) -
> they are simply likely to have more bias than an arbitrary, pre-defined
> threshold. But they are still the best estimates we have.
I don't know of any reason to believe that an adequately measured
value less than LOQ is biased. It is of course likely to be more
imprecise than values above the LOQ because the LOQ is defined in
terms of imprecision.
Discarding values less than LOQ and using the remaining values will
give biased values for true concs close to the LOQ. The measured
concs are samples of a distribution around the true conc. If you
discard the concs less than LOQ then the remaining concs must on
average be higher than the true conc i.e. the reported measurements
are biased estimates of the true conc.
One way to minimize this bias is to discard values that are less than
LOQ*(1 + 1.96*CV_LOQ) (approximately 1.4 times the LOQ for a CV of 20% at
the LOQ). This is because a cut-off that is approximately 2 SD above the
LOQ will discard less than 2.5% of measured values and thus the bias is
probably ignorable.
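A minimal Python sketch of this censoring effect, assuming normally
distributed assay error with a 20% CV at the LOQ (illustrative numbers
only, not from any real assay):

import numpy as np

rng = np.random.default_rng(0)
loq = 1.0            # lower limit of quantification (arbitrary units)
cv_at_loq = 0.20     # assumed assay CV at the LOQ
sd = loq * cv_at_loq

# True concentration close to the LOQ: many replicate measurements fall below it.
true_conc = 1.1
measured = rng.normal(true_conc, sd, size=100_000)

mean_all = measured.mean()                        # keep every measurement
mean_censored = measured[measured >= loq].mean()  # discard the BLQ values

print(f"true conc           : {true_conc:.3f}")
print(f"mean, all values    : {mean_all:.3f}  (essentially unbiased)")
print(f"mean, BLQ discarded : {mean_censored:.3f}  (biased high)")

# For true concentrations at or above LOQ*(1 + 1.96*CV) only ~2.5% of
# replicates fall below the LOQ, so the censoring bias becomes negligible.
cutoff = loq * (1 + 1.96 * cv_at_loq)
frac_blq = (rng.normal(cutoff, sd, 100_000) < loq).mean()
print(f"at a true conc of {cutoff:.2f}, fraction of replicates < LOQ = {frac_blq:.3f}")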
Of course this procedure is only needed if your chemical analyst is
too dumb to understand why not reporting measured values less than the
LOQ causes bias in PK analyses.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Hi all
I completely agree with 99.9...% of all of Nick's sentiments on LOQ.
Andrew wrote:
> Remember, values below LOQ are not invalid (a common misconception) -
> they are simply likely to have more bias than an arbitrary, pre-defined
> threshold. But they are still the best estimates we have.
Nick wrote:
> I don't know of any reason to believe that an adequately
> measured value less than LOQ is biased. It is of course
> likely to be more imprecise than values above the LOQ because
> the LOQ is defined in terms of imprecision.
I think this is something that we probably overlook. There will be bias in
samples below LOQ (actually in all samples irrespective of their value).
Whether bias is more significant than imprecision will depend on the assay.
But since most assays are linear and most linear regression lines don't
intercept at zero concentration for zero "peak area" readings (assuming for
example HPLC) - then bias will occur. There cannot be an actual negative
concentration but the prediction may well be negative.
Nevertheless I agree with the sentiment that the assayed concentration is
the best estimate that we have.
Regards
Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: stephen.duffull.at.otago.ac.nz
P: +64 3 479 5044
F: +64 3 479 7034
Back to the Top
Dear Nick,
I fully agree and I simply misspoke in my previous post - when I said
that a value below LOQ was likely to be more "biased", I meant
"imprecise" of course. Thanks for the correction.
Regards,
Andrew
Back to the Top
The following message was posted to: PharmPK
Steve,
I accept there may be bias in the prediction of measured
concentrations from a typical calibration curve. However, one thing
is certain for concentration calibration curves -- the average
measured concentration should be zero when the known concentration is
zero. Naive linear regression which tries to estimate an intercept
will necessarily be biased but this need not be the case if only the
slope is estimated. An 'adequately measured' conc should have no bias
as the true conc approaches zero and presumably very little bias for
concs around the LOQ. Use of an inappropriate model for higher concs
may of course cause bias at higher concs.
Another possible cause of bias is rejecting negative measurements
when the known conc is zero. There must be some kind of random error
in the measurement so that if the known conc is zero then the
measured concs must include both positive and negative values. I
suspect that chemical analysts may be truncating the distribution of
measured concs by rejecting negative values and this would therefore
cause bias in the estimate of the measured conc when the known conc
is zero.
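A small illustrative sketch of these two points (a slope-only calibration
and the truncation bias from rejecting negative back-calculated values),
assuming a linear response with additive noise and a true intercept of
zero; all numbers are invented:

import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration standards: linear response, additive noise, true intercept = 0.
conc_std = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
resp_std = 100.0 * conc_std + rng.normal(0, 20, conc_std.size)

# Calibration forced through the origin (slope-only least squares).
slope = (conc_std @ resp_std) / (conc_std @ conc_std)

# Back-calculate replicate blanks (true concentration zero): the responses
# scatter around zero, so roughly half the back-calculated concentrations
# come out negative.
back = rng.normal(0, 20, 10_000) / slope
print(f"mean back-calculated conc at true zero   : {back.mean():+.4f}")
print(f"mean after rejecting the negative values : {back[back >= 0].mean():+.4f}")
# Rejecting the negatives truncates the error distribution and biases the
# mean upward, as described above.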
Nick
Back to the Top
Dear all,
Having followed the discussion of Andrew Volosov and Nick Holford, I
have to pour some cold water on the conclusions. I could have some sympathy
(but nothing more) with your conclusions if your instrument could easily
measure, let's say, 1/100 of the LOQ concentration. In other words,
the LOQ here is the lowest concentration that the method was
(arbitrarily) validated to. If the LOQ is determined at the lowest
concentration at which your instrument (method) can perform (say 9 x S/N),
the values can definitely be biased and not only inaccurate.
Factors like peak shape and tailing can produce integration bias, not
only inaccuracy. There is a wide variety of other sources of bias
when you use HPLC, GC, GC/MS or LC/MS/MS (quads, traps etc.)
techniques. Also, some immunoassays have a tendency to level off (lack
of linearity). Why do you think the LOQ is determined in the first
place?
I would also like to hear what exactly you mean by a sub-LOQ
concentration. Where is your cut-off point? Is it 90% of the LOQ, or
even below the LOD, i.e. anything that looks like or can be interpreted as
a signal on the machine? If you want to go this way, please provide
some rules to the game. How, for instance, would you do weighting in
curve fitting if the analytical values themselves had to be weighted
for accuracy (as they probably should be anyway)? Meaning: a
measurement of a concentration of 100 mcg/mL will be more accurate
than that of 0.1 mcg/mL, and even more so when the latter is at the
limit of the instrument's performance. If you don't believe me, try it
yourself.
Stefan Soback
Back to the Top
The following message was posted to: PharmPK
Nick, Steve,
For pre-dose or very early or late samples, we sometimes get from our
chemical analysts negative values for plasma concentrations (although
always close to the zero-value). In order to minimize bias, do you
recommend using those negative values rather than setting them equal to
zero?
Danny
[Another question as I'm running some samples right now. The naive
linear regression on the standard curve data gives a quite small
intercept. It isn't always so small. Nick, are you suggesting forcing
the linear regression through zero is the best way to go? It would
avoid the negative values Danny is given. BTW, one of those 12-year-
old 'new' columns is doing just fine now - db]
Back to the Top
The following message was posted to: PharmPK
Dear Stefan,
Good to hear from you.
I believe we are talking about slightly different things. First of all,
by sub-LOQ levels I mean just that - anything that was calculated to be
below LOQ (and therefore is likely to be dumped by an unaware analyst).
It can be 10% or 90% of LOQ. The question is not how imprecise or biased
these values are (and yes, the lower they are, the more imprecise,
inaccurate, and possibly biased they will be); the question is - what do
you do with them? There are four conceivable options (I'm talking about
PK only at the moment):
1. Leave them as they are.
2. Replace them with zero.
3. Disregard them, as if they were not measured at all (mark them as
"below LOQ").
4. Replace them with some kind of "corrected" value (0.5LOQ, for
example).
Now, I do not want to re-start the discussion as to which option is
correct, because it has already been discussed so many times on this
forum and elsewhere. The only scientifically sound option is to leave
them as they are.
Of course, you need to be diligent. For example, if you indeed can
easily measure 1/100 of the LOQ, the first question that comes to mind
is - isn't the LOQ suspiciously high? How come it is so far above the
instrument capability? Sounds like a problem with the method. And of
course you have to eyeball the data (peak integration, etc.). After all,
no amount of data massaging will improve a lousy method (nor will it
educate an incompetent analyst. I have seen CROs proudly presenting
their "robust" results with high LOQs, which only means that they dump
even more data).
As to your question why LOQ is determined in the first place - well,
answering honestly (and in a bit of a tongue-in-cheek manner) - I don't
know. For general information perhaps. But not for truncating data,
that's for sure.
By the way, LOD is a lot more meaningful, as it is inherently tied to
the actual method/instrument performance, rather than to an arbitrary
accuracy/precision cutoff.
Best regards,
Andrew
Back to the Top
The following message was posted to: PharmPK
[Another question as I'm running some samples right now. The naive
linear regression on the standard curve data gives a quite small
intercept. It isn't always so small. Nick, are you suggesting forcing
the linear regression through zero is the best way to go? It would
avoid the negative values Danny is given. BTW, one of those 12-year-
old 'new' columns is doing just fine now - db]
Is it appropriate to run replicate blanks, for example six, and subtract
the average from the signals from the standards before doing the
regression? That should bring the y-intercept closer to zero.
Why do we worry about the y-intercept? Is the lowest standard concentration
very close to x = 0? If not, why should we expect a zero y-intercept? The
response / standard relationship may very well be different and not linear.
Regards,
Stan Alekman
Back to the Top
The following message was posted to: PharmPK
Stefan,
You ask:
"Why do you think the LOQ is determined in the first place?"
The reason is to satisfy quality control standards for chemical
analysis (e.g. see FDA 2001)
If you read the FDA guidance you will find the LLOQ and LOD are
defined but there is nothing there which says the values below the
LLOQ or below the LOD should not be reported for use by a
pharmacokinetic analyst.
You also ask:
"Where is your cut-off point? Is it 90% of the LOQ or even below LOD
i.e. anything that looks like or can be interpreted as a signal on
the machine?"
I would be happy to use "anything that looks like or can be
interpreted as a signal on the machine". If you give me the data then
I can make an intelligent decision on how to process it. If you don't
give it me then you may force me to introduce biases into the
resultant pharmacokinetic analysis.
You then ask:
"How would you do for instance weighting in curve fitting if the
analytical values themselves had to be weighted for accuracy (as they
probably should anyway)?"
It seems you are familiar with chemical analysis methods but not with
data analysis methods used by pharmacokineticists for over 20 years
(see Peck et al 1984). Please note the problem of weighting is
related to imprecision, not accuracy. Models for this kind of error
are well understood and applied every day by pharmacokineticists. The
difficulty we face, however, is the reluctance of chemical analysts to
supply the values that have been measured because of an imagined need
to suppress them.
I encourage you to help your pharmacokinetic colleagues by supplying
all the measurement data you have. By suppressing measurements which
are less than LLOQ you are causing biases in the pharmacokinetic
analysis. This bias would go away if you gave the PK analysts all the
data and let them deal with the imprecision using well understood
methods.
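For illustration only, here is a minimal sketch of the kind of approach
Peck et al. describe: fitting a one-compartment model by extended least
squares with a combined additive plus proportional residual error model,
so that sub-LLOQ and even negative measurements can be used as reported.
The data and parameter values below are invented.

import numpy as np
from scipy.optimize import minimize

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24])                   # h
c_obs = np.array([9.1, 8.3, 7.0, 5.2, 2.9, 0.95, 0.31, -0.02])  # mg/L, incl. sub-LLOQ and a negative value
dose = 10.0                                                     # mg, IV bolus (illustrative)

def els_objective(params):
    """Extended least squares: -2 log-likelihood with a combined
    additive + proportional residual error model."""
    cl, v, sd_add, cv_prop = params
    pred = (dose / v) * np.exp(-(cl / v) * t)   # one-compartment IV bolus prediction
    var = sd_add**2 + (cv_prop * pred)**2       # error variance depends on the prediction
    return np.sum(np.log(var) + (c_obs - pred)**2 / var)

fit = minimize(els_objective, x0=[1.0, 1.0, 0.1, 0.2],
               bounds=[(1e-3, None)] * 4)
cl, v, sd_add, cv_prop = fit.x
print(f"CL = {cl:.2f} L/h, V = {v:.2f} L, "
      f"additive SD = {sd_add:.3f} mg/L, proportional CV = {cv_prop:.2f}")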
Nick
Peck CC, Beal SL, Sheiner LB, Nichols AI. Extended least squares
nonlinear regression: A possible solution to the "choice of weights"
problem in analysis of individual pharmacokinetic parameters. J
Pharmacokinet Biopharm. 1984;12(5):545-57.
Food and Drug Administration, Center for Drug Evaluation and Research
(CDER), Center for Veterinary Medicine (CVM). Guidance for Industry
Bioanalytical Method Validation; 2001 May.
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Danny,
> Nick, Steve,
> For pre-dose or very early or late samples, we sometimes get from our
> chemical analysts negative values for plasma concentrations (although
> always close to the zero-value). In order to minimize bias, do you
> recommend using those negative values rather than setting them equal to
> zero?
How wonderful that your chemical analysts are honest and give you all
the measurements they make! Of course you should use the negative
values in your analysis and not turn them into something they are
not. You will of course need to understand how to use an appropriate
residual error model but that is a problem that was solved over two
decades ago by Carl Peck.
David,
>
> [Another question as I'm running some samples right now. The naive
> linear regression on the standard curve data gives a quite small
> intercept. It isn't always so small. Nick, are you suggesting forcing
> the linear regression through zero is the best way to go? It would
> avoid the negative values Danny is given. BTW, one of those 12-year-
> old 'new' columns is doing just fine now - db]
I would recommend forcing all calibration curves through zero. That
is something you can be totally certain about. If the fit is not very
good then you need to look somewhere else to improve it e.g. consider
a non-linear calibration curve after due thought for why the fit is
not good. DO NOT AVOID THE NEGATIVE VALUES. They are essential to
avoid a bias in the calibration curve.
Best wishes,
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
Hi Andrew,
Good to hear from you too!
As you said, Andrew, I did not really know what the point was
(I didn't follow the discussion from the beginning). I had the feeling
that this was about pop-PK, as Nick's reference later indicated. Having
admitted that I had not understood, what I meant to say is this:
Before you start your study, decide what would be the concentration
you have to be able to analyze in order to be in compliance with your
quality control requirements. Concentrations below that would be
irrelevant for any practical purpose (PK, clinical, toxicological
etc.). If your method does not meet this criterion, develop a better
one.
Why do I recommend that? Because you can do your linear (or non-
linear) calibration curve for your analytical method forced to zero
(or not) and determine your analytical zero concentration (no
signal). However, when your method gives zero (concentration) the
true plasma concentration is far from zero assuming first order
depletion. So clearly the value zero is always incorrect. In other
words, what is the LLOQ that will suffice?
I don't see your point in your reply to my 1/100 example. Your
instrument (eg. LC/MS/MS) easily detects any concentration in your
data set, but the method validation was done only for let's say a
clinically relevant range. The point made by Nick of distribution
around LOQ causing upward bias if below LOQ values are omitted is
clear and so was his correction factor. However, in my understanding
the precision and accuracy of the concentrations recorded below LOQ
are different from those above the LOQ potentially also causing a
bias. But if you say that they are still good for the PK, so be it. I
still wonder how you got in this desperation zone (sub-LOQ) in the
first place.
Yours,
Stefan
Back to the Top
Hi Nick,
It is true that my PK is really old fashioned: Yamaoka and that
generation, non-compartmental etc. (i.e. I started once with PK and
ended up with analytics, not good at either of them). When I realized that
I would never be able to collect a pop-PK data set and that I couldn't
even get anybody else to do it for me, there was no point in
continuing into that area. Now, after these apologies, the point I
wanted to make is this:
Most of the people here are obviously talking about HPLC analysis (although
it is not mentioned). HPLC is a problematic instrument because of its poor
separation efficiency and relatively non-selective detector, and it often
suffers from injector carry-over. It is also a dangerous instrument
because, when operated isocratically, you may suddenly start to see
things that were actually injected a few injections earlier but are
eluting very, very slowly. Most of these effects are minor, but their
contribution at the extreme lower limit, i.e. LOD to LLOQ, may be
major. These effects are also irregular. Therefore, as I wrote to
Andrew, it is not a good place to be.
So first of all you want to be sure that the blip (sub-LLOQ) you see
on your chromatogram really belongs to your analysis. After some
eyeballing you make your intelligent decision (hopefully correct).
However, how can you be sure that the blip contains only your
compound/metabolite of interest? Moreover, these problems will
probably only occur with your real samples, because for the
calibration curve you work your way up through increasing concentrations,
hiding the effect. So what do you want from your chemist? The
chromatograms, or some concentrations of something that he does not
understand how to extract?
Nick wrote:
>If you give me the data then I can make an intelligent decision on
how to process it.
This is, to my understanding, a somewhat diffuse expression. You
personally may be intelligent enough, but how can this be trusted to
be a universal feature of PK people in toto? How does, for
instance, a reviewer deal with this dilemma?
I still think that forcing through zero, using negative values, etc. as
defaults is too simplistic and is very much dependent on the
analytical technique used (derivatizations, detector response
linearity, detector type, solvent properties, instrument
configuration, etc.). The calibration curve range will also make a
difference. Therefore, I do see value in working in a concentration
range where the instrument performance can be verified. I fail to
understand why sub-LLOQ values contribute positively to the PK
analysis. Too much chemistry seems to be detrimental to the brain
(probably because I'm not a chemist).
Yours,
Stefan
Back to the Top
Nick,
The debate on how to handle BLQ values is an interesting and long
running one. However, like everything in science the response to
"How do you handle BLQ values" is "it depends." All of the
discussion so far has seemed to focus on the fact that you may need
to use the BLQ values to calculate PK parameters that are pertinent
to your understanding of the PK of your compound. However, there are
times when trying to use the BLQ values may be far more trouble than
it is worth. The case that I am thinking of is in Toxicokinetic
support to a Tox study. In this case, the primary objective of the
TK portion of the study is to describe the exposure to the drug
( i.e. Tmax, Cmax and AUC 0-tau). Other parameters such as half-
life, AUC0-inf are occasionally calculated (but not required by
Regulatory Agencies), although I must say that in most instances
where I have seen half-life and AUC0-inf calculated from TK data they
were calculated inappropriately due to the low number of samples
collected (very few samples after the Tmax). If you were to assign a
value to a BLQ value that would obviously have no effect on the Tmax
or Cmax and would most likely have an imperceptible effect on AUC 0-
tau (I always run the AUC calculations several ways to see the impact
of any substitutions that I may make).
In the above case, making a substitution for a BLQ value will not
have an impact on the interpretation of TK data and could be viewed
as being a waste of effort. Personally, I substitute a value of 0
for a BLQ value. I appreciate that the true value is not 0, but the
objective of the Tox assessment includes relating exposure to drug to
observed toxicity. By using 0 I may slightly underestimate the AUC 0-
tau but even if I did I cannot be accused of biasing the data in my
favor.
There is a related situation for which the replacement of BLQ values
could cause you a lot of trouble. There is a guideline in the EU
regarding the analysis of control samples from a Tox study. This
guideline clearly states that you should treat the control samples in
the same manner that you would treat the samples from dosed animals.
If you detect drug in the control samples you must explain why the
drug is there and if you cannot do so, the study may be invalidated
and need to be repeated. In other words, if you use the values that
are reported as BLQ for your test samples you need to accept the data
from those control samples that have values associated with them.
For the above reasons I strongly encourage that we use the right
approach for the right situation. I would hate to see people start
to routinely use values that were BLQ in their TK analysis.
Regards
Mark Milton
Tempo Pharmaceuticals
Cambridge, MA
Back to the Top
The following message was posted to: PharmPK
Dear Nick and all:
Amen to that! The setting of a BLQ limit is false, unscientific,
and very misleading. Do NOT use the CV% as the measure of the error.
Use the SD instead. Go to any general statistics book and look for CV%
as the measure of the error. Instead, you will find the Fisher
information of a data point, which is the reciprocal of the variance
with which a data point was measured. Using this correct measure of
error, there is no BLQ. You can go all the way to a blank this way.
If I were a Medicare administrator, I would urge insurance companies
NOT TO PAY for a result of TDM or a PK study which was reported as
BLQ. It is so easy to do the right thing - report the measurement and
its SD. Note that for a constant SD (equal precision) over the range
of an assay, the CV% doubles each time the level drops by half, even
though the actual precision remains the same. This greatly limits the
ability of the lab to be useful to the people it serves.
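As a quick numerical illustration of that last point (purely made-up
values): with a constant assay SD the CV% doubles every time the
concentration halves, even though the Fisher information, 1/variance, of
each measurement is unchanged.

sd = 0.5                                     # assumed constant assay SD, mg/L
for conc in [8.0, 4.0, 2.0, 1.0, 0.5]:       # each level half the previous one
    cv_pct = 100 * sd / conc                 # CV% doubles as the level halves
    weight = 1 / sd**2                       # Fisher information = 1/variance, constant
    print(f"conc {conc:4.1f}  SD {sd:.2f}  CV% {cv_pct:6.1f}  1/SD^2 {weight:.1f}")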
Very best regards,
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
Phone (323)442-1300, fax (323)442-1302, cell 626-484-5313
email= jelliffe.aaa.usc.edu
Our web site= http://www.lapk.org
Back to the Top
Dear Roger,
It is really depressing that I feel unable to follow this route of
judgment. I agree that the determining factor is the SD of
the estimated quantity as a function of the signal (concentration).
Obviously you have to calculate also what would be the probabilities
for false positives and false negatives (if this is an issue, but I
think it is). So clearly the determination of the LLOD is about
hypothesis testing of the probability of false positives (alpha) and
false negatives (beta). The LLOQ is a function of the predetermined
relative standard deviation. We can argue about the values, but the
process appears clear.
The common practice to set the LLOD at 3 * s corresponds to
probabilities for false positives ca. 15% and for false negatives ca.
50%. If you allow 30% RSD and your SD is concentration independent
(and alpha = beta = 0.05) the LLOQ (= k * s, where k is the reciprocal
of the RSD) will be practically the same as the LLOD. The IUPAC
default RSD is 10%.
Because the determination of the LLOD and LLOQ generally incorporates
laboratory interferences (sample processing) but seldom sampling and
sample handling interferences, LLOD and LLOQ are often optimistic
concerning the real samples.
So clearly you will have a signal (and a concentration) at LLOD to
LLOQ if they were scientifically determined. Should these be reported
as concentrations with SD (or rather the uncertainty) instead of BLQ?
I don't see anything unscientific in not doing so, but if you do, I
certainly would like to understand how these data are used
scientifically (e.g. in PK studies).
Best regards,
Stefan
Back to the Top
Hi Andrew,
Sorry to bother you one more time. I really didn't answer your four-
point questionnaire. As I said, reporting zero is clearly
scientifically wrong. Also, when your instrument gives no signal, data
dumping appears to me more scientific than reporting zero. Assigning
a value (e.g. 1/2 LLOQ) will probably be a quite good estimate of
the so-called true concentration, but there is a clear danger of
having the same value in two or more consecutive samples. So let's
skip this possibility too.
Now we're down to two options. Leave the concentrations as they are? Then we
also have the possibility that the concentration may even increase at the
next time point (one would need to calculate the probability). So the
clearance suddenly decreased? You can naturally keep only the concentrations
that fit your curve, but I would not really recommend that.
Therefore, dumping the below-LLOQ concentrations appears to me
the most benign option, but I have to check this experimentally.
Best regards,
Stefan
Back to the Top
The following message was posted to: PharmPK
Dear Stefan,
So many things seem to be mixed up here - I'll try to simplify and sift
through it step by step.
Let's start with a simple and very realistic example. Let's say you have
data from three rats with samples taken up to 6 hours. You want to
calculate the mean profile. The last measured (6h) samples gave you 1.5,
2.0, and 0.7 ng/ml, LOQ being 1.0 ng/ml. What do you do with the 0.7
ng/ml sample? Disregard it and mark it BLOQ? Then you will average 1.5
and 2.0, resulting in 1.75 ng/ml for the 6h time point? But you KNOW you
had another data point there and you KNOW that unbiased average should
be lower than 1.75 ng/ml. The only way to have unbiased data while
dumping the BLOQ values would be to dump ALL values at that timepoint,
including the ones above LOQ - but why on earth would you want to do
that? Please note that this is not the same situation when you, say, for
technical reasons, could not collect the 6h sample from one of the rats
and had only two samples to average - because in that case the third
(uncollected) sample would be a true unknown, whereas the disregarded
BLOQ sample is NOT AN UNKNOWN, it does have a measured value! In other
words, we cannot pretend we know nothing about the BLOQ sample - we do!
So dumping BLOQ samples is nothing but a classic example of data
massaging - you basically dump the data you do not like and keep the
data you do like.
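A few lines of Python make the arithmetic of this example explicit; the
half-LOQ substitution is included purely for comparison:

import numpy as np

loq = 1.0
c_6h = np.array([1.5, 2.0, 0.7])   # ng/mL, the three 6 h samples

options = {
    "keep as measured":      c_6h.mean(),
    "discard the BLQ value": c_6h[c_6h >= loq].mean(),
    "substitute zero":       np.where(c_6h < loq, 0.0, c_6h).mean(),
    "substitute LOQ/2":      np.where(c_6h < loq, loq / 2, c_6h).mean(),
}
for name, mean in options.items():
    print(f"{name:<22s}: {mean:.2f} ng/mL")
# keep 1.40, discard 1.75 (biased high), zero 1.17, LOQ/2 1.33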
Values under the LOQ are not some kind of random numbers between zero
and LOQ. This range - between zero and LOQ - is not a grey area where
all values become equal and/or indistinguishable all of a sudden. Think
about it - from the scientific/analytical/physical point of view - what
exactly happens that would force us to treat the data so differently
when it drops below LOQ? Nothing!!! This threshold is artificial and
exists only in our minds (or SOPs). And it has no place in either. Data
should be taken as is - you may or may not like it, but it is the best
you have. You cannot handpick a few points you like and dump the rest.
Best regards,
Andrew
Back to the Top
The following message was posted to: PharmPK
Nick,
I fully agree with your position regarding "negative
values of concentration measurements" around the LOQ and
LOD. My agreement stems from our historically controlled
experience: today's positive concentration
measurements were yesterday's negative ones, and today's
negative concentration measurements will be positive
values if measured in the near(?) future (how near I
do not know).
My only concern arises from the necessity of explaining
to students how to deal with the logarithmic
transformation of negative numbers (concentrations). I
do not even mention the elaboration of calibration curves. My
point is a student-level PK analysis of a set of
time/concentration data with a few negative
measurements.
Sorry in case I misunderstood you!
Best regards,
Dim
--
Dimiter Terziivanov, MD,PhD,DSc, Professor and
Head, Clinic of Clinical Pharmacology and Pharmacokinetics,
Univ Hosp "St. Ivan Rilski",
15 Acad. Ivan Geshov st, 1431 Sofia, Bulgaria
e-mail: dterziivanov.at.rilski.com; terziiv.-a-.yahoo.com
Back to the Top
[Sorry if these are repeats, keeping track of laptop, home and office
versions gets away from me sometimes - db]
The following message was posted to: PharmPK
Dear Stefan,
So many things seem to be mixed up here - I'll try to simplify and sift
through it step by step.
Let's start with a simple and very realistic example. Let's say you have
data from three rats with samples taken up to 6 hours. You want to
calculate the mean profile. The last measured (6h) samples gave you 1.5,
2.0, and 0.7 ng/ml, LOQ being 1.0 ng/ml. What do you do with the 0.7
ng/ml sample? Disregard it and mark it BLOQ? Then you will average 1.5
and 2.0, resulting in 1.75 ng/ml for the 6h time point? But you KNOW you
had another data point there and you KNOW that unbiased average should
be lower than 1.75 ng/ml. The only way to have unbiased data while
dumping the BLOQ values would be to dump ALL values at that timepoint,
including the ones above LOQ - but why on earth would you want to do
that? Please note that this is not the same situation when you, say, for
technical reasons, could not collect the 6h sample from one of the rats
and had only two samples to average - because in that case the third
(uncollected) sample would be a true unknown, whereas the disregarded
BLOQ sample is NOT AN UNKNOWN, it does have a measured value! In other
words, we cannot pretend we know nothing about the BLOQ sample - we do!
So dumping BLOQ samples is nothing but a classic example of data
massaging - you basically dump the data you do not like and keep the
data you do like.
Values under the LOQ are not some kind of random numbers between zero
and LOQ. This range - between zero and LOQ - is not a grey area where
all values become equal and/or indistinguishable all of a sudden. Think
about it - from the scientific/analytical/physical point of view - what
exactly happens that would force us to treat the data so differently
when it drops below LOQ? Nothing!!! This threshold is artificial and
exists only in our minds (or SOPs). And it has no place in either. Data
should be taken as is - you may or may not like it, but it is the best
you have. You cannot handpick a few points you like and dump the rest.
Best regards,
Andrew
Back to the Top
The following message was posted to: PharmPK
Andrew: It is not as simplistic as dumping the data you do not like.
It is discarding the data that do not meet the criteria against which
the assay was developed. This approach is always used in forensic tox,
where, even though one can see a measurable response below a threshold,
one must not report it.
Assay acceptance includes both precision and accuracy. Sample
acceptance usually includes only precision. For most HPLC assays I do
not know how you would estimate an SD for a sample with a single
replicate (HPLC). For immunoassays, most samples are measured at least
in duplicate. Simplistically, you may measure 0.7 with reasonable
precision, say an SD of 0.175 (CV = 25%) of replicates.
It is also required to accept or reject assays - not samples - based on
the performance of QCs. This performance includes not only precision
but also accuracy. QCs are placed to challenge the curve at certain
points, including the ULOQ and LLOQ, and their placement is specific
relative to these points. Judiciously placed QCs give some measure of
the reliability of the assay in terms of both precision and accuracy.
If we drive the assay to the LOD (which varies greatly relative to the
LLOQ) we would need to establish QCs to challenge the LOD point as well.
Ed O'Connor, PhD
Technical Director, Immunoanalytical
Tandem Laboratories
115 Silvia Street
West Trenton, New Jersey
609-228-0243
ed.oconnor.-a-.tandemlabs.com
Back to the Top
Hi Andrew,
You're right, this is a totally different problem. In your setup, and
because every measurement is a measurement, I would take the median
of all the measurements (and not the mean). Do you think the median
would misrepresent your data set (statistically, scientifically or
something else)? If you only had the 0.7 (and the SD still is
+/-0.1!) ng/mL and the LOD to LOQ was 0.3 to 1.0 ng/mL, there is a
good chance that the true concentration really is in-between them.
Let's keep in mind that a determined value is actually part of a
distribution. (By the way, that's why you will get readings below the
LOD, because the LOD is the mean of that distribution.) By determining
that distribution in your analytical method characterization and
planning the time points so that the signal distributions of
consecutive samples do not overlap, the measurements result in
declining concentrations by default (representing a biological fact).
This will not hold in the LOD to LOQ area.
Nick wrote:
>"It seems you are familiar with chemical analysis methods but not
with data analysis methods used by pharmacokineticists for over 20
years (see Peck et al 1984). Please note the problem of weighting is
related to imprecision not accuracy. Models for this kind of error
are well understand and applied everyday by pharmacokineticists."
This made me worried, because it struck me as if bad data can now be
rendered good with the help of superior statistics. I found this data
set, http://www.seanet.com/~bradbell/GaussNewton.pdf,
which seemed relevant to the point I wanted to make (note the final 3
points). I ran Gauss-Newton, damped Gauss-Newton, Marquardt and
Nelder-Mead simplex algorithms and they all gave exactly the same
result (but I still have to find an ELS program). Then I dumped the
last and third-last points (faking sub-LOQ values below 0.1), and
there was an 8% decrease in the clearance (right/wrong/meaningful/
scientific?), which, surprisingly, agreed with my non-compartmental
(trapezoidal) analysis of the whole data package. Note that all the time
points of the terminal slope are within one half-life!
Take care,
Stefan
Back to the Top
Dim,
> My only concern arises from the necessity to explain
> to students how to deal with logarithmic
> transformation of negative numbers(concentrations).
I assume you are using log transformations because you want to teach
Disneyland PK methods (e.g. log linear regression to estimate half-
life). If you didn't try to pretend the world was flat (straight-line
PK) but dealt with it as it really is then the negative log problem
would go away. I deal with PK problems every day but have not used
log transformations and log-linear regression for decades.
I think we should treat students as intelligent humans. This means
explaining the truth to them. If you cover it up then you create yet
another generation of chemical analysts who have no idea what they
are doing when they fail to report the truth of what they measured to
data analysts.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Ed,
> It is discarding the data that do not meet the criteria against which
> the assay was developed. This approach is always used in forensic tox,
> where even though one can see a measurable response below a threshold
> one must not report it.
Would you please give some scientific justification for your
assertions? Why does forensic tox always use this approach? Are
forensic tox brains as dead as the corpses they examine? Which law of
the land says "one must not report it"?
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.-at-.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Most binding assays rely on 4PL, 5PL fits. Not Mickey Mouse at all.
Ed O'Connor, PhD
Technical Director, Immunoanalytical
Tandem Laboratories
115 Silvia Street
West Trenton, New Jersey
609-228-0243
ed.oconnor.-at-.tandemlabs.com
Back to the Top
The following message was posted to: PharmPK
Ed,
Forensic tox is a totally different story. They are not looking for the
best way to describe each sample - rather, it is the proof of
presence/absence of a compound that is critical; therefore, the use of the LOQ
in forensic tox is not only justified, but essential.
In PK, you are looking to describe/characterize/measure every sample the
best way possible. LOQ does nothing to help that - quite to the
contrary.
If Michelson had an LOQ applied to his measurements of the speed of
light, his data most likely would have been dumped, and maybe Einstein
would not have discovered relativity. No kidding.
Andrew
Back to the Top
The following message was posted to: PharmPK
I think the root of the problem is that someone, somehow managed to
hammer into everybody's (well, almost) head that BLOQ levels are invalid
and need to be dealt with - corrected, adjusted, ignored. Note that most
discussions focus on HOW to deal with BLOQ values, as if the fact that
BLOQ values need to be "dealt with" is a given. It is not! Can anybody
adequately explain WHY LOQ is useful in PK? Just please, don't say it's
because it says so in the SOP...
Andrew
Back to the Top
In my opinion, the LOQ is there to provide assurance that measurements
taken near the LOQ concentration are valid within the acceptance
criteria for the method. In the case of PK it is important to be
able to describe as much as possible of the PK profile, AUC, etc. This
may require determining "very low" concentrations. In our lab we
use a rule of thumb that requires a method to have an LOQ low
enough to reliably detect concentrations as low as those at 4 half-
lives for a given dose.
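The arithmetic behind that rule of thumb, with an illustrative peak
concentration:

cmax = 400.0                            # ng/mL, illustrative peak concentration
n_half_lives = 4
required_loq = cmax / 2**n_half_lives   # concentration falls by 2**n after n half-lives
print(f"LOQ must be <= {required_loq:.0f} ng/mL to quantify out to {n_half_lives} half-lives")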
Luis E. Sojo, PhD
Associate Director, Analytical Development
Bioanalytical Unit
QLT Inc
Phone:604-707-7398
email:lsojo.at.qltinc.com
Back to the Top
The following message was posted to: PharmPK
Well, it relates to the acceptable accuracy of the method and, to borrow
an analogy from Bob Newhart and similar issues with navigation, "we are
either off the coast of Manhattan or Rio de Janeiro".
As for the utility of the LLOQ - and ULOQ for that matter - each is an
indication of the working range of an assay. The LOD is fine, but
understand that from day to day it will change, permitting on some days a
value of, say, 0.1 fg/mL to be measured, whereas on other days 1 ng/mL may be
the LOD. The concept of an LLOQ separates the limit of the working
range from the LOD, permitting some measure of assay reliability - not
just precision but accuracy as well. And the LLOQ has two components:
in addition to the precision component there is also an accuracy
component. If the LLOQ fails either criterion, it must be reset and
retested, the same being true for the upper limit or ULOQ.
QCs were established to monitor not just storage but also the performance
of the assay. Because of the changing nature of the LOD, QCs were related to
the LLOQ and ULOQ, and were more or less fixed relative to these assay
parameters.
If pharmacokineticists are now saying that the reliability, precision
and accuracy of an analytical method are of minor or no concern, what
claims can be made regarding the reliability of Cmax, Tmax and other PK
parameter estimates derived from data collected without limitation?
Ed O'Connor, PhD
Technical Director, Immunoanalytical
Tandem Laboratories
115 Silvia Street
West Trenton, New Jersey
ed.oconnor.-at-.tandemlabs.com
Back to the Top
The following message was posted to: PharmPK
Ed,
> If pharmacokineticists are now saying that the reliability, precision
> and accuracy of an analytical method are of minor or no concern, what
> claims can be made regarding the reliability of Cmax, Tmax and other PK
> parameter estimates derived from data collected without limitation?
The pharmacokineticist would like the chemical analyst to strive to
the utmost to develop a sensitive, reproducible and accurate
analytical procedure. The chemical analyst is encouraged e.g. by FDA
Guidance ref 1, to report the conc at which the CV is around 20%
(this is the lower limit of quantitation - LLOQ).
However, the LLOQ is a metric describing the assay performance. It is
not a cut-off value for discarding measurements which are to be used
for pharmacokinetic analysis.
After the best efforts of the chemical analyst it is still often the
case that attempts are made to measure concentrations less than LLOQ.
This is necessarily the case when sampling is performed for extended
periods after the last dose.
Thanks to advances in statistical estimation procedures (see ref 2)
it is reasonable for the pharmacokinetic analyst to use all measured
concentrations in order to estimate pharmacokinetic parameters. Note
that pharmacokinetic statistics such as Cmax and Tmax are not very
sensitive to the biases caused by discarding measurements below the
LLOQ. AUC is often insensitive unless a large fraction of the AUC has
to be estimated by extrapolation.
Attempts to develop multi-compartment PK models (real PK -- not
Mickey Mouse PK) can however be difficult if samples below the LLOQ
are discarded. The bias in the reported measurements can make it seem
like an extra compartment is needed in the model.
Nick
1. Food and Drug Administration, Center for Drug Evaluation and
Research (CDER), Center for Veterinary Medicine (CVM). Guidance for
Industry Bioanalytical Method Validation; 2001 May.
2. Peck CC, Beal SL, Sheiner LB, Nichols AI. Extended least squares
nonlinear regression: A possible solution to the "choice of weights"
problem in analysis of individual pharmacokinetic parameters. J
Pharmacokinet Biopharm. 1984;12(5):545-57.
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.at.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Mark,
I'd like to respond to just one of your comments:
> I appreciate that the true value is not 0, but the
> objective of the Tox assessment includes relating exposure to drug to
> observed toxicity. By using 0 I may slightly underestimate the AUC 0-tau
> but even if I did I cannot be accused of biasing the data in my
> favor.
I agree that the purpose of a PK analysis is to relate exposure to
response. But why deliberately and knowingly use a biased estimate of
exposure when there are methods available to produce better predictions?
This is the crux of this discussion. Ignoring BLQ values or replacing
them with 0 will inevitably cause biased estimates of PK parameters.
Using the actual measured value with an appropriate statistical model
to account for the residual error will always be better.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.-a-.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
The following message was posted to: PharmPK
Hi Mark and all,
Can you explain why you are underestimating the AUC if you take 0 as
your last point? Do you mean "underestimating" because you ignore
your last point and therefore calculate an AUCt? Otherwise, how do you
calculate the AUC? Do you use the loglin option in WNL (or any similar
algorithm), i.e. linear when the concentration increases and log when it
decreases? In such cases you would overestimate your AUC if you were
taking 0 (because the algorithm reverts to the linear trapezoidal
calculation). Hence taking any value above 0, like half the LOQ, the LOQ,
or the observed value (never done here for regulatory reasons: sorry
Nick!) would give a lower estimate than taking 0.
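A rough sketch of that last point for a single terminal segment, using
invented numbers and the usual log-trapezoid formula
(t2-t1)*(C1-C2)/ln(C1/C2):

import numpy as np

t1, t2 = 8.0, 24.0        # h, last two sampling times (illustrative)
c1 = 50.0                 # ng/mL at t1
c2_obs = 2.0              # ng/mL measured at t2, below the LOQ

dt = t2 - t1
seg_log_obs = dt * (c1 - c2_obs) / np.log(c1 / c2_obs)   # log trapezoid with the observed value
seg_lin_zero = dt * (c1 + 0.0) / 2.0                     # substituting zero forces a linear trapezoid

print(f"last segment, log-down with observed 2 ng/mL : {seg_log_obs:.0f} ng*h/mL")
print(f"last segment, linear with substituted zero   : {seg_lin_zero:.0f} ng*h/mL")
# With a lin-up/log-down scheme, substituting zero (log undefined) reverts the
# segment to the linear rule and gives the larger area, as noted above.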
What do you think?
Thank you
Back to the Top
The following message was posted to: PharmPK
Dear Nick,
It is disappointing that no one replied to the allometric scaling and BA
question posted over a week ago, while a very intense debate on BQL values
is ongoing.
Nick, would you please do everyone a favour and post a case study and walk
through a step-by-step treatment of BQL data? Please do not refer people to
publications. By now, you have noticed that this approach, or debating it,
does not work. You have spent a lot of time and posted over 10 emails, each
about half a page long, in this latest round of the BQL debate. Would you
please spend a fraction of this effort to walk through an example,
step-by-step, equation-by-equation and number-by-number, to make your
points?
Rostam
Back to the Top
The following message was posted to: PharmPK
Rostam,
Thank you for your comments.
The issues being debated are on matters of principle not of data. If
you cannot read and understand the literature then perhaps you should
seek help elsewhere.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
Dear Rostam,
ELS algorithms are being improved even today. There is no question
about the advantages of ELS. ELS generally makes the fit better, but
not the data to which it is fitted. I gave one link to a data set
that dealt with this (http://www.seanet.com/~bradbell/GaussNewton.pdf).
The so-called Mickey Mouse PK may result from various reasons. I don't
see that the use of Mickey Mouse concentrations is helpful in improving
that situation.
In my personal opinion, a concentration at the LLOQ level, where the CV%
is 20, is seriously pushing the limit of what can be considered
concentration determination. Below that, again in my opinion, the
determination is not significantly different from an arbitrary number
or a guess. After reading proficiency testing reports for almost 20
years, I think I know what I'm talking about.
If the concentration(s) measured below LLOQ change the PK
significantly, I have a problem. PK becomes analysis dependent, not
physiology dependent.
Stefan
Back to the Top
Nick, Pascal.
Perhaps the best way to make my point is to use a real example. In a
TK study, the following mean plasma concentrations were obtained
after the first dose:
Time (hr) Concentration (ng/mL)
0 BLQ
1 367
2 439
4 175
8 55.3
24 BLQ
The LOQ was 3.0 ng/mL. Plasma concentrations were reported to 3
significant figures, as is standard practice.
The AUC0-24hr was calculated using the linear trapezoidal rule. The
concentration at 0 hr was assumed to be 0 ng/mL since the sample was
taken prior to the first dose and therefore no drug can be present in
the plasma. By the same token, the concentration at 24 hrs post
dose was set to 0 ng/mL.
In this case, the actual measured concentration at 24 hrs post dose was 2.17
ng/mL. The AUC0-24hr was therefore also calculated using values of 0 ng/mL
at 0 hrs and 2.17 ng/mL at 24 hrs.
The calculated AUCs were 2103.5 ng*hr/mL and 2120.86 ng*hr/mL when a
concentration of 0 and 2.17 ng/mL, respectively, was used for the 24
hr concentration. The rounded values were 2100 and 2120 ng*hr/mL,
respectively. By using 0 ng/mL as the concentration at 24 hrs one
gets an AUC that is 99% of the AUC obtained when a value of 2.17
ng/mL is used as the concentration at 24 hrs post dose.
As can be seen, if you use a number that is greater than 0 at 24 hrs
for calculating the AUC0-24 then you will get an AUC that is greater
than if you used a concentration of 0 ng/mL.
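These numbers are easy to reproduce with a few lines of Python (plain
linear trapezoidal rule, values taken from the table above):

def auc_linear(t, c):
    """Plain linear trapezoidal AUC."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(t[:-1], t[1:], c[:-1], c[1:]))

t = [0, 1, 2, 4, 8, 24]                   # h
c_zero = [0, 367, 439, 175, 55.3, 0]      # 24 h BLQ replaced by zero
c_meas = [0, 367, 439, 175, 55.3, 2.17]   # 24 h measured value retained

print(f"AUC0-24 with 0 ng/mL at 24 h    : {auc_linear(t, c_zero):.1f} ng*hr/mL")   # 2103.5
print(f"AUC0-24 with 2.17 ng/mL at 24 h : {auc_linear(t, c_meas):.2f} ng*hr/mL")   # 2120.86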
TK data are used as part of a risk assessment. The exposure at the
NOEL/NOAEL dose in the Tox species is divided by the exposure at a
given human dose in order to calculate a safety margin. Given the
fact that the two AUCs only differ by 1%, it is highly unlikely that
a different numerical value will be obtained for the safety margins.
Even if a different safety margin was calculated, the difference will
not likely result in different risk analyses. Additionally, by using
0 ng/mL for the concentration at 24 hrs you will end up with a lower
AUC in animals and therefore a smaller safety margin. You cannot be
accused of trying to overstate the safety margin. Therefore, using 0
ng/mL is the more conservative approach.
Nick asked why one would deliberately and knowingly use a biased
estimate of exposure when there are methods available to produce
better predictions? My answer to that is that the use of a
concentration of 0 ng/mL for TK analysis does not meaningfully change
the interpretation of the data. Also, by using a concentration of 0
ng/mL, I am giving the nonclinical reviewers a data set that they are
familiar with and do not have to spend much time thinking about. A
lot of time and effort would have to be spent educating people
(Regulators and Industry scientists alike) on why it is
appropriate to use values that are below the limit of
quantitation. Just look at the amount of
effort we have put into this topic on this list.
The other factor that makes me feel very nervous about using values
that are BLQ is the fact that there is a now a requirement to analyze
samples from control group animals. If you show the presence of
drug in these samples you will have to do a lot of explaining to the
EU Regulators as to why they should not regard your studies as being
flawed. The worst case scenario is that they invalidate your
studies and force you to redo them. I would hate to be the person
that caused that to happen due to the fact that I had asked my
Bioanalytical Group to report values that were BLQ. It would be
hypocritical to have one approach for samples from control animals
and another approach for samples from dosed animals.
To me, this whole debate shows that we always need to think about how
we will use the data before we analyze it. We can often use a
sledgehammer to crack a walnut. If the practice of substituting 0 ng/
mL for concentrations that are reported as BLQ is regarded as Mickey
Mouse pharmacokinetics, then I must be a mouseketeer. Actually, I
am proud to practice Mickey Mouse pharmacokinetics.
One of the biggest challenges that I have faced in my 15 years in the
pharmaceutical industry has been to ensure that people (usually
recent graduates) only interpret data to the extent that the data can
be interpreted and do not over-analyze the data. I guess that the
over-analysis of plasma concentrations obtained from a Tox study is a
pet peeve of mine - and don't even get me started on the utility/
futility of allometric scaling!
Regards
Mark Milton
Back to the Top
Dear Mark,
A good example. If I were the reviewer of this study, I would probably
pass it in the form you suggested (zero 24 hour concentration) as a
reasonable conclusion from the data available and for the purpose
intended. I would not pass it if you had used the 2.17 ng/mL that was
reported, simply because with an LLOQ at 3 ng/mL you cannot
possibly report values to 0.01 ng/mL accuracy (the baseline RSD noise
is 0.3 ng/mL = 1/3 of the LLOD = 1/10 of the LLOQ), indicating that the
analyst does not have a clue what he is doing.
The data set is limited because of the lack of time points at the
terminal slope. A 12 and/or 16 hour sample would have made a big
difference while the 24 hour sample has little value.
You express your concern about the control data. It appears that when
control data are part of the package, the confidence in LOD (false
positives) seems to disappear and the LLOQ as the cut off point is
much more appealing. I share your concern, because the LOD and the
LLOQ, the way they are generally set in a clinical data set analysis,
are optimistic values.
In a pop-PK data set in which a considerable number of samples are in
the range from LOD to LOQ, one can think differently. The question
is, why.
Stefan
Back to the Top
The following message was posted to: PharmPK
Dear Mark,
A nice explanation and I do agree with your thoughts as long as it is a
sensitive method.
Regards,
KANTHI
V. S. KANTHI KIRAN VARANASI, M.Pharm, MBA
Principal Scientist-Pharmacokinetics and Drug Metabolism
--
Glenmark Pharmaceuticals Ltd.,
Research Centre, A-607, T.T.C Industrial Area,
MIDC, Mahape
Navi Mumbai - 400709. INDIA
Email: kanthikiranv.at.glenmarkpharma.com