Back to the Top
The following message was posted to: PharmPK
Dear all,
I would like to know your views on the calibration curve for the case described below.
I have a compound that is dosed as a pre-prodrug, is converted to a prodrug, and is further transformed into the drug at the site of action.
Do we need to run a separate calibration curve for each of these three analytes, given that all three may be present in the sample (plasma or target tissue)?
What would be your suggestion if they interconvert during the sample work-up? I have noticed some conversion (<10%) in my calibration standards, both in the spiked plasma (from plasma proteases, etc.) and during protein precipitation (chemical instability: hydrolysis, oxidation, reduction, etc.). Should that be an argument for running them together, so that any conversion during work-up of the actual samples is taken care of by the calibration curve?
I feel I should run them together (not just to save time and effort, but for the reason stated above) and would be happy to hear your views or similar experience.
Thanks in advance.
Best regards,
Jagdish
Back to the Top
Dear Jagadish,
The ideal choice is to run combined calibration and QC samples, so that you have a shorter batch run as well as fewer samples to process.
If you expect the sample to convert during processing, you need to identify the cause of the instability. Then you will be able to modify the method to address the issue.
If you are unable to get a combined calibration curve running, then of course you will have to run three sets of calibration curves and QCs (and feel exhausted when you finish the assay!).
Vinayak Nadiger
Manager, Bioanalytical Chemistry
11 Biopolis Way, Helios #08-05
Singapore 138667
E Mail: vnadiger.aaa.combinatorx.com
Back to the Top
Dear Jagdish,
Regarding calibration standards and curves for prodrugs of limited
stability (QC samples also):
I have experienced exactly the situation you describe more than once:
relatively unstable diesters or triesters that eventually cleaved down
to the parent/active species. We analyzed all the species at least
once if possible, kept samples at -70 °C, and kept thawed samples cold.
Feedback from both the FDA and from others in the company prompted us
to do that, and after showing that a specific precursor was always
negligible compared to actual parent/active species, we proposed
dropping one or more from future determinations, and were told it was
acceptable. This had to be done for both tox species samples and for
human samples.
Our initial contingency plan for very unstable precursors had been to
demonstrate through actual experiments that they could not be reliably analyzed and that their levels were negligible compared to the parent.
Unfortunately, among the several prodrugs we worked on, none was
sufficiently unstable to justify ignoring it from the start. (And
esterase inhibitors were only marginally helpful in our cases).
Tom
Thomas L. Tarnowski, Ph.D.
Bioanalytical Development
Elan Pharmaceuticals, Inc.
800 Gateway Boulevard
South San Francisco, CA 94080
thomas.tarnowski.-at-.elan.com
Back to the Top
Unless you are developing a stability-indicating assay for your drug (ideally, your analytical method should separate the main analyte from its degradation products), design the calibration curve for the main analyte only. Select an appropriate mobile phase and optimize your method.
s.o.o
Back to the Top
The following message was posted to: PharmPK
Jagadish: You need to be confident that there is no interference of one compound on the response of any of the others.
You may combine the analytes in QCs, but the curves should be kept distinct unless you know for certain there is no impact of one compound upon the other.
If your concern is with conversion, you must add something to stabilize the analyte(s).
In validation you should demonstrate that combined-analyte QCs read the same whether you use combined-analyte curves or individual-analyte curves.
If you show interference, you must go to separate methods and minimize the interference if you intend to measure all components.
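As a quick illustration of that cross-check (not part of the original post), here is a minimal Python sketch comparing back-calculated QC concentrations from a combined-analyte curve against an individual-analyte curve; the slopes, intercepts and QC responses are hypothetical placeholders.

def back_calc(area_ratio, slope, intercept):
    # Back-calculate concentration from a linear calibration: ratio = slope*conc + intercept
    return (area_ratio - intercept) / slope

combined_fit = {"slope": 0.0102, "intercept": 0.0008}    # curve spiked with all three analytes (hypothetical)
individual_fit = {"slope": 0.0100, "intercept": 0.0005}  # curve spiked with this analyte only (hypothetical)

qc_area_ratios = [0.052, 0.51, 4.9]  # hypothetical low/mid/high QC responses

for ratio in qc_area_ratios:
    c_comb = back_calc(ratio, **combined_fit)
    c_ind = back_calc(ratio, **individual_fit)
    pct_diff = 100.0 * (c_comb - c_ind) / c_ind
    print(f"QC ratio {ratio:6.3f}: combined {c_comb:7.2f}, individual {c_ind:7.2f}, diff {pct_diff:+5.1f}%")

If the differences stay well within the assay's accuracy criteria, the combined and individual curves can be considered interchangeable for those QCs.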
Back to the Top
You should find ways to avoid or minimize conversion of your drug during sample processing. This may include using an ice bath, slightly acidic buffers, or esterase inhibitors during sample work-up. Once you have validated it, you can run all three analytes simultaneously.
You can also refer to the papers describing simultaneous estimation of lovastatin/lovastatin acid and simvastatin/simvastatin acid in plasma.
Back to the Top
Jagdish
Are you attempting a simultaneous assay? What kind of enzyme is responsible for converting one form to the other? If you are trying to perform a simultaneous assay, you will have to collect the blood samples with an appropriate metabolic inhibitor that prevents conversion of one form to the other. Depending on the relative amounts of the three forms you expect in your actual blood samples, you may have to spike your calibration-curve samples with the three analytes, and the ranges for the three may be different. If you are doing separate assays, then of course the calibration curves will be independent of each other. If you are validating the assay in a target tissue, then you would really need to stop the conversions by adding an inhibitor.
I am not sure that using the calibration curve alone would really eliminate the issue, because the metabolic conversion is time dependent; depending on how long you take to process each sample, the extent of conversion will differ, which will further complicate matters. I would strongly suggest including a potent inhibitor, which will make your assay more robust.
Hope this answers your questions.
Manish
Back to the Top
The following message was posted to: PharmPK
Vinayak,
Could you be more specific regarding "run a combined calibration and QC samples"?
Thanks.
Xiaodong
Back to the Top
Dear jagdish,
First of all, you have to check for conversion of your analytes in aqueous solution by performing a cross-talk experiment. Then, for plasma, you have to add a stabilizing agent in order to stop the conversion to the further metabolite. The stabilizing agent has to be added to the Vacutainer at the time of sample collection.
In my view, it is better to analyse all three analytes in a single run with one calibration curve.
With best regards
Kintan Patel
Back to the Top
Jagdish
I agree with Manish on using a potent inhibitor in this assay to stop the conversion of prodrug to drug, which will make your life easier; otherwise you may have to account for the possible conversion at each step from sample collection to processing. If you don't stop the conversion, you may get inaccurate results, since you don't know how fast the conversion is during the sampling and storage of the samples themselves, let alone during the processing time.
To check the actual conversion during sample processing you can do the following experiment:
Make two sets of QC samples, one with each analyte spiked separately and one with all three analytes together, then quantitate both sets against curves prepared with single analytes. This will show you the extent of conversion of the analytes, if any, during sample processing. You can use the same plan to check for any possible conversion during storage, by storing both sets of QCs under identical conditions and quantifying them against fresh single-analyte curves.
If the above experiment shows that there is no conversion during storage or processing, you can run a single calibration curve containing all three analytes; but if it does show conversion (higher drug values in the QCs containing all three analytes than in the single-analyte QCs), then you will need to stop or account for the conversion somehow (Manish's suggestion).
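Not part of the original post: a minimal Python sketch of the arithmetic behind this check; the QC concentrations below are hypothetical placeholders (ng/mL).

single_analyte_qc = {"prodrug": 48.7, "drug": 50.2}    # measured when spiked individually (hypothetical)
combined_analyte_qc = {"prodrug": 44.1, "drug": 54.9}  # measured when spiked together (hypothetical)
nominal = {"prodrug": 50.0, "drug": 50.0}

for analyte in nominal:
    shift = combined_analyte_qc[analyte] - single_analyte_qc[analyte]
    pct = 100.0 * shift / nominal[analyte]
    print(f"{analyte}: shift {shift:+.1f} ng/mL ({pct:+.1f}% of nominal)")
# A positive shift for the drug with a matching negative shift for its precursor
# would point to conversion during work-up, as reasoned above.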
Hope it will be of help.
Lakshmikant Bajpai
Back to the Top
Dear Vinayak
I have a question with respect to the calibration curve and the case mentioned below.
If it is really chemical instability of the compound, we may expect a constant fractional conversion of the pre-/prodrug; but if it is the action of enzymes, the LLOQ standard in the calibration curve may show a higher turnover than the highest-concentration standard. How should we address this issue?
Regards
Chris
Back to the Top
Dear Vinayak, Chris and others,
The conversion of a prodrug into one or more chemical moieties (metabolites?) in plasma or another matrix may be due to either chemical (pH) or enzymatic reaction, or both. You need to identify the reason for the conversion/degradation and then take measures to prevent it, or at least minimize it.
I had to face a similar issue when I was analyzing lovastatin and lovastatin acid in rodent plasma. The conversion of lovastatin into lovastatin acid was attributed to both plasma pH and esterase activity. So I collected the blood in EDTA-coated tubes (as an inhibitor of enzyme activity) and added ammonium acetate buffer, pH 6.0 (to make the plasma pH slightly acidic). That worked nicely for me.
You can also refer to the following publication by William Jusko in this regard:
International Journal of Pharmaceutics 301 (2005) 262-266.
Hope it helps.
Back to the Top
The following message was posted to: PharmPK
Thanks guys for your replies.
Chris, you have raised a good point; to address it we probably need to use some sort of enzyme poison to arrest the enzymatic conversion. EDTA might not be sufficient in many cases.
cheers,
Jagdish
Back to the Top
Your point is quite valid, as enzymatic reactions are concentration dependent. In such a case I think we have to use an enzyme inhibitor, and we should add it along with the anticoagulant in the sample collection tube to stop at least the in vitro conversion. We did it this way with one of our molecules.
Bhavesh
Back to the Top
We have run a calibration curve on LC-MS/MS containing 8 points, in which all the standard points (STD-A to STD-H) met the acceptance criteria for % accuracy. However, in STD-G the peak areas of both the drug and the internal standard were about 9 times lower than in the other standards; since the area ratio of STD-G was correct, the % accuracy is 98.00%. Should I exclude STD-G from the calibration curve or include it? I want to submit the data to the FDA.
Back to the Top
Hi,
Did you confirm this finding by day-to-day accuracy/stability measurements?
Actually, this sounds to me very much like an injection issue with your calibration curve sample or autosampler.
I think you should follow your SOP for acceptance criteria, which should also cover the allowable deviation in the amount of detected internal standard.
kind regards
Dirk
--
Dr. Dirk Scharn
Senior Scientist, DMPK
Jerini AG
Invalidenstrasse 130
10115 Berlin, Germany
eMail: scharn.aaa.jerini.com
Back to the Top
Dear Rajaved 786,
Regarding how to handle a high calibration standard (I assume that it
was the highest) that showed only 1/9th of both drug and internal
standard response compared to the others (and all other calibration
standards passed acceptance), it could be that the standard was mis-
injected, or inadvertently diluted, or the instrument had a momentary
malfunction. Many labs have an SOP-stated practice to exclude standards or other samples when the internal standard counts fall
below a certain expected level (e.g., less than a certain % of an
initially injected system suitability sample), so if you don't have
such a practice, then consider implementing it for future runs or for
the method. I think that you have justifiable reason to reject this
point (e.g., for anomalously poor chromatographic result), even if you
don't have a specific rule in an SOP. As further justification, you
might check to see that other characteristic peaks in the chromatogram
of that standard are also very low compared to those in other standards.
If you reject this point, then I would expect other samples and QCs
that showed similar behavior in the same run would also be rejected.
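Not part of the original post: a minimal Python sketch of such an internal-standard acceptance check; the system-suitability area, threshold and per-injection IS areas are all hypothetical.

sys_suit_is_area = 1.2e6  # IS peak area of the initial system-suitability injection (hypothetical)
min_fraction = 0.5        # hypothetical SOP threshold, e.g. 50% of the system-suitability response

is_areas = {"STD-A": 1.1e6, "STD-G": 0.13e6, "QC-mid": 1.0e6}  # hypothetical IS areas per injection

for name, area in is_areas.items():
    ok = area >= min_fraction * sys_suit_is_area
    print(f"{name}: IS area {area:.2e} -> {'accept' if ok else 'flag for possible exclusion'}")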
Thomas L. Tarnowski, Ph.D.
Bioanalytical Development
Elan Pharmaceuticals, Inc.
800 Gateway Boulevard
South San Francisco, CA 94080
thomas.tarnowski.at.elan.com
Back to the Top
The following message was posted to: PharmPK
What does your SOP or analytical protocol indicate regarding rejection of a curve point? Most will allow elimination of a curve point due to accuracy or precision failures as long as it is neither the LLOQ nor the ULOQ. Do you run two curves? Were both unacceptable with respect to "G"? Can you identify an analytical error (calculation or processing)?
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.-at-.matrixbioanalytical.com
Back to the Top
Dear Ed and All:
Here again is the same old chestnut - the illusion of LLOQ or
ULOQ. CV% is certainly NOT the correct measure of error of any assay,
but a cultural illusion built into the lab assay culture. Assay
results are still presumed to be presented to someone who wants to
receive only a general impression from the result given. However, this
is passing. More and more people are really using such results to fit
the data to a specific model, either a population PK/PD model or an
individualized Bayesian posterior model. CV% provides no reliable
statistical method for weighting such data in order to fit it by the
credibility of each measurement. You never see CV% in the statistics
books as a measure of error.
A further discussion is below. Your comments are most
welcomed. We need a continued discussion of this topic. In the past,
as somewhat far-out ideas were expressed about this point, the
discussion was cut off. I would hope very much that this is not the
case this time. This point has been discussed ad nauseam, but it still
keeps cropping up. The far-out ideas are especially what need to be
discussed. As far as I can tell, this is true not only of reporting PK
data, but of reporting any lab data. Who cares about getting the PCR
down below 50 copies, or some other so-called LLOQ? Wouldn't it be
better for all to find a result, for example, of 0.2 +/- 15 units?
Even physicians, god forbid, can make sense of that! There is
absolutely no need to censor such data. Results can be reported
correctly and reliably all the way down to and including a blank.
Very best regards to all,
Roger Jelliffe.
More discussion follows. It is taken from a draft of a paper we are
working on, entitled "Tools for Optimal Individualization of Drug
Dosage Regimens for Patient Care".
2. Fitting data by its Fisher Information, not by assay
coefficient of variation (cv%) or by some assumed overall Error Model.
2.1 Consequences of using the assay CV%.
Use of the percent error of an assay to describe its
precision has several significant consequences. Using CV%, the
apparent precision of an assay drops markedly as the measured
concentration becomes lower and lower. An "acceptable" categorical
lower limit of quantification is often taken as something like a CV of
10 or 15 or 20%. This varies between laboratories. Regulatory bodies
often make decisions about what is said to be "acceptable" based on
judgment, but, sadly, not on science. An intuitively taken policy is
simply decided upon. There is much discussion at meetings about just
what constitutes an "acceptable" CV% which reflects "acceptable"
precision. Below this value a categorical cutoff is made, and the data
is censored. However, even a blank sample is measured with a certain
SD. Of course at the blank, the CV% is infinite. However, the SD at
the blank is always finite. It is machine and blank noise.
2.2 The first major problem with CV% - believing that one has to
censor data.
Very low measurements are thought of as the signal being
"lost in the noise". Below a selected cutoff, the measurement is not
felt to be "precise enough" for acceptable quantification (the lower
limit of quantification, or LLOQ), or further down, even for detection
(the lower limit of detection, or LLOD). Data below these judgmentally
selected cutoffs are censored and are either not reported, or are
reported simply as being "less than" some selected LLOQ or LLOD. Such
data reported as "below detectable limits" often eventually become
regarded by physicians (and by their patients as well) as though the
substance being measured (a Philadelphia chromosome, an HIV PCR, or a
drug concentration, for example) somehow is not really there. This
often leads to serious clinical and pharmacokinetic misperceptions, as
nondetectable eventually becomes mentally equated with zero. Actually,
several special policies have been developed to deal with this problem
[4]. None has been successful. The actual measurement, whatever it is,
is the best reflection of what is actually there, along with its SD.
2.3 The second major problem with CV% - no way to give correct
weighting of measured data for modeling.
The other problem, an increasingly important one, is that
there is no way to assign a proper quantitative measure of credibility
to a data point using CV%. This is a problem relatively new to the
laboratory community. Data points are now used clinically for
therapeutic drug monitoring and Bayesian updating of individual
patient pharmacokinetic models. Indeed, in the statistics books, one
simply does not find CV% as a mathematical or statistical measure of
credibility. Instead, one finds the Fisher information of a data
point. This is the reciprocal of the variance with which any data
point was measured.
It is also often assumed that the assay SD is far from constant over its operating range. This is often not the case (see Figure 2
below). What is important is to have and use a well known quantitative
measure of the relative credibility of a data point. It should not be
corrupted by the measurement itself, as is the CV%.
2.4 Fisher Information - the reciprocal of the assay variance.
The Fisher information of a data point is the reciprocal of the
variance with which that data point was measured. Take the assay SD at
that point. Instead of dividing it by the measurement to obtain the CV
%, simply square the SD to obtain the variance, V. Take its
reciprocal, 1/V. Multiply the measured result by 1/V to assign proper
weight to that assay measurement. This procedure is a well known and
widely used measure of statistical credibility [5-7].
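A minimal Python sketch of this weighting, not part of the original text, using the hypothetical 10%-CV example discussed in section 2.5 below:

def fisher_weight(sd):
    # Fisher information = 1 / variance = 1 / SD^2
    return 1.0 / (sd ** 2)

for value, sd in [(10.0, 1.0), (20.0, 2.0)]:
    print(f"measured {value:4.1f}, SD {sd:3.1f}, weight 1/V = {fisher_weight(sd):5.3f}")
# The 10-unit measurement gets weight 1.000; the 20-unit measurement gets 0.250,
# one quarter the credibility, even though the CV% is identical.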
2.5 Relationship between CV% and Fisher information.
Let us consider a hypothetical assay with a coefficient
of variation of 10% throughout its range. Suppose there is a
measurement of 10 units. Its SD is 1.0 unit, as its CV is 10%. Because
of this, its variance is 1.0, and its Fisher information is also 1.0.
Now consider another measurement from another sample, where the value
is 20 units. The CV being 10%, the SD is now 2.0. The variance,
however, is now 4.0, and the Fisher information is now only 0.25, one quarter of that of the first measurement.
This is the important difference between the Fisher
information and the CV%. It is because the variance about a data point
is the square of the SD. So if an assay has a constant CV%, doubling
the measured value results in a weight only one quarter as great. Also, as an assay
result gets lower and approaches zero, the SD usually gets smaller and
smaller, though not always (see Figure 2 below). In any event, while
the assay SD usually gets smaller and the Fisher information becomes
greater, the CV%, as everyone knows, becomes greater, and eventually
becomes infinite. One may erroneously think that the measurement
becomes "lost in the noise". This is the perceptual problem when using
CV%. It is because of the perception of assay error as CV% that leads
people to make artificial and categorical cutoffs such as LLOQ and
LLOD. Data are then arbitrarily withheld and censored. This problem is
illustrated in Figure 2 below. The figure is based on the documented
error of the Gentamicin assay at the Los Angeles County - USC Medical
Center several years ago. At the high end, a value of 12 ug/ml,
measured in quadruplicate, had an SD of 1.71 ug/ml, and a CV of 14.3%.
A value of 8.0 ug/ml, similarly measured in quadruplicate, had an SD
of 0.79 ug/ml and a CV of 9.96%. A value of 4.0 ug/ml, again in
quadruplicate, had an SD of 0.41 ug/ml and a CV of 10.83%. A value of
2.0 ug/ml, again in quadruplicate, had an almost identical SD of 0.42
ug/ml, but the CV now rose to 21.15%. Finally, a blank measurement,
also done in quadruplicate, had an SD of 0.57 ug/ml. The CV%, of
course, was infinite.
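A minimal Python sketch, not part of the original text, tabulating the quadruplicate gentamicin figures quoted above to show the SD staying finite down to the blank while the CV% blows up:

gentamicin = [(12.0, 1.71), (8.0, 0.79), (4.0, 0.41), (2.0, 0.42), (0.0, 0.57)]  # (conc ug/ml, SD)

for conc, sd in gentamicin:
    cv = 100.0 * sd / conc if conc > 0 else float("inf")
    fisher_info = 1.0 / sd ** 2
    print(f"conc {conc:5.1f} ug/ml  SD {sd:4.2f}  CV% {cv:7.2f}  Fisher info {fisher_info:5.2f}")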
Figure 2. Relationship between measured concentration
(horizontal scale), CV% (right hand scale) and Assay SD (left hand
scale). CV% (diamond symbols) increases as shown at low values. On the
other hand, the assay SD is always finite at any value, all the way
down to and including the blank. Because of this, there is no need to
censor any data at all. The measurement and the SD, done in this way,
enhance the sensitivity of any assay all the way down to and including
the blank, with a well documented statistical measure of credibility.
2.6 Using Fisher Information, there is no LLOQ or LLOD, and no
need to censor data.
A problem arises when a result is in the gray zone, below
the LLOQ but a little above the blank. There has been much discussion
about what the best thing is to do about this problem. Some have said
it should be set to zero. Others say it should perhaps be set to
halfway between the blank and the LLOD. Commonly, laboratories have
reported the result simply as being "less than" whatever the LLOQ, in
their judgment, is considered to be.
However, when doing therapeutic drug monitoring or any
pharmacokinetic modeling, this is a most unsatisfactory situation. The
measurement simply cannot be used in any procedure to fit data
quantitatively or to make a population pharmacokinetic model of a drug
in a patient.
It is extremely easy to do all this, and to make both the
toxicologists and the pharmacokineticists happy at the same time, by
reporting the result both ways. For example, a gentamicin sample might
be reported as having a measured concentration of "0.2 ug/ml, below
the usual LLOQ of 0.5 ug/ml". Both parties can easily have what each
needs for their work. The assay error polynomial can be stored in
software to do the proper weighting and fitting of the data.
It is a good thing that much attention has been paid to
determining the error of assays. However, once the assay has been
shown to be "acceptably" precise, that error has usually been
forgotten or neglected. For example, many error models simply use the
reciprocal of the assay result itself, or its squared value, and
forget the actual error of the assay. On the other hand, they often
assume a model for the overall error pattern and estimate its
parameter values. This is usually done because it is assumed that the
assay SD is only a small part of the overall error SD, due to the many
other significant remaining environmental sources of error. That is
clearly not so, as we shall see further on.
2.7 Determining the Assay Error Polynomial
In the USC*PACK software collection [8,9], for example,
one is encouraged first to determine the error pattern of the assay
quite specifically, by determining several representative assay
measurements in at least quintuplicate, and to find the standard
deviation (SD) of each of these points, as shown in Figure 3.
Figure 3. Graph of the relationship between serum
Gentamicin concentrations, measured by our hospital's assay in at
least quadruplicate (the dots) and the standard deviations (SD's) of
the measurements. The relationship is captured by the polynomial
equation shown at the top. Y = assay SD, X = measured serum
concentration, Xsq = square of serum concentration.
One can measure, in at least quintuplicate (and the more
the better - some say 10), a blank sample, a low one, an intermediate
one, a high one, and a very high one. One can then fit the
relationship between the serum concentration (or any other measured
response) and the SD with which it has been measured, with a
polynomial of up to third order if needed, so that one can then
compute the Fisher information associated with any single sample that
goes through the laboratory assay system.
One can then express the relationship as
SD = A0 + A1*C + A2*C^2 + A3*C^3     (1)
where SD is the assay SD, A0 through A3 are the coefficients of the polynomial, C is the measured concentration, C^2 is the concentration squared, and C^3 is the concentration cubed. A representative plot of
such a relationship, using a second order polynomial to describe the
error pattern of an assay of gentamicin, is shown in Figure 3.
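A minimal Python sketch, not part of the original text, of fitting such an error polynomial and using it to weight a later sample; the replicate values are hypothetical, and numpy.polyfit is just one convenient way to do the least-squares fit.

import numpy as np

# Hypothetical replicate sets (quintuplicate) at blank, low, mid and high concentrations
replicates = {
    0.0:  [0.5, -0.3, 0.6, -0.4, 0.2],
    2.0:  [1.7, 2.3, 2.4, 1.6, 2.1],
    8.0:  [7.3, 8.6, 8.5, 7.5, 8.2],
    12.0: [10.4, 13.5, 13.2, 10.8, 12.3],
}

concs = np.array(list(replicates.keys()))
sds = np.array([np.std(v, ddof=1) for v in replicates.values()])

a2, a1, a0 = np.polyfit(concs, sds, 2)  # second-order polynomial, highest power first
print(f"SD = {a0:.3f} + {a1:.3f}*C + {a2:.5f}*C^2")

def fisher_weight(c):
    sd = a0 + a1 * c + a2 * c ** 2  # predicted assay SD at concentration c
    return 1.0 / sd ** 2            # weight used when fitting the PK model

print("weight at C = 5:", round(fisher_weight(5.0), 3))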
2.8 Determining the Remaining Environmental Error
In addition, a parameter which we have called gamma, a
further measure of all the other environmental sources of intra-
individual variability, can also be computed by software for
population PK modeling. It is used in the USC*PACK and the newer MM-
USCPACK BigNPAG program as a multiplier of each of the coefficients of
the assay error polynomial as described above. The nominal value of
gamma is 1.0, indicating that there is no other source of variability
than the assay error pattern itself. Gamma is therefore usually
greater than 1.0. It includes not only the various environmental
errors such as those in preparing and administering the doses,
recording the times at which the doses were given, and recording the
times at which the serum samples were obtained, but also the errors in
which the structural model used fails to describe the true events
completely (model misspecification), and also any possible changes in
the model parameter values over time, due to the changing status of
the patient during the period of data analysis. Gamma is thus an
overall measure of all the other sources of intraindividual
variability besides the assay error.
In this way, one can calculate just how much of the
overall SD is due to the assay SD, and how much is due to the
remaining environmental SD. Determining gamma helps greatly to explain
the impact of the environmental variability found in any fit. If gamma
is small (2-4), it suggests that the sum of the environmental sources
of noise is small. If it is large (10), it suggests that the overall
environmental noise (the total effect of all the other factors
mentioned above) is large.
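A minimal Python sketch, not part of the original text, of how a fitted gamma scales the assay error polynomial into an overall intra-individual SD; in practice gamma is estimated by the population modeling software, and all numbers below are hypothetical placeholders.

a0, a1, a2 = 0.4, 0.05, 0.004  # hypothetical assay error polynomial coefficients
gamma = 2.5                    # hypothetical fitted value (1.0 would mean assay error only)

def assay_sd(c):
    return a0 + a1 * c + a2 * c ** 2

def overall_sd(c):
    # gamma multiplies each coefficient, i.e. scales the whole polynomial
    return gamma * assay_sd(c)

for c in (2.0, 8.0):
    print(f"C = {c:4.1f}: assay SD {assay_sd(c):.2f}, overall SD {overall_sd(c):.2f}")
# How far gamma exceeds 1.0 reflects the remaining environmental noise
# (dose and sampling-time errors, model misspecification, changing parameters).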
Hope this helps. I look forward to all your comments.
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
email= jelliffe.aaa.usc.edu
Our web site= http://www.lapk.org
[Without the figures - db]
Back to the Top
The following message was posted to: PharmPK
One possible reason is that STD-H is beyond the linear range (upper limit), if it is not a preparation error. Some investigation is worthwhile: re-prepare a new STD-H and a few other standards and inject them again (you can include the previous STD-H sample for comparison); then you can confirm the results and find the reason.
Shanjun
Back to the Top
Hi Javed,
You are facing a problem with standard G, not H, so it is not detector saturation. I believe you have simply lost sample during processing. Because of the internal standard, the back-calculated accuracy passes. As good bioanalytical practice, you may not want to retain this point in the calculation, and should reject it. Regulators allow deletion of one or two points, depending on the number of standards used. If you go ahead and submit this data, be ready with an investigation, etc., for this result. However, if possible, try to submit a clean data set.
Look at the FDA guideline for method validation- Section VI
(acceptance criteria):
"Matrix-based standard calibration samples: 75%, or a minimum of six
standards, when back-calculated (including ULOQ) should fall within
+/-15%, except for LLOQ, when it should be +/-20% of the nominal value".
Having said this, I agree with Roger Jelliffe about the overemphasis on all these LLOQ/ULOQ number games. A system should evolve simply to assure the correctness of the data used for calculation.
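As a rough illustration of the quoted criterion, and not part of the original post, here is a minimal Python sketch applying it to a set of back-calculated standards; all values are hypothetical and the LLOQ is taken as the lowest standard.

standards = [  # (nominal, back-calculated), hypothetical
    (1.0, 1.15), (2.0, 1.9), (5.0, 5.2), (10.0, 9.4),
    (25.0, 26.0), (50.0, 31.0), (100.0, 98.0), (200.0, 196.0),
]

lloq = min(nominal for nominal, _ in standards)
results = []
for nominal, back in standards:
    tolerance = 20.0 if nominal == lloq else 15.0
    bias = 100.0 * (back - nominal) / nominal
    ok = abs(bias) <= tolerance
    results.append(ok)
    print(f"nominal {nominal:6.1f}: bias {bias:+6.1f}% -> {'pass' if ok else 'fail'}")

n_ok = sum(results)
# Per the quoted wording, at least 75% and a minimum of six standards must pass,
# and the ULOQ itself should be among those passing.
curve_ok = n_ok >= max(6, 0.75 * len(standards))
print(f"{n_ok}/{len(standards)} standards pass; curve {'acceptable' if curve_ok else 'not acceptable'}")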
Regards,
Vinayak
Vinayak Nadiger
Manager, Bioanalytical Chemistry
11 Biopolis Way, Helios #08-05
Singapore 138667
E Mail: vnadiger.-a-.combinatorx.com
Website:www.combinatorx.com
Back to the Top
The following message was posted to: PharmPK
Unfortunately, the text of the FDA guidances and corollary industry white papers couch assay performance in terms of accuracy (%bias, %DFE), precision (%CV, %RSD, SD), or both (total error, TE). Curves are defined by a low calibrator and a high calibrator, which are described more in terms of performance than of actual level.
When convenient, we should look at a typical set of data from a validation run, both in terms of standard acceptance and of the Fisher information available, then work this up to demonstrate your point.
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.aaa.matrixbioanalytical.com
Back to the Top
The following message was posted to: PharmPK
Dear Javed
Are you using a deuterated internal standard? If so, STD-G should be in line with the curve, because STD-H must be of higher concentration. Is it possible to run a STD-I? If it fits well in the calibration curve, then there must be something wrong with G. You may also check for possible contamination in the samples.
Dr zafar
Back to the Top
Dear Javed,
Since it passes on accuracy (I hope you have an acceptable 'r' value), I would suggest keeping it in the calibration curve, with a remark in the routine observation sheet that it could be either a processing error or an autosampler malfunction. This is the kind of incident for which the inclusion of an internal standard in bioanalysis proves its worth.
If this had happened with an unknown (study) sample, where we don't know the actual concentration, repeat analysis should be judged on the basis of the internal standard area compared with that of the standards and QCs.
Regards,
Jignesh
Back to the Top
Roger: One issue which prevents wider application is that instrumental methods usually make a single determination of concentration, whereas in most ligand-binding assays measurements are made at least in duplicate, sometimes in triplicate.
Ed F. O'Connor,PhD
78 Marbern Drive
Suffield, CT 06078-1533
email: efoconnor.aaa.cox.net
Back to the Top
Dear Ed:
Thanks so much. Why do they do that? Do they average the
results? Do they think that makes them more precise? What then do they
do with the results? How do they define precision?
Thanks a bunch,
Roger
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
email= jelliffe.at.usc.edu
Our web site= http://www.lapk.org