Back to the Top
What is the procedure to be followed for the selection of a suitable
weighting factor (1/X, 1/X2, 1/Y, 1/Y2) in a linearity experiment?
Back to the Top
The following message was posted to: PharmPK
They are all acceptable. Examine the impact of the weighting on the
accuracy returned for each standard as well as on the coefficient of
determination. Select the simplest weighting that provides you with
acceptable accuracy and precision, then look at r2. This can be done
quickly. You will want to save the iterations to support your choice.
The guidance is to use the simplest model that fits the data:
No weighting > 1/x (or 1/y) > 1/x2 (or 1/y2)
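The selection loop described above can be sketched in code. This is a minimal illustration, not from the original post: the standards and responses below are hypothetical, and the fit is the textbook closed-form weighted least squares.

```python
def wls_fit(x, y, w):
    """Weighted least-squares slope and intercept minimizing sum(w*(y - a - b*x)^2)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) / \
        sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    a = ybar - b * xbar
    return a, b

def back_calc_accuracy(x, y, w):
    """Per-standard accuracy (%) of back-calculated concentrations."""
    a, b = wls_fit(x, y, w)
    return [100.0 * ((yi - a) / b) / xi for xi, yi in zip(x, y)]

# Hypothetical standards: response roughly proportional to concentration,
# with larger absolute noise at the top of the range (heteroscedastic).
conc = [1, 2, 5, 10, 50, 100]
resp = [0.105, 0.198, 0.512, 0.989, 5.22, 9.78]

schemes = {
    "none":  [1.0] * len(conc),
    "1/x":   [1.0 / c for c in conc],
    "1/x^2": [1.0 / c ** 2 for c in conc],
}
for name, w in schemes.items():
    acc = back_calc_accuracy(conc, resp, w)
    print(name, [round(a, 1) for a in acc])
```

Running this for each candidate weighting and comparing the back-calculated accuracies (then r2) follows the procedure described above.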
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.aaa.matrixbioanalytical.com
Back to the Top
The following message was posted to: PharmPK
Hi,
r2 is not a good parameter. You can calculate the sum of the
absolute percent errors of all standards of your standard curves.
Whichever weighting factor consistently gives you the least sum of
errors is the one to use. Most of the time, 1/x or 1/x2 is used.
Hope this helps.
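The selection metric described above can be computed in a few lines; the nominal and back-calculated values below are hypothetical, purely to show the comparison.

```python
def sum_abs_percent_error(nominal, back_calculated):
    """Sum of |%RE| over all standards of a curve."""
    return sum(abs(100.0 * (bc - nom) / nom)
               for nom, bc in zip(nominal, back_calculated))

nominal = [1, 2, 5, 10, 50, 100]
# Hypothetical back-calculated standards from two different weightings:
bc_1_over_x   = [1.02, 1.95, 5.10, 9.90, 50.8, 98.9]
bc_unweighted = [1.30, 2.20, 5.40, 9.60, 50.5, 99.2]

for label, bc in [("1/x", bc_1_over_x), ("none", bc_unweighted)]:
    print(label, round(sum_abs_percent_error(nominal, bc), 1))
```

The weighting that consistently yields the smallest sum across curves is selected.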
Xiaodong
Back to the Top
The following message was posted to: PharmPK
Hi all,
r2 cannot describe linearity. http://en.wikipedia.org/wiki/Correlation#Correlation_and_linearity
Sum of absolute percent error is a 'weighted' metric, because
each term is divided by x or y.
Based on absolute percent error, you will choose a weighted method.
The weighted method is more appropriate theoretically, because it
takes care of small values.
But in practice, r2 and a weighting of 1 are usually enough to check
linearity when you run your experiments again and again. :)
Please point out if I am misunderstanding.
Guangli
Back to the Top
The following message was posted to: PharmPK
With respect to one of the points made, I agree that r**2 is not a good
discriminator for linearity and, in fact, is an extremely weak
statistic, especially when it is (ab)used in some hypothesis test, e.g.
Pearson's. If you want to formally test that a calibration
relationship is linear you need to set up true replicates (NOT
dilutions from the same master stock, or repeated injections) of your
calibration and do an analysis of variance of the regression using all
these replicated data. The variability remaining after fitting the
slope and intercept parameters of a linear model can then be tested
against an estimate of the residual (unexplained) random error in the
data.
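The lack-of-fit ANOVA described above can be sketched as follows (triplicate data are hypothetical): the residual sum of squares after fitting slope and intercept is split into pure error (replicate scatter about group means) and lack of fit, and their mean squares are compared with F.

```python
from collections import defaultdict

def lack_of_fit_F(x, y):
    n = len(x)
    # Ordinary least-squares slope and intercept
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    ss_resid = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    # Pure error: scatter of replicates about their own group means
    groups = defaultdict(list)
    for xi, yi in zip(x, y):
        groups[xi].append(yi)
    ss_pure = sum(sum((yi - sum(g) / len(g)) ** 2 for yi in g)
                  for g in groups.values())
    k = len(groups)                # number of distinct x levels
    ss_lof = ss_resid - ss_pure
    F = (ss_lof / (k - 2)) / (ss_pure / (n - k))
    return F, (k - 2, n - k)       # compare against the F(k-2, n-k) critical value

# Hypothetical true triplicates at four concentrations:
x = [1, 1, 1, 5, 5, 5, 10, 10, 10, 20, 20, 20]
y = [1.1, 0.9, 1.0, 5.2, 4.9, 5.1, 9.8, 10.1, 10.0, 20.3, 19.8, 20.1]
F, df = lack_of_fit_F(x, y)
print(F, df)
```

A large F relative to the critical value of the F(k-2, n-k) distribution indicates that a straight line does not describe the data.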
Cheers
BC
Bruce CHARLES, BPharm(Hons), PhD, GradDipBusAdmin, MPS
Reader
School of Pharmacy
The University of Queensland, 4072 Australia
CRICOS Number: 00025B
http://www.uq.edu.au/pharmacy/brucecharles/charles.html
b.charles1.-a-.uq.edu.au
Back to the Top
Since weighting refers to heterogeneity of variance of the data,
application of the correct weighting gives you comparable weighted
residuals for all data in the analysis.
Typically 1/y is used for bioanalytical calibration curves and 1/y2
for PK analysis, but this is not a fixed criterion; obviously it
depends on the variance heterogeneity and variance distribution.
For a dose-proportionality check, after possible data transformation,
you can apply the same general criterion by checking the weighted residuals.
Best regards
Stefano
Back to the Top
Here are a couple of useful references on the subject:
Almeida, A. M.; Castel-Branco, M. M.; Falcao, A. C. Linear regression
for calibration lines revisited: weighting schemes for bioanalytical
methods. J. Chromatogr. B, vol. 774, pp. 215-222, 2002.
Kuss, H. J. Weighted least-squares regression in practice: selection
of the weighting exponent. LC-GC Europe, pp. 819-823, December 2003.
And the tip:
Bonate, P. L. Concepts in calibration theory, part III: weighted
least-squares regression. LC-GC, vol. 10, no. 6, pp. 448-450, 1992.
Back to the Top
The following message was posted to: PharmPK
Please explain to me in very simple terms why r2 does not describe
linearity for linear regressions, weighted or unweighted. It is
meaningless in nonlinear regressions.
Weighting with 1/x, 1/y, 1/x2 or 1/y2 weights values to the left of the
curve, where concentration increases from left to right and response
increases from low to high.
Weighting with ln x or ln y weights values/responses to the right of the
curve, i.e. the high concentrations and high response values.
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.at.matrixbioanalytical.com
Back to the Top
The following message was posted to: PharmPK
To add my 2 eurocents to the discussion:
Perhaps my lack of knowledge about weighting factors does not obstruct
my thinking, but I think the only thing that counts is the result.
There is no need for difficult theories about weighting. One should
always use the weighting factor that gives the lowest bias and highest
precision across the validated concentration range.
Cheers,
Rob ter Heine
--
Rob ter Heine, MSc, PharmD
Department of Pharmacology, Slotervaart Hospital
Amsterdam, The Netherlands
E: rob.terheine.-a-.slz.nl
Back to the Top
The following message was posted to: PharmPK
Absolutely!
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.aaa.matrixbioanalytical.com
Back to the Top
The following message was posted to: PharmPK
Dear Dr O'Connor,
"Please explain to me in very simple terms why the r2 does not describe
linearity for linear regressions, weighted or unweighted"
You may find this quite interesting:
http://www.tufts.edu/~gdallal/anscombe.htm
Best regards,
Frederik Pruijn
Back to the Top
With respect to an appropriate weighting factor, the residual plots
give useful information to validate the chosen regression model.
Initially, residual plots should be used to check whether the
underlying assumptions, such as normality of the residuals and
homoscedasticity, are met, as for evaluating the goodness of fit of
the regression model.
If the assumption of equal variance of the residuals is not met, then
a weighting factor should be explored. Ideally, the inverse of
the variance of the y-data should be used (if you have the
replicates!). Approximations of this ideal weighting factor include
1/x, 1/x2, 1/y and 1/y2, and they are, for the most part, proportional
to the variance of the y-data. Which weighting factor to use depends
on which one results in all squared residual terms approximating the
same magnitude.
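The "ideal" weighting mentioned above can be sketched directly when replicates are available: weight each concentration level by the inverse of the observed variance of its responses (the replicate data below are hypothetical).

```python
from statistics import variance

def inverse_variance_weights(replicates):
    """replicates: dict mapping concentration -> list of replicate responses."""
    return {c: 1.0 / variance(ys) for c, ys in replicates.items()}

# Hypothetical triplicate responses; absolute scatter grows with concentration.
reps = {
    1:   [0.098, 0.104, 0.101],
    10:  [0.99, 1.03, 0.97],
    100: [9.6, 10.3, 10.1],
}
w = inverse_variance_weights(reps)
for c in sorted(w):
    print(c, round(w[c], 1))
```

With heteroscedastic data the weights fall sharply as concentration rises, which is exactly the behavior the 1/x, 1/x2, 1/y, 1/y2 approximations try to mimic.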
Satjit Brar
Back to the Top
Dear Dr O'Connor,
You write:
"Please explain to me in very simple terms why the r2 does not describe
linearity for linear regressions, weighted or unweighted. It is
meaningless in nonlinear regressions."
I agree: r2 is not "meaningless" in linear regression.
To better explain and justify some of the comments made about r2, I
can offer a few ideas.
Unfortunately, PK/PD people look at r2 in many different ways,
depending on its use and the environment.
I understand there are statistical concerns not often raised in the
bioanalytical environment, since differing numbers of parameters are
not usually an issue there, mainly because people treat r2 as a
"goodness of fit" measure in linear regression. People working in
PK/PD modelling and fitting, on the other hand, look at r2 in a
different way, and you can find helpful suggestions on the cautious
use of r2 in:
Kvalseth, T. O. Cautionary note about R2. American Statistician 1985;
39: 279-285.
As one of many possible examples of this "partial stability" of r2,
observe that when additional terms are added to the model, r2 will
always increase (this is well known in PK/PD modelling). This is
because the residual sum of squares always decreases as model terms
are added. A possible alternative is the "adjusted r2" (i.e. r2
corrected for the number of parameters).
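The adjusted r2 mentioned above penalizes added parameters. A minimal sketch of the usual formula, adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1), where n is the number of observations and p the number of predictors (the numeric values below are illustrative only):

```python
def adjusted_r2(r2, n, p):
    """Adjusted coefficient of determination for n observations, p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# A predictor that raises plain r2 only slightly can lower adjusted r2:
print(round(adjusted_r2(0.90, 10, 1), 3))   # one predictor
print(round(adjusted_r2(0.91, 10, 2), 3))   # two predictors, marginal gain
```

Unlike plain r2, this quantity can decrease when an extra term buys too little extra fit.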
Best regards
Stefano
--
Dr. Stefano Porzio
PK/PD Scientist
Merck Serono Research
Merck Serono S.A.
RBM S.p.A. - Istituto di Ricerche Biomediche "A. Marxer"
Via Ribes 1
10010 - Colleretto Giacosa (TO), Italy
E-Mail stefano.porzio.aaa.merckserono.net
Back to the Top
The following message was posted to: PharmPK
The residuals plot may have some use during development, but the bias
and CV terms will give much the same information.
From an operational standpoint, for linear regressions only, the
errors at each point and the r2 remain efficient acceptance criteria.
When using anything other than a linear regression (polynomial,
log/log, logit or 4PL), the value of r2 is inflated and acceptance is
almost solely based on the error term.
That being said, there really is no definitive way of handling failed
points. Are they to be left in the curve or edited out? In some cases
leaving the points in the curve, but noting the failure, improves the
overall performance compared to the effect of removing them.
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.aaa.matrixbioanalytical.com
Back to the Top
Regarding weighting factors for curve fitting, I would suggest you use
unity (none), 1/y and 1/y^2 for all subjects. You can add more
weighting factors if you wish. Then visually inspect the data
relative to the theoretical curves. The most important factor to use
in making your decision is randomness of scatter of data points about
the fitted curve. You can also use residual plots, but visual
inspection, IMHO, is almost always the best evaluator. For a thorough
discussion, see my oldie but goodie paper on this subject (see below
for the citation). The F test I describe is as good as the AIC, but
the AIC test is fine as well (and seems to be used by most/all).
Happy curve-fitting (which is as much an art as a science). Sorry,
but no electronic copies are available.
Harold Boxenbaum, Ph.D.
Pharmacokinetic Consultant
Arishel Inc.
14621 Settlers Landing Way
North Potomac, MD 20878-4305
Email: harold.-a-.arishel.com
Website: www.arishel.com
J Pharmacokinet Biopharm. 1974 Apr;2(2):123-48
[The links didn't seem to work - db]
Back to the Top
The following message was posted to: PharmPK
Dear Ed O'Connor,
I think we all understand well. Linear regression is chosen by
ourselves, which does not mean the data really are linear. So r2
cannot describe linearity for an 'arbitrary' linear regression.
I think you are right when r2 is applied to calibration curves,
because if r2 cannot be used, the analytical method is probably not
acceptable.
Guangli
Back to the Top
The following message was posted to: PharmPK
Dear all,
One of the topics in the discussion on 'Weighting factor' is the use
of r2. I would like to add a few comments.
Ed O'Connor wrote:
> The residuals plot may have some use during development-
> but the bias and cv terms will give much same information.
I don't agree. Residual plots are indispensable for choosing the most
appropriate structural and statistical model. The example shown by
Frederik Pruijn is a clear warning that statistics alone are not
sufficient.
> From an operational standpoint, for linear regressions only,
> the errors at each point and the r2 remain efficient acceptance
> criteria.
How do you translate the errors at each point and the r2 into
acceptance criteria? This aspect has not been discussed. In my opinion
there is no objective criterion for r2, such as r2 should be 0.99...
(depending on the number of points) or higher. You mentioned correctly
that r2 is affected by the weighting factors or transformations, which
makes the situation even more complicated. In short, I do not see any
use for r2, but this may be my lack of knowledge. Therefore I would
like to learn how you use r2 as an acceptance criterion.
Guangli wrote:
> I think we all understand well. Linear regression is chosen by
> ourselves, which does not mean 'data is really linear'. So, r2 can
> not describe linearity for an 'arbitrary' linear regression.
As has been stated by others in this discussion, r2 is not a measure
of linearity, not for linear regression and not for other
relationships. Please note that r2 is the variance in the Y-values
explained by the regressor (X-value), divided by the total variance of
the Y-values. As shown in the example presented by Frederik Pruijn,
this value has nothing to do with the linearity of the relationship
between X and Y.
> I think you are right when r2 is used to calibration curves,
> because if r2 can not be used, the analytic methods maybe are not
> acceptable.
Could you please be more specific? What do you mean by 'if r2 can not
be used'? If this is because the results are so bad that r2 is very
low, then of course the analytic method is not acceptable. But if this
is because r2 is not the right criterion, I don't understand your
statement.
best regards,
Hans Proost
Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
Email: j.h.proost.-a-.rug.nl
Back to the Top
The following message was posted to: PharmPK
I think that r2 only says that the two series of values are related,
which is necessary for a linear relationship to exist but is not
sufficient. An ANOVA analysis of the regression is necessary to prove
linearity.
CI Colino
Back to the Top
The following message was posted to: PharmPK
Dear Hans Proost,
The example I posted first is the same as in my second post. :)
I guess what Ed O'Connor means is that when the linear model is
proven to be the best model, r2 indicates linearity. From this
viewpoint, I agree.
Please check the examples on the page. There is a nonlinear
relationship between X and Y, and linear regression was used. There
are two examples with outliers. Because r2 is a parametric method, it
cannot be used there.
Could you please provide some examples showing that, when the linear
model is the best and the data are normally distributed, r2 does not
indicate linearity?
Based on the above points, the situation where r2 cannot be used
indicates that there is a nonlinear relationship between X and Y, or
there are outliers. I am sorry that I did not express this exactly.
What I mean is not a low value of r2, but the condition under which r2
can be used. I am not sure that an analytical method which produces a
nonlinear relationship or outliers can generally be accepted.
Guangli
Back to the Top
R**2 is only a measure of the shared covariance between two random
variables. It does not assess linearity.
pete bonate
Back to the Top
Dear Pete Bonate,
Yes... you are right. I said that r2 does not describe linearity
in my first post.
This is from a mathematical or statistical viewpoint.
And then, I understand that in chemical analysis 'linearity is a term
for the uncertainty of measurements'.
Am I right, Ed O'Connor?
I believe that everybody is clear now. What we are discussing are two
different concepts. :)
Guangli
Back to the Top
The following message was posted to: PharmPK
Guangli,
Guangli Ma wrote:
> Yes...You are right. I said that r2 does not describe linearity
> in my first post.
> This is from a mathematical or a statistical viewpoint.
>
> And then, I understand that in chemical analysis, 'linearity is a
term
> for uncertainty of measurements'
> Am I right, Ed O'Connor?
>
> I believe that everybody is clear now. What we discuss are two
> concepts. :)
It does not make it clear to me if there are 2 very different meanings
for linearity!
As Pete Bonate has correctly stated, the R2 statistic is not and never
can be a measure of linearity. It is a measure of the association of 2
variables. IMHO linearity can only be tested by proposing an
alternative non-linear model and then failing to reject the null that
the linear model is as good as the non-linear model.
I also would not agree that 'linearity is a term for uncertainty of
measurements'. Uncertainty is typically a term applied to the
imprecision of parameter estimates and is usually described by the
standard error (but other metrics such as 90% confidence intervals
might be more informative). With respect to measurements made on a
single sample with a chemical analysis method, it is possible to
describe the uncertainty of the measured concentration by using
replicate measurements to estimate the standard error and other
statistics of the measurement error distribution.
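The replicate-based uncertainty described above follows directly from the replicate measurements; a minimal sketch with hypothetical numbers:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(values):
    """Standard error of the mean from replicate measurements."""
    return stdev(values) / sqrt(len(values))

# Hypothetical repeated measurements of the same sample (e.g. ug/ml):
replicates = [4.9, 5.2, 5.0, 5.1, 4.8]
print(round(mean(replicates), 2), "+/-", round(standard_error(replicates), 3))
```

A 90% confidence interval can then be built from this standard error and the appropriate t value, as suggested above.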
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.at.auckland.ac.nz
www.health.auckland.ac.nz/pharmacology/staff/nholford
Back to the Top
The following message was posted to: PharmPK
Yes; it is also listed in the guidance on validation of
chromatographic methods as an acceptance parameter.
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.-a-.matrixbioanalytical.com
Back to the Top
Dear Peter, Dear David, Dear all:
There are so many things to discuss under the heading of
weighting. I hardly know where to begin. I think you have discussed
the issue of Rsq very well, Peter.
However, I sense a general impression among the group, for
example, that the best weighting scheme is generally chosen as the
one which will obtain the "best" overall fit, with the least
heteroscedasticity in the residuals.
I would disagree with this. That approach suggests that weighting
of data is an art form, chosen to make the fit "best" and the
residuals evenly distributed. This may not be at all realistic.
Because of that, I do not think it is correct, for the following
reasons.
1. Weighting should NOT be chosen or assumed. It should be done
realistically, based on the credibility of each data point. It is not
enough to assume that the assay error is only a small part of the
overall noise, and that it is therefore OK to assume some form of an
error model and to estimate its parameter values. This is done so
often it has become almost the custom. Weighting should actually be
based on reality, not assumptions. Weights should be determined, NOT
assumed.
2. The assay error should first be empirically determined, to
find the relationship between a measurement and its measured, not
assumed, error. Further, the correct error index is NOT the assay CV,
which becomes infinite as the measurement approaches zero. That is why
the lab community has invented the idea of LLOQ and then has censored
the data when it is low. This issue has been discussed ad nauseam in
this forum, so that David felt the need to put the brakes on it, but
here comes the same issue all over again in another form. You may need
a LLOQ for toxicology. There the sample itself is the only source of
information as to whether or not a substance is present. If the
measurement is 2-3 SD above the blank, then you can be pretty
confident something is present.
However, in PK/PD studies, and in clinical TDM and patient
care, we know the drug is present. It goes away with a half time, and
we know from the lab slip the time of the dose and the time the sample
was drawn. So we know the drug is present. The only remaining question
is how much is present.
3. The CV% is NOT the correct measure of the error. If you go to
any statistics book, you never find CV% as a valid measure of error.
That exists only in the illusion developed by the lab community, who,
for some reason, have never looked beyond this, and who have not ever
considered that such data might be examined and acted upon according
to a quantitative measure of its credibility. The statistics books all
cite the Fisher information of a data point as a good quantitative
measure of the credibility of a data point.
The constant angle seen graphically in a plot of a constant
assay CV% gives the illusion that a constant percent error is OK.
However, the Fisher information is the reciprocal of the variance with
which a data point is measured.
Consider a measurement with a 10% CV and a measured value of
10 units. The SD is 1, the variance is 1, and the Fisher info is also
1. Now double the measurement. It is now 20 units, the SD now is 2,
the variance now is 4, and the Fisher info is now 1/4. That is one
reason why CV% is not the correct quantitative measure of assay error.
The lab community has not considered this before, and it needs to.
Using CV severely limits the capability of the lab in reporting its
data. Fisher info is much more useful clinically.
4. The other reason CV% is bad is that it leads one to censor low
data values. Usually a CV of 10, 15, or 20% is negotiated by some
group discussion, not by any real science, as the LLOQ. Data below
this is then censored and withheld. A vital clinical data point is
deliberately withheld by the lab. If I were a patient and a lab
reported that my trough gentamicin level, for example, was "<0.3
ug/ml", I would not pay for such a result, and I do not think the
insurance companies or Medicare should either!
LLOQ is not science. It is a culturally invented illusion.
For example, the SD of our hospital's EMIT gentamicin assay was 0.58
ug/ml at the blank (CV = infinite!), 0.42 at 2.0 ug/ml (CV = 21%),
0.41 at 4.0 ug/ml (CV = 10.3%), 0.79 at 8.0 ug/ml (CV = 9.9%), and
1.72 at 12.0 ug/ml (CV = 14.3%). Note that the 4.0 ug/ml value might
well be considered "acceptable" with a CV of 10%, while the 2.0 value,
though measured with basically the same SD, but a CV of 21%, would
not. In each case the precision of the measurement is essentially the
same.
The assay SD can be fitted with a polynomial, usually second
order is enough, to capture that relationship, which is clearly not
linear. Nevertheless, there it is. In the case of the assay described
above, SD = 0.57 - 0.1056C + 0.0168Csq, where C is the measured
concentration and Csq is C squared. In this way, one does not need to
quibble about inter- and intra-day variation and all that, set an
LLOQ, and then forget it all after one finds it to be "acceptable".
The Fisher information, like the SD, is always finite. This is in
marked contrast to the CV. This is another good reason why Fisher
information is much more useful than CV%, as there is no need to
censor low data. You can go all the way down to a measured value of
zero, for example, quite correctly. In the example above, the SD of a
zero measurement is correctly noted as +/- 0.57 ug/ml.
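The assay-error polynomial quoted above, SD(C) = 0.57 - 0.1056*C + 0.0168*C^2, gives a finite SD, and hence a finite Fisher information 1/SD^2, at every measured concentration including zero. A sketch of using it to weight data:

```python
def assay_sd(c):
    """SD of a measurement at concentration c (ug/ml), from the quoted polynomial."""
    return 0.57 - 0.1056 * c + 0.0168 * c ** 2

def fisher_info(c):
    """Fisher information = 1/variance: the weight for a measurement at c."""
    sd = assay_sd(c)
    return 1.0 / sd ** 2

for c in [0.0, 2.0, 4.0, 8.0, 12.0]:
    print(c, round(assay_sd(c), 2), round(fisher_info(c), 2))
```

The SDs reproduced at 0, 2, 4, 8 and 12 ug/ml match the EMIT gentamicin figures quoted earlier in the post, and the weight at a measured value of zero is finite, so no censoring is needed.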
5. When you are doing clinical TDM or doing population PK/PD
modeling, you need all the data. You can easily get it all, even low
data that may closely approach zero. There is no LLOQ, at least for PK/
PD work. Use Fisher info as the correct measure of the error. Use it
to give correct weight to the lab data in fitting it. Ask the lab
people to look in the real statistical books about this point.
6. Another important point here is that fitting is not an art.
You must not mess with Mother Nature. You must characterize the errors
in a realistic way, and then you must live with what you get. It may
often be quite heteroscedastic.
7. After the assay error polynomial is determined, you can enter
it in the software to obtain a good estimate of the SD with which any
single measurement is determined as the sample goes randomly through
the assay system. Then you can estimate the remaining environmental
sources of error either as an additive model (we call it lambda in our
NPAG software) or as a multiplicative term (we call it gamma in our
software). In a well done study, gamma is often about 2, meaning that
the assay error was about half the overall error. This suggests that
the clinical environment was good, and that the other sources of error
were small (errors in preparing and giving doses, errors in recording
the times of dosage and sample drawing, model mis-specification, and
unknown changes in model parameter values during the data analysis).
This is most useful information to know. It has
usually not been asked for by most of the modeling community. That, I
think, is quite unfortunate. Getting a gamma of 10, for example,
suggests much more noise in the clinical environment, and makes one
much more skeptical of the results.
8. Why are 1/y and 1/ysq, etc., used? Again, none of this seems to
be based on the reality of the assay error itself.
9. All this has been discussed many times before. However, I
think it is most important to deal with and examine the many different
ideas people have concerning this issue. Some have compared this to
discussing "homeopathic" very low measured concentrations, and have
suggested that the control data should be similarly examined. Let's
talk about all of these things, so we can all say our piece and get
all our very different ideas out in the open, examine them, and
evaluate them.
Very best regards to all,
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
email= jelliffe.-a-.usc.edu
Our web site= http://www.lapk.org
Back to the Top
The following message was posted to: PharmPK
Dear Nick,
I think you are always right. :)
In mathematics or statistics, there is no metric for linearity.
When linearity is used to quantify a method or a machine, this
linearity is different from the linearity in math & stat.
It is very hard to answer 'How about the linearity of this analytic
method?' from a mathematical background.
Yes, standard error and confidence interval are right. We can
build several calibration curves and then calculate r and its SD to
show whether the method is good.
IMHO, r2 is employed for calibration curves to reveal the total
uncertainty of samples at different concentrations. The SD or
confidence interval is more appropriate for multiple samples at the
same concentration, or for parameters calculated from several
experiments.
All models are wrong; maybe r2 is useful? Anyway, it has been
used for who knows how many years.
I just try to understand why 'linearity' is used for an analytic
method or a machine. I hope that my poor understanding did not cause
any misunderstanding.
Best regards,
Guangli
Back to the Top
Guangli,
The reason linearity is used is that the Beer-Lambert law says
there is a linear relationship between concentration and response in
spectroscopy (in dilute solution).
Without that law as the underpinning, we would need to demonstrate
linearity for each method from scratch.
Regards,
Stan Alekman
Back to the Top
Linearity:
In contrast to the understanding of the concept "system" common in
physiology,
in our work
http://www.uef.sav.sk/advanced.htm
and in this reply this concept is used to formalize the relationship
between the cause (e.g. drug administration) and the outcome (e.g. the
drug concentration-time profile).
Linearity refers to the behavior of the system: the system is linear
if the outcome (e.g. the drug concentration-time profile) varies in
direct proportion to the cause (e.g. drug administration). In fact,
the concept of "system" used here is that a "system" is a set of
interacting elements within which there is a close fit between cause
and outcome.
The system is called linear if it satisfies the properties of
superposition and scaling.
If a system is linear, the output-versus-input graph appears as a
straight line. This is very well known in the field called
pharmacokinetics, where linearity is tested by increasing the drug
dose and analyzing the dose-AUC relationship.
If the system which formalizes the drug behavior in the body satisfies
the principle of superposition, several drug inputs introduced
simultaneously into the body produce the concentration-time profile,
which is the sum of the concentration-time profiles referring to the
individual drug inputs.
With best regards,
Maria Durisova DSc (Math/Phys)
http://www.uef.sav.sk/durisova.htm
Back to the Top
The following message was posted to: PharmPK
Nick,
> IMHO linearity can only be tested by proposing an alternative
> non-linear model and then failing to reject the null that
> the linear model is as good as the non-linear model.
I don't agree. Absence of evidence is not evidence of absence.
Furthermore, I think you can only test a linear model against a
particular class of non-linear models, and not against non-linear
models as a whole. Therefore, the linear model may be as bad as the
non-linear model you have chosen, which in turn does not prove
linearity.
"All models are wrong but some are useful" - George E.P. Box
"All models are wrong but some are useful" George E.P. Box
Peter
Global Biostatistics
Merck Serono Development/Global Clinical Operations
Email: peter.wolna.at.merck.de
Back to the Top
The following message was posted to: PharmPK
Peter,
I think you missed the point. My statement was just the traditional
null hypothesis statement that all statisticians use. What I added was
to point out that if you don't consider a non-linear model as the
alternative hypothesis then you haven't tested the null hypothesis. Of
course there are as many non-linear models out there as you want. I
wasn't suggesting there is only one.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.-at-.auckland.ac.nz
www.health.auckland.ac.nz/pharmacology/staff/nholford
Back to the Top
The following message was posted to: PharmPK
Guangli,
Guangli Ma wrote:
> When linearity is used to quantify a method or a machine, this
> linearity is different from the linearity in math & stat.
> It is very hard to answer 'How about the linearity of this analytic
> method?' from a mathematical background.
I have no idea why you keep saying linearity is different for chemists
and chemical analysis machines. You keep saying it but give no
justification and no references for why the word should have different
meanings for chemists compared to statisticians. I encourage you to
try and make it clear why you think there is a difference.
> IMHO, r2 is employed by calibration curve to reveal total
> uncertainty of samples from different concentrations.
I disagree. Uncertainty refers to metrics such as standard error. R2
is a measure of association. The association may be very well
described and thus have very little uncertainty for any level of R2
you want to pick.
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.-at-.auckland.ac.nz
www.health.auckland.ac.nz/pharmacology/staff/nholford
Back to the Top
The following message was posted to: PharmPK
Dear Stan,
"...the Beer-Lambert law says there is a linear relationship between
concentration and response by spectroscopy (in dilute solution)"
This is not entirely correct; the law states that there is a [linear]
relationship between absorbance and concentration, which is not the same
as the actual measured detector response.
It is interesting to note that absorbance is, of course, the
log-transformed ratio of incident and transmitted light.
Best regards,
Frederik Pruijn
Back to the Top
I have used the following in prior publications, and these may be
helpful:
A "system" is two or more components which together perform some
function (definition from George Sacher); and
Linearity exists if input A yields output X, and
input B yields output Y, and
input A + B yields output X + Y.
Of course, inputs can be added ad lib. Best wishes to all, Harold
This latter definition comes from an old article in Pharmacological
Reviews on linearity.
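[This superposition definition can be illustrated with a small Python sketch; the example functions are hypothetical. Note that it also separates "linear system" from "straight line": a line with a nonzero intercept fails the additivity test.]

```python
def is_linear(system, a, b, tol=1e-9):
    """Superposition test: f(a) + f(b) == f(a + b) within tolerance."""
    return abs(system(a) + system(b) - system(a + b)) <= tol

line_through_origin = lambda x: 3.0 * x        # linear system
line_with_offset    = lambda x: 3.0 * x + 1.0  # a straight line, but not linear

print(is_linear(line_through_origin, 2.0, 5.0))  # True
print(is_linear(line_with_offset, 2.0, 5.0))     # False: the intercept breaks additivity
```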
Harold Boxenbaum, Ph.D.
Pharmacokinetic Consultant
Arishel Inc.
14621 Settlers Landing Way
North Potomac, MD 20878-4305
Email: harold.-at-.arishel.com
Website: www.arishel.com
Back to the Top
The following message was posted to: PharmPK
Dear Guangli,
You wrote:
> Could you please provide some examples to show that when the linear
> model is the best and data is normally distributed, r2 does not
> indicate linearity?
This is quite a strange request. It has been stated many times in this
thread, and can be found in textbooks of statistics, and on the Wikipedia
page in your earlier message, that r2 is not a measure of linearity. The
condition 'when the linear model is the best' needs to be demonstrated
first. And this is the point where r2 cannot be used, because r2 is not
a measure of linearity.
Of course, we all know that the better the data fit a straight line,
the higher the r2. So, indeed, a higher r2 indicates a better fit. My
question to Ed O'Connor was: 'HOW do you translate the errors at each
point and the r2 into acceptance criteria?'. This topic has not been
discussed in this thread. If somebody could show me an example of how
this works, we can continue the discussion.
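[One common way to turn per-point errors into acceptance criteria, sketched in Python with hypothetical data: fit the calibration line (here with 1/x^2 weights), back-calculate each standard, and check the percent relative error against a fixed limit (e.g. +/-15%, +/-20% at the lowest standard, as in typical bioanalytical guidance). This is an illustration of the general approach, not anyone's specific SOP.]

```python
def weighted_linear_fit(xs, ys, ws):
    """Weighted least squares for y = intercept + slope * x."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = sxy / sxx
    return my - slope * mx, slope

nominal  = [1, 2, 5, 10, 50, 100]             # hypothetical standards
response = [0.11, 0.19, 0.52, 0.98, 5.1, 9.8]  # hypothetical responses
weights  = [1 / x ** 2 for x in nominal]       # 1/x^2 weighting

b0, b1 = weighted_linear_fit(nominal, response, weights)
errors = []
for x, y in zip(nominal, response):
    back = (y - b0) / b1                       # back-calculated concentration
    re = 100 * (back - x) / x                  # percent relative error
    errors.append(re)
    limit = 20 if x == min(nominal) else 15    # wider limit at the LLOQ
    print(f"{x:>5}: RE = {re:+6.1f}%  {'pass' if abs(re) <= limit else 'FAIL'}")
```

The acceptance decision is then made per point, directly in concentration units, rather than from any summary statistic such as r2.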
> Based on the above points, that r2 can not be used indicates that
> there are nonlinear relationships between X and Y, or outliers. I am
> sorry that I did not express exactly. What I mean is not a low value
> of r2, but the condition that r2 can be used. I am not sure that an
> analytic method which produces a nonlinear relationship or outlier
> can be accepted generally.
I'm sorry, but I really do not understand what you want to say here.
Please
indicate what you mean with 'the condition that r2 can be used (for
what?)'.
And I don't think we were discussing the acceptability of an analytic
method which produces a nonlinear relationship or outliers (which,
I agree, would certainly deserve its own thread).
A final remark: This is an interesting discussion. However, it is not a
single thread, but a clew of several threads, all in a tangle, about
weighting factors, linearity, bias and precision, acceptance criteria,
outliers, and so on. I'm sorry that I'm not able to untangle the clew.
best regards,
Hans Proost
Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
Email: j.h.proost.at.rug.nl
Back to the Top
The following message was posted to: PharmPK
Hi,
In terms of quantitation in analytical work, whether the response is
linear or not is instrument dependent. For example, response vs
concentration is not linear when using an ELSD detector.
Xiaodong
Back to the Top
The following message was posted to: PharmPK
Dear Nick,
I just remember what I learned in experimental courses. If that
linearity comes from statistics and is misunderstood, there could be
a disaster.
http://en.wikipedia.org/wiki/Correlation
On the right, there is a picture about r2. Maybe on the first row,
does r2 indicate some uncertainty?
Guangli
[Actually r in that graph. Linearity is ASSUMED and r (or r**2) is a
measure of that correlation (linearity?) but in the general sense r**2
is a measure of how well the data fit a model, not necessarily a
linear model. - db]
Back to the Top
The following message was posted to: PharmPK
All: Please see the eloquent presentation in the link
http://path.upmc.edu/showcase/posters/linear.html
Back to the Top
The following message was posted to: PharmPK
ed wrote:
>
> All: Please see the eloquent presentation in the link
> http://path.upmc.edu/showcase/posters/linear.html
There is nothing at this URL which is pertinent to the discussion of
linearity. It describes a bunch of clinical pathologists who eyeballed
a set of calibration curves and decided if they were 'clinically
acceptable'. Linearity was not defined nor explicitly assessed.
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.-at-.auckland.ac.nz
www.health.auckland.ac.nz/pharmacology/staff/nholford
Back to the Top
The following message was posted to: PharmPK
Guangli,
> http://en.wikipedia.org/wiki/Correlation
> On the right, there is a picture about r2. Maybe on the first row,
> does r2 indicate some uncertainty?
Once again (last time from me) -- r2 is a measure of association. It
is not a measure of uncertainty.
If you take a population (not a sample) and calculate r2 of two
variables then there is no uncertainty in the value of r2. There is no
uncertainty about the association between the variables. r2 is 100%
defined by the variables in the population.
On the other hand, if you took samples from the population then the
r2 estimate from each sample would include uncertainty. The standard
error of the estimate describes the uncertainty of r2.
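[This population-vs-sample distinction can be demonstrated with a small Python sketch using simulated data; the population and sample sizes are arbitrary. The population r2 is a single fixed number, while r2 recomputed from small samples scatters around it.]

```python
import random
random.seed(0)

def r2(xs, ys):
    """Squared Pearson correlation of two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# A simulated 'population' with a fixed linear-plus-noise relationship
pop_x = [i / 10 for i in range(10000)]
pop_y = [2 * x + random.gauss(0, 50) for x in pop_x]

# r2 of the whole population: one number, no uncertainty
print(round(r2(pop_x, pop_y), 3))

# r2 estimated from repeated small samples varies from sample to sample
samples = []
for _ in range(5):
    idx = random.sample(range(10000), 20)
    samples.append(r2([pop_x[i] for i in idx], [pop_y[i] for i in idx]))
print([round(s, 2) for s in samples])  # the spread reflects sampling uncertainty
```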
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.at.auckland.ac.nz
www.health.auckland.ac.nz/pharmacology/staff/nholford
Back to the Top
The following message was posted to: PharmPK
That is what the bias and CV return, efficiently and directly.
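[For readers following along, a minimal Python sketch of the two metrics with hypothetical QC replicates: bias captures systematic error, CV captures random error.]

```python
from statistics import mean, stdev

def bias_and_cv(replicates, nominal):
    """Bias (%) measures systematic error; CV (%) measures random error."""
    m = mean(replicates)
    bias = 100 * (m - nominal) / nominal
    cv = 100 * stdev(replicates) / m
    return bias, cv

# Hypothetical QC replicates at a 50 ng/mL nominal level
qc = [48.2, 51.1, 49.5, 50.8, 47.9]
b, cv = bias_and_cv(qc, 50.0)
print(f"bias = {b:.1f}%, CV = {cv:.1f}%")  # bias = -1.0%, CV = 2.9%
```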
Back to the Top
Dear Ed et al:
The CV is not the correct measure of error. The Fisher
information is. This has been discussed ad nauseam in this forum but
it still keeps coming up.
Very best regards,
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
email= jelliffe.-a-.usc.edu
Our web site= http://www.lapk.org
Back to the Top
The following message was posted to: PharmPK
Hello Roger: It would seem we are caught once more on the horns of a
dilemma. FDA validation documents do not describe acceptance
criteria using Fisher information, only bias and CV. The movement now
is to describe total error (bias + CV), which will obscure the source
of error (random vs. systematic).
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.aaa.matrixbioanalytical.com
Back to the Top
Dear Ed;
I understand. How can we get them to see the science and not
just the regs? The regs badly need to be upgraded. Total error also is
even more confounded. The FDA of all people should be looking at the
real stats books! Any suggestions? Meyer, do you have any ideas?
All the best,
Roger
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
email= jelliffe.-a-.usc.edu
Our web site= http://www.lapk.org
Back to the Top
The following message was posted to: PharmPK
Take a data set (QCs, standards, and samples), then compare the data
using both analyses. Compare performance risk using bias/CV against
performance risk using Fisher information. Several of these comparisons
should provide a solid transition over the bridge of the unknown.
--
Ed O'Connor, Ph.D.
Laboratory Director
Matrix BioAnalytical Laboratories
25 Science Park at Yale
New Haven, CT 06511
Web: www.matrixbioanalytical.com
Email: eoconnor.-at-.matrixbioanalytical.com
Copyright 1995-2011 David W. A. Bourne (david@boomer.org)