Dear Colleagues:
I don't understand all this discussion of how to weight the data, whether
it is better to weight by 1/y^2 or by doing the log transformation, for
example. Why not skip all these assumptions and simply calibrate the assay
over its working range, and then fit the relationship between
concentration and SD with a polynomial? One then has a good estimate
of the SD with which each single level is measured, and can fit
according to the Fisher information of each concentration, namely the
reciprocal of the variance of each data point. The problem is that the
coefficient of variation is hardly ever constant, so the SD needs to be
known over the entire working range.
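The calibration scheme described above can be sketched briefly. This is only an illustration, not any particular software: the calibration data, the polynomial degree, and the `fisher_weight` helper are all hypothetical.

```python
import numpy as np

# Hypothetical calibration data: SDs of replicate measurements at several
# concentrations across the assay's working range (illustrative numbers only).
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
sd = np.array([0.02, 0.04, 0.06, 0.20, 0.35, 1.50])

# Fit a second-degree polynomial SD(c) = a2*c^2 + a1*c + a0 to the
# concentration-vs-SD relationship determined during assay calibration.
sd_poly = np.poly1d(np.polyfit(conc, sd, deg=2))

# The Fisher information of a measured level is the reciprocal of its
# variance, so each data point is weighted by 1 / SD(c)^2.
def fisher_weight(c):
    return 1.0 / sd_poly(c) ** 2
```

With the SD known over the working range, these weights can then be supplied to any weighted least-squares fitting routine.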
If one uses the log transformation, for example, a concentration of 10
units has only 1/100 the weight (Fisher information) of a concentration of
1 unit, and only 1/10,000 the weight of a concentration of 0.1 units. Is
this realistic? I don't think so. I really don't understand all this
discussion about a point that can be easily answered simply by calibrating
each assay, that is, by determining its error pattern over its working
range. This point is discussed more fully in Therapeutic Drug Monitoring
15: 380-393, 1993.
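The ratios quoted above follow from the fact that a log transformation with constant variance on the log scale implies a variance proportional to y^2 on the original scale, so the weight scales as 1/y^2. A quick arithmetic check (the `relative_weight` helper is just for illustration):

```python
# Relative Fisher information under a log transformation: weight ~ 1/y^2.
def relative_weight(y, reference):
    """Weight of a level y relative to a reference level."""
    return (reference / y) ** 2

# A concentration of 10 units carries 1/100 the weight of 1 unit,
# and 1/10,000 the weight of 0.1 units.
w_vs_1 = relative_weight(10.0, reference=1.0)
w_vs_01 = relative_weight(10.0, reference=0.1)
```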
Of course there are other errors than just the assay: those
associated with preparing and giving each dose, with recording
the times when the doses were given, and with recording the times when the
serum samples were drawn. What sense does it make to assume that all of
these are also part of the measurement noise, and then to use the log
transformation or 1/y^2 to describe them? Most of them are actually part
of the process noise, not the measurement noise. But whatever is done, why
not start by knowing what the assay errors actually are?
Sincerely,
Roger Jelliffe
Dear colleagues,
Roger Jelliffe (by way of David Bourne) wrote:
>
> PharmPK - Discussions about Pharmacokinetics
> Pharmacodynamics and related topics
>
> Dear Colleagues:
>
> I don't understand all this discussion of how to weight the data,
> whether
> it is better to weight by 1/y^2 or by doing the log transformation, for
> example.
That discussion deals with the shape of the distribution of errors, as
opposed to their magnitude, which is discussed below. (Although I have
just joined this list and have not followed the preceding discussion, so
I may have misinterpreted its topic.)
> Why not skip all these assumptions and simply calibrate the assay
> over its working range, and then fit the relationship between the
> concentration and the SD with a polynomial so one can have a good estimate
> of the SD with which each single level is measured, so one can then fit
> according to the Fisher information of each concentration, namely the
> reciprocal of the variance of each data point? The problem is that the
> coefficient of variation is hardly ever constant, and the SD needs to be
> known over its entire working range.
>
> If one uses the log transformation, for example, a concentration of 10
> units has only 1/100 the weight (Fisher info) of a concentration of 1 unit,
> and only 1/10,000 the weight of a concentration of 0.1 units. Is this
> realistic? I don't think so. I really don't understand all this discussion
> about a point that can be easily answered simply by calibrating each assay,
> by determining its error pattern over its working range. This point is
> discussed more fully in Therapeutic Drug Monitoring 15: 380-393, 1993.
>
> Of course there are other errors than just the assay. There are those
> associated with errors in preparing and giving each dose, and in recording
> the times when the doses were given, and with recording the times when the
> serum samples were drawn. What sense does it make to assume that all of
> these are also part of the measurement noise, and then to use the log
> transformation or 1/y^2 as the description of them? Most of them are
> actually part of the process noise, not the measurement noise. But whatever
> is done, why not start by knowing what the assay errors actually are?
>
> Sincerely,
>
> Roger Jelliffe
>
As I understand the suggestion, one should:
1. Approximate the assay error magnitude by a polynomial.
2. Fit a PK model to the data using two models for residual error: one
for assay error (fixed from step 1) and one for process noise (to be
estimated).
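Read that way, the two steps combine into a single residual-variance model. A minimal sketch, assuming independent error sources so that variances add; the coefficients and the `total_variance` function are hypothetical:

```python
import numpy as np

# Step 1 output: a hypothetical polynomial for assay SD as a function of
# concentration (coefficients illustrative, highest power first).
assay_sd = np.poly1d([0.0005, 0.02, 0.05])

# Step 2: total residual variance = fixed assay variance + process-noise
# variance; only sigma_process is a free parameter during model fitting.
def total_variance(c, sigma_process):
    # Independent error sources: their variances add.
    return assay_sd(c) ** 2 + sigma_process ** 2
```

Because the assay component is held fixed, the estimated total residual SD can never fall below the assay SD, which is the advantage (and limitation) discussed below.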
Some comments on such a scheme.
- The term "process noise", I assume, is used to indicate that factors
other than the underlying drug concentration may influence the error
magnitude. This is an important point that has been made before (e.g.
J Pharmacokinet Biopharm 23:651-672, 1995; J Pharmacokinet Biopharm
26:207-246, 1998).
- I missed one important source of errors from the list, and that is
model misspecification. Our models will always be only approximations,
and this is particularly obvious in most analyses of models/data
incorporating oral absorption. Our residual error model must reflect
this.
- The original question of the shape of the error distribution is not
addressed by the above scheme.
- Step 1, approximating the assay error by a function, is not trivial and
involves assumptions and model selection of its own. Since the assay
variability in step 2 is to be fixed (i.e. taken to be known without
error), such a procedure may also demand more assay data than is normally
collected. Furthermore, since polynomials may behave badly in ranges
without data, the spacing of assay standards may need to differ from
what would otherwise be chosen.
- The above scheme has the advantage that we will not obtain estimates
of the total residual error below the assay error. However, in my
experience, that is not a problem: in the vast majority of population PK
analyses, assay error is only a minor component of the total residual
error. The residual error is usually very well determined (in population
analyses) and is the part of the model for which prior information is of
the least value. In population analyses, I therefore fail to see any
advantage in having one component of the residual error fixed to a
previous estimate. Since it is usually difficult to obtain information
on the residual error model and magnitude in individual PK analyses, the
above scheme may well be of value in such analyses.
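The caution above about polynomials in ranges without data is easy to demonstrate: a high-order polynomial that fits the assay standards exactly can predict an absurd (here negative) SD when extrapolated beyond the highest standard. All numbers below are hypothetical.

```python
import numpy as np

# Four hypothetical assay standards and their SDs.
conc = np.array([1.0, 2.0, 5.0, 10.0])
sd = np.array([0.05, 0.07, 0.15, 0.30])

# A cubic passes exactly through all four points...
cubic = np.poly1d(np.polyfit(conc, sd, deg=3))

# ...but extrapolated to 100 units it predicts a large negative SD,
# which is physically meaningless. A lower-order fit, or standards
# spaced over the full working range, avoids this.
sd_at_100 = cubic(100.0)
```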
Best regards,
Mats Karlsson
Div. of Pharmacokinetics and Biopharmaceutics
Uppsala University
Sweden
Copyright 1995-2010 David W. A. Bourne (david@boomer.org)