Standard error of independent variables using inverse linear regression
Hi,
I have four dependent variables (y(1), y(2), y(3), y(4)) that vary as a function of two independent variables (x(1), x(2)). I performed a set of laboratory tests to determine the calibration between the dependent and independent variables under controlled conditions. For all four dependent variables, the relationship is well described by a full cubic polynomial, which is linear in its coefficients:
y(1-4) = a + bx(1) + cx(2) + dx(1)^2 + ex(2)^2 + fx(1)x(2) + gx(1)^3 + hx(2)^3 + ix(1)x(2)^2 + jx(1)^2x(2)
Afterwards, I use a least-squares algorithm to determine "best fit" x(1) and x(2) values for a given set of y(1-4) values.
For the calculation of the variability of x(1) and x(2), I would like to account for both the measurement noise (this can easily be done by measuring y values for replicate samples) and the calibration noise (i.e. the standard errors associated with the regression coefficients a, b, c, etc.). I am struggling with this because of the complexity of the polynomial model. Does anyone have ideas on how to proceed?
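For reference, one common way to fold both noise sources into the variability of x(1) and x(2) is Monte Carlo propagation: redraw the calibration coefficients from their estimated covariance sigma^2 (X'X)^-1, perturb the y values by the replicate scatter, re-run the inverse least-squares fit for each draw, and take the spread of the recovered x values. This is not the asker's code; it is a sketch in Python/NumPy with invented data and noise levels (the same idea translates directly to MATLAB's `lsqnonlin` and `mvnrnd`):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def design(x1, x2):
    """Design matrix for the question's full cubic model:
    y = a + b*x1 + c*x2 + d*x1^2 + e*x2^2 + f*x1*x2
          + g*x1^3 + h*x2^3 + i*x1*x2^2 + j*x1^2*x2
    """
    x1, x2 = np.atleast_1d(x1), np.atleast_1d(x2)
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2,
                            x1**3, x2**3, x1*x2**2, x1**2*x2])

# --- Calibration stage on invented data (10 coefficients, 4 responses) ---
n = 200
x1c = rng.uniform(-1, 1, n)
x2c = rng.uniform(-1, 1, n)
A = design(x1c, x2c)
true_beta = rng.normal(size=(10, 4))                # hypothetical true coefficients
Y = A @ true_beta + 0.01 * rng.normal(size=(n, 4))  # simulated calibration measurements

beta_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
dof = n - 10
sigma2 = ((Y - A @ beta_hat) ** 2).sum(axis=0) / dof  # residual variance per y(k)
AtA_inv = np.linalg.inv(A.T @ A)                      # (X'X)^-1, shared by all four fits

def invert(y_obs, beta):
    """Least-squares estimate of (x1, x2) from one set of four y values."""
    fun = lambda x: (design(x[0], x[1]) @ beta).ravel() - y_obs
    return least_squares(fun, x0=[0.0, 0.0]).x

# --- Monte Carlo propagation of both noise sources ---
x_true = np.array([0.3, -0.5])
y_clean = (design(*x_true) @ beta_hat).ravel()

n_mc = 300
xs = np.empty((n_mc, 2))
for m in range(n_mc):
    # Measurement noise: perturb y by the replicate/residual scatter.
    y_m = y_clean + np.sqrt(sigma2) * rng.normal(size=4)
    # Calibration noise: redraw each coefficient vector from its
    # estimated covariance sigma2[k] * (X'X)^-1.
    beta_m = np.column_stack([
        rng.multivariate_normal(beta_hat[:, k], sigma2[k] * AtA_inv)
        for k in range(4)])
    xs[m] = invert(y_m, beta_m)

x_std = xs.std(axis=0)  # combined standard error of x(1) and x(2)
print("std of recovered x:", x_std)
```

The appeal of the Monte Carlo route is that it sidesteps the analytic Jacobian of the cubic inverse map entirely; a first-order (delta-method) propagation is also possible but requires linearizing the inverse fit around the solution.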
Thanks a lot, Dries