850 results for Isotonic regression
Abstract:
Within the nutritional context, micromineral supplementation of poultry feed is often carried out in quantities exceeding requirements in an attempt to ensure adequate animal performance. Dose-response experiments are very common for determining optimal nutrient levels in feed formulation and typically rely on regression models for this purpose. Routine regression analysis, however, generally relies on a priori information about a possible functional relationship between the response variable and the dose. Isotonic regression is a least-squares estimation method that produces estimates preserving the ordering of the data; in isotonic regression theory this ordering information is essential and is expected to increase fitting efficiency. The objective of this work was to use isotonic regression as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard strain. We considered quadratic-polynomial and linear-exponential plateau response models and, in addition, proposed fitting a logarithmic model to the data; the efficiency of the methodology was evaluated by Monte Carlo simulation under different scenarios for the parametric values. Isotonizing the data improved all of the goodness-of-fit measures evaluated. Among the models considered, the logarithmic model yielded parameter estimates most consistent with values reported in the literature.
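As a rough illustration of the order-preserving least-squares idea described above, the sketch below isotonizes a dose-response series with scikit-learn's pool-adjacent-violators implementation and then fits a logarithmic model to the isotonized values. The dose levels, responses, and the `log_model` function are hypothetical, not the study's Zn-deposition data or code.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.isotonic import IsotonicRegression

# Hypothetical supplementation levels (mg/kg) and tibia Zn responses.
dose = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
resp = np.array([180.0, 210.0, 205.0, 240.0, 255.0, 250.0, 262.0])

# Step 1: isotonize -- least-squares fit constrained to be non-decreasing
# in dose (pool-adjacent-violators algorithm).
iso = IsotonicRegression(increasing=True)
resp_iso = iso.fit_transform(dose, resp)

# Step 2: fit a logarithmic response model to the isotonized values.
def log_model(x, a, b):
    return a + b * np.log1p(x)

params, _ = curve_fit(log_model, dose, resp_iso)
print("isotonized responses:", resp_iso)
print("logarithmic fit (a, b):", params)
```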
Abstract:
The purpose of a phase I trial in cancer is to determine the level (dose) of the treatment under study that has an acceptable level of adverse effects. Although substantial progress has recently been made in this area using parametric approaches, the method that is widely used is based on treating small cohorts of patients at escalating doses until the frequency of toxicities seen at a dose exceeds a predefined tolerable toxicity rate. This method is popular because of its simplicity and freedom from parametric assumptions. In this paper, we consider cases in which it is undesirable to assume a parametric dose-toxicity relationship. We propose a simple model-free approach that modifies the method in common use. The approach assumes that toxicity is nondecreasing with dose and fits an isotonic regression to the accumulated data. At any point in the trial, the dose given is the one with estimated toxicity deemed closest to the maximum tolerable toxicity. Simulations indicate that this approach performs substantially better than the commonly used method and compares favorably with other phase I designs.
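A minimal sketch of the dose-assignment rule described above is given below, assuming a hypothetical target toxicity rate of 0.30 and made-up accumulated counts; this is an illustration of the general idea, not the paper's implementation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

doses = np.arange(1, 6)                    # dose levels 1..5
n_treated = np.array([3, 6, 6, 3, 1])      # patients treated so far at each dose
n_toxic = np.array([0, 1, 2, 2, 1])        # toxicities observed at each dose
target = 0.30                              # hypothetical maximum tolerable toxicity rate

has_data = n_treated > 0
observed = n_toxic[has_data] / n_treated[has_data]

# Non-decreasing least-squares fit to the observed rates, weighted by cohort size.
iso = IsotonicRegression(increasing=True)
tox_hat = iso.fit_transform(doses[has_data], observed,
                            sample_weight=n_treated[has_data])

# Treat the next cohort at the dose whose estimated toxicity is closest to the target.
next_dose = doses[has_data][np.argmin(np.abs(tox_hat - target))]
print("isotonic toxicity estimates:", tox_hat)
print("next dose:", next_dose)
```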
Semiparametric estimates of the supply and demand effects of disability on labor force participation
Abstract:
This paper modifies and applies the semiparametric methods of Ichimura and Lee (1991) to standard cross-section data to decompose the effect of disability on labor force participation into a demand and a supply effect. It shows that a straightforward application of Ichimura and Lee leads to meaningless results, whereas imposing monotonicity on the unknown function yields substantive results. The paper finds that the supply effects of disability dominate the demand effects.
Abstract:
The goal of this article is to provide a new design framework and its corresponding estimation for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet it is possible that subjects with neither toxicity nor efficacy responses can be treated at higher dose levels, and their subsequent responses to higher doses will provide more information. In addition, for some trials, it might be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how the designs of multiple doses per subject can be implemented to improve design efficiency. The gain of efficiency from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" and the proposed estimation method together increases the efficiency substantially.
Abstract:
Thesis completed under joint supervision (cotutelle) between the Université de Montréal and the Université de Technologie de Troyes.
Abstract:
This paper presents an application of AMMI (Additive Main effects and Multiplicative Interaction) models to a detailed study of the genotype-by-environment interaction in multi-environment experiments with balanced data. Two cross-validation methods are presented, together with a refinement of these methods based on correcting the eigenvalues, which are then re-ordered by isotonic regression. A comparative study of the methods is carried out with real data. The results show that the method of EASTMENT & KRZANOWSKI (1982) selects a more parsimonious model, and when it is refined with the eigenvalue correction the number of components is not modified. The method of GABRIEL (2002) selects a large number of terms to retain in the model, and when it is refined by the eigenvalue correction this number decreases. The refinement of these methods through eigenvalue correction therefore offers a real practical benefit to analysts of multi-environment data, since selecting the number of multiplicative terms, when the AMMI model is used instead of the complete model, represents a gain in terms of the number of blocks (or replicates).
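The re-ordering step lends itself to a short sketch: corrected eigenvalues may lose their natural non-increasing order, and an isotonic regression with a decreasing constraint restores it before the cross-validation criterion is computed. The eigenvalue vector below is invented for illustration and is not taken from the paper.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical corrected eigenvalues that violate the natural descending order.
corrected = np.array([5.2, 3.9, 4.1, 2.0, 2.2, 0.8])
component = np.arange(1, corrected.size + 1)

# Least-squares fit constrained to be non-increasing in the component index.
iso = IsotonicRegression(increasing=False)
reordered = iso.fit_transform(component, corrected)
print(reordered)   # [5.2, 4.0, 4.0, 2.1, 2.1, 0.8] -- violating pairs are pooled
```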
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate; this may be especially true at high doses of a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one dose (often two), instead of only at the maximum tolerated dose (MTD). That way, when the lower dose appears equally effective, it can be recommended for further confirmatory testing in a phase III trial in view of potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. Failure to account for this ordering constraint in estimating the response probabilities, however, may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses with ordered response rates. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor. Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). The proposed designs thus lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. They may be extended to designs for multiple-group trials and drug-combination trials.
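For the two-dose case, "borrowing strength" via isotonic regression reduces to pooling adjacent violators whenever the observed rates contradict the assumed ordering. The sketch below uses illustrative counts and shows only this estimation step, not the full two-stage design from the dissertation.

```python
def isotonic_two_dose(x_low, n_low, x_high, n_high):
    """Order-restricted estimates of two response rates assuming p_low <= p_high."""
    p_low, p_high = x_low / n_low, x_high / n_high
    if p_low <= p_high:
        return p_low, p_high                       # ordering satisfied: keep raw estimates
    pooled = (x_low + x_high) / (n_low + n_high)   # pool adjacent violators
    return pooled, pooled

# Illustrative counts: 8/20 responses at the low dose, 5/20 at the high dose.
print(isotonic_two_dose(8, 20, 5, 20))   # both estimates pooled to 13/40 = 0.325
```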
Abstract:
Expert elicitation is the process of retrieving and quantifying expert knowledge in a particular domain. Such information is of particular value when empirical data are expensive, limited, or unreliable. This paper describes a new software tool, called Elicitator, which assists in quantifying expert knowledge in a form suitable for use as a prior model in Bayesian regression. Potential environmental domains for applying this elicitation tool include habitat modeling, assessing detectability or eradication, ecological condition assessments, risk analysis, and quantifying inputs to complex models of ecological processes. The tool has been developed to be user-friendly and extensible, and to facilitate consistent and repeatable elicitation of expert knowledge across these various domains. We demonstrate its application to elicitation for logistic regression in a geographically based ecological context. The underlying statistical methodology is also novel, utilizing an indirect elicitation approach to target expert knowledge on a case-by-case basis. For several elicitation sites (or cases), experts are asked simply to quantify their estimated ecological response (e.g. probability of presence), and its range of plausible values, after inspecting (habitat) covariates via a GIS.
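As a loose, purely illustrative sketch of the indirect idea (elicit a response and a plausible range at a few sites, then recover an induced prior on the regression coefficients), the code below logit-transforms hypothetical elicited probabilities and regresses them on site covariates. The numbers, the least-squares step, and the crude spread rule are assumptions for illustration only, not the Elicitator software's algorithm.

```python
import numpy as np

# Hypothetical site covariates (intercept, slope, aspect) and elicited values.
X = np.array([[1.0, 0.2, 0.1],
              [1.0, 0.5, 0.7],
              [1.0, 0.8, 0.3],
              [1.0, 0.3, 0.9]])
p_best = np.array([0.10, 0.55, 0.70, 0.30])    # expert's best estimates of presence
p_width = np.array([0.10, 0.20, 0.15, 0.25])   # width of each plausible range

# Indirect step: work on the logit scale used by logistic regression.
elicited_logit = np.log(p_best / (1.0 - p_best))

# Prior mean for the coefficients: least-squares fit of the elicited logits.
beta_mean, *_ = np.linalg.lstsq(X, elicited_logit, rcond=None)

# Crude prior spread: wider plausible ranges imply a less confident prior.
beta_sd = np.full(X.shape[1], 10.0 * p_width.mean())

print("prior mean (intercept, slope, aspect):", beta_mean)
print("assumed prior sd:", beta_sd)
```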
Abstract:
Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine the prior and posterior distributions obtained from the three methods. The results indicate both similarities and dissimilarities between the expert-informed priors of the two experts formulated from the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, access to experts, and the funding available. This trial reveals that expert knowledge can be important when modelling rare-event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated.