990 results for LIKELIHOOD APPROACH
Abstract:
The paper follows on from earlier work [Taroni F and Aitken CGG. Probabilistic reasoning in the law, Part 1: assessment of probabilities and explanation of the value of DNA evidence. Science & Justice 1998; 38: 165-177]. Different explanations of the value of DNA evidence were presented to students from two schools of forensic science and to members of fifteen laboratories around the world. The responses were divided into two groups: those from a school or laboratory identified as Bayesian and those from a school or laboratory identified as non-Bayesian. The paper analyses these responses using a likelihood approach, which is more consistent with a Bayesian analysis than the frequentist approach reported in Taroni F and Aitken CGG [Probabilistic reasoning in the law, Part 1: assessment of probabilities and explanation of the value of DNA evidence], Science & Justice 1998.
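The evidential framing the paper discusses is conventionally expressed as a likelihood ratio comparing the probability of the evidence under the prosecution and defence hypotheses. A minimal generic sketch (an illustration of the standard formulation, not code from the paper):

```python
def likelihood_ratio(p_match_if_source: float, random_match_prob: float) -> float:
    """Value of match evidence as a likelihood ratio:
    P(evidence | suspect is the source) / P(evidence | unrelated source).
    """
    return p_match_if_source / random_match_prob

# A profile frequency of 1 in a million makes the observed match about a
# million times more probable if the suspect is the source than if not.
lr = likelihood_ratio(1.0, 1e-6)
```

The ratio is the factor by which the evidence multiplies the prior odds, which is what makes it the natural Bayesian summary of evidential value.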
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis that integrates the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship between hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, and different results have been reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded that there was a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has recently again become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted effect estimate of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
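The likelihood machinery behind a random-effects pooled estimate can be sketched in a few lines: profile the between-study variance over a grid and, for each candidate value, take the inverse-variance weighted mean and evaluate the marginal normal log-likelihood. This is a generic illustration under normal approximations, not the paper's nonparametric mixed model:

```python
import numpy as np

def pooled_effect_ml(y, v, tau2_grid=None):
    """Random-effects pooled estimate by approximate maximum likelihood.

    y: per-study effect estimates (e.g. log relative risks)
    v: per-study sampling variances
    Returns (between-study variance tau^2, pooled effect mu).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    if tau2_grid is None:
        tau2_grid = np.linspace(0.0, 4.0 * y.var(), 401)
    best = (-np.inf, 0.0, float(y.mean()))
    for t2 in tau2_grid:
        w = 1.0 / (v + t2)                       # marginal precision per study
        mu = np.sum(w * y) / np.sum(w)           # weighted mean given tau^2
        ll = -0.5 * np.sum(np.log(v + t2) + w * (y - mu) ** 2)
        if ll > best[0]:
            best = (ll, float(t2), float(mu))
    return best[1], best[2]
```

With perfectly homogeneous studies the maximized tau^2 is zero and the pooled effect reduces to the common estimate, which is the behaviour heterogeneity diagnostics test against.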
Abstract:
Bioequivalence trials are abbreviated clinical trials in which a generic drug or new formulation is evaluated to determine whether it is "equivalent" to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area, which we believe indicate systemic defects in the frequentist approach that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence and examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods are a reasonable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. Our study also shows that the standard methods in the current practice of bioequivalence trials offer only weak evidence from the evidential point of view.
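A profile likelihood eliminates nuisance parameters by maximizing them out at each value of the parameter of interest. As a minimal generic illustration (the simplest case of the idea, not the bioequivalence-specific profile likelihood of the paper), the profile log-likelihood for a normal mean with the variance profiled out:

```python
import math

def profile_loglik_mean(data, mu):
    """Profile log-likelihood for the mean of a normal sample:
    sigma^2 is replaced by its conditional MLE given mu."""
    n = len(data)
    s2 = max(sum((x - mu) ** 2 for x in data) / n, 1e-12)  # MLE of sigma^2 given mu
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1.0)
```

Evidential (Royall-style) likelihood intervals are then read off as the set of mu whose relative profile likelihood exp(l_p(mu) - l_p(mu_hat)) exceeds a cutoff such as 1/8.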
Abstract:
In this paper we propose a method to estimate by maximum likelihood the divergence time between two populations, specifically designed for the analysis of nonrecurrent rare mutations. Given the rapidly growing amount of data, rare disease mutations affecting humans seem the most suitable candidates for this method. The estimator RD, and its conditional version RDc, were derived, assuming that the population dynamics of rare alleles can be described by a birth–death process approximation and that each mutation arose before the split of a common ancestral population into the two diverging populations. The RD estimator seems more suitable for large sample sizes and few alleles whose age can be approximated, whereas the RDc estimator appears preferable when this is not the case. When applied to three cystic fibrosis mutations, the estimator RD could not exclude a very recent time of divergence among three Mediterranean populations. On the other hand, the divergence time between these populations and the Danish population was estimated to be, on average, 4,500 or 15,000 years, with or without assuming a selective advantage for cystic fibrosis carriers, respectively. Confidence intervals are large, however, and can probably be reduced only by analyzing more alleles or loci.
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites.
A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites.
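The method-of-moments (Matheron) estimator that REML is compared against is simple to write down: half the mean squared difference between values at pairs of sites, binned by separation distance. A generic sketch (an illustration of the MoM estimator, not the authors' code):

```python
import numpy as np

def mom_variogram(coords, values, lags, tol):
    """Method-of-moments empirical variogram:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose
    separation distance lies within `tol` of lag h."""
    coords = np.asarray(coords, float)
    z = np.asarray(values, float)
    # pairwise distances between all sites
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(z), k=1)            # each pair counted once
    dist, sq = d[iu], 0.5 * (z[iu[0]] - z[iu[1]]) ** 2
    gamma = []
    for h in lags:
        sel = np.abs(dist - h) <= tol
        gamma.append(sq[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)
```

The data-hungriness discussed in the abstract comes from the binning: each lag needs enough pairs for a stable mean, which is why around 100 sites are usually recommended for MoM.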
Abstract:
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial in which a genetic marker, c-myc expression level, is subject to measurement error.
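The ordinary Cox partial likelihood that the method reduces to can be sketched directly; the version below is a generic single-covariate negative log partial likelihood assuming no tied event times, not the authors' imputation-based estimating equation:

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, times, events, x):
    """Negative log partial likelihood of a Cox proportional hazards
    model with one covariate (assumes no tied event times)."""
    times = np.asarray(times, float)
    events = np.asarray(events, bool)
    eta = float(beta) * np.asarray(x, float)     # linear predictor
    nll = 0.0
    for i in np.where(events)[0]:
        at_risk = times >= times[i]              # subjects still at risk at t_i
        nll -= eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return nll
```

Minimizing this over beta gives the usual Cox estimate; with error-prone covariates, x is replaced by imputed true values inside the score, which is the paper's point of departure.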
Abstract:
This work represents an original contribution to the methodology for ecosystem model development, as well as the first attempt at an end-to-end (E2E) model of the Northern Humboldt Current Ecosystem (NHCE). The main purpose of the developed model is to build a tool for ecosystem-based management and decision making, which is why the credibility of the model is essential; this can be assessed through confrontation with data. Additionally, the NHCE exhibits high climatic and oceanographic variability at several scales, the major source of interannual variability being the interruption of the upwelling seasonality by the El Niño Southern Oscillation, which has direct effects on larval survival and fish recruitment success. Fishing activity can also be highly variable, depending on the abundance and accessibility of the main fishery resources. This context brings the two main methodological questions addressed in this thesis, through the development of an end-to-end model coupling the high trophic level model OSMOSE to the hydrodynamic and biogeochemical model ROMS-PISCES: i) how to calibrate ecosystem models using time series data, and ii) how to incorporate the impact of the interannual variability of the environment and fishing. First, this thesis highlights some issues related to the confrontation of complex ecosystem models with data and proposes a methodology for a sequential multi-phase calibration of ecosystem models. We propose two criteria to classify the parameters of a model: the model dependency and the time variability of the parameters. These criteria, along with the availability of approximate initial estimates, are used as decision rules to determine which parameters need to be estimated, and their precedence order in the sequential calibration process.
Additionally, a new evolutionary algorithm designed for the calibration of stochastic models (e.g. individual-based models) and optimized for maximum likelihood estimation was developed and applied to the calibration of the OSMOSE model to time series data. The environmental variability is explicit in the model: the ROMS-PISCES model forces the OSMOSE model and drives potential bottom-up effects up the food web through plankton and fish trophic interactions, as well as through changes in the spatial distribution of fish. The latter effect was taken into account using presence/absence species distribution models, which are traditionally assessed through a confusion matrix and the statistical metrics associated with it. However, when considering the prediction of the habitat against time, the variability in the spatial distribution of the habitat can be summarized and validated using the emerging patterns from the shape of the spatial distributions. We modeled the potential habitat of the main species of the Humboldt Current Ecosystem using several sources of information (fisheries, scientific surveys and satellite monitoring of vessels) jointly with environmental data from remote sensing and in situ observations, from 1992 to 2008. The potential habitat was predicted over the study period with monthly resolution, and the model was validated using quantitative and qualitative information about the system using a pattern-oriented approach. The final ROMS-PISCES-OSMOSE E2E ecosystem model for the NHCE was calibrated using our evolutionary algorithm and a likelihood approach to fit monthly time series data of landings, abundance indices and catch-at-length distributions from 1992 to 2008. To conclude, some potential applications of the model for fishery management are presented, and their limitations and perspectives discussed.
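The calibration idea (an evolutionary search driven by a likelihood objective) can be sketched generically. The toy (1+lambda) evolution strategy below mutates the incumbent parameter vector with Gaussian noise and keeps any candidate with a higher log-likelihood; it is only a minimal stand-in for the thesis's algorithm:

```python
import random

def es_maximize(loglik, x0, sigma=0.5, pop=20, gens=60, seed=1):
    """Minimal (1+lambda) evolution strategy: each generation draws `pop`
    Gaussian mutations of the current best parameter vector and keeps
    the candidate with the highest log-likelihood."""
    rng = random.Random(seed)
    best, best_ll = list(x0), loglik(x0)
    for _ in range(gens):
        for _ in range(pop):
            cand = [v + rng.gauss(0.0, sigma) for v in best]
            ll = loglik(cand)
            if ll > best_ll:
                best, best_ll = cand, ll
    return best, best_ll
```

Because the objective is only ever evaluated, never differentiated, this kind of search tolerates the noisy likelihoods produced by stochastic individual-based simulators.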
Abstract:
The radiation of angiosperms is associated with shifts among pollination modes that are thought to have driven the diversification of floral forms. However, the exact sequence of evolutionary events that led to such great diversity in floral traits is unknown for most plant groups. Here, we characterize the patterns of evolution of individual floral traits and overall floral morphologies in the tribe Bignonieae (Bignoniaceae). We identified 12 discrete traits that are associated with seven floral types previously described for the group and used a penalized likelihood tree of the tribe to reconstruct the ancestral states of those traits at all nodes of the phylogeny of Bignonieae. In addition, evolutionary correlations among traits were conducted using a maximum likelihood approach to test whether the evolution of individual floral traits followed the correlated patterns of evolution expected under the "pollination syndrome" concept. The ancestral Bignonieae flower presented an Anemopaegma-type morphology, which was followed by several parallel shifts in floral morphologies. Those shifts occurred through intermediate stages resulting in mixed floral morphologies as well as directly from the Anemopaegma-type morphology to other floral types. Positive and negative evolutionary correlations among traits fit patterns expected under the pollination syndrome perspective, suggesting that interactions between Bignonieae flowers and pollinators likely played important roles in the diversification of the group as a whole.
Abstract:
When the data consist of certain attributes measured on the same set of items in different situations, they would be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set where various attributes were measured on genotypes grown in several environments.
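The mixture likelihood approach is usually fitted by EM. The sketch below shows the idea in its simplest (one-mode, complete-data) form: a two-component univariate normal mixture with a shared variance. It is a toy stand-in for the three-mode, missing-data version of the paper:

```python
import math

def em_two_normals(xs, iters=100):
    """EM for a two-component normal mixture with a shared variance.
    Returns (mixing weights, component means, shared variance)."""
    xs = [float(x) for x in xs]
    n = len(xs)
    s = sorted(xs)
    mu = [s[n // 4], s[(3 * n) // 4]]           # spread-out initial means
    mean = sum(xs) / n
    var = max(1e-6, sum((x - mean) ** 2 for x in xs) / n)
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each point
        r0 = []
        for x in xs:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var)) for k in (0, 1)]
            r0.append(p[0] / (p[0] + p[1]))
        # M-step: update weights, means and the shared variance
        n0 = sum(r0)
        w = [n0 / n, 1 - n0 / n]
        mu[0] = sum(r * x for r, x in zip(r0, xs)) / n0
        mu[1] = sum((1 - r) * x for r, x in zip(r0, xs)) / (n - n0)
        var = max(1e-9, sum(r * (x - mu[0]) ** 2 + (1 - r) * (x - mu[1]) ** 2
                            for r, x in zip(r0, xs)) / n)
    return w, mu, var
```

The extension in the paper amounts to evaluating each item's component density only over its observed entries, so missing-at-random cells simply drop out of the E-step.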
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P-1 and P-2), F-1, F-2, and both backcross generations (B-1 and B-2) derived from crossing the F-1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals.
The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought resistance-trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model. The presence of the two major genes was confirmed with the addition of an unspecified number of polygenes.
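Power-by-simulation, as used to evaluate the JSA, follows a generic recipe: simulate data under a chosen genetic model, apply the test, and count rejections. A minimal skeleton with a deliberately trivial example test (a one-sided z-test on a toy normal sample, not the JSA itself):

```python
import random

def simulated_power(simulate, test_stat, critical, n_sims=500, seed=7):
    """Monte Carlo power: the fraction of simulated datasets whose
    test statistic exceeds the critical value."""
    rng = random.Random(seed)
    hits = sum(test_stat(simulate(rng)) > critical for _ in range(n_sims))
    return hits / n_sims

# Toy example: power of a one-sided z-test (critical value 1.645)
# to detect a unit mean shift from 25 observations.
power = simulated_power(
    simulate=lambda rng: [rng.gauss(1.0, 1.0) for _ in range(25)],
    test_stat=lambda xs: (sum(xs) / len(xs)) * 25 ** 0.5,
    critical=1.645,
)
```

In the paper's setting, `simulate` would generate the six basic generations under a given mixed-inheritance scenario and `test_stat` would be the JSA likelihood-ratio criterion.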
Abstract:
A hybrid zone between the grasshoppers Chorthippus brunneus and C. jacobsi (Orthoptera: Acrididae) in northern Spain has been analyzed for variation in morphology and ecology. These species are readily distinguished by the number of stridulatory pegs on the hind femur. Both sexes are fully winged and inhabit disturbed habitats throughout the study area. We develop a maximum-likelihood approach for fitting a two-dimensional cline to geographical variation in quantitative traits and for estimating associations of population mean with local habitat. This method reveals a cline in peg number approximately 30 km south of the Picos de Europa Mountains that shows substantial deviations in population mean compared with the expectations of simple tension-zone models. The inclusion of variation in local vegetation in the model explains a significant proportion of the residual variation in peg number, indicating that habitat-genotype associations contribute to the observed spatial pattern. However, this association is weak, and a number of populations continue to show strong deviations in mean even after habitat is included in the final model. These outliers may be the result of long-distance colonization of sites distant from the cline center or may be due to a patchy pattern of initial contact during postglacial expansion. As well as contrasting with the smooth hybrid zones described for Chorthippus parallelus, this situation also contrasts with the mosaic hybrid zones observed in Gryllus crickets and in parts of the hybrid zone between Bombina toad species, where habitat-genotype associations account for substantial amounts of among-site variation.
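The cline-fitting idea can be sketched in miniature: a sigmoid expected trait mean along a transect, with parameters chosen to maximize a normal log-likelihood. The sketch below profiles only the cline center over a grid of candidates, holding the other parameters fixed; it is an illustration of the idea, not the authors' two-dimensional habitat model:

```python
import math

def cline_mean(x, center, width, left, right):
    """Sigmoid cline: the expected trait mean moves from `left` to
    `right` as position x crosses `center`, over a scale of `width`."""
    t = 1.0 / (1.0 + math.exp(-4.0 * (x - center) / width))
    return left + (right - left) * t

def fit_cline_center(xs, ys, sigma, width, left, right, centers):
    """Pick the candidate center maximizing the normal log-likelihood
    of trait means ys observed at transect positions xs."""
    def loglik(c):
        return sum(-0.5 * ((y - cline_mean(x, c, width, left, right)) / sigma) ** 2
                   for x, y in zip(xs, ys))
    return max(centers, key=loglik)
```

Habitat-genotype association enters the full model as an extra term shifting the expected mean by local habitat, which is how residual deviations from the bare cline are tested.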
Abstract:
C4 photosynthesis is an adaptation derived from the more common C3 photosynthetic pathway that confers a higher productivity under warm temperature and low atmospheric CO2 concentration [1, 2]. C4 evolution has been seen as a consequence of past atmospheric CO2 decline, such as the abrupt CO2 fall 32-25 million years ago (Mya) [3-6]. This relationship has never been tested rigorously, mainly because of a lack of accurate estimates of divergence times for the different C4 lineages [3]. In this study, we inferred a large phylogenetic tree for the grass family and estimated, through Bayesian molecular dating, the ages of the 17 to 18 independent grass C4 lineages. The first transition from C3 to C4 photosynthesis occurred in the Chloridoideae subfamily, 32.0-25.0 Mya. The link between CO2 decrease and transition to C4 photosynthesis was tested by a novel maximum likelihood approach. We showed that the model incorporating the atmospheric CO2 levels was significantly better than the null model, supporting the importance of CO2 decline for the evolution of C4 photosynthesis. This finding is relevant for understanding the origin of C4 photosynthesis in grasses, which is one of the most successful ecological and evolutionary innovations in plant history.
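Comparing a CO2-dependent model against a null model is a nested-model likelihood-ratio test. A generic sketch, assuming the usual chi-square reference distribution with one degree of freedom for the single extra parameter:

```python
import math

def lrt_pvalue_chi2_1(loglik_null, loglik_alt):
    """Likelihood-ratio statistic and chi-square(1 df) p-value for
    comparing two nested models (alt has one extra free parameter).
    Uses P(chi2_1 > s) = erfc(sqrt(s / 2))."""
    stat = 2.0 * (loglik_alt - loglik_null)
    p = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    return stat, p
```

A log-likelihood gain of 2 (statistic 4.0) sits just past the conventional 3.84 threshold, so it would be declared significant at the 5% level.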
Abstract:
Traditional econometric approaches to modeling the dynamics of equity and commodity markets have made great progress in the past decades. However, they assume rationality among the economic agents and do not capture the dynamics that produce extreme events (black swans) arising from deviations from the rationality assumption. The purpose of this study is to simulate the dynamics of silver markets by using the novel computational market dynamics approach. To this end, daily closing prices of spot silver from 1 March 2000 to 1 March 2013 were simulated with the Jabłonska-Capasso-Morale (JCM) model. A maximum likelihood approach was employed to calibrate the model to the acquired data. Statistical analysis of the simulated series with respect to the actual one was conducted to evaluate model performance. The model captures well the animal-spirits dynamics present in the data under evaluation.
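Maximum-likelihood calibration of a time-series model can be illustrated with the simplest possible case. The sketch below gives the conditional MLE of an AR(1) coefficient (which coincides with least squares); it is a generic stand-in for likelihood calibration, not the JCM model itself:

```python
def ar1_conditional_mle(xs):
    """Conditional maximum-likelihood (= least-squares) estimate of phi
    in the AR(1) model x_t = phi * x_{t-1} + eps_t, eps_t ~ N(0, s^2)."""
    num = sum(a * b for a, b in zip(xs[1:], xs[:-1]))
    den = sum(b * b for b in xs[:-1])
    return num / den
```

Richer models like JCM replace the one-step Gaussian transition density with their own, but the calibration recipe (maximize the product of transition densities over the observed path) is the same.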
Abstract:
The Kalman filter is a powerful recursive mathematical tool that plays an increasingly vital role in innumerable fields of study. The filter has been put to service in a multitude of studies involving both time series modelling and financial time series modelling. Modelling time series data in Computational Market Dynamics (CMD) can be accomplished using the Jablonska-Capasso-Morale (JCM) model, whose parameters have traditionally been estimated by a maximum likelihood approach. The purpose of this study is to discover whether the Kalman filter can be effectively utilized in CMD. An ensemble Kalman filter (EnKF) with 50 ensemble members, applied to US sugar prices spanning January 1960 to February 2012, was employed for this work. The real data and the Kalman filter trajectories showed no significant discrepancies, indicating satisfactory performance of the technique. Since only US sugar prices were utilized, it would be interesting to discover the nature of the results if other data sets were employed.
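The EnKF recursion (propagate each ensemble member through the model, then nudge it toward a perturbed observation by the Kalman gain) can be sketched for a scalar random-walk state. A toy illustration, not the sugar-price study's configuration:

```python
import random

def enkf_scalar(observations, n_ens=50, proc_sd=0.1, obs_sd=0.5, seed=0):
    """Ensemble Kalman filter for a scalar random-walk state observed
    directly with noise (observation operator H = 1).
    Returns the filtered ensemble mean at each step."""
    rng = random.Random(seed)
    ens = [rng.gauss(observations[0], 1.0) for _ in range(n_ens)]
    means = []
    for y in observations:
        # forecast: propagate each member through the random-walk model
        ens = [x + rng.gauss(0.0, proc_sd) for x in ens]
        m = sum(ens) / n_ens
        var = sum((x - m) ** 2 for x in ens) / (n_ens - 1)
        gain = var / (var + obs_sd ** 2)          # scalar Kalman gain
        # analysis: nudge each member toward a perturbed observation
        ens = [x + gain * (y + rng.gauss(0.0, obs_sd) - x) for x in ens]
        means.append(sum(ens) / n_ens)
    return means
```

Because the covariance is estimated from the ensemble itself, the same recursion applies unchanged when the random-walk forecast is replaced by a nonlinear model such as JCM.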
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event; and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events, or when distinct individuals have dependent event times. In most studies there are covariates, such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduced a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. We then introduced a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a more useful concept than the hazard function. Motivated by this, in Chapter 4 we considered a new semi-parametric model, the bivariate proportional mean residual life time model, to assess the relationship between mean residual life and covariates for gap times of recurrent events.
The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduced a proportional hazards model for duration times and developed inference procedures under dependent (informative) censoring. In Chapter 6, we introduced a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters were studied. The proposed models were applied to various real-life situations.