937 results for likelihood-based inference
Abstract:
2000 Mathematics Subject Classification: 60J80.
Abstract:
We build the conditional least squares estimator of θ based on the observation of a single trajectory of {Zk,Ck}k, and give conditions ensuring its strong consistency. The particular case of general linear models with respect to θ = (θ₁, θ₂), and among them regenerative processes, is studied in more detail. In this framework, we also prove the consistency of the estimator of θ even though it belongs to an asymptotically negligible part of the model, and the asymptotic law of the estimator can also be derived.
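The conditional-least-squares idea in this abstract can be sketched on a toy model. Everything below is hypothetical: a one-parameter linear recursion stands in for the paper's general linear model, and the estimator minimises the sum of squared one-step prediction errors along a single trajectory, which here has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter linear model observed along a single
# trajectory: Z_{k+1} = theta * Z_k + eps_k.
theta_true = 0.8
Z = np.empty(500)
Z[0] = 10.0
for k in range(499):
    Z[k + 1] = theta_true * Z[k] + rng.normal(scale=0.5)

# Conditional least squares: minimise sum_k (Z_{k+1} - theta * Z_k)^2.
# For this linear model the minimiser has the closed form below.
theta_hat = np.sum(Z[1:] * Z[:-1]) / np.sum(Z[:-1] ** 2)
```

As the trajectory length grows, theta_hat converges to the true parameter under standard stability conditions, which is the strong-consistency phenomenon the abstract refers to.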
Abstract:
2010 Mathematics Subject Classification: 62J99.
Abstract:
2010 Mathematics Subject Classification: 62F12, 62M05, 62M09, 62M10, 60G42.
Abstract:
The goal of this paper is to model normal airframe conditions for helicopters in order to detect changes. This is done by inferring the flying state using a selection of sensors and frequency bands that are best for discriminating between different states. We used non-linear state-space models (NLSSM) for modelling flight conditions based on short-time frequency analysis of the vibration data and embedded the models in a switching framework to detect transitions between states. We then created a density model (using a Gaussian mixture model) for the NLSSM innovations: this provides a model for normal operation. To validate our approach, we used data with added synthetic abnormalities, which were detected as low-probability periods. The model of normality gave good indications of faults during the flight, in the form of low probabilities under the model, with high accuracy (>92%). © 2013 IEEE.
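The detection scheme described above (a density model over model innovations, with faults flagged as low-probability periods) can be sketched as follows. This is a hedged illustration: the innovations are synthetic, and a single Gaussian stands in for the paper's Gaussian mixture model for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical innovations from a fitted state-space model: mostly
# "normal" behaviour plus a short synthetic abnormality at the end.
normal = rng.normal(0.0, 1.0, size=(980, 2))
abnormal = rng.normal(6.0, 1.0, size=(20, 2))
innov = np.vstack([normal, abnormal])

# Fit a density model of normal operation (a single Gaussian here in
# place of the paper's Gaussian mixture).
mu = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False)
prec, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def log_density(x):
    d = x - mu
    maha = np.einsum("ij,jk,ik->i", d, prec, d)  # squared Mahalanobis
    return -0.5 * (maha + logdet + 2 * np.log(2 * np.pi))

# Flag low-probability samples as candidate faults.
scores = log_density(innov)
threshold = np.quantile(log_density(normal), 0.01)
flags = scores < threshold
```

The synthetic abnormality receives far lower log-density than any normal sample, so it is flagged while the false-alarm rate on normal data stays near the chosen quantile.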
Abstract:
2000 Mathematics Subject Classification: 62E16,62F15, 62H12, 62M20.
Abstract:
Detection canines represent the fastest and most versatile means of illicit material detection. At its simplest, this research endeavor is the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of detection substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide-based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated to give recommendations to the detection canine community on the training aid storage solution that best minimizes the likelihood of contamination. Two commonly used and accepted detection canine imprinting methods were also evaluated for the speed with which the canine is trained and for their reliability. As a result of this study, SCAs have been developed for explosive detection canine use covering peroxide-based explosives, TNT-based explosives, nitroglycerin-based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used today. By examining the storage parameters of the SCAs, an ideal storage system has been developed using three levels of containment to reduce possible contamination. The developed calibration compound will ease growing concerns over the legality and reliability of detection canine use by detailing the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied.
Through canine field testing, it has been shown that the IFRI SCAs outperform other commercially available training aids on the market. Additionally, of the imprinting methods tested, no difference was found in the speed with which the canines are trained or in their reliability to detect illicit materials. Therefore, if the recommendations of this study are followed, the detection canine community will benefit greatly from the use of scientifically validated training techniques and training aids.
Abstract:
The lognormal distribution has abundant applications in various fields. In the literature, most inferences on the two parameters of the lognormal distribution are based on Type-I censored sample data. However, exact measurements are not always attainable, especially when an observation is below or above the detection limits, and only the numbers of measurements falling into predetermined intervals can be recorded instead; such data are called grouped data. In this paper, we show the existence and uniqueness of the maximum likelihood estimators of the two parameters of the underlying lognormal distribution with Type-I censored data and grouped data. The proof is first established for the normal distribution and then extended to the lognormal distribution through the invariance property. The results are applied to estimate the median and mean of the lognormal population.
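The censored-likelihood idea behind this abstract can be sketched numerically: observations below a detection limit contribute only a normal-CDF term to the log-likelihood, and the normal-scale MLE transfers to the lognormal by invariance. All specifics below (sample size, detection limit, grid ranges) are hypothetical, and the grid search is a crude stand-in for a proper optimiser.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(2)

# Hypothetical left-censored lognormal sample: values below the
# detection limit are only known to lie below it.
mu_true, sigma_true, limit = 1.0, 0.5, 2.0
x = rng.lognormal(mu_true, sigma_true, size=2000)
observed = x[x >= limit]          # exact measurements
n_cens = int(np.sum(x < limit))   # count below the detection limit

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def neg_loglik(mu, sigma):
    # Work on the log scale: log(X) is normal(mu, sigma). Censored
    # points contribute log Phi((log limit - mu) / sigma) each;
    # additive constants are dropped since they do not move the max.
    y = np.log(observed)
    ll = (-len(y) * log(sigma)
          - np.sum((y - mu) ** 2) / (2 * sigma ** 2)
          + n_cens * log(norm_cdf((log(limit) - mu) / sigma)))
    return -ll

# Crude grid maximisation of the censored likelihood.
grid_mu = np.linspace(0.5, 1.5, 101)
grid_s = np.linspace(0.2, 1.0, 81)
vals = [(neg_loglik(m, s), m, s) for m in grid_mu for s in grid_s]
_, mu_hat, sigma_hat = min(vals)

# Invariance: the median of the lognormal is exp(mu).
median_hat = np.exp(mu_hat)
```

The fitted (mu_hat, sigma_hat) recover the generating parameters closely despite roughly a quarter of the sample being censored, illustrating why the MLE remains well behaved under Type-I censoring.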
Abstract:
My dissertation has three chapters which develop and apply microeconometric techniques to empirically relevant problems. All chapters examine robustness issues (e.g., measurement error and model misspecification) in econometric analysis. The first chapter studies the identifying power of an instrumental variable in the nonparametric heterogeneous treatment effect framework when a binary treatment variable is mismeasured and endogenous. I characterize the sharp identified set for the local average treatment effect under the following two assumptions: (1) the exclusion restriction of an instrument and (2) deterministic monotonicity of the true treatment variable in the instrument. The identification strategy allows for general measurement error. Notably, (i) the measurement error is nonclassical, (ii) it can be endogenous, and (iii) no assumptions are imposed on the marginal distribution of the measurement error, so that I do not need to assume the accuracy of the measurement. Based on the partial identification result, I provide a consistent confidence interval for the local average treatment effect with uniformly valid size control. I also show that the identification strategy can incorporate repeated measurements to narrow the identified set, even if the repeated measurements themselves are endogenous. Using the National Longitudinal Study of the High School Class of 1972, I demonstrate that my new methodology can produce nontrivial bounds for the return to college attendance when attendance is mismeasured and endogenous.
The second chapter, which is a part of a coauthored project with Federico Bugni, considers the problem of inference in dynamic discrete choice problems when the structural model is locally misspecified. We consider two popular classes of estimators for dynamic discrete choice models: K-step maximum likelihood estimators (K-ML) and K-step minimum distance estimators (K-MD), where K denotes the number of policy iterations employed in the estimation problem. These estimator classes include popular estimators such as Rust (1987)'s nested fixed point estimator, Hotz and Miller (1993)'s conditional choice probability estimator, Aguirregabiria and Mira (2002)'s nested algorithm estimator, and Pesendorfer and Schmidt-Dengler (2008)'s least squares estimator. We derive and compare the asymptotic distributions of K-ML and K-MD estimators when the model is arbitrarily locally misspecified, and we obtain three main results. In the absence of misspecification, Aguirregabiria and Mira (2002) show that all K-ML estimators are asymptotically equivalent regardless of the choice of K. Our first result shows that this finding extends to a locally misspecified model, regardless of the degree of local misspecification. As a second result, we show that an analogous result holds for all K-MD estimators, i.e., all K-MD estimators are asymptotically equivalent regardless of the choice of K. Our third and final result is to compare K-MD and K-ML estimators in terms of asymptotic mean squared error. Under local misspecification, the optimally weighted K-MD estimator depends on the unknown asymptotic bias and is no longer feasible. In turn, feasible K-MD estimators could have an asymptotic mean squared error that is higher or lower than that of the K-ML estimators. To demonstrate the relevance of our asymptotic analysis, we illustrate our findings in a simulation exercise based on a misspecified version of Rust (1987)'s bus engine problem.
The last chapter investigates the causal effect of the Omnibus Budget Reconciliation Act of 1993, which caused the biggest change to the EITC in its history, on unemployment and labor force participation among single mothers. Unemployment and labor force participation are difficult to define for a few reasons, for example, because of marginally attached workers. Instead of searching for the unique definition for each of these two concepts, this chapter bounds unemployment and labor force participation by observable variables and, as a result, considers various competing definitions of these two concepts simultaneously. This bounding strategy leads to partial identification of the treatment effect. The inference results depend on the construction of the bounds, but they imply a positive effect on labor force participation and a negligible effect on unemployment. The results imply that the difference-in-difference result based on the BLS definition of unemployment can be misleading due to misclassification of unemployment.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
ABSTRACT. – Phylogenies and molecular clocks of the diatoms have largely been inferred from SSU rDNA sequences. A new phylogeny of diatoms was estimated using four gene markers, SSU and LSU rDNA, rbcL, and psbA (total 4352 bp), with 42 diatom species. The four gene trees analysed with maximum likelihood (ML) and Bayesian (BI) analyses recovered a monophyletic origin of the new diatom classes with high bootstrap support, which has been controversial with single gene markers using single outgroups and alignments that do not take the secondary structure of the SSU gene into account. The divergence times of the classes were calculated from a ML tree in the Multidivtime program using a Bayesian estimation allowing for simultaneous constraints from the fossil record and varying rates of molecular evolution on different branches of the phylogenetic tree. These divergence times are generally in agreement with those proposed by other clocks using single genes, with the exception that the pennates appear much earlier, suggesting a longer Cretaceous fossil record that has yet to be sampled. Ghost lineages (i.e. the discrepancy between first appearance (FA) and the molecular clock age of origin of an extant taxon) were revealed in the pennate lineage, whereas the ghost lineages in the centric lineages previously reported by others are reviewed and referred to earlier literature.
Abstract:
Purpose: Educational attainment has been shown to be positively associated with mental health and a potential buffer to stressful events. One stressful life event likely to affect everyone in their lifetime is bereavement. This paper assesses the effect of educational attainment on mental health post bereavement.
Methods: By utilising large administrative datasets, linking Census returns to death records and prescribed medication data, we analysed the bereavement exposure of 208,332 individuals aged 25-74 years. Two-level logistic regression models were constructed to determine the likelihood of antidepressant medication use (a proxy for mental ill-health) post bereavement, given level of educational attainment.
Results: Individuals who are bereaved have greater antidepressant use than those who are not, with over a quarter (26.5%) of those bereaved by suicide in receipt of antidepressant medication compared to just 12.4% of those not bereaved. Among individuals bereaved by a sudden death, those with a University Degree or higher qualifications are 73% less likely to be in receipt of antidepressant medication than those with no qualifications, after full adjustment for demographic, socio-economic and area factors (OR = 0.27, 95% CI 0.09-0.75). For those bereaved by suicide, higher educational attainment and no qualifications have an equivalent effect.
Conclusions: Education may protect against poor mental health, as measured by the use of antidepressant medication, post bereavement, except in those bereaved by suicide. This is likely due to the improved cognitive, personal and psychological skills gained from time spent in education.
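As a side note on reading the result above: an odds ratio of 0.27 corresponds to odds reduced by 73% relative to the reference group. A toy computation with hypothetical counts makes the arithmetic explicit:

```python
# Illustration (hypothetical counts, not the study's data) of how an
# odds ratio from a 2x2 table is read: OR = 0.27 means the odds in the
# exposed group are 27% of the reference group's, i.e. 73% lower.
def odds_ratio(a, b, c, d):
    """(a/b) / (c/d): odds of the outcome in group 1 vs. group 2."""
    return (a / b) / (c / d)

# e.g. 27 medicated vs. 100 not (degree holders) against
#      100 medicated vs. 100 not (no qualifications)
or_ = odds_ratio(27, 100, 100, 100)
reduction = 1.0 - or_   # fractional reduction in odds
```

This is purely a reading aid for the reported OR; the study's adjusted estimate comes from the multi-level logistic regression, not from a raw 2x2 table.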
Abstract:
Background: The NECaSP intervention aspires to increase sport and physical activity (PA) participation amongst young people in the UK. The aims of this paper are to report on a summative process evaluation of the NECaSP and make recommendations for future interventions. Methods: Seventeen schools provided data from students aged 11-13 (n=1,226), parents (n=192) and teachers (n=14) via direct observation and questionnaires. Means, standard deviations and percentages were calculated for socio-demographic data. Qualitative data were analysed via directed content analysis and main themes identified. Results: Findings indicate that further administrative, educational and financial support will help facilitate the success of the programme in improving PA outcomes for young people, and of other similar intervention programmes globally. The data highlighted the need to engage parents to increase the likelihood of intervention success. Conclusions: One main strength of this study is the mixed-methods nature of the process evaluation. It is recommended that future school-based interventions that bridge sports clubs and formal curriculum provision consider a broader approach to the delivery of programmes throughout the academic year, school week and school day. Finally, changes in the school curriculum can be successful once all parties are involved (community, school, families).