1000 results for Model identifiability


Relevance: 70.00%

Abstract:

Item response theory (IRT) comprises a set of statistical models that are useful in many fields, especially when there is interest in studying latent variables. These latent variables, usually called latent traits, are modeled directly in item response models (IRMs). A common assumption in the estimation of IRM parameters for a single group of examinees is that the latent traits follow a standard normal distribution. However, many studies suggest that this assumption often does not hold, and when it is violated the parameter estimates tend to be biased and inference can be misleading. It is therefore important to model the distribution of the latent traits properly. In this paper we present an alternative model for the latent traits based on the skew-normal distribution; see Genton (2004). We use the centred parameterization proposed by Azzalini (1985), which ensures model identifiability, as pointed out by Azevedo et al. (2009b). A Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm, based on an augmented data approach, is developed for parameter estimation. A simulation study assesses parameter recovery under the proposed model and estimation method, as well as the effect of the asymmetry level of the latent trait distribution on parameter estimation. We also compare our approach with estimation methods that assume a symmetric normal distribution for the latent traits. The results indicate that the proposed algorithm properly recovers all parameters; in particular, the greater the asymmetry level, the better its performance relative to the alternatives, mainly for small sample sizes (numbers of examinees). Finally, we analyze a real data set that shows signs of asymmetry in the latent trait distribution, and our approach confirms the presence of strong negative asymmetry. (C) 2010 Elsevier B.V. All rights reserved.
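
A minimal sketch of the latent-trait distribution described above: it converts centred skew-normal parameters (mean, variance, skewness) to the direct parameterization and simulates latent traits together with simple 2PL item responses. The item parameters, the logistic response function, and all numeric values are illustrative assumptions; the paper's MHWGS estimation algorithm is not reproduced here.

```python
import numpy as np

def centred_to_direct(mu, sigma2, gamma1):
    """Map centred skew-normal parameters (mean, variance, skewness)
    to the direct parameterization (location xi, scale omega, shape alpha)."""
    b = np.sqrt(2.0 / np.pi)
    # The skewness gamma1 must lie in roughly (-0.9953, 0.9953) for the skew-normal.
    s = np.sign(gamma1) * (2.0 * abs(gamma1) / (4.0 - np.pi)) ** (1.0 / 3.0)
    delta = s / (b * np.sqrt(1.0 + s ** 2))
    alpha = delta / np.sqrt(1.0 - delta ** 2)
    omega = np.sqrt(sigma2 / (1.0 - b ** 2 * delta ** 2))
    xi = mu - omega * b * delta
    return xi, omega, alpha

def rvs_skew_normal(xi, omega, alpha, size, rng):
    """Draw skew-normal variates via Azzalini's stochastic representation."""
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u0 = rng.standard_normal(size)
    u1 = rng.standard_normal(size)
    z = delta * np.abs(u0) + np.sqrt(1.0 - delta ** 2) * u1
    return xi + omega * z

rng = np.random.default_rng(1)
# Latent traits with mean 0, variance 1 and strong negative skewness (illustrative).
xi, omega, alpha = centred_to_direct(mu=0.0, sigma2=1.0, gamma1=-0.8)
theta = rvs_skew_normal(xi, omega, alpha, size=2000, rng=rng)

# Illustrative 2PL items: discriminations a_j and difficulties b_j.
a = np.array([1.0, 1.5, 0.8])
b_item = np.array([-0.5, 0.0, 0.7])
p = 1.0 / (1.0 + np.exp(-(a * theta[:, None] - b_item)))   # logistic item response function
y = rng.binomial(1, p)                                      # simulated 0/1 responses
print(theta.mean(), theta.std(), y.mean(axis=0))
```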

Relevance: 70.00%

Abstract:

In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that lets the number of regimes grow to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
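
A minimal simulation of a two-regime, logistic smooth-transition ACD recursion in the spirit of the specification above, with unit-exponential innovations. The regime parameters, the lagged duration as transition variable, and the logistic transition function are illustrative assumptions, not the paper's exact FC-ACD model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative regimes (omega, alpha, beta) of an ACD(1,1)-type recursion.
reg1 = (0.05, 0.10, 0.85)
reg2 = (0.30, 0.25, 0.60)
gamma, c = 5.0, 1.0          # slope and location of the logistic transition function

n = 5000
x = np.empty(n)              # durations
psi = np.empty(n)            # conditional expected durations
psi[0] = 1.0
x[0] = psi[0] * rng.exponential(1.0)

for i in range(1, n):
    # Logistic transition driven by the lagged duration.
    g = 1.0 / (1.0 + np.exp(-gamma * (x[i - 1] - c)))
    w = (1 - g) * reg1[0] + g * reg2[0]
    a = (1 - g) * reg1[1] + g * reg2[1]
    b = (1 - g) * reg1[2] + g * reg2[2]
    psi[i] = w + a * x[i - 1] + b * psi[i - 1]
    x[i] = psi[i] * rng.exponential(1.0)   # x_i = psi_i * eps_i, eps_i ~ Exp(1)

print("mean duration:", x.mean(), "overdispersion (std/mean):", x.std() / x.mean())
```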

Relevance: 60.00%

Abstract:

Systems biology is an emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although the presented methods are applied to particular case studies, they are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.

Relevance: 40.00%

Abstract:

The paper considers the structural identifiability of a parent–metabolite pharmacokinetic model for ivabradine and one of its metabolites. The model, which is linear, is considered first for intravenous administration of ivabradine and then for combined intravenous and oral administration. In both cases, the model is shown to be unidentifiable. Simplifying the model (for both forms of administration) to that proposed by Duffull et al. (1) results in a globally structurally identifiable model. The analysis could be applied to the modeling of any drug undergoing first-pass metabolism for which plasma concentrations are available for both drug and metabolite.
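
A sketch of a generic linear parent–metabolite model after an intravenous bolus, the kind of structure analysed above. The rate constants, volumes and dose are hypothetical, and this is not the ivabradine model from the paper; a comment notes one typical source of structural unidentifiability in such models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic linear parent-metabolite model after an IV bolus of the parent drug.
# A_p: amount of parent, A_m: amount of metabolite (illustrative parameter values).
k_pm = 0.3              # 1/h, parent -> metabolite conversion
k_pe = 0.1              # 1/h, other routes of parent elimination
k_me = 0.2              # 1/h, metabolite elimination
V_p, V_m = 50.0, 40.0   # L, apparent volumes of distribution
dose = 100.0            # mg, IV bolus

def rhs(t, y):
    A_p, A_m = y
    dA_p = -(k_pm + k_pe) * A_p
    dA_m = k_pm * A_p - k_me * A_m
    return [dA_p, dA_m]

t_eval = np.linspace(0.0, 24.0, 97)
sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], t_eval=t_eval)

C_p = sol.y[0] / V_p    # parent plasma concentration
C_m = sol.y[1] / V_m    # metabolite plasma concentration
# With only C_p and C_m observed, k_pm and V_m enter the metabolite curve
# only through the ratio k_pm / V_m, which is one typical source of structural
# unidentifiability in parent-metabolite models of this kind.
print(C_p[:3], C_m[:3])
```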

Relevance: 40.00%

Abstract:

Mathematical models are increasingly used in environmental science, which increases the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data-splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, which is part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. Applying the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
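
A minimal sketch of the kind of local sensitivity ranking and collinearity check commonly used in identifiability analyses of this sort. The toy model, driver variable and parameter names below are illustrative stand-ins, not the OSPM.

```python
import numpy as np

# Toy model standing in for a street-pollution-type model: predicted concentration
# as a function of three parameters at several "observation" conditions.
def model(theta, u):
    a, b, c = theta
    return a * u + b * np.sqrt(u) + c            # illustrative functional form

u = np.linspace(0.5, 5.0, 40)                    # driver (e.g. a wind-speed surrogate)
theta0 = np.array([1.0, 0.5, 0.2])               # nominal parameter values
y0 = model(theta0, u)

# Finite-difference, scaled sensitivity matrix: response to a relative parameter change.
eps = 1e-4
S = np.empty((u.size, theta0.size))
for j in range(theta0.size):
    tp = theta0.copy()
    tp[j] *= (1.0 + eps)
    S[:, j] = (model(tp, u) - y0) / eps

# Parameter importance: root-mean-square scaled sensitivity of each parameter.
importance = np.sqrt((S ** 2).mean(axis=0))

# Collinearity index of the parameter set: large values signal poor joint identifiability.
Sn = S / np.linalg.norm(S, axis=0)
collinearity = 1.0 / np.sqrt(np.linalg.eigvalsh(Sn.T @ Sn).min())

print("importance:", importance, "collinearity index:", collinearity)
```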

Relevance: 30.00%

Abstract:

This work deals with a procedure for model re-identification of a process in closed loop with an already existing commercial MPC. The controller considered here has a two-layer structure in which the upper layer performs a target calculation based on a simplified steady-state optimization of the process. A methodology is proposed in which a test signal is introduced into a tuning parameter of the target calculation layer. When the outputs are controlled by zones instead of at fixed set points, the approach allows continuous operation of the process without excessive disruption of the operating objectives, since process constraints and product specifications remain satisfied during the identification test. The application of the method is illustrated through the simulation of two processes from the oil refining industry. (c) 2008 Elsevier Ltd. All rights reserved.
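
A small sketch of the general idea of exciting a plant during closed-loop operation without leaving an output zone: a low-amplitude pseudo-random binary signal is added to a nominal target and clipped to the zone limits. The signal design, amplitude and zone values are illustrative assumptions, not the paper's procedure of perturbing a tuning parameter of the target calculation layer.

```python
import numpy as np

rng = np.random.default_rng(7)

# Output zone for a controlled-by-zone variable and the nominal target from the upper layer.
zone_lo, zone_hi = 40.0, 60.0
nominal_target = 50.0

# Pseudo-random binary excitation of small amplitude, held for several samples.
amplitude = 2.0
hold = 5                  # samples per PRBS step
n_steps = 60
levels = rng.choice([-1.0, 1.0], size=n_steps)
prbs = np.repeat(levels, hold) * amplitude

# Perturbed target, clipped so it always stays inside the operating zone.
target = np.clip(nominal_target + prbs, zone_lo, zone_hi)
print(target[:15])
```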

Relevance: 30.00%

Abstract:

Genetic evaluation using animal models or pedigree-based models generally assumes only autosomal inheritance. Bayesian animal models provide a flexible framework for genetic evaluation, and we show how the model can readily accommodate situations where the trait of interest is influenced by both autosomal and sex-linked inheritance. This allows simultaneous calculation of autosomal and sex-chromosomal additive genetic effects. Inferences were performed using integrated nested Laplace approximations (INLA), a non-sampling-based Bayesian inference methodology. We provide a detailed description of how to calculate the inverse of the X- or Z-chromosomal additive genetic relationship matrix, which is needed for inference. The case study of eumelanic spot diameter in a Swiss barn owl (Tyto alba) population shows that this trait is substantially influenced by variation in genes on the Z chromosome (σ_z² = 0.2719 and σ_a² = 0.4405). Further, a simulation study for this study system shows that the animal model accounting for both autosomal and sex-chromosome-linked inheritance is identifiable, that is, the two effects can be distinguished, and provides accurate inference on the variance components.
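
For context, a minimal sketch of the tabular method for building the ordinary autosomal numerator relationship matrix of a tiny hypothetical pedigree and inverting it numerically. The sex-chromosomal (X/Z) relationship matrix discussed above follows different, sex-dependent rules that are not reproduced here.

```python
import numpy as np

# Tiny illustrative pedigree: individual index -> (sire, dam), None if unknown.
# Individuals must be ordered so that parents precede their offspring.
pedigree = {0: (None, None), 1: (None, None), 2: (0, 1), 3: (0, 2)}
n = len(pedigree)

# Tabular method for the (autosomal) numerator relationship matrix A.
A = np.zeros((n, n))
for i in range(n):
    s, d = pedigree[i]
    # Diagonal: 1 + half the relationship between the parents (inbreeding).
    A[i, i] = 1.0 + (0.5 * A[s, d] if s is not None and d is not None else 0.0)
    # Off-diagonals: average relationship of j with the parents of i.
    for j in range(i):
        a_js = A[j, s] if s is not None else 0.0
        a_jd = A[j, d] if d is not None else 0.0
        A[i, j] = A[j, i] = 0.5 * (a_js + a_jd)

A_inv = np.linalg.inv(A)   # the inverse enters the mixed-model precision structure
print(np.round(A, 3))
print(np.round(A_inv, 3))
```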

Relevance: 30.00%

Abstract:

Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model Quality Simulation Along River Systems (QUASAR) is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined. (C) 2003 Elsevier Science B.V. All rights reserved.
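
A minimal sketch of the Monte Carlo (regional sensitivity analysis) style of identifiability check described above: parameter sets are sampled, runs are split into behavioural and non-behavioural by their fit to observations, and a Kolmogorov–Smirnov statistic compares the parameter distributions of the two groups; a near-zero statistic flags an insensitive, effectively non-identifiable parameter. The toy model, threshold and parameter names are illustrative, not QUASAR.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Toy two-parameter "water quality" model: only k1 actually drives the output,
# mimicking a parameter (k2) that is insensitive and hence non-identifiable.
def model(k1, k2, t):
    return 10.0 * np.exp(-k1 * t)          # k2 intentionally unused

t = np.linspace(0.0, 10.0, 25)
obs = 10.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.3, t.size)   # synthetic observations

n = 2000
k1 = rng.uniform(0.05, 1.0, n)
k2 = rng.uniform(0.05, 1.0, n)
rmse = np.array([np.sqrt(np.mean((model(k1[i], k2[i], t) - obs) ** 2)) for i in range(n)])

# Behavioural runs: here simply the best 20% of runs by RMSE.
behav = rmse <= np.quantile(rmse, 0.2)

for name, k in (("k1", k1), ("k2", k2)):
    res = ks_2samp(k[behav], k[~behav])
    # Large KS -> sensitive/identifiable; near zero -> insensitive/non-identifiable.
    print(f"{name}: KS = {res.statistic:.2f}")
```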

Relevance: 30.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy levels. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, in which it is computationally desirable to decompose a complex model into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
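
A minimal illustration of scoring candidate fuzzy rules by an A-optimality-type criterion of their weighted regression matrices, as motivated above: each rule's regression matrix is weighted by its firing strengths and trace((PᵀP)⁻¹) is used as the score, smaller meaning better-determined consequent parameters. The Gaussian membership functions, the regressor choice and the small ridge term are illustrative assumptions; the paper's extended Gram-Schmidt decomposition is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data for a single-input system and a simple regressor matrix [1, x].
x = rng.uniform(-3.0, 3.0, 200)
X = np.column_stack([np.ones_like(x), x])

# Candidate T-S rules defined by Gaussian membership functions on x.
centres = [-2.0, 0.0, 2.0]
width = 1.0

def membership(x, c, w):
    return np.exp(-0.5 * ((x - c) / w) ** 2)

for k, c in enumerate(centres):
    mu = membership(x, c, width)              # firing strengths over the training set
    P = mu[:, None] * X                       # weighted regression matrix of rule k
    # A-optimality-type score: trace of (P'P)^-1; smaller means the rule's
    # consequent parameters are better determined by the data (more identifiable).
    M = P.T @ P + 1e-8 * np.eye(X.shape[1])   # small ridge guards against singularity
    score = np.trace(np.linalg.inv(M))
    print(f"rule {k} (centre {c:+.1f}): A-optimality score = {score:.4f}")
```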

Relevance: 30.00%

Abstract:

It is known that patients may cease participating in a longitudinal study and become lost to follow-up. The objective of this article is to present a Bayesian model for estimating malaria transition probabilities that accounts for individuals lost to follow-up. We consider a homogeneous population, and it is assumed that the period of time considered is small enough to exclude two or more transitions from one state of health to another. The proposed model is based on a Gibbs sampling algorithm that uses information on individuals lost to follow-up at the end of the longitudinal study. To handle the unknown numbers of individuals lost to follow-up with positive and negative malaria states at the end of the study, two latent variables were introduced into the model. We used a real data set and a simulated data set to illustrate the application of the methodology. The proposed model showed a good fit to these data sets, and the algorithm showed no problems of convergence or lack of identifiability. We conclude that the proposed model is a good alternative for estimating transition probabilities from one state of health to the other in studies with low adherence to follow-up.
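
A minimal data-augmentation Gibbs sketch in the spirit of the approach above, for a single transition probability: the unknown end-of-study states of subjects lost to follow-up are imputed as a latent count, and the transition probability is then updated from a conjugate Beta posterior. The counts, the Beta(1, 1) prior and the missingness assumption are illustrative, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative counts for subjects starting malaria-negative:
# n01 turned positive, n00 stayed negative, m0 were lost to follow-up.
n01, n00, m0 = 30, 150, 40

draws = []
p01 = 0.5                                  # initial value for P(negative -> positive)
for it in range(5000):
    # Impute the unknown end-of-study states of the lost-to-follow-up subjects.
    z0 = rng.binomial(m0, p01)             # latent number of them ending positive
    # Conjugate Beta(1, 1) prior update given observed plus imputed counts.
    p01 = rng.beta(1 + n01 + z0, 1 + n00 + (m0 - z0))
    if it >= 1000:                         # discard burn-in
        draws.append(p01)

draws = np.array(draws)
print("posterior mean of P(neg -> pos):", draws.mean(),
      "95% interval:", np.percentile(draws, [2.5, 97.5]))
```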

Relevance: 30.00%

Abstract:

In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from the inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and ²²Na⁺ diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of the parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows the identification of the right hypothesis about the borehole disturbed zone and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone. A wrong assumption leads to large estimation errors. The use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors. Therefore, attention should be paid to minimize the errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers. (C) 2012 Elsevier B.V. All rights reserved.
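
A small synthetic-experiment sketch related to the identifiability point above: for a semi-infinite diffusion profile with a constant boundary concentration, the effective diffusion coefficient and the rock capacity factor enter the normalized profile only through their ratio, so their sensitivity vectors are collinear and the profile alone cannot separate them. The geometry, times and parameter values are illustrative, not the DR experiment configuration.

```python
import numpy as np
from scipy.special import erfc

# Synthetic "overcoring" profile for a semi-infinite medium with a constant
# boundary concentration: the relative porewater concentration depends on the
# effective diffusion coefficient De and the rock capacity factor alpha only
# through the apparent diffusivity De / alpha.
t = 300.0 * 86400.0                                 # 300 days, in seconds
x = np.linspace(0.0, 0.08, 25)                      # m, depth into the rock

def profile(De, alpha):
    return erfc(x / (2.0 * np.sqrt((De / alpha) * t)))

De0, alpha0 = 1e-10, 0.15                           # illustrative nominal values

# Finite-difference sensitivities of the profile to relative changes in De and alpha.
eps = 1e-4
dDe = (profile(De0 * (1 + eps), alpha0) - profile(De0, alpha0)) / eps
dal = (profile(De0, alpha0 * (1 + eps)) - profile(De0, alpha0)) / eps

# Cosine between the two sensitivity vectors: |cos| ~ 1 means the two parameters
# act on this data set in the same way and cannot be estimated separately from it.
cos = dDe @ dal / (np.linalg.norm(dDe) * np.linalg.norm(dal))
print("cos(sensitivity angle) =", cos)              # ~ -1 for profile-only data
# Borehole dilution data respond to a different combination of De and alpha,
# which is why combining dilution and overcoring data restores identifiability.
```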

Relevance: 30.00%

Abstract:

The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and on the estimated hazard ratio provided by the Cox model. The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. However, it is possible that informative censoring occurs. Examination of the literature indicates how others have approached the problem and covers the relevant theoretical background. Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study. The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor that depends on the amount of censoring in the two populations and on the strength of the dependency between death and censoring. The Cox model estimates this biased hazard ratio. In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative; the hazard ratio tends to a specific limit. Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.
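
A small simulation in the spirit of the discussion above, illustrating how dependence between event and censoring times can distort a proportional-hazards estimate. It uses a Gaussian copula with exponential margins and a crude rate ratio as a simple stand-in for the Cox estimate; the copula, rates and correlation values are illustrative assumptions, not the Gumbel Type A model analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

def crude_hazard_ratio(rho, n=200000, true_hr=2.0, cens_rate=1.5):
    """Exponential survival in two groups (hazards 1 and true_hr); censoring times
    are tied to survival through a Gaussian copula (rho = 0 -> independent)."""
    group = rng.integers(0, 2, n)                          # 0 = reference, 1 = exposed
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)
    u_t, u_c = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
    haz = np.where(group == 1, true_hr, 1.0)
    t_event = -np.log(1.0 - u_t) / haz                     # exponential event times
    t_cens = -np.log(1.0 - u_c) / cens_rate                # exponential censoring times
    time = np.minimum(t_event, t_cens)
    event = t_event <= t_cens
    # Crude rate (events per unit follow-up) in each group; with independent
    # censoring and exponential margins, the rate ratio is a consistent estimate
    # of the hazard ratio that a proportional-hazards (Cox) analysis targets.
    rate = [event[group == g].sum() / time[group == g].sum() for g in (0, 1)]
    return rate[1] / rate[0]

print("true HR = 2.0")
print("estimated HR, independent censoring:", crude_hazard_ratio(rho=0.0))
print("estimated HR, informative censoring:", crude_hazard_ratio(rho=0.8))
```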

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 20.00%

Abstract:

Understanding the molecular mechanisms of oral carcinogenesis will yield important advances in the diagnostics, prognostics, effective treatment, and outcomes of oral cancer. Hence, in this study we investigated proteomic and peptidomic profiles by combining an orthotopic murine model of oral squamous cell carcinoma (OSCC), mass spectrometry-based proteomics and biological network analysis. Our results indicated the up-regulation of proteins involved in actin cytoskeleton organization and cell-cell junction assembly events, and their expression was validated in human OSCC tissues. In addition, the functional relevance of talin-1 in OSCC adhesion, migration and invasion was demonstrated. Taken together, this study identified specific processes deregulated in oral cancer and provided novel refined OSCC-targeting molecules.

Relevance: 20.00%

Abstract:

Two single-crystalline surfaces of Au vicinal to the (111) plane were modified with Pt and studied using scanning tunneling microscopy (STM) and X-ray photoemission spectroscopy (XPS) in an ultra-high vacuum environment. The vicinal surfaces studied are Au(332) and Au(887), and different Pt coverages (θPt) were deposited on each surface. From STM images we determine that Pt deposits on both surfaces as nanoislands with heights ranging from 1 ML to 3 ML depending on θPt. On both surfaces the early growth of Pt ad-islands occurs at the lower part of the step edge, with Pt ad-atoms being incorporated into the steps in some cases. XPS results indicate that partial alloying of Pt occurs at the interface at room temperature and at all coverages, as suggested by the negative chemical shift of the Pt 4f core line, indicating an upward shift of the d-band center of the alloyed Pt. The existence of a segregated Pt phase, especially at higher coverages, is also detected by XPS. Sample annealing indicates that the temperature rise promotes further incorporation of Pt atoms into the Au substrate, as supported by the STM and XPS results. Additionally, the catalytic activity of different PtAu systems reported in the literature for some electrochemical reactions is discussed in light of our findings.