91 results for Parameter Inference
Abstract:
Demand for automated gas metal arc welding (GMAW) is growing, and with it the need for intelligent systems that ensure the accuracy of the procedure. To date, weld pool geometry has been the factor most commonly used in quality assessment of intelligent welding systems, but it has recently been found that the Mahalanobis Distance (MD) can not only be used for this purpose but is also more efficient. In the present paper, an Artificial Neural Network (ANN) is used to predict the MD parameter, and the advantages and disadvantages of other methods are discussed. The Levenberg–Marquardt algorithm was found to be the most effective training algorithm for the GMAW process. Since the number of neurons plays an important role in optimal network design, a trial-and-error search was performed, which identified 30 as the optimal number of neurons. The model was also investigated with different numbers of layers in a Multilayer Perceptron (MLP) architecture, and for the aim of this work the optimal result was obtained with a single-layer MLP. Robustness of the system was evaluated by adding noise to the input data and studying the effect of the noise on the predictive capability of the network. The experiments for this study were conducted on an automated GMAW setup integrated with a data acquisition system and prepared in a laboratory for welding 12 mm thick steel plate. The accuracy of the network was evaluated by the Root Mean Squared (RMS) error between the measured and estimated values; the low error value (about 0.008) reflects the good accuracy of the model. The comparison of the ANN predictions with the test data set also showed very good agreement, revealing the predictive power of the model. The ANN model offered here for the GMA welding process can therefore be used effectively for prediction purposes.
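A minimal sketch of the kind of single-hidden-layer MLP regression described in this abstract, written against synthetic stand-in data (the welding measurements, the MD targets, and the feature names below are illustrative assumptions, not the paper's data); scikit-learn's quasi-Newton "lbfgs" solver is used in place of Levenberg–Marquardt, which scikit-learn does not provide:

```python
# Illustrative sketch only: one hidden layer of 30 neurons, evaluated by RMS error.
# Synthetic data stands in for the GMAW process inputs and Mahalanobis Distance targets.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))   # hypothetical inputs, e.g. current, voltage, travel speed, wire feed
y = X @ np.array([0.3, -0.2, 0.5, 0.1]) + 0.05 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# One hidden layer of 30 neurons; 'lbfgs' is a quasi-Newton stand-in for Levenberg-Marquardt.
mlp = MLPRegressor(hidden_layer_sizes=(30,), solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

rms = np.sqrt(mean_squared_error(y_test, mlp.predict(X_test)))
print(f"RMS error on held-out data: {rms:.4f}")
```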
Abstract:
This chapter explores the possibility and exigencies of employing hypotheses, or educated guesses, as the basis for ethnographic research design. The authors’ goal is to examine whether using hypotheses might provide a path to resolving some of the challenges to knowledge claims produced by ethnographic studies. Through resolution of the putative division between qualitative and quantitative research traditions, it is argued that hypotheses can serve as inferential warrants in qualitative and ethnographic studies.
Abstract:
In this paper, we propose a novel online hidden Markov model (HMM) parameter estimator based on the new information-theoretic concept of one-step Kerridge inaccuracy (OKI). Under several regularity conditions, we establish a convergence result (and some limited strong consistency results) for our proposed online OKI-based parameter estimator. In simulation studies, we illustrate the global convergence behaviour of our proposed estimator and provide a counter-example illustrating the local convergence of other popular HMM parameter estimators.
Abstract:
The increasing amount of information that is annotated against standardised semantic resources offers opportunities to incorporate sophisticated levels of reasoning, or inference, into the retrieval process. In this position paper, we reflect on the need to incorporate semantic inference into retrieval (in particular for medical information retrieval), as well as on the attempts made so far, which have met with mixed success. Medical information retrieval is fertile ground for testing inference mechanisms to augment retrieval: the medical domain offers a plethora of carefully curated, structured semantic resources, along with well-established entity extraction and linking tools, and search topics that intuitively require a number of different inferential processes (e.g., conceptual similarity, conceptual implication, etc.). We argue that integrating semantic inference into information retrieval has the potential to uncover a large amount of information that would otherwise be inaccessible; but inference is also risky and, if not used cautiously, can harm retrieval.
Abstract:
A recurring question for cognitive science is whether functional neuroimaging data can provide evidence for or against psychological theories. As posed, the question reflects an adherence to a popular scientific method known as 'strong inference'. The method entails constructing multiple hypotheses (Hs) and designing experiments so that alternative possible outcomes will refute at least one (i.e., 'falsify' it). In this article, after first delineating some well-documented limitations of strong inference, I provide examples of functional neuroimaging data being used to test Hs from rival modular information-processing models of spoken word production. 'Strong inference' for neuroimaging involves first establishing a systematic mapping of 'processes to processors' for a common modular architecture. Alternate Hs are then constructed from psychological theories that attribute the outcome of manipulating an experimental factor to two or more distinct processing stages within this architecture. Hs are then refutable by a finding of activity differentiated spatially and chronometrically by experimental condition. When employed in this manner, the data offered by functional neuroimaging may be more useful for adjudicating between accounts of processing loci than behavioural measures.
Abstract:
A phylogenetic hypothesis for the lepidopteran superfamily Noctuoidea was inferred based on the complete mitochondrial (mt) genomes of 12 species (six newly sequenced). The monophyly of each noctuoid family in the latest classification was well supported. Novel and robust relationships were recovered at the family level, in contrast to previous analyses using nuclear genes. Erebidae was recovered as sister to (Nolidae+(Euteliidae+Noctuidae)), while Notodontidae was sister to all these taxa (the putatively basalmost lineage Oenosandridae was not included). In order to improve phylogenetic resolution using mt genomes, various analytical approaches were tested: Bayesian inference (BI) vs. maximum likelihood (ML), excluding vs. including RNA genes (rRNA or tRNA), and Gblocks treatment. The evolutionary signal within mt genomes had low sensitivity to analytical changes. Inference methods had the most significant influence. Inclusion of tRNAs positively increased the congruence of topologies, while inclusion of rRNAs resulted in a range of phylogenetic relationships varying depending on other analytical factors. The two Gblocks parameter settings had opposite effects on nodal support between the two inference methods. The relaxed parameter (GBRA) resulted in higher support values in BI analyses, while the strict parameter (GBDH) resulted in higher support values in ML analyses.
Abstract:
In this paper we have used simulations to make a conjecture about the coverage of a t-dimensional subspace of a d-dimensional parameter space of size n when performing k trials of Latin Hypercube sampling. This takes the form P(k,n,d,t) = 1 - e^(-k/n^(t-1)). We suggest that this coverage formula is independent of d, and this allows us to make connections between building Populations of Models and Experimental Designs. We also show that Orthogonal sampling is superior to Latin Hypercube sampling in terms of allowing a more uniform coverage of the t-dimensional subspace at the sub-block size level. These ideas have particular relevance when attempting to perform uncertainty quantification and sensitivity analyses.
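A small simulation, offered as a hedged sketch rather than a reproduction of the paper's experiments, that checks the quoted coverage formula; it assumes each of the k trials is a Latin Hypercube design of n points, with every axis discretised into n bins, and compares the fraction of the n^t cells of a fixed t-dimensional projection containing at least one point against 1 - e^(-k/n^(t-1)):

```python
# Sketch: empirical coverage of a t-dimensional projection under k Latin Hypercube trials,
# compared with the conjectured formula P(k, n, d, t) = 1 - exp(-k / n**(t - 1)).
import numpy as np

def latin_hypercube(n, d, rng):
    """One LHS design: the bin index (0..n-1) of each of the n points in each of the d dimensions."""
    return np.column_stack([rng.permutation(n) for _ in range(d)])

def empirical_coverage(k, n, d, t, rng):
    covered = set()
    for _ in range(k):
        bins = latin_hypercube(n, d, rng)    # n points, d bin indices each
        for row in bins[:, :t]:              # project onto the first t dimensions
            covered.add(tuple(row))
    return len(covered) / n**t               # fraction of the n**t cells hit

rng = np.random.default_rng(1)
k, n, d, t = 200, 10, 6, 3
print("empirical :", empirical_coverage(k, n, d, t, rng))
print("formula   :", 1 - np.exp(-k / n**(t - 1)))
```

With these illustrative settings the formula gives about 0.86, and the empirical estimate should land close to it; setting t = 2 recovers the two-dimensional result 1 - exp(-φ) with φ = k/n discussed in a related abstract below.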
Abstract:
Stochastic (or random) processes are inherent to numerous fields of human endeavour including engineering, science, and business and finance. This thesis presents multiple novel methods for quickly detecting and estimating uncertainties in several important classes of stochastic processes. The significance of these novel methods is demonstrated by employing them to detect aircraft manoeuvres in video signals in the important application of autonomous mid-air collision avoidance.
Abstract:
In this paper we provide estimates for the coverage of parameter space when using Latin Hypercube Sampling, which forms the basis of building so-called populations of models. The estimates are obtained using combinatorial counting arguments to determine how many trials, k, are needed in order to obtain a specified parameter space coverage for a given value of the discretisation size n. In the case of two dimensions, we show that if the ratio φ of trials to discretisation size is greater than 1, then as n becomes moderately large the fractional coverage behaves as 1 - exp(-φ). We compare these estimates with simulation results obtained from an implementation of Latin Hypercube Sampling using MATLAB.
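A back-of-envelope argument, consistent with this abstract but not taken from the paper, for why the two-dimensional fractional coverage approaches 1 - exp(-φ): in a single Latin Hypercube design of n points, a fixed cell of the n x n grid is occupied with probability 1/n (the one point sharing its row must also land in its column), so after k independent designs,

```latex
P_{\text{cover}}(k, n)
  = 1 - \Bigl(1 - \tfrac{1}{n}\Bigr)^{k}
  = 1 - \Bigl(1 - \tfrac{1}{n}\Bigr)^{n\phi}
  \;\longrightarrow\; 1 - e^{-\phi}
  \quad \text{as } n \to \infty, \qquad \phi = k/n .
```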
Abstract:
This paper demonstrates the procedures for probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate the modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency decreased as the simulation proceeded to a later period but remained important for herbicides having either high solubility or a high 1st-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are primarily those involved with herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content), secondarily the parameters related to herbicide mass distribution between paddy water and soil (1st-order desorption and dissolution rates) and, lastly, those involving herbicide degradation. © Pesticide Science Society of Japan.
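A generic sketch of the Monte Carlo workflow this abstract describes, applied to a toy first-order dissipation model rather than PCPF-1 (the parameter names, ranges, and the toy model are illustrative assumptions, not values from the paper): parameters are sampled, the model is run for each draw, and Spearman rank correlations serve as a simple sensitivity measure.

```python
# Sketch: Monte Carlo uncertainty propagation and rank-correlation sensitivity analysis
# for a toy pesticide-dissipation model (a stand-in for PCPF-1; all values are illustrative).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 5000

# Hypothetical input distributions (not from the paper).
koc       = rng.lognormal(mean=np.log(100.0), sigma=0.4, size=n)  # sorption coefficient, L/kg
foc       = rng.uniform(0.01, 0.05, size=n)                       # organic carbon fraction
k_degrade = rng.uniform(0.02, 0.10, size=n)                       # 1st-order degradation, 1/day

def paddy_water_conc(koc, foc, k_degrade, dose=1.0, t=7.0):
    """Toy model: fraction of the applied dose remaining in paddy water after t days."""
    kd = koc * foc                   # soil-water partition coefficient
    frac_water = 1.0 / (1.0 + kd)    # mass fraction staying in the water phase
    return dose * frac_water * np.exp(-k_degrade * t)

conc = paddy_water_conc(koc, foc, k_degrade)

print("mean / sd of simulated concentration:", conc.mean(), conc.std())
for name, x in [("Koc", koc), ("foc", foc), ("k_degrade", k_degrade)]:
    rho, _ = spearmanr(x, conc)
    print(f"Spearman rho({name}, conc) = {rho:+.2f}")
```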
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
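A hedged sketch of the baseline step this abstract builds on: fitting the same marginal (logistic) model under several working correlation structures with statsmodels' GEE implementation, on simulated longitudinal data. The empirical-likelihood combination of the resulting estimating equations, which is the paper's actual contribution, is not implemented here; this only shows the ingredient fits.

```python
# Sketch: one marginal logistic model fitted by GEE under three working correlation structures.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_subjects, n_visits = 200, 4

subject = np.repeat(np.arange(n_subjects), n_visits)
age = np.tile(np.arange(n_visits), n_subjects)                      # within-subject visit index
u = np.repeat(rng.normal(scale=0.8, size=n_subjects), n_visits)     # shared subject effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.3 * age + u))))

data = pd.DataFrame({"y": y, "age": age, "subject": subject})
X = sm.add_constant(data[["age"]])

for name, cov in [("independence", sm.cov_struct.Independence()),
                  ("exchangeable", sm.cov_struct.Exchangeable()),
                  ("AR(1)",        sm.cov_struct.Autoregressive())]:
    model = sm.GEE(data["y"], X, groups=data["subject"], time=data["age"],
                   family=sm.families.Binomial(), cov_struct=cov)
    res = model.fit()
    print(f"{name:>13}: beta_age = {res.params['age']:.3f} (SE {res.bse['age']:.3f})")
```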
Abstract:
In analysis of longitudinal data, the variance matrix of the parameter estimates is usually estimated by the 'sandwich' method, in which the variance for each subject is estimated by its residual products. We propose smooth bootstrap methods by perturbing the estimating functions to obtain 'bootstrapped' realizations of the parameter estimates for statistical inference. Our extensive simulation studies indicate that the variance estimators by our proposed methods can not only correct the bias of the sandwich estimator but also improve the confidence interval coverage. We applied the proposed method to a data set from a clinical trial of antibiotics for leprosy.
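A toy illustration, in an independent-data linear-model setting rather than the longitudinal setting of the paper, of the general idea of perturbing estimating functions: each "bootstrap" replicate re-solves the weighted estimating equations with random exponential weights, and the spread of the resulting estimates is compared with the usual sandwich standard errors (every detail below is an assumption made for illustration, not the authors' procedure).

```python
# Sketch: perturbation bootstrap for a linear estimating equation
# sum_i w_i * x_i * (y_i - x_i' beta) = 0 with w_i ~ Exp(1), versus the sandwich variance.
import numpy as np

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = X @ beta_true + rng.standard_normal(n) * (1.0 + 0.5 * np.abs(X[:, 1]))  # heteroscedastic noise

def solve_weighted(X, y, w):
    XtW = X.T * w                                     # scale each observation's contribution
    return np.linalg.solve(XtW @ X, XtW @ y)

beta_hat = solve_weighted(X, y, np.ones(n))

# Sandwich variance: A^{-1} B A^{-1} with A = X'X and B = X' diag(residual^2) X.
resid = y - X @ beta_hat
A_inv = np.linalg.inv(X.T @ X)
B = (X.T * resid**2) @ X
se_sandwich = np.sqrt(np.diag(A_inv @ B @ A_inv))

# Perturbation bootstrap: exponential(1) weights on each observation's estimating function.
boot = np.array([solve_weighted(X, y, rng.exponential(size=n)) for _ in range(2000)])
se_boot = boot.std(axis=0)

print("sandwich SE :", se_sandwich)
print("bootstrap SE:", se_boot)
```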
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and dealing with it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within a high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation using the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
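An illustrative sketch of the Latin hypercube route to a population of models, using a toy two-parameter action-potential-duration surrogate instead of the Beeler-Reuter model (the surrogate, parameter ranges, and acceptance window below are assumptions made purely for illustration; the SMC-based calibration that is the paper's contribution is not implemented here):

```python
# Sketch: building a population of models (POM) by Latin hypercube sampling and
# accepting parameter sets whose output falls in an assumed 'experimental' range.
import numpy as np
from scipy.stats import qmc

def toy_apd(g_k, g_ca):
    """Toy 'action potential duration' as a function of two conductance scalings."""
    return 300.0 * (g_ca ** 0.6) / (g_k ** 0.8)

# Latin hypercube sample of candidate parameter sets over assumed ranges.
sampler = qmc.LatinHypercube(d=2, seed=1)
unit = sampler.random(n=2000)
params = qmc.scale(unit, l_bounds=[0.5, 0.5], u_bounds=[2.0, 2.0])   # (g_k, g_ca) scalings

apd = toy_apd(params[:, 0], params[:, 1])

accepted = params[(apd > 250.0) & (apd < 350.0)]
print(f"accepted {len(accepted)} of {len(params)} candidate models")
```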
Abstract:
In this paper, the trajectory tracking control of an autonomous underwater vehicle (AUV) in six degrees of freedom (6-DOF) is addressed. It is assumed that the system parameters are unknown and the vehicle is underactuated. An adaptive controller is proposed, based on Lyapunov's direct method and the back-stepping technique, which guarantees robustness against parameter uncertainties. The desired trajectory can be any sufficiently smooth bounded curve parameterized by time, even one consisting of straight-line segments. In contrast with the majority of research in this field, the possibility of actuator saturation is considered, and another adaptive controller is designed to overcome this problem, in which the control signals are bounded using saturation functions. The nonlinear adaptive control scheme yields asymptotic convergence of the vehicle to the reference trajectory in the presence of parametric uncertainties. The stability of the presented control laws is proved in the sense of Lyapunov theory and Barbalat's lemma. The efficiency of the controller using saturation functions is verified by comparing numerical simulations of both controllers.
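A minimal scalar example, not the 6-DOF AUV controller of the paper, showing the Lyapunov-based adaptive idea the abstract refers to: a plant x_dot = a*x + u with unknown a tracks a smooth bounded reference using a certainty-equivalence control law and the adaptation rule that makes the Lyapunov derivative negative semidefinite (the plant, gains, and reference are illustrative assumptions):

```python
# Sketch: scalar Lyapunov-based adaptive tracking (a toy stand-in for 6-DOF back-stepping).
# Plant:      x_dot = a*x + u, with a unknown to the controller.
# Control:    u = -a_hat*x + xd_dot - k*e,  where e = x - xd
# Adaptation: a_hat_dot = gamma * e * x
# With V = 0.5*e**2 + 0.5*(a - a_hat)**2 / gamma, one gets V_dot = -k*e**2 <= 0.
import numpy as np

a_true, k, gamma = 1.5, 2.0, 5.0        # unknown plant parameter and illustrative gains
dt, T = 1e-3, 10.0

x, a_hat = 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot = np.sin(t), np.cos(t)   # smooth bounded reference trajectory
    e = x - xd
    u = -a_hat * x + xd_dot - k * e     # certainty-equivalence control law
    a_hat += gamma * e * x * dt         # adaptation law
    x += (a_true * x + u) * dt          # Euler step of the plant

print(f"final tracking error |e| = {abs(x - np.sin(T)):.4f}")
print(f"a_hat = {a_hat:.3f} (true a = {a_true})")
```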
Inference of the genetic architecture underlying BMI and height with the use of 20,240 sibling pairs
Abstract:
Evidence that complex traits are highly polygenic has been presented by population-based genome-wide association studies (GWASs) through the identification of many significant variants, as well as by family-based de novo sequencing studies indicating that several traits have a large mutational target size. Here, using a third study design, we show results consistent with extreme polygenicity for body mass index (BMI) and height. On a sample of 20,240 siblings (from 9,570 nuclear families), we used a within-family method to obtain narrow-sense heritability estimates of 0.42 (SE = 0.17, p = 0.01) and 0.69 (SE = 0.14, p = 6 x 10^-7) for BMI and height, respectively, after adjusting for covariates. The genomic inflation factors from locus-specific linkage analysis were 1.69 (SE = 0.21, p = 0.04) for BMI and 2.18 (SE = 0.21, p = 2 x 10^-10) for height. This inflation is free of confounding and congruent with polygenicity, consistent with observations of ever-increasing genomic inflation factors from GWASs with large sample sizes, implying that those signals are due to true genetic signals across the genome rather than population stratification. We also demonstrate that the distribution of the observed test statistics is consistent with both rare and common variants underlying a polygenic architecture and that previous reports of linkage signals in complex traits are probably a consequence of polygenic architecture rather than the segregation of variants with large effects. The convergent empirical evidence from GWASs, de novo studies, and within-family segregation implies that family-based sequencing studies for complex traits require very large sample sizes because the effects of causal variants are small on average.