917 results for INFERENCE


Relevance:

10.00%

Publisher:

Abstract:

This paper presents an in-depth study of the effect that the composition and properties of recycled coarse aggregates from previous concrete structures, together with the water/cement ratio (w/c) and the coarse-aggregate replacement ratio, have on compressive strength, its evolution over time, and its variability. A rigorous statistical-inference approach based on multiple linear regression identified the key factors, and a predictive equation for compressive strength when recycled coarse aggregates are used is given. The w/c and replacement ratio are the principal factors affecting concrete compressive strength, and their effect is significantly modified by the properties and composition of the recycled aggregates used. Particular attention has been paid to the complex effect that old concrete and adhered mortar have on concrete compressive strength and its mid-term evolution. It has been confirmed that the presence of contaminants tends to increase the variability of compressive strength values.
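
To make the statistical approach concrete, here is a minimal sketch in Python of fitting a multiple linear regression of compressive strength on the water/cement ratio and the replacement ratio, with an interaction term. The data values and column names (strength, w_c, replacement) are hypothetical illustrations, not the paper's dataset or fitted equation.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; column names and values are illustrative, not the paper's dataset.
df = pd.DataFrame({
    "strength":    [52.1, 47.3, 44.8, 39.5, 36.2, 33.9],   # compressive strength, MPa
    "w_c":         [0.45, 0.50, 0.55, 0.45, 0.50, 0.55],   # water/cement ratio
    "replacement": [0.0,  0.0,  0.0,  0.5,  0.5,  0.5],    # coarse-aggregate replacement ratio
})

# Multiple linear regression with an interaction term between w/c and replacement ratio.
model = smf.ols("strength ~ w_c + replacement + w_c:replacement", data=df).fit()
print(model.params)    # estimated coefficients of the predictive equation
print(model.pvalues)   # significance of each factor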

Relevance:

10.00%

Publisher:

Abstract:

Motivation: To date, Gene Set Analysis (GSA) approaches primarily focus on identifying differentially expressed gene sets (pathways). Methods for identifying differentially coexpressed pathways also exist but are mostly based on aggregated pairwise correlations, or other pairwise measures of coexpression. Instead, we propose Gene Sets Net Correlations Analysis (GSNCA), a multivariate differential coexpression test that accounts for the complete correlation structure between genes.

Results: In GSNCA, weight factors are assigned to genes in proportion to the genes' cross-correlations (intergene correlations). The problem of finding the weight vectors is formulated as an eigenvector problem with a unique solution. GSNCA tests the null hypothesis that, for a gene set, there is no difference in the weight vectors of the genes between two conditions. In simulation studies and analyses of experimental data, we demonstrate that GSNCA indeed captures changes in the structure of genes' cross-correlations rather than differences in the averaged pairwise correlations. Thus, GSNCA infers differences in coexpression networks while bypassing the method-dependent steps of network inference. As an additional result from GSNCA, we define hub genes as genes with the largest weights and show that these genes frequently correspond to major and specific pathway regulators, as well as to genes that are most affected by the biological difference between two conditions. In summary, GSNCA is a new approach for the analysis of differentially coexpressed pathways that also evaluates the importance of the genes in the pathways, thus providing unique information that may result in the generation of novel biological hypotheses.
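
As a rough illustration of the idea (not the authors' implementation), the sketch below derives per-gene weights from the dominant eigenvector of the absolute inter-gene correlation matrix in each condition and compares the two weight vectors with an L1 statistic and a permutation test; the function names and the choice of test statistic are assumptions made for illustration.

import numpy as np

def gene_weights(expr):
    """expr: samples x genes matrix; returns one weight per gene."""
    corr = np.abs(np.corrcoef(expr, rowvar=False))   # inter-gene correlation magnitudes
    np.fill_diagonal(corr, 0.0)
    _, vecs = np.linalg.eigh(corr)
    w = np.abs(vecs[:, -1])                          # dominant eigenvector
    return w / w.sum()

def coexpression_test(expr1, expr2, n_perm=1000, seed=0):
    """Permutation test on the L1 distance between the two weight vectors."""
    rng = np.random.default_rng(seed)
    stat = np.abs(gene_weights(expr1) - gene_weights(expr2)).sum()
    pooled, n1 = np.vstack([expr1, expr2]), expr1.shape[0]
    null = []
    for _ in range(n_perm):                          # reshuffle sample labels
        idx = rng.permutation(pooled.shape[0])
        null.append(np.abs(gene_weights(pooled[idx[:n1]])
                           - gene_weights(pooled[idx[n1:]])).sum())
    pval = (1 + np.sum(np.array(null) >= stat)) / (1 + n_perm)
    return stat, pval

In this reading, hub genes would simply be the genes that receive the largest weights within a condition.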

Relevance:

10.00%

Publisher:

Abstract:

Mineral exploration programmes around the world use data from remote sensing, geophysics and direct sampling. On a regional scale, the combination of airborne geophysics and ground-based geochemical sampling can aid geological mapping and economic minerals exploration. Because airborne geophysical and traditional soil-sampling data are generated at different spatial resolutions and sampling densities, they are not immediately comparable. Several geostatistical techniques, including indicator cokriging and collocated cokriging, can be used to integrate different types of data into a geostatistical model. With increasing numbers of variables, the inference of the cross-covariance model required for cokriging can be demanding in terms of effort and computational time. In this paper a Gaussian-based Bayesian updating approach is applied to integrate airborne radiometric data and ground-sampled geochemical soil data to maximise the information generated from the soil survey, enabling more accurate geological interpretation for the exploration and development of natural resources. The Bayesian updating technique decomposes the collocated estimate into a product of two models: a prior model and a likelihood model. The prior model is built from primary information and the likelihood model is built from secondary information. The prior model is then updated with the likelihood model to build the final model. The approach allows multiple secondary variables to be simultaneously integrated into the mapping of the primary variable. The Bayesian updating approach is demonstrated using a case study from Northern Ireland, where the history of mineral prospecting for precious and base metals dates from the 18th century. Vein-hosted, strata-bound and volcanogenic occurrences of mineralisation are found. The geostatistical technique was used to improve the resolution of soil geochemistry, collected at a density of one sample per 2 km², by integrating more closely measured airborne geophysical data from the GSNI Tellus Survey, measured over a footprint of 65 × 200 m. The directly measured geochemistry data were considered as primary data in the Bayesian approach and the airborne radiometric data were used as secondary data. The approach produced more detailed updated maps and in particular maximized information on mapped estimates of zinc, copper and lead. Greater delineation of an elongated northwest/southeast trending zone in the updated maps strengthened the potential to investigate strata-bound base metal deposits.
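
A minimal sketch of the Gaussian updating step at a single location, assuming the prior and likelihood models are each summarized by a mean and variance; the precision-weighted product below is a simplified stand-in for the full collocated formulation described in the paper, and all numbers are hypothetical.

import math

def bayesian_update(prior_mean, prior_var, lik_mean, lik_var):
    """Product of two Gaussian models: returns the updated mean and variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
    post_mean = post_var * (prior_mean / prior_var + lik_mean / lik_var)
    return post_mean, post_var

# Hypothetical values: prior from kriging of the sparse soil geochemistry (primary),
# likelihood derived from the dense airborne radiometric data (secondary).
m, v = bayesian_update(prior_mean=85.0, prior_var=400.0, lik_mean=110.0, lik_var=150.0)
print(f"updated estimate: {m:.1f} +/- {math.sqrt(v):.1f}")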

Relevance:

10.00%

Publisher:

Abstract:

According to a higher order reasoning account, inferential reasoning processes underpin the widely observed cue competition effect of blocking in causal learning. The inference required for blocking has been described as modus tollens (if p then q, not q, therefore not p). Young children are known to have difficulties with this type of inference, but research with adults suggests that the inference is easier if participants think counterfactually. In this study, 100 children (51 five-year-olds and 49 six- to seven-year-olds) were assigned to one of two pretraining groups. The counterfactual group observed demonstrations of cues paired with outcomes and answered questions about what the outcome would have been if the causal status of the cues had been different, whereas the factual group answered factual questions about the same demonstrations. Children then completed a causal learning task. Counterfactual pretraining enhanced levels of blocking as well as modus tollens reasoning, but only for the younger children. These findings provide new evidence for an important role for inferential reasoning in causal learning.

Relevance:

10.00%

Publisher:

Abstract:

A sample of 99 children completed a causal learning task that was an analogue of the food allergy paradigm used with adults. The cue competition effects of blocking and unovershadowing were assessed under forward and backward presentation conditions. Children also answered questions probing their ability to make the inference posited to be necessary for blocking by a reasoning account of cue competition. For the first time, children's working memory and general verbal ability were also measured alongside their causal learning. The magnitude of blocking and unovershadowing effects increased with age. However, analyses showed that the best predictor of both blocking and unovershadowing effects was children's performance on the reasoning questions. The magnitude of the blocking effect was also predicted by children's working memory abilities. These findings provide new evidence that cue competition effects such as blocking are underpinned by effortful reasoning processes. 

Relevance:

10.00%

Publisher:

Abstract:

Across a range of domains in psychology different theories assume different mental representations of knowledge. For example, in the literature on category-based inductive reasoning, certain theories (e.g., Rogers & McClelland, 2004; Sloutsky & Fisher, 2008) assume that the knowledge upon which inductive inferences are based is associative, whereas others (e.g., Heit & Rubinstein, 1994; Kemp & Tenenbaum, 2009; Osherson, Smith, Wilkie, López, & Shafir, 1990) assume that knowledge is structured. In this article we investigate whether associative and structured knowledge underlie inductive reasoning to different degrees under different processing conditions. We develop a measure of knowledge about the degree of association between categories and show that it dissociates from measures of structured knowledge. In Experiment 1 participants rated the strength of inductive arguments whose categories were either taxonomically or causally related. A measure of associative strength predicted reasoning when people had to respond fast, whereas causal and taxonomic knowledge explained inference strength when people responded slowly. In Experiment 2, we also manipulated whether the causal link between the categories was predictive or diagnostic. Participants preferred predictive to diagnostic arguments except when they responded under cognitive load. In Experiment 3, using an open-ended induction paradigm, people generated and evaluated their own conclusion categories. Inductive strength was predicted by associative strength under heavy cognitive load, whereas an index of structured knowledge was more predictive of inductive strength under minimal cognitive load. Together these results suggest that associative and structured models of reasoning apply best under different processing conditions and that the application of structured knowledge in reasoning is often effortful.

Relevance:

10.00%

Publisher:

Abstract:

We combine matter-wave interferometry and cavity optomechanics to propose a coherent matter-light interface based on mechanical motion at the quantum level. We demonstrate a mechanism that is able to transfer non-classical features imprinted on the state of a matter-wave system to an optomechanical device, transducing them into distinctive interference fringes. This provides a reliable tool for the inference of quantum coherence in the particle beam. Moreover, we discuss how our system allows for intriguing perspectives, paving the way to the construction of a device for the encoding of quantum information in matter-wave systems. Our proposal, which highlights previously unforeseen possibilities for the synergistic exploitation of these two experimental platforms, is explicitly based on existing technology, available and widely used in current cutting-edge experiments.

Relevance:

10.00%

Publisher:

Abstract:

A quarter of all lagomorphs (pikas, rabbits, hares and jackrabbits) are threatened with extinction, including several genera that contain only one species. The number of species in a genus correlates with extinction risk in lagomorphs, but not in other mammal groups, and this is concerning because the non-random extinction of small clades disproportionately threatens genetic diversity and phylogenetic history. Here, we use phylogenetic analyses to explore the properties of the lagomorph phylogeny and test if variation in evolution, biogeography and ecology between taxa explains current patterns of diversity and extinction risk. Threat status was not related to body size (and, by inference, its biological correlates), and there was no phylogenetic signal in extinction risk. We show that the lagomorph phylogeny has a similar clade-size distribution to other mammals, and found that genus size was unrelated to present climate, topography, or geographic range size. Extinction risk was greater in areas of higher human population density and negatively correlated with anthropogenically modified habitat. Consistent with this, habitat generalists were less likely to be threatened. Our models did not predict threat status accurately for taxa that experience region-specific threats. We suggest that pressure from human populations is so severe and widespread that it overrides ecological, biological, and geographic variation in extant lagomorphs.

Relevance:

10.00%

Publisher:

Abstract:

Credal nets are probabilistic graphical models that extend Bayesian nets to cope with sets of distributions. An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to accurate inferences. A transformation is also derived to reduce decision making in credal networks based on the maximality criterion to updating. The decision task is proved to have the same complexity as standard inference, being NP^PP-complete for general credal nets and NP-complete for polytrees. Similar results are derived for the E-admissibility criterion. Numerical experiments confirm the good performance of the method.
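
The toy example below is not the approximation algorithm of the paper; it only illustrates what credal updating computes, by bracketing a posterior probability on a two-node net through enumeration of the extreme points of interval-valued local models (all numbers assumed).

import itertools

# Interval-valued local models for a two-node net A -> B (all numbers assumed).
p_a1_extremes = [0.3, 0.5]                       # extreme points of the credal set for P(A=1)
p_b1_given_a = {0: [0.1, 0.2], 1: [0.6, 0.9]}    # extreme points for P(B=1 | A=a)

posteriors = []
for pa1, pb1_a0, pb1_a1 in itertools.product(p_a1_extremes,
                                             p_b1_given_a[0], p_b1_given_a[1]):
    joint_a1 = pa1 * pb1_a1            # P(A=1, B=1) for this choice of extreme points
    joint_a0 = (1 - pa1) * pb1_a0      # P(A=0, B=1)
    posteriors.append(joint_a1 / (joint_a1 + joint_a0))

print(f"P(A=1 | B=1) lies in [{min(posteriors):.3f}, {max(posteriors):.3f}]")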

Relevance:

10.00%

Publisher:

Abstract:

Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in linear time if there is a single observed node, which is a relevant practical case. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.

Relevance:

10.00%

Publisher:

Abstract:

Credal networks relax the precise probability requirement of Bayesian networks, enabling a richer representation of uncertainty in the form of closed convex sets of probability measures. The increase in expressiveness comes at the expense of higher computational costs. In this paper, we present a new variable elimination algorithm for exactly computing posterior inferences in extensively specified credal networks, which is empirically shown to outperform a state-of-the-art algorithm. The algorithm is then turned into a provably good approximation scheme, that is, a procedure that for any input is guaranteed to return a solution within a given factor of the optimum. Remarkably, we show that when the networks have bounded treewidth and a bounded number of states per variable, the approximation algorithm runs in time polynomial in the input size and in the inverse of the error factor, making it the first known fully polynomial-time approximation scheme for inference in credal networks.
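
As background for readers unfamiliar with the precise-probability case, here is standard Bayesian-network variable elimination on a small chain (sum-product elimination of one variable at a time); the credal algorithm described above additionally optimizes over the local credal sets at each step. The numbers are illustrative only.

import numpy as np

p_a = np.array([0.6, 0.4])               # P(A)
p_b_given_a = np.array([[0.7, 0.3],      # rows indexed by A, columns by B
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.9, 0.1],      # rows indexed by B, columns by C
                        [0.4, 0.6]])

phi_b = p_a @ p_b_given_a                # eliminate A: phi(B) = sum_a P(a) P(B | a)
p_c = phi_b @ p_c_given_b                # eliminate B: P(C) = sum_b phi(b) P(C | b)
print(p_c)                               # marginal distribution of C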

Relevance:

10.00%

Publisher:

Abstract:

We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments.

Relevance:

10.00%

Publisher:

Abstract:

Credal networks provide a scheme for dealing with imprecise probabilistic models. The inference algorithms commonly used in credal networks compute the interval of the posterior probability of an event of interest given evidence of a specific kind: evidence that describes the current state of a set of variables. These algorithms cannot perform evidential reasoning when the evidence must instead be processed according to the conditioning rule proposed by R. C. Jeffrey. This paper describes a procedure to integrate such evidence with Jeffrey's rule when performing inferences with credal nets.
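
For reference, Jeffrey's rule updates beliefs about an event A when uncertain evidence revises the probabilities of a partition {B_i}: P'(A) = sum_i P(A | B_i) P'(B_i). A small worked example, with assumed numbers:

# P(A | B_i), assumed for illustration only.
p_a_given_b = {"b1": 0.8, "b2": 0.3}
p_b_prior   = {"b1": 0.5, "b2": 0.5}     # original P(B_i)
p_b_revised = {"b1": 0.9, "b2": 0.1}     # P'(B_i) after the uncertain evidence

p_a_prior   = sum(p_a_given_b[b] * p_b_prior[b]   for b in p_b_prior)
p_a_updated = sum(p_a_given_b[b] * p_b_revised[b] for b in p_b_revised)
print(p_a_prior, p_a_updated)            # 0.55 before, 0.75 after Jeffrey's update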

Relevance:

10.00%

Publisher:

Abstract:

Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, whose quantification can be done by sets of, instead of single, probability distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.
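
As a point of comparison, the sketch below is the standard (precise-probability) forward algorithm for an HMM likelihood; the generalization described above replaces these point-valued parameters with sets of distributions and propagates bounds rather than single numbers. Parameters are illustrative.

import numpy as np

pi = np.array([0.6, 0.4])                # initial state distribution
A  = np.array([[0.7, 0.3],               # transition probabilities P(next | current)
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],               # emission probabilities P(obs | state)
               [0.2, 0.8]])
obs = [0, 1, 0]                          # observed symbol indices

alpha = pi * B[:, obs[0]]                # forward messages at t = 0
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]        # propagate through transitions, weight by emission
print(alpha.sum())                       # likelihood of the observation sequence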