138 results for approximated inference


Relevance:

10.00%

Abstract:

This paper presents regional sequences of production, consumption and social relations in southern Spain from the beginning of the Neolithic to the Early Bronze Age (c. 5600-1550 BC). The regions studied are southeast Spain, Valencia, the southern Meseta and central/western Andalucia. The details presented for each region and period vary in quality but show how much our knowledge of the archaeological record of southern Spain has changed during the last four decades. Among the surprises are the rapidity of agricultural adoption, the emergence of regional centres of aggregated population in enclosed/fortified settlements of up to 400 hectares in the fourth and third millennia BC, the use of copper objects as instruments of production rather than as items with a purely symbolic or 'prestige' value, large-scale copper production in western Andalucia in the third millennium BC (as opposed to the usual domestic production model), and the inference of societies based on relations of class.

Relevance:

10.00%

Abstract:

Surface coatings are very common on mineral grains in soils but most laboratory dissolution experiments are carried out on pristine, uncoated mineral grains. An experiment designed to unambiguously isolate the effect of surface coatings on mineral dissolution from any influence of solution saturation state is reported. Two aliquots of 53 to 63 μm anorthite feldspar powder were used. One was dissolved in pH 2.6 HCl, the other in pH 2.6 FeCl3 solution, both for approximately 6000 h in flow-through reactors. An amorphous Fe-rich, Al-, Ca- and Si-free orange precipitate coated the anorthite dissolved in the FeCl3 solution. The BET surface area of the anorthite increased from 0.16 to 1.65 m² g⁻¹ in the HCl experiment and to 3.89 m² g⁻¹ in the FeCl3 experiment. The increase in surface area in the HCl experiment was due to the formation of etch pits on the anorthite grain surface, whilst the additional increase in the FeCl3 experiment was due to the micro- and meso-porous nature of the orange precipitate. This precipitate did not inhibit or slow the dissolution of the anorthite. Steady-state dissolution rates for the anorthite dissolved in the HCl and FeCl3 were approximately 2.5 and 3.2 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹ respectively. These rates are not significantly different once the cumulative uncertainty of 17% in their values, due to uncertainty in the input parameters used in their calculation, is taken into account. Results from this experiment support previous theoretical and inference-based conclusions that porous coatings should not inhibit mineral dissolution. Copyright © 2003 Elsevier Ltd.
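
As a back-of-the-envelope check on that statement (not part of the original analysis), one can verify that symmetric ±17% bands around the two quoted rates overlap; a minimal Python sketch:

```python
# Back-of-the-envelope check (assumes the stated 17% uncertainty applies
# symmetrically to both rates; not the authors' statistical treatment).
rate_hcl, rate_fecl3, rel_unc = 2.5e-10, 3.2e-10, 0.17

hcl_band = (rate_hcl * (1 - rel_unc), rate_hcl * (1 + rel_unc))
fecl3_band = (rate_fecl3 * (1 - rel_unc), rate_fecl3 * (1 + rel_unc))

# Bands overlap if each lower bound lies below the other band's upper bound.
overlap = hcl_band[1] >= fecl3_band[0] and fecl3_band[1] >= hcl_band[0]
print(hcl_band, fecl3_band, overlap)   # overlap -> True
```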

Relevance:

10.00%

Abstract:

Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
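
To make the approach concrete, here is a minimal sketch of Bayesian inference over a benchmark space of reference agents, assuming a hypothetical softmax choice model over engine evaluations; the agent parameterisation, the evaluations, and the observed choices are all illustrative and are not the paper's actual model:

```python
import numpy as np

# Hypothetical reference agents: each is a "sensitivity" s; stronger agents
# concentrate probability on better-evaluated moves (softmax choice model).
sensitivities = np.linspace(0.5, 10.0, 20)        # benchmark space (assumed)
prior = np.full(len(sensitivities), 1.0 / len(sensitivities))

def choice_probs(evals, s):
    """P(choose each move) for an agent with sensitivity s (softmax)."""
    z = s * (evals - evals.max())                 # evals: higher = better
    p = np.exp(z)
    return p / p.sum()

# Observed positions: engine evaluations of the legal moves (made-up numbers)
# and the index of the move the player actually chose.
observations = [
    (np.array([0.3, 0.1, -0.2, -0.5]), 0),
    (np.array([0.0, -0.1, -0.8]),       1),
    (np.array([1.2, 0.9, 0.2, 0.1]),    0),
]

# Bayesian update: multiply the prior by the likelihood of each observed choice.
log_post = np.log(prior)
for evals, chosen in observations:
    for i, s in enumerate(sensitivities):
        log_post[i] += np.log(choice_probs(evals, s)[chosen])

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean sensitivity:", float(post @ sensitivities))
```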

Relevance:

10.00%

Abstract:

This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first is a questionnaire study which examines the respondents' familiarity with concepts. The second is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This module works in two steps. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body is a representation of the body of knowledge that a class of users is expected to know, along with the probability that this knowledge is known. In the inference component, the process of learning concepts is represented as a belief network. In the second step, the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions about the concepts for which it is uncertain whether they are known to the user, and propagating this new evidence to revise the beliefs about the remaining concepts. The system has been implemented on a workstation under UNIX. An example of its operation is presented, and advantages and limitations of the approach are discussed.
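
A minimal sketch of the two-step idea, using a hand-rolled three-concept belief network with made-up probabilities (the actual EPIAIM stereotypes, concepts, and network structure are not reproduced here):

```python
from itertools import product

# Step 1: a trigger question has activated a stereotype whose "body" gives
# prior probabilities that each concept is known (illustrative values).
# Step 2: a tiny belief network links the concepts: knowing "rate" makes
# knowing "incidence" more likely, which in turn makes "odds ratio" more likely.
p_rate = 0.8                                     # stereotype prior P(rate known)
p_incidence_given_rate = {True: 0.9, False: 0.3}
p_oddsratio_given_incidence = {True: 0.7, False: 0.1}

def joint(rate, incidence, oddsratio):
    """Joint probability of one configuration of the three binary concepts."""
    p = p_rate if rate else 1 - p_rate
    q = p_incidence_given_rate[rate]
    p *= q if incidence else 1 - q
    r = p_oddsratio_given_incidence[incidence]
    p *= r if oddsratio else 1 - r
    return p

def posterior(target, evidence):
    """P(target concept known | answers to the follow-up questions), by enumeration."""
    num = den = 0.0
    for world in product([True, False], repeat=3):
        w = dict(zip(["rate", "incidence", "oddsratio"], world))
        if any(w[k] != v for k, v in evidence.items()):
            continue
        p = joint(w["rate"], w["incidence"], w["oddsratio"])
        den += p
        if w[target]:
            num += p
    return num / den

# The user reports not knowing "incidence"; this evidence is propagated to
# revise the default beliefs about the other concepts in the stereotype body.
print(posterior("oddsratio", {"incidence": False}))   # drops well below its prior
print(posterior("rate", {"incidence": False}))         # also revised downward
```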

Relevance:

10.00%

Abstract:

Accuracy and mesh generation are key issues for high-resolution hydrodynamic modelling of the whole Great Barrier Reef. Our objective is to generate suitable unstructured grids that can resolve topological and dynamical features such as tidal jets and recirculation eddies in the wake of islands. A new strategy is suggested to refine the mesh in areas of interest, taking into account the bathymetric field and an approximated distance to islands and reefs. This distance is obtained by solving an elliptic differential operator with specific boundary conditions. The meshes produced illustrate both the validity and the efficiency of the adaptive strategy. The selection of refinement and geometrical parameters is discussed. (c) 2006 Elsevier Ltd. All rights reserved.
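
The sketch below illustrates the flavour of such a strategy on a toy structured grid: a Laplace-type solve with Dirichlet conditions on island cells yields a smooth field that decays away from the reefs, and this field is combined with the bathymetry to define a target element size. The geometry, boundary conditions, and size formula are illustrative assumptions, not the paper's actual operator or parameters:

```python
import numpy as np

# Toy domain: True = island/reef cell, False = water (illustrative geometry).
n = 80
island = np.zeros((n, n), dtype=bool)
island[30:38, 30:38] = True
island[55:60, 15:22] = True

# Solve a Laplace problem by Jacobi iteration: u = 1 on island cells,
# u = 0 on the open boundary.  u decays smoothly away from the reefs and
# serves as a cheap proxy for "proximity to islands".
u = np.zeros((n, n))
for _ in range(3000):
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    u_new[island] = 1.0
    u_new[0, :] = u_new[-1, :] = u_new[:, 0] = u_new[:, -1] = 0.0
    u = u_new

# Hypothetical bathymetry (depth in metres) and a target element size that
# shrinks both in shallow water and close to the reefs.
depth = 10.0 + 40.0 * np.linspace(0, 1, n)[None, :] * np.ones((n, 1))
h_max, h_min = 5000.0, 300.0        # element-size bounds in metres (illustrative)
size = h_min + (h_max - h_min) * (1 - u) * (depth / depth.max())
size[island] = np.nan               # no elements inside islands
print(float(np.nanmin(size)), float(np.nanmax(size)))
```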

Relevance:

10.00%

Abstract:

The radar scattering properties of realistic aggregate snowflakes have been calculated using the Rayleigh-Gans theory. We find that the effect of the snowflake geometry on the scattering may be described in terms of a single universal function, which depends only on the overall shape of the aggregate and not on the geometry or size of the pristine ice crystals which compose the flake. This function is well approximated by a simple analytic expression at small sizes; for larger snowflakes we fit a curve to our numerical data. We then demonstrate how this allows a characteristic snowflake radius to be derived from dual-wavelength radar measurements without knowledge of the pristine crystal size or habit, while at the same time showing that this detail is crucial to using such data to estimate ice water content. We also show that the 'effective radius', characterizing the ratio of particle volume to projected area, cannot be inferred from dual-wavelength radar data for aggregates. Finally, we consider the errors involved in approximating snowflakes by 'air-ice spheres', and show that for small enough aggregates the predicted dual-wavelength ratio typically agrees to within a few percent, provided some care is taken in choosing the radius of the sphere and the dielectric constant of the air-ice mixture; at larger sizes the radar becomes more sensitive to particle shape, and the errors associated with the sphere model are found to increase accordingly.
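
As an illustration of the dual-wavelength idea for the 'air-ice sphere' approximation mentioned above, the sketch below computes a dual-wavelength ratio using the standard Rayleigh-Gans form factor for a homogeneous sphere; the universal aggregate function fitted in the paper is not reproduced, and the two wavelengths (roughly 35 and 94 GHz cloud radars) are example choices:

```python
import numpy as np

def sphere_form_factor(u):
    """Rayleigh-Gans form factor of a homogeneous sphere: 3(sin u - u cos u)/u^3."""
    return 3.0 * (np.sin(u) - u * np.cos(u)) / u**3

# Example radar wavelengths (roughly 35 and 94 GHz cloud radars), in metres.
lam1, lam2 = 8.6e-3, 3.2e-3
k1, k2 = 2 * np.pi / lam1, 2 * np.pi / lam2

# Backscatter from a Rayleigh-Gans sphere of radius R involves the form factor
# evaluated at u = 2 k R; in the dual-wavelength ratio the size-independent
# Rayleigh factors cancel, leaving the ratio of squared form factors.
radii = np.linspace(0.1e-3, 1.0e-3, 10)            # sphere radii in metres
dwr = (sphere_form_factor(2 * k1 * radii) /
       sphere_form_factor(2 * k2 * radii)) ** 2

for r, d in zip(radii, dwr):
    print(f"R = {r*1e3:4.2f} mm   DWR = {10*np.log10(d):5.2f} dB")
```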

Relevance:

10.00%

Abstract:

Inferences consistent with “recognition-based” decision-making may be drawn for various reasons other than recognition alone. We demonstrate that, for 2-alternative forced-choice decision tasks, less-is-more effects (reduced performance with additional learning) are not restricted to recognition-based inference but can also be seen in circumstances where inference is knowledge-based but item knowledge is limited. One reason why such effects may not be observed more widely is the dependence of the effect on specific values for the validity of recognition and knowledge cues. We show that both recognition and knowledge validity may vary as a function of the number of items recognized. The implications of these findings for the special nature of recognition information, and for the investigation of recognition-based inference, are discussed.
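
For context, here is a minimal sketch of the standard Goldstein-Gigerenzer accuracy formula for two-alternative forced choice with n of N items recognised and fixed recognition and knowledge validities; the paper's point is precisely that these validities need not be fixed, so this is background rather than the paper's model:

```python
from math import comb

def expected_accuracy(n, N, alpha, beta):
    """2AFC accuracy when n of N items are recognised (Goldstein & Gigerenzer form):
    neither item recognised -> guess (1/2); exactly one recognised -> recognition
    cue (validity alpha); both recognised -> knowledge (validity beta)."""
    pairs = comb(N, 2)
    p_one = n * (N - n) / pairs          # exactly one item recognised
    p_both = comb(n, 2) / pairs          # both items recognised
    p_none = comb(N - n, 2) / pairs      # neither item recognised
    return p_one * alpha + p_both * beta + p_none * 0.5

# Illustrative fixed validities with alpha > beta produce a less-is-more
# effect: accuracy peaks before all N items are recognised.
N, alpha, beta = 100, 0.8, 0.6
accuracies = [(n, expected_accuracy(n, N, alpha, beta)) for n in range(N + 1)]
best_n, best_acc = max(accuracies, key=lambda t: t[1])
print(f"accuracy with full recognition: {accuracies[N][1]:.3f}")
print(f"peak accuracy {best_acc:.3f} at n = {best_n} items recognised")
```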

Relevance:

10.00%

Abstract:

Although many examples exist for shared neural representations of self and other, it is unknown how such shared representations interact with the rest of the brain. Furthermore, do high-level inference-based shared mentalizing representations interact with lower level embodied/simulation-based shared representations? We used functional neuroimaging (fMRI) and a functional connectivity approach to assess these questions during high-level inference-based mentalizing. Shared mentalizing representations in ventromedial prefrontal cortex, posterior cingulate/precuneus, and temporo-parietal junction (TPJ) all exhibited identical functional connectivity patterns during mentalizing of both self and other. Connectivity patterns were distributed across low-level embodied neural systems such as the frontal operculum/ventral premotor cortex, the anterior insula, the primary sensorimotor cortex, and the presupplementary motor area. These results demonstrate that identical neural circuits are implementing processes involved in mentalizing of both self and other and that the nature of such processes may be the integration of low-level embodied processes within higher level inference-based mentalizing.

Relevance:

10.00%

Abstract:

This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management, in which a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active networks technology constitutes the lower-level layer, making possible the deployment of code that implements teleo-reactive agents distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. An active network management architecture is used by the reasoner to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, and provides a powerful system capable of performing high-level management tasks in order to deal with network faults.
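
A minimal illustration of the teleo-reactive control regime referred to above: an ordered list of condition-action rules scanned top-down, with the first satisfied rule firing. The network state, conditions, and repair actions are invented for illustration and are not the paper's Golog/Situation Calculus implementation:

```python
# Hypothetical network state; in the paper this information would come from
# the active-network management layer, not a local dictionary.
state = {"link_up": False, "config_ok": False, "alarm_cleared": False}

def reload_config(s):      s["config_ok"] = True
def restart_interface(s):  s["link_up"] = True
def clear_alarm(s):        s["alarm_cleared"] = True

# Teleo-reactive program: an ordered list of (condition, action) rules.
# The first rule is the goal (action None = nothing left to do); otherwise
# the first rule whose condition holds is executed and the state is re-read.
rules = [
    (lambda s: s["link_up"] and s["config_ok"] and s["alarm_cleared"], None),
    (lambda s: not s["config_ok"], reload_config),
    (lambda s: not s["link_up"], restart_interface),
    (lambda s: not s["alarm_cleared"], clear_alarm),
]

def run(state, rules, max_steps=10):
    for _ in range(max_steps):
        for condition, action in rules:
            if condition(state):
                if action is None:
                    return state          # goal condition reached
                action(state)             # act, then rescan from the top
                break
    return state

print(run(state, rules))   # all three flags end up True
```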

Relevance:

10.00%

Abstract:

This article explores how data envelopment analysis (DEA), along with a smoothed bootstrap method, can be used in applied analysis to obtain more reliable efficiency rankings for farms. The main focus is the smoothed homogeneous bootstrap procedure introduced by Simar and Wilson (1998) to implement statistical inference for the original efficiency point estimates. Two main model specifications, constant and variable returns to scale, are investigated along with various choices regarding data aggregation. The coefficient of separation (CoS), a statistic that indicates the degree of statistical differentiation within the sample, is used to demonstrate the findings. The CoS suggests a substantive dependency of the results on the methodology and assumptions employed. Accordingly, some observations are made on how to conduct DEA in order to get more reliable efficiency rankings, depending on the purpose for which they are to be used. In addition, attention is drawn to the ability of the SLICE MODEL, implemented in GAMS, to enable researchers to overcome the computational burdens of conducting DEA (with bootstrapping).
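
For concreteness, the sketch below computes input-oriented, constant-returns-to-scale DEA scores with scipy's linear-programming routine on made-up farm data; these are the point estimates to which the Simar and Wilson (1998) smoothed bootstrap would then be applied (the bootstrap itself is not sketched here):

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y):
    """Input-oriented CRS (CCR) efficiency scores.
    X: (n, p) inputs, Y: (n, q) outputs.  For each unit o solve
    min theta  s.t.  sum_j lam_j x_j <= theta x_o,  sum_j lam_j y_j >= y_o,  lam >= 0."""
    n, p = X.shape
    q = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lam_1, ..., lam_n]
        c = np.zeros(n + 1); c[0] = 1.0
        A_ub = np.zeros((p + q, n + 1))
        b_ub = np.zeros(p + q)
        A_ub[:p, 0] = -X[o]            # sum_j lam_j x_jk - theta x_ok <= 0
        A_ub[:p, 1:] = X.T
        A_ub[p:, 1:] = -Y.T            # -sum_j lam_j y_jm <= -y_om
        b_ub[p:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores[o] = res.x[0]
    return scores

# Made-up farm data: two inputs (land, labour), one output (revenue).
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(20, 2))
Y = (X @ np.array([[0.6], [0.4]])) * rng.uniform(0.5, 1.0, size=(20, 1))
print(np.round(dea_crs_input(X, Y), 3))   # 1.0 = on the CRS frontier
```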

Relevance:

10.00%

Abstract:

Fixed transaction costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data, and the construction of robust methods for mitigating such bias, have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero and, when this is not the case, imparts a bias to parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this from experiments; and suggest techniques for mitigating this bias using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression, and we investigate their consequences using data on milk-market participation in the Ethiopian highlands. (C) 2004 Elsevier B.V. All rights reserved.
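
A minimal sketch of the data-augmentation step underlying Bayesian estimation of the censored regression model, for a Tobit with a known, possibly non-zero censoring point c; priors are flat, the data are simulated, and this is the textbook mechanism rather than the authors' modified procedure:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Simulated data with a non-zero censoring point c (left-censoring at c).
n, c, beta_true, sigma_true = 500, 0.5, np.array([1.0, 0.8]), 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_star = X @ beta_true + sigma_true * rng.normal(size=n)
y = np.maximum(y_star, c)                 # observed (censored) response
censored = y_star <= c

# Gibbs sampler with flat priors: alternate (1) data augmentation,
# (2) beta | sigma^2, latent data, (3) sigma^2 | beta, latent data.
beta, sigma2 = np.zeros(2), 1.0
XtX_inv = np.linalg.inv(X.T @ X)
draws = []
for it in range(2000):
    # (1) draw latent y* for censored observations from N(x'beta, sigma^2)
    #     truncated above at the censoring point c
    mu_c = X[censored] @ beta
    b = (c - mu_c) / np.sqrt(sigma2)
    z = truncnorm.rvs(-np.inf, b, loc=mu_c, scale=np.sqrt(sigma2),
                      random_state=rng)
    y_aug = y.copy()
    y_aug[censored] = z
    # (2) beta | rest ~ N((X'X)^-1 X'y_aug, sigma^2 (X'X)^-1)
    beta_hat = XtX_inv @ X.T @ y_aug
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # (3) sigma^2 | rest ~ scaled inverse chi-square
    resid = y_aug - X @ beta
    sigma2 = (resid @ resid) / rng.chisquare(n)
    if it >= 500:
        draws.append(beta)
print("posterior mean of beta:", np.round(np.mean(draws, axis=0), 3))
```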

Relevance:

10.00%

Abstract:

The paper provides one of the first applications of the double bootstrap procedure (Simar and Wilson 2007) in a two-stage estimation of the effect of environmental variables on non-parametric estimates of technical efficiency. This procedure enables consistent inference within models explaining efficiency scores, while simultaneously producing standard errors and confidence intervals for these efficiency scores. The application is to 88 livestock and 256 crop farms in the Czech Republic, split into individual and corporate farms.
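
As a rough illustration of the second-stage inference only, the sketch below fits a truncated regression of simulated efficiency scores (truncated at 1) on an environmental variable and forms percentile confidence intervals for the coefficients by parametric bootstrap; the first-stage DEA and the bias-correction loop of the full Simar and Wilson (2007) double bootstrap are omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(2)

# Simulated "efficiency scores" delta >= 1 and one environmental variable z.
n, beta_true, sigma_true = 200, np.array([1.2, 0.5]), 0.4
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = Z @ beta_true
a = (1.0 - mu) / sigma_true                       # left truncation at 1
delta = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma_true, random_state=rng)

def negloglik(theta, y, Z):
    """Truncated-normal regression: y = Z beta + eps, eps ~ N(0, s^2), y >= 1."""
    beta, log_s = theta[:-1], theta[-1]
    s = np.exp(log_s)
    m = Z @ beta
    return -np.sum(norm.logpdf(y, m, s) - norm.logsf(1.0, m, s))

def fit(y, Z):
    start = np.r_[np.linalg.lstsq(Z, y, rcond=None)[0], 0.0]
    res = minimize(negloglik, start, args=(y, Z), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

beta_hat, s_hat = fit(delta, Z)

# Parametric bootstrap for confidence intervals of beta (percentile method).
boot = []
for _ in range(200):
    m = Z @ beta_hat
    y_b = truncnorm.rvs((1.0 - m) / s_hat, np.inf, loc=m, scale=s_hat,
                        random_state=rng)
    boot.append(fit(y_b, Z)[0])
ci = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
print("beta_hat:", np.round(beta_hat, 3))
print("95% CIs :", np.round(ci.T, 3))
```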

Relevance:

10.00%

Abstract:

A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
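
A small simulation illustrating the selection bias of the naive maximum-likelihood estimate in a two-stage select-the-best design; the design parameters and selection rule are invented for illustration, and the paper's bias-adjusted estimator and confidence regions are not implemented:

```python
import numpy as np

rng = np.random.default_rng(3)

K, n1, n2, n_sims = 4, 50, 100, 10_000
true_effects = np.zeros(K)           # all experimental arms equal to control

naive = []
for _ in range(n_sims):
    # Stage 1: each experimental arm and the control observed on n1 patients.
    stage1_exp = rng.normal(true_effects, 1.0, size=(n1, K)).mean(axis=0)
    stage1_ctl = rng.normal(0.0, 1.0, size=n1).mean()
    best = np.argmax(stage1_exp - stage1_ctl)      # select the best arm

    # Stage 2: only the selected arm and the control continue.
    stage2_exp = rng.normal(true_effects[best], 1.0, size=n2).mean()
    stage2_ctl = rng.normal(0.0, 1.0, size=n2).mean()

    # Naive (maximum-likelihood) estimate: pooled mean difference for the
    # selected arm, ignoring the selection at the interim analysis.
    exp_mean = (n1 * stage1_exp[best] + n2 * stage2_exp) / (n1 + n2)
    ctl_mean = (n1 * stage1_ctl + n2 * stage2_ctl) / (n1 + n2)
    naive.append(exp_mean - ctl_mean)

print("true advantage of selected arm: 0.0")
print("mean naive estimate:", round(float(np.mean(naive)), 4))   # > 0: biased
```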

Relevance:

10.00%

Abstract:

We show that the Hájek (Ann. Math Statist. (1964) 1491) variance estimator can be used to estimate the variance of the Horvitz–Thompson estimator when the Chao sampling scheme (Chao, Biometrika 69 (1982) 653) is implemented. This estimator is simple and can be implemented with any statistical package. We consider a numerical and an analytic method to show that this estimator can be used. A series of simulations supports our findings.
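
For reference, here is one commonly cited form of the Hájek variance estimator for the Horvitz–Thompson total, which avoids joint inclusion probabilities; this is a generic textbook sketch, not the paper's derivation for the Chao sampling scheme:

```python
import numpy as np

def hajek_variance(y, pi):
    """One common form of the Hajek (1964) variance estimator for the
    Horvitz-Thompson total  sum(y_i / pi_i)  under a fixed-size,
    high-entropy without-replacement design (an approximation that avoids
    joint inclusion probabilities)."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    n = len(y)
    check = y / pi                                  # expanded values y_i / pi_i
    w = 1.0 - pi
    a_hat = np.sum(w * check) / np.sum(w)           # weighted mean of expanded values
    return n / (n - 1) * np.sum(w * (check - a_hat) ** 2)

# Illustrative sample: made-up study variable and inclusion probabilities.
y = np.array([12.0, 7.5, 20.1, 3.3, 15.8])
pi = np.array([0.30, 0.12, 0.45, 0.08, 0.25])
ht_total = np.sum(y / pi)
print(ht_total, hajek_variance(y, pi))
```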