915 results for simultaneous inference
Abstract:
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
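A minimal sketch of the subspace-prior idea: learn a low-dimensional shape space from training silhouette contours and use it to regularize a corrupted observation. This is not the paper's mixture-of-PPCA Bayesian estimator; all data, dimensions, and the plain-PCA projection are illustrative assumptions.

```python
# Sketch: a linear shape subspace as a denoising prior for contour vectors.
# Synthetic stand-in data; the paper uses a mixture model with probabilistic
# PCA and a full Bayesian reconstruction, not this simple projection.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train_shapes = rng.normal(size=(500, 200))   # 500 flattened contour vectors
prior = PCA(n_components=20).fit(train_shapes)

noisy = train_shapes[0] + rng.normal(scale=0.5, size=200)  # segmentation noise
# Project onto the learned subspace and back: a crude stand-in for a
# prior-regularized (MAP-style) shape estimate.
denoised = prior.inverse_transform(prior.transform(noisy[None, :]))[0]
```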
Abstract:
We present a type-based approach to statically derive symbolic closed-form formulae that characterize bounds on the heap memory usage of programs written in object-oriented languages. Given a program with size and alias annotations, our inference system computes the amount of memory required for methods to execute successfully, as well as the amount of memory released when methods return. The resulting analysis is useful for networked devices with limited computational resources as well as for embedded software.
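A toy illustration of how such symbolic memory bounds compose. The composition rule below (running A then B needs max(r_A, n_A + r_B), where n_A is A's net allocation) and all names are assumptions for illustration, not the paper's inference rules.

```python
# Sketch: composing symbolic heap bounds with sympy. r_* is the peak memory
# a method requires; n_* is the net memory still held when it returns.
import sympy as sp

s = sp.symbols('s', positive=True)   # a size variable, e.g. a list length

r_A, n_A = 2*s, s     # hypothetical: A peaks at 2s cells, retains s
r_B, n_B = 3*s, -s    # hypothetical: B peaks at 3s cells, frees s on return

r_seq = sp.Max(r_A, n_A + r_B)   # peak requirement of the sequence A;B
n_seq = n_A + n_B                # net effect of the sequence A;B
print(r_seq, n_seq)              # 4*s and 0, since s > 0
```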
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, at a given level of significance, may lie beyond a laboratory's ability to distinguish minute concentrations from complete absence, preventing the laboratory from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit: the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased subsamples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions from geochemical analyses containing specimens below the detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, allowing the practitioner to borrow properly from the large body of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters under different forms of imputation for the censored data. The case study illustrates that, in general, the best results are obtained when imputations are made using the distribution that best fits the readings above the detection limit, and it exposes the problems of other, more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
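A minimal sketch of the ilr-plus-bootstrap workflow on synthetic data, using a two-part composition (the analyte and its complement) and the crude half-detection-limit imputation for contrast with the distribution-based imputation the study favours; every value below is an illustrative assumption.

```python
# Sketch: censor synthetic concentrations at a detection limit, impute,
# ilr-transform to real space, and bootstrap a distributional parameter.
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=-6, sigma=0.8, size=300)   # synthetic concentrations
x = np.clip(x, None, 0.99)                        # stay inside the simplex

dl = 2e-3                      # hypothetical detection limit
x_imp = x.copy()
# Naive half-detection-limit imputation, shown only for contrast with the
# distribution-fitting imputation the case study recommends.
x_imp[x < dl] = dl / 2

ilr = np.log(x_imp / (1 - x_imp)) / np.sqrt(2)    # ilr of a 2-part composition
boot = [ilr[rng.integers(0, len(ilr), len(ilr))].mean() for _ in range(2000)]
print(np.percentile(boot, [2.5, 97.5]))           # bootstrap CI for the ilr mean
```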
Abstract:
Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. Of the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of budget use on both company innovation and performance.
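For orientation, a generic LISREL-style sketch (notation assumed, not taken from the paper) of a model with a direct effect (γ₂), an interaction effect (γ₃), and an indirect effect of ξ₁ on η₂ through η₁ equal to γ₁β:

```latex
\eta_1 = \gamma_1 \xi_1 + \zeta_1, \qquad
\eta_2 = \beta \eta_1 + \gamma_2 \xi_2 + \gamma_3\, \xi_1 \xi_2 + \zeta_2
```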
Abstract:
This paper analyzes the two-way causal relationship between health and employment and its dynamic behavior using United States data from the PSID (Panel Study of Income Dynamics). The study uses two dependent variables (self-reported health status and employment), which are estimated with a bivariate probit model to address the endogeneity present in this relationship. The results show significant evidence of that endogeneity and of the positive impact that good health has on the probability of being employed, and vice versa; however, the impact of employment status on health status is found not to be significant.
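A compact sketch of the bivariate probit likelihood on simulated data. The variable names, the health/employment labels, and the use of scipy are illustrative assumptions, not the paper's specification.

```python
# Sketch: maximum-likelihood bivariate probit. Each observation contributes
# Phi2(q1*xb1, q2*xb2; q1*q2*rho), with qi = 2*yi - 1.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def neg_loglik(params, X, y1, y2):
    k = X.shape[1]
    b1, b2 = params[:k], params[k:2*k]
    rho = np.tanh(params[-1])                 # keep the correlation in (-1, 1)
    q1, q2 = 2*y1 - 1, 2*y2 - 1
    ll = 0.0
    for i in range(len(y1)):
        r = q1[i] * q2[i] * rho
        p = multivariate_normal.cdf([q1[i] * (X[i] @ b1), q2[i] * (X[i] @ b2)],
                                    cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))
    return -ll

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
health = (X @ [0.2, 0.8] + eps[:, 0] > 0).astype(int)     # toy "good health"
employed = (X @ [0.1, 0.5] + eps[:, 1] > 0).astype(int)   # toy "employed"
fit = minimize(neg_loglik, np.zeros(2 * X.shape[1] + 1), args=(X, health, employed))
print(fit.x)   # [b1, b2, atanh(rho)]
```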
Abstract:
This paper uses Colombian household survey data collected over the period 1984-2005 to estimate Gini coefficients along with their corresponding standard errors. We find a statistically significant increase in wage income inequality following the adoption of the liberalisation measures of the early 1990s, and mixed evidence during the recovery years that followed the economic recession of the late 1990s. We also find that in several cases the observed differences in the Gini coefficients across cities have not been statistically significant.
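A minimal sketch of the computation: a Gini coefficient with a bootstrap standard error on synthetic wages. The bootstrap is one common way to obtain such standard errors; the paper's exact procedure is not reproduced here.

```python
# Sketch: Gini coefficient of a wage sample plus a bootstrap standard error.
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2 * np.sum(i * x) / (n * np.sum(x)) - (n + 1) / n

rng = np.random.default_rng(3)
wages = rng.lognormal(mean=8, sigma=0.6, size=1000)   # synthetic wage data
boot = [gini(rng.choice(wages, size=wages.size)) for _ in range(1000)]
print(f"Gini = {gini(wages):.3f}, bootstrap s.e. = {np.std(boot):.3f}")
```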
Abstract:
The objective was to evaluate the effect of alerts on diclofenac prescribing. This was an observational, comparative, before-and-after post-intervention study in patients with diclofenac prescriptions. The effect of the restrictive alerts was evaluated before and after their implementation in patients prescribed diclofenac who had an associated cardiovascular-risk diagnosis according to ICD-10 or who were over 65 years of age. There were a total of 315,135 transactions with diclofenac prescriptions, for an average of 49,355 patients per month; 94.8% (298,674) of the transactions were prescribed by general practitioners.
Abstract:
Comprensión is a book for Key Stage 2 and the first years of Key Stage 3. The series encourages children to think, requiring them not only to interpret what they read but also to use the information they have gathered in a constructive way, applying it, for example, to graphs, maps, diagrams, drawings and tables. Alternatively, many of the activities require children to explain in words information contained in different visual representations such as graphs, diagrams and illustrations. The book will develop children's deductive and evaluative skills. Many of the activities draw on aspects of science, history and geography, for example, while others are centred on children's interests, including topics such as magic, Martians and dragons.
Abstract:
Abstract taken from the publication.
Abstract:
Bayesian inference has been used to determine rigorous estimates of hydroxyl radical concentrations ([OH]) and air mass dilution rates (K) averaged following air masses between linked observations of nonmethane hydrocarbons (NMHCs) spanning the North Atlantic during the Intercontinental Transport and Chemical Transformation (ITCT)-Lagrangian-2K4 experiment. The Bayesian technique obtains a refined (posterior) distribution of a parameter given data related to the parameter through a model and prior beliefs about the parameter's distribution. Here, the model describes hydrocarbon loss through OH reaction and mixing with a background concentration at rate K. The Lagrangian experiment provides direct observations of hydrocarbons at two time points, removing assumptions regarding composition or sources upstream of a single observation. The estimates are sharpened by using many hydrocarbons with different reactivities and by accounting for their variability and measurement uncertainty. A novel technique is used to construct prior background distributions of the many species, described by the variation of a single parameter; this exploits the high correlation of the species, related through the first principal component of many NMHC samples. The Bayesian method obtains posterior estimates of [OH], K, and the background parameter following each air mass. Median [OH] values are typically between 0.5 and 2.0 × 10^6 molecules cm−3, but are elevated to between 2.5 and 3.5 × 10^6 molecules cm−3 in low-level pollution. A comparison of [OH] estimates from absolute NMHC concentrations and from NMHC ratios assuming zero background (the "photochemical clock" method) shows similar distributions but reveals a systematic high bias in the estimates from ratios. Estimates of K are ∼0.1 day−1 but show more sensitivity to the prior distribution assumed.
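In a common formulation of the loss-and-mixing model described here (notation assumed, not quoted from the paper), each hydrocarbon with OH rate coefficient k_i relaxes toward its background concentration:

```latex
\frac{d[\mathrm{HC}_i]}{dt}
  = -k_i\,[\mathrm{OH}]\,[\mathrm{HC}_i]
    - K\bigl([\mathrm{HC}_i] - [\mathrm{HC}_i]_{\mathrm{bg}}\bigr)
```

Two linked observations of many species with different k_i then jointly constrain [OH] and K.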
Abstract:
A bipolar air conductivity instrument is described for use with a standard disposable meteorological radiosonde package. It is intended to provide electrical measurements at cloud boundaries, where the ratio of the bipolar air conductivities is affected by the presence of charged particles. The sensors are two identical Gerdien-type electrodes which, through a voltage decay method, measure positive and negative air conductivities simultaneously. Voltage decay provides a thermally stable approach, and a novel low-leakage-current electrometer switch that initiates the decay sequence is described. The radiosonde supplies power and telemetry, as well as measuring simultaneous meteorological data. A test flight using a tethered balloon determined positive (σ+) and negative (σ−) conductivities of σ+ = 2.77±0.2 fS m−1 and σ− = 2.82±0.2 fS m−1, respectively, at 400 m aloft, with σ+/σ− = 0.98±0.04.
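For reference, the textbook Gerdien-condenser relation behind a voltage decay measurement (standard theory, not quoted from the abstract): the electrode voltage relaxes with the dielectric relaxation time of air,

```latex
V(t) = V_0\, e^{-\sigma t/\varepsilon_0}, \qquad \tau = \frac{\varepsilon_0}{\sigma}
```

so the measured σ ≈ 2.8 fS m−1 corresponds to τ = ε0/σ ≈ 3.2 × 10^3 s, which is why a low-leakage electrometer switch matters for the decay measurement.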
Abstract:
Systems engineering often involves computer modelling of the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game domain of chess. Bayesian methods are used to infer the distribution of players' skill levels from the moves they play rather than from their competitive results. The approach is applied to large sets of games by players across a broad FIDE Elo range, and is in principle applicable to any scenario where high-value decisions are made under pressure.
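A minimal sketch of the inference idea: treat move choice as a softmax over move evaluations with a skill parameter, and estimate that parameter from the moves actually played. The evaluations are synthetic and the softmax choice model is an assumption for illustration, not the paper's model.

```python
# Sketch: recover a skill parameter beta from observed choices, where the
# probability of move m is proportional to exp(beta * score[m]).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
positions = [rng.normal(size=rng.integers(5, 30)) for _ in range(200)]  # toy scores
true_beta = 3.0
choices = []
for scores in positions:                     # simulate a player of known skill
    p = np.exp(true_beta * scores)
    choices.append(rng.choice(len(scores), p=p / p.sum()))

def neg_loglik(beta):
    ll = 0.0
    for scores, c in zip(positions, choices):
        z = beta * scores
        ll += z[c] - np.logaddexp.reduce(z)  # log softmax probability of the move
    return -ll

fit = minimize_scalar(neg_loglik, bounds=(0.01, 20), method='bounded')
print(f"estimated skill parameter: {fit.x:.2f}")   # should be near true_beta
```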