869 results for Error-correcting codes (Information theory)


Relevance:

30.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ε, where µ(X) is the mean response at the predictor variable value X = x, and ε = Y − µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
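
As a concrete illustration of the Berkson setup, the following minimal Python sketch simulates X = Z + η and Y = µ(X) + ε and fits the mean response using only the observed pairs (Z, Y). The one-dimensional setting, the linear choice µ(x) = β₀ + β₁x, and all variable names are illustrative assumptions, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Berkson measurement error model (illustrative 1-dimensional case):
#   X = Z + eta    (true predictor unobserved, nominal value Z observed)
#   Y = mu(X) + eps, with E(eps) = 0
n = 1000
Z = rng.uniform(0.0, 10.0, size=n)        # observed nominal values
eta = rng.normal(0.0, 1.0, size=n)        # Berkson error
X = Z + eta                               # unobserved true predictor

beta0, beta1 = 2.0, 0.5                   # assumed linear mean response
eps = rng.normal(0.0, 0.3, size=n)
Y = beta0 + beta1 * X + eps               # responses

# Fit the parametric model using only (Z, Y).
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
print("estimated (beta0, beta1):", coef)
```

For a linear µ the Berkson error is simply absorbed into the regression residual, so the least-squares fit on Z remains unbiased; the harder statistical questions addressed in work of this kind concern non-linear parametric models.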

Relevance:

30.00%

Publisher:

Abstract:

Conceptual Information Systems unfold the conceptual structure of data stored in relational databases. In the design phase of the system, conceptual hierarchies have to be created which describe different aspects of the data. In this paper, we describe two principal ways of designing such conceptual hierarchies, data-driven design and theory-driven design, and discuss their advantages and drawbacks. The central part of the paper shows how Attribute Exploration, a knowledge acquisition tool developed by B. Ganter, can be applied to narrow the gap between the two approaches.
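
To make the formal background concrete, here is a minimal sketch of the derivation operators that Formal Concept Analysis, and hence Attribute Exploration, is built on. The toy context and the function names are illustrative assumptions, not material from the paper.

```python
# A formal context: objects mapped to the set of attributes they have.
context = {
    "sparrow": {"flies", "has_feathers"},
    "penguin": {"has_feathers", "swims"},
    "trout":   {"swims"},
}

def common_attributes(objects):
    """Attributes shared by all given objects (derivation of an object set)."""
    sets = [context[o] for o in objects]
    if not sets:
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*sets)

def objects_having(attributes):
    """Objects having all given attributes (derivation of an attribute set)."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# A formal concept is a pair (extent, intent) closed under both operators,
# e.g. starting from the attribute set {"has_feathers"}:
extent = objects_having({"has_feathers"})      # sparrow and penguin
intent = common_attributes(extent)             # has_feathers
print(extent, intent)
```

Attribute Exploration interactively asks a domain expert to confirm or refute attribute implications computed from such a context, which is how it narrows the gap between data-driven and theory-driven hierarchy design.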

Relevance:

30.00%

Publisher:

Abstract:

Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues while, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence, as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second is concerned with an approach to user activity recognition which serves as the baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
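
As a pointer to the kind of fusion the abstract refers to, the following sketch implements Dempster's rule of combination for two mass functions over a small activity frame. The sensor names and mass values are made up for illustration and do not come from the dissertation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Mass functions map frozenset focal elements to masses summing to 1;
    mass falling on the empty intersection is conflict and is normalized out.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical sensors reporting on the frame {walking, sitting}:
m_sensor1 = {frozenset({"walking"}): 0.6,
             frozenset({"walking", "sitting"}): 0.4}
m_sensor2 = {frozenset({"walking"}): 0.5,
             frozenset({"sitting"}): 0.2,
             frozenset({"walking", "sitting"}): 0.3}
print(dempster_combine(m_sensor1, m_sensor2))
```

Mass assigned to the whole frame, such as {"walking", "sitting"}, is what lets the Dempster-Shafer theory model ignorance explicitly, which a single probability distribution cannot express directly.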

Relevance:

30.00%

Publisher:

Abstract:

This thesis develops an approach to the construction of multidimensional stochastic models for intelligent systems exploring an underwater environment. It describes methods for building models by a three-dimensional spatial decomposition of stochastic, multisensor feature vectors. New sensor information is incrementally incorporated into the model by stochastic backprojection. Error and ambiguity are explicitly accounted for by blurring a spatial projection of remote sensor data before incorporation. The stochastic models can be used to derive surface maps or other representations of the environment. The methods are demonstrated on data sets from multibeam bathymetric surveying, towed sidescan bathymetry, towed sidescan acoustic imagery, and high-resolution scanning sonar aboard a remotely operated vehicle.
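
A toy sketch of the incremental idea described above, assuming NumPy/SciPy and a regular voxel grid: each sensor return is backprojected into the grid and blurred before being accumulated, so error and ambiguity spread evidence over neighbouring cells. The grid size, the Gaussian blur and all names are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 3-D voxel model accumulating blurred sensor evidence (illustrative only).
GRID = (64, 64, 32)           # x, y, z voxels
model = np.zeros(GRID)        # accumulated evidence per voxel

def incorporate_measurement(model, voxel, weight=1.0, sigma=1.5):
    """Backproject one range return into the grid, blurred to reflect
    sensor error and ambiguity before it is accumulated."""
    hit = np.zeros_like(model)
    hit[voxel] = weight
    model += gaussian_filter(hit, sigma=sigma)   # spatial blur = uncertainty
    return model

# Two overlapping observations of the same sea-floor patch:
model = incorporate_measurement(model, (32, 32, 10))
model = incorporate_measurement(model, (33, 32, 10))

# A crude surface map: for each (x, y) column, the depth cell with most evidence.
surface = model.argmax(axis=2)
```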

Relevance:

30.00%

Publisher:

Abstract:

Fueled by ever-growing genomic information and rapid developments in proteomics, the large-scale analysis of proteins, the mapping of their functional roles has become one of the most important disciplines for characterizing complex cell function. For building functional linkages between biomolecules, and for providing insight into the mechanisms of biological processes, the last decade witnessed the exploration of combinatorial and chip technologies for the detection of biomolecules in a high-throughput and spatially addressable fashion. Among the various techniques developed, progress in protein chip technology has been especially rapid. Recently we demonstrated a new platform called the “Spatially Addressable Protein Array” (SAPA) to profile ligand-receptor interactions. To optimize the platform, the present study investigated various parameters, such as the surface chemistry and the role of additives, for achieving high-density and high-throughput detection with minimal nonspecific protein adsorption. In summary, the present poster addresses some of the critical challenges in protein microarray technology and the process of fine-tuning needed to achieve an optimal system for solving real biological problems.

Relevance:

30.00%

Publisher:

Abstract:

When unmanned underwater vehicles (UUVs) perform missions near the ocean floor, optical sensors can be used to improve local navigation. Video mosaics allow the images acquired by the vehicle to be processed efficiently and also provide position estimates. In this paper we discuss the role of lens distortions in this context, proving that degenerate mosaics have their origin not only in the selected motion model or in registration errors, but also in the cumulative effect of radial distortion residuals. Additionally, we present results on the accuracy of different feature-based approaches for self-correction of lens distortions that may guide the choice of appropriate techniques for correcting distortions.
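
For reference, a common polynomial model of radial lens distortion can be sketched as follows; this particular parameterization and the coefficient values are assumptions for illustration, not necessarily what the paper uses. Even small residual k1, k2 values bend straight features, and chaining many frames accumulates that bending into a degenerate mosaic.

```python
import numpy as np

def apply_radial_distortion(points, k1, k2, center=(0.0, 0.0)):
    """Apply the polynomial radial distortion model
    x_d = x_u * (1 + k1*r^2 + k2*r^4) around the distortion center."""
    p = np.asarray(points, dtype=float) - center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return p * factor + center

# A straight image row is bent slightly by residual distortion; the bending
# compounds when distorted frames are chained into a mosaic.
row = np.stack([np.linspace(-1.0, 1.0, 5), np.full(5, 0.5)], axis=1)
print(apply_radial_distortion(row, k1=-0.05, k2=0.002))
```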

Relevance:

30.00%

Publisher:

Abstract:

I test for the presence of hidden information and hidden action in the automobile insurance market using a data set from several Colombian insurers. To identify the presence of hidden information, I use a common-knowledge variable that provides information on the policyholder's risk type, is related to both experienced risk and insurance demand, and was excluded from the pricing mechanism. This unused variable is the record of the policyholder's traffic offenses. I find evidence of adverse selection in six of the nine insurance companies for which the test is performed. On the hidden action side, I develop a dynamic model of effort in accident prevention under an insurance contract with a bonus experience-rating scheme, and I show that the individual accident probability decreases with previous accidents. This result yields a testable implication for the empirical identification of hidden action, and based on it I estimate an econometric model of the time spans between the purchase of the insurance and the first claim, between the first claim and the second one, and so on. I find strong evidence of unobserved heterogeneity that obscures the testable implication. Once the unobserved heterogeneity is controlled for, I find conclusive statistical grounds supporting the presence of moral hazard in the Colombian insurance market.

Relevance:

30.00%

Publisher:

Abstract:

Patient safety is a priority and a challenge for governmental bodies and for health institutions at both the national and international levels (Sescam, 2007), which have undertaken a search for solutions through different methodologies and strategies aimed at minimizing the risks that healthcare poses to the patient (Ministerio de Sanidad y Consumo, 2002). Although better analysis methodologies and reporting systems are now available, the phenomenon persists (Requena, Aranaz, Gea, Limón, Miralles, & Vitaller, 2010). This thesis proposes a new management alternative for patient safety based on the Theory of Constraints (TOC), undertaking actions to analyze the system under this new methodology, intervene in a timely manner, and motivate health personnel to work toward continuous improvement, in order to establish an effective patient safety management system and a safety culture among the workers of the health institution.

Relevance:

30.00%

Publisher:

Abstract:

Neurofeedback is a non-invasive technique that aims to correct, through operant conditioning, brain waves that appear altered in the electroencephalogram. Since 1967, numerous studies have investigated the effects of the technique in the treatment of psychological disorders. However, to date there are no systematic reviews covering the topics addressed here. The contribution of this work is a review of 56 articles published between 1995 and 2013 and a methodological evaluation of 29 of the studies included in the review. The search was restricted to the effectiveness of neurofeedback in the treatment of depression, anxiety, obsessive-compulsive disorder (OCD), anger and fibromyalgia. The findings show that neurofeedback has yielded positive results in the treatment of these disorders; nevertheless, it is a technique still under development, with theoretical foundations that are not yet well established, and its results require methodologically stronger designs to confirm their validity.

Relevance:

30.00%

Publisher:

Abstract:

Given the current interest in corporate governance, this work seeks to determine whether the online disclosure of the contents of codes of good governance is a determinant of the position that Higher Education Institutions (HEIs) hold in the QS ranking. Starting from a sample of 20 HEIs, a set of dichotomous data was collected for 30 independent variables and related to the dependent variable, position in the ranking. On this basis, a descriptive and correlational study was carried out in order to test the research hypotheses. The study revealed that the online disclosure of the contents of codes of good governance by HEIs is not a determinant of their position in the QS ranking.

Relevance:

30.00%

Publisher:

Abstract:

Loans are illiquid assets that can be sold in a secondary market even though buyers have no certainty about their quality. I study a model in which a lender gains access to new investment opportunities when all her assets are illiquid. To raise funds, the lender may either borrow using her assets as collateral or sell them in a secondary market. Given asymmetric information about asset quality, the lender cannot recover the total value of her assets. There is then a role for the government to correct the information problem using fiscal tools.

Relevance:

30.00%

Publisher:

Abstract:

Møller-Plesset (MP2) and Becke-3-Lee-Yang-Parr (B3LYP) calculations have been used to compare the geometrical parameters, hydrogen-bonding properties, vibrational frequencies and relative energies for several X- and X+ hydrogen peroxide complexes. The geometries and interaction energies were corrected for the basis set superposition error (BSSE) in all the complexes (1-5) using the full counterpoise method, yielding small BSSE values for the 6-311+G(3df,2p) basis set used. The calculated interaction energies ranged from medium to strong hydrogen-bonding systems (1-3) to strong electrostatic interactions (4 and 5). The molecular interactions have been characterized using the atoms in molecules (AIM) theory and by analysis of the vibrational frequencies. The minima on the BSSE-counterpoise-corrected potential-energy surface (PES) have been determined as described by S. Simón, M. Duran, and J. J. Dannenberg, and the results were compared with the uncorrected PES.
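
The full counterpoise (Boys-Bernardi) bookkeeping mentioned in the abstract can be illustrated with a short sketch; the energies below are placeholder numbers, not results from the paper, and the function names are assumptions.

```python
def counterpoise_interaction_energy(e_ab_dimer_basis, e_a_dimer_basis, e_b_dimer_basis):
    """Boys-Bernardi full counterpoise interaction energy:
    E_int(CP) = E_AB(AB) - E_A(AB) - E_B(AB),
    with every fragment evaluated in the full dimer basis (ghost functions
    on the absent partner), so the BSSE cancels out of the difference."""
    return e_ab_dimer_basis - e_a_dimer_basis - e_b_dimer_basis

def bsse(e_a_dimer_basis, e_a_monomer_basis, e_b_dimer_basis, e_b_monomer_basis):
    """BSSE as the artificial stabilization each monomer gains from
    borrowing its partner's basis functions."""
    return (e_a_monomer_basis - e_a_dimer_basis) + (e_b_monomer_basis - e_b_dimer_basis)

# Placeholder energies in hartree (illustrative numbers only):
print(counterpoise_interaction_energy(-226.540, -151.050, -75.470))   # about -0.020 Eh
print(bsse(-151.050, -151.048, -75.470, -75.469))                     # about  0.003 Eh
```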

Relevance:

30.00%

Publisher:

Abstract:

A comparison of the local effects of the basis set superposition error (BSSE) on the electron densities and energy components of three representative H-bonded complexes was carried out. The electron densities were obtained with Hartree-Fock and density functional theory versions of the chemical Hamiltonian approach (CHA) methodology. It was shown that the effects of the BSSE were common to all complexes studied. The electron density difference maps and the chemical energy component analysis (CECA) confirmed that the local effects of the BSSE were different when diffuse functions were present in the calculations.

Relevance:

30.00%

Publisher:

Abstract:

The first part of this work presents a thorough analysis of the most relevant 3D registration techniques, including initial pose estimation, pairwise registration and multiview registration strategies. A new classification has been proposed, based on both the applications and the approach of the methods discussed. The main contribution of this thesis is the proposal of a new 3D multiview registration strategy. The proposed approach detects revisited regions, obtaining cycles of views that are used to reduce the inaccuracies that may exist in the final model due to error propagation. The method takes advantage of both global and local information of the registration process, using graph theory techniques to correlate multiple views and minimize the propagated error by registering the views in an optimal way. The proposed method has been tested using both synthetic and real data in order to show and study its behavior and demonstrate its reliability.
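
As a loose illustration of how a cycle of views can bound error propagation, the sketch below distributes the drift accumulated around a closed loop of pairwise registrations evenly back over its edges. This is a deliberately simplified stand-in, not the thesis's graph-based optimization: 2-D translations stand in for full rigid 3-D poses, and the uniform correction is the simplest possible distribution rule.

```python
import numpy as np

# Relative registrations around a closed loop of views should compose to zero;
# the residual drift is spread evenly back over the edges of the cycle.
relative_translations = [
    np.array([1.00, 0.02]),
    np.array([0.03, 0.98]),
    np.array([-1.02, 0.05]),
    np.array([0.04, -0.99]),
]

drift = sum(relative_translations)              # loop-closure residual
correction = drift / len(relative_translations)
corrected = [t - correction for t in relative_translations]

print("accumulated drift:", drift)
print("drift after correction:", sum(corrected))   # ~[0, 0]
```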