912 results for Error Correction Models


Relevance: 30.00%

Abstract:

The SiC optical processor for error detection and correction is realized using a double pin/pin a-SiC:H photodetector with front and back biased optical gating elements. The data show that the background acts as a selector that picks one or more states by splitting portions of the multiple input optical signals across the front and back photodiodes. Boolean operations such as exclusive OR (EXOR) and three-bit addition are demonstrated optically with a combination of such switching devices: when one or all of the inputs are present, the output is amplified and the system behaves as an XOR gate representing the SUM; when two or three inputs are on, the system acts as an AND gate indicating the presence of the CARRY bit. Additional parity logic operations are performed using the four incoming pulsed communication channels, which are transmitted and checked for errors together. As a simple example of this approach, we describe an all-optical processor for error detection and correction and then provide an experimental demonstration of this fault-tolerant reversible system in emerging nanotechnology.
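The SUM/CARRY behaviour described above is that of a three-input full adder; a minimal electrical-logic sketch (not the optical implementation itself) is:

```python
# Electrical-logic sketch of the three-input optical adder behaviour:
# SUM is the XOR of the inputs, CARRY fires when two or three inputs
# are on (a majority, AND-like function).
def full_adder(a: int, b: int, c: int):
    total = a + b + c
    sum_bit = total % 2                  # XOR: odd number of inputs on
    carry_bit = 1 if total >= 2 else 0   # AND-like: two or three inputs on
    return sum_bit, carry_bit

table = {bits: full_adder(*bits)
         for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
```

Enumerating the truth table confirms the behaviour quoted in the abstract: one or three inputs on gives SUM = 1, two or three inputs on gives CARRY = 1.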

Relevance: 30.00%

Abstract:

Computational power is increasing day by day. Despite that, some tasks are still difficult or even impossible for a computer to perform. For example, while identifying a facial expression is easy for a human, for a computer it is still an area under development. To tackle this and similar issues, crowdsourcing has grown as a way to use human computation at large scale. Crowdsourcing is a novel approach for collecting labels quickly and cheaply by sourcing them from the crowd. However, these labels lack reliability, since annotators are not guaranteed to have any expertise in the field. This fact has led to a new research area in which annotation models must be created or adapted to handle such weakly labeled data. Current techniques explore the annotators' expertise and the task difficulty as variables that influence label correctness. Other specific aspects are also considered by noisy-label analysis techniques. The main contribution of this thesis is a process for collecting reliable crowdsourced labels for a facial expression dataset. This process consists of two steps: first, we design crowdsourcing tasks to collect annotator labels; next, we infer the true label from the collected labels by applying state-of-the-art crowdsourcing algorithms. At the same time, a facial expression dataset containing 40,000 images and their labels is created. Finally, we publish the resulting dataset.
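The simplest baseline for inferring a true label from several noisy annotator labels is majority voting; the state-of-the-art algorithms the thesis refers to refine this by also modeling annotator expertise. A sketch with hypothetical image names and labels:

```python
from collections import Counter

# Majority voting: the inferred label is the one most annotators chose.
def majority_vote(labels):
    return Counter(labels).most_common(1)[0][0]

# Hypothetical annotations: several crowd labels per image.
annotations = {
    "img_001": ["happy", "happy", "neutral"],
    "img_002": ["sad", "sad", "sad", "angry"],
}
inferred = {img: majority_vote(lbls) for img, lbls in annotations.items()}
```

More sophisticated aggregation (e.g. weighting votes by an estimated annotator reliability) follows the same aggregate-then-infer pattern.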

Relevance: 30.00%

Abstract:

Composite materials exhibit complex behavior that is difficult to predict under different types of loads. In this dissertation, a methodology was developed to predict failure and damage propagation in composite material specimens. The methodology uses finite element numerical models created with the Ansys and Matlab software packages and performs an incremental-iterative analysis that gradually increases the load applied to the specimen. Several structural failure phenomena are considered, such as fiber and/or matrix failure, delamination, and shear plasticity. Failure criteria based on element stresses were implemented, along with a procedure to reduce the stiffness of failed elements. The material used in this dissertation consists of a spread-tow carbon fabric with a 0°/90° arrangement, and the main numerical model analyzed is a 26-ply specimen under compressive loads. Numerical results were compared with results from specimens tested experimentally, whose mechanical properties are unknown; only the geometry of the specimens is known. The material properties of the numerical model were adjusted over the course of the dissertation to minimize the difference between numerical and experimental results, achieving an error below 5% (i.e., the numerical model was identified from the experimental results).
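The incremental-iterative procedure can be sketched as a toy load-sharing model (a simplification, not the actual Ansys/Matlab implementation): the applied load grows step by step; at each level, elements carrying load in proportion to their stiffness are checked against their strength, and failed elements have their stiffness knocked down before iterating again at the same load.

```python
# Toy incremental-iterative failure analysis with stiffness degradation.
def incremental_analysis(stiffnesses, strengths, load_step, max_load, knockdown=1e-3):
    k = list(stiffnesses)
    failed = [False] * len(k)
    history = []                      # (load level, number of failed elements)
    load = 0.0
    while load < max_load and not all(failed):
        load += load_step
        changed = True
        while changed:                # iterate until no new failures at this load
            changed = False
            total_k = sum(k)
            for i in range(len(k)):
                share = load * k[i] / total_k   # load share by stiffness
                if not failed[i] and share > strengths[i]:
                    failed[i] = True
                    k[i] *= knockdown           # degrade the failed element
                    changed = True
        history.append((load, sum(failed)))
    return history

# Two parallel elements: the weaker one fails once its load share exceeds 0.4.
history = incremental_analysis([1.0, 1.0], [0.4, 10.0], load_step=0.5, max_load=2.0)
```

After the weak element fails, the inner loop redistributes the load to the surviving element before the next increment, which is the essence of the incremental-iterative scheme.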

Relevance: 30.00%

Abstract:

The need to reduce human error has been growing in every field of study, and medicine is one of them. The implementation of technology can support the clinical decision-making process and thereby reduce the difficulties that are typically faced. This study focuses on easing some of those difficulties by presenting real-time data mining models capable of predicting whether a monitored patient, typically admitted to intensive care, will need to take vasopressors. The data mining models were induced using clinical variables such as vital signs and laboratory analyses, among others. The best model presented a sensitivity of 94.94%. With this model it is possible to reduce the misuse of vasopressors, acting as prevention. At the same time, patients receive better care because their treatment with vasopressors is anticipated.
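The reported sensitivity (94.94%) is the true-positive rate: of all patients who actually needed vasopressors, the fraction the model flags. A minimal illustration with made-up labels:

```python
# Sensitivity (true-positive rate): TP / (TP + FN), computed over patients
# who actually needed vasopressors (y_true == 1).
def sensitivity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

score = sensitivity([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])   # 2 of 3 positives caught
```

A high sensitivity matters here because a false negative (a missed vasopressor need) is the costly error in this clinical setting.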

Relevance: 30.00%

Abstract:

This project proposes to extend and generalize the estimation and inference procedures of multivariate generalized additive models for non-Gaussian random variables, which describe the behavior of biological and social phenomena whose representations give rise to longitudinal series and clustered data. Its immediate aim is to develop modeling methodology for understanding biological, environmental, and social processes in the health and social sciences, conditioned by the presence of specific phenomena such as disease. The proposed plan thus seeks to strengthen the relationship between applied mathematics, from a perspective under uncertainty, and the biological and social sciences in general, generating new tools to analyze and explain the many problems for which ever more experimental and/or observational information is available. Starting sequentially with discrete random variables (Yi, with variance function smaller than an even power of the expected value E(Y)), we propose to generate a unified class of generalized (parametric and nonparametric) additive models that contains as particular cases generalized linear models, generalized nonlinear models, generalized additive models, and generalized marginal mean models (the GEE1 approach of Liang and Zeger, 1986, and the GEE2 approaches of Zhao and Prentice, 1990; Zeger and Qaqish, 1992; Yan and Fine, 2004), initiating a connection with generalized linear mixed models for latent variables (GLLAMM; Skrondal and Rabe-Hesketh, 2004), starting from correlated data structures.
This will make it possible to define conditional distributions of the responses, given the covariates and the latent variables (LVs), and to estimate structural equations for the LVs, including regressions of LVs on covariates, regressions of LVs on other LVs, and specific models that account for recognized hierarchies of variation. How to define models that incorporate spatial or temporal structures while allowing for hierarchical factors, fixed or random, measured with error, as in the situations that arise in the social sciences and in epidemiology, is a statistical challenge. This sequential approach is planned for building both estimation and inference methodology, beginning with Poisson and Bernoulli random variables, including existing GLMs, up to current hierarchical generalized models, connecting with GLLAMMs and starting from correlated data structures. This family of models will be generated for structures of variables/vectors, covariates, and hierarchical random components that describe phenomena in the social sciences and epidemiology.

Relevance: 30.00%

Abstract:

This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).

Relevance: 30.00%

Abstract:

Patients with defective ectodysplasin A (EDA) are affected by X-linked hypohidrotic ectodermal dysplasia (XLHED), a condition characterized by sparse hair, inability to sweat, decreased lacrimation, frequent pulmonary infections, and missing and malformed teeth. The canine model of XLHED was used to study the developmental impact of EDA on secondary dentition, since dogs have an entirely brachyodont, diphyodont dentition similar to that in humans, as opposed to mice, which have only permanent teeth (monophyodont dentition), some of which are very different (aradicular hypsodont) from brachyodont human teeth. Also, clinical signs in humans and dogs with XLHED are virtually identical, whereas several are missing in the murine equivalent. In our model, the genetically missing EDA was compensated for by postnatal intravenous administration of soluble recombinant EDA. Untreated XLHED dogs have an incomplete set of conically shaped teeth similar to those seen in human patients with XLHED. After treatment with EDA, significant normalization of adult teeth was achieved in four of five XLHED dogs. Moreover, treatment restored normal lacrimation and resistance to eye and airway infections and improved sweating ability. These results not only provide proof of concept for a potential treatment of this orphan disease but also demonstrate an essential role of EDA in the development of secondary dentition.

Relevance: 30.00%

Abstract:

BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used to detect nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be in either a healthy or a stressed state, with only stressed cells being susceptible to cell death. Cells could switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied to estimate the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 μM reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean time to cellular death decreased far more dramatically, from 2.7 days to 6.5 hours.
CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the estimated transition rates can be compared quantitatively. Data can be extrapolated to late measurement time points to investigate whether costly and time-consuming long-term experiments could be eliminated.
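In a continuous-time Markov model of this kind, the mean waiting time in a state is the reciprocal of the exit rate, so the reported halving of the mean transition time corresponds to a doubling of the rate. A sketch with illustrative rates (not the paper's estimates):

```python
import math

# Sojourn times in a continuous-time Markov state are exponential:
# mean waiting time = 1 / rate, survival probability = exp(-rate * t).
def mean_waiting_time(rate_per_day: float) -> float:
    return 1.0 / rate_per_day

def p_still_in_state(rate_per_day: float, t_days: float) -> float:
    return math.exp(-rate_per_day * t_days)

low_rate = 1 / 14            # rate giving a 14-day mean wait (cf. 10 uM)
high_rate = 2 * low_rate     # doubling the rate halves the mean wait to 7 days
```

The same reciprocal relation links the reported drop in mean time to cellular death (2.7 days to 6.5 hours) to a roughly tenfold increase in the death rate.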

Relevance: 30.00%

Abstract:

Interviewing preschool children who are victims of sexual abuse and/or family maltreatment: effectiveness of forensic interview models. Interviewing preschool children who have experienced a traumatic situation is a complex task that, within forensic psychological assessment, requires a clearly delimited, well-defined, and well-timed protocol. To this end, three interview protocols were selected: the Minors Protocol (PM) of Bull and Birch; the model of the National Institute of Child Health and Human Development (NICHD) of Michael Lamb, from which the EASI (Evaluación del Abuso Sexual Infantojuvenil) was developed; and the Cognitive Interview (CI) of Fisher and Geiselman. The starting hypothesis is to test whether these models yield different amounts of information from preschool children. Consequently, the objectives were to determine which interview model yields the information with the most correct details and the fewest errors, to design our own interview model, and to reach consensus on that model. The work also includes practical outlines to facilitate the opening, development, and closing of the forensic interview. The methodology reproduced the child/traumatic-event pairing by showing and explaining an emotionally significant event that is easy to identify with: a bicycle accident in which a child falls, gets hurt, bleeds, and is treated by his father. We then interviewed 135 children from P3, P4, and P5 using the three interview models, confronting them with a specific demand: to remember and narrate this event. It was concluded that the level of correct recall, when an appropriate interview model is used with preschool children, ranges between 70% and 90%, which supports confidence in children's memories. The percentage of incorrect statements by preschool children is minimal, around 5-6%.
The study stresses the need to establish the rules of the interview thoroughly and, finally, highlights the ineffectiveness of the memory techniques of the cognitive interview with P3 and P4 children. With P5 children, benefits begin to appear thanks to the context reinstatement (CR) technique, the other techniques being beyond the comprehension and use of children of these ages. In summary: 135 preschool children were interviewed with 3 different interview models in order to recall a significant emotional event. The authors conclude that correct recall by the children ranges from 70-90% and that the percentage of erroneous statements is 5-6%. It is necessary to fully establish the rules of the interview, and the cognitive interview techniques proved ineffective with P3 and P4 children.

Relevance: 30.00%

Abstract:

Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model in which one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is a special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions the answer to both questions is yes. We then propose a two-step estimation procedure corresponding to the two questions: the first step estimates the correlated measurement error from a reverse regression, and the second step estimates the usual coefficients of interest using adjusted instruments.
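To see why measurement error matters at all, consider the classical special case (measurement error uncorrelated with model error), where OLS on the mismeasured regressor attenuates the slope toward zero by the reliability ratio; the paper's two-step procedure targets the more general correlated case. A simulated sketch of the classical attenuation:

```python
import random

# Classical errors-in-variables: x is observed with noise u, and the OLS
# slope shrinks toward beta * var(x) / (var(x) + var(u)) — here 2.0 * 0.5 = 1.0.
random.seed(0)
n = 20000
beta = 2.0
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [x + random.gauss(0, 1) for x in x_true]        # measurement error u
y = [beta * x + random.gauss(0, 0.5) for x in x_true]   # model error e, corr(u, e) = 0

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

slope = ols_slope(x_obs, y)   # attenuated: close to 1.0, well below beta = 2.0
```

When the measurement error also correlates with the model error, the bias is no longer a simple shrinkage, which is what motivates the reverse-regression first step in the abstract.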

Relevance: 30.00%

Abstract:

Solving multi-stage oligopoly models by backward induction can easily become a complex task when firms are multi-product and demands are derived from a nested logit framework. This paper shows that, under the assumption that within-segment firm shares are equal across segments, the analytical expression for equilibrium profits can be substantially simplified. The size of the error arising when this condition does not hold perfectly is also computed. Through numerical examples, it is shown that the error is generally rather small. Using this assumption therefore provides analytical tractability in a class of models that has been used to approach relevant policy questions, such as firm entry into an industry or the relation between competition and location. The simplifying approach proposed in this paper is aimed at helping to improve these types of models so they can deliver more accurate recommendations.

Relevance: 30.00%

Abstract:

Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects predictive performance of SDMs, and (2) any observed effects are dependent on the type of region, modelling technique, or species considered. Results show that a 10 times change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effect is on regions and species types, with tree species in the data sets (regions) with highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. Effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data that have an intrinsic error smaller than the coarser grain size.
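The 10-fold coarsening of the environmental layers can be mimicked by block-averaging the fine-grained raster; a hypothetical sketch:

```python
# Coarsen a raster by a given factor: each coarse cell is the mean of a
# factor x factor block of fine cells (illustrative, not the study's GIS code).
def coarsen(grid, factor):
    nr, nc = len(grid), len(grid[0])
    return [[sum(grid[i][j]
                 for i in range(r, r + factor)
                 for j in range(c, c + factor)) / factor ** 2
             for c in range(0, nc, factor)]
            for r in range(0, nr, factor)]

fine = [[float(r * 20 + c) for c in range(20)] for r in range(20)]
coarse = coarsen(fine, 10)    # 20x20 grid -> 2x2 grid
```

Block-averaging smooths local environmental variation, which is one mechanism by which a coarser grain can degrade a species distribution model fitted to precisely located occurrences.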

Relevance: 30.00%

Abstract:

Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. From the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.

Relevance: 30.00%

Abstract:

In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to completely eliminate this bias, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on data from Bisbe and Otley (in press), who examine the interaction effect of innovation and style of budget use on performance. Sizeable differences emerge between OLS and disattenuated regression.
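Disattenuated regression rests on Spearman's correction for attenuation, which divides an observed correlation by the square root of the product of the two measures' reliabilities; a minimal sketch with illustrative numbers (not those of Bisbe and Otley):

```python
import math

# Spearman's correction for attenuation: estimate the correlation between
# the true scores from the observed correlation and the scale reliabilities.
def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    return r_xy / math.sqrt(rel_x * rel_y)

# Illustrative: observed r = .30 with reliabilities .70 and .80
# corrects upward to roughly .40.
r_corrected = disattenuate(0.30, 0.70, 0.80)
```

Because the correction inflates observed associations, modest differences in assumed reliabilities can produce the sizeable OLS-versus-disattenuated gaps the abstract reports.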

Relevance: 30.00%

Abstract:

BACKGROUND Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. METHODS Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20% and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physician's perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of the representativeness, availability, and anchoring-and-adjustment heuristics in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine whether the diagnostic error variables differ according to the heuristics identified.
DISCUSSION This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies that allow immediate recording of the decision-making process.