81 results for errors and erasures decoding


Relevance: 100.00%

Abstract:

Infrared spectroscopy (FTIR) is a technique of choice for analyzing spray paint specimens (i.e., traces) and reference samples (i.e., cans seized from suspects), thanks to its high discriminating power, sensitivity, and sampling possibilities. Spectra are currently compared visually, but this procedure has limitations such as the subjectivity of the decision, which depends on the experience and training of the expert. Small differences in the relative intensity of two peaks can therefore be perceived differently by experts, even between analysts working in the same laboratory. When it comes to justifying these differences, some will attribute them to the analytical technique, while others will consider them an intrinsic variability of the paint sample and/or of its acquired characteristics (for example homogeneity, spraying, or degradation). This work proposes to statistically study the different sources of variability observed in infrared spectra, in order to identify them, understand them, and try to minimize them. The second main goal is to propose a procedure for spectral comparison that is more transparent and yields reproducible answers independent of the expert consulted. The first part of the manuscript focuses on the optimization of the infrared measurement and on the main analytical parameters. The conditions necessary to obtain reproducible spectra that minimize the variation within a sample (intra-variability) are presented. A spectral correction procedure is then proposed, using pretreatments and variable selection, in order to minimize the remaining systematic and random errors and to maximize the relevant chemical information. The second part presents a market study of 74 spray paints representative of the Swiss market. The discrimination capabilities of FTIR at the brand and model level are evaluated by means of a visual procedure and compared with several statistical procedures. The lower limits of discrimination are tested on paints of the same brand and model but from different production batches. The results showed that the pigment composition was particularly discriminating, because of the corrections and color adjustments made during the manufacturing process. The features associated with spray paint traces (graffiti, droplets) were also tested. Three elements were identified and their influence on the resulting infrared spectrum was tested: 1) the minimum shaking time necessary to obtain sufficient homogeneity of the paint and, consequently, of the painted surface; 2) the degradation initiated by ultraviolet radiation outdoors; and 3) the contamination from the support when the paint is recovered. Finally, a population study was performed on 35 graffiti from the city of Lausanne and surrounding areas, and the results were compared with the market study of spray cans. The last part of this work concentrated on the decision step in the pairwise comparison of spectra: first, current practice among laboratories was surveyed by means of a questionnaire; then, a statistical comparison method was proposed to improve the objectivity and transparency of the decision. A comparison method based on the correlation between spectra is proposed and then combined with a Bayesian evaluation of the evidence at both the source and the activity level. Finally, practical examples are presented and the methodology is discussed in order to define the respective roles of the expert and of the statistics within the global analytical sequence for paint examinations.
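No implementation accompanies the abstract; as a hedged illustration of a correlation-based pairwise comparison of the kind described, the following Python sketch scores the similarity of two preprocessed spectra. The SNV pretreatment, function names, and toy data are illustrative assumptions, not the thesis's actual procedure; in a full workflow, such scores would feed the Bayesian evaluation by comparing their distributions for same-source and different-source pairs.

```python
import numpy as np

def snv(spectrum):
    """Standard Normal Variate: center and scale one spectrum.
    (A common spectral pretreatment; the thesis's actual choices may differ.)"""
    s = np.asarray(spectrum, dtype=float)
    return (s - s.mean()) / s.std()

def comparison_score(spectrum_a, spectrum_b):
    """Pearson correlation between two pretreated spectra sampled on the
    same wavenumber grid; scores near 1 suggest indistinguishable spectra."""
    return float(np.corrcoef(snv(spectrum_a), snv(spectrum_b))[0, 1])

# Toy usage: two noisy measurements of the same synthetic "spectrum".
rng = np.random.default_rng(0)
x = np.linspace(600, 4000, 1700)                    # wavenumber axis (cm^-1)
base = np.exp(-((x - 1730) / 40) ** 2) + 0.5 * np.exp(-((x - 2900) / 60) ** 2)
s1 = base + rng.normal(0.0, 0.01, x.size)
s2 = base + rng.normal(0.0, 0.01, x.size)
print(comparison_score(s1, s2))                     # close to 1.0
```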

Relevance: 100.00%

Abstract:

Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. SR is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has largely favored Total Variation (TV) energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques can be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal data and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms to residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization relative to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
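The fast convex solver evaluated in the paper is not reproduced here; the sketch below only illustrates the generic variational formulation it builds on, i.e. minimizing a data-fidelity term plus a TV regularizer by explicit gradient descent. The block-averaging forward operator, smoothing parameter, and step size are simplifying assumptions, not the paper's actual model.

```python
import numpy as np

def degrade(x, f=2):
    """Toy forward operator A: block-average downsampling, standing in for
    the motion + blur + decimation model of a real SR problem."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def degrade_adjoint(y, f=2):
    """Adjoint of block averaging: spread each LR value over its HR block."""
    return np.kron(y, np.ones((f, f))) / f**2

def smoothed_tv_grad(x, eps=1e-3):
    """Gradient of a smoothed (differentiable) Total Variation term."""
    gx = np.diff(x, axis=0, append=x[-1:, :])       # forward differences
    gy = np.diff(x, axis=1, append=x[:, -1:])
    norm = np.sqrt(gx**2 + gy**2 + eps**2)
    px, py = gx / norm, gy / norm
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div                                      # grad TV = -divergence

def sr_reconstruct(y, f=2, lam=0.05, step=0.2, iters=300):
    """Minimize ||A x - y||^2 + lam * TV_eps(x) by explicit gradient descent
    (the generic scheme; the paper's contribution is a faster convex solver)."""
    x = np.kron(y, np.ones((f, f)))                  # nearest-neighbor start
    for _ in range(iters):
        grad = 2 * degrade_adjoint(degrade(x, f) - y, f) \
               + lam * smoothed_tv_grad(x)
        x -= step * grad
    return x

# Toy usage: recover a piecewise-constant image from its 2x downsampling.
hr = np.zeros((32, 32)); hr[8:24, 8:24] = 1.0
lr = degrade(hr)
print(np.abs(sr_reconstruct(lr) - hr).mean())        # mean absolute error
```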

Relevance: 100.00%

Abstract:

In recent years, an explosion of interest in neuroscience has led to the development of "neuro-law," a new multidisciplinary field of knowledge whose aim is to examine the impact and role of neuroscientific findings in legal proceedings. Neuroscientific evidence is increasingly being used in US and European courts in criminal trials, as part of psychiatric testimony, nourishing the debate about the legal implications of brain research in psychiatric-legal settings. In these proceedings, the role of forensic psychiatrists is crucial. In most criminal justice systems, their mission consists of two basic tasks: assessing the offender's degree of responsibility and evaluating their future dangerousness. In the first part of our research, we examine the impact of neuroscientific evidence on the assessment of criminal responsibility, a key concept of law. An initial review of the case law leads to the conclusion that there are significant difficulties and limitations in using neuroscience to assess criminal responsibility. In the current socio-legal context, responsibility assessments are progressively being weakened, whereas dangerousness assessments gain increasing importance in the field of forensic psychiatry. In the second part of our research, we concentrate on the impact of using neuroscience to assess dangerousness. We argue that in the current policy era of zero tolerance, judges, confronted with the pressure to ensure public security, may tend to interpret neuroscientific knowledge and data as an objective and reliable way of evaluating an offender's dangerousness and risk of reoffending, rather than their responsibility. This tendency could be encouraged by a utilitarian approach to punishment, advanced by some recent neuroscientific research that calls into question the existence of free will and responsibility and argues for a rejection of the retributive theory of punishment. Although this shift away from punishment aimed at retribution, in favor of a consequentialist approach to criminal law, is advanced by some authors as a more progressive and humane approach, we believe that it could lead to the instrumentalisation of neuroscience in the interest of public safety, which can run counter to the proper exercise of justice and the civil liberties of offenders. By advancing a criminal law regime animated by the consequentialist aim of avoiding social harm through rehabilitation, neuroscience promotes a return to a therapeutic approach to crime, which can have a serious impact on the kind and length of sentences imposed on offenders: if neuroscientific data are interpreted as evidence of dangerousness rather than responsibility, judges are likely to impose heavier sentences and/or security measures (in civil law systems), which can be indeterminate in length. The errors and epistemic traps of past criminological movements that tried to explain violent and deviant behavior on a biological, deterministic basis underline the need for caution in the use of modern neuroscientific methods in criminal proceedings.

Relevance: 100.00%

Abstract:

Gram-negative bacteria represent a major group of pathogens that infect all eukaryotes, from plants to mammals. Gram-negative microbe-associated molecular patterns include lipopolysaccharides (LPS) and peptidoglycans, major immunostimulatory determinants across phyla. Recent advances have furthered our understanding of Gram-negative detection beyond well-defined pattern recognition receptors such as TLR4. A B-type lectin receptor for LPS and lysine-motif-containing receptors for peptidoglycans were recently added to the plant arsenal. Caspases join the ranks of mammalian cytosolic immune detectors by binding LPS, making TLR4 redundant for septic shock. Fascinating bacterial evasion mechanisms lure the host into tolerance or promote inter-bacterial competition. Our review aims to cover recent advances in bacterial messages and host decoding systems across phyla, and to highlight evolutionarily recurrent strategies.

Relevance: 40.00%

Abstract:

Neuroimaging studies analyzing neurophysiological signals are typically based on comparing averages of peri-stimulus epochs across experimental conditions. This approach can, however, be problematic for high-level cognitive tasks, where response variability across trials is expected to be high, and in cases where subjects cannot be considered part of a group. The main goal of this thesis has been to address this issue by developing a novel approach for analyzing electroencephalography (EEG) responses at the single-trial level. This approach takes advantage of the spatial distribution of the electric field on the scalp (topography) and exploits repetitions across trials to quantify the degree of discrimination between experimental conditions through a classification scheme. In the first part of this thesis, I developed and validated this new method (Tzovara et al., 2012a,b). Its general applicability was demonstrated with three separate datasets, two in the visual modality and one in the auditory modality. This development opened two new lines of research, one in basic and one in clinical neuroscience, which represent the second and third parts of this thesis, respectively. For the second part (Tzovara et al., 2012c), I employed the developed method to assess the timing of exploratory decision-making. Using single-trial topographic EEG activity during the presentation of a choice's payoff, I could predict the subjects' subsequent decisions (in terms of exploration versus exploitation). This prediction was based on a topographic difference that appeared on average at ~516 ms after the presentation of the payoff and was subject-specific. These results exploit for the first time the temporal correlates of individual subjects' decisions, and additionally show that the underlying neural generators start differentiating their responses already ~880 ms before the button press. Finally, in the third part of this project, I focused on a clinical study whose goal was to assess the degree of intact neural function in comatose patients. Auditory EEG responses were assessed through a classical mismatch negativity paradigm during the very early phase of coma, which is currently under-investigated. By taking advantage of the decoding method developed in the first part of the thesis, I could quantify the degree of auditory discrimination at the single-patient level (Tzovara et al., in press). Our results showed for the first time that even patients who do not survive the coma can discriminate sounds at the neural level during the first hours after coma onset. Importantly, an improvement in auditory discrimination during the first 48 hours of coma was predictive of awakening and survival, with a 100% positive predictive value.
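The thesis's method models single-trial topographies explicitly; the sketch below is not that method but a generic stand-in showing the overall classification scheme it describes, i.e. cross-validated discrimination of two conditions from one scalp topography per trial. Electrode count, classifier choice, and the toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def single_trial_decoding(trials, labels, n_folds=5):
    """Cross-validated decoding of experimental conditions from single-trial
    scalp topographies. `trials` has shape (n_trials, n_electrodes): one
    topography (e.g., the mean map in a time window) per trial; `labels`
    codes each trial's condition. Returns mean decoding accuracy, where
    chance is ~0.5 for two balanced conditions."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, trials, labels, cv=n_folds).mean()

# Toy usage: 200 trials, 64 electrodes, two conditions with a small
# condition-specific topography buried in trial-to-trial noise.
rng = np.random.default_rng(1)
pattern = rng.normal(0, 1, 64)                      # condition-specific map
labels = np.repeat([0, 1], 100)
trials = rng.normal(0, 1, (200, 64)) + 0.3 * np.outer(labels, pattern)
print(single_trial_decoding(trials, labels))        # above 0.5 = discriminable
```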

Relevance: 40.00%

Abstract:

It is essential for organizations to compress detailed sets of information into more comprehensive sets, thereby establishing sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal level of information aggregation. Beyond the fundamental information-content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate the mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike means, measures such as the mode or the median cannot simply be added up. Given the specific (right-skewed) shape of cost and duration distributions, estimating mode or median values results in underestimates of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of the estimates for total project costs and durations. I also provide a strategy that partly mitigates this bias. In chapter 3, I conduct an experimental study comparing two approaches to time estimation for cost accounting: traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited than traditional ABC for cost allocations.
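A small numerical example makes the aggregation bias concrete. Assuming lognormal cost distributions (an illustrative choice of right-skewed distribution, not necessarily the one used in the experiments), only sums of means recover the expected total:

```python
import numpy as np

# Each project task has a right-skewed (here lognormal) cost distribution.
# For a lognormal with parameters (mu, sigma):
#   mean   = exp(mu + sigma**2 / 2)
#   median = exp(mu)
#   mode   = exp(mu - sigma**2)
mu, sigma, n_tasks = 2.0, 0.8, 10

mean_cost = np.exp(mu + sigma**2 / 2)
median_cost = np.exp(mu)
mode_cost = np.exp(mu - sigma**2)

# Only means add up: E[sum] = sum of E. Adding modes or medians instead
# systematically underestimates the expected total project cost.
print("sum of means  :", n_tasks * mean_cost)       # ~101.8 -> correct total
print("sum of medians:", n_tasks * median_cost)     # ~73.9  -> underestimate
print("sum of modes  :", n_tasks * mode_cost)       # ~39.0  -> underestimate

# Monte Carlo check of the expected total:
rng = np.random.default_rng(0)
totals = rng.lognormal(mu, sigma, (100000, n_tasks)).sum(axis=1)
print("simulated E[total]:", totals.mean())         # close to sum of means
```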

Relevance: 40.00%

Abstract:

Background: Medical errors have recently been recognized as a relevant concern in public health, and increasing research efforts have been made to find ways of improving patient safety. In palliative care, however, studies on errors are scant. Objective: Our aim was to gather pilot data concerning the experiences and attitudes of palliative care professionals on this topic. Methods: We developed a questionnaire consisting of questions on the relevance, estimated frequency, kinds, and severity of errors, their causes and consequences, and the way palliative care professionals handle them. The questionnaire was sent to all specialist palliative care institutions in the region of Bavaria, Germany (n=168; 12.5 million inhabitants), reaching a response rate of 42% (n=70). Results: Errors in palliative care were regarded as a highly relevant problem (median 8 on a 10-point numeric rating scale). Most respondents reported a moderate frequency of errors (1-10 per 100 patients). Errors in communication were estimated to be more common than errors in symptom control. The causes mentioned most often were deficits in communication or organization. Moral and psychological problems for the person committing the error were seen as more frequent than consequences for the patient. Ninety percent of respondents declared that they disclose errors to the harmed patient. For 78% of the professionals, the issue had not been part of their professional training. Conclusion: Professionals acknowledge errors, in particular errors in communication, to be a common and relevant problem in palliative care, one that has, however, been neglected in training and research.

Relevance: 40.00%

Abstract:

The construct of cognitive errors is clinically relevant for cognitive therapy of mood disorders. Beck's universality hypothesis postulates the relevance of negative cognitions in all subtypes of mood disorders, as well as of positive cognitions in manic states. This hypothesis has rarely been addressed empirically for patients presenting with bipolar affective disorder (BD). In-patients (n = 30) presenting with BD were interviewed, as were 30 matched control participants. A valid and reliable observer-rater methodology for cognitive errors was applied to the session transcripts. Overall, patients made more cognitive errors than controls. When manic and depressive patients were compared, parts of the universality hypothesis were confirmed: manic symptoms were related to both positive and negative cognitive errors. These results are discussed with regard to the main assumptions of the cognitive model of depression, adding an argument for extending it to the BD diagnostic group while taking its specificities in terms of cognitive errors into consideration. Clinical implications for cognitive therapy of BD are suggested.

Relevance: 40.00%

Abstract:

BACKGROUND: Analyses of brain responses to external stimuli are typically based on means computed across conditions. However, in many cognitive and clinical applications, taking the variability across trials into account has turned out to be statistically more sensitive than comparing means. NEW METHOD: In this study we present a novel implementation of single-trial topographic analysis (STTA) for discriminating auditory evoked potentials within predefined time windows. This analysis was previously introduced for extracting spatio-temporal features at the level of the whole neural response; adapting the STTA to specific time windows is an essential step for comparing its performance to other time-window-based algorithms. RESULTS: We analyzed responses to standard vs. deviant sounds and showed that the new implementation of the STTA gives above-chance decoding results in all subjects (compared with 7 out of 11 with the original method). In comatose patients, the improvement in decoding performance was even more pronounced than in healthy controls and doubled the number of significant results. COMPARISON WITH EXISTING METHOD(S): We compared the results obtained with the new STTA to those based on logistic regression, in healthy controls and in patients. In healthy controls, logistic regression performed better; however, only the new STTA provided significant results in comatose patients at the group level. CONCLUSIONS: Our results provide quantitative evidence that systematically investigating the accuracy of established methods in normal and clinical populations is an essential step for optimizing decoding performance.
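As a hedged sketch of what time-window-based decoding of this kind involves (not the STTA itself, whose topographic modeling is more elaborate), the following example averages each epoch over a predefined window and decodes the two conditions with cross-validated logistic regression, the baseline the study compares against. All shapes and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def window_features(epochs, times, t_start, t_end):
    """Average each epoch over a predefined time window.
    `epochs`: (n_trials, n_electrodes, n_samples); `times`: (n_samples,) in s.
    Returns one scalp topography per trial, shape (n_trials, n_electrodes)."""
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, :, mask].mean(axis=2)

def decode_window(epochs, labels, times, t_start, t_end, cv=5):
    """Cross-validated decoding accuracy within one predefined window,
    e.g. the mismatch-negativity latency range for standards vs. deviants."""
    X = window_features(epochs, times, t_start, t_end)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=cv).mean()

# Toy usage: 120 trials, 32 electrodes, 100 samples spanning -0.1 to 0.4 s,
# with a condition effect confined to the 0.1-0.25 s window.
rng = np.random.default_rng(2)
times = np.linspace(-0.1, 0.4, 100)
labels = np.repeat([0, 1], 60)
epochs = rng.normal(0, 1, (120, 32, 100))
effect = np.outer(rng.normal(0, 1, 32), (times > 0.1) & (times < 0.25))
epochs += 0.4 * labels[:, None, None] * effect[None, :, :]
print(decode_window(epochs, labels, times, 0.1, 0.25))  # above chance (~0.5)
```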

Relevance: 30.00%

Abstract:

Among the largest resources for biological sequence data is the large number of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors, which must be taken into account when EST sequences are analyzed computationally. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting such regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
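The paper's full mRNA model with error-correcting states is not reproduced here; a minimal Viterbi decoding sketch over a toy two-state gene model (all transition and emission probabilities are illustrative) shows the HMM machinery on which such an approach rests.

```python
import numpy as np

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most probable state path for an observation sequence under an HMM,
    in log space. log_trans[i, j]: i -> j; log_emit[i, o]: state i emits o."""
    n, m = len(obs), len(states)
    v = np.full((n, m), -np.inf)
    back = np.zeros((n, m), dtype=int)
    v[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        scores = v[t - 1][:, None] + log_trans      # (from, to) score matrix
        back[t] = scores.argmax(axis=0)
        v[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(v[-1].argmax())]
    for t in range(n - 1, 0, -1):                   # trace back the best path
        path.append(int(back[t][path[-1]]))
    return [states[i] for i in reversed(path)]

# Toy two-state gene model: untranslated (UTR) vs. coding regions, with
# different nucleotide biases standing in for a full codon-usage model.
states = ["UTR", "CODING"]
nt = {"A": 0, "C": 1, "G": 2, "T": 3}
log_start = np.log([0.9, 0.1])
log_trans = np.log([[0.95, 0.05],
                    [0.05, 0.95]])
log_emit = np.log([[0.30, 0.20, 0.20, 0.30],   # UTR: roughly uniform
                   [0.15, 0.35, 0.35, 0.15]])  # coding: GC-rich bias
seq = [nt[c] for c in "ATATTAGCGCGCCGGCATTA"]
print(viterbi(seq, states, log_start, log_trans, log_emit))
```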

Relevance: 30.00%

Abstract:

A collaborative exercise was carried out by the European DNA Profiling Group (EDNAP) in order to evaluate the distribution of mitochondrial DNA (mtDNA) heteroplasmy amongst the hairs of an individual who displays point heteroplasmy in blood and buccal cells. A second aim of the exercise was to study the reproducibility of mtDNA sequencing of hairs between laboratories using differing chemistries, further to the first mtDNA reproducibility study carried out by the EDNAP group. Laboratories were asked to type two sections from each of 10 hairs, such that each hair was typed by at least two laboratories. Ten laboratories participated in the study, and a total of 55 hairs were typed. The results showed that the C/T point heteroplasmy observed in blood and buccal cells at position 16234 segregated differentially between hairs, such that some hairs showed only C, others only T, and the remainder C/T heteroplasmy at varying ratios. Additionally, differential segregation of heteroplasmic variants was confirmed in independent extracts at position 16093 and in the poly(C) tract at positions 302-309, whilst a complete A-G transition was confirmed at position 16129 in one hair. Heteroplasmy was observed at position 16195 on both strands of a single extract from one hair segment, but was not observed in the extracts from any other segment of the same hair. Similarly, heteroplasmy at position 16304 was observed on both strands of a single extract from one hair. Additional variants at position 73, position 249, and the HVII poly(C) region were reported by one laboratory; as these were not confirmed in independent extracts, the possibility of contamination cannot be excluded. Moreover, the electrophoresis and detection equipment used by this laboratory differed from that of the other laboratories, and the discrepancies at position 249 and the HVII poly(C) region appear to be due to reading errors that may be associated with this technology. The results, and their implications for forensic mtDNA typing, are discussed in the light of the biology of hair formation.