43 results for Error threshold
at Université de Lausanne, Switzerland
Abstract:
In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error-control criteria are investigated numerically. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. 
Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
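The adaptive residual-based control described above can be sketched in a few lines of Python. The Jacobi sweep below is a hypothetical stand-in for one i-MSFV smoothing iteration, and the dense residual computation is purely illustrative; this is a sketch of the control strategy, not the authors' implementation:

```python
def adaptive_pressure_solve(A, b, p, tol, max_iter=50):
    """Iterate on the pressure solution only while the pressure-equation
    residual exceeds the threshold `tol` (residual-based error control).
    One Jacobi sweep stands in for one smoothing iteration (hypothetical).
    Returns the updated solution and the number of iterations used."""
    n = len(b)
    iters = 0
    while iters < max_iter:
        # residual of the current approximate pressure field
        r = [b[i] - sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        if max(abs(x) for x in r) <= tol:
            break                      # accuracy reached: no extra iterations
        for i in range(n):             # one Jacobi sweep as the "smoother"
            p[i] += r[i] / A[i][i]
        iters += 1
    return p, iters
```

Reusing the converged field of one time step as the starting guess of the next mimics the paper's strategy: a second call seeded with the previous solution returns immediately with zero iterations unless the residual has drifted above the threshold.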
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
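The replicate-based error estimate can be illustrated with a minimal sketch. The genotype encoding (locus ID mapped to an allele pair) is a hypothetical simplification of real pipeline output, not the Stacks data format:

```python
def genotype_error_rate(rep_a, rep_b):
    """Allele-mismatch error rate between two replicate genotype calls,
    under the expectation of identical genotypes. Each input maps
    locus -> pair of alleles; loci missing in either replicate are
    excluded. Hypothetical sketch, not the Stacks pipeline."""
    shared = set(rep_a) & set(rep_b)
    if not shared:
        return None
    # genotypes are unordered, so compare sorted allele pairs
    mismatches = sum(1 for loc in shared
                     if sorted(rep_a[loc]) != sorted(rep_b[loc]))
    return mismatches / len(shared)
```

Sweeping assembly parameters and recomputing this rate for each setting gives the kind of minimize-error / maximize-loci optimization the abstract describes.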
Abstract:
Sleep spindles are synchronized 11-15 Hz electroencephalographic (EEG) oscillations predominant during non-rapid-eye-movement sleep (NREMS). Rhythmic bursting in the reticular thalamic nucleus (nRt), arising from interplay between Cav3.3-type Ca²⁺ channels and Ca²⁺-dependent small-conductance type 2 (SK2) K⁺ channels, underlies spindle generation. Correlative evidence indicates that spindles contribute to memory consolidation and protection against environmental noise in human NREMS. Here, we describe a molecular mechanism through which spindle power is selectively extended and we probed the actions of intensified spindling in the naturally sleeping mouse. Using electrophysiological recordings in acute brain slices from SK2 channel-overexpressing (SK2-OE) mice, we found that nRt bursting was potentiated and thalamic circuit oscillations were prolonged. Moreover, nRt cells showed greater resilience to the transition from burst to tonic discharge in response to gradual depolarization, mimicking transitions out of NREMS. Compared with wild-type littermates, chronic EEG recordings of SK2-OE mice contained less fragmented NREMS, while the NREMS EEG power spectrum was conserved. Furthermore, EEG spindle activity was prolonged at NREMS exit. Finally, when exposed to white noise, SK2-OE mice needed stronger stimuli to arouse. Increased nRt bursting thus strengthens spindles and improves sleep quality through mechanisms independent of EEG slow waves (<4 Hz), suggesting SK2 signaling as a new potential therapeutic target for sleep disorders and for neuropsychiatric diseases accompanied by weakened sleep spindles.
Abstract:
The mitochondrial tRNA(Leu(UUR)) mutation m.3302A>G is associated with respiratory chain complex I deficiency and has been described as a rare cause of mostly adult-onset, slowly progressive myopathy. Five families with 11 patients have been described so far; 5 of them died young due to cardiorespiratory failure. Here, we report a segregation study in a family whose index patient already presented at the age of 18 months with proximal muscular hypotonia, abnormal fatigability, and lactic acidosis. This early-onset myopathy was rapidly progressive. At 8 years, the patient is wheelchair-bound, requires nocturnal assisted ventilation, and suffers from recurrent respiratory infections. Severe complex I deficiency and near-homoplasmy for m.3302A>G were found in muscle. We collected blood, hair, buccal swabs and muscle biopsies from asymptomatic adults in this pedigree and determined heteroplasmy levels in these tissues as well as OXPHOS activities in muscle. All participating asymptomatic adults had normal OXPHOS activities. In contrast to earlier reports, we found surprisingly little variation in heteroplasmy levels across different tissues of the same individual. Mutation loads of up to 45% in muscle and up to 38% in other tissues were found in non-affected adults. The phenotypic spectrum of the tRNA(Leu(UUR)) m.3302A>G mutation thus seems to be wider than previously described, and a heteroplasmy of more than 45% in muscle seems to be necessary to reduce complex I activity enough for clinical manifestation. The presented data may be helpful for prognostic considerations and counseling in affected families.
Abstract:
Zero correlation between measurement error and model error has been assumed in existing panel data models that deal specifically with measurement error. We extend this literature and propose a simple model in which one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is the special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions the answer to both questions is yes. We then propose a two-step estimation procedure corresponding to the two questions: the first step estimates the correlated measurement error from a reverse regression, and the second step estimates the usual coefficients of interest using adjusted instruments.
Abstract:
Summary: Forensic science - both as a source of and as a remedy for error potentially leading to judicial error - has been studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories/units in Switzerland and abroad. The literature review covered, among other areas, the quality of forensic science work in general, the complex interaction between science and law, and specific propositions on error sources not directly related to that interaction. A list of potential error sources, all the way from the crime scene to the writing of the report, was also established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, owing to its heavy reliance on visual observation and to recent cases of misidentification. Results of the tests performed with forensic science students suggest that the decision-making stages are the most vulnerable to stimuli inducing observational biases. In the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, including potential and existing error sources in their work, the limitations of what can be done with forensic science, and the possibilities and tools to minimise errors. Training and education to raise the quality of forensic science were discussed, together with possible solutions to minimise the risk of errors. The length of time that samples of physical evidence are kept was also determined. The results show considerable agreement on most subjects among the international participants.
Their opinions on possible explanations for the occurrence of such problems and the relative weight of such errors in the three stages of crime scene, laboratory, and report writing disagree, however, with opinions widely represented in existing literature. Through the present research it was therefore possible to obtain a better view of the interaction of forensic science and judicial error and to propose practical recommendations to minimise their occurrence.
Abstract:
Humoral factors play an important role in the control of exercise hyperpnea. The role of neuromechanical ventilatory factors, however, is still being investigated. We tested the hypothesis that the afferents of the thoracopulmonary system, and consequently of the neuromechanical ventilatory loop, have an influence on the kinetics of oxygen consumption (VO2), carbon dioxide output (VCO2), and ventilation (VE) during moderate intensity exercise. We did this by comparing the ventilatory time constants (tau) of exercise with and without an inspiratory load. Fourteen healthy, trained men (age 22.6 ± 3.2 yr) performed a continuous incremental cycle exercise test to determine maximal oxygen uptake (VO2max = 55.2 ± 5.8 ml·min⁻¹·kg⁻¹). On another day, after unloaded warm-up they performed randomized constant-load tests at 40% of their VO2max for 8 min, one with and the other without an inspiratory threshold load of 15 cmH₂O. Ventilatory variables were obtained breath by breath. Phase 2 ventilatory kinetics (VO2, VCO2, and VE) could be described in all cases by a monoexponential function. The bootstrap method revealed small coefficients of variation for the model parameters, indicating an accurate determination for all parameters. Paired Student's t-tests showed that the addition of the inspiratory resistance significantly increased the tau during phase 2 of VO2 (43.1 ± 8.6 vs. 60.9 ± 14.1 s; P < 0.001), VCO2 (60.3 ± 17.6 vs. 84.5 ± 18.1 s; P < 0.001) and VE (59.4 ± 16.1 vs. 85.9 ± 17.1 s; P < 0.001). The average rise in tau was 41.3% for VO2, 40.1% for VCO2, and 44.6% for VE. The tau changes indicated that neuromechanical ventilatory factors play a role in the ventilatory response to moderate exercise.
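The phase 2 kinetics above follow a monoexponential V(t) = baseline + A·(1 − e^(−t/τ)). A toy grid-search fit of τ is sketched below on synthetic, noiseless data; real analyses use nonlinear least squares with bootstrap confidence intervals, and the baseline, amplitude, and candidate range here are illustrative assumptions:

```python
import math

def fit_tau(times, values, baseline, amplitude, taus):
    """Grid-search the time constant tau of the monoexponential
    V(t) = baseline + amplitude * (1 - exp(-t / tau))
    by minimising the sum of squared residuals over candidate taus.
    Sketch only; real fits use nonlinear least squares."""
    def sse(tau):
        return sum((v - (baseline + amplitude * (1 - math.exp(-t / tau)))) ** 2
                   for t, v in zip(times, values))
    return min(taus, key=sse)
```

With clean data generated at τ = 43 s (the unloaded VO2 value reported above), the grid search recovers 43 exactly, since the residual vanishes there.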
Abstract:
A low arousal threshold is believed to predispose to breathing instability during sleep. The present authors hypothesised that trazodone, a nonmyorelaxant sleep-promoting agent, would increase the effort-related arousal threshold in obstructive sleep apnoea (OSA) patients. In total, nine OSA patients (mean ± SD age 49 ± 9 yrs, apnoea/hypopnoea index 52 ± 32 events·h⁻¹) were studied on 2 nights, one with trazodone at 100 mg and one with a placebo, in a double-blind randomised fashion. While the patients received continuous positive airway pressure (CPAP), repeated arousals were induced: 1) by increasing inspired CO₂; and 2) by stepwise decreases in CPAP level. Respiratory effort was measured with an oesophageal balloon. End-tidal CO₂ tension (PET,CO₂) was monitored with a nasal catheter. During trazodone nights, compared with placebo nights, the arousals occurred at a higher PET,CO₂ level (mean ± SD 7.30 ± 0.57 versus 6.62 ± 0.64 kPa (54.9 ± 4.3 versus 49.8 ± 4.8 mmHg), respectively). When arousals were triggered by an increasing inspired CO₂ level, the maximal oesophageal pressure swing was greater (19.4 ± 4.0 versus 13.1 ± 4.9 cmH₂O) and the oesophageal pressure nadir before the arousals was lower (-5.1 ± 4.7 versus -0.38 ± 4.2 cmH₂O) with trazodone. When arousals were induced by stepwise CPAP drops, the maximal oesophageal pressure swings before the arousals did not differ. Trazodone at 100 mg increased the effort-related arousal threshold in response to hypercapnia in obstructive sleep apnoea patients and allowed them to tolerate higher CO₂ levels.
Abstract:
In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer: the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study; four computed radiography (CR) systems, and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). Approximate quantum noise limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum noise limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group compared to the CR systems; MTF at 5 mm⁻¹ varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm⁻¹ than the powder CR phosphors. DQE at 5 mm⁻¹ ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm⁻¹ for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibiting typical performance for the detector technology employed in the respective systems.
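The three frequency-dependent metrics above are linked by the standard relation DQE(f) = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence. A one-line sketch with purely illustrative numbers (not data from the study, which follows the formal IEC-style measurement procedure):

```python
def dqe(mtf, nnps, q):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)): detective quantum efficiency
    from the presampling MTF, the normalized noise power spectrum, and
    the incident photon fluence q (illustrative values only)."""
    return [m * m / (q * n) for m, n in zip(mtf, nnps)]
```

An ideal detector (MTF = 1 and quantum-limited noise, NNPS = 1/q) gives DQE = 1; blur and added noise push it toward the 0.02-0.41 range reported above.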
Abstract:
OBJECTIVES: Hypoglycaemia (glucose <2.2 mmol/l) is a defining feature of severe malaria, but the significance of other levels of blood glucose has not previously been studied in children with severe malaria. METHODS: A prospective study of 437 consecutive children with presumed severe malaria was conducted in Mali. We defined hypoglycaemia as <2.2 mmol/l, low glycaemia as 2.2-4.4 mmol/l and hyperglycaemia as >8.3 mmol/l. Associations between glycaemia and case fatality were analysed for 418 children using logistic regression models and a receiver operating characteristic (ROC) curve. RESULTS: There was a significant difference between blood glucose levels in children who died (median 4.6 mmol/l) and survivors (median 7.6 mmol/l, P < 0.001). Case fatality declined from 61.5% of the hypoglycaemic children to 46.2% of those with low glycaemia, 13.4% of those with normal glycaemia and 7.6% of those with hyperglycaemia (P < 0.001). Logistic regression showed an adjusted odds ratio (AOR) of 0.75 (0.64-0.88) for case fatality per 1 mmol/l increase in baseline blood glucose. Compared to a normal blood glucose, hypoglycaemia and low glycaemia both significantly increased the odds of death (AOR 11.87, 2.10-67.00; and 5.21, 1.86-14.63, respectively), whereas hyperglycaemia reduced the odds of death (AOR 0.34, 0.13-0.91). The ROC analysis [area under the curve 0.753 (95% CI 0.684-0.820)] indicated that glycaemia had a moderate predictive value for death and identified an optimal threshold at glycaemia <6.1 mmol/l (sensitivity 64.5%, specificity 75.1%). CONCLUSIONS: If there is a threshold of blood glucose which defines a worse prognosis, it is at a higher level than the current definition of 2.2 mmol/l.
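Because log-odds are linear in a logistic model, the reported AOR of 0.75 per 1 mmol/l compounds multiplicatively over larger glucose differences. A minimal sketch of that arithmetic and of the ROC-derived cut-off; the risk labels are illustrative, not a clinical rule:

```python
def death_odds_multiplier(glucose_change_mmol, aor_per_mmol=0.75):
    """Multiplicative change in the odds of death for a given change in
    baseline glucose, using the abstract's adjusted odds ratio of 0.75
    per 1 mmol/l: log-odds are linear, so odds ratios compound."""
    return aor_per_mmol ** glucose_change_mmol

def classify(glucose_mmol, threshold=6.1):
    """Flag children below the ROC-derived optimal threshold of 6.1 mmol/l
    (illustrative labelling only)."""
    return "higher-risk" if glucose_mmol < threshold else "lower-risk"
```

For example, the 3 mmol/l gap between the median glucose of fatal cases (4.6) and survivors (7.6) corresponds to an odds multiplier of 0.75³ ≈ 0.42.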
Abstract:
In the assessment of medical malpractice, imaging methods can be used to document crucial morphological findings that argue for or against an iatrogenically caused injury. The clarification of deaths in this context can usefully be supported by postmortem imaging (primarily unenhanced computed tomography, angiography and magnetic resonance imaging). Compared with autopsy, postmortem imaging offers significant additional information in the detection of iatrogenic air embolisms and in documenting misplaced medical aids before dissection, which carries an inherent danger of dislocating them. Postmortem imaging also supplies additional information in the search for sources of bleeding and in documenting perfusion after cardiovascular surgery. Key criteria for the decision to perform postmortem imaging can be obtained from the necessary preliminary inspection of the clinical documentation.
Abstract:
Many complex systems are described not by a single complex network but by a number of networks mapped onto each other in a multi-layer structure. Because of the interactions and dependencies between these layers, the state of a single layer does not necessarily reflect well the state of the entire system. In this paper we study the robustness of five examples of two-layer complex systems: three real-life data sets in the fields of communication (the Internet), transportation (the European railway system), and biology (the human brain), and two models based on random graphs. In order to cover the whole range of features specific to these systems, we focus on two extreme policies for the system's response to failures: no rerouting and full rerouting. Our main finding is that multi-layer systems are much more vulnerable to errors and intentional attacks than they appear from a single-layer perspective.
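A toy version of the "no rerouting" intuition can be sketched as follows: a node pair counts as connected only if the connecting edge exists in both layers, so the largest component shrinks relative to the single-layer view. This is a deliberately simplified sketch (edge intersection on a shared node set), not the paper's model of the real data sets:

```python
from collections import deque

def largest_component(nodes, edges):
    """Size of the largest connected component, via BFS."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        seen.add(start)
        comp, q = 0, deque([start])
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best

def two_layer_robustness(nodes, layer1, layer2, failed):
    """Fraction of surviving nodes in the largest component when an edge
    must exist in *both* layers after the nodes in `failed` are removed:
    a toy 'no rerouting' view of a two-layer system (sketch only)."""
    alive = [n for n in nodes if n not in failed]
    if not alive:
        return 0.0
    both = [e for e in layer1 if e in layer2]
    kept = [(u, v) for u, v in both if u in alive and v in alive]
    return largest_component(alive, kept) / len(alive)
```

Even with no failures at all, a layer that looks fully connected on its own can fragment once both layers must agree, which is the single-layer blind spot the abstract points to.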