941 results for False negatives


Relevance: 20.00%

Abstract:

BACKGROUND Aortic dissection is a severe pathological condition in which blood penetrates between the layers of the aortic wall and creates a second channel, the false lumen. This considerable change in aortic morphology dramatically alters the hemodynamics and, in the case of rupture, leads to markedly high rates of morbidity and mortality. METHODS In this study, we establish a patient-specific computational model and simulate the pulsatile blood flow within the dissected aorta. The k-ω SST turbulence model is employed to represent the flow, and the finite volume method is applied for the numerical solution. Our emphasis is on the flow exchange between the true and false lumen during the cardiac cycle and on quantifying the flow across specific passages. Loading distributions, including pressure and wall shear stress, have also been investigated, and results of direct simulations are compared with solutions employing appropriate turbulence models. RESULTS Our results indicate that (i) high velocities occur at the periphery of the entries; (ii) for the case studied, approximately 40% of the blood flow passes through the false lumen during a heartbeat cycle; (iii) higher pressures are found at the outer wall of the dissection, which may induce further dilation of the pseudo-lumen; (iv) the highest wall shear stresses occur around the entries, perhaps indicating the vulnerability of this region to further splitting; and (v) laminar simulations with adequately fine mesh resolutions, especially refined near the walls, can capture flow patterns similar to the (coarser mesh) turbulent results, although the absolute magnitudes computed are in general smaller. CONCLUSIONS The patient-specific model of aortic dissection provides detailed information on blood transport within the true and false lumen and quantifies the loading distributions over the aorta and dissection walls. This contributes to evaluating potential thrombotic behavior in the false lumen and is pivotal in guiding endovascular intervention. Moreover, as a computational study, mesh requirements to successfully evaluate the hemodynamic parameters have been proposed.
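
To illustrate how a per-beat flow split such as the ~40% figure in result (ii) can be quantified, here is a minimal sketch that integrates per-lumen flow-rate waveforms over one cardiac cycle; the waveforms, cycle length, and variable names are illustrative assumptions, not data or code from the study.

```python
import numpy as np

# Hypothetical per-lumen flow-rate waveforms over one cardiac cycle (illustrative, not study data).
T = 0.8                                      # assumed cycle length [s]
t = np.linspace(0.0, T, 400)                 # time samples within one heartbeat
dt = t[1] - t[0]
# Toy systolic pulse: a half-sine during the first 35% of the cycle, zero afterwards.
systole = np.where(t < 0.35 * T, np.sin(np.pi * t / (0.35 * T)), 0.0)

q_true = 260.0 * systole + 15.0              # toy true-lumen flow rate [ml/s]
q_false = 170.0 * systole + 5.0              # toy false-lumen flow rate [ml/s]

# Integrate each waveform over the cycle (simple rectangle rule) to get per-beat volumes,
# then take the false-lumen share of the total flow -- the quantity reported as ~40% above.
vol_true = np.sum(q_true) * dt
vol_false = np.sum(q_false) * dt
fraction_false = vol_false / (vol_true + vol_false)

print(f"False-lumen share of flow per beat: {fraction_false:.1%}")
```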

Relevance: 20.00%

Abstract:

Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously, in order to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easy to identify. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
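
A minimal sketch of the ranking idea described above, predicting a variable's type from the messages sent to it weighted by how often the ecosystem associates those messages with each type, is shown below; the corpus counts, selector names, and the predict_type helper are hypothetical and do not reproduce the prototype.

```python
from collections import defaultdict

# Hypothetical ecosystem statistics: for each message (selector), how often variables
# receiving it were unambiguously inferred to be of a given type across other projects.
ecosystem_counts = {
    "add:":      {"OrderedCollection": 950, "Set": 410},
    "at:put:":   {"Dictionary": 820, "OrderedCollection": 120},
    "includes:": {"Set": 300, "OrderedCollection": 280, "Dictionary": 90},
}

def predict_type(messages_sent):
    """Rank candidate types for a variable by summing ecosystem frequencies
    over the messages observed being sent to it (hypothetical scoring)."""
    scores = defaultdict(int)
    for msg in messages_sent:
        for type_name, count in ecosystem_counts.get(msg, {}).items():
            scores[type_name] += count
    # Most frequently supported type first; an empty list means no prediction.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Usage: a variable receiving #add: and #includes: is most likely an OrderedCollection here.
print(predict_type(["add:", "includes:"]))
```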

Relevance: 20.00%

Abstract:

BACKGROUND Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM Our aim was to challenge the validity of these software algorithms. METHODS We compared multiple breath washout (MBW) results of the current software algorithms (automatic modus) to refined algorithms (manual modus) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between these two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS In infants with CF, the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 with the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant reason for falsely normal LCI values in infants with CF using the automatic modus was premature recognition of the end-of-test during the washout. CONCLUSION We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes.
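
To make the end-of-test issue concrete, here is a hedged sketch of how LCI can be derived from breath-by-breath washout data with an explicit end-of-test rule; the 1/40 (2.5%) end-tidal threshold and the three-consecutive-breaths criterion are common MBW conventions assumed for illustration, and the breath data are invented, not taken from the software packages compared in the study.

```python
# Hypothetical breath-by-breath washout data: end-tidal tracer concentration (%) and
# expired volume (ml) per breath; values are illustrative only.
end_tidal = [4.0, 2.9, 2.2, 1.6, 1.2, 0.85, 0.62, 0.45, 0.33,
             0.24, 0.17, 0.12, 0.09, 0.085, 0.08]
exp_volume = [50] * 15
frc_ml = 100.0                      # assumed functional residual capacity

def lung_clearance_index(end_tidal, exp_volume, frc_ml, n_confirm=3):
    """LCI = cumulative expired volume / FRC at the first breath where the end-tidal
    tracer concentration has fallen below 1/40 of its starting value and stays there
    for n_confirm consecutive breaths (assumed end-of-test rule)."""
    threshold = end_tidal[0] / 40.0
    cumulative = 0.0
    for i, (conc, vol) in enumerate(zip(end_tidal, exp_volume)):
        cumulative += vol
        window = end_tidal[i:i + n_confirm]
        if len(window) == n_confirm and all(c < threshold for c in window):
            return cumulative / frc_ml
    return None                     # washout did not complete within the recording

print(lung_clearance_index(end_tidal, exp_volume, frc_ml))
```

Ending the washout too early, as described for the automatic modus, simply truncates the cumulative expired volume and thus biases the LCI downward.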

Relevance: 20.00%

Abstract:

Research question/Introduction: Examinations are an essential component of medical education. They provide valuable information about students' development and both accompany and modulate learning [1], [2]. Written examinations are currently dominated by multiple-choice questions, which are used in several formats. Most often, Type A questions are used, in which exactly one answer is correct. Multiple true-false (MTF) questions, by contrast, allow several correct answers: for each answer option, candidates must decide whether it is true or false. Because multiple answers are possible, MTF questions appear better able to reflect certain clinical situations. MTF questions also appear superior to Type A questions with respect to reliability and information gain per unit of testing time [3]. Nevertheless, MTF questions have so far been used rarely, and there is little literature on this question format. This study investigates to what extent the use of MTF questions can increase the utility of written examinations in the sense of van der Vleuten (reliability, validity, cost, effect on the learning process, and acceptance by participants) [4]. To increase test reliability and to reduce the cost of examinations, we aim to determine the optimal scoring system for MTF questions. Methods: We analyze data from summative examinations of the Medical Faculty of the University of Bern. Our data comprise examinations from the first to the sixth year of study as well as a specialty board examination. All examinations include both MTF and Type A questions. For these examinations, we compare quarter-, half-, and full-point scoring for MTF questions. With quarter-point scoring, candidates receive a quarter point for each correct partial answer. With half-point scoring, half a point is awarded if more than half of the answer options are judged correctly, and a full point if all answers are correct. With full-point scoring, candidates receive a point only if the entire question is answered correctly. These scoring schemes are compared with respect to item characteristics such as discrimination and difficulty, as well as test characteristics such as reliability. The results are also compared with those for Type A questions. Results: Preliminary results suggest that half-point scoring appears to be optimal. Half-point scoring leads to intermediate item difficulties and, as a result, to high item discrimination. This contributes to high test reliability. Discussion/Conclusion: In combination with an optimal scoring system, MTF questions appear to lead to higher test reliabilities than Type A questions. Depending on the content to be tested, MTF questions could be a valuable addition to Type A questions. An appropriate combination of MTF and Type A questions could improve the utility of written examinations.
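
The three scoring schemes compared in the study translate directly into code; the sketch below scores a single MTF item from a candidate's true/false judgements. The four-option assumption behind the quarter-point scheme and the function name are illustrative, not taken from the examination software.

```python
def score_mtf(candidate, key, scheme):
    """Score one multiple true-false item.

    candidate, key: lists of booleans, one judgement per answer option.
    scheme: 'quarter', 'half', or 'full', as compared in the study.
    """
    n_correct = sum(c == k for c, k in zip(candidate, key))
    n_options = len(key)

    if scheme == "quarter":
        # A quarter point per correctly judged option (assumes four options per item).
        return n_correct * 0.25
    if scheme == "half":
        # Full point if every option is judged correctly, half a point if more than half are.
        if n_correct == n_options:
            return 1.0
        return 0.5 if n_correct > n_options / 2 else 0.0
    if scheme == "full":
        # Credit only for a completely correct item.
        return 1.0 if n_correct == n_options else 0.0
    raise ValueError(f"unknown scheme: {scheme}")

# Usage: three of four options judged correctly yields 0.75, 0.5, and 0.0 points respectively.
key = [True, False, True, True]
candidate = [True, False, True, False]
for scheme in ("quarter", "half", "full"):
    print(scheme, score_mtf(candidate, key, scheme))
```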

Relevance: 20.00%

Abstract:

Background: Multiple true-false items (MTF items) may offer some advantages over one-best-answer questions (Type A), as they allow more than one correct answer and may better represent clinical decisions. However, MTF items are seldom used in medical education assessment. Summary of Work: In this literature review, existing findings on MTF items and on Type A questions were compared along the Ottawa Criteria for Good Assessment, i.e. (1) reproducibility, (2) feasibility, (3) validity, (4) acceptance, (5) educational effect, (6) catalytic effects, and (7) equivalence. We conducted a literature search on ERIC and Google Scholar including papers from the years 1935 to 2014, using the search terms “multiple true-false”, “true-false”, “true/false”, and “Kprim” combined with “exam”, “test”, and “assessment”. Summary of Results: We included 29 out of 33 studies; four of them were carried out in the medical field. Compared to Type A, MTF items are associated with (1) higher reproducibility, (2) lower feasibility, (3) similar validity, (4) higher acceptance, and (5) a higher educational effect; there are no studies on (6) catalytic effects or (7) equivalence. Discussion and Conclusions: While studies show overall good characteristics of MTF items according to the Ottawa criteria, this type of question seems to be used rather seldom. One reason might be the reported lower feasibility. Overall, the literature base is still weak. Furthermore, only 14% of the literature is from the medical domain. Further studies to better understand the characteristics of MTF items in the medical domain are warranted. Take-home messages: Overall, the literature base is weak and further studies are therefore needed. Existing studies show that MTF items have higher reliability, acceptance, and educational effect, but are more difficult to produce.

Relevance: 20.00%

Abstract:

The Fourth Amendment prohibits unreasonable searches and seizures in criminal investigations. The Supreme Court has interpreted this to require that police obtain a warrant prior to a search and that illegally seized evidence be excluded from trial. A consensus has developed in the law and economics literature that tort liability for police officers is a superior means of deterring unreasonable searches. We argue that this conclusion depends on the assumption of truth-seeking police, and develop a game-theoretic model to compare the two remedies when some police officers (the bad type) are willing to plant evidence in order to obtain convictions, even though other police (the good type) are not (where this type is private information). We characterize the perfect Bayesian equilibria of the asymmetric-information game between the police and a court that seeks to minimize error costs in deciding whether to convict or acquit suspects. In this framework, we show that the exclusionary rule with a warrant requirement leads to superior outcomes (relative to tort liability) in terms of the truth-finding function of courts, because the warrant requirement can reduce the scope for bad types of police to plant evidence.

Relevance: 20.00%

Abstract:

It has been demonstrated that rating the trust and reputation of individual nodes is an effective approach in distributed environments to improve security, support decision-making, and promote node collaboration. Nevertheless, these systems are vulnerable to deliberately false or unfair testimonies. In one scenario, attackers collude to give negative feedback on a victim in order to lower or destroy its reputation; this attack is known as bad mouthing. In another scenario, a number of entities agree to give positive feedback on an entity (often with adversarial intentions); this attack is known as ballot stuffing. Both attack types can significantly deteriorate the performance of the network. The existing solutions for coping with these attacks mainly concentrate on prevention techniques. In this work, we propose a solution that detects and isolates the abovementioned attackers, thereby preventing them from further spreading their malicious activity. The approach is based on detecting outliers using clustering, in this case self-organizing maps. An important advantage of this approach is that we impose no restrictions on the training data, and thus there is no need for any data pre-processing. Testing results demonstrate the capability of the approach to detect both bad mouthing and ballot stuffing attacks in various scenarios.
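
As a rough illustration of the detection step, the sketch below trains a small self-organizing map on per-node feedback feature vectors and flags nodes whose profiles map to small, isolated map units as potential bad-mouthing or ballot-stuffing colluders; the feature choice, map size, training schedule, and thresholds are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-node feedback features, e.g. (mean rating given, share of negative ratings).
# Honest raters behave consistently; a small colluding group gives extreme negative feedback.
honest = rng.normal(loc=[0.7, 0.2], scale=0.05, size=(50, 2))
colluders = rng.normal(loc=[0.1, 0.9], scale=0.05, size=(3, 2))
data = np.vstack([honest, colluders])

# Toy 3x3 self-organizing map trained online with a decaying learning rate and
# Gaussian neighbourhood; all hyperparameters here are assumptions.
side = 3
grid = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
weights = rng.uniform(0.0, 1.0, size=(side * side, 2))

n_epochs = 200
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)
    sigma = 1.5 * (1.0 - epoch / n_epochs) + 0.1
    for x in rng.permutation(data):
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
        h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)

# Map every node's feedback profile to its best-matching unit and count hits per unit.
bmus = np.array([int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in data])
hit_counts = np.bincount(bmus, minlength=side * side)

# U-matrix-style statistic: mean weight-space distance of each unit to its grid neighbours.
# Units that are both far from the rest of the map and sparsely populated correspond to
# small outlying clusters, i.e. candidate colluding attackers.
neighbour_mean = np.empty(side * side)
for u in range(side * side):
    mask = np.linalg.norm(grid - grid[u], axis=1) == 1.0     # 4-connected grid neighbours
    neighbour_mean[u] = np.linalg.norm(weights[mask] - weights[u], axis=1).mean()

outlying = (neighbour_mean > 3.0 * np.median(neighbour_mean)) & (hit_counts < 0.1 * len(data))
print("flagged node indices:", np.where(outlying[bmus])[0])
```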

Relevance: 20.00%

Abstract:

This article explores one aspect of the processing perspective in L2 learning in an EST context: the processing of new content words, in English, of the type ‘cognates’ and ‘false friends’, by Spanish-speaking engineering students. The paper does not try to offer a comprehensive overview of language acquisition mechanisms; rather, it is intended to review more narrowly how our conceptual systems, governed by intricately linked networks of neural connections in the brain, make language development possible while creating, at the same time, some L2 processing problems. The case of ‘cognates and false friends’ in specialised contexts is presented here to illustrate some of the processing problems that the L2 learner has to confront, and how mappings in the visual, phonological and semantic (conceptual) brain structures function in second language processing of new vocabulary. Resumen (Spanish abstract): This article reflects on one aspect of the second language (L2) processing perspective in the EST context: the processing of new words in English, known as “cognates” and “false friends”, by Spanish engineering students. It does not aim to offer a complete overview of language acquisition mechanisms; rather, it tries to show how our conceptual system, governed by an intricate network of neural connections in the brain, makes language development possible, even though this entails certain difficulties in second language processing. The case of “cognates” and “false friends” in languages for specific purposes is presented to illustrate some of the processing problems that the foreign language learner has to confront, and how the mappings between the visual, phonological and semantic (conceptual) structures of the brain work in the processing of new vocabulary.

Relevance: 20.00%

Abstract:

We describe a method to design dominant-negative proteins (D-Ns) against the basic helix–loop–helix–leucine zipper (B-HLHZip) family of sequence-specific DNA-binding transcription factors. The D-Ns specifically heterodimerize with the B-HLHZip dimerization domain of the transcription factors and abolish DNA binding in an equimolar competition. Thermal denaturation studies indicate that a heterodimer between a Myc B-HLHZip domain and a D-N consisting of a 12-amino acid sequence appended onto the Max dimerization domain (A-Max) is −6.3 kcal·mol⁻¹ more stable than the Myc:Max heterodimer. One molar equivalent of A-Max can totally abolish the DNA binding activity of a Myc:Max heterodimer. This acidic extension has also been appended onto the dimerization domain of the B-HLHZip protein Mitf, a member of the transcription factor enhancer-binding subfamily, to produce A-Mitf. The heterodimer between A-Mitf and the B-HLHZip domain of Mitf is −3.7 kcal·mol⁻¹ more stable than the Mitf homodimer. Cell culture studies show that A-Mitf can inhibit Mitf-dependent transactivation in both an acidic extension-dependent and a dimerization-dependent manner. A-Max can inhibit Myc-dependent foci formation twice as well as the Max dimerization domain (HLHZip). This strategy of producing D-Ns may be applicable to other B-HLHZip or B-HLH proteins because it provides a method to inhibit the DNA binding of these transcription factors in a dimerization-specific manner.
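
To put the reported stability differences in perspective, a standard thermodynamic relation converts a difference in dimerization free energy into a ratio of equilibrium constants; treating the −6.3 kcal·mol⁻¹ value as a ΔΔG for a simple two-state equilibrium at an assumed physiological temperature is a simplification for illustration, not an analysis reported in the study.

```latex
% Relative dimerization preference implied by a stability difference \Delta\Delta G,
% assuming a two-state equilibrium at an assumed T = 310 K (RT \approx 0.62 kcal/mol):
\[
  \frac{K_{\text{A-Max:Myc}}}{K_{\text{Max:Myc}}}
  = \exp\!\left(-\frac{\Delta\Delta G}{RT}\right)
  = \exp\!\left(\frac{6.3\ \text{kcal mol}^{-1}}{0.62\ \text{kcal mol}^{-1}}\right)
  \approx 2.6 \times 10^{4}.
\]
% On this reading, the -6.3 kcal/mol advantage corresponds to a roughly 10^4-fold
% preference of the Myc B-HLHZip domain for A-Max over Max, other factors being equal.
```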

Relevance: 20.00%

Abstract:

Individuals with autism spectrum disorder (ASD) have an impaired ability to use context, which may manifest as alterations of relatedness within the semantic network. However, impairment in context use may be more difficult to detect in high-functioning adults with ASD. To test context use in this population, we examined the influence of context on memory by using the “false memory” test. In the false memory task, lists of words were presented to high-functioning subjects with ASD and matched controls. Each list consisted of words highly related to an index word not on the list. Subjects were then given a recognition test. Positive responses to the index words represent false memories. We found that individuals with ASD are able to discriminate false memory items from true items significantly better than are control subjects. Memory in patients with ASD may be more accurate than in normal individuals under certain conditions. These results also suggest that semantic representations comprise a less distributed network in high-functioning adults with ASD. Furthermore, these results may be related to the unusually high memory capacities found in some individuals with ASD. Research directed at defining the range of tasks performed superiorly by high-functioning individuals with ASD will be important for optimal vocational rehabilitation.
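
For readers unfamiliar with how this paradigm is scored, the sketch below computes the two quantities compared between groups, correct recognition of studied words and false recognition of the unstudied index (lure) words, together with a simple discrimination score; the response data and the use of hits minus lure false alarms as the discrimination measure are illustrative assumptions, not the study's analysis.

```python
# Hypothetical recognition-test responses for one participant (illustrative only).
# Each trial: the probe word, its status, and whether the participant said "old" (studied).
trials = [
    ("bed",    "studied",       True),
    ("rest",   "studied",       True),
    ("awake",  "studied",       False),
    ("sleep",  "critical_lure", True),    # index word never presented -> false memory if endorsed
    ("doctor", "critical_lure", False),   # lure for a second studied list (not shown)
    ("chair",  "unrelated_new", False),
]

def recognition_scores(trials):
    """Return the hit rate for studied words, the false-recognition rate for index words,
    and a simple discrimination score (hits minus lure false alarms)."""
    def rate(status):
        responses = [said_old for _, s, said_old in trials if s == status]
        return sum(responses) / len(responses)

    hits = rate("studied")
    false_memory = rate("critical_lure")
    return {"hit_rate": hits,
            "false_memory_rate": false_memory,
            "discrimination": hits - false_memory}

print(recognition_scores(trials))
```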