Abstract:
A recent article in this journal (Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124) argued that more than half of published research findings in the medical literature are false. In this commentary, we examine the structure of that argument and show that it has three basic components: 1) an assumption that the prior probability of most hypotheses explored in medical research is below 50%; 2) dichotomization of P-values at the 0.05 level and introduction of a “bias” factor (produced by significance-seeking), the combination of which severely weakens the evidence provided by every design; and 3) use of Bayes’ theorem to show that, in the face of weak evidence, hypotheses with low prior probabilities cannot have posterior probabilities over 50%. Thus, the claim rests on the a priori assumption that most tested hypotheses are likely to be false, and the inferential model used then makes it impossible for evidence from any study to overcome this handicap. We focus largely on step (2), explaining how the combination of dichotomization and “bias” dilutes experimental evidence, and showing how this dilution leads inevitably to the stated conclusion. We also demonstrate a fallacy in another important component of the argument: that papers in “hot” fields are more likely to produce false findings. We agree with the paper’s conclusions and recommendations that many medical research findings are less definitive than readers suspect, that P-values are widely misinterpreted, that bias of various forms is widespread, that multiple approaches are needed to prevent the literature from being systematically biased, and that more data are needed on the prevalence of false claims. But calculating the unreliability of the medical research literature, in whole or in part, requires more empirical evidence and different inferential models than were used. The claim that “most research findings are false for most research designs and for most fields” must be considered as yet unproven.
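To make the argument's mechanics concrete, the following is a minimal sketch of the positive predictive value (PPV) calculation from the Ioannidis paper, where R is the prior odds that a tested relationship is real and u is the bias factor; the parameter values below are illustrative, not the paper's.

```python
# PPV of a "significant" finding, following Ioannidis (2005);
# parameter values are illustrative.

def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Post-study probability that a 'significant' finding is true.

    R     -- prior odds that the tested relationship is real
    alpha -- type I error rate (the 0.05 dichotomization threshold)
    beta  -- type II error rate (1 - power)
    u     -- bias factor: fraction of otherwise non-significant analyses
             reported as significant due to significance-seeking
    """
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u * (1 - alpha) + u * beta * R
    return true_positives / all_positives

# With prior odds below 1 (prior probability < 50%), adding bias drags the
# posterior below 50% even for a well-powered study:
print(ppv(0.25))          # ~0.80 without bias
print(ppv(0.25, u=0.2))   # ~0.47 with moderate bias
```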
Abstract:
The association of simian virus 40 (SV40) with malignant pleural mesothelioma is currently under debate. In some malignancies of viral aetiology, viral DNA can be detected in the patients' serum or plasma. To characterize the prevalence of SV40 in Swiss mesothelioma patients, we optimized a real-time PCR for quantitative detection of SV40 DNA in plasma, and used a monoclonal antibody for immunohistochemical detection of SV40 in mesothelioma tissue microarrays. Real-time PCR was linear over five orders of magnitude, and sensitive to a single gene copy. Repeat PCR determinations showed excellent reproducibility. However, SV40 status varied for independent DNA isolates of single samples. We noted that SV40 detection rates by PCR were drastically reduced by the implementation of strict room compartmentalization and decontamination procedures. Therefore, we systematically addressed common sources of contamination and found no cross-reactivity with DNA of other polyomaviruses. Contamination during PCR was rare and plasmid contamination was infrequent. SV40 DNA was reproducibly detected in only 4 of 78 (5.1%) plasma samples. SV40 DNA levels were low and not consistently observed in paired plasma and tumour samples from the same patient. Immunohistochemical analysis revealed a weak but reproducible SV40 staining in 16 of 341 (4.7%) mesotheliomas. Our data support the occurrence of non-reproducible SV40 PCR amplifications and underscore the importance of proper sample handling and analysis. SV40 DNA and protein were found at low prevalence (5%) in plasma and tumour tissue, respectively. This suggests that SV40 does not play a major role in the development of mesothelioma.
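As background on the quantification step, here is a minimal, hypothetical sketch of how copy numbers are typically recovered from real-time PCR cycle thresholds (Ct) via a standard curve; the dilution series and Ct values are invented for illustration and are not the study's calibration data.

```python
# Standard-curve quantification: Ct falls linearly with log10(copy number),
# so copies are recovered from a measured Ct by inverting the linear fit.
# (Illustrative numbers; not the study's calibration.)
import numpy as np

copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])   # known input copies
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])  # measured Ct per dilution

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1          # ~1.0 means ~100% efficient

def copies_from_ct(ct_value):
    return 10 ** ((ct_value - intercept) / slope)

print(f"amplification efficiency: {efficiency:.2f}")
print(f"Ct 31 -> approx. {copies_from_ct(31.0):.0f} copies")
```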
Abstract:
We evaluated a double screening strategy for carriage of methicillin-resistant Staphylococcus aureus (MRSA) in patients exposed to a newly detected MRSA carrier. If the first screening of the exposed patient yielded negative results, screening was repeated 4 days later. This strategy detected 12 (28%) of the 43 new MRSA carriers identified during the study period. The results suggest that there is an incubation period before MRSA carriage is detectable.
Abstract:
BACKGROUND Aortic dissection is a severe pathological condition in which blood penetrates between layers of the aortic wall and creates a duplicate channel, the false lumen. This considerable change to the aortic morphology alters hemodynamic features dramatically and, in the case of rupture, induces markedly high rates of morbidity and mortality. METHODS In this study, we establish a patient-specific computational model and simulate the pulsatile blood flow within the dissected aorta. The k-ω SST turbulence model is employed to represent the flow, and the finite volume method is applied for the numerical solution. Our emphasis is on flow exchange between the true and false lumen during the cardiac cycle and on quantifying the flow across specific passages. Loading distributions, including pressure and wall shear stress, have also been investigated, and results of direct simulations are compared with solutions employing appropriate turbulence models. RESULTS Our results indicate that (i) high velocities occur at the periphery of the entries; (ii) for the case studied, approximately 40% of the blood flow passes through the false lumen during a heartbeat cycle; (iii) higher pressures are found at the outer wall of the dissection, which may induce further dilation of the pseudo-lumen; (iv) the highest wall shear stresses occur around the entries, perhaps indicating the vulnerability of this region to further splitting; and (v) laminar simulations with adequately fine mesh resolutions, especially refined near the walls, can capture flow patterns similar to the (coarser mesh) turbulent results, although the absolute magnitudes computed are in general smaller. CONCLUSIONS The patient-specific model of aortic dissection provides detailed flow information on blood transport within the true and false lumen and quantifies the loading distributions over the aorta and dissection walls. This contributes to evaluating potential thrombotic behavior in the false lumen and is pivotal in guiding endovascular intervention. Moreover, as a computational study, mesh requirements to successfully evaluate the hemodynamic parameters have been proposed.
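To illustrate the flow-quantification step, here is a minimal sketch of how the false-lumen share of flow over one cardiac cycle can be computed by integrating volumetric flow rates through each lumen; the waveforms are invented for illustration and are not the patient-specific data.

```python
# Integrate per-lumen volumetric flow rates over one cardiac cycle and
# report the false-lumen share of total flow (illustrative waveforms).
import numpy as np

t = np.linspace(0.0, 1.0, 200)                  # one cardiac cycle [s]
dt = t[1] - t[0]
q_true = 60 + 40 * np.sin(2 * np.pi * t) ** 2   # true-lumen flow [ml/s]
q_false = 35 + 30 * np.sin(2 * np.pi * t) ** 2  # false-lumen flow [ml/s]

v_true = (q_true * dt).sum()                    # volume per cycle [ml]
v_false = (q_false * dt).sum()

print(f"false-lumen share: {v_false / (v_true + v_false):.1%}")
```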
Abstract:
BACKGROUND Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM Our aim was to challenge the validity of these software algorithms. METHODS We compared multiple breath washout (MBW) results of current software algorithms (automatic modus) to refined algorithms (manual modus) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between these two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS In infants with CF, the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 using the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant reason for falsely normal LCI values in infants with CF using the automatic modus was premature recognition of the end-of-test during the washout. CONCLUSION We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes.
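As an illustration of the end-of-test logic at issue, the sketch below implements the conventional MBW completion criterion (tracer concentration below 1/40th of its starting value for three consecutive breaths) and derives LCI from it; the function, data, and threshold handling are hypothetical and do not reproduce the vendor's algorithm.

```python
# Hypothetical end-of-test detection for multiple breath washout: the test
# ends once end-tidal tracer concentration stays below 2.5% of its starting
# value for three consecutive breaths; LCI is then the cumulative expired
# volume divided by the functional residual capacity (FRC).

def lci(end_tidal_conc, expired_volumes, frc, threshold_frac=0.025, n_breaths=3):
    c0 = end_tidal_conc[0]
    below = 0
    cumulative_volume = 0.0
    for conc, vol in zip(end_tidal_conc, expired_volumes):
        cumulative_volume += vol
        below = below + 1 if conc < threshold_frac * c0 else 0
        if below >= n_breaths:        # robust end-of-test criterion
            return cumulative_volume / frc
    return None                       # washout incomplete

# A single noisy dip below threshold does not end the test prematurely:
concs = [4.0, 3.1, 2.2, 1.5, 0.9, 0.09, 0.4, 0.09, 0.08, 0.07]
vols = [0.05] * len(concs)            # litres per breath (illustrative)
print(lci(concs, vols, frc=0.20))
```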
Abstract:
Question/Introduction: Examinations are an essential component of medical training. They provide valuable information about students' developmental progress and both accompany and modulate learning [1], [2]. Written examinations are currently dominated by multiple-choice questions, which come in several types. Most commonly, Type A questions are used, in which exactly one answer is correct. Multiple true-false (MTF) questions, by contrast, allow several correct answers: for each answer option, candidates must decide whether it is true or false. Because they permit multiple answers, MTF questions appear better able to reflect certain clinical situations. MTF questions also appear superior to Type A questions with respect to reliability and information gain per unit of testing time [3]. Nevertheless, MTF questions have so far been used only rarely, and there is little literature on this question format. This study investigates the extent to which the use of MTF questions can increase the utility of written examinations as defined by van der Vleuten (reliability, validity, cost, effect on learning, and acceptance by participants) [4]. To increase test reliability and reduce examination costs, we aim to determine the optimal scoring system for MTF questions. Methods: We analyze data from summative examinations at the Medical Faculty of the University of Bern. Our data comprise examinations from the first to the sixth year of study, as well as one specialty board examination. All examinations contain both MTF and Type A questions. For these examinations, we compare quarter-, half-, and full-point scoring for MTF questions. With quarter-point scoring, candidates receive ¼ point for each correct partial answer. With half-point scoring, ½ point is awarded if more than half of the answer options are judged correctly, and candidates receive a full point if all answers are correct. With full-point scoring, candidates receive a point only if the entire question is answered correctly. These scoring schemes are compared with respect to item characteristics such as discrimination and difficulty, as well as test characteristics such as reliability. The results are also compared with those for Type A questions. Results: Preliminary results indicate that half-point scoring appears to be optimal. Half-point scoring leads to medium item difficulties and, as a result, to high discrimination indices. This contributes to high test reliability. Discussion/Conclusion: Combined with an optimal scoring system, MTF questions appear to lead to higher test reliabilities than Type A questions. Depending on the content to be examined, MTF questions could be a valuable complement to Type A questions. An appropriate combination of MTF and Type A questions could improve the utility of written examinations.
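To make the three scoring schemes concrete, here is a minimal sketch of quarter-, half-, and full-point scoring for a single MTF item; the function and its interface are illustrative assumptions, not the faculty's actual scoring code.

```python
# The three MTF scoring schemes compared in the study (illustrative
# implementation; assumes items with four true/false options, as implied
# by the quarter-point rule).

def score_mtf(n_correct, n_options=4, scheme="half"):
    """Score one MTF item.

    n_correct -- number of options the candidate judged correctly
    n_options -- total number of true/false options in the item
    scheme    -- 'quarter', 'half', or 'full' point scoring
    """
    if scheme == "quarter":
        return n_correct / n_options        # 1/4 point per correct judgment
    if scheme == "half":
        if n_correct == n_options:
            return 1.0                      # all options correct: full point
        return 0.5 if n_correct > n_options / 2 else 0.0
    if scheme == "full":
        return 1.0 if n_correct == n_options else 0.0   # all-or-nothing
    raise ValueError(f"unknown scheme: {scheme}")

# A candidate who judges 3 of 4 options correctly:
for s in ("quarter", "half", "full"):
    print(s, score_mtf(3, scheme=s))        # 0.75, 0.5, 0.0
```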
Abstract:
Background: Multiple true-false items (MTF items) might offer some advantages over one-best-answer questions (Type A), as they allow more than one correct answer and may better represent clinical decisions. However, MTF items are seldom used in medical education assessment. Summary of Work: In this literature review, existing findings on MTF items and Type A questions were compared along the Ottawa Criteria for Good Assessment, i.e. (1) reproducibility, (2) feasibility, (3) validity, (4) acceptance, (5) educational effect, (6) catalytic effects, and (7) equivalence. We conducted a literature search on ERIC and Google Scholar including papers from the years 1935 to 2014, using the search terms “multiple true-false”, “true-false”, “true/false”, and “Kprim” combined with “exam”, “test”, and “assessment”. Summary of Results: We included 29 out of 33 studies; four of them were carried out in the medical field. Compared to Type A, MTF items are associated with (1) higher reproducibility, (2) lower feasibility, (3) similar validity, (4) higher acceptance, and (5) a higher educational effect; no studies were found on (6) catalytic effects or (7) equivalence. Discussion and Conclusions: While studies show overall good characteristics of MTF items according to the Ottawa criteria, this question type seems to be seldom used. One reason might be the reported lower feasibility. Overall, the literature base is still weak; furthermore, only 14% of the literature is from the medical domain. Further studies to better understand the characteristics of MTF items in the medical domain are warranted. Take-home messages: Overall, the literature base is weak and further studies are therefore needed. Existing studies show that MTF items offer higher reliability, acceptance, and educational effect, but are more difficult to produce.
Abstract:
The Fourth Amendment prohibits unreasonable searches and seizures in criminal investigations. The Supreme Court has interpreted this to require that police obtain a warrant prior to search and that illegally seized evidence be excluded from trial. A consensus has developed in the law and economics literature that tort liability for police officers is a superior means of deterring unreasonable searches. We argue that this conclusion depends on the assumption of truth-seeking police, and develop a game-theoretic model to compare the two remedies when some police officers (the bad type) are willing to plant evidence in order to obtain convictions, even though other police (the good type) are not (where an officer's type is private information). We characterize the perfect Bayesian equilibria of the asymmetric-information game between the police and a court that seeks to minimize error costs in deciding whether to convict or acquit suspects. In this framework, we show that the exclusionary rule with a warrant requirement leads to superior outcomes (relative to tort liability) in terms of the truth-finding function of courts, because the warrant requirement can reduce the scope for bad types of police to plant evidence.
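As a hedged illustration of the inference problem the court faces (hypothetical parameters and a deliberately stylized structure, not the paper's model), the sketch below shows how the probative value of presented evidence erodes as the share of bad-type officers grows.

```python
# Bayesian updating by a court: evidence may be genuine or planted by a
# 'bad type' officer, so its probative value falls as the share of bad
# types rises. All parameters are illustrative.

def posterior_guilt(prior_guilt, p_bad, p_evidence_if_guilty=0.9,
                    p_plant_if_innocent=1.0):
    """P(suspect guilty | evidence presented)."""
    # Evidence appears if the suspect is guilty (any officer finds it) or
    # if the suspect is innocent and a bad-type officer plants it.
    # p_plant_if_innocent=1.0 is a stylized worst case.
    p_e_guilty = p_evidence_if_guilty
    p_e_innocent = p_bad * p_plant_if_innocent
    num = prior_guilt * p_e_guilty
    den = num + (1 - prior_guilt) * p_e_innocent
    return num / den

for p_bad in (0.0, 0.1, 0.5):
    print(p_bad, round(posterior_guilt(0.3, p_bad), 3))  # 1.0, 0.794, 0.435
```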
Abstract:
False-positive and false-negative rates were calculated for five different designs of the trend test. A design suggested by Portier and Hoel in 1984 for a different problem produced the lowest false-positive and false-negative rates when applied to historical spontaneous tumor rate data for Fischer rats.
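The abstract does not name the specific trend test evaluated; as a generic illustration of testing for a dose-related trend in tumor incidence, here is a minimal Cochran-Armitage-style statistic computed on invented data.

```python
# Cochran-Armitage-style trend statistic for tumor incidence across dose
# groups (hypothetical data; not the designs compared in the study).
import math

def cochran_armitage(doses, n_animals, n_tumors):
    N = sum(n_animals)
    p = sum(n_tumors) / N                       # pooled tumor rate
    t = sum(d * (r - n * p)
            for d, n, r in zip(doses, n_animals, n_tumors))
    s2 = p * (1 - p) * (sum(n * d * d for d, n in zip(doses, n_animals))
                        - sum(n * d for d, n in zip(doses, n_animals))**2 / N)
    return t / math.sqrt(s2)                    # ~N(0,1) under no trend

# control, low, mid, high dose groups:
z = cochran_armitage([0, 1, 2, 3], [50, 50, 50, 50], [2, 4, 6, 11])
print(round(z, 2))   # a large positive z suggests an increasing trend
```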
Abstract:
It has been demonstrated that rating the trust and reputation of individual nodes is an effective approach in distributed environments for improving security, supporting decision-making, and promoting node collaboration. Nevertheless, these systems are vulnerable to deliberately false or unfair testimonies. In one scenario, the attackers collude to give negative feedback on the victim in order to lower or destroy its reputation; this attack is known as bad mouthing. In another scenario, a number of entities agree to give positive feedback on an entity (often with adversarial intentions); this attack is known as ballot stuffing. Both attack types can significantly degrade the performance of the network. Existing solutions for coping with these attacks concentrate mainly on prevention techniques. In this work, we propose a solution that detects and isolates the abovementioned attackers, thereby preventing them from further spreading their malicious activity. The approach is based on detecting outliers using clustering, in this case self-organizing maps. An important advantage of this approach is that it places no restrictions on the training data, so no data pre-processing is needed. Testing results demonstrate the capability of the approach to detect both bad mouthing and ballot stuffing attacks in various scenarios.
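The sketch below is a compact illustration of SOM-based outlier detection over feedback vectors, in the spirit of (not reproducing) the paper's approach: testimonies that map far from every SOM prototype are flagged as suspected bad-mouthing or ballot-stuffing feedback. All features and parameters are illustrative assumptions, and for simplicity the map is calibrated on presumed-honest reference feedback, whereas the paper imposes no restrictions on training data.

```python
# Outlier detection with a small self-organizing map (SOM): feedback vectors
# with a large quantization error (distance to their nearest prototype) are
# flagged as suspect. Illustrative data and features.
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small SOM; returns the grid of prototype weight vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr = lr0 * (1.0 - frac)                        # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.3            # shrinking neighborhood
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        nbh = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1)
                     / (2.0 * sigma ** 2))
        weights += lr * nbh[..., None] * (x - weights)
    return weights

def quantization_error(data, weights):
    """Distance from each sample to its nearest SOM prototype."""
    flat = weights.reshape(-1, weights.shape[-1])
    return np.min(np.linalg.norm(data[:, None, :] - flat[None], axis=-1),
                  axis=1)

rng = np.random.default_rng(1)
# Feature vector per testimony: mean rating given, rating variance, and
# deviation from the community consensus (illustrative features).
honest = rng.normal([0.8, 0.05, 0.0], 0.05, size=(95, 3))
bad_mouthing = rng.normal([0.1, 0.02, 0.7], 0.05, size=(5, 3))
data = np.vstack([honest, bad_mouthing])

weights = train_som(honest)                       # prototypes of normal feedback
qe = quantization_error(data, weights)
threshold = qe[:95].mean() + 3 * qe[:95].std()    # calibrate on honest samples
print("flagged testimonies:", np.where(qe > threshold)[0])
```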