894 results for "probability of error"


Relevance: 100.00%

Abstract:

Abstract Background Smear-negative pulmonary tuberculosis (SNPTB) accounts for 30% of pulmonary tuberculosis (PTB) cases reported annually in developing nations. Polymerase chain reaction (PCR) may provide an alternative for the rapid detection of Mycobacterium tuberculosis (MTB); however, few data are available regarding the clinical utility of PCR in SNPTB in a setting with a high burden of TB/HIV co-infection. Methods To evaluate the performance of the PCR dot-blot in parallel with pretest probability (clinical suspicion) in patients suspected of having SNPTB, a prospective study of 213 individuals with clinical and radiological suspicion of SNPTB was carried out from May 2003 to May 2004 in a TB/HIV reference hospital. Respiratory specialists classified the pretest probability of active disease into high, intermediate, and low categories. Expectorated sputum was examined by direct microscopy (Ziehl-Neelsen staining), culture (Lowenstein-Jensen) and PCR dot-blot. The gold standard was culture positivity combined with the clinical definition of PTB. Results Among smear-negative and HIV-infected subjects, active PTB was diagnosed in 28.4% (43/151) and 42.2% (19/45), respectively. In the high, intermediate and low pretest probability categories, active PTB was diagnosed in 67.4% (31/46), 24% (6/25) and 7.5% (6/80), respectively. PCR had a sensitivity of 65% (95% CI: 50%–78%) and a specificity of 83% (95% CI: 75%–89%). There was no difference in the sensitivity of PCR in relation to HIV status. PCR sensitivity and specificity were 69% and 85% among patients never treated for TB, versus 43% and 80% among those treated in the past. High pretest probability, when used as a diagnostic test, had a sensitivity of 72% (95% CI: 57%–84%) and a specificity of 86% (95% CI: 78%–92%). Using the PCR dot-blot in parallel with high pretest probability as a diagnostic test, the sensitivity, specificity, and positive and negative predictive values were 90%, 71%, 75% and 88%, respectively.
Among patients never treated for TB and among HIV-infected subjects, this approach had sensitivity, specificity, and positive and negative predictive values of 91%, 79%, 81% and 90%, and of 90%, 65%, 72% and 88%, respectively. Conclusion The PCR dot-blot associated with a high clinical suspicion may provide an important contribution to the diagnosis of SNPTB, mainly in previously untreated patients attended at a TB/HIV reference hospital.
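The parallel-testing arithmetic behind these figures can be sketched in a few lines. Assuming the two tests are conditionally independent given disease status (a textbook simplification; the study's predictive values were computed from the observed data), the combined sensitivity and specificity of "positive on either test" reproduce the reported 90% and 71%:

```python
def parallel_sens_spec(sens_a, spec_a, sens_b, spec_b):
    """Sensitivity/specificity of 'positive if either test is positive',
    assuming conditional independence of the tests given disease status."""
    sens = 1 - (1 - sens_a) * (1 - sens_b)  # misses only if both tests miss
    spec = spec_a * spec_b                  # negative only if both are negative
    return sens, spec

def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values via Bayes' theorem."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# PCR (65%/83%) combined in parallel with high pretest probability (72%/86%):
sens, spec = parallel_sens_spec(0.65, 0.83, 0.72, 0.86)  # ~0.90, ~0.71
# Predictive values depend on prevalence (28.4% in this cohort); the study's
# reported values come from observed counts, so they need not match this
# idealized independence-based calculation exactly.
ppv, npv = predictive_values(sens, spec, prevalence=43 / 151)
```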

Relevance: 100.00%

Abstract:

OBJECTIVE: To estimate the pretest probability of Cushing's syndrome (CS) diagnosis by a Bayesian approach using intuitive clinical judgment. MATERIALS AND METHODS: Physicians were asked, in seven endocrinology meetings, to answer three questions: "Based on your personal expertise, after obtaining clinical history and physical examination, without using laboratory tests, what is your probability of diagnosing Cushing's syndrome?"; "For how long have you been practicing endocrinology?"; and "Where do you work?". A Bayesian beta regression, using the WinBUGS software, was employed. RESULTS: We obtained 294 questionnaires. The mean pretest probability of CS diagnosis was 51.6% (95% CI: 48.7–54.3). The probability was directly related to experience in endocrinology, but not to the place of work. CONCLUSION: The pretest probability of CS diagnosis was estimated using a Bayesian methodology. Although pretest likelihood can be context-dependent, experience based on years of practice may help the practitioner to diagnose CS. Arq Bras Endocrinol Metab. 2012;56(9):633-7
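The Bayesian updating that such a pretest probability feeds into can be illustrated with a short sketch. The likelihood ratio below is hypothetical, chosen only to show the mechanics on the odds scale:

```python
def post_test_probability(pretest, likelihood_ratio):
    """Update a pretest probability with a test's likelihood ratio
    via Bayes' theorem on the odds scale."""
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative only: the 51.6% mean pretest probability elicited in the
# study, updated by a hypothetical confirmatory test with LR+ = 10.
p = post_test_probability(0.516, 10.0)  # ~0.914
```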

Relevance: 100.00%

Abstract:

A Swiss-specific FRAX model was developed. Patient profiles at increased probability of fracture beyond currently accepted reimbursement thresholds for bone mineral density (BMD) measurement by dual X-ray absorptiometry (DXA) and for osteoporosis treatment were identified.

Relevance: 100.00%

Abstract:

Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'.

Relevance: 100.00%

Abstract:

Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. 
We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.

Relevance: 100.00%

Abstract:

To derive tests for randomness, nonlinear independence, and stationarity, we combine surrogates with a nonlinear prediction error, a nonlinear interdependence measure, and linear variability measures, respectively. We apply these tests to intracranial electroencephalographic (EEG) recordings from patients suffering from pharmacoresistant focal-onset epilepsy. These recordings had been performed prior to and independently from our study as part of the epilepsy diagnostics. The clinical purpose of these recordings was to delineate the brain areas to be surgically removed in each individual patient in order to achieve seizure control. This allowed us to define two distinct sets of signals: one set recorded from brain areas where the first ictal EEG signal changes were detected as judged by expert visual inspection ("focal signals") and one set recorded from brain areas that were not involved at seizure onset ("nonfocal signals"). We find more rejections for both the randomness and the nonlinear-independence test for focal versus nonfocal signals. In contrast, more rejections of the stationarity test are found for nonfocal signals. Furthermore, while for nonfocal signals the rejection of the stationarity test substantially increases the rejection probability of the randomness and nonlinear-independence tests, we find a much weaker influence for the focal signals. In consequence, the contrast between the focal and nonfocal signals obtained from the randomness and nonlinear-independence tests is further enhanced when we exclude signals for which the stationarity test is rejected. To study the dependence between the randomness and nonlinear-independence tests, we include only focal signals for which the stationarity test is not rejected. We show that the rejection of these two tests correlates across signals. The rejection of either test is, however, neither necessary nor sufficient for the rejection of the other test.
Thus, our results suggest that EEG signals from epileptogenic brain areas are less random, more nonlinear-dependent, and more stationary compared to signals recorded from nonepileptogenic brain areas. We provide the data, source code, and detailed results in the public domain.
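A minimal version of one such surrogate test (randomness against the null hypothesis of a linear stochastic process, scored with a zeroth-order nonlinear prediction error) might look like the following. It is a sketch of the general technique, not the exact measures, parameters, or surrogate scheme of the study:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but randomized
    Fourier phases, consistent with a linear stochastic process."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0              # keep the mean real
    if n % 2 == 0:
        phases[-1] = 0.0         # Nyquist bin must stay real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def nonlinear_prediction_error(x, m=3, k=5):
    """Zeroth-order nonlinear prediction error: predict each delay
    vector's successor as the mean successor of its k nearest
    neighbours in the m-dimensional embedding."""
    vecs = np.array([x[i:i + m] for i in range(len(x) - m)])
    targets = x[m:]
    err = 0.0
    for i, v in enumerate(vecs):
        d = np.linalg.norm(vecs - v, axis=1)
        d[i] = np.inf            # exclude self-match
        nn = np.argsort(d)[:k]
        err += (targets[i] - targets[nn].mean()) ** 2
    return np.sqrt(err / len(vecs))

def surrogate_test(x, n_surr=19, seed=0):
    """Reject the 'linear stochastic' null if the original prediction
    error is smaller than that of all n_surr surrogates (one-sided
    rank test, p < 0.05 for n_surr = 19)."""
    rng = np.random.default_rng(seed)
    e0 = nonlinear_prediction_error(x)
    errs = [nonlinear_prediction_error(phase_randomized_surrogate(x, rng))
            for _ in range(n_surr)]
    return bool(e0 < min(errs))
```

For a deterministic series such as the logistic map the test rejects, because phase randomization destroys the nonlinear determinism that makes the series predictable.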

Relevance: 100.00%

Abstract:

While beneficially decreasing the necessary incision size, arthroscopic hip surgery increases the surgical complexity due to loss of joint visibility. To ease this difficulty, a computer-aided mechanical navigation system was developed to present the location of the surgical tool relative to the patient's hip joint. A preliminary study reduced the position error of the tracking linkage with a limited number of static testing trials. In this study, a correction method, including a rotational correction factor and a length correction function, was developed through more in-depth static testing. The developed correction method was then applied to additional static and dynamic testing trials to evaluate its effectiveness. For static testing, the position error decreased from an average of 0.384 inches to 0.153 inches, an error reduction of 60.5%. The three parameters used to quantify error reduction in dynamic testing did not show consistent results. The vertex coordinates achieved a 29.4% error reduction, yet with large variation in the upper vertex. The triangular area error was reduced by 5.37%, though inconsistently across the five dynamic trials. The error of the vertex angles increased, indicating a shape torsion introduced by the developed correction method. While the established correction method effectively and consistently reduced position error in static testing, it did not produce consistent results in dynamic trials. More dynamic parameters should be explored to quantify error reduction in dynamic testing, and a more in-depth dynamic testing methodology should be developed to further improve the accuracy of the computer-aided navigation system.

Relevance: 100.00%

Abstract:

Primate immunodeficiency viruses, or lentiviruses (HIV-1, HIV-2, and SIV), and hepatitis delta virus (HDV) are RNA viruses characterized by rapid evolution. Infection by primate immunodeficiency viruses usually results in the development of acquired immunodeficiency syndrome (AIDS) in humans and AIDS-like illnesses in Asian macaques. Similarly, hepatitis delta virus infection causes hepatitis and liver cancer in humans. These viruses are heterogeneous within an infected patient and among individuals. Substitution rates in the virus genomes are high and vary in different lineages and among sites. Methods of phylogenetic analysis were applied to study the evolution of primate lentiviruses and the hepatitis delta virus. The following results have been obtained: (1) The substitution rate varies among sites of primate lentivirus genes according to the two-parameter gamma distribution, with the shape parameter α being close to 1. (2) Primate immunodeficiency viruses fall into species-specific lineages. Therefore, viral transmissions across primate species are not as frequent as suggested by previous authors. (3) Primate lentiviruses have acquired or lost their pathogenicity several times in the course of evolution. (4) Evidence was provided for multiple infections of a North American patient by distinct HIV-1 strains of the B subtype. (5) Computer simulations indicate that the probability of committing an error in testing HIV transmission depends on the number of virus sequences and their length, the divergence times among sequences, and the model of nucleotide substitution. (6) For future investigations of HIV-1 transmissions, using longer virus sequences and avoiding the use of distant outgroups is recommended. (7) Hepatitis delta virus strains are usually related according to the geographic region of isolation.
(8) Evolution of HDV is characterized by the rate of synonymous substitution being lower than the nonsynonymous substitution rate and the rate of evolution of the noncoding region. (9) There is a strong preference for G and C nucleotides at the third codon positions of the HDV coding region.
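Result (1), among-site rate variation following a gamma distribution with shape α close to 1, can be illustrated with a short simulation. This uses the standard mean-1 parameterization for relative site rates; the number of sites and the seed are arbitrary:

```python
import numpy as np

def sample_site_rates(n_sites, alpha, seed=0):
    """Draw per-site relative substitution rates from a gamma
    distribution with shape alpha and mean 1 (scale = 1/alpha),
    the standard parameterization for among-site rate variation."""
    rng = np.random.default_rng(seed)
    return rng.gamma(shape=alpha, scale=1.0 / alpha, size=n_sites)

# With alpha near 1 (as estimated for the lentivirus genes) the gamma
# reduces to an exponential: many slowly evolving sites, a few fast ones.
rates = sample_site_rates(10_000, alpha=1.0)
```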

Relevance: 100.00%

Abstract:

In this article, the Society for Personality and Social Psychology (SPSP) Task Force on Publication and Research Practices offers a brief statistical primer and recommendations for improving the dependability of research. Recommendations for research practice include (a) describing and addressing the choice of N (sample size) and consequent issues of statistical power, (b) reporting effect sizes and 95% confidence intervals (CIs), (c) avoiding “questionable research practices” that can inflate the probability of Type I error, (d) making available research materials necessary to replicate reported results, (e) adhering to SPSP’s data sharing policy, (f) encouraging publication of high-quality replication studies, and (g) maintaining flexibility and openness to alternative standards and methods. Recommendations for educational practice include (a) encouraging a culture of “getting it right,” (b) teaching and encouraging transparency of data reporting, (c) improving methodological instruction, and (d) modeling sound science and supporting junior researchers who seek to “get it right.”

Relevance: 100.00%

Abstract:

This paper addresses the problem of fully-automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method contains two steps, where we first localize the center of each IVD, and then segment IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, where we take into consideration both the training data and the geometric constraint on the test image. After the disc centers are localized, we segment the discs by classifying image pixels around disc centers as background or foreground. The classification is done in a similar data-driven approach to that used for localization, but in the segmentation case we aim to estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve for localization a mean error of 1.6-2.0 mm, and for segmentation a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
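The Dice metric used to score the segmentations is straightforward to state in code. The following is a generic implementation of the coefficient, not the authors' evaluation script:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0               # both masks empty: treat as perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom
```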

Relevance: 100.00%

Abstract:

Stochastic simulation is an important and practical technique for computing probabilities of rare events, like the payoff probability of a financial option, the probability that a queue exceeds a certain level, or the probability of ruin of the insurer's risk process. Rare events occur so infrequently that they cannot be reasonably recorded during a standard simulation procedure: specific simulation algorithms which counteract the rarity of the event to be simulated are required. An important algorithm in this context is based on changing the sampling distribution and is called importance sampling. Optimal Monte Carlo algorithms for computing rare-event probabilities are either logarithmically efficient or possess bounded relative error.
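A minimal importance-sampling example in the spirit described here: estimating the Gaussian tail probability P(Z > 4) ≈ 3.2e-5, which naive Monte Carlo essentially never observes, by shifting the sampling distribution into the rare region and reweighting with the likelihood ratio. The setup is illustrative and not taken from the text:

```python
import numpy as np

def naive_mc(threshold, n, seed=0):
    """Naive Monte Carlo estimate of P(Z > threshold), Z ~ N(0,1);
    for threshold = 4 almost every run returns 0 or a noisy value."""
    rng = np.random.default_rng(seed)
    return (rng.standard_normal(n) > threshold).mean()

def importance_sampling(threshold, n, seed=0):
    """Estimate P(Z > threshold) by sampling from the shifted proposal
    N(threshold, 1) and reweighting each draw with the likelihood ratio
    phi(x) / phi(x - threshold) = exp(-threshold*x + threshold**2/2)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) + threshold       # shifted proposal
    lr = np.exp(-threshold * x + 0.5 * threshold ** 2)
    return np.mean((x > threshold) * lr)
```

Because the proposal concentrates samples where the rare event happens, the importance-sampling estimator attains a small relative error with a sample size for which the naive estimator is useless.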

Relevance: 100.00%

Abstract:

Each year, hospitalized patients experience 1.5 million preventable injuries from medication errors, and hospitals incur an additional $3.5 billion in cost (Aspden, Wolcott, Bootman, & Cronenwatt, 2007). It is believed that error reporting is one way to learn about factors contributing to medication errors. And yet, an estimated 50% of medication errors go unreported. This period of medication error pre-reporting is, with few exceptions, underexplored. The literature focuses on error prevention and management, but lacks a description of the period of introspection and inner struggle over whether to report an error and the resulting likelihood to report. Reporting makes a nurse vulnerable to reprimand, legal liability, and even threat to licensure. For some nurses this state may invoke a disparity between a person's belief about him- or herself as a healer and the undeniable fact of the error. This study explored the medication error reporting experience. Its purpose was to inform nurses, educators, organizational leaders, and policy-makers about the medication error pre-reporting period, and to contribute to a framework for further investigation. From a better understanding of the factors that contribute to or detract from an individual's likelihood to report an error, interventions can be identified to help the nurse come to a psychologically healthy resolution and to increase the reporting of errors, in order to learn from them and reduce the possibility of future similar errors. The research question was: "What factors contribute to a nurse's likelihood to report an error?"
The specific aims of the study were to: (1) describe participant nurses' perceptions of medication error reporting; (2) describe participant explanations of the emotional, cognitive, and physical reactions to making a medication error; (3) identify pre-reporting conditions that make it less likely for a nurse to report a medication error; and (4) identify pre-reporting conditions that make it more likely for a nurse to report a medication error. A qualitative research study was conducted to explore the medication error experience, and in particular the pre-reporting period, from the perspective of the nurse. A total of 54 registered nurses from a large private free-standing not-for-profit children's hospital in the southwestern United States participated in group interviews. The results describe the experience of the nurse as well as the physical, emotional, and cognitive responses to the realization of the commission of a medication error. The results also reveal factors that make it more and less likely for a nurse to report a medication error. It is clear from this study that upon realizing that he or she has made a medication error, a nurse's foremost concern is for the safety of the patient. Fear was also described by each group of nurses, including fear of physician, manager, peer, and family reactions, and of a possible loss of trust as a result. Another universal response was the description of a struggle with guilt, shame, imperfection, blaming oneself, and questioning one's competence.

Relevance: 100.00%

Abstract:

Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect of individual investigational sites enrolling small numbers of patients on the overall result. Can the presence of small centers cause an ineffective treatment to appear effective when treatment-by-center interaction is not statistically significant? In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of these 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170 and 300. The simulated data are generated with various characteristics, one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions. Standard analysis-of-variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment and center effects and treatment-by-center interaction; another investigates the treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error. We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations.
In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
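A stripped-down version of such a simulation can convey the setup. The center sizes, effect scales, and the use of a plain pooled t-test are assumptions for illustration only; the study itself used ANOVA models and the "sometimes-pool" procedure:

```python
import numpy as np
from scipy import stats

def simulate_type_I_error(n_trials=2000, seed=0):
    """Fraction of null-hypothesis trials rejected at alpha = 0.05 by a
    pooled two-sample t-test, with 12 of 20 centers enrolling only two
    patients per arm (all sizes and effect scales are illustrative)."""
    rng = np.random.default_rng(seed)
    sizes = [10] * 8 + [2] * 12           # patients per arm at each center
    rejections = 0
    for _ in range(n_trials):
        center_effects = rng.normal(0.0, 0.5, len(sizes))
        treat, ctrl = [], []
        for mu, n in zip(center_effects, sizes):
            treat.append(rng.normal(mu, 1.0, n))   # no true treatment effect
            ctrl.append(rng.normal(mu, 1.0, n))
        _, p = stats.ttest_ind(np.concatenate(treat), np.concatenate(ctrl))
        rejections += p < 0.05
    return rejections / n_trials
```

With balanced allocation within centers, the shared center effects cancel in the treatment comparison, so even in the presence of many small centers the rejection rate stays at or below the nominal 5% level, consistent with the finding of no inflated type I error.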

Relevance: 100.00%

Abstract:

Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. 1,000 error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated by counting the proportion of times out of the 1,000 simulations that a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach.
The probabilistic approach provides visually intuitive maps that convey uncertainties inherent to spatial data and analysis.
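The core of this Monte Carlo approach can be sketched in a few lines. The sketch below draws independent Gaussian errors per cell and uses a simple bathtub threshold without hydrological connectivity, both simplifications: the study used sequential Gaussian simulation to preserve the spatial correlation of DEM errors and a hydrologically correct reclassification. The error standard deviation and DEM values are hypothetical:

```python
import numpy as np

def inundation_probability(dem, water_level, error_std=0.15,
                           n_sim=1000, seed=0):
    """Monte Carlo inundation mapping: add simulated elevation errors to
    the DEM, threshold against the water level (simple bathtub rule),
    and count the fraction of simulations in which each cell floods."""
    rng = np.random.default_rng(seed)
    count = np.zeros_like(dem, dtype=float)
    for _ in range(n_sim):
        noisy = dem + rng.normal(0.0, error_std, dem.shape)
        count += noisy <= water_level
    return count / n_sim

# A cell only 0.1 m above the water level is still flooded in roughly a
# quarter of simulations when the vertical error std is 0.15 m.
dem = np.array([[1.5, 2.0],
                [2.1, 3.0]])
prob = inundation_probability(dem, water_level=2.0)
```

Mapping these per-cell probabilities, instead of a single deterministic line, is what allows planning against a chosen exceedance probability.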

Relevance: 100.00%

Abstract:

Here we present the first radiometric age data and a comprehensive geochemical data set (including major and trace element and Sr-Nd-Pb-Hf isotope ratios) for samples from the Hikurangi Plateau basement and seamounts on and adjacent to the plateau obtained during the R/V Sonne 168 cruise, in addition to age and geochemical data from DSDP Site 317 on the Manihiki Plateau. The 40Ar/39Ar age and geochemical data show that the Hikurangi basement lavas (118-96 Ma) have surprisingly similar major and trace element and isotopic characteristics to the Ontong Java Plateau lavas (ca. 120 and 90 Ma), primarily the Kwaimbaita-type composition, whereas the Manihiki DSDP Site 317 lavas (117 Ma) have similar compositions to the Singgalo lavas on the Ontong Java Plateau. Alkalic, incompatible-element-enriched seamount lavas (99-87 Ma and 67 Ma) on the Hikurangi Plateau and adjacent to it (Kiore Seamount), however, were derived from a distinct high time-integrated U/Pb (HIMU)-type mantle source. The seamount lavas are similar in composition to similar-aged alkalic volcanism on New Zealand, indicating a second wide-spread event from a distinct source beginning ca. 20 Ma after the plateau-forming event. Tholeiitic lavas from two Osbourn seamounts on the abyssal plain adjacent to the northeast Hikurangi Plateau margin have extremely depleted incompatible element compositions, but incompatible element characteristics similar to the Hikurangi and Ontong Java Plateau lavas and enriched isotopic compositions intermediate between normal mid-ocean-ridge basalt (N-MORB) and the plateau basement. These younger (~52 Ma) seamounts may have formed through remelting of mafic cumulate rocks associated with the plateau formation. The similarity in age and geochemistry of the Hikurangi, Ontong Java and Manihiki Plateaus suggest derivation from a common mantle source. 
We propose that the Greater Ontong Java Event, during which ~1% of the Earth's surface was covered with volcanism, resulted from a thermo-chemical superplume/dome that stalled at the transition zone, similar to but larger than the structure imaged presently beneath the South Pacific superswell. The later alkalic volcanism on the Hikurangi Plateau and the Zealandia micro-continent may have been part of a second large-scale volcanic event that may have also triggered the final breakup stage of Gondwana, which resulted in the separation of Zealandia fragments from West Antarctica.