866 results for "Test data"
Abstract:
There has been significant interest in indirect measures of attitudes like the Implicit Association Test (IAT), presumably because of the possibility of uncovering implicit prejudices. The authors derived a set of qualitative predictions for people's performance in the IAT on the basis of random walk models. These were supported in 3 experiments comparing clearly positive or negative categories to nonwords. They also provided evidence that participants shift their response criterion when doing the IAT. Because of these criterion shifts, a response pattern in the IAT can have multiple causes. Thus, it is not possible to infer a single cause (such as prejudice) from IAT results. A surprising additional result was that nonwords were treated as though they were evaluated more negatively than obviously negative items like insects, suggesting that low familiarity items may generate the pattern of data previously interpreted as evidence for implicit prejudice.
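The kind of qualitative prediction a random walk model makes for response times can be illustrated with a toy simulation. This is a generic evidence-accumulation sketch under assumed parameters, not the authors' model: evidence drifts toward one of two response boundaries, and widening the boundaries (a criterion shift between IAT blocks) slows responses.

```python
import random

def random_walk_trial(drift, criterion=20, step=1.0, max_steps=10_000, rng=None):
    """Accumulate evidence until one of two response criteria is crossed.

    Returns (response, response_time): response is +1 or -1.
    """
    rng = rng or random
    x, t = 0.0, 0
    while abs(x) < criterion and t < max_steps:
        # Each step moves toward the positive boundary with probability
        # 0.5 + drift, so drift controls accuracy and speed.
        x += step if rng.random() < 0.5 + drift else -step
        t += 1
    return (1 if x > 0 else -1), t

def mean_rt(drift, criterion, n=2_000, seed=42):
    rng = random.Random(seed)
    return sum(random_walk_trial(drift, criterion, rng=rng)[1]
               for _ in range(n)) / n

# A wider criterion yields slower responses, mimicking a criterion
# shift between compatible and incompatible IAT blocks.
assert mean_rt(0.1, 10) < mean_rt(0.1, 20)
```

The drift rate and criterion values here are arbitrary; the point is only that response-time patterns can arise from criterion shifts as well as from association strength, which is the ambiguity the abstract describes.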
Abstract:
Based on the results of detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were non-sorbing (uranine and tritiated water), weakly sorbing (²²Na⁺, ⁸⁵Sr²⁺, ⁴⁷Ca²⁺) and more strongly sorbing (⁸⁶Rb⁺, ¹³³Ba²⁺, ¹³⁷Cs⁺). Our model consists of two parts: (1) a flow part based on a 2D streamtube formalism, accounting for the natural background flow field, with an underlying homogeneous and isotropic transmissivity field; and (2) a transport part based on the dual-porosity medium approach, linked to the flow part by the flow porosity. The model was calibrated using data from a single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in the model, as evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the predictions for the non-sorbing and weakly sorbing tracers were fairly good. The blind predictions for the more strongly sorbing tracers were in general less accurate. The good predictions are attributed to a model structure strongly grounded in geological observation. The breakthrough curves were inversely modelled to determine in situ values for the transport parameters and to draw conclusions about the applied model structure. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account; no new transport mechanisms had to be invoked. The in situ values of the effective diffusion coefficient for fault gouge are a factor of 2–15 larger than the laboratory data.
For cataclasite, the in situ values are comparable to the laboratory data. The extracted Kd values for the weakly sorbing tracers are larger than Swedish laboratory data by a factor of 25–60, but agree within a factor of 3–5 for the more strongly sorbing nuclides. The inconsistency in Kd values arises because fresh granite was used in the laboratory studies, whereas tracers in the field experiment interact only with fracture fault gouge and, to a lesser extent, with cataclasite, both being mineralogically very different (e.g. clay-bearing) from the intact wall rock.
Abstract:
Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.
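The two ranking schemes the abstract compares, raw citation counts and PageRank, can be sketched on a toy citation graph. The graph, damping factor and iteration count below are illustrative assumptions, not the paper's system or the Science Citation Index data:

```python
# Toy citation graph: an edge (a, b) means "article a cites article b".
def pagerank(edges, d=0.85, iters=50):
    nodes = {n for e in edges for n in e}
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for a in nodes:
            if out[a]:
                share = d * pr[a] / len(out[a])
                for b in out[a]:
                    new[b] += share
            else:  # dangling node: spread its rank mass uniformly
                for b in nodes:
                    new[b] += d * pr[a] / len(nodes)
        pr = new
    return pr

edges = [("b", "a"), ("c", "a"), ("d", "a"), ("a", "e"), ("b", "e")]

# Baseline ranking: simple citation counts (in-degree).
citations = {}
for a, b in edges:
    citations[b] = citations.get(b, 0) + 1

pr = pagerank(edges)
```

Removing edges from `edges` before ranking simulates the incomplete-citation-data condition the study tests: citation counts drop linearly with lost edges, while PageRank redistributes the remaining rank mass.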
Abstract:
Dendrogeomorphology uses information sources recorded in the roots, trunks and branches of trees and bushes located in the fluvial system to complement (or sometimes even replace) systematic and palaeohydrological records of past floods. The application of dendrogeomorphic data sources and methods to palaeoflood analysis over nearly 40 years has allowed improvements to be made in frequency and magnitude estimations of past floods. Nevertheless, research carried out so far has shown that the dendrogeomorphic indicators traditionally used (mainly scar evidence), and their use to infer frequency and magnitude, have been restricted to a small, limited set of applications. New possibilities with enormous potential remain unexplored. New insights in future research of palaeoflood frequency and magnitude using dendrogeomorphic data sources should: (1) test the application of isotopic indicators (¹⁶O/¹⁸O ratio) to discover the meteorological origin of past floods; (2) use different dendrogeomorphic indicators to estimate peak flows with 2D (and 3D) hydraulic models and study how they relate to other palaeostage indicators; (3) investigate improved calibration of 2D hydraulic model parameters (roughness); and (4) apply statistics-based cost–benefit analysis to select optimal mitigation measures. This paper presents an overview of these innovative methodologies, with a focus on their capabilities and limitations in the reconstruction of recent floods and palaeofloods.
Abstract:
Aims: Arterial plaque rupture and thrombus characterise ST-elevation myocardial infarction (STEMI) and may aggravate delayed arterial healing following durable polymer drug-eluting stent (DP-DES) implantation. Biodegradable polymer (BP) may improve biocompatibility. We compared long-term outcomes in STEMI patients receiving BP-DES vs. durable polymer sirolimus-eluting stents (DP-SES). Methods and results: We pooled individual patient-level data from three randomised clinical trials (ISAR-TEST-3, ISAR-TEST-4 and LEADERS) comparing outcomes from BP-DES with DP-SES at four years. The primary endpoint (MACE) comprised cardiac death, MI, or target lesion revascularisation (TLR). Secondary endpoints were TLR, cardiac death or MI, and definite or probable stent thrombosis. Of 497 patients with STEMI, 291 received BP-DES and 206 DP-SES. At four years, MACE was significantly reduced following treatment with BP-DES (hazard ratio [HR] 0.59, 95% CI: 0.39-0.90; p=0.01) driven by reduced TLR (HR 0.54, 95% CI: 0.30-0.98; p=0.04). Trends towards reduction were seen for cardiac death or MI (HR 0.63, 95% CI: 0.37-1.05; p=0.07) and definite or probable stent thrombosis (3.6% vs. 7.1%; HR 0.49, 95% CI: 0.22-1.11; p=0.09). Conclusions: In STEMI, BP-DES demonstrated superior clinical outcomes to DP-SES at four years. Trends towards reduced cardiac death or myocardial infarction and reduced stent thrombosis require corroboration in specifically powered trials.
Abstract:
BACKGROUND: Early detection of colorectal cancer through timely follow-up of positive Fecal Occult Blood Tests (FOBTs) remains a challenge. In our previous work, we found 40% of positive FOBT results eligible for colonoscopy had no documented response by a treating clinician at two weeks despite procedures for electronic result notification. We determined if technical and/or workflow-related aspects of automated communication in the electronic health record could lead to the lack of response. METHODS: Using both qualitative and quantitative methods, we evaluated positive FOBT communication in the electronic health record of a large, urban facility between May 2008 and March 2009. We identified the source of test result communication breakdown, and developed an intervention to fix the problem. Explicit medical record reviews measured timely follow-up (defined as response within 30 days of positive FOBT) pre- and post-intervention. RESULTS: Data from 11 interviews and tracking information from 490 FOBT alerts revealed that the software intended to alert primary care practitioners (PCPs) of positive FOBT results was not configured correctly and over a third of positive FOBTs were not transmitted to PCPs. Upon correction of the technical problem, lack of timely follow-up decreased immediately from 29.9% to 5.4% (p<0.01) and was sustained at month 4 following the intervention. CONCLUSION: Electronic communication of positive FOBT results should be monitored to avoid limiting colorectal cancer screening benefits. Robust quality assurance and oversight systems are needed to achieve this. Our methods may be useful for others seeking to improve follow-up of FOBTs in their systems.
Abstract:
AIM: To determine the feasibility of evaluating surgically induced hepatocyte damage using gadoxetate disodium (Gd-EOB-DTPA) as a marker for viable hepatocytes at magnetic resonance imaging (MRI) after liver resection. MATERIAL AND METHODS: Fifteen patients were prospectively enrolled in this institutional review board-approved study prior to elective liver resection after informed consent. Three Tesla MRI was performed 3-7 days after surgery. Three-dimensional (3D) T1-weighted (W) volumetric interpolated breath-hold gradient echo (VIBE) sequences covering the liver were acquired before and 20 min after Gd-EOB-DTPA administration. The signal-to-noise ratio (SNR) was used to compare the uptake of Gd-EOB-DTPA in healthy liver tissue and in liver tissue adjacent to the resection border applying paired Student's t-test. Correlations with potential influencing factors (blood loss, duration of intervention, age, pre-existing liver diseases, postoperative change of resection surface) were calculated using Pearson's correlation coefficient. RESULTS: Before Gd-EOB-DTPA administration the SNR did not differ significantly (p = 0.052) between healthy liver tissue adjacent to untouched liver borders [59.55 ± 25.46 (SD)] and the liver tissue compartment close to the resection surface (63.31 ± 27.24). During the hepatocyte-specific phase, the surgical site showed a significantly (p = 0.04) lower SNR (69.44 ± 24.23) compared to the healthy site (78.45 ± 27.71). Dynamic analyses revealed a significantly lower increase (p = 0.008) in signal intensity in the healthy tissue compared to the resection border compartment. CONCLUSION: EOB-DTPA-enhanced MRI may have the potential to be an effective non-invasive tool for detecting hepatocyte damage after liver resection.
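The paired comparison of SNR values between the two tissue compartments can be sketched as follows. The per-patient numbers are hypothetical stand-ins (the abstract reports only means ± SD, not raw data); the statistic is the standard paired Student's t, computed here from scratch:

```python
import math

def paired_t(x, y):
    """Paired Student's t statistic and degrees of freedom for two
    equal-length samples measured on the same subjects."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1

# Hypothetical per-patient SNR values (NOT the study's data): healthy
# tissue vs. tissue at the resection border, hepatocyte-specific phase.
healthy  = [78.1, 80.5, 75.2, 82.0, 77.3, 79.8]
resected = [69.0, 71.4, 66.8, 73.2, 68.1, 70.5]

t, df = paired_t(healthy, resected)
```

Pairing within patients, as the study does, removes between-patient SNR variability from the comparison; an unpaired test on the same numbers would be far less sensitive.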
Abstract:
Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions with abnormal kurtosis values, also did not produce normality.

Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated; three variables (DNA concentration, shape and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components (1 or 2). These summary measures were used as predictors of disease severity in several proportional-odds logistic regression models.

The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+) and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results that mean levels and proportions of the components changed in the lower severity levels.
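One common choice for the bimodality coefficient mentioned above is Sarle's coefficient, (g₁² + 1) / (g₂ + 3(n−1)²/((n−2)(n−3))), with g₁ the sample skewness and g₂ the sample excess kurtosis; values above roughly 0.555 (the coefficient of a uniform distribution) hint at bimodality. The abstract does not specify which coefficient was used, so the sketch below, with simulated rather than morphometric data, is an assumption:

```python
import math
import random

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient from sample skewness and kurtosis."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    # Bias-corrected sample skewness and excess kurtosis.
    g1 = (m3 / m2 ** 1.5) * math.sqrt(n * (n - 1)) / (n - 2)
    g2 = ((n + 1) * (m4 / m2 ** 2 - 3) + 6) * (n - 1) / ((n - 2) * (n - 3))
    return (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

rng = random.Random(0)
unimodal = [rng.gauss(0, 1) for _ in range(400)]
bimodal = [rng.gauss(-3, 1) if rng.random() < 0.5 else rng.gauss(3, 1)
           for _ in range(400)]
```

A well-separated two-component mixture is platykurtic (negative excess kurtosis), which drives its coefficient above the 0.555 benchmark, while a Gaussian sample sits near 1/3.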
Abstract:
Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If a natural hierarchy exists in the data, multilevel analysis is an appropriate tool. Two examples are measurements on identical twins, and studies of symmetrical organs or appendages, as in ophthalmic studies. Although this type of matching appears ideal for purposes of comparison, analysis of the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results.

This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the Beta-Binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated by multilevel analysis will be used to examine the accuracy of this technique for analyzing this type of data.
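The Beta-Binomial generation step can be sketched as follows: each cluster draws its success probability from a Beta(a, b) distribution, and the standard identity icc = 1 / (a + b + 1) lets us solve for (a, b) from a target mean and intra-cluster correlation. The cluster sizes and parameter values below are arbitrary, and nothing here depends on MLwiN:

```python
import random

def beta_binomial_clusters(n_clusters, cluster_size, mean, icc, seed=0):
    """Generate clustered binary data with a given marginal mean and
    intra-cluster correlation via the Beta-Binomial construction."""
    rng = random.Random(seed)
    # Solve mean = a / (a + b) and icc = 1 / (a + b + 1) for (a, b).
    a = mean * (1 - icc) / icc
    b = (1 - mean) * (1 - icc) / icc
    data = []
    for _ in range(n_clusters):
        p = rng.betavariate(a, b)  # cluster-specific success probability
        data.append([1 if rng.random() < p else 0
                     for _ in range(cluster_size)])
    return data

# E.g. 500 "twin pairs" with marginal mean 0.3 and ICC 0.4.
data = beta_binomial_clusters(n_clusters=500, cluster_size=2,
                              mean=0.3, icc=0.4)
```

Fitting a multilevel model to such data and comparing the estimated ICC to the value fed into the generator is exactly the accuracy check the abstract describes.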
Abstract:
This paper empirically investigates the Bolton, Scheinkman, and Xiong (2006) hypothesis, according to which initial shareholders may give managers incentives to take actions that stimulate speculative bubbles. We test this hypothesis with data on up to 8,544 directors and up to 1,677 companies between 2004 and 2008. Using vesting time as a measure of the short-term performance weighting in CEO compensation, and various alternative measures of the extent of speculation, the findings support the hypothesis: vesting time decreases with more intensive speculation. The results are robust across various empirical model specifications.