878 results for Receiver Operating Characteristic
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Following study, participants received 2 tests. The 1st was a recognition test; the 2nd was designed to tap recollection. The objective was to examine performance on Test 1 conditional on Test 2 performance. In Experiment 1, contrary to process dissociation assumptions, exclusion errors better predicted subsequent recollection than did inclusion errors. In Experiments 2 and 3, with alternate questions posed on Test 2, words having high estimates of recollection with one question had high estimates of familiarity with the other question. Results supported the following: (a) the 2-test procedure has considerable potential for elucidating the relationship between recollection and familiarity; (b) there is substantial evidence for dependency between such processes when estimates are obtained using the process dissociation and remember-know procedures; and (c) order of information access appears to depend on the question posed to the memory system.
Abstract:
OBJECTIVES This study was designed to predict the response and prognosis after cardiac resynchronization therapy (CRT) in patients with end-stage heart failure (HF). BACKGROUND Cardiac resynchronization therapy improves HF symptoms, exercise capacity, and left ventricular (LV) function. Because not all patients respond, preimplantation identification of responders is needed. In the present study, response to CRT was predicted by the presence of LV dyssynchrony assessed by tissue Doppler imaging. Moreover, the prognostic value of LV dyssynchrony in patients undergoing CRT was assessed. METHODS Eighty-five patients with end-stage HF, QRS duration >120 ms, and left bundle-branch block were evaluated by tissue Doppler imaging before CRT. At baseline and six months follow-up, New York Heart Association functional class, quality of life and 6-min walking distance, LV volumes, and LV ejection fraction were determined. Events (death, hospitalization for decompensated HF) were obtained during one-year follow-up. RESULTS Responders (74%) and nonresponders (26%) had comparable baseline characteristics, except for a larger dyssynchrony in responders (87 +/- 49 ms vs. 35 +/- 20 ms, p < 0.01). Receiver-operator characteristic curve analysis demonstrated that an optimal cutoff value of 65 ms for LV dyssynchrony yielded a sensitivity and specificity of 80% to predict clinical improvement and of 92% to predict LV reverse remodeling. Patients with dyssynchrony ≥65 ms had an excellent prognosis (6% event rate) after CRT as compared with a 50% event rate in patients with dyssynchrony <65 ms (p < 0.001). CONCLUSIONS Patients with LV dyssynchrony ≥65 ms respond to CRT and have an excellent prognosis after CRT. (C) 2004 by the American College of Cardiology Foundation.
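The cutoff analysis described in this abstract can be sketched in a few lines. A common way to pick an "optimal" ROC cutoff is to maximize Youden's J (sensitivity + specificity − 1); whether the study used exactly this criterion is an assumption, and the dyssynchrony values below are invented for illustration, not patient data.

```python
# Illustrative ROC cutoff selection via Youden's J = sens + spec - 1.
# labels: 1 = responder, 0 = non-responder; higher values predict response.

def youden_cutoff(values, labels):
    """Return (cutoff, sensitivity, specificity) maximizing Youden's J."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best = (None, 0.0, 0.0, -1.0)
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in pos) / len(pos)   # true positive rate
        spec = sum(v < cut for v in neg) / len(neg)    # true negative rate
        j = sens + spec - 1
        if j > best[3]:
            best = (cut, sens, spec, j)
    return best[:3]

# Hypothetical LV dyssynchrony values (ms) with response labels.
values = [30, 40, 50, 70, 80, 90, 100, 20, 35, 60]
labels = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1]
cut, sens, spec = youden_cutoff(values, labels)
```

With this toy data the two groups separate cleanly, so the selected cutoff achieves perfect sensitivity and specificity; real data, as in the study, trades one off against the other.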
Abstract:
Clinical evaluation of arterial patency in acute ST-elevation myocardial infarction (STEMI) is unreliable. We sought to identify infarction and predict infarct-related artery patency measured by the Thrombolysis In Myocardial Infarction (TIMI) score with qualitative and quantitative intravenous myocardial contrast echocardiography (MCE). Thirty-four patients with suspected STEMI underwent MCE before emergency angiography and planned angioplasty. MCE was performed with harmonic imaging and variable triggering intervals during intravenous administration of Optison. Myocardial perfusion was quantified offline, fitting an exponential function to contrast intensity at various pulsing intervals. Plateau myocardial contrast intensity (A), rate of rise (beta), and myocardial flow (Q = A x beta) were assessed in 6 segments. Qualitative assessment of perfusion defects was sensitive for the diagnosis of infarction (sensitivity 93%) and did not differ between anterior and inferior infarctions. However, qualitative assessment had only moderate specificity (50%), and perfusion defects were unrelated to TIMI flow. In patients with STEMI, quantitatively derived myocardial blood flow Q (A x beta) was significantly lower in territories subtended by an artery with impaired (TIMI 0 to 2) flow than those territories supplied by a reperfused artery with TIMI 3 flow (10.2 +/- 9.1 vs 44.3 +/- 50.4, p = 0.03). Quantitative flow was also lower in segments with impaired flow in the subtending artery compared with normal patients with TIMI 3 flow (42.8 +/- 36.6, p = 0.006) and all segments with TIMI 3 flow (35.3 +/- 32.9, p = 0.018). A receiver-operator characteristic curve derived cut-off Q value of
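The perfusion model in this abstract fits contrast intensity at pulsing interval t with y(t) = A·(1 − exp(−β·t)) and reports flow as Q = A·β. As a sketch, the least-squares fit below uses a coarse grid search rather than whatever nonlinear optimizer the study used (an assumption), and the intensity values are synthetic, not patient measurements.

```python
# Sketch of fitting the MCE replenishment curve y(t) = A*(1 - exp(-beta*t))
# and deriving myocardial flow Q = A * beta. Synthetic data, coarse grid fit.
import math

def predict(A, beta, t):
    """Model contrast intensity at pulsing interval t."""
    return A * (1.0 - math.exp(-beta * t))

def fit_grid(times, intensities, A_grid, beta_grid):
    """Least-squares fit of (A, beta) over a parameter grid (illustrative)."""
    best, best_err = None, float("inf")
    for A in A_grid:
        for beta in beta_grid:
            err = sum((predict(A, beta, t) - y) ** 2
                      for t, y in zip(times, intensities))
            if err < best_err:
                best, best_err = (A, beta), err
    return best

times = [1, 2, 3, 4, 5, 6]              # pulsing intervals (cardiac cycles)
ys = [predict(40.0, 0.8, t) for t in times]   # noiseless synthetic curve
A, beta = fit_grid(times, ys, [30, 35, 40, 45], [0.4, 0.6, 0.8, 1.0])
Q = A * beta                            # myocardial flow estimate
```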
Abstract:
Predicting the various responses of different species to changes in landscape structure is a formidable challenge to landscape ecology. Based on expert knowledge and landscape ecological theory, we develop five competing a priori models for predicting the presence/absence of the Koala (Phascolarctos cinereus) in Noosa Shire, south-east Queensland (Australia). A priori predictions were nested within three levels of ecological organization: in situ (site level) habitat (< 1 ha), patch level (100 ha) and landscape level (100-1000 ha). To test the models, Koala surveys and habitat surveys (n = 245) were conducted across the habitat mosaic. After taking into account tree species preferences, the patch and landscape context, and the neighbourhood effect of adjacent present sites, we applied logistic regression and hierarchical partitioning analyses to rank the alternative models and the explanatory variables. The strongest support was for a multilevel model, with Koala presence best predicted by the proportion of the landscape occupied by high quality habitat, the neighbourhood effect, the mean nearest neighbour distance between forest patches, the density of forest patches and the density of sealed roads. When tested against independent data (n = 105) using a receiver operator characteristic curve, the multilevel model performed moderately well. The study is consistent with recent assertions that habitat loss is the major driver of population decline; however, landscape configuration and roads have an important effect that needs to be incorporated into Koala conservation strategies.
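The multilevel model above is a logistic regression, so a fitted model turns the listed predictors into a presence probability via p = 1 / (1 + exp(−(b₀ + Σ bᵢxᵢ))). The coefficients and predictor values below are hypothetical placeholders, not the study's fitted estimates.

```python
# Minimal sketch of a fitted logistic presence/absence model.
# Coefficients are invented; signs loosely mirror the abstract
# (habitat helps, roads and patch isolation hurt).
import math

def presence_probability(intercept, coefs, predictors):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: proportion of high-quality habitat,
# neighbourhood effect, mean nearest-neighbour distance (km),
# forest patch density, sealed road density.
coefs = [4.0, 1.5, -0.8, 0.3, -1.2]
p = presence_probability(-1.0, coefs, [0.6, 1.0, 0.5, 2.0, 0.4])
```

Thresholding p (e.g. at a cutoff chosen from the ROC curve mentioned in the abstract) converts the probability into the presence/absence prediction the model was validated on.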
Abstract:
The detection of signals in the presence of noise is one of the most basic and important problems encountered by communication engineers. Although the literature abounds with analyses of communications in Gaussian noise, relatively little work has appeared dealing with communications in non-Gaussian noise. In this thesis several digital communication systems disturbed by non-Gaussian noise are analysed. The thesis is divided into two main parts. In the first part, a filtered-Poisson impulse noise model is utilized to calculate error probability characteristics of a linear receiver operating in additive impulsive noise. Firstly, the effect that non-Gaussian interference has on the performance of a receiver that has been optimized for Gaussian noise is determined. The factors affecting the choice of modulation scheme so as to minimize the detrimental effects of non-Gaussian noise are then discussed. In the second part, a new theoretical model of impulsive noise that fits well with the observed statistics of noise in radio channels below 100 MHz has been developed. This empirical noise model is applied to the detection of known signals in the presence of noise to determine the optimal receiver structure. The performance of such a detector has been assessed and is found to depend on the signal shape, the time-bandwidth product, as well as the signal-to-noise ratio. The optimal signal to minimize the probability of error of the detector is determined. Attention is then turned to the problem of threshold detection. Detector structure, large sample performance and robustness against errors in the detector parameters are examined. Finally, estimators of such parameters as the occurrence of an impulse and the parameters in an empirical noise model are developed for the case of an adaptive system with slowly varying conditions.
Abstract:
A distinct feature of several recent models of contrast masking is that detecting mechanisms are divisively inhibited by a broadly tuned ‘gain pool’ of narrow-band spatial pattern mechanisms. The contrast gain control provided by this ‘cross-channel’ architecture achieves contrast normalisation of early pattern mechanisms, which is important for keeping them within the non-saturating part of their biological operating characteristic. These models superseded earlier ‘within-channel’ models, which had supposed that masking arose from direct stimulation of the detecting mechanism by the mask. To reveal the extent of masking, I measured the levels produced with large ranges of pattern spatial relationships that have not been explored before. Substantial interactions between channels tuned to different orientations and spatial frequencies were found. Differences in the masking levels produced with single and multiple component mask patterns provided insights into the summation rules within the gain pool. A widely used cross-channel masking model was tested on these data and was found to perform poorly. The model was developed, and a version in which linear summation was allowed between all components within the gain pool, with the exception of the self-suppressing route, typically provided the best account of the data. Subsequently, an adaptation paradigm was used to probe the processes underlying pooled responses in masking. This delivered less insight into the pooling than the other studies, and areas were identified that require investigation for a new unifying model of masking and adaptation. In further experiments, levels of cross-channel masking were found to be greatly influenced by the spatio-temporal tuning of the channels involved. Old masking experiments and ideas relying on within-channel models were re-evaluated in terms of contemporary cross-channel models (e.g. estimations of channel bandwidths from orientation masking functions) and this led to different conclusions than those originally arrived at. The investigation of effects with spatio-temporally superimposed patterns is focussed upon throughout this work, though it is shown how these enquiries might be extended to investigate effects across spatial and temporal position.
Abstract:
On the basis of a convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM) for an intact two-layer autoassociative Hopfield network, optimal receiver operating characteristics (ROCs) have been derived analytically. A method is introduced for explicitly taking into account the a priori probabilities of alternative hypotheses on the structure of the information initiating memory trace retrieval, together with modified ROCs (mROCs: a posteriori probabilities of correct recall vs. false alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of cues used in the corresponding experiments may be estimated. It has been found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
Abstract:
We develop, implement and study a new Bayesian spatial mixture model (BSMM). The proposed BSMM allows for spatial structure in the binary activation indicators through a latent thresholded Gaussian Markov random field. We develop a Gibbs (MCMC) sampler to perform posterior inference on the model parameters, which then allows us to assess the posterior probabilities of activation for each voxel. One purpose of this article is to compare the HJ model and the BSMM in terms of receiver operating characteristic (ROC) curves. We also consider the accuracy of the spatial mixture model and the BSMM for estimation of the size of the activation region in terms of bias, variance and mean squared error. We perform a simulation study to examine the aforementioned characteristics under a variety of configurations of the spatial mixture model and the BSMM, both as the size of the region changes and as the magnitude of activation changes.
Abstract:
OBJECTIVE: To evaluate the validity of hemoglobin A1C (A1C) as a diagnostic tool for type 2 diabetes and to determine the most appropriate A1C cutoff point for diagnosis in a sample of Haitian-Americans. SUBJECTS AND METHODS: Subjects (n = 128) were recruited from Miami-Dade and Broward counties, FL. Receiver operating characteristics (ROC) analysis was run in order to measure sensitivity and specificity of A1C for detecting diabetes at different cutoff points. RESULTS: The area under the ROC curve was 0.86 using fasting plasma glucose ≥ 7.0 mmol/L as the gold standard. An A1C cutoff point of 6.26% had sensitivity of 80% and specificity of 74%, whereas an A1C cutoff point of 6.50% (recommended by the American Diabetes Association – ADA) had sensitivity of 73% and specificity of 89%. CONCLUSIONS: A1C is a reliable alternative to fasting plasma glucose in detecting diabetes in this sample of Haitian-Americans. A cutoff point of 6.26% was the optimum value to detect type 2 diabetes.
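The area under the ROC curve reported here (0.86) has a useful probabilistic reading: it is the probability that a randomly chosen diseased subject scores higher than a randomly chosen non-diseased one (the Mann-Whitney statistic, with ties counting half). The A1C values below are invented to illustrate the computation, not the study's data.

```python
# AUC as the Mann-Whitney probability P(score_pos > score_neg),
# with ties counted as 0.5. Hypothetical A1C values (%).

def auc(pos, neg):
    """Empirical AUC over all positive/negative pairs."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

pos = [7.1, 6.8, 6.4, 5.9]   # subjects with diabetes (invented)
neg = [5.5, 6.0, 6.2, 5.8]   # subjects without diabetes (invented)
a = auc(pos, neg)
```

An AUC near 0.5 would mean the marker is no better than chance; values approaching 1.0, like the study's 0.86, indicate good discrimination.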
Abstract:
Acknowledgements This study was supported by a Medical Research Council UK grant (grant number G0800901), as a sub-study of Nitrites in Acute Myocardial Infarction. Thanks are due to Roger Staff, for invaluable advice regarding receiver operator characteristic analysis.
Abstract:
OBJECTIVES: To report on the responsiveness testing and clinical utility of the 12-item Geriatric Self-Efficacy Index for Urinary Incontinence (GSE-UI). DESIGN: Prospective cohort study. SETTING: Six urinary incontinence (UI) outpatient clinics in Quebec, Canada. PARTICIPANTS: Community-dwelling incontinent adults aged 65 and older. MEASUREMENTS: The abridged 12-item GSE-UI, measuring older adults' level of confidence for preventing urine loss, was administered to all new consecutive incontinent patients 1 week before their initial clinic visit, at baseline, and 3 months posttreatment. At follow-up, a positive rating of improvement in UI was ascertained from patients and their physicians using the Patient's and Clinician's Global Impression of Improvement scales, respectively. Responsiveness of the GSE-UI was calculated using Guyatt's change index. Its clinical utility was determined using receiver operating characteristic curves. RESULTS: Eighty-nine of 228 eligible patients (39.0%) participated (mean age 72.6±5.8, range 65–90). At 3-month follow-up, 22.5% of patients were very much better, and 41.6% were a little or much better. Guyatt's change index was 2.6 for patients who changed by a clinically meaningful amount and 1.5 for patients having experienced any level of improvement. An improvement of 14 points on the 12-item GSE-UI had a sensitivity of 75.1% and a specificity of 78.2% for detecting clinically meaningful changes in UI status. Mean GSE-UI scores varied according to improvement status (P<.001) and correlated with changes in quality-of-life scores (r=0.7, P<.001) and reductions in UI episodes (r=0.4, P=.004). CONCLUSION: The GSE-UI is responsive and clinically useful.
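Guyatt's change (responsiveness) index used above is commonly computed as the mean score change in patients who improved divided by the standard deviation of score change in stable patients; whether the study used exactly this formulation is an assumption, and the score changes below are invented for illustration.

```python
# One common formulation of Guyatt's responsiveness index:
# mean change among improved patients / SD of change among stable patients.
import statistics

def guyatt_index(change_improved, change_stable):
    """Ratio of signal (mean improvement) to noise (variability in
    stable patients); larger values mean a more responsive instrument."""
    return statistics.mean(change_improved) / statistics.stdev(change_stable)

improved = [18, 22, 15, 25, 20]    # hypothetical GSE-UI score changes
stable = [2, -3, 4, -1, 0, 3]      # hypothetical changes in stable patients
g = guyatt_index(improved, stable)
```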
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy, have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km2 sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to capture months of the year which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and conservation status of cetaceans, before applying as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier to the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot view of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
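The least-cost routing step above is a shortest-path search over the resistance surface; a standard way to compute it is Dijkstra's algorithm on the raster grid. The sketch below uses a tiny invented 4-connected grid rather than the dissertation's cetacean-derived cost surfaces (which GIS tools typically traverse with 8-connected moves and distance weighting).

```python
# Sketch of least-cost routing over a cost (resistance) surface using
# Dijkstra's algorithm on a 4-connected grid. Entering a cell pays that
# cell's cost; the start cell's cost is also paid. Grid values invented.
import heapq

def least_cost_path(cost, start, goal):
    """Return the total cost of the cheapest path from start to goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 1, 5],
        [5, 1, 5],
        [5, 1, 1]]
total = least_cost_path(grid, (0, 0), (2, 2))
```

Scaling the conservation component of the cost grid by a multiplier, as the chapter describes, shifts which routes come out cheapest and so traces out the conservation-versus-distance tradeoff curve.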
Essential to the input of these decision frameworks are distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred to estimate potential biological removal, per Marine Mammal Protection Act requirements in the U.S., all the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
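The Conventional Distance Sampling estimates above rest on a simple estimator: density is the number of sightings divided by the effectively surveyed area, D = n / (2wL·p̂), where L is total transect length, w the truncation half-width, and p̂ the estimated detection probability within w. The numbers below are illustrative, not the Raincoast survey's values.

```python
# Sketch of the Conventional Distance Sampling (CDS) density estimator:
# D = n / (2 * w * L * p). Illustrative values only.

def cds_density(n_sightings, total_line_km, halfwidth_km, detect_prob):
    """Animals per km^2 from line-transect counts."""
    area_surveyed = 2.0 * halfwidth_km * total_line_km   # strip area, km^2
    return n_sightings / (area_surveyed * detect_prob)

d = cds_density(n_sightings=48, total_line_km=400.0,
                halfwidth_km=0.5, detect_prob=0.6)
```

Density Surface Modelling replaces the single stratum-wide estimate with a spatial model of counts against environmental predictors, which is why the chapter reports it as more precise and more useful for marine spatial planning.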
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent for gaming conservation, industry and stakeholders towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.