930 results for false negative rate


Relevance: 80.00%

Abstract:

The 9/11 Act mandates the inspection of 100% of cargo shipments entering the U.S. by 2012 and 100% inspection of air cargo by March 2010. So far, only 5% of inbound shipping containers are inspected thoroughly while air cargo inspections have fared better at 50%. Government officials have admitted that these milestones cannot be met since the appropriate technology does not exist. This research presents a novel planar solid phase microextraction (PSPME) device with enhanced surface area and capacity for collection of the volatile chemical signatures in air that are emitted from illicit compounds for direct introduction into ion mobility spectrometers (IMS) for detection. These IMS detectors are widely used to detect particles of illicit substances and do not have to be adapted specifically to this technology. For static extractions, PDMS and sol-gel PDMS PSPME devices provide significant increases in sensitivity over conventional fiber SPME. Results show a 50–400 times increase in mass detected of piperonal and a 2–4 times increase for TNT. In a blind study of 6 cases suspected to contain varying amounts of MDMA, PSPME-IMS correctly detected 5 positive cases with no false positives or negatives. One of these cases had minimal amounts of MDMA resulting in a false negative response for fiber SPME-IMS. A La (dihed) phase chemistry has shown an increase in the extraction efficiency of TNT and 2,4-DNT and enhanced retention over time. An alternative PSPME device was also developed for the rapid (seconds) dynamic sampling and preconcentration of large volumes of air for direct thermal desorption into an IMS. This device affords high extraction efficiencies due to strong retention properties under ambient conditions resulting in ppt detection limits when 3.5 L of air are sampled over the course of 10 seconds. 
Dynamic PSPME was used to sample the headspace over the following: MDMA tablets (12–40 ng detected of piperonal), high explosives (Pentolite) (0.6 ng detected of TNT), and several smokeless powders (26–35 ng of 2,4-DNT and 11–74 ng DPA detected). PSPME-IMS technology is flexible to end-user needs, is low-cost, rapid, sensitive, easy to use, easy to implement, and effective.
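
As a rough illustration of the detection-limit scale mentioned above, a detected mass in a sampled air volume can be converted to a part-per-trillion (v/v) mixing ratio using the ideal-gas molar volume. This is a back-of-envelope sketch, not the dissertation's calibration procedure; the molar volume (24.45 L/mol at ~25 °C) and the use of TNT's molar mass are illustrative assumptions:

```python
def ppt_by_volume(mass_ng, molar_mass_g_mol, air_volume_L, molar_volume_L=24.45):
    """Mixing ratio in parts-per-trillion by volume, assuming ideal gas at ~25 C."""
    moles_analyte = mass_ng * 1e-9 / molar_mass_g_mol
    moles_air = air_volume_L / molar_volume_L
    return moles_analyte / moles_air * 1e12

# e.g. 0.6 ng of TNT (227.13 g/mol) detected in a 3.5 L air sample
print(round(ppt_by_volume(0.6, 227.13, 3.5), 1))  # ~18.5 ppt
```

A sub-nanogram mass in a few litres of air indeed lands in the low-tens-of-ppt range, consistent with the ppt detection limits claimed.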

Relevance: 80.00%

Abstract:

This dissertation develops an innovative approach towards less-constrained iris biometrics. Two major contributions are made in this research endeavor: (1) an award-winning segmentation algorithm for the less-constrained environment, where image acquisition is made of subjects on the move and under visible lighting conditions, and (2) a pioneering iris biometrics method coupling segmentation and recognition of the iris based on video of moving persons under different acquisition scenarios. The first part of the dissertation introduces a robust and fast segmentation approach using still images contained in the UBIRIS (version 2) noisy iris database. The results show accuracy estimated at 98% when using 500 randomly selected images from the UBIRIS.v2 partial database, and at 97% in the Noisy Iris Challenge Evaluation (NICE.I), an international competition that involved 97 participants from 35 countries, ranking this research group in sixth position. This accuracy is achieved at a processing speed nearing real time. The second part of this dissertation presents an innovative segmentation and recognition approach using video-based iris images. Following the segmentation stage, which delineates the iris region through a novel segmentation strategy, pioneering experiments on the recognition stage of less-constrained video iris biometrics were carried out. In video-based, less-constrained iris recognition, the test (subject) iris videos/images and the enrolled iris images are acquired with different acquisition systems. In the matching step, the verification/identification result was obtained by comparing the similarity distance of the encoded signature from the test images with each signature in the dataset from the enrolled iris images. With the improvements gained, the results proved to be highly accurate under the more challenging unconstrained environment.
This has led to a false acceptance rate (FAR) of 0% and a false rejection rate (FRR) of 17.64% for 85 tested users with 305 test images from the video, which shows great promise and high practical implications for iris biometrics research and system design.
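
The FAR and FRR figures reported above follow from comparing similarity distances against an acceptance threshold. A minimal sketch of that computation, assuming distance-based matching where smaller means more similar (the function name, threshold, and toy distances below are invented for illustration, not the dissertation's data):

```python
def far_frr(genuine_dists, impostor_dists, threshold):
    """Accept a comparison when distance <= threshold.
    FAR: fraction of impostor comparisons wrongly accepted.
    FRR: fraction of genuine comparisons wrongly rejected."""
    far = sum(d <= threshold for d in impostor_dists) / len(impostor_dists)
    frr = sum(d > threshold for d in genuine_dists) / len(genuine_dists)
    return far, frr

# toy similarity distances, not real iris scores
genuine = [0.20, 0.25, 0.31, 0.48]
impostor = [0.55, 0.60, 0.72, 0.80]
print(far_frr(genuine, impostor, threshold=0.45))  # (0.0, 0.25)
```

Lowering the threshold drives FAR toward 0% at the cost of a higher FRR, which is the tradeoff behind the reported 0% FAR / 17.64% FRR operating point.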

Relevance: 80.00%

Abstract:

Weakly electric fish produce a dual-function electric signal that makes them ideal models for the study of sensory computation and signal evolution. This signal, the electric organ discharge (EOD), is used for communication and navigation. In some families of gymnotiform electric fish, the EOD is a dynamic signal that increases in amplitude during social interactions. Amplitude increase could facilitate communication by increasing the likelihood of being sensed by others or by impressing prospective mates or rivals. Conversely, by increasing its signal amplitude a fish might increase its sensitivity to objects by lowering its electrolocation detection threshold. To determine how EOD modulations elicited in the social context affect electrolocation, I developed an automated and fast method for measuring electroreception thresholds using a classical conditioning paradigm. This method employs a moving shelter tube, which these fish occupy at rest during the day, paired with an electrical stimulus. A custom-built and programmed robotic system presents the electrical stimulus to the fish, slides the shelter tube, requiring them to follow, and records video of their movements. Electric fish of the genus Sternopygus were trained to respond to a resistive stimulus on this apparatus in 2 days. The motion detection algorithm correctly identifies the responses 91% of the time, with a false positive rate of only 4%. This system allows for a large number of trials, decreasing the amount of time needed to determine behavioral electroreception thresholds. This novel method enables the evaluation of the evolutionary interplay between two conflicting sensory forces, social communication and navigation.
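
The 91% correct-identification and 4% false-positive figures are the two standard rates from a confusion-matrix tally over labeled trials. A minimal sketch; the trial counts below are invented to mirror the reported rates, not the study's actual data:

```python
def rates(trials):
    """trials: list of (stimulus_present: bool, response_detected: bool).
    Returns (hit rate, false positive rate)."""
    hits = sum(1 for s, r in trials if s and r)
    present = sum(1 for s, _ in trials if s)
    fps = sum(1 for s, r in trials if not s and r)
    absent = sum(1 for s, _ in trials if not s)
    return hits / present, fps / absent

# hypothetical tally: 100 stimulus trials, 100 catch trials
trials = [(True, True)] * 91 + [(True, False)] * 9 + \
         [(False, False)] * 96 + [(False, True)] * 4
print(rates(trials))  # (0.91, 0.04)
```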

Relevance: 80.00%

Abstract:

Ensemble stream modeling and data-cleaning are sensor information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush or natural forest-fire event, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. 
The ensemble framework for data-cleaning, and the enhancements to quantify the quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor streams, led to the formation of quality-labeled streams for sensor-enabled applications. This further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
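
The F-measure used for scoring above is conventionally the (weighted) harmonic mean of precision and recall, computable directly from confusion-matrix counts. A minimal sketch with hypothetical counts:

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-beta score: harmonic mean of precision and recall (beta=1 gives F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# hypothetical counts for detected fire events
print(round(f_measure(tp=80, fp=20, fn=10), 4))  # 0.8421
```

Note F1 reduces to 2*TP / (2*TP + FP + FN), so it penalizes false alarms (FP) and misses (FN) symmetrically.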

Relevance: 80.00%

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our national highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. 
In addition, DWT was found to be able to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights on the design of AID for arterial street applications.
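
The DR, FAR, and MTTD metrics compared above can be sketched over discrete analysis intervals. Definitions of FAR vary across AID studies; the per-interval variant below and all counts are illustrative assumptions, not this dissertation's exact formulation:

```python
def aid_performance(incident_intervals, alarm_intervals, n_intervals):
    """DR: share of incidents whose interval raised an alarm.
    FAR: share of incident-free intervals that still raised an alarm
    (one common per-interval definition)."""
    detected = incident_intervals & alarm_intervals
    dr = len(detected) / len(incident_intervals)
    far = len(alarm_intervals - incident_intervals) / (n_intervals - len(incident_intervals))
    return dr, far

def mean_time_to_detect(pairs):
    """pairs: (incident_start_s, first_alarm_s) for each detected incident."""
    return sum(a - s for s, a in pairs) / len(pairs)

# toy run: incidents in intervals 3, 10, 20; alarms in 3, 10, 15
dr, far = aid_performance({3, 10, 20}, {3, 10, 15}, n_intervals=100)
print(round(dr, 2), round(far, 4))  # 0.67 0.0103
```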

Relevance: 80.00%

Abstract:

Funding — Forest Enterprise Scotland and the University of Aberdeen provided funding for the project. The Carnegie Trust supported the lead author, E. McHenry, in this research through the award of a tuition fees bursary.

Relevance: 80.00%

Abstract:

Colorectal cancer is the third most commonly diagnosed cancer, accounting for 53,219 deaths in 2007 and an estimated 146,970 new cases in the USA during 2009. The combination of FDG PET and CT has proven to be of great benefit for the assessment of colorectal cancer. This is most evident in the detection of occult metastases, particularly intra- or extrahepatic sites of disease, that would preclude a curative procedure or in the detection of local recurrence. FDG PET is generally not used for the diagnosis of colorectal cancer although there are circumstances where PET/CT may make the initial diagnosis, particularly with its more widespread use. In addition, precancerous adenomatous polyps can also be detected incidentally on whole-body images performed for other indications; sensitivity increases with increasing polyp size. False-negative FDG PET findings have been reported with mucinous adenocarcinoma, and false-positive findings have been reported due to inflammatory conditions such as diverticulitis, colitis, and postoperative scarring. Therefore, detailed evaluation of the CT component of a PET/CT exam, including assessment of the entire colon, is essential.

Relevance: 80.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy, have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km2 sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to the study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to the conservation of cetaceans versus cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
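
Least-cost routing over a resistance surface is typically computed with a shortest-path algorithm such as Dijkstra's on the grid graph. A self-contained sketch, assuming a simple 4-neighbour grid where each cell's value stands in for cetacean density/conservation cost (the toy surface is invented, not the chapter's data):

```python
import heapq

def least_cost_route(cost, start, goal):
    """Dijkstra over a 2D cost grid; entering a cell pays that cell's
    cost (the start cell is charged too). Returns (path, total cost)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist[node]:
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# toy surface: the middle column is high "whale cost", so the
# least-cost route detours around it rather than crossing it
surface = [[1, 9, 1],
           [1, 9, 1],
           [1, 1, 1]]
route, total = least_cost_route(surface, (0, 0), (0, 2))
print(total)  # 7 (detour) vs 11 straight across
```

Multiplying the conservation layer by a scalar before summing with a distance layer, as the chapter describes, shifts the optimum along the conservation-versus-distance tradeoff curve.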

Essential inputs to these decision frameworks are the species distributions. The two preceding chapters comprise species distribution models from the two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially the distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
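
Choosing the ROC threshold that minimizes the combined false positive and false negative rates can be sketched as a scan over candidate thresholds taken from the observed scores. The function name and toy scores below are illustrative, not the chapter's implementation:

```python
def best_threshold(scores, labels):
    """Return the threshold minimising FPR + FNR, predicting presence
    when score >= threshold (equivalent to maximising Youden's J)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_err = None, float("inf")
    for t in sorted(set(scores)):
        fnr = sum(s < t for s in pos) / len(pos)   # missed presences
        fpr = sum(s >= t for s in neg) / len(neg)  # false presences
        if fnr + fpr < best_err:
            best_err, best_t = fnr + fpr, t
    return best_t

# toy occurrence probabilities and presence/absence labels
print(best_threshold([0.1, 0.2, 0.4, 0.6, 0.7, 0.9],
                     [0,   0,   0,   1,   1,   1]))  # 0.6
```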

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
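
Conventional Distance Sampling's single per-stratum estimate follows the textbook line-transect estimator: density = n / (2wL·p), scaled by stratum area for abundance. This sketch uses the standard formula with invented numbers, not Raincoast's survey data:

```python
def cds_abundance(n, line_length_km, strip_halfwidth_km, p_detect, stratum_area_km2):
    """Conventional distance sampling: n detections over transect length L,
    truncation half-width w, and estimated detection probability p."""
    density = n / (2 * strip_halfwidth_km * line_length_km * p_detect)
    return density, density * stratum_area_km2

# hypothetical stratum: 60 sightings over 500 km of effort
d, N = cds_abundance(n=60, line_length_km=500, strip_halfwidth_km=1.0,
                     p_detect=0.6, stratum_area_km2=3000)
print(round(d, 4), round(N, 1))  # 0.1 animals/km^2, ~300 animals
```

DSM replaces the single stratum-wide density with a spatial model of per-segment counts against environmental covariates, which is where its extra precision comes from.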

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation, industry, and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.

Relevance: 80.00%

Abstract:

RATIONALE: Limitations in methods for the rapid diagnosis of hospital-acquired infections often delay initiation of effective antimicrobial therapy. New diagnostic approaches offer potential clinical and cost-related improvements in the management of these infections. OBJECTIVES: We developed a decision modeling framework to assess the potential cost-effectiveness of a rapid biomarker assay to identify hospital-acquired infection in high-risk patients earlier than standard diagnostic testing. METHODS: The framework includes parameters representing rates of infection, rates of delayed appropriate therapy, and impact of delayed therapy on mortality, along with assumptions about diagnostic test characteristics and their impact on delayed therapy and length of stay. Parameter estimates were based on contemporary, published studies and supplemented with data from a four-site, observational, clinical study. Extensive sensitivity analyses were performed. The base-case analysis assumed 17.6% of ventilated patients and 11.2% of nonventilated patients develop hospital-acquired infection and that 28.7% of patients with hospital-acquired infection experience delays in appropriate antibiotic therapy with standard care. We assumed this percentage decreased by 50% (to 14.4%) among patients with true-positive results and increased by 50% (to 43.1%) among patients with false-negative results using a hypothetical biomarker assay. Cost of testing was set at $110/d. MEASUREMENTS AND MAIN RESULTS: In the base-case analysis, among ventilated patients, daily diagnostic testing starting on admission reduced inpatient mortality from 12.3 to 11.9% and increased mean costs by $1,640 per patient, resulting in an incremental cost-effectiveness ratio of $21,389 per life-year saved. Among nonventilated patients, inpatient mortality decreased from 7.3 to 7.1% and costs increased by $1,381 with diagnostic testing. The resulting incremental cost-effectiveness ratio was $42,325 per life-year saved. 
Threshold analyses revealed the probabilities of developing hospital-acquired infection in ventilated and nonventilated patients could be as low as 8.4 and 9.8%, respectively, to maintain incremental cost-effectiveness ratios less than $50,000 per life-year saved. CONCLUSIONS: Development and use of serial diagnostic testing that reduces the proportion of patients with delays in appropriate antibiotic therapy for hospital-acquired infections could reduce inpatient mortality. The model presented here offers a cost-effectiveness framework for future test development.
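
The incremental cost-effectiveness ratios above are the standard ratio of incremental cost to incremental effect. A minimal sketch with hypothetical per-patient numbers (not the study's model outputs):

```python
def icer(cost_new, cost_std, effect_new, effect_std):
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra unit of effect (here, per life-year saved)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# hypothetical: testing adds $1,640 in cost and 0.25 life-years per patient
print(icer(11640.0, 10000.0, 12.25, 12.0))  # 6560.0 $/life-year
```

A strategy is deemed cost-effective when this ratio falls below a willingness-to-pay threshold, such as the $50,000 per life-year used in the threshold analyses.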

Relevance: 80.00%

Abstract:

BACKGROUND: Limited information exists on the effects of temporary functional deafferentation (TFD) on brain activity after peripheral nerve block (PNB) in healthy humans. Increasingly, resting-state functional connectivity (RSFC) is being used to study brain activity and organization. The purpose of this study was to test the hypothesis that TFD through PNB will influence changes in RSFC plasticity in central sensorimotor functional brain networks in healthy human participants. METHODS: The authors achieved TFD using a supraclavicular PNB model with 10 healthy human participants undergoing functional connectivity magnetic resonance imaging before PNB, during active PNB, and during PNB recovery. RSFC differences among study conditions were determined by multiple-comparison-corrected (false discovery rate-corrected P value less than 0.05) random-effects, between-condition, and seed-to-voxel analyses using the left and right manual motor regions. RESULTS: The results of this pilot study demonstrated disruption of interhemispheric left-to-right manual motor region RSFC (e.g., mean Fisher-transformed z [effect size] at pre-PNB 1.05 vs. 0.55 during PNB) but preservation of intrahemispheric RSFC of these regions during PNB. Additionally, there was increased RSFC between the left motor region of interest (PNB-affected area) and bilateral higher order visual cortex regions after clinical PNB resolution (e.g., Fisher z between left motor region of interest and right and left lingual gyrus regions during PNB, -0.1 and -0.6 vs. 0.22 and 0.18 after PNB resolution, respectively). CONCLUSIONS: This pilot study provides evidence that PNB has features consistent with other models of deafferentation, making it a potentially useful approach to investigate brain plasticity. 
The findings provide insight into RSFC of sensorimotor functional brain networks during PNB and PNB recovery and support modulation of the sensory-motor integration feedback loop as a mechanism for explaining the behavioral correlates of peripherally induced TFD through PNB.
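
The Fisher-transformed z values reported above (e.g., 1.05 vs. 0.55) are the variance-stabilizing r-to-z transform applied to correlation-based connectivity, z = atanh(r). A minimal sketch:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform, commonly used to compare
    correlation-based functional connectivity across conditions."""
    return math.atanh(r)

def inverse_fisher_z(z):
    """Map a Fisher z back to a correlation coefficient."""
    return math.tanh(z)

# the pre-PNB interhemispheric value from the abstract corresponds to
print(round(inverse_fisher_z(1.05), 3))  # r of about 0.782
```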

Relevance: 80.00%

Abstract:

Background: Autism spectrum disorder (ASD) is multifactorial and is likely the result of complex interactions between multiple environmental and genetic factors. Recently, it has been suggested that each symptom cluster of the disorder, such as poor social communication, may be mediated by different genetic influences. Genes in the oxytocin pathway, which mediates social behaviours in humans, have been studied, with single nucleotide polymorphisms (SNPs) in the oxytocin receptor gene (OXTR) being implicated in ASD. This thesis examines the presence of different oxytocin receptor genotypes and their associations with ASD and the resulting social communication deficits. Methods: The relationship between four OXTR variants and ASD was evaluated in 607 ASD simplex (SPX) families. Cases were compared to their unaffected siblings using a conditional logistic approach. Odds ratios and associated 95 percent confidence intervals were obtained. A second sample of 235 individuals with a diagnosis of ASD was examined to evaluate whether these four OXTR variants were associated with social communication scores on the Autism Diagnostic Interview – Revised (ADI-R). Parameter estimates and associated 95 percent confidence intervals were generated using a linear regression approach. Multiple testing issues were addressed using false discovery rate adjustments. Results: The rs53576 AG genotype was significantly associated with a lower risk of ASD (OR = 0.707, 95% CI: 0.512-0.975). A single genotype (AG) of the rs2254298 marker was found to be significantly associated with higher social communication scores (parameter estimate = 1.833, SE = 0.762, p = 0.0171). This association was also seen in a Caucasian-only sample and in a sample with mothers as the respondent. No association remained significant following false discovery rate adjustments. Conclusion: The findings from these studies provide limited support for the role of OXTR SNPs in ASD, especially in social communication skills. 
The clinical significance of these associations remains unknown; however, it is likely that these associations do not play a role in the severity of symptoms associated with ASD. Rather, they may be important in the appearance of social deficits, given the rs2254298 marker's association with enlarged amygdalae.
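
The false discovery rate adjustment that removed significance above is typically the Benjamini-Hochberg step-up procedure. A minimal sketch with hypothetical p-values (not the thesis's actual results):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: return the (sorted) indices of
    hypotheses rejected while controlling FDR at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])

# hypothetical p-values for four SNP tests
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))  # [0]
```

Here only the smallest p-value (0.01 ≤ 1/4 · 0.05) survives, illustrating how a raw p of 0.0171 across several tests can fail to remain significant after adjustment.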

Relevance: 80.00%

Abstract:

BACKGROUND AND OBJECTIVE: The main difficulty of PCR-based clonality studies for B-cell lymphoproliferative disorders (B-LPD) is discrimination between monoclonal and polyclonal PCR products, especially when there is a high background of polyclonal B cells in the tumor sample. Currently, PCR-based methods for clonality assessment require additional analysis of the PCR products in order to discern between monoclonal and polyclonal samples. Heteroduplex analysis represents an attractive approach since it is easy to perform and avoids the use of radioactive substrates or expensive equipment. DESIGN AND METHODS: We studied the sensitivity and specificity of heteroduplex PCR analysis for monoclonality detection in samples from 90 B-cell non-Hodgkin's lymphoma (B-NHL) patients and in 28 individuals without neoplastic B-cell disorders (negative controls). Furthermore, in 42 B-NHL cases and the same 28 negative controls, we compared heteroduplex analysis vs. the classical PCR technique. We also compared ethidium bromide (EtBr) vs. silver nitrate (AgNO(3)) staining, as well as agarose vs. polyacrylamide gel electrophoresis (PAGE). RESULTS: Using two pairs of consensus primers sited at VH (FR3 and FR2) and at JH, 91% of B-NHL samples displayed monoclonal products after heteroduplex PCR analysis using PAGE and AgNO(3) staining. Moreover, no polyclonal sample showed a monoclonal PCR product. By contrast, false positive results were obtained when using agarose (5/28) and PAGE without heteroduplex analysis: 2/28 and 8/28 with EtBr and AgNO(3) staining, respectively. In addition, false negative results appeared only with EtBr staining: 13/42 in agarose, 4/42 in PAGE without heteroduplex analysis, and 7/42 in PAGE after heteroduplex analysis. 
INTERPRETATION AND CONCLUSIONS: We conclude that AgNO(3)-stained PAGE after heteroduplex analysis is the most suitable strategy for detecting monoclonal rearrangements in B-NHL samples because it does not produce false-positive results and the risk of false-negative results is very low.

Relevance: 80.00%

Abstract:

The aim of this study was to develop a multiplex loop-mediated isothermal amplification (LAMP) method capable of detecting Escherichia coli generally and verocytotoxigenic E. coli (VTEC) specifically in beef and bovine faeces. The LAMP assay developed was highly specific (100%) and able to distinguish between E. coli and VTEC based on the amplification of the phoA, and stx1 and/or stx2 genes, respectively. In the absence of an enrichment step, the limit of detection 50% (LOD50) of the LAMP assay was determined to be 2.83, 3.17 and 2.83-3.17 log CFU/g for E. coli with phoA, stx1 and stx2 genes, respectively, when artificially inoculated minced beef and bovine faeces were tested. The LAMP calibration curves generated with pure cultures, and spiked beef and faeces, suggested that the assay had good quantification capability. Validation of the assay, performed using retail beef and bovine faeces samples, demonstrated good correlation between counts obtained by the LAMP assay and by a conventional culture method, but suggested the possibility of false negative LAMP results for 12.5-14.7% of samples tested. The multiplex LAMP assay developed potentially represents a rapid alternative to culture for monitoring E. coli levels in beef or faeces, and it would provide additional information on the presence of VTEC. However, some further optimisation is needed to improve detection sensitivity.

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08