901 results for false positives
Abstract:
Objective: To describe and evaluate the performance of a new real-time seizure detection algorithm for the newborn infant. Methods: The algorithm includes parallel fragmentation of the EEG signal into waves; wave-feature extraction and averaging; and elementary, preliminary and final detection. The algorithm detects EEG waves with heightened regularity, using wave intervals, amplitudes and shapes. Its performance was assessed using event-based as well as liberal and conservative time-based approaches, and compared with the performance of Gotman's and Liu's algorithms. Results: The algorithm was assessed on multi-channel EEG records of 55 neonates, including 17 with seizures. The algorithm showed sensitivities ranging from 83% to 95%, with positive predictive values (PPV) of 48-77% and 2.0 false positive detections per hour. In comparison, Gotman's algorithm (with a 30 s gap-closing procedure) displayed sensitivities of 45-88% and PPV of 29-56%, with 7.4 false positives per hour, while Liu's algorithm displayed sensitivities of 96-99% and PPV of 10-25%, with 15.7 false positives per hour. Conclusions: The wave-sequence-analysis-based algorithm displayed higher sensitivity, higher PPV and a substantially lower level of false positives than the two previously published algorithms. Significance: The proposed algorithm provides a basis for major improvements in neonatal seizure detection and monitoring. Published by Elsevier Ireland Ltd. on behalf of International Federation of Clinical Neurophysiology.
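The event-based figures above (sensitivity, PPV and false positives per hour) follow directly from detection counts and recording duration. A minimal sketch of that arithmetic in Python, with illustrative counts that are not taken from the study:

```python
def event_metrics(true_pos, false_pos, false_neg, record_hours):
    """Event-based detection metrics: sensitivity, positive
    predictive value (PPV) and false positives per hour."""
    sensitivity = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    fp_per_hour = false_pos / record_hours
    return sensitivity, ppv, fp_per_hour

# Illustrative numbers only (not from the study).
sens, ppv, fp_rate = event_metrics(true_pos=90, false_pos=30,
                                   false_neg=10, record_hours=15.0)
print(f"sensitivity={sens:.0%}  PPV={ppv:.0%}  FP/h={fp_rate:.1f}")
```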
Abstract:
This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time series. The approach relies on a biomedical case for the coupling of the brain and heart systems through the central autonomic network during temporal lobe epileptic events: the neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The probabilistic description of the arrhythmias observed during temporal lobe epileptic events and the information-theoretic description of the complexity of the state of the brain are integrated in a fusion-of-information framework for temporal lobe epileptic seizure detection. The main contributions of the Thesis include: the introduction of a biomedical case for the coupling of the brain and heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time series, towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events, towards their integration in a fusion-of-knowledge framework; and the investigation of the levels of the fusion-of-information architecture at which to combine the information extracted from the EEG and ECG time series. The method designed in the Thesis for the false-positive-free automated detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
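Among the fusion levels such an architecture can combine information at, decision-level fusion is the simplest to illustrate. The sketch below is a hypothetical reduction of the idea, not the Thesis's actual measures or combination rule: an event is declared only when the EEG channel flags a loss of complexity and the ECG channel flags a rhythm alteration, which is what suppresses single-modality false positives:

```python
# Toy decision-level fusion over per-epoch detector outputs.
def fuse(eeg_flags, ecg_flags):
    """Declare an event only when both modalities agree."""
    return [e and c for e, c in zip(eeg_flags, ecg_flags)]

eeg = [True, True, False, True]   # complexity drop detected per epoch
ecg = [False, True, False, True]  # rhythm alteration detected per epoch
print(fuse(eeg, ecg))             # [False, True, False, True]
```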
Abstract:
In inflammatory diseases, the release of oxidants leads to oxidative damage to proteins. The precise nature of oxidative damage to individual proteins depends on the oxidant involved. Chlorination and nitration are markers of modification by the myeloperoxidase-H2O2-Cl- system and nitric oxide-derived oxidants, respectively. Although these modifications can be detected by western blotting, currently no reliable method exists to identify the specific sites of damage to individual proteins in complex mixtures such as clinical samples. We are developing novel LC-MS2 and precursor ion scanning methods to address this. LC-MS2 allows separation of peptides and detection of mass changes in oxidized residues on fragmentation of the peptides. We have identified indicative fragment ions for chlorotyrosine, nitrotyrosine, hydroxytyrosine and hydroxytryptophan. A nano-LC/MS3 method involving the dissociation of immonium ions to give fragments specific to the oxidized residues has been developed to overcome the problem of false positives arising from ions in unmodified peptides that are isobaric with these immonium ions. The approach has proved able to identify precise protein modifications in individual proteins and in mixtures of proteins. An alternative methodology involves multiple reaction monitoring of precursor and fragment ions that are specific to oxidized and chlorinated proteins; this has been tested with human serum albumin. Our ultimate aim is to apply this methodology to the detection of oxidative post-translational modifications in clinical samples for disease diagnosis, monitoring the outcomes of therapy, and improved understanding of disease biochemistry.
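As a rough illustration of the immonium-ion screening step, the sketch below matches an MS2 fragment list against nominal monoisotopic immonium masses (Tyr immonium 136.076 Da, with assumed shifts of +15.995 Da for hydroxylation, +33.961 Da for chlorination and +44.985 Da for nitration; verify against a curated reference). Note that a simple m/z match like this is exactly the step vulnerable to isobaric interference, which the MS3 confirmation described above is designed to resolve:

```python
# Nominal monoisotopic masses (assumed values; check against a
# curated reference before use).
TYR_IMMONIUM = 136.076
DIAGNOSTIC = {
    "hydroxytyrosine": TYR_IMMONIUM + 15.995,
    "chlorotyrosine":  TYR_IMMONIUM + 33.961,
    "nitrotyrosine":   TYR_IMMONIUM + 44.985,
}

def flag_oxidised_residues(fragment_mzs, tol=0.01):
    """Return modifications whose diagnostic immonium m/z appears
    in the fragment list within +/- tol Da."""
    return [name for name, mz in DIAGNOSTIC.items()
            if any(abs(obs - mz) <= tol for obs in fragment_mzs)]

print(flag_oxidised_residues([120.081, 170.038, 181.062]))
# -> ['chlorotyrosine', 'nitrotyrosine']
```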
Abstract:
Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to estimate activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness across a volumetric image as a function of signal-to-noise, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis of MEG beamformer images that uses the peak locations within each participant's volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data. We found that our method was immune to the artefactual group effects that can arise from inhomogeneous smoothness across a volumetric image. We also applied our peak-clustering algorithm to experimental data and found that the regions identified corresponded with task-related regions reported in the literature. These findings suggest that our technique is a robust method for group-level analysis of MEG beamformer images.
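A toy sketch of the peak-clustering idea (not the authors' implementation; the coordinates, merge radius and participant-count criterion are all illustrative): pool each participant's image peaks and look for spatial clusters to which many participants contribute:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# One peak per participant near (40, -20, 50) mm, plus one scattered
# noise peak each (synthetic stand-in for per-subject beamformer peaks).
participant_peaks = [np.vstack([[40, -20, 50] + rng.normal(0, 3, 3),
                                rng.uniform(-70, 70, 3)])
                     for _ in range(12)]

peaks = np.vstack(participant_peaks)
subject = np.repeat(np.arange(12), [p.shape[0] for p in participant_peaks])

# Single-linkage clustering with a 10 mm merge radius.
labels = fcluster(linkage(peaks, method="single"), t=10, criterion="distance")

for lab in np.unique(labels):
    n_subj = len(set(subject[labels == lab]))
    if n_subj >= 8:  # cluster supported by most participants
        centre = peaks[labels == lab].mean(axis=0)
        print(f"cluster at {np.round(centre, 1)} mm, {n_subj} participants")
```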
Abstract:
Background Qualitative research makes an important contribution to our understanding of health and healthcare. However, qualitative evidence can be difficult to search for and identify, and the effectiveness of different types of search strategies is unknown. Methods Three search strategies for qualitative research in the example area of support for breast-feeding were evaluated using six electronic bibliographic databases. The strategies were based on thesaurus terms, free-text terms and broad-based terms, and were combined with recognised search terms for support for breast-feeding previously used in a Cochrane review. For each strategy, we evaluated recall (the proportion of relevant records retrieved) and precision (the proportion of retrieved records that were relevant). Results A total yield of 7420 potentially relevant records was retrieved by the three strategies combined. Of these, 262 were judged relevant. Using any one strategy alone would have missed relevant records. The broad-based strategy had the highest recall and the thesaurus strategy the highest precision. Precision was generally poor: 96% of records initially identified as potentially relevant were deemed irrelevant. Searching for qualitative research thus involves trade-offs between recall and precision. Conclusions These findings confirm that strategies that attempt to maximise the number of potentially relevant records found are likely to result in a large number of false positives. The findings also suggest that a range of search terms is required to optimise searching for qualitative evidence. This underlines the problems of current methods for indexing qualitative research in bibliographic databases and indicates where improvements need to be made.
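Recall and precision as used here reduce to simple set arithmetic over the retrieved and relevant record sets. A short sketch with made-up record IDs, illustrating the trade-off between a narrow and a broad strategy:

```python
def recall_precision(retrieved, relevant):
    """Recall and precision of a search strategy, given the set of
    records it retrieved and the gold-standard set of relevant records."""
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

# Illustrative sets of record IDs (not the study's data).
thesaurus = {1, 2, 3, 10, 11}
broad = {1, 2, 3, 4, 5, 6, 10, 11, 12, 20, 21, 22}
relevant = {1, 2, 3, 4, 5}

for name, strategy in [("thesaurus", thesaurus), ("broad", broad)]:
    r, p = recall_precision(strategy, relevant)
    print(f"{name}: recall={r:.0%} precision={p:.0%}")
# thesaurus: high precision, lower recall; broad: full recall, low precision
```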
Abstract:
Background Introduction of the proposed criteria for DSM-5 Autism Spectrum Disorder (ASD) has raised concerns that some individuals currently meeting diagnostic criteria for Pervasive Developmental Disorder (PDD; DSM-IV-TR/ICD-10) will not qualify for a diagnosis under the proposed changes. To date, reports of the sensitivity and specificity of the new criteria have been inconsistent across studies. No study has yet considered how changes at the subdomain level might affect overall sensitivity and specificity, and few have included individuals of different ages and ability levels. Methods A set of DSM-5 ASD algorithms was developed using items from the Diagnostic Interview for Social and Communication Disorders (DISCO). The number of items required for each DSM-5 subdomain was defined either according to the criteria specified by DSM-5 (Initial Algorithm), according to a statistical approach (Youden J Algorithm), or so as to minimise the number of false positives while maximising sensitivity (Modified Algorithm). The algorithms were designed, tested and compared in two independent samples (Sample 1, N = 82; Sample 2, N = 115), while sensitivity was assessed across age and ability levels in an additional dataset of individuals with an ICD-10 PDD diagnosis (Sample 3, N = 190). Results Sensitivity was highest in the Initial Algorithm, which had the poorest specificity. Although the Youden J Algorithm had excellent specificity, its sensitivity was significantly lower than that of the Modified Algorithm, which had both good sensitivity and good specificity. Relaxing the domain A rules improved the sensitivity of the Youden J Algorithm, but it remained less sensitive than the Modified Algorithm; moreover, it was the only algorithm with variable sensitivity across age. All versions of the algorithm performed well across ability levels. Conclusions This study demonstrates that good levels of both sensitivity and specificity can be achieved for a diagnostic algorithm adhering to the DSM-5 criteria that is suitable across age and ability level. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
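The Youden J Algorithm presumably selects item cut-offs by maximising Youden's J statistic, J = sensitivity + specificity - 1. A minimal sketch of that selection step with invented sensitivity/specificity pairs (the DISCO item data are not reproduced here):

```python
def youden_j(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# Hypothetical cut-offs: items required -> (sensitivity, specificity).
candidates = {1: (0.98, 0.55), 2: (0.93, 0.80), 3: (0.75, 0.95)}
best = max(candidates, key=lambda k: youden_j(*candidates[k]))
print(best, round(youden_j(*candidates[best]), 2))  # -> 2 0.73
```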
Abstract:
Fast-spreading unknown viruses have caused major damage to computer systems upon their initial release. Current detection methods lack the capability to detect unknown viruses quickly enough to avoid mass spreading and damage. This dissertation presents a behavior-based approach to detecting known and unknown viruses based on their attempt to replicate. Replication is the qualifying fundamental characteristic of a virus and is consistently present in all viruses, making this approach applicable to viruses belonging to many classes and executing under several conditions. A form of replication called self-reference replication (SR-replication) has been formalized as one main type of replication, in which a virus replicates by modifying or creating other files on a system to include the virus itself. This replication type was used to detect viruses attempting replication by referencing themselves, a necessary step for successful replication into files. The approach does not require a priori knowledge about known viruses. Detection was accomplished at runtime by monitoring currently executing processes attempting to replicate. Two implementation prototypes of the detection approach, called SRRAT, were created and tested on Microsoft Windows operating systems, focusing on the tracking of user-mode Win32 API system calls and kernel-mode system services. The research results showed SR-replication capable of distinguishing between file-infecting viruses and benign processes with little or no false positives and false negatives.
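The following is a toy sketch of the SR-replication criterion on a simulated call trace, not the SRRAT prototypes themselves (which hook user-mode Win32 API calls and kernel-mode system services): a process is flagged if it reads its own image and then writes to another executable file:

```python
def is_sr_replicating(trace, own_image):
    """Flag a process whose call trace reads its own image and then
    writes to a different executable (simplified SR-replication test)."""
    read_self = False
    for call, path in trace:
        if call == "read" and path == own_image:
            read_self = True
        if (call == "write" and path != own_image
                and path.endswith(".exe") and read_self):
            return True
    return False

virus_trace = [("read", "virus.exe"), ("write", "notepad.exe")]
benign_trace = [("read", "config.ini"), ("write", "report.txt")]
print(is_sr_replicating(virus_trace, "virus.exe"))    # True
print(is_sr_replicating(benign_trace, "benign.exe"))  # False
```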
Abstract:
This dissertation proposed a new approach to seizure detection in intracranial EEG (iEEG) recordings using nonlinear decision functions. It implemented well-established features designed to deal with complex signals such as brain recordings, and proposed a 2-D domain of analysis. Since the features considered span both the time and frequency domains, the analysis was carried out both temporally and as a function of different frequency ranges in order to ascertain the measures most suitable for seizure detection. Overall, this study established a generalized approach to seizure detection that works across several features and across patients. Clinical experiments involved 8 patients with intractable seizures who were evaluated for potential surgical interventions. A total of 35 of the iEEG data files collected were used in a training phase to ascertain the reliability of the formulated features. The remaining 69 iEEG data files were then used in the testing phase. The testing phase revealed that the correlation sum is the feature that performed best across all patients, with a sensitivity of 92% and an accuracy of 99%. The second-best feature was the gamma power, with a sensitivity of 92% and an accuracy of 96%. In the frequency domain, the 5 other spectral bands considered revealed mixed results, with low sensitivity in some frequency bands and low accuracy in others, which is expected given that the dominant frequencies in iEEG are those of the gamma band. In the time domain, the other features, which included mobility, complexity, and activity, all performed very well, with an average sensitivity of 80.3% and an accuracy of 95%. The computation needed to generate these nonlinear decision functions in the training phase was extremely long. It was determined that when the duration dimension was rescaled, the results improved and the convergence times of the nonlinear decision functions were reduced dramatically, by more than 100-fold. Through this rescaling, the sensitivity of the correlation sum improved to 100% and the sensitivity of the gamma power to 97%, meaning that even fewer false negatives and false positives were detected.
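The correlation sum is a standard nonlinear measure; a compact sketch of the usual Grassberger-Procaccia estimate after time-delay embedding is given below. The windowing, normalisation and parameter choices (m, tau, r) used in the dissertation are not specified in the abstract, so the values here are assumptions:

```python
import numpy as np

def correlation_sum(x, m=3, tau=1, r=0.2):
    """Correlation sum C(r): fraction of embedded point pairs closer
    than r after delay embedding into dimension m with lag tau."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < r)

rng = np.random.default_rng(1)
noise = rng.standard_normal(500)                 # irregular signal
sine = np.sin(np.linspace(0, 40 * np.pi, 500))   # regular oscillation
print(correlation_sum(noise), correlation_sum(sine))  # sine is higher
```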
Abstract:
The 9/11 Act mandates the inspection of 100% of cargo shipments entering the U.S. by 2012 and 100% inspection of air cargo by March 2010. So far, only 5% of inbound shipping containers are inspected thoroughly, while air cargo inspections have fared better at 50%. Government officials have admitted that these milestones cannot be met, since the appropriate technology does not exist. This research presents a novel planar solid phase microextraction (PSPME) device with enhanced surface area and capacity for collecting the volatile chemical signatures emitted into the air by illicit compounds, for direct introduction into ion mobility spectrometers (IMS) for detection. These IMS detectors are widely used to detect particles of illicit substances and do not have to be adapted specifically to this technology. For static extractions, PDMS and sol-gel PDMS PSPME devices provide significant increases in sensitivity over conventional fiber SPME. Results show a 50–400 times increase in the mass detected of piperonal and a 2–4 times increase for TNT. In a blind study of 6 cases suspected to contain varying amounts of MDMA, PSPME-IMS correctly detected 5 positive cases with no false positives or false negatives. One of these cases had minimal amounts of MDMA, resulting in a false negative response for fiber SPME-IMS. A La (dihed) phase chemistry has shown an increase in the extraction efficiency of TNT and 2,4-DNT and enhanced retention over time. An alternative PSPME device was also developed for the rapid (seconds) dynamic sampling and preconcentration of large volumes of air for direct thermal desorption into an IMS. This device affords high extraction efficiencies due to strong retention properties under ambient conditions, resulting in ppt detection limits when 3.5 L of air are sampled over the course of 10 seconds. Dynamic PSPME was used to sample the headspace over the following: MDMA tablets (12–40 ng of piperonal detected), high explosives (Pentolite) (0.6 ng of TNT detected), and several smokeless powders (26–35 ng of 2,4-DNT and 11–74 ng of DPA detected). PSPME-IMS technology is flexible to end-user needs, low-cost, rapid, sensitive, easy to use, easy to implement, and effective.
Abstract:
Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers therefore seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; and 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated by this dissertation are built on event mining algorithms, which are able to automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events, with corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets with resolutions for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
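A minimal sketch of the KNN-style ticket recommendation step (the dissertation's actual text features and distance measure are not specified here; TF-IDF vectors with cosine distance are a common stand-in):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Illustrative historical tickets and their recorded resolutions.
history = [
    "database connection timeout on server db01",
    "disk usage above 90 percent on host web02",
    "failed login attempts exceed threshold",
    "database replication lag growing on db01",
]
resolutions = ["restart db pool", "clean tmp files",
               "block source IP", "resync replica"]

vec = TfidfVectorizer()
X = vec.fit_transform(history)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

new_ticket = "timeout connecting to database db01"
_, idx = knn.kneighbors(vec.transform([new_ticket]))
for i in idx[0]:
    print(history[i], "->", resolutions[i])
```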
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on specific features of the malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field-sensitivity, array-sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with a very low false positive rate and a very low false negative rate. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of them during execution.
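A toy sketch of the KQguard whitelisting idea: a kernel-queue request is reduced to an identifying record and rejected unless it matches a request learned during training. The (module, callback) record format here is invented for illustration; the real system validates richer request attributes:

```python
# Whitelist learned during static/dynamic analysis of the kernel and
# drivers (module and callback names below are made up).
LEGITIMATE = {
    ("ntoskrnl.exe", "IopTimerDispatch"),
    ("tcpip.sys", "TcpPeriodicTimeout"),
}

def validate_kq_request(module, callback):
    """Accept only KQ requests whose (module, callback) pair was
    observed during training; reject everything unknown."""
    return (module, callback) in LEGITIMATE

print(validate_kq_request("tcpip.sys", "TcpPeriodicTimeout"))  # True
print(validate_kq_request("rootkit.sys", "HookedRoutine"))     # False
```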
Abstract:
The maned wolf (Chrysocyon brachyurus Illiger 1815) is the largest canid in South America and is considered a "near threatened" species by the IUCN. Because of its nocturnal, territorial and solitary habits, there are still many understudied aspects of its behavior in natural environments, including acoustic communication. In its vocal repertoire, the wolf presents a long-distance call named the "roar-bark" which, according to the literature, functions for spacing maintenance between individuals and/or communication between members of the reproductive pair inside the territory. In this context, this study aimed: 1) to compare four methods for detecting maned wolf roar-barks in recordings made in a natural environment, in order to select the most efficient one for our project; 2) to understand the nightly emission pattern of these vocalizations, verifying possible influences of weather and moon phase on roar-bark emission rates; and 3) to test Passive Acoustic Monitoring (PAM) as a tool for identifying the presence of maned wolves in a natural environment. The study area was the Serra da Canastra National Park (Minas Gerais, Brazil), where autonomous recorders were used for sound acquisition, recording all night (from 6 pm to 6 am) for five days in December 2013 and every day from April to July 2014. The roar-bark detection methods were tested and compared regarding the time needed to analyze the files, the number of false positives and the number of correctly identified calls. The mixed method (XBAT + manual) was the most efficient, finding 100% of the vocalizations in almost half the time required by the manual method, and was therefore chosen for our data analysis. By studying the roar-barks' temporal variation, we verified that the wolves vocalize more in the early hours of the evening, suggesting an important social function for these calls at the beginning of their period of most intense activity. Average wind speed negatively influenced the vocalization rate, which may indicate reduced sound reception by the recorders or a change in the behavioral patterns of the wolves under high-wind conditions. A better understanding of the seasonal variation of maned wolf vocal activity is still required, but our study already shows that it is possible to detect behavioral patterns of wild animals by sound alone, validating PAM as a tool in this species' conservation.
Abstract:
Lung cancer is one of the most common types of cancer and has the highest mortality rate. Patient survival is highly correlated with early detection. Computed Tomography (CT) serves the early detection of lung cancer tremendously by offering a minimally invasive medical diagnostic tool. However, the large amount of data per examination makes interpretation difficult, which leads to the omission of nodules by human radiologists. This thesis presents the development of a computer-aided detection (CADe) tool for the detection of lung nodules in Computed Tomography studies. The system, called LCD-OpenPACS (Lung Cancer Detection - OpenPACS), is intended to be integrated into the OpenPACS system and meets the requirements for use in the workflow of health facilities belonging to the SUS (Brazilian public health system). LCD-OpenPACS makes use of image processing techniques (region growing and watershed), feature extraction (Histogram of Oriented Gradients), dimensionality reduction (Principal Component Analysis) and a classifier (Support Vector Machine). The system was tested on 220 cases, totaling 296 pulmonary nodules, with a sensitivity of 94.4% and 7.04 false positives per case. The total processing time was approximately 10 minutes per case. The system detects pulmonary nodules (solitary, juxtavascular, ground-glass opacity and juxtapleural) between 3 mm and 30 mm.
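A sketch of the candidate-classification stage of such a pipeline (HOG features, PCA reduction, SVM classifier) on synthetic 32x32 patches; the LCD-OpenPACS segmentation steps (region growing, watershed) and its actual parameters are not reproduced here:

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_patch(nodule):
    """Synthetic CT patch: background noise, plus a bright blob
    roughly mimicking a nodule candidate when nodule=True."""
    img = rng.normal(0.2, 0.05, (32, 32))
    if nodule:
        y, x = np.mgrid[-16:16, -16:16]
        img += 0.6 * np.exp(-(x**2 + y**2) / 40.0)
    return img

patches = [fake_patch(i % 2 == 0) for i in range(80)]
labels = [i % 2 == 0 for i in range(80)]
features = np.array([hog(p, pixels_per_cell=(8, 8)) for p in patches])

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(features[:60], labels[:60])
print("held-out accuracy:", clf.score(features[60:], labels[60:]))
```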
Abstract:
The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends the robot the command corresponding to the gesture. In total, five gestures made by the human hand were interpreted. The software (developed in C++) made extensive use of computer vision concepts and of the open-source library OpenCV, which directly impact the overall efficiency of mobile robot control. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction, color spaces suited to the task at hand, and other information useful for manipulating digital images. The OpenCV library was essential to the project because it provides functions/procedures for complete control of filters, image borders, image area, the geometric center of borders, conversion between color spaces, convex hull and convexity defects, plus all the means necessary for characterizing image features. During development several problems appeared, such as false positives (noise), poor performance caused by stacking several filters with oversized masks, and problems arising from the choice of color space for processing human skin tones. However, after seven versions of the control software had been developed, it was possible to minimize the occurrence of false positives through better use of filters combined with a well-dimensioned mask size (tested at run time), all supported by a programming logic that was refined over the construction of the seven versions. At the end of this development, the software met the established requirements. The overall effectiveness of the various versions, in particular version V (84.75%), version VI (93.00%) and version VII (94.67%), showed that the final program performed well in interpreting gestures, proving that mobile robot control through human gestures is possible without external accessories, giving the robot better mobility and reducing the cost of maintaining such a system. The great merit of the program is its ability to help demystify the man/machine relationship, since it uses an easy and intuitive interface for controlling mobile robots. Another important observation is that it is not necessary to be close to the mobile robot to control it: to control the equipment it is only necessary to receive the address that the Robotino passes to the program via network or Wi-Fi.
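A short sketch of the contour / convex-hull / convexity-defect chain the text describes, applied to a synthetic binary hand mask; the skin-segmentation step, thresholds and the gesture mapping tuned across the seven versions are not reproduced here:

```python
import cv2
import numpy as np

def count_convexity_defects(mask):
    """Count deep convexity defects of the largest contour in a binary
    mask (deep defects typically correspond to gaps between fingers)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # depth is a fixed-point value (pixels * 256); threshold is illustrative
    return int(np.sum(defects[:, 0, 3] > 10000))

frame = np.zeros((200, 200), np.uint8)               # stand-in skin mask
cv2.rectangle(frame, (60, 80), (140, 180), 255, -1)  # palm
for x in (60, 90, 120):                              # three "fingers"
    cv2.rectangle(frame, (x, 20), (x + 15, 90), 255, -1)
print("deep defects:", count_convexity_defects(frame))
```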