988 results for Automatic detection of upwelling


Relevance: 100.00%

Abstract:

Although severe patient-ventilator asynchrony is frequent during invasive and non-invasive mechanical ventilation, diagnosing such asynchronies usually requires the presence at the bedside of an experienced clinician to assess the tracings displayed on the ventilator screen, thus explaining why evaluating patient-ventilator interaction remains a challenge in daily clinical practice. In the previous issue of Critical Care, Sinderby and colleagues present a new automated method to detect, quantify, and display patient-ventilator interaction. In this validation study, the automatic method is as efficient as experts in mechanical ventilation. This promising system could help clinicians extend their knowledge about patient-ventilator interaction and further improve assisted mechanical ventilation.

Relevance: 100.00%

Abstract:

Alzheimer's disease (AD) is the most common type of dementia among the elderly. This work is part of a larger study that aims to identify novel technologies and biomarkers or features for the early detection of AD and of its degree of severity. The diagnosis is made by analyzing several biomarkers and conducting a variety of tests (although only a post-mortem examination of the patients' brain tissue is considered to provide definitive confirmation). Non-invasive intelligent diagnosis techniques would therefore be a very valuable diagnostic aid. This paper concerns the Automatic Analysis of Emotional Response (AAER) in spontaneous speech, based on classical and new emotional speech features: Emotional Temperature (ET) and fractal dimension (FD). This is a pre-clinical study aiming to validate tests and biomarkers for future diagnostic use. The method has the great advantage of being non-invasive, low cost, and free of side effects. The AAER shows very promising results for the definition of features useful in the early diagnosis of AD.
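
The abstract names fractal dimension (FD) as one of the new speech features. As a purely illustrative sketch, the following computes a Higuchi fractal-dimension estimate of a speech frame with NumPy; the exact FD estimator, frame sizes and the Emotional Temperature feature used in the study are not specified in the abstract, so everything here is an assumption.

```python
import numpy as np

def higuchi_fd(frame, k_max=10):
    """Higuchi fractal-dimension estimate of a 1-D speech frame.

    Illustrative only: the study's exact FD estimator and window
    settings are not given in the abstract.
    """
    x = np.asarray(frame, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalised curve length of the sub-sampled series
            dist = np.abs(np.diff(x[idx])).sum()
            lengths.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    # FD is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope
```

On a voiced frame the estimate typically falls between 1 and 2, increasing with signal irregularity.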

Relevance: 100.00%

Abstract:

The problem of automatic recognition of fish from video sequences is discussed in this Master's Thesis. This is a pressing issue for many organizations engaged in fish farming in Finland and Russia, because automating the control and counting of individual fish is a turning point for the industry. The difficulties and specific features of the problem have been identified in order to find a solution and to propose recommendations for the components of an automated fish recognition system. Methods such as background subtraction, Kalman filtering and the Viola-Jones method were implemented in this work for the detection, tracking and estimation of fish parameters. Both the results of the experiments and the choice of appropriate methods strongly depend on the quality and type of video used as input data. Practical experiments demonstrated that not all methods produce good results on real data, whereas on synthetic data they operate satisfactorily.
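
The abstract lists background subtraction, Kalman filtering and the Viola-Jones method as the implemented components. Below is a minimal OpenCV sketch of the first two (a Viola-Jones detector would additionally require a trained cascade); the video file name and all parameter values are placeholders rather than settings from the thesis.

```python
import cv2
import numpy as np

# Background subtraction finds moving fish candidates; a constant-velocity
# Kalman filter smooths the track of the largest candidate per frame.
cap = cv2.VideoCapture("fish_tank.avi")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

kalman = cv2.KalmanFilter(4, 2)              # state (x, y, vx, vy), measurement (x, y)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    predicted = kalman.predict()              # prior estimate of the fish position
    mask = bg.apply(frame)                    # foreground (moving pixel) mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        kalman.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    cx, cy = kalman.statePost[:2].flatten()   # smoothed fish position for this frame
cap.release()
```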

Relevance: 100.00%

Abstract:

Epilepsy is a chronic brain disorder characterized by recurring seizures. An automatic seizure detector, incorporated into a mobile closed-loop system, can improve the quality of life of people with epilepsy. Commercial EEG headbands, such as the Emotiv Epoc, have the potential to be used as data acquisition devices for such a system. In order to estimate that potential, epileptic EEG signals from commercial devices were emulated in this work based on EEG data from a clinical dataset. The emulated characteristics include the referencing scheme, the set of electrodes used, the sampling rate, the sample resolution and the noise level. The performance of an existing algorithm for the detection of epileptic seizures, developed in the context of clinical data, was evaluated on the emulated commercial data. The results show that after transforming the data towards the characteristics of the Emotiv Epoc, the detection capabilities of the algorithm are mostly preserved. The ranges of acceptable changes in the signal parameters are also estimated.
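
As an illustration of the emulation step, the sketch below transforms a clinical multichannel EEG array towards consumer-headset characteristics: common-average re-referencing, a lower sampling rate, a coarser sample resolution and added noise (electrode-subset selection is omitted). The concrete numbers are assumptions for illustration, not the Emotiv Epoc settings used in the thesis.

```python
import numpy as np
from scipy.signal import resample_poly

def emulate_consumer_eeg(x, fs_in=256, fs_out=128, n_bits=14, noise_uv=2.0, rng=None):
    """Transform clinical EEG (channels x samples, in microvolts) towards
    consumer-headset characteristics: common-average re-referencing, a lower
    sampling rate, coarser sample resolution and added sensor noise.
    All parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    x = x - x.mean(axis=0, keepdims=True)               # common-average reference
    x = resample_poly(x, fs_out, fs_in, axis=1)         # lower the sampling rate
    step = (x.max() - x.min()) / (2 ** n_bits - 1)      # quantisation step size
    x = np.round(x / step) * step                       # coarser sample resolution
    return x + rng.normal(0.0, noise_uv, size=x.shape)  # additive sensor noise
```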

Relevance: 100.00%

Abstract:

Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high resolution TerraSAR-X data to detect flooded regions in urban areas is described. An important application for this would be the calibration and validation of the flood extent predicted by an urban flood inundation model. To date, research on such models has been hampered by lack of suitable distributed validation data. The study uses a 3m resolution TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with airborne LiDAR data to estimate regions of the TerraSAR-X image in which water would not be visible due to radar shadow or layover caused by buildings and taller vegetation, and these regions were masked out in the flood detection process. A semi-automatic algorithm for the detection of floodwater was developed, based on a hybrid approach. Flooding in rural areas adjacent to the urban areas was detected using an active contour model (snake) region-growing algorithm seeded using the un-flooded river channel network, which was applied to the TerraSAR-X image fused with the LiDAR DTM to ensure the smooth variation of heights along the reach. A simpler region-growing approach was used in the urban areas, which was initialized using knowledge of the flood waterline in the rural areas. Seed pixels having low backscatter were identified in the urban areas using supervised classification based on training areas for water taken from the rural flood, and non-water taken from the higher urban areas. Seed pixels were required to have heights less than a spatially-varying height threshold determined from nearby rural waterline heights. Seed pixels were clustered into urban flood regions based on their close proximity, rather than requiring that all pixels in the region should have low backscatter. This approach was taken because it appeared that urban water backscatter values were corrupted in some pixels, perhaps due to contributions from side-lobes of strong reflectors nearby. The TerraSAR-X urban flood extent was validated using the flood extent visible in the aerial photos. It turned out that 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. These findings indicate that TerraSAR-X is capable of providing useful data for the calibration and validation of urban flood inundation models.
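
As a rough illustration of the simpler region-growing step used in the urban areas, the sketch below grows a flood mask from seed pixels, accepting neighbours with low backscatter and heights below a waterline threshold. The fixed thresholds are placeholders; the study derives seeds from supervised classification and uses a spatially varying height threshold taken from nearby rural waterline heights.

```python
import numpy as np
from collections import deque

def grow_flood_region(backscatter, dtm, seeds, sigma0_max=-12.0, height_max=12.5):
    """Grow a flood mask from seed pixels, accepting 4-connected neighbours
    with low SAR backscatter (in dB) and LiDAR heights below a waterline
    threshold.  The fixed thresholds here are illustrative placeholders.
    """
    flooded = np.zeros(backscatter.shape, dtype=bool)
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        if flooded[r, c]:
            continue
        flooded[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < backscatter.shape[0] and 0 <= cc < backscatter.shape[1]
                    and not flooded[rr, cc]
                    and backscatter[rr, cc] <= sigma0_max
                    and dtm[rr, cc] <= height_max):
                queue.append((rr, cc))
    return flooded
```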

Relevance: 100.00%

Abstract:

World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structural prediction programs are now capable of generating at least low-resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represents an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. In order to highlight effective site detection in low-resolution structural models, FuncSite was used to screen model proteins generated using mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
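
FuncSite's actual descriptors and network architecture are not given in the abstract; the sketch below only illustrates the general idea of a neural-network residue classifier driven by backbone-only features, using scikit-learn and synthetic stand-in data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def backbone_features(ca_coords, i, k=8):
    """Distances from residue i's C-alpha to its k nearest C-alpha neighbours
    (a backbone-only descriptor; FuncSite's real feature set is not given in
    the abstract)."""
    d = np.linalg.norm(ca_coords - ca_coords[i], axis=1)
    return np.sort(d)[1:k + 1]                       # drop the zero self-distance

# Synthetic stand-in data in place of annotated training structures:
# ca holds C-alpha coordinates, labels marks residues near a notional site.
rng = np.random.default_rng(0)
ca = rng.normal(size=(200, 3)) * 10.0
labels = (np.linalg.norm(ca, axis=1) < 8.0).astype(int)

X = np.vstack([backbone_features(ca, i) for i in range(len(ca))])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
site_probability = clf.predict_proba(X)[:, 1]        # per-residue site score
```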

Relevance: 100.00%

Abstract:

This research presents a novel multi-functional system for medical Imaging-enabled Assistive Diagnosis (IAD). Although the IAD demonstrator has focused on abdominal images and supports the clinical diagnosis of kidneys using CT/MRI imaging, it can be adapted to work on image delineation, annotation and 3D real-size volumetric modelling of other organ structures such as the brain, spine, etc. The IAD provides advanced real-time 3D visualisation and measurements with fully automated functionalities, developed in two stages. In the first stage, via the clinically driven user interface, specialist clinicians use CT/MRI imaging datasets to accurately delineate and annotate the kidneys and their possible abnormalities, thus creating “3D Golden Standard Models”. Based on these models, in the second stage, clinical support staff, i.e. medical technicians, interactively define model-based rules and parameters for the integrated “Automatic Recognition Framework” to achieve results which are closest to those of the clinicians. These specific rules and parameters are stored in “Templates” and can later be used by any clinician to automatically identify organ structures, i.e. kidneys, and their possible abnormalities. The system also supports the transmission of these “Templates” to another expert for a second opinion. A 3D model of the body, the organs and their possible pathology with real metrics is also integrated. The automatic functionality was tested on eleven MRI datasets (comprising 286 images) and the 3D models were validated by comparing them with the metrics of the corresponding “3D Golden Standard Models”. The system provides metrics for the evaluation of the results in terms of Accuracy, Precision, Sensitivity, Specificity and Dice Similarity Coefficient (DSC), so as to enable benchmarking of its performance. The first IAD prototype has produced promising results: its accuracy based on the most widely deployed evaluation metric, DSC, is 97% for the recognition of kidneys and 96% for their abnormalities, whilst across all the above evaluation metrics its performance ranges between 96% and 100%. Further development of the IAD system is in progress to extend and evaluate its clinical diagnostic support capability through the development and integration of additional algorithms offering fully computer-aided identification of other organs and their abnormalities based on CT/MRI/ultrasound imaging.
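
The evaluation relies on standard overlap metrics. A minimal sketch of how Accuracy, Precision, Sensitivity, Specificity and DSC are typically computed from a binary segmentation and a reference mask (not the IAD implementation itself):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise agreement between an automatic segmentation and a reference
    ("golden standard") mask: Accuracy, Precision, Sensitivity, Specificity
    and the Dice Similarity Coefficient."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dsc":         2 * tp / (2 * tp + fp + fn),
    }
```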

Relevance: 100.00%

Abstract:

Objectives: To evaluate the accuracy of three different cutoff points for the detection of high blood pressure in adolescents, given the strong relationship between overweight and high blood pressure levels. Methods: A total of 1,021 adolescents of both sexes were enrolled in the study, selected at random from public and private schools in Londrina, Brazil. Body weight was measured using a digital scale, and height with a portable stadiometer with a maximum extension of 2 meters. Arterial blood pressure was measured using an automatic device. The capacity of body mass index to detect high blood pressure was assessed using ROC curves and their parameters (sensitivity, specificity, and area under the curve). Results: The cutoff points proposed in a Brazilian standard exhibited greater accuracy (males: 0.636 +/- 0.038; females: 0.585 +/- 0.043) than the cutoff points proposed in an international standard (males: 0.594 +/- 0.040; females: 0.570 +/- 0.044) and a North American standard (males: 0.612 +/- 0.039; females: 0.578 +/- 0.044). Conclusions: The Brazilian proposal offered the greatest accuracy for indicating high blood pressure levels.
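
The accuracy of each cutoff is judged with ROC analysis. The sketch below illustrates the procedure on synthetic data: BMI as the screening variable, measured high blood pressure as the outcome, sensitivity and specificity at a candidate cutoff, and the area under the ROC curve. The simulated values and the 25.0 kg/m^2 cutoff are placeholders, not study data or one of the three evaluated standards.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: BMI screens for measured high blood pressure.
rng = np.random.default_rng(0)
bmi = rng.normal(22, 4, size=500)
high_bp = (bmi + rng.normal(0, 5, size=500) > 27).astype(int)

auc = roc_auc_score(high_bp, bmi)         # area under the ROC curve
cutoff = 25.0                             # illustrative cutoff point
flagged = bmi >= cutoff
sensitivity = np.mean(flagged[high_bp == 1])
specificity = np.mean(~flagged[high_bp == 0])
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```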

Relevance: 100.00%

Abstract:

Semi-automatic building detection and extraction is a topic of growing interest due to its potential application in such areas as cadastral information systems, cartographic revision, and GIS. One of the existing strategies for building extraction is to use a digital surface model (DSM) represented by a cloud of known points on a visible surface, and comprising features such as trees or buildings. Conventional surface modeling using stereo-matching techniques has its drawbacks, the most obvious being the effect of building height on perspective, shadows, and occlusions. The laser scanner, a recently developed technological tool, can collect accurate DSMs with high spatial frequency. This paper presents a methodology for semi-automatic modeling of buildings which combines a region-growing algorithm with line-detection methods applied over the DSM.
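
A hedged sketch of the combined strategy follows, using generic tools: connected-component labelling stands in for the paper's region-growing algorithm, and Hough line detection on DSM edges stands in for its line-detection methods. Thresholds and library choices are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

def building_candidates(dsm, ground_height, min_height=2.5, min_area=50):
    """Isolate elevated DSM regions and look for straight edge segments
    (building outlines) within them.  Connected-component labelling stands
    in for seeded region growing; thresholds are illustrative."""
    elevated = (dsm - ground_height) > min_height            # above-ground mask
    labels, n = ndimage.label(elevated)                      # group elevated pixels into regions
    sizes = ndimage.sum(elevated, labels, range(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= min_area)[0] + 1)
    edges = canny(dsm * keep)                                # edges inside the kept regions
    segments = probabilistic_hough_line(edges, threshold=10,
                                        line_length=15, line_gap=3)
    return keep, segments                                    # building mask + straight edges
```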

Relevance: 100.00%

Abstract:

This paper presents three methods for the automatic detection of dust devil tracks in images of Mars. The methods are mainly based on mathematical morphology, and their performance is analyzed and compared. A dataset of 21 images from the surface of Mars, representative of the diversity of those track features, was used for developing, testing and evaluating the methods, comparing their outputs with manually produced ground-truth images. Methods 1 and 3, based on the closing top-hat and the path-closing top-hat, respectively, showed similar mean accuracies of around 90%, but the processing time of method 1 was much greater than that of method 3. Method 2, based on radial closing, was the fastest but showed a worse mean accuracy; processing time was therefore the tiebreaking factor between methods 1 and 3. © 2011 Springer-Verlag.
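
As an illustration of the core operation in method 1, the sketch below applies a closing top-hat (black top-hat) to a Mars surface image so that dark, track-like features stand out. The file name, footprint size and threshold are placeholders; method 3's path-closing top-hat would require path operators not shown here.

```python
import numpy as np
from skimage import io, morphology

# Dust devil tracks appear as dark, curvilinear features, so the closing
# residue (black top-hat) highlights them against the brighter background.
image = io.imread("mars_surface.png", as_gray=True)
tophat = morphology.black_tophat(image, morphology.disk(7))   # closing(image) - image
tracks = tophat > 0.1 * tophat.max()                          # crude binarisation
```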

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

This paper presents a Computer Aided Diagnosis (CAD) system that automatically classifies microcalcifications detected on digital mammograms into one of the five types proposed by Michele Le Gal, a classification scheme that allows radiologists to determine whether a breast tumor is malignant without the need for surgery. The developed system uses a combination of wavelets and Artificial Neural Networks (ANN) and is executed on an Altera DE2-115 Development Kit, a kit containing a Field-Programmable Gate Array (FPGA) that allows the system to be smaller, cheaper and more energy efficient. Results show that the system correctly classified 96.67% of the test samples, so it can be used as a second opinion by radiologists in the early diagnosis of breast cancer. © 2013 The Authors. Published by Elsevier B.V.
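
The system combines wavelet features with an ANN classifier (deployed on an FPGA). The sketch below illustrates that combination in software with PyWavelets and scikit-learn on synthetic stand-in data; the actual features, network architecture and Le Gal labelling used by the system are not detailed in the abstract.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(roi, wavelet="db4", level=2):
    """Energy of each 2-D wavelet sub-band of a microcalcification ROI;
    a common wavelet descriptor, not necessarily the one run on the FPGA."""
    coeffs = pywt.wavedec2(roi, wavelet, level=level)
    feats = [np.mean(np.square(coeffs[0]))]
    for ch, cv, cd in coeffs[1:]:
        feats += [np.mean(np.square(ch)), np.mean(np.square(cv)), np.mean(np.square(cd))]
    return np.array(feats)

# Synthetic stand-in data: one feature vector per detected cluster, with a
# Le Gal type (1-5) as the label.  Real training would use annotated ROIs.
rng = np.random.default_rng(0)
rois = rng.random((100, 32, 32))
X = np.vstack([wavelet_features(r) for r in rois])
y = rng.integers(1, 6, size=100)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
predicted_type = clf.predict(X[:5])
```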

Relevance: 100.00%

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S (1)

The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L' (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To mention four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon.

This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype, covering lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.

Relevance: 100.00%

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. Three practical tools are presented: two of them are software systems (Insight and IWAL) for different types of processing and interactive visualization of data, and the third is an efficient algorithm for data segmentation implemented as part of Insight.

Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support have led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects.

Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
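
As a much-simplified illustration of the segmentation-and-tracking core, the sketch below thresholds a 3D field, labels connected components as features, and links features across two time steps by voxel overlap; Insight's actual algorithm and its genesis, lysis, merging and splitting handling are considerably richer.

```python
import numpy as np
from scipy import ndimage

def segment_and_track(field_t0, field_t1, threshold):
    """Threshold a 3-D field, label connected components as features, and
    link features of two consecutive time steps by voxel overlap.  A much
    simplified stand-in for Insight's segmentation/tracking algorithm."""
    labels0, n0 = ndimage.label(field_t0 > threshold)
    labels1, n1 = ndimage.label(field_t1 > threshold)
    links = []
    for i in range(1, n0 + 1):
        overlap = labels1[labels0 == i]                           # labels covered at t1
        overlap = overlap[overlap > 0]
        if overlap.size:
            links.append((i, int(np.bincount(overlap).argmax())))  # best-overlap successor
    return labels0, labels1, links
```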

Relevance: 100.00%

Abstract:

Autism Spectrum Disorders (ASDs) describe a set of neurodevelopmental disorders. ASD represents a significant public health problem. Currently, ASDs are not diagnosed before the second year of life, but early identification would be crucial, as early interventions are much more effective than specific therapies started in later childhood. To this aim, cheap and contact-less automatic approaches have recently aroused great clinical interest. Among them, the cry and the movements of the newborn, both involving the central nervous system, are proposed as possible indicators of neurological disorders. This PhD work is a first step towards solving this challenging problem. An integrated system is presented that enables the recording of audio (crying) and video (movement) data of the newborn, their automatic analysis with innovative techniques for the extraction of clinically relevant parameters, and the classification of those parameters with data mining techniques. New robust algorithms were developed for the selection of the voiced parts of the cry signal, the estimation of acoustic parameters based on the wavelet transform, and the analysis of the infant's general movements (GMs) through a new body model for segmentation and 2D reconstruction. A thorough literature review presents the state of the art on these topics and shows that no studies exist concerning normative ranges for the newborn infant cry in the first 6 months of life, nor concerning the correlation between cry and movements. Through the new automatic methods, a population of control infants ("low-risk", LR) was compared to a group of "high-risk" (HR) infants, i.e. siblings of children already diagnosed with ASD. A subset of LR infants clinically diagnosed with Typical Development (TD) and one infant affected by ASD were also compared. The results show that the selected acoustic parameters allow good differentiation between the two groups. This result provides new perspectives, both diagnostic and therapeutic.
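
As an illustration of the first processing step, the sketch below selects voiced frames of a cry recording by short-time energy and zero-crossing rate; the thresholds are illustrative, and the thesis' actual selection algorithm and wavelet-based acoustic parameters are more elaborate.

```python
import numpy as np

def voiced_frames(signal, fs, frame_ms=25, energy_quantile=0.6, zcr_max=0.25):
    """Keep frames with high short-time energy and low zero-crossing rate.

    A simple stand-in for the voiced/unvoiced selection step; the thresholds
    are illustrative and the thesis' algorithm is more elaborate.
    """
    x = np.asarray(signal, dtype=float)
    n = int(fs * frame_ms / 1000)
    frames = [x[i:i + n] for i in range(0, len(x) - n, n)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    keep = (energy > np.quantile(energy, energy_quantile)) & (zcr < zcr_max)
    return [f for f, k in zip(frames, keep) if k]
```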