881 results for Automated segmentation
Abstract:
In this study two commonly used automated methods to detect atmospheric fronts in the lower troposphere are compared in various synoptic situations. The first method is a thermal approach relying on the gradient of equivalent potential temperature (TH), while the second is based on temporal changes in the 10 m wind (WND). For a comprehensive objective comparison of the outputs of these frontal identification methods, both schemes are first applied to an idealised strong baroclinic wave simulation in the absence of topography. Two case studies (one in the Northern Hemisphere (NH) and one in the Southern Hemisphere (SH)) are then conducted to contrast the fronts detected by the two methods. Finally, we derive global winter and summer frontal occurrence climatologies from ERA-Interim for 1979–2012 and compare their structure. TH is able to identify cold and warm fronts in strong baroclinic cases that are in good agreement with manual analyses. WND is particularly suited to the detection of strongly elongated, meridionally oriented moving fronts, but has very limited ability to identify zonally oriented warm fronts. The areas of main TH frontal activity are shifted equatorwards compared with the WND patterns and are located upstream of the regions of main WND front activity. The number of WND fronts in the NH shows a stronger seasonal variation than the number of TH fronts, decreasing by more than 50% from winter to summer. In the SH the seasonal variation in the number of WND fronts is weaker, whereas TH front activity decreases from summer (DJF) to winter (JJA). The main motivation is to give an overview of the performance of these methods, so that researchers can choose the appropriate one for their particular interest.
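As a rough illustration of the TH approach, the sketch below computes the magnitude of the horizontal equivalent-potential-temperature gradient on a regular latitude-longitude grid and flags points exceeding a threshold. The array layout, function name, and the 4 K per 100 km threshold are illustrative assumptions, not the criteria used in the paper.

```python
import numpy as np

def thermal_front_mask(theta_e, lat, lon, thresh_k_per_100km=4.0):
    """Flag grid points where |grad(theta_e)| exceeds a threshold.

    theta_e : 2-D array (nlat, nlon) of equivalent potential temperature [K]
    lat, lon: 1-D arrays of latitude / longitude in degrees
    """
    r_earth = 6.371e6                                  # Earth radius [m]
    dlat = np.deg2rad(np.gradient(lat))                # radians per grid step
    dlon = np.deg2rad(np.gradient(lon))
    dy = r_earth * dlat                                # metres per latitude step
    # metres per longitude step shrink with cos(latitude)
    dx = r_earth * np.cos(np.deg2rad(lat))[:, None] * dlon[None, :]

    dth_dy = np.gradient(theta_e, axis=0) / dy[:, None]
    dth_dx = np.gradient(theta_e, axis=1) / dx
    grad_mag = np.hypot(dth_dx, dth_dy)                # K per metre

    return grad_mag * 1e5 >= thresh_k_per_100km        # K per 100 km
```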
Abstract:
Point Distribution Models (PDMs) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem becomes especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) for multi-organ analysis that efficiently characterizes the different inter-object relations as well as the particular locality of each object. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller, anatomically significant regions within organs. The significant advantage of GEM-PDM over two previous approaches (PDM and hierarchical PDM), in terms of shape modeling accuracy and robustness to noise, has been verified on two different databases of multi-organ sets: six subcortical brain structures and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active-shape-model-based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
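For context, the sketch below fits a classical single-resolution PDM (mean shape plus PCA modes) to pre-aligned landmarks; it is a minimal baseline, not the GEM-PDM algorithm itself, and the alignment step (e.g. Procrustes) is assumed to have been done upstream.

```python
import numpy as np

def fit_pdm(shapes, var_kept=0.98):
    """Fit a classical Point Distribution Model.

    shapes: (n_samples, n_landmarks * 3) array of pre-aligned landmark
            coordinates.  Returns the mean shape, the retained eigenvectors
            and their eigenvalues covering `var_kept` of the variance.
    """
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                  # largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    k = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
    return mean, eigvecs[:, :k], eigvals[:k]

def reconstruct(mean, modes, b):
    """Generate a shape from mode weights b (usually |b_i| <= 3*sqrt(eigval_i))."""
    return mean + modes @ b
```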
Abstract:
AMS-14C applications often require the analysis of small samples, as is the case for atmospheric aerosols, where frequently only a small amount of sample is available. The ion beam physics group at ETH Zurich has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected on quartz filters. To test the methodology, samples of reference materials and blanks of different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible 14C values for masses ranging from 50 to 300 µg. Reproducible radiocarbon values were also obtained for small aerosol filter samples graphitized in the AGE III. As a case study, the tested methodology was applied to PM10 samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The 14C data showed that carbonaceous aerosols from Mexico City have a much lower biogenic signature than those from the smaller city of Cuernavaca.
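The biomass/fossil source apportionment mentioned above typically follows a two-source isotope mass balance on the measured fraction modern (F14C). The sketch below is a minimal version of that calculation; the reference F14C of 1.04 for contemporary biogenic carbon is an illustrative assumption.

```python
def biogenic_fraction(f14c_sample, f14c_biogenic=1.04):
    """Two-source (biomass vs. fossil) apportionment from radiocarbon data.

    f14c_sample  : measured fraction modern (F14C) of the aerosol carbon
    f14c_biogenic: assumed F14C of purely biogenic carbon (fossil carbon has F14C = 0)
    Returns (fraction_biogenic, fraction_fossil).
    """
    f_bio = min(f14c_sample / f14c_biogenic, 1.0)
    return f_bio, 1.0 - f_bio

# Example: a sample with F14C = 0.62 would be roughly 60% biogenic, 40% fossil.
print(biogenic_fraction(0.62))
```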
Abstract:
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it with sub-volumes defined manually by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features, and prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (r: 0.4–0.86). The automatic and manual volumes also showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67 and 0.41 for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55–0.77 and 0.65, CI: 0.54–0.76), comparable to the manually defined volumes (0.64, CI: 0.53–0.75 and 0.63, CI: 0.52–0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
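A minimal sketch of the two evaluation measures used here, Spearman's correlation for agreement and Harrell's C-index for prognostic value, is given below; the volume arrays are placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data (simple O(n^2) form)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:       # i had the earlier observed event
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Agreement between automatic and manual sub-volumes (placeholder values):
auto_vol = np.array([12.1, 30.4, 8.7, 22.0])
manual_vol = np.array([11.5, 28.9, 10.2, 20.8])
rho, p = spearmanr(auto_vol, manual_vol)
print(f"Spearman r = {rho:.2f} (p = {p:.3f})")
```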
Abstract:
Several lake ice phenology studies based on satellite data have been undertaken. However, the availability of the long-term lake freeze-thaw cycles required to understand this proxy for climate variability and change is scarce for European lakes, and long time series from space observations are limited to a few satellite sensors. Data from the Advanced Very High Resolution Radiometer (AVHRR) are used on account of their unique potential: they offer daily global coverage from the early 1980s, expected to continue until 2022. An automatic two-step extraction was developed that uses near-infrared reflectance values and thermal-infrared-derived lake surface water temperatures to extract lake ice phenology dates. In contrast to other studies utilizing thermal infrared, the thresholds are derived from the data itself, making it unnecessary to define arbitrary or lake-specific thresholds. Two lakes in the Baltic region and a steppe lake on the Austrian–Hungarian border were selected; the latter was used to test the applicability of the approach to another climatic region for the period 1990 to 2012. A comparison of the extracted event dates with in situ data showed good agreement, with a mean absolute error of about 10 days. The two-step extraction was found to be applicable to European lakes in different climate regions and could fill existing data gaps in future applications. Extending the time series to the full AVHRR record length (early 1980s until today), which is adequate for trend estimation, would be of interest for assessing climate variability and change. Furthermore, the two-step extraction itself is not sensor-specific and could be applied to other sensors with equivalent near-infrared and thermal infrared spectral bands.
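A highly simplified sketch of the two-step idea, a reflectance threshold derived from the data itself combined with a thermal-infrared check, is given below; the percentile-based threshold and the freezing cut-off are illustrative assumptions rather than the paper's actual derivation.

```python
import numpy as np

def ice_phenology(dates, nir_reflectance, lswt_kelvin):
    """Very simplified two-step extraction of freeze-up / break-up dates.

    Step 1: a lake-specific reflectance threshold is derived from the data
            itself (here: midpoint between the 10th and 90th percentiles).
    Step 2: candidate ice periods are kept only where the thermal-infrared
            lake surface water temperature is near or below freezing.
    """
    refl_thresh = 0.5 * (np.percentile(nir_reflectance, 10)
                         + np.percentile(nir_reflectance, 90))
    ice = (nir_reflectance > refl_thresh) & (lswt_kelvin < 274.15)

    if not ice.any():
        return None, None
    freeze_up = dates[np.argmax(ice)]                        # first ice-covered day
    break_up = dates[len(ice) - 1 - np.argmax(ice[::-1])]    # last ice-covered day
    return freeze_up, break_up
```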
Abstract:
An efficient and reliable automated model that can map physical Soil and Water Conservation (SWC) structures on cultivated land was developed using very high spatial resolution imagery obtained from Google Earth, together with ArcGIS, ERDAS IMAGINE, the SDC Morphology Toolbox for MATLAB, and statistical techniques. The model proceeds as follows: (1) a high-pass spatial filter algorithm is applied to detect linear features, (2) morphological processing removes unwanted linear features, (3) the raster output is vectorized, (4) the vectorized linear features are split per hectare (ha) and each line is classified according to its compass direction, and (5) the total vector length per direction class per ha is calculated. Finally, the direction class with the greatest length in each ha is selected to predict the physical SWC structures. The model was calibrated and validated on the Ethiopian Highlands and correctly mapped 80% of the existing structures. It was then tested at sites with different topography. The results show that the developed model is feasible for automated mapping of physical SWC structures and is therefore useful for predicting and mapping such structures across diverse areas.
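A much-simplified sketch of steps (1), (2), (4) and (5) is given below; it works directly on the raster (skipping the vectorization step), and the kernel sizes, percentile threshold, and hectare block size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def dominant_direction_map(image, block_px=100, n_classes=8):
    """Per-block dominant direction of detected linear features (sketch).

    block_px: block size in pixels assumed to correspond to one hectare.
    """
    # (1) high-pass (Laplacian) filter to enhance linear features
    highpass = ndimage.laplace(image.astype(float))
    lines = highpass > np.percentile(highpass, 95)          # keep strongest responses

    # (2) morphological opening removes isolated, non-linear speckle
    lines = ndimage.binary_opening(lines, structure=np.ones((3, 3)))

    # (4) classify local orientation into direction classes (0-180 degrees)
    gy, gx = np.gradient(image.astype(float))
    orientation = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    classes = (orientation / (180.0 / n_classes)).astype(int) % n_classes

    # (5) pick the dominant class within each block
    ny, nx = image.shape[0] // block_px, image.shape[1] // block_px
    dominant = np.zeros((ny, nx), dtype=int)
    for i in range(ny):
        for j in range(nx):
            blk = classes[i*block_px:(i+1)*block_px, j*block_px:(j+1)*block_px]
            msk = lines[i*block_px:(i+1)*block_px, j*block_px:(j+1)*block_px]
            if msk.any():
                dominant[i, j] = np.bincount(blk[msk], minlength=n_classes).argmax()
    return dominant
```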
Abstract:
Automatic segmentation of the hip joint, with pelvis and proximal femur surfaces, from CT images is essential for orthopedic diagnosis and surgery. It remains challenging due to the narrowness of the hip joint space, where the adjacent surfaces of the acetabulum and femoral head are hardly distinguishable from each other. This chapter presents a fully automatic method to segment pelvic and proximal femoral surfaces from hip CT images. A coarse-to-fine strategy is proposed that combines multi-atlas segmentation with graph-based surface detection. The multi-atlas segmentation step coarsely extracts the entire hip joint region, using automatically detected anatomical landmarks to initialize and select the atlases and to accelerate the segmentation. The graph-based surface detection step refines the coarsely segmented hip joint region, aiming to completely and efficiently separate the adjacent surfaces of the acetabulum and the femoral head while preserving the hip joint structure. The proposed strategy was evaluated on 30 hip CT images and achieved an average accuracy of 0.55, 0.54, and 0.50 mm for segmenting the pelvis, the left proximal femur, and the right proximal femur, respectively.
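For illustration, the sketch below shows the simplest form of the coarse multi-atlas step, majority-vote fusion of atlas labels that have already been registered to the target image; atlas selection, registration, and the graph-based refinement are assumed to happen elsewhere.

```python
import numpy as np

def majority_vote_fusion(warped_atlas_labels):
    """Fuse labels from several atlases already registered to the target image.

    warped_atlas_labels: list of integer label volumes (same shape), one per
    selected atlas, produced by an upstream registration step (assumed).
    Returns the per-voxel majority label as the coarse hip-joint segmentation.
    """
    stack = np.stack(warped_atlas_labels, axis=0)            # (n_atlases, ...)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)], axis=0)
    return votes.argmax(axis=0)
```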
Abstract:
This paper addresses fully automatic segmentation of hip CT images with the goal of preserving the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. MASCG uses multi-atlas-based mesh fusion results to initialize a bone-sheetness-based multi-label graph cut for accurate hip CT segmentation, which has the inherent advantage of automatically separating the pelvic region from the bilateral proximal femoral regions. We then introduce a graph-cut-constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 15-fold cross-validation. Compared with manual segmentation, the approach achieved an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm for the pelvis, the left proximal femur, and the right proximal femur, respectively. A closer look at the bilateral hip joint regions showed an average surface distance error of 0.16 mm, 0.21 mm, and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.
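The surface distance errors reported above are commonly computed as a symmetric average surface distance between point-sampled surfaces; a minimal sketch of such a metric (not necessarily the exact variant used in the paper) is shown below.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_surface_distance(surface_a, surface_b):
    """Symmetric mean surface distance between two point-sampled surfaces.

    surface_a, surface_b: (n, 3) arrays of surface points in millimetres.
    """
    d_ab = cKDTree(surface_b).query(surface_a)[0]   # a -> nearest point on b
    d_ba = cKDTree(surface_a).query(surface_b)[0]   # b -> nearest point on a
    return 0.5 * (d_ab.mean() + d_ba.mean())
```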
Abstract:
Diet-related chronic diseases severely affect personal and global health. However, managing or treating these diseases currently requires long training and high personal involvement to succeed. Computer vision systems could assist with the assessment of diet by detecting and recognizing different foods and their portions in images. We propose novel methods for detecting a dish in an image and segmenting its contents with and without user interaction. All methods were evaluated on a database of over 1600 manually annotated images. The dish detection achieved an average accuracy of 99% with a run time of 0.2 s/image, while the automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 91%, respectively, with an average run time of 0.5 s/image, outperforming competing solutions.
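As a hedged illustration of generic dish detection (not necessarily the detector used in this work), the sketch below finds the most prominent circular plate with an OpenCV Hough-circle transform; all parameter values are assumptions.

```python
import cv2
import numpy as np

def detect_dish(image_bgr):
    """Detect the most prominent circular plate in an image (generic sketch).

    Returns (cx, cy, radius) in pixels, or None if no circle is found.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.5,
        minDist=gray.shape[0] // 2,                  # expect one dominant plate
        param1=120, param2=60,
        minRadius=gray.shape[0] // 6, maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None
    cx, cy, r = max(circles[0], key=lambda c: c[2])  # keep the largest circle
    return int(cx), int(cy), int(r)
```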
Abstract:
This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
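A minimal sketch of LSA-based topic segmentation is given below: utterances are projected into a latent semantic space and a drop in cosine similarity between neighbours marks a candidate boundary. The dimensionality and similarity threshold are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def topic_boundaries(utterances, n_components=50, sim_threshold=0.15):
    """Mark candidate topic boundaries between consecutive utterances.

    Utterances are mapped to a latent semantic space (TF-IDF + truncated SVD);
    a drop in cosine similarity between neighbours suggests a topic shift.
    """
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(utterances)
    n_components = max(1, min(n_components, tfidf.shape[1] - 1, len(utterances) - 1))
    lsa = TruncatedSVD(n_components=n_components).fit_transform(tfidf)
    lsa /= np.linalg.norm(lsa, axis=1, keepdims=True) + 1e-12

    sims = (lsa[:-1] * lsa[1:]).sum(axis=1)          # cosine similarity of neighbours
    return [i + 1 for i, s in enumerate(sims) if s < sim_threshold]
```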
Abstract:
Background: Diabetes mellitus is spreading throughout the world, and diabetic individuals have been shown to often assess their food intake inaccurately; it is therefore a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with advances in computer vision, has permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective: The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it with their own performance in carbohydrate counting. In addition, the user experience and usability of the system are evaluated by questionnaires. Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland), and involved 19 adult volunteers with type 1 diabetes, each participating once. On each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true carbohydrate amount was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with GoCARB. Results: The mean absolute error was 27.89 (SD 38.20) grams of carbohydrate for the participants’ estimates, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals the GoCARB automatic segmentation was successful, and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use. Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants found the app useful and easy to use. GoCARB appears to be a well-accepted, supportive mHealth tool for the assessment of meals served on a plate.
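The final estimation step combines the reconstructed volume of each recognized item with nutrient-database information. The sketch below shows that arithmetic with illustrative density and carbohydrate values; they are not taken from the USDA database.

```python
def carbohydrate_grams(volume_ml, density_g_per_ml, carbs_per_100g):
    """Carbohydrate content of one food item from its reconstructed volume.

    volume_ml        : 3-D reconstructed volume of the item (millilitres)
    density_g_per_ml : assumed food density (grams per millilitre)
    carbs_per_100g   : carbohydrate per 100 g from a nutrient database
    """
    weight_g = volume_ml * density_g_per_ml
    return weight_g * carbs_per_100g / 100.0

# Illustrative values only (not taken from the USDA database):
meal = [(250, 0.75, 28.0),   # rice portion
        (180, 1.05, 4.0)]    # vegetable portion
print(sum(carbohydrate_grams(*item) for item in meal))
```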
Abstract:
Academic and industrial research in the late 1990s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible “beam” search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to continuously adjust the beam search space is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method should be chosen for any particular data set: distance-based, ML, or perhaps maximum parsimony (MP). A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically “difficult” data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (a potentially costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically. It uses a classifier (a decision-tree-inducing algorithm) to map a new data set to the appropriate phylogenetic reconstruction method.
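As a hedged illustration of the Chapter III idea, the sketch below trains a decision-tree classifier to map simple data-set features to a reconstruction method; the feature set and training values are entirely hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: per-alignment features (number of taxa,
# sequence length, mean pairwise divergence) and the method that performed
# best on that data set ("distance" or "ML").  All values are illustrative.
X_train = np.array([[10, 1200, 0.05],
                    [40,  800, 0.30],
                    [25, 2000, 0.15],
                    [60,  600, 0.45]])
y_train = np.array(["distance", "ML", "distance", "ML"])

selector = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Recommend a reconstruction method for a new alignment:
print(selector.predict([[30, 1500, 0.25]])[0])
```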