835 results for "Automatic tagging of music"


Relevance: 100.00%

Abstract:

In clinical practice, traditional X-ray radiography is widely used, and knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning and image-guided interventions. This paper presents a fully automatic approach for landmark detection and shape segmentation of both pelvis and femur in conventional AP X-ray images. Our approach is based on the framework of landmark detection via Random Forest (RF) regression and shape regularization via hierarchical sparse shape composition. We propose a visual feature, FL-HoG (Flexible-Level Histogram of Oriented Gradients), and a feature selection algorithm based on trace ratio optimization to improve the robustness and the efficacy of RF-based landmark detection. The landmark detection result is then used in a hierarchical sparse shape composition framework for shape regularization. Finally, the extracted shape contour is fine-tuned by a post-processing step based on low-level image features. The experimental results demonstrate that our feature selection algorithm reduces the feature dimension by a factor of 40 and improves both training and test efficiency. Further experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.2 mm for the femur and 1.9 mm for the pelvis.
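As a hedged illustration of the landmark-detection-by-RF-regression idea (not the paper's implementation: plain HoG stands in for FL-HoG, the trace ratio feature selection and the sparse shape regularization stages are omitted, and all function names are ours), a minimal voting scheme could look like this:

```python
# Minimal sketch: image patches vote for a landmark through RF-predicted
# offsets. Plain HoG stands in for the paper's FL-HoG descriptor.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor

def patch_features(image, centers, size=32):
    """HoG descriptor of a square patch around each (y, x) center."""
    half = size // 2
    return np.asarray([hog(image[y - half:y + half, x - half:x + half],
                           pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                       for y, x in centers])

def train_landmark_regressor(images, landmarks, n_samples=200, seed=0):
    """Each sampled patch is labelled with its offset to the true landmark."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for img, lm in zip(images, landmarks):
        h, w = img.shape
        centers = np.column_stack([rng.integers(16, h - 16, n_samples),
                                   rng.integers(16, w - 16, n_samples)])
        X.append(patch_features(img, centers))
        y.append(np.asarray(lm) - centers)     # offset vectors to regress
    return RandomForestRegressor(n_estimators=100).fit(np.vstack(X), np.vstack(y))

def detect_landmark(rf, image, stride=8):
    """Scan the image; every patch votes with center + predicted offset."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(16, h - 16, stride),
                         np.arange(16, w - 16, stride), indexing="ij")
    centers = np.column_stack([ys.ravel(), xs.ravel()])
    votes = centers + rf.predict(patch_features(image, centers))
    return np.median(votes, axis=0)            # robust vote aggregation
```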

Relevance: 100.00%

Abstract:

Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed so far that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., a global threshold), none of these methods is perfectly suitable for long-term (i.e., hours-long) recordings or time-effective when applied to large datasets. We aimed to overcome these limitations by automation, i.e., data-adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique ("acceleration-based movement artifact reduction algorithm", AMARA) combines two methods: the "movement artifact reduction algorithm" (MARA; Scholkmann et al., Physiol. Meas. 2010, 31, 649–662) and the "accelerometer-based motion artifact removal" (ABAMAR; Virtanen et al., J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report on the successful validation of the algorithm using empirical NIRS data measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on validation data.
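As a loose sketch of the data-adaptive-threshold idea only (the actual AMARA algorithm combines MARA's moving-standard-deviation detection and spline-based reconstruction with ABAMAR's accelerometer cues; the window length and factor k below are illustrative assumptions):

```python
# Flag movement artifacts where the accelerometer magnitude exceeds a
# threshold derived from local signal statistics rather than a fixed
# global value, so the threshold adapts over hours-long recordings.
import numpy as np

def adaptive_artifact_mask(accel, fs, win_s=10.0, k=3.0):
    """Boolean mask of samples flagged as movement artifacts.

    accel : (n, 3) accelerometer signal; fs : sampling rate in Hz.
    """
    mag = np.linalg.norm(accel, axis=1)
    win = int(win_s * fs)
    pad = np.pad(mag, win // 2, mode="edge")
    # Rolling median/MAD (quadratic toy version; a production filter
    # would use a running-window implementation).
    med = np.array([np.median(pad[i:i + win]) for i in range(mag.size)])
    mad = np.array([np.median(np.abs(pad[i:i + win] - med[i]))
                    for i in range(mag.size)])
    return mag > med + k * 1.4826 * mad  # 1.4826 scales MAD to a std estimate
```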

Relevance: 100.00%

Abstract:

Automatic segmentation of the hip joint with pelvis and proximal femur surfaces from CT images is essential for orthopedic diagnosis and surgery. It remains challenging due to the narrowness of the hip joint space, where the adjacent surfaces of the acetabulum and femoral head can hardly be distinguished from each other. This chapter presents a fully automatic method to segment pelvic and proximal femoral surfaces from hip CT images. A coarse-to-fine strategy is proposed that combines multi-atlas segmentation with graph-based surface detection. The multi-atlas segmentation step coarsely extracts the entire hip joint region; it uses automatically detected anatomical landmarks to initialize and select the atlases and to accelerate the segmentation. The graph-based surface detection step then refines the coarsely segmented hip joint region; it aims to completely and efficiently separate the adjacent surfaces of the acetabulum and the femoral head while preserving the hip joint structure. The proposed strategy was evaluated on 30 hip CT images and achieved an average accuracy of 0.55, 0.54, and 0.50 mm for segmenting the pelvis, the left proximal femur, and the right proximal femur, respectively.
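As a hedged sketch of the fusion and atlas-selection steps only (registration, landmark detection and the graph-based refinement are out of scope here; function names are illustrative):

```python
# Given atlas label maps already registered to the target CT, fuse them by
# per-voxel majority vote to obtain the coarse hip-joint region.
import numpy as np

def majority_vote_fusion(registered_labels):
    """registered_labels : list of (D, H, W) integer label maps in target space."""
    stack = np.stack(registered_labels)            # (n_atlases, D, H, W)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)    # atlases voting for each label
    return votes.argmax(axis=0)                    # most frequent label per voxel

def select_atlases(target_lms, atlas_lms, n_select=5):
    """Pick the atlases whose landmark configuration is closest to the target,
    mirroring the landmark-based atlas selection described above."""
    dists = [np.linalg.norm(target_lms - lms) for lms in atlas_lms]
    return np.argsort(dists)[:n_select]
```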

Relevance: 100.00%

Abstract:

The lexical items like and well can serve as discourse markers (DMs), but they can also play numerous other roles, such as verb or adverb. Identifying the occurrences that function as DMs is an important step for language understanding by computers. In this study, automatic classifiers using lexical, prosodic/positional and sociolinguistic features are trained over transcribed dialogues that have been manually annotated with DM information. The resulting classifiers improve on the state of the art in DM identification, reaching about 90% recall and 79% precision for like (84.5% accuracy, κ = 0.69), and 99% recall and 98% precision for well (97.5% accuracy, κ = 0.88). Automatic feature analysis shows that lexical collocations are the most reliable indicators, followed by prosodic/positional features, while sociolinguistic features are marginally useful for the identification of like as a DM and not useful for well. The differentiated processing of each type of DM improves classification accuracy, suggesting that these types should be treated individually.
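A minimal sketch of such a classifier, assuming the feature names below (the paper's actual lexical, prosodic/positional and sociolinguistic feature set is richer):

```python
# Classify whether a token "like" functions as a discourse marker from
# lexical collocations and simple prosodic/positional cues.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i, duration=None, pause_before=None):
    return {
        "prev_word": tokens[i - 1] if i > 0 else "<s>",                 # left collocation
        "next_word": tokens[i + 1] if i + 1 < len(tokens) else "</s>",  # right collocation
        "utt_initial": i == 0,                                          # positional cue
        "duration": duration or 0.0,                                    # prosodic cues
        "pause_before": pause_before or 0.0,
    }

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
# X = [token_features(toks, i, ...) for each annotated occurrence of "like"]
# y = [1 if the occurrence is a DM else 0]
# clf.fit(X, y)
```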

Relevance: 100.00%

Abstract:

This article discusses the detection of discourse markers (DMs) in dialogue transcriptions, both by human annotators and by automated means. After a theoretical discussion of the definition of DMs and their relevance to natural language processing, we focus on the role of like as a DM. Results from experiments with human annotators show that the detection of DMs is a difficult but reliable task that requires prosodic information from the audio recordings. Several types of features are then defined for the automatic disambiguation of like: collocations, part-of-speech tags and duration-based features. Decision-tree learning shows that for like, nearly 70% precision can be reached at close to 100% recall, mainly using collocation filters. Similar results hold for well, with about 91% precision at 100% recall.
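A compact sketch of the decision-tree evaluation loop, assuming the feature matrix has been prepared upstream (one-hot collocations, POS tags, durations):

```python
# Train a decision tree to disambiguate "like" and report the precision
# and recall figures that the abstract quotes.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

def evaluate(X, y):
    """X: feature matrix; y: 1 = discourse marker, 0 = other use."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    tree = DecisionTreeClassifier(min_samples_leaf=5).fit(X_tr, y_tr)
    pred = tree.predict(X_te)
    return precision_score(y_te, pred), recall_score(y_te, pred)
```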

Relevance: 100.00%

Abstract:

Purpose: Selective retina laser treatment (SRT), a sub-threshold therapy method developed in recent years, avoids widespread damage to all retinal layers by targeting only a few. While such methods facilitate faster healing, the lack of visual feedback during treatment is a considerable shortcoming, as induced lesions remain invisible with conventional imaging, making clinical use challenging. To overcome this, we present a new strategy to provide location-specific and contact-free automatic feedback on SRT laser applications.

Methods: We leverage time-resolved optical coherence tomography (OCT) to provide informative feedback to clinicians on the outcomes of location-specific treatment. By coupling an OCT system to the SRT treatment laser, we visualize structural changes in the retinal layers as they occur via time-resolved depth images. We then propose a novel strategy for the automatic assessment of such time-resolved OCT images, introducing image features that, when combined with standard machine learning classifiers, yield excellent treatment-outcome classification.

Results: Our approach was evaluated on both ex vivo porcine eyes and human patients in a clinical setting, yielding above 95% accuracy in predicting patient treatment outcomes. In addition, we show that accurate outcomes for human patients can be estimated even when our method is trained using only ex vivo porcine data.

Conclusion: The proposed technique is a much-needed step toward noninvasive, safe, reliable, and repeatable SRT applications. These results are encouraging for the broader use of new treatment options for neovascularization-based retinal pathologies.
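As an illustrative stand-in (the paper's actual image features are not reproduced here), per-depth temporal statistics plus a standard SVM could be wired up as follows:

```python
# Generic time-resolved features (per-depth temporal variance plus the
# before/after intensity change) feeding a standard SVM classifier for the
# lesion / no-lesion treatment outcome.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mscan_features(mscan):
    """mscan : (time, depth) OCT intensity image recorded during one pulse."""
    temporal_var = mscan.var(axis=0)        # how much each depth fluctuates
    change = np.abs(mscan[-1] - mscan[0])   # before/after intensity change
    return np.concatenate([temporal_var, change])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# X = np.array([mscan_features(m) for m in mscans]); y = treatment outcomes.
# Training on ex vivo porcine data and testing on patient data mirrors the
# cross-domain evaluation described above: clf.fit(X_porcine, y_porcine).
```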

Relevance: 100.00%

Abstract:

A plan to construct a canal through the Kra Isthmus in Southern Thailand has been proposed many times since the 17th century. The proposed canal would provide an alternative route to the overcrowded Straits of Malacca. In this paper, we use a Geographical Information System (GIS) to calculate realistic distances between the ports that would be affected by the Kra Canal and to estimate the economic impact of the canal using a simulation model based on spatial economics. We find that, besides Thailand, China, India, Japan, and Europe gain the most from the construction of the canal. On the other hand, the routes through the Straits of Malacca are largely beneficial to Malaysia, Brunei, and Indonesia, besides Singapore. Thus, it is beneficial for all ASEAN member countries that the Kra Canal and the Straits of Malacca coexist and complement one another.
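The study relies on GIS routing over actual shipping lanes, which a closed-form formula cannot replicate; the haversine sketch below only shows the great-circle primitive such distance calculations start from (coordinates in the usage comment are approximate):

```python
# Great-circle distance between two ports; a toy building block, not the
# paper's GIS-based routing over real sea lanes.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance between two (lat, lon) points in kilometres."""
    r = 6371.0                                   # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. haversine_km(13.7, 100.5, 1.35, 103.8)   # Bangkok -> Singapore, roughly
```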

Relevance: 100.00%

Abstract:

Although there has been a lot of interest in recognizing and understanding air traffic control (ATC) speech, none of the published works has reported detailed field-data results. We have developed a system able to identify the language spoken and to recognize and understand sentences in both Spanish and English, and we present field results for several in-tower controller positions. To the best of our knowledge, this is the first time that field ATC speech (not simulated) has been captured, processed, and analyzed. The use of stochastic grammars allows for the variations on the standard phraseology that appear in field data. The robust understanding algorithm we developed achieves 95% concept accuracy from ATC text input. It also handles changes in the presentation order of the concepts and corrects errors introduced by the speech recognition engine, raising the percentage of fully correctly understood sentences by 17% (English) and 25% (Spanish) absolute, relative to the percentage of fully correctly recognized sentences. We also analyze the errors due to the spontaneity of the speech in comparison with read speech: 96% word accuracy for read speech drops to 86% word accuracy for field ATC data in Spanish on the "clearances" task, confirming that field data are needed to estimate the performance of a system. A literature review and a critical discussion of the possibilities of speech recognition and understanding technology applied to ATC speech are also given.
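The word and concept accuracy figures quoted above follow the usual definition accuracy = (N - S - D - I)/N over a Levenshtein alignment between reference and hypothesis; a minimal sketch, applicable to word or concept sequences alike:

```python
# Accuracy from the minimum edit distance: the Levenshtein cost equals the
# number of substitutions, deletions and insertions (S + D + I).
def accuracy(reference, hypothesis):
    n, m = len(reference), len(hypothesis)
    # d[i][j] = minimum edits to turn reference[:i] into hypothesis[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 1.0 - d[n][m] / n                     # 1 - (S + D + I) / N

# accuracy("descend flight level eight zero".split(),
#          "descend to flight level eight zero".split())
```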

Relevance: 100.00%

Abstract:

This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and the enhanced detection of non-ideal situations, increasing the integrity of the surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and a rule-based expert system aimed at adapting the data processing methods.
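A minimal sketch of such a rule-based layer (the observable names, thresholds and actions below are illustrative assumptions, not the paper's rule base):

```python
# Each rule inspects the surveillance observables and may switch a
# processing option in the configuration when a non-ideal situation shows.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def run_rules(observables: dict, config: dict, rules: list) -> None:
    """Fire every rule whose condition holds on the current observables."""
    for rule in rules:
        if rule.condition(observables):
            rule.action(config)

rules = [
    # Drifting multilateration residuals -> trigger sensor bias recalibration.
    Rule("recalibrate_bias",
         lambda o: o.get("mlat_residual_m", 0.0) > 15.0,
         lambda c: c.update(recalibrate=True)),
    # Frequent track splits suggest a non-ideal situation -> widen gating.
    Rule("widen_gates",
         lambda o: o.get("track_splits_per_min", 0.0) > 2.0,
         lambda c: c.update(association_gate_scale=1.5)),
]
```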

Relevance: 100.00%

Abstract:

We describe how to use a Granular Linguistic Model of a Phenomenon (GLMP) to assess e-learning processes. We apply this technique to evaluate algorithm learning using the GRAPHs learning environment.
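As a very loose sketch of a GLMP-style perception mapping (the labels, breakpoints and the aggregation into a linguistic summary below are illustrative assumptions):

```python
# Numeric indicators from the learning environment are mapped to linguistic
# labels via fuzzy membership functions, then combined into a higher-level
# linguistic assessment of the learner.
def triangular(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_progress(score, attempts):
    labels = {
        "low":    triangular(score, -0.1, 0.0, 0.5),
        "medium": triangular(score, 0.0, 0.5, 1.0),
        "high":   triangular(score, 0.5, 1.0, 1.1),
    }
    best = max(labels, key=labels.get)           # strongest membership wins
    persistence = "persistent" if attempts >= 3 else "quick"
    return f"{persistence} learner with {best} mastery"

# assess_progress(0.8, 4) -> "persistent learner with high mastery"
```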

Relevance: 100.00%

Abstract:

We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic. The analysis takes advantage of the pruning operator in order to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a complete satisfiability testing algorithm w.r.t. available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction-carrying code, etc. We have implemented the analysis and integrated it into the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
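As a toy illustration of the mutual-exclusion test on arithmetic guards (real determinacy analysis, as in CiaoPP, works on Horn clauses with type information and the pruning operator; this sketch only intersects single-variable interval constraints):

```python
# Two clause guards are mutually exclusive if the conjunction of their
# constraints is unsatisfiable, i.e. the intersected interval is empty.
import math

def interval(constraints):
    """Intersect constraints [(op, bound), ...]; track bound strictness."""
    lo, hi = -math.inf, math.inf
    lo_strict = hi_strict = False
    for op, b in constraints:
        if op in ("<", "<="):
            if b < hi or (b == hi and op == "<"):
                hi, hi_strict = b, op == "<"
        else:  # ">" or ">="
            if b > lo or (b == lo and op == ">"):
                lo, lo_strict = b, op == ">"
    return lo, hi, lo_strict or hi_strict

def mutually_exclusive(guard1, guard2):
    lo, hi, strict = interval(guard1 + guard2)
    return lo > hi or (lo == hi and strict)      # empty intersection

# X < 0 in one clause versus X >= 0 in the other: no X satisfies both.
print(mutually_exclusive([("<", 0)], [(">=", 0)]))   # True
```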

Relevance: 100.00%

Abstract:

One important issue emerging strongly in agriculture is the automation of tasks, where optical sensors play an important role. They provide images that must be conveniently processed. The most relevant image processing procedures require the identification of green plants (in our experiments they come from barley and corn crops including weeds), so that some types of action can be carried out, including site-specific treatments with chemical products or mechanical manipulations. The identification of textures belonging to the soil can also be useful for estimating variables such as humidity or smoothness. Finally, from the point of view of autonomous robot navigation, where the robot is equipped with the imaging system, it is sometimes convenient to know not only the soil information and the plants growing in it, but also additional information supplied by global references based on specific areas. This implies that the images to be processed contain textures of three main types to be identified: green plants, soil, and sky (if present). This paper proposes a new automatic approach for segmenting these main textures and for refining the identification of sub-textures inside them. Concerning green identification, we propose a new approach that exploits the performance of existing strategies by combining them; the combination weights the information provided by each strategy according to its intensity variability, and this is the first contribution. The combination of thresholding approaches for segmenting the soil and the sky is the second contribution; finally, the adaptation of a supervised fuzzy clustering approach for identifying sub-textures automatically is the third. The performance of the method verifies its viability for automatic image-based tasks in agriculture.
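As a hedged sketch of the combination idea: ExG and ExGR are published greenness indices, while the inverse-variability weighting below is a simplified stand-in for the combination rule described above, and Otsu thresholding then separates vegetation from the rest:

```python
# Combine two greenness indices, weighting each by the inverse of its
# intensity variability (assumption: a steadier index counts as more
# reliable here), then threshold with Otsu's method.
import numpy as np
from skimage.filters import threshold_otsu

def greenness(rgb):
    """rgb : float image in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                    # Excess Green index
    exgr = exg - (1.4 * r - g)             # Excess Green minus Excess Red
    w_exg = 1.0 / (exg.std() + 1e-6)
    w_exgr = 1.0 / (exgr.std() + 1e-6)
    return (w_exg * exg + w_exgr * exgr) / (w_exg + w_exgr)

def plant_mask(rgb):
    g = greenness(rgb)
    return g > threshold_otsu(g)           # True where green plants are detected
```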

Relevance: 100.00%

Abstract:

This work explores the automatic recognition of physical activity intensity patterns from multi-axial accelerometry and heart rate signals. Data collection was carried out in free-living conditions and in three controlled gymnasium circuits, for a total of 179.80 h of data divided into sedentary situations (65.5%), light-to-moderate activity (17.6%) and vigorous exercise (16.9%). The proposed machine learning algorithms comprise the following steps: time-domain feature definition, standardization and PCA projection, unsupervised clustering (by k-means and GMM) and an HMM to account for long-term temporal trends. Performance was evaluated by 30 runs of a 10-fold cross-validation. Both the k-means and the GMM-based approaches yielded high overall accuracy (86.97% and 85.03%, respectively) and, given the imbalance of the dataset, meritorious F-measures (up to 77.88%) for non-sedentary cases. Classification errors tended to be concentrated around transients, which limits their practical impact. Hence, we consider our proposal suitable for 24-hour monitoring of physical activity in ambulatory scenarios and a first step towards intensity-specific energy expenditure estimators.
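A compact sketch of the unsupervised part of this pipeline (a plain majority filter stands in for the HMM stage; the window length and feature choices are assumptions):

```python
# Time-domain features per window -> standardization -> PCA -> k-means,
# followed by temporal smoothing to damp spurious state transitions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(accel_mag, hr, fs, win_s=10):
    """Per-window features from acceleration magnitude and heart rate."""
    win = int(win_s * fs)
    feats = []
    for i in range(len(accel_mag) // win):
        a = accel_mag[i * win:(i + 1) * win]
        h = hr[i * win:(i + 1) * win]
        feats.append([a.mean(), a.std(), np.abs(np.diff(a)).mean(), h.mean()])
    return np.asarray(feats)

pipeline = make_pipeline(StandardScaler(), PCA(n_components=3),
                         KMeans(n_clusters=3, n_init=10))

def smooth(labels, k=5):
    """Majority filter over k windows, standing in for the HMM stage."""
    pad = np.pad(labels, k // 2, mode="edge")
    return np.array([np.bincount(pad[i:i + k]).argmax()
                     for i in range(len(labels))])

# labels = pipeline.fit_predict(window_features(acc, hr, fs=50))
# states = smooth(labels)
```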