Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them to convey information or to control devices. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a neural network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
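The centroid distance feature named above can be sketched in a few lines; this is an illustrative reimplementation assuming the hand is given as a 2-D contour of (x, y) points, and the resampling count and normalization are our assumptions, not details taken from the study:

```python
import numpy as np

def centroid_distance_signature(contour, n_samples=64):
    """Distance from each resampled contour point to the contour centroid,
    normalized by the maximum so the descriptor is scale-invariant."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    # Resample the contour to a fixed number of points
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    pts = contour[idx]
    dist = np.linalg.norm(pts - centroid, axis=1)
    return dist / dist.max()  # scale normalization
```

A fixed-length vector like this can be fed directly to the classifiers trained in RapidMiner; a perfectly round shape yields a flat signature, while extended fingers produce peaks.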
Abstract:
Yarrowia lipolytica, a yeast with huge biotechnological potential, capable of producing metabolites such as γ-decalactone, citric acid, intracellular lipids and enzymes, possesses the ability to change its morphology in response to environmental conditions. In the present study, a quantitative image analysis (QIA) procedure was developed for the identification and quantification of the dimorphic growth of Y. lipolytica W29 and MTLY40-2P strains, cultivated in batch cultures on hydrophilic (glucose and N-acetylglucosamine (GlcNAc)) and hydrophobic (olive oil and castor oil) media. The morphological characterization of yeast cells by QIA techniques revealed that hydrophobic carbon sources, namely castor oil, should be preferred to grow both strains in the yeast single-cell morphotype. On the other hand, hydrophilic sugars, namely glucose and GlcNAc, caused a dimorphic transition towards the hyphae morphotype. Experiments on γ-decalactone production with the MTLY40-2P strain in two distinct morphotypes (yeast single cells and hyphae) were also performed. The results showed the adequacy of the proposed morphology monitoring tool for relating each morphotype to aroma production ability. The present work established that QIA techniques can be a valuable tool for identifying the best culture conditions for the implementation of industrial processes.
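One simple QIA-style descriptor that separates the two morphotypes is object elongation, computed from the second-order moments of a binary cell mask; the sketch below is an illustrative measure under that assumption, not the exact descriptor set of the cited procedure:

```python
import numpy as np

def elongation(mask):
    """Elongation of a binary object from the eigenvalues of its
    coordinate covariance: ~1 for round (yeast-like) cells,
    much greater than 1 for filamentous hyphae."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    # Central second-order moments of the pixel coordinates
    cov = np.cov(np.vstack([x, y]))
    eig = np.sort(np.linalg.eigvalsh(cov))
    return np.sqrt(eig[1] / max(eig[0], 1e-12))
```

Thresholding such a measure per object would let a batch culture be scored automatically for the fraction of cells in each morphotype.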
Abstract:
Many texture measures have been developed and used to improve land-cover classification accuracy, but research has rarely examined the role of textures in improving the performance of aboveground biomass estimation. The relationship between texture and biomass is poorly understood. This paper used Landsat Thematic Mapper (TM) data to explore relationships between TM image textures and aboveground biomass in Rondônia, Brazilian Amazon. Eight grey-level co-occurrence matrix (GLCM) based texture measures (i.e., mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation), associated with seven window sizes (5×5, 7×7, 9×9, 11×11, 15×15, 19×19, and 25×25) and five TM bands (TM 2, 3, 4, 5, and 7), were analyzed. Pearson's correlation coefficient was used to analyze texture-biomass relationships. This research indicates that most textures are weakly correlated with successional vegetation biomass, but some are significantly correlated with mature forest biomass. In contrast, TM spectral signatures are significantly correlated with successional vegetation biomass but weakly correlated with mature forest biomass. Our findings imply that textures may be critical for improving mature forest biomass estimation, but relatively less important for successional vegetation biomass estimation.
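The GLCM texture measures named above can be sketched with a small NumPy routine; this computes the co-occurrence matrix for a single pixel offset and two of the eight measures (contrast and homogeneity), and is an illustrative reimplementation rather than the code used in the study:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for offset (dx, dy),
    normalized to joint probabilities."""
    img = np.asarray(img)
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_contrast(P):
    i, j = np.indices(P.shape)
    return np.sum(P * (i - j) ** 2)

def glcm_homogeneity(P):
    i, j = np.indices(P.shape)
    return np.sum(P / (1.0 + (i - j) ** 2))
```

Sliding such measures over the 5×5 to 25×25 windows listed above yields the per-pixel texture bands that can then be correlated with biomass.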
Abstract:
Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indexes. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classifications. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments - (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analyses (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
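The allocation and quantity disagreement measures used above can be computed directly from a confusion matrix; the sketch below follows the standard Pontius and Millones formulation and is an illustrative reimplementation, not the study's own code:

```python
import numpy as np

def quantity_allocation_disagreement(confusion):
    """Quantity and allocation disagreement from a confusion matrix
    (rows = classified map, columns = reference)."""
    p = confusion / confusion.sum()          # joint proportions
    total_disagreement = 1.0 - np.trace(p)   # all off-diagonal mass
    # Quantity disagreement: mismatch between class proportions
    quantity = 0.5 * np.abs(p.sum(axis=1) - p.sum(axis=0)).sum()
    # Allocation disagreement: the remaining, spatially swappable error
    allocation = total_disagreement - quantity
    return quantity, allocation
```

A 2% drop in allocation disagreement, as reported for the multi-sensor approach, means fewer pixels are merely misplaced between classes even if the overall class proportions are unchanged.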
Abstract:
OBJECTIVE: To characterize eating habits and possible risk factors associated with eating disorders among psychology students, a group at risk for eating disorders. METHOD: This is a cross-sectional study. The Bulimic Investigatory Test Edinburgh (BITE), the Eating Attitudes Test (EAT-26), the Body Shape Questionnaire (BSQ) and a questionnaire covering related issues were applied. The Statistical Package for the Social Sciences (SPSS) 11.0 was used for the analysis. The study population comprised 175 female students, with a mean age of 21.2 years (SD ± 3.6). RESULTS: A positive result was detected on the EAT-26 in 6.9% of the cases (95% CI: 3.6-11.7%). The prevalence of increased symptoms and of intense severity, according to the BITE questionnaire, was 5% (95% CI: 2.4-9.5%) and 2.5% (95% CI: 0.7-6.3%), respectively. According to the findings, 26.29% of the students presented abnormal eating behavior. The group with moderate/severe BSQ scores was dissatisfied with their body weight. CONCLUSION: The results indicate that attention must be given to eating behavior risks within this group. A closer look at these future professionals is justified, since their practice may be compromised if they themselves present established symptoms or precursory behavior.
Abstract:
Objective: To evaluate body image dissatisfaction and its relationship with physical activity and body mass index in a Brazilian sample of adolescents. Methods: A total of 275 adolescents (139 boys and 136 girls) between the ages of 14 and 18 years completed measures of body image dissatisfaction (Contour Drawing Scale) and current physical activity (International Physical Activity Questionnaire). Weight and height were also measured for subsequent calculation of body mass index. Results: Boys and girls differed significantly in body image dissatisfaction, with girls reporting higher levels of dissatisfaction. Underweight and eutrophic boys preferred to be heavier, while overweight boys preferred to be thinner; in contrast, girls desired to be thinner even when they were of normal weight. Conclusion: Body image dissatisfaction was closely related to body mass index, but not to physical activity.
Abstract:
As digital image processing techniques become increasingly used in a broad range of consumer applications, the critical need to evaluate algorithm performance has become recognised by developers as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is of crucial importance that we are able to empirically study their performance. Apart from the field of biometrics, little emphasis has been put on algorithm performance evaluation until now, and where evaluation has taken place, it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems quickly and more efficiently.
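The core of such an automated evaluation framework can be sketched as a small harness that runs each algorithm over a shared set of test cases and records score and runtime; the names, the metric signature and the result layout here are our illustrative assumptions, not the framework described in the paper:

```python
import time

def evaluate(algorithms, test_cases, metric):
    """Run each algorithm over every (input, expected) pair and record
    the mean metric score and wall-clock time: a minimal automated harness."""
    results = {}
    for name, algo in algorithms.items():
        scores, t0 = [], time.perf_counter()
        for inp, expected in test_cases:
            scores.append(metric(algo(inp), expected))
        results[name] = {
            "mean_score": sum(scores) / len(scores),
            "seconds": time.perf_counter() - t0,
        }
    return results
```

Because every algorithm faces the same cases and the same metric, regressions show up as a drop in `mean_score` from one development iteration to the next.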
Abstract:
Combined media on photographic paper. 42" × 77¼"
Abstract:
A nationwide survey was launched to investigate the use of fluoroscopy and to establish national reference levels (RLs) for dose-intensive procedures. The 2-year investigation covered five radiology and nine cardiology departments in public hospitals and private clinics, and focused on 12 examination types: 6 diagnostic and 6 interventional. A total of 1,000 examinations were registered. Information including the fluoroscopy time (T), the number of frames (N) and the dose-area product (DAP) was recorded. The data set was used to establish the distributions of T, N and DAP and the associated RL values. The examinations were pooled to improve the statistics. A wide variation in dose and image quality at fixed geometry was observed. As an example, the skin dose rate for abdominal examinations varied in the range of 10 to 45 mGy/min for comparable image quality. A wide variability was found for several types of examinations, mainly complex ones. DAP RLs of 210, 125, 80, 240, 440 and 110 Gy·cm² were established for lower limb and iliac angiography, cerebral angiography, coronary angiography, biliary drainage and stenting, cerebral embolization and PTCA, respectively. The RL values established are compared with the data published in the literature.
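Deriving RLs from pooled examinations can be sketched as follows; diagnostic reference levels are conventionally set at the 3rd quartile of the observed distribution, though the exact percentile and record layout below are our assumptions for illustration:

```python
import numpy as np

def reference_levels(records, percentile=75):
    """Per-quantity reference levels from pooled examination records.
    Each record holds fluoroscopy time T (min), frame count N and
    dose-area product DAP (Gy cm2); the RL is taken as the 3rd quartile
    of each observed distribution."""
    out = {}
    for key in ("T", "N", "DAP"):
        values = [r[key] for r in records]
        out[key] = float(np.percentile(values, percentile))
    return out
```

Pooling examinations across departments, as done in the survey, widens each distribution and makes the resulting quartile a more stable national benchmark.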
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance or spectral properties); otherwise, the interpretation of results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here we present new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in the stimulus is adjusted to a standardized value based on a standard value across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency) using a dissimilarity metric equal to the root mean square of the distance between the two populations as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are used to obtain a minimal value between the populations, minimizing, in a completely data-driven manner, the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
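The two levels described above can be sketched as follows; the standardization target (the grand mean of the battery) and the use of 2-D FFT amplitude spectra are our illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np

def equalize_luminance(images):
    """Level 1: shift every image so its mean luminance equals the
    grand mean of the whole stimulus battery."""
    target = np.mean([img.mean() for img in images])
    return [img - img.mean() + target for img in images]

def spectral_dissimilarity(set_a, set_b):
    """Level 2: root-mean-square distance between the mean amplitude
    spectra of two stimulus populations."""
    spec_a = np.mean([np.abs(np.fft.fft2(i)) for i in set_a], axis=0)
    spec_b = np.mean([np.abs(np.fft.fft2(i)) for i in set_b], axis=0)
    return np.sqrt(np.mean((spec_a - spec_b) ** 2))
```

Randomized permutations would then reassign images between the two sets, keeping the assignment that minimizes this metric.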
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, limited temporal and financial resources, and high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each family, the most recent advances in the remote sensing community are discussed, and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
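A posterior probability-based heuristic of the kind reviewed above can be sketched in a few lines: given per-pixel class posteriors from any classifier, the breaking-ties criterion ranks pixels by the margin between the two most probable classes. The classifier itself and the batch size are left abstract here; this is a generic sketch, not the paper's exact heuristics:

```python
import numpy as np

def breaking_ties_ranking(posteriors):
    """Rank unlabeled pixels by uncertainty: the smaller the margin
    between the two highest class posteriors, the more informative
    the pixel. posteriors: (n_pixels, n_classes) probability array."""
    part = np.sort(posteriors, axis=1)
    margin = part[:, -1] - part[:, -2]   # top-1 minus top-2 probability
    return np.argsort(margin)            # most uncertain pixels first

def select_batch(posteriors, batch_size=10):
    """Pixels the user is asked to label in the next iteration."""
    return breaking_ties_ranking(posteriors)[:batch_size]
```

Each active learning iteration then retrains the model on the enlarged labeled set and recomputes the posteriors for the remaining pool.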
Abstract:
Images obtained from high-throughput mass spectrometry (MS) contain information that remains hidden when looking at a single spectrum at a time. Image processing of liquid chromatography-MS datasets can be extremely useful for quality control, experimental monitoring and knowledge extraction. The importance of imaging in the differential analysis of proteomic experiments has already been established through two-dimensional gels and can now be foreseen with MS images. We present MSight, a new software package designed to construct and manipulate MS images, as well as to facilitate their analysis and comparison.
Abstract:
In recent years, kernel methods have proven to be very powerful tools in many application domains in general, and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimensionality, few labeled samples and different noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is encoded into SVM classifiers by modifying the model's loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
Abstract:
Introduction: A standardized three-dimensional ultrasonographic (3DUS) protocol that allows fetal face reconstruction is described. The ability to identify cleft lip with 3DUS using this protocol was assessed by operators with minimal 3DUS experience. Material and Methods: 260 stored volumes of the fetal face were analyzed using a standardized protocol by operators with different levels of competence in 3DUS. The outcomes studied were: (1) the performance of post-processing 3D face volumes for the detection of facial clefts; (2) the ability of a resident with minimal 3DUS experience to reconstruct the acquired facial volumes; and (3) the time needed to reconstruct each plane to allow proper diagnosis of a cleft. Results: The three orthogonal planes of the fetal face (axial, sagittal and coronal) were adequately reconstructed with similar performance whether acquired by a maternal-fetal medicine specialist or by residents with minimal experience (72 vs. 76%, p = 0.629). The learning curve for manipulation of 3DUS volumes of the fetal face corresponds to 30 cases and is independent of the operator's level of experience. Discussion: The learning curve for the standardized protocol we describe is short, even for inexperienced sonographers. This technique might decrease the length of anatomy ultrasounds and improve the ability to visualize fetal face anomalies.
Abstract:
The effect of copper (Cu) filtration on image quality and dose in different digital X-ray systems was investigated. Two computed radiography systems and one digital radiography detector were used. Three polymethylmethacrylate blocks simulated the pediatric body. The effect of Cu filters of 0.1, 0.2, and 0.3 mm thickness on the entrance surface dose (ESD) and the corresponding effective doses (EDs) was measured at tube voltages of 60, 66, and 73 kV. Image quality was evaluated in a contrast-detail phantom with automated analyzer software. Cu filters of 0.1, 0.2, and 0.3 mm thickness decreased the ESD by 25-32%, 32-39%, and 40-44%, respectively, the ranges depending on the respective tube voltages. There was no consistent decline in image quality with increasing Cu filtration. The estimated ED of anterior-posterior (AP) chest projections was reduced by up to 23%. No relevant reduction in the ED was noted in AP radiographs of the abdomen and pelvis or in posterior-anterior radiographs of the chest. Cu filtration thus reduces the ESD but generally does not reduce the effective dose. Cu filters can, however, help protect radiosensitive superficial organs, such as the mammary glands in AP chest projections.