846 results for image processing and analysis


Relevance: 100.00%

Abstract:

A novel algorithm for performing registration of dynamic contrast-enhanced (DCE) MRI data of the breast is presented. It is based on an algorithm known as iterated dynamic programming, originally devised to solve the stereo matching problem. Using artificially distorted DCE-MRI breast images, it is shown that the proposed algorithm is able to correct for movement and distortions over a larger range than is likely to occur during routine clinical examination. In addition, using a clinical DCE-MRI data set with an expertly labeled suspicious region, it is shown that the proposed algorithm significantly reduces the variability of the enhancement curves at the pixel level, yielding more pronounced uptake and washout phases.
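The abstract does not give the algorithm's internals, but the dynamic-programming core of stereo-style matching can be sketched as a 1-D alignment of corresponding intensity rows, in which unmatched pixels pay an occlusion penalty. This is a minimal illustration of that family of techniques, not the paper's implementation; the function name and `occlusion_cost` parameter are illustrative:

```python
def dp_match(row_a, row_b, occlusion_cost=2.0):
    """Dynamic-programming alignment of two 1-D intensity profiles,
    in the spirit of stereo matching: matched pixels pay their
    absolute intensity difference, skipped pixels pay a fixed
    occlusion penalty.  Returns the minimal total matching cost."""
    n, m = len(row_a), len(row_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:  # match row_a[i-1] with row_b[j-1]
                d = abs(row_a[i - 1] - row_b[j - 1])
                cost[i][j] = min(cost[i][j], cost[i - 1][j - 1] + d)
            if i > 0:            # skip a pixel in row_a
                cost[i][j] = min(cost[i][j], cost[i - 1][j] + occlusion_cost)
            if j > 0:            # skip a pixel in row_b
                cost[i][j] = min(cost[i][j], cost[i][j - 1] + occlusion_cost)
    return cost[n][m]
```

A backtrace through the cost table would recover the per-pixel correspondences used to warp one image onto the other; iterating such 1-D passes over the image is one way such a scheme extends to 2-D registration.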

Relevance: 100.00%

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.

Relevance: 100.00%

Abstract:

Three studies tested the impact of properties of behavioral intention on intention-behavior consistency, information processing, and resistance. Principal components analysis showed that properties of intention formed distinct factors. Study 1 demonstrated that temporal stability, but not the other intention attributes, moderated intention-behavior consistency. Study 2 found that greater stability of intention was associated with improved memory performance. In Study 3, participants were confronted with a rating scale manipulation designed to alter their intention scores. Findings showed that stable intentions were able to withstand attack. Overall, the present research findings suggest that different properties of intention are not simply manifestations of a single underlying construct ("intention strength"), and that temporal stability exhibits superior resistance and impact compared to other intention attributes. © 2013 Wiley Periodicals, Inc.

Relevance: 100.00%

Abstract:

Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single chip cameras and the JVC:KYF58 (767 × 569) three chip camera. The images were stored in TIFF format and further copies created with reduced resolution or compressed. The images were then ranked for clarity on a 15 inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. Theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25 × magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
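The theoretical minimum of ≥579 horizontal pixels follows Nyquist-style reasoning: the smallest object of clinical interest must span at least two pixels across the imaged field. The field width and feature size in the sketch below are illustrative values chosen for the example, not figures from the paper:

```python
import math

def min_horizontal_pixels(field_width_mm, smallest_feature_mm,
                          pixels_per_feature=2):
    """Nyquist-style estimate of the horizontal pixel count needed so
    that the smallest clinically interesting feature spans at least
    `pixels_per_feature` pixels across the imaged field."""
    return math.ceil(pixels_per_feature * field_width_mm / smallest_feature_mm)

# e.g. a 14.5 mm field and a 0.05 mm feature (illustrative numbers)
# require at least 580 horizontal pixels
```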

Relevance: 100.00%

Abstract:

After many years of scholarly study, manuscript collections continue to be an important source of novel information for scholars, concerning both the history of earlier times and the development of cultural documentation over the centuries. The D-SCRIBE project aims to support and facilitate current and future efforts in manuscript digitization and processing. It strives toward the creation of a comprehensive software product which can assist content holders in turning an archive of manuscripts into a digital collection using automated methods. In this paper, we focus on the problem of recognizing early Christian Greek manuscripts. We propose a novel digital image binarization scheme for low-quality historical documents, allowing further content exploitation in an efficient way. Based on the existence of closed cavity regions in the majority of characters and character ligatures in these scripts, we propose a novel, segmentation-free, fast and efficient technique that assists the recognition procedure by tracing and recognizing the most frequently appearing characters or character ligatures.
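The paper's binarization scheme is not detailed in the abstract; a standard adaptive threshold for low-quality documents is Sauvola's method, in which each pixel is compared against a threshold built from the local mean and standard deviation. The plain-Python sketch below illustrates that family of techniques, not the authors' algorithm; `k` and `R` are conventional parameter choices:

```python
def binarize_sauvola(image, window=3, k=0.5, R=128.0):
    """Sauvola-style adaptive binarization: a pixel is ink (1) when it
    is darker than a threshold built from the local mean m and local
    standard deviation s:  t = m * (1 + k * (s / R - 1)).
    `image` is a list of rows of grey levels in 0..255."""
    h, w = len(image), len(image[0])
    half = window // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [image[yy][xx]
                     for yy in range(max(0, y - half), min(h, y + half + 1))
                     for xx in range(max(0, x - half), min(w, x + half + 1))]
            m = sum(patch) / len(patch)
            s = (sum((v - m) ** 2 for v in patch) / len(patch)) ** 0.5
            t = m * (1.0 + k * (s / R - 1.0))
            out[y][x] = 1 if image[y][x] < t else 0
    return out
```

Because the threshold tracks the local statistics, faint strokes on stained or unevenly lit parchment survive where a single global threshold would lose them.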

Relevance: 100.00%

Abstract:

This article presents the principal results of the doctoral thesis “Recognition of neume notation in historical documents” by Lasko Laskov (Institute of Mathematics and Informatics at Bulgarian Academy of Sciences), successfully defended before the Specialized Academic Council for Informatics and Mathematical Modelling on 07 June 2010.

Relevance: 100.00%

Abstract:

Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugated ocular oscillations that can arise in the first months of life. The pathogenesis of congenital nystagmus is still under investigation. In general, CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of nystagmus. However, image stabilisation is still achieved during the short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse the main features of nystagmus, such as shape, amplitude and frequency. Using eye movement recordings, it is also possible to compute estimated visual acuity predictors: analytical functions which estimate expected visual acuity from signal features such as foveation time and foveation position variability. Use of these functions adds information to typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning or monitoring. This study focuses on robust detection of CN patients' foveations. Specifically, it proposes a method to recognise the exact signal tracts in which a subject foveates. This paper also analyses foveation sequences. About 50 eye-movement recordings, either infrared-oculographic or electrooculographic, from different CN subjects were acquired. Results suggest that an exponential interpolation for the slow phases of nystagmus could improve foveation time computation and reduce the influence of braking saccades and data noise. Moreover, a concise description of foveation sequence variability can be achieved using non-fitting splines. © 2009 Springer Berlin Heidelberg.
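The foveation-detection step can be illustrated with the classic foveation-window criterion: flag runs of samples where the eye is both close to the target and moving slowly. The thresholds below (position window in degrees, velocity limit in deg/s, minimum run length in samples) are conventional illustrative values, not the paper's:

```python
def foveation_intervals(position, velocity,
                        pos_window=0.5, vel_limit=4.0, min_len=3):
    """Return (start, end) index pairs of sample runs where the eye
    position (deg, relative to target) stays within pos_window and the
    eye velocity (deg/s) stays within vel_limit for at least min_len
    samples -- a simple foveation-window criterion."""
    intervals, start = [], None
    for i, (p, v) in enumerate(zip(position, velocity)):
        ok = abs(p) <= pos_window and abs(v) <= vel_limit
        if ok and start is None:
            start = i                      # a candidate run begins
        elif not ok and start is not None:
            if i - start >= min_len:       # keep runs long enough
                intervals.append((start, i))
            start = None
    if start is not None and len(position) - start >= min_len:
        intervals.append((start, len(position)))
    return intervals
```

The paper's contribution of fitting an exponential to the slow phases would refine the velocity estimate fed into such a detector, making it less sensitive to braking saccades and noise.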

Relevance: 100.00%

Abstract:

The term oxylipin is applied to oxygenated products of polyunsaturated fatty acids that can arise through either non-enzymatic or enzymatic processes, generating a complex array of products including alcohols, aldehydes, ketones, acids and hydrocarbon gases. Study of the biosynthetic origin of these products has revealed an array of enzymes involved in their formation and, more recently, a radical pathway. These range from lipoxygenases and α-dioxygenase, which insert both oxygen atoms into the acyl chain to initiate the pathways, to specialised P450 monooxygenases that are responsible for their downstream processing. This latter group includes enzymes at the branch points, such as allene oxide synthase, leading to jasmonate signalling; hydroperoxide lyase, responsible for generating pathogen/pest defensive volatiles; and divinyl ether synthases and peroxygenases, involved in the formation of antimicrobial compounds. The complexity of the products generated raises significant challenges for their rapid identification and quantification using metabolic screening methods. Here, current developments in oxylipin metabolism are reviewed, together with the emerging technologies required to expand this important field of research, which underpins advances in plant-pest/pathogen interactions.

Relevance: 100.00%

Abstract:

This research pursued the conceptualization and real-time verification of a system that allows a computer user to control the cursor of a computer interface without using his/her hands. The target user groups for this system are individuals who are unable to use their hands due to spinal dysfunction or other afflictions, and individuals who must use their hands for higher priority tasks while still requiring interaction with a computer. The system receives two forms of input from the user: Electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an Eye Gaze Tracking (EGT) system. In order to produce reliable cursor control from the two forms of user input, the development of this EMG/EGT system addressed three key requirements: an algorithm was created to accurately translate EMG signals due to facial movements into cursor actions, a separate algorithm was created that recognized an eye gaze fixation and provided an estimate of the associated eye gaze position, and an information fusion protocol was devised to efficiently integrate the outputs of these algorithms. Experiments were conducted to compare the performance of EMG/EGT cursor control to EGT-only control and mouse control. These experiments took the form of two different types of point-and-click trials. The data produced by these experiments were evaluated using statistical analysis, Fitts' Law analysis and target re-entry (TRE) analysis. The experimental results revealed that though EMG/EGT control was slower than EGT-only and mouse control, it provided effective hands-free control of the cursor without a spatial accuracy limitation, and it also facilitated a reliable click operation. This combination of qualities is not possessed by either EGT-only or mouse control, making EMG/EGT cursor control a unique and practical alternative for a user's cursor control needs.

Relevance: 100.00%

Abstract:

The microarray technology provides a high-throughput technique to study gene expression. Microarrays can help us diagnose different types of cancers, understand biological processes, assess host responses to drugs and pathogens, find markers for specific diseases, and much more. Microarray experiments generate large amounts of data. Thus, effective data processing and analysis are critical for making reliable inferences from the data. The first part of the dissertation addresses the problem of finding an optimal set of genes (biomarkers) to classify a set of samples as diseased or normal. Three statistical gene selection methods (GS, GS-NR, and GS-PCA) were developed to identify a set of genes that best differentiate between samples. A comparative study on different classification tools was performed and the best combinations of gene selection and classifiers for multi-class cancer classification were identified. For most of the benchmark cancer data sets, the gene selection method proposed in this dissertation, GS, outperformed other gene selection methods. The classifiers based on Random Forests, neural network ensembles, and K-nearest neighbor (KNN) showed consistently good performance. A striking commonality among these classifiers is that they all use a committee-based approach, suggesting that ensemble classification methods are superior. The same biological problem may be studied at different research labs and/or performed using different lab protocols or samples. In such situations, it is important to combine results from these efforts. The second part of the dissertation addresses the problem of pooling the results from different independent experiments to obtain improved results. Four statistical pooling techniques (Fisher's inverse chi-square method, the Logit method, Stouffer's Z transform method, and the Liptak-Stouffer weighted Z-method) were investigated in this dissertation. These pooling techniques were applied to the problem of identifying cell cycle-regulated genes in two different yeast species. As a result, improved sets of cell cycle-regulated genes were identified. The last part of the dissertation explores the effectiveness of wavelet data transforms for the task of clustering. Discrete wavelet transforms, with an appropriate choice of wavelet bases, were shown to be effective in producing clusters that were biologically more meaningful.
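Of the four pooling techniques, Fisher's inverse chi-square method is the most compact to state: for k independent p-values, the statistic X = -2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom under the joint null hypothesis. A minimal implementation, using the closed-form chi-square survival function available for even degrees of freedom:

```python
import math

def fisher_pool(p_values):
    """Fisher's inverse chi-square method for combining independent
    p-values.  X = -2 * sum(ln p_i) ~ chi-square with 2k d.o.f. under
    the joint null; returns the pooled p-value P(X > x)."""
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    # For 2k degrees of freedom the survival function has the closed
    # form  P(X > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = x / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        total += term
    return math.exp(-half) * total
```

For example, pooling two borderline results p = 0.05 and p = 0.05 yields a combined p of about 0.017, stronger evidence than either experiment alone.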

Relevance: 100.00%

Abstract:

Issues of body image and the ability to achieve intimacy are connected to body weight, yet remain largely unexplored and have not been evaluated by gender. The underlying purpose of this research was to determine whether avoidant attitudes and perceptions of one's body may hold implications for its use in intimate interactions, and whether an above-average body weight would tend to increase this avoidance. The National Health and Nutrition Examination Survey (NHANES, 1999-2002) finds that 64.5% of US adults are overweight, including 61.9% of women and 67.2% of men. The increasing prevalence of overweight and obesity in men and women shows no reverse trend, nor have prevention and treatment proven effective in the long term. The researcher gathered self-reported age, gender, height and weight data from 55 male and 58 female subjects (sample size determined by a prospective power analysis with a desired medium effect size, r=.30) to calculate body mass index (BMI), yielding a mean age of 21.6 years and a mean BMI of 25.6. Survey instruments consisted of two scales germane to the variables being examined: (1) Descutner and Thelen of the University of Missouri's (1991) Fear-of-Intimacy scale; and (2) Rosen, Srebnik, Saltzberg, and Wendt's (1991) Body Image Avoidance Questionnaire. Results indicated that as body mass index increases, fear of intimacy increases (p<0.05), and that as body mass index increases, body image avoidance increases (p<0.05). The relationship that as body image avoidance increases, fear of intimacy increases was not supported, but approached significance (p<0.07). No differences in these relationships were found between gender groups. For age, the only observed relationship was a difference between scores for age groups [18 to 22 (group 1) and 23 to 34 (group 2)] for the relationship of body image avoidance and fear of intimacy (p<0.02). The results suggest that the relationship of body image avoidance and fear of intimacy, as well as age, bears consideration toward the escalating prevalence of overweight and obesity. An integrative approach to body weight that addresses issues of body image and intimacy may prove effective in prevention and treatment.

Relevance: 100.00%

Abstract:

The purpose of this study was to determine racial and ethnic differences in the body image perceptions and weight concerns of fourth grade girls. A purposive sample of 182 fourth grade girls was eligible to participate; 166 were included in the data analysis. The Children's Eating Attitude Test (ChEAT) and a Dieting and Demographic Questionnaire (DDQ) were used to determine the eating attitudes of fourth grade girls. A pictorial instrument modified from the original was used to assess body image. Anthropometric data were collected and body mass index (BMI) values were used to classify subjects into percentiles. Results revealed that 56% of all fourth grade girls studied wanted to be thinner and 53% had tried to lose weight. Significantly more non-Hispanic white (NHW) girls reported wanting to be thinner than non-Hispanic black (NHB) and Hispanic (H) girls (65.5% vs. 32% and 47%, respectively, P=0.005). No significant racial/ethnic differences were revealed for the ChEAT scores. However, 19% of all subjects studied fell into the category indicative of anorexia nervosa. H girls who were below the 85th percentile for BMI chose significantly smaller figures as their perceived body image (3.5±0.7) than both NHB and NHW girls (4.0±0.6 and 3.9±0.5, respectively, P<0.01). These findings demonstrate that weight concerns were prevalent among girls aged 9-11 years. NHW and H girls may have more concerns about their body size and shape than their NHB counterparts. Implementing intervention programs at an early age may prevent eating disorders in adolescence and adulthood.

Relevance: 100.00%

Abstract:

This paper presents an image processing based method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out in this regard to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. The pitting corrosion of a steel sample in a marine environment is successfully detected using the proposed methodology. It is observed that the proposed method has definite potential to be applied to a wider range of applications.
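HDR imaging mitigates the glare and deep shadows that a single exposure of a metallic surface produces. A naive exposure merge, much simpler than but in the spirit of standard HDR pipelines, averages per-pixel irradiance estimates across exposures, weighting well-exposed values most; the hat-shaped weighting below is a common illustrative choice, not the paper's procedure:

```python
def merge_exposures(exposures, times):
    """Naive HDR merge: per-pixel weighted average of irradiance
    estimates (pixel_value / exposure_time), weighting mid-range
    pixel values most.  `exposures` is a list of images (lists of
    rows of grey levels 0..255), `times` the exposure times."""
    def weight(z):
        # hat-shaped weighting: 0 at the extremes, 1 at mid-grey,
        # so under- and over-exposed values contribute little
        return min(z, 255 - z) / 127.5

    h, w = len(exposures[0]), len(exposures[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img, t in zip(exposures, times):
                z = img[y][x]
                wz = weight(z)
                num += wz * z / t
                den += wz
            out[y][x] = num / den if den else 0.0
    return out
```

The merged irradiance map can then be thresholded or segmented to localise dark pit regions that a single exposure would hide in shadow or glare.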

Relevance: 100.00%

Abstract:

Current state-of-the-art techniques for landmine detection in ground penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR-specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial work in visual image processing. The methods developed and presented in this work show an improvement in overall detection performance and introduce a method to improve the robustness of statistical classification.
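At their core, learned convolutional features reduce to 2-D cross-correlations whose kernel weights are fit by training rather than designed by hand. A minimal valid-mode convolution (without the training loop that would fit the kernel) can be sketched as:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as
    used in CNNs): slides `kernel` over `image` and returns the
    resulting feature map as a list of rows."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(w)]
            for y in range(h)]
```

In a CNN, the kernel entries would be learned by gradient descent so that the resulting feature maps respond strongly to the hyperbolic landmine signatures in GPR B-scans.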