912 results for Image analysis method


Relevance: 90.00%

Publisher:

Abstract:

Recursive filters are widely used in image analysis due to their efficiency and simple implementation. However, these filters have an initialisation problem that either produces unusable results near the image boundaries or requires costly approximate solutions, such as extending the boundary manually. In this paper, we describe a method for the recursive filtering of symmetrically extended images for filters with symmetric denominator. We begin with an analysis of symmetric extensions and their effect on non-recursive filtering operators. Based on the non-recursive case, we derive a formulation of recursive filtering on symmetric domains as a linear but spatially varying implicit operator. We then give an efficient method for decomposing and solving the linear implicit system, along with a proof that this decomposition always exists. The decomposition needs to be performed only once for each dimension of the image. This yields filtering that is both stable and consistent with the ideal infinite extension. The filter is efficient, requiring less computation than standard recursive filtering. We give experimental evidence to verify these claims. (c) 2005 Elsevier B.V. All rights reserved.
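The boundary problem described above can be illustrated with a minimal sketch (this is not the paper's implicit-operator decomposition): a first-order recursive smoother is applied once with naive initialisation and once to a symmetrically extended copy of the signal. The pad length and filter coefficient are arbitrary choices for the example.

```python
# Illustrative sketch of the boundary-initialisation issue for a recursive
# (IIR) filter, handled by filtering a symmetrically extended signal.
import numpy as np

def recursive_filter(x, a):
    """First-order recursive smoother y[n] = a*x[n] + (1 - a)*y[n-1]."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]                      # naive initialisation at the boundary
    for n in range(1, len(x)):
        y[n] = a * x[n] + (1.0 - a) * y[n - 1]
    return y

def filter_with_symmetric_extension(x, a, pad=32):
    """Reflect the signal at both ends, filter, then crop the interior."""
    xe = np.concatenate([x[pad:0:-1], x, x[-2:-pad - 2:-1]])  # whole-sample reflection
    return recursive_filter(xe, a)[pad:pad + len(x)]

x = np.cos(np.linspace(0.0, 3.0, 100)) + 0.1 * np.random.randn(100)
print(recursive_filter(x, 0.2)[:3])                  # biased near the left edge
print(filter_with_symmetric_extension(x, 0.2)[:3])   # closer to the ideal extension
```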

Relevance: 90.00%

Publisher:

Abstract:

In this work, we propose an improvement of the classical Derjaguin-Broekhoff-de Boer (DBdB) theory for capillary condensation/evaporation in mesoporous systems. The primary idea of this improvement is to employ the Gibbs-Tolman-Koenig-Buff equation to predict the surface tension changes in mesopores. In addition, the statistical film thickness (the so-called t-curve), evaluated accurately on the basis of adsorption isotherms measured for MCM-41 materials, is used instead of the originally proposed t-curve (to take into account the excess of the chemical potential due to the surface forces). It is shown that the aforementioned modifications of the original DBdB theory have significant implications for the pore size analysis of mesoporous solids. To verify our improvement of the DBdB pore size analysis method (IDBdB), a series of calcined MCM-41 samples, which are well-defined materials with hexagonally ordered cylindrical mesopores, were used for the evaluation of the pore size distributions. The correlation of the IDBdB method with the empirically calibrated Kruk-Jaroniec-Sayari (KJS) relationship is very good in the range of small mesopores. Thus, a major advantage of the IDBdB method is its applicability for small mesopores as well as for the mesopore range beyond that established by the KJS calibration, i.e., for mesopore radii greater than approximately 4.5 nm. The comparison of the IDBdB results with experimental data reported by Kruk and Jaroniec for capillary condensation/evaporation, as well as with the results from the nonlocal density functional theory developed by Neimark et al., clearly justifies our approach. Note that the proposed improvement of the classical DBdB method preserves its original simplicity and simultaneously ensures a significant improvement of the pore size analysis, which is confirmed by the independent estimation of the mean pore size by the powder X-ray diffraction method.
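For illustration only, the curvature correction at the heart of the improvement can be sketched with the first-order Tolman form of the Gibbs-Tolman-Koenig-Buff equation, gamma(r) = gamma_inf / (1 + 2*delta/r). The flat-surface surface tension and Tolman length below are rough placeholder values for liquid nitrogen, not the parametrisation used in the paper.

```python
# Minimal sketch of a curvature-dependent surface tension via the first-order
# Tolman approximation to the Gibbs-Tolman-Koenig-Buff equation.
GAMMA_INF = 8.88e-3      # N/m, approx. flat-surface tension of liquid N2 at 77 K
TOLMAN_LENGTH = 0.2e-9   # m, assumed Tolman length (placeholder value)

def surface_tension(r_meniscus_m: float) -> float:
    """Curvature-corrected surface tension for a meniscus of radius r (metres)."""
    return GAMMA_INF / (1.0 + 2.0 * TOLMAN_LENGTH / r_meniscus_m)

for r_nm in (1.0, 2.0, 4.5, 10.0):
    print(f"r = {r_nm:4.1f} nm  gamma = {surface_tension(r_nm * 1e-9) * 1e3:.3f} mN/m")
```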

Relevance: 90.00%

Publisher:

Abstract:

We hypothesized that the four rotation crops: wheat (Triticum aestivum L.), sorghum [Sorghum bicolor (L.) Moench], lablab [Lablab purpureus (L.) Sweet] and mung bean [Vigna radiata (L.) R. Wilczek] differ in their ability to repair soil structure. The study was conducted on a Typic Haplustert in Queensland, Australia, locally termed a Black Earth and considered a prime cropping soil. Large (0.5-m depth by 0.3-m diam.) soil cores, collected from compacted wheel furrows in an irrigated cotton (Gossypium hirsutum L.) field, were subjected to three, six, or nine wet-dry cycles that simulated local flood irrigation practices. After each cycle, soil profiles were sampled for clod bulk density, image analysis of soil structure, and evapotranspiration. Generally, all crops improved soil structure over the initial field condition, but lablab and mung bean gave improvements to greater depths and more rapidly than wheat and sorghum. Mung bean and lablab caused up to a threefold increase in clod porosity in the 0.1- to 0.4-m soil layer after only three wet-dry cycles, whereas sorghum required nine wet-dry cycles to increase clod porosity in only the 0.2- to 0.3-m layer, and wheat gave no improvement even after nine wet-dry cycles. Image analysis of soil structure showed that lablab and mung bean rapidly (by three wet-dry cycles) produced smaller peds with more interconnected pore space than wheat and sorghum. By nine wet-dry cycles, sorghum achieved deep cracking of the soil, but the material between the cracks remained large and dense. Evapotranspiration was double under lablab and mung bean compared with wheat and sorghum. Our results indicate greater cycles of wetting and drying under lablab and mung bean than under wheat and sorghum, which led to rapid repair of soil compaction.

Relevance: 90.00%

Publisher:

Abstract:

We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of segmentations. Markov Random Fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as it is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model in which the bottom-level nodes are pixels, and higher levels correspond to downsampled versions of the image. The conditional-probability tables (CPTs) in the belief network encode the knowledge of how the levels interact. In this paper we discuss two methods of learning the CPTs given training data, using (a) maximum likelihood and the EM algorithm and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
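As a toy illustration of inference in a tree-structured model of this kind (not the trained networks of the paper), the sketch below computes exact leaf-class posteriors for a single coarse-level parent with four child pixels; the prior, CPT and likelihood numbers are invented, whereas in the paper the CPTs are learned by ML/EM or by CML.

```python
# Toy two-level quadtree belief network: one parent node, four leaf pixels,
# K classes; exact posterior over each leaf's class by enumeration.
import numpy as np

K = 2                                   # number of classes
prior = np.array([0.6, 0.4])            # P(parent class)
cpt = np.array([[0.8, 0.2],             # P(child class | parent class)
                [0.3, 0.7]])            # rows: parent class, cols: child class
lik = np.array([[0.9, 0.1],             # P(observation | child class), one row per leaf
                [0.7, 0.3],
                [0.4, 0.6],
                [0.2, 0.8]])

def leaf_posteriors(prior, cpt, lik):
    """Posterior P(child class | all evidence) for every leaf."""
    msgs = lik @ cpt.T                  # msgs[j, p] = sum_c P(c|p) * L_j(c)
    post = np.zeros_like(lik)
    for i in range(lik.shape[0]):
        others = np.prod(np.delete(msgs, i, axis=0), axis=0)   # product over j != i
        for c in range(K):
            post[i, c] = lik[i, c] * np.sum(prior * others * cpt[:, c])
        post[i] /= post[i].sum()
    return post

print(leaf_posteriors(prior, cpt, lik))
```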

Relevance: 90.00%

Publisher:

Abstract:

This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed. Copyright (C) 2000 The College of Optometrists.
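As a minimal illustration of the method the article describes, the sketch below runs a one-way ANOVA on invented intraocular pressure readings under three hypothetical treatments, using scipy.stats.f_oneway.

```python
# One-way ANOVA on made-up clinical data (three hypothetical treatments).
from scipy import stats

treatment_a = [16.2, 15.8, 17.1, 16.5, 15.9]   # IOP (mmHg), invented values
treatment_b = [18.4, 17.9, 18.8, 19.1, 18.2]
treatment_c = [16.0, 16.4, 15.7, 16.8, 16.1]

f_stat, p_value = stats.f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A planned comparison or post hoc test (e.g. Tukey HSD) would then locate
# which pairs of treatment means differ.
```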

Relevance: 90.00%

Publisher:

Abstract:

This review will discuss the use of manual grading scales, digital photography, and automated image analysis in the quantification of fundus changes caused by age-related macular disease. Digital imaging permits processing of images for enhancement, comparison, and feature quantification, and these techniques have been investigated for automated drusen analysis. The accuracy of automated analysis systems has been enhanced by the incorporation of interactive elements, such that the user is able to adjust the sensitivity of the system, or manually add and remove pixels. These methods capitalize on both computer and human image feature recognition and the advantage of computer-based methodologies for quantification. The histogram-based adaptive local thresholding system is able to extract useful information from the image without being affected by the presence of other structures. More recent developments involve compensation for fundus background reflectance, which has most recently been combined with the Otsu method of global thresholding. This method is reported to provide results comparable with manual stereo viewing. Developments in this area are likely to encourage wider use of automated techniques. This will make the grading of photographs easier and cheaper for clinicians and researchers. © 2007 Elsevier Inc. All rights reserved.
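For reference, the global Otsu step mentioned above can be sketched in plain NumPy; the synthetic image and bright patch stand in for a reflectance-corrected fundus photograph, and this is not any of the published drusen-analysis pipelines themselves.

```python
# Global Otsu thresholding: pick the grey level maximising between-class variance.
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                      # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))        # class-0 cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))

rng = np.random.default_rng(0)
background = rng.normal(90, 10, (128, 128))
background[40:60, 40:60] += 80                   # a bright drusen-like patch
image = np.clip(background, 0, 255).astype(np.uint8)
t = otsu_threshold(image)
print("Otsu threshold:", t, "bright pixels:", int((image > t).sum()))
```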

Relevance: 90.00%

Publisher:

Abstract:

Recently, we introduced a new 'GLM-beamformer' technique for MEG analysis that enables accurate localisation of both phase-locked and non-phase-locked neuromagnetic effects, and their representation as statistical parametric maps (SPMs). This provides a useful framework for comparison of the full range of MEG responses with fMRI BOLD results. This paper reports a 'proof of principle' study using a simple visual paradigm (static checkerboard). The five subjects each underwent both MEG and fMRI paradigms. We demonstrate, for the first time, the presence of a sustained (DC) field in the visual cortex, and its co-localisation with the visual BOLD response. The GLM-beamformer analysis method is also used to investigate the main non-phase-locked oscillatory effects: an event-related desynchronisation (ERD) in the alpha band (8-13 Hz) and an event-related synchronisation (ERS) in the gamma band (55-70 Hz). We show, using SPMs and virtual electrode traces, the spatio-temporal covariance of these effects with the visual BOLD response. Comparisons between MEG and fMRI data sets generally focus on the relationship between the BOLD response and the transient evoked response. Here, we show that the stationary field and changes in oscillatory power are also important contributors to the BOLD response, and should be included in future studies on the relationship between neuronal activation and the haemodynamic response. © 2005 Elsevier Inc. All rights reserved.
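As a simple illustration of the oscillatory measures discussed (not the GLM-beamformer itself), the sketch below estimates event-related band-power change in the alpha and gamma bands from a simulated single-channel trace; the sampling rate, trial structure and simulated alpha rhythm are assumptions made for the example.

```python
# Event-related (de)synchronisation as percentage band-power change from baseline.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 600.0                                     # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)                # 2 s baseline + 2 s stimulus
signal = np.random.randn(t.size)
signal[:int(2 * fs)] += 2.0 * np.sin(2 * np.pi * 10 * t[:int(2 * fs)])  # alpha in baseline

def band_power(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.mean(filtfilt(b, a, x) ** 2)

for name, lo, hi in [("alpha", 8, 13), ("gamma", 55, 70)]:
    base = band_power(signal[:int(2 * fs)], lo, hi)
    stim = band_power(signal[int(2 * fs):], lo, hi)
    print(f"{name}: {100.0 * (stim - base) / base:+.1f}% power change")  # negative = ERD
```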

Relevance: 90.00%

Publisher:

Abstract:

PURPOSE: To assess the repeatability of an objective image analysis technique to determine intraocular lens (IOL) rotation and centration. SETTING: Six ophthalmology clinics across Europe. METHODS: One hundred seven patients implanted with Akreos AO aspheric IOLs with orientation marks were imaged. Image quality was rated by a masked observer. The axis of rotation was determined from a line bisecting the IOL orientation marks. This was normalized for rotation of the eye between visits using the axis bisecting 2 consistent conjunctival vessels or iris features. The centers of ovals overlaid to circumscribe the IOL optic edge and the pupil or limbus were compared to determine IOL centration. Intrasession repeatability was assessed in 40 eyes and the variability of repeated analysis examined. RESULTS: Intrasession rotational stability of the IOL was ±0.79 degrees (SD) and centration was ±0.10 mm horizontally and ±0.10 mm vertically. Repeated analysis variability of the same image was ±0.70 degrees for rotation and ±0.20 mm horizontally and ±0.31 mm vertically for centration. Eye rotation (absolute) between visits was 2.23 ± 1.84 degrees (10% > 5 degrees rotation) using one set of consistent conjunctival vessels or iris features and 2.03 ± 1.66 degrees (7% > 5 degrees rotation) using the average of 2 sets (P = .13). Poorer image quality resulted in larger apparent absolute IOL rotation (r = -0.45, P < .001). CONCLUSIONS: Objective analysis of digital retroillumination images allows sensitive assessment of IOL rotation and centration stability. Eye rotation between images can lead to significant errors if not taken into account. Image quality is important to analysis accuracy.
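The rotation geometry can be sketched as follows: the IOL axis is taken as the line joining the two orientation marks, and between-visit eye rotation is removed using the axis joining two reference features (e.g. conjunctival vessels). All coordinates below are invented and the function names are illustrative, not the study's software.

```python
# Apparent IOL rotation between visits, corrected for eye (cyclo)rotation.
import numpy as np

def axis_angle(p1, p2):
    """Angle (degrees) of the line through two image points p1, p2."""
    dx, dy = np.subtract(p2, p1)
    return np.degrees(np.arctan2(dy, dx))

# visit 1: IOL orientation marks and two reference features (pixel coordinates)
iol_v1, ref_v1 = [(312, 418), (688, 402)], [(150, 500), (820, 530)]
# visit 2: the same landmarks re-identified
iol_v2, ref_v2 = [(318, 430), (690, 390)], [(152, 506), (818, 540)]

eye_rotation = axis_angle(*ref_v2) - axis_angle(*ref_v1)
apparent_iol_rotation = axis_angle(*iol_v2) - axis_angle(*iol_v1)
true_iol_rotation = apparent_iol_rotation - eye_rotation
print(f"eye rotation {eye_rotation:+.2f} deg, IOL rotation {true_iol_rotation:+.2f} deg")
```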

Relevance: 90.00%

Publisher:

Abstract:

Stereology and other image analysis methods have enabled rapid and objective quantitative measurements to be made on histological sections. These measurements may include total volumes, surfaces, lengths and numbers of cells, blood vessels or pathological lesions. Histological features, however, may not be randomly distributed across a section but may exhibit 'dispersion', a departure from randomness either towards regularity or towards aggregation. Information on population dispersion may be valuable not only in understanding the two- or three-dimensional structure but also in elucidating the pathogenesis of lesions in pathological conditions. This article reviews some of the statistical methods available for studying dispersion. These range from simple tests of whether the distribution of a histological feature departs significantly from random to more complex methods that can detect the intensity of aggregation and the sizes, distribution and spacing of the clusters.
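The simplest of these tests, the variance-to-mean ratio (index of dispersion) with its chi-square approximation, can be sketched as follows on invented counts per microscope field.

```python
# Index of dispersion: variance/mean of counts per field, tested against the
# Poisson (complete spatial randomness) expectation.
import numpy as np
from scipy import stats

counts = np.array([3, 7, 0, 12, 1, 9, 2, 15, 0, 8, 4, 11])   # e.g. lesions per field
n = counts.size
idx_dispersion = counts.var(ddof=1) / counts.mean()
chi2 = idx_dispersion * (n - 1)              # ~ chi2 with n-1 df under randomness
p = stats.chi2.sf(chi2, df=n - 1)            # one-sided test for aggregation
tendency = "aggregated" if idx_dispersion > 1 else "regular"
print(f"index of dispersion = {idx_dispersion:.2f} ({tendency}), p = {p:.4f}")
```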

Relevance: 90.00%

Publisher:

Abstract:

The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when there is a vast array of methods available. Some of these methods are specific to particular contexts or applications; others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular the use of task analysis to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspects of task analysis and presents a review of the methods, issues and concepts relating to task analysis. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and was developed to meet the criteria outlined for such a method of task analysis. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together and a method developed to provide a task analysis technique to specify operator information requirements and to provide the first stages of a tool to aid the design of VDU displays for process control.

Relevance: 90.00%

Publisher:

Abstract:

The thesis presents new methodology and algorithms that can be used to analyse and measure the hand tremor and fatigue of surgeons while performing surgery. This will assist them in deriving useful information about their fatigue levels and make them aware of changes in their tool-point accuracy. The thesis proposes that the muscular changes of surgeons, which occur through a day of operating, can be monitored using Electromyography (EMG) signals. The multi-channel EMG signals are measured at different muscles in the upper arm of surgeons. The dependence of the EMG signals has been examined to test the hypothesis that EMG signals are coupled with and dependent on each other. The results demonstrated that EMG signals collected from different channels while mimicking an operating posture are independent. Consequently, single-channel fatigue analysis has been performed. In measuring hand tremor, a new method for determining the maximum tremor amplitude using Principal Component Analysis (PCA) and a new technique to detrend acceleration signals using the Empirical Mode Decomposition algorithm were introduced. This tremor determination method is more representative for surgeons and is suggested as an alternative fatigue measure. This was combined with the complexity analysis method and applied to surgically captured data to determine whether operating has an effect on a surgeon’s fatigue and tremor levels. It was found that surgical tremor and fatigue develop throughout a day of operating and that this could be determined based solely on their initial values. Finally, several Nonlinear AutoRegressive with eXogenous inputs (NARX) neural networks were evaluated. The results suggest that it is possible to monitor surgeon tremor variations during surgery from their EMG fatigue measurements.
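One of the ideas above, reading the maximum tremor amplitude along the first principal axis of a tri-axial acceleration record, can be sketched as follows; the simulated 9 Hz tremor and the peak-to-peak amplitude definition are assumptions for the illustration, not the thesis's exact procedure.

```python
# PCA on tri-axial acceleration: the first principal axis captures the dominant
# tremor direction; amplitude is read off along that axis.
import numpy as np

fs = 200.0
t = np.arange(0, 5.0, 1.0 / fs)
tremor = 0.05 * np.sin(2 * np.pi * 9.0 * t)                 # ~9 Hz physiological tremor
accel = np.column_stack([0.9 * tremor, 0.4 * tremor, 0.1 * tremor]) \
        + 0.01 * np.random.randn(t.size, 3)                 # x, y, z acceleration

centred = accel - accel.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)      # rows of vt = principal axes
principal = centred @ vt[0]                                  # projection on first PC
max_tremor_amplitude = principal.max() - principal.min()     # peak-to-peak along that axis
print(f"peak-to-peak tremor amplitude along principal axis: {max_tremor_amplitude:.3f}")
```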

Relevance: 90.00%

Publisher:

Abstract:

The Alborz Mountain range separates the northern part of Iran from the southern part. It also isolates a narrow coastal strip to the south of the Caspian Sea from the Central Iran plateau. Communication between the south and north until the 1950s was via two roads and one rail link. In 1963 work was completed on a major access road via the Haraz Valley (the most physically hostile area in the region). From the beginning the road was plagued by accidents resulting from unstable slopes on either side of the valley. Heavy casualties persuaded the government to undertake major engineering works to eliminate "black spots" and make the road safe. However, despite substantial and prolonged expenditure the problems were not solved, and casualties increased steadily due to the increase in traffic using the road. Another road was built to bypass the Haraz road and was opened to traffic in 1983. But closure of the Haraz road was still impossible because of the growth of settlements along the route and the need for access to other installations such as the Lar Dam. The aim of this research was to explore the possibility of applying Landsat MSS imagery to locating black spots and instability problems along the road. Landsat data had not previously been applied to highway engineering problems in the study area. Aerial photographs are generally better than satellite images for detailed mapping, but Landsat images are superior for reconnaissance and adequate for mapping at the 1:250,000 scale. The broad overview and lack of distortion in the Landsat imagery make the images ideal for structural interpretation. The results of Landsat digital image analysis showed that certain rock types and structural features can be delineated and mapped. The most unstable areas, comprising steep slopes free of vegetation cover, can be identified using image processing techniques. Structural lineaments revealed by the image analysis led to improved results (delineation of unstable features). Damavand Quaternary volcanics were found to be the dominant rock type along a 40 km stretch of the road. These rock types are inherently unstable and partly responsible for the difficulties along the road. For more detailed geological and morphological interpretation a sample of small subscenes was selected and analysed. A specially developed image analysis package was designed at Aston for use on a non-specialized computing system. Using this package a new and unique method for image classification was developed, allowing accurate delineation of the critical features of the study area.
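The thesis's own classification method is not described here in enough detail to reproduce, so the sketch below shows only a generic baseline for the same kind of task: clustering synthetic 4-band Landsat MSS pixel signatures with k-means to separate bare, vegetation-free slopes from vegetated terrain. The band values are invented.

```python
# Generic unsupervised classification of multispectral pixels (not the thesis's method).
import numpy as np

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal((30, 25, 60, 70), 5, (500, 4)),     # vegetated terrain
                    rng.normal((70, 65, 55, 40), 5, (500, 4))])    # bare, unstable slopes

def kmeans(x, k, iters=20):
    """Plain k-means on pixel feature vectors."""
    centres = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

labels, centres = kmeans(pixels, k=2)
print("cluster sizes:", np.bincount(labels))
print("cluster centres (4 MSS bands):\n", centres.round(1))
```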

Relevance: 90.00%

Publisher:

Abstract:

Continuing advances in digital image capture and storage are resulting in a proliferation of imagery and associated problems of information overload in image domains. In this work we present a framework that supports image management using an interactive approach that captures and reuses task-based contextual information. Our framework models the relationship between images and domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. During image analysis, interactions are captured and a task context is dynamically constructed so that human expertise, proficiency and knowledge can be leveraged to support other users in carrying out similar domain tasks using case-based reasoning techniques. In this article we present our framework for capturing task context and describe how we have implemented the framework as two image retrieval applications in the geo-spatial and medical domains. We present an evaluation that tests the efficiency of our algorithms for retrieving image context information and the effectiveness of the framework for carrying out goal-directed image tasks. © 2010 Springer Science+Business Media, LLC.
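A hypothetical sketch of the retrieval step: each stored task context is reduced to a feature vector of captured interactions, and past cases are ranked by cosine similarity to the current context. The case names, features and vectors are all invented; the framework's actual case representation and case-based reasoning machinery are richer than this.

```python
# Rank stored task-context cases by similarity to the current context.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

case_base = {                          # stored task contexts -> interaction feature vectors
    "flood-mapping-2019": [12, 3, 0, 1, 0],
    "tumour-measurement-07": [2, 0, 9, 0, 6],
    "coastline-annotation": [10, 5, 1, 1, 0],
}
current_context = [11, 4, 0, 2, 0]     # interactions captured for the active task

ranked = sorted(case_base.items(),
                key=lambda kv: cosine(current_context, kv[1]), reverse=True)
for name, vec in ranked:
    print(f"{name:26s} similarity = {cosine(current_context, vec):.3f}")
```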

Relevance: 90.00%

Publisher:

Abstract:

Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment and has many pitfalls, it is used constantly in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for assessing the visual field objectively. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to those found with standard SAP visual field assessment, while others were not very informative and needed further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when it is used for the objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey Field Analyzer (HFA) 24-2 visual field tests and a single mfVEP test in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the 3 groups in the mean signal-to-noise ratio (SNR) (ANOVA, p<0.001 with a 95% CI). The difference between superior and inferior hemifields was statistically significant in 11/11 sectors in the glaucoma patient group (t-test p<0.001), in 5/11 sectors in the glaucoma suspect group (t-test p<0.01), and in only 1/11 sectors in the normal group (t-test p<0.9). The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86%, respectively, and 89% and 79% for glaucoma suspects. DISCUSSION: The results showed that the new analysis protocol was able to confirm already existing field defects detected by standard HFA and was able to differentiate between the 3 study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous field loss.
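The core hemifield comparison can be illustrated with a paired t-test between the SNR values of corresponding superior and inferior sectors for one subject group; the 11 sector-pair values below are invented numbers, not study data.

```python
# Paired comparison of superior vs. inferior hemifield sector SNRs.
import numpy as np
from scipy import stats

superior_snr = np.array([2.9, 2.7, 3.0, 2.8, 2.6, 2.9, 3.1, 2.7, 2.8, 2.9, 3.0])
inferior_snr = np.array([1.9, 1.7, 2.0, 1.8, 1.6, 2.1, 1.9, 1.8, 1.7, 2.0, 1.8])

t_stat, p_value = stats.ttest_rel(superior_snr, inferior_snr)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")
```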

Relevance: 90.00%

Publisher:

Abstract:

Objective: The purpose of this study was to examine the effectiveness of a new analysis method for mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study; normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 visual field tests with the Humphrey Field Analyzer and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P<0.001, with 95% confidence intervals of 2.82-2.89 for the normal group, 2.25-2.29 for the glaucoma suspect group, and 1.67-1.73 for the glaucoma group). The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patient group (t-test, P<0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P<0.01), and statistically significant in only 1/11 pairs in the normal control group (t-test, P<0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma were 97% and 86%, respectively, and 89% and 79% in glaucoma suspects. These results showed that the new analysis protocol was able to confirm existing visual field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry. In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. Conclusion: The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test. © 2013 Mousa et al, publisher and licensee Dove Medical Press Ltd.
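For completeness, sensitivity and specificity figures of the kind quoted above are computed from a 2x2 classification of test outcome against clinical status, as in the sketch below; the counts are hypothetical and merely of a similar magnitude to the reported values, not the study's contingency table.

```python
# Sensitivity and specificity from true/false positive and negative counts.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts: mfVEP result vs. clinical glaucoma status
sens, spec = sensitivity_specificity(tp=35, fn=1, tn=33, fp=5)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```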