916 results for Biomedical Image Processing
Abstract:
Purpose: To examine the use of real-time, generic edge detection, image processing techniques to enhance television viewing for the visually impaired. Design: Prospective, clinical experimental study. Method: One hundred and two sequential visually impaired patients (average age 73.8 ± 14.8 years; 59% female) at a single center optimized a dynamic television image with respect to the edge detection filter (Prewitt, Sobel, or the two combined), the color (red, green, blue, or white), and the intensity (1 to 15 times) of the overlaid edges. They then rated the original television footage against a black-and-white image displaying the detected edges and against the original television image with the detected edges overlaid in the chosen color and at the selected intensity. Footage of news, an advertisement, and end-of-program credits was subjectively assessed in random order. Results: The Prewitt filter was preferred (44%) over the Sobel filter (27%) and a combination of the two (28%). Green and white were equally popular for displaying the detected edges (32% each), with blue (22%) and red (14%) less so. The average preferred edge intensity was 3.5 ± 1.7 times. The image-enhanced television was significantly preferred to the original (P < .001), which in turn was preferred to viewing the detected edges alone (P < .001) for each of the footage clips. Preference was not dependent on the condition causing the visual impairment. Seventy percent were definitely willing to buy a set-top box that could achieve these effects for a reasonable price. Conclusions: Simple generic edge detection image enhancement options can be performed on television in real time and significantly enhance viewing for the visually impaired. © 2007 Elsevier Inc. All rights reserved.
Abstract:
Image content interpretation depends strongly on the efficiency of segmentation. The requirements of image recognition applications call for models of a new type that bridge low-level image processing, in which images are segmented into disjoint regions and features are extracted from each region, and high-level analysis, in which the resulting set of features is used for decision making. Such analysis requires a priori information, measurable region properties, heuristics, and plausible computational inference. To produce reliably true conclusions, simultaneous processing of several partitions is sometimes desirable. In this paper, a set of operations on an obtained image segmentation and a metric on nested partitions are introduced.
Abstract:
This paper presents the implementation of a low-power tracking CMOS image sensor based on biological models of attention. The presented imager allows tracking of up to N salient targets in the field of view. Employing a "smart" image sensor architecture, in which all image processing is implemented on the sensor focal plane, the proposed imager reduces the amount of data transmitted from the sensor array to external processing units and thus provides real-time operation. The imager operation and architecture are based on models taken from biological systems, in which data sensed by many millions of receptors must be transmitted and processed in real time. The imager architecture is optimized to achieve low power dissipation in both acquisition and tracking modes of operation. The tracking concept is presented, the system architecture is shown, and the circuits are described.
Abstract:
During the MEMORIAL project, an international consortium developed a software solution called DDW (Digital Document Workbench). It provides a set of tools to support the digitisation of documents, from scanning up to the retrievable presentation of the content. Attention is focused on machine-typed archival documents. One of the important features is the evaluation of quality at each step of the process. The workbench consists of automatic parts as well as parts that require human activity. A measurable improvement of 20% shows that the approach is successful.
Abstract:
* The work is partially supported by grant No 24-7/05 of the National Academy of Sciences of Ukraine for the support of scientific research by young scientists, "Development of a Desktop Grid system and optimisation of its performance".
Abstract:
The activities of the Institute of Information Technologies in the area of automatic text processing are outlined. Major problems related to different steps of processing are pointed out together with the shortcomings of the existing solutions.
Abstract:
In this paper a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative pixel value algorithms that find and highlight all the edges associated with an image by checking relative pixel values, thus bridging the concepts of digital image processing and artificial intelligence. The image is scanned exhaustively using a windowing technique, and each window is subjected to a set of fuzzy conditions that compare each pixel's value with those of its adjacent pixels to check the pixel magnitude gradient within the window. After the fuzzy conditions are tested, appropriate values are allocated to the pixels in the window under test, producing an image in which all the associated edges are highlighted.
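The window-based fuzzy comparison described above can be sketched as follows; the 3×3 window, the piecewise-linear membership function, and the thresholds `low`/`high` are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def fuzzy_edge_map(img, low=10.0, high=40.0):
    """Assign each interior pixel a fuzzy edge membership based on the
    maximum absolute difference to its 8 neighbours (hypothetical
    thresholds `low`/`high`: 0 below `low`, 1 above `high`, linear
    in between)."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]       # 3x3 window
            diff = np.abs(win - img[y, x]).max()      # strongest gradient
            out[y, x] = np.clip((diff - low) / (high - low), 0.0, 1.0)
    return out
```

On a synthetic step edge, pixels adjacent to the step receive membership 1 while flat regions stay at 0, which is the highlighting behaviour the paper describes.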
Abstract:
A vision system is applied to full-field displacement and deformation measurements in solid mechanics. A speckle-like pattern is first formed on the surface under investigation. To determine the displacement field of one speckle image with respect to a reference speckle image, sub-images, referred to as Zones Of Interest (ZOI), are considered. The field is obtained by matching a ZOI in the reference image with the corresponding ZOI in the moved image. Two image processing techniques are used to implement the matching procedure: a cross-correlation function and the minimum mean square error (MMSE) of the ZOI intensity distribution. The two algorithms are compared, and the influence of ZOI size on measurement accuracy is studied.
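The MMSE variant of the ZOI matching procedure can be sketched as an integer-pixel search over candidate displacements; the function name, ZOI size, and ±`search` range below are illustrative assumptions:

```python
import numpy as np

def match_zoi(ref, moved, y, x, size, search=5):
    """Find the integer-pixel displacement of a size x size ZOI whose
    top-left corner is at (y, x) in `ref`, by minimising the mean square
    error of intensities over a +/- `search` pixel window (a sketch of
    the MMSE criterion)."""
    zoi = ref[y:y + size, x:x + size].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moved[y + dy:y + dy + size, x + dx:x + dx + size].astype(float)
            if cand.shape != zoi.shape:
                continue  # candidate falls outside the image
            mse = np.mean((cand - zoi) ** 2)
            if mse < best:
                best, best_dy, best_dx = mse, dy, dx
    return best_dy, best_dx
```

The cross-correlation variant replaces the MSE criterion with a maximised correlation score; sub-pixel accuracy, which the paper's accuracy study concerns, would require interpolating around the integer optimum.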
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) was presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM), and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
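A minimal sketch of the evaluation setup described above, assuming a simple dose model for the signal-dependent Poisson noise and using a plain moving-average filter as the real-time baseline; the `photons_at_peak` dose parameter is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def psnr(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((clean.astype(float) - noisy.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def add_quantum_noise(img, photons_at_peak=50, rng=None):
    """Degrade `img` (0..255) with signal-dependent Poisson noise;
    `photons_at_peak` sets the simulated dose (illustrative value)."""
    rng = rng or np.random.default_rng()
    scale = photons_at_peak / 255.0
    counts = rng.poisson(img.astype(float) * scale)  # quantum counts
    return np.clip(counts / scale, 0, 255)           # back to gray levels

def mean_filter(img, k=3):
    """k x k moving-average filter, the simple real-time baseline."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

Note that the noise variance of the Poisson model grows with pixel intensity, which is exactly what a gray-level transformation such as white compression alters, and why the filters assuming AWGN are mismatched to raw fluoroscopic data.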
Abstract:
This article presents the principal results of the doctoral thesis “Recognition of neume notation in historical documents” by Lasko Laskov (Institute of Mathematics and Informatics at Bulgarian Academy of Sciences), successfully defended before the Specialized Academic Council for Informatics and Mathematical Modelling on 07 June 2010.
Abstract:
Congenital nystagmus is an ocular-motor disorder that develops in the first few months of life; its pathogenesis is still unknown. Patients affected by congenital nystagmus show continuous, involuntary, rhythmical oscillations of the eyes. By monitoring eye movements, the main features of the nystagmus, such as shape, amplitude, and frequency, can be extracted and analysed. Previous studies highlighted, in some cases, a much slower and smaller oscillation superimposed on the ordinary nystagmus waveform. This sort of baseline oscillation, or slow nystagmus, hinders precise cycle-to-cycle image placement onto the fovea. Such positional variability may reduce the patient's visual acuity. This study aims to analyse eye movement recordings more extensively, including the baseline oscillation, and to investigate possible relationships between these slow oscillations and the nystagmus. Almost 100 eye movement recordings (either infrared-oculographic or electrooculographic), relative to different gaze positions and belonging to 32 congenital nystagmus patients, were analysed. The baseline oscillation was assumed to be sinusoidal; its amplitude and frequency were computed and compared with those of the nystagmus by means of a linear regression analysis. The results showed that baseline oscillations were characterised by an average frequency of 0.36 Hz (SD 0.11 Hz) and an average amplitude of 2.1° (SD 1.6°). A considerable correlation (R² = 0.78) was also found between nystagmus amplitude and baseline oscillation amplitude; the latter, on average, was about one-half of the corresponding nystagmus amplitude. © 2009 Elsevier Ltd. All rights reserved.
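Since the baseline oscillation is assumed sinusoidal, its amplitude and frequency can be estimated from the dominant FFT peak of a recording. The sketch below uses a synthetic signal with illustrative values near those reported; the sampling rate `fs` and the test frequency (chosen to sit on the FFT grid) are assumptions:

```python
import numpy as np

def sine_params(signal, fs):
    """Estimate amplitude and frequency of the dominant sinusoid in
    `signal` from its FFT peak; `fs` is the sampling rate in Hz
    (an assumed acquisition parameter)."""
    n = len(signal)
    spec = np.fft.rfft(signal - signal.mean())   # remove DC offset
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(np.abs(spec))                  # dominant bin
    amp = 2 * np.abs(spec[k]) / n                # single-sided amplitude
    return amp, freqs[k]
```

In practice the nystagmus waveform itself would have to be separated from the baseline first (e.g. by low-pass filtering), since both contribute peaks to the spectrum.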
Abstract:
The objectives of this research are to analyze and develop a modified Principal Component Analysis (PCA) and to develop a two-dimensional PCA with applications in image processing. PCA is a classical multivariate technique whose mathematical treatment is based purely on the eigensystem of positive-definite symmetric matrices. Its main function is to statistically transform a set of correlated variables into a new set of uncorrelated variables over $\mathbb{R}^n$ while retaining most of the variation present in the original variables. The variances of the Principal Components (PCs) obtained from the modified PCA form a correlation matrix of the original variables. The decomposition of this correlation matrix into a diagonal matrix produces a set of orthonormal bases that can be used to linearly transform the given PCs; it is this linear transformation that reproduces the original variables. The two-dimensional PCA can be devised as two successive applications of one-dimensional PCA. It can be shown that, for an $m \times n$ matrix, the PCs obtained from the two-dimensional PCA are the singular values of that matrix. In this research, several applications for image analysis based on PCA are developed, i.e., edge detection, feature extraction, and multi-resolution PCA decomposition and reconstruction.
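A minimal sketch of the classical one-dimensional PCA that the dissertation builds on, via the eigensystem of the covariance matrix; the modified and two-dimensional variants are not reproduced here:

```python
import numpy as np

def pca(X):
    """Classical PCA on an (observations x variables) matrix X:
    eigendecomposition of the sample covariance matrix. Returns the
    component variances (eigenvalues, descending) and the uncorrelated
    PC scores."""
    Xc = X - X.mean(axis=0)                      # center the variables
    cov = Xc.T @ Xc / (len(X) - 1)               # sample covariance
    evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(evals)[::-1]              # sort to descending
    return evals[order], Xc @ evecs[:, order]    # variances, scores
```

The defining property stated in the abstract, that the transform yields uncorrelated variables, corresponds to the covariance matrix of the returned scores being diagonal, with the eigenvalues on its diagonal.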
Abstract:
This dissertation establishes the foundation for a new 3-D visual interface integrating Magnetic Resonance Imaging (MRI) with Diffusion Tensor Imaging (DTI). The need for such an interface is critical for understanding brain dynamics and for providing more accurate diagnosis of key brain dysfunctions in terms of neuronal connectivity. This work involved two research fronts: (1) the development of new image processing and visualization techniques to accurately establish the relational positioning of neuronal fiber tracts and key landmarks in 3-D brain atlases, and (2) the obligation to address the computational requirements so that processing time stays within the practical bounds of clinical settings. The system was evaluated using data from thirty patients and volunteers with the Brain Institute at Miami Children's Hospital. Innovative visualization mechanisms allow, for the first time, white matter fiber tracts to be displayed alongside key anatomical structures within accurately registered 3-D semi-transparent images of the brain. The segmentation algorithm is based on the calculation of mathematically tuned thresholds and region-detection modules. The uniqueness of the algorithm is in its ability to perform fast and accurate segmentation of the ventricles. In contrast to manual selection of the ventricles, which averaged over 12 minutes, the segmentation algorithm averaged less than 10 seconds in its execution. The registration algorithm searches and compares MR with DT images of the same subject, where derived correlation measures quantify the resulting accuracy. Overall, the images were 27% more correlated after registration, while registration, interpolation, and re-slicing of the images, performed simultaneously in all dimensions, took an average of only 1.5 seconds to execute.
This interface was fully embedded into a fiber-tracking software system in order to establish an optimal research environment. This highly integrated 3-D visualization system reached a practical level that makes it ready for clinical deployment.
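Normalised cross-correlation is one common way to quantify how correlated two images are before and after registration; the dissertation's exact correlation measure is not specified, so the following is an illustrative sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized images:
    +1 for identical (up to affine intensity change), -1 for inverted,
    near 0 for unrelated content."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```

A "27% more correlated" result would then correspond to comparing such a score computed between the MR and DT volumes before and after the registration transform is applied.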
Abstract:
According to the American Podiatric Medical Association, about 15 percent of patients with diabetes develop a diabetic foot ulcer. Furthermore, foot ulceration leads to 85 percent of diabetes-related amputations. Foot ulcers are caused by a combination of factors, such as lack of feeling in the foot, poor circulation, foot deformities, and the duration of the diabetes. To date, wounds are inspected visually to monitor healing, without any objective imaging approach to look beneath the wound's surface. Herein, a non-contact, portable handheld optical device was developed at the Optical Imaging Laboratory as an objective approach to monitor wound healing in foot ulcers. This near-infrared optical technology is non-radiative, safe, and fast in imaging large wounds on patients. The FIU IRB-approved study will involve subjects who have been diagnosed with diabetes by a physician and who have developed foot ulcers. Currently, in-vivo imaging studies are carried out every week on diabetic patients with foot ulcers at two clinical sites in Miami. Near-infrared images of the wound are captured on subjects every week, and the data is processed using custom-developed Matlab-based image processing tools. The optical contrast of the wound against its peripheries and the wound size are analyzed and compared between the NIR and white light images during the weekly systematic imaging of wound healing.
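The wound-to-periphery contrast mentioned above could, for instance, be computed as a Weber contrast of mean region intensities; this is an illustrative measure, not the laboratory's actual Matlab analysis:

```python
import numpy as np

def weber_contrast(wound, periphery):
    """Contrast of the wound region against its periphery, computed as
    Weber contrast of mean intensities: (I_wound - I_periphery) /
    I_periphery. Negative when the wound is darker than its
    surroundings (illustrative measure)."""
    return (np.mean(wound) - np.mean(periphery)) / np.mean(periphery)
```

Tracking such a score week by week, in both NIR and white-light images, would give one simple objective trend for the healing comparison the study describes.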
Abstract:
Efficient and effective approaches to dealing with the vast amount of visual information available nowadays are highly sought after. This is particularly the case for image collections, both personal and commercial. Due to the magnitude of these ever-expanding image repositories, annotation of all images is infeasible, and search in such an image collection therefore becomes inherently difficult. Although content-based image retrieval techniques have shown much potential, such approaches also suffer from various problems that make them difficult to adopt in practice. In this paper, we follow a different approach, namely that of browsing image databases for image retrieval. In our Honeycomb Image Browser, large image databases are visualised on a hexagonal lattice with image thumbnails occupying the hexagons. Arranged in a space-filling manner, visually similar images are located close together, enabling large image datasets to be navigated in a hierarchical manner. Various browsing tools are incorporated to allow for interactive exploration of the database. Experimental results confirm that our approach affords efficient image retrieval. © 2010 IEEE.
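The space-filling hexagonal layout can be sketched by mapping lattice coordinates to thumbnail centres; the pointy-top geometry and odd-row offset convention below are assumptions, as the abstract does not specify the browser's exact geometry:

```python
import math

def hex_centre(col, row, r):
    """Pixel centre of the hexagon at (col, row) on a pointy-top
    hexagonal lattice with circumradius r, using offset coordinates:
    odd rows are shifted right by half a hexagon so the cells tile
    the plane with no gaps."""
    w = math.sqrt(3) * r                   # horizontal spacing
    x = col * w + (row % 2) * w / 2        # odd-row offset
    y = row * 1.5 * r                      # vertical spacing
    return x, y
```

Placing visually similar thumbnails at neighbouring (col, row) cells then yields the "similar images close together" arrangement the browser relies on; each hexagon has six equidistant neighbours, one more than a square grid's four.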