954 results for Image processing techniques


Relevance: 90.00%

Publisher:

Abstract:

Following the miniaturisation of cameras and their integration into mobile devices such as smartphones, combined with the intensive use of these devices, it is likely that in the near future the majority of digital images will be captured with them rather than with dedicated cameras. Since many users decide to keep their photos on their mobile devices, effective methods for managing these image collections are required. Common image browsers prove to be of only limited use, especially for large image sets [1].

Relevance: 90.00%

Publisher:

Abstract:

Photonic signal processing is used to implement common-mode signal cancellation across a very wide bandwidth, utilising phase modulation of radio frequency (RF) signals onto a narrow-linewidth laser carrier. RF spectra were observed by narrow-band, tunable optical filtering with a scanning Fabry-Perot etalon. Functions conventionally performed with digital signal processing techniques in the electronic domain have thus been replaced by analog techniques in the photonic domain. The technique demonstrated simultaneous cancellation of signals across a bandwidth of 1400 MHz, limited only by the free spectral range of the etalon. © 2013 David M. Benton.

Relevance: 90.00%

Publisher:

Abstract:

Purpose: To compare graticule and image capture assessment of the lower tear film meniscus height (TMH). Methods: Lower tear film meniscus height measures were taken in the right eyes of 55 healthy subjects at two study visits separated by 6 months. Two images of the TMH were captured in each subject with a digital camera attached to a slit-lamp biomicroscope and stored in a computer for future analysis. Using the best of two images, the TMH was quantified by manually drawing a line across the tear meniscus profile, following which the TMH was measured in pixels and converted into millimetres, where one pixel corresponded to 0.0018 mm. Additionally, graticule measures were carried out by direct observation using a calibrated graticule inserted into the same slit-lamp eyepiece. The graticule was calibrated so that actual readings, in 0.03 mm increments, could be made with a 40× ocular. Results: Smaller values of TMH were found in this study compared to previous studies. TMH, as measured with the image capture technique (0.13 ± 0.04 mm), was significantly greater (by approximately 0.01 ± 0.05 mm, p = 0.03) than that measured with the graticule technique (0.12 ± 0.05 mm). No bias was found across the range sampled. Repeatability of the TMH measurements taken at two study visits showed that graticule measures were significantly different (0.02 ± 0.05 mm, p = 0.01) and highly correlated (r = 0.52, p < 0.0001), whereas image capture measures were similar (0.01 ± 0.03 mm, p = 0.16), and also highly correlated (r = 0.56, p < 0.0001). Conclusions: Although graticule and image analysis techniques showed similar mean values for TMH, the image capture technique was more repeatable than the graticule technique and this can be attributed to the higher measurement resolution of the image capture (i.e. 0.0018 mm) compared to the graticule technique (i.e. 0.03 mm). © 2006 British Contact Lens Association.
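A minimal sketch of the pixel-to-millimetre conversion underlying the image capture measurement; only the 0.0018 mm per pixel scale comes from the abstract, while the function name and the example pixel count are illustrative assumptions.

```python
# Convert a tear meniscus height measured in image pixels to millimetres.
# Only the scale factor (0.0018 mm per pixel) comes from the abstract; the
# example pixel count below is an illustrative assumption.

MM_PER_PIXEL = 0.0018  # calibration of the slit-lamp camera set-up

def tmh_pixels_to_mm(height_px: float, mm_per_pixel: float = MM_PER_PIXEL) -> float:
    """Convert a meniscus height drawn on the image (in pixels) to millimetres."""
    return height_px * mm_per_pixel

# Example: a manually drawn meniscus profile spanning 72 pixels
print(f"TMH = {tmh_pixels_to_mm(72):.2f} mm")  # -> TMH = 0.13 mm
```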

Relevance: 90.00%

Publisher:

Abstract:

Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single-chip cameras and the JVC:KYF58 (767 × 569) three-chip camera. The images were stored in TIFF format and further copies were created with reduced resolution or compression. The images were then ranked for clarity on a 15-inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. A theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25× magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
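A minimal sketch, using the Pillow imaging library, of how reduced-resolution and JPEG-compressed copies of a master image might be generated for this kind of comparison; the file names are hypothetical, and only the target resolutions and the 50% JPEG quality level are taken from the abstract.

```python
# Create reduced-resolution and JPEG-compressed copies of an anterior eye
# image for a storage-quality comparison. File names are hypothetical; the
# target resolutions and the 50% JPEG quality level come from the abstract.
from PIL import Image

source = Image.open("anterior_eye_master.tif")  # hypothetical master TIFF

# Downsampled copies at the lower camera resolutions compared in the study
for width, height in [(1280, 811), (767, 569)]:
    reduced = source.resize((width, height), Image.LANCZOS)
    reduced.save(f"anterior_eye_{width}x{height}.tif")

# Compressed copy at 50% JPEG quality, one of the levels ranked by observers
source.convert("RGB").save("anterior_eye_q50.jpg", "JPEG", quality=50)
```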

Relevance: 90.00%

Publisher:

Abstract:

Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose-designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic grades of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r² = 0.35-0.99), palpebral hyperaemia (r² = 0.71-0.99), palpebral roughness (r² = 0.30-0.94), and corneal staining (r² = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Grading of palpebral hyperaemia and staining is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
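A minimal sketch of the two kinds of objective measures described above: edge detection with a 3 × 3 kernel and colour-plane extraction over a selected region of interest. The Laplacian kernel and the relative-redness measure are illustrative choices, not necessarily the exact operators used in the study.

```python
# Two simple objective measures over a region of interest (ROI):
# (1) edge strength from a 3x3 Laplacian kernel, (2) colour extraction as
# the red plane's share of total intensity. Illustrative choices only.
import numpy as np
from scipy.ndimage import convolve

def edge_strength(gray_roi: np.ndarray) -> float:
    """Mean magnitude of a 3x3 Laplacian response over a grayscale ROI."""
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)
    return float(np.abs(convolve(gray_roi.astype(float), kernel)).mean())

def relative_redness(rgb_roi: np.ndarray) -> float:
    """Red plane as a fraction of total intensity over an RGB ROI."""
    rgb = rgb_roi.astype(float)
    total = rgb.sum(axis=2) + 1e-9              # avoid division by zero
    return float((rgb[..., 0] / total).mean())
```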

Relevance: 90.00%

Publisher:

Abstract:

Knowledge of insulation debris generation and transport is gaining importance in reactor safety research for PWR and BWR. The insulation debris released near the break consists of a mixture of very different fibres and particles in terms of size, shape, consistency and other properties. Some fraction of the released insulation debris will be transported into the reactor sump, where it may affect emergency core cooling. Experiments are performed in which original samples of mineral wool insulation material are blasted by steam under the original thermal-hydraulic break conditions of a BWR. The resulting fragments are used as initial specimens for further experiments at acrylic glass test facilities. The quasi one-dimensional sinking behaviour of the insulation fragments is investigated in a water column using optical high-speed video techniques and image processing methods. Drag properties are derived from the measured sinking velocities of the fibres and the observed geometric parameters for adequate CFD modelling. In the test rig "Ring line-II" the influence of the insulation material on the head loss is investigated for debris-loaded strainers. Correlations from filter bed theory are adapted to the experimental results and are used to model the flow resistance depending on particle load, filter bed porosity and parameters of the coolant flow. This concept also enables the simulation of a partially blocked strainer with CFD codes. Further results from separate-effect and integral experiments, and the application and validation of the CFD models for integral test facilities and original containment sump conditions, are expected from the ongoing work.
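A minimal sketch of how a drag coefficient could be derived from a measured terminal sinking velocity via the steady-state force balance (drag equals buoyant weight). It assumes a sphere-equivalent particle, which the irregular fibre agglomerates are not, and all numerical values are illustrative assumptions rather than data from the experiments.

```python
# Drag coefficient of a settling particle from its measured terminal sinking
# velocity, using the steady-state force balance drag = buoyant weight for a
# sphere-equivalent diameter. Illustrative values only, not experimental data.
G = 9.81  # gravitational acceleration, m/s^2

def drag_coefficient(v_terminal: float, diameter: float,
                     rho_particle: float, rho_fluid: float) -> float:
    """C_d = 4 g d (rho_p - rho_f) / (3 rho_f v_t^2) for a sphere-equivalent particle."""
    return (4.0 * G * diameter * (rho_particle - rho_fluid)
            / (3.0 * rho_fluid * v_terminal ** 2))

# Example: a 2 mm water-soaked fibre agglomerate sinking at 3 cm/s
print(drag_coefficient(v_terminal=0.03, diameter=0.002,
                       rho_particle=1100.0, rho_fluid=998.0))  # ~3.0
```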

Relevance: 90.00%

Publisher:

Abstract:

Image content interpretation depends strongly on the efficiency of segmentation. The requirements of image recognition applications lead to the necessity of creating models of a new type, which provide adaptation between low-level image processing, where images are segmented into disjoint regions and features are extracted from each region, and high-level analysis, which uses the obtained set of features for making decisions. Such analysis requires a priori information, measurable region properties, heuristics, and plausibility of computational inference. Sometimes, to produce a reliable conclusion, simultaneous processing of several partitions is desired. In this paper a set of operations on image segmentations and a metric on nested partitions are introduced.
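As a generic illustration of comparing two partitions of the same image, the sketch below estimates a Rand-style distance: the fraction of pixel pairs on which two label maps disagree about co-membership. This is a standard measure, not the nested-partitions metric introduced in the paper.

```python
# Estimate a Rand-style distance between two segmentations (label maps) of
# the same image: the fraction of sampled pixel pairs on which the two
# partitions disagree about whether the pixels lie in the same region.
# Generic illustration, not the paper's nested-partitions metric.
import numpy as np

def partition_distance(labels_a: np.ndarray, labels_b: np.ndarray,
                       n_pairs: int = 20000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    a, b = labels_a.ravel(), labels_b.ravel()
    i = rng.integers(0, a.size, n_pairs)
    j = rng.integers(0, a.size, n_pairs)
    same_a = a[i] == a[j]                 # co-membership in partition A
    same_b = b[i] == b[j]                 # co-membership in partition B
    return float(np.mean(same_a != same_b))
```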

Relevance: 90.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use to characterise several properties of an OCT system (resolution, distortion, sensitivity decay, scan linearity) was demonstrated.

Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing can take several minutes to a few hours for acquired data, so data processing is a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs); however, graphics processing unit (GPU) based data processing methods have recently been developed to minimise processing and rendering time. These include standard processing methods, i.e. a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and fine tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using these phantoms.

The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several pieces of research that are not only relevant to OCT but have broader importance: for example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
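A minimal sketch of the standard FD-OCT processing chain the abstract refers to, turning one raw spectral frame into A-scans (background subtraction, spectral windowing, inverse FFT, log magnitude). Steps such as wavelength-to-wavenumber resampling and dispersion compensation are omitted, and this is a generic NumPy illustration rather than the author's GPU implementation; on a GPU the same array operations could be run with a drop-in library such as CuPy.

```python
# Standard FD-OCT processing sketch: raw spectral interference frame ->
# A-scans. Omits k-space resampling and dispersion compensation; generic
# illustration, not the thesis' GPU implementation.
import numpy as np

def frame_to_ascans(raw_frame: np.ndarray) -> np.ndarray:
    """raw_frame: (n_alines, n_pixels) spectra from the line-scan camera."""
    background = raw_frame.mean(axis=0)            # fixed-pattern / DC background
    fringes = raw_frame - background               # keep the interference term
    window = np.hanning(raw_frame.shape[1])        # suppress spectral side lobes
    depth_profiles = np.fft.ifft(fringes * window, axis=1)
    return 20.0 * np.log10(np.abs(depth_profiles) + 1e-12)   # A-scans in dB
```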

Relevance: 90.00%

Publisher:

Abstract:

This paper presents the implementation of a low-power tracking CMOS image sensor based on biological models of attention. The presented imager allows tracking of up to N salient targets in the field of view. Employing a "smart" image sensor architecture, in which all image processing is implemented on the sensor focal plane, the proposed imager reduces the amount of data transmitted from the sensor array to external processing units and thus provides real-time operation. The imager operation and architecture are based on models taken from biological systems, in which data sensed by many millions of receptors must be transmitted and processed in real time. The imager architecture is optimized to achieve low power dissipation in both acquisition and tracking modes of operation. The tracking concept is presented, the system architecture is shown and the circuit implementation is discussed.
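A purely software illustration of the attention principle behind such an imager: compute a crude saliency map, select the N strongest, mutually separated maxima, and read out only small windows around them. The actual chip performs the equivalent selection in analog circuitry on the focal plane, so this sketch is a behavioural analogy with assumed parameters, not the sensor's algorithm.

```python
# Behavioural analogy of attention-based target selection: pick the N most
# salient, mutually separated locations in a frame and track only windows
# around them. Saliency measure, window size and N are assumptions.
import numpy as np

def select_salient_windows(frame: np.ndarray, n_targets: int = 4,
                           win: int = 15) -> list[tuple[int, int]]:
    saliency = np.abs(frame.astype(float) - frame.mean())  # crude saliency map
    centres = []
    for _ in range(n_targets):
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        centres.append((int(y), int(x)))
        # suppress this neighbourhood so the next maximum is a different target
        saliency[max(0, y - win):y + win, max(0, x - win):x + win] = 0.0
    return centres
```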

Relevance: 90.00%

Publisher:

Abstract:

The ever-increasing demand for information transmission capacity has been met by technological advances in telecommunication systems, such as the implementation of coherent optical systems, advanced multilevel multidimensional modulation formats, fast signal processing, and research into new physical media for signal transmission (e.g. a variety of new types of optical fibers). Since the increase in signal-to-noise ratio makes fiber communication channels essentially nonlinear (due, for example, to the Kerr effect), the problem of estimating the Shannon capacity of nonlinear communication channels is not only conceptually interesting but also practically important. Here we discuss various nonlinear communication channels and review the potential of different optical signal coding, transmission and processing techniques to improve the fiber-optic Shannon capacity and to increase the system reach.
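An illustration of why the Kerr nonlinearity makes capacity estimation for fiber channels non-trivial: in Gaussian-noise style models the nonlinear interference grows roughly as the cube of the launch power, so the effective SNR, and the Shannon estimate log2(1 + SNR), peaks at a finite power instead of growing without bound. The coefficients below are illustrative assumptions, not values from the paper.

```python
# Sketch of the nonlinear capacity limit with a Gaussian-noise style model:
# SNR = P / (P_ASE + eta * P^3). All coefficients are assumed for illustration.
import numpy as np

p_ase = 1e-6   # accumulated amplifier noise power (W), assumed
eta = 1e3      # nonlinear interference coefficient (1/W^2), assumed

launch_powers = np.logspace(-5, -1, 200)                 # W
snr = launch_powers / (p_ase + eta * launch_powers**3)   # effective SNR
spectral_eff = np.log2(1.0 + snr)                        # bit/s/Hz per polarisation

best = np.argmax(spectral_eff)
print(f"peak {spectral_eff[best]:.1f} bit/s/Hz at {1e3 * launch_powers[best]:.2f} mW launch power")
```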

Relevance: 90.00%

Publisher:

Abstract:

During the MEMORIAL project an international consortium developed a software solution called DDW (Digital Document Workbench). It provides a set of tools to support the digitisation of documents, from scanning up to the retrievable presentation of the content, with attention focused on machine-typed archival documents. One of the important features is the evaluation of quality at each step of the process. The workbench consists of automatic parts as well as parts which require human activity. A measurable improvement of 20% shows that the approach is successful.

Relevance: 90.00%

Publisher:

Abstract:

* The work is partially supported by a grant of the National Academy of Sciences of Ukraine for the support of scientific research by young scientists, No 24-7/05, "Development of a Desktop Grid system and optimisation of its performance" (original title: "Розробка Desktop Grid-системи і оптимізація її продуктивності").

Relevance: 90.00%

Publisher:

Abstract:

The activities of the Institute of Information Technologies in the area of automatic text processing are outlined. Major problems related to different steps of processing are pointed out together with the shortcomings of the existing solutions.

Relevance: 90.00%

Publisher:

Abstract:

In this paper a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative pixel value algorithms that find and highlight all the edges associated with an image by checking relative pixel values, thus bridging the concepts of digital image processing and artificial intelligence. The image is scanned exhaustively using a windowing technique, and each window is subjected to a set of fuzzy conditions that compare the value of a pixel with those of its neighbours to check the intensity gradient within the window. After the fuzzy conditions have been tested, appropriate values are assigned to the pixels in the window under test, producing an image in which all the associated edges are highlighted.
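A generic sketch of relative-pixel-value edge detection on a 3 × 3 window: a pixel is marked as an edge when its difference to any neighbour exceeds a threshold. The paper's fuzzy membership functions and rule set are not reproduced here; this crisp threshold merely stands in for them.

```python
# Rule-based edge detection on a 3x3 window: mark a pixel as an edge when
# the absolute difference to any neighbour exceeds a threshold. A crisp
# stand-in for the paper's fuzzy rule set, with an assumed threshold.
import numpy as np

def relative_pixel_edges(gray: np.ndarray, threshold: float = 20.0) -> np.ndarray:
    """Return a binary edge map (255 = edge) from an 8-bit grayscale image."""
    g = gray.astype(float)
    edges = np.zeros_like(g, dtype=np.uint8)
    h, w = g.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = g[y - 1:y + 2, x - 1:x + 2]
            if np.max(np.abs(window - g[y, x])) > threshold:
                edges[y, x] = 255
    return edges
```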

Relevance: 90.00%

Publisher:

Abstract:

This article presents the principal results of the doctoral thesis “Recognition of neume notation in historical documents” by Lasko Laskov (Institute of Mathematics and Informatics at Bulgarian Academy of Sciences), successfully defended before the Specialized Academic Council for Informatics and Mathematical Modelling on 07 June 2010.