239 results for Optical images.


Relevance:

60.00%

Publisher:

Abstract:

The electrochemical reduction of TCNQ to TCNQ•- in acetonitrile in the presence of [Cu(MeCN)4]+ has been undertaken at boron-doped diamond (BDD) and indium tin oxide (ITO) electrodes. The nucleation and growth process at BDD is similar to that reported previously at metal electrodes. At an ITO electrode, the electrocrystallization of more strongly adhered, larger, branched, needle-shaped phase I CuTCNQ crystals is detected under potential step conditions and also when the potential is cycled over the potential range of 0.7 to −0.1 V versus Ag/AgCl (3 M KCl). Video imaging can be used at optically transparent ITO electrodes to monitor the growth stage of the very large branched crystals formed during the course of electrochemical experiments. Both in situ video imaging and ex situ X-ray diffraction and scanning electron microscopy (SEM) data are consistent with the nucleation of CuTCNQ taking place at a discrete number of preferred sites on the ITO surface. At BDD electrodes, ex situ optical images show that the preferential growth of CuTCNQ occurs at the more highly conducting boron-rich areas of the electrode, within which there are preferred sites for CuTCNQ formation.

Relevance:

30.00%

Publisher:

Abstract:

Object tracking systems require accurate segmentation of the objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving the performance of the optical flow around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target within the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use for object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
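The combination described above, a motion mask restricting where flow is computed, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the frame-differencing mask, the basic Lucas-Kanade solver, and all thresholds are assumptions.

```python
import numpy as np

def motion_mask(prev, curr, thresh=10):
    """Motion segmentation via simple frame differencing (assumed stand-in)."""
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

def lucas_kanade_at(prev, curr, mask, win=2):
    """Lucas-Kanade flow, evaluated only at pixels flagged by the motion mask."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Ix = np.gradient(prev, axis=1)   # spatial image gradients
    Iy = np.gradient(prev, axis=0)
    It = curr - prev                  # temporal gradient
    flow = {}
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        if y < win or x < win or y >= prev.shape[0] - win or x >= prev.shape[1] - win:
            continue
        ix = Ix[y - win:y + win + 1, x - win:x + win + 1].ravel()
        iy = Iy[y - win:y + win + 1, x - win:x + win + 1].ravel()
        it = It[y - win:y + win + 1, x - win:x + win + 1].ravel()
        A = np.stack([ix, iy], axis=1)
        ATA = A.T @ A
        if np.linalg.cond(ATA) > 1e6:  # skip textureless / aperture-limited windows
            continue
        u, v = np.linalg.solve(ATA, -A.T @ it)
        flow[(y, x)] = (u, v)
    return flow

# Synthetic example: a bright square moves 1 pixel to the right.
prev = np.zeros((32, 32))
prev[10:20, 10:20] = 255
curr = np.zeros((32, 32))
curr[10:20, 11:21] = 255
mask = motion_mask(prev, curr)
flow = lucas_kanade_at(prev, curr, mask)
```

Restricting the solver to masked pixels mirrors the paper's idea of letting segmentation inform the flow computation; a real tracker would add pyramids and vector tracking on top.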

Relevance:

30.00%

Publisher:

Abstract:

Visual recording devices such as video cameras, CCTV, or webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard video cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. A background subtraction algorithm extracts motion pixels from an image sequence, the pixels are then grouped into regions to represent moving objects, and finally the regions are identified as particular objects using classifiers. For evaluation, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems for work zone safety or productivity.
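The pipeline described (background subtraction, region grouping, classification) can be illustrated with a minimal sketch. The differencing threshold, flood-fill grouping, and aspect-ratio "classifier" below are hypothetical stand-ins for the trained classifiers the article refers to.

```python
import numpy as np
from collections import deque

def foreground_mask(background, frame, thresh=25):
    """Background subtraction: pixels that differ from the background model."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def connected_regions(mask, min_area=5):
    """Group motion pixels into regions via 4-connected flood fill."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q, pix = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(pix) >= min_area:
                    ys, xs = zip(*pix)
                    regions.append({"bbox": (min(ys), min(xs), max(ys), max(xs)),
                                    "area": len(pix)})
    return regions

def classify(region):
    """Toy shape rule (placeholder for a trained classifier):
    tall-and-narrow regions -> 'worker', wide regions -> 'equipment'."""
    y0, x0, y1, x1 = region["bbox"]
    aspect = (y1 - y0 + 1) / (x1 - x0 + 1)
    return "worker" if aspect > 1.5 else "equipment"

bg = np.zeros((40, 40), dtype=np.uint8)
frame = bg.copy()
frame[5:20, 10:14] = 200   # tall, narrow blob (a "worker")
frame[25:31, 5:25] = 180   # wide blob (an "equipment" item)
regions = connected_regions(foreground_mask(bg, frame))
labels = [classify(r) for r in regions]
```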

Relevance:

30.00%

Publisher:

Abstract:

Signal-degrading speckle is one factor that can reduce the quality of optical coherence tomography images. We demonstrate the use of a hierarchical model-based motion estimation processing scheme, based on an affine motion model, to reduce speckle in optical coherence tomography imaging by registering and averaging multiple B-scans. The proposed technique is evaluated against other methods available in the literature. The results from a set of retinal images show the benefit of the proposed technique, which provides an improvement in signal-to-noise ratio equal to the square root of the number of averaged images, leading to clearer visual information in the averaged image. The benefits of the proposed technique are also explored in the case of ocular anterior segment imaging.
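The square-root-of-N signal-to-noise gain from averaging N registered B-scans can be checked numerically. This synthetic demo assumes perfect registration and additive Gaussian noise, a simplification of real OCT speckle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" B-scan and N perfectly registered noisy acquisitions.
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
N, sigma = 16, 0.5
scans = [clean + rng.normal(0.0, sigma, clean.shape) for _ in range(N)]

avg = np.mean(scans, axis=0)              # average of the registered B-scans

noise_single = np.std(scans[0] - clean)   # ~ sigma
noise_avg = np.std(avg - clean)           # ~ sigma / sqrt(N)
gain = noise_single / noise_avg           # expected ~ sqrt(16) = 4
```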

Relevance:

30.00%

Publisher:

Abstract:

The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye’s normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time-consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate that the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data.
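Graph-search boundary segmentation of this kind is commonly formulated as a shortest path across the B-scan. The sketch below uses Dijkstra on a simple inverse-gradient cost image; it is not the paper's exact edge filter or dual brightness probability gradient, just the generic graph-search idea.

```python
import heapq
import numpy as np

def graph_search_boundary(cost):
    """Dijkstra shortest path from the left to the right edge of a cost
    image; each step moves one column right, to the same or an adjacent row."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    pq = [(cost[y, 0], y, 0) for y in range(h)]
    for c, y, x in pq:
        dist[y, 0] = c
    heapq.heapify(pq)
    while pq:
        d, y, x = heapq.heappop(pq)
        if d > dist[y, x]:
            continue          # stale queue entry
        if x == w - 1:
            break             # cheapest path reached the right edge
        for dy in (-1, 0, 1):
            ny = y + dy
            if 0 <= ny < h:
                nd = d + cost[ny, x + 1]
                if nd < dist[ny, x + 1]:
                    dist[ny, x + 1] = nd
                    prev[(ny, x + 1)] = (y, x)
                    heapq.heappush(pq, (nd, ny, x + 1))
    # Trace back the minimum-cost path from the best last-column node.
    y = int(np.argmin(dist[:, w - 1]))
    path, node = [], (y, w - 1)
    while node in prev:
        path.append(node)
        node = prev[node]
    path.append(node)
    return [r for r, _ in reversed(path)]  # boundary row per column

# Synthetic B-scan: a bright layer starts at row 12 -> strong gradient there.
img = np.zeros((30, 40))
img[12:, :] = 1.0
grad = np.abs(np.diff(img, axis=0, prepend=img[:1]))
cost = 1.0 - grad / (grad.max() + 1e-9)   # low cost where the edge is strong
boundary = graph_search_boundary(cost)
```

The pre-processing and penalty terms described in the abstract would be folded into how `cost` is built; the search itself stays the same.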

Relevance:

30.00%

Publisher:

Abstract:

The advent of very high resolution (VHR) optical satellites capable of producing stereo images led to a new era in extracting digital elevation models, which commenced with the launch of IKONOS. The special capabilities of VHR optical satellites, together with the significant economic profit, stimulated other countries and companies to launch their own constellations, such as EROS-A1 and EROS-B1 from the cooperation between Israel and ImageSat. QuickBird, WorldView-1 and WorldView-2 were launched by DigitalGlobe, while ALOS and GeoEye-1 were offered by Japan and GeoEye, respectively. In addition to the aforementioned satellites, India and South Korea initiated their own constellations by launching CartoSat-1 and KOMPSAT-2, respectively. The availability of all these satellites creates a large market of stereo images for the extraction of digital elevation models and related applications such as producing orthorectified images and updating maps. Therefore, there is a need for a comprehensive comparison to help scientific and commercial clients choose appropriate satellite images and methods of generating digital elevation models to obtain optimum results. This paper thus gives a review of the specifications of VHR optical satellites, then discusses the automatic generation of digital elevation models. Finally, an overview of studies and corresponding results is reported.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: We term the visual field position from which the pupil appears most nearly circular the pupillary circular axis (PCAx). The aim was to determine and compare the horizontal and vertical coordinates of the PCAx and the optical axis from pupil shape and refraction information for only the horizontal meridian of the visual field. Method: The PCAx was determined from the changes with visual field angle in the ellipticity and orientation of pupil images out to ±90° from fixation along the horizontal meridian for the right eyes of 30 people. This axis was compared with the optical axis determined from the changes in the astigmatic components of the refractions for field angles out to ±35° in the same meridian. Results: The mean estimated horizontal and vertical field coordinates of the PCAx were (−5.3 ± 1.9°, −3.2 ± 1.5°), compared with (−4.8 ± 5.1°, −1.5 ± 3.4°) for the optical axis. The vertical coordinates of the two axes were just significantly different (p = 0.03), but there was no significant correlation between them. Only the horizontal coordinate of the PCAx was significantly related to the refraction in the group. Conclusion: On average, the PCAx is displaced from the line-of-sight by about the same angle as the optical axis, but there is more inter-subject variation in the position of the optical axis. When modelling the optical performance of the eye, it appears reasonable to assume that the pupil is circular when viewed along the line-of-sight.
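The idea of locating the PCAx as the field angle at which the pupil appears most circular can be illustrated with a simple cosine foreshortening model. The model, the noise-free data, and the grid-search fit are assumptions for illustration, not the paper's measurement procedure.

```python
import numpy as np

def apparent_ellipticity(field_deg, pcax_deg):
    """Pupil minor/major axis ratio seen from field angle field_deg, under a
    simple cosine foreshortening model about the circular axis pcax_deg."""
    return np.abs(np.cos(np.radians(field_deg - pcax_deg)))

true_pcax = -5.3                        # hypothetical PCAx position (degrees)
angles = np.arange(-90.0, 91.0, 5.0)    # horizontal field sampling
ellip = apparent_ellipticity(angles, true_pcax)

# Recover the PCAx as the axis position whose model curve best fits the
# "measured" ellipticities (fine grid search over candidate positions).
candidates = np.arange(-10.0, 10.0, 0.1)
errs = [np.sum((apparent_ellipticity(angles, c) - ellip) ** 2) for c in candidates]
est_pcax = float(candidates[int(np.argmin(errs))])
```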

Relevance:

30.00%

Publisher:

Abstract:

Layered graphitic materials exhibit intriguing new electronic structures, and the search for new types of two-dimensional (2D) monolayers is of importance for the fabrication of next-generation miniature electronic and optoelectronic devices. By means of density functional theory (DFT) computations, we investigated in detail the structural, electronic, mechanical and optical properties of the single-layer bismuth iodide (BiI3) nanosheet. Monolayer BiI3 is dynamically stable, as confirmed by the computed phonon spectrum. The cleavage energy (Ecl) and interlayer coupling strength of bulk BiI3 are comparable to the experimental values of graphite, which indicates that the exfoliation of BiI3 is highly feasible. The obtained stress-strain curve shows that the BiI3 nanosheet is a brittle material with a breaking strain of 13%. The BiI3 monolayer has an indirect band gap of 1.57 eV with spin-orbit coupling (SOC), indicating its potential application for solar cells. Furthermore, the band gap of the BiI3 monolayer can be modulated by biaxial strain. Most interestingly, interfacing electrically active graphene with the monolayer BiI3 nanosheet leads to enhanced light absorption compared to that in the pure monolayer BiI3 nanosheet, highlighting its great potential for applications in photonics and photovoltaic solar cells.

Relevance:

30.00%

Publisher:

Abstract:

Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye’s anatomical and physiological characteristics, and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° × 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, and that alternative metrics which are not biased by these effects should be considered. This study demonstrates the need to consider alternative methods to calculate tissue thickness in order to avoid measurement error due to image tilt and curvature.
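The tilt bias in axial thickness follows from simple geometry: a layer tilted by θ relative to the A-scan direction appears 1/cos(θ) thicker along the axial direction. A minimal numerical check, with hypothetical values:

```python
import numpy as np

def axial_thickness(true_thickness, tilt_deg):
    """Thickness measured along the A-scan (axial) direction when the
    tissue is tilted by tilt_deg relative to the scan axis."""
    return true_thickness / np.cos(np.radians(tilt_deg))

def perpendicular_thickness(axial, tilt_deg):
    """Tilt-corrected metric: thickness measured normal to the tissue."""
    return axial * np.cos(np.radians(tilt_deg))

true_t = 300.0                                   # hypothetical thickness, micrometres
ax20 = axial_thickness(true_t, 20.0)             # inflated by 1/cos(20°), about 6.4%
corrected = perpendicular_thickness(ax20, 20.0)  # recovers the true thickness
```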

Relevance:

30.00%

Publisher:

Abstract:

In many parts of the world, uncontrolled fires in sparsely populated areas are a major concern, as they can quickly grow into large and destructive conflagrations in short time spans. Detecting these fires has traditionally been a job for trained humans on the ground or in the air. In many cases, these manned solutions are simply not able to survey the amount of area necessary to maintain sufficient vigilance and coverage. This paper investigates the use of unmanned aerial systems (UAS) for automated wildfire detection. The proposed system uses low-cost, consumer-grade electronics and sensors combined with various airframes to create a system suitable for the automatic detection of wildfires. The system employs automatic image processing techniques to analyze captured images and autonomously detect fire-related features such as fire lines, burnt regions, and flammable material. The image recognition algorithm is designed to cope with environmental occlusions such as shadows, smoke and obstructions. Once the fire is identified and classified, it is used to initialize a spatial/temporal fire simulation. This simulation is based on occupancy maps whose fidelity can be varied to include stochastic elements, various types of vegetation, weather conditions, and unique terrain. The simulations can be used to predict the effects of optimized firefighting methods to prevent the future propagation of the fires, and the system can greatly reduce the time to detection of wildfires, thereby minimizing the ensuing damage. This paper also documents experimental flight tests conducted in Brisbane, Australia using a SenseFly Swinglet UAS, as well as modifications for a custom UAS.
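A stochastic occupancy-map fire simulation of the kind mentioned can be sketched as a simple cellular automaton. The cell states, spread probability, and 4-neighbourhood below are illustrative assumptions, not the paper's model.

```python
import random

def step(grid, p_spread=0.6, rng=random.Random(1)):
    """One update of a stochastic occupancy-map fire model.
    Cell states: 0 = unburnt fuel, 1 = burning, 2 = burnt out."""
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 1:
                new[y][x] = 2  # burning cells burn out this step
                # fire spreads to 4-neighbouring fuel cells with probability p_spread
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0:
                        if rng.random() < p_spread:
                            new[ny][nx] = 1
    return new

grid = [[0] * 21 for _ in range(21)]
grid[10][10] = 1  # ignition point, e.g. as detected by the UAS
for _ in range(8):
    grid = step(grid)
burnt = sum(row.count(2) for row in grid)
```

Varying `p_spread` per cell would model vegetation, terrain, and weather, which is how the occupancy map's fidelity could be tuned.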

Relevance:

30.00%

Publisher:

Abstract:

Flood extent mapping is a basic tool for flood damage assessment, which can be done with digital classification techniques using satellite imagery, including data recorded by radar and optical sensors. However, converting the data into the information we need is not a straightforward task. One of the great challenges involved in the data interpretation is to separate permanent water bodies from flooded regions, including both the fully inundated areas and the wet areas where trees and houses are partly covered with water. This paper adopts the decision fusion technique to combine the mapping results from radar data with the NDVI data derived from optical data. An improved capacity to distinguish permanent or semi-permanent water bodies from flood-inundated areas has been achieved. The software tools MultiSpec and MATLAB were used.
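The decision-fusion step can be illustrated with a toy transect: NDVI = (NIR − RED)/(NIR + RED) is low over water, radar flags all water, and subtracting a permanent-water layer leaves flood pixels. All values, the threshold, and the fusion rule below are hypothetical, not the paper's implementation.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index; low or negative over water."""
    return (nir - red) / (nir + red + 1e-9)

def fuse(radar_water, ndvi_img, ndvi_thresh=0.1, permanent_water=None):
    """Toy decision fusion: keep pixels that both radar and NDVI call water,
    then remove the known permanent-water layer to leave flood pixels."""
    water = radar_water & (ndvi_img < ndvi_thresh)
    if permanent_water is None:
        permanent_water = np.zeros_like(water)
    return water & ~permanent_water

# Hypothetical 1-D transect: river (permanent) on the left, flooded field right.
nir = np.array([0.03, 0.03, 0.40, 0.04, 0.04])
red = np.array([0.05, 0.05, 0.10, 0.06, 0.06])
radar_water = np.array([True, True, False, True, True])
permanent = np.array([True, True, False, False, False])
flood = fuse(radar_water, ndvi(nir, red), permanent_water=permanent)
```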