923 results for "side pixel registration"


Relevance:

100.00%

Publisher:

Abstract:

The fast sequential multi-element determination of Ca, Mg, K, Cu, Fe, Mn and Zn in plant tissues by high-resolution continuum source flame atomic absorption spectrometry is proposed. The main lines for Cu (324.754 nm), Fe (248.327 nm), Mn (279.482 nm) and Zn (213.857 nm) were selected, and the secondary lines for Ca (239.856 nm), Mg (202.582 nm) and K (404.414 nm) were evaluated. The side pixel registration approach was studied to reduce sensitivity and extend the linear working range for Mg by measuring at the wings (202.576 nm; 202.577 nm; 202.578 nm; 202.580 nm; 202.585 nm; 202.586 nm; 202.587 nm; 202.588 nm) of the secondary line. The interference caused by NO bands on Zn at 213.857 nm was removed using least-squares background correction. Using the main lines for Cu, Fe, Mn and Zn, the secondary lines for Ca and K, the line wing at 202.588 nm for Mg, and a 5 mL min-1 sample flow rate, calibration curves were consistently obtained in the ranges 0.1-0.5 mg L-1 Cu, 0.5-4.0 mg L-1 Fe, 0.5-4.0 mg L-1 Mn, 0.2-1.0 mg L-1 Zn, 10.0-100.0 mg L-1 Ca, 5.0-40.0 mg L-1 Mg and 50.0-250.0 mg L-1 K. Accuracy and precision were evaluated by analysis of five plant standard reference materials; results agreed with certified values at the 95% confidence level (paired t-test). The proposed method was applied to digests of sugar-cane leaves, and results were close to those obtained by line-source flame atomic absorption spectrometry. Recoveries of Ca, Mg, K, Cu, Fe, Mn and Zn were in the 89-103%, 84-107%, 87-103%, 85-105%, 92-106%, 91-114% and 96-114% intervals, respectively. The limits of detection were 0.6 mg L-1 Ca, 0.4 mg L-1 Mg, 0.4 mg L-1 K, 7.7 µg L-1 Cu, 7.7 µg L-1 Fe, 1.5 µg L-1 Mn and 5.9 µg L-1 Zn.
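The least-squares background correction step can be illustrated with a small numerical sketch. Everything here is synthetic and assumed for illustration only (the Gaussian line shapes, pixel scale, noise level, and exclusion window are invented, not taken from the instrument software): the measured spectrum is modelled as the analyte line plus a scaled NO reference band, and the scaling is fitted by least squares over pixels away from the analyte line.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = np.arange(200)                               # detector pixel index
no_reference = np.exp(-((pixels - 60) / 25.0) ** 2)   # synthetic NO band shape
analyte = 0.8 * np.exp(-((pixels - 100) / 2.0) ** 2)  # synthetic Zn line
measured = analyte + 0.5 * no_reference + 0.02 * rng.standard_normal(200)

# Fit the NO reference (plus a constant offset) to the measured spectrum,
# excluding pixels near the analyte line (here pixels 90-110).
mask = (pixels < 90) | (pixels > 110)
A = np.column_stack([no_reference[mask], np.ones(mask.sum())])
coef, *_ = np.linalg.lstsq(A, measured[mask], rcond=None)

# Subtract the fitted background from the full spectrum.
background = coef[0] * no_reference + coef[1]
corrected = measured - background

print(round(coef[0], 2))  # recovered NO scaling factor, close to 0.5
```

The corrected spectrum retains the analyte peak (about 0.8 at pixel 100 in this toy case) with the molecular band removed.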

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients (HOG) or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This raises the question: Aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models versus subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
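As a rough illustration of what a HOG-style appearance descriptor computes, the following numpy sketch builds a single-cell gradient-orientation histogram. It is deliberately simplified (no cell grid, no block normalization), and the patch contents and bin count are arbitrary choices, not values from the paper:

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    # Fold orientation into [0, pi), as is common for HOG.
    orientation = np.mod(np.arctan2(gy, gx), np.pi)
    bins = np.minimum((orientation / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist / (hist.sum() + 1e-12)   # normalize to unit mass

# A patch with purely vertical intensity variation has gradients along y,
# so the histogram concentrates in the bin containing pi/2.
patch = np.tile(np.arange(16.0)[:, None], (1, 16))
h = orientation_histogram(patch)
print(h.argmax())
```

Because the descriptor pools gradient energy over a spatial cell, small misalignments move mass between neighbouring pixels but barely change the histogram, which is the robustness-to-alignment-error property discussed above.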

Relevance:

30.00%

Publisher:

Abstract:

Habitat mapping and characterization has been defined as a high-priority management issue for the Olympic Coast National Marine Sanctuary (OCNMS), especially for poorly known deep-sea habitats that may be sensitive to anthropogenic disturbance. As a result, a team of scientists from OCNMS, the National Centers for Coastal Ocean Science (NCCOS), and other partnering institutions initiated a series of surveys to assess the distribution of deep-sea coral/sponge assemblages within the sanctuary and to look for evidence of potential anthropogenic impacts in these critical habitats. Initial results indicated that remotely delineating areas of hard-bottom substrate through acoustic sensing could be a useful tool to increase the efficiency and success of subsequent ROV-based surveys of the associated deep-sea fauna. Accordingly, side scan sonar surveys were conducted in May 2004, June 2005, and April 2006 aboard the NOAA Ship McArthur II to: (1) obtain additional imagery of the seafloor for broader habitat-mapping coverage of sanctuary waters, and (2) help delineate suitable deep-sea coral/sponge habitat, in areas of both high and low commercial-fishing activity, to serve as sites for more detailed ROV surveys on subsequent cruises. Several regions of the sea floor throughout the OCNMS were surveyed and mosaicked at 1-meter pixel resolution. Imagery from the side scan sonar mapping efforts was integrated with complementary data from a towed camera sled, ROVs, sedimentary samples, and bathymetry records to describe geological and, where possible, biological aspects of habitat. Using a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999), we created a preliminary map of habitat polygon features for use in a geographic information system (GIS). This report provides a description of the mapping and groundtruthing efforts as well as results of the image classification procedure for each of the areas surveyed.
(PDF contains 60 pages.)

Relevance:

30.00%

Publisher:

Abstract:

The Olympic Coast National Marine Sanctuary (OCNMS) continues to invest significant resources in seafloor mapping activities along Washington's outer coast (Intelmann and Cochrane 2006; Intelmann et al. 2006; Intelmann 2006). Results from these annual mapping efforts offer a snapshot of current ground conditions, help to guide research and management activities, and provide a baseline for assessing the impacts of various threats to important habitat. During August 2004 and May and July 2005, we used side scan sonar to image several regions of the sea floor in the northern OCNMS, and the data were mosaicked at 1-meter pixel resolution. Video from a towed camera sled, bathymetry data, sedimentary samples and side scan sonar mapping were integrated to describe geological and biological aspects of habitat. Polygon features were created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). For three small areas that were mapped with both side scan sonar and multibeam echosounder, we compared output from the classified images and found little difference in results between the two methods. Given these considerations, backscatter derived from multibeam bathymetry is currently a cost-efficient and safe method for seabed imaging in the shallow (<30 meters) rocky waters of OCNMS: the image quality is sufficient for classification purposes, the associated depths provide further descriptive value, and risks to gear are minimized. In shallow waters (<30 meters) without a high incidence of dangerous rock pinnacles, a towed multibeam side scan sonar could provide a better option for obtaining seafloor imagery because of its high acquisition speed and image quality; however, the high probability of losing or damaging such a costly system when towed through the extremely rugose nearshore zones of OCNMS makes that a financially risky proposition.
Newer technologies such as interferometric multibeam systems and bathymetric side scan systems also hold great potential for mapping these nearshore rocky areas: they allow high-speed data acquisition, produce side scan imagery precisely geo-referenced to bathymetry, and do not suffer the angular depth dependency of multibeam echosounders, allowing larger range scales to be used in shallower water. Further investigation of these systems is therefore needed to assess their efficiency and utility in these environments compared with traditional side scan sonar and multibeam bathymetry. (PDF contains 43 pages.)

Relevance:

30.00%

Publisher:

Abstract:

In September 2002, side scan sonar was used to image a portion of the sea floor in the northern OCNMS, and the imagery was mosaicked at 1-meter pixel resolution using 100 kHz data collected at a 300-meter range scale. Video from a remotely operated vehicle (ROV), bathymetry data, sedimentary samples, and sonar mapping were integrated to describe geological and biological aspects of habitat, and polygon features were created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). The data can be used with geographic information system (GIS) software for display, query, and analysis. Textural analysis of the sonar images provided a relatively automated method for delineating substrate into three broad classes representing soft, mixed sediment, and hard bottom. Microhabitat and the presence of certain biologic attributes were also populated into the polygon features, but strictly limited to areas where video groundtruthing occurred. Further groundtruthing work in specific areas would improve confidence in the classified habitat map. (PDF contains 22 pages.)
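The idea of texture-based substrate delineation can be sketched in a heavily simplified form. This is an illustrative stand-in, not the report's actual procedure: local standard deviation as the texture statistic, quantile thresholds for the three classes, and the synthetic mosaic are all assumptions.

```python
import numpy as np

def classify_substrate(mosaic, patch=8):
    """Tile a mosaic into patches and split them into three texture classes."""
    h, w = mosaic.shape
    tiles = mosaic[: h // patch * patch, : w // patch * patch]
    tiles = tiles.reshape(h // patch, patch, w // patch, patch)
    texture = tiles.std(axis=(1, 3))            # per-patch roughness statistic
    lo, hi = np.quantile(texture, [1 / 3, 2 / 3])
    # 0 = soft (smooth), 1 = mixed sediment, 2 = hard bottom (rough)
    return np.digitize(texture, [lo, hi])

rng = np.random.default_rng(1)
# Synthetic mosaic: left third smooth, middle moderately textured, right rough.
mosaic = np.concatenate(
    [rng.normal(0, s, size=(64, 64)) for s in (0.1, 1.0, 5.0)], axis=1
)
labels = classify_substrate(mosaic)
print(labels[:, :8].max(), labels[:, -8:].min())  # smooth -> 0, rough -> 2
```

In practice the report's textural analysis worked on real backscatter statistics rather than quantile cuts, but the tile-statistic-threshold pipeline above is the general shape of such automated delineation.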

Relevance:

30.00%

Publisher:

Abstract:

The POINT-AGAPE collaboration is currently searching for massive compact halo objects (MACHOs) toward the Andromeda galaxy (M31). The survey aims to exploit the high inclination of the M31 disk, which causes an asymmetry in the spatial distribution of M31 MACHOs. Here, we investigate the effects of halo velocity anisotropy and flattening on the asymmetry signal using simple halo models. For a spherically symmetric and isotropic halo, we find that the underlying pixel lensing rate in far-disk M31 MACHOs is more than 5 times the rate of near-disk events. We find that the asymmetry is further increased by about 30% if the MACHOs occupy radial orbits rather than tangential orbits, but it is substantially reduced if the MACHOs lie in a flattened halo. However, even for halos with a minor- to major-axis ratio of q = 0.3, M31 MACHOs on the far side outnumber those on the near side by a factor of ~2. There is also a distance asymmetry, in that the events on the far side are typically farther from the major axis. We show that, if this positional information is exploited in addition to number counts, then the number of candidate events required to confirm asymmetry for a range of flattened and anisotropic halo models is achievable, even with significant contamination by variable stars and foreground microlensing events. For pixel lensing surveys that probe a representative portion of the M31 disk, a sample of around 50 candidates is likely to be sufficient to detect asymmetry within spherical halos, even if half the sample is contaminated, or to detect asymmetry in halos as flat as q = 0.3, provided less than a third of the sample comprises contaminants. We also argue that, provided its mass-to-light ratio is less than 100, the recently observed stellar stream around M31 is not problematic for the detection of asymmetry.

Relevance:

30.00%

Publisher:

Abstract:

Data registration refers to a series of techniques for matching or bringing similar objects or datasets together into alignment. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, from pixel-level and block- or feature-based methods to Fourier-domain methods.

This book is focused on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.

Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives


Relevance:

30.00%

Publisher:

Abstract:

Imagery registration is a fundamental step that greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, etc., where the final solution requires blending pixel information from more than one image. It is highly desirable to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature-point distribution. The affine scale-invariant feature transform (ASIFT) was used to acquire feature points and correspondences on the input images. A homography matrix was then estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. To identify a registration region with a better spatial distribution of feature points, a criterion based on the Euclidean distance between feature points (the S criterion) is applied. Finally, the parameters of the homography matrix were optimized by the Levenberg-Marquardt (LM) algorithm using selected feature points from the chosen registration region. In the experimental section, Chang'E-2 satellite remote sensing imagery was used to evaluate the performance of the proposed method. The results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving lower root mean square error (RMSE) and a better distribution of feature points.
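The transformation-model and reprojection-error steps can be sketched with a minimal numpy example. The ASIFT matching and the paper's IM-RANSAC outlier rejection are omitted here; the direct linear transform (DLT) estimator below and the synthetic correspondences are stand-ins for illustration.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (n >= 4 points, shape (n, 2))."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.array(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_rmse(H, src, dst):
    """Root mean square reprojection error of H over the correspondences."""
    src_h = np.column_stack([src, np.ones(len(src))])
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.sqrt(np.mean(np.sum((proj - dst) ** 2, axis=1))))

# Synthetic correspondences generated by a known homography.
H_true = np.array([[1.1, 0.05, 10.0], [-0.02, 0.95, -5.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 25]], float)
src_h = np.column_stack([src, np.ones(len(src))])
dst = src_h @ H_true.T
dst = dst[:, :2] / dst[:, 2:3]

H_est = dlt_homography(src, dst)
print(reprojection_rmse(H_est, src, dst))  # ~0 for noise-free correspondences
```

With noisy real correspondences, this linear estimate is exactly what a RANSAC loop would score per sample and what the LM refinement would polish, which is where the framework's region selection (the S criterion) pays off.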

Relevance:

30.00%

Publisher:

Abstract:

Myocardial perfusion quantification by means of contrast-enhanced cardiac magnetic resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique followed again by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement. Comparisons of perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
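The normalized cross-correlation (NCC) at the core of the registration step can be illustrated in a minimal rigid form. The thesis's multiscale, non-rigid extension is not reproduced here; the exhaustive integer-shift search and the synthetic frames below are assumptions for illustration only.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def register_shift(reference, frame, max_shift=5):
    """Find the integer (dy, dx) shift of `frame` that maximizes NCC with `reference`."""
    best_score, best_shift = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            score = ncc(reference, shifted)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(2)
reference = rng.standard_normal((32, 32))
frame = np.roll(np.roll(reference, -3, axis=0), 2, axis=1)  # simulate motion
print(register_shift(reference, frame))  # recovers (3, -2)
```

NCC is attractive for perfusion sequences because it is invariant to the global intensity changes the contrast agent induces between frames, which is why the thesis builds its non-rigid scheme on it rather than on plain intensity differences.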