962 results for Pixel
Abstract:
We present an image-based method for relighting a scene by analytically fitting cosine lobes to the reflectance function at each pixel, based on gradient illumination photographs. Realistic relighting results for many materials are obtained using a single per-pixel cosine lobe obtained from just two color photographs: one under uniform white illumination and the other under colored gradient illumination. For materials with wavelength-dependent scattering, a better fit can be obtained using independent cosine lobes for the red, green, and blue channels, obtained from three achromatic gradient illumination conditions instead of the colored gradient condition. We explore two cosine lobe reflectance functions, both of which allow an analytic fit to the gradient conditions. One is non-zero over half the sphere of lighting directions, which works well for diffuse and specular materials, but fails for materials with broader scattering such as fur. The other is non-zero everywhere, which works well for broadly scattering materials and still produces visually plausible results for diffuse and specular materials. We also perform an approximate diffuse/specular separation of the reflectance, and estimate scene geometry from the recovered photometric normals to produce hard shadows cast by the geometry, while still reconstructing the input photographs exactly.
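As a rough illustration (an assumption made for this summary, not the paper's exact analytic fit), a per-pixel lobe axis can be estimated from the ratios of gradient-lit to uniformly-lit intensities, treating each ratio as one component of the reflectance-lobe centroid; the fit of the lobe exponent and magnitude is omitted here.

```python
# Rough illustrative sketch, not the paper's exact formulas: estimate a per-pixel
# lobe axis from ratios of gradient-illuminated to uniformly-illuminated images.
import numpy as np

def lobe_axis(full, grad_x, grad_y, grad_z, eps=1e-6):
    """full, grad_*: HxW images for one color channel; returns HxWx3 unit axes."""
    ratios = np.stack([grad_x, grad_y, grad_z], axis=-1) / (full[..., None] + eps)
    centroid = 2.0 * ratios - 1.0          # map gradient ratios in [0, 1] to [-1, 1]
    norm = np.linalg.norm(centroid, axis=-1, keepdims=True)
    return centroid / (norm + eps)         # unit axis of the per-pixel cosine lobe

# toy data standing in for one channel of the photographs
rng = np.random.default_rng(3)
full = rng.random((4, 4)) + 0.5
axes = lobe_axis(full, 0.6 * full, 0.5 * full, 0.8 * full)
```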
Abstract:
OBJECTIVE To determine whether myocardial contrast echocardiography can be used to quantify collateral derived myocardial flow in humans. METHODS In 25 patients undergoing coronary angioplasty, a collateral flow index (CFI) was determined using intracoronary wedge pressure distal to the stenosis to be dilated, with simultaneous mean aortic pressure measurements. During balloon occlusion, echo contrast was injected into both main coronary arteries simultaneously. Echocardiography of the collateral receiving myocardial area was performed. The time course of myocardial contrast enhancement in images acquired at end diastole was quantified by measuring pixel intensities (256 grey units) within a region of interest. Perfusion variables, such as background subtracted peak pixel intensity and contrast transit rate, were obtained from a fitted gamma variate curve. RESULTS 16 patients had a left anterior descending coronary artery stenosis, four had a left circumflex coronary artery stenosis, and five had a right coronary artery stenosis. The mean (SD) CFI was 19 (12)% (range 0-47%). Mean contrast transit rate was 11 (8) seconds. In 17 patients, a significant collateral contrast effect was observed (defined as peak pixel intensity more than the mean + 2 SD of background). Peak pixel intensity was linearly related to CFI in patients with a significant contrast effect (p = 0.002, r = 0.69) as well as in all patients (p = 0.0003, r = 0.66). CONCLUSIONS Collateral derived perfusion of myocardial areas at risk can be demonstrated using intracoronary echo contrast injections. The peak echo contrast effect is directly related to the magnitude of collateral flow.
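As an illustration of the curve-fitting step, the sketch below fits a gamma variate to a synthetic background-subtracted time-intensity curve and reads off its peak; the frame spacing, initial guesses, and bounds are assumptions, not the study's protocol.

```python
# Illustrative sketch with synthetic data; the gamma variate form and the SciPy fit
# are standard, but every numerical choice here is an assumption for the example.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma variate model commonly used for indicator-dilution curves."""
    dt = np.clip(t - t0, 0.0, None)            # zero before contrast arrival at t0
    return A * dt**alpha * np.exp(-dt / beta)

rng = np.random.default_rng(0)
times = np.arange(0.0, 30.0, 1.0)               # seconds between end-diastolic frames (assumed)
truth = gamma_variate(times, 8.0, 2.0, 2.0, 3.0)
roi_intensity = truth + rng.normal(0.0, 1.0, times.size)   # background-subtracted grey units

popt, _ = curve_fit(gamma_variate, times, roi_intensity,
                    p0=[5.0, 1.0, 1.5, 2.0],
                    bounds=([0.0, 0.0, 0.1, 0.1], [np.inf, 10.0, 10.0, 10.0]))
A, t0, alpha, beta = popt
t_peak = t0 + alpha * beta                      # analytic peak time of the fitted curve
peak_intensity = gamma_variate(t_peak, *popt)   # analogue of the peak pixel intensity
```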
Abstract:
The Moon appears bright in the sky as a source of energetic neutral atoms (ENAs). These ENAs have recently been imaged over a broad energy range both from near the lunar surface, by India's Chandrayaan-1 mission (CH-1), and from a much more distant Earth orbit by NASA's Interstellar Boundary Explorer (IBEX) satellite. Both sets of observations have indicated that a relatively large fraction of the solar wind is reflected from the Moon as energetic neutral hydrogen. CH-1's angular resolution over different viewing angles of the lunar surface has enabled measurement of the emission as a function of angle. IBEX, in contrast, views not just a swath but a whole quadrant of the Moon as effectively a single pixel, since the Moon subtends no more than a few degrees on the sky even at closest approach. Here we use the scattering function measured by CH-1 to model global lunar ENA emission and combine this model with the IBEX observations. The deduced global reflection is modestly larger (by a factor of 1.25) when the angular scattering function is included. This provides a slightly updated IBEX estimate of A_H = 0.11 ± 0.06 for the global neutralized albedo, which is ~25% larger than the previous value of 0.09 ± 0.05 based on an assumed uniform scattering distribution.
Abstract:
We propose a method that robustly combines color and feature buffers to denoise Monte Carlo renderings. On one hand, feature buffers, such as per-pixel normals, textures, or depth, are effective in determining denoising filters because features are highly correlated with rendered images. Filters based solely on features, however, are prone to blurring image details that are not well represented by the features. On the other hand, color buffers represent all details, but they may be less effective for determining filters because they are contaminated by the very noise that is supposed to be removed. We propose to obtain filters using a combination of color and feature buffers in an NL-means and cross-bilateral filtering framework. We determine a robust weighting of colors and features using a SURE-based error estimate. We show significant improvements in subjective and quantitative errors compared to the previous state of the art. We also demonstrate adaptive sampling and space-time filtering for animations.
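A minimal sketch of the combination idea, with a single scalar blending factor standing in for the SURE-driven weighting described above (the parameter names and values are illustrative assumptions):

```python
# Illustrative sketch, not the paper's implementation: one cross-bilateral-style
# filter weight that mixes a color distance and a feature (e.g. normal) distance.
import numpy as np

def filter_weight(c_p, c_q, f_p, f_q, sigma_c=0.2, sigma_f=0.3, k=0.5):
    """Weight of neighbour pixel q when filtering pixel p.
    c_* are RGB colors, f_* are feature vectors such as per-pixel normals."""
    d_color = np.sum((c_p - c_q) ** 2) / (sigma_c ** 2)
    d_feat = np.sum((f_p - f_q) ** 2) / (sigma_f ** 2)
    # k = 1 trusts the (noisy) colors only, k = 0 trusts the features only
    return np.exp(-(k * d_color + (1.0 - k) * d_feat))

# a neighbour with a similar normal but a noisy color still gets a moderate weight when k is small
w = filter_weight(np.array([0.9, 0.2, 0.2]), np.array([0.4, 0.3, 0.3]),
                  np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.1, 0.99]), k=0.2)
```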
Abstract:
The ancient southern highlands on Mars (~3.5 Gyr old) contain > 600 regions that display spectral evidence in the infrared for the presence of chloride-bearing materials. Many of these locations were previously reported to display polygonal cracking patterns. We studied more than 80 of the chloride-bearing terrains using high-resolution (0.25-0.5 m/pixel) images, as well as near-infrared spectral data, to characterize the surface textures and the associated cracking patterns and mineralogies. Our study indicates that ~75% of the studied locations display polygonal cracks that resemble desiccation cracks, while some resemble salt expansion/thrust polygons. Furthermore, we detect, spectrally, the presence of smectites in association with ~30% of the studied fractured terrains. We note that smectites are a special class of swelling clay minerals that can induce formation of large desiccation cracks. As such, we suggest that the cracking patterns are indicative of the presence of smectite phyllosilicates even in the absence of spectral confirmation. Our results suggest that many chloride-bearing terrains have a lacustrine origin and a geologic setting similar to playas on Earth. Such locations would have contained ephemeral lakes that may have undergone repeated cycles of desiccation and recharging by a near-surface fluctuating water table in order to account for the salt-phyllosilicates associations. These results have notable implications for the ancient hydrology of Mars. We propose that the morphologies and sizes of the polygonal cracks can be used as paleoenvironmental, as well as lithological, indicators that could be helpful in planning future missions.
Abstract:
Mapping ecosystem services (ES) and their trade-offs is a key requirement for informed decision making for land use planning and management of natural resources that aim to move towards increasing the sustainability of landscapes. The negotiations of the purposes of landscapes and the services they should provide are difficult, as there is an increasing number of stakeholders active at different levels, with a variety of interests present on one particular landscape. Traditionally, land cover data forms the basis for mapping and spatial monitoring of ecosystem services. In light of complex landscapes it is, however, questionable whether land cover per se, used as a spatial base unit, is suitable for monitoring and management at the meso-scale. Often the characteristics of a landscape are defined by the prevalence, composition and specific spatial and temporal patterns of different land cover types. The spatial delineation of shifting cultivation agriculture is a prominent example of a land use system whose different land use intensities require alternative methodologies that go beyond the common remote sensing approach of pixel-based land cover analysis, owing to the spatial and temporal dynamics of rotating cultivated and fallow fields. Against this background we advocate that adopting a landscape perspective to spatial planning and decision making offers new space for negotiation and collaboration, taking into account the needs of local resource users and of the global community. For this purpose we introduce landscape mosaics, defined as a new spatial unit describing generalized land use types. Landscape mosaics have allowed us to chart different land use systems and land use intensities, and permitted us to delineate changes in these land use systems based on changes of external claims on these landscapes. The underlying idea behind the landscape mosaics is to use land cover data typically derived from remote sensing data and to analyse and classify spatial patterns of this land cover data using a moving window approach. We developed the landscape mosaics approach in tropical, forest-dominated landscapes, particularly shifting cultivation areas, and present examples of our work from northern Laos, eastern Madagascar and Yunnan Province in China.
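A hedged sketch of the moving-window idea: label each window of a categorical land cover raster by its composition. The class names and thresholds below are illustrative assumptions, not the authors' definitions.

```python
# Illustrative sketch, not the authors' implementation: derive a coarse landscape
# mosaic class for each window of a categorical land cover raster.
import numpy as np

def mosaic_class(window, forest=1, crop=2, fallow=3):
    """Label a window by its land cover composition; thresholds are illustrative."""
    frac = {c: np.mean(window == c) for c in (forest, crop, fallow)}
    if frac[forest] > 0.8:
        return "forest-dominated"
    if frac[crop] + frac[fallow] > 0.5:
        return "shifting-cultivation mosaic"
    return "mixed"

landcover = np.random.default_rng(1).integers(1, 4, size=(200, 200))  # toy raster
w = 25                                                                 # window size in pixels
labels = np.empty((landcover.shape[0] - w, landcover.shape[1] - w), dtype=object)
for i in range(labels.shape[0]):
    for j in range(labels.shape[1]):
        labels[i, j] = mosaic_class(landcover[i:i + w, j:j + w])
```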
Abstract:
Morphometric investigations using a point and intersection counting strategy in the lung often are not able to reveal the full set of morphologic changes. This happens particularly when structural modifications are not expressed in terms of volume density changes and when rough and fine surface density alterations cancel each other at different magnifications. Making use of digital image processing, we present a methodological approach that makes it possible to quantify changes in the geometrical properties of the parenchymal lung structure easily and quickly, and that closely reflects the visual appreciation of the changes. Randomly sampled digital images from light microscopic sections of lung parenchyma are filtered, binarized, and skeletonized. The lung septa are thus represented as a single-pixel-wide line network with nodal points and end points and the corresponding internodal and end segments. By automatically counting the number of points and measuring the lengths of the skeletal segments, the lung architecture can be characterized and very subtle structural changes can be detected. This new methodological approach to lung structure analysis is highly sensitive to morphological changes in the parenchyma: it detected highly significant quantitative alterations in the structure of lungs of rats treated with a glucocorticoid hormone, where classical morphometry had partly failed.
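A minimal sketch of the image-processing chain described above, using assumed open-source libraries (scikit-image and SciPy) rather than the authors' software:

```python
# Illustrative sketch: binarize a parenchyma image, skeletonize it to a
# one-pixel-wide network, and count end points and nodal (branch) points
# from the number of 8-connected skeleton neighbours of each skeleton pixel.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

image = np.random.default_rng(2).random((256, 256))   # stand-in for a grey-level section image
binary = image > threshold_otsu(image)                # tissue vs. airspace
skeleton = skeletonize(binary)

kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
neighbours = ndimage.convolve(skeleton.astype(int), kernel, mode="constant")
end_points = np.sum(skeleton & (neighbours == 1))      # segment ends
nodal_points = np.sum(skeleton & (neighbours >= 3))    # branch points of the septal network
```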
Abstract:
Objective: The PEM Flex Solo II (Naviscan, Inc., San Diego, CA) is currently the only commercially available positron emission mammography (PEM) scanner. This scanner does not apply corrections for count rate effects, attenuation or scatter during image reconstruction, potentially affecting the quantitative accuracy of images. This work measures the overall quantitative accuracy of the PEM Flex system, and determines the contributions of error due to count rate effects, attenuation and scatter. Materials and Methods: Gelatin phantoms were designed to simulate breasts of different sizes (4 – 12 cm thick) with varying uniform background activity concentration (0.007 – 0.5 μCi/cc), cysts and lesions (2:1, 5:1, 10:1 lesion-to-background ratios). The overall error was calculated from ROI measurements in the phantoms with a clinically relevant background activity concentration (0.065 μCi/cc). The error due to count rate effects was determined by comparing the overall error at multiple background activity concentrations to the error at 0.007 μCi/cc. A point source and cold gelatin phantoms were used to assess the errors due to attenuation and scatter. The maximum pixel values in gelatin and in air were compared to determine the effect of attenuation. Scatter was evaluated by comparing the sum of all pixel values in gelatin and in air. Results: The overall error in the background was found to be negative in phantoms of all thicknesses, with the exception of the 4-cm thick phantoms (0%±7%), and its magnitude increased with thickness (-34%±6% for the 12-cm phantoms). All lesions exhibited large negative error (-22% for the 2:1 lesions in the 4-cm phantom) whose magnitude increased with thickness and with lesion-to-background ratio (-85% for the 10:1 lesions in the 12-cm phantoms). The error due to count rate in phantoms with 0.065 μCi/cc background was negative (-23%±6% for 4-cm thickness) and decreased in magnitude with thickness (-7%±7% for 12 cm). Attenuation was a substantial source of negative error whose magnitude increased with thickness (-51%±10% to -77%±4% in 4 to 12 cm phantoms, respectively). Scatter contributed a relatively constant amount of positive error (+23%±11%) for all thicknesses. Conclusion: Applying corrections for count rate, attenuation and scatter will be essential for the PEM Flex Solo II to be able to produce quantitatively accurate images.
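The error figures quoted above are percent deviations of a measured ROI value from the known activity concentration; a minimal sketch of that arithmetic (the definition is assumed, and the 0.043 μCi/cc reading is a hypothetical value, not data from the study) is:

```python
# Minimal sketch of the assumed error definition: percent difference between an
# ROI measurement and the known activity concentration.
def percent_error(measured, true_value):
    return 100.0 * (measured - true_value) / true_value

# hypothetical reading of 0.043 uCi/cc against a true 0.065 uCi/cc background
print(round(percent_error(0.043, 0.065)))   # -34, the scale of error seen in the 12-cm phantoms
```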
Abstract:
Firn microstructure is accurately characterized using images obtained from scanning electron microscopy (SEM). Visibly etched grain boundaries within images are used to create a skeleton outline of the microstructure. A pixel-counting utility is applied to the outline to determine grain area. Firn grain sizes calculated using the technique described here are compared to those calculated using the techniques of Gow (1969) and Gay and Weiss (1999) on samples of the same material, and are found to be substantially smaller. The differences in grain size between the techniques are attributed to sampling deficiencies (e.g. the inclusion of pore filler in the grain area) in earlier methods. The new technique offers the advantages of greater accuracy and the ability to determine individual components of the microstructure (grain and pore), which have important applications in ice-core analyses. The new method is validated by calculating activation energies of grain boundary diffusion using predicted values based on the ratio of grain-size measurements between the new and existing techniques. The resulting activation energy falls within the range of values previously reported for firn/ice.
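A minimal sketch of the pixel-counting idea, with assumed libraries and an assumed pixel size rather than the authors' utility:

```python
# Illustrative sketch: label the regions enclosed by a one-pixel-wide boundary
# outline and count the pixels in each region to obtain areas.
import numpy as np
from scipy import ndimage

outline = np.zeros((100, 100), dtype=bool)        # stand-in for the skeletonized grain boundaries
outline[20, 10:90] = outline[60, 10:90] = True    # a single closed rectangular "grain"
outline[20:61, 10] = outline[20:61, 89] = True

regions, n_regions = ndimage.label(~outline)      # connected regions between boundary pixels
pixel_size_um = 0.5                               # assumed SEM pixel size in micrometres
areas_px = np.bincount(regions.ravel())[1:]       # pixels per region (index 0 is the outline itself)
areas_um2 = areas_px * pixel_size_um**2           # one region is the grain, the other the surrounding pore/background
```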
Abstract:
This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background contaminated events in data. Trigger-rate based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches is used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which then are exploited to develop dedicated background tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event cleaning techniques for some new physics searches.
Abstract:
CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices have started to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. Here, the major challenges and design drivers for ground-based and space-based optical observation strategies for objects in Earth orbit have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Finally, we simulated several observation scenarios for ground- and space-based sensors, assuming different observation and sensor properties. We introduce the analyzed end-to-end simulations of the ground- and space-based strategies in order to investigate the orbit determination accuracy and its sensitivity to different values of the frame rate, pixel scale, and astrometric and epoch registration accuracies. Two cases were simulated: a survey with a ground-based sensor observing objects in LEO for surveillance applications, and a statistical survey with a space-based sensor orbiting in LEO and observing small-size debris in LEO. The ground-based LEO survey uses a dynamical fence close to the Earth shadow a few hours after sunset. For the space-based scenario, we simulated a sensor in a sun-synchronous LEO orbit, always pointing in the anti-sun direction to achieve optimum illumination conditions for small LEO debris.
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we densely sample only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
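A schematic sketch of the iterative loop described above; the renderer, the reconstruction filter, and the error estimator below are placeholders standing in for the thesis components, not the thesis code.

```python
# Schematic sketch: alternate reconstruction and per-pixel error estimation, then
# distribute the next batch of samples proportionally to the residual error.
import numpy as np
from scipy.ndimage import uniform_filter

def render_samples(counts, rng):
    """Placeholder renderer: per-pixel mean estimate whose noise shrinks with sample count."""
    return 0.5 + rng.normal(0.0, 1.0, counts.shape) / np.sqrt(counts)

def reconstruct(noisy):
    """Placeholder for the edge-aware reconstruction; here just a small box filter."""
    return uniform_filter(noisy, size=3)

def estimate_rmse(recon, noisy, counts):
    """Crude stand-in for the per-pixel relative MSE estimate."""
    return (recon - noisy) ** 2 / (recon ** 2 + 1e-3) + 1.0 / counts

rng = np.random.default_rng(0)
counts = np.full((64, 64), 8.0)            # initial uniform sampling rate
for _ in range(4):
    noisy = render_samples(counts, rng)
    recon = reconstruct(noisy)
    err = estimate_rmse(recon, noisy, counts)
    extra = 4 * counts.size                # additional sample budget for this iteration
    counts += extra * err / err.sum()      # sample densely only where error remains high
```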
Abstract:
Superresolution from plenoptic cameras or camera arrays is usually treated similarly to superresolution from video streams. However, the transformation between the low-resolution views can be determined precisely from camera geometry and parallax. Furthermore, as each low-resolution image originates from a unique physical camera, its sampling properties can also be unique. We exploit this option with a custom design of either the optics or the sensor pixels. This design ensures that the sampling matrix of the complete system is always well-formed, enabling robust and high-resolution image reconstruction. We show that simply changing the pixel aspect ratio from square to anamorphic is sufficient to achieve that goal, as long as each camera has a unique aspect ratio. We support this claim with theoretical analysis and with reconstructions of real images. We derive the optimal aspect ratios for sets of 2 or 4 cameras. Finally, we verify our solution with a camera system using an anamorphic lens.
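As a rough illustration of why unique sampling properties help, the following simplified 1-D analogue (an assumption, not the paper's 2-D aspect-ratio formulation) stacks the sampling matrices of two cameras and compares their conditioning when the cameras use identical versus different pixel integration widths.

```python
# Simplified 1-D analogue: each camera averages the high-resolution signal over
# boxes of a given width (circular indexing avoids boundary effects); the
# conditioning of the stacked matrix indicates how robustly it can be inverted.
import numpy as np

def sampling_matrix(n_hi, n_lo, box_width, shift):
    """Each row averages box_width consecutive high-res samples, offset by shift."""
    A = np.zeros((n_lo, n_hi))
    step = n_hi // n_lo
    for i in range(n_lo):
        for k in range(box_width):
            A[i, (i * step + shift + k) % n_hi] += 1.0 / box_width
    return A

n_hi, n_lo = 64, 32
same  = np.vstack([sampling_matrix(n_hi, n_lo, 2, 0), sampling_matrix(n_hi, n_lo, 2, 1)])
mixed = np.vstack([sampling_matrix(n_hi, n_lo, 2, 0), sampling_matrix(n_hi, n_lo, 3, 0)])
print(f"identical pixel widths: cond = {np.linalg.cond(same):.1e}")   # near-singular system
print(f"different pixel widths: cond = {np.linalg.cond(mixed):.1e}")  # well-conditioned system
```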
Abstract:
This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, where we take into consideration both the training data and the geometric constraint on the test image. After the disc centers are localized, we segment the discs by classifying image pixels around the disc centers as background or foreground. The classification is done in a data-driven approach similar to the one used for localization, but in this segmentation case we aim to estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve for localization a mean error of 1.6-2.0 mm, and for segmentation a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
Abstract:
A measurement is presented of the φ×BR(φ → K+K−) production cross section at √s = 7 TeV using pp collision data corresponding to an integrated luminosity of 383 μb−1, collected with the ATLAS experiment at the LHC. Selection of φ(1020) mesons is based on the identification of charged kaons by their energy loss in the pixel detector. The differential cross section is measured as a function of the transverse momentum, pT,φ, and rapidity, yφ, of the φ(1020) meson in the fiducial region 500 < pT,φ < 1200 MeV, |yφ| < 0.8, kaon pT,K > 230 MeV and kaon momentum pK < 800 MeV. The integrated φ(1020)-meson production cross section in this fiducial range is measured to be σφ×BR(φ → K+K−) = 570 ± 8 (stat) ± 66 (syst) ± 20 (lumi) μb.