869 results for artifacts
Abstract:
We present an experimental investigation of a new reconstruction method for off-axis digital holographic microscopy (DHM). The method effectively suppresses the object autocorrelation, commonly called the zero-order term, from holographic measurements, thereby removing from the reconstructed complex wavefield the artifacts generated by the intensities of the two beams employed for interference. The algorithm is based on non-linear filtering and can be applied to standard DHM setups under realistic recording conditions. We study the applicability of the technique in different experimental configurations, such as topographic imaging of microscopic specimens and speckle holograms.
Abstract:
In engineering design, the end goal is the creation of an artifact, product, system, or process that fulfills some functional requirements at some desired level of performance. As such, knowledge of functionality is essential in a wide variety of tasks in engineering activities, including modeling, generation, modification, visualization, explanation, evaluation, diagnosis, and repair of these artifacts and processes. A formal representation of functionality is essential for supporting any of these activities on computers. The goal of Parts 1 and 2 of this Special Issue is to bring together the state of knowledge of representing functionality in engineering applications from both the engineering and the artificial intelligence (AI) research communities.
Abstract:
As with 1,2-diphenylethane (dpe), X-ray crystallographic methods measure the central bond in meso-3,4-diphenylhexane-2,5-dione (dphd) as significantly shorter than normal for an sp(3)-sp(3) bond. The same methods measure the benzylic (ethane C-Ph) bonds in dphd as unusually long for sp(3)-sp(2) liaisons. Torsional motions of the phenyl rings about the C-Ph bonds have been proposed as the artifacts behind the result of a 'short' central bond in dpe. While a similar explanation can, presumably, hold for the even 'shorter' central bond in dphd, it cannot account for the 'long' C-Ph bonds. The phenyl groups, departing much from regular hexagonal shape, adopt highly skewed conformations with respect to the plane constituted by the four central atoms. It is thought that the thermal motions of the phenyl rings, conditioned by the potential wells in which they are ensconced in the unit cell, are largely libratory around their normal axes. In what appears to be a straightforward explanation under the 'rigid-body' concept, these libratory motions of the phenyl rings, which account at the same time for the 'short' central bond, are the artifacts behind the 'long' measurement of the C-Ph bonds. These motions could be superimposed on torsional motions analogous to those proposed in the case of dpe. An inspection of the ORTEP diagram from the 298 K data on dphd clearly suggests these possibilities. Supportive evidence for these qualitative explanations, from an analysis of the differences between the mean square displacements of C(1) and C(7)/C(1a) and C(7a) based on the 'rigid-body model', is discussed. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
We measured the two components of the complex dielectric function of insulating and metallic PF6-doped polypyrrole up to 4 meV simultaneously. These data are used as an input to the Kramers-Kronig analysis performed on the higher-energy reflection data. This has helped to clarify which of the results previously deduced from the FIR and UV/VIS data could be artifacts.
Abstract:
Creep-resistant Mg alloy QE22 reinforced with Maftec(R), Saffil(R), or Supertec(R) short fibres is cycled between room temperature and 308 °C at different ramp rates in the longitudinal and transverse directions. From a careful analysis of the strain vs. temperature thermal-cycling curves, true material behaviour is separated from dilatometer artifacts. From this analysis, the true coefficient of thermal expansion and the relaxation processes are deduced. Hysteresis at higher temperatures is attributed to the relaxation process, whereas hysteresis at low temperatures, which gives a tilted shape to the thermal-cycling curves, is again an artifact of the instrument; the change in ramp rate highlights this effect. Finally, the effect of thermal cycling on the microstructure is examined.
Abstract:
Literature of the ancient Chola Dynasty (A.D. 9th-11th centuries) of South India and recent archaeological excavations allude to a sea flood that crippled the ancient port at Kaveripattinam, a trading hub for Southeast Asia, and probably affected the entire South Indian coast, analogous to the 2004 Indian Ocean tsunami impact. We present sedimentary evidence from an archaeological site to validate the textual references to this early medieval event. A sandy layer showing bed forms representing high-energy conditions, possibly generated by a seaborne wave, was identified at the Kaveripattinam coast of Tamil Nadu, South India. Its sedimentary characteristics include hummocky cross-stratification, convolute lamination with heavy minerals, rip-up clasts, an erosional contact with the underlying mud bed, and a landward-thinning geometry. Admixed with 1000-year-old Chola period artifacts, it provided an optically stimulated luminescence age of 1091 ± 66 yr and a thermoluminescence age of 993 ± 73 yr for the embedded pottery sherds. The dates of these proxies converge around 1000 yr B.P., correlative of an ancient tsunami reported from elsewhere along the Indian Ocean coasts. (C) 2011 Wiley Periodicals, Inc.
Abstract:
With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography. In many practical situations, the available projection data are incomplete. One such case is where the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when the reconstruction is performed using the convolution back-projection (CBP) method under the assumption that the projection data are complete. A number of techniques exist that complete the missing data before CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions on the nature of the data and producing smooth extensions of the missing projection data. In this paper, we propose to extend the LP approach to extrapolating helical cone-beam truncated data. In the truncated-data situation, the projection on the multi-row flat-panel detector has missing columns toward either end in the lateral direction. The available data from each detector row are modeled using a linear predictor and extrapolated, and the completed projection data are backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
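The row-wise LP completion described in this abstract can be sketched as follows; the least-squares fit of the predictor coefficients and the predictor order are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def lp_extrapolate(row, n_missing, order=8):
    """Extend a 1-D projection row by n_missing samples using a linear
    predictor fitted to the available data by least squares."""
    N = len(row)
    # Each equation predicts row[t] from the preceding `order` samples.
    A = np.array([row[t - order:t] for t in range(order, N)])
    b = row[order:N]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(row)
    for _ in range(n_missing):
        # Recursively predict the next sample from the latest `order` values.
        out.append(np.dot(coeffs, out[-order:]))
    return np.array(out)
```

Applied per detector row, this produces a smooth extension of the truncated columns, after which the completed sinogram would be fed to the backprojection step.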
Abstract:
Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image and about the image features, which makes standard filters application- and image-specific. The most popular filters, such as the average, Gaussian, and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated general approach to designing filters based on the discrete cosine transform (DCT) is proposed in this study for optimal medical image filtering. The algorithm exploits the superior energy-compaction property of the DCT and rearranges the coefficients in a wavelet-like manner to obtain better energy clustering at desired spatial locations. It performs optimal smoothing of the noisy image while preserving high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions.
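A minimal sketch of DCT-domain filtering in the spirit of this abstract: plain coefficient shrinkage (keeping only the largest-magnitude coefficients) stands in for the paper's wavelet-style coefficient rearrangement, which is not specified here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(img, keep_frac=0.05):
    """Smooth an image by keeping only the largest-magnitude DCT
    coefficients (exploiting energy compaction) and zeroing the rest."""
    c = dctn(img, norm='ortho')
    # Threshold at the (1 - keep_frac) quantile of coefficient magnitudes.
    thresh = np.quantile(np.abs(c), 1.0 - keep_frac)
    c[np.abs(c) < thresh] = 0.0
    return idctn(c, norm='ortho')
```

Because smooth image content concentrates in few DCT coefficients while noise spreads across all of them, shrinkage removes most of the noise energy at little cost to the signal.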
Abstract:
This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Due to the incomplete projection data, the reconstructed image contains truncation artifacts that extend into the region of interest (ROI), making it unsuitable for further use. Data completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion, then apply the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction, and evaluate the quality of the resulting images.
Abstract:
Droplet collision occurs frequently in regions where the droplet number density is high. Even for Lean Premixed and Pre-vaporized (LPP) liquid sprays, collisions can strongly affect the droplet size distributions, which in turn affects the droplet vaporization process. Hence, in conjunction with vaporization modeling, collision modeling for such spray systems is also essential. The standard O'Rourke collision model, usually implemented in CFD codes, tends to generate unphysical numerical artifacts when simulations are performed on a Cartesian grid, and the results are not grid independent. Thus, a new collision modeling approach based on the no-time-counter (NTC) method proposed by Schmidt and Rutland is implemented to replace O'Rourke's collision algorithm in a spray injection problem in a cylindrical coflow premixer. The so-called "four-leaf clover" numerical artifacts are eliminated by the new collision algorithm, and results from a diesel spray show very good grid independence. Next, the dispersion and vaporization processes for liquid fuel sprays are simulated in a coflow premixer. The two liquid fuels under investigation are jet-A and Rapeseed Methyl Esters (RME). Results show very good grid independence in terms of SMD distribution, droplet number distribution, and fuel vapor mass flow rate. A baseline test is first established with a spray cone angle of 90 degrees and an injection velocity of 3 m/s; jet-A achieves much better vaporization performance than RME due to its higher vapor pressure. To improve the vaporization performance of both fuels, a series of simulations was run at several combinations of spray cone angle and injection velocity. At relatively low spray cone angles and injection velocities, the effect of collisions on the average droplet size and on the vaporization performance is strong due to the relatively high coalescence rate induced by droplet collisions.
Thus, at higher spray cone angles and injection velocities, the results expectedly show improved fuel vaporization performance, since smaller droplets have higher vaporization rates. The vaporization performance and the homogeneity of the fuel-air mixture can be significantly improved when the dispersion level is high, which can be achieved by increasing the spray cone angle and injection velocity. (C) 2012 Elsevier Ltd. All rights reserved.
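The NTC idea of Schmidt and Rutland referenced in this abstract can be sketched for a single grid cell: instead of testing all N(N-1)/2 parcel pairs (as in O'Rourke's scheme), a fixed number of random candidate pairs is drawn and each is accepted with a probability proportional to its collision kernel. The kernel upper bound and the parcel bookkeeping below are simplified assumptions for illustration.

```python
import numpy as np

def ntc_collision_candidates(pos, vel, radius, cell_vol, dt, rng):
    """No-time-counter (NTC) collision sampling for parcels in one cell."""
    n = len(pos)
    if n < 2:
        return []
    # Collision kernel: pi * (r_i + r_j)^2 * |v_i - v_j|; bound it from above
    # using the largest radius and twice the largest speed in the cell.
    sigma_v_max = np.pi * (2 * radius.max()) ** 2 * (
        2 * np.linalg.norm(vel, axis=1).max() + 1e-30)
    # Expected number of candidate pairs to draw this time step.
    n_cand = int(round(0.5 * n * (n - 1) * sigma_v_max * dt / cell_vol))
    collisions = []
    for _ in range(n_cand):
        i, j = rng.choice(n, size=2, replace=False)
        sigma_v = np.pi * (radius[i] + radius[j]) ** 2 \
            * np.linalg.norm(vel[i] - vel[j])
        # Accept with probability kernel / kernel upper bound.
        if rng.random() < sigma_v / sigma_v_max:
            collisions.append((i, j))
    return collisions
```

The cost scales with the expected number of collisions rather than with N², which is what makes the method attractive in dense spray regions.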
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix indicate the importance of a particular measurement, and the magnitudes of the off-diagonal entries indicate the dependence among measurements. Independent measurements are chosen based on the closeness of the diagonal values to the off-diagonal entries. The reconstruction results obtained using all measurements were compared with those obtained using only the independent measurements, for both numerical and experimental phantoms. A traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstructions had poor quality, with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework for characterizing and optimizing a given data-collection strategy.
(C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
Abstract:
The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images to identify blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source-finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
Abstract:
We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.
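A hedged sketch of a Fienup-type iteration with a sparsity constraint, in the spirit of this abstract: hard thresholding in the object domain stands in for the paper's sparse-representation step, and the OCT-specific measurement details are omitted. The iteration alternates between imposing the measured Fourier magnitude and keeping only the k largest object coefficients.

```python
import numpy as np

def sparse_fienup(mag, k, n_iter=100, rng=None):
    """Error-reduction phase retrieval with a k-sparsity constraint.
    mag: measured Fourier-magnitude array; returns a k-sparse estimate."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(mag.shape)       # random initial object
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = mag * np.exp(1j * np.angle(X))   # impose measured magnitude
        x = np.real(np.fft.ifft2(X))         # back to object domain
        flat = np.abs(x).ravel()
        thresh = np.partition(flat, -k)[-k]  # k-th largest magnitude
        x = np.where(np.abs(x) >= thresh, x, 0.0)  # keep k largest entries
    return x
```

In the paper's setting the sparsity step would act on coefficients in a chosen orthonormal basis rather than directly on pixels; the pixel-domain version above is the simplest instance of the same idea.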
Abstract:
We address the problem of speech enhancement in real-world noisy scenarios. We propose to solve the problem in two stages: the first comprises a generalized spectral subtraction technique, followed by a sequence of perceptually motivated post-processing algorithms. The role of the post-processing algorithms is to compensate for the effects of noise and to suppress any artifacts created by the first-stage processing. The key post-processing mechanisms are aimed at suppressing musical noise, enhancing the formant structure of voiced speech, and denoising the linear-prediction residual. The parameter values in the techniques are fixed optimally by experimentally evaluating the enhancement performance as a function of the parameters. We used the Carnegie Mellon University Arctic database for our experiments and considered three real-world noise types: fan noise, car noise, and motorbike noise. The enhancement performance was evaluated through listening experiments on 12 subjects. The listeners reported a clear improvement in perceived quality over the noisy signal, with a mean-opinion-score (MOS) increase of 0.5 on average, for positive signal-to-noise ratios (SNRs). For negative SNRs, however, the improvement was marginal.
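The first-stage generalized spectral subtraction can be sketched for one analysis frame as follows; the oversubtraction factor, the spectral floor, and the use of a known noise frame (in practice the noise spectrum is estimated from speech pauses) are assumptions for illustration.

```python
import numpy as np

def spectral_subtract(noisy, noise_est, alpha=2.0, beta=0.01):
    """Power spectral subtraction on one analysis frame: subtract an
    over-estimate of the noise power spectrum and floor the result to
    avoid negative power (the floor limits musical noise, which a
    post-processing stage would then suppress further)."""
    w = np.hanning(len(noisy))
    Y = np.fft.rfft(noisy * w)
    N = np.fft.rfft(noise_est * w)
    p_clean = np.maximum(np.abs(Y) ** 2 - alpha * np.abs(N) ** 2,
                         beta * np.abs(Y) ** 2)      # spectral floor
    S = np.sqrt(p_clean) * np.exp(1j * np.angle(Y))  # keep noisy phase
    return np.fft.irfft(S, n=len(noisy))
```

Residual isolated spectral peaks that survive the subtraction are what listeners perceive as "musical noise", motivating the perceptual post-processing described in the abstract.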
Abstract:
In order to reduce the motion artifacts in DSA, non-rigid image registration is commonly used before subtracting the mask from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this, and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than registering the control points independently. We achieve improved computational performance using pivotal control points without compromising artifact reduction. We have tested our approach on several clinical data sets and present the results of quantitative analysis, clinical assessment, and performance improvement on a GPU. (C) 2013 Elsevier Ltd. All rights reserved.