109 results for quantization artifacts
Strongly magnetized cold degenerate electron gas: Mass-radius relation of the magnetized white dwarf
Abstract:
We consider a relativistic, degenerate electron gas at zero temperature under the influence of a strong, uniform, static magnetic field, neglecting any form of interactions. Since the density of states for the electrons changes due to the presence of the magnetic field (which gives rise to Landau quantization), the corresponding equation of state also gets modified. In order to investigate the effect of a very strong magnetic field, we focus only on systems in which at most one, two, or three Landau levels are occupied. This is important since, if a very large number of Landau levels are filled, it implies a very low magnetic field strength, which yields back Chandrasekhar's celebrated nonmagnetic results. The maximum number of occupied Landau levels is fixed by the correct choice of two parameters, namely, the magnetic field strength and the maximum Fermi energy of the system. We study the equations of state of these one-level, two-level, and three-level systems and compare them by taking three different maximum Fermi energies. We also find the effect of the strong magnetic field on the mass-radius relation of the underlying star composed of the gas stated above. We obtain the exciting result that it is possible to have an electron-degenerate static star, namely a magnetized white dwarf, with a mass significantly greater than the Chandrasekhar limit, in the range 2.3-2.6 M_⊙, provided it has an appropriate magnetic field strength and central density. In fact, recent observations of peculiar Type Ia supernovae (SN 2006gz, SN 2007if, SN 2009dc, SN 2003fg) seem to suggest super-Chandrasekhar-mass white dwarfs with masses up to 2.4-2.8 M_⊙ as their most likely progenitors. Interestingly, our results seem to lie within these observational limits.
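For reference, a hedged sketch of the dispersion relation behind the modified density of states (a standard result; the notation is ours, not necessarily the paper's): in a uniform field B along z, the energy of an electron in the ν-th Landau level is

\[
E_\nu(p_z) = \sqrt{p_z^2 c^2 + m_e^2 c^4 \left(1 + 2\nu \frac{B}{B_c}\right)},
\qquad
B_c = \frac{m_e^2 c^3}{\hbar e} \approx 4.414 \times 10^{13}\ \mathrm{G},
\]

so fixing the maximum Fermi energy E_F and the field strength B caps the highest occupied level at \(\nu_{\max} = \left\lfloor \frac{B_c}{2B}\left(\frac{E_F^2}{m_e^2 c^4} - 1\right) \right\rfloor\), which is how the one-, two-, and three-level systems of the abstract arise.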
Abstract:
Droplet collision occurs frequently in regions where the droplet number density is high. Even for Lean Premixed and Pre-vaporized (LPP) liquid sprays, the collision effects on the droplet size distributions can be very strong, which in turn affects the droplet vaporization process. Hence, in conjunction with vaporization modeling, collision modeling for such spray systems is also essential. The standard O'Rourke collision model, usually implemented in CFD codes, tends to generate unphysical numerical artifacts when simulations are performed on a Cartesian grid, and the results are not grid independent. Thus, a new collision modeling approach based on the no-time-counter (NTC) method proposed by Schmidt and Rutland is implemented to replace O'Rourke's collision algorithm to solve a spray injection problem in a cylindrical coflow premixer. The so-called "four-leaf clover" numerical artifacts are eliminated by the new collision algorithm, and results from a diesel spray show very good grid independence. Next, the dispersion and vaporization processes for liquid fuel sprays are simulated in a coflow premixer. The two liquid fuels under investigation are Jet-A and Rapeseed Methyl Esters (RME). Results show very good grid independence in terms of SMD distribution, droplet number distribution, and fuel vapor mass flow rate. A baseline test is first established with a spray cone angle of 90 degrees and an injection velocity of 3 m/s; Jet-A achieves much better vaporization performance than RME due to its higher vapor pressure. To improve the vaporization performance of both fuels, a series of simulations has been performed at several combinations of spray cone angle and injection velocity. At relatively low spray cone angle and injection velocity, the collision effects on the average droplet size and the vaporization performance are very strong due to the relatively high coalescence rate induced by droplet collisions. At higher spray cone angle and injection velocity, the results expectedly show improved fuel vaporization performance, since smaller droplets have higher vaporization rates. The vaporization performance and the level of homogeneity of the fuel-air mixture can be significantly improved when the dispersion level is high, which can be achieved by increasing the spray cone angle and injection velocity. (C) 2012 Elsevier Ltd. All rights reserved.
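As a rough illustration, here is a minimal sketch of the no-time-counter (NTC) pair-sampling idea of Schmidt and Rutland; the function name, parcel layout, and the simple geometric collision kernel are illustrative assumptions, not the paper's implementation.

# Hedged sketch of NTC collision sampling for one cell and one time step.
import math
import random

def ntc_candidate_pairs(parcels, cell_volume, dt, kernel_max):
    """Return accepted collision pairs for one cell.

    parcels    : list of (radius, (vx, vy, vz)) droplet parcels in the cell
    kernel_max : upper bound on (cross-section * relative speed) in the cell
    """
    n = len(parcels)
    accepted = []
    if n < 2:
        return accepted
    # Expected number of candidate pairs; sampling this many pairs instead
    # of testing all n*(n-1)/2 keeps the per-cell cost linear in n.
    n_cand = round(0.5 * n * (n - 1) * kernel_max * dt / cell_volume)
    for _ in range(n_cand):
        i, j = random.sample(range(n), 2)
        (ri, vi), (rj, vj) = parcels[i], parcels[j]
        v_rel = math.dist(vi, vj)
        sigma = math.pi * (ri + rj) ** 2          # geometric cross-section
        # Thin the candidates: accept with probability kernel / kernel_max.
        if random.random() * kernel_max < sigma * v_rel:
            accepted.append((i, j))
    return accepted

The accepted pairs would then be passed to an outcome model (coalescence, grazing, etc.); the stochastic thinning reproduces the correct collision rate on average while avoiding the O(N^2) pair loop of the standard algorithm.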
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed based on the sensitivity matrix and the regularization scheme used in the reconstruction procedure by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. Based on the closeness of the diagonal value magnitude to the off-diagonal entries, the choice of independent measurements is made. The reconstruction results obtained using all measurements were compared to the ones obtained using only independent measurements in both numerical and experimental phantom cases. The traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) were chosen at random, the reconstruction results had poor quality, with major boundary artifacts. The number of independent measurements obtained using data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4736820]
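A minimal sketch of the underlying quantity, assuming a linear forward model with Tikhonov regularization; the selection threshold and names below are our illustrative choices, not the paper's.

# Hedged sketch: data-resolution matrix for a regularized linear inverse problem.
import numpy as np

def data_resolution_matrix(J, lam):
    """A = J (J^T J + lam*I)^{-1} J^T maps measured data to predicted data."""
    n_params = J.shape[1]
    return J @ np.linalg.solve(J.T @ J + lam * np.eye(n_params), J.T)

def pick_independent(A, tol=0.5):
    """Keep measurement i when its diagonal entry dominates its strongest
    off-diagonal coupling (one illustrative reading of the selection rule)."""
    d = np.diag(A)
    off = np.abs(A - np.diag(d))
    return [i for i in range(A.shape[0]) if d[i] > tol * off[i].max()]

Rows of A with a dominant diagonal correspond to measurements that the model cannot predict from the others, which is the sense in which they are "independent".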
Abstract:
The Australia Telescope Low-Brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while the effects of source confusion are removed by using high-resolution images to identify blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source-finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists (as opposed to component lists) and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
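A hedged sketch of the bin-wise counts estimate with a per-source effective-area weight, which is one standard way to apply such corrections; the paper's noise-bias and resolution-bias treatments are not reproduced here.

# Hedged sketch: differential source counts with effective-area weighting.
import numpy as np

def differential_counts(flux_mjy, area_eff_deg2, bin_edges_mjy):
    """dN/dS per mJy per deg^2, weighting each source by the inverse of the
    effective survey area over which it could have been detected."""
    counts = np.zeros(len(bin_edges_mjy) - 1)
    for s, a in zip(flux_mjy, area_eff_deg2):
        k = np.searchsorted(bin_edges_mjy, s, side='right') - 1
        if 0 <= k < counts.size:
            counts[k] += 1.0 / a
    return counts / np.diff(bin_edges_mjy)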
Abstract:
We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object from the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm that incorporates the sparsity condition and successively estimates and refines the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent, since it estimates the phase accurately.
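A minimal sketch of a Fienup-style iteration with a sparsity step, assuming hard thresholding in the pixel basis; the paper's basis, thresholding rule, and convergence safeguards may differ.

# Hedged sketch: Fourier-magnitude fitting with positivity and sparsity.
import numpy as np

def sparse_fienup(mag, n_iter=200, k=50):
    """Recover an object from its Fourier magnitudes `mag` (2-D array),
    keeping only the k largest object-domain coefficients per iteration."""
    rng = np.random.default_rng(0)
    x = np.fft.ifft2(mag * np.exp(2j * np.pi * rng.random(mag.shape))).real
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        x = np.fft.ifft2(mag * np.exp(1j * np.angle(X))).real  # magnitude fit
        x[x < 0] = 0                                           # positivity
        thresh = np.sort(np.abs(x), axis=None)[-k]             # sparsity
        x[np.abs(x) < thresh] = 0
    return x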
Abstract:
In this paper, we investigate the achievable rate region of Gaussian multiple access channels (MAC) with finite input alphabet and quantized output. The two-user Gaussian MAC rate region with finite input alphabet and an unquantized receiver has been studied previously. In most high-throughput communication systems based on digital signal processing, the analog received signal is quantized using a low-precision quantizer. In this paper, we first derive expressions for the achievable rate region of a two-user Gaussian MAC with finite input alphabet and quantized output. We show that, with finite input alphabet, the commonly used uniform receiver quantizer incurs a significant loss in the achievable rate region compared to the unquantized case. This degradation is due to the fact that the received analog signal is densely distributed around the origin and is therefore not efficiently quantized with a uniform quantizer, which has equally spaced quantization intervals. It is also observed that the density of the received analog signal around the origin increases with an increasing number of users. Hence, the loss in the achievable rate region due to uniform receiver quantization is expected to grow with the number of users. We therefore propose a novel non-uniform quantizer with finely spaced quantization intervals near the origin. For a two-user Gaussian MAC with a given finite input alphabet and low-precision receiver quantization, we show that the proposed non-uniform quantizer achieves a significantly larger rate region than a uniform quantizer.
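As a sketch of the idea, and not the paper's specific design: a companding quantizer such as μ-law realizes finely spaced cells near the origin by uniformly quantizing a compressed copy of the signal.

# Hedged sketch: uniform vs. companding (non-uniform) quantization.
import numpy as np

def quantize_uniform(y, n_bits, y_max):
    levels = 2 ** n_bits
    step = 2 * y_max / levels
    idx = np.clip(np.floor((y + y_max) / step), 0, levels - 1)
    return -y_max + (idx + 0.5) * step            # mid-cell reconstruction

def quantize_mulaw(y, n_bits, y_max, mu=255.0):
    # Compress, quantize uniformly, then expand: cells shrink near 0.
    c = np.sign(y) * np.log1p(mu * np.abs(y) / y_max) / np.log1p(mu)
    cq = quantize_uniform(c, n_bits, 1.0)
    return np.sign(cq) * (y_max / mu) * np.expm1(np.abs(cq) * np.log1p(mu))

Because the received MAC signal concentrates near the origin, spending more of the 2^n_bits cells there (as the compressed quantizer does) wastes fewer cells on rarely visited amplitudes.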
Abstract:
We address the problem of speech enhancement in real-world noisy scenarios. We propose to solve the problem in two stages, the first comprising a generalized spectral subtraction technique, followed by a sequence of perceptually motivated post-processing algorithms. The role of the post-processing algorithms is to compensate for the effects of noise as well as to suppress any artifacts created by the first-stage processing. The key post-processing mechanisms are aimed at suppressing musical noise, enhancing the formant structure of voiced speech, and denoising the linear-prediction residual. The parameter values of the techniques are fixed optimally by experimentally evaluating the enhancement performance as a function of the parameters. We used the Carnegie Mellon University Arctic database for our experiments. We considered three real-world noise types: fan noise, car noise, and motorbike noise. The enhancement performance was evaluated by conducting listening experiments on 12 subjects. For positive signal-to-noise ratios (SNRs), the listeners reported a clear improvement in perceived quality over the noisy signal (an increase in the mean opinion score (MOS) of 0.5 on average). For negative SNRs, however, the improvement was found to be marginal.
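A minimal sketch of the first stage, assuming a generic over-subtraction rule with a spectral floor; the paper's exact rule is a generalized variant and its parameter values are fixed experimentally, so they may differ from the illustrative defaults here.

# Hedged sketch: spectral subtraction on STFT frames.
import numpy as np

def spectral_subtraction(frames, noise_psd, alpha=2.0, beta=0.01):
    """frames   : complex STFT frames, shape (n_frames, n_bins)
    noise_psd : noise power estimate per bin (e.g. from speech-free frames)"""
    power = np.abs(frames) ** 2
    # Over-subtract the noise power, then floor to limit musical noise.
    clean_power = np.maximum(power - alpha * noise_psd, beta * power)
    gain = np.sqrt(clean_power / power)
    return gain * frames          # keep the noisy phase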
Abstract:
Several recently discovered peculiar Type Ia supernovae seem to demand an altogether new formation theory that might help explain the puzzling dissimilarities between them and the standard Type Ia supernovae. The most striking aspect of the observational analysis is the necessity of invoking super-Chandrasekhar white dwarfs having masses of ~2.1-2.8 M_⊙, where M_⊙ is the mass of the Sun, as their most probable progenitors. Strongly magnetized white dwarfs having super-Chandrasekhar masses have already been established as potential candidates for the progenitors of peculiar Type Ia supernovae. Owing to the Landau quantization of the underlying electron degenerate gas, theoretical results yielded the observationally inferred mass range. Here, we sketch a possible evolutionary scenario by which super-Chandrasekhar white dwarfs could be formed by accretion onto a commonly observed magnetized white dwarf, invoking the phenomenon of flux freezing. This opens multiple possible evolution scenarios ending in supernova explosions of super-Chandrasekhar white dwarfs having masses within the range stated above. We point out that our proposal has observational support, such as the recent discovery of a large number of magnetized white dwarfs by the Sloan Digital Sky Survey.
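A hedged one-line version of the flux-freezing argument invoked here: if the surface magnetic flux of the contracting, accreting star is conserved,

\[
\Phi \propto B R^2 = \text{const}
\quad\Longrightarrow\quad
B_{\mathrm{final}} = B_{\mathrm{initial}} \left(\frac{R_{\mathrm{initial}}}{R_{\mathrm{final}}}\right)^{2},
\]

so a modest field on a commonly observed magnetized white dwarf can be amplified by orders of magnitude as accretion drives the radius down.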
Abstract:
In order to reduce motion artifacts in DSA, non-rigid image registration is commonly used before subtracting the mask from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this, and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than registering the control points independently. We achieve improved computational performance using pivotal control points without compromising on the artifact reduction. We have tested our approach on several clinical data sets, and present the results of quantitative analysis, clinical assessment, and performance improvement on a GPU. (C) 2013 Elsevier Ltd. All rights reserved.
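A minimal sketch of quad-tree-driven control-point placement, under the illustrative assumption that blocks are subdivided where local intensity variation is high; the paper's split criterion and its pivotal/non-pivotal distinction are not reproduced here.

# Hedged sketch: non-uniform control points from a quad-tree subdivision.
import numpy as np

def quadtree_points(img, x0, y0, size, min_size, thresh, points):
    """Recursively subdivide detailed blocks; emit one point per leaf block.
    Assumes a square image whose side is a power of two."""
    block = img[y0:y0 + size, x0:x0 + size]
    if size > min_size and block.std() > thresh:
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            quadtree_points(img, x0 + dx, y0 + dy, h, min_size, thresh, points)
    else:
        points.append((x0 + size // 2, y0 + size // 2))

# Usage: collects dense control points in detailed regions, sparse elsewhere.
points = []
quadtree_points(np.random.rand(256, 256), 0, 0, 256, 16, 0.28, points)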
Abstract:
The design of modulation schemes for the physical-layer network-coded two-way relaying scenario is considered, with a protocol which employs two phases: a multiple access (MA) phase and a broadcast (BC) phase. It was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of MA interference which occurs at the relay during the MA phase, and that all these network coding maps should satisfy a requirement called the exclusive law. We show that every network coding map that satisfies the exclusive law is representable by a Latin Square and, conversely, that this relationship can be used to obtain network coding maps satisfying the exclusive law. The channel fade states for which the minimum distance of the effective constellation at the relay becomes zero are referred to as the singular fade states. For M-PSK modulation (M any power of 2), it is shown that there are (M^2/4 - M/2 + 1)M singular fade states. Also, it is shown that the constraints which the network coding maps should satisfy so that the harmful effects of the singular fade states are removed can be viewed equivalently as partially filled Latin Squares (PFLS). The problem of finding all the required maps is reduced to finding a small set of maps for M-PSK constellations (M any power of 2), obtained by the completion of PFLS. Even though the completability of M x M PFLS using M symbols is an open problem, specific cases where such a completion is always possible are identified and explicit construction procedures are provided. Having obtained the network coding maps, the set of all possible channel realizations (the complex plane) is quantized into a finite number of regions, with a specific network coding map chosen in a particular region. It is shown that the complex plane can be partitioned into two regions: a region in which any network coding map that satisfies the exclusive law gives the same best performance, and a region in which the choice of the network coding map affects the performance. The quantization thus obtained analytically, when specialized to the 4-PSK signal set, coincides with the one obtained using computer search by Koike-Akino et al. Simulation results show that the proposed scheme performs better than conventional exclusive-OR (XOR) network coding and in some cases outperforms the scheme proposed by Koike-Akino et al.
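A minimal sketch of the stated correspondence, assuming the relay map is given as an M x M table f[x_A][x_B] (illustrative code, not from the paper): the exclusive law holds exactly when no value repeats within any row or any column, i.e. when the table is a Latin square.

# Hedged sketch: exclusive law <-> Latin square property of the relay map.
def satisfies_exclusive_law(table):
    """True iff every row and every column of the map contains no repeats."""
    m = len(table)
    rows_ok = all(len(set(row)) == m for row in table)
    cols_ok = all(len({table[r][c] for r in range(m)}) == m for c in range(m))
    return rows_ok and cols_ok

# The XOR map on 4 symbols is a Latin square, hence exclusive-law compliant.
xor_map = [[a ^ b for b in range(4)] for a in range(4)]
assert satisfies_exclusive_law(xor_map)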
Abstract:
This paper presents the design and development of a novel optical vehicle classifier system, based on the interruption of laser beams, that is suitable for use in places with poor transportation infrastructure. The system can estimate the speed, axle count, wheelbase, tire diameter, and lane of motion of a vehicle. The design of the system eliminates the need for careful optical alignment, while the proposed estimation strategies render the estimates insensitive to angular mounting errors and to unevenness of the road. Strategies to estimate vehicular parameters are described, along with the optimization of the geometry of the system to minimize estimation errors due to quantization. The system is subsequently fabricated, and the proposed features of the system are experimentally demonstrated. The relative errors in the estimation of velocity and tire diameter are shown to be within 0.5% and to change by less than 17% for angular mounting errors of up to 30 degrees. In the field, the classifier demonstrates accuracy better than 97.5% and 94%, respectively, in the estimation of the wheelbase and lane of motion, and can classify vehicles with an average accuracy of over 89.5%.
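To make the timing geometry concrete, here is a hedged sketch of how speed and wheelbase follow from beam-interruption time stamps, assuming two parallel beams a known distance apart; the paper's actual estimators additionally compensate for mounting angle and road unevenness.

# Hedged sketch: vehicle parameters from beam-interruption timing.
def speed_and_wheelbase(t_beam1, t_beam2, beam_gap_m):
    """t_beam1, t_beam2 : interruption times (s) at beams 1 and 2,
                          listed in axle order
    beam_gap_m          : separation between the two beams (m)"""
    # Speed from the transit time of the first axle between the two beams.
    v = beam_gap_m / (t_beam2[0] - t_beam1[0])
    # Wheelbase from the inter-axle delay observed at a single beam.
    wheelbase = v * (t_beam1[1] - t_beam1[0])
    return v, wheelbase

# Usage: axle hits at 0 s and 0.3 s on beam 1, 0.05 s and 0.35 s on beam 2,
# beams 0.5 m apart -> 10 m/s speed, 3 m wheelbase.
print(speed_and_wheelbase([0.0, 0.3], [0.05, 0.35], 0.5))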
Abstract:
Super-resolution microscopy has tremendously advanced our understanding of cellular biophysics and biochemistry. Specifically, the 4pi fluorescence microscopy technique stands out because of its axial super-resolution capability. All types of 4pi microscopy techniques work well in conjunction with deconvolution techniques to remove artifacts due to side-lobes. In this regard, we propose a technique based on a spatial filter in a 4pi type-C confocal setup to eliminate these artifacts. Using a special spatial filter, we have reduced the depth of focus. Interference of two similar depth-of-focus beams in a 4pi geometry results in a substantial reduction of side-lobes. Studies show a reduction of side-lobes by 46% and 76% for the single- and two-photon variants, respectively, compared to the 4pi type-C confocal system. This is remarkable considering the resolving capability of the existing 4pi type-C confocal microscopy. Moreover, the main lobe is found to be 150 nm for the proposed spatial filtering technique, as compared to 690 nm for the state-of-the-art confocal system. Reconstruction of experimentally obtained 2PE-4pi data of green fluorescent protein (GFP)-tagged mitochondrial networks shows near elimination of artifacts arising out of side-lobes. The proposed technique may find interesting applications in fluorescence microscopy, nano-lithography, and cell biology. (C) 2013 AIP Publishing LLC.
Abstract:
This paper, for the first time, explores the characteristics of a MOS capacitor controlled by independent double gates, by numerical simulation and analytical modeling, for its possible use in RF circuit design as a varactor. Numerical simulation shows how the quasi-static and non-quasi-static characteristics of the first-gate capacitance can be tuned by the second-gate bias. The effects of body doping and energy quantization are also discussed in this regard. A semi-empirical quasi-static model is also developed using the existing incomplete Poisson solution of independent double-gate transistors. The proposed model, which is valid from accumulation to inversion, is shown to be in excellent agreement with numerical simulation for practical bias conditions.
Abstract:
The design of modulation schemes for the physical-layer network-coded two-way wireless relaying scenario is considered. It was observed by Koike-Akino et al. for the two-way relaying scenario that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of multiple access interference which occurs at the relay during the MA phase, and that all these network coding maps should satisfy a requirement called the exclusive law. We extend this approach to an Accumulate-Compute and Forward protocol which employs two phases: a Multiple Access (MA) phase consisting of two channel uses with independent messages in each channel use, and a Broadcast (BC) phase having one channel use. Assuming that the two users transmit points from the same 4-PSK constellation, every such network coding map that satisfies the exclusive law can be represented by a Latin Square of side 16, and conversely, this relationship can be used to obtain the network coding maps satisfying the exclusive law. Two methods of obtaining this network coding map to be used at the relay are discussed. Using the structural properties of the Latin Squares for a given set of parameters, the problem of finding all the required maps is reduced to finding a small set of maps. Having obtained all the Latin Squares, the set of all possible channel realizations is quantized, with the Latin Square that optimizes the performance chosen in each region. The quantization thus obtained is shown to be the same as the one obtained in [7] for the 2-stage bidirectional relaying.
Abstract:
Imaging thick specimens at a large penetration depth is a challenge in biophysics and material science. Refractive index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread function modeling with correction for spherical aberration, and second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encased in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, whereas the side-lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 μm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging techniques for rendering high resolution beyond the 150 μm mark. (C) 2013 AIP Publishing LLC.
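A minimal sketch of the maximum-likelihood reconstruction family referred to here, in its classic Richardson-Lucy form for Poisson noise; the paper additionally uses a depth-dependent, aberration-corrected PSF, which this toy version does not model.

# Hedged sketch: Richardson-Lucy ML deconvolution under Poisson noise.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Iteratively refine an estimate so that its blur matches the data."""
    psf_mirror = psf[::-1, ::-1]
    est = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = image / np.maximum(blurred, eps)   # data / model
        est *= fftconvolve(ratio, psf_mirror, mode='same')
    return est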