Abstract:
Lead telluride (PbTe) with a bismuth secondary phase embedded in the bulk has been prepared by the matrix encapsulation technique. X-Ray Diffraction results indicated crystalline PbTe, while Rietveld analysis showed that Bi did not substitute at either the Pb or the Te site, which was further confirmed by Raman and X-Ray Photoelectron Spectroscopy. Scanning Electron Microscopy showed the expected presence of a secondary phase, while Energy Dispersive Spectroscopy results showed a slight deficiency of tellurium in the PbTe matrix, which might have occurred during synthesis owing to the higher vapor pressure of Te. Transmission Electron Microscopy did not reveal any nanometer-sized Bi phase. The Seebeck coefficient (S) and electrical conductivity (sigma) were measured from room temperature to 725 K. A decrease in S and sigma with increasing Bi content indicated increased scattering of electrons at PbTe-Bi interfaces, along with a possible electron-acceptor role of the Bi secondary phase. An overall decrease in the power factor was thus observed. The thermal conductivity, measured from 400 K to 725 K, was lower at the starting temperature with increasing Bi concentration and almost comparable to that of PbTe at higher temperatures, indicating a more important role of electrons than of phonons at the PbTe-Bi interfaces. Still, a reasonable zT of 0.8 at 725 K was achieved for undoped PbTe, but no improvement was found for bismuth-added samples with micrometer-sized inclusions. (C) 2013 American Institute of Physics. http://dx.doi.org/10.1063/1.4796148
Abstract:
Quaternary chalcogenide compounds Cu2+xZnSn1-xSe4 (0 <= x <= 0.15) were prepared by solid-state synthesis. Rietveld refinements of powder X-ray diffraction (XRD) data, combined with Electron Probe Micro Analysis (EPMA, with Wavelength Dispersive Spectroscopy, WDS) and Raman spectra of all samples, confirmed the stannite structure (Cu2FeSnS4-type) as the main phase. In addition to the main phase, small amounts of secondary phases such as ZnSe, CuSe and SnSe were observed. Transport properties of all samples were measured as a function of temperature in the range from 300 K to 720 K. The electrical resistivity of all samples decreases with increasing Cu content, except for Cu2.1ZnSn0.9Se4, most likely due to a higher content of the ZnSe phase. All samples showed positive Seebeck coefficients, indicating that holes are the majority charge carriers. The thermal conductivity of the doped samples was higher than that of Cu2ZnSnSe4, which may be due to the larger electronic contribution and the presence of the ZnSe phase in the doped samples. The maximum zT = 0.3 at 720 K occurs for Cu2.05ZnSn0.95Se4, for which a high-pressure torsion treatment resulted in an enhancement of zT by 30% at 625 K. Copyright 2013 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. http://dx.doi.org/10.1063/1.4794733
Abstract:
We analyze the spectral zero-crossing rate (SZCR) properties of transient signals and show that the SZCR contains accurate localization information about the transient. For a train of pulses containing transient events, the SZCR computed on a sliding-window basis is useful in locating the impulse locations accurately. We present the properties of the SZCR on standard stylized signal models and then show how it may be used to estimate the epochs in speech signals. We also present comparisons with some state-of-the-art techniques that are based on the group-delay function. Experiments on real speech show that the proposed SZCR technique is better than other group-delay-based epoch detectors. In the presence of noise, a comparison with the zero-frequency filtering (ZFF) technique and the Dynamic Programming Projected Phase-Slope Algorithm (DYPSA) showed that the performance of the SZCR technique is better than that of DYPSA and inferior to that of ZFF. For highpass-filtered speech, where ZFF performance suffers drastically, the identification rates of the SZCR are better than those of DYPSA.
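A minimal sketch of the sliding-window SZCR computation, assuming the rate is defined as the fraction of sign changes in the real part of the short-time DFT (the paper's exact windowing, normalization and refinement steps may differ):

    import numpy as np

    def szcr(frame, n_fft=1024):
        # Spectral zero-crossing rate of one frame: fraction of sign changes
        # in the real part of its DFT (illustrative definition).
        spectrum = np.fft.rfft(frame, n_fft).real
        return np.count_nonzero(np.diff(np.sign(spectrum))) / len(spectrum)

    def sliding_szcr(x, frame_len=160, hop=20):
        # SZCR computed on a sliding-window basis over the signal x.
        starts = range(0, len(x) - frame_len + 1, hop)
        return np.array([szcr(x[s:s + frame_len]) for s in starts])

Peaks (or dips) of the sliding SZCR trace would then be used to mark candidate transient/epoch locations.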
Abstract:
The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean square error (MSE). However, the MSE can never be computed in practice, because it depends on the unknown clean speech. Therefore, it becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as the risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function that is a linear expansion of thresholds/bases (LET). We show that the first-order case of SURE-LET results in a Wiener-filter-type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
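The Wiener-type character of the first-order, frequency-dependent SURE-LET solution can be illustrated with a short per-bin calculation (a sketch assuming additive white Gaussian noise of variance \sigma^2 in each DFT bin; the paper's exact formulation may differ). For the linear per-bin estimator \hat{X}_k = a_k Y_k, Stein's lemma gives the unbiased risk estimate

    \mathrm{SURE}(a_k) = (a_k - 1)^2 |Y_k|^2 + 2\sigma^2 a_k - \sigma^2,

and setting its derivative with respect to a_k to zero yields

    a_k = 1 - \sigma^2 / |Y_k|^2,

which is an empirical Wiener-type gain applied independently in each frequency bin.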
Abstract:
We address the problem of signal reconstruction from the Fourier transform magnitude spectrum. The problem arises in many real-world scenarios where magnitude-only measurements are possible but a complex-valued signal must be reconstructed from those measurements. We present some new general results in this context and show that the previously known results on minimum-phase rational transfer functions, and on the recoverability of minimum-phase functions from the magnitude spectrum, are special cases of the results reported in this paper. Some simulation results are also provided to demonstrate the practical feasibility of the reconstruction methodology.
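The classical special case mentioned above, recovering a minimum-phase signal from its magnitude spectrum alone, can be sketched with the well-known real-cepstrum folding method (this illustrates only that special case, not the paper's more general results):

    import numpy as np

    def minimum_phase_from_magnitude(mag):
        # `mag` is a half-spectrum magnitude, e.g. np.abs(np.fft.rfft(x, n)).
        n = 2 * (len(mag) - 1)                       # underlying (even) FFT length
        log_mag = np.log(np.maximum(mag, 1e-12))     # guard against log(0)
        cepstrum = np.fft.irfft(log_mag, n)          # real cepstrum
        lifter = np.zeros(n)
        lifter[0] = 1.0                              # keep c[0]
        lifter[1:n // 2] = 2.0                       # fold anticausal part onto causal part
        lifter[n // 2] = 1.0                         # Nyquist term
        min_phase_spec = np.exp(np.fft.rfft(cepstrum * lifter, n))
        return np.fft.irfft(min_phase_spec, n)

    # Example: the magnitude spectrum of any signal yields its minimum-phase counterpart.
    x = np.random.default_rng(1).standard_normal(64)
    x_min = minimum_phase_from_magnitude(np.abs(np.fft.rfft(x, 64)))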
Abstract:
Bilateral filters perform edge-preserving smoothing and are widely used for image denoising. The denoising performance is sensitive to the choice of the bilateral filter parameters. We propose an optimal parameter selection for bilateral filtering of images corrupted by Poisson noise. We employ the Poisson Unbiased Risk Estimate (PURE), which is an unbiased estimate of the mean squared error (MSE). It does not require a priori knowledge of the ground truth and is therefore useful in practical scenarios where there is no access to the original image. Experimental results show that the denoising quality obtained with PURE-optimal bilateral filters is almost indistinguishable from that of Oracle-MSE-optimal bilateral filters.
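A toy illustration of PURE-based parameter selection, using a 1-D bilateral filter and a brute-force evaluation of the Poisson identity E[x_i f_i(y)] = E[y_i f_i(y - e_i)] (the paper works with 2-D images and, presumably, a far more efficient PURE evaluation; the filter, signal and parameter grid below are purely illustrative):

    import numpy as np

    def bilateral_1d(y, sigma_s, sigma_r, radius=5):
        # Plain 1-D bilateral filter (toy version for this illustration).
        out = np.empty_like(y, dtype=float)
        pad = np.pad(y.astype(float), radius, mode='reflect')
        k = np.arange(-radius, radius + 1)
        spatial = np.exp(-k**2 / (2 * sigma_s**2))
        for i in range(len(y)):
            patch = pad[i:i + 2 * radius + 1]
            w = spatial * np.exp(-(patch - y[i])**2 / (2 * sigma_r**2))
            out[i] = (w * patch).sum() / w.sum()
        return out

    def pure(y, denoise):
        # Unbiased estimate of ||denoise(y) - x||^2, up to the constant ||x||^2,
        # via E[x_i f_i(y)] = E[y_i f_i(y - e_i)] evaluated by brute force.
        f_y = denoise(y)
        cross = 0.0
        for i in np.nonzero(y)[0]:        # terms with y_i = 0 vanish
            y_minus = y.copy()
            y_minus[i] -= 1
            cross += y[i] * denoise(y_minus)[i]
        return (f_y**2).sum() - 2 * cross

    # Grid-search the range parameter sigma_r by minimizing PURE (no ground truth used).
    rng = np.random.default_rng(0)
    clean = 30 + 15 * np.sign(np.sin(np.linspace(0, 4 * np.pi, 200)))
    noisy = rng.poisson(clean).astype(float)
    scores = {s: pure(noisy, lambda z, s=s: bilateral_1d(z, 3.0, s)) for s in (2, 5, 10, 20, 40)}
    print("PURE-optimal sigma_r:", min(scores, key=scores.get))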
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel that we employ is different from the one used in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF and, in some cases, better.
Abstract:
We address the problem of speech enhancement using a risk-estimation approach. In particular, we propose the use of Stein's unbiased risk estimator (SURE) for solving the problem. The need for a suitable finite-sample risk estimator arises because the actual risks invariably depend on the unknown ground truth. We consider the popular mean-squared error (MSE) criterion first, and then compare it against the perceptually motivated Itakura-Saito (IS) distortion, by deriving unbiased estimators of the corresponding risks. We use a generalized SURE (GSURE) development, recently proposed by Eldar for the MSE. We consider dependent observation models from the exponential family with an additive noise model, and derive an unbiased estimator for the risk corresponding to the IS distortion, which is non-quadratic. This serves to address the speech enhancement problem in a more general setting. Experimental results illustrate that the IS metric is efficient in suppressing musical noise, which affects the MSE-enhanced speech. However, in terms of the global signal-to-noise ratio (SNR), the minimum-MSE solution gives better results.
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period-based methods in terms of throughput performance.
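The per-slot transmission probability in Bianchi's saturated model, on which the above analysis builds, comes from a simple fixed point; a minimal sketch of that classical computation (not the paper's extension coupling transmission slots, physical slots and the scan duration) is:

    from scipy.optimize import brentq

    def bianchi_tau(n, W=32, m=5):
        # Bianchi's saturated DCF model: solve the fixed point
        #   tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)**m)),  p = 1 - (1-tau)**(n-1),
        # for n stations, minimum contention window W and m backoff stages.
        def f(tau):
            p = 1.0 - (1.0 - tau) ** (n - 1)
            return tau - 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        return brentq(f, 1e-9, 1 - 1e-9)

    print(bianchi_tau(10))   # roughly 0.037 for 10 saturated stations with W = 32, m = 5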
Abstract:
Fast and efficient channel estimation is key to achieving high data rate performance in mobile and vehicular communication systems, where the channel is fast time-varying. To this end, this work proposes and optimizes channel-dependent training schemes for reciprocal Multiple-Input Multiple-Output (MIMO) channels with beamforming (BF) at the transmitter and receiver. First, assuming that Channel State Information (CSI) is available at the receiver, a channel-dependent Reverse Channel Training (RCT) signal is proposed that enables efficient estimation of the BF vector at the transmitter with a minimum training duration of only one symbol. In contrast, conventional orthogonal training requires a minimum training duration equal to the number of receive antennas. A tight approximation to the capacity lower bound on the system is derived, which is used as a performance metric to optimize the parameters of the RCT. Next, assuming that CSI is available at the transmitter, a channel-dependent forward-link training signal is proposed and its power and duration are optimized with respect to an approximate capacity lower bound. Monte Carlo simulations illustrate the significant performance improvement offered by the proposed channel-dependent training schemes over the existing channel-agnostic orthogonal training schemes.
Abstract:
Transient signals such as plosives in speech or castanets in audio do not have a specific modulation or periodic structure in the time domain. However, in the spectral domain they exhibit a prominent modulation structure, which is a direct consequence of their narrow time localization. Based on this observation, a spectral-domain AM-FM model for transients is proposed. The spectral AM-FM model is built starting from real spectral zero-crossings. The AM and FM correspond to the spectral envelope (SE) and group delay (GD), respectively. Taking into account the modulation structure and spectral continuity, a local polynomial regression technique is proposed to estimate the GD function from the real spectral zeros. The SE is estimated based on the phase function computed from the estimated GD. Since the GD estimation is parametric, the degree of smoothness can be controlled directly. Simulation results based on synthetic transient signals generated using a beta density function are presented to analyze the noise robustness of the SEGD model. Three specific applications are considered: (1) SEGD-based modeling of castanet sounds; (2) appropriateness of the model for transient compression; and (3) determining glottal closure instants in speech using a short-time SEGD model of the linear prediction residue.
Abstract:
Metal-oxide-semiconductor capacitors based on titanium dioxide (TiO2) gate dielectrics were prepared by the RF magnetron sputtering technique. The deposited films were post-annealed in air for 1 hour at temperatures in the range 773-1173 K. The effect of the annealing temperature on the structural properties of the TiO2 films was investigated by X-ray diffraction and Raman spectroscopy, the surface morphology was studied by atomic force microscopy (AFM), and the electrical properties of the Al/TiO2/p-Si structure were measured by recording capacitance-voltage and current-voltage characteristics. The as-deposited films and the films annealed at temperatures lower than 773 K formed in the anatase phase, while those annealed at temperatures higher than 973 K consisted of mixtures of the rutile and anatase phases. FTIR analysis revealed that, in the case of films annealed at 1173 K, an interfacial layer had formed, thereby reducing the dielectric constant. The dielectric constant of the as-deposited films was 14 and increased from 25 to 50 as the annealing temperature was raised from 773 to 973 K. The leakage current density of the as-deposited films was 1.7 x 10^-5 A/cm^2 and decreased from 4.7 x 10^-6 to 3.5 x 10^-9 A/cm^2 as the annealing temperature was raised from 773 to 1173 K. The electrical conduction in the Al/TiO2/p-Si structures was studied on the basis of Schottky emission, Poole-Frenkel emission and Fowler-Nordheim tunnelling plots. The effect of the structural changes on the current-voltage and capacitance-voltage characteristics of the Al/TiO2/p-Si capacitors is also discussed.
Abstract:
We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method for efficiently solving the problem. We consider the sampling of a stream of 2-D Dirac impulses and a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel. By causality in 2-D, we mean that the function has its support restricted to the first quadrant. The advantage of using a multichannel sampling method with a causal exponential sampling kernel is that standard annihilating-filter or root-finding algorithms are not required. Further, the proposed method admits an inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
Abstract:
The phenomenon of fatigue is commonly observed in the majority of concrete structures, and it is important to model it mathematically in order to predict their remaining life. An energy approach within the framework of thermodynamics is adopted in this research, wherein the dissipative phenomenon is described by a dissipation potential. An analytical expression for the dissipation potential is derived using the concepts of dimensional analysis and self-similarity, leading to a fatigue crack propagation model for concrete. The model is validated using available experimental results. Through a sensitivity analysis, the hierarchy of importance of the different parameters is highlighted.
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual-information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse-signal setup, and hence they are applicable to other sparse-signal-based applications such as compressive sensing as well.
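A quick back-of-the-envelope comparison of the two scalings quoted above, with the hidden constants in both O(.) bounds simply set to 1 so that only the order-wise behaviour is meaningful:

    import math

    def test_counts(N, K, L):
        # Order-wise number of tests for the two approaches (constants omitted).
        direct = K * (L - 1) / (N - K)       # identify the L healthy items directly
        classical = K * math.log(N / K)      # classical group testing for the K defectives first
        return direct, classical

    # Example: N = 10000 items, K = 100 defective, L = 50 healthy items wanted.
    print(test_counts(10000, 100, 50))       # about 0.5 vs about 460 (order-wise)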