62 results for Voice quality
Abstract:
We analyze the performance of an SIR-based admission control strategy in cellular CDMA systems with both voice and data traffic. Most studies in the current literature that estimate CDMA system capacity with both voice and data traffic do not take signal-to-interference ratio (SIR) based admission control into account. In this paper, we present an analytical approach to evaluate the outage probability for voice traffic, the average system throughput, and the mean delay for data traffic in a voice/data CDMA system which employs SIR-based admission control. We show that for a data-only system, an improvement of about 25% in both the Erlang capacity and the mean delay performance is achieved with SIR-based admission control as compared to code-availability-based admission control. For a mixed voice/data system with 10 Erlangs of voice traffic, the improvement in the mean delay performance for data is about 40%. Also, for a mean delay of 50 ms with 10 Erlangs of voice traffic, the data Erlang capacity improves by about 9%.
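The Erlang capacity figures quoted above build on classical teletraffic analysis. As a hedged illustration only (this is the textbook Erlang-B blocking recursion, not the paper's SIR-based admission-control model), the conventional way Erlang capacity is quantified can be sketched as:

```python
# Illustrative sketch: the classical Erlang-B blocking probability,
# computed with the standard numerically stable recursion
#   B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)),  B(E, 0) = 1.
# This is NOT the paper's analysis; it only shows how "Erlang capacity"
# is conventionally measured against a blocking-probability target.

def erlang_b(traffic_erlangs: float, servers: int) -> float:
    """Blocking probability for the given offered load on `servers` channels."""
    b = 1.0  # B(E, 0) = 1
    for m in range(1, servers + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

# Example: 10 Erlangs offered to 15 channels.
p_block = erlang_b(10.0, 15)
print(p_block)
```

A system's Erlang capacity at a given grade of service is then the largest offered load for which this blocking probability stays below the target.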
Abstract:
A modeling framework is presented in this paper, integrating hydrologic scenarios projected from a General Circulation Model (GCM) with a water quality simulation model to quantify the future expected risk. Statistical downscaling with a Canonical Correlation Analysis (CCA) is carried out to develop the future scenarios of hydro-climate variables starting with simulations provided by a GCM. A Multiple Logistic Regression (MLR) is used to quantify the risk of Low Water Quality (LWQ) corresponding to a threshold quality level, by considering the streamflow and water temperature as explanatory variables. An Imprecise Fuzzy Waste Load Allocation Model (IFWLAM) presented in an earlier study is then used to develop adaptive policies to address the projected water quality risks. Application of the proposed methodology is demonstrated with the case study of Tunga-Bhadra river in India. The results showed that the projected changes in the hydro-climate variables tend to diminish DO levels, thus increasing the future risk levels of LWQ. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
We study the performance of cognitive (secondary) users in a cognitive radio network which uses a channel whenever the primary users are not using the channel. The usage of the channel by the primary users is modelled by an ON-OFF renewal process. The cognitive users may be transmitting data using TCP connections and voice traffic. The voice traffic is given priority over the data traffic. We theoretically compute the mean delay of TCP and voice packets and also the mean throughput of the different TCP connections. We compare the theoretical results with simulations.
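The primary-user occupancy model described above can be made concrete with a small simulation. As a hedged sketch (exponential ON/OFF periods are an illustrative assumption; the paper models usage as a general ON-OFF renewal process), one can estimate the fraction of time the channel is free for secondary users:

```python
# Sketch: simulate primary-user channel occupancy as an ON-OFF renewal
# process and estimate the long-run fraction of time available to the
# cognitive (secondary) users. Exponential holding times are an
# illustrative assumption, not a restriction of the paper's model.
import random

def free_fraction(mean_on: float, mean_off: float,
                  cycles: int = 100_000, seed: int = 1) -> float:
    """Fraction of time the channel is OFF (free) over many ON-OFF cycles."""
    rng = random.Random(seed)
    on_time = sum(rng.expovariate(1.0 / mean_on) for _ in range(cycles))
    off_time = sum(rng.expovariate(1.0 / mean_off) for _ in range(cycles))
    return off_time / (on_time + off_time)

# With mean ON = 1 s and mean OFF = 3 s, roughly 75% of the time is free.
print(free_fraction(1.0, 3.0))
```

By the renewal-reward theorem this estimate converges to mean_off / (mean_on + mean_off), the capacity share available to TCP and voice traffic of the secondary users.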
Abstract:
We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke width variance, pulse width ratio, transient pixel density, entropy and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to estimate, a priori, the expected performance of the OCR. The suggested measures have been evaluated on many document images spanning different scripts. The quality of a document image is manually annotated by users to create a ground truth. The idea is to correlate the values of the measures with the user-annotated data. If the calculated measure matches the annotated description, the metric is accepted; otherwise it is rejected. Of the set of metrics proposed, some are accepted and the rest are rejected. We have defined metrics that are easy to estimate. The metrics proposed in this paper are based on the feedback of home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the scripts, and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Sometimes, a document image containing broken characters is rated as good by the evaluated metrics; this remains an unsolved challenge. The proposed measures work on grayscale document images and fail to provide reliable information on binarized document images.
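One of the measures listed above, entropy, is typically computed from the image's intensity histogram. As an illustrative sketch (the paper's exact metric definitions are not reproduced here), a histogram-entropy measure for an 8-bit grayscale image looks like:

```python
# Sketch of an entropy-style quality measure: Shannon entropy (in bits)
# of the intensity histogram of an 8-bit grayscale document image.
# This is a generic formulation given as an assumption; the paper's
# precise metric definitions may differ.
import math

def grayscale_entropy(pixels: list[int]) -> float:
    """Shannon entropy (bits) of a list of 8-bit grayscale pixel values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c > 0)

# A flat patch has zero entropy; a patch using all 256 levels equally
# reaches the maximum of 8 bits.
print(grayscale_entropy([128] * 100))       # -> 0.0
print(grayscale_entropy(list(range(256))))  # -> 8.0
```

Such per-image scalar measures are what get correlated against the user-annotated ground truth to decide whether a metric is accepted.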
Abstract:
Seasonal studies were carried out at 21 stations, comprising three zones of the Cochin Estuary, to assess organic matter quality and trophic status. The hydrographical parameters showed significant seasonal variations, and nutrients and chlorophylls were generally higher during the monsoon season. However, chemical contamination, along with the seasonal limitations of light and nitrogen, imposed restrictions on primary production, and as a result mesotrophic conditions generally prevailed in the water column. The nutrient stoichiometries and delta C-13 values of surficial sediments indicated a significant allochthonous contribution of organic matter. Irrespective of the higher content of total organic matter, the labile organic matter was very low. Dominance of carbohydrates over lipids and proteins indicated the lower nutritive value of the organic matter, and its aged and refractory nature. This, along with the higher amount of phytodetritus and the low algal contribution to the biopolymeric carbon, corroborated the dominance of allochthonous organic matter and the heterotrophic nature of the system. The spatial and seasonal variations of labile organic components could effectively substantiate the observed shift in the productivity pattern. An alternative ratio, lipids to tannins and lignins, was proposed to ascertain the relative contribution of allochthonous organic matter in the estuary. This study confirmed the efficiency of an integrated biogeochemical approach to establish zones with distinct benthic trophic status associated with different degrees of natural and anthropogenic input. Nevertheless, our results also suggest that the biochemical composition alone could lead to erroneous conclusions for regions that receive enormous amounts of anthropogenic inputs.
Abstract:
In this paper an optical code-division multiple-access (O-CDMA) packet network is considered, which offers inherent security in the access networks. The application of O-CDMA to multimedia transmission (voice, data, and video) is investigated. The simultaneous transmission of various services is achieved by assigning each user unique multiple code signatures. Thus, by applying a parallel mapping technique, we achieve multi-rate services. A random access protocol is proposed here, in which all distinct codes are used for packet transmission. The codes, Optical Orthogonal Codes (OOC), or 1D codes, and Wavelength/Time Single-Pulse-per-Row (W/T SPR), or 2D codes, are analyzed. These 1D and 2D codes with varied weight are used to differentiate the Quality of Service (QoS). The theoretical bit error probability corresponding to the quality of each service is established for 1D and 2D codes in the receiver-noiseless case and compared. The results show that QoS in multimedia transmission is better with 2D codes than with 1D codes.
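For the 1D (OOC) case, a commonly used closed-form upper bound on the noiseless-receiver bit error probability exists in the literature. The sketch below states that classical chip-synchronous approximation as an assumption for illustration; it is not copied from the paper, and the parameter values are hypothetical:

```python
# Hedged sketch: a classical upper bound on the bit-error probability of
# 1D optical orthogonal codes (OOCs) with a noiseless receiver, K
# simultaneous users, code length F, code weight w, and decision
# threshold equal to w. Given as an illustrative assumption only.
from math import comb

def ooc_ber(K: int, F: int, w: int) -> float:
    """Upper bound on BER for an OOC system under multiple-access interference."""
    q = w * w / (2.0 * F)  # prob. that one interferer contributes a hit
    return 0.5 * sum(comb(K - 1, i) * q**i * (1 - q)**(K - 1 - i)
                     for i in range(w, K))

# Heavier-weight codes lower the error floor for a fixed load, which is
# the mechanism used above to differentiate QoS between services.
print(ooc_ber(K=10, F=1000, w=4))
```

Under this bound, assigning a higher-weight code to a service directly buys it a lower bit error probability, i.e. a better QoS class.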
Abstract:
Epoch is defined as the instant of significant excitation within a pitch period of voiced speech. Epoch extraction continues to attract the interest of researchers because of its significance in speech analysis. Existing high-performance epoch extraction algorithms require either dynamic programming techniques or a priori information about the average pitch period. An algorithm without such requirements is proposed based on the integrated linear prediction residual (ILPR), which resembles the voice source signal. The half-wave rectified and negated ILPR (or the Hilbert transform of the ILPR) is used as the pre-processed signal. A new non-linear temporal measure named the plosion index (PI) is proposed for detecting `transients' in the speech signal. An extension of the PI, called the dynamic plosion index (DPI), is applied to the pre-processed signal to estimate the epochs. The proposed DPI algorithm is validated using six large databases which provide simultaneous EGG recordings. Creaky and singing voice samples are also analyzed. The algorithm has been tested for its robustness in the presence of additive white and babble noise and on simulated telephone-quality speech. The performance of the DPI algorithm is found to be comparable to or better than five state-of-the-art techniques for the experiments considered.
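The intuition behind a plosion-index-style measure is a ratio of the instantaneous sample magnitude to the average magnitude of a preceding window, which spikes at abrupt excitations. The sketch below is a minimal illustration under that assumption; the window lengths and offset are hypothetical parameters, and the paper should be consulted for the exact PI and DPI definitions:

```python
# Minimal sketch of a plosion-index-style transient measure: the sample
# magnitude at n0 divided by the average magnitude of the m1 samples
# preceding n0, skipping the m2 samples immediately before it.
# Parameter values are illustrative assumptions, not the paper's.

def plosion_index(x: list[float], n0: int, m1: int = 16, m2: int = 4) -> float:
    """Ratio of |x[n0]| to the mean magnitude of a preceding window."""
    window = [abs(v) for v in x[n0 - m2 - m1 : n0 - m2]]
    avg = sum(window) / len(window) or 1e-12  # guard against exact silence
    return abs(x[n0]) / avg

# A sudden burst after a low-level stretch yields a large PI value.
signal = [0.01] * 40 + [1.0]
print(plosion_index(signal, 40))  # -> 100.0
```

Scanning such a measure across the pre-processed signal is what allows transients, and hence candidate epochs, to stand out without any pitch-period prior.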
Abstract:
This paper proposes an automatic acoustic-phonetic method for estimating voice-onset time of stops. This method requires neither transcription of the utterance nor training of a classifier. It makes use of the plosion index for the automatic detection of burst onsets of stops. Having detected the burst onset, the onset of the voicing following the burst is detected using the epochal information and a temporal measure named the maximum weighted inner product. For validation, several experiments are carried out on the entire TIMIT database and two of the CMU Arctic corpora. The performance of the proposed method compares well with three state-of-the-art techniques. (C) 2014 Acoustical Society of America
Abstract:
Vertically aligned zinc oxide nanorods (ZnO NRs) were synthesized on Kapton flexible sheets using a simple and cost-effective three-step process (electrochemical seeding, annealing under ambient conditions, and chemical solution growth). Scanning electron microscopy studies reveal that ZnO NRs grown on seed layers, developed by electrochemical deposition at a negative potential of 1.5 V over a duration of 2.5 min and annealed at 200 degrees C for 2 h, exhibit uniform morphology and good chemical stoichiometry. Transmission electron microscopy analyses show that the as-grown ZnO NRs have a single-crystalline hexagonal structure with a preferential growth direction of < 001 >. Highly flexible p-n junction diodes fabricated using a p-type conductive polymer exhibited excellent diode characteristics even in the folded state.
Abstract:
Amorphous hydrogenated silicon (a-Si:H) is a well-known material in the global semiconductor industry. The quality of a-Si:H films is generally determined by the silicon-hydrogen bonding configuration (Si-H-x, x=1,2) and hydrogen concentration (C-H). These quality aspects are correlated with plasma parameters such as the ion density (N-i) and electron temperature (T-e) of DC, pulsed DC (PDC) and RF plasmas during the sputter-deposition of a-Si:H thin films. It was found that N-i and T-e play a major role in deciding the Si-H-x bonding configuration and the C-H value in a-Si:H films. We observed a trend in the variation of Si-H and Si-H-2 bonding configurations, and of C-H, in the films deposited by the DC, pulsed DC and RF reactive sputtering techniques. Ion density and electron energy are highest in RF plasma, followed by PDC and then DC plasma. Electrons with two different energies were observed in all the plasmas. At a given hydrogen partial pressure, RF-deposited films have the highest C-H, followed by PDC- and then DC-deposited films. The maximum energy that can be acquired by the ions was found to be higher in RF plasma. The floating potential (V-f) is more negative in DC plasma, whereas the plasma potential (V-p) is more positive in RF plasma. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
The paper explores the synthesis of oxide-free nanoparticles of Ag and Cu through laser ablation of pure targets under aqueous medium, and the tuning of their quality and size through addition of polyvinylpyrrolidone (PVP) to the medium. The size distribution of the nanoparticles narrows from 37 +/- 30 nm and 13 +/- 5 nm to 32 +/- 12 nm and 4 +/- 1 nm for Ag and Cu, respectively, as the PVP concentration changes from 0.00 to 0.02 M. Irregularly shaped Ag particles with an Ag2O phase and Cu-Cu2O core-shell particles form without the addition of PVP, while the oxide layer is absent with 0.02 M PVP. The recent understanding of the mechanism of particle formation during laser ablation under liquid medium allows us to rationalize our observations.
Abstract:
We consider optimal power allocation policies for a single server, multiuser system. The power is consumed in transmission of data only. The transmission channel may experience multipath fading. We obtain very efficient, low computational complexity algorithms which minimize power and ensure stability of the data queues. We also obtain policies when the users may have mean delay constraints. If the power required is a linear function of rate then we exploit linearity and obtain linear programs with low complexity.
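The linear-rate special case mentioned above admits a particularly simple structure. As a hedged sketch (the coefficients, caps, and the single-constraint setting are illustrative assumptions, not the paper's formulation), when the power of user i is p_i = a_i * r_i, minimizing total power subject to a total-rate requirement and per-user rate caps is a linear program whose optimum is attained greedily, by loading the cheapest users first:

```python
# Hedged sketch: minimise sum(a_i * r_i) subject to sum(r_i) = total_rate
# and 0 <= r_i <= r_max[i]. For this single-constraint LP the greedy
# allocation (cheapest marginal power a_i first) is optimal.
# All numerical values below are illustrative assumptions.

def min_power_allocation(a, r_max, total_rate):
    """Per-user rates minimising total power under a linear power-rate model."""
    rates = [0.0] * len(a)
    remaining = total_rate
    for i in sorted(range(len(a)), key=lambda i: a[i]):
        rates[i] = min(r_max[i], remaining)
        remaining -= rates[i]
        if remaining <= 0:
            break
    return rates

# The two cheapest users absorb the load before the expensive one is used.
print(min_power_allocation(a=[1.0, 3.0, 2.0], r_max=[4.0, 4.0, 4.0],
                           total_rate=6.0))  # -> [4.0, 0.0, 2.0]
```

The full problem in the paper additionally handles multipath fading and mean-delay constraints, which this toy LP does not capture.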
Abstract:
A characterization of the voice source (VS) signal by the pitch synchronous (PS) discrete cosine transform (DCT) is proposed. With the integrated linear prediction residual (ILPR) as the VS estimate, the PS DCT of the ILPR is evaluated as a feature vector for speaker identification (SID). On TIMIT and YOHO databases, using a Gaussian mixture model (GMM)-based classifier, it performs on par with existing VS-based features. On the NIST 2003 database, fusion with a GMM-based classifier using MFCC features improves the identification accuracy by 12% in absolute terms, proving that the proposed characterization has good promise as a feature for SID studies. (C) 2015 Acoustical Society of America
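The core transform in the characterization above is the DCT of a pitch-synchronous frame. As a self-contained sketch (implemented directly from the DCT-II definition; the frame content and length are illustrative assumptions, and real systems would use an optimized FFT-based DCT):

```python
# Sketch: DCT-II of a pitch-synchronous frame, as would be applied to the
# ILPR frames described above. Written from the definition
#   X[k] = sum_t x[t] * cos(pi * k * (2t + 1) / (2N))
# for self-containment; frame values below are illustrative.
import math

def dct_ii(frame: list[float]) -> list[float]:
    """Unnormalised DCT-II coefficients of a frame."""
    n = len(frame)
    return [sum(frame[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n))
            for k in range(n)]

# The k = 0 coefficient is the (unscaled) frame sum; higher coefficients
# capture the shape of the voice-source pulse within the pitch period.
coeffs = dct_ii([0.0, 1.0, 0.5, -0.2])
print(coeffs)
```

Stacking such per-period coefficient vectors yields the feature stream that a GMM-based classifier can model for speaker identification.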
Abstract:
We demonstrate here a powerful scalable technology to continuously synthesize high-quality CdSe quantum dots (QDs) in supercritical hexane. Using a low-cost, highly thermally stable Cd precursor, cadmium deoxycholate, the continuous synthesis is performed in 400 mu m ID stainless steel capillaries, resulting in CdSe QDs having sharp full-widths-at-half-maximum (23 nm) and high photoluminescence quantum yields (45-55%). Transmission electron microscopy images show a narrow particle size distribution (sigma <= 5%) with well-defined crystal lattices. Using two different synthesis temperatures (250 degrees C and 310 degrees C), it was possible to obtain the zinc blende and wurtzite crystal structures of CdSe QDs, respectively. This synthetic approach achieves substantial production rates of up to 200 mg of QDs per hour, depending on the targeted size, and could easily be scaled to grams per hour.
Abstract:
In this paper we present a depth-guided photometric 3D reconstruction method that works solely with a depth camera like the Kinect. Existing methods that fuse depth with normal estimates use an external RGB camera to obtain photometric information and treat the depth camera as a black box that provides a low-quality depth estimate. Our contributions to such methods are twofold. Firstly, instead of using an extra RGB camera, we use the infra-red (IR) camera of the depth camera system itself to directly obtain high-resolution photometric information. We believe that ours is the first method to use an IR depth camera system in this manner. Secondly, photometric methods applied to complex objects result in numerous holes in the reconstructed surface due to shadows and self-occlusions. To mitigate this problem, we develop a simple and effective multiview reconstruction approach that fuses depth and normal information from multiple viewpoints to build a complete, consistent and accurate 3D surface representation. We demonstrate the efficacy of our method by generating high-quality 3D surface reconstructions of some complex 3D figurines.