30 results for low rate speech coding
Abstract:
To model strength degradation due to low cycle fatigue, at least three different approaches can be considered. One possibility is based on the formulation of a new free energy function and damage energy release rate, as proposed by Ju (1989). The second approach uses the notion of bounding surface, introduced in cyclic plasticity by Dafalias and Popov (1975). From this concept, some models have been proposed to quantify damage in concrete or RC (Suaris et al. 1990). The model proposed by the author to include fatigue effects is based essentially on Marigo (1985) and can be included in this approach.
Abstract:
Damage models based on Continuum Damage Mechanics (CDM) explicitly include the coupling between damage and mechanical behavior and are therefore consistent with the definition of damage as a phenomenon with mechanical consequences. However, this kind of model is characterized by its complexity. Using the concept of lumped models, possible simplifications of the coupled models have been proposed in the literature to adapt them to the study of beams and frames. On the other hand, in most of these coupled models damage is associated only with the damage energy release rate, which is shown to be the elastic strain energy. Accordingly, damage is a function of the maximum amplitude of cyclic deformation but does not depend on the number of cycles; therefore, low cycle effects are not taken into account. Starting from the simplified model proposed by Flórez-López, the purpose of this paper is to present a formulation that takes into account the degradation produced not only by the peak values but also by cumulative effects such as low cycle fatigue. To this end, the classical damage dissipative potential based on the concept of damage energy release rate is modified using a fatigue function in order to include cumulative effects. The fatigue function is determined through parameters such as the cumulative rotation, the total rotation and the number of cycles to failure. Those parameters can be measured or identified physically through the characteristics of the RC. Thus, the main advantage of the proposed model is the possibility of simulating low cycle fatigue behavior without introducing parameters lacking a suitable physical meaning. The good performance of the proposed model is shown through a comparison between numerical and test results under cyclic loading.
Abstract:
A series of quasi-static and dynamic tensile tests at varying temperatures was carried out to determine the mechanical behaviour of the Ti-45Al-2Nb-2Mn+0.8vol.% TiB2 XD as-HIPed alloy. The test temperatures ranged from room temperature to 850 °C. The effect of temperature on the ultimate tensile strength was, as expected, almost negligible within the selected temperature range. Nevertheless, the plastic flow showed some softening with temperature. This alloy presents a relatively low ductility and, thus, a low tensile strain to failure. The dynamic tests were performed in a Split Hopkinson Tension Bar and showed an increase of the ultimate tensile strength due to the strain rate hardening effect. The Johnson-Cook constitutive relation was used to model the plastic flow. A post-testing microstructural analysis of the specimens revealed an inhomogeneous structure, consisting of a lamellar α2 + γ structure and equiaxed γ-phase grains in the centre, and a fully lamellar structure in the rest. The assessment of the duplex to fully lamellar area ratio showed a clear relationship between the microstructure and the fracture behaviour.
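The Johnson-Cook relation used above has a standard closed form: flow stress is the product of a strain-hardening term, a strain-rate term and a thermal-softening term. A minimal sketch in Python follows; the parameter values (A, B, n, C, m and the reference temperatures) are illustrative placeholders, not the constants fitted for this alloy:

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T,
                        A=350e6, B=600e6, n=0.3, C=0.02, m=1.0,
                        eps_dot_ref=1.0, T_room=293.0, T_melt=1800.0):
    """Johnson-Cook flow stress in Pa.

    sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m)
    with T* = (T - T_room) / (T_melt - T_room).
    All material constants here are placeholders for illustration only.
    """
    strain_term = A + B * eps_p ** n                      # strain hardening
    rate_term = 1.0 + C * math.log(eps_dot / eps_dot_ref)  # rate hardening
    T_star = (T - T_room) / (T_melt - T_room)
    thermal_term = 1.0 - T_star ** m                      # thermal softening
    return strain_term * rate_term * thermal_term
```

At zero plastic strain, the reference strain rate and room temperature, the expression reduces to the yield constant A; increasing the strain rate raises the stress, reproducing the strain rate hardening effect reported in the abstract.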
Abstract:
Solar drying is one of the important processes used for extending the shelf life of agricultural products. Regarding consumer requirements, solar drying should be made more suitable in terms of curtailing total drying time and preserving product quality. Therefore, the objective of this study was to develop a fuzzy logic-based control system, which performs a "human-operator-like" control approach using the previously developed low-cost model-based sensors. The Fuzzy Logic Toolbox of MATLAB and the Borland C++ Builder tool were used to develop the required control system. An experimental solar dryer, constructed by CONA SOLAR (Austria), was used during the development of the control system. Sensirion sensors were used to characterize the drying air at different positions in the dryer, and the smart sensor SMART-1 was applied to include the rate of wood water extraction in the control system (the difference in absolute humidity of the air between the outlet and the inlet of the solar dryer is considered by SMART-1 to be the extracted water). A comprehensive test of different fuzzy control models was performed over a 3-week period, and the data obtained from these experiments were analyzed. The findings of this study suggest that the developed fuzzy logic-based control system is able to tackle the difficulties related to the control of the solar drying process.
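As a rough illustration of the "human-operator-like" fuzzy approach described above, the sketch below implements a toy single-input controller with triangular membership functions in plain Python. The input (drying-air humidity), the breakpoints and the singleton outputs are all made-up values, not the rules of the actual MATLAB/C++ system:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(humidity):
    """Toy rule base: the wetter the drying air, the faster the fan.

    Membership breakpoints and the singleton outputs (30/60/90 %) are
    hypothetical values chosen only to demonstrate the mechanism.
    """
    low = tri(humidity, 0, 20, 50)
    mid = tri(humidity, 20, 50, 80)
    high = tri(humidity, 50, 80, 100)
    # Weighted-average defuzzification over singleton outputs
    num = low * 30 + mid * 60 + high * 90
    den = low + mid + high
    return num / den if den else 0.0
```

A humidity reading that falls between two memberships activates both rules partially, so the commanded fan speed varies smoothly, which is the practical appeal of fuzzy control for a slowly varying process like solar drying.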
Abstract:
This paper presents a description of our system for the Albayzin 2012 LRE competition. One of the main characteristics of this evaluation was the reduced number of files available for training the system, especially for the empty condition, where no training data set was provided but only a development set. In addition, the whole database was created from online videos, and around one third of the training data was labeled as noisy files. Our primary system was the fusion of three different i-vector based systems: an acoustic system based on MFCCs, a phonotactic system using trigrams of phone-posteriorgram counts, and another acoustic system based on RPLPs that improved robustness against noise. A contrastive system that included new features based on the glottal source was also presented. Official and post-evaluation results for all conditions, using the metrics proposed for the evaluation as well as the Cavg metric, are presented in the paper.
Abstract:
The aim of this work was to assess the effects of four doses of three commercial fibrolytic enzymes on the ruminal fermentation of rice straw, maize stover and Pennisetum purpureum clon Cuba CT115 hay in batch cultures of ruminal micro-organisms from sheep. One enzyme was produced by Penicillium funiculosum (PEN) and two were from Trichoderma longibrachiatum (TL1 and TL2). Each liquid enzyme was diluted 200- (D1), 100- (D2), 50- (D3) and 10-fold (D4), applied to each substrate in quadruplicate over time, and incubated for 120 h in rumen fluid. The D4 dose of each enzyme increased (P<0.05) the fractional rate of gas production and the organic matter effective degradability for all substrates, and TL2 had similar effects when applied at D3. In 9 h incubations, PEN at D4, TL1 at all tested doses, and TL2 at D2, D3 and D4 increased (P<0.05) volatile fatty acid production and dry matter degradability for all substrates. The commercial enzymes tested were effective at increasing in vitro ruminal fermentation of low-quality forages, although effective doses varied with the enzyme.
Abstract:
When aqueous suspensions of gold nanorods are irradiated with a pulsed laser (808 nm), pressure waves appear even at low frequencies (pulse repetition rate of 25 kHz). We found that the pressure wave amplitude depends on the dynamics of the phenomenon: for fixed concentration and average laser current intensity, the amplitude of the pressure waves tends to increase with the pulse slope and the pulse maximum amplitude. We postulate that the detected ultrasonic pressure waves are a sort of shock wave generated at the beginning of each pulse, because the pressure wave amplitude would be the result of the positive interference of all the individual shock waves.
Abstract:
A novel scheme for depth sequence compression, based on a perceptual coding algorithm, is proposed. A depth sequence describes the object positions in the 3D scene and is used, in Free Viewpoint Video, for the generation of synthetic video sequences. In perceptual video coding, the characteristics of the human visual system are exploited to improve compression efficiency. Since depth sequences are never shown, perceptual video coding applied directly to them is not effective. The proposed algorithm is based on a novel perceptual rate-distortion optimization process, assessed over the perceptual distortion of the rendered views generated through the encoded depth sequences. The experimental results show the effectiveness of the proposed method, which achieves a considerable improvement in the perceptual quality of the rendered views.
Abstract:
We analyze the performance of the geometric distortion incurred when coding depth maps in 3D video as an estimator of the distortion of synthesized views. Our analysis is motivated by the need to reduce the computational complexity required to compute the synthesis distortion in 3D video encoders. We propose several geometric distortion models that capture (i) the geometric distortion caused by the depth coding error, and (ii) the pixel-mapping precision in view synthesis. Our analysis starts with the evaluation of the correlation between the geometric distortion values obtained with these models and the actual distortion on synthesized views. Then, the different geometric distortion models are employed in the rate-distortion optimization cycle of depth map coding, in order to assess the results obtained by the correlation analysis. Results show that one of the geometric distortion models performs consistently better than the others in all tests. Therefore, it can be used as a reasonable estimator of the synthesis distortion in low-complexity depth encoders.
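The first step of the analysis above, correlating model-based geometric distortion values with the measured distortion of synthesized views, amounts to computing a correlation coefficient over paired samples. A minimal sketch using a plain Pearson correlation follows; the per-block distortion values are hypothetical, purely to illustrate the computation:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-block values: geometric-model estimate vs. measured
# synthesis distortion (arbitrary units, invented for the example)
geom = [1.0, 2.0, 3.0, 4.0]
synth = [1.1, 2.1, 2.9, 4.2]
r = pearson(geom, synth)  # close to 1 -> the model is a usable estimator
```

A value of r near 1 over a representative test set is the kind of evidence that would justify replacing the costly synthesis-distortion computation with the cheap geometric model inside the rate-distortion loop.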
Abstract:
The main conclusion that can be drawn from the study is that the satellite, as considered here, is perfectly functional from the electrical point of view. Regarding power generation, the panels are capable of providing enough power that approximately half of it (under normal operation) can be devoted to the payload. Moreover, even in the defined failure modes, the power available to the payload is high enough to make it worthwhile to keep the satellite operational. Regarding the batteries, their behaviour shows that they are oversized and therefore act as a regulating element of the whole system, since they have a wide working margin through which the overall operation can be adjusted. This is demonstrated not only by the state of charge, which for both the constant-consumption profile and the profile of four 120 W pulses per day always remains above 99%, but also in terms of the charging rate, which always stays within the limits set by the manufacturer, ensuring an operational life in line with the nominal one. Finally, regarding the simulation method itself, it can be concluded that, even though it is not the best platform for studying these behaviours (in certain parts it restricts flexibility when changing multiple conditions at the same time), it allows a fairly broad study with low complexity and modest prior knowledge, enabling any student to carry out similar studies.
Abstract:
We propose and demonstrate a low-cost alternative direct-detection scheme to detect a 100 Gbps polarization-multiplexed differential quadrature phase-shift keying (PM-DQPSK) signal. The proposed scheme is based on a delay line and a polarization rotator: the phase-shift keying signal is first converted into a polarization-shift keying signal; this signal is then converted into an intensity-modulated signal by a polarization beam splitter; finally, the intensity-modulated signal is detected by balanced photodetectors. In order to demonstrate that the proposed receiver is suitable for use as a PM-DQPSK demodulator, a set of simulations has been performed. In addition to testing the sensitivity, the performance under various impairments, including narrow optical filtering, polarization mode dispersion, chromatic dispersion and polarization sensitivity, is analyzed. The simulation results show that the performance of our receiver is as good as that of a conventional receiver based on four delay interferometers. Moreover, in comparison with the typical receiver, fewer components are used in our receiver; hence, implementation is easier and the total cost is reduced. In addition, our receiver can easily be extended to a bit-rate tunable receiver.
Abstract:
We propose a new algorithm for the design of prediction structures with low delay and a limited penalty in rate-distortion performance for multiview video coding schemes. This algorithm constitutes one of the elements of a framework, based on graph theory, for the analysis and optimization of delay in multiview coding schemes. The objective of the algorithm is to find the best combination of prediction dependencies to prune from a multiview prediction structure, given a number of cuts. Taking into account the properties of the graph-based analysis of the encoding delay, the algorithm is able to find the best prediction dependencies to eliminate from an original prediction structure while limiting the number of cut combinations to evaluate. We show that this algorithm obtains optimum results in the reduction of the encoding latency with a lower computational complexity than exhaustive search alternatives.
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of achieving low bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
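A minimal sketch of the core idea, logarithmically spaced quantisation of the prediction error, is given below. The predictor (simply the left neighbour) and the level set are simplifications for illustration, not LHE's actual hop definitions:

```python
def log_quantise(err, levels=(0, 4, 8, 16, 32, 64, 128)):
    """Map a prediction error to the nearest of a set of logarithmically
    spaced magnitudes, keeping its sign. The level set is illustrative,
    not LHE's actual hop values."""
    sign = -1 if err < 0 else 1
    mag = abs(err)
    return sign * min(levels, key=lambda L: abs(L - mag))

def encode_row(pixels):
    """Predict each 8-bit pixel from its left neighbour and quantise the
    prediction error; the decoder can mirror the same reconstruction."""
    codes, prev = [], 128  # assumed mid-grey starting prediction
    for p in pixels:
        q = log_quantise(p - prev)
        codes.append(q)
        prev = max(0, min(255, prev + q))  # decoder-side reconstruction
    return codes
```

Coarser spacing for larger errors is what connects the scheme to the Weber–Fechner law: the eye's sensitivity to luminance differences drops as the differences grow, so large errors tolerate large quantisation steps.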
Abstract:
Traditional Text-To-Speech (TTS) systems have been developed using specially designed, non-expressive scripted recordings. In order to develop a new generation of expressive TTS systems in the Simple4All project, real recordings from the media should be used for training new voices with a whole new range of speaking styles. However, to process this more spontaneous material, the new systems must be able to deal with imperfect data (multi-speaker recordings, background and foreground music and noise), filtering out low-quality audio segments and creating mono-speaker clusters. In this paper we compare several architectures for combining speaker diarization with music and noise detection that improve the precision and overall quality of the segmentation.
Abstract:
This paper presents a new methodology for measuring the instantaneous average exhaust mass flow rate in reciprocating internal combustion engines, to be used to determine real driving emissions of light duty vehicles as part of a Portable Emission Measurement System (PEMS). Firstly, a flow meter, named the MIVECO flow meter, was designed based on a Pitot tube adapted to exhaust gases, which are characterized by moisture and particle content, rapid changes in flow rate and chemical composition, and pulsating and reverse flow at very low engine speed. Then, an off-line methodology was developed to calculate the instantaneous average flow, considering the "square root error" phenomenon. The paper includes the theoretical fundamentals, the specifications of the developed flow meter, the calibration tests, the description of the proposed off-line methodology, and the results of the validation tests carried out on a chassis dynamometer, where the validity of the mass flow meter and of the developed methodology is demonstrated.
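The "square root error" phenomenon mentioned above arises because Pitot-derived flow is proportional to the square root of the differential pressure, and with a pulsating signal the square root of the time-averaged pressure is not the average of the instantaneous square roots (the square root is concave, so the naive order of operations overestimates the mean flow). A small numerical illustration, with made-up pressure samples and a lumped calibration constant:

```python
import math

# Pulsating differential-pressure samples in Pa (hypothetical values)
dp = [100.0, 400.0, 100.0, 400.0]

k = 1.0  # lumped calibration constant (placeholder, units absorbed)

# Correct: average the instantaneous flows, i.e. sqrt of each sample
flow_true = k * sum(math.sqrt(p) for p in dp) / len(dp)

# Naive: take the square root of the time-averaged pressure
flow_naive = k * math.sqrt(sum(dp) / len(dp))

# flow_naive exceeds flow_true whenever the pressure pulsates
```

With these samples the correct average is k·(10+20+10+20)/4 = 15·k, while the naive value is k·sqrt(250) ≈ 15.8·k, a bias of over 5% that an off-line methodology like the one described must correct for.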