194 results for Parametric
Abstract:
A low thermal diffusivity of adsorption beds induces a large thermal gradient across cylindrical adsorbers used in adsorption cooling cycles. This reduces the concentration difference across which a thermal compressor operates. Slow adsorption kinetics, in conjunction with the void-volume effect, further diminish the throughput of these adsorption thermal compressors. The problem can be partially alleviated by increasing the desorption temperature. The theme of this paper is the determination of the minimum desorption temperature required for a given set of evaporating/condensing temperatures for an activated carbon + HFC 134a adsorption cooler. The calculation scheme is validated against experimental data. Results from a parametric analysis covering a range of evaporating/condensing/desorption temperatures are presented. It is found that the overall uptake efficiency and the Carnot COP characterize these bounds. A design methodology for adsorber sizing is developed. (c) 2012 Elsevier Ltd. All rights reserved.
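The Carnot COP referred to above can be illustrated for a three-temperature thermally driven cycle. The sketch below assumes the heat-rejection (adsorption) temperature equals the condensing temperature and that all temperatures are absolute; the numbers are illustrative, not the paper's data.

```python
# Hedged sketch: ideal (Carnot) COP bound of a three-temperature
# thermally driven cooling cycle. Assumption: heat is rejected at the
# condensing temperature; all temperatures in kelvin are made up.
def carnot_cop(t_evap, t_cond, t_des):
    # Carnot engine efficiency between desorption and condensing levels,
    # times the Carnot refrigeration COP between evaporator and condenser
    return (1.0 - t_cond / t_des) * t_evap / (t_cond - t_evap)

cop = carnot_cop(t_evap=278.0, t_cond=303.0, t_des=363.0)
```

Raising the desorption temperature increases the first factor, which is one way to see why a minimum desorption temperature exists for a given evaporating/condensing pair.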
Abstract:
The safety of an in-service brick arch railway bridge is assessed through field testing and finite-element analysis. Different loading test train configurations have been used in the field testing. The response of the bridge in terms of displacements, strains, and accelerations is measured under the ambient and design train traffic loading conditions. Nonlinear fracture mechanics-based finite-element analyses are performed to assess the margin of safety. A parametric study is done to study the effects of tensile strength on the progress of cracking in the arch. Furthermore, a stability analysis to assess collapse of the arch caused by lateral movement at the springing of one of the abutments that is elastically supported is carried out. The margin of safety with respect to cracking and stability failure is computed. Conclusions are drawn with some remarks on the state of the bridge within the framework of the information available and inferred information. DOI: 10.1061/(ASCE)BE.1943-5592.0000338. (C) 2013 American Society of Civil Engineers.
Abstract:
The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean square error (MSE). However, the MSE can never be computed in practice. Therefore, it becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function, which is a linear expansion of threshold/bases (LET). We show that the first-order case of SURE-LET results in a Wiener-filter type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
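The first-order SURE-LET result mentioned above can be sketched concretely. For a per-bin linear estimator X_hat = a*Y under additive Gaussian noise of variance sigma2, minimizing Stein's unbiased risk estimate over a yields the Wiener-like gain a = 1 - sigma2/|Y|^2 (clipped at zero). This is an illustration of that connection, not the paper's exact formulation.

```python
# Illustrative sketch (assumed simplification, not the paper's derivation):
# per-frequency SURE-optimal linear shrinkage reduces to a Wiener-type gain.
def sure_wiener_gains(power_spectrum, sigma2):
    # power_spectrum: |Y_k|^2 per frequency bin of the noisy frame
    # gain per bin: max(0, 1 - sigma2 / |Y_k|^2)
    return [max(0.0, 1.0 - sigma2 / p) if p > 0 else 0.0
            for p in power_spectrum]

gains = sure_wiener_gains([4.0, 1.0, 0.25], sigma2=1.0)
# bins well above the noise floor keep most of their energy;
# bins at or below it are suppressed
```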
Abstract:
Transient signals such as plosives in speech or castanets in audio do not have a specific modulation or periodic structure in the time domain. However, in the spectral domain they exhibit a prominent modulation structure, which is a direct consequence of their narrow time localization. Based on this observation, a spectral-domain AM-FM model for transients is proposed. The spectral AM-FM model is built starting from real spectral zero-crossings. The AM and FM correspond to the spectral envelope (SE) and group delay (GD), respectively. Taking into account the modulation structure and spectral continuity, a local polynomial regression technique is proposed to estimate the GD function from the real spectral zeros. The SE is estimated from the phase function computed from the estimated GD. Since the GD estimation is parametric, the degree of smoothness can be controlled directly. Simulation results based on synthetic transient signals generated using a beta density function are presented to analyze the noise robustness of the SEGD model. Three specific applications are considered: (1) SEGD-based modeling of castanet sounds; (2) appropriateness of the model for transient compression; and (3) determining glottal closure instants in speech using a short-time SEGD model of the linear prediction residue.
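The smoothing step described above can be sketched with a simple least-squares line fit standing in for the paper's local polynomial regression; the frequencies and raw group-delay values below are synthetic assumptions.

```python
# Hedged sketch: smooth a noisy group-delay estimate sampled at spectral
# zero-crossing frequencies with a least-squares line (a degree-1 stand-in
# for local polynomial regression). All data here are made up.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

freqs = [0.1, 0.2, 0.3, 0.4, 0.5]      # normalized frequencies (synthetic)
gd_raw = [1.1, 0.9, 1.05, 0.95, 1.0]   # noisy GD samples (synthetic)
slope, intercept = linear_fit(freqs, gd_raw)
gd_smooth = [slope * f + intercept for f in freqs]
```

A higher polynomial degree would give less smoothing, which is the "degree of smoothness can be controlled directly" point made in the abstract.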
Abstract:
In this work, wave propagation analysis of built-up composite structures is performed using frequency-domain spectral finite elements to study high-frequency wave responses. The paper discusses two methods for modeling stiffened structures. In the first method, the concept of assembling 2D spectral plate elements is used to model a built-up structure. In the second approach, a spectral finite element method (SFEM) model is developed for skin-stiffener structures, where the skin is treated as a plate element and the stiffener as a beam element. The SFEM model developed using this plate-beam coupling approach is then used to model wave propagation in a multiply stiffened structure and is extended to stiffened structures with different cross sections, such as T-, I- and hat sections. A number of parametric studies are performed to capture the mode coupling, that is, the flexural-axial coupling present in the wave responses.
Abstract:
A numerical model to study the growth of dendrites in a pure metal solidification process with an imposed rotational flow field is presented. The micro-scale features of the solidification are modeled by the well-known enthalpy technique. The effect of flow changing the position of the dendrite is captured by the Volume of Fluid (VOF) method. An imposed rigid-body rotational flow is found to gradually transform the dendrite into a globular microstructure. A parametric study is carried out for various angular velocities and the time for merger of dendrite arms is compared with the order estimate obtained from scaling.
Abstract:
We study the process of bound state formation in a D-brane collision. We consider two mechanisms for bound state formation. The first, operative at weak coupling in the worldvolume gauge theory, is pair creation of W-bosons. The second, operative at strong coupling, corresponds to formation of a large black hole in the dual supergravity. These two processes agree qualitatively at intermediate coupling, in accord with the correspondence principle of Horowitz and Polchinski. We show that the size of the bound state and time scale for formation of a bound state agree at the correspondence point. The time scale involves matching a parametric resonance in the gauge theory to a quasinormal mode in supergravity.
Abstract:
This paper illustrates a wavelet-coefficient-based approach, using experiments, to understand the sensitivity of ultrasonic signals to parametric variation of a crack configuration in a metal plate. A PZT patch sensor/actuator system integrated into a metal plate with a through-thickness crack is used. The proposed approach uses piezoelectric patches, which can both actuate and sense the ultrasonic signals. While this approach leads to more flexibility and reduced cost for larger scalability of the sensor/actuator network, the complexity of the signals increases compared with what is encountered in conventional ultrasonic NDE problems using selective wave modes. A Damage Index (DI), which is a function of the wavelet coefficients, has been introduced. Experiments have been carried out for various crack sizes, crack orientations and band-limited tone-burst signals generated through an FIR filter. For a 1 cm long crack interrogated with a 20 kHz tone-burst signal, the Damage Index (DI) for the horizontal crack orientation increases by about 70% with respect to that for a 135-degree-oriented crack and by about 33% with respect to the vertically oriented crack. The detailed results reported in this paper are a step toward developing computational schemes for parametric identification of damage using a sensor/actuator network and ultrasonic waves.
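As a toy illustration of a wavelet-coefficient-based Damage Index: the sketch below uses a one-level Haar decomposition and defines the DI as the relative change in detail-coefficient energy against a pristine baseline. Both the wavelet choice and the DI formula are assumptions for illustration, not the paper's definitions.

```python
# Hypothetical Damage Index sketch (assumed Haar wavelet and energy-ratio
# DI; not the paper's exact formula).
def haar_detail(signal):
    # one-level Haar detail coefficients over non-overlapping pairs
    return [(signal[i] - signal[i + 1]) / 2.0
            for i in range(0, len(signal) - 1, 2)]

def damage_index(baseline, measured):
    e0 = sum(c * c for c in haar_detail(baseline))
    e1 = sum(c * c for c in haar_detail(measured))
    return abs(e1 - e0) / e0 if e0 else 0.0

pristine = [0.0, 1.0, 0.0, -1.0] * 8           # synthetic received wave
damaged = [0.8 * x for x in pristine]          # crack attenuates the wave
di = damage_index(pristine, damaged)
```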
Abstract:
Latent variable methods, such as PLCA (Probabilistic Latent Component Analysis), have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. The hard constraint facilitates the solution of ML boundary estimation using dynamic programming. The PLCS framework does not impose a parametric assumption, unlike earlier ML segmentation techniques. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared with state-of-the-art speech segmentation algorithms.
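The one-model-per-segment constraint is what makes the boundary search tractable by dynamic programming. A hedged sketch follows, with within-segment squared deviation standing in for the single-latent-component log-likelihood of PLCS; the frame values are synthetic.

```python
# Hedged sketch of ML boundary estimation by dynamic programming.
# segment_cost is a placeholder score: one "model" (the mean) per segment.
def segment_cost(frames, i, j):
    seg = frames[i:j]
    mean = sum(seg) / len(seg)
    return sum((f - mean) ** 2 for f in seg)

def best_segmentation(frames, k):
    n = len(frames)
    INF = float("inf")
    # cost[m][j]: best cost of splitting frames[:j] into m segments
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + segment_cost(frames, i, j)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    # backtrack the boundaries
    bounds, j = [], n
    for m in range(k, 0, -1):
        j = back[m][j]
        bounds.append(j)
    return sorted(bounds)[1:]  # drop the leading 0

frames = [0.0, 0.1, 0.0, 5.0, 5.1, 4.9, 9.0, 9.2]
boundaries = best_segmentation(frames, k=3)
```

In the actual PLCS setting the cost would be the negative log-likelihood of the segment's frames under its latent component, but the DP recursion is the same.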
Abstract:
This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors.
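The GLRT structure described above can be illustrated in its simplest form: when the unknown signal vector is replaced by its ML estimate under white noise, the test statistic reduces to normalized observed energy compared against a threshold. The numbers and the threshold below are assumptions for illustration, not the paper's detectors.

```python
# Illustrative GLRT-style sketch (assumed white-noise, unstructured-signal
# case): substituting the ML signal estimate into the likelihood ratio
# yields an energy detector.
def glrt_energy_detector(snapshots, noise_var, threshold):
    # snapshots: real-valued sensor observations (synthetic)
    # statistic: observed energy normalized by the noise variance
    stat = sum(x * x for x in snapshots) / (noise_var * len(snapshots))
    return stat > threshold, stat

detected, stat = glrt_energy_detector([1.8, -2.1, 2.0, -1.9],
                                      noise_var=1.0, threshold=2.0)
```

The paper's parametric detectors constrain the signal estimate with a propagation model, which lowers the estimation error and, per the reported correlation, improves detection.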
Abstract:
The current study analyzes the leachate distribution in the Orchard Hills Landfill, Davis Junction, Illinois, using a two-phase flow model to assess the influence of variability in hydraulic conductivity on the effectiveness of the existing leachate recirculation system and its operations through reliability analysis. Numerical modeling, using finite-difference code, is performed with due consideration to the spatial variation of hydraulic conductivity of the municipal solid waste (MSW). The inhomogeneous and anisotropic waste condition is assumed because it is a more realistic representation of the MSW. For the reliability analysis, the landfill is divided into 10 MSW layers with different mean values of vertical and horizontal hydraulic conductivities (decreasing from top to bottom), and the parametric study is performed by taking the coefficients of variation (COVs) as 50, 100, 150, and 200%. Monte Carlo simulations are performed to obtain statistical information (mean and COV) of output parameters of the (1) wetted area of the MSW, (2) maximum induced pore pressure, and (3) leachate outflow. The results of the reliability analysis are used to determine the influence of hydraulic conductivity on the effectiveness of the leachate recirculation and are discussed in the light of a deterministic approach. The study is useful in understanding the efficiency of the leachate recirculation system. (C) 2013 American Society of Civil Engineers.
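The Monte Carlo step of such a reliability analysis can be sketched as follows. The lognormal conductivity model, the placeholder linear response, and all numbers are assumptions for illustration; the actual study propagates samples through a two-phase finite-difference flow model.

```python
import math
import random

# Hedged sketch: sample a layer's hydraulic conductivity from a lognormal
# distribution with a target mean and COV, push each sample through a
# placeholder response model, and report mean and COV of the output.
def lognormal_params(mean, cov):
    # convert arithmetic mean/COV to the underlying normal's mu, sigma
    sigma2 = math.log(1.0 + cov ** 2)
    return math.log(mean) - 0.5 * sigma2, math.sqrt(sigma2)

def monte_carlo(mean_k, cov_k, response, n=20000, seed=42):
    rng = random.Random(seed)
    mu, sigma = lognormal_params(mean_k, cov_k)
    outs = [response(math.exp(rng.gauss(mu, sigma))) for _ in range(n)]
    m = sum(outs) / n
    sd = (sum((o - m) ** 2 for o in outs) / (n - 1)) ** 0.5
    return m, sd / m  # mean and COV of the output quantity

# placeholder response: leachate outflow taken proportional to conductivity
mean_out, cov_out = monte_carlo(mean_k=1e-5, cov_k=1.0,
                                response=lambda k: 3.0e4 * k)
```

With a linear response the output COV tracks the input COV; the study's interest is precisely how a nonlinear flow model reshapes that relationship at COVs of 50-200%.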
Abstract:
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influence propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends beyond the capability of existing approaches.
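For readers unfamiliar with the building block, here is a minimal sketch of the plain (single-relation, static) Chinese Restaurant Process prior that the proposed model generalizes: customer n joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration alpha. The dynamic, multi-relational extensions in the paper are layered on top of this and are not shown.

```python
import random

# Minimal CRP sketch (assumed baseline, not the paper's full model).
def crp_sample(n_customers, alpha, seed=0):
    rng = random.Random(seed)
    tables = []       # occupancy count per table
    assignments = []  # table index chosen by each customer
    for _ in range(n_customers):
        weights = tables + [alpha]  # existing tables, then "new table"
        r = rng.uniform(0.0, sum(weights))
        acc, choice = 0.0, len(tables)
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = i
                break
        if choice == len(tables):
            tables.append(1)        # open a new table
        else:
            tables[choice] += 1
        assignments.append(choice)
    return assignments, tables

assignments, tables = crp_sample(100, alpha=1.0)
```

The "rich get richer" dynamics of this prior are what let the number of topics grow with the data instead of being fixed in advance.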
Abstract:
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search.
Abstract:
The paper focuses on the use of oxygen and steam as the gasification agents in the thermochemical conversion of biomass to produce hydrogen-rich syngas, using a downdraft reactor configuration. Performance of the reactor is evaluated for different equivalence ratios (ER), steam-to-biomass ratios (SBR) and moisture contents in the fuel. The results are compared and evaluated against chemical equilibrium analysis and reaction kinetics, along with results available in the literature. The parametric study suggests that, with an increase in SBR, the hydrogen fraction in the syngas increases but necessitates an increase in the ER to maintain the reactor temperature at stable operating conditions. SBR is varied from 0.75 to 2.7 and ER from 0.18 to 0.3. The peak hydrogen yield is found to be 104 g/kg of biomass at an SBR of 2.7. Further, significant enhancement in H2 yield and H2-to-CO ratio is observed at higher SBR (SBR = 1.5-2.7) compared with the lower SBR range (SBR = 0.75-1.5). Experiments were conducted using wet wood chips to introduce moisture into the reacting system and to compare the performance with dry wood plus steam. The results clearly indicate that both hydrogen generation and the gasification efficiency (η_g) are better in the latter case. With the increase in SBR, the gasification efficiency (η_g) and lower heating value (LHV) tend to decrease. A gasification efficiency of 85.8% is reported with an LHV of 8.9 MJ/Nm³ at an SBR of 0.75, compared with 69.5% efficiency at an SBR of 2.5 and a lower LHV of 7.4 MJ/Nm³ at an SBR of 2.7. These trends are explained on the basis of the energy required for steam generation and the extent of steam consumption during the reaction, which is subsequently reflected in the LHV of the syngas. From the analysis of the results, it is evident that reaction kinetics plays a crucial role in the conversion process.
The study also presents the importance of reaction kinetics, which controls the overall performance in terms of efficiency, H2 yield, H2-to-CO fraction and LHV of syngas, and their dependence on the process parameters SBR and ER. Copyright (c) 2013 John Wiley & Sons, Ltd.
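The efficiency trend with SBR can be sketched as a back-of-the-envelope cold-gas balance. The gas yields, biomass LHV, and steam energies below are assumed round numbers chosen only to illustrate the direction of the effect; they are not the paper's measurements.

```python
# Hedged sketch: cold-gas efficiency as the ratio of chemical energy in the
# syngas to energy in the feed biomass plus energy spent raising steam.
# All inputs are illustrative assumptions.
def gasification_efficiency(lhv_gas, gas_yield, lhv_biomass, steam_energy):
    # lhv_gas in MJ/Nm3, gas_yield in Nm3 per kg biomass,
    # lhv_biomass and steam_energy in MJ per kg biomass; returns percent
    return 100.0 * lhv_gas * gas_yield / (lhv_biomass + steam_energy)

# higher SBR: leaner gas and more steam energy input, so efficiency drops
eff_low_sbr = gasification_efficiency(8.9, 1.8, 18.0, 0.6)
eff_high_sbr = gasification_efficiency(7.4, 1.9, 18.0, 2.2)
```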
Abstract:
Experiments were conducted to measure the heat flux in the vicinity of a three-dimensional protuberance placed on a flat plate facing a hypersonic flow at zero angle of attack. The effects of flow enthalpy and height of the protuberance on the interference heating in its vicinity were studied. Evidence of disturbed flow with highly three-dimensional characteristics and heightened vorticity was observed near the protrusion. A parametric study by changing the deflection angle of the protuberance was also made. Correlations exist in the open literature for enthalpy values lower than 2 MJ/kg. This effort has yielded a new correlation that is valid for enthalpies up to 6 MJ/kg. The Z-type schlieren technique was used to visualize the flow features qualitatively for one of the flow conditions studied.