10 results for physically based modeling
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
We present a new approach for corpus-based speech enhancement that significantly improves upon a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. First, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme-recognition front-end. Second, the state decoding of the recognition stage is supported with an uncertainty modeling technique. Together, the GMM and the uncertainty modeling make it possible to eliminate the need for noise-dependent system training. Third, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. Lastly, owing to these improvements, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method significantly outperformed traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
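As a rough illustration of the GMM-based phoneme-recognition front-end mentioned above, the sketch below fits one GMM per phoneme class to pre-extracted cepstral feature frames and labels incoming frames by maximum log-likelihood. The feature dimensions, class names, and GMM settings are assumptions for illustration, not details of the published system.

# Minimal sketch of a GMM phoneme-recognition front-end, assuming MFCC-like
# feature frames are already extracted; class names and shapes are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_phoneme_gmms(features_by_phoneme, n_components=8, seed=0):
    """Fit one diagonal-covariance GMM per phoneme class."""
    gmms = {}
    for phoneme, frames in features_by_phoneme.items():  # frames: (n_frames, n_ceps)
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              random_state=seed)
        gmm.fit(frames)
        gmms[phoneme] = gmm
    return gmms

def classify_frames(gmms, frames):
    """Return the most likely phoneme label for each incoming feature frame."""
    labels = list(gmms.keys())
    # score_samples gives the per-frame log-likelihood under each class model
    loglik = np.column_stack([gmms[p].score_samples(frames) for p in labels])
    return [labels[i] for i in loglik.argmax(axis=1)]

# Example with synthetic data (two dummy "phoneme" classes, 13-dim features)
rng = np.random.default_rng(0)
train = {"aa": rng.normal(0.0, 1.0, (200, 13)),
         "sh": rng.normal(3.0, 1.0, (200, 13))}
gmms = train_phoneme_gmms(train)
print(classify_frames(gmms, rng.normal(3.0, 1.0, (5, 13))))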
Abstract:
This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. The model was developed by studying a small set of variables that together determine a system's throughput. Its purpose is to help system designers decide whether to commit to designing a reconfigurable distributed system, based on the estimated performance and hardware costs. Because custom hardware design and distributed system design are both time-consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Based on experimental data, the model presented in this thesis shows a close fit, with less than 10% error on average. The model is limited to a certain range of problems, but it can still be used within those limitations and provides a foundation for further development of modeling reconfigurable distributed systems.
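As an illustration of the kind of analytical throughput model described above, the sketch below estimates jobs per second for a host serving N FPGA boards over a shared communication link. The variable set and the min(compute rate, link rate) form are assumptions made purely for illustration; the thesis's actual model and parameters are not reproduced in the abstract.

# Illustrative throughput estimate for a host plus N FPGA boards; the formula
# and variable set are assumptions, not the thesis's actual model.

def estimated_throughput(bytes_per_job, link_bandwidth_Bps,
                         fpga_seconds_per_job, n_boards, comm_overhead_s=0.0):
    """Return an estimated rate (jobs/second) for a shared-link, N-board system."""
    # Time to move one job's data over the shared host<->board link
    t_comm = bytes_per_job / link_bandwidth_Bps + comm_overhead_s
    # Aggregate compute capacity of the boards (jobs/second)
    compute_rate = n_boards / fpga_seconds_per_job
    # The shared link serializes transfers, so it caps throughput as well
    link_rate = 1.0 / t_comm
    return min(compute_rate, link_rate)

# Example: 1 MiB jobs over a 100 MiB/s link, 50 ms of FPGA time per job, 4 boards
print(estimated_throughput(2**20, 100 * 2**20, 0.05, n_boards=4,
                           comm_overhead_s=0.002))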
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modeling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
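As a hedged sketch of the transient-data conditioning step mentioned above (accounting for transport delays and sensor lags), the example below estimates a transport delay by cross-correlation and inverts a first-order sensor lag. The alignment approach, sample spacing, and time constant are illustrative assumptions rather than the study's actual procedure.

# Sketch of transient-data conditioning: delay estimation by cross-correlation
# and first-order lag inversion.  Values and methods are illustrative only.
import numpy as np
from scipy.signal import correlate, correlation_lags

def remove_transport_delay(reference, delayed, dt):
    """Estimate the delay (s) that best aligns `delayed` with `reference`
    and return that delay plus the re-aligned signal."""
    xcorr = correlate(delayed - delayed.mean(), reference - reference.mean(), mode="full")
    lags = correlation_lags(len(delayed), len(reference), mode="full")
    delay_samples = lags[np.argmax(xcorr)]
    return delay_samples * dt, np.roll(delayed, -delay_samples)

def compensate_first_order_lag(measured, dt, tau):
    """Invert y_dot = (x - y)/tau to recover the faster underlying signal x."""
    dydt = np.gradient(measured, dt)
    return measured + tau * dydt

# Toy check: a first-order-lagged ramp is recovered back to the true ramp
dt, tau = 0.01, 0.5
t = np.arange(0, 5, dt)
true = t.copy()                                  # unit-slope ramp
lagged = t - tau * (1 - np.exp(-t / tau))        # analytic first-order lag response
print(np.allclose(compensate_first_order_lag(lagged, dt, tau)[5:-5],
                  true[5:-5], atol=0.02))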
Abstract:
A computationally efficient procedure for modeling the alkaline hydrolysis of esters is proposed based on calculations performed on methyl acetate and methyl benzoate systems. Extensive geometry and energy comparisons were performed on the simple ester methyl acetate. The effectiveness of performing high-level single-point ab initio energy calculations on the geometries obtained from semiempirical and ab initio methods was determined. The AM1 and PM3 semiempirical methods were evaluated for their ability to model the transition states and intermediates of ester hydrolysis. The Cramer/Truhlar SM3 solvation method was used to determine activation energies. The most computationally efficient way to model the transition states of large esters was found to be the PM3 method. The PM3 transition structure can then be used as a template for the design of haptens capable of inducing catalytic antibodies.
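As a small worked example of the bookkeeping implied above, the sketch below combines single-point energies (e.g., ab initio energies evaluated on a PM3 geometry) into an activation energy and converts from hartree to kcal/mol. The numerical energies are placeholders, not values from the study.

# Convert single-point energies into an activation energy.  The energies below
# are placeholders for illustration only.
HARTREE_TO_KCAL_PER_MOL = 627.509

def activation_energy_kcal(e_reactants_hartree, e_transition_state_hartree):
    """Return the barrier E(TS) - E(reactants) in kcal/mol."""
    return (e_transition_state_hartree - e_reactants_hartree) * HARTREE_TO_KCAL_PER_MOL

# Placeholder single-point energies for an ester + hydroxide system (hartree)
print(round(activation_energy_kcal(-322.450, -322.425), 1))  # ~15.7 kcal/mol barrier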
Abstract:
Extensive research conducted over the past several decades has indicated that semipermeable membrane behavior (i.e., the ability of a porous medium to restrict the passage of solutes) may have a significant influence on solute migration through a wide variety of clay-rich soils, including both natural clay formations (aquitards, aquicludes) and engineered clay barriers (e.g., landfill liners and vertical cutoff walls). Restricted solute migration through clay membranes generally has been described using coupled flux formulations based on nonequilibrium (irreversible) thermodynamics. However, these formulations have differed depending on the assumptions inherent in the theoretical development, resulting in some confusion regarding the applicability of the formulations. Accordingly, a critical review of coupled flux formulations for liquid, current, and solutes through a semipermeable clay membrane under isothermal conditions is undertaken with the goals of explicitly resolving differences among the formulations and illustrating the significance of the differences from theoretical and practical perspectives. Formulations based on single-solute systems (i.e., uncharged solute), single-salt systems, and general systems containing multiple cations or anions are presented. Also, expressions relating the phenomenological coefficients in the coupled flux equations to relevant soil properties (e.g., hydraulic conductivity and effective diffusion coefficient) are summarized for each system. A major difference in the formulations is shown to exist depending on whether counter diffusion or salt diffusion is assumed. This difference between counter and salt diffusion is shown to affect the interpretation of values for the effective diffusion coefficient in a clay membrane based on previously published experimental data. Solute transport theories based on both counter and salt diffusion then are used to re-evaluate previously published column test data for the same clay membrane. The results indicate that, despite the theoretical inconsistency between the counter-diffusion assumption and the salt-diffusion conditions of the experiments, the predictive ability of solute transport theory based on the assumption of counter diffusion is not significantly different from that based on the assumption of salt diffusion, provided that the input parameters used in each theory are derived under the same assumption inherent in the theory. Nonetheless, salt-diffusion theory is fundamentally correct and, therefore, is more appropriate for problems involving salt diffusion in clay membranes. Finally, the fact that solute diffusion cannot occur in an ideal or perfect membrane is not explicitly captured in any of the theoretical expressions for total solute flux in clay membranes, but rather is generally accounted for via inclusion of an effective porosity, n_e, or a restrictive tortuosity factor, τ_r, in the formulation of Fick's first law for diffusion. Both n_e and τ_r have been correlated as a linear function of membrane efficiency. This linear correlation is supported theoretically by pore-scale modeling of solid-liquid interactions, but experimental support is limited. Additional data are needed to bolster the validity of the linear correlation for clay membranes.
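As a hedged sketch of the modified Fick's-first-law flux described above, the example below expresses the membrane restriction through a restrictive tortuosity factor τ_r taken, for illustration only, as the simplest linear function of membrane efficiency (τ_r = 1 - ω); the abstract notes a linear correlation but does not state its exact form.

# Diffusive solute flux through a clay membrane via a restricted Fick's first law.
# The linear form tau_r = 1 - omega is an illustrative assumption.

def diffusive_solute_flux(porosity, free_solution_diffusivity, conc_gradient,
                          membrane_efficiency):
    """J = -tau_r * n * D0 * dC/dx, with tau_r assumed equal to 1 - omega."""
    tau_r = 1.0 - membrane_efficiency   # omega = 1 -> ideal membrane, no solute diffusion
    return -tau_r * porosity * free_solution_diffusivity * conc_gradient

# Example: KCl-like solute, D0 ~ 2e-9 m^2/s, n = 0.8, dC/dx = -50 (mol/m^3)/m
print(diffusive_solute_flux(0.8, 2e-9, -50.0, membrane_efficiency=0.4))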
Abstract:
Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen-ratio limit. With the aid of transient and steady-state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady-state data, and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variation during this period was shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady-state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen-ratio levels. This suggests that, even if the fuel-to-oxygen ratios were to be estimated accurately for each cylinder, they would still be ineffective as smoke limiters. A decision tree trained on snap-throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio, together with the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio, to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over other commonly used parametric empirical methods such as regression were described. An application of accurate smoke spike detection was illustrated with dimensional and empirical modeling: increasing the injection pressure at points of high opacity to reduce the cumulative particulate matter emissions substantially with a minimal increase in the cumulative nitrogen oxide emissions.
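As a minimal sketch of the decision-tree idea described above, the example below trains a classifier on the features the abstract names (ECM fuel-to-oxygen ratio estimate, ECM EGR-fraction estimate, engine speed, manifold pressure ratio). The data are synthetic and the tree settings are assumptions; the published tree was trained on snap-throttle data and pruned with engineering knowledge.

# Decision-tree smoke-spike classifier on synthetic data; feature names follow
# the abstract, all values and hyperparameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(20, 60, n),      # ECM fuel-to-oxygen ratio estimate (arbitrary units)
    rng.uniform(0.0, 0.35, n),   # ECM EGR-fraction estimate
    rng.uniform(800, 2100, n),   # engine speed, rpm
    rng.uniform(0.8, 1.6, n),    # exhaust/intake manifold pressure ratio
])
# Synthetic label: call it a "spike" when pressure ratio and EGR are both high
y = ((X[:, 3] > 1.3) & (X[:, 1] > 0.2)).astype(int)

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X, y)
print("training accuracy:", tree.score(X, y))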
Abstract:
Extensive research conducted over the past several decades has indicated that semipermeable membrane behavior (i.e., the ability of a porous medium to restrict the passage of solutes) may have a significant influence on solute migration through a wide variety of clay-rich soils, including both natural clay formations (aquitards, aquicludes) and engineered clay barriers (e.g., landfill liners and vertical cutoff walls). Restricted solute migration through clay membranes generally has been described using coupled flux formulations based on nonequilibrium (irreversible) thermodynamics. However, these formulations have differed depending on the assumptions inherent in the theoretical development, resulting in some confusion regarding the applicability of the formulations. Accordingly, a critical review of coupled flux formulations for liquid, current, and solutes through a semipermeable clay membrane under isothermal conditions is undertaken with the goals of explicitly resolving differences among the formulations and illustrating the significance of the differences from theoretical and practical perspectives. Formulations based on single-solute systems (i.e., uncharged solute), single-salt systems, and general systems containing multiple cations or anions are presented. Also, expressions relating the phenomenological coefficients in the coupled flux equations to relevant soil properties (e.g., hydraulic conductivity and effective diffusion coefficient) are summarized for each system. A major difference in the formulations is shown to exist depending on whether counter diffusion or salt diffusion is assumed. This difference between counter and salt diffusion is shown to affect the interpretation of values for the effective diffusion coefficient in a clay membrane based on previously published experimental data. Solute transport theories based on both counter and salt diffusion then are used to re-evaluate previously published column test data for the same clay membrane. The results indicate that, despite the theoretical inconsistency between the counter-diffusion assumption and the salt-diffusion conditions of the experiments, the predictive ability of solute transport theory based on the assumption of counter diffusion is not significantly different from that based on the assumption of salt diffusion, provided that the input parameters used in each theory are derived under the same assumption inherent in the theory. Nonetheless, salt-diffusion theory is fundamentally correct and, therefore, is more appropriate for problems involving salt diffusion in clay membranes. Finally, the fact that solute diffusion cannot occur in an ideal or perfect membrane is not explicitly captured in any of the theoretical expressions for total solute flux in clay membranes, but rather is generally accounted for via inclusion of an effective porosity, n_e, or a restrictive tortuosity factor, τ_r, in the formulation of Fick's first law for diffusion. Both n_e and τ_r have been correlated as a linear function of membrane efficiency. This linear correlation is supported theoretically by pore-scale modeling of solid-liquid interactions, but experimental support is limited. Additional data are needed to bolster the validity of the linear correlation for clay membranes.
Abstract:
Region-specific empirically based ground-truth (EBGT) criteria used to estimate the epicentral-location accuracy of seismic events have been developed for the Main Ethiopian Rift and the Tibetan plateau. Explosions recorded during the Ethiopia-Afar Geoscientific Lithospheric Experiment (EAGLE) and the International Deep Profiling of Tibet and the Himalaya (INDEPTH III) experiment provided the necessary GT0 reference events. In each case, the local crustal structure is well known and handpicked arrival times were available, facilitating the establishment of the location-accuracy criteria through stochastic forward modeling of arrival times for epicentral locations. In the vicinity of the Main Ethiopian Rift, a seismic event is required to be recorded on at least 8 stations within the local Pg/Pn crossover distance and to yield a network-quality metric of less than 0.43 in order to be classified as EBGT5(95%) (GT5 with 95% confidence). These criteria were subsequently used to identify 10 new GT5 events with magnitudes greater than 2.1 recorded on the Ethiopian Broadband Seismic Experiment (EBSE) network and 24 events with magnitudes greater than 2.4 recorded on the EAGLE broadband network. The criteria for the Tibetan plateau are similar to the Ethiopia criteria, yet slightly less restrictive, as the network-quality metric need only be less than 0.45. Twenty-seven seismic events with magnitudes greater than 2.5 recorded on the INDEPTH III network were identified as GT5 based on the derived criteria. When considered in conjunction with criteria developed previously for the Kaapvaal craton in southern Africa, it is apparent that increasing restrictions on the network-quality metric mirror increases in the complexity of geologic structure from craton to plateau to rift.
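As a sketch of the screening rule stated above, the example below applies the two published criteria for the Main Ethiopian Rift. The network-quality metric is taken as a precomputed input, since its definition is not reproduced in the abstract, and the Pg/Pn crossover distance used in the example is an illustrative assumption.

# EBGT5(95%) screening per the stated criteria: enough local stations and a
# small enough network-quality metric.  Distances are epicentral, in km.

def is_ebgt5(station_distances_km, network_quality_metric,
             crossover_distance_km, metric_threshold):
    """Return True if the event meets the region-specific GT5 criteria."""
    n_local = sum(1 for d in station_distances_km if d <= crossover_distance_km)
    return n_local >= 8 and network_quality_metric < metric_threshold

# Main Ethiopian Rift threshold 0.43 (Tibet uses 0.45); ~200 km crossover assumed.
print(is_ebgt5([35, 60, 80, 95, 120, 140, 150, 180, 260],
               network_quality_metric=0.40,
               crossover_distance_km=200, metric_threshold=0.43))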
Abstract:
As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental studies. The crowd models are expected to yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might be inaccurate in representing the impact of the occupants on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. In this study, SAP2000 was used to create the finite element models and to run the analysis; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the impact on the structure of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
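As a hedged sketch of modeling an occupant as an additional degree of freedom rather than lumped mass, the example below solves a two-degree-of-freedom structure-plus-occupant system for its natural frequencies via the generalized eigenvalue problem K v = w^2 M v. All parameter values are illustrative and are not those of the JWG crowd models or the tested structure.

# 2-DOF structure + occupant model; natural frequencies from K v = w^2 M v.
# Masses and stiffnesses below are placeholders for illustration.
import numpy as np
from scipy.linalg import eigh

m_s, k_s = 20000.0, 8.0e6      # structure modal mass (kg) and stiffness (N/m)
m_h, k_h = 500.0, 1.0e6        # occupant (crowd) mass and attachment stiffness

M = np.diag([m_s, m_h])
K = np.array([[k_s + k_h, -k_h],
              [-k_h,       k_h]])

w2, modes = eigh(K, M)                     # generalized eigenvalues, ascending
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("coupled natural frequencies (Hz):", np.round(freqs_hz, 2))
print("empty-structure frequency (Hz):", round(np.sqrt(k_s / m_s) / (2 * np.pi), 2))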
Abstract:
Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling indicated that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency was overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the electronic control module estimate of the fuel-to-oxygen ratio was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering smoke control based on a fuel-to-oxygen-ratio limit. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine operating parameter model input space into a more fundamental, lower-dimensional space so that a nearest-neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict Federal Test Procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs in the transformed space was observed, as compared to the engine operating parameter space. This more uniform, compact model input space might explain how the nonparametric reduced dimensionality approach could successfully predict Federal Test Procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
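As a hedged sketch of the "nonparametric reduced dimensionality" idea described above, the example below maps raw engine operating parameters into a smaller feature space and predicts opacity with a nearest-neighbor regressor trained on steady-state-like points. The transform is a placeholder; the paper derives its reduced coordinates from dimensional (GT-Power-based) modeling, which is not reproduced here, and all data below are synthetic.

# Placeholder dimensionality reduction followed by nearest-neighbor regression.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def to_reduced_space(raw):
    """Placeholder reduction: raw columns are assumed to be
    [fuel mass, available oxygen mass, EGR fraction, engine speed]."""
    fuel, oxygen, egr, speed = raw.T
    return np.column_stack([fuel / np.maximum(oxygen, 1e-9),   # fuel-to-oxygen ratio
                            egr])                              # keep EGR fraction

rng = np.random.default_rng(1)
raw_train = np.column_stack([rng.uniform(20, 120, 300),    # fuel
                             rng.uniform(400, 1200, 300),  # oxygen
                             rng.uniform(0.0, 0.35, 300),  # EGR fraction
                             rng.uniform(800, 2100, 300)]) # speed
opacity_train = 5 * raw_train[:, 0] / raw_train[:, 1] + 2 * raw_train[:, 2]  # synthetic target

knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(to_reduced_space(raw_train), opacity_train)
print(knn.predict(to_reduced_space(raw_train[:3])))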