10 results for State-Space Modeling
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
Recent developments in vehicle steering systems offer new opportunities to measure the steering torque and reliably estimate the vehicle sideslip and the tire-road friction coefficient. This paper presents an approach to vehicle stabilization that leverages these estimates to define state boundaries that exclude unstable vehicle dynamics and utilizes a model predictive envelope controller to bound the vehicle motion within this stable region of the state space. This approach provides a large operating region accessible by the driver and smooth interventions at the stability boundaries. Experimental results obtained with a steer-by-wire vehicle and a proof of envelope invariance demonstrate the efficacy of the envelope controller in controlling the vehicle at the limits of handling.
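The core idea of the envelope controller can be illustrated in a few lines: define bounds on the vehicle states (e.g., sideslip and yaw rate) that exclude unstable dynamics, leave the driver in command inside the envelope, and intervene only at the boundary. The sketch below is a minimal illustration of that logic; all state names, bounds, and commands are assumptions for illustration, not values from the paper (which uses model predictive control rather than a simple switch).

```python
# Minimal sketch of the envelope idea: keep the vehicle state
# (sideslip [rad], yaw rate [rad/s]) inside a stable box-shaped region.
# Bounds and commands are illustrative assumptions, not paper values.

def inside_envelope(state, bounds):
    """True if every state component lies within its bounds."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(state, bounds))

def envelope_intervention(state, driver_cmd, safe_cmd, bounds):
    """Pass the driver's command through while the state is inside the
    envelope; hand over to a stabilizing command at the boundary."""
    if inside_envelope(state, bounds):
        return driver_cmd
    return safe_cmd  # controller intervenes only when the envelope is left

# Envelope: sideslip in [-0.1, 0.1] rad, yaw rate in [-0.8, 0.8] rad/s.
bounds = [(-0.1, 0.1), (-0.8, 0.8)]
print(envelope_intervention([0.05, 0.3], 0.2, 0.0, bounds))  # driver keeps control
print(envelope_intervention([0.15, 0.3], 0.2, 0.0, bounds))  # controller intervenes
```

The paper's contribution is that the intervention is a smooth model predictive action at the boundary rather than the hard switch shown here; the sketch only conveys the "large operating region, intervene at the edge" structure.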
Abstract:
Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling suggests that the exhaust gas recirculation flow rate was significantly underestimated, and volumetric efficiency overestimated, by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the electronic control module's estimated fuel-oxygen ratio was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine operating parameter model input space into a more fundamental, lower-dimensional space so that a nearest-neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Far less correlation between the model inputs was observed in the transformed space than in the engine operating parameter space.
This smaller, more uniform model input space may explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers relative to the steady-state training data.
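The "transform, then nearest neighbor" structure described above can be sketched compactly. The transform below (total charge flow and a fuel-oxygen-ratio proxy computed from raw inputs) is a made-up stand-in for the GT-Power-based transformation in the abstract; only the overall pattern is from the source.

```python
import math

# Sketch of the "nonparametric reduced dimensionality" pattern: map raw
# engine inputs into a lower-dimensional, more fundamental space, then
# predict smoke with a k-nearest-neighbor average. The transform below
# is an illustrative assumption, not the GT-Power transformation itself.

def transform(raw):
    fuel, air, egr = raw
    charge = air + egr                # proxy: total charge flow
    fo_ratio = fuel / max(air, 1e-9)  # proxy: fuel-oxygen ratio
    return (charge, fo_ratio)

def knn_predict(train, query, k=3):
    """Average the smoke targets of the k nearest training points,
    with distances measured in the transformed space."""
    q = transform(query)
    dists = sorted((math.dist(q, transform(x)), y) for x, y in train)
    nearest = dists[:k]
    return sum(y for _, y in nearest) / len(nearest)

# Toy training points: ((fuel, air, egr), smoke opacity)
train = [((10, 100, 20), 0.5), ((12, 90, 25), 0.9),
         ((8, 110, 15), 0.3), ((11, 95, 22), 0.7)]
print(round(knn_predict(train, (10, 98, 21)), 3))
```

Because the prediction is a local average over measured neighbors, the approach stays robust where a highly parameterized regression would extrapolate, which is the property the abstract exploits for transient prediction from steady-state data.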
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust nonparametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: multivariate regression, neural networks, and the k-nearest neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that averaged predictions from a set of 10 input spaces pre-selected using the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), the Simple Committee Technique, and the Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-nearest neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
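The two committee ideas can be sketched directly. In the toy structure below, each "input space" yields one prediction per modeling method; the dictionary layout and numbers are illustrative assumptions, not data from the study.

```python
from statistics import mean, pvariance

# Hedged sketch of the two committee techniques. `preds` maps each
# candidate input space to the predictions of the three methods in it;
# the structure and values are illustrative, not from the study.

def simple_committee(preds, spaces, method):
    """Average one method's predictions over a preselected set of spaces."""
    return mean(preds[s][method] for s in spaces)

def min_variance_committee(preds, method):
    """Pick the space where the three methods disagree least (minimum
    variance across methods), then use that space's prediction."""
    best = min(preds, key=lambda s: pvariance(list(preds[s].values())))
    return preds[best][method]

preds = {
    "space_a": {"regression": 1.2, "nnet": 1.4, "knn": 0.6},
    "space_b": {"regression": 1.0, "nnet": 1.1, "knn": 0.9},
}
print(simple_committee(preds, ["space_a", "space_b"], "knn"))  # 0.75
print(min_variance_committee(preds, "knn"))  # space_b agrees best -> 0.9
```

Choosing the space by cross-method agreement is one plausible reading of "chosen on the basis of disagreement between the three modeling methods"; the study's exact selection rule may differ.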
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modeling and optimization are addressed in the second part of this work, while the required data for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the nonparametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
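One of the data-processing steps mentioned above, compensating transport delay so that an emission measurement lines up with the engine event that caused it, can be sketched as a cross-correlation alignment. The signals and the maximum-correlation rule below are illustrative assumptions; the study's actual processing may differ in detail.

```python
# Sketch of transport-delay compensation: estimate the lag (in samples)
# that best aligns a delayed sensor signal with its cause by maximizing
# the cross-correlation, then shift the measurement back. Toy data only.

def estimate_delay(reference, delayed, max_lag):
    """Return the lag that maximizes correlation between the signals."""
    def xcorr(lag):
        return sum(a * b for a, b in zip(reference, delayed[lag:]))
    return max(range(max_lag + 1), key=xcorr)

ref = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0]   # e.g. a fueling step at the engine
meas = [0, 0, 0, 0, 0, 1, 4, 1, 0, 0]  # same event seen 3 samples later
lag = estimate_delay(ref, meas, max_lag=5)
aligned = meas[lag:] + [0] * lag       # shift the measurement earlier
print(lag)  # 3
```

First-order sensor lag would additionally need deconvolution of the sensor dynamics, which this delay-only sketch deliberately omits.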
Abstract:
Comparison of the crystal structure of a transition state analogue that was used to raise catalytic antibodies for the benzoyl ester hydrolysis of cocaine with structures calculated by ab initio, semiempirical, and solvation-semiempirical methods reveals that modeling of solvation is crucial for reproducing the crystal structure geometry. Both SM3 and SM2 calculations, starting from the crystal structure of TSA I, converged on structures similar to the crystal structure. The 3-21G(*)/HF, 6-31G*/HF, PM3, and AM1 calculations converged on structures similar to each other, but these gas-phase structures were significantly extended relative to the condensed-phase structures. Two transition states for the hydrolysis of the benzoyl ester of cocaine were located with the SM3 method. The gas-phase calculations failed to locate reasonable transition state structures for this reaction. These results imply that accurate modeling of the potential energy surfaces for the hydrolysis of cocaine requires solvation methods.
Abstract:
Over the past 7 years, the enediyne anticancer antibiotics have been widely studied for their DNA-cleaving ability. The activity of these antibiotics, represented by kedarcidin chromophore, neocarzinostatin chromophore, calicheamicin, esperamicin A, and dynemicin A, centers on the enediyne moiety contained within each of them. In its inactive form, the moiety is benign to its environment. Upon suitable activation, the system undergoes a Bergman cycloaromatization proceeding through a 1,4-dehydrobenzene diradical intermediate. It is this diradical intermediate that is thought to cleave double-stranded DNA through hydrogen atom abstraction. Semiempirical, semiempirical CI, Hartree–Fock ab initio, and MP2 electron correlation methods have been used to investigate the inactive hex-3-ene-1,5-diyne reactant, the 1,4-dehydrobenzene diradical, and a transition state structure of the Bergman reaction. Geometries calculated with different basis sets and by semiempirical methods have been used for single-point calculations using electron correlation methods. These results are compared with the best experimental and theoretical results reported in the literature. Implications of these results for computational studies of the enediyne anticancer antibiotics are discussed.
Abstract:
We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. And lastly, due to the improvements of these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
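The cepstral smoothing step mentioned above, replacing the original sinusoidal-model post-processing, amounts to liftering: transform a log-spectral gain curve into the cepstral ("quefrency") domain, keep only the low-quefrency coefficients, and transform back. The sketch below shows that operation on toy data with a naive DFT; the signal length, cutoff, and input values are illustrative assumptions, and Xiao/Nickel's actual operation may differ in detail.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) DFT; fine for the short illustrative signal here."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * math.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def cepstral_smooth(log_gain, keep=2):
    """Lifter a log-gain curve: keep only low-quefrency cepstral
    coefficients (index <= keep, plus their mirror), transform back."""
    cep = dft([complex(v) for v in log_gain], inverse=True)
    n = len(cep)
    liftered = [c if (i <= keep or i >= n - keep) else 0
                for i, c in enumerate(cep)]
    return [v.real for v in dft(liftered)]

jagged = [0.0, 1.0, 0.2, 0.9, 0.1, 1.0, 0.0, 0.8]  # noisy log-gain values
smooth = cepstral_smooth(jagged, keep=1)            # rapid ripple removed
```

Keeping the zero-quefrency coefficient preserves the mean gain, so smoothing suppresses frame-to-frame ripple (a common source of musical-noise artifacts) without altering the overall level.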
Abstract:
The US penitentiary at Lewisburg, Pennsylvania, was retrofitted in 2008 to offer the country's first federal Special Management Unit (SMU) program of its kind. This model SMU is designed for federal inmates from around the country identified as the most intractably troublesome, and features double-celling of inmates in tiny spaces, in 23- or 24-hour-a-day lockdown, requiring them to pass through a two-year program of readjustment. These spatial tactics, and the philosophy of punishment underlying them, contrast with the modern reform ideals upon which the prison was designed and built in 1932. The SMU represents the latest punitive phase in American penology, one that neither simply eliminates men as in the premodern spectacle, nor creates the docile, rehabilitated bodies of the modern panopticon; rather, it is a late-modern structure that produces only fear, terror, violence, and death. This SMU represents the latest of the late-modern prisons, similar to other supermax facilities in the US but offering its own unique system of punishment as well. While the prison exists within the system of American law and jurisprudence, it also manifests features of Agamben's lawless, camp-like space that emerges during a state of exception, exempt from outside scrutiny with inmate treatment typically beyond the scope of the law.
Abstract:
The hydraulic fracturing of the Marcellus Formation creates a byproduct known as frac water. Five frac water samples were collected in Bradford County, PA. Inorganic chemical analysis, field parameters analysis, alkalinity titrations, total dissolved solids (TDS), total suspended solids (TSS), biological oxygen demand (BOD), and chemical oxygen demand (COD) were conducted on each sample to characterize frac water. A database of frac water chemistry results from across the state of Pennsylvania from multiple sources was compiled in order to provide the public and research community with an accurate characterization of frac water. Four geochemical models were created to model the reactions between frac water and the Marcellus Formation, the Purcell Limestone, and the oil field brines presumed present in the formations. The average concentrations of chloride and TDS in the five frac water samples were 1.1 ± 0.5 × 10^5 mg/L (5.5× average seawater) and 140,000 mg/L (4× average seawater). BOD values for frac water immediately upon flowback were over 10× greater than the BOD of typical wastewater, but decreased into the range of typical wastewater after a short period of time. The COD of frac water decreases dramatically with increasing elapsed time from flowback, but remains considerably higher than that of typical wastewater. Different alkalinity calculation methods produced a range of alkalinity values for frac water; this result is most likely due to high concentrations of aliphatic acid anions present in the samples. Laboratory analyses indicate that the frac water composition is quite variable depending on the companies from which the water was collected, the geology of the local area, and the number of fracturing jobs in which the frac water was used, but will require more treatment than typical wastewater regardless of the precise composition of each sample.
The geochemical models created suggest that the presence of organic complexes in an oil field brine and the Marcellus Formation aids in the dissolution of ions such as barium and strontium into solution. Although equilibration reactions between the Marcellus Formation and the slickwater account for some of the final frac water composition, the predominant control on frac water composition appears to be the mixing ratio between the oil field brine and the slickwater. The high concentration of barium in the frac water is likely due to the abundance of barite nodules in the Purcell Limestone, and the lack of sulfate in the frac water samples is due to the reducing, anoxic conditions in the earth's subsurface that allow for the degassing of H2S(g).
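If the brine/slickwater mixing ratio is the dominant control, a conservative tracer such as chloride lets one back out the brine fraction from a measured mixture. The sketch below does this two-endmember calculation; the endmember concentrations are illustrative assumptions, while the mixture value matches the average chloride concentration reported above.

```python
# Sketch of the two-endmember mixing control suggested by the models:
# assuming chloride mixes conservatively, the brine fraction follows
# directly from the measured concentration. Endmember values are
# illustrative assumptions, not numbers from the study.

def brine_fraction(c_mix, c_brine, c_slick):
    """Mass fraction of oil field brine in a brine/slickwater mixture,
    from conservative mixing: c_mix = f*c_brine + (1-f)*c_slick."""
    return (c_mix - c_slick) / (c_brine - c_slick)

c_brine = 200_000.0  # mg/L Cl in oil field brine (assumed endmember)
c_slick = 500.0      # mg/L Cl in slickwater (assumed endmember)
c_mix = 110_000.0    # mg/L Cl, the average reported for the five samples
f = brine_fraction(c_mix, c_brine, c_slick)
print(round(f, 3))
```

Under these assumed endmembers, roughly half the returned fluid would be formation brine; with real endmember analyses, the same one-line balance constrains the mixing ratio the models identify as the predominant control.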