791 results for Concerns Based Adoption Model CBAM
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is, by and large, an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded to actually achieved air-handling parameters. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, with the parameters of this second-order system extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
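The abstract does not reproduce the identified model, so the following is only a minimal sketch of the idea under stated assumptions: a commanded air-handling trajectory (here an EGR-fraction step) is passed through a second-order linear response standing in for the closed-loop behaviour of a PD-controlled air-handling system. The function name, natural frequency, and damping ratio are illustrative placeholders for parameters that would be identified from real transient data.

```python
import numpy as np

def dynamic_constraint(commanded, dt=0.1, wn=1.5, zeta=0.8):
    """Translate a commanded air-handling trajectory into an 'achieved'
    trajectory by filtering it through a second-order linear system
    (a stand-in for the closed-loop PD-controlled actuator).

    commanded : 1-D array of commanded values (e.g. EGR fraction) per step
    wn, zeta  : illustrative natural frequency [rad/s] and damping ratio;
                in practice these would be identified from transient data.
    """
    achieved = np.zeros_like(commanded, dtype=float)
    y, ydot = commanded[0], 0.0            # start settled at the first command
    achieved[0] = y
    for k in range(1, len(commanded)):
        # y'' = wn^2 * (u - y) - 2*zeta*wn*y'   (standard 2nd-order response)
        yddot = wn**2 * (commanded[k] - y) - 2.0 * zeta * wn * ydot
        ydot += yddot * dt
        y += ydot * dt
        achieved[k] = y
    return achieved

if __name__ == "__main__":
    t = np.arange(0.0, 20.0, 0.1)
    cmd = np.where(t < 5.0, 0.10, 0.25)    # step in commanded EGR fraction
    ach = dynamic_constraint(cmd)
    print(ach[:5], ach[-5:])               # achieved value lags, then settles
```

The point of such a constraint is that the optimizer cannot demand an air-handling state at one instant that the hardware could not physically reach given its recent trajectory.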
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimum manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
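The exact data-processing procedures are not given in the abstract; the sketch below illustrates, under assumptions, two generic steps of the kind described: removing a transport delay by cross-correlation against a reference signal, and inverting a first-order sensor lag. The use of cross-correlation and the sensor time constant are assumptions for illustration, not the authors' stated method.

```python
import numpy as np

def align_and_delag(reference, signal, dt, tau_sensor):
    """Illustrative sketch of two processing steps of the kind mentioned
    in the abstract:
    (1) remove the transport delay between an engine-out signal and a
        reference signal by cross-correlation, and
    (2) invert a first-order sensor lag with time constant tau_sensor.
    """
    # 1) transport delay: find the lag (in samples) at which 'signal'
    #    best overlays 'reference', then shift it back into alignment
    ref = reference - reference.mean()
    sig = signal - signal.mean()
    corr = np.correlate(ref, sig, mode="full")
    lag = np.argmax(corr) - (len(sig) - 1)   # negative lag => signal is delayed
    aligned = np.roll(signal, lag)

    # 2) first-order lag inversion: u ≈ y + tau * dy/dt
    dydt = np.gradient(aligned, dt)
    return aligned + tau_sensor * dydt
```

Aligning the slow, delayed emission signals with the fast air-handling signals in this way is what allows instantaneous emissions to be paired with the engine state that actually produced them.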
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy differs from the corresponding manual calibration strategy, yields lower emissions and improved efficiency, and is intended to improve rather than replace the manual calibration process.
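The abstract does not specify how the leverage constraint is formulated; as a hedged illustration only, the sketch below computes hat-matrix leverage of candidate operating points against the transient training data and accepts a candidate trajectory only if its leverage distribution stays within an envelope defined by the starting (baseline) calibration. The envelope definition and the quantile are assumptions for the example.

```python
import numpy as np

def leverage(X_train, X_query):
    """Statistical leverage of query points with respect to the training
    design matrix: h_i = x_i (X'X)^-1 x_i'.  High leverage signals that a
    query point lies far from the training data, i.e. the regression model
    would be extrapolating there."""
    XtX_inv = np.linalg.pinv(X_train.T @ X_train)
    return np.einsum("ij,jk,ik->i", X_query, XtX_inv, X_query)

def within_leverage_envelope(X_train, X_candidate, X_start, quantile=0.99):
    """Sketch of the extrapolation guard: accept a candidate trajectory only
    if its leverage distribution stays within the envelope set by the
    starting (baseline calibration) trajectory."""
    h_cand = leverage(X_train, X_candidate)
    h_start = leverage(X_train, X_start)
    return np.quantile(h_cand, quantile) <= np.quantile(h_start, quantile)
```

A candidate set of commanded-surface coefficients that fails this check would be rejected or penalized before its simulated cycle emissions are trusted.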
Abstract:
We show that the variation of flow stress with strain rate and grain size in a magnesium alloy deformed at a constant strain rate and 450 °C can be predicted by a crystal plasticity model that includes grain boundary sliding and diffusion. The model predicts the grain size dependence of the critical strain rate that will cause a transition in deformation mechanism from dislocation creep to grain boundary sliding, and yields estimates for grain boundary fluidity and diffusivity.
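No model equations are given in the abstract; purely as a schematic illustration of why the transition is grain-size dependent, the competing mechanisms can be pictured as power-law strain-rate contributions, with the transition located where they are equal. The exponents and prefactors below are generic placeholders, not values from the study.

\[
\dot{\varepsilon}_{\mathrm{GBS}} = A_{\mathrm{GBS}}\,\frac{\sigma^{n_1}}{d^{\,p}},
\qquad
\dot{\varepsilon}_{\mathrm{disl}} = A_{\mathrm{disl}}\,\sigma^{n_2},
\qquad
\dot{\varepsilon}_{\mathrm{crit}}:\;\dot{\varepsilon}_{\mathrm{GBS}} = \dot{\varepsilon}_{\mathrm{disl}},
\]

where \(d\) is the grain size, so the critical strain rate separating grain-boundary-sliding-dominated from dislocation-creep-dominated flow shifts with \(d^{\,p}\).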
Abstract:
Many research-based instruction strategies (RBISs) have been developed, and their superior efficacy with respect to student learning has been demonstrated in many studies. Collecting and interpreting evidence about 1) the extent to which electrical and computer engineering (ECE) faculty members use RBISs in core, required engineering science courses and 2) the concerns they express about using them is an important aspect of understanding how engineering education is evolving. The authors surveyed ECE faculty members, asking about their awareness and use of selected RBISs. The survey also asked what concerns ECE faculty members had about using RBISs. Respondent data showed that awareness of RBISs was very high, but estimates of RBIS use, based on survey data, varied from 10% to 70%, depending on characteristics of the strategy. The most significant concern was the amount of class time that using an RBIS might take; efforts to increase use of RBISs must address this.
Abstract:
The Simulation Automation Framework for Experiments (SAFE) is a project created to raise the level of abstraction in network simulation tools and thereby address issues that undermine credibility. SAFE incorporates best practices in network simulation to automate the experimental process and to guide users in the development of sound scientific studies using the popular ns-3 network simulator. My contributions to the SAFE project are the design of two XML-based languages, NEDL (ns-3 Experiment Description Language) and NSTL (ns-3 Script Templating Language), which facilitate the description of experiments and of network simulation models, respectively. The languages provide a foundation for the construction of better interfaces between the user and the ns-3 simulator. They also provide input to a mechanism that automates the execution of network simulation experiments. Additionally, this thesis demonstrates that one can develop tools to generate ns-3 scripts in Python or C++ automatically from NSTL model descriptions.
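The thesis itself, not this abstract, defines the actual NEDL/NSTL schemas and generator; the sketch below is a hypothetical, much-simplified illustration of the underlying idea of generating an ns-3 Python script from an XML model description. The element and attribute names in the XML are invented for the example, and the generated script is a minimal ns-3 fragment rather than output of the real tool chain.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for an NSTL-style model description;
# the real NSTL schema is not reproduced here.
NSTL_LIKE_XML = """
<model name="p2p-example">
  <nodes count="2"/>
  <link type="PointToPoint" dataRate="5Mbps" delay="2ms"/>
</model>
"""

# Minimal ns-3 Python script template filled in from the XML attributes.
NS3_PY_TEMPLATE = """import ns.core
import ns.network
import ns.point_to_point

nodes = ns.network.NodeContainer()
nodes.Create({count})

p2p = ns.point_to_point.PointToPointHelper()
p2p.SetDeviceAttribute("DataRate", ns.core.StringValue("{rate}"))
p2p.SetChannelAttribute("Delay", ns.core.StringValue("{delay}"))
devices = p2p.Install(nodes)

ns.core.Simulator.Run()
ns.core.Simulator.Destroy()
"""

def generate_ns3_script(xml_text: str) -> str:
    """Fill the ns-3 Python template from the XML model description."""
    model = ET.fromstring(xml_text)
    return NS3_PY_TEMPLATE.format(
        count=model.find("nodes").get("count"),
        rate=model.find("link").get("dataRate"),
        delay=model.find("link").get("delay"),
    )

if __name__ == "__main__":
    print(generate_ns3_script(NSTL_LIKE_XML))
```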
Abstract:
Since the development and prognosis of alcohol-induced liver disease (ALD) vary significantly with genetic background, identification of a genetic-background-independent noninvasive ALD biomarker would significantly improve screening and diagnosis. This study explored the effect of genetic background on the ALD-associated urinary metabolome using the Ppara-null mouse model on two different backgrounds, C57BL/6 (B6) and 129/SvJ (129S), along with their wild-type counterparts. Reversed-phase gradient UPLC-ESI-QTOF-MS analysis revealed that urinary excretion of a number of metabolites, such as ethyl sulfate, 4-hydroxyphenylacetic acid, 4-hydroxyphenylacetic acid sulfate, adipic acid, pimelic acid, xanthurenic acid, and taurine, was background-dependent. Elevation of ethyl-β-d-glucuronide and N-acetylglycine was found to be a common signature of the metabolomic response to alcohol exposure in wild-type as well as in Ppara-null mice of both strains. However, increased excretion of indole-3-lactic acid and phenyllactic acid was found to be a conserved feature exclusively associated with the alcohol-treated Ppara-null mice on both backgrounds, which develop liver pathologies similar to the early stages of human ALD. These markers reflected the biochemical events associated with early stages of ALD pathogenesis. The results suggest that indole-3-lactic acid and phenyllactic acid are potential candidates for conserved and pathology-specific high-throughput noninvasive biomarkers for early stages of ALD.
Abstract:
In order to achieve host cell entry, the apicomplexan parasite Neospora caninum relies on the contents of distinct organelles, named micronemes, rhoptries and dense granules, which are secreted at defined timepoints during and after host cell entry. It was shown previously that a vaccine composed of a mixture of three recombinant antigens, corresponding to the two microneme antigens NcMIC1 and NcMIC3 and the rhoptry protein NcROP2, prevented disease and limited cerebral infection and transplacental transmission in mice. In this study, we selected predicted immunogenic domains of each of these proteins and created four different chimeric antigens, with the respective domains incorporated into the chimeras in different orders. Following vaccination, mice were challenged intraperitoneally with 2 × 10^6 N. caninum tachyzoites and were then carefully monitored for clinical symptoms during 4 weeks post-infection. Of the four chimeric antigens, only recNcMIC3-1-R provided complete protection against disease, with 100% survivors, compared to 40-80% survivors in the other groups. Serology did not show any clear differences in total IgG, IgG1 and IgG2a levels between the different treatment groups. Vaccination with all four chimeric variants generated an IL-4-biased cytokine expression, which then shifted to an IFN-γ-dominated response following experimental infection. Sera of recNcMIC3-1-R-vaccinated mice reacted with each individual recombinant antigen, as well as with three distinct bands in Neospora extracts of similar Mr to NcMIC1, NcMIC3 and NcROP2, and exhibited distinct apical labeling in tachyzoites. These results suggest that recNcMIC3-1-R is an interesting chimeric vaccine candidate and should be followed up in subsequent studies in a fetal infection model.