966 results for Transient Calibration


Relevance:

100.00%

Publisher:

Abstract:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one rather than as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instants. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, with the parameters of the second-order system extracted from real transient data. This model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from the corresponding emissions based on measured air-handling parameters.
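The following is a minimal sketch of the dynamic-constraint idea described above: a commanded air-handling trajectory is passed through a discretized second-order plant under PD control to obtain the trajectory that could actually be achieved. The gains, natural frequency, damping and time step below are illustrative placeholders, not the parameters identified from engine data in the paper.

import numpy as np

def achieved_trajectory(commanded, dt=0.1, wn=2.0, zeta=0.7, kp=1.5, kd=0.3):
    # Translate a commanded air-handling trajectory into an "achieved" one using a
    # second-order plant (natural frequency wn, damping zeta) under PD control.
    # All parameters here are illustrative, not the values identified in the paper.
    y, ydot = float(commanded[0]), 0.0
    achieved = [y]
    prev_err = 0.0
    for cmd in commanded[1:]:
        err = cmd - y
        u = kp * err + kd * (err - prev_err) / dt      # PD control effort
        yddot = wn**2 * (u - y) - 2.0 * zeta * wn * ydot  # second-order dynamics
        ydot += yddot * dt
        y += ydot * dt
        achieved.append(y)
        prev_err = err
    return np.array(achieved)

# Example: a step in a commanded air-handling parameter (e.g. EGR valve position)
cmd = np.concatenate([np.zeros(20), np.ones(80)])
ach = achieved_trajectory(cmd)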

Relevance:

100.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine difference between exhaust and intake manifold pressures (ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
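Below is a minimal sketch of the kind of data processing mentioned above for transport delays and sensor lags: shift the trace back by a known transport delay and invert a first-order sensor lag. The time constant, delay and synthetic step trace are assumptions for illustration; in practice these would be identified from step-response tests, and real (noisy) signals would need filtering before the derivative step.

import numpy as np

def correct_sensor_lag(signal, dt, tau, transport_delay):
    # Undo a pure transport delay and a first-order sensor lag so that a slow
    # analyzer trace can be time-aligned with fast engine parameters.
    shifted = np.asarray(signal, dtype=float).copy()
    shift = int(round(transport_delay / dt))
    if shift > 0:
        shifted = np.roll(shifted, -shift)
        shifted[-shift:] = shifted[-shift - 1]   # hold the last value at the end
    dy = np.gradient(shifted, dt)
    return shifted + tau * dy                    # invert y = x - tau*dy/dt (first-order lag)

# Example: a lagged, delayed measurement of a unit step at t = 5 s
dt, tau, delay = 0.1, 1.5, 1.0
t = np.arange(0.0, 20.0, dt)
measured = 1.0 - np.exp(-np.maximum(t - 5.0 - delay, 0.0) / tau)
recovered = correct_sensor_lag(measured, dt, tau, delay)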

Relevance:

100.00%

Publisher:

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is that of preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but that are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and fuel consumption, is intended to improve rather than replace the manual calibration process.
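As a minimal sketch of the extrapolation-detection idea, the snippet below computes the statistical leverage of candidate optimizer points against a linear design matrix built from training data, and rejects candidates whose leverage exceeds a training-based threshold. The paper's actual constraint on the leverage distribution relative to the starting solution is more involved; the data, threshold and linear design are placeholders.

import numpy as np

def leverage(X_train, X_query):
    # Leverage of query points with respect to a linear regression design matrix;
    # high leverage flags points that would force the models to extrapolate.
    Xt = np.column_stack([np.ones(len(X_train)), X_train])   # add intercept column
    Xq = np.column_stack([np.ones(len(X_query)), X_query])
    XtX_inv = np.linalg.pinv(Xt.T @ Xt)
    return np.einsum("ij,jk,ik->i", Xq, XtX_inv, Xq)          # diag(Xq (X'X)^-1 Xq')

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                 # stand-in for training operating points
X_cand = rng.normal(scale=2.0, size=(20, 4))        # stand-in for optimizer candidates
threshold = np.percentile(leverage(X_train, X_train), 99)
keep = leverage(X_train, X_cand) <= threshold       # drop candidates that extrapolate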

Relevance:

70.00%

Publisher:

Abstract:

Decision trees have been proposed as a basis for modifying table-based injection to reduce transient particulate spikes during the turbocharger-lag period. It has been shown that decision trees can detect particulate spikes in real time. In well-calibrated, electronically controlled diesel engines these spikes are narrow and are encompassed by a wider NOx spike. Decision trees have been shown to pinpoint the exact location of measured opacity spikes in real time, thus enabling targeted PM reduction with a near-zero NOx penalty. A calibrated dimensional model has been used to demonstrate the possible reduction of particulate matter with targeted injection-pressure pulses. A post-injection strategy optimized for near-stoichiometric combustion has been shown to provide additional benefits. Empirical models have been used to calculate emission trade-offs over the entire FTP cycle. An empirical model-based transient calibration has been used to demonstrate that such targeted transient modifiers are more beneficial at lower engine-out NOx levels.
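A minimal sketch of the spike-detection idea follows: a shallow decision tree is trained to flag, sample by sample, operating conditions associated with opacity spikes, so that a targeted injection modifier can be applied only where a spike is predicted. The feature names, thresholds and synthetic labels are invented placeholders, not the signals or calibration used in the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
# illustrative per-time-step features during the turbocharger-lag period
fuel_oxygen_ratio = rng.uniform(0.3, 1.2, n)
delta_p = rng.uniform(-20.0, 60.0, n)        # exhaust minus intake manifold pressure, kPa
egr_fraction = rng.uniform(0.0, 0.35, n)
X = np.column_stack([fuel_oxygen_ratio, delta_p, egr_fraction])
# label a "spike" when the mixture runs rich before boost has built up (synthetic rule)
y = ((fuel_oxygen_ratio > 0.9) & (delta_p > 30.0)).astype(int)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
# at run time each new sample is classified cheaply, enabling a targeted
# injection-pressure or post-injection modifier with minimal NOx penalty
spike_now = clf.predict([[1.0, 45.0, 0.1]])[0]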

Relevance:

60.00%

Publisher:

Abstract:

The ongoing depletion of the coastal aquifer in the Gaza strip due to groundwater overexploitation has led to seawater intrusion, which is becoming an increasingly serious problem in Gaza as seawater invades further into many sections along the coastal shoreline. As a first step toward addressing the problem, an artificial neural network (ANN) model has been applied as a new approach and an attractive tool to study and predict groundwater levels without requiring physically based hydrologic parameters, and also to improve the understanding of complex groundwater systems and to show the effects of hydrologic, meteorological and anthropogenic impacts on groundwater conditions. Prediction of the future behaviour of the seawater intrusion process in the Gaza aquifer is thus of crucial importance to safeguard the already scarce groundwater resources in the region. In this study the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and dynamics of the saltwater-freshwater interface in the aquifer over the period 2000-2010. Very good agreement between simulated and observed TDS salinities is obtained, with correlation coefficients of 0.902 and 0.883 for the steady-state and transient calibration, respectively. After successful calibration of the solute transport model, simulations of future management scenarios for the Gaza aquifer have been carried out in order to get a more comprehensive view of the effects of the artificial recharge that has long been planned in the Gaza strip to forestall, or even remedy, the presently existing adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, year 2040. To that end, numerous management scenario schemes are examined to maintain the groundwater system and to control the salinity distribution within the target period 2011-2040. In the first, pessimistic scenario, it is assumed that pumping from the aquifer continues to increase in the near future to meet the rising water demand, and that there is no further recharge to the aquifer beyond what is provided by natural precipitation. The second, optimistic scenario assumes that treated surficial wastewater can be used as a source of additional artificial recharge to the aquifer which, in principle, should not only increase the sustainable yield of the aquifer but could, in the best case, even revert some of the adverse present-day conditions, i.e. seawater intrusion. This scenario has been run with three different cases that differ in the locations and extents of the injection fields for the treated wastewater. The results obtained with the first (do-nothing) scenario indicate that there will be ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion into the Gaza aquifer. This scenario illustrates that, compared with the 2010 situation of the baseline model, by the end of the simulation period, year 2040, the amount of seawater intrusion into the coastal aquifer will have increased by about 35% and the salinity by 34%. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion.
From the water budget point of view, compared with the first (do-nothing) scenario, by year 2040 the water added to the aquifer by artificial recharge will reduce the amount of water entering the aquifer by seawater intrusion by 81, 77 and 72% for the three recharge cases, respectively, while the salinity in the Gaza aquifer will decrease by 15, 32 and 26% for the three cases, respectively.
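For concreteness, the calibration-agreement metric quoted above (correlation between simulated and observed TDS) can be computed as sketched below; the sample values are invented placeholders, not the study's monitoring data.

import numpy as np

observed_tds = np.array([850.0, 1200.0, 2300.0, 4100.0, 6800.0])    # mg/L, placeholder
simulated_tds = np.array([900.0, 1150.0, 2500.0, 3900.0, 7050.0])   # mg/L, placeholder
r = np.corrcoef(observed_tds, simulated_tds)[0, 1]                  # Pearson correlation
print(f"correlation coefficient r = {r:.3f}")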

Relevance:

60.00%

Publisher:

Abstract:

A method is developed to estimate the parameters of a hydraulic network from observed transient hydraulic head data. The physical parameters of the network, such as friction factors, absolute roughnesses and diameters, together with the identification and quantification of leaks, are the unknown quantities. The inverse transient problem is solved using an indirect approach that compares the available observed transient hydraulic heads with those computed by a mathematical method. The Inverse Transient Method (ITM) with a Genetic Algorithm (GA) employs the Method of Characteristics (MOC) to solve the equations of motion for transient flow in pipe networks. The steady-state conditions are unknown. To assess the reliability of the ITM-GA approach developed here, an example network is used for the various calibration problems proposed. The transient behaviour is imposed by two distinct manoeuvres of a control valve located at one of the network nodes. The performance of the proposed method is also analysed with respect to the length of the transient record and to possible reading errors in the hydraulic heads. Numerical tests show that the method is feasible and applicable to the solution of inverse problems in hydraulic networks, especially when only a few observed data are available and the initial steady-state conditions are unknown. In the various identification problems, the transient information obtained from the more abrupt valve manoeuvre produced more efficient estimates.
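A minimal sketch of the inverse-transient idea follows: a genetic algorithm searches over network parameters (here just two roughness values) and the fitness is the misfit between observed and simulated transient heads. The transient solver is stubbed out with a toy function; in the paper it is the Method of Characteristics (MOC), and the GA operators shown (elitist selection plus Gaussian mutation) are simplifications.

import numpy as np

def simulate_transient_heads(roughness, t):
    # Placeholder for the MOC solver: heads at an observation node for given roughness.
    return 50.0 + 5.0 * np.exp(-roughness.mean() * t) * np.cos(2.0 * t)

def fitness(roughness, t, observed):
    simulated = simulate_transient_heads(roughness, t)
    return -np.sum((simulated - observed) ** 2)       # GA maximizes fitness

t = np.linspace(0.0, 10.0, 200)
observed = simulate_transient_heads(np.array([0.3, 0.3]), t)   # synthetic "measurements"

rng = np.random.default_rng(2)
pop = rng.uniform(0.05, 1.0, size=(40, 2))            # initial population of roughness pairs
for _ in range(50):
    scores = np.array([fitness(ind, t, observed) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]           # keep the best half
    children = parents + rng.normal(scale=0.02, size=parents.shape)
    pop = np.vstack([parents, np.clip(children, 0.01, 1.0)])
best = pop[np.argmax([fitness(ind, t, observed) for ind in pop])]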

Relevance:

40.00%

Publisher:

Abstract:

The combustion strategy in a diesel engine has an impact on the emissions, the fuel consumption and the exhaust temperatures. The PM mass retained in the CPF is a function of the NO2 and PM concentrations in addition to the exhaust temperatures and flow rates. Thus the engine combustion strategy affects exhaust characteristics, which in turn impact CPF operation and the PM mass retained and oxidized. In this report, a process has been developed to simulate the relationship between engine calibration, performance, and HC and PM oxidation in the DOC and CPF, respectively. Fuel rail pressure (FRP) and start of injection (SOI) sweeps were carried out at five steady-state engine operating conditions. These data, along with data from a previously run surrogate HD-FTP cycle [1], were used to create a transfer function model which estimates the engine-out emissions, flow rates and temperatures for varied FRP and SOI over a transient cycle. Four different calibrations (test cases) were considered in this study and were simulated through the transfer function model and the DOC model [1, 2]. The DOC outputs were then input into a model which simulates the NO2-assisted and thermal PM oxidation inside a CPF. Finally, the results were analyzed with respect to how engine calibration impacts the engine fuel consumption, HC oxidation in the DOC and PM oxidation in the CPF. Active regeneration was also simulated for the various test cases, and a comparative analysis of the fuel penalties involved was carried out.
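A minimal sketch of the transfer-function idea described above: fit a simple regression from the swept calibration inputs (FRP, SOI) to an engine-out response at steady state, then apply it point by point over a transient cycle trace. The linear form, coefficients and synthetic data are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(3)
frp = rng.uniform(800.0, 1800.0, 50)       # bar, steady-state sweep points
soi = rng.uniform(-8.0, 4.0, 50)           # deg ATDC
pm = 0.02 + 1e-5 * (1800.0 - frp) + 0.003 * np.maximum(soi, 0.0) + rng.normal(0, 1e-3, 50)

# least-squares fit of PM = a0 + a1*FRP + a2*SOI
A = np.column_stack([np.ones_like(frp), frp, soi])
coeffs, *_ = np.linalg.lstsq(A, pm, rcond=None)

def predict_pm(frp_cycle, soi_cycle):
    return coeffs[0] + coeffs[1] * frp_cycle + coeffs[2] * soi_cycle

# apply over a (synthetic) transient cycle trace of FRP and SOI
frp_cycle = 1200.0 + 300.0 * np.sin(np.linspace(0, 6, 1200))
soi_cycle = -2.0 + 3.0 * np.cos(np.linspace(0, 6, 1200))
pm_cycle = predict_pm(frp_cycle, soi_cycle)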

Relevance:

30.00%

Publisher:

Abstract:

The variation in temperature and concentration plays a crucial role in predicting the final microstructure during solidification of a binary alloy. Most of the experimental techniques used to measure concentration and temperature are intrusive in nature and affect the flow field. In this paper, the main focus is on in-situ, non-intrusive, transient measurement of concentration and temperature during the solidification of a binary mixture of aqueous ammonium chloride solution (a metal-analog system) in a top-cooled cavity using a laser-based Mach-Zehnder interferometric technique. It was found from the interferograms that the angular deviation of the fringe pattern and the total number of fringes exhibit significant sensitivity to the refractive index and hence are functions of the local temperature and concentration of the NH4Cl solution inside the cavity. Using the fringe characteristics, calibration curves were established for the range of temperature and concentration levels expected during the solidification process. In the actual solidification experiments, two hypoeutectic solutions (5% and 15% NH4Cl) were chosen. The calibration curves were used to determine the temperature and concentration of the solution inside the cavity during solidification of the 5% and 15% NH4Cl solutions at different instants of time. The measurement was carried out at a fixed point in the cavity, and the concentration variation with time was recorded as the solid-liquid interface approached the measurement point. The measurement exhibited distinct zones of concentration distribution caused by solute rejection and Rayleigh-Bénard convection. Further studies involving flow visualization with laser scattering confirmed the Rayleigh-Bénard convection. Computational modeling was also performed, which corroborated the experimental findings.
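The calibration-curve step can be sketched as below: fit the measured fringe characteristics (number of fringes, angular deviation) against known temperature and concentration, then invert the fit during the experiment. The calibration points and the linear model form are invented placeholders; the paper's actual calibration curves come from the interferometric measurements themselves.

import numpy as np

# calibration data: (fringe_count, fringe_angle_deg) at known (T in degC, conc in % NH4Cl)
fringe_count = np.array([12.0, 18.0, 25.0, 31.0, 38.0])
fringe_angle = np.array([2.0, 3.5, 5.1, 6.8, 8.2])
temperature = np.array([25.0, 20.0, 15.0, 10.0, 5.0])
concentration = np.array([5.0, 8.0, 11.0, 13.0, 15.0])

# linear least-squares map from fringe characteristics to (T, conc)
F = np.column_stack([np.ones_like(fringe_count), fringe_count, fringe_angle])
coef_T, *_ = np.linalg.lstsq(F, temperature, rcond=None)
coef_C, *_ = np.linalg.lstsq(F, concentration, rcond=None)

def fringe_to_state(count, angle):
    f = np.array([1.0, count, angle])
    return f @ coef_T, f @ coef_C          # local temperature, local concentration

T_local, c_local = fringe_to_state(22.0, 3.9)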

Relevance:

30.00%

Publisher:

Abstract:

In a wireless receiver, a down-converted RF signal undergoes a transient phase shift when the gain state is changed to adjust for varying transmission and propagation conditions. A method is developed in which such phase shifts are detected asynchronously and their undesirable effects on the bit error rate are corrected. The method was developed for, and used in, the system-level characterization and calibration of a 65-nm CMOS UHF receiver. The phase shifts associated with specific gain-state transitions were measured within a test framework and used in the baseband signal-processing blocks to compensate for errors whenever the receiver anticipated a gain-state transition.
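A minimal sketch of the compensation idea: store the phase shift measured for each gain-state transition in a table and de-rotate the complex baseband samples whenever that transition is anticipated. The gain-state names and table values are placeholders, not the measured characterization of the 65-nm receiver.

import numpy as np

phase_table_deg = {("G1", "G2"): 14.0, ("G2", "G3"): -9.5, ("G3", "G2"): 8.0}  # placeholders

def compensate(iq_samples, from_state, to_state):
    # De-rotate complex baseband samples by the phase shift expected for this transition.
    phi = np.deg2rad(phase_table_deg.get((from_state, to_state), 0.0))
    return iq_samples * np.exp(-1j * phi)

# usage: just before the AGC commands G1 -> G2, correct the following samples
iq = np.exp(1j * np.deg2rad(14.0)) * (np.ones(8) + 0j)   # samples carrying the transient shift
corrected = compensate(iq, "G1", "G2")                   # back to approximately zero phase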

Relevance:

30.00%

Publisher:

Abstract:

Here we present a novel signal-processing technique for a square-wave thermally modulated carbon black/polymer composite chemoresistor. The technique consists of only two mathematical operations: summing the off-transient and on-transient conductance signals, and subtracting the steady-state conductance signal. A single carbon black/polyvinylpyrrolidone composite chemoresistor was fabricated and used to demonstrate the validity of the technique. Classification of water, methanol and ethanol vapours was successfully performed using only the peak time of the resultant curves. Quantification of the different vapours was also possible using the height of the peaks, because it was linearly proportional to concentration. This technique does not require zero-gas calibration and is thus superior to previously reported methods.
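A minimal sketch of the described two-operation processing: add the on-transient and off-transient conductance traces and subtract the steady-state conductance; the peak time of the result serves for classification and the peak height for quantification. The exponential traces and the asymmetric on/off time constants below are synthetic assumptions chosen so that the resultant curve has a peak, not measured sensor data.

import numpy as np

def resultant_curve(g_on, g_off, g_steady):
    # sum of the on- and off-transient conductances minus the steady-state value
    return g_on + g_off - g_steady

t = np.linspace(0.0, 10.0, 500)
tau_on, tau_off = 1.0, 2.5                    # placeholder, analyte-dependent time constants
g_steady = 1.0
g_on = g_steady * (1.0 - np.exp(-t / tau_on)) # heater-on conductance transient
g_off = g_steady * np.exp(-t / tau_off)       # heater-off conductance transient
curve = resultant_curve(g_on, g_off, g_steady)

peak_time = t[np.argmax(curve)]               # feature used to classify the vapour
peak_height = curve.max()                     # feature proportional to concentration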

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the development of a two-dimensional transient catalyst model. Although designed primarily for two-stroke direct injection engines, the model is also applicable to four-stroke lean burn and diesel applications. The first section describes the geometries, properties and chemical processes simulated by the model and discusses the limitations and assumptions applied. A review of the modeling techniques adopted by other researchers is also included. The mathematical relationships which are used to represent the system are then described, together with the finite volume method used in the computer program. The need for a two-dimensional approach is explained and the methods used to model effects such as flow and temperature distribution are presented. The problems associated with developing surface reaction rates are discussed in detail and compared with published research. Validation and calibration of the model is achieved by comparing predictions with measurements from a flow reactor. While an extensive validation process, involving detailed measurements of gas composition and thermal gradients, has been completed, the analysis is too detailed for publication here and is the subject of a separate technical paper.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the detailed validation of a computer model designed to simulate the transient light-off in a two-stroke oxidation catalyst. A plug flow reactor is employed to provide measurements of temperature and gas concentration at various radial and axial locations inside the catalyst. These measurements are recorded at discrete intervals during a transient light-off in which the inlet temperature is increased from ambient to 300 °C at rates of up to 6 °C/s. The catalyst formulation used in the flow reactor, and its associated test procedures, are then simulated by the computer and a comparison made between experimental readings and model predictions. The design of the computer model to which this validation exercise relates is described in detail in a separate technical paper. The first section of the paper investigates the warm-up characteristics of the substrate and examines the validity of the heat transfer predictions between the wall and the gas in the absence of chemical reactions. The predictions from a typical single-component CO transient light-off test are discussed in the second section and compared with experimental data. In particular, the effect of the temperature ramp on the light-off curve and reaction-zone development is examined. An analysis of the C3H6 conversion is given in the third section, while the final section examines the accuracy of the light-off curves produced when both CO and C3H6 are present in the feed gas. The analysis shows that the heat and mass transfer calculations provided reliable predictions of the warm-up behaviour and the post-light-off gas concentration profiles. The self-inhibition and cross-inhibition terms in the global rate expressions were also found to be reasonably reliable, although the surface reaction rates required calibration against experimental data.
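To illustrate what a global rate expression with self- and cross-inhibition looks like, the sketch below uses the commonly cited Voltz-type form (Arrhenius rate divided by an inhibition function G of the CO, C3H6 and NO concentrations). This is an assumption about the general form only; the specific expressions and constants calibrated in the paper are not reproduced here, and the values below are illustrative placeholders.

import math

def co_oxidation_rate(c_co, c_o2, c_c3h6, c_no, T,
                      A=1.0e13, Ea=90.0e3, K1=65.5, K2=2080.0, K3=3.98, K4=4.79e5):
    # Global CO oxidation rate (arbitrary units) with a Voltz-type inhibition denominator G.
    # All constants are placeholders, not the calibrated values from the paper.
    G = T * (1.0 + K1 * c_co + K2 * c_c3h6) ** 2 \
          * (1.0 + K3 * (c_co ** 2) * (c_c3h6 ** 2)) \
          * (1.0 + K4 * c_no ** 0.7)
    k = A * math.exp(-Ea / (8.314 * T))        # Arrhenius temperature dependence
    return k * c_co * c_o2 / G

# example evaluation near light-off (mole fractions, temperature in kelvin)
rate = co_oxidation_rate(c_co=0.005, c_o2=0.08, c_c3h6=0.0005, c_no=0.0005, T=500.0)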

Relevance:

30.00%

Publisher:

Abstract:

Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January-April and August-December inclusive, and typically has allocations of 10 nights per month. Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1). Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345-9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm and resolutions of 23-33 Å) and imaging with broadband JHKs filters. Results. This first data release (SSDR1) contains flux-calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 replace those released spectra. They have more reliable and quantifiable flux calibrations, correction for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the flux calibrations for EFOSC2 across the whole survey in SSDR1 to be typically ~15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, can allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high-accuracy absolute spectrophotometry, but synthetic photometry with the accompanying JHKs imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this. Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification and access to reliable pipeline-processed data products has allowed early science papers in the first few months of the survey.

Relevance:

30.00%

Publisher:

Abstract:

The classic vertical advection-diffusion (VAD) balance is a central concept in studying the ocean heat budget, in particular in simple climate models (SCMs). Here we present a new framework to calibrate the parameters of the VAD equation to the vertical ocean heat balance of two fully coupled climate models that is traceable to the models' circulation as well as to vertical mixing and diffusion processes. Based on temperature diagnostics, we derive an effective vertical velocity w* and turbulent diffusivity k* for each individual physical process. In steady state, we find that the residual vertical velocity and diffusivity change sign at mid-depth, highlighting the different regional contributions of isopycnal and diapycnal diffusion in balancing the models' residual advection and vertical mixing. We quantify the impacts of the time evolution of the effective quantities under a transient 1% CO2 simulation and make the link to the parameters of currently employed SCMs.
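For reference, the classic VAD balance referred to above can be written, in its standard textbook form (not quoted from the paper itself), as

\frac{\partial T}{\partial t} + w^{\ast}\,\frac{\partial T}{\partial z} = k^{\ast}\,\frac{\partial^{2} T}{\partial z^{2}},

where T is temperature, w* an effective vertical velocity and k* an effective turbulent diffusivity; the framework described above decomposes w* and k* by physical process and additionally tracks their evolution under transient forcing.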

Relevance:

30.00%

Publisher:

Abstract:

Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve the empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling indicated that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency overestimated by the electronic control module during the turbocharger-lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the fuel-oxygen ratio estimated by the electronic control module was lower than the actual value by up to 35% during the turbocharger-lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space into a more fundamental, lower-dimensional space so that a nearest-neighbour approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter to within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space compared with the engine-operating-parameter space. This more uniform, smaller, shrunken model input space might explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
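A minimal sketch of the nonparametric reduced-dimensionality idea: map engine operating parameters into a smaller, more physically fundamental feature space and predict smoke with a nearest-neighbour average over steady-state training points. The transform, feature names and synthetic data below are invented placeholders; the paper derives its own physically based reduced space.

import numpy as np

def to_fundamental_space(speed, fuel_mass, charge_flow, egr_fraction):
    # Placeholder transform into a lower-dimensional, more fundamental space.
    fuel_oxygen_ratio = fuel_mass / np.maximum(charge_flow * (1.0 - egr_fraction), 1e-9)
    load_proxy = fuel_mass * speed
    return np.column_stack([fuel_oxygen_ratio, load_proxy])

def knn_predict(X_train, y_train, X_query, k=5):
    pred = np.empty(len(X_query))
    for i, q in enumerate(X_query):
        d = np.linalg.norm(X_train - q, axis=1)
        pred[i] = y_train[np.argsort(d)[:k]].mean()   # average of the k nearest neighbours
    return pred

# synthetic steady-state training set and a synthetic transient query trace
rng = np.random.default_rng(4)
speed, fuel, flow, egr = rng.uniform(0.2, 1.0, (4, 300))
X_train = to_fundamental_space(speed, fuel, flow, egr)
y_train = X_train[:, 0] * 0.8 + rng.normal(0, 0.01, 300)      # stand-in for measured smoke
X_query = to_fundamental_space(*rng.uniform(0.2, 1.0, (4, 50)))
smoke_pred = knn_predict(X_train, y_train, X_query)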