914 results for Model transformation analysis
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high pressure differential between the exhaust and intake manifolds (ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while cylinder-to-cylinder EGR distribution effects have been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
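One simple way such preprocessing can be sketched is to invert a first-order sensor lag and then time-align the signal against a reference by cross-correlation. The Python sketch below illustrates this on synthetic data; the sampling interval, transport delay, and sensor time constant are invented for illustration and are not the procedure developed in the study.

import numpy as np

def correct_sensor_lag(y, dt, tau):
    # Approximately invert a first-order sensor lag tau*dy/dt + y = x,
    # recovering the fast signal x from the lagged measurement y.
    return y + tau * np.gradient(y, dt)

def estimate_transport_delay(reference, delayed, dt, max_delay_s):
    # Pick the time shift that maximizes the cross-correlation between the
    # delayed signal and a reference (e.g., commanded fueling vs. opacity).
    best_shift, best_corr = 0, -np.inf
    for s in range(1, int(max_delay_s / dt) + 1):
        r = np.corrcoef(reference[:-s], delayed[s:])[0, 1]
        if r > best_corr:
            best_shift, best_corr = s, r
    return best_shift * dt

# Synthetic check: a step event, transported 0.8 s downstream and smeared
# by a 0.4 s first-order sensor (all values are hypothetical).
dt, tau, delay = 0.1, 0.4, 0.8
t = np.arange(0.0, 30.0, dt)
x = (t > 5.0).astype(float)                                 # true fast signal
shift = int(delay / dt)
x_delayed = np.concatenate([np.zeros(shift), x])[: len(x)]  # transport delay
y = np.zeros_like(x_delayed)
for i in range(1, len(t)):                                  # first-order sensor lag
    y[i] = y[i - 1] + dt / tau * (x_delayed[i - 1] - y[i - 1])

x_hat = correct_sensor_lag(y, dt, tau)
print("estimated transport delay [s]:",
      estimate_transport_delay(x, x_hat, dt, max_delay_s=2.0))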
Abstract:
Dahl salt-sensitive (DS) and salt-resistant (DR) inbred rat strains represent a well-established animal model for cardiovascular research. Upon prolonged administration of a high-salt diet, DS rats develop systemic hypertension and, as a consequence, left ventricular hypertrophy, followed by heart failure. The aim of this work was to explore whether this animal model is suitable for identifying biomarkers that characterize defined stages of cardiac pathophysiological conditions. The work had to be performed in two stages: in the first part, proteomic differences attributable to the two separate rat lines (DS and DR) had to be established, and in the second part, the development of heart failure induced by feeding the rats a high-salt diet had to be monitored. This work describes the results of the first stage, namely the protein expression profiles of left ventricular tissue from DS and DR rats kept on a low-salt diet. A substantial extent of quantitative and qualitative expression differences between the two Dahl rat strains was detected in heart tissue. Using Principal Component Analysis, Linear Discriminant Analysis, and other statistical means, we have established sets of differentially expressed proteins as candidates for further molecular analysis of heart failure mechanisms.
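To make the multivariate screening step concrete, a minimal sketch follows of how principal component analysis and linear discriminant analysis might be applied to a spot-intensity matrix; the matrix shape, group sizes, and simulated effect are invented and do not reproduce the authors' dataset or software.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical spot-intensity matrix: 12 animals (6 DS, 6 DR) x 200 protein spots.
X = rng.normal(size=(12, 200))
y = np.array([0] * 6 + [1] * 6)      # 0 = DS, 1 = DR
X[y == 1, :10] += 1.5                # pretend ten spots differ between strains

Xs = StandardScaler().fit_transform(X)

# Unsupervised view: do the two strains separate along the leading components?
scores = PCA(n_components=2).fit_transform(Xs)
print("PC1 group means:", scores[y == 0, 0].mean(), scores[y == 1, 0].mean())

# Supervised view: the LDA axis that best discriminates DS from DR; the largest
# absolute weights flag candidate differentially expressed spots.
lda = LinearDiscriminantAnalysis().fit(Xs, y)
candidates = np.argsort(np.abs(lda.coef_[0]))[::-1][:10]
print("top candidate spot indices:", candidates)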
Abstract:
Consultation is promoted throughout the school psychology literature as a best practice in service delivery. This method has numerous benefits, including enabling practitioners to work with more students at one time, providing preventative rather than strictly reactive strategies, and helping school professionals meet state and federal education mandates and initiatives. Despite the benefits of consultation, teachers are sometimes resistant to this process. This research studies variables hypothesized to lead to resistance (Gonzalez, Nelson, Gutkin, & Shwery, 2004) and attempts to distinguish differences between school levels (elementary, middle, and high school) with respect to the role played by these variables, and to determine whether the model used to identify students for special education services has an influence on resistance factors. Twenty-six teachers in elementary and middle schools responded to a demographic questionnaire and a survey developed by Gonzalez et al. (2004). This survey measures eight variables related to resistance to consultation. No high school teachers responded to the request to participate. Results of analysis of variance indicated a significant difference in the teaching efficacy subscale, with elementary teachers reporting more efficacy in teaching than middle school teachers. Results also indicated a significant difference in classroom management efficacy, with teachers who work in schools that identify students according to a Response to Intervention (RtI) model reporting higher classroom management efficacy than teachers who work in schools that identify students according to a combined refer-test-place/RtI model. Implications, limitations, and directions for future research are discussed.
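As a minimal illustration of the analysis of variance used here, the sketch below compares hypothetical teaching-efficacy subscale means for elementary and middle school respondents; the scores are invented, and with only two groups the one-way ANOVA is equivalent to an independent-samples t-test.

from scipy import stats

# Hypothetical teaching-efficacy subscale scores (invented for illustration;
# the study's raw survey data are not reproduced here).
elementary = [4.2, 4.5, 3.9, 4.4, 4.1, 4.6, 4.3]
middle = [3.6, 3.8, 3.5, 3.9, 3.4, 3.7, 3.6]

f_stat, p_val = stats.f_oneway(elementary, middle)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")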
Abstract:
STUDY DESIGN: Ex vivo in vitro study evaluating a novel intervertebral disc/endplate culture system. OBJECTIVES: To establish a whole-organ intervertebral disc culture model for the study of disc degeneration in vitro, including the characterization of basic cell and organ function. SUMMARY OF BACKGROUND DATA: With current in vivo models for the study of disc and endplate degeneration, it remains difficult to investigate the complex disc metabolism and signaling cascades. In contrast, more controlled but simplified in vitro systems using isolated cells or disc fragments are difficult to culture due to the unconstrained conditions, with often-observed cell death or cell dedifferentiation. Therefore, there is a demand for a controlled culture model with preserved cell function that offers the possibility to investigate disc and endplate pathologies in a structurally intact organ. METHODS: Naturally constrained intervertebral disc/endplate units from rabbits were cultured in multi-well plates. Cell viability, metabolic activity, matrix composition, and matrix gene expression profile were monitored using the Live/Dead cell viability test (Invitrogen, Basel, Switzerland), tetrazolium salt reduction (WST-8), proteoglycan and deoxyribonucleic acid (DNA) quantification assays, and quantitative polymerase chain reaction. RESULTS: Viability and organ integrity were preserved for at least 4 weeks, while proteoglycan and DNA content decreased slightly, and matrix genes exhibited a degenerative profile with up-regulation of the collagen type I gene and suppression of the collagen type II and aggrecan genes. Additionally, cell metabolic activity was reduced to one third of its initial value. CONCLUSIONS: Naturally constrained intervertebral rabbit discs could be cultured for several weeks without losing cell viability. Structural integrity and matrix composition were retained. However, the organ responded to the artificial environment with a degenerative gene expression pattern and a decreased metabolic rate. Therefore, the described system serves as a promising in vitro model for studying disc degeneration in a whole organ.
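For readers unfamiliar with how quantitative polymerase chain reaction results such as the collagen shifts above are typically quantified, a minimal sketch of the common 2^(-ΔΔCt) relative-expression calculation follows; the cycle-threshold values and reference gene are invented and are not the study's measurements.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # Relative expression by the 2^(-ddCt) method: normalize the target gene
    # to a reference gene, then compare the cultured sample with the control.
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Invented Ct values for a collagen type I assay normalized to a hypothetical
# reference gene (cultured disc vs. fresh control).
print(f"collagen type I fold change: {fold_change(22.1, 18.0, 24.3, 18.1):.1f}x")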
Abstract:
The Receiver Operating Characteristic (ROC) curve is a prominent tool for characterizing the accuracy of a continuous diagnostic test. To account for factors that might influence test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models are not available. In this paper, we develop cumulative-residual-based procedures to graphically and numerically assess the goodness of fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.
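For context, one widely used ROC regression specification that such goodness-of-fit checks can target is the probit-link ROC-GLM (the exact models examined in the paper may differ):

\mathrm{ROC}_{Z}(u) = \Phi\bigl(\alpha_0 + \alpha_1 \Phi^{-1}(u) + \beta^{\top} Z\bigr), \qquad u \in (0,1),

where \Phi is the standard normal distribution function, u is the false-positive rate, and Z collects the covariates thought to influence accuracy. Cumulative sums of residuals from such a fitted model should fluctuate around zero when the model is correctly specified, which is what graphical and numerical checks of this kind exploit.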
Abstract:
There is an emerging interest in modeling spatially correlated survival data in biomedical and epidemiological studies. In this paper, we propose a new class of semiparametric normal transformation models for right-censored spatially correlated survival data. This class of models assumes that survival outcomes marginally follow a Cox proportional hazards model with an unspecified baseline hazard, and that their joint distribution is obtained by transforming the survival outcomes to normal random variables whose joint distribution is assumed to be multivariate normal with a spatial correlation structure. A key feature of this class of models is that it provides a rich family of spatial survival models in which regression coefficients have a population-average interpretation and the spatial dependence of survival times is conveniently modeled through the transformed variables by flexible normal random fields. We study the relationship between the spatial correlation structure of the transformed normal variables and the dependence measures of the original survival times. Direct nonparametric maximum likelihood estimation in such models is practically prohibitive due to the high-dimensional intractable integration in the likelihood function and the infinite-dimensional nuisance baseline hazard parameter. We hence develop a class of spatial semiparametric estimating equations, which conveniently estimate the population-level regression coefficients and the dependence parameters simultaneously. We study the asymptotic properties of the proposed estimators and show that they are consistent and asymptotically normal. The proposed method is illustrated with an analysis of data from the East Boston Asthma Study, and its performance is evaluated using simulations.
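In symbols, the construction described above can be sketched as follows (notation ours): each subject i at location s_i with covariates Z_i has marginal hazard

\lambda(t \mid Z_i) = \lambda_0(t) \exp(\beta^{\top} Z_i),

and the survival times are mapped to normal scores by the probability integral transform,

\tilde{T}_i = \Phi^{-1}\bigl\{ 1 - \exp\bigl(-\Lambda_0(T_i) e^{\beta^{\top} Z_i}\bigr) \bigr\},

so that each \tilde{T}_i is marginally standard normal, while (\tilde{T}_1, \ldots, \tilde{T}_n)^{\top} is taken to be multivariate normal with a spatial correlation matrix \Sigma(\theta), for example \Sigma_{ij} = \rho(\lVert s_i - s_j \rVert; \theta) for some spatial correlation function \rho.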
Abstract:
Aggregates have historically been a low-cost commodity, but with communities and governmental agencies reducing the amount of mining, their cost is increasing dramatically. Communities need to be made aware that aggregate production is necessary for maintaining today's existing infrastructure. This can be accomplished by taking technologies proven in other areas and applying them to show that reclamation is feasible. A proposed mine reclamation at the Douglas Township quarry (DTQ) in Dakota Township, MN, was evaluated using the Visual Hydrologic Evaluation of Landfill Performance (HELP) model. The HELP model is commonly employed to estimate the water budget of a landfill; here, however, it was applied to determine the water budget of the DTQ following mining. Using an environmental impact statement as the case study, modeling predictions indicated that the DTQ will adequately drain the water entering the system. The groundwater table will rise slightly due to the mining excavations, but no ponding will occur. The application of the HELP model determined the water budget of the DTQ and can serve as a viable option for mining companies to demonstrate how land can be reclaimed following mining operations.
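As a rough illustration of the bookkeeping a water-budget model performs (a greatly simplified sketch, not the HELP model itself; all values are invented placeholders):

# Simplified annual water balance for a reclaimed quarry cell, in mm of water.
# All inputs are invented placeholders, not values from the DTQ evaluation.
precipitation = 710.0        # annual precipitation
evapotranspiration = 480.0   # actual evapotranspiration
surface_runoff = 60.0        # runoff leaving the cell
storage_change = 15.0        # gain in soil-moisture storage

percolation = precipitation - evapotranspiration - surface_runoff - storage_change
print(f"water percolating toward the water table: {percolation:.0f} mm/yr")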
Abstract:
A fundamental combustion model for spark-ignition engines is studied in this report. The model is implemented in SIMULINK to simulate engine outputs (mass fraction burned and in-cylinder pressure) under various engine operating conditions. The combustion model includes turbulent flame propagation and eddy burning processes based on the literature [1]. The turbulent propagation and eddy burning processes are simulated by a zero-dimensional method, and the flame is assumed to be spherical. To predict pressure, temperature, and other in-cylinder variables, a two-zone thermodynamic model is used. The predicted results of this model match well with engine test data under various engine speeds, loads, spark ignition timings, and air-fuel mass ratios. The developed model is used to study cyclic variation and combustion stability at lean (or diluted) combustion conditions. Several variation sources are introduced into the combustion model to reproduce engine behavior observed in experimental data. The relations between combustion stability and the amount of introduced variation are analyzed at various lean combustion levels.
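The entrainment and eddy burn-up structure mentioned above is commonly written in zero-dimensional form as follows (a standard formulation from the literature, with symbols chosen here rather than taken from the report):

\frac{dm_e}{dt} = \rho_u A_f (S_L + u'), \qquad \frac{dm_b}{dt} = \frac{m_e - m_b}{\tau_b}, \qquad \tau_b = \frac{\lambda_T}{S_L},

where m_e is the mass entrained by the spherical flame front of area A_f, m_b is the burned mass, \rho_u is the unburned-gas density, S_L is the laminar flame speed, u' is the turbulence intensity, and \lambda_T is the Taylor microscale; the mass fraction burned follows as x_b = m_b / m_{cyl}.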
Abstract:
In this thesis, we consider Bayesian inference on the detection of variance change points in models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and heavy-tailed and includes the Gaussian, Student-t, contaminated normal, and slash distributions as special cases. The proposed models provide greater flexibility for analyzing practical data that often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters in the variance change-point models with SMN distributions. Due to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real application to closing price data from the U.S. stock market is analyzed for illustrative purposes.
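For reference, the scale-mixture-of-normals representation underlying this model class can be written as (standard notation, not necessarily that of the thesis):

Y = \mu + \kappa(U)^{1/2} Z, \qquad Z \sim N(0, \sigma^2), \qquad U \sim H(\cdot \mid \nu),

with Z and U independent, so that Y \mid U = u \sim N(\mu, \kappa(u)\sigma^2). Taking \kappa(u) = 1/u with U \sim \mathrm{Gamma}(\nu/2, \nu/2) gives the Student-t distribution, U \sim \mathrm{Beta}(\nu, 1) gives the slash distribution, a two-point distribution for U gives the contaminated normal, and U degenerate at 1 recovers the Gaussian. Augmenting the mixing variables U in the sampler is what keeps the conditional updates tractable in Gibbs-type schemes of the kind described above.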