936 results for MODEL ANALYSIS
Abstract:
Stereology is an essential method for the quantitative analysis of lung structure. Adequate fixation is a prerequisite for stereological analysis, as it avoids bias in pulmonary tissue dimensions and structural details. We present a technique for in situ fixation of large animal lungs for stereological analysis, based on closed-loop perfusion fixation.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, significant improvements in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while non-uniform EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and their associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
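The abstract above mentions processing transient data to account for transport delays and sensor lags. A minimal sketch of two such corrections is shown below; the function names, the constant-delay assumption, and the first-order lag model with a known time constant are illustrative assumptions, not details taken from the study.

```python
# Sketch: undo a pure transport delay and invert a first-order sensor lag.
# Assumes a constant delay (in samples) and a known lag time constant tau;
# both would be identified from the test data in practice.

def align_delay(signal, delay_samples):
    """Shift a measured signal earlier in time to undo a pure transport
    delay, padding the end with the last sample."""
    return signal[delay_samples:] + [signal[-1]] * delay_samples

def invert_first_order_lag(measured, tau, dt):
    """Reconstruct the fast signal u from a lagged sensor reading y,
    assuming the sensor obeys tau*dy/dt + y = u (first-order lag)."""
    u = []
    for i in range(len(measured)):
        # backward difference approximates dy/dt (zero at the first sample)
        dy = 0.0 if i == 0 else (measured[i] - measured[i - 1]) / dt
        u.append(measured[i] + tau * dy)
    return u
```

In practice the derivative term amplifies sensor noise, so a smoothing filter is usually applied before (or combined with) the inversion.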
Abstract:
Dahl salt-sensitive (DS) and salt-resistant (DR) inbred rat strains represent a well-established animal model for cardiovascular research. Upon prolonged administration of a high-salt diet, DS rats develop systemic hypertension and, as a consequence, left ventricular hypertrophy, followed by heart failure. The aim of this work was to explore whether this animal model is suitable for identifying biomarkers that characterize defined stages of cardiac pathophysiological conditions. The work had to be performed in two stages: in the first part, proteomic differences attributable to the two separate rat lines (DS and DR) had to be established, and in the second part, the development of heart failure induced by feeding the rats a high-salt diet had to be monitored. This work describes the results of the first stage, yielding protein expression profiles of left ventricular tissues of DS and DR rats kept on a low-salt diet. A substantial number of quantitative and qualitative expression differences between the two strains of Dahl rats were detected in heart tissue. Using Principal Component Analysis, Linear Discriminant Analysis, and other statistical methods, we have established sets of differentially expressed proteins as candidates for further molecular analysis of heart failure mechanisms.
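Principal Component Analysis, as used above to separate expression profiles, reduces to an eigen-decomposition of the data's covariance matrix. The two-feature toy sketch below uses the closed-form eigenvalues of a 2x2 covariance matrix; it is a minimal illustration of the first PCA step, not the multi-protein pipeline of the study.

```python
import math

def pca_2d_eigenvalues(xs, ys):
    """Eigenvalues (principal-axis variances) of the 2x2 sample
    covariance matrix of two features, via the closed-form solution
    of its characteristic polynomial."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc
```

The larger eigenvalue is the variance explained by the first principal component; perfectly correlated features put all variance on that component.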
Abstract:
Consultation is promoted throughout the school psychology literature as a best practice in service delivery. This method has numerous benefits, including being able to work with more students at one time, providing practitioners with preventative rather than strictly reactive strategies, and helping school professionals meet state and federal education mandates and initiatives. Despite the benefits of consultation, teachers are sometimes resistant to this process. This research studies variables hypothesized to lead to resistance (Gonzalez, Nelson, Gutkin, & Shwery, 2004) and attempts to distinguish differences between school levels (elementary, middle, and high school) with respect to the role played by these variables, and to determine whether the model used to identify students for special education services has an influence on resistance factors. Twenty-six teachers in elementary and middle schools responded to a demographic questionnaire and a survey developed by Gonzalez et al. (2004). This survey measures eight variables related to resistance to consultation. No high school teachers responded to the request to participate. Results of analysis of variance indicated a significant difference in the teaching efficacy subscale, with elementary teachers reporting more efficacy in teaching than middle school teachers. Results also indicate a significant difference in classroom management efficacy, with teachers who work in schools that identify students according to a Response to Intervention (RtI) model reporting higher classroom management efficacy than teachers who work in schools that identify students according to a combined refer-test-place/RtI model. Implications, limitations, and directions for future research are discussed.
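The analysis of variance mentioned above compares group means by the ratio of between-group to within-group variability. A minimal from-scratch sketch of the one-way ANOVA F statistic is given below; the data and function name are illustrative, not from the survey study.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, for a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total sample size
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the F distribution with those degrees of freedom) is what underlies a "significant difference" claim such as the efficacy comparison above.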
Abstract:
The Receiver Operating Characteristic (ROC) curve is a prominent tool for characterizing the accuracy of a continuous diagnostic test. To account for factors that might influence test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models are not available. In this paper, we develop cumulative-residual-based procedures to graphically and numerically assess the goodness of fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.
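As background for the abstract above, the empirical ROC curve can be built by sweeping a threshold over the test scores and plotting true-positive against false-positive rates. The sketch below is a minimal illustration of that construction (ignoring tied scores) with a trapezoidal AUC; the names and data are illustrative.

```python
def roc_points(scores, labels):
    """(FPR, TPR) points of an empirical ROC curve for a continuous
    score against binary labels (1 = diseased). Tied scores are not
    handled specially in this sketch."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _score, y in pairs:          # lower the threshold one score at a time
        if y == 1:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def auc(pts):
    """Trapezoidal area under a piecewise-linear ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

An AUC of 1.0 corresponds to perfect separation of diseased from healthy subjects, 0.5 to a useless test.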
Abstract:
Aggregates were historically a low-cost commodity, but as communities and governmental agencies reduce the amount of mining permitted, the cost is increasing dramatically. Communities need to be made aware that aggregate production is necessary for maintaining the existing infrastructure of today's world. This can be accomplished by taking technologies proven in other areas and applying them to show that reclamation is feasible. A proposed mine reclamation, the Douglas Township quarry (DTQ) in Dakota Township, MN, was evaluated using the Visual Hydrologic Evaluation of Landfill Performance (HELP) model. The HELP model is commonly employed for estimating the water budget of a landfill; here, however, it was applied to determine the water budget of the DTQ following mining. Using an environmental impact statement as the case study, modeling predictions indicated that the DTQ will adequately drain the water entering the system. The groundwater table will rise slightly due to the mining excavations, but no ponding will occur. The application of the HELP model determined the water budget of the DTQ and can serve as a viable option for mining companies to demonstrate how land can be reclaimed following mining operations.
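The HELP model performs a layered, daily water-balance simulation; the toy sketch below only illustrates the basic annual balance behind a "no ponding" conclusion like the one above. All quantities and the function name are illustrative assumptions, not values from the DTQ study.

```python
def annual_water_surplus(precip_m, evapotrans_m, area_m2, drain_capacity_m3):
    """Toy annual water balance for a reclaimed site: precipitation in,
    evapotranspiration out, remainder routed through drainage.
    A non-positive surplus means the site drains adequately (no ponding)."""
    inflow_m3 = precip_m * area_m2
    et_loss_m3 = evapotrans_m * area_m2
    return inflow_m3 - et_loss_m3 - drain_capacity_m3
```

A real HELP run additionally tracks soil-layer storage, runoff, and lateral drainage day by day, which is why it can also predict the slight water-table rise noted above.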
Abstract:
A fundamental combustion model for spark-ignition engines is studied in this report. The model is implemented in SIMULINK to simulate engine outputs (mass fraction burned and in-cylinder pressure) under various engine operating conditions. The combustion model includes turbulent flame propagation and eddy burning processes based on the literature [1]. These processes are simulated with a zero-dimensional method, and the flame is assumed to be spherical. To predict pressure, temperature, and other in-cylinder variables, a two-zone thermodynamic model is used. The predictions of this model match the engine test data well under various engine speeds, loads, spark ignition timings, and air-fuel mass ratios. The developed model is used to study cyclic variation and combustion stability under lean (or diluted) combustion conditions. Several variation sources are introduced into the combustion model to reproduce the engine performance observed in experimental data. The relations between combustion stability and the amount of introduced variation are analyzed at various lean combustion levels.
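The entrainment/eddy-burning model described above is too involved for a short sketch, but the mass-fraction-burned curve it produces is commonly summarized by the standard Wiebe function, shown below as a zero-dimensional stand-in (not the model from the report). The efficiency and shape parameters `a` and `m` are conventional illustrative values.

```python
import math

def wiebe_mfb(theta, theta0, dtheta, a=6.908, m=2.0):
    """Wiebe mass-fraction-burned curve: 0 before combustion starts at
    crank angle theta0, approaching 1 over the burn duration dtheta.
    a = 6.908 corresponds to 99.9% burned at theta0 + dtheta."""
    if theta < theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1))
```

Fitting `theta0`, `dtheta`, `a`, and `m` cycle by cycle is one common way to quantify the cyclic variation studied in the report.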
Abstract:
In this thesis, we consider Bayesian inference for the detection of change-points in variance change-point models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and heavy-tailed and includes the Gaussian, Student-t, contaminated normal, and slash distributions as special cases. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters in the variance change-point models with SMN distributions. Due to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real application to closing price data from the U.S. stock market is analyzed for illustrative purposes.
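For intuition about the variance change-point problem above, the sketch below finds a single change-point by maximizing a Gaussian profile log-likelihood over split positions. This is a simple frequentist stand-in, not the Bayesian SMN machinery of the thesis; the function name and minimum segment length are illustrative.

```python
import math

def variance_changepoint(x, min_seg=2):
    """Single variance change-point estimate: the split position k that
    maximizes the Gaussian log-likelihood with separate (MLE) variances
    for x[:k] and x[k:]."""
    n = len(x)
    best_k, best_ll = None, -float("inf")
    for k in range(min_seg, n - min_seg + 1):
        ll = 0.0
        degenerate = False
        for seg in (x[:k], x[k:]):
            m = len(seg)
            mu = sum(seg) / m
            var = sum((v - mu) ** 2 for v in seg) / m
            if var <= 0.0:          # constant segment: skip this split
                degenerate = True
                break
            ll += -0.5 * m * math.log(var)   # profile log-likelihood term
        if not degenerate and ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```

A Bayesian treatment replaces this single maximization with posterior sampling over the change-point location and the SMN mixing parameters, which is what the Gibbs/Metropolis-Hastings scheme above provides.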