877 results for Model-based geostatistics


Relevance:

100.00%

Publisher:

Abstract:

Part 6: Engineering and Implementation of Collaborative Networks

Relevance:

100.00%

Publisher:

Abstract:

A NOx reduction efficiency higher than 95% with NH3 slip less than 30 ppm is desirable for heavy-duty diesel (HDD) engines using selective catalytic reduction (SCR) systems to meet the US EPA 2010 NOx standard and the 2014-2018 fuel consumption regulation. The SCR performance needs to be improved through experimental and modeling studies. In this research, a high-fidelity, global-kinetic, 1-dimensional, 2-site SCR model with mass transfer, heat transfer, and global reaction mechanisms was developed for a Cu-zeolite catalyst. The model simulates SCR performance for engine exhaust conditions with NH3 maldistribution and aging effects, and the details are presented.

SCR experimental data were collected for model development, calibration, and validation from a reactor at Oak Ridge National Laboratory (ORNL) and from an engine experimental setup at Michigan Technological University (MTU) with a Cummins 2010 ISB engine. The model was calibrated separately to the reactor and engine data. The experimental setup, the test procedures (including a surrogate HD-FTP cycle developed for transient studies), and the model calibration process are described. Differences in the model parameters were determined between the calibrations developed from the reactor and the engine data, and the SCR inlet NH3 maldistribution was determined to be one of the causes of these differences. The model calibrated to the engine data served as the basis for developing a reduced-order SCR estimator model.

The effect of the SCR inlet NO2/NOx ratio on SCR performance was studied through simulations using the surrogate HD-FTP cycle. The overall NOx conversion efficiency of the cycle is highest, and the cumulative outlet NOx lowest, at a NO2/NOx ratio of 0.5. The outlet NH3 is lowest for NO2/NOx ratios greater than 0.6. A combined engine experimental and simulation study was performed to quantify the NH3 maldistribution at the SCR inlet and its effects on SCR performance and kinetics. The uniformity index (UI) of the SCR inlet NH3 and NH3/NOx ratio (ANR) was determined to be below 0.8 for the production system; the UI improved to 0.9 after installation of a swirl mixer in the SCR inlet cone. A multi-channel model was developed to simulate the maldistribution effects. The results showed that reducing the UI of the inlet ANR from 1.0 to 0.7 caused a 5-10% decrease in NOx reduction efficiency and a 10-20 ppm increase in NH3 slip. Simulations of the steady-state engine data with the multi-channel model confirmed that the NH3 maldistribution is a factor causing the differences between the calibrations developed from the engine and the reactor data.

Reactor experiments were performed at ORNL using a Spaci-IR technique to study thermal aging effects. The test results showed that thermal aging (at 800°C for 16 hours) caused a 30% reduction in the NH3 stored on the catalyst under NH3 saturation conditions and different axial concentration profiles under SCR reaction conditions. The kinetics analysis showed that thermal aging reduced the total NH3 storage capacity (94.6 compared to 138 gmol/m³), changed the NH3 adsorption/desorption properties, and decreased the activation energy and pre-exponential factor for NH3 oxidation and for the standard and fast SCR reactions. Both the reduction in storage capability and the change in kinetics of the major reactions contributed to the changes in the axial storage and concentration profiles observed in the experiments.
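As a rough illustration of the global-kinetic ingredients such a model is built from, the sketch below integrates a single-site NH3 surface-coverage balance with Arrhenius rate constants; all rate parameters, concentrations, and the storage capacity are hypothetical placeholders, not the calibrated values from this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T))

def coverage_ode(t, theta, T, c_nh3, c_no, omega):
    """Single-site NH3 coverage balance: adsorption minus desorption
    minus consumption by the standard SCR reaction (all rates global)."""
    th = theta[0]
    k_ads = arrhenius(1.0e1, 0.0, T)       # hypothetical parameters
    k_des = arrhenius(1.0e7, 120e3, T)
    k_scr = arrhenius(1.0e6, 80e3, T)
    r_ads = k_ads * c_nh3 * (1.0 - th)     # NH3 adsorbing on free sites
    r_des = k_des * th                     # NH3 desorbing
    r_scr = k_scr * c_no * th              # standard SCR consuming stored NH3
    return [(r_ads - r_des - r_scr) / omega]

# 250 C exhaust, fixed gas concentrations (mol/m3), storage capacity omega
sol = solve_ivp(coverage_ode, (0.0, 200.0), [0.0],
                args=(523.15, 0.02, 0.02, 100.0), max_step=1.0)
print("NH3 surface coverage after 200 s:", round(float(sol.y[0, -1]), 3))
```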

Relevance:

100.00%

Publisher:

Abstract:

Estimating unmeasurable states is an important component of onboard diagnostics (OBD) and control strategy development in diesel exhaust aftertreatment systems. This research focuses on the development of an Extended Kalman Filter (EKF) based state estimator for two of the main components in a diesel engine aftertreatment system: the Diesel Oxidation Catalyst (DOC) and the Selective Catalytic Reduction (SCR) catalyst. One of the key areas of interest is the performance of these estimators when the catalyzed particulate filter (CPF) is being actively regenerated.

In this study, model reduction techniques were developed and used to derive reduced-order models from the 1D models used to simulate the DOC and SCR. As a result of order reduction, the number of states in the estimator is reduced from 12 to 1 per element for the DOC and from 12 to 2 per element for the SCR. The reduced-order models were simulated on the experimental data and compared to the high-fidelity model and the experimental data. The results show that eliminating the heat transfer and mass transfer coefficients has no significant effect on the performance of the reduced-order models, as evidenced by an insignificant change in the kinetic parameters between the reduced-order and 1D models when simulating the experimental data.

An EKF-based estimator of the internal states of the DOC and SCR was developed. The DOC and SCR estimators were simulated on the experimental data to show that the estimator provides improved state estimates compared to a reduced-order model alone. The results showed that using the temperature measurement at the DOC outlet improved the estimates of the CO, NO, NO2, and HC concentrations from the DOC. The SCR estimator was used to evaluate the effect of NH3 and NOx sensors on state estimation quality. Three sensor combinations were evaluated: a NOx sensor only, an NH3 sensor only, and both NOx and NH3 sensors. The NOx-only configuration performed worst, the NH3-only configuration was intermediate, and the combination of NOx and NH3 sensors performed best.
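The abstract does not give the estimator equations, but a generic EKF predict/update cycle of the kind described can be sketched as follows; the one-state toy model, its Jacobians, and all noise covariances are hypothetical.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One Extended Kalman Filter predict/update cycle.
    f, h: nonlinear state-transition and measurement functions;
    F_jac, H_jac: their Jacobians evaluated at the current estimate."""
    # predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # update
    H = H_jac(x_pred)
    v = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ v
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 1-state example: first-order outlet-temperature lag (hypothetical)
f = lambda x, u: x + 0.1 * (u - x)       # discrete-time dynamics
F_jac = lambda x, u: np.array([[0.9]])
h = lambda x: x                          # temperature measured directly
H_jac = lambda x: np.array([[1.0]])
x, P = np.array([300.0]), np.eye(1)
x, P = ekf_step(x, P, np.array([350.0]), np.array([305.0]),
                f, F_jac, h, H_jac, 0.01 * np.eye(1), 0.25 * np.eye(1))
print("state estimate:", x, "covariance:", P)
```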

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the use of model-based geostatistics for choosing the optimal set of sampling locations, collectively called the design, for a geostatistical analysis. Two types of design situations are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing optimal positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is therefore not the optimal choice.
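A minimal sketch of the prediction-variance component of such a design criterion, assuming an exponential covariance with fixed parameters: it scores a design by the average kriging variance over a prediction grid. The paper's Bayesian criterion additionally averages this quantity over a prior on the unknown parameters, which is what rewards designs that mix a regular lattice with close point pairs; the design names and parameter values below are illustrative only.

```python
import numpy as np

def exp_cov(X1, X2, sigma2=1.0, phi=0.25):
    """Exponential covariance sigma2 * exp(-d / phi)."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi)

def avg_pred_var(design, grid, tau2=0.1):
    """Average kriging variance over a prediction grid for a given design."""
    K = exp_cov(design, design) + tau2 * np.eye(len(design))
    k = exp_cov(design, grid)
    var = 1.0 - np.sum(k * np.linalg.solve(K, k), axis=0)
    return var.mean()

rng = np.random.default_rng(0)
g = np.linspace(0.05, 0.95, 20)
grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T

regular = np.array(np.meshgrid(np.linspace(0.1, 0.9, 5),
                               np.linspace(0.1, 0.9, 5))).reshape(2, -1).T
# "lattice plus close pairs": nudge a few lattice points toward a neighbour
mixed = regular.copy()
mixed[::6] += rng.normal(scale=0.03, size=mixed[::6].shape)

print("regular grid    :", avg_pred_var(regular, grid))
print("with close pairs:", avg_pred_var(mixed, grid))
```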

Relevance:

100.00%

Publisher:

Abstract:

Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneity in the error characteristics of different sensors, in terms of both distribution and magnitude, presents problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors; under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, places a Gaussian process prior over the (latent) process being studied, with the sensor model forming part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution is non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
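The projected process idea can be sketched for the simplest (Gaussian likelihood) case: training observations are projected onto a small set of inducing points, so prediction requires solving an m x m system rather than an n x n one. The paper's contribution, replacing the Gaussian likelihood with arbitrary sensor models via sequential approximate Bayesian updates, is omitted here; the kernel, lengthscale, and noise values are illustrative.

```python
import numpy as np

def rbf(X1, X2, ell=0.3):
    """Squared-exponential covariance."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def projected_process_predict(X, y, Xm, Xs, noise=0.1):
    """Projected-process (subset-of-regressors) GP prediction: the n
    training points are projected onto m << n inducing points Xm."""
    Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(len(Xm))
    Kmn = rbf(Xm, X)
    Kms = rbf(Xm, Xs)
    A = noise * Kmm + Kmn @ Kmn.T        # m x m system instead of n x n
    return Kms.T @ np.linalg.solve(A, Kmn @ y)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(2000, 1))    # large simulated sensor dataset
y = np.sin(6 * X[:, 0]) + 0.1 * rng.normal(size=2000)
Xm = np.linspace(0, 1, 15)[:, None]      # inducing points
Xs = np.linspace(0, 1, 5)[:, None]       # prediction locations
print(projected_process_predict(X, y, Xm, Xs))
```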

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

100.00%

Publisher:

Abstract:

We propose a model-based approach to unify clustering and network modeling using time-course gene expression data. Specifically, our approach uses a mixture model to cluster genes, so that genes within the same cluster share a similar expression profile. The network is then built over cluster-specific expression profiles using state-space models. We discuss the application of our model to simulated data as well as to time-course gene expression data arising from animal models of prostate cancer progression. The latter application shows that, with a combined statistical/bioinformatics analysis, we are able to extract gene-to-gene relationships supported by the literature as well as new plausible relationships.
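A minimal sketch of the two-stage idea, assuming a Gaussian mixture for the clustering stage and, as a stand-in for the paper's state-space models, a first-order linear fit over the cluster-mean profiles; the simulated data and cluster count are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# toy time-course data: 300 genes x 10 time points, three latent clusters
base = np.stack([np.sin(np.linspace(0, 3, 10)),
                 np.cos(np.linspace(0, 3, 10)),
                 np.linspace(-1, 1, 10)])
genes = base[rng.integers(0, 3, 300)] + 0.2 * rng.normal(size=(300, 10))

# step 1: mixture-model clustering of expression profiles
gm = GaussianMixture(n_components=3, random_state=0).fit(genes)
labels = gm.predict(genes)

# step 2: cluster-specific mean expression profiles
profiles = np.stack([genes[labels == k].mean(axis=0) for k in range(3)])

# step 3: first-order linear dynamics x_{t+1} = A x_t over the profiles,
#         a least-squares stand-in for the paper's state-space model
X_t, X_t1 = profiles[:, :-1], profiles[:, 1:]
A = X_t1 @ np.linalg.pinv(X_t)
print("estimated cluster-to-cluster influence matrix:\n", A.round(2))
```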

Relevance:

100.00%

Publisher:

Abstract:

Load modeling plays an important role in power system dynamic stability assessment. One of the widely used methods for assessing the impact of load models on system dynamic response is parametric sensitivity analysis, and load ranking provides an effective measure of such impact. Traditionally, load ranking is based on either a static or a dynamic load model alone. In this paper, a load ranking framework based on a composite load model is proposed. It enables comprehensive investigation of load modeling impacts on system stability, considering the dynamic interactions between load and system dynamics. The impact of load composition on the overall sensitivity, and therefore on the ranking of the load, is also investigated. Dynamic simulations are performed to further elucidate the results obtained through the sensitivity-based load ranking approach.
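A sketch of sensitivity-based load ranking as described above: perturb each load's composite-model parameter, measure the change in a scalar stability metric, and rank loads by the magnitude of the sensitivity. The toy response function and parameter values are hypothetical stand-ins for a dynamic simulation.

```python
import numpy as np

def system_response(load_params):
    """Stand-in for a dynamic simulation returning a stability metric
    (e.g., post-fault voltage dip); hypothetical toy function."""
    weights = np.array([0.5, 1.5, 0.8, 2.0])   # bus-dependent influence
    return float(weights @ np.tanh(load_params))

def rank_loads(params, eps=1e-4):
    """Rank loads by finite-difference parametric sensitivity."""
    base = system_response(params)
    sens = np.empty(len(params))
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        sens[i] = (system_response(p) - base) / eps
    return np.argsort(-np.abs(sens)), sens

params = np.array([0.3, 0.6, 0.4, 0.5])  # e.g., motor fraction per load bus
order, sens = rank_loads(params)
print("load ranking (most influential first):", order, sens.round(3))
```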

Relevance:

100.00%

Publisher:

Abstract:

Increasing global competition, rapid technological change, advances in manufacturing and information technology, and discerning customers are forcing supply chains to adopt improvement practices that enable them to deliver high-quality products at a lower cost and in a shorter period of time. A lean initiative is one of the most effective approaches toward achieving this goal. In the lean improvement process, it is critical to measure current and desired performance levels in order to clearly evaluate lean implementation efforts. Many attempts have been made to measure supply chain performance incorporating both quantitative and qualitative measures, but they fail to provide an effective method of measuring performance improvements in dynamic lean supply chain situations. Appropriate measurement of lean supply chain performance has therefore become imperative. Many lean tools are available for supply chains; however, the effectiveness of a lean tool depends on the type of product and supply chain. One tool may be highly effective for a supply chain involved in high-volume products but may not be effective for low-volume products. There is currently no systematic methodology available for selecting appropriate lean strategies based on the type of supply chain and market strategy.

This thesis develops an effective method to measure the performance of a supply chain using both quantitative and qualitative metrics, and investigates the effects of product types and lean tool selection on supply chain performance. Supply chain performance metrics, and the effects of various lean tools on the performance metrics defined in the SCOR framework, have been investigated. A lean supply chain model based on the SCOR metric framework is then developed in which non-lean and lean, as well as quantitative and qualitative, metrics are incorporated. The values of the metrics are converted into triangular fuzzy numbers using similarity rules and heuristic methods. Data have been collected from an apparel manufacturing company for multiple supply chain products, and a fuzzy-based method is applied to measure the performance improvements in the supply chains. Using the fuzzy TOPSIS method, which chooses the optimum alternative by maximising similarity to the positive ideal solution and minimising similarity to the negative ideal solution, the performance of lean and non-lean supply chain situations for three different apparel products has been evaluated.

To address the research questions concerning an effective performance evaluation method and the effects of lean tools on different types of supply chains, a conceptual framework and two hypotheses are investigated. Empirical results show that the implementation of lean tools has significant effects on performance improvements in terms of time, quality, and flexibility. The fuzzy TOPSIS based method developed is able to integrate multiple supply chain metrics into a single performance measure, while the lean supply chain model incorporates qualitative and quantitative metrics; it can therefore effectively measure the improvements in a supply chain after implementing lean tools. It is demonstrated that the product types involved in the supply chain and the ability to select the right lean tools have a significant effect on lean supply chain performance. Future studies could conduct multiple case studies in different contexts.
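A compact sketch of the fuzzy TOPSIS step described above, assuming triangular fuzzy numbers, benefit-type criteria, and vertex distances; the alternatives, criteria, scores, and weights are invented for illustration.

```python
import numpy as np

def tfn_distance(x, y):
    """Vertex distance between triangular fuzzy numbers (a, b, c)."""
    return np.sqrt(((x - y) ** 2).mean(axis=-1))

def fuzzy_topsis(D, weights):
    """D: alternatives x criteria x 3 array of TFNs (benefit criteria only).
    Returns closeness coefficients; higher = closer to the fuzzy ideal."""
    c_max = D[..., 2].max(axis=0)            # normalise by max upper value
    R = D / c_max[None, :, None]
    V = R * weights[None, :, None]           # weighted normalised TFNs
    fpis = V.max(axis=0)                     # fuzzy positive ideal solution
    fnis = V.min(axis=0)                     # fuzzy negative ideal solution
    d_pos = tfn_distance(V, fpis[None]).sum(axis=1)
    d_neg = tfn_distance(V, fnis[None]).sum(axis=1)
    return d_neg / (d_pos + d_neg)

# three supply-chain situations x two criteria (time, quality), TFN scores
D = np.array([[[3, 5, 7], [5, 7, 9]],        # non-lean
              [[5, 7, 9], [7, 9, 9]],        # partially lean
              [[7, 9, 9], [7, 9, 9]]], dtype=float)   # lean
print(fuzzy_topsis(D, np.array([0.6, 0.4])))
```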

Relevance:

100.00%

Publisher:

Abstract:

Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to each user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
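A minimal sketch of the SVM user-model half of this technique, assuming TF-IDF features over the text content of XML documents; the adaptive genetic algorithm component and the similarity metric are omitted, and the corpus and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# toy stand-in corpus: text content extracted from XML documents,
# labelled with user-interest categories learned from feedback
docs = ["stock market quarterly earnings report",
        "football league championship results",
        "central bank interest rate decision",
        "tennis grand slam final highlights"]
labels = ["finance", "sports", "finance", "sports"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
user_model = LinearSVC().fit(X, labels)   # multi-class SVM user model

# disseminate an incoming document only if it matches the user's interests
incoming = vec.transform(["bank announces rate cut"])
print(user_model.predict(incoming))       # expected: ['finance']
```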

Relevance:

100.00%

Publisher:

Abstract:

Mobile ad hoc networks (MANETs) are one of the successful wireless network paradigms, offering unrestricted mobility without depending on any underlying infrastructure. MANETs have become an exciting and important technology in recent years because of the rapid proliferation of wireless devices and the increased use of ad hoc networks in various applications. Like any other networks, MANETs are prone to a variety of attacks, chiefly on the routing side. Most of the proposed secure routing solutions based on cryptography and authentication methods have high overhead, which results in latency problems and resource shortages, especially of energy. The successful operation of these mechanisms also depends on secure key management involving a trusted third authority, which is generally difficult to implement in a MANET environment due to the volatile topology. Designing a secure routing algorithm for MANETs that incorporates the notion of trust without maintaining any trusted third entity has been an interesting research problem in recent years. This paper proposes a new trust model based on cognitive reasoning, which associates the notion of trust with all the member nodes of a MANET using a novel Behaviors-Observations-Beliefs (BOB) model. These trust values are used for the detection and prevention of malicious and dishonest nodes while routing data. The proposed trust model works with the DTM-DSR protocol, which computes the direct trust between any two nodes using cognitive knowledge. Trust fading over time, rewards, and penalties are taken into account while computing the trustworthiness of a node, and also of a route. A simulator was developed for testing the proposed algorithm; the experimental results show that incorporating cognitive reasoning into the computation of trust in routing effectively detects intrusions in the MANET environment and generates more reliable routes for secure routing of data.
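The BOB model itself is not specified in the abstract, but a direct-trust update with fading over time, rewards, and penalties, as mentioned, might look like the following sketch; all constants and the observation vocabulary are hypothetical.

```python
import numpy as np

def update_trust(trust, observation, dt, fade=0.05,
                 reward=0.10, penalty=0.25):
    """Direct-trust update for a neighbour node (hypothetical constants):
    trust fades toward neutral over time, cooperative behaviour is
    rewarded, and misbehaviour is penalised more strongly."""
    trust = 0.5 + (trust - 0.5) * np.exp(-fade * dt)  # fading toward 0.5
    if observation == "forwarded":                    # packet forwarded
        trust = min(1.0, trust + reward)
    elif observation == "dropped":                    # packet dropped
        trust = max(0.0, trust - penalty)
    return trust

t = 0.5
for obs in ["forwarded", "forwarded", "dropped", "forwarded"]:
    t = update_trust(t, obs, dt=1.0)
    print(f"{obs:9s} -> trust {t:.3f}")

# a route's trust can then be taken as the product (or minimum) of the
# direct trust of its hops, and low-trust nodes excluded from routing
```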

Relevance:

100.00%

Publisher:

Abstract:

State estimation is one of the most important functions in an energy control centre. A computationally efficient state estimator that is free from numerical instability and ill-conditioning is essential for security assessment of the electric power grid. Whereas approaches that successfully overcome the numerical ill-conditioning issues have been proposed, an efficient algorithm for addressing the convergence issues in the presence of topological errors has yet to be developed. Trust region (TR) methods have been successfully employed to overcome the divergence problem to a certain extent. In this study, case studies are presented in which conventional algorithms, including the existing TR methods, fail to converge. A linearised model-based TR method for successfully overcoming the convergence issues is proposed. On the computational front, unlike the existing TR methods for state estimation, which employ quadratic models, the proposed linear model-based estimator is computationally efficient because the model minimiser can be computed in a single step. The model minimiser at each step is computed by minimising the linearised model subject to TR and measurement mismatch constraints. The infinity norm is used to define the geometry of the TR, and the measurement mismatch constraints are employed to improve accuracy. The proposed algorithm is compared with the quadratic model-based TR algorithm in case studies on the IEEE 30-bus system and on 205-bus and 514-bus equivalent systems of part of the Indian grid.
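A sketch of the key computational point, under stated assumptions: with a linear local model and an infinity-norm trust region, the model minimiser sits at a corner of the box and is obtained in one step from the sign of the gradient. The toy measurement model and the simple accept/shrink rule below are illustrative, not the paper's full algorithm (the measurement mismatch constraints are omitted).

```python
import numpy as np

def wls_objective(x, z, h, W):
    """Weighted least-squares state estimation objective."""
    r = z - h(x)
    return float(r @ W @ r)

def tr_linear_step(x, z, h, H_jac, W, delta):
    """Linearised-model TR step with an infinity-norm region: the linear
    model's minimiser over ||dx||_inf <= delta lies at a box corner, so it
    is computed in a single step from the gradient sign."""
    r = z - h(x)
    g = -2.0 * H_jac(x).T @ W @ r        # gradient of the WLS objective
    return x - delta * np.sign(g)

# toy 2-state, 3-measurement nonlinear estimation problem (hypothetical)
h = lambda x: np.array([x[0] ** 2, x[0] * x[1], x[1]])
H_jac = lambda x: np.array([[2 * x[0], 0.0],
                            [x[1], x[0]],
                            [0.0, 1.0]])
z = np.array([1.0, 0.5, 0.5])            # measurements; true state (1, 0.5)
W = np.eye(3)

x, delta = np.array([2.0, 2.0]), 0.5
for _ in range(40):
    x_new = tr_linear_step(x, z, h, H_jac, W, delta)
    # standard TR acceptance: shrink the region if the step fails to improve
    if wls_objective(x_new, z, h, W) < wls_objective(x, z, h, W):
        x = x_new
    else:
        delta *= 0.5
print("estimate:", x.round(3))
```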

Relevance:

100.00%

Publisher:

Abstract:

A new thermal model based on the Fourier series expansion method is presented for dynamic thermal analysis of power devices. The thermal model has been programmed in MATLAB SIMULINK and integrated with a previously reported physics-based electrical model. The model was verified for accuracy using a two-dimensional Fourier model and a two-dimensional finite difference model for comparison. To validate the thermal model, experiments using a 600 V, 50 A IGBT module switching an inductive load were completed under high-frequency operation. The thermal measurements show an excellent match with the simulated temperature variations and temperature time-response within the power module.
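As an illustration of the Fourier series expansion method for transient conduction (not the paper's power-module model), the sketch below evaluates the classical series solution for a 1-D layer with both boundaries held at the ambient temperature; the geometry, diffusivity, and initial hot-spot profile are hypothetical.

```python
import numpy as np

def fourier_temperature(f0, L, alpha, x, t, n_terms=50):
    """Transient 1-D conduction with both ends held at 0:
    T(x, t) = sum_n b_n sin(n pi x / L) exp(-alpha (n pi / L)^2 t),
    with b_n the Fourier sine coefficients of the initial profile f0."""
    m = 2000
    dx = L / m
    xs = np.linspace(dx / 2, L - dx / 2, m)        # midpoint quadrature
    T = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        bn = (2.0 / L) * np.sum(f0(xs) * np.sin(n * np.pi * xs / L)) * dx
        T += (bn * np.sin(n * np.pi * x / L)
              * np.exp(-alpha * (n * np.pi / L) ** 2 * t))
    return T

# initial hot spot in the middle of a thin silicon layer (hypothetical)
L, alpha = 1e-3, 8.8e-5                  # m, m^2/s (roughly silicon)
f0 = lambda x: 50.0 * np.exp(-((x - L / 2) / (L / 10)) ** 2)
x = np.linspace(0, L, 5)
for t in (0.0, 1e-3, 5e-3):
    print(f"t = {t:6.4f} s:", fourier_temperature(f0, L, alpha, x, t).round(2))
```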

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a disturbance controller is designed to make the robotic system behave as a decoupled linear system, according to the internal model concept. Based on the resulting linear system, the paper presents an iterative learning control algorithm for robotic manipulators, and a sufficient condition for convergence is provided. The selection of the algorithm's parameter values is simple, and the convergence condition is easy to meet. The simulation results demonstrate the effectiveness of the algorithm.
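A minimal sketch of a P-type iterative learning control law of this kind, applied to a toy first-order plant: the input profile is corrected between trials by the previous trial's tracking error. The plant parameters and learning gain are hypothetical; the gain satisfies the usual contraction condition |1 - L*b| < 1 for this plant.

```python
import numpy as np

# toy discrete plant y(t+1) = a*y(t) + b*u(t) (hypothetical parameters)
a, b, N = 0.8, 0.5, 50
y_ref = np.sin(np.linspace(0, 2 * np.pi, N + 1))[1:]  # desired trajectory

def run_trial(u):
    """Run one trial of the plant from rest and record the output."""
    y, out = 0.0, np.empty(N)
    for t in range(N):
        y = a * y + b * u[t]
        out[t] = y
    return out

# P-type ILC update: u_{k+1}(t) = u_k(t) + L * e_k(t+1)
L_gain = 1.2                              # |1 - L*b| = 0.4 < 1, converges
u = np.zeros(N)
for k in range(10):
    e = y_ref - run_trial(u)              # tracking error of trial k
    u = u + L_gain * e                    # learn a better input profile
    print(f"iteration {k}: max tracking error {np.abs(e).max():.4f}")
```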