296 results for Process parameters


Relevance: 30.00%

Abstract:

In recent years, the application of heterogeneous photocatalytic water purification processes has gained wide attention due to their effectiveness in degrading and mineralizing recalcitrant organic compounds, as well as the possibility of utilizing the solar UV and visible light spectrum. This paper reviews and summarizes recently published work on the titanium dioxide (TiO2) photocatalytic oxidation of pesticides and phenolic compounds, which predominate in storm and waste water effluents. The effects of various operating parameters on the photocatalytic degradation of pesticides and phenols are discussed. The results reported here suggest that the photocatalytic degradation of organic compounds depends on the type and composition of the photocatalyst, light intensity, initial substrate concentration, amount of catalyst, pH of the reaction medium, ionic components in the water, solvent type, oxidizing agents/electron acceptors, catalyst application mode, and calcination temperature. A substantial amount of research has focused on enhancing TiO2 photocatalysis by modification with metal, non-metal and ion doping. Recent developments in TiO2 photocatalysis for the degradation of various pesticides and phenols are also highlighted in this review. It is evident from the literature survey that photocatalysis has shown good potential for the removal of various organic pollutants; however, the practical utility of this technique at commercial scale remains to be established.

Relevance: 30.00%

Abstract:

Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations respectively.
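For the Ornstein–Uhlenbeck process a closed-form transition density does exist, so exact maximum likelihood is available as a baseline: the discretely sampled process is a Gaussian AR(1), and the MLE reduces to conditional least squares. A minimal sketch (the parameter values, step size and sample size are illustrative assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an OU path, dX = theta*(mu - X) dt + sigma dW, using its exact
# Gaussian transition (theta, mu, sigma, dt and n are assumed for illustration)
theta, mu, sigma, dt, n = 2.0, 1.0, 0.5, 0.01, 100_000
b = np.exp(-theta * dt)                        # AR(1) coefficient
s = sigma * np.sqrt((1 - b**2) / (2 * theta))  # conditional standard deviation
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + (x[t] - mu) * b + s * rng.standard_normal()

# Exact MLE: the sampled process is a Gaussian AR(1), so conditional least
# squares on consecutive observations gives the maximum-likelihood estimates
X, Y = x[:-1], x[1:]
b_hat, a_hat = np.polyfit(X, Y, 1)
theta_hat = -np.log(b_hat) / dt
mu_hat = a_hat / (1 - b_hat)
resid = Y - a_hat - b_hat * X
sigma_hat = np.sqrt(resid.var() * 2 * theta_hat / (1 - b_hat**2))

print(theta_hat, mu_hat, sigma_hat)  # recover roughly 2.0, 1.0, 0.5
```

The same exactness is not available for the Cox–Ingersoll–Ross process under a Gaussian approximation, which is why the competing approximate procedures the article reviews are needed there.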

Relevance: 30.00%

Abstract:

The calibration process in micro-simulation is extremely complicated. The difficulties are more prevalent if the process encompasses fitting both aggregate and disaggregate parameters, e.g. travel time and headway. Current practice in calibration operates mostly at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulated calibration at the disaggregate level. This study will focus on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and managing variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane will be used as a case study. The discussion will primarily cover the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving behaviour) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. The aspects of calibrating for points, in comparison to calibrating for the whole network, will also be briefly addressed to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia/New Zealand, where micro-simulation, particularly at the point level, is still comparatively unexplored territory in motorway management.

Relevance: 30.00%

Abstract:

Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood, and will often ignore any uncertainty around estimated values. This can be problematic, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an Approximate Bayesian Computation (ABC) method for the estimation of parameters of a stochastic cellular automaton (CA). As an example we use a CA constructed to simulate a range expansion, such as might occur after a biological invasion, making parameter estimates using only count data such as could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, yielding accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long-distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but has until now proved difficult to estimate accurately.
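The rejection-sampling idea at the heart of ABC can be sketched independently of the authors' CA: draw parameters from a prior, simulate, and keep draws whose simulated summary statistics land close to the observed ones. The toy growth model, prior and tolerance below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_counts(r, n_steps=20, n0=200):
    """Toy stochastic growth model standing in for the cellular automaton:
    each generation's population is Poisson with mean (1 + r) times the last."""
    n, counts = n0, []
    for _ in range(n_steps):
        n = rng.poisson(n * (1 + r))
        counts.append(n)
    return np.array(counts)

def summary(counts):
    # A single summary statistic: the final count on a log scale
    return np.log1p(counts[-1])

# "Field" observations generated with a known rate that we then try to recover
true_r = 0.1
observed = simulate_counts(true_r)

# ABC rejection: draw from the prior, simulate, and keep draws whose summary
# falls within the tolerance of the observed summary
accepted = []
for _ in range(10_000):
    r = rng.uniform(0.0, 0.3)  # uniform prior on the intrinsic rate of increase
    if abs(summary(simulate_counts(r)) - summary(observed)) < 0.1:
        accepted.append(r)

posterior_mean = float(np.mean(accepted))
print(len(accepted), posterior_mean)  # posterior mean should sit near true_r
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is exactly the property that makes ABC attractive when the CA's likelihood is intractable.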

Relevance: 30.00%

Abstract:

The compressed gas industry and government agencies worldwide utilize "adiabatic compression" testing to qualify high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. This methodology is known by various terms, the most common being adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing. The test methodology is described in greater detail throughout this document, but in summary it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, the sudden increase in pressure rapidly heats them to temperatures (thermal energies) sufficient to sometimes ignite the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression, the more "adiabatic" the pressure surge is presumed to be and the more closely it is argued to approximate an isentropic process. Adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen and has been implicated in many fire investigations. Because many nonmetallic materials are easily ignited by this heating mechanism, many industry standards prescribe this testing. However, the results between laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile achieved (i.e., the temperature/time history of the gas) during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made.
This research evaluated the following questions: 1) Can the rapid compression process required by the industry standards be modeled thermodynamically and fluid-dynamically so that the thermal profiles can be predicted? 2) Can the thermal profiles produced by the rapid compression process be measured, both to validate the thermodynamic and fluid dynamic models and to estimate the severity of the test? 3) Can controlling parameters be recommended so that new guidelines may be established for the industry standards, resolving inconsistencies between test laboratories conducting tests according to the present standards?
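An upper bound on the gas temperature reached in such a pressure surge follows from the ideal-gas isentropic relation T2 = T1 * (P2/P1)^((gamma - 1)/gamma). This is the textbook bound the terminology alludes to, not the validated model the research develops, and the pressures chosen below are illustrative assumptions:

```python
# Ideal-gas isentropic estimate of the peak gas temperature produced by a
# rapid pressure surge: T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma).
# Illustrative values only, not a standard's prescribed test conditions.
gamma = 1.40     # ratio of specific heats for oxygen (approximate)
T1 = 293.15      # initial gas temperature, K (20 degrees C)
P1 = 101.325e3   # initial pressure, Pa (1 atm)
P2 = 25.0e6      # final test pressure, Pa (about 250 bar)

T2 = T1 * (P2 / P1) ** ((gamma - 1.0) / gamma)
print(f"isentropic peak temperature: {T2:.0f} K")  # on the order of 1400 K
```

Real surges fall short of this bound because of heat transfer and mixing, which is one reason the thermal profile actually achieved in testing needs to be modeled and measured rather than assumed.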

Relevance: 30.00%

Abstract:

We present a formalism for the analysis of sensitivity of nuclear magnetic resonance pulse sequences to variations of pulse sequence parameters, such as radiofrequency pulses, gradient pulses or evolution delays. The formalism enables the calculation of compact, analytic expressions for the derivatives of the density matrix and the observed signal with respect to the parameters varied. The analysis is based on two constructs computed in the course of modified density-matrix simulations: the error interrogation operators and error commutators. The approach presented is consequently named the Error Commutator Formalism (ECF). It is used to evaluate the sensitivity of the density matrix to parameter variation based on the simulations carried out for the ideal parameters, obviating the need for finite-difference calculations of signal errors. The ECF analysis therefore carries a computational cost comparable to a single density-matrix or product-operator simulation. Its application is illustrated using a number of examples from basic NMR spectroscopy. We show that the strength of the ECF is its ability to provide analytic insights into the propagation of errors through pulse sequences and the behaviour of signal errors under phase cycling. Furthermore, the approach is algorithmic and easily amenable to implementation in the form of a programming code. It is envisaged that it could be incorporated into standard NMR product-operator simulation packages.
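The commutator structure underlying such sensitivity analysis can be illustrated on a minimal case: for a propagator U = exp(-i*theta*H), the derivative of the transformed density matrix with respect to theta is the commutator -i[H, U rho U†]. The 2x2 spin-1/2 toy below checks that identity against finite differences; it is the basic ingredient of commutator-based error analysis, not the paper's full ECF machinery:

```python
import numpy as np
from scipy.linalg import expm

# For a pulse propagator U = expm(-1j*theta*H), the parameter derivative of
# the transformed density matrix is a commutator:
#     d/dtheta (U rho U†) = -1j * [H, U rho U†]
H = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli-x "pulse" Hamiltonian
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # spin-up initial state

def rho(theta):
    U = expm(-1j * theta * H)
    return U @ rho0 @ U.conj().T

theta = 0.3
analytic = -1j * (H @ rho(theta) - rho(theta) @ H)   # commutator expression
eps = 1e-6
numeric = (rho(theta + eps) - rho(theta - eps)) / (2 * eps)  # central difference
err = np.max(np.abs(analytic - numeric))
print(err)  # tiny: the analytic commutator matches the finite difference
```

The analytic expression costs one simulation at the ideal parameters, whereas the finite-difference check needs two perturbed simulations per parameter, which is the computational saving the formalism scales up.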

Relevance: 30.00%

Abstract:

Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.

Relevance: 30.00%

Abstract:

The flexible design concept is a relatively new trend in airport terminal design, believed to accommodate the ever-changing needs of a terminal. Architectural design processes become more complex every day with the introduction of new building technologies, and the concept of a flexible airport terminal makes the design process more complex still. Previous studies have demonstrated that the ever-growing aviation industry requires airport terminals to be planned, designed and constructed in a way that allows flexibility in the design process. In order to adopt the philosophy of 'design for flexibility', architects need to address a wide range of differing needs. An appropriate integration of process models, prior to the airport terminal design process, is expected to uncover the relationships that exist between spatial layouts and their corresponding functions. The current paper seeks to develop a way of sharing space-adjacency information obtained from Business Process Models (BPM) to assist in defining flexible airport terminal layouts. Critical design parameters are briefly investigated at this stage of the research while reviewing the available design alternatives, and an evaluation framework is proposed. Information obtained from the various design layouts should assist in identifying and defining flexible design matrices, allowing architects to interpret and apply them throughout the lifecycle of the terminal building.

Relevance: 30.00%

Abstract:

The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common rail diesel engine have been investigated. Specifically, the maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were investigated. A new methodology for investigating the start of combustion was also proposed and demonstrated; it is particularly useful with noisy in-cylinder pressure data, which can have a significant effect on the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is given by presenting the results as kernel density estimates, allowing investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study found that operation of a common rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. Further, this study also concluded that if the engine is operated with absolute air-to-fuel ratios (mole basis) less than 80, inter-cycle variability is substantially increased compared with normal operation.
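The point about kernel density estimates versus the coefficient of variation can be made concrete with synthetic data: a bimodal distribution of "peak pressures" yields an unremarkable CoV while the KDE exposes both modes. All numbers below are made up for illustration and are not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic bimodal "peak pressure" samples (bar): an illustrative stand-in
# for cycle-to-cycle in-cylinder data
p = np.concatenate([rng.normal(68.0, 1.0, 500), rng.normal(72.0, 1.0, 500)])

# The coefficient of variation compresses the variability to one number...
cov = p.std() / p.mean()

# ...whereas a Gaussian kernel density estimate (Silverman's rule-of-thumb
# bandwidth) exposes the two modes directly
h = 1.06 * p.std() * len(p) ** -0.2
grid = np.linspace(p.min(), p.max(), 400)
density = np.exp(-0.5 * ((grid[:, None] - p[None, :]) / h) ** 2).sum(axis=1)
density /= len(p) * h * np.sqrt(2 * np.pi)

print(f"CoV = {cov:.1%}")  # small, looks unremarkable
# density peaks near 68 and 72 bar with a dip near 70: bimodal behaviour
# that the coefficient of variation alone cannot reveal
```

A skewed distribution would similarly be invisible in the CoV but obvious in the estimated density.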

Relevance: 30.00%

Abstract:

A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
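The quasi-likelihood idea can be sketched for the scalar CIR process using the crudest (Euler) moment approximation; the procedure described above matches the true first and second moments more closely, so this is a simplified stand-in with assumed parameter values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulate a CIR path, dX = kappa*(theta - X) dt + sigma*sqrt(X) dW, by
# Euler stepping; the true parameter values are assumptions for illustration
kappa, theta, sigma, dt, n = 1.5, 0.06, 0.2, 0.01, 50_000
x = np.empty(n)
x[0] = theta
for t in range(n - 1):
    x[t + 1] = abs(x[t] + kappa * (theta - x[t]) * dt
                   + sigma * np.sqrt(x[t] * dt) * rng.standard_normal())

# Gaussian quasi-likelihood: replace the unknown transition density with a
# normal whose mean and variance come from a moment approximation
X, Y = x[:-1], x[1:]

def neg_qll(p):
    k, th, s = p
    if s <= 0:
        return np.inf
    m = X + k * (th - X) * dt  # approximate conditional mean
    v = s ** 2 * X * dt        # approximate conditional variance
    return 0.5 * np.sum(np.log(v) + (Y - m) ** 2 / v)

fit = minimize(neg_qll, x0=[1.0, 0.05, 0.3], method="Nelder-Mead")
k_hat, th_hat, s_hat = fit.x
print(k_hat, th_hat, s_hat)  # recover roughly 1.5, 0.06, 0.2
```

Because the CIR drift and diffusion are affine, matching the exact conditional moments (as the procedure above does) removes the discretisation bias that the Euler moments used here retain at coarse sampling intervals.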

Relevance: 30.00%

Abstract:

This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both four-roller and six-roller milling units, it is primarily developed for the six-roller mills widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model "MILSIM" developed at the University of Queensland nearly four decades ago. Although the MILSIM model was widely adopted in the Australian sugar industry for simulating the milling process, it did contain some incorrect assumptions. The study aimed to eliminate the incorrect assumptions of the previous model and develop an advanced model that represents the milling process correctly and tracks the flow of other cane components that previous models have not considered. The development of the milling process model was done in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed, and a complete milling train along with the juice screen was modelled. The MILEX model was validated with factory data, and the variation in the mill performance parameters was observed and studied. Case studies were undertaken on the effect of fibre in juice streams, juice in cush return and imbibition% fibre on the extraction performance of the milling process. It was concluded that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model; new empirical relations have to be developed before the model can be applied with confidence.
Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply, and their distribution in juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and the further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides additional insight into the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension of the MILEX model for studying the overall performance of the milling train. Thirdly, the developed models were incorporated into the proprietary software package "SysCAD" for advanced operational efficiency and for availability in the 'whole of factory' model. The MILEX model was developed in SysCAD to represent a single milling unit, and eventually the entire milling train and the juice screen were developed in SysCAD using a series of controllers and features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file, and reports can be generated in Excel sheets. The flexibility of the software, its ease of use and other advantages are described in the relevant chapter. The MILEX model is developed in static mode and dynamic mode; application of the dynamic mode is still in progress.

Relevance: 30.00%

Abstract:

It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. This survey examines existing approaches in this field based on a common set of criteria and illustrates their key concepts using a running example. The analysis shows that existing approaches are characterized by the fact that they extend a conventional process modeling language with constructs able to capture customizable process models. A customizable process model represents a family of process variants in such a way that each variant can be derived by adding or deleting fragments according to configuration parameters or a domain model. The survey reveals an abundance of customizable process modeling languages, embodying a diverse set of constructs. In contrast, there is comparatively little tool support for analyzing and constructing customizable process models, as well as a scarcity of empirical evaluations of languages in the field.
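The derivation step described above, in which fragments are kept or deleted according to configuration parameters, can be caricatured in a few lines. All process and parameter names here are hypothetical, and real customizable-process languages are far richer than this sketch:

```python
# Toy "customizable process model": a master fragment list whose entries
# carry guards over configuration parameters, plus a derivation step that
# keeps or deletes fragments. Names are invented for illustration.
MASTER_MODEL = [
    ("receive order",      lambda cfg: True),
    ("check export rules", lambda cfg: cfg["country"] != "domestic"),
    ("invoice in EUR",     lambda cfg: cfg["currency"] == "EUR"),
    ("invoice in USD",     lambda cfg: cfg["currency"] == "USD"),
    ("ship goods",         lambda cfg: True),
]

def derive_variant(cfg):
    """Derive one process variant by deleting fragments whose guard fails."""
    return [task for task, guard in MASTER_MODEL if guard(cfg)]

domestic = derive_variant({"country": "domestic", "currency": "EUR"})
export   = derive_variant({"country": "DE", "currency": "USD"})
print(domestic)  # ['receive order', 'invoice in EUR', 'ship goods']
print(export)    # ['receive order', 'check export rules', 'invoice in USD', 'ship goods']
```

The survey's observation is that most languages encode something like the guards above directly in the modeling notation, while tools that check or help construct such guarded models remain scarce.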

Relevance: 30.00%

Abstract:

Background: Commercially available instrumented treadmill systems that provide continuous measures of temporospatial gait parameters have recently become available for clinical gait analysis. This study evaluated the level of agreement between temporospatial gait parameters derived from a new instrumented treadmill, which incorporates a capacitance-based pressure array, and those measured by a conventional instrumented walkway (criterion standard). Methods: Temporospatial gait parameters were estimated for 39 healthy adults while walking over an instrumented walkway (GAITRite®) and an instrumented treadmill system (Zebris) at matched speed. Differences in temporospatial parameters derived from the two systems were evaluated using repeated-measures ANOVA models. Pearson product-moment correlations were used to investigate relationships between variables measured by each system. Agreement was assessed by calculating the bias and 95% limits of agreement. Results: All temporospatial parameters measured via the instrumented walkway were significantly different from those obtained from the instrumented treadmill (P < .01). Temporospatial parameters derived from the two systems were highly correlated (r, 0.79–0.95). The 95% limits of agreement for temporal parameters were typically less than ±2% of gait cycle duration; however, the 95% limits of agreement for spatial measures were as much as ±5 cm. Conclusions: Differences in temporospatial parameters between systems were small but statistically significant, and of similar magnitude to changes reported between shod and unshod gait in healthy young adults. Temporospatial parameters derived from an instrumented treadmill are therefore not representative of those obtained from an instrumented walkway and should not be interpreted with reference to literature on overground walking.
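The agreement statistics reported above, bias and 95% limits of agreement, come from a standard Bland–Altman analysis and are simple to compute. The synthetic measurements below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic step-length measurements (cm) from two systems: the "treadmill"
# is given a small systematic offset plus random disagreement, purely to
# illustrate how the agreement statistics are computed
walkway   = rng.normal(65.0, 5.0, 39)
treadmill = walkway - 2.0 + rng.normal(0.0, 1.5, 39)

diff = treadmill - walkway
bias = diff.mean()               # systematic difference between systems
loa  = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

print(f"bias = {bias:.2f} cm, 95% LoA = {bias - loa:.2f} to {bias + loa:.2f} cm")
```

A high correlation between the systems is compatible with a wide limits-of-agreement interval, which is why the study reports both rather than relying on correlation alone.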

Relevance: 30.00%

Abstract:

Carrying capacity assessments model a population's potential self-sufficiency. A crucial first step in the development of such modelling is to examine the basic resource-based parameters defining the population's production and consumption habits. These parameters include basic human needs such as food, water, shelter and energy, together with climatic, environmental and behavioural characteristics. Each of these parameters imparts land-usage requirements in different ways and to varied degrees, so their incorporation into carrying capacity modelling also differs. Given that the availability and values of production parameters may differ between locations, no two carrying capacity models are likely to be exactly alike. However, the essential parameters themselves can remain consistent, so one example, the Carrying Capacity Dashboard, is offered as a case study to highlight one way in which these parameters are utilised. While findings from carrying capacity assessment modelling have been published, guidelines for replicating such studies in other regions and at other scales have to date largely been overlooked. This paper addresses this shortcoming by describing a process for the inclusion and calibration of the most important resource-based parameters in a way that could be repeated elsewhere.