353 results for operations model
in University of Queensland eSpace - Australia
Abstract:
A large number of models have been derived from the two-parameter Weibull distribution and are referred to as Weibull models. They exhibit a wide range of shapes for the density and hazard functions, which makes them suitable for modelling complex failure data sets. The WPP and IWPP plots allow one to determine in a systematic manner whether one or more of these models are suitable for modelling a given data set. This paper deals with this topic.
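For reference, the WPP (Weibull probability paper) plot mentioned above is based on the transformation
    x = \ln t, \qquad y = \ln[-\ln(1 - F(t))],
which for the two-parameter Weibull distribution F(t) = 1 - \exp[-(t/\alpha)^\beta] reduces to the straight line y = \beta x - \beta \ln\alpha; systematic departures from linearity therefore indicate which of the derived Weibull models may be more appropriate for a given data set.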
Abstract:
Current theoretical thinking about dual processes in recognition relies heavily on the measurement operations embodied within the process dissociation procedure. We critically evaluate the ability of this procedure to support this theoretical enterprise. We show that there are alternative processes that would produce a rough invariance in familiarity (a key prediction of the dual-processing approach) and that the process dissociation procedure does not have the power to differentiate between these alternative possibilities. We also show that attempts to relate parameters estimated by the process dissociation procedure to subjective reports (remember-know judgments) cannot differentiate between alternative dual-processing models and that there are problems with some of the historical evidence and with obtaining converging evidence. Our conclusion is that more specific theories incorporating ideas about representation and process are required.
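For context, the measurement operations of the process dissociation procedure rest on two standard equations relating recollection R and familiarity F to performance under inclusion and exclusion instructions (assuming independence of the two processes):
    P(\text{inclusion}) = R + (1 - R)F, \qquad P(\text{exclusion}) = (1 - R)F,
so that R = P(\text{inclusion}) - P(\text{exclusion}) and F = P(\text{exclusion}) / (1 - R). The invariance claim evaluated above concerns the estimate of F obtained in this way.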
Abstract:
An important consideration in the development of mathematical models for dynamic simulation is the identification of the appropriate mathematical structure. By building models with an efficient structure, devoid of redundancy, it is possible to create simple, accurate and functional models. This leads not only to efficient simulation, but also to a deeper understanding of the important dynamic relationships within the process. In this paper, a method is proposed for systematic model development for startup and shutdown simulation which is based on the identification of the essential process structure. The key tool in this analysis is the method of nonlinear perturbations for structural identification and model reduction. Starting from a detailed mathematical process description, both singular and regular structural perturbations are detected. These techniques are then used to give insight into the system structure and, where appropriate, to eliminate superfluous model equations or reduce them to other forms. This process retains the ability to interpret the reduced-order model in terms of the physico-chemical phenomena. Using this model reduction technique it is possible to attribute observable dynamics to particular unit operations within the process. This relationship then highlights the unit operations which must be accurately modelled in order to develop a robust plant model. The technique generates detailed insight into the dynamic structure of the models, providing a basis for system re-design and dynamic analysis. The technique is illustrated on the modelling of an evaporator startup. Copyright (C) 1996 Elsevier Science Ltd
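As a generic illustration of the two kinds of structural perturbation referred to above (not the paper's evaporator model): a singular perturbation arises when a small parameter \varepsilon multiplies a time derivative, and letting \varepsilon \to 0 replaces that differential equation with an algebraic one,
    \varepsilon \frac{dz}{dt} = g(x, z), \quad \frac{dx}{dt} = f(x, z) \;\;\longrightarrow\;\; 0 = g(x, z), \quad \frac{dx}{dt} = f(x, z),
which reduces the model order while the eliminated equation retains its physical meaning as a quasi-steady-state relation; a regular perturbation, by contrast, identifies terms whose contribution remains uniformly small and which can simply be dropped.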
Abstract:
In this paper we study the n-fold multiplicative model involving Weibull distributions and examine some properties of the model. These include the shapes of the density and failure rate functions and the WPP plot. These allow one to decide whether the model can adequately describe a given data set. We also discuss the estimation of model parameters based on the WPP plot. (C) 2001 Elsevier Science Ltd. All rights reserved.
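For reference, the n-fold multiplicative Weibull model studied here has distribution function equal to the product of n two-parameter Weibull distribution functions,
    G(t) = \prod_{i=1}^{n} F_i(t), \qquad F_i(t) = 1 - \exp[-(t/\alpha_i)^{\beta_i}],
and it is the density, failure rate and WPP plot of this composite G(t) that the paper characterises.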
Abstract:
The purpose of this study was threefold. First, the study was designed to illustrate the use of data and information collected in food safety surveys in a quantitative risk assessment; in this case the focus was on the food service industry, although similar data from other parts of the food chain could be incorporated in the same way. The second objective was to quantitatively describe and better understand the role that the food service industry plays in the safety of food. The third objective was to illustrate the additional decision-making information that becomes available when uncertainty and variability are incorporated into the modelling of such systems. (C) 2002 Elsevier Science B.V. All rights reserved.
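A minimal sketch of how variability and uncertainty might be propagated separately in such a risk model, using nested Monte Carlo loops; the distributions, parameter values and dose-response form below are hypothetical placeholders, not the study's actual exposure model:

import numpy as np

rng = np.random.default_rng(1)
n_uncertainty, n_variability = 200, 5000   # outer: uncertain parameters; inner: serving-to-serving variability

risks = np.empty(n_uncertainty)
for i in range(n_uncertainty):
    prevalence = rng.beta(2, 50)                                   # hypothetical uncertain contamination prevalence
    temp_C = rng.normal(7, 3, n_variability)                       # hypothetical variability in storage temperature
    hours = rng.lognormal(1.0, 0.5, n_variability)                 # hypothetical variability in holding time
    growth_log10 = np.clip(0.02 * (temp_C - 5), 0, None) * hours   # toy growth model
    dose = prevalence * 10.0 ** growth_log10
    risks[i] = np.mean(1 - np.exp(-1e-3 * dose))                   # hypothetical exponential dose-response

print(f"mean risk {risks.mean():.2e}; 95% uncertainty interval "
      f"[{np.percentile(risks, 2.5):.2e}, {np.percentile(risks, 97.5):.2e}]")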
Abstract:
A model of iron carbonate (FeCO3) film growth is proposed, which is an extension of the recent mechanistic model of carbon dioxide (CO2) corrosion by Nesic et al. In the present model, film growth occurs by precipitation of iron carbonate once saturation is exceeded. The kinetics of precipitation depend on temperature and on local species concentrations, which are calculated by solving the coupled species transport equations. Precipitation tends to build up a layer of FeCO3 on the surface of the steel and reduce the corrosion rate. On the other hand, the corrosion process induces voids under the precipitated film, thus increasing the porosity and leading to a higher corrosion rate. Depending on environmental parameters such as temperature, pH, CO2 partial pressure, and velocity, the balance of the two processes can lead to a variety of outcomes. As expected, very protective films and low corrosion rates are predicted at high pH, temperature, CO2 partial pressure, and Fe2+ ion concentration, owing to the formation of dense protective films. The model has been successfully calibrated against limited experimental data. Parametric testing of the model has been done to gain insight into the effect of various environmental parameters on iron carbonate film formation. The trends shown in the predictions agreed well with the general understanding of the CO2 corrosion process in the presence of iron carbonate films. The present model confirms that the concept of scaling tendency is a good tool for predicting the likelihood of protective iron carbonate film formation.
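For orientation, models of this type typically drive precipitation by the supersaturation of iron carbonate and summarise protectiveness through a scaling tendency; in generic form (not the exact kinetic expression of this paper),
    S = \frac{[\mathrm{Fe^{2+}}][\mathrm{CO_3^{2-}}]}{K_{sp}}, \qquad R_{\mathrm{precip}} = k_r(T)\, f(S - 1) \;\; (S > 1), \qquad \mathrm{ST} = \frac{R_{\mathrm{precip}}}{R_{\mathrm{corr}}},
with a scaling tendency well above unity favouring dense, protective films and one well below unity favouring porous, non-protective films.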
Abstract:
An energy-based swing hammer mill model has been developed for coke oven feed preparation. It comprises a mechanistic power model to determine the dynamic internal recirculation and a perfect mixing mill model with a dual-classification function to mimic the operations of the crusher and screen. The model parameters were calibrated using a pilot-scale swing hammer mill at various operating conditions. The effects of the underscreen configurations and the feed sizes on hammer mill operations were demonstrated through the fitted model parameters. Relationships between the model parameters and the machine configurations were established. The model was validated using independent experimental data from single lithotype coal tests with the same BJD pilot-scale hammer mill and full operation audit data from an industrial hammer mill. The outcome of the energy-based swing hammer mill model is the capability to simulate the impact of changing coal blends, mill configurations and operating conditions on product size distribution. Alternatively, the model can be used to select the machine settings required to achieve a desired product. (C) 2003 Elsevier Science B.V. All rights reserved.
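For readers unfamiliar with the underlying formulation, the perfect mixing mill model balances, for each size class i, the feed, breakage and discharge of the mill contents; in its generic (Whiten) form,
    f_i + \sum_{j} a_{ij} r_j s_j - r_i s_i - d_i s_i = 0, \qquad p_i = d_i s_i,
where f_i and p_i are the feed and product flows, s_i the mill contents, r_i the breakage rate, d_i the discharge rate and a_{ij} the appearance (breakage distribution) function; the dual-classification function described above modifies how the classification and discharge terms are applied so as to mimic the crusher and screen.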
Abstract:
This paper deals with an n-fold Weibull competing risk model. A characterisation of the WPP plot is given along with estimation of model parameters when modelling a given data set. These are illustrated through two examples. A study of the different possible shapes for the density and failure rate functions is also presented. (C) 2003 Elsevier Ltd. All rights reserved.
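For reference, the n-fold Weibull competing risk model has survival function equal to the product of the component survival functions (equivalently, its failure rate is the sum of the component failure rates):
    F(t) = 1 - \prod_{i=1}^{n} [1 - F_i(t)], \qquad h(t) = \sum_{i=1}^{n} h_i(t), \qquad F_i(t) = 1 - \exp[-(t/\alpha_i)^{\beta_i}].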
Abstract:
This paper summarises test results that were used to validate a model and scale-up procedure for the high pressure grinding roll (HPGR) developed at the JKMRC by Morrell et al. [Morrell, Lim, Tondo, David, 1996. Modelling the high pressure grinding rolls. In: Mining Technology Conference, pp. 169-176.]. Verification of the model is based on results from four data sets that describe the performance of three industrial-scale units fitted with both studded and smooth roll surfaces. The industrial units are currently in operation within the diamond mining industry and are represented by De Beers, BHP Billiton and Rio Tinto. Ore samples from the De Beers and BHP Billiton operations were sent to the JKMRC for ore characterisation and HPGR laboratory-scale tests. Rio Tinto contributed an historical data set of tests completed during a previous research project. The results show that the modelling of the HPGR process has matured to a point where the model may be used to evaluate new and to optimise existing comminution circuits. The model prediction of product size distribution is good and has been found to be strongly dependent on the characteristics of the material being tested. The prediction of throughput and corresponding power draw (based on throughput) is sensitive to inconsistent gap/diameter ratios observed between laboratory-scale tests and full-scale operations. (C) 2004 Elsevier Ltd. All rights reserved.
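As background to the throughput sensitivity noted above, HPGR throughput is commonly estimated from a continuity relation in which the working gap appears directly, for example (a generic form, not necessarily the exact JKMRC formulation)
    \dot{m} = \rho_c \, x_g \, L \, u,
where \rho_c is the density of the compacted cake at the gap, x_g the working gap, L the roll length and u the peripheral roll speed; this is why inconsistent gap/diameter ratios between laboratory-scale tests and full-scale units propagate into the throughput and power predictions.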
Abstract:
The Virtual Learning Environment (VLE) is one of the fastest growing areas in educational technology research and development. In order to achieve learning effectiveness, ideal VLEs should be able to identify learning needs and customize solutions, with or without an instructor to supplement instruction. Such systems are called Personalized VLEs (PVLEs). For PVLEs to succeed, comprehensive conceptual models of PVLEs are essential; such conceptual modelling is important because it facilitates early detection and correction of system development errors. Therefore, in order to capture PVLE knowledge explicitly, this paper focuses on the development of conceptual models for PVLEs, including models of knowledge primitives in terms of learner, curriculum, and situational models; models of VLEs on general pedagogical bases; and, in particular, the definition of an ontology of PVLEs based on the constructivist pedagogical principle. Based on these comprehensive conceptual models, a prototype multiagent-based PVLE has been implemented. A field experiment was conducted to investigate learning achievements by comparing personalized and non-personalized systems. The results indicate that the PVLE we developed under this comprehensive ontology successfully provides significant learning achievements. These comprehensive models also provide a solid knowledge representation framework for PVLE development practice, guiding the analysis, design, and development of PVLEs. (c) 2005 Elsevier Ltd. All rights reserved.
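A minimal sketch of how the knowledge primitives named above (learner, curriculum and situational models) might be represented in code; the class and field names are hypothetical illustrations, not the ontology defined in the paper:

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    learner_id: str
    mastery: dict[str, float] = field(default_factory=dict)    # concept -> mastery level in [0, 1]
    preferences: dict[str, str] = field(default_factory=dict)  # e.g. {"media": "video"}

@dataclass
class CurriculumModel:
    concepts: list[str]
    prerequisites: dict[str, list[str]]                         # concept -> prerequisite concepts

@dataclass
class SituationalModel:
    device: str
    available_minutes: int

def next_concepts(learner: LearnerModel, curriculum: CurriculumModel,
                  threshold: float = 0.7) -> list[str]:
    """Select concepts whose prerequisites the learner has already mastered (a personalisation step)."""
    mastered = {c for c, m in learner.mastery.items() if m >= threshold}
    return [c for c in curriculum.concepts
            if c not in mastered
            and all(p in mastered for p in curriculum.prerequisites.get(c, []))]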
Abstract:
Document classification is a supervised machine learning process, in which predefined category labels are assigned to documents based on a hypothesis derived from a training set of labelled documents. Documents cannot be directly interpreted by a computer system unless they have been modelled as a collection of computable features. Rogati and Yang [M. Rogati and Y. Yang, Resource selection for domain-specific cross-lingual IR, in SIGIR 2004: Proceedings of the 27th annual international conference on Research and Development in Information Retrieval, ACM Press, Sheffield: United Kingdom, pp. 154-161.] pointed out that the effectiveness of a document classification system may vary across domains. This implies that the quality of the document model contributes to the effectiveness of document classification. Conventionally, model evaluation is accomplished by comparing the effectiveness scores of classifiers on model candidates. However, this kind of evaluation method may encounter either under-fitting or over-fitting problems, because the effectiveness scores are restricted by the learning capacities of the classifiers. We propose a model fitness evaluation method to determine whether a model is sufficient to distinguish positive and negative instances while still competent to provide satisfactory effectiveness with a small feature subset. Our experiments demonstrate how the fitness of models is assessed. The results of our work contribute to research on feature selection, dimensionality reduction and document classification.
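As a concrete example of the conventional evaluation discussed above, in which candidate document models (here, feature subsets of different sizes) are compared by classifier effectiveness, the following sketch uses a public corpus; the dataset, subset sizes and classifier are illustrative assumptions and the paper's fitness measure is not reproduced:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

for k in (100, 1000, 10000):   # candidate feature-subset sizes, i.e. candidate document models
    pipe = make_pipeline(
        TfidfVectorizer(sublinear_tf=True),
        SelectKBest(chi2, k=k),
        LogisticRegression(max_iter=1000),
    )
    scores = cross_val_score(pipe, data.data, data.target, cv=5, scoring="f1_macro")
    print(f"k={k:>5}: macro-F1 = {scores.mean():.3f} +/- {scores.std():.3f}")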
Abstract:
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching-time temporal logic CTL can be used to encode the standard simulation conditions. We do this both for a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) and for the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate how refinements between Z specifications can be checked with the SAL CTL model checker on a small example.
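For readers less familiar with the conditions being encoded, the standard downward simulation rules for a retrieve relation R between abstract and concrete states (stated here for the non-blocking interpretation) are the initialisation, applicability and correctness conditions
    \forall c' \bullet CInit(c') \Rightarrow \exists a' \bullet AInit(a') \wedge R(a', c'),
    \forall a, c \bullet R(a, c) \wedge \mathrm{pre}\, AOp(a) \Rightarrow \mathrm{pre}\, COp(c),
    \forall a, c, c' \bullet R(a, c) \wedge \mathrm{pre}\, AOp(a) \wedge COp(c, c') \Rightarrow \exists a' \bullet AOp(a, a') \wedge R(a', c');
the contribution described above is to recast such conditions so that a CTL model checker such as SAL can discharge them automatically.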
Abstract:
Many developing south-east Asian governments are not capturing full rent from domestic forest logging operations. Such rent losses are commonly related to institutional failures, where informal institutions tend to dominate the control of forestry activity in spite of weakly enforced regulations. Our model is an attempt to add a new dimension to thinking about deforestation. We present a simple conceptual model, based on individual decisions rather than social or forest planning, which includes the human dynamics of participation in informal activity and the relatively slower ecological dynamics of changes in forest resources. We demonstrate how incumbent informal logging operations can be persistent, and that any spending aimed at replacing the informal institutions can only be successful if it pushes institutional settings past some threshold. (C) 2006 Elsevier B.V. All rights reserved.
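An illustrative sketch (not the authors' equations) of the kind of threshold behaviour described above: the share n of loggers in informal activity adjusts quickly to relative payoffs, which are reinforced by participation itself, while the forest stock F changes slowly; informal operations persist unless enforcement spending pushes the system past a threshold. All parameter values below are hypothetical:

def simulate(enforcement, years=300.0, dt=0.05):
    """Euler integration of a toy participation/forest model; all coefficients are hypothetical."""
    n, F = 0.8, 0.8                   # initial informal participation share and forest stock, both in [0, 1]
    for _ in range(int(years / dt)):
        payoff_informal = 0.8 * F + 0.8 * n - enforcement      # resource rent + institutional reinforcement - enforcement
        payoff_formal = 0.8
        dn = n * (1 - n) * (payoff_informal - payoff_formal)   # fast, replicator-style adjustment
        dF = 0.10 * F * (1 - F) - 0.04 * n * F                 # slow regrowth minus informal harvest
        n = min(max(n + dt * dn, 0.0), 1.0)
        F = min(max(F + dt * dF, 0.0), 1.0)
    return n, F

for e in (0.0, 0.3, 0.5, 0.7):
    n, F = simulate(e)
    print(f"enforcement={e:.1f}: long-run informal share={n:.2f}, forest stock={F:.2f}")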
Abstract:
In this paper we present an algorithm that combines low-level morphological operations with a model-based Global Circular Shortest Path scheme for segmentation of the Right Ventricle. Traditional morphological operations were employed to obtain the region of interest and adjust it to generate a mask. The image cropped by the mask is then partitioned into a few overlapping regions. The Global Circular Shortest Path algorithm is then applied to extract the contour from each partition. The final step is to re-assemble the partitions to create the whole contour. The technique is deemed reliable and robust, as illustrated by the very good agreement between the extracted contours and expert manual delineations.
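A sketch of the low-level morphological step described above, i.e. building a region-of-interest mask to which a circular shortest path search could then be restricted; the threshold and structuring-element sizes are illustrative assumptions, and the Global Circular Shortest Path stage itself is not reproduced here:

from __future__ import annotations
import numpy as np
from scipy import ndimage as ndi

def roi_mask(image: np.ndarray, threshold: float | None = None,
             closing_radius: int = 5, dilation_radius: int = 10) -> np.ndarray:
    """Build a dilated mask around the brightest connected region of a short-axis slice (illustrative)."""
    if threshold is None:
        threshold = np.percentile(image, 90)          # hypothetical intensity cut-off
    binary = image > threshold
    selem = np.ones((2 * closing_radius + 1, 2 * closing_radius + 1), dtype=bool)
    binary = ndi.binary_closing(binary, structure=selem)
    binary = ndi.binary_fill_holes(binary)
    labels, n = ndi.label(binary)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndi.sum(binary, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)        # keep only the largest connected component
    big_selem = np.ones((2 * dilation_radius + 1, 2 * dilation_radius + 1), dtype=bool)
    return ndi.binary_dilation(largest, structure=big_selem)

# usage: mask = roi_mask(short_axis_slice); masked = np.where(mask, short_axis_slice, 0)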