904 results for calibration of rainfall-runoff models
Abstract:
A typology of music distribution models is proposed, consisting of the ownership model, the access model, and the context model. These models are not substitutes for each other and may co-exist, serving different market niches. The paper argues that the economic value created from recorded music is increasingly based on context rather than on ownership. During this process, access-based services temporarily generate economic value, but such services are destined to eventually become commoditised.
Abstract:
Exact solutions of partial differential equation models describing the transport and decay of single and coupled multispecies problems can provide insight into the fate and transport of solutes in saturated aquifers. Most previous analytical solutions are based on integral transform techniques, meaning that the admissible initial conditions are restricted: the choice of initial condition determines whether or not the inverse transform can be calculated exactly. In this work we describe and implement a technique that produces exact solutions for single and multispecies reactive transport problems with more general, smooth initial conditions. We achieve this by using a different method to invert the Laplace transform, which produces a power series solution. To demonstrate the utility of this technique, we apply it to two example problems with initial conditions that cannot be solved exactly using traditional transform techniques.
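The series-based inversion idea can be illustrated on the simplest single-species case, first-order decay dc/dt = -kc, whose Laplace-domain solution C(s) = c0/(s + k) inverts term by term into a power series in t. This is a toy sketch of the general principle, not the paper's multispecies solution; the function name and parameter values are illustrative assumptions.

```python
import math

def inverse_laplace_series(c0, k, t, n_terms=30):
    """Invert C(s) = c0 / (s + k) term by term.

    Expanding C(s) = c0 * sum_n (-k)^n / s^(n+1) and using the known
    inverse L^{-1}[1/s^(n+1)] = t^n / n! gives a power series in t,
    which here converges to the exact solution c0 * exp(-k t).
    """
    return c0 * sum((-k * t) ** n / math.factorial(n) for n in range(n_terms))

# compare the truncated series against the closed-form solution
series_value = inverse_laplace_series(1.0, 0.5, 2.0)
exact_value = math.exp(-0.5 * 2.0)
```

For this linear problem the series is just the exponential; the appeal of the approach described in the abstract is that the same term-by-term inversion remains tractable for initial conditions where a closed-form inverse transform does not exist.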
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state-space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least when the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
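The Kalman-smoothing step used to condition the linear state-space prior can be illustrated with a minimal scalar Rauch-Tung-Striebel smoother. This is a generic textbook sketch, not the paper's emulator: the scalar model x_t = a·x_{t-1} + noise, the observation model y_t = x_t + noise, and all parameter values are illustrative assumptions.

```python
def kalman_smooth(y, a, q, r, m0=0.0, p0=1.0):
    """Rauch-Tung-Striebel smoother for the scalar linear-Gaussian model
    x_t = a * x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns smoothed means and variances for all time steps."""
    n = len(y)
    mf, pf = [0.0] * n, [0.0] * n        # filtered means / variances
    m, p = m0, p0
    for t in range(n):                   # forward Kalman filter pass
        m, p = a * m, a * a * p + q      # predict
        k = p / (p + r)                  # Kalman gain
        m = m + k * (y[t] - m)           # update mean with observation
        p = (1.0 - k) * p                # update variance
        mf[t], pf[t] = m, p
    ms, ps = mf[:], pf[:]
    for t in range(n - 2, -1, -1):       # backward smoothing pass
        pp = a * a * pf[t] + q           # one-step predicted variance
        g = a * pf[t] / pp               # smoother gain
        ms[t] = mf[t] + g * (ms[t + 1] - a * mf[t])
        ps[t] = pf[t] + g * g * (ps[t + 1] - pp)
    return ms, ps

# with near-noiseless observations the smoother tracks the data closely
smoothed, variances = kalman_smooth([1.0, 1.1, 0.9], a=1.0, q=1.0, r=1e-12)
```

In the emulator of the abstract, the state-space model encodes the known dynamics while the GASP innovation terms absorb the remaining model error; the smoother above only shows the conditioning machinery.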
Abstract:
Autonomous navigation and picture compilation tasks require robust feature descriptions or models. Given the non-Gaussian nature of sensor observations, it will be shown that Gaussian mixture models provide a general probabilistic representation that allows analytical solutions to the update and prediction operations in the general Bayesian filtering problem. Each operation in the Bayesian filter for Gaussian mixture models multiplicatively increases the number of parameters in the representation, leading to the need for a re-parameterisation step. A computationally efficient re-parameterisation step will be demonstrated, resulting in a compact and accurate estimate of the true distribution.
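A common building block of such re-parameterisation (mixture-reduction) steps is the moment-preserving merge of two mixture components. The sketch below shows the standard one-dimensional version; it is a generic reduction primitive and not necessarily the specific scheme demonstrated in the paper.

```python
def merge_components(w1, m1, v1, w2, m2, v2):
    """Merge two 1-D Gaussian mixture components (weight, mean, variance)
    into a single Gaussian that preserves the total weight, mean, and
    variance of the pair (moment matching)."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # total variance = weighted within-component variance
    #                + weighted spread of the component means
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

# merging two equally weighted unit-variance components at 0 and 2
w, m, v = merge_components(0.5, 0.0, 1.0, 0.5, 2.0, 1.0)
```

Repeatedly merging the closest pair of components (e.g. by a distance such as the symmetrised Kullback-Leibler divergence) keeps the representation compact after each multiplicative filter update.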
Abstract:
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs with high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
Abstract:
This article describes a Matlab toolbox for parametric identification of fluid-memory models associated with the radiation forces of ships and offshore structures. Radiation forces are a key component of force-to-motion models used in simulators and motion control designs, and also for initial performance evaluation of wave-energy converters. The software described provides tools for preparing non-parametric data and for identification with automatic model-order detection. The identification problem is considered in the frequency domain.
Abstract:
Determining similarity between business process models has recently gained interest in the business process management community. So far, similarity has been addressed separately at either the semantic or the structural level of process models. Moreover, most contributions that measure the similarity of process models assume an ideal case in which process models are enriched with semantics, i.e., a description of the meaning of process model elements. In real life, however, this entails a heavy, effort-consuming manual pre-processing phase that is often not feasible. In this paper we propose an automated approach for querying a business process model repository for structurally and semantically relevant models. Similar to a search on the Internet, a user formulates a BPMN-Q query and receives as a result a list of process models ordered by relevance to the query. We provide a business process model search engine implementation to evaluate the proposed approach.
Abstract:
The unique physical and movement characteristics of children necessitate the development of accelerometer equations and cut points that are population specific. The purpose of this study is to develop an ecologically valid cut point for the Biotrainer Pro monitor that reflects a threshold for moderate-intensity physical activity in elementary school children. A sample of 30 children (ages 8-12) wore a Biotrainer monitor while completing a series of 7 movement tasks (calibration phase) and while participating in an organized group activity (cross-validation phase). Videotapes from each session were processed using a computerized direct-observation technique to provide a criterion measure of physical activity. Analyses involved the use of mixed-model regression and receiver operating characteristic (ROC) curves. The results indicated that a cut point of 4 counts/min provides the optimal balance between the related needs for sensitivity (accurately detecting activity) and specificity (limiting misclassification of activity as inactivity). Results with the cross-validation data demonstrated that this value yielded the best overall kappa (.58) and a high classification agreement (84%) for activity determination. The specificity of 93% demonstrates that the proposed cut point can accurately detect inactivity; however, the lower sensitivity value of 61% suggests that some minutes of activity might be incorrectly classified as inactivity. The cut point of 4 counts/min provides an ecologically valid cut point to capture physical activity in children using the Biotrainer Pro activity monitor.
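The trade-off between sensitivity and specificity that drives cut-point selection can be sketched with a simple threshold sweep. The study used mixed-model regression, ROC curves, and kappa statistics; the sketch below instead uses Youden's J (sensitivity + specificity - 1) as the selection criterion, and the counts and labels are entirely hypothetical toy data.

```python
def sens_spec(counts, labels, cut):
    """Classify minutes with count >= cut as active.
    labels: 1 = observed active, 0 = observed inactive (criterion measure)."""
    tp = sum(1 for c, y in zip(counts, labels) if y == 1 and c >= cut)
    fn = sum(1 for c, y in zip(counts, labels) if y == 1 and c < cut)
    tn = sum(1 for c, y in zip(counts, labels) if y == 0 and c < cut)
    fp = sum(1 for c, y in zip(counts, labels) if y == 0 and c >= cut)
    return tp / (tp + fn), tn / (tn + fp)

def best_cut(counts, labels, candidates):
    """Pick the candidate cut point maximising Youden's J."""
    return max(candidates, key=lambda c: sum(sens_spec(counts, labels, c)) - 1.0)

# toy data: four inactive minutes (low counts), four active minutes (high counts)
counts = [1, 2, 3, 3, 5, 6, 7, 8]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
chosen = best_cut(counts, labels, candidates=[2, 4, 6])
```

On this toy data a cut of 4 separates the classes perfectly; with real accelerometer data the classes overlap, and the chosen threshold trades some sensitivity for specificity, as the abstract reports.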
Abstract:
This paper develops a semiparametric estimation approach for mixed count regression models based on series expansion for the unknown density of the unobserved heterogeneity. We use the generalized Laguerre series expansion around a gamma baseline density to model unobserved heterogeneity in a Poisson mixture model. We establish the consistency of the estimator and present a computational strategy to implement the proposed estimation techniques in the standard count model as well as in truncated, censored, and zero-inflated count regression models. Monte Carlo evidence shows that the finite sample behavior of the estimator is quite good. The paper applies the method to a model of individual shopping behavior. © 1999 Elsevier Science S.A. All rights reserved.
Abstract:
In this work we test the feasibility of a new calibration method for gel dosimetry. We examine, through Monte Carlo modelling, whether the inclusion of an organic plastic scintillator system at key points within the gel phantom would perturb the dose map. Such a system would remove the requirement for a separate calibration gel, removing many sources of uncertainty.
Abstract:
Building information models are increasingly being utilised for facility management of large facilities such as critical infrastructures. In such environments, it is valuable to utilise the vast amount of data contained within the building information models to improve access control administration. The use of building information models in access control scenarios can provide 3D visualisation of buildings as well as many other advantages, such as automation of essential tasks including path finding, consistency detection, and accessibility verification. However, there is no mathematical model of building information models that can be used to describe and compute these functions. In this paper, we show how graph theory can be utilised as a representation language for building information models and the proposed security-related functions. This graph-theoretic representation allows for mathematically representing building information models and performing computations using these functions.
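One of the functions mentioned, path finding over a building graph, can be sketched by representing rooms as vertices and doors as edges and running a breadth-first search. This is a minimal illustration of the graph-theoretic idea, not the paper's formal model; the room names and topology are hypothetical.

```python
from collections import deque

def shortest_path(adj, start, goal):
    """Breadth-first search over a room-adjacency graph (doors = edges).
    Returns the shortest room sequence from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable: an accessibility check would fail here

# hypothetical building: each room lists the rooms reachable through a door
building = {
    "lobby": ["corridor"],
    "corridor": ["lobby", "office", "server_room"],
    "office": ["corridor"],
    "server_room": ["corridor"],
}
route = shortest_path(building, "lobby", "server_room")
```

Accessibility verification falls out of the same primitive: a subject's access is consistent only if some path to the target room exists through spaces the subject is permitted to traverse.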
Abstract:
Conceptual modelling continues to be an important means for graphically capturing the requirements of an information system. Observations of modelling practice suggest that modellers often use multiple conceptual models in combination, because different models articulate different aspects of real-world domains. Yet the available empirical as well as theoretical research in this area has largely studied the use of single models, or single modelling grammars. We develop a Theory of Combined Ontological Coverage by extending an existing theory of the ontological expressiveness of conceptual modelling grammars. Our new theory posits that multiple conceptual models are used to increase the coverage of the real-world domain being modelled, whilst trying to minimize the ontological overlap between the models. We illustrate how the theory can be applied to analyse sets of conceptual models, and we develop three propositions of the theory concerning the evaluation of model combinations in terms of users' selection, understandability, and perceived usefulness of conceptual models.
Abstract:
Intensity Modulated Radiotherapy (IMRT) is a well-established technique for delivering highly conformal radiation dose distributions. The complexity of the delivery techniques and the high dose gradients around the target volume make verification of the patient treatment crucial to its success. Conventional treatment protocols involve imaging the patient prior to treatment, comparing the patient set-up to the planned set-up, and then making any necessary shifts in the patient position to ensure target volume coverage. This paper presents a method for calibrating electronic portal imaging device (EPID) images acquired during IMRT delivery so that they can be used for verifying the patient set-up.