55 results for continuous-resource model


Relevance:

30.00%

Publisher:

Abstract:

Analysis of the equity premium puzzle has focused on private sector capital markets. The object of this paper is to consider the welfare and policy implications of each of the broad classes of explanations of the equity premium puzzle. As would be expected, the greater the deviation from the first-best outcome implied by a given explanation of the equity premium puzzle, the more interventionist are the implied policy conclusions. Nevertheless, even explanations of the equity premium puzzle consistent with a general consumption-based asset pricing model have important welfare and policy implications.

Relevance:

30.00%

Publisher:

Abstract:

Many developing south-east Asian governments are not capturing full rent from domestic forest logging operations. Such rent losses are commonly related to institutional failures, where informal institutions tend to dominate the control of forestry activity in spite of weakly enforced regulations. Our model is an attempt to add a new dimension to thinking about deforestation. We present a simple conceptual model, based on individual decisions rather than social or forest planning, which includes the human dynamics of participation in informal activity and the relatively slower ecological dynamics of changes in forest resources. We demonstrate how incumbent informal logging operations can be persistent, and that any spending aimed at replacing the informal institutions can only be successful if it pushes institutional settings past some threshold.
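The abstract gives no equations, so the following is only a hedged sketch of the kind of fast-slow structure it describes: a fast participation share p in informal logging coupled to a slower forest stock f, with spending on replacing informal institutions entering the payoff. All functional forms, parameter values and variable names are illustrative assumptions, not the authors' model.

# Hypothetical fast-slow sketch of the kind of model the abstract describes.
# p = share of loggers participating in the informal sector (fast variable),
# f = forest stock (slow variable). All functional forms and parameter values
# below are illustrative assumptions, not the authors' equations.

def step(p, f, spending, dt=0.01):
    # Informal payoff rises with participation (safety in numbers) and forest
    # stock, and falls with spending on replacing informal institutions.
    payoff = 1.5 * p + f - 1.2 - spending
    dp = p * (1.0 - p) * payoff                   # fast imitation dynamics
    df = 0.05 * (f * (1.0 - f) - 0.4 * p * f)     # slow regrowth minus informal harvest
    return p + dt * dp, f + dt * df

def long_run_participation(spending, p0=0.9, f0=0.64, steps=200_000):
    p, f = p0, f0
    for _ in range(steps):
        p, f = step(p, f, spending)
    return p

# Incumbent informal logging persists unless spending pushes the system past
# a threshold, after which participation collapses.
for s in (0.0, 0.5, 1.0, 1.5):
    print(f"spending={s:.1f}  long-run informal participation={long_run_participation(s):.2f}")

Sweeping the spending level in this toy system shows the threshold effect: below a critical level the informal sector persists near full participation, above it participation collapses.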

Relevance:

30.00%

Publisher:

Abstract:

Aim: To identify an appropriate dosage strategy for patients receiving enoxaparin by continuous intravenous infusion (CII). Methods: Monte Carlo simulations were performed in NONMEM (200 replicates of 1000 patients) to predict steady-state anti-Xa concentrations (Css) for patients receiving a CII of enoxaparin. The covariate distribution model was simulated based on covariate demographics in the CII study population. The impact of patient weight, renal function (creatinine clearance, CrCL) and patient location (intensive care unit, ICU) was evaluated. A population pharmacokinetic model was used as the input-output model (1-compartment, first-order output model with mixed residual error structure). Success of a dosing regimen was based on the percentage of Css values within the therapeutic range of 0.5 IU/ml to 1.2 IU/ml. Results: The best dose for patients in the ICU was 4.2 IU/kg/h (success mean 64.8% and 90% prediction interval (PI): 60.1–69.8%) if CrCL60ml/min, the best dose was 8.3 IU/kg/h (success mean 65.4%, 90% PI: 58.5–73.2%). Simulations suggest that there was a 50% improvement in the success of the CII if the dose rate for ICU patients with CrCL
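The published covariate and pharmacokinetic models are not reproduced in the abstract, so the sketch below only illustrates the simulation logic: sample patient covariates, compute the steady-state anti-Xa concentration under a continuous infusion from a one-compartment model (Css = infusion rate / clearance), and score the fraction of patients inside 0.5–1.2 IU/ml. The covariate distributions, the clearance model, the between-subject variability and the 6.0 IU/kg/h comparison dose are all illustrative assumptions, not the study's population model.

import numpy as np

rng = np.random.default_rng(1)

def simulate_success(dose_rate_iu_per_kg_h, n_patients=1000, n_replicates=200):
    """Fraction of simulated patients with steady-state anti-Xa in 0.5-1.2 IU/ml.

    One-compartment model under continuous infusion: Css = rate / CL.
    The covariate and clearance models below are illustrative assumptions.
    """
    successes = np.empty(n_replicates)
    for r in range(n_replicates):
        weight = rng.normal(80.0, 15.0, n_patients).clip(40.0, 150.0)   # kg
        crcl = rng.normal(70.0, 30.0, n_patients).clip(10.0, 150.0)     # ml/min
        eta = rng.normal(0.0, 0.3, n_patients)                          # between-subject variability
        cl_l_per_h = 0.8 * (crcl / 70.0) ** 0.75 * np.exp(eta)          # hypothetical anti-Xa clearance, L/h
        rate_iu_per_h = dose_rate_iu_per_kg_h * weight
        css = rate_iu_per_h / (cl_l_per_h * 1000.0)                     # IU/ml
        successes[r] = np.mean((css >= 0.5) & (css <= 1.2))
    lo, hi = np.percentile(successes, [5, 95])
    return successes.mean(), lo, hi

for dose in (4.2, 6.0, 8.3):
    mean, lo, hi = simulate_success(dose)
    print(f"{dose} IU/kg/h: mean success {mean:.1%} (90% PI {lo:.1%}-{hi:.1%})")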

Relevance:

30.00%

Publisher:

Abstract:

The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified with input from domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model we are calibrating the probability and timing models that comprise each process using data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
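As a rough illustration of the cycle structure, the sketch below runs a Scan, Classify, Decide Action, Perform Action loop in which the Classify step uses a logistic model of the probability that a subject calls a scenario a conflict, and each process draws a duration from a simple distribution. The scenario features (miss distance, time to closest approach), the logistic coefficients, the truncation probability and the timing distributions are illustrative assumptions, not the calibrated OCM.

import math
import random

random.seed(0)

# Minimal sketch of an OCM-style operator cycle. The features, coefficients and
# timing distributions are illustrative assumptions, not the calibrated model.

def classify_conflict(miss_distance_nm, time_to_ca_min, beta=(4.0, -0.8, -0.3)):
    """Logistic model for P(operator classifies the pair as a conflict)."""
    b0, b1, b2 = beta
    z = b0 + b1 * miss_distance_nm + b2 * time_to_ca_min
    return 1.0 / (1.0 + math.exp(-z))

def run_cycle(scenario):
    """One Scan -> Classify -> Decide Action -> Perform Action cycle; may truncate back to Scan."""
    t = random.expovariate(1.0 / 2.0)                 # Scan time (s), illustrative
    if random.random() < 0.1:                         # truncated cycle: nothing noticed
        return t, None
    t += random.lognormvariate(0.5, 0.4)              # Classify time (s)
    is_conflict = random.random() < classify_conflict(**scenario)
    if not is_conflict:                               # no conflict called -> back to Scan
        return t, "no_conflict"
    t += random.lognormvariate(0.3, 0.4)              # Decide-action time (s)
    t += random.lognormvariate(0.8, 0.3)              # Perform-action time (s)
    return t, "resolved"

scenario = dict(miss_distance_nm=3.0, time_to_ca_min=4.0)
times = [run_cycle(scenario)[0] for _ in range(10_000)]
print(f"mean cycle time {sum(times) / len(times):.1f} s")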

Relevance:

30.00%

Publisher:

Abstract:

Action systems are a framework for reasoning about discrete reactive systems. Back, Petre and Porres have extended these action systems to continuous action systems, which can be used to model hybrid systems. In this paper we define a refinement relation, and develop practical data refinement rules for continuous action systems. The meaning of continuous action systems is expressed in terms of a mapping from continuous action systems to action systems. First, we present a new mapping from continuous action systems to action systems, such that Back's definition of trace refinement is correct with respect to it. Second, we present a stream semantics that is compatible with the trace semantics, but is preferable to it because it is more general. Although action system trace refinement rules are applicable to continuous action systems with a stream semantics, they are not complete. Finally, we introduce a new data refinement rule that is valid with respect to the stream semantics and can be used to prove refinements that are not possible in the trace semantics, and we analyse the completeness of our new rule in conjunction with the existing trace refinement rules.
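The sketch below is one hedged reading of the underlying idea of a continuous action system and its mapping to a discrete action system: each variable holds a function of time, each guarded action replaces that function and advances the current time, and the discrete state is the pair (current time, time functions). The thermostat example, its rates and the crude time-stepping when no action is enabled are illustrative assumptions, not the paper's construction or its refinement rules.

# Minimal sketch of a continuous action system in the style of Back, Petre and
# Porres, executed through a mapping to a discrete action system: the discrete
# state is (now, behaviour of each variable as a function of time), and each
# action updates that pair. The thermostat example is an illustrative assumption.

def make_ramp(t0, x0, rate):
    """Future behaviour of a variable: linear evolution from value x0 at time t0."""
    return lambda t: x0 + rate * (t - t0)

def heater_system(t_end=10.0):
    now = 0.0
    temp = make_ramp(0.0, 21.0, -1.0)        # initially cooling at 1 degree per time unit
    trace = [(now, temp(now))]
    while now < t_end:
        # Guarded actions of the continuous action system: each one assigns a new
        # time function to `temp` and advances `now` to when its effect ends.
        if temp(now) <= 18.0:                # heater_on
            temp = make_ramp(now, temp(now), +2.0)
            now += (22.0 - temp(now)) / 2.0
        elif temp(now) >= 22.0:              # heater_off
            temp = make_ramp(now, temp(now), -1.0)
            now += (temp(now) - 18.0) / 1.0
        else:                                # no action enabled yet: let time flow (crude fixed step)
            now += 0.5
        trace.append((now, temp(now)))
    return trace

for t, x in heater_system():
    print(f"t={t:4.1f}  temp={x:4.1f}")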

Relevance:

30.00%

Publisher:

Abstract:

This paper presents some initial attempts to mathematically model the dynamics of a continuous estimation of distribution algorithm (EDA) based on a Gaussian distribution and truncation selection. Case studies are conducted on both unimodal and multimodal problems to highlight the effectiveness of the proposed technique and explore some important properties of the EDA. With some general assumptions, we show that, for 1-D unimodal problems with the (mu, lambda) scheme: (1) the behaviour of the EDA depends only on the general shape of the test function, rather than its specific form; (2) when initialized far from the global optimum, the EDA has a tendency to converge prematurely; (3) given a certain selection pressure, there is a unique value of the proposed amplification parameter that helps the EDA achieve desirable performance. For 1-D multimodal problems: (1) the EDA could get stuck with the (mu, lambda) scheme; (2) the EDA will never get stuck with the (mu, lambda) scheme.
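As a concrete reference point for the algorithm being modelled, the sketch below implements a one-dimensional Gaussian EDA with truncation (mu, lambda) selection and a parameter that amplifies the standard deviation estimated from the selected parents. The name and exact role of that parameter, the test functions and all numerical settings are illustrative assumptions rather than the paper's definitions.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_eda(f, x_range, mu=30, lam=100, amplification=1.5, generations=60):
    """Continuous EDA sketch: Gaussian model with truncation (mu, lambda) selection.

    `amplification` scales the standard deviation estimated from the selected
    parents to counteract the variance loss caused by truncation selection;
    its definition here is illustrative, not the paper's exact formulation.
    """
    mean, std = rng.uniform(*x_range), (x_range[1] - x_range[0]) / 4.0
    for _ in range(generations):
        pop = rng.normal(mean, std, lam)                  # sample lambda offspring
        best = pop[np.argsort(f(pop))[:mu]]               # truncation selection (minimisation)
        mean, std = best.mean(), amplification * best.std() + 1e-12
    return mean

sphere = lambda x: (x - 3.0) ** 2                                     # 1-D unimodal test problem
bimodal = lambda x: np.minimum((x - 3.0) ** 2, 0.5 + (x + 3.0) ** 2)  # global optimum at 3, local at -3

print("unimodal  :", gaussian_eda(sphere, (-10.0, 10.0)))
print("multimodal:", gaussian_eda(bimodal, (-10.0, 10.0)))

On the bimodal function the outcome depends on where the model is initialized, which is the kind of getting-stuck behaviour the modelling in the paper is meant to analyse.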

Relevance:

30.00%

Publisher:

Abstract:

As an alternative to traditional evolutionary algorithms (EAs), population-based incremental learning (PBIL) maintains a probabilistic model of the best individual(s). Originally, PBIL was applied in binary search spaces. Recently, some work has been done to extend it to continuous spaces. In this paper, we review two such extensions of PBIL. An improved version of PBIL based on a Gaussian model is proposed that combines two main features: a new updating rule that takes into account all the individuals and their fitness values, and a self-adaptive learning rate parameter. Furthermore, a new continuous PBIL employing a histogram probabilistic model is proposed. Some experimental results are presented that highlight the features of the new algorithms.
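The sketch below illustrates a continuous, Gaussian-model PBIL in the spirit of the first extension: the model mean is nudged toward a weighted average of all individuals, with better individuals receiving larger weights. The ranking-based weights, the fixed learning rate (in place of the self-adaptive rate the abstract mentions) and the variance update are illustrative simplifications, and the histogram-based variant is not shown.

import numpy as np

rng = np.random.default_rng(0)

def continuous_pbil(f, dim=5, pop_size=50, lr=0.1, generations=200):
    """Continuous PBIL sketch with a Gaussian probabilistic model.

    The mean is nudged toward a fitness-weighted average of *all* individuals,
    loosely following the kind of updating rule the abstract describes; the
    rank-based weights, fixed learning rate and variance update are
    illustrative simplifications.
    """
    mean, std = np.zeros(dim), np.full(dim, 3.0)
    for _ in range(generations):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        fitness = np.array([f(x) for x in pop])               # minimisation
        ranks = np.argsort(np.argsort(fitness))               # 0 = best individual
        weights = (pop_size - ranks).astype(float)
        weights /= weights.sum()
        target_mean = weights @ pop                           # fitness-weighted average of all individuals
        mean = (1.0 - lr) * mean + lr * target_mean
        std = (1.0 - lr) * std + lr * np.sqrt(weights @ (pop - target_mean) ** 2)
    return mean

sphere = lambda x: float(np.sum((x - 1.0) ** 2))
print("estimated optimum:", np.round(continuous_pbil(sphere), 2))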