936 results for Model Participation Rules
Abstract:
One of the assumptions of the Capacitated Facility Location Problem (CFLP) is that demand is known and fixed. Most often, this is not the case when managers take strategic decisions such as locating facilities and assigning demand points to those facilities. In this paper we treat demand as stochastic and model each of the facilities as an independent queue. Stochastic models of manufacturing systems and deterministic location models are combined in order to obtain a formula for the backlogging probability at a potential facility location. Several solution techniques have been proposed to solve the CFLP. One of the most recently proposed heuristics, a Reactive Greedy Adaptive Search Procedure, is implemented in order to solve the model formulated. We present computational experiments to evaluate the heuristic's performance and to illustrate the use of this new formulation for the CFLP. The paper finishes with a simple simulation exercise.
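The abstract does not reproduce the backlogging formula itself. As a rough illustration of the kind of quantity involved, the sketch below computes a backlogging probability under the simplifying assumption that an open facility behaves as an M/M/1 queue; the function name and parameters are ours, not the paper's.

```python
# Illustrative sketch (not the paper's exact formula): backlogging probability
# at a facility modeled as an M/M/1 queue, with arrival rate lam given by the
# demand assigned to the facility and service rate mu given by its capacity.
def backlog_probability(lam: float, mu: float, max_backlog: int) -> float:
    """P(more than max_backlog orders in the system), assuming M/M/1."""
    if lam >= mu:
        return 1.0  # unstable queue: the backlog grows without bound
    rho = lam / mu
    # For an M/M/1 queue, P(N > k) = rho ** (k + 1)
    return rho ** (max_backlog + 1)

# Example: 8 orders/day of assigned demand at a facility serving 10 orders/day
print(backlog_probability(8.0, 10.0, max_backlog=5))  # ~0.26
```

In a formulation of this kind, a constraint that the backlogging probability stay below a target level plays the role that the hard capacity constraint plays in the deterministic CFLP.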
Abstract:
We estimate a forward-looking monetary policy reaction function for the postwar United States economy, before and after Volcker's appointment as Fed Chairman in 1979. Our results point to substantial differences in the estimated rule across periods. In particular, interest rate policy in the Volcker-Greenspan period appears to have been much more sensitive to changes in expected inflation than in the pre-Volcker period. We then compare some of the implications of the estimated rules for the equilibrium properties of inflation and output, using a simple macroeconomic model, and show that the Volcker-Greenspan rule is stabilizing.
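The abstract does not spell the rule out. A schematic version of the kind of forward-looking reaction function with interest-rate smoothing typically estimated in this literature (notation ours) is:

```latex
r_{t}^{*} \;=\; \bar r \;+\; \beta\,\bigl(E[\pi_{t,k}\mid\Omega_t]-\pi^{*}\bigr)
\;+\; \gamma\,E[x_{t,q}\mid\Omega_t],
\qquad
r_{t} \;=\; (1-\rho)\,r_{t}^{*} \;+\; \rho\,r_{t-1} \;+\; v_{t},
```

where r_t is the policy rate, pi_{t,k} is inflation between t and t+k, x_{t,q} is a measure of the output gap, and Omega_t is the information set at time t. In this class of rules, a response coefficient beta greater than one is what makes the rule stabilizing, which is the property attributed above to the Volcker-Greenspan estimates.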
Abstract:
We examine the effects of extracting monetary policy disturbances with semi-structural and structural VARs, using data generated by a limited participation model under partially accommodative and feedback rules. We find that, in general, misspecification is substantial: short-run coefficients often have wrong signs, and impulse responses and variance decompositions give misleading representations of the dynamics. Explanations for the results and suggestions for macroeconomic practice are provided.
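For readers unfamiliar with the procedure being stress-tested, the sketch below shows a minimal recursive (Cholesky) identification of a policy shock from VAR residuals. It is only a generic illustration, not the semi-structural or structural schemes examined in the paper, and all names are ours.

```python
import numpy as np

def var_policy_shocks(Y: np.ndarray, lags: int, policy_col: int) -> np.ndarray:
    """Recover structural shocks from a VAR(p) by OLS plus Cholesky identification.

    Y          : (T, n) data array, ordered so the policy instrument responds to
                 the variables placed above it (recursive ordering assumption).
    lags       : number of VAR lags.
    policy_col : column index of the policy instrument in Y.
    Returns the identified policy shock series.
    """
    T, n = Y.shape
    # Regressors: constant plus p lags of every variable.
    X = np.hstack([np.ones((T - lags, 1))] +
                  [Y[lags - k - 1:T - k - 1] for k in range(lags)])
    Yt = Y[lags:]
    B, *_ = np.linalg.lstsq(X, Yt, rcond=None)   # reduced-form coefficients
    U = Yt - X @ B                               # reduced-form residuals
    Sigma = U.T @ U / (U.shape[0] - X.shape[1])  # residual covariance
    A = np.linalg.cholesky(Sigma)                # lower-triangular impact matrix
    E = U @ np.linalg.inv(A).T                   # structural shocks: u_t = A e_t
    return E[:, policy_col]
```

The paper's point is precisely that the shocks recovered this way from data generated by a limited participation model can misrepresent the true policy disturbances.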
Abstract:
This article presents a formal model of policy decision-making in an institutional framework of separation of powers in which the main actors are pivotal political parties with voting discipline. The basic model previously developed from pivotal politics theory for the analysis of United States lawmaking is modified here to account for policy outcomes and institutional performance in other presidential regimes, especially in Latin America. Party indiscipline in legislators' voting and multi-partism appear to be favorable conditions for reducing the size of the equilibrium set containing collectively inefficient outcomes, while a two-party system with strong party discipline is most prone to produce 'gridlock', that is, stability of socially inefficient policies. The article provides a framework for analysis which can induce significant revisions of empirical data, especially regarding the effects of situations of (newly defined) unified and divided government, different decision rules, the number of parties and their discipline. These implications should be testable and may inspire future analytical and empirical work.
Abstract:
The Drivers Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers, for example bus, train, plane or boat drivers or pilots, for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Being able to develop an adequate model for this problem, one that represents the real problem as closely as possible, is an important research area. The main objective of this research work is to present new mathematical models for the DSP that represent the full complexity of the drivers scheduling problem, and also to demonstrate that the solutions of these models can be easily implemented in real situations. This issue has been recognized by several authors as an important problem in Public Transportation. The most well-known and general formulation for the DSP is a Set Partitioning/Set Covering Model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use these models in automatic planning systems, because the schedules obtained must be modified manually before they can be implemented in real situations. Based on extensive passenger transportation experience with bus companies in Portugal, we propose new alternative models to formulate the DSP. These models are also based on Set Partitioning/Covering Models; however, they take into account the bus operator's issues and the perspective, opinions and environment of the user. We follow the steps of the Operations Research methodology, which consist of: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are carried out with the close participation and involvement of the final users from different transportation companies. The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is to have a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criterion for evaluating the models is their capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We have considered the following as measures of the quality of the model: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposal of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
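For reference, the core SPP/SCP formulation referred to above reads as follows, in generic notation: I is the set of pieces of work (trips or tasks) to be covered, J the set of feasible duties, a_{ij} = 1 if duty j covers piece i, and c_j the cost of duty j.

```latex
\min \sum_{j\in J} c_j x_j
\quad\text{s.t.}\quad
\sum_{j\in J} a_{ij}\,x_j = 1 \;\;(\text{set partitioning})
\;\;\text{or}\;\;
\sum_{j\in J} a_{ij}\,x_j \ge 1 \;\;(\text{set covering}),
\quad \forall i\in I,
\qquad x_j\in\{0,1\}.
```

The alternative models described in the abstract keep this core and add the operator-specific rules and user-environment considerations on top of it.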
Abstract:
Existing models of equilibrium unemployment with endogenous labor market participation are complex, generate procyclical unemployment rates and cannot match unemployment variability relative to GDP. We embed endogenous participation in a simple, tractable job market matching model, show analytically how variations in the participation rate are driven by the cross-sectional density of home productivity near the participation threshold, and how this density translates into an extensive-margin labor supply elasticity. A calibration of the model to macro data not only matches employment and participation variabilities but also generates strongly countercyclical unemployment rates. With some wage rigidity the model also matches unemployment variations well. Furthermore, the labor supply elasticity implied by our calibration is consistent with microeconometric evidence for the US.
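A schematic rendering of the mechanism described, in notation that is ours rather than the paper's: if an individual participates whenever home productivity h lies below a threshold h*, and F is the cross-sectional distribution of home productivity with density f, then

```latex
p = F(h^{*}), \qquad
\varepsilon \;\equiv\; \frac{d\ln p}{d\ln h^{*}}
 \;=\; \frac{h^{*}\, f(h^{*})}{F(h^{*})},
```

so the participation rate moves with the mass of individuals whose home productivity sits near the threshold, and that density directly governs the extensive-margin labor supply elasticity.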
Abstract:
Understanding how communities of living organisms assemble has been a central question in ecology since the early days of the discipline. Disentangling the different processes involved in community assembly is not only interesting in itself but also crucial for an understanding of how communities will behave under future environmental scenarios. The traditional concept of assembly rules reflects the notion that species do not co-occur randomly but are restricted in their co-occurrence by interspecific competition. This concept can be redefined in a more general framework where the co-occurrence of species is a product of chance, historical patterns of speciation and migration, dispersal, abiotic environmental factors, and biotic interactions, with none of these processes being mutually exclusive. Here we present a survey and meta-analyses of 59 papers that compare observed patterns in plant communities with null models simulating random patterns of species assembly. According to the type of data under study and the different methods that are applied to detect community assembly, we distinguish four main types of approach in the published literature: species co-occurrence, niche limitation, guild proportionality and limiting similarity. Results from our meta-analyses suggest that non-random co-occurrence of plant species is not a widespread phenomenon. However, whether this finding reflects the individualistic nature of plant communities or is caused by methodological shortcomings associated with the studies considered cannot be discerned from the available metadata. We advocate that more thorough surveys be conducted using a set of standardized methods to test for the existence of assembly rules in data sets spanning larger biological and geographical scales than have been considered until now. We underpin this general advice with guidelines that should be considered in future assembly rules research. This will enable us to draw more accurate and general conclusions about the non-random aspect of assembly in plant communities.
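As an illustration of the kind of null-model test these studies apply, the sketch below compares an observed checkerboard score (C-score) against a null distribution obtained by shuffling each species' site occupancies. The metric and the randomization scheme vary widely across the surveyed papers; the choices here (C-score, row totals fixed, columns equiprobable) are only an example.

```python
import numpy as np

rng = np.random.default_rng(0)

def c_score(m: np.ndarray) -> float:
    """Mean number of checkerboard units over all species pairs.

    m is a binary species (rows) x sites (columns) presence-absence matrix.
    """
    row_tot = m.sum(axis=1)
    shared = m @ m.T                       # sites shared by every species pair
    cu = (row_tot[:, None] - shared) * (row_tot[None, :] - shared)
    iu = np.triu_indices(m.shape[0], k=1)  # each unordered pair counted once
    return cu[iu].mean()

def null_test(m: np.ndarray, n_null: int = 999) -> float:
    """One-sided p-value for 'observed C-score higher than expected by chance',
    randomizing site occupancy within each species (row totals fixed)."""
    obs = c_score(m)
    null = np.array([c_score(np.apply_along_axis(rng.permutation, 1, m))
                     for _ in range(n_null)])
    return (np.sum(null >= obs) + 1) / (n_null + 1)
```

A significantly high C-score relative to the null distribution is the sort of evidence for non-random species co-occurrence that, according to the meta-analyses, turns out not to be widespread.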
Abstract:
We study energy-weighted sum rules of the pion and kaon propagator in nuclear matter at finite temperature. The sum rules are obtained by matching the Dyson form of the meson propagator with its spectral Lehmann representation at low and high energies. We calculate the sum rules for specific models of the kaon and pion self-energy. The in-medium spectral densities of the K and K̄ mesons are obtained from a chiral unitary approach in coupled channels that incorporates the S and P waves of the kaon-nucleon interaction. The pion self-energy is determined from the P-wave coupling to particle-hole and Delta-hole excitations, modified by short-range correlations. The sum rules for the lower-energy weights are fulfilled satisfactorily and reflect the contributions from the different quasiparticle and collective modes of the meson spectral function. We discuss the sensitivity of the sum rules to the distribution of spectral strength and their usefulness as quality tests of model calculations.
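Schematically, and in notation that is ours rather than the paper's, the matching works as follows: the Dyson form of a relativistic meson propagator with self-energy Pi is equated with its Lehmann representation, both sides are expanded in powers of 1/omega at high energy, and equating coefficients yields energy-weighted moments of the spectral functions, for instance

```latex
\frac{1}{\omega^{2}-\vec q^{\;2}-m^{2}-\Pi(\omega,\vec q\,)}
=\int_{0}^{\infty}\! d\omega'\left[
\frac{S(\omega',\vec q\,)}{\omega-\omega'+i\eta}
-\frac{\bar S(\omega',\vec q\,)}{\omega+\omega'-i\eta}\right]
\;\Longrightarrow\;
\int_{0}^{\infty}\! d\omega\,\bigl[S-\bar S\bigr]=0,\quad
\int_{0}^{\infty}\! d\omega\,\omega\,\bigl[S+\bar S\bigr]=1,
```

with higher-energy weights bringing in the self-energy itself. The precise set of sum rules tested in the paper may differ in normalization and in how the pion case is treated.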
Abstract:
Background: MLPA is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which the targeted regions vary in copy number among individuals suffering from different disorders such as Prader-Willi, DiGeorge or Autism, showing the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual and incorporate the experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed.
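The mixed-model machinery in the paper is considerably richer than this, but the sketch below illustrates the basic idea of a per-sample threshold built from that sample's own variability: flag probes whose test/reference log-ratio falls outside a normal tolerance interval. Function names, coverage and confidence levels are ours.

```python
import numpy as np
from scipy import stats

def flag_altered_probes(test: np.ndarray, reference: np.ndarray,
                        coverage: float = 0.99, conf: float = 0.95) -> np.ndarray:
    """Illustrative per-sample thresholding for MLPA-like ratio data.

    test, reference : positive peak intensities per probe (same length).
    A probe is flagged when its log2 ratio falls outside a two-sided normal
    tolerance interval built from this sample's ratios (a stand-in for the
    paper's mixed-model tolerance intervals, not a reimplementation of them).
    """
    ratio = np.log2(test / reference)
    n = ratio.size
    m, s = ratio.mean(), ratio.std(ddof=1)
    # Howe's approximation to the two-sided normal tolerance factor.
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - conf, n - 1)
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
    return (ratio < m - k * s) | (ratio > m + k * s)
```

The point of a tolerance-interval threshold, as opposed to a fixed cut-off, is exactly the one made in the abstract: samples with noisier ratios get wider, individual-specific thresholds.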
Abstract:
Division of labor in social insects is a determinant of their ecological success. Recent models emphasize that division of labor is an emergent property of the interactions among nestmates obeying simple behavioral rules. However, the role of evolution in shaping these rules has been largely neglected. Here, we investigate a model that integrates the perspectives of self-organization and evolution. Our point of departure is the response threshold model, in which we allow thresholds to evolve. We ask whether the thresholds will evolve to a state where division of labor emerges in a form that fits the needs of the colony. We find that division of labor can indeed evolve through the evolutionary branching of thresholds, leading to workers that differ in their tendency to take on a given task. However, the conditions under which division of labor evolves depend on the strength of selection on the two fitness components considered: the amount of work performed and the distribution of workers over tasks. When selection is strongest on the amount of work performed, division of labor evolves if switching tasks is costly. When selection is strongest on worker distribution, division of labor is less likely to evolve. Furthermore, we show that a biased distribution (such as 3:1) of workers over tasks is not easily achievable by a threshold mechanism, even under strong selection. Contrary to expectation, multiple matings of colony foundresses impede the evolution of specialization. Overall, our model sheds light on the importance of considering the interaction between specific mechanisms and ecological requirements to better understand the evolutionary scenarios that lead to division of labor in complex systems. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s00265-012-1343-2) contains supplementary material, which is available to authorized users.
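A stripped-down sketch of the response-threshold mechanism that serves as the model's point of departure is given below: task stimuli build up over time, and each idle worker starts a task with a probability that rises with the stimulus and falls with the worker's own threshold. Parameter values, the quitting probability and the evolutionary layer on top of the thresholds are ours or omitted, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_colony(thresholds: np.ndarray, n_tasks: int = 2, steps: int = 500,
                    delta: float = 1.0, alpha: float = 3.0, n_exp: int = 2) -> np.ndarray:
    """Fixed response-threshold dynamics, as an illustrative sketch.

    thresholds : (n_workers, n_tasks) matrix of positive individual thresholds.
    Each step every task's stimulus rises by delta and falls with the number of
    workers engaged in it; an idle worker starts task j with probability
    s_j**n_exp / (s_j**n_exp + theta_ij**n_exp).  Returns time spent per task.
    """
    n_workers = thresholds.shape[0]
    stimulus = np.zeros(n_tasks)
    engaged = np.full(n_workers, -1)          # -1 means idle
    work = np.zeros((n_workers, n_tasks))
    for _ in range(steps):
        stimulus += delta
        for w in range(n_workers):
            if engaged[w] == -1:
                p = stimulus**n_exp / (stimulus**n_exp + thresholds[w]**n_exp)
                for j in range(n_tasks):
                    if rng.random() < p[j]:
                        engaged[w] = j
                        break
            elif rng.random() < 0.2:          # fixed per-step quitting probability
                engaged[w] = -1
        for j in range(n_tasks):
            n_on = np.sum(engaged == j)
            work[engaged == j, j] += 1
            stimulus[j] = max(0.0, stimulus[j] - alpha * n_on / n_workers)
    return work
```

With identical thresholds, workers split work roughly evenly; division of labor in the paper's sense corresponds to the thresholds themselves diverging under selection, so that some workers become far more responsive to one task than to the other.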
Abstract:
The bio-economic model "Heures" is a first attempt to develop a simulation procedure to understand the Northwestern Mediterranean fisheries, to evaluate management strategies and to analyze the feasibility of implementing adaptive management. The model is built on the interaction among three boxes simulating the dynamics of each of the basic actors of a fishery: the stock, the market and the fishermen. A fourth actor, the manager, imposes or modifies the rules, or, in terms of the model, modifies some particular parameters. Thus, the model allows us to simulate and evaluate the mid-term biological and economic effects of particular management measures. The bio-economic nature of the model is given by the interaction among the three boxes, by the market simulation and, particularly, by the fishermen's behaviour. This last element confers on the model its Mediterranean "self-regulated" character. The fishermen allocate their investments to maximize fishing mortality but, having a legal effort limit, they invest in maintenance and technology in order to increase catchability, which, as a consequence, will be a function of the invested capital.
Abstract:
This paper investigates the socio-cultural factors that affect leisure-time sport participation in Switzerland. Data drawn from 8 waves of the Swiss Household Panel are used to estimate a probit model with random effects that takes into account the socioeconomic and demographic characteristics of the respondents. In line with the existing literature, findings from the multivariate analysis show inequalities in sport involvement in Switzerland. These are significantly related to age, income, education, citizenship and cultural aspects. Appropriate and targeted policies promoting participation in sports among the community can be devised on the basis of the critical modifiers in the model and their impact.
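For reference, the standard form of a random-effects probit for repeated panel observations, in notation that is ours rather than the paper's, is:

```latex
y_{it}^{*} = x_{it}'\beta + u_i + \varepsilon_{it},\qquad
u_i \sim N(0,\sigma_u^{2}),\quad \varepsilon_{it}\sim N(0,1),\qquad
\Pr(y_{it}=1\mid x_{it},u_i)=\Phi\!\bigl(x_{it}'\beta+u_i\bigr),
```

where y_it indicates whether respondent i reports sport participation in wave t, x_it collects the socioeconomic and demographic covariates, and the individual effect u_i captures unobserved persistence in participation across waves.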
Abstract:
There is a lack of dedicated tools for business model design at a strategic level. However, in today's economic world the ability to quickly reinvent a company's business model is essential to staying competitive. This research focused on identifying the functionalities that are necessary in a computer-aided design (CAD) tool for the design of business models in a strategic context. Using the design science research methodology, a series of techniques and prototypes were designed and evaluated to offer solutions to the problem. The work is a collection of articles which can be grouped into three parts. The first establishes the context of how the Business Model Canvas (BMC) is used to design business models and explores the way in which CAD can contribute to the design activity. The second part extends this by proposing new techniques and tools which support the elicitation, evaluation (assessment) and evolution of business model designs with CAD. This includes features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models, and features that are adapted to the business model proficiency level of the users. A new way to describe and visualize multiple versions of a business model, and thereby help address the business model as a dynamic object, was also researched. The third part explores extensions to the Business Model Canvas, such as an intermediary model which helps IT alignment by connecting the business model and the enterprise architecture, and a business model pattern for privacy in a mobile environment, using privacy as a key value proposition. The prototyped techniques and the propositions for using CAD tools in business model design will allow commercial CAD developers to create tools that are better suited to the needs of practitioners.
Abstract:
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
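The experience-weighted attraction (EWA) class referred to above has a standard parameterization (Camerer-Ho style), of which cumulative reinforcement and weighted fictitious-play/belief learning are special cases; the paper's own parameterization may differ. The sketch below shows one EWA update followed by a logistic choice rule, with parameter values chosen only for illustration.

```python
import numpy as np

def ewa_step(A: np.ndarray, N: float, payoffs: np.ndarray, chosen: int,
             phi: float = 0.9, delta: float = 0.5, rho: float = 0.9,
             lam: float = 2.0):
    """One experience-weighted attraction update plus logit choice probabilities.

    A       : attractions of each action before the update.
    N       : experience weight before the update.
    payoffs : payoff each action would have earned this round.
    chosen  : index of the action actually played.
    With delta = 0 and rho = 0 this collapses toward reinforcement learning;
    with delta = 1 and rho = phi it behaves like weighted fictitious play.
    """
    N_new = rho * N + 1.0
    weight = np.full(A.shape, delta)
    weight[chosen] = 1.0                      # the played action gets full weight
    A_new = (phi * N * A + weight * payoffs) / N_new
    probs = np.exp(lam * A_new)
    probs /= probs.sum()                      # logistic (softmax) choice rule
    return A_new, N_new, probs
```

The stochastic approximation analysis in the paper works with the deterministic limit of exactly this kind of recursive update, which is what yields differential equations for the action play probabilities.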
Abstract:
INDISIM-YEAST, an individual-based simulator, models the evolution of a yeast population by setting up rules of behaviour for each individual cell according to its own biological rules and characteristics. It takes into account the uptake, metabolism, budding reproduction and viability of the yeast cells over a period of time in the bulk of a liquid medium, occupying a three-dimensional closed spatial grid with two kinds of particles (glucose and ethanol). Each microorganism is characterized by, among other things, its biomass, genealogical age, state in the budding reproduction cycle and position in space. Simulations are carried out for population properties (global properties), as well as for those properties that pertain to individual yeast cells (microscopic properties). The results of the simulations are in good qualitative agreement with established experimental trends.
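A heavily simplified sketch of the kind of per-cell update an individual-based yeast simulator performs is shown below: uptake limited by the local glucose supply, growth of biomass, and budding above a biomass threshold. The spatial grid, viability rules, ethanol metabolism and all parameter values are omitted or invented here; they are not those calibrated in INDISIM-YEAST.

```python
import random

class YeastCell:
    def __init__(self, biomass: float):
        self.biomass = biomass
        self.genealogical_age = 0            # number of completed buddings

    def step(self, medium_glucose: float):
        """One time step: uptake, growth and possibly budding.

        Returns (glucose consumed, daughter cell or None).
        All rates and thresholds are illustrative placeholders.
        """
        uptake = min(0.1 * self.biomass, medium_glucose)   # supply-limited uptake
        self.biomass += 0.5 * uptake                       # part of uptake becomes biomass
        if self.biomass >= 2.0 and random.random() < 0.3:  # budding reproduction
            daughter = YeastCell(biomass=0.4 * self.biomass)
            self.biomass *= 0.6
            self.genealogical_age += 1
            return uptake, daughter
        return uptake, None
```

Population-level (global) properties then emerge from iterating this kind of rule over every cell and updating the shared glucose and ethanol pools accordingly.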