908 results for Environmental objective function


Relevance:

80.00%

Publisher:

Abstract:

Solutions to combinatorial optimization, such as p-median problems of locating facilities, frequently rely on heuristics to minimize the objective function. The minimum is sought iteratively, and a criterion is needed to decide when the procedure (almost) attains it. However, pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small branch of the literature suggests using statistical principles to estimate the minimum and to use the estimate either for stopping or for evaluating the quality of the solution. In this paper we take test problems from Beasley's OR-library and apply Simulated Annealing to these p-median problems. We do this for the purpose of comparing suggested methods of minimum estimation and, eventually, providing a recommendation for practitioners. The paper ends with an illustration: the problem of locating some 70 distribution centres of the Swedish Post in a region.
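The heuristic applied in the paper can be sketched as a minimal simulated-annealing loop for the p-median objective. This is an illustrative sketch, not the authors' implementation; the cooling schedule, swap neighbourhood, and parameter values are assumptions.

```python
import math
import random

def p_median_cost(dist, medians):
    # p-median objective: each customer is served by its nearest open facility.
    return sum(min(row[m] for m in medians) for row in dist)

def simulated_annealing(dist, p, t0=10.0, cooling=0.95, iters=2000, seed=1):
    """Minimise the p-median objective by random one-facility swaps."""
    rng = random.Random(seed)
    n = len(dist)
    current = rng.sample(range(n), p)
    cost = p_median_cost(dist, current)
    best, best_cost = list(current), cost
    t = t0
    for _ in range(iters):
        # Propose swapping one open facility for a closed candidate site.
        out = rng.randrange(p)
        inn = rng.choice([j for j in range(n) if j not in current])
        cand = list(current)
        cand[out] = inn
        cand_cost = p_median_cost(dist, cand)
        # Metropolis acceptance: always accept improvements; accept
        # deteriorations with probability exp(-delta / t).
        delta = cand_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        t *= cooling
    return best, best_cost
```

Because the number of iterations is fixed in advance, nothing in the loop itself tells us how far `best_cost` is from the true minimum, which is exactly the problem the minimum-estimation methods compared in the paper address.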

Relevance:

80.00%

Publisher:

Abstract:

Solutions to combinatorial optimization problems frequently rely on heuristics to minimize an objective function. The optimum is sought iteratively, and pre-setting the number of iterations dominates in operations research applications, which implies that the quality of the solution cannot be ascertained. Deterministic bounds offer a means of ascertaining the quality, but such bounds are available for only a limited number of heuristics, and the length of the interval may be difficult to control in an application. A small, almost dormant, branch of the literature suggests using statistical principles to derive statistical bounds for the optimum. We discuss alternative approaches to deriving statistical bounds. We also assess their performance by testing them on 40 p-median test problems on facility location, taken from Beasley's OR-library, for which the optimum is known. We consider three popular heuristics for solving such location problems: simulated annealing, vertex substitution, and Lagrangian relaxation, of which only the last offers deterministic bounds. Moreover, we illustrate statistical bounds in the location of 71 regional delivery points of the Swedish Post. We find statistical bounds reliable and much more efficient than deterministic bounds, provided that the heuristic solutions are sampled close to the optimum. Statistical bounds are also found computationally affordable.
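One representative of the statistical principles in question is the classical extreme-value (Weibull) point estimate of the minimum from a sample of heuristic solution values — shown here as an illustrative example (Dannenbring's estimator), not necessarily the exact procedure the paper evaluates. Together with the best value found, it yields a statistical interval for the optimum.

```python
def weibull_min_estimate(values):
    """Point estimate of the minimum of an objective function from a
    sample of heuristic solution values, based on the extreme-value
    (Weibull) argument: the smallest order statistics y(1) <= y(2) <= y(3)
    carry the information about the Weibull location parameter."""
    y = sorted(values)
    y1, y2, y3 = y[0], y[1], y[2]
    denom = y1 + y3 - 2.0 * y2
    if denom == 0:
        return y1  # degenerate spacing: fall back on the best value found
    return (y1 * y3 - y2 * y2) / denom
```

The interval [estimate, y(1)] then plays the role that a deterministic bound plays for Lagrangian relaxation; its reliability hinges on the sampled solutions lying close to the optimum, as the abstract notes.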

Relevance:

80.00%

Publisher:

Abstract:

Objective: In the presence of decarboxylase inhibitors, levodopa follows two-compartment kinetics, and its effect is typically modelled using sigmoid Emax models. Pharmacokinetic modelling of the absorption phase of oral administration is problematic because of irregular gastric emptying. The purpose of this work was to identify and estimate a population pharmacokinetic-pharmacodynamic model for duodenal infusion of levodopa/carbidopa (Duodopa®) that can be used for in numero simulation of treatment strategies. Methods: The modelling involved pooling data from two studies and fixing some parameters to values found in the literature (Chan et al. J Pharmacokinet Pharmacodyn. 2005 Aug;32(3-4):307-31). The first study involved 12 patients on 3 occasions and is described in Nyholm et al. Clinical Neuropharmacology 2003:26:156-63. The second study, PEDAL, involved 3 patients on 2 occasions. A bolus dose (normal morning dose plus 50%) was given after a washout during the night. Plasma samples and motor ratings (clinical assessment of motor function from video recordings on a treatment-response scale between -3 and 3, where -3 represents severe parkinsonism and 3 represents severe dyskinesia) were repeatedly collected until the clinical effect was back at baseline. At this point, the usual infusion rate was started and sampling continued for another two hours. Different structural absorption models and effect models were evaluated using the value of the objective function in the NONMEM package. Population mean parameter values, standard errors of estimates (SE) and, if possible, interindividual/interoccasion variability (IIV/IOV) were estimated. Results: Our results indicate that Duodopa absorption can be modelled with an absorption compartment with an added bioavailability fraction and a lag time. The most successful effect model was of sigmoid Emax type with a steep Hill coefficient and an effect-compartment delay. Estimated parameter values are presented in the table.
Conclusions: The absorption and effect models were reasonably successful in fitting observed data and can be used in simulation experiments.
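The general form of the effect model identified — a sigmoid Emax driven by an effect-compartment concentration — can be sketched as follows. This is a generic illustration with made-up parameter values; the paper's actual estimates are in its table.

```python
import math

def sigmoid_emax(c, e0, emax, ec50, hill):
    """Sigmoid Emax pharmacodynamic model: effect as a function of the
    (effect-compartment) concentration c, with Hill coefficient `hill`."""
    return e0 + emax * c**hill / (ec50**hill + c**hill)

def effect_compartment(c_plasma, ke0, dt):
    """First-order effect-compartment delay, dCe/dt = ke0 * (Cp - Ce),
    integrated with a simple Euler step over a sampled plasma profile."""
    ce = [0.0]
    for cp in c_plasma[1:]:
        ce.append(ce[-1] + ke0 * (cp - ce[-1]) * dt)
    return ce
```

At c = EC50 the model returns E0 + Emax/2, and a steep Hill coefficient (as reported here) makes the response nearly on/off around EC50, which is typical for levodopa motor-effect scales.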

Relevance:

80.00%

Publisher:

Abstract:

A customer is presumed to gravitate to a facility according to the distance to it and the attractiveness of it. Regarding the location of the facility, however, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently resolved by the introduction of the gravity p-median model. The model is yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.

Relevance:

80.00%

Publisher:

Abstract:

Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility according to the distance to it and the attractiveness of it. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare parts, for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
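The gravity p-median objective replaces nearest-facility assignment with Huff-type patronising probabilities. A minimal sketch follows; the exponential distance decay, the attractiveness values, and the parameter β are illustrative assumptions, not the paper's calibration.

```python
import math

def gravity_p_median_cost(dist, attract, medians, beta=0.5):
    """Gravity p-median objective: each customer patronises every open
    facility j with a Huff-type probability proportional to
    attract[j] * exp(-beta * d_ij); the expected travelled distance
    over all customers is the objective to minimise."""
    total = 0.0
    for row in dist:
        weights = [attract[j] * math.exp(-beta * row[j]) for j in medians]
        s = sum(weights)
        total += sum(w / s * row[j] for w, j in zip(weights, medians))
    return total
```

Because the expected distance mixes over all open facilities, this objective is no longer the piecewise-linear p-median cost, which is one way to see how the non-concavity that destabilises the solutions can arise.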

Relevance:

80.00%

Publisher:

Abstract:

We analyze a discrete-type version of a common agency model with informed principals of Martimort and Moreira (2005) in the context of lobby games. We begin by discussing issues related to the common-values nature of the model, i.e. the agent cares directly about the principal's utility function. With this feature, the equilibrium of Martimort and Moreira (2005) is not valid. We argue in favor of one solution, although we are not able to fully characterize the equilibrium in this context. We then turn to an application: a modification of the Grossman and Helpman (1994) model of lobbying for tariff protection to incorporate asymmetric information (but disregarding the problem of common values) in the lobbies' objective function. We show that the main results of the original model do not hold, and that lobbies may behave less aggressively towards the policy maker when there is private information in the lobbies' valuation of the tariffs.

Relevance:

80.00%

Publisher:

Abstract:

The objective of this dissertation is to re-examine classical issues in corporate finance, applying a new analytical tool. The single-crossing property, also called the Spence-Mirrlees condition, is not required in the models developed here. This property has been a standard assumption in the adverse selection and signaling models developed so far; the classical papers by Guesnerie and Laffont (1984) and Riley (1979) assume it. In the simplest case, for a consumer with a privately known taste, the single-crossing property states that the marginal utility of a good is monotone with respect to the taste. This assumption has an important consequence for the result of the model: the relationship between the private parameter and the quantity of the good assigned to the agent is monotone. While single crossing is a reasonable property for the utility of an ordinary consumer, it is frequently absent from the objective function of the agents in more elaborate models. The lack of a characterization for the non-single-crossing context has hindered the exploration of models that generate objective functions without this property. The first work to characterize the optimal contract without the single-crossing property is Araújo and Moreira (2001a) and, for the competitive case, Araújo and Moreira (2001b). The main implication is that a partial separation of types may be observed: two sets of disconnected types of agents may choose the same contract, in adverse selection problems, or signal with the same level of signal, in signaling models.
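For reference, the single-crossing (Spence-Mirrlees) property dispensed with here is the standard textbook condition that, for an agent with utility u(q, t, θ) over quantity q and transfer t with private type θ, the marginal rate of substitution between q and t is monotone in the type:

```latex
% Single-crossing (Spence-Mirrlees) condition for u(q, t, \theta):
\frac{\partial}{\partial\theta}
\left( - \frac{\partial u / \partial q}{\partial u / \partial t} \right) > 0
\qquad \text{for all } (q, t, \theta).
```

It is this monotonicity that delivers the monotone type-to-quantity assignment mentioned above; without it, the disconnected pooling of types characterized by Araújo and Moreira can occur.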

Relevance:

80.00%

Publisher:

Abstract:

The difficulty of characterizing non-stationary allocations or equilibria is one of the main explanations for the use of concepts and hypotheses that trivialize the dynamics of the economy. This difficulty is especially critical in Monetary Theory, where the dimensionality of the problem is high even for very simple models. In this context, this paper reports the computational strategy for implementing the recursive method proposed by Monteiro and Cavalcanti (2006), which allows computing the optimal (possibly non-stationary) sequence of money distributions in an extension of the model proposed by Kiyotaki and Wright (1989). Three aspects of this computation are emphasized: (i) the computational implementation of the planner's problem involves choosing continuous and discrete variables that maximize a nonlinear function and satisfy nonlinear constraints; (ii) the objective function of this problem is not concave and the constraints are not convex; and (iii) the set of admissible choices is not known a priori. The goal is to document the difficulties involved, the solutions proposed, and the methods and resources available for the numerical implementation of the characterization of efficient monetary dynamics under the random-matching hypothesis.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we estimate and simulate an open-economy rational-expectations macroeconomic model (Batini and Haldane [4]) for the Brazilian economy, with the objective of identifying the characteristics of optimal monetary rules and the short-run dynamics they generate. We work with a forward-looking and a backward-looking version in order to compare the performance of three parameterizations of monetary rules, which differ with respect to the inflation variable: the traditional Taylor rule, based on past inflation; a rule combining inflation and the real exchange rate (see Ball [5]); and a rule using inflation forecasts (see Bank of England [3]). We solve the model numerically and construct efficient frontiers with respect to the variances of output and inflation through stochastic simulations, for i.i.d. or correlated shocks. The sets of optimal rules for the two versions are qualitatively distinct. Given the uncertainty about the degree of forward-lookingness, we suggest choosing the rules by the sum of the objective functions in the two versions. We conclude that the rules chosen by this criterion incur moderate losses relative to the optimal rules, but prevent the larger losses that would result from choosing a rule based on the wrong version. Finally, we compute impulse response functions of the two models for selected rules, in order to assess how different monetary rules change the short-run dynamics of the two models.
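The kind of stochastic simulation used to trace such frontiers can be sketched with a toy backward-looking model closed by a Taylor-type rule. The model equations and all parameter values below are illustrative assumptions, not the estimated Batini-Haldane system.

```python
import random
import statistics

def rule_variances(phi_pi, phi_y, n=5000, seed=0,
                   alpha=0.4, rho=0.8, sigma=0.3):
    """Simulate a stylised backward-looking economy closed with the
    Taylor-type rule i_t = phi_pi * pi_t + phi_y * y_t and return the
    simulated variances of inflation and the output gap."""
    rng = random.Random(seed)
    pi = y = i = 0.0
    pis, ys = [], []
    for _ in range(n):
        # Demand reacts to the lagged real rate; inflation accumulates
        # lagged output gaps (accelerationist Phillips curve).
        y_next = rho * y - sigma * (i - pi) + rng.gauss(0.0, 1.0)
        pi_next = pi + alpha * y
        pi, y = pi_next, y_next
        i = phi_pi * pi + phi_y * y        # policy rule
        pis.append(pi)
        ys.append(y)
    return statistics.pvariance(pis), statistics.pvariance(ys)
```

Sweeping over (phi_pi, phi_y) and plotting the two returned variances against each other traces out an output-inflation variance frontier of the kind constructed in the paper.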

Relevance:

80.00%

Publisher:

Abstract:

Leverage in hedge funds has worried investors and scholars in recent years. Recent examples of such strategies proved advantageous in periods of low economic uncertainty, but disastrous in times of crisis. In quantitative finance, researchers have sought the level of leverage that optimizes the return of an investment given the risk incurred. The literature has been more qualitative than quantitative, and computational methods have rarely been used to find a solution. One way to assess whether one leverage strategy yields higher gains than another is to define an objective function relating risk and return for each strategy, identify the constraints of the problem, and solve it numerically through Monte Carlo simulations. This dissertation adopted this approach to study investment in a long-short strategy in an equity investment fund under different scenarios: different forms of leverage, stock-price dynamics, and levels of correlation between those prices. We simulated the dynamics of the invested capital as a function of changes in stock prices over time. We considered credit-guarantee criteria, as well as the possibility of buying and selling stocks during the investment period and the investor's risk profile. Finally, we studied the distribution of the investment return for different leverage levels, and we were able to quantify which of these levels is most advantageous for the investment strategy given the risk constraints.
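A stripped-down version of such a Monte Carlo study can be sketched as follows. The return process, the parameter values, and the crude ruin rule standing in for a margin call are all illustrative assumptions, far simpler than the dissertation's scenarios.

```python
import random

def simulate_leverage(leverage, n_paths=2000, n_steps=252,
                      mu=0.0004, vol=0.01, seed=7):
    """Monte Carlo of a leveraged strategy: daily P&L is `leverage`
    times a normally distributed strategy return; if losses exhaust
    the capital, the position is wiped out (a crude margin constraint).
    Returns the simulated distribution of final equity per unit invested."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        equity = 1.0
        for _ in range(n_steps):
            r = rng.gauss(mu, vol)
            equity *= 1.0 + leverage * r
            if equity <= 0.0:          # ruin: margin call, stop trading
                equity = 0.0
                break
        finals.append(equity)
    return finals
```

An objective such as mean(finals) minus a risk-aversion coefficient times their standard deviation can then be compared across leverage levels, which is the numerical counterpart of the risk-return trade-off studied above.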

Relevance:

80.00%

Publisher:

Abstract:

The paper analyzes a general equilibrium model with two periods, several households, and a government that has to finance some expenditures in the first period. Households may have some private information either about their type (adverse selection) or about some action level chosen in the first period that affects the probability of certain states of nature in the second period (moral hazard). Trade in financial assets is intermediated by a finite collection of banks. Banks' objective functions are determined in equilibrium by shareholders. Due to private information, it may be optimal for the banks to introduce constraints on the set of available portfolios for each household, as well as household-specific asset prices. In particular, households may face distinct interest rates for holding the risk-free asset. The government finances its expenditures either by taxing households in the first period or by issuing bonds in the first period and taxing households in the second period. Taxes may be state-dependent. Suppose government policies are neutral: i) government policies do not affect the distribution of wealth across households; and ii) if the government decides to tax a household in the second period, there is a portfolio available to the banks that generates the same payoff in each state of nature as the household taxes. Then, Ricardian equivalence holds if and only if an appropriate boundary condition is satisfied. Moreover, at every free-entry equilibrium the boundary condition is satisfied, and thus Ricardian equivalence holds. These results do not require any particular assumption on the banks' objective function. In particular, we do not assume banks to be risk neutral.

Relevance:

80.00%

Publisher:

Abstract:

The Agent-Based Model approach is used to tackle complex problems, in which results are obtained from the analysis and construction of components and of the interactions among them. The results observed in the simulations are aggregates of the combination of actions and interferences that occur at the microscopic level of the model, leading to a simulation from the micro to the macro. Financial markets are perfect systems for the use of these models, since they fulfil all of their requirements. This work implements an Agent-Based Financial Market Model composed of several agents that interact with one another through a Trading Engine that operates with two assets and relies on market makers to promote market liquidity, as observed in real markets. To operate this model, two types of agents were developed that simultaneously manage portfolios with the two assets. The first type uses the Markowitz model, while the second uses spread-analysis techniques between assets. Another contribution of this model is the analysis of using an objective function over asset returns, instead of analyses over prices.
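The Markowitz-type agents' reliance on returns (rather than prices) can be illustrated by the closed-form minimum-variance weights for a two-asset portfolio, computed directly from return series. This is a generic textbook sketch, not the dissertation's agent code.

```python
import statistics

def min_variance_weights(r1, r2):
    """Markowitz minimum-variance weights for two assets, from their
    return series: w1 = (var2 - cov) / (var1 + var2 - 2*cov)."""
    v1 = statistics.pvariance(r1)
    v2 = statistics.pvariance(r2)
    m1, m2 = statistics.fmean(r1), statistics.fmean(r2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2)) / len(r1)
    w1 = (v2 - cov) / (v1 + v2 - 2.0 * cov)
    return w1, 1.0 - w1
```

An agent rebalancing with these weights each period needs only the recent return history of the two assets, which is precisely the informational choice (returns over prices) the abstract highlights.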

Relevance:

80.00%

Publisher:

Abstract:

This study investigates whether there is competition between public and private retail banks in the presence of government interventions imposed on the Brazilian bank credit market, such as the increase in the credit supply through public banks and the campaign to reduce bank spreads led by government banks. The results of the Diff-in-Diff model indicate that, after the treatment, public banks show higher growth of the credit stock, provisioning level, credit-portfolio profitability, operating return, and funding cost than private banks. Moreover, there is evidence of changes in the resource-allocation strategy of private banks relative to their public peers, with private banking institutions preferring to increase the share of liquid assets on the balance sheet at the expense of credit operations after the treatment. These results suggest that private banks do not compete with public banks in the retail segment when the latter adopt resource-allocation strategies at odds with maximizing expected profit for a given risk.
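The Diff-in-Diff identification used here reduces, in its canonical form, to a double difference of group means. The study's actual specification presumably includes controls and fixed effects; this is just the bare estimator.

```python
from statistics import fmean

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Canonical difference-in-differences estimator: the change in the
    treated group's mean outcome net of the change in the control group's."""
    return (fmean(treated_post) - fmean(treated_pre)) \
         - (fmean(control_post) - fmean(control_pre))
```

Here the "treatment" is the 2012 public-bank intervention, treated units are public banks, and the outcomes are the balance-sheet measures listed above.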

Relevance:

80.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed for the simulation of electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms used in those formulations were mostly based solely on the characteristics of transmission systems, which were the main concern of engineers and researchers. Yet the physical characteristics of these systems are quite different from those of distribution systems. In transmission systems, voltage levels are high and the lines are generally very long. These aspects cause the capacitive and inductive effects that appear in the system to have a considerable influence on the values of the quantities of interest, which is why they should be taken into consideration. Also, in transmission systems the loads have a macro nature, such as cities, neighbourhoods, or large industries. These loads are generally practically balanced, which reduces the need for three-phase methodologies in load flow calculation. Distribution systems, on the other hand, present different characteristics: voltage levels are low in comparison with transmission ones, which almost annuls the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries feed small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high. Thus the use of three-phase methodologies assumes an important dimension. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to allow the simulation of its real behaviour. For these reasons, a method for three-phase load flow calculation was initially developed in the scope of this work in order to simulate the steady-state behaviour of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm had already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modelled as three-phase circuits, considering the magnetic coupling between the phases; the earth effect is considered through the Carson reduction. It is important to point out that, although loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, allowing the simulation of various types of configurations according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process in order to match the current in each switch, converging to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived based on the described load flow, with the objective of supporting further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first refers to the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second refers to the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
In the case of loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, giving insight into their performance and accuracy.
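The Power Summation Algorithm the three-phase method builds on can be sketched for the single-phase radial case as a backward/forward sweep. This is an illustrative sketch in per-unit for a simple chain feeder; the thesis's formulation adds three-phase coupling, the Carson reduction, regulators, and switch-current matching.

```python
def power_summation_load_flow(v_source, z, s_load, tol=1e-8, max_iter=100):
    """Backward/forward sweep (power summation) load flow for a radial
    single-phase feeder modelled as a chain of branches.
    z: complex series impedance of each branch; s_load: complex load at
    the receiving node of each branch. Returns node voltage magnitudes."""
    n = len(z)
    v = [complex(v_source)] * (n + 1)      # node 0 is the source
    for _ in range(max_iter):
        # Backward sweep: accumulate downstream powers plus branch
        # losses estimated from the current voltage profile.
        s_branch = [0j] * n
        acc = 0j
        for k in range(n - 1, -1, -1):
            acc += s_load[k]
            acc += z[k] * abs(acc / v[k + 1]) ** 2   # series losses
            s_branch[k] = acc
        # Forward sweep: update voltages from the source downwards.
        v_new = [complex(v_source)]
        for k in range(n):
            i_branch = (s_branch[k] / v_new[k]).conjugate()
            v_new.append(v_new[k] - z[k] * i_branch)
        converged = max(abs(a - b) for a, b in zip(v, v_new)) < tol
        v = v_new
        if converged:
            break
    return [abs(x) for x in v]
```

The two objective functions described above fall out of such a solution directly: the losses are the accumulated branch-loss terms, and the voltage-profile objective is the sum of squared deviations of the returned magnitudes from the rated (1.0 p.u.) voltage.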

Relevance:

80.00%

Publisher:

Abstract:

A program based on the differential evolution technique was developed to define the optimal genetic contribution in the selection of breeding candidates. The objective function to be optimized was composed of the expected genetic merit of the future progeny and the average coancestry of the breeding animals. Real and simulated data sets of populations with overlapping generations were used to validate and test the performance of the developed program. The program proved computationally efficient and viable for practical application, and the expected consequences of its use, compared with empirical procedures for inbreeding control and/or with selection based only on the expected breeding value, would be improved future genetic response and more effective limitation of the inbreeding rate.
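The optimization described — differential evolution over a vector of contributions trading off merit against coancestry — can be sketched as follows. The penalty form of the objective, the simplex repair, and all DE parameters are illustrative assumptions, not the developed program.

```python
import random

def optimal_contributions(ebv, kinship, lam=1.0, pop=30, gens=200, seed=3):
    """Differential-evolution sketch of optimal-contribution selection:
    maximise expected merit c'g minus lam times the mean coancestry
    c'Ac, with contributions c >= 0 summing to one."""
    rng = random.Random(seed)
    n = len(ebv)

    def repair(c):
        # Project back onto the simplex: clip negatives, renormalise.
        c = [max(x, 0.0) for x in c]
        s = sum(c) or 1.0
        return [x / s for x in c]

    def fitness(c):
        merit = sum(ci * gi for ci, gi in zip(c, ebv))
        coancestry = sum(c[i] * c[j] * kinship[i][j]
                         for i in range(n) for j in range(n))
        return merit - lam * coancestry

    pool = [repair([rng.random() for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            base, d1, d2 = rng.sample(
                [p for j, p in enumerate(pool) if j != i], 3)
            mutant = repair([base[k] + 0.8 * (d1[k] - d2[k])
                             for k in range(n)])
            # Binomial crossover with the current member, then repair.
            trial = repair([m if rng.random() < 0.9 else x
                            for m, x in zip(mutant, pool[i])])
            if fitness(trial) > fitness(pool[i]):
                pool[i] = trial
    return max(pool, key=fitness)
```

Raising `lam` shifts the returned contributions away from the highest-merit animals towards a more even spread, which is the inbreeding-control lever the abstract refers to.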