884 results for G13 - Contingent Pricing
Resumo:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
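As a rough illustration of the estimation step described above (not the dissertation's code), the sketch below fits scikit-learn's GraphicalLasso to data simulated from a chain-structured precision matrix and counts how many true edges are recovered; the graph, sample size, and regularization weight are invented for the example.

```python
# Minimal illustration: recovering a sparse conditional-dependence graph with the
# graphical lasso. All names and parameters below are illustrative assumptions.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Toy "circuit-like" precision matrix: a chain graph plus a diagonal, so that
# nonzeros in the precision matrix mirror the edges of the chain.
n = 8
precision = np.eye(n) * 2.0
for i in range(n - 1):
    precision[i, i + 1] = precision[i + 1, i] = -0.8

covariance = np.linalg.inv(precision)

# Sample a limited number of nodal measurements from the model.
samples = rng.multivariate_normal(np.zeros(n), covariance, size=200)

# Graphical lasso: sparse inverse-covariance estimation with an l1 penalty.
model = GraphicalLasso(alpha=0.05).fit(samples)

estimated_edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(n, dtype=bool)
true_edges = (np.abs(precision) > 1e-3) & ~np.eye(n, dtype=bool)
print("correctly recovered edges:", int((estimated_edges & true_edges).sum()) // 2)
print("spurious edges:", int((estimated_edges & ~true_edges).sum()) // 2)
```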
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users on the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a certain sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
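For context, the following sketch simulates a standard Kelly-style primal fluid model in which each source adjusts its rate in response to a congestion price generated at a shared link. It deliberately omits the buffering effects that the abstract argues existing fluid models neglect, and all parameters are illustrative.

```python
# Toy fluid model of congestion control (a Kelly-style primal algorithm), shown
# only to make the modelling language concrete. Parameters are illustrative.
import numpy as np

num_sources, capacity, gain = 3, 10.0, 0.1
weights = np.array([1.0, 2.0, 3.0])   # utility weights w_i (utility w_i * log x_i)
rates = np.ones(num_sources)          # initial source rates x_i

def price(total_rate, capacity):
    """Link congestion price p(y): zero below capacity, linear penalty above."""
    return max(0.0, total_rate - capacity)

dt = 0.01
for _ in range(20000):
    p = price(rates.sum(), capacity)
    # Primal dynamics: dx_i/dt = k * (w_i - x_i * p)
    rates += dt * gain * (weights - rates * p)

print("equilibrium rates:", np.round(rates, 2))
print("total rate vs capacity:", round(rates.sum(), 2), "vs", capacity)
```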
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network that minimizes the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
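To make the over-delivery idea concrete, the toy dispatch below relaxes the nodal balance equalities of a three-bus, DC-style model to inequalities and solves the result with cvxpy. The network data are invented, and the linear model is used only to exhibit the inequality relaxation; it is not the dissertation's convex relaxation of the full AC equations.

```python
# Toy illustration of "power over-delivery": nodal power-balance equalities are
# relaxed to inequalities (injections may exceed what loads absorb). All numbers
# are made up for this sketch.
import cvxpy as cp
import numpy as np

# 3-bus example: generators at buses 0 and 1, load at bus 2.
demand = np.array([0.0, 0.0, 1.5])
cost = np.array([1.0, 3.0, 0.0])           # linear generation costs
line_limit = 1.0                            # identical limit on each line

gen = cp.Variable(3, nonneg=True)           # generation at each bus
flow = cp.Variable(3)                       # flows on lines (0-1), (1-2), (0-2)

# Net injection at each bus, given the line orientation above.
injection = cp.hstack([gen[0] - flow[0] - flow[2],
                       gen[1] + flow[0] - flow[1],
                       gen[2] + flow[1] + flow[2]])

constraints = [
    injection >= demand,                    # relaxed balance: ">=" instead of "=="
    cp.abs(flow) <= line_limit,
    gen[2] == 0,                            # bus 2 has no generator
]
problem = cp.Problem(cp.Minimize(cost @ gen), constraints)
problem.solve()
print("generation:", np.round(gen.value, 3), "cost:", round(problem.value, 3))
```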
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
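The following sketch shows the flavor of the GNF relaxation on a single lossy line: the nonconvex equality linking the two end flows is replaced by a convex inequality, which in this small example turns out to be active at the optimum. The quadratic loss function and all numbers are made up for the illustration.

```python
# Sketch of a GNF-style relaxation on a single lossy line: the nonconvex equality
# p_ji = -p_ij + loss(p_ij) is relaxed to the convex inequality
# p_ji >= -p_ij + loss(p_ij). The quadratic loss and the numbers are illustrative.
import cvxpy as cp

p_ij = cp.Variable()   # flow entering the line at node i
p_ji = cp.Variable()   # flow at the receiving end (negative of delivered power)

alpha = 0.1            # loss coefficient of the illustrative quadratic loss
demand = 0.8           # power that node j must receive, i.e. -p_ji >= demand

constraints = [
    p_ji >= -p_ij + alpha * cp.square(p_ij),   # relaxed flow equation (convex)
    -p_ji >= demand,                            # injection constraint at node j
    p_ij >= 0, p_ij <= 2.0,                     # box constraint at node i
]
# Minimize the injection at the sending end.
problem = cp.Problem(cp.Minimize(p_ij), constraints)
problem.solve()
print("sending-end flow:", round(p_ij.value, 4))
print("receiving-end flow:", round(p_ji.value, 4))
print("relaxation gap at optimum:",
      round(p_ji.value - (-p_ij.value + alpha * p_ij.value**2), 6))
```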
Generalized Weighted Graphs: Motivated by power network optimization, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph, such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks is polynomial-time solvable due to the passivity of transmission lines and transformers.
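A common route to such polynomial-time guarantees is semidefinite relaxation; the sketch below illustrates the lifting step on a made-up three-node quadratic problem and is not tied to the dissertation's specific graph-theoretic conditions.

```python
# Illustrative SDP relaxation of a small nonconvex quadratic problem of the kind
# studied over generalized weighted graphs: minimize x' M x subject to simple
# quadratic box constraints, lifted to X = x x' with the rank constraint dropped.
# The matrix and bounds are made up for the sketch.
import cvxpy as cp
import numpy as np

# Edge weights of a 3-node "generalized weighted graph" encoded in M.
M = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

X = cp.Variable((3, 3), symmetric=True)
constraints = [X >> 0]                               # relax X = x x' to X PSD
constraints += [cp.diag(X) >= 1, cp.diag(X) <= 4]    # box constraints on x_i^2

problem = cp.Problem(cp.Minimize(cp.trace(M @ X)), constraints)
problem.solve()

eigenvalues = np.linalg.eigvalsh(X.value)
print("optimal relaxation value:", round(problem.value, 4))
# A (numerically) rank-one optimal X certifies that the relaxation is exact.
print("eigenvalues of optimal X:", np.round(eigenvalues, 4))
```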
Resumo:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problems of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of using a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium of our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way so that eigenvector centrality emerges as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Our result may thus be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
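The limiting behaviour can be checked numerically. For a connected graph, the Bonacich vector b(β) = (I − βA)⁻¹A·1 aligns with the principal eigenvector of the adjacency matrix as the decay parameter β approaches 1/λ_max, which mirrors the role played by the discount factor in the chapter. The small graph below is invented, and the sketch illustrates only this convergence, not the bargaining model itself.

```python
# Numerical illustration: Bonacich centrality approaches the direction of the
# principal eigenvector of A as beta approaches 1/lambda_max. The graph is made up.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

eigenvalues, eigenvectors = np.linalg.eigh(A)
lambda_max = eigenvalues[-1]
eigen_centrality = np.abs(eigenvectors[:, -1])
eigen_centrality /= eigen_centrality.sum()

ones = np.ones(A.shape[0])
for beta in [0.1, 0.3, 0.95 / lambda_max, 0.999 / lambda_max]:
    bonacich = np.linalg.solve(np.eye(A.shape[0]) - beta * A, A @ ones)
    bonacich /= bonacich.sum()
    gap = np.abs(bonacich - eigen_centrality).max()
    print(f"beta = {beta:.3f}  max deviation from eigenvector centrality = {gap:.4f}")
```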
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions that guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
Resumo:
Home to hundreds of millions of souls and a land of excess, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that, unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the “creeping barriers” that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to elastically propagate all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties. What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year by a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is increased back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correlate with the annual monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at some periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts to be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the resulting events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.
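Returning to the rate-strengthening spring-and-slider analysis mentioned above, the sketch below integrates a minimal quasi-static slider with velocity-strengthening friction driven by an annual stress oscillation and compares the resulting slip-rate modulation with a naive static estimate. The model and every parameter value are illustrative stand-ins, not those used in the thesis.

```python
# Minimal spring-and-slider sketch with rate-strengthening friction driven by a
# periodic stress perturbation. Model and parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

sigma = 100e6        # normal stress [Pa]
a_minus_b = 1e-3     # rate-strengthening friction parameter (a - b > 0)
k = 1e6              # spring (fault) stiffness [Pa/m]
v_plate = 1e-9       # long-term loading rate [m/s]
amp = 1e3            # amplitude of the periodic stress perturbation [Pa]
period = 365.25 * 86400.0     # one year, the monsoon period
omega = 2 * np.pi / period

def slip_rate_ode(t, y):
    """Quasi-static balance: (a-b)*sigma/v * dv/dt = k*(v_plate - v) + d(perturbation)/dt."""
    v = y[0]
    dtau_dt = k * (v_plate - v) + amp * omega * np.cos(omega * t)
    return [v * dtau_dt / (a_minus_b * sigma)]

t_end = 6 * period
sol = solve_ivp(slip_rate_ode, [0.0, t_end], [v_plate],
                max_step=period / 200, rtol=1e-8, atol=1e-15)

# Relative slip-rate modulation over the last cycle, next to the "unsprung"
# static estimate amp / ((a-b)*sigma).
last_cycle = sol.t > t_end - period
modulation = (sol.y[0][last_cycle].max() - sol.y[0][last_cycle].min()) / (2 * v_plate)
print("relative slip-rate modulation:", round(modulation, 3))
print("static estimate amp/((a-b)*sigma):", round(amp / (a_minus_b * sigma), 3))
```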
Resumo:
31 p.
Resumo:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this problem it can be carried out with the aid of the Reduce algebra-manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
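For reference, the quantity being corrected is the lowest-order (Ore-Powell) three-photon rate; the first-order computation described in this work aims to determine the coefficient of the α/π term, which is therefore left symbolic below.

```latex
% Lowest-order ortho-positronium (3-gamma) decay rate and the form of the
% first-order QED correction; the coefficient C is what the computation targets.
\Gamma_0 \;=\; \frac{2(\pi^2 - 9)}{9\pi}\,\alpha^6\,\frac{m_e c^2}{\hbar}
\;\approx\; 7.2\times 10^{6}\ \mathrm{s}^{-1}
\quad\Longleftrightarrow\quad \tau_0 \approx 1.4\times 10^{-7}\ \mathrm{s},
\qquad
\Gamma \;=\; \Gamma_0\left[\,1 + C\,\frac{\alpha}{\pi}
            + \mathcal{O}(\alpha^2\ln\alpha)\,\right].
```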
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra-manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Resumo:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
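The sketch below illustrates the EC2-style greedy rule on a noiseless toy problem: hypotheses are grouped into equivalence classes (the competing theories), edges connect hypotheses from different classes, and each round asks the test whose answer is expected to cut the most edge weight. The hypotheses, tests, and priors are invented, and the real BROAD procedure additionally handles response noise.

```python
# Toy sketch of EC2-style adaptive test selection (after Golovin, Krause & Ray).
# Hypotheses, tests, and priors are made up for illustration.
import itertools

# predictions[h][t] = choice (0 or 1) that hypothesis h predicts for test t
predictions = {
    "EV-1": [0, 0, 1, 0],
    "EV-2": [0, 1, 1, 0],
    "PT-1": [1, 0, 0, 1],
    "PT-2": [1, 1, 0, 0],
}
theory_of = {"EV-1": "EV", "EV-2": "EV", "PT-1": "PT", "PT-2": "PT"}
prior = {h: 0.25 for h in predictions}
num_tests = 4

def edge_weight(hypotheses):
    """Total weight of edges between hypotheses belonging to different theories."""
    return sum(prior[a] * prior[b]
               for a, b in itertools.combinations(hypotheses, 2)
               if theory_of[a] != theory_of[b])

def expected_cut(test, alive):
    """Expected edge weight cut by observing the subject's choice on `test`."""
    before = edge_weight(alive)
    value = 0.0
    for outcome in (0, 1):
        consistent = [h for h in alive if predictions[h][test] == outcome]
        p_outcome = sum(prior[h] for h in consistent) / sum(prior[h] for h in alive)
        value += p_outcome * (before - edge_weight(consistent))
    return value

alive = list(predictions)
true_hypothesis = "PT-2"                      # simulated noiseless subject
while edge_weight(alive) > 0:
    best = max(range(num_tests), key=lambda t: expected_cut(t, alive))
    answer = predictions[true_hypothesis][best]
    alive = [h for h in alive if predictions[h][best] == answer]
    print(f"ask test {best}, observe {answer}, remaining: {alive}")
print("surviving theory:", {theory_of[h] for h in alive})
```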
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
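A deterministic special case helps fix ideas (the thesis's argument is probabilistic, resting on positive dependence of the subjective clock rather than on a particular functional form): exponential discounting evaluated in a concave, logarithmic subjective time already reproduces generalized hyperbolic discounting.

```latex
% Deterministic illustration: exponential discounting applied to a concave
% (logarithmic) subjective clock tau(t) yields generalized hyperbolic discounting.
\tau(t) = \tfrac{1}{k}\ln(1 + kt), \qquad
D(t) = e^{-\rho\,\tau(t)} = (1 + kt)^{-\rho/k}.
```

This is the Loewenstein-Prelec generalized hyperbolic family; hyperbolic discounting proper, D(t) = 1/(1 + kt), corresponds to ρ = k.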
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than what its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute increases excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
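A stylized version of the mechanism can be written down directly, shown below as a reference-dependent logit model in which price increases above the reference are penalized more heavily than equal-sized discounts are rewarded. Product prices, the reference-adjustment story, and all coefficients are invented; this is not the retailer's data or the thesis's estimated model.

```python
# Sketch of a reference-dependent (loss-averse) logit demand model of the kind
# described in the text. All numbers are invented for the illustration.
import numpy as np

def loss_averse_utility(price, reference_price, beta=1.0, lam=2.25):
    """Linear price utility plus an asymmetric gain/loss term around the reference."""
    gain = np.maximum(reference_price - price, 0.0)   # paying less than expected
    loss = np.maximum(price - reference_price, 0.0)   # paying more than expected
    return -beta * price + beta * gain - lam * beta * loss

def logit_shares(utilities):
    expu = np.exp(utilities - utilities.max())
    return expu / expu.sum()

prices = np.array([5.0, 5.2])          # item A and a close substitute B
references = np.array([5.0, 5.2])      # reference prices before any promotion

# Compare predicted shares at baseline, during a discount on A, and after the
# discount is withdrawn (once the reference has adapted to the promotional price).
for label, p, r in [("baseline", prices, references),
                    ("A discounted", np.array([4.0, 5.2]), references),
                    ("discount withdrawn", prices, np.array([4.0, 5.2]))]:
    shares = logit_shares(loss_averse_utility(p, r))
    print(f"{label:20s} shares A/B: {shares.round(3)}")
```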
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Resumo:
Energy and sustainability have become two of the most critical issues of our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable the efficient integration of renewable energy into complex distributed systems with limited information.
The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT is one of the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these efficiency improvements do not necessarily lead to a reduction in energy consumption because ever more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed, with theoretically provable guarantees, to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
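The sketch below captures the spatial half of that flexibility in its simplest, one-shot form: workload is routed toward the sites where renewable supply currently covers it, with grid power priced differently at each site making up any shortfall. Capacities, prices, and renewable levels are invented, and unlike the thesis's algorithms the problem is solved centrally with full information.

```python
# Minimal sketch of "follow the renewables" geographical load balancing.
# All data are invented; this is a one-shot, centrally solved illustration.
import cvxpy as cp
import numpy as np

demand = 10.0                                   # total workload to place (server units)
capacity = np.array([6.0, 6.0, 6.0])            # per-data-center capacity
renewable = np.array([5.0, 1.0, 2.0])           # renewable supply available right now
grid_price = np.array([0.12, 0.08, 0.20])       # $ per unit of grid ("brown") energy

load = cp.Variable(3, nonneg=True)              # workload routed to each data center
brown = cp.pos(load - renewable)                # grid energy needed beyond renewables

problem = cp.Problem(cp.Minimize(grid_price @ brown),
                     [cp.sum(load) == demand, load <= capacity])
problem.solve()
print("routing:", np.round(load.value, 2))
print("grid energy drawn per site:", np.round(brown.value, 2))
```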
The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand-responsive. The potential of such an approach is huge.
To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society in order to improve social welfare.
Resumo:
This thesis brings together four papers on optimal resource allocation under uncertainty with capacity constraints. The first is an extension of the Arrow-Debreu contingent claim model to a good subject to supply uncertainty for which delivery capacity has to be chosen before the uncertainty is resolved. The second compares an ex-ante contingent claims market to a dynamic market in which capacity is chosen ex-ante and output and consumption decisions are made ex-post. The third extends the analysis to a storable good subject to random supply. Finally, the fourth examines optimal allocation of water under an appropriative rights system.
Resumo:
The focus of this project is, as the title indicates, the comparison of marketing policies applied by the same company in different countries and an analysis of the reasons for the differences. To do that, I have selected the company Nestlé and analyzed the marketing decisions it makes across national boundaries to market the Kit Kat brand and keep it a leading snack worldwide. After having analyzed the brand on all continents, I can say that the execution of Nestlé's strategy for Kit Kat really matches its planning, which is to think globally and act locally. Nestlé uses a global brand identity but, internally, it uses local ingredients and gives autonomy to its local branches in different countries to make pricing and distribution decisions and thereby satisfy different consumers’ needs and preferences in local environments where changes happen very rapidly. The “glocal” approach to marketing is an effective way for Nestlé Kit Kat to stay focused on the consumer in worldwide markets.
Resumo:
It is estimated that by 2025, 23% of the total population of developed countries will be over 60 years of age, highlighting the gradual aging of these countries' populations. It is therefore easy to see how many women will be living through menopause, with its biological, psychological, and social effects. The hormonal and physiological changes that occur in women during menopause, accompanied by the aesthetic devaluation of the body and by a whole set of physical and psychological symptoms, have been interpreted as a loss of femininity, signaling inevitable aging and finitude. However, many of the discomforts women experience in this phase are due not to biological changes but to their socialization process, characterizing the influence of gender. In this context, the object of this study was the influence of gender relations on the experience and meaning of the menopause process, with the following objectives: to describe the experience of menopause from the perspective of women living through it, and to identify the gender-related particularities directly involved in the experience of menopause from the women's perspective. A qualitative, descriptive approach was used with twenty women aged between 45 and 55 who had undergone spontaneous menopause and were clients of the Basic Health Units of the city of Curitibanos-SC, between 1 and 15 October 2009. Data were collected using a semi-structured interview script with one guiding question: "Tell me what it is like for you to be experiencing menopause." Interpretation and analysis were carried out through thematic content analysis as described by Bardin. In the narratives, categories were identified and integrated into four main themes: 1 - Experiencing Menopause; 2 - Identifying Transformations in the Body and in Life; 3 - Caring for Oneself; 4 - Seeking Information/Influences and Building Knowledge. These categories showed that the women hold the notion that menopause is a disease and associate this phase with aging and physical decline, bringing great suffering, which demonstrates the influence of gender on how this phase is experienced. The interviewees described various symptoms that bothered them and interfered with their daily activities and their way of being, often affecting their family and professional lives. The knowledge about menopause in this group of women was built over the course of their lives and reflects their cultural and social realities, making evident the scarcity of sources of information and the taboos surrounding menopause. This study contributes to the generation of knowledge by taking into account the effects that gender influence can have on the experience and perception of menopause, demystifying it so that women's experience during this period is not conditioned by gender-related stereotypes and beliefs.
Resumo:
[ES] It is often said that there are two kinds of cultural heritage. One is "eternal," "rigorous," "irreplaceable," or "global"; the other is "contingent," "flexible," "replaceable," or "local." The first belongs to "humanity"; the second, to the "community." The papers presented in this publication describe and analyze processes of activation of that second kind of heritage: the interests and valuations that have led to the heritagization or enhancement of certain cultural assets; which agents have intervened in that process; and why, for what purpose, and how they have done so. They are more interested in the modus operandi than in the opus operatum. Written by professors, researchers, specialists, and practitioners, the papers in this publication present a set of European and American experiences, showing a diverse and complex panorama of the heritage and museum field, as well as different approaches depending on the position the author occupies with respect to the cultural assets.
Resumo:
[EN] The aim of this paper is to study systematic liquidity on the Euronext Lisbon Stock Exchange. The motivation for this research is provided by the growing interest in the financial literature in stock liquidity and in the implications of commonality in liquidity for asset pricing, since it could represent a source of non-diversifiable risk. Namely, we analyse whether there exist common factors that drive the variation in individual stock liquidity, and what causes the inter-temporal variation of aggregate liquidity. Monthly data for the period between January 1988 and December 2011 are used to compute some of the most widely used proxies for liquidity: bid-ask spreads, the turnover rate, trading volume, the proportion of zero returns, and the illiquidity ratio. Following the methodology of Chordia et al. (2000), some evidence of commonality in liquidity is found in the Portuguese stock market when the proportion of zero returns is used as the measure of liquidity. In relation to the factors that drive the inter-temporal variation of Portuguese stock market liquidity, the results obtained within a VAR framework suggest that changes in real economic activity, monetary policy (proxied by changes in the monetary aggregate M1), and stock market returns play an important role as determinants of commonality in liquidity.
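The sketch below shows, on simulated data, two of the ingredients named in the abstract: an Amihud-style illiquidity proxy and a Chordia-et-al.-style commonality regression of individual liquidity changes on market-wide liquidity changes. The panel is synthetic; the study itself uses monthly Euronext Lisbon data for 1988-2011.

```python
# Sketch of an Amihud-style illiquidity proxy and a simple commonality regression.
# The panel below is simulated, not Euronext Lisbon data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
months, stocks = 60, 5

# Simulated monthly panel of returns and traded volume.
returns = pd.DataFrame(rng.normal(0, 0.05, (months, stocks)))
volume = pd.DataFrame(rng.uniform(1e5, 1e6, (months, stocks)))

# Amihud illiquidity ratio: |return| / traded volume (scaled), per stock and month.
illiquidity = 1e6 * returns.abs() / volume
# The proportion of zero returns would be computed analogously from daily data,
# e.g. (daily_returns == 0).groupby(month).mean().

d_illiq = illiquidity.diff().dropna()        # monthly changes in illiquidity
market = d_illiq.mean(axis=1)                # equal-weighted market liquidity change
# (the original methodology excludes the stock itself from the market average)

# Commonality: slope of each stock's liquidity change on the market's.
for stock in d_illiq.columns:
    beta = np.polyfit(market, d_illiq[stock], 1)[0]
    print(f"stock {stock}: commonality beta = {beta:.2f}")
```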
Resumo:
The Brazilian Federal Constitution of 1988, the country's first constitutional text after the military regime, is a plural text, full of meanings and ambivalences, the result of an effort to build consensus in a heterogeneous constituent assembly in a democratically young country. In order to be applied to the equally plural and heterogeneous realities of a country as vast as Brazil, the breadth of meanings of this Magna Carta therefore needs to be interpreted in the light of those same realities, which are always situated and dated. In this vein, the dogmatic derivation of the legitimate sphere of state action in the economic domain can only be apprehended through the technique of balancing, taking into account the fact that the economic constitution embedded in the text of the supreme law establishes, as a rule, that economic activity is to be carried out through the free initiative of society. Under the sign of the emancipation of society and the freedom to undertake, on the one hand, and of the State's obligation to guarantee the conditions of freedom and Brazil's strategic position in the world geopolitical and economic scenario, on the other, the modalities of state intervention, such as the provision of public services, monopolistic activity, and activity in competition with private initiative, are reread, producing a framework of the conditions that justify the State's presence in the economy and serving as a dynamic guide for elucidating the ever-contingent boundaries between the public and private spheres in the economic domain.
Resumo:
Abstract (Spanish): Access to medicines has been a subject of debate in recent decades both in the countries of the global South and in those of the North. The prices imposed by the pharmaceutical industry for diseases as serious as HIV/AIDS have been excessive and consequently unaffordable for the poorest countries. Nowadays, however, the richest countries of the North are suffering these same consequences because of the new treatment for hepatitis C, whose astronomical prices have excluded many patients from access to it. The human right to health is being violated, and the main parties responsible are the pharmaceutical companies, which have corrupted health systems. The setting of monopoly prices, following the strengthening of pharmaceutical patents with the signing of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), has constituted an obstacle to the realization of that right, aggravated by the TRIPS-plus provisions included in free-trade agreements. Nevertheless, the consolidation of the generic-medicines industry has managed to compete against them, supplying affordable medicines and promoting the interests of those most in need.
Resumo:
The aim of this work is to study the relevance of using sex when pricing and calculating benefits in health insurance, in light of the enactment of Directive 2004/113/EC, which prohibits its use as discriminatory. The value of the sex variable as an indicator of consumers' risk profile in health insurance is analyzed, with emphasis on obstetric expenditure, on the premiums of women of childbearing age, and on the use of reasonable alternative risk-determining factors. The effects and consequences of Directive 2004/113/EC and the Test-Achats ruling on the insurance sector are also examined, as well as their specific impact on the pricing and underwriting of health insurance in Spain, analyzing the evolution of the sector during the period in which these instruments were implemented.