847 results for "Interconnected microgrids"
Abstract:
An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers deal with this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded. Cost also depends on pumping expenses, which are mostly due to the price of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations focus mainly on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2, and SPEA2. Three architectures named MOTA/D, NSTA, and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
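The three objectives above are compared via Pareto dominance, the relation that both the quality indicators and architectures such as MOEA/D, NSGA2, and SPEA2 build on. A minimal sketch with hypothetical (delivery time, interface loss, energy cost) vectors, not the thesis's code:

```python
def dominates(a, b):
    """True if vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (time, interface_loss, energy_cost) objective vectors
front = pareto_front([(10, 5, 3), (8, 6, 4), (12, 4, 2), (9, 7, 5)])
# (9, 7, 5) is dominated by (8, 6, 4); the other three are mutually non-dominated
```

Pareto-compliant quality indicators then score such fronts without contradicting this dominance relation.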
Abstract:
The stabilization of energy supply in Brazil has been a challenge for the operation of the National Interconnected System in the face of hydrological and climatic variations. Thermoelectric plants have been used as an emergency source during periods of water scarcity. The use of fossil fuels, however, has raised the cost of electricity. On the other hand, offshore wind energy has gained importance in the international context and is competitive enough to become a possibility for future generation in Brazil. In this scenario, the main goal of this thesis was to investigate the magnitude and distribution of offshore wind resources and to verify the possibilities of complementarity with hydropower. Precipitation data series from the Climatic Research Unit (CRU) and Blended Sea Winds data from the National Climatic Data Center (NCDC/NOAA) were used. According to statistical criteria, three types of complementarity were found in the Brazilian territory: hydro × hydro, wind × wind, and hydro × wind. A significant complementarity between wind and hydro resources was noted (r = -0.65), mainly between the hydrographic basins of the southeast and central regions and the winds of Northeastern Brazil. To refine the extrapolation of winds over the ocean, a method based on Monin-Obukhov theory was used to model the stability of the atmospheric boundary layer. Objectively Analyzed Air-Sea Flux (OAFLUX) datasets for heat flux, temperature, and humidity, as well as sea-level pressure data from NCEP/NCAR, were used. ETOPO1, from the National Geophysical Data Center (NGDC/NOAA), provided bathymetric data. It was found that shallow waters, between 0-20 meters, hold a resource estimated at 559 GW. The contribution of wind resources to hydroelectric reservoir operation was investigated with a simplified hybrid wind-hydraulic model and reservoir level, inflow, outflow, and turbine production data.
It was found that the hybrid system weathers drought periods by continuously saving reservoir water through wind production. Therefore, from the results obtained, it is possible to state that the good winds of the Brazilian coast can, besides diversifying the electric matrix, stabilize hydrological fluctuations, avoiding rationing and blackouts and reducing the use of thermal power plants, which raise production costs and greenhouse-gas emissions. Public policies targeted at offshore wind energy will be necessary for its full development.
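The vertical extrapolation of winds mentioned above is commonly done with the Monin-Obukhov similarity profile. A sketch under simple assumptions (a Businger-Dyer stable correction and a hypothetical open-sea roughness length z0; not the thesis's implementation):

```python
import math

def extrapolate_wind(u_ref, z_ref=10.0, z=80.0, z0=2e-4, L=None):
    """Extrapolate a wind speed measured at z_ref to height z using the
    Monin-Obukhov log profile u(z) ~ ln(z/z0) - psi_m(z/L).
    L is the Obukhov length: None means neutral stratification;
    a positive L means stable conditions (psi_m = -5 z/L, Businger-Dyer)."""
    def psi_m(height):
        if L is None:
            return 0.0
        return -5.0 * height / L  # stable-stratification correction

    numerator = math.log(z / z0) - psi_m(z)
    denominator = math.log(z_ref / z0) - psi_m(z_ref)
    return u_ref * numerator / denominator

# Neutral case: 8 m/s at 10 m extrapolated to an 80 m hub height
u_hub = extrapolate_wind(8.0)
```

Stable stratification (L > 0) increases the shear, so the same 10 m measurement yields a higher hub-height estimate than the neutral profile.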
Abstract:
The infection caused by Helicobacter pylori (H. pylori) is associated with gastroduodenal inflammation and can lead to the development of gastritis, gastric or duodenal ulcers, and gastric cancer (H. pylori is a type 1 carcinogen for stomach cancer). Amoxicillin is used as first-line therapy in the treatment of H. pylori, combined with metronidazole or clarithromycin and a proton pump inhibitor. However, this scheme is not fully effective, owing to inadequate accumulation of antibiotics in gastric tissue, inadequate efficacy in the ecological niche of H. pylori, and other factors. In this context, this study aimed at obtaining and characterizing gastroretentive chitosan-amoxicillin particulate systems for use in the treatment of H. pylori infections. The particles were obtained by the coacervation/precipitation method, using sodium sulfate as precipitating and crosslinking agent, through two techniques: addition of amoxicillin during preparation in a single step, and sorption of amoxicillin onto particles previously prepared by coacervation/precipitation and spray drying. The physicochemical characterization of the particles was performed by SEM, FTIR, DSC, TG, and XRD. The in vitro release profile of amoxicillin, free and incorporated in the particles, was obtained in 0.1 N HCl (pH = 1.2). The particles show encapsulation efficiencies above 80%; a spherical shape, with particles interconnected or adhered to each other; nanometric diameters for the systems obtained by coacervation/precipitation; and fine diameters for the particles obtained by spray drying. Characterization by FTIR, DSC, and XRD showed that the drug was incorporated into the nanoparticles, dispersed in the polymeric matrix. Thermal analysis (TG and DSC) indicated that encapsulation provides greater thermal stability to the drug. Amoxicillin encapsulated in nanoparticles showed slower release compared to the free drug.
The particles showed a release profile with a faster initial stage (burst effect), reaching at 30 minutes a maximum of 35% of amoxicillin released for the system with a 1:1 drug-to-polymer ratio and 80% for the system with a 2:1 ratio. Although simple and providing high encapsulation efficiency of amoxicillin, the one-step coacervation/precipitation process using sodium sulfate as precipitant/cross-linker must be optimized in order to adjust the release kinetics to the intended application.
Abstract:
Religious manifestations have long been regarded as important elements of various social milieus. In this sense, we are interested in understanding the strong denominational plurality and the emergence of new movements and forms of Christianity in Brazil, which reflects what happens in the rest of the globalized world. This interest rests on the assumption that religiosity and social milieus are built in a mutual and interconnected way, which allows us to understand social cosmology as a fertile and privileged space in which to examine the interactions of the binomial religion and society. Protestantism is divided into three main streams: Historical, Pentecostal, and Neo-Pentecostal. Each current of Protestantism emerged by adapting to the social cosmology of its historical age, from the Reformation to the present day, forming institutions with particular ethical and moral stances. It is therefore necessary to investigate how Historical Protestantism, Pentecostalism, and Neo-Pentecostalism were implanted in Brazil. From this, we find it worthwhile to pose the following questions: How did the processes of formation of the evangelical strands and movements in Brazil unfold? Do these movements still hold to their "classic robes," or are they hybridizing their borders? What modulations in religious transit demonstrate this probable hybridization? If there is indeed hybridization among the evangelical milieus, can we claim it is provoked solely by the logic of the "market of symbolic goods of religion" in today's world? We are currently experiencing a period of cosmological change, from the Modern to the Postmodern ("chaosmological"), the latter characterized by secularization, fracture, mutilation, and the diversity of subjectivities.
It is with this "spirit of the times" in mind that we take as the main theoretical reference of this study the Italian philosopher of postmodernity Gianni Vattimo, who characterizes postmodernity through the cosmology of "weak thought" (pensiero debole) and post-metaphysics, which favors the appearance of more plural, non-absolute institutions. To do so, he draws on the philosophies of the Germans Nietzsche and Heidegger. Finally, on this theoretical basis, we raise the hypothesis that Protestantism tends to "hybridize even more" under the influence of the postmodern social episteme, thereby giving rise to a new, as yet unnamed hybrid configuration that can move among the three major currents of world Protestantism, converging aspects of each.
Abstract:
The intervention research proposed here was based on Cultural-Historical Theory, grounded in the laws and logic of historical-dialectical materialism. We therefore tried to design a research process that involved all participants as responsible for the process. In the field of continuing teacher education, dualistic and paradoxical processes have usually been found as a result of the adopted training models, which are characterized by individualistic conceptions of human processes. The teacher-training work sought to overcome this dualism and to promote the unveiling of contradictions with regard to teaching models. As a hypothesis, we imagined that, immersed in this process, teachers would recognize such contradictions, and that this recognition would make the contradictions become the driving force of change in teaching practice, realizing the teaching-learning-development triad as the basis of praxis. Aiming to develop a process of continuing education that would contribute to teachers' professional development, we sought to answer the following research question: how and which changes of the teachers who participated in the Didactic-Formative Intervention process raised the quality of their teaching practices? In this context, the objective of the research was to develop a process of Didactic-Formative Intervention from the perspective of Cultural-Historical Theory with high school teachers, in order to theorize about the changes in teachers' pedagogical practices and apprehend the aspects that transform the essence of teaching practice. The research involved two high school teachers from a public school in Uberlândia-MG. The training meetings took place at the school through a collective study group between 2013 and 2015.
As procedures, two interconnected aspects were used: class observations and theoretical-methodological training, both for diagnosis and for process evaluation. The second aspect has both a formative and a didactic dimension (a double meaning): to form the teacher didactically and to elaborate didactic procedures. The collected data were analyzed according to the assumptions of the method, analysis by units, and processuality. As a result, the teachers showed changes in their teaching practices regarding the organization of pedagogical work and began to design their educational actions based on the students' learning and development. The presence of continuous diagnosis during classes, work with systems of concepts and their conceptual links, and problematization as a teaching method can be pointed to as meaningful changes in their praxis. Regarding the training activities that emerged from the analysis of the materials compiled and analyzed throughout the process, the following can be emphasized: forming a collective group for continuing teacher education in the school; diagnostics; development of practical activities; strengthening relationships among participants; choosing scientific material directly related to the participants' needs; and promoting conditions that allow contradictions to emerge between teachers' pedagogical practice and teaching from the perspective of Cultural-Historical Theory. This research aimed to develop and design teacher-education processes that increase the quality of teachers' lives and ways of teaching in the Brazilian public school.
Abstract:
An author who puts his or her name to an academic publication will be recognized for the contribution to the research and must also assume responsibility for it. Various arrangements can be used to name authors and signal the extent of their contribution to the research. For example, authors may be listed in decreasing order of the importance of their contributions, which allocates more credit and responsibility to the first authors (as in the health sciences), or individuals may be listed in alphabetical order, giving equal recognition to all (as is seen in some fields of the social sciences). Practices also emerge from particular disciplines or research fields (such as the notion of corresponding author, or the research supervisor being named at the end of the author list). In the health sciences, when research is multidisciplinary in nature, different norms and practices govern the distribution and order of scholarly authorship, which can give rise to disagreements, even conflicts, within research teams. Even though researchers agree that authorship should be distributed "fairly," there is no consensus on what counts as "fair" in the context of multidisciplinary research teams. In this thesis, we propose an ethical framework for the fair distribution of authorship in multidisciplinary health-science teams. We present a critique of the literature on the distribution of authorship in research. We analyze the issues that can hinder or complicate a fair distribution of authorship, such as power imbalances, conflicts of interest, and the diversity of disciplinary cultures.
We find that international norms are too vague and consequently do not help researchers manage the complexity of the issues surrounding the distribution of authorship. This limitation becomes particularly important in global health, where researchers from developed countries collaborate with researchers from developing countries. To create a flexible conceptual framework able to adapt to the diversity of types of multidisciplinary research, we propose an approach influenced by T.M. Scanlon's contractualism. This approach uses mutual respect and the normative force of reason as its foundation to justify the application of ethical principles. We thus developed four principles for the fair distribution of authorship in research: merit, fair recognition, transparency, and collegiality. Finally, we propose a process that integrates a contribution-based taxonomy to delineate each person's roles in the research project. Contributions can then be better compared and evaluated to determine the order of authorship in multidisciplinary health-science research teams.
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from the traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges and effective solutions have been developed to test both dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high-density I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
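For illustration, the pattern-generation and response-compaction pair at the heart of such a BIST scheme can be sketched as a generic Fibonacci LFSR and MISR (hypothetical tap positions and data, not the dissertation's design):

```python
def lfsr_step(state, taps, width=8):
    """Advance a Fibonacci LFSR one step: XOR the tapped bits into the
    feedback and shift it in. Used as a pseudo-random pattern generator."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1
    return ((state << 1) | feedback) & ((1 << width) - 1)

def misr_step(state, data_in, taps, width=8):
    """MISR: same shift/feedback structure, but parallel test-response
    bits are XORed into the state, compacting responses into a signature."""
    return lfsr_step(state, taps, width) ^ (data_in & ((1 << width) - 1))

# Compact a hypothetical 3-word response stream into a signature
sig = 0x01
for word in (0x3A, 0x5C, 0xF0):
    sig = misr_step(sig, word, taps=(7, 5, 4, 3))
```

Any defect that alters one or more response words changes the final signature with high probability, which is why the area cost of this compaction hardware is so low relative to storing full responses.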
To accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of pins available at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power-supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
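The constraint that rail-sharing neighbors must not receive the same stagger value is structurally a graph-coloring problem. A minimal greedy sketch over a hypothetical block adjacency (illustrative only, not the dissertation's mathematical model or heuristic):

```python
def assign_staggers(blocks, adjacency, n_staggers):
    """Greedily assign shift-clock stagger values so that no two
    adjacent (power-rail-sharing) blocks receive the same value."""
    assignment = {}
    for block in blocks:
        used = {assignment[n] for n in adjacency.get(block, ()) if n in assignment}
        for s in range(n_staggers):
            if s not in used:
                assignment[block] = s
                break
        else:
            raise ValueError(f"need more than {n_staggers} stagger values")
    return assignment

# Hypothetical floorplan: blocks A, B, C all share power-rail boundaries
adjacency = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
staggers = assign_staggers(["A", "B", "C"], adjacency, n_staggers=4)
```

A greedy pass like this gives a feasible assignment quickly; an exact model can additionally weight conflicts by shared boundary length, as the abstract describes.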
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
I explore and analyze the problem of finding socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire-sale externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show that the optimal capital requirements can be found by solving a stochastic mixed-integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be executed efficiently. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
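Default contagion through direct exposures of this kind is commonly computed as a clearing-payment fixed point in the style of Eisenberg and Noe. A minimal sketch of the fictitious-default iteration (illustrative only, not the author's formulation, and without default costs):

```python
def clearing_payments(liab, external, tol=1e-10, max_iter=1000):
    """Fictitious-default iteration for clearing payments in an interbank
    network. liab[i][j] is bank i's nominal obligation to bank j;
    external[i] is bank i's outside assets. Each bank pays the lesser of
    what it owes and what it has (outside assets plus payments received)."""
    n = len(liab)
    pbar = [sum(row) for row in liab]                       # total obligations
    pi = [[liab[i][j] / pbar[i] if pbar[i] else 0.0
           for j in range(n)] for i in range(n)]            # relative liabilities
    p = pbar[:]                                             # start at full payment
    for _ in range(max_iter):
        inflow = [external[i] + sum(pi[j][i] * p[j] for j in range(n))
                  for i in range(n)]
        new_p = [min(pbar[i], inflow[i]) for i in range(n)]
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p
    return p

# Two banks: bank 0 owes 10 to bank 1, bank 1 owes 5 to bank 0
pays = clearing_payments([[0.0, 10.0], [5.0, 0.0]], external=[2.0, 8.0])
```

Here bank 0 is short (2 outside plus 5 received is less than its 10 owed), so it pays 7 and defaults partially, while bank 1 pays in full; capital requirements change `external` and hence how far such shortfalls propagate.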
Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
Abstract:
The central dogma of molecular biology relies on the correct Watson-Crick (WC) geometry of canonical deoxyribonucleic acid (DNA) dG•dC and dA•dT base pairs to replicate and transcribe genetic information with speed and an astonishing level of fidelity. In addition, the Watson-Crick geometry of canonical ribonucleic acid (RNA) rG•rC and rA•rU base pairs is highly conserved to ensure that proteins are translated with high fidelity. However, numerous other potential nucleobase tautomeric and ionic configurations are possible that can give rise to entirely new pairing modes between the nucleotide bases. Very early on, James Watson and Francis Crick recognized their importance and in 1953 postulated that if bases adopted one of their less energetically disfavored tautomeric forms (and later ionic forms) during replication it could lead to the formation of a mismatch with a Watson-Crick-like geometry and could give rise to “natural mutations.”
Since then, numerous studies have provided evidence in support of this hypothesis and have expanded upon it: computational studies have addressed the energetic feasibility of different nucleobase tautomeric and ionic forms in silico, and crystallographic studies have trapped different mismatches with WC-like geometries in polymerase or ribosome active sites. However, no direct evidence has been given for (i) the existence of these WC-like mismatches in canonical DNA duplexes, RNA duplexes, or non-coding RNAs, or (ii) which, if any, tautomeric or ionic form stabilizes the WC-like geometry. This thesis utilizes nuclear magnetic resonance (NMR) spectroscopy and rotating-frame relaxation dispersion (R1ρ RD) in combination with density functional theory (DFT), biochemical assays, and targeted chemical perturbations to show that (i) dG•dT mismatches in DNA duplexes, as well as rG•rU mismatches in RNA duplexes and non-coding RNAs, transiently adopt a WC-like geometry that is stabilized by (ii) an interconnected network of rapidly interconverting rare tautomers and anionic bases. These results support Watson and Crick's tautomer hypothesis, additionally support subsequent hypotheses invoking anionic mismatches, and ultimately tie them together. This dissertation shows that a common mismatch can adopt a Watson-Crick-like geometry globally, in both DNA and RNA, and that this geometry is stabilized by a kinetically linked network of rare tautomeric and anionic bases. The studies herein also provide compelling evidence for their involvement in spontaneous replication and translation errors.
Abstract:
To reveal the theories and practices that linked education to development within the cities of Boston and Buenos Aires, and in turn to the development of US and Argentine nationalism, "Cosmopolitan Imperialism" centers on two education reformers, Horace Mann (1796-1859) and Domingo Faustino Sarmiento (1811-1888). Mann and Sarmiento formed part of a supra-national community in which liberal intellectual elites created a republic of letters or, perhaps better said, a republic of schools. As different versions of education branched out from a common Atlantic origin during the nineteenth century, Mann and Sarmiento searched for the ideas that best fit their national projects, local projects that started in the cities and moved to the interior of the country. In Boston and Buenos Aires, modern nationalism intertwined with imperial projects. This dissertation thus analyzes nationalism and reform in the nineteenth century as an imperial project led by cosmopolitan intellectual elites. While we might expect Mann and Sarmiento's ideas on education to be centered on their national experiences, looking to Europe for inspiration, this dissertation shows that the opposite was true. Educational ideas developed within an interconnected network and traveled along the North-South axis connecting Boston with Buenos Aires. This framework moves the focus from the interchange of ideas between America and Europe and places it within the American continent. At the same time, it allows us to consider Latin America and the US as both creators and recipients of educational ideas. There is a traditional way of talking about nationalism and reform in the nineteenth century, especially in terms of education and educational policies. It is common to imagine that in the US, and even more certainly in Latin America, educated elites looked to the so-called West for inspiration.
The argument is that they ended up adapting foreign models to their local and internal contexts. This dissertation challenges that idea and shows that different versions of education developed from a shared Atlantic milieu in which reformers in certain cities saw themselves as part of the same cosmopolitan empires.
Abstract:
This dissertation consists of three essays on different aspects of water management. The first essay focuses on the sustainability of freshwater use by introducing the notion that altruistic parents bequeath economic assets to their offspring. Constructing a two-period, overlapping-generations model, an optimal ratio of consumption and pollution for the old and young generations in each period is determined. Optimal levels of water consumption and pollution change according to parameters such as the degree of altruism, the natural recharge rate, and population growth. The second essay concerns water sharing between countries in the case of trans-boundary river basins. The paper recognizes that side payments have failed to forge water-sharing agreements in the international community and that downstream countries have weak bargaining power. An interconnected-game approach is developed by linking the water allocation issue with non-water issues such as trade or border-security problems, creating symmetry between countries in bargaining power. An interconnected game forces the two countries to at least partially cooperate under some circumstances. The third essay introduces the concept of virtual water (VW) into a traditional international trade model in order to estimate water savings for a water-scarce country. A two-country, two-product, two-factor trade model is developed, which includes not only consumer and producer surplus but also the environmental externality of water use. The model shows that VW trade saves water and increases global and local welfare. This study should help policy makers design appropriate subsidy or tax policies to promote water savings, especially in water-scarce countries.
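The linkage logic of the second essay can be illustrated with a toy example: hoarding is dominant for the upstream country in the water game alone, but sharing becomes its best response once the downstream country conditions trade openness on water sharing. All payoffs below are hypothetical, chosen only to exhibit the mechanism:

```python
# Stage payoffs as (upstream, downstream) pairs
water = {"share": (1, 3), "hoard": (2, 0)}   # upstream chooses
trade = {"open": (3, 1), "close": (0, 2)}    # downstream chooses

# Unlinked: upstream picks its water action in isolation
best_unlinked = max(water, key=lambda a: water[a][0])   # -> "hoard" (2 > 1)

# Linked: downstream keeps trade open only if upstream shares
def linked_payoff(action):
    """Upstream's total payoff under the downstream linkage strategy."""
    trade_action = "open" if action == "share" else "close"
    return water[action][0] + trade[trade_action][0]

best_linked = max(water, key=linked_payoff)             # -> "share" (1+3 > 2+0)
```

Linking the games turns the downstream country's trade leverage into bargaining power over water, which is the symmetry-creating effect the essay describes.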
Abstract:
The compositions of natural glasses and phenocrysts in basalts from Deep Sea Drilling Project Sites 501, 504, and 505, near the Costa Rica Rift, constitute evidence for the existence of a periodically replenished axial magma chamber that repeatedly erupted lavas of remarkably uniform composition. Magma compositions were affected by three general components: (1) injected magmas carrying (in decreasing order of abundance) plagioclase, olivine, and chrome-spinel phenocrysts (spinel assemblage); (2) injected magmas carrying plagioclase, clinopyroxene, and olivine phenocrysts, but no spinel (clinopyroxene assemblage); and (3) moderately evolved hybrids in the magma chamber itself. The compositions of the injected phenocrysts and minerals in glomerocrysts are as follows: plagioclase, An85-94; olivine, Fo87-89; clinopyroxene, high-Cr2O3 (0.7-1.1%) endiopside (Wo42En51Fs7); and aluminous chromian spinel (Cr/(Cr + Al) = 0.3). These minerals resemble those thought to occur in upper mantle sources (9 kbars and less) of ocean-ridge basalts and to crystallize in magmas near those sources. In the magma chamber, more sodic plagioclase (An79-85), less magnesian olivine (Fo81-86), and low-Cr2O3 (0.1-0.4%) clinopyroxene formed rims on these crystals, grew as other phenocrysts, and formed cumulus segregations on the walls and floors of the magma chamber. In the spinel-assemblage magmas, magnesiochromite (Cr/(Cr + Al) = 0.4-0.5) also formed. Some cumulus segregations were later entrained in lavas as xenoliths. The glass compositions define 16 internally homogeneous eruptive units, 13 of which are in stratigraphic order in a single hole, Hole 504B, which was drilled 561.5 meters into the ocean crust. These units are defined as differing from each other by more than analytical uncertainty in one or more oxides.
However, many of the glass groups in Hole 504B show virtually no differences in TiO2 contents, Mg/(Mg + Fe2+), or normative An/(An + Ab), all of which are sensitive indicators of crystallization differentiation. The differences are so small that they are only apparent in the glass compositions; they are almost completely obscured in whole-rock samples by the presence of phenocrysts and the effects of alteration. Moreover, several of the glass units at different depths in Hole 504B are compositionally identical, with all oxides falling within the range of analytical uncertainty and only small variations in the rest of the suite. The repetition of identical chemical types requires (1) very regular injection of magmas into the magma chamber, (2) extreme similarity of injected magmas, and (3) displacement of very nearly the same proportion of the magmas in the chamber at each injection. Numerical modeling and thermal considerations have led some workers to propose the existence of such conditions at certain types of spreading centers, but the lava and glass compositions at Hole 504B represent the first direct evidence revealed by drilling of the existence of a compositionally nearly steady-state magma chamber, and this chapter examines the processes acting in it in some detail. The glass groups that are most similar are from clinopyroxene-assemblage lavas, which have a range of Mg/(Mg + Fe2+) of 0.59 to 0.65. Spinel-assemblage basalts are less evolved, with Mg/(Mg + Fe2+) of 0.65 to 0.69, but both types have nearly identical normative An/(An + Ab) (0.65-0.66). However, the two lava types contain megacrysts (olivine, plagioclase, clinopyroxene) that crystallized from melts with Mg/(Mg + Fe2+) values of 0.70 to 0.72.
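The Mg/(Mg + Fe2+) values quoted here are molar ratios computed from oxide weight percents using the molar masses of MgO and FeO. A minimal sketch of that standard conversion follows; the oxide values in the usage example are hypothetical, and total iron is assumed to be Fe2+ (no ferric correction), which is a simplification.

```python
# Molar masses of the oxides (g/mol)
M_MGO = 40.304
M_FEO = 71.844

def mg_number(mgo_wt_pct, feo_wt_pct):
    """Molar Mg/(Mg + Fe2+) from MgO and FeO weight percents.

    Assumes all iron is reported as FeO (Fe2+); a ferric-iron
    correction would lower the FeO term.
    """
    mg_mol = mgo_wt_pct / M_MGO
    fe_mol = feo_wt_pct / M_FEO
    return mg_mol / (mg_mol + fe_mol)
```

A hypothetical glass with 8.0 wt% MgO and 9.0 wt% FeO gives Mg/(Mg + Fe2+) of about 0.61, within the 0.59-0.65 range reported for the clinopyroxene-assemblage lavas.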
Projection of glass compositions into ternary normative systems suggests that spinel-assemblage magmas originated deeper in the mantle than clinopyroxene-assemblage magmas, and mineral data indicate that the two types followed different fractionation paths before reaching the magma chamber. The two magma types therefore represent neither a low- nor a high-pressure fractionation sequence. Some of the spinel-assemblage magmas may have had picritic parents, but were coprecipitating all of the spinel-assemblage phenocrysts before reaching the magma chamber. Clinopyroxene-assemblage magmas did not have picritic parents, but the compositions of phenocrysts suggest that they originated at about 9 kbars, near the transition between plagioclase peridotite and spinel peridotite in the mantle. Two glass groups have higher contents of alkalis, TiO2, and P2O5 than the others, evidently as a result of the compositions of mantle sources. Eruption of these lavas implies that conduits and chambers containing magmas from dissimilar sources were not completely interconnected on the Costa Rica Rift. The data are used to draw comparisons with the East Pacific Rise and to consider the mechanisms that may have prevented the eruption of ferrobasalts at these sites.
Abstract:
This dissertation focuses on industrial policy in two developing countries: Peru and Ecuador. Informed by comparative historical analysis, it explains how the Import-Substitution Industrialization policies promoted during the 1970s by military administrations unravelled over the following 30 years under the guidance of Washington Consensus policies. Positioning political economy in time, the research objectives were two-fold: first, understanding long-term policy reform patterns, including the variables that conditioned cyclical versus path-dependent dynamics of change; and second, investigating the direction and leverage of state institutions supporting the manufacturing sector at the dawn, peak, and consolidation of neoliberal discourse in both countries. Three interconnected causal mechanisms explain the divergence of trajectories: institutional legacies, coordination among actors, and the economic distribution of power. Peru's long tradition of a minimal state contrasts with Ecuador's embedded tradition of legal protectionism dating back to the Liberal Revolution. Peru's close policy coordination among stakeholders (state technocrats and business elites) differs from Ecuador's "winner-takes-all" approach to policy-making. Peru's economic dynamism, concentrated in Lima, sharply departs from Ecuador's competing regional economic leaderships. This dissertation paid particular attention to methodology in order to understand the intersection between structure and agency in policy change. Tracing primary and secondary sources, as well as key pieces of legislation, was critical to understanding key turning points and long-term patterns of change. Open-ended interviews (N=58) with two stakeholder groups (business elites and bureaucrats) rounded out the effort to knit together the motives, discourses, and interests behind this long transition.
To synthesize this volume of data, the research built an index of policy intervention as a methodological contribution for assessing long-term patterns of policy change. These findings contribute to the current literature on state-market relations and varieties of capitalism, institutional change, and policy reform.
Abstract:
The equations governing the dynamics of rigid body systems with velocity constraints are singular at degenerate configurations in the constraint distribution. In this report, we describe the causes of singularities in the constraint distribution of interconnected rigid body systems with smooth configuration manifolds. A convention of defining primary velocity constraints in terms of orthogonal complements of one-dimensional subspaces is introduced. Using this convention, linear maps are defined and used to describe the space of allowable velocities of a rigid body. Through the definition of these maps, we present a condition for non-degeneracy of velocity constraints in terms of the one-dimensional subspaces defining the primary velocity constraints. A method for defining the constraint subspace and distribution in terms of linear maps is presented. Using these maps, the constraint distribution is shown to be singular at configurations where there is an increase in its dimension.
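The dimension increase described in the abstract can be detected numerically as a rank drop in the matrix whose rows span the directions defining the primary velocity constraints (the allowable-velocity distribution is that matrix's null space). The example below is a hypothetical illustration, not the report's formulation: two constraints on a planar velocity become linearly dependent at q = 0, where the rank drops and the constraint distribution jumps from dimension 0 to dimension 1.

```python
import numpy as np

def constraint_matrix(q):
    """Rows are the configuration-dependent directions whose orthogonal
    complements define two hypothetical primary velocity constraints
    on a planar velocity v in R^2 (each row n enforces n . v = 0)."""
    n1 = np.array([np.cos(q), np.sin(q)])  # rotates with the configuration q
    n2 = np.array([1.0, 0.0])              # fixed constraint direction
    return np.vstack([n1, n2])

def distribution_dimension(q, dof=2):
    """Dimension of the allowable-velocity distribution: dof minus the
    rank of the constraint matrix. An increase signals a singularity."""
    return dof - np.linalg.matrix_rank(constraint_matrix(q))
```

At a generic configuration the two rows are independent, the rank is 2, and no nonzero velocity is allowed; at q = 0 the rows coincide, the rank drops to 1, and the distribution gains a dimension, which is exactly the degeneracy the report characterizes.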