840 results for Optimal allocation
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Graduate Program in Agronomy (Irrigation and Drainage) - FCA
Abstract:
Recently, a growing interest in political and economic integration/disintegration issues has developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision, and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how the economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public attitudes towards the EU. Subsequently, it explores the links between this political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union. Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; it then tests the model on a panel of EU countries. What are the factors that influence public opinion support for the EU? In international relations theory, the idea that citizens' support for the EU depends on material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kaltenthaler 1996), but only on the basis of informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economy model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries.
The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods. In the simple model presented in Chapter 2, support for membership of the union is increasing in the union's average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country's average income, while increasing heterogeneity of preferences among countries points to a reduced scope of the union. We then empirically test the model with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens' attitude towards their country's membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a Europe-wide political crisis. These events show that understanding public attitudes towards the EU nowadays is not only of academic interest, but also has strong relevance for policy-making. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. Over the last few decades, the double tendency towards supranationalism and regional autonomy that has characterised some European states has taken a very interesting form in this country, because Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a new thorough constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al.
(2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislation substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exercised in the various policy areas? Is their exercise consistent with the normative recommendations from the economic literature about their optimum allocation among different levels of government? The main results show that, first, there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in legislative activity in many sectors, even where the rationale for that is, at best, very questionable, indicating that they actually share a larger number of policy tasks than economic theory suggests. It appears, therefore, that an excessive number of competences are shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This not only may lead to a slower and less efficient policy-making process, but also risks making it too complicated for citizens to understand; citizens should be able to know who is really responsible for a certain policy when they vote in national, local or European elections, or in referenda on national or European constitutional issues.
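As a stylized reduced-form summary of the comparative statics stated in this abstract (not the chapter's actual model), country i's support for membership can be written with all coefficients assumed positive:

```latex
% Stylized summary of the stated comparative statics; coefficients
% \lambda, \eta, \mu, \delta > 0 are illustrative assumptions.
S_i \;=\; \lambda\,\bar{y}_U \;+\; \eta\,c_i^{\mathrm{out}} \;-\; \mu\,y_i \;-\; \delta\,h_i
```

where \(\bar{y}_U\) is the union's average income, \(c_i^{\mathrm{out}}\) the efficiency loss country \(i\) would suffer outside the union, \(y_i\) the country's own average income, and \(h_i\) the heterogeneity of its preferences relative to the union's common policy.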
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low power technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content so as to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating the resulting luminance reduction, offsetting the perceived quality degradation with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis overview. The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
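To make the allocation/scheduling problem described in this abstract concrete, here is a toy Python sketch (not from the dissertation) of the kind of greedy list-scheduling heuristic the text contrasts with its complete, optimal methods; the task graph, durations and processor count are made up:

```python
# Greedy list scheduling of precedence-constrained tasks on identical
# processors: repeatedly pick the longest ready task and place it on the
# earliest-free processor. Illustrative only; the thesis develops exact methods.
import heapq

tasks = {"a": 2, "b": 3, "c": 2, "d": 4, "e": 1}            # task -> duration
preds = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"], "e": ["c", "d"]}
n_procs = 2

done, finish = set(), {}
procs = [(0, p) for p in range(n_procs)]                     # (free time, proc id)
heapq.heapify(procs)
while len(done) < len(tasks):
    ready = [t for t in tasks if t not in done and all(q in done for q in preds[t])]
    t = max(ready, key=tasks.get)                            # longest ready task first
    free_at, p = heapq.heappop(procs)                        # earliest-free processor
    start = max([free_at] + [finish[q] for q in preds[t]])   # respect precedences
    finish[t] = start + tasks[t]
    done.add(t)
    heapq.heappush(procs, (finish[t], p))

print(finish, "makespan:", max(finish.values()))
```

Such heuristics run fast but, as the abstract notes, give no bound on how far the computed makespan is from the optimum.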
Abstract:
With proper application of Best Management Practices (BMPs), the impact of sediment on water bodies can be minimized. However, finding the optimal allocation of BMPs can be difficult, since there are numerous possible options. Economics also plays an important role in BMP affordability and, therefore, in the number of BMPs that can be placed in a given budget year. In this study, two methodologies are presented to determine the optimal cost-effective BMP allocation, by coupling a watershed-level model, the Soil and Water Assessment Tool (SWAT), with two different methods: targeting and a multi-objective genetic algorithm (Non-dominated Sorting Genetic Algorithm II, NSGA-II). For demonstration, these two methodologies were applied to an agriculture-dominated watershed located in Lower Michigan to find the optimal allocation of filter strips and grassed waterways. For targeting, three different criteria were investigated for sediment yield minimization, in the process of which it was found that grassed waterways near the watershed outlet reduced the watershed outlet sediment yield the most under the study conditions; cost minimization was also included as a second objective during the cost-effective BMP allocation selection. NSGA-II was used to find the optimal BMP allocation for both sediment yield reduction and cost minimization. By comparing the results and computational time of both methodologies, targeting was determined to be the better method for finding the optimal cost-effective BMP allocation under the study conditions, since it provided more than 13 times as many solutions with better fitness for the objective functions while using less than one eighth of the SWAT computational time required by NSGA-II with 150 generations.
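As an illustration of the NSGA-II side of such a comparison, here is a minimal sketch using the pymoo library (assuming pymoo ≥ 0.6); `simulate_sediment_yield` and the cost vector are hypothetical stand-ins for SWAT runs and BMP cost data, not the study's actual model:

```python
# Bi-objective BMP placement: minimize (sediment yield, total cost) over a
# binary vector of candidate sites, solved with NSGA-II. Illustrative sketch.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.operators.sampling.rnd import BinaryRandomSampling
from pymoo.operators.crossover.pntx import TwoPointCrossover
from pymoo.operators.mutation.bitflip import BitflipMutation
from pymoo.optimize import minimize

N_SITES = 40                                                  # candidate BMP sites
COST = np.random.default_rng(0).uniform(500, 5000, N_SITES)   # hypothetical $/site

def simulate_sediment_yield(placement):
    """Placeholder for a SWAT run: outlet sediment yield (t/yr) for a placement."""
    return 1000.0 - 12.0 * placement.sum()                    # toy linear response

class BMPAllocation(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=N_SITES, n_obj=2, xl=0, xu=1)

    def _evaluate(self, x, out, *args, **kwargs):
        x = x.astype(bool)
        out["F"] = [simulate_sediment_yield(x), COST[x].sum()]  # minimize both

res = minimize(
    BMPAllocation(),
    NSGA2(pop_size=60,
          sampling=BinaryRandomSampling(),
          crossover=TwoPointCrossover(),
          mutation=BitflipMutation()),
    ("n_gen", 150),                                           # 150 generations, as above
    seed=1, verbose=False)
print(res.F)                                                  # Pareto front: (yield, cost)
```

In the real study each evaluation would invoke a full SWAT simulation, which is exactly why the genetic algorithm's computational cost matters.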
Abstract:
Agents with single-peaked preferences share a resource coming from different suppliers; each agent is connected to only a subset of suppliers. Examples include workload balancing, sharing earmarked funds, and rationing utilities after a storm. Unlike in the one-supplier model, in a Pareto optimal allocation agents who get more than their peak from underdemanded suppliers coexist with agents who get less from overdemanded suppliers. Our Egalitarian solution is the Lorenz dominant Pareto optimal allocation. It treats agents with equal demands as equally as the connectivity constraints allow. Together, Strategyproofness, Pareto Optimality, and Equal Treatment of Equals characterize our solution.
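For reference, the one-supplier benchmark this abstract contrasts with is the classic uniform rule; a minimal Python sketch for the overdemanded case (illustrative only, not the paper's multi-supplier algorithm):

```python
# Uniform rule for a single overdemanded supplier: each agent with peak p_i
# receives min(p_i, lam), with the common cap lam chosen (here by bisection)
# so that the allocations exactly exhaust the supply. Assumes sum(peaks) >= supply.
def uniform_rule(peaks, supply):
    lo, hi = 0.0, max(peaks)
    for _ in range(100):                       # bisection on the common cap
        lam = (lo + hi) / 2
        if sum(min(p, lam) for p in peaks) < supply:
            lo = lam
        else:
            hi = lam
    return [min(p, lam) for p in peaks]

print(uniform_rule([3.0, 1.0, 6.0], 6.0))      # -> [2.5, 1.0, 2.5]
```

The rule caps large demands at a common level while fully serving small ones, which is the equal-treatment idea the paper's Egalitarian solution extends to connectivity constraints.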
Abstract:
We propose a nonparametric model for global cost minimization as a framework for the optimal allocation of a firm's output target across multiple locations, taking account of differences in input prices and technologies across locations. This should be useful for firms planning production sites within a country and for foreign direct investment decisions by multinational firms. Two illustrative examples are included. The first considers the production location decision of a manufacturing firm across a number of adjacent US states. The second considers the optimal allocation of US and Canadian automobile manufacturers across the two countries.
Abstract:
A Monte Carlo simulation study was conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), the Randomized Pólya Urn (RPU), the Birth and Death Urn with Immigration (BDUI), and the Drop-the-Loser Urn (DL). Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics in SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios. However, the ORR allocation has better power and a lower type I error rate when the log of the relative risk is the test statistic. The number of expected failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log likelihood ratio test statistic is robust against changes in the adaptive randomization procedure.
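As a small illustration of one of those targets: in the standard two-arm binary-response setting, the RSIHR rule is usually written as allocating patients to treatment A in proportion √p_A/(√p_A + √p_B), which minimizes expected failures for a fixed variance of the simple difference. A one-function sketch under that assumption:

```python
# RSIHR optimal allocation target for two treatments with success
# probabilities p_a and p_b (standard two-arm binary-response form).
from math import sqrt

def rsihr_target(p_a: float, p_b: float) -> float:
    """Target proportion of patients assigned to treatment A."""
    return sqrt(p_a) / (sqrt(p_a) + sqrt(p_b))

print(rsihr_target(0.7, 0.5))   # ~0.542: skews assignment toward the better arm
```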
Abstract:
Conventional designs of animal bioassays allocate the same number of animals to control and dose groups to explore the spontaneous and induced tumor incidence rates, respectively. The purposes of such bioassays are (a) to determine whether or not the substance exhibits carcinogenic properties, and (b) if so, to estimate the human response at relatively low doses. In this study, it was found that the optimal allocation to the experimental groups, which in some sense minimizes the error of the estimated response for low-dose extrapolation, is associated with the dose level and tumor risk. The number of dose levels was investigated at an affordable experimental cost. The pattern of administered doses, 1 MTD, 1/2 MTD, 1/4 MTD, etc., plus control, gives the most reasonable arrangement for low-dose extrapolation. An arrangement of five dose groups may make the highest dose trivial; a four-dose design can circumvent this problem and also retains one degree of freedom for testing the goodness-of-fit of the response model. An example using the data on liver tumors induced in mice in a lifetime study of feeding dieldrin (Walker et al., 1973) is implemented with the methodology, and the results are compared with conclusions drawn from other studies.
Abstract:
Ubiquitous sensor network deployments, such as those found in Smart city and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones with the greater economic and environmental impact, due to their very high power consumption, yet this problem has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. Although non-optimal, these allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
Abstract:
Scheduling optimization is concerned with the optimal allocation of events to time slots. In this paper, we look at one particular example of a scheduling problem: the 2015 Joint Statistical Meetings. We want to assign sessions to time slots so as to reduce scheduling conflicts among sessions on similar topics. Chapter 1 briefly discusses the motivation for this example as well as the constraints and the optimality criterion. Chapter 2 proposes the use of Latent Dirichlet Allocation (LDA) to identify the topic proportions in each session and discusses the fitting of the model. Chapter 3 translates these ideas into a mathematical formulation and introduces a greedy algorithm to minimize conflicts. Chapter 4 demonstrates the improvement in the scheduling achieved with this method.
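A hedged sketch of how such a greedy assignment could look (random Dirichlet vectors stand in for the fitted LDA topic proportions, and inner-product similarity is one plausible conflict measure, not necessarily the paper's):

```python
# Greedy slot assignment: each session goes to the time slot where its added
# topic-similarity conflict with already-placed sessions is smallest.
import numpy as np

rng = np.random.default_rng(42)
n_sessions, n_topics, n_slots = 30, 8, 6
theta = rng.dirichlet(np.ones(n_topics), size=n_sessions)   # session-topic proportions

def conflict(i, j):
    return float(theta[i] @ theta[j])                       # topic-overlap similarity

slots = [[] for _ in range(n_slots)]
for s in np.argsort(-theta.max(axis=1)):                    # most "peaked" sessions first
    added = [sum(conflict(s, t) for t in slot) for slot in slots]
    slots[int(np.argmin(added))].append(int(s))

total = sum(conflict(a, b) for slot in slots
            for i, a in enumerate(slot) for b in slot[i + 1:])
print(f"total within-slot conflict: {total:.3f}")
```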
Abstract:
Effects of CO2 concentration on the elemental composition of the coccolithophore Emiliania huxleyi were studied in phosphorus-limited, continuous cultures that were acclimated to experimental conditions for 30 d prior to the first sampling. We determined phytoplankton and bacterial cell numbers, nutrients, particulate components such as organic carbon (POC), inorganic carbon (PIC), nitrogen (PN), organic phosphorus (POP) and transparent exopolymer particles (TEP), as well as dissolved organic carbon (DOC) and nitrogen (DON), in addition to carbonate system parameters, at CO2 levels of 180, 380 and 750 µatm. No significant difference between treatments was observed for any of the measured variables during repeated sampling over a 14 d period. We considered several factors that might lead to these results, i.e. light, nutrients, carbon overconsumption, and transient versus steady-state growth. We suggest that the absence of a clear CO2 effect during this study does not necessarily imply the absence of an effect in nature. Instead, the sensitivity of the cell towards environmental stressors such as CO2 may vary depending on whether growth conditions are transient or sufficiently stable to allow for optimal allocation of energy and resources. We tested this idea on previously published data sets in which PIC and POC, divided by the corresponding cell abundance of E. huxleyi, were available at various pCO2 levels and growth rates.
Abstract:
This paper examines the effects of higher-order risk attitudes and statistical moments on the optimal allocation of risky assets within the standard portfolio choice model. We derive expressions for the optimal proportion of wealth invested in the risky asset and show that they are functions of the third- and fourth-order moments of portfolio returns as well as of the investor's risk preferences of prudence and temperance. We illustrate the relative importance that the introduction of these higher-order effects has on the decisions of expected utility maximizers using data for the US.
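To see where the higher-order moments enter, consider a generic small-risk expansion (a standard derivation, not necessarily the paper's exact expression). With initial wealth \(w_0\), excess return \(\tilde{x}\) and share \(\alpha\) in the risky asset, expanding the first-order condition \(\mathbb{E}[u'(w_0 + \alpha\tilde{x})\,\tilde{x}] = 0\) to fourth order around \(\alpha = 0\) gives:

```latex
% Fourth-order Taylor expansion of the portfolio first-order condition.
u'(w_0)\,\mathbb{E}[\tilde{x}]
+ \alpha\,u''(w_0)\,\mathbb{E}[\tilde{x}^{2}]
+ \frac{\alpha^{2}}{2}\,u'''(w_0)\,\mathbb{E}[\tilde{x}^{3}]
+ \frac{\alpha^{3}}{6}\,u''''(w_0)\,\mathbb{E}[\tilde{x}^{4}] \;=\; 0
```

The solution for \(\alpha\) thus involves the second, third and fourth moments of returns, with the skewness and kurtosis terms weighted by \(u'''\) and \(u''''\), i.e. by prudence \((-u'''/u'')\) and temperance \((-u''''/u''')\).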
Abstract:
The Battery Energy Storage System (BESS) offers formidable advantages in the production, transmission, distribution and consumption of electric energy. This technology is notably considered by several operators around the world as a new device for injecting large amounts of renewable energy on the one hand and, on the other, as an essential component of large power grids. Moreover, considerable benefits can be associated with the deployment of BESS technology both in smart grids and for reducing greenhouse gas emissions, reducing marginal losses, supplying certain consumers with an emergency energy source, improving energy management, and increasing the energy efficiency of the grid. This thesis comprises three stages: Stage 1 concerns the use of BESS to reduce electrical losses; Stage 2 uses BESS as a spinning reserve element to mitigate grid vulnerability; and Stage 3 introduces a new method for damping frequency oscillations through reactive power modulation, and the use of BESS to provide the primary frequency reserve. The first stage, on the use of BESS for loss reduction, is itself subdivided into two sub-stages, the first devoted to optimal allocation and the second to optimal use. In the first sub-stage, the NSGA-II genetic algorithm (Non-dominated Sorting Genetic Algorithm II) was programmed on CASIR, IREQ's supercomputer, as a multi-objective evolutionary algorithm to extract a set of solutions for the optimal sizing and adequate placement of multiple BESS units, minimizing power losses while treating the total installed power capacity of the BESS units as an objective function at the same time. The first sub-stage gives a satisfactory answer to the allocation problem and also resolves the scheduling question for the Québec interconnection. To achieve the objective of the second sub-stage, a number of solutions were retained and implemented over a one-year interval, taking into account the parameters (time, capacity, efficiency, power factor) associated with the BESS charge and discharge cycles, with the reduction of marginal losses and energy efficiency as the main objectives. In the second stage, a new vulnerability index, well suited to modern grids equipped with BESS, was introduced, formalized and studied. The NSGA-II genetic algorithm was run again, with minimization of the proposed vulnerability index and energy efficiency as the main objectives. The results obtained show that the use of BESS can, in some cases, prevent major grid outages. The third stage presents a new concept of adding virtual inertia to power grids through reactive power modulation, followed by the use of BESS as a primary frequency reserve.
Finally, a generic BESS model, associated with the Québec interconnection, was proposed in a MATLAB environment. Simulation results confirm that the active and reactive power of the BESS system can be used for frequency regulation.
Abstract:
The planet's energy consumption is currently based on fossil fuels, which are responsible for adverse effects on the environment. Renewables offer solutions to this scenario, but must face issues related to power supply capacity. Offshore wind energy is emerging as a promising alternative: wind speed and stability are greater over the oceans, but their variability may cause inconvenient fluctuations in electric power generation. To reduce this, a combination of geographically distributed wind farms has been proposed: the greater the distance between them, the lower the correlation between their wind speeds, increasing the likelihood that together they yield a more stable power system with smaller fluctuations in generation. The efficient use of the production capacity of the wind parks, however, depends on their distribution across marine environments. The objective of this research was to analyze the optimal allocation of offshore wind farms on the east coast of the US using Modern Portfolio Theory. Modern Portfolio Theory was applied so that the process of building offshore wind energy portfolios accounts for the intermittency of the wind, through calculations of the return and risk of wind farm production. The research was conducted with 25,934 observations of energy produced by 11 hypothetical offshore wind farms, each simulated as a single ocean turbine with a capacity of 5 MW. The data have hourly time resolution and cover the period from January 1, 1998 to December 31, 2002. Using Matlab, six minimum-variance portfolios were calculated, each for a distinct period of time. Given the inequality of wind variability over time, four rebalancing strategies were set up to evaluate the performance of the related portfolios, which made it possible to identify the strategy most beneficial to the stability of offshore wind energy production. The results showed that wind energy production for 1998, 1999, 2000 and 2001 should be weighted by the portfolio weights calculated for the same periods, respectively; energy data for 2002 should use the weights derived from the portfolio calculated for the previous period; and, finally, wind energy production over the whole 1998-2002 period should be weighted by 1/11. It follows that the portfolios found failed to show reduced levels of variability when compared to the individual production of the hypothetical offshore wind farms.
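A small sketch (in Python rather than Matlab, and not the thesis code) of the minimum-variance step described above: given hourly production of the 11 farms, estimate the covariance matrix and solve min wᵀSw subject to Σw = 1, whose closed form is w ∝ S⁻¹1. The gamma-distributed production series is a made-up stand-in for the observed data:

```python
# Minimum-variance portfolio of wind farm outputs, compared against the
# equally weighted 1/11 benchmark mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(7)
hours, farms = 8760, 11
prod = rng.gamma(2.0, 1.0, (hours, farms))     # stand-in for hourly MW output

S = np.cov(prod, rowvar=False)                 # covariance of farm outputs
ones = np.ones(farms)
w = np.linalg.solve(S, ones)                   # w proportional to S^{-1} 1
w /= w.sum()                                   # normalize so weights sum to 1

print("minimum-variance weights:", np.round(w, 3))
print("portfolio variance:", float(w @ S @ w))
eq = ones / farms                              # the 1/11 benchmark
print("1/11 benchmark variance:", float(eq @ S @ eq))
```

With real data, the risk reduction relative to the 1/11 benchmark depends on how weakly correlated the geographically separated farms are, which is exactly the diversification effect the thesis evaluates.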