980 results for Dynamic code generation
Abstract:
Predictions about electric energy needs, based on current electric energy models, forecast that global energy consumption on Earth in 2050 will be double present rates. Using distributed procedures for control and integration, the expected needs can be halved; therefore, the implementation of Smart Grids is necessary. Interaction between final consumers and utilities is a key factor of future Smart Grids, and it aims to achieve efficient and responsible energy consumption. Energy Residential Gateways (ERG) are new in-building devices that will govern the communication between user and utility and will control electric loads. Utilities will offer new services empowering residential customers to lower their electric bills. Some of these services are Smart Metering, Demand Response and Dynamic Pricing. This paper presents a practical development of an ERG for residential buildings.
Abstract:
Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate, and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass of an abstract interpretation-based checker. A main challenge is to reduce the size of certificates as much as possible while not increasing checking time. In this paper, we first introduce the notion of reduced certificate, which characterizes the subset of the abstraction that a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we then instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker.
Abstract:
The main objective of this paper is the development and application of multivariate time series models for forecasting aggregated wind power production in a country or region. Nowadays, in Spain, Denmark and Germany there is an increasing penetration of this kind of renewable energy, partly to reduce energy dependence on the exterior, but always linked with the increase and uncertainty affecting the prices of fossil fuels. The availability of accurate predictions of wind power generation is crucial both for the System Operator and for all the agents of the Market. However, the vast majority of works rarely consider forecasting horizons longer than 48 hours, although these are of interest for system planning and operation. In this paper we use Dynamic Factor Analysis, conveniently adapted and modified, to reach our aim: the computation of accurate forecasts for the aggregated wind power production in a country for a forecasting horizon as long as possible, particularly up to 60 days (2 months). We illustrate this methodology and the results obtained for real data in the leading country in wind power production: Denmark.
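The core idea of a dynamic factor approach, extract a small number of common factors driving all regional series, model the factors' dynamics, and map forecasts back to the observed series, can be sketched in a few lines. This is only a minimal illustration on synthetic data, not the authors' method: the one-factor structure, the AR(1) factor dynamics, and all numbers are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic hourly wind-power series for 4 regions driven by one common AR(1) factor
T, n = 500, 4
factor = np.zeros(T)
for t in range(1, T):
    factor[t] = 0.95 * factor[t - 1] + rng.normal()
loadings = np.array([1.0, 0.8, 1.2, 0.6])
Y = factor[:, None] * loadings + 0.3 * rng.normal(size=(T, n))

# 1) Estimate the common factor as the first principal component
Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
f_hat = Yc @ Vt[0]

# 2) Fit an AR(1) to the estimated factor and iterate it forward
phi = (f_hat[1:] @ f_hat[:-1]) / (f_hat[:-1] @ f_hat[:-1])
h = 24  # forecast horizon in steps
f_fc = [f_hat[-1]]
for _ in range(h):
    f_fc.append(phi * f_fc[-1])

# 3) Map the factor forecast back through estimated loadings;
#    aggregated production is the row sum over regions
lam = np.linalg.lstsq(f_hat[:, None], Yc, rcond=None)[0][0]
region_fc = np.array(f_fc[1:])[:, None] * lam + Y.mean(axis=0)
agg_fc = region_fc.sum(axis=1)
```

For a 60-day horizon, as in the paper, the same factor-forecast step would simply be iterated further, with the quality of the long-horizon forecast resting on how persistent the estimated factor dynamics are.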
Abstract:
Based on recent high-resolution laboratory experiments on propagating shear rupture, the constitutive law that governs shear rupture processes is discussed in view of physical principles and constraints, and a specific constitutive law is proposed for shear rupture. It is demonstrated that nonuniform distributions of the constitutive law parameters on the fault are necessary for creating the nucleation process, which consists of two phases: (i) a stable, quasistatic phase, and (ii) the subsequent accelerating phase. Physical models of the breakdown zone and the nucleation zone are presented for shear rupture in the brittle regime. The constitutive law for shear rupture explicitly includes a scaling parameter Dc that enables one to give a common interpretation to both small-scale rupture in the laboratory and large-scale rupture as an earthquake source in the Earth. Both the breakdown zone size Xc and the nucleation zone size L are prescribed and scaled by Dc, which in turn is prescribed by a characteristic length lambda_c representing geometrical irregularities of the fault. The models presented here make it possible to understand the earthquake generation process, from nucleation to unstable, dynamic rupture propagation, in terms of physics. Since the nucleation process itself is an immediate earthquake precursor, a deep understanding of the nucleation process in terms of physics is crucial for short-term (or immediate) earthquake prediction.
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, software testing and verification have been extensively automated. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared between a number of test cases that fail for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers.
Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end to the root cause. A debugging tool is created to enable developers to use the approach and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
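The kernel of the idea, narrowing the suspect path by intersecting the sequence covers of failing tests, can be sketched by folding a longest-common-subsequence computation over all failing traces. This is a minimal illustration, not the thesis's optimized algorithm; the basic-block identifiers are hypothetical.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two sequences (classic O(mn) DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack through the DP table to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1] and dp[i][j] == dp[i - 1][j - 1] + 1:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def common_suspect_path(traces):
    """Fold LCS over all failing-test traces: what survives is shared by every trace."""
    return reduce(lcs, traces)

# Hypothetical basic-block IDs covered by three failing test cases
traces = [
    ["b1", "b3", "b4", "b7", "b9"],
    ["b1", "b2", "b3", "b4", "b9"],
    ["b0", "b1", "b3", "b4", "b8", "b9"],
]
print(common_suspect_path(traces))  # → ['b1', 'b3', 'b4', 'b9']
```

Each additional failing trace can only shrink (or preserve) the common subsequence, which is exactly why a large number of failing test cases narrows the search space for the faulty path.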
Abstract:
Part 6: Engineering and Implementation of Collaborative Networks
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, called the power sensitivity mode. Test results are presented to show the good performance of this approach.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
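The general mechanism of constrained-random stimulus generation, draw candidate stimuli from parameter domains and discard those violating constraints, can be sketched as follows. This is only an illustrative rejection-sampling sketch: the parameter names, domains, and the alignment constraint are invented for the example and do not come from the PD formalism or the tools described in the paper.

```python
import random

# Hypothetical parameter domains for a bus-interface testbench
DOMAINS = {
    "burst_len": range(1, 17),
    "addr_align": [1, 2, 4, 8],
    "mode": ["read", "write"],
}

def valid(stim):
    """Example constraint pruning an invalid scenario:
    long bursts must be word-aligned (hypothetical rule)."""
    return not (stim["burst_len"] > 8 and stim["addr_align"] < 4)

def gen_stimulus(rng):
    """Draw from the domains until a constraint-satisfying stimulus is found."""
    while True:
        stim = {k: rng.choice(list(v)) for k, v in DOMAINS.items()}
        if valid(stim):
            return stim

stim = gen_stimulus(random.Random(0))
print(stim)
```

Removing invalid scenarios *before* generation, as the PD-based sources do, avoids the wasted simulation cycles that this naive generate-then-reject loop spends on discarded draws, which is one source of the reported simulation-time reduction.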
Abstract:
Using a dynamic systems model developed specifically for the Piracicaba, Capivari and Jundiaí river water basins (BH-PCJ) as a tool to help policy and decision makers analyze water resources management alternatives, five simulations over a 50-year timeframe were performed. The model estimates water supply and demand, as well as wastewater generation from the consumers at BH-PCJ. A run was performed keeping the mean precipitation value constant and maintaining the current water supply and demand rates, the business-as-usual scenario. Under these assumptions, water demand is expected to increase by about 76%, ~39% of the available water volume will come from wastewater reuse, and the waste load will increase by ~91%. The Falkenmark Index will change from 1,403 m³ person⁻¹ year⁻¹ in 2004 to 734 m³ person⁻¹ year⁻¹ by 2054, and the Sustainability Index from 0.44 to 0.20. Another four simulations were performed by scaling annual precipitation to 90% and 110%, considering an ecological flow equal to 30% of the mean daily flow, and keeping the same rates for all other factors except ecological flow and household water consumption. All of them showed a tendency toward a water crisis in the near future at BH-PCJ.
Abstract:
Distributed energy resources will provide a significant amount of electricity generation and will become a normal, profitable business. In the new decentralized grid, customers will be among the many decentralized players and may even help to co-produce the required energy services, such as demand-side management and load shedding. Thus, they will gain the opportunity to be more active market players. The aggregation of DG plants gives rise to a new concept: the Virtual Power Producer (VPP). VPPs can reinforce the importance of these generation technologies, making them valuable in electricity markets. In this paper we propose the improvement of MASCEM, a multi-agent simulation tool to study negotiations in electricity spot markets based on different market mechanisms and behavior strategies, in order to account for decentralized players such as VPPs.
Abstract:
Nowadays, smartphones and other mobile devices are being endowed with ever greater computational power and are capable of running a wide range of applications, from simple note-taking programs to sophisticated navigation software. However, even with the evolution of their hardware, current mobile devices still do not have the same capabilities as desktop or laptop computers. One possible solution to this problem is to distribute the application, executing parts of it on the local device and the rest on other devices connected to the network. Additionally, some types of applications, such as multimedia applications, electronic games, or immersive-environment applications, have Quality of Service requirements, particularly real-time requirements. This thesis proposes a remote code execution system for distributed systems with real-time constraints. The proposed architecture suits systems that need to execute the same set of functions periodically and in parallel with real-time guarantees, even when the execution times of those functions are unknown. The proposed platform was developed for mobile systems capable of running the Android operating system.
Abstract:
This paper addresses the impact of the CO2 opportunity cost on the wholesale electricity price in the context of the Iberian electricity market (MIBEL), namely on the Portuguese system, for the period corresponding to Phase II of the European Union Emission Trading Scheme (EU ETS). In the econometric analysis, a vector error correction model (VECM) is specified to estimate both long-run equilibrium relations and short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The model is estimated using daily spot market prices, and the four commodity prices are jointly modelled as endogenous variables. Moreover, a set of exogenous variables is incorporated in order to account for the electricity demand conditions (temperature) and the electricity generation mix (quantity of electricity traded according to the technology used). The outcomes for the Portuguese electricity system suggest that the dynamic pass-through of carbon prices into electricity prices is strongly significant, and the estimated long-run elasticity (equilibrium relation) is in line with studies conducted for other markets.