9 results for 280301 Programming Techniques

at Instituto Politécnico do Porto, Portugal


Relevance:

90.00%

Publisher:

Abstract:

In this paper the problem of the evolution of an object-oriented database in the context of orthogonal persistent programming systems is addressed. We have observed two characteristics in this type of system that offer particular conditions for implementing the evolution in a semi-transparent fashion. That transparency can be further enhanced with the obliviousness provided by Aspect-Oriented Programming techniques. A meta-model was conceived and a prototype developed to test the feasibility of our approach. The system allows programs written to one version of a schema to access, semi-transparently, data stored under other versions of the schema.
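A minimal sketch of the general idea (the paper's actual meta-model and prototype are not reproduced here, and all class and adapter names below are hypothetical): a program written against one schema version reads an object stored under an earlier version through a registered conversion.

```python
# Minimal sketch (not the paper's meta-model): a program written against
# schema version 2 reads, semi-transparently, an object that was stored
# under schema version 1, by passing it through a registered adapter.
from dataclasses import dataclass

@dataclass
class PersonV1:               # stored under the old schema
    name: str

@dataclass
class PersonV2:               # the schema the running program was written to
    first_name: str
    last_name: str

# (class name, stored version, expected version) -> conversion function
ADAPTERS = {("Person", 1, 2): lambda old: PersonV2(*old.name.split(" ", 1))}

def load(stored_obj, stored_version, expected_version, class_name="Person"):
    """Return the stored object converted to the version the caller expects."""
    if stored_version == expected_version:
        return stored_obj
    return ADAPTERS[(class_name, stored_version, expected_version)](stored_obj)

print(load(PersonV1("Ada Lovelace"), 1, 2))
# PersonV2(first_name='Ada', last_name='Lovelace')
```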

Relevance:

80.00%

Publisher:

Abstract:

The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
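For illustration only, a small sketch of the penalty idea on a toy MPCC, using a generic NLP solver from SciPy; the hyperbolic-type penalty written below is a common smooth form from the literature and may differ from the exact function of [1].

```python
# Illustrative sketch only: penalise the complementarity product of a tiny MPCC
# and hand the smooth problem to a generic NLP solver (SciPy's SLSQP here).
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def hyperbolic_penalty(y, lam, tau):
    # Smooth: tends to 0 for y << 0, equals tau at y = 0, grows ~2*lam*y for y >> 0.
    return lam * y + np.sqrt((lam * y) ** 2 + tau ** 2)

def penalized(x, mu, lam, tau):
    # Complementarity 0 <= x[0] ⊥ x[1] >= 0 handled through the product x[0]*x[1].
    return objective(x) + mu * hyperbolic_penalty(x[0] * x[1], lam, tau)

x0 = np.array([0.5, 0.5])
mu, lam, tau = 1.0, 10.0, 1e-2
for _ in range(5):                      # crude outer loop: tighten the penalty
    res = minimize(penalized, x0, args=(mu, lam, tau),
                   bounds=[(0.0, None), (0.0, None)], method="SLSQP")
    x0, mu, tau = res.x, mu * 10.0, tau * 0.1

print(x0, objective(x0))                # expected near (1, 0) or (0, 1)
```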

Relevance:

80.00%

Publisher:

Abstract:

A Mathematical Program with Complementarity Constraints (MPCC) finds application in many fields. As the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) or the Mangasarian-Fromovitz Constraint Qualification (MFCQ) at any feasible point, nonlinear programming theory may not be directly applied to MPCC. However, an MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of them, the Inexact Restoration (IR) approach, performs two independent phases in each iteration: the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase, two strategies were implemented, depending on the features of the constraints: one gives more importance to the complementarity constraints, while the other prioritizes the equality and inequality constraints and neglects the complementarity ones. The optimality phase uses the same approach in both algorithm versions. The algorithms were implemented in MATLAB and the test problems are taken from the MacMPEC collection.
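A very schematic sketch of a two-phase feasibility/optimality loop in the spirit of Inexact Restoration, on a toy MPCC; the paper's actual restoration strategies, merit function and MATLAB implementation are not reproduced, and all weights and tolerances below are arbitrary.

```python
# Schematic two-phase loop for min f(x) s.t. complementarity x0*x1 = 0, x >= 0.
# Purely illustrative; not the algorithm presented in the paper.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # objective of a toy MPCC
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def infeasibility(x):          # violation of the complementarity constraint
    return float((x[0] * x[1]) ** 2)

x = np.array([0.5, 0.5])
bounds = [(0.0, None), (0.0, None)]
for k in range(10):
    # Feasibility phase: restore (approximate) feasibility from the current point.
    y = minimize(infeasibility, x, bounds=bounds).x
    # Optimality phase: improve f while staying close to the restored point y.
    merit = lambda z: f(z) + 10.0 * infeasibility(z) + np.sum((z - y) ** 2)
    x = minimize(merit, y, bounds=bounds).x
    if infeasibility(x) < 1e-10:
        break

print(x, f(x))
```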

Relevance:

30.00%

Publisher:

Abstract:

The introduction of Electric Vehicles (EVs), together with the implementation of smart grids, will raise new challenges for power system operators. This paper proposes a demand response program for electric vehicle users which provides the network operator with another useful resource, namely the reduction of vehicles' charging needs. This demand response program enables vehicle users to obtain some profit by agreeing to reduce their travel needs and minimum battery level requirements in a given period. To support the network operator's actions, the amount of demand response usage can be estimated using data mining techniques applied to a database containing a large set of operation scenarios. The paper includes a case study based on simulated operation scenarios that consider different operation conditions, e.g. available renewable generation, and a diversity of distributed resources and electric vehicles with vehicle-to-grid and demand response capacity in a 33-bus distribution network.
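Purely as an illustration of the "data mining over operation scenarios" step, the sketch below fits a regression tree to synthetic scenario data to estimate the demand response a vehicle fleet could provide; feature names, data and model choice are hypothetical and not taken from the paper.

```python
# Illustrative only: estimate, from past operation scenarios, how much EV
# charging reduction a demand response call is likely to obtain.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(0, 1, n),      # renewable availability (p.u.)
                     rng.uniform(0, 1, n),      # aggregate EV state of charge
                     rng.integers(0, 24, n)])   # hour of day
# Synthetic "DR obtained" target, only to make the sketch runnable.
y = 50 * X[:, 1] * (1 - X[:, 0]) + rng.normal(0, 2, n)

model = DecisionTreeRegressor(max_depth=4).fit(X, y)
scenario = np.array([[0.2, 0.8, 19]])           # low renewables, evening peak
print("estimated EV demand response (kW):", model.predict(scenario)[0])
```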

Relevance:

30.00%

Publisher:

Abstract:

In recent years, Power Systems (PS) have experienced many changes in their operation. The introduction of new players managing Distributed Generation (DG) units and the existence of new Demand Response (DR) programs make the control of the system a more complex problem while allowing a more flexible management. Intelligent resource management is of great importance in the context of smart grids so that smart grid functions are assured. This paper proposes a new methodology to support system operators and/or Virtual Power Players (VPPs) in determining effective and efficient DR programs that can be put into practice. The method is based on data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 32-bus distribution network.
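A rough, hypothetical sketch of mining a scenario database to characterize candidate DR programs: cluster synthetic scenarios and read each cluster centre as an operating profile. The paper's 27,000-scenario database and its actual mining method are not reproduced here.

```python
# Cluster synthetic operation scenarios; each cluster centre is read as a
# candidate DR program profile. All attributes below are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
scenarios = np.column_stack([rng.uniform(0, 1, 2000),       # wind availability
                             rng.uniform(0, 1, 2000),       # photovoltaic availability
                             rng.uniform(0.5, 1.5, 2000)])  # load level (p.u.)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scenarios)
for c, centre in enumerate(kmeans.cluster_centers_):
    print(f"scenario group {c}: wind={centre[0]:.2f} pv={centre[1]:.2f} load={centre[2]:.2f}")
```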

Relevance:

30.00%

Publisher:

Abstract:

The growing importance and influence of new resources connected to power systems has caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation due to both new operational constraints and the growing amount of data to be processed. Virtual Power Players (VPPs) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when the VPP acts as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
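One simple, hypothetical way to turn a scenario database into a DR capacity the CSP can commit to is to take a conservative quantile of the DR amounts realised across scenarios, as sketched below with synthetic data; the paper's data mining approach may differ.

```python
# Sketch: commit to the DR level that is met in roughly 95% of scenarios.
# The per-scenario DR amounts are synthetic.
import numpy as np

rng = np.random.default_rng(2)
dr_realised_kw = rng.gamma(shape=4.0, scale=25.0, size=27_000)  # per-scenario DR

assured_capacity = np.quantile(dr_realised_kw, 0.05)   # met in ~95% of scenarios
print(f"DR capacity the CSP can assure: {assured_capacity:.1f} kW")
```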

Relevance:

30.00%

Publisher:

Abstract:

Significant research effort has been devoted to the topic of optimal planning of distribution systems. The nonlinear nature of the system, the need to consider a large number of scenarios and the increasing necessity to deal with uncertainties make optimal planning in distribution systems a difficult task. Heuristic approaches have been proposed to deal with these issues, overcoming some of the inherent difficulties of classic methodologies. This paper considers several methodologies used to address planning problems of electrical power distribution networks, namely mixed-integer linear programming (MILP), ant colony algorithms (AC), genetic algorithms (GA), tabu search (TS), branch exchange (BE), simulated annealing (SA) and the Benders decomposition deterministic non-linear optimization technique (BD). The adequacy of these techniques to deal with uncertainties is discussed. The behaviour of each optimization technique is compared in terms of the obtained solution and of the methodology's performance. The paper presents results of the application of these optimization techniques to a real case of a 10-kV electrical distribution system with 201 nodes that feeds an urban area.
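The sketch below is only a toy comparison harness in the spirit of the study: it runs two generic global optimizers from SciPy (stand-ins for the heuristics discussed) on the same illustrative cost function and reports solution quality and runtime; it is not the paper's network model or its implementations.

```python
# Toy comparison harness: same cost function, two generic optimizers,
# compare objective value and runtime. The cost function is a stand-in.
import time
import numpy as np
from scipy.optimize import dual_annealing, differential_evolution

def planning_cost(x):
    # Stand-in, non-linear "investment + losses" cost; not a real network model.
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

bounds = [(0.0, 1.0)] * 10
for name, solver in [("simulated annealing", dual_annealing),
                     ("evolutionary search", differential_evolution)]:
    t0 = time.perf_counter()
    res = solver(planning_cost, bounds, seed=0)
    print(f"{name:20s} cost={res.fun:.4f} time={time.perf_counter() - t0:.2f}s")
```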

Relevance:

30.00%

Publisher:

Abstract:

Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology, in which several independent users can submit applications that execute concurrently across the network. Multiple concurrent applications cause significant energy-usage overhead on sensor nodes, which cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
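A small, self-contained sketch of the two ideas, outside any real WSN stack and with hypothetical names: collapsing redundant sensing requests that target the same sensor, and batching packets from different applications into one transmission at a harmonized slot (a simplification of Rate-Harmonized Scheduling style batching).

```python
# Illustrative sketch of the two techniques named in the abstract.
from dataclasses import dataclass

@dataclass
class SensingRequest:
    app: str
    sensor: str
    period_ms: int          # requested sampling period

def merge_requests(requests):
    """Serve all apps that read the same sensor with one sampling task,
    using the fastest (smallest) requested period."""
    merged = {}
    for r in requests:
        current = merged.get(r.sensor)
        merged[r.sensor] = r.period_ms if current is None else min(current, r.period_ms)
    return merged            # sensor -> single sampling period

def batch_packets(packets, harmonized_period_ms, now_ms):
    """Hold packets until the next harmonized slot, then send them together."""
    next_slot = ((now_ms // harmonized_period_ms) + 1) * harmonized_period_ms
    return next_slot, list(packets)     # one concatenated transmission at next_slot

reqs = [SensingRequest("appA", "temp", 1000), SensingRequest("appB", "temp", 500)]
print(merge_requests(reqs))             # {'temp': 500}: one task serves both apps
print(batch_packets(["pkt_a", "pkt_b"], harmonized_period_ms=100, now_ms=430))
# (500, ['pkt_a', 'pkt_b']): both packets go out together at the 500 ms slot
```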

Relevance:

30.00%

Publisher:

Abstract:

Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates in the applications' class structure and in the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of that research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible.

Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. It supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work.

To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect in the aspect-oriented sense. Objects do not need to extend any superclass, implement an interface or carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, and the framework will produce a new version of it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, hence keeping the framework also oblivious to this problem.

The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on the programmer's productivity, simplifying the entire evolution process at application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility: compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate prototype and meta-model robustness. To perform these tests, we used a small OO7 database because of its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible; however, the developed benchmark is now available to perform future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience using our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
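As a hedged illustration of XML-described instance adaptation rules (the thesis' actual pointcut/advice constructs and framework API are not reproduced), the sketch below reads a rule that maps fields of an old class version onto the new one and applies it when an older instance is loaded; all element and field names are hypothetical.

```python
# Hypothetical XML rule: class name, source and target version, field mapping.
import xml.etree.ElementTree as ET

RULES_XML = """
<adaptation>
  <rule class="Person" from="1" to="2">
    <map source="name" target="full_name"/>
  </rule>
</adaptation>
"""

def load_rules(xml_text):
    rules = {}
    for rule in ET.fromstring(xml_text).findall("rule"):
        key = (rule.get("class"), int(rule.get("from")), int(rule.get("to")))
        rules[key] = {m.get("source"): m.get("target") for m in rule.findall("map")}
    return rules

def adapt(instance_fields, cls, from_v, to_v, rules):
    """Rename the fields of an old-version instance according to the rule."""
    mapping = rules[(cls, from_v, to_v)]
    return {mapping.get(k, k): v for k, v in instance_fields.items()}

rules = load_rules(RULES_XML)
print(adapt({"name": "Ada Lovelace"}, "Person", 1, 2, rules))
# {'full_name': 'Ada Lovelace'}
```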