989 results for Integer mixed programming


Relevance:

20.00%

Publisher:

Abstract:

The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Nowadays, parallel programming is a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach the objective: research the state of the art of parallel programming today, improve the education of software developers about the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource called the Parawiki was set up to collect experiences and knowledge in the field of parallel programming. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs, developers can study the source code and learn from it, and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
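As an illustration of the kind of lock-enhancing generic component mentioned above, the following minimal C++ sketch wraps an OpenMP lock in an RAII guard so that the lock is always released when the guarded scope ends, even on early return. The class name and interface are illustrative assumptions, not AthenaMP's actual API.

```cpp
// Minimal RAII guard around an OpenMP lock (illustrative, not AthenaMP's API).
#include <omp.h>
#include <cstdio>

class scoped_omp_lock {
public:
    explicit scoped_omp_lock(omp_lock_t& lock) : lock_(lock) { omp_set_lock(&lock_); }
    ~scoped_omp_lock() { omp_unset_lock(&lock_); }
    scoped_omp_lock(const scoped_omp_lock&) = delete;
    scoped_omp_lock& operator=(const scoped_omp_lock&) = delete;
private:
    omp_lock_t& lock_;
};

int main() {
    omp_lock_t lock;
    omp_init_lock(&lock);
    long counter = 0;

    #pragma omp parallel for
    for (int i = 0; i < 1000; ++i) {
        scoped_omp_lock guard(lock);  // lock acquired here ...
        ++counter;                    // ... protecting the shared update
    }                                 // ... and released automatically here

    omp_destroy_lock(&lock);
    std::printf("counter = %ld\n", counter);
    return 0;
}
```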

Relevance:

20.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs in this evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
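The following compact C++ sketch illustrates the generic evolutionary loop described above: candidate programs are rated by an objective function, and the most promising ones are refined by crossover and mutation. The program representation and the fitness function are simple placeholders standing in for the randomized network simulations, not the thesis's actual RBGP/eRBGP encodings.

```cpp
// Generic Genetic Programming loop (illustrative placeholders only).
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <random>
#include <vector>

using Program = std::vector<int>;   // a "program" as a sequence of abstract rule ids

std::mt19937 rng{42};

// Stub objective: stands in for "closeness to the specified global behavior"
// as measured over randomized network simulations (higher is better).
double fitness(const Program& p) {
    double score = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i)
        score -= std::abs(p[i] - static_cast<int>(i));
    return score;
}

Program mutate(Program p) {
    std::uniform_int_distribution<std::size_t> pos(0, p.size() - 1);
    std::uniform_int_distribution<int> gene(0, 9);
    p[pos(rng)] = gene(rng);         // change one rule at random
    return p;
}

Program crossover(const Program& a, const Program& b) {
    std::uniform_int_distribution<std::size_t> cut(1, a.size() - 1);
    std::size_t c = cut(rng);        // single-point crossover
    Program child(a.begin(), a.begin() + c);
    child.insert(child.end(), b.begin() + c, b.end());
    return child;
}

int main() {
    std::uniform_int_distribution<int> gene(0, 9);
    std::vector<Program> pop(30, Program(8));
    for (auto& p : pop) for (auto& g : p) g = gene(rng);

    for (int generation = 0; generation < 100; ++generation) {
        // Keep the most promising half, refill the rest with offspring.
        std::sort(pop.begin(), pop.end(),
                  [](const Program& x, const Program& y) { return fitness(x) > fitness(y); });
        for (std::size_t i = pop.size() / 2; i < pop.size(); ++i)
            pop[i] = mutate(crossover(pop[i - pop.size() / 2], pop[0]));
    }
    std::cout << "best fitness: " << fitness(pop[0]) << "\n";
    return 0;
}
```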

Relevance:

20.00%

Publisher:

Abstract:

In comparison with mixed forest stands, the cultivation of pure plantations in Vietnam entails serious ecological consequences such as loss of biodiversity and higher rates of soil erosion. An economic evaluation is carried out between pure plantations and mixed forests in which the fast-growing tree species (Acacia sp.) are mixed with slow-growing tree species planted in strips that separate the segments of fast-growing trees. For the evaluation, the input values were taken from local costs of goods, services and labour. The results show that the internal rate of return is highest for the pure plantation, at 86%, compared with 77% for the first mixed planting pattern (Acacia sp. + noble hardwood species) and 54% for the second (Acacia sp. + Dipterocarpus sp. + Sindora sp.). The average profit per hectare and year, however, is almost five times higher for the mixed stands: the first planting pattern reaches 2,650 $, the second planting pattern 2,280 $, and the pure acacia plantation only 460 $. From an economic point of view, the cultivation of mixed forests that corresponds to the principles of sustainable forestry therefore generates a good economic profit while maintaining habitat complexity and biodiversity.
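For reference, the internal rate of return reported above is, in the standard definition, the discount rate r at which the net present value of a planting pattern's cash flows vanishes:

\[ \mathrm{NPV}(r) = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t} = 0, \]

where \(B_t\) and \(C_t\) are the benefits and costs in year \(t\) and \(T\) is the rotation length; the symbols are generic and are not taken from the study itself.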

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, a mixed-integer linear unit-commitment model for power plants and storage was developed and used to study Germany's energy supply in the year 2050 according to the "Leitstudie" scenarios 2050 A and 2050 C ([Nitsch et al., 2012]), in which renewable energies supply more than 85% of electricity generation and wind and solar power cause strong fluctuations in the residual electricity demand (residual load) that has to be covered by dispatchable power plants and storage. In scenario 2050 A, 67 TWh of hydrogen, to be produced electrolytically from renewable electricity, are earmarked for transport. In scenario 2050 C, no hydrogen is foreseen for transport and the more efficient electric mobility covers 100% of individual passenger transport. Less renewable electricity is therefore needed to reach the same renewable share in the transport sector. Since electric vehicles additionally offer load-management potential, the residual loads of the two scenarios differ in their temporal characteristics and annual totals. The focus of the analysis was on determining the utilization and operation of the generation portfolio assumed in the scenarios, consisting of electricity-only power plants, combined heat and power (CHP) plants equipped with heat storage, electric heating rods and gas backup boilers, electricity storage, and heat pumps that can be used for load management via heat storage. The schedule of these components was optimized for minimal total variable costs of electricity and heat generation over a planning horizon of four days at a time. The optimization problem was solved with the linear branch-and-cut solver of the software CPLEX. Using so-called rolling planning, the plant and storage schedules for the complete scenario years were obtained by joining the planning results of overlapping planning periods. It was shown that the CHP share in covering the heat load is small. This was attributed to the temporal structure of the residual electricity load, the heat-side dimensioning of the plants, and the fact that only short-term heat storage was provided for. The heat-side dimensioning of the CHP plants limited their share of heat coverage, because in winter, when the residual electricity load is high, little free capacity was available for charging the heat storages. In the calculations for scenarios 2050 A and C, the average CHP share of the heat demand of about 100 TWh_th was 40% and 60%, respectively, although the dimensioning of the CHP plants would have allowed a theoretical share of more than 97% of the heat load had there been no restrictions from the electricity side. Furthermore, the CO2-abatement effect of the CHP heat storages and of load management with heat pumps was examined. In scenario 2050 A, the CHP heat storages showed no significant CO2 abatement; in scenario 2050 C, by contrast, a small but significant CO2 saving of 1.6% of the total emissions from electricity generation and CHP-based heat supply was found. Load management with heat pumps avoided emissions of 110 thousand tonnes of CO2 (0.4% of total emissions) in scenario A and 213 thousand tonnes (0.8% of total emissions) in scenario C. In addition, the competition between solar district heating and CHP feeding into the same heat networks was considered. A further restriction of CHP generation due to the feed-in priority of solar thermal energy was found. Moreover, a lower bound of 6.5 and 8.8 TWh_th, respectively, was determined for the hydrogen storage capacity required in the scenarios. The results of this work suggest determining the techno-economic potential of long-term heat storage for a better integration of CHP into the system, or more generally searching for more suitable heat-sector scenarios, since it became clear that, for public heat supply, CHP in combination with short-term heat storage, gas boilers and electric heaters does not achieve a very effective CO2 reduction in these scenarios. It should be investigated, for example, whether a multivalent system of CHP, heat storage and heat pumps could be an economically viable alternative, followed by an analysis of the optimal shares of CHP, heat pumps and solar thermal energy in the heat market.
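As an indication of the model class, a strongly simplified mixed-integer dispatch formulation of the kind described above could read as follows; the symbols are generic placeholders and do not reproduce the thesis's actual model:

\[
\min \sum_{t=1}^{T} \sum_{i} \left( c_i\, p_{i,t} + c_i^{\mathrm{start}}\, y_{i,t} \right)
\quad \text{s.t.} \quad
\sum_{i} p_{i,t} + d_t^{\mathrm{out}} - d_t^{\mathrm{in}} = R_t ,
\]
\[
u_{i,t}\, P_i^{\min} \le p_{i,t} \le u_{i,t}\, P_i^{\max}, \qquad
e_{t+1} = e_t + \eta\, d_t^{\mathrm{in}} - d_t^{\mathrm{out}}/\eta, \qquad
u_{i,t},\, y_{i,t} \in \{0,1\},
\]

where \(R_t\) is the residual load in hour \(t\), \(p_{i,t}\) the output of unit \(i\), \(u_{i,t}\) its on/off status, \(y_{i,t}\) a start-up indicator, and \(e_t\) the storage level. The binary commitment variables make the problem mixed-integer, and in rolling planning a new instance of this kind is solved for each overlapping four-day window.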

Relevance:

20.00%

Publisher:

Abstract:

Evaluation of major feed resources was conducted in four crop-livestock mixed farming systems of central southern Ethiopia with 90 farmers, selected using multi-stage purposive and random sampling methods. Discussions were held with focus groups and key informants to identify vernacular feed names, followed by feed sampling to analyse chemical composition (CP, ADF and NDF) and in-vitro dry matter digestibility (IVDMD) and to correlate these with indigenous technical knowledge (ITK). Native pastures, crop residues (CR) and multi-purpose trees (MPT) are the major feed resources and showed great variation in seasonality, chemical composition and IVDMD. The average CP, NDF and IVDMD values for grasses were 83.8 (range: 62.9–190), 619 (range: 357–877) and 572 (range: 317–743) g kg^(−1) DM, respectively. Likewise, the average CP, NDF and IVDMD values for CR were 58 (range: 20–90), 760 (range: 340–931) and 461 (range: 285–637) g kg^(−1) DM, respectively. Generally, the MPT and non-conventional feeds (NCF, Ensete ventricosum and Ipomoea batatas) had higher CP (range: 155–164 g kg^(−1) DM) and IVDMD values (611–657 g kg^(−1) DM) and lower NDF (331–387 g kg^(−1) DM) and ADF (321–344 g kg^(−1) DM) values. The MPT and NCF were ranked as the most nutritious feeds by ITK, while crop residues were ranked lowest. This study indicates that there are remarkable variations within and among forage resources in terms of chemical composition. There were also complementarities between ITK and feed laboratory results, and thus ITK needs to be taken into consideration in the evaluation of local feed resources.

Relevance:

20.00%

Publisher:

Abstract:

Computational models are arising in which programs are constructed by specifying large networks of very simple computational devices. Although such models can potentially make use of a massive amount of concurrency, their usefulness as a programming model for the design of complex systems will ultimately be decided by the ease with which such networks can be programmed (constructed). This thesis outlines a language for specifying computational networks. The language (AFL-1) consists of a set of primitives and a mechanism to group these elements into higher-level structures. An implementation of this language runs on the Thinking Machines Corporation Connection Machine. Two significant examples were programmed in the language: an expert system (CIS) and a planning system (AFPLAN). These systems are explained and analyzed in terms of how they compare with similar systems written in conventional languages.

Relevance:

20.00%

Publisher:

Abstract:

Most Artificial Intelligence (AI) work can be characterized as either "high-level" (e.g., logical, symbolic) or "low-level" (e.g., connectionist networks, behavior-based robotics). Each approach suffers from particular drawbacks. High-level AI uses abstractions that often have no relation to the way real, biological brains work. Low-level AI, on the other hand, tends to lack the powerful abstractions that are needed to express complex structures and relationships. I have tried to combine the best features of both approaches, by building a set of programming abstractions defined in terms of simple, biologically plausible components. At the "ground level", I define a primitive, perceptron-like computational unit. I then show how more abstract computational units may be implemented in terms of the primitive units, and show the utility of the abstract units in sample networks. The new units make it possible to build networks using concepts such as long-term memories, short-term memories, and frames. As a demonstration of these abstractions, I have implemented a simulator for "creatures" controlled by a network of abstract units. The creatures exist in a simple 2D world, and exhibit behaviors such as catching mobile prey and sorting colored blocks into matching boxes. This program demonstrates that it is possible to build systems that can interact effectively with a dynamic physical environment, yet use symbolic representations to control aspects of their behavior.
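To make the "ground level" concrete, here is a minimal C++ sketch of a primitive, threshold (perceptron-like) unit of the kind described above; the type and its wiring are illustrative assumptions, not the thesis's actual definitions. More abstract units such as memories would then be built by composing networks of units like this one.

```cpp
// A primitive, perceptron-like unit: weighted sum of inputs through a threshold.
#include <iostream>
#include <numeric>
#include <vector>

struct Unit {
    std::vector<double> weights;
    double threshold;

    // Fires (outputs 1.0) when the weighted input sum exceeds the threshold.
    double activate(const std::vector<double>& inputs) const {
        double sum = std::inner_product(inputs.begin(), inputs.end(),
                                        weights.begin(), 0.0);
        return sum > threshold ? 1.0 : 0.0;
    }
};

int main() {
    // A two-input unit acting as a logical AND gate.
    Unit and_unit{{0.6, 0.6}, 1.0};
    std::cout << and_unit.activate({1.0, 1.0}) << "\n"; // prints 1
    std::cout << and_unit.activate({1.0, 0.0}) << "\n"; // prints 0
    return 0;
}
```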

Relevance:

20.00%

Publisher:

Abstract:

Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(λ) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(λ) and Q-learning belong.
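For concreteness, the two update rules in question are, in standard textbook notation (not notation specific to this paper): for TD(λ), with temporal-difference error \(\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)\), every state \(s\) is updated as

\[ V(s) \leftarrow V(s) + \alpha\, \delta_t\, e_t(s), \qquad e_t(s) = \gamma\lambda\, e_{t-1}(s) + \mathbf{1}[s = s_t], \]

and for Q-learning,

\[ Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha_t \left( r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right). \]

Convergence results of this stochastic-approximation type typically require step sizes satisfying \(\sum_t \alpha_t = \infty\) and \(\sum_t \alpha_t^2 < \infty\).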

Relevance:

20.00%

Publisher:

Abstract:

The underlying assumptions for interpreting the meaning of data often change over time, which further complicates the problem of semantic heterogeneities among autonomous data sources. As an extension to the COntext INterchange (COIN) framework, this paper introduces the notion of temporal context as a formalization of the problem. We represent temporal context as a multi-valued method in F-Logic; however, only one value is valid at any point in time, the determination of which is constrained by temporal relations. This representation is then mapped to an abductive constraint logic programming framework with temporal relations being treated as constraints. A mediation engine that implements the framework automatically detects and reconciles semantic differences at different times. We articulate that this extended COIN framework is suitable for reasoning on the Semantic Web.

Relevance:

20.00%

Publisher:

Abstract:

We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the resulting preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on some LP test problems in the NETLIB suite to demonstrate the potential of the preconditioning method discussed.
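For orientation, the augmented (symmetric indefinite) system referred to above has, in a standard primal-dual interior-point method for linear programming, the block form shown below; this is generic textbook notation, not necessarily that of the paper:

\[
\begin{bmatrix} -\Theta^{-1} & A^{T} \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} r_c \\ r_b \end{bmatrix},
\qquad \Theta = X S^{-1},
\]

where \(A\) is the constraint matrix, \(X\) and \(S\) are diagonal matrices of the current primal variables and dual slacks, and the right-hand side collects the Newton residuals. A block preconditioner that mimics this 2x2 structure aims to cluster the eigenvalues of the preconditioned matrix so that an iterative solver converges in few iterations.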

Relevance:

20.00%

Publisher:

Abstract:

We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we address the problem of mitigating structural vibrations caused by seismic motions through the design of a semiactive controller based on mixed H2/H∞ control theory. The vibrations are mitigated by a semiactive damper installed at the bottom of the structure. By a semiactive damper we mean a device that can absorb, but cannot inject, energy into the system. Sufficient conditions for the design of the desired control are given in terms of linear matrix inequalities (LMIs). A controller that guarantees asymptotic stability and a mixed H2/H∞ performance is then developed. An algorithm is proposed to handle the semiactive nature of the actuator. The performance of the controller is experimentally evaluated in a real-time hybrid testing facility that consists of a physical specimen (a small-scale magnetorheological damper) and a numerical model (a large-scale three-story building).
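As an example of the LMI machinery involved, the standard bounded-real-lemma characterization of an H∞ bound is given below; this is the generic textbook condition, not necessarily the specific sufficient conditions derived in the paper. For a system \(\dot{x} = Ax + Bw\), \(z = Cx + Dw\), the gain from the disturbance \(w\) to the output \(z\) satisfies \(\|G\|_\infty < \gamma\) if there exists \(P = P^{T} \succ 0\) such that

\[
\begin{bmatrix}
A^{T}P + PA & PB & C^{T} \\
B^{T}P & -\gamma I & D^{T} \\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
\]

A mixed H2/H∞ design adds an analogous LMI bound on the H2 norm and typically searches for a single matrix \(P\) (or a common controller parametrization) satisfying both sets of inequalities.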