919 results for Many-to-many assignment problem
Abstract:
Telecommunications play a key role in contemporary society. As new technologies reach the market, the demand for new products and services that depend on the available infrastructure also grows, making telecommunications network planning problems, despite technological advances, ever larger and more complex. Many of these problems, however, can be formulated as combinatorial optimization models, and heuristic algorithms can help solve them during the planning phase. In this project, two pure metaheuristic implementations, a Genetic Algorithm (GA) and a Memetic Algorithm (MA), plus a third hybrid implementation, a Memetic Algorithm with Vocabulary Building (MA+VB), were developed for a telecommunications problem known in the literature as the SONET Ring Assignment Problem (SRAP). The SRAP arises during the planning stage of the physical network and consists in selecting connections between a number of locations (customers) so as to meet a series of restrictions at the lowest possible cost. The problem is NP-hard, so efficient (polynomial-time) exact algorithms are not known and may, indeed, not even exist.
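As a rough illustration of the approach described, a genetic algorithm for an SRAP-like problem can encode each solution as a vector assigning every customer to a ring, with a fitness that counts rings used and penalizes ring-capacity violations. The sketch below is hypothetical (toy demands, illustrative fitness model and parameters), not the project's implementation:

```python
# Hedged sketch: a tiny genetic algorithm for an SRAP-like assignment.
# Demand data, fitness model and GA parameters are all illustrative.
import random

N_NODES, N_RINGS, CAPACITY = 8, 3, 50
random.seed(1)
# Symmetric traffic demands between node pairs (toy data).
demand = {(i, j): random.randint(1, 9)
          for i in range(N_NODES) for j in range(i + 1, N_NODES)}

def fitness(chrom):
    """Lower is better: rings actually used plus capacity-violation penalty."""
    load = [0] * N_RINGS
    for (i, j), d in demand.items():
        load[chrom[i]] += d                # demand loads the ring of endpoint i
        if chrom[i] != chrom[j]:
            load[chrom[j]] += d            # inter-ring traffic loads both rings
    penalty = sum(max(0, l - CAPACITY) for l in load)
    return len(set(chrom)) + 10 * penalty

def crossover(a, b):
    cut = random.randrange(1, N_NODES)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_RINGS) if random.random() < rate else g
            for g in chrom]

pop = [[random.randrange(N_RINGS) for _ in range(N_NODES)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                       # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
best = min(pop, key=fitness)
print(fitness(best), best)
```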
Parallel evolutionary algorithm for the locality-to-ring assignment problem in SONET/SDH networks
Abstract:
Telecommunications play a fundamental role in contemporary society; one of their main roles is to give people the possibility to connect and integrate into the society in which they operate and, with that, to accelerate development through knowledge. As new technologies are introduced to the market, the demand for new products and services that depend on the available infrastructure increases, making telecommunication network planning problems ever larger and more complex. Many of these problems, however, can be formulated as combinatorial optimization models, and heuristic algorithms can help solve them during the planning phase. This paper proposes a Parallel Evolutionary Algorithm for the telecommunications problem known in the literature as the SONET Ring Assignment Problem (SRAP). This problem is NP-hard; it arises during the physical planning of a telecommunication network and consists of determining the connections between locations (customers) that satisfy a series of constraints at the lowest possible cost. Experimental results illustrate the effectiveness of the Parallel Evolutionary Algorithm, compared with other methods, in obtaining solutions that are either optimal or very close to optimal.
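One common way to parallelize an evolutionary algorithm is the master-slave pattern, where fitness evaluation is farmed out to worker processes each generation. The sketch below shows this pattern with a stand-in objective; data, operators and parameters are toys, not the paper's method:

```python
# Hedged sketch of master-slave parallelisation, one common pattern for a
# parallel evolutionary algorithm; objective, operators and sizes are toys.
import random
from multiprocessing import Pool

def fitness(chrom):
    """Stand-in for the SRAP objective (e.g. rings used plus penalties)."""
    return sum(chrom)                      # toy: minimise the sum of genes

def mutate(chrom):
    child = list(chrom)
    child[random.randrange(len(child))] = random.randrange(3)
    return child

def evolve(pop_size=40, length=16, generations=50, workers=4):
    pop = [[random.randrange(3) for _ in range(length)]
           for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)        # fitness evaluated in parallel
            ranked = [c for _, c in sorted(zip(scores, pop))]
            survivors = ranked[:pop_size // 2]
            pop = survivors + [mutate(random.choice(survivors))
                               for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

if __name__ == "__main__":                 # guard required by multiprocessing
    print(evolve())
```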
Abstract:
The transmission network planning problem is a non-linear mixed-integer programming problem (NLIMP). Most algorithms used to solve it rely on a linear programming (LP) subroutine to solve the LP subproblems generated by the planning algorithm, and the resolution of these LPs can represent a major computational effort. A particularity of these LPs is that, at the optimal solution, only some of the inequality constraints are binding. This work transforms the LP into an equivalent problem with only one equality constraint (the power flow equation) and many inequality constraints, and uses a dual simplex algorithm and a relaxation strategy to solve the LPs. The optimisation process starts with only the equality constraint and, at each step, the most violated inequality constraint is added. The logic is similar to a proposal for electric systems operation planning. The results show higher performance of the algorithm when compared with primal simplex methods.
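The relaxation strategy described can be sketched generically: solve the LP with the equality constraint only, then repeatedly add the most violated inequality until none remains violated. The sketch below uses scipy's general-purpose solver on toy data rather than the paper's dual simplex implementation:

```python
# Hedged sketch of the relaxation strategy described above, on toy LP data;
# not the paper's dual-simplex implementation.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 3.0])
A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([6.0])   # equality kept throughout
A_ub = np.array([[-1.0,  0.0,  0.0],     # x1 >= 1
                 [ 0.0, -1.0,  0.0],     # x2 >= 1
                 [ 1.0,  0.0, -1.0]])    # x1 <= x3
b_ub = np.array([-1.0, -1.0, 0.0])

active = []                               # inequalities added so far
while True:
    res = linprog(c,
                  A_ub=A_ub[active] if active else None,
                  b_ub=b_ub[active] if active else None,
                  A_eq=A_eq, b_eq=b_eq)   # default bounds x >= 0
    violation = A_ub @ res.x - b_ub       # positive entries are violated
    worst = int(np.argmax(violation))
    if violation[worst] <= 1e-9:
        break                             # every inequality satisfied: done
    active.append(worst)                  # add the most violated constraint

print(res.x, res.fun)
```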
Abstract:
Graduate Program in Mathematics - IBILCE
Abstract:
The purpose of this study is to determine whether students solve math problems using addition, subtraction, multiplication, and division consistently, and whether students transfer these skills to other mathematical situations and solutions. In this action research study, a classroom of 6th grade mathematics students was used to investigate how students solve word problems and how they determine which mathematical approach to use to solve a problem. It was discovered that many of the students read and re-read a question before they try to find an answer. Most students will check their answer to determine if it is correct and makes sense. Most students agree that mastering basic math facts is very important for problem solving, yet prefer mathematics that does not focus on problem solving. As a result of this research, the need for a unified and focused curriculum, with a scope and sequence for delivery that is consistently followed, will be emphasized to the building principal and staff. The importance of mastering basic math skills and of making sure each student is challenged to be a mathematical thinker will also be stressed.
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, concentrating on the spreadsheet application paradigm, I present several of our approaches, focusing on methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several cost-benefit trade-offs and many opportunities for future work.
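To give a flavour of the static analyses mentioned, a backward slice over a spreadsheet's cell-dependency graph collects every cell that can influence a given cell, which is a basis for fault localization. The following toy sketch, with a hypothetical sheet, is illustrative only and is not the authors' system:

```python
# Toy illustration of spreadsheet dataflow analysis: a backward slice
# collects every cell that can influence a given cell. Illustrative only.
import re

formulas = {            # hypothetical sheet: cell -> formula or constant
    "A1": "5", "A2": "7",
    "B1": "=A1+A2",
    "C1": "=B1*2",
    "C2": "=A2-1",
}

def precedents(cell):
    """Cells referenced directly by the formula in `cell`."""
    return set(re.findall(r"[A-Z]+[0-9]+", formulas[cell].lstrip("=")))

def backward_slice(cell):
    """All cells whose values can reach `cell` through formulas."""
    seen, stack = set(), [cell]
    while stack:
        for dep in precedents(stack.pop()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(backward_slice("C1"))   # {'B1', 'A1', 'A2'}
```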
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all nodes should be able not only to request services but to provide them to other peers as well. While on one hand such a high level of decentralization can lead to interesting properties like scalability and fault tolerance, on the other hand it brings many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviours. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, which consists in having each peer periodically try to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation-formation point of view. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (a tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair, which has been evaluated through simulation and has shown the ability to enforce fairness and to tackle free-riding and cheating nodes.
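Based only on the description above (not the SLAC/SLACER source), the core copy rule can be rendered schematically: each node compares itself with a randomly chosen peer and, if the peer performs better, copies its strategy and rewires towards it, with occasional mutation keeping the population exploring:

```python
# Schematic rendering of the copy rule described above (not the SLAC code):
# a node that meets a better-performing peer copies its strategy and links.
import random

class Node:
    def __init__(self, nid):
        self.nid = nid
        self.strategy = random.choice(["cooperate", "defect"])
        self.links = set()
        self.utility = random.random()   # stand-in for Prisoner's Dilemma payoff

def slac_step(nodes, mutation_rate=0.01):
    for node in nodes:
        peer = random.choice(nodes)
        if peer is node:
            continue
        if peer.utility > node.utility:
            node.strategy = peer.strategy                # copy behaviour
            node.links = set(peer.links) | {peer.nid}    # rewire towards the peer
        if random.random() < mutation_rate:
            node.strategy = random.choice(["cooperate", "defect"])

nodes = [Node(i) for i in range(100)]
for _ in range(10):                      # a few evolution rounds
    slac_step(nodes)
print(sum(n.strategy == "cooperate" for n in nodes), "cooperators")
```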
Abstract:
Canned tuna is one of the most widespread and recognizable fish commodities in the world. Across all oceans, 80% of total tuna catches are taken by the purse seine fishery, whose target species in tropical waters are yellowfin (Thunnus albacares), bigeye (Thunnus obesus) and skipjack (Katsuwonus pelamis). Even though this fishing gear is claimed to be very selective, there are high levels of by-catch, especially when operating under Fish Aggregating Devices (FADs). The main problem is the underestimation of by-catch data. To address this problem, the scientific community has developed many specific programs (e.g. the Observe Program) to collect data about both target species and by-catch with observers on board. The purposes of this study are to estimate the quantity and composition of target species and by-catch by the tuna purse seine fishery operating in tropical waters and to highlight a possible seasonal variability in the by-catch ratio (tunas versus by-catch). Data were collected within the French scientific program "Observe" on board the French tuna purse seiner "Via Avenir" during a fishing trip in the Gulf of Guinea (Central-East Atlantic) from August to September 2012. Furthermore, some by-catch specimens were sampled to obtain more information about size-class composition. To achieve these purposes we shared our data with the French Research Institute for Development (IRD), which holds data collected by observers on board in the same study area. Yellowfin tuna turns out to be the main species caught in all the trips considered (around 71% of total catches), especially in sets on free swimming schools (FSC). Skipjack tuna, instead, is the main species caught under FADs. Different percentages of by-catch are observed for the two fishing modes: the by-catch incidence is higher in FAD sets (96.5% of total by-catch) than in FSC sets (3.5%), and the main category of by-catch is little tuna (73%). When pooling data for both fishing sets used in the purse seine fishery, the overall by-catch/catch ratio is 5%, a lower level than in other fishing gears such as long-lining and trawling.
Abstract:
Over the last 60 years, computers and software have enabled incredible advancements in every field. Nowadays, however, these systems are so complicated that it is difficult, if not impossible, to understand whether they meet some requirement or are able to show some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach to conformance checking, identifying any deviation from the desired behaviour as soon as possible and, where possible, applying corrections. The declarative framework that implements our approach, entirely developed on the promising open source forward-chaining Production Rule System (PRS) named Drools, consists of three components: 1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC); 2. a general-purpose hybrid reasoning module (the first of its genre) merging temporal, semantic, fuzzy and rule-based reasoning; 3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system. The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology allows any deviation from the desired behaviour to be reconciled as soon as it is detected. In conclusion, the proposed methodology brings some advancements to the problem of conformance checking, helping to fill the gap between humans and increasingly complex technology.
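To illustrate the Event Calculus semantics underlying the monitoring module: a fluent holds at time t if some earlier event initiated it and no intervening event terminated it. The fragment below is a minimal, generic EC sketch in Python; the thesis itself implements EC as rules on the Drools engine:

```python
# Minimal, generic Event Calculus fragment (illustrative; the thesis itself
# implements EC as Drools rules): a fluent holds at time t if an earlier event
# initiated it and no intervening event terminated it.
events = [(1, "switch_on"), (5, "switch_off"), (8, "switch_on")]  # (time, action)
initiates = {"switch_on": "light"}     # action -> fluent it makes true
terminates = {"switch_off": "light"}   # action -> fluent it makes false

def holds_at(fluent, t):
    state = False
    for time, action in sorted(events):
        if time >= t:
            break                      # only events strictly before t matter
        if initiates.get(action) == fluent:
            state = True
        elif terminates.get(action) == fluent:
            state = False
    return state

print(holds_at("light", 3), holds_at("light", 6), holds_at("light", 9))
# -> True False True
```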
Abstract:
Undergraduate education has a historical tradition of preparing students to meet the problem-solving challenges they will encounter in work, civic, and personal contexts. This thesis research was conducted to study the role of rhetoric in engineering problem solving and decision making and to pose pedagogical strategies for preparing undergraduate students for workplace problem solving. Exploratory interviews with engineering managers as well as the heuristic analyses of engineering A3 project planning reports suggest that Aristotelian rhetorical principles are critical to the engineer's success: Engineers must ascertain the rhetorical situation surrounding engineering problems; apply and adapt invention heuristics to conduct inquiry; draw from their investigation to find innovative solutions; and influence decision making by navigating workplace decision-making systems and audiences using rhetorically constructed discourse. To prepare undergraduates for workplace problem solving, university educators are challenged to help undergraduates understand the exigence and realize the kairotic potential inherent in rhetorical problem solving. This thesis offers pedagogical strategies that focus on mentoring learning communities in problem-posing experiences that are situated in many disciplinary, work, and civic contexts. Undergraduates build a flexible rhetorical technê for problem solving as they navigate the nuances of relevant problem-solving systems through the lens of rhetorical practice.
Abstract:
Introduction: Over the last decades, Swiss sports clubs have lost their "monopoly" in the market for sports-related services and are increasingly in competition with other sports providers. For many sports clubs, long-term membership cannot be taken for granted. Current research on sports clubs in Switzerland, as for other European countries, confirms the increasing difficulty of achieving long-term member commitment. Looking at recent findings of the Swiss sports club report (Lamprecht, Fischer & Stamm, 2012), it can be noted that a decrease in memberships does not affect all clubs equally. There are sports clubs that, because of their specific situational and structural conditions, have few problems with member fluctuation, while other clubs show considerable declines in membership. Therefore, a clear understanding of the individual and structural factors that trigger and sustain member commitment would help sports clubs tackle this problem more effectively. This situation poses the question: what are the individual and structural determinants that influence the tendency to continue or to quit a membership? Methods: Existing research has extensively investigated the drivers of member commitment at the individual level. As commitment of members usually occurs within an organizational context, the characteristics of the organisation should also be considered; however, this context has been largely neglected in current research. This presentation addresses both the individual characteristics of members and the corresponding structural conditions of sports clubs, resulting in a multi-level framework for investigating the factors of member commitment in sports clubs. Multi-level analysis grants adequate handling of hierarchically structured data (e.g., Hox, 2002). The influences of both the individual and the context level on the stability of memberships are estimated in multi-level models based on a sample of n = 1,434 sports club members from 36 sports clubs. Results: The multi-level analyses indicate that commitment of members is not just an outcome of individual characteristics, such as strong identification with the club, positively perceived communication and cooperation, satisfaction with the club's offers, or voluntary engagement. It is also influenced by club-specific structural conditions: stable memberships are more probable in rural sports clubs and in clubs that explicitly support sociability, whereas sporting-success-oriented goals in clubs have a destabilizing effect. Discussion/Conclusion: The proposed multi-level framework and analysis can open new perspectives for research on member commitment in sports clubs and on other topics and problems of sport organisation research, especially in helping to understand individual behaviour within organizational contexts. References: Hox, J. J. (2002). Multilevel analysis: Techniques and applications. Mahwah: Lawrence Erlbaum. Lamprecht, M., Fischer, A., & Stamm, H.-P. (2012). Die Schweizer Sportvereine – Strukturen, Leistungen, Herausforderungen. Zurich: Seismo.
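A two-level model of this kind (members nested within clubs) might be estimated as sketched below on synthetic data; the variable names are illustrative, and statsmodels' mixedlm is just one standard tool, not necessarily the one used in the study:

```python
# Hedged sketch of a two-level (members within clubs) random-intercept model
# on synthetic data; variable names are illustrative, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clubs, per_club = 36, 40
club = np.repeat(np.arange(n_clubs), per_club)
club_effect = rng.normal(0, 0.5, n_clubs)[club]        # level-2 variation
identification = rng.normal(0, 1, n_clubs * per_club)  # level-1 predictor
rural = rng.integers(0, 2, n_clubs)[club]              # level-2 predictor
commitment = (0.6 * identification + 0.4 * rural + club_effect
              + rng.normal(0, 1, n_clubs * per_club))

df = pd.DataFrame({"commitment": commitment, "identification": identification,
                   "rural": rural, "club": club})
# A random intercept per club captures club-level heterogeneity.
model = smf.mixedlm("commitment ~ identification + rural", df, groups=df["club"])
print(model.fit().summary())
```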
Abstract:
A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints, which in turn alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
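For context, the envelopment form of an input-oriented CCR DEA model is a small LP per decision-making unit; Banker's (1993) test then compares the inefficiency distributions obtained with the original versus the aggregated inputs. A toy sketch of the efficiency LP follows (illustrative data, not the paper's):

```python
# Toy sketch of an input-oriented CCR DEA efficiency LP (illustrative data,
# not the paper's); Banker's (1993) test would compare the inefficiency
# distributions obtained with original versus aggregated inputs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])   # units x inputs
Y = np.array([[1.0], [1.0], [1.5]])                  # units x outputs

def ccr_efficiency(k):
    """Efficiency of unit k: min theta s.t. a composite unit dominates it."""
    n = len(X)
    c = np.zeros(n + 1)
    c[0] = 1.0                                       # variables: [theta, lambdas]
    A_in = np.hstack([-X[[k]].T, X.T])               # sum_j lam_j x_ij <= theta x_ik
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # sum_j lam_j y_rj >= y_rk
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(X.shape[1]), -Y[k]]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

print([round(ccr_efficiency(k), 3) for k in range(len(X))])
```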
Abstract:
The aim of this work is to solve a question raised in average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimal. Moreover, the optimality of the obtained solution is established.
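In generic notation (a schematic restatement of the abstract, not the paper's exact symbols), oversampling turns reconstruction into a left-inversion problem for a polynomial matrix, which reduces to a matrix pencil for a suitable sampling period:

```latex
% Schematic restatement in generic notation; the symbols are illustrative.
\[
  \text{find } \mathsf{U}(z) \text{ such that } \mathsf{U}(z)\,\mathsf{G}(z) = I_r,
  \qquad \mathsf{G}(z) \in \mathbb{C}^{s \times r}[z],\ s > r \ \text{(oversampling)},
\]
\[
  \mathsf{G}(z) = A + z\,B \quad \text{(a matrix pencil, for a suitable sampling period)}.
\]
% A polynomial left inverse U(z) supplies the coefficients of the FIR filter
% bank that yields the compactly supported reconstruction functions.
```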
Abstract:
The fuzzy min–max neural network classifier is a supervised learning method that takes the hybrid neural-network and fuzzy-systems approach. All input variables in the network are required to be continuously valued, and this can be a significant constraint in many real-world situations where there are not only quantitative but also categorical data. The usual way of dealing with such variables is to replace the categories by numerical values and treat them as if they were continuously valued, but this method implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method: the procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. This provides greater flexibility and wider application. The proposed method is then applied to missing data imputation in voting intention polls. The micro data of this type of poll (the set of respondents' individual answers to the questions) are especially well suited for evaluating the method, since they include a large number of numerical and categorical attributes.
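As background, a Simpson-style fuzzy min–max membership for continuous inputs measures how far a point falls outside a class hyperbox; the article's contribution adds categorical fuzzy sets and operations on top of such machinery. A compact, illustrative sketch (not the article's extended architecture):

```python
# Compact sketch of a Simpson-style fuzzy min-max membership for continuous
# inputs; the article's extension adds categorical fuzzy sets on top of this.
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Degree to which point x belongs to the hyperbox [v, w].

    x: input vector; v, w: hyperbox min and max corners; gamma: sensitivity
    controlling how fast membership decays outside the box.
    """
    below = np.maximum(0, np.minimum(1, 1 - gamma * np.maximum(0, v - x)))
    above = np.maximum(0, np.minimum(1, 1 - gamma * np.maximum(0, x - w)))
    return float(np.mean(np.minimum(below, above)))

box_min, box_max = np.array([0.2, 0.3]), np.array([0.5, 0.6])
print(membership(np.array([0.4, 0.5]), box_min, box_max))  # inside -> 1.0
print(membership(np.array([0.9, 0.5]), box_min, box_max))  # partly outside < 1
```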
Abstract:
Zeldovič's article "On Russian Dative Reflexive Constructions: Accidental or Compositional" is very interesting. It contains a good deal of insightful observations and is painstakingly argued. Its research object is the Russian dative reflexive construction (DRC), as in Ивану не работается 'Ivan does not feel like working'. The aim of the article is to show that the DRC is fully compositional. Like many other works by Zeldovič, the article is written from the radical-pragmatic perspective and constitutes a very good illustration of this trend in linguistic research. The language material that it analyzes has often been investigated within more traditional frameworks, especially in Russian linguistics, which makes Zeldovič's novel approach to the old problem particularly interesting. In this short note I would like, by way of discussion, to address two problems connected not so much with the DRC itself as with methodological issues concerning compositionality. I will dwell on two aspects: the question of how we understand the very concept of compositionality, and what instruments we employ to demonstrate it.