838 results for Multi-objective analysis


Relevance:

80.00%

Publisher:

Abstract:

An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution takes place in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes) interconnected by a set of pipelines that transport oil and derivatives between adjacent areas. Constraints related to storage limits, delivery deadlines, source availability and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, the costs incurred due to losses at interfaces cannot be disregarded. Cost also depends on pumping expenses, which are mostly due to electricity. Since industrial electricity tariffs vary over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to distribute oil derivatives in networks while minimizing three objectives simultaneously: delivery time, losses due to interfaces and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. The hybridizations focus mainly on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures, named MOTA/D, NSTA and SPETA, are applied to the problem. An experimental study compares the algorithms on thirty test cases. Pareto-compliant quality indicators are used to analyse the results, and the significance of the results is evaluated with non-parametric statistical tests.
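As a minimal illustration of the comparison underlying such multi-objective methods, the Java sketch below implements a standard Pareto-dominance test over the three minimization objectives named in the abstract (delivery time, interface losses, electricity cost). It is not the authors' implementation; the class and field names are hypothetical.

```java
// Minimal sketch (not the authors' code): Pareto dominance between two
// candidate pipeline schedules evaluated on the three minimization
// objectives mentioned in the abstract. Field names are illustrative.
public class ScheduleDominance {

    /** Objective vector of one candidate schedule (all objectives to be minimized). */
    static class Objectives {
        final double deliveryTime;   // total delivery time
        final double interfaceLoss;  // losses due to product interfaces
        final double energyCost;     // pumping/electricity cost

        Objectives(double deliveryTime, double interfaceLoss, double energyCost) {
            this.deliveryTime = deliveryTime;
            this.interfaceLoss = interfaceLoss;
            this.energyCost = energyCost;
        }

        double[] asArray() {
            return new double[] { deliveryTime, interfaceLoss, energyCost };
        }
    }

    /** Returns true if a dominates b: no worse in every objective, strictly better in at least one. */
    static boolean dominates(Objectives a, Objectives b) {
        double[] x = a.asArray();
        double[] y = b.asArray();
        boolean strictlyBetterSomewhere = false;
        for (int i = 0; i < x.length; i++) {
            if (x[i] > y[i]) return false;              // worse in one objective: no dominance
            if (x[i] < y[i]) strictlyBetterSomewhere = true;
        }
        return strictlyBetterSomewhere;
    }

    public static void main(String[] args) {
        Objectives s1 = new Objectives(120.0, 3.5, 800.0);
        Objectives s2 = new Objectives(130.0, 3.5, 950.0);
        System.out.println("s1 dominates s2? " + dominates(s1, s2)); // true
        System.out.println("s2 dominates s1? " + dominates(s2, s1)); // false
    }
}
```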

Relevance:

80.00%

Publisher:

Abstract:

Multi-objective problems may have many optimal solutions, which together form the Pareto optimal set. A class of heuristic algorithms for such problems, called optimizers in this work, produces approximations of this optimal set. The approximation set kept by the optimizer may be limited or unlimited. The benefit of using an unlimited archive is the guarantee that all nondominated solutions generated during the process will be saved. However, because of the large number of solutions that can be generated, keeping such an archive and frequently comparing new solutions with the stored ones may demand a high computational cost. The alternative is to use a limited archive. The problem that emerges from this choice is the need to discard nondominated solutions when the archive is full. Several techniques have been proposed to handle this problem, but investigations show that none of them can reliably prevent the deterioration of the archive. This work investigates a technique to be used together with ideas previously proposed in the literature for dealing with limited archives. The technique consists of keeping discarded solutions in a secondary archive and periodically recycling them, bringing them back into the optimization. Three recycling methods are presented. To verify whether these ideas can improve the archive content during the optimization, they were implemented together with other techniques from the literature. A computational experiment with the NSGA-II, SPEA2, PAES, MOEA/D and NSGA-III algorithms, applied to many classes of problems, is presented. The potential and the difficulties of the proposed techniques are evaluated with statistical tests.
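The recycling idea can be sketched as follows (Java; class and method names are hypothetical and this is not the thesis implementation): a bounded primary archive that, instead of discarding overflow solutions, parks them in a secondary archive from which the optimizer can periodically pull them back. Choosing which solution to displace would normally rely on a density or crowding criterion; the sketch simply displaces the oldest entry.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal sketch (illustrative only): a bounded nondominated archive that,
// instead of discarding overflow solutions, parks them in a secondary
// "recycle" archive and periodically offers them back to the optimizer.
public class RecyclingArchive<S> {

    private final int capacity;
    private final List<S> primary = new ArrayList<>();
    private final Deque<S> recycleBin = new ArrayDeque<>();

    public RecyclingArchive(int capacity) {
        this.capacity = capacity;
    }

    /** Adds a nondominated solution; on overflow the displaced solution is recycled, not lost. */
    public void add(S solution) {
        primary.add(solution);
        if (primary.size() > capacity) {
            // A real implementation would pick the victim with a density or
            // crowding criterion; here we simply displace the oldest entry.
            S discarded = primary.remove(0);
            recycleBin.addLast(discarded);
        }
    }

    /** Called periodically by the optimizer: returns up to n stored solutions to re-enter the search. */
    public List<S> recycle(int n) {
        List<S> recycled = new ArrayList<>();
        while (recycled.size() < n && !recycleBin.isEmpty()) {
            recycled.add(recycleBin.pollFirst());
        }
        return recycled;
    }

    public List<S> contents() {
        return new ArrayList<>(primary);
    }
}
```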

Relevance:

80.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, in addition to linear costs associated with each edge, quadratic costs associated with each pair of edges must be considered. The quadratic costs are due to interaction costs between edges. When interactions occur only between adjacent edges, the problem is called the Adjacent Only Quadratic Minimum Spanning Tree (AQMST). Both QMST and AQMST are NP-hard and model a number of real-world applications involving infrastructure network design. In the mono-objective versions of these problems, linear and quadratic costs are summed. However, real-world applications often deal with conflicting objectives. In those cases, considering linear and quadratic costs separately is more appropriate, and multi-objective optimization provides a more realistic modelling. This work investigates exact and heuristic algorithms for the Bi-objective Adjacent Only Quadratic Spanning Tree Problem. The following techniques are proposed: backtracking, branch-and-bound, Pareto Local Search, Greedy Randomized Adaptive Search Procedure, Simulated Annealing, NSGA-II, Transgenetic Algorithm, Particle Swarm Optimization and a hybridization of the Transgenetic Algorithm with the MOEA-D technique. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances proposed in the literature.
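To make the two objectives concrete, the sketch below (Java, illustrative only, not taken from the thesis) evaluates a given spanning tree under both objectives: the sum of linear edge costs and the sum of interaction costs over pairs of adjacent edges, where two edges are adjacent when they share a vertex. The interaction-cost matrix q is a hypothetical input.

```java
import java.util.List;

// Minimal sketch (illustrative, not the thesis code): evaluating the two
// objectives of the bi-objective AQMST for a given spanning tree.
public class AqmstEvaluation {

    /** An undirected edge with a linear cost. */
    static class Edge {
        final int u, v;
        final double linearCost;
        Edge(int u, int v, double linearCost) { this.u = u; this.v = v; this.linearCost = linearCost; }
        boolean adjacentTo(Edge other) {
            return u == other.u || u == other.v || v == other.u || v == other.v;
        }
    }

    /** First objective: sum of linear edge costs. */
    static double linearCost(List<Edge> tree) {
        double total = 0.0;
        for (Edge e : tree) total += e.linearCost;
        return total;
    }

    /**
     * Second objective: sum of interaction costs over pairs of adjacent edges
     * in the tree. q[i][j] holds the interaction cost between the i-th and
     * j-th edges of the tree (hypothetical input data).
     */
    static double adjacentQuadraticCost(List<Edge> tree, double[][] q) {
        double total = 0.0;
        for (int i = 0; i < tree.size(); i++) {
            for (int j = i + 1; j < tree.size(); j++) {
                if (tree.get(i).adjacentTo(tree.get(j))) {
                    total += q[i][j];
                }
            }
        }
        return total;
    }
}
```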

Relevance:

80.00%

Publisher:

Abstract:

The present study includes theoretical and methodological reflections on quality of life in the city of Uberlândia, Minas Gerais. It starts from the premise that quality of life is multifactorial and permanently under construction, with the main objective of analysing it as one of the components of the Healthy Cities movement. The theoretical research focused on the concepts of healthy cities, quality of life, health, sustainability, well-being, happiness, indexes and indicators. By combining multiple research strategies, documentary and field work of a quantitative and qualitative character, this exploratory and descriptive research offers a contribution to studies on quality of life in cities. It is proposed that such studies start from a working concept, a notion of quality of life adequate to the particular reality under study, which may draw on already established concepts such as health; this step is important in exploratory research. Studies may address objective aspects, subjective aspects, or both. In the objective dimension, which is the most common approach, the variables and indicators traditionally considered relate to urban services (health, education, leisure, security, mobility), dwelling (quantitative and qualitative housing deficit), urban structure (density and mixed uses), socioeconomic characteristics (age, income, education), urban infrastructure (sanitation, communication) and governance (social mobilization and participation). For the subjective dimension, more recent and less usual, it is proposed to consider (dis)satisfaction, the personal assessment of the objective aspects. In conclusion, being intrinsically related to health, quality of life also has a number of determinants, and reaching an ideal of quality of life depends on the action of all citizens, based on the recognition of networks and territories, in an interscalar and intersectoral perspective. Therefore, emphasis is given to the potential of tools such as observatories to monitor and intervene in reality, aiming at the building process of healthy cities.

Relevance:

80.00%

Publisher:

Abstract:

Purpose: Recent studies have documented a link between axial myopia and ciliary muscle morphology; yet, the variation in biometric characteristics of the emmetropic ciliary muscle is not fully known. Ciliary muscle morphology, including symmetry, was investigated between both eyes of emmetropic participants and correlated with ocular biometric parameters. Methods: Anterior segment optical coherence tomography (Zeiss, Visante) was utilised to image both eyes of 49 emmetropic participants (mean spherical equivalent refractive error (MSE) ≥ -0.55; < +0.75 D), aged 19 to 26 years. High resolution images were obtained of nasal and temporal aspects of the ciliary muscle in the relaxed state. MSE of both eyes was recorded using the Grand Seiko WAM 5500; axial length (AXL), anterior chamber depth (ACD) and lens thickness (LT) of the right eye were obtained using the Haag-Streit Lenstar LS 900 biometer. A bespoke semi-objective analysis programme was used to measure a range of ciliary muscle parameters. Results: Temporal ciliary muscle overall length (CML) was greater than nasal CML in both eyes (right: 3.58 ± 0.40 mm and 3.85 ± 0.39 mm for nasal and temporal aspects, respectively, P < 0.001; left: 3.65 ± 0.35 mm and 3.88 ± 0.41 mm for nasal and temporal aspects, respectively, P < 0.001). Temporal ciliary muscle thickness (CMT) was greater than nasal CMT at 2 mm and 3 mm from the scleral spur (CM2 and CM3, respectively) in each eye (right CM2: 0.29 ± 0.05 mm and 0.32 ± 0.05 mm for nasal and temporal aspects, respectively, P < 0.001; left CM2: 0.30 ± 0.05 mm and 0.32 ± 0.05 mm for nasal and temporal aspects, respectively, P < 0.001; right CM3: 0.13 ± 0.05 mm and 0.16 ± 0.04 mm for nasal and temporal aspects, respectively, P < 0.001; left CM3: 0.14 ± 0.04 mm and 0.17 ± 0.05 mm for nasal and temporal aspects, respectively, P < 0.001). AXL was positively correlated with ciliary muscle anterior length (AL) (e.g. P < 0.001, r2 = 0.262 for the left temporal aspect), CML (P = 0.003, r2 = 0.175 for the right nasal aspect) and ACD (P = 0.01, r2 = 0.181). Conclusions: Morphological characteristics of the ciliary muscle in emmetropic eyes display high levels of symmetry between the eyes. Greater CML and AL are linked to greater AXL and ACD, indicating ciliary muscle growth with normal ocular development.

Relevance:

80.00%

Publisher:

Abstract:

As anthropogenic activities push many ecosystems toward different functional regimes, the resilience of social-ecological systems is becoming a pressing issue. Local actors, involved in a wide variety of groups (ranging from local, independent initiatives to large formal institutions), can act on these issues by collaborating on the development, promotion or implementation of practices better aligned with what the environment can provide. Complex networks emerge from these repeated collaborations, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SES) in which they take part. The topology of actor networks that favours the resilience of their SES is characterized by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing the homogenization of the network) and closer to their own interests; it must be well connected and easily synchronizable, to facilitate consensus and to increase social capital and learning capacity; and it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or the sidelining of certain actors. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyse the structural qualities of empirical actor networks. However, some of them are by nature incompatible with one another. For example, the modularity of a network cannot increase at the same rate as its connectivity, and the latter cannot be improved while its robustness is also being improved. This obstacle makes it difficult to build a global measure, because the degree to which an actor network helps improve the resilience of its SES cannot be a simple sum of the characteristics listed above, but rather the result of a subtle trade-off between them. The work presented here has three objectives: (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyse an empirical network in light of, among other things, these structural qualities. This thesis is organized around an introduction and four chapters numbered 2 to 5. Chapter 2 is a literature review on the resilience of SES. It identifies a set of structural characteristics (and the corresponding network measures) linked to improved resilience in SES. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews made it possible to compile a list of the actors involved in the co-management of biodiversity on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1,180 relations.
Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded. The method has two steps: first, an optimization algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a trade-off between high levels of modularity, connectivity and robustness; second, an empirical network (such as that of the Eyre Peninsula) is compared with the archetypal network through a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in Chapter 4. As is usual for this kind of algorithm, the simulated annealing used there projected the dimensions of the multi-objective problem onto a single dimension (as a weighted average). While this technique gives very good individual results, it produces only one solution among the many possible trade-offs between the different objectives. To explore these trade-offs more thoroughly, we propose a multi-objective simulated annealing algorithm which, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which focuses on the social part of social-ecological systems, improves our understanding of the actor structures that contribute to the resilience of SES. It shows that while some characteristics beneficial to resilience are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, opening the way to, for example, comparisons between case studies or the monitoring of actor networks over time. In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups in coordinating collaboration and knowledge exchange between actors with potentially divergent interests.
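The weighted-average scalarization described for Chapter 4, together with the Metropolis acceptance rule of simulated annealing, can be sketched as follows (Java; the network-measure functions are placeholders, and this is not the thesis code).

```java
import java.util.Random;

// Minimal sketch (not the thesis implementation): weighted-sum scalarization
// of the three structural objectives, and the acceptance rule of a
// simulated-annealing step on that scalar. The measure functions are
// placeholders for real network metrics (modularity, connectivity, robustness).
public class ScalarizedAnnealingStep {

    interface NetworkScore {
        double modularity(Object network);
        double connectivity(Object network);
        double robustness(Object network);
    }

    /** Projects the multi-objective vector onto one dimension as a weighted average. */
    static double scalarize(NetworkScore m, Object network, double wMod, double wCon, double wRob) {
        return wMod * m.modularity(network)
             + wCon * m.connectivity(network)
             + wRob * m.robustness(network);
    }

    /** Metropolis acceptance: always accept improvements, sometimes accept worse candidates. */
    static boolean accept(double currentScore, double candidateScore, double temperature, Random rng) {
        if (candidateScore >= currentScore) return true;        // the scalar is maximized
        double p = Math.exp((candidateScore - currentScore) / temperature);
        return rng.nextDouble() < p;
    }
}
```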

Relevance:

80.00%

Publisher:

Abstract:

Ignoring small-scale heterogeneities in Arctic land cover may bias estimates of water, heat and carbon fluxes in large-scale climate and ecosystem models. We investigated subpixel-scale heterogeneity in CHRIS/PROBA and Landsat-7 ETM+ satellite imagery over ice-wedge polygonal tundra in the Lena Delta of Siberia, and the associated implications for evapotranspiration (ET) estimation. Field measurements were combined with aerial and satellite data to link fine-scale (0.3 m resolution) with coarse-scale (up to 30 m resolution) land cover data. A large portion of the total wet tundra (80%) and water body area (30%) appeared in the form of patches less than 0.1 ha in size, which could not be resolved with satellite data. Wet tundra and small water bodies represented about half of the total ET in summer. Their contribution was reduced to 20% in fall, during which ET rates from dry tundra were highest instead. Inclusion of subpixel-scale water bodies increased the total water surface area of the Lena Delta from 13% to 20%. The actual land/water proportions within each composite satellite pixel were best captured with Landsat data using a statistical downscaling approach, which is recommended for reliable large-scale modelling of water, heat and carbon exchange from permafrost landscapes.
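The aggregation logic underlying such estimates (the flux of a coarse pixel as the area-weighted sum of the fluxes of its subpixel cover types) can be written as a short worked example; the Java sketch below uses hypothetical cover fractions and ET rates, not the study's data.

```java
// Minimal sketch (illustrative values, not the study's data): area-weighted
// evapotranspiration (ET) of a coarse pixel from subpixel land-cover fractions.
public class SubpixelEt {

    /** ET of the composite pixel = sum over cover classes of (area fraction x class ET rate). */
    static double compositeEt(double[] coverFractions, double[] etRates) {
        double et = 0.0;
        for (int i = 0; i < coverFractions.length; i++) {
            et += coverFractions[i] * etRates[i];
        }
        return et;
    }

    public static void main(String[] args) {
        // Hypothetical summer fractions: dry tundra, wet tundra, small water bodies.
        double[] fractions = { 0.60, 0.25, 0.15 };
        double[] etRatesMmPerDay = { 1.0, 2.2, 2.5 };
        System.out.printf("Composite ET: %.2f mm/day%n", compositeEt(fractions, etRatesMmPerDay));
    }
}
```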

Relevance:

80.00%

Publisher:

Abstract:

Ticket distribution channels for live music events have been revolutionised through the increased take-up of internet technologies, and the music supply chain has evolved into a multi-channel value network. The assumption that this creates increased consumer autonomy and improved service quality is explored here through a case study of the ticket pre-sale for the US leg of the Depeche Mode 2005–06 World Tour, which utilises an innovative virtual channel strategy, promoted as a service to loyal fans. A multi-method analysis, adopting Kozinets' (2002) netnography methodology, is employed to map responses of the band's serious fan base on an internet message board (IMB) throughout the tour pre-sale. The analysis focuses on concerns of pricing, ethics, scope of the offer, use of technology, service quality and perceived brand performance fit of channel partners. Findings indicate that fans' behaviour is unpredictable in response to channel partners' performance, and that such offers need careful management to avoid alienation of loyal consumers.

Relevance:

80.00%

Publisher:

Abstract:

Franz Liszt has all too often been dismissed as a virtuosic showman, despite the fact that several of his works have gained great praise and attracted scholarly engagement. Yet one also finds striking development of formal design and tonal harmony in many of the works for his principal compositional medium, the piano. This paper seeks to explore the practical application of James A. Hepokoski and Warren Darcy's 'Sonata Theory' to Liszt's magnum opus for the instrument, the Sonata in B Minor.

I shall first consider the historical analyses of the work that deal with its structural design as it pertains to the paradigm of Classical sonata form. Previous research reveals two main theoretical camps: those in favour of a multi-movement analysis (with conflicting hypotheses therein) and those in favour of a single-movement sonata form. An understanding of these historical conceptions of the piece allows one to highlight areas of conflict and offer a new solution.

Finally, I shall use Sonata Theory to survey the Sonata in B Minor's landscape in a new light. The title 'Sonata' has clear generic implications, many of which are met by Liszt; 'Sonata Theory' provides a model with which to outline the compositional deformations employed by the composer and the implications of this practice. In particular, I offer new perspectives on the validity of the double-function form, give insight into the rhetorical layout of a rotational discourse, and propose a nuanced analysis befitting this striking work.

Relevance:

80.00%

Publisher:

Abstract:

A public organization has a customer service section responsible for order entry from customers for errands concerning real estate and business equipment, cleaning, transport operations and handicap appliances. According to the co-ordinators at the hospitals within the organization, customers are asking for staff they can talk to in person, that is, someone to speak with face to face. Customers find it sometimes difficult to call customer service or to use the web form, which are the only communication channels in the current situation. The proposed change is to complement customer service with local service centers in every hospital. The purpose of this study is to evaluate the change proposal by weighing efficiency against working environment, using multi-criteria analysis. To achieve this goal, a decision model was designed in the decision tool DecideIT. The aim of the study is to recommend that decision makers choose one of the options on grounds that are as rational as possible. The result of the study showed that the preferred alternative is not to complement customer service with local service centers. For the most part, the result depends on the survey (representing the working environment criterion), which showed that the majority of customers do not request a person to speak with face to face at all.
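The weighing between the two criteria can be illustrated with a simple additive multi-criteria sketch (Java). The weights and scores below are hypothetical, and the sketch does not reproduce the DecideIT model used in the study.

```java
// Minimal sketch (hypothetical weights and scores, not the DecideIT model):
// additive multi-criteria scoring of the two alternatives compared in the study.
public class ServiceCenterDecision {

    /** Weighted additive value of one alternative over the criteria. */
    static double weightedValue(double[] weights, double[] scores) {
        double total = 0.0;
        for (int i = 0; i < weights.length; i++) {
            total += weights[i] * scores[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // Criteria: efficiency, working environment (weights sum to 1).
        double[] weights = { 0.5, 0.5 };

        // Illustrative scores on a 0..1 scale for the two alternatives.
        double[] keepCurrentCustomerService = { 0.7, 0.6 };
        double[] addLocalServiceCenters     = { 0.4, 0.5 };

        System.out.println("Current customer service: "
                + weightedValue(weights, keepCurrentCustomerService));
        System.out.println("Add local service centers: "
                + weightedValue(weights, addLocalServiceCenters));
    }
}
```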

Relevance:

80.00%

Publisher:

Abstract:

Nowadays, many interesting, useful and imaginative applications are appearing on the Android software market. For guitar fans, some related apps are very convenient: a guitar tuner app saves people from carrying a physical tuner all the time, some apps simulate a real guitar, and some apps provide simple lessons that let people learn the basics. But the apps that teach cannot really monitor the player; they only give instructions and hope people will follow them. My project is therefore to design an app that can detect, in real time, whether the user is playing correctly. Guitar chords are usually the first thing new guitar players learn, and a chord is a set of notes combined in a regulated way according to music theory, while 'pitch' is the term for what distinguishes one note from other notes or from noise. So the problem here is to perform multi-pitch analysis in real time. It is also necessary to know some basics of digital signal processing (DSP), because digital signals are more convenient for computers to analyse than analog signals. I then found an audio processing Java library, TarsosDSP, and tried to apply it to my Android project.
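Assuming TarsosDSP is added as a dependency and the RECORD_AUDIO permission has been granted, a starting point for the real-time analysis could look like the sketch below. The class and method names follow TarsosDSP's public API as far as I know; note that the YIN estimator used here tracks a single dominant pitch per buffer, so chord (multi-pitch) detection would require further processing on top.

```java
import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.AudioEvent;
import be.tarsos.dsp.io.android.AudioDispatcherFactory;
import be.tarsos.dsp.pitch.PitchDetectionHandler;
import be.tarsos.dsp.pitch.PitchDetectionResult;
import be.tarsos.dsp.pitch.PitchProcessor;

// Sketch of real-time pitch tracking with TarsosDSP on Android (assumption:
// the TarsosDSP Android build is on the classpath and RECORD_AUDIO permission
// has been granted). It detects a single dominant pitch per audio buffer;
// chord (multi-pitch) recognition would need additional analysis on top.
public class PitchMonitor {

    public static void start() {
        final int sampleRate = 22050;
        final int bufferSize = 1024;
        final int overlap = 0;

        // Pulls audio buffers from the device microphone.
        AudioDispatcher dispatcher =
                AudioDispatcherFactory.fromDefaultMicrophone(sampleRate, bufferSize, overlap);

        // Called for every processed buffer with the estimated pitch in Hz (-1 if none).
        PitchDetectionHandler handler = new PitchDetectionHandler() {
            @Override
            public void handlePitch(PitchDetectionResult result, AudioEvent event) {
                float pitchHz = result.getPitch();
                if (pitchHz > 0) {
                    System.out.println("Detected pitch: " + pitchHz + " Hz");
                }
            }
        };

        dispatcher.addAudioProcessor(
                new PitchProcessor(PitchProcessor.PitchEstimationAlgorithm.YIN,
                        sampleRate, bufferSize, handler));

        // AudioDispatcher implements Runnable; run it off the UI thread.
        new Thread(dispatcher, "audio-dispatcher").start();
    }
}
```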

Relevance:

80.00%

Publisher:

Abstract:

This thesis deals with the control of stock in an inventory, focusing on inventory placement. The purpose of the thesis is to reduce the transport distance travelled within the main warehouse while picking items. This is achieved by restructuring inventory placement with respect to how frequently items are picked and how much they weigh. In particular, the literature and the data collected from the company's business system laid the foundation for the thesis; interviews and observations also contributed to the data collection. To fulfil the aim and produce useful results, two questions were developed: which attributes should determine the position of an item in the warehouse, and how can a more effective inventory structure be obtained? The authors have jointly produced a proposal for future inventory placement in terms of picking frequency and weight. Initially, a situation analysis was conducted to identify known problems with the current placement and storage systems. The problem identified was that inventory placement takes no account of picking frequency, and items were spread throughout the whole warehouse. To determine the most frequently picked items, an ABC analysis was conducted. To take the additional criterion, weight, into account, a multi-criteria analysis was performed in combination with the ABC analysis. The results of the combined analysis provided the basis for drawing up concepts for future inventory placement. The proposal includes optimized placements, in different zones, of the most frequently picked items, with weight as an additional criterion.
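A simplified version of the combined analysis could look like the Java sketch below (the thresholds and the zone rule are hypothetical, not the thesis's figures): items are ranked into ABC classes by their share of total picks, and heavy, moderately frequent items are promoted to a more accessible zone.

```java
import java.util.Comparator;
import java.util.List;

// Minimal sketch (hypothetical thresholds, not the thesis's figures): ABC
// classification by picking frequency combined with item weight to suggest a
// storage zone, heavy items being moved toward more accessible zones.
public class InventoryZoning {

    static class Item {
        final String id;
        final int picksPerYear;
        final double weightKg;
        Item(String id, int picksPerYear, double weightKg) {
            this.id = id; this.picksPerYear = picksPerYear; this.weightKg = weightKg;
        }
    }

    /** ABC class from the cumulative share of picks: A ~ top 80%, B ~ next 15%, C ~ rest. */
    static char abcClass(double cumulativeShare) {
        if (cumulativeShare <= 0.80) return 'A';
        if (cumulativeShare <= 0.95) return 'B';
        return 'C';
    }

    /** Assigns zones: A items closest to dispatch; heavy B items are promoted one zone. */
    static void assignZones(List<Item> items, double heavyThresholdKg) {
        items.sort(Comparator.comparingInt((Item it) -> it.picksPerYear).reversed());
        long totalPicks = items.stream().mapToLong(it -> it.picksPerYear).sum();
        long running = 0;
        for (Item it : items) {
            running += it.picksPerYear;
            char zone = abcClass((double) running / totalPicks);
            if (it.weightKg >= heavyThresholdKg && zone == 'B') {
                zone = 'A'; // heavy, moderately frequent items get a closer zone
            }
            System.out.println(it.id + " -> zone " + zone);
        }
    }
}
```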