829 results for Multi-classifier systems
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and making small changes to a solution is difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning can thus amount to 'counting' in the case of multinomial distributions.
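Since the abstract notes that, with a known network structure and fully observed variables, learning reduces to counting, the following minimal sketch illustrates that step for a chain-structured network over rule choices (Python; the rule-string data, chain structure, and Laplace smoothing are illustrative assumptions, not taken from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

n_stages, n_rules = 5, 3                      # schedule built in 5 steps, 3 candidate rules per step
# hypothetical set of "promising" rule strings (one rule index per construction step)
population = rng.integers(0, n_rules, size=(20, n_stages))

# assume a chain-structured network: the rule at stage t depends on the rule at stage t-1
def learn_cpts(strings, n_rules):
    """Conditional probability tables estimated by counting (multinomial, Laplace-smoothed)."""
    p0 = np.bincount(strings[:, 0], minlength=n_rules) + 1.0
    p0 /= p0.sum()
    cpts = []
    for t in range(1, strings.shape[1]):
        counts = np.ones((n_rules, n_rules))          # Laplace smoothing
        for prev, curr in zip(strings[:, t - 1], strings[:, t]):
            counts[prev, curr] += 1
        cpts.append(counts / counts.sum(axis=1, keepdims=True))
    return p0, cpts

def sample_string(p0, cpts):
    """Generate a new rule string node by node from the learned distributions."""
    s = [rng.choice(len(p0), p=p0)]
    for cpt in cpts:
        s.append(rng.choice(cpt.shape[1], p=cpt[s[-1]]))
    return s

p0, cpts = learn_cpts(population, n_rules)
print(sample_string(p0, cpts))
```

New strings sampled this way would replace weaker members of the population according to fitness, and the tables would then be re-estimated, as described above.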
In the LCS approach, each rule has a strength that indicates its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
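A minimal sketch of the three LCS steps described above, before the references, might look as follows (Python; the reward value and the placeholder fitness evaluation are assumptions for illustration only):

```python
import random

n_stages, n_rules = 5, 3
strength = [[1.0] * n_rules for _ in range(n_stages)]   # initialization: constant strengths

def roulette(weights):
    """Roulette-wheel selection: pick an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def build_schedule():
    """Select one rule per construction step using the current strengths."""
    return [roulette(strength[t]) for t in range(n_stages)]

def reinforce(solution, reward=0.1):
    """Reinforce the strengths of the rules used in the previous (good) solution."""
    for t, rule in enumerate(solution):
        strength[t][rule] += reward        # unused rules keep their strength unchanged

best = build_schedule()      # in practice, chosen by a problem-specific fitness function
reinforce(best)
print(strength)
```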
Abstract:
Robotics is an emergent branch of engineering that involves the conception, manufacture, and control of robots. It is a multidisciplinary field that combines electronics, design, computer science, artificial intelligence, mechanics, and nanotechnology. Its evolution results in machines that are able to perform tasks with some level of complexity. Multi-agent systems are a research topic within robotics, as they allow higher-complexity problems to be solved through the execution of simple routines. Robotic soccer allows the study and development of robotics and multi-agent systems, as the agents have to work together as a team while dealing with many of the problems found in everyday life, such as adapting to a highly dynamic environment like that of a soccer game. CAMBADA is the robotic soccer team of the IRIS research group at IEETA, composed of teachers, researchers, and students of the University of Aveiro, whose main annual objective is participation in the RoboCup Middle Size League. The purpose of this work is to improve coordination in set-piece situations. This thesis introduces a new behavior and adapts the existing ones for offensive situations, and proposes a new positioning method for defensive situations. The developed work was incorporated into the competition software of the robots, which allows the experimental results, obtained both in simulation and with the physical robots in the laboratory, to be presented in this dissertation.
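As a purely illustrative sketch, and not the positioning method actually proposed in the thesis, a defensive set-piece rule of the kind discussed could place free robots along the line between the ball and the defended goal, outside the regulation distance from the ball (all names and values below are hypothetical):

```python
import numpy as np

def defensive_positions(ball, own_goal, n_robots, min_dist=2.0, spacing=0.8):
    """Hypothetical example: place robots on the ball-to-goal line,
    starting at the regulation distance from the ball."""
    ball, own_goal = np.asarray(ball, float), np.asarray(own_goal, float)
    direction = (own_goal - ball) / np.linalg.norm(own_goal - ball)
    return [ball + direction * (min_dist + i * spacing) for i in range(n_robots)]

for p in defensive_positions(ball=(3.0, 1.0), own_goal=(-9.0, 0.0), n_robots=3):
    print(np.round(p, 2))
```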
Abstract:
Internship report presented to obtain the degree of Master in Education and Multimedia Communication.
Abstract:
Malware is a foundational component of cyber crime that enables an attacker to modify the normal operation of a computer or access sensitive, digital information. Despite the extensive research performed to identify such programs, existing schemes fail to detect evasive malware, an increasingly popular class of malware that can alter its behavior at run-time, making it difficult to detect using today's state-of-the-art malware analysis systems. In this thesis, we present DVasion, a comprehensive strategy that exposes such evasive behavior through a multi-execution technique. DVasion successfully detects behavior that would have been missed by traditional, single-execution approaches, while addressing the limitations of previously proposed multi-execution systems. We demonstrate the accuracy of our system through strong parallels with existing work on evasive malware, as well as uncover hidden behavior within 167 of 1,000 samples.
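The multi-execution idea can be illustrated schematically: run the same sample under differently configured environments and flag behavior that appears in one environment but not another. The sketch below uses invented behavior traces and is a conceptual illustration, not DVasion's actual implementation:

```python
# Conceptual illustration of multi-execution analysis: behaviors that appear only
# under some environment configurations suggest evasive, environment-sensitive code.
trace_bare_metal = {"read registry", "resolve c2.example.org", "write dropper.exe"}  # mock trace
trace_sandbox    = {"read registry", "sleep 600s"}                                    # mock trace

divergent = trace_bare_metal ^ trace_sandbox          # actions seen in one run but not the other
if divergent:
    print("Possible evasive behavior, divergent actions:", sorted(divergent))
```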
Abstract:
We present new radial velocity measurements of eight stars that were secured with the SOPHIE spectrograph at the 193 cm telescope of the Haute-Provence Observatory. The measurements allow new giant extrasolar planets to be detected and characterized. The host stars are dwarfs of spectral types between F5 and K0 and magnitudes between 6.7 and 9.6; the planets have minimum masses Mp sin i between 0.4 and 3.8 MJup and orbital periods of several days to several months. The data allow only single planets to be discovered around the first six stars (HD 143105, HIP 109600, HD 35759, HIP 109384, HD 220842, and HD 12484), but one of them shows the signature of an additional substellar companion in the system. The seventh star, HIP 65407, allows the discovery of two giant planets that orbit just outside the 12:5 resonance in weak mutual interaction. The last star, HD 141399, was already known to host a four-planet system; our additional data and analyses allow new constraints to be set on it. We present Keplerian orbits of all systems, together with dynamical analyses of the two multi-planet systems. HD 143105 is one of the brightest stars known to host a hot Jupiter, which could allow numerous follow-up studies to be conducted even though this is not a transiting system. The giant planets HIP 109600b, HIP 109384b, and HD 141399c are located in the habitable zone of their host stars.
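For reference, the single-planet Keplerian radial velocity model fitted in this kind of work can be sketched as follows (Python; parameter values are illustrative, not those of the systems above):

```python
import numpy as np

def keplerian_rv(t, P, K, e, omega, t_peri, gamma=0.0):
    """Stellar radial velocity: v(t) = gamma + K [cos(nu + omega) + e cos(omega)]."""
    M = 2 * np.pi * (t - t_peri) / P                  # mean anomaly
    E = M.copy()
    for _ in range(50):                               # Newton iterations for E - e sin E = M
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

t = np.linspace(0, 100, 200)                          # days
rv = keplerian_rv(t, P=25.0, K=30.0, e=0.2, omega=0.5, t_peri=3.0)   # K in m/s, illustrative
print(rv[:5])
```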
Abstract:
Object recognition has long been a core problem in computer vision. To improve object spatial support and speed up object localization for object recognition, generating high-quality, category-independent object proposals as input to object recognition systems has recently drawn attention. Given an image, we generate a limited number of high-quality, category-independent object proposals in advance, which can be used as inputs for many computer vision tasks. We present an efficient dictionary-based model for the image classification task. We further extend this work to a discriminative dictionary learning method for tensor sparse coding. In the first part, a multi-scale greedy-based object proposal generation approach is presented. Based on the multi-scale nature of objects in images, our approach is built on top of a hierarchical segmentation. We first identify the representative and diverse exemplar clusters within each scale. Object proposals are obtained by selecting a subset from the multi-scale segment pool via maximizing a submodular objective function, which consists of a weighted coverage term, a single-scale diversity term, and a multi-scale reward term. The weighted coverage term forces the selected set of object proposals to be representative and compact; the single-scale diversity term encourages choosing segments from different exemplar clusters so that they cover as many object patterns as possible; the multi-scale reward term encourages the selected proposals to be discriminative and selected from multiple layers generated by the hierarchical image segmentation. Experimental results on the Berkeley Segmentation Dataset and the PASCAL VOC2012 segmentation dataset demonstrate the accuracy and efficiency of our object proposal model. Additionally, we validate our object proposals in simultaneous segmentation and detection and outperform state-of-the-art performance. To classify the object in the image, we design a discriminative, structural low-rank framework for image classification. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is well suited for classification tasks, even with a simple linear multi-classifier.
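Because the proposal-selection objective described above is submodular, a simple greedy selection under a cardinality budget already carries a constant-factor approximation guarantee. A minimal, generic sketch with a toy weighted-coverage objective (the actual objective in the thesis combines coverage, diversity, and multi-scale reward terms) might be:

```python
import numpy as np

def greedy_max(segments, objective, k):
    """Plain greedy maximization of a submodular set function under |S| <= k."""
    selected, remaining = [], list(range(len(segments)))
    for _ in range(k):
        gains = [objective(selected + [j]) - objective(selected) for j in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected

# toy example: coverage of pixels by candidate segments
segments = [set(range(0, 50)), set(range(40, 90)), set(range(85, 100)), set(range(20, 30))]
coverage = lambda S: len(set().union(*(segments[j] for j in S))) if S else 0

print(greedy_max(segments, coverage, k=2))   # picks the two large, complementary segments
```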
Abstract:
Electoral researchers are so accustomed to analyzing the choice of the single most preferred party as the left-hand side variable of their models of electoral behavior that they often ignore revealed preference data. Drawing on random utility theory, their models predict electoral behavior at the extensive margin of choice. Since the seminal work of Luce and others on individual choice behavior, however, many social science disciplines (consumer research, labor market research, travel demand, etc.) have extended their inventory of observed preference data with, for instance, multiple paired comparisons, complete or incomplete rankings, and multiple ratings. Eliciting (voter) preferences using these procedures and applying appropriate choice models is known to considerably increase the efficiency of estimates of causal factors in models of (electoral) behavior. In this paper, we demonstrate the efficiency gain from adding further preference information to first preferences, up to full ranking data. We do so for multi-party systems of different sizes. We use simulation studies as well as empirical data from the 1972 German election study. Comparing the practical considerations for using ranking and single preference data results in suggestions for the choice of measurement instruments in different multi-candidate and multi-party settings.
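The efficiency gain from ranking data is typically exploited with a rank-ordered ("exploded") logit, which decomposes a full ranking into a sequence of first choices from shrinking choice sets. A minimal log-likelihood sketch, assuming a linear utility and invented party attributes, is:

```python
import numpy as np

def rank_ordered_logit_loglik(beta, X, ranking):
    """Log-likelihood of one observed ranking under the exploded logit.
    X: (n_parties, n_features) attribute matrix; ranking: party indices, best first."""
    v = X @ beta                         # deterministic utilities
    ll, remaining = 0.0, list(range(X.shape[0]))
    for chosen in ranking:               # each stage: a logit choice among the remaining parties
        vs = v[remaining]
        ll += v[chosen] - vs.max() - np.log(np.exp(vs - vs.max()).sum())
        remaining.remove(chosen)
    return ll

X = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 0.4]])    # toy attributes for 3 parties
print(rank_ordered_logit_loglik(np.array([1.0, -0.5]), X, ranking=[0, 2, 1]))
```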
Abstract:
Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK’s top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested our first version of a retail branch agent-based simulation model where we have focused on how we can simulate the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments we have conducted in order to validate our simulation model and to show its potential for answering “what-if” questions in a retail context. We also introduce a novel performance measure which we have created to quantify customers’ satisfaction with service, based on their individual shopping experiences.
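As a hedged illustration of an individual-experience-based satisfaction index of the kind mentioned above (the measure in the paper may be defined differently; all names and outcomes here are invented), one could record per-customer service outcomes and aggregate them:

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    """Toy customer agent recording the outcomes of its service interactions."""
    experiences: list = field(default_factory=list)   # 1.0 = satisfied, 0.0 = not

    def record(self, found_help: bool, short_queue: bool):
        self.experiences.append(1.0 if (found_help and short_queue) else 0.0)

    def satisfaction(self):
        return sum(self.experiences) / len(self.experiences) if self.experiences else None

customers = [Customer() for _ in range(3)]
customers[0].record(True, True); customers[1].record(True, False); customers[2].record(False, True)
scores = [c.satisfaction() for c in customers if c.satisfaction() is not None]
print("branch-level satisfaction index:", sum(scores) / len(scores))
```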
Abstract:
In Brazil and around the world, oil companies are looking for, and expect the development of, new technologies and processes that can increase the oil recovery factor in mature reservoirs in a simple and inexpensive way. Accordingly, recent research has developed a new process called Gas Assisted Gravity Drainage (GAGD), classified as a gas-injection IOR method. The process, which is undergoing pilot testing in the field, has been extensively studied through physical scale models and laboratory core-floods because of its high oil recoveries compared with other gas-injection IOR methods. It consists of injecting gas at the top of a reservoir through horizontal or vertical injector wells and displacing the oil, taking advantage of the natural gravity segregation of the fluids, towards a horizontal producer well placed at the bottom of the reservoir. To study this process, a homogeneous reservoir and a multi-component fluid model with characteristics similar to Brazilian light-oil fields were modeled in a compositional simulator in order to optimize the operational parameters. The process was simulated in GEM (CMG, 2009.10). The operational parameters studied were the gas injection rate, the type of injected gas, and the locations of the injector and producer wells. We also studied the presence of a water drive in the process. The results showed that the maximum vertical spacing between the two wells yielded the maximum oil recovery in GAGD. It was also found that the highest injection rate produced the highest recovery factors; this parameter controls the speed of the injected-gas front and determines whether or not gravitational forces dominate the oil recovery process. Natural gas performed better than CO2, and the presence of an aquifer in the reservoir had less influence on the process. The economic analysis found that injecting natural gas is more economically beneficial than injecting CO2.
Abstract:
One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate to those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we don't know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.
The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
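For context, the trade-offs evaluated in this chapter ultimately trace back to the radial velocity semi-amplitude a planet induces on its host star; a small helper reproducing the standard relation (illustrative values only) is:

```python
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_sun, M_jup = 1.989e30, 1.898e27  # kg

def rv_semi_amplitude(P_days, m_p_jup, m_star_sun, e=0.0, sin_i=1.0):
    """K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (m_star + m_p)^(2/3) / sqrt(1 - e^2), in m/s."""
    P = P_days * 86400.0
    m_p, m_s = m_p_jup * M_jup, m_star_sun * M_sun
    return ((2 * np.pi * G / P) ** (1/3)) * m_p * sin_i / ((m_s + m_p) ** (2/3)) / np.sqrt(1 - e**2)

# Jupiter around the Sun induces roughly 12.5 m/s
print(rv_semi_amplitude(4332.6, 1.0, 1.0))
```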
We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, while keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system performance from our testing and development of the system on Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins Observatory in Arizona.
The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common path "speckle" aberrations that can overwhelm any planetary companions.
To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200" Hale telescope. It has two focal and pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.
A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.
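The core idea of speckle nulling, estimating a speckle's phase from a few phase-shifted deformable-mirror probes and then applying an anti-phased ripple of matched amplitude, can be illustrated with a toy single-speckle model (a conceptual sketch, not the instrument code described here):

```python
import numpy as np

rng = np.random.default_rng(1)
speckle = rng.normal() + 1j * rng.normal()        # unknown complex speckle amplitude (toy model)

def intensity(dm_amp, dm_phase):
    """Toy focal-plane intensity at the speckle's location: the DM ripple adds a complex
    field dm_amp*exp(i*dm_phase) that interferes with the existing speckle."""
    return abs(speckle + dm_amp * np.exp(1j * dm_phase)) ** 2

# probe the speckle with four ripple phases at a fixed amplitude
probe_amp = abs(speckle) * 0.5
phases = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
I = np.array([intensity(probe_amp, p) for p in phases])

# the modulation of I with probe phase encodes the speckle's phase:
# I(0)-I(pi) ~ cos(phi), I(pi/2)-I(3pi/2) ~ sin(phi)
speckle_phase = np.arctan2(I[1] - I[3], I[0] - I[2])

# cancel with an anti-phased ripple of matched amplitude
print("residual intensity:", intensity(abs(speckle), speckle_phase + np.pi))
```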
One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.
Abstract:
This exploratory work studies the political movement Mesa de la Unidad Democrática (MUD), created to oppose the socialist government in Venezuela. The critique made in this document starts from the standpoint of complexity science. Some key concepts of complex systems are used to explain the functioning and organization of the MUD, with the objective of producing a comprehensive diagnosis of the problems it faces and of highlighting new insights into the harmful behaviors the party currently exhibits. The complexity approach is intended to help better understand the context that frames the party and, finally, to contribute a series of solutions to the cohesion problems it presents.
Abstract:
Introduction -- Transformational leadership and emotional intelligence: a review / Gina Paola Cortés Agudelo, Olga Lucia Lacouture -- Transformational leadership and its implications for organizational culture: a literature review / Alejandro Arévalo, Iván Suárez, Jhon Zuluaga -- Female leadership: a literature review / David Leonardo Méndez Sarmiento, María José Cruz Mancheno -- Adaptive leadership and learning relationships: implications for today's organizations / María Alejandra Avella Torres, Laura Katherine Umaña Roa -- Emotional intelligence and its relationship with full-range leadership / Diego Stiven García Morales, Mohamed Zakaria El Arksoussi Fakih -- The influence of transformational and transactional leadership on quality of working life in companies / María Paula Ordoñez Valencia -- Gender discrimination and inequality: the current situation of women in the business world / Daniela Andrade Palau, Natalia Villarreal Rodríguez -- Competitiveness: a comparative analysis between Colombia and Chile from the perspective of innovation / Álvaro David Buenaventura Maya.
Abstract:
The Montado, in Portugal, is a complex silvo-pastoral land-use system, typically Mediterranean, with different strata of vegetation, including cork and holm oaks at various densities, where cattle rearing is common. The livestock benefits from the herbaceous layer under the trees, from some species in the shrub layer, and from the acorns falling from the tree cover, while contributing to prevent the invasion of pastures by shrubs. Nevertheless, depending on its management, livestock can compromise the regeneration of the system. Over the past 20 years, subsidies under the European Union's Common Agricultural Policy have promoted cattle rearing at the expense of other, lighter species and breeds, as well as its intensification. This intensification may impair the natural regeneration of trees, threatening the balance of the Montado. An assessment focused on cattle and their impact on the system is therefore required. The purpose of this study was to obtain a better understanding of the functioning of a silvo-pastoral farm in a Montado system by applying the emergy evaluation method and calculating emergy indices, in order to understand how best to manage the farm and to design strategies that maximize its emergy flow. A comparison of this method with the economic evaluation showed in which aspects the latter can be complemented by the emergy evaluation method. The emergy evaluation method allows the assessment of complex multi-functional systems at the scale of an individual farm, providing information beyond the economic evaluation, such as the renewability of the system's inputs and the amount of free flows from nature that is valued by market prices. This method allows emternalities and externalities to be integrated into the economic accounting, transforming an evaluation that tends to be separated from its wider system into an evaluation of a system in connection with the larger ones in which it is embedded.
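As a hedged illustration of the kind of emergy indices mentioned here (flow values are invented; R, N, and F denote renewable, local non-renewable, and purchased emergy inflows, following common emergy-accounting conventions rather than the data of this study):

```python
# Toy emergy accounting for a farm (all values in solar emjoules per year, invented).
R = 3.0e15   # renewable inputs (sun, rain, wind)
N = 0.5e15   # local non-renewable inputs (e.g., soil loss)
F = 1.5e15   # purchased inputs (fuel, feed, services), valued through the economy

Y = R + N + F                      # total emergy supporting the farm's yield
renewability = R / Y               # share of the yield supported by renewable flows
EYR = Y / F                        # emergy yield ratio: yield per unit of purchased emergy
ELR = (N + F) / R                  # environmental loading ratio
print(f"renewability={renewability:.2f}, EYR={EYR:.2f}, ELR={ELR:.2f}")
```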
Abstract:
Comparative studies on constitutional design for divided societies indicate that there is no magic formula for the challenges that these societies pose, as many factors influence constitutional design. In the literature on asymmetric federalism, the introduction of constitutional asymmetries is considered a flexible instrument of ethnic conflict resolution, as it provides a mixture of the two main theoretical approaches to constitutional design for divided societies (i.e., integration and accommodation). Indeed, constitutional asymmetries are a complex and multifaceted phenomenon, as their degree of intensity can vary across constitutional systems, and there are both legal and extra-legal factors that may explain such differences. This thesis argues that constitutional asymmetries provide a flexible model of constitutional design and aims to explore the legal factors that are most likely to explain the different degrees of constitutional asymmetry in divided multi-tiered systems. To this end, the research adopts a qualitative methodology, i.e., Qualitative Comparative Analysis (QCA), which allows an understanding of whether a condition or combination of conditions (i.e., the legal factors) determines the outcome (i.e., a high, medium, or low degree of constitutional asymmetry, or constitutional symmetry). The QCA is conducted on 16 divided multi-tiered systems, and for each case the degree of constitutional asymmetry was analyzed by employing standardized indexes of subnational autonomy, allowing for a more precise measure of constitutional asymmetry than has previously been provided in the literature. Overall, the research confirms the complex nature of constitutional asymmetries, as the degrees of asymmetry vary substantially not only across systems but also within cases among the dimensions of subnational autonomy. The outcome of the Qualitative Comparative Analysis also confirms a path of complex causality, since the different degrees of constitutional asymmetry always depend on several legal factors that, combined, produce a low, medium, or high degree of constitutional asymmetry or, conversely, constitutional symmetry.
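As a hedged illustration of the crisp-set QCA logic, the consistency of a combination of conditions with the outcome can be computed as the share of cases displaying that combination which also display the outcome (the condition names and case data below are invented for the example, not the thesis's actual calibration):

```python
# Toy crisp-set QCA consistency check: conditions and outcome coded 0/1 per case.
cases = [
    {"fiscal_autonomy": 1, "distinct_language": 1, "high_asymmetry": 1},
    {"fiscal_autonomy": 1, "distinct_language": 1, "high_asymmetry": 1},
    {"fiscal_autonomy": 1, "distinct_language": 0, "high_asymmetry": 0},
    {"fiscal_autonomy": 0, "distinct_language": 1, "high_asymmetry": 0},
]

def consistency(cases, conditions, outcome):
    """Share of cases exhibiting all the listed conditions that also exhibit the outcome."""
    matching = [c for c in cases if all(c[k] == 1 for k in conditions)]
    if not matching:
        return None
    return sum(c[outcome] for c in matching) / len(matching)

print(consistency(cases, ["fiscal_autonomy", "distinct_language"], "high_asymmetry"))  # 1.0
```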