912 results for Bias-Variance Trade-off


Relevance: 100.00%

Publisher:

Abstract:

The current study applies a two-state switching regression model to examine the behavior of a hypothetical portfolio of ten socially responsible (SRI) equity mutual funds during the expansion and contraction phases of US business cycles between April 1991 and June 2009, based on the Carhart four-factor model and using monthly data. The model identified a business cycle effect on the performance of SRI equity mutual funds. Fund returns were less volatile during expansions/peaks than during contractions/troughs, as indicated by the standard deviation of returns. During contractions/troughs, fund excess returns were explained by the return differential between small and large companies, the difference between the returns on stocks trading at high and low book-to-market value, the market excess return over the risk-free rate, and fund objective. During contractions/troughs, smaller companies offered higher returns than larger companies (c_i = 0.26, p = 0.01), undervalued stocks outperformed high-growth stocks (h_i = 0.39, p < 0.0001), and funds with growth objectives outperformed funds with other objectives (o_i = 0.01, p = 0.02). The hypothetical SRI portfolio was less risky than the market (b_i = 0.74, p < 0.0001). During expansions/peaks, fund excess returns were explained by the market excess return over the risk-free rate and by fund objective. Funds with other objectives, such as balanced funds and income funds, outperformed funds with growth objectives (o_i = −0.01, p = 0.03). The hypothetical SRI portfolio exhibited risk similar to the market (b_i = 0.93, p < 0.0001). The SRI investor adds a third criterion, social performance, to the risk-return trade-off of traditional portfolio theory. The research suggests that managers of SRI equity mutual funds may diminish value by using social and ethical criteria to select stocks, but add value through superior stock selection; the net result is that the performance of SRI mutual funds is very similar to that of the market. There was no difference in the value added among secular SRI, religious SRI, and vice screens.
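
For readers who want to experiment with this class of model, the sketch below shows how a two-state Markov-switching regression on the four Carhart factors can be fit with statsmodels; the file and column names are hypothetical placeholders, not the study's data.

```python
# A minimal sketch (not the study's actual code): a two-state
# Markov-switching version of the Carhart four-factor regression.
import pandas as pd
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

# Monthly data: fund excess returns and the four Carhart factors.
df = pd.read_csv("sri_portfolio_monthly.csv")   # hypothetical file
y = df["excess_return"]                         # R_fund - R_f
X = df[["mkt_rf", "smb", "hml", "mom"]]         # hypothetical factor columns

# Two regimes (e.g., expansion vs. contraction); factor loadings and
# the residual variance are both allowed to switch between regimes.
model = MarkovRegression(y, k_regimes=2, exog=X, switching_variance=True)
res = model.fit()
print(res.summary())
# res.smoothed_marginal_probabilities[0] gives the probability of being
# in regime 0 at each month, which can be lined up with NBER dates.
```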

Relevance: 100.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions; the performance of SoC systems can then be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
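
As a small illustration of the profiling step (a sketch, not the dissertation's toolchain; the kernel and workload below are hypothetical stand-ins for a CODEC hotspot), one can rank functions by cumulative time to find acceleration candidates:

```python
# Hotspot identification via profiling: run a workload under cProfile
# and rank functions by cumulative time.
import cProfile
import pstats

def dct_like(block):
    # Hypothetical stand-in for a compute-heavy CODEC kernel.
    n = len(block)
    return [[sum(block[r][c] * (r + c + 1) for r in range(n) for c in range(n))
             for _ in range(n)] for _ in range(n)]

def encode_frame(loop_rounds=500):
    block = [[1.0] * 8 for _ in range(8)]
    for _ in range(loop_rounds):      # "loop rounds" drive the cost
        dct_like(block)

profiler = cProfile.Profile()
profiler.enable()
encode_frame()
profiler.disable()

# The top entries by cumulative time are the hotspot candidates worth
# converting into FPGA accelerators.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```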

Relevance: 100.00%

Publisher:

Abstract:

Infrastructure management agencies face multiple challenges, including aging infrastructure, reduced capacity of existing infrastructure, and limited available funds. Decision makers are therefore required to think innovatively and develop inventive ways of using available funds. Maintenance investment decisions are generally made based on physical condition only. It is important to understand that spending money on public infrastructure is synonymous with spending money on people themselves; this also requires considering decision parameters beyond physical condition, such as strategic importance, socioeconomic contribution, and infrastructure utilization. Considering multiple decision parameters for infrastructure maintenance investments can be beneficial when funding is limited. Given this motivation, this dissertation presents a prototype decision support framework to evaluate trade-offs among competing infrastructures that are candidates for maintenance, repair, and rehabilitation investments. Decision parameters' performances, measured through various factors, are combined to determine the integrated state of an infrastructure using Multi-Attribute Utility Theory (MAUT). The integrated state, together with cost and benefit estimates of probable maintenance actions, is used alongside expert opinion to develop transition probability and reward matrices for each probable maintenance action for a particular candidate infrastructure. These matrices are then used as input to a Markov Decision Process (MDP) in a finite-stage dynamic programming model that performs project (candidate)-level analysis to determine optimized maintenance strategies based on reward maximization. The outcomes of the project (candidate)-level analysis are then used in a network-level analysis that takes a portfolio management approach to determine a suitable portfolio under budgetary constraints. The major decision support outcomes of the prototype framework include performance trend curves, decision logic maps, and a network-level maintenance investment plan for the upcoming years. The framework has been implemented on a set of bridges treated as a network, with the assistance of the Pima County DOT, AZ. It is expected that the concept of this prototype framework can help infrastructure management agencies better manage their available funds for maintenance.
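
To make the project-level optimization concrete, here is a minimal sketch, with invented states, actions, and matrices rather than the framework's expert-derived inputs, of finite-stage dynamic programming over per-action transition probability and reward matrices.

```python
# Backward induction over P[a] (transitions) and R[a] (rewards) to pick
# reward-maximizing maintenance actions for one candidate asset.
import numpy as np

states = ["good", "fair", "poor"]        # assumed integrated-state levels
actions = ["do_nothing", "repair"]       # assumed maintenance actions

# P[a][s, s']: transition probabilities; R[a][s]: expected reward
# (net of action cost). All numbers are illustrative only.
P = {"do_nothing": np.array([[0.8, 0.2, 0.0],
                             [0.0, 0.7, 0.3],
                             [0.0, 0.0, 1.0]]),
     "repair":     np.array([[1.0, 0.0, 0.0],
                             [0.6, 0.4, 0.0],
                             [0.3, 0.5, 0.2]])}
R = {"do_nothing": np.array([5.0, 2.0, -4.0]),
     "repair":     np.array([3.0, 1.0, -1.0])}

horizon = 10
V = np.zeros(len(states))                # terminal value
policy = []
for t in range(horizon):                 # finite-stage backward induction
    Q = {a: R[a] + P[a] @ V for a in actions}
    V = np.maximum.reduce(list(Q.values()))
    policy.append({s: max(actions, key=lambda a: Q[a][i])
                   for i, s in enumerate(states)})
policy.reverse()                         # policy[t][state] -> best action
print(policy[0])
```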

Relevance: 100.00%

Publisher:

Abstract:

Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size, and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require the same accuracy. Background traffic nevertheless has a significant impact on foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce it. This dissertation aims to provide a solution for meaningfully generating background traffic in three respects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions; the improved model correctly reflects the network conditions in the reverse direction of the data traffic and reproduces the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms existing traffic models in that it correctly captures overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable such models are, they mainly focus on generating traffic on a single link, which cannot easily be extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
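
As a flavor of what a rate-based analytical TCP abstraction looks like, here is a minimal sketch using the classic Mathis et al. square-root throughput formula as a stand-in; the dissertation's RTCP model has its own analytical form, so treat the function below as illustrative only.

```python
# Rate-based (fluid) TCP abstraction: steady-state throughput from MSS,
# round-trip time, and loss probability, rate ~ MSS / (RTT * sqrt(2p/3)).
import math

def tcp_rate_bps(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
    if loss_prob <= 0:
        raise ValueError("loss probability must be positive")
    return 8 * mss_bytes / (rtt_s * math.sqrt(2 * loss_prob / 3))

# Background flows become rate trajectories instead of per-packet events,
# which is where orders-of-magnitude speedups over packet-level
# simulation come from.
print(f"{tcp_rate_bps(1460, 0.08, 0.01) / 1e6:.2f} Mbps")
```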

Relevance: 100.00%

Publisher:

Abstract:

Construction projects are complex endeavors that require the involvement of different professional disciplines in order to meet various, often conflicting, project objectives. The level of complexity and the multi-objective nature of construction projects lend themselves to collaborative design and construction approaches such as integrated project delivery (IPD), in which the relevant disciplines work together during project conception, design, and construction. Traditionally, the main objectives of construction projects have been to build in the least amount of time at the lowest possible cost, so the inherent and well-established relationship between cost and time has been the focus of many studies. The importance of effectively modeling relationships among multiple objectives in building construction has been emphasized in a wide range of research. In general, the trade-off relationship between time and cost is well understood, and there is ample research on the subject. However, despite the rise of sustainable building design, the relationships between time and environmental impact, and between cost and environmental impact, have not been fully investigated. The objectives of this research were to analyze and identify the relationships among time, cost, and environmental impact, in terms of CO2 emissions, at different levels of a building (material level, component level, and building level) during the pre-use phase, including manufacturing and construction, and the relationship between life cycle cost and life cycle CO2 emissions during the usage phase. Additionally, this research aimed to develop a robust simulation-based multi-objective decision-support tool, called SimulEICon, which takes construction data uncertainty into account and is capable of incorporating life cycle assessment information into the decision-making process. The findings of this research support the trade-off relationship between time and cost at different building levels. Moreover, the relationship between time and CO2 emissions exhibited trade-off behavior at the pre-use phase. Interestingly, cost and CO2 emissions were roughly proportional at the pre-use phase, and the same pattern persisted from construction through the usage phase. Understanding the relationships between these objectives is key to successfully planning and designing environmentally sustainable construction projects.
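
As a small illustration of the multi-objective decision problem SimulEICon addresses (the alternatives and numbers below are invented, and this is not the tool itself), the following sketch filters construction alternatives down to the Pareto-optimal set over time, cost, and CO2 emissions.

```python
# Filtering non-dominated alternatives over three minimized objectives.
from typing import List, Tuple

Alt = Tuple[str, float, float, float]   # (name, days, cost_k, tCO2)

def dominates(a: Alt, b: Alt) -> bool:
    """a dominates b: no worse on every objective, better on at least one."""
    return all(x <= y for x, y in zip(a[1:], b[1:])) and a[1:] != b[1:]

def pareto_front(alts: List[Alt]) -> List[Alt]:
    return [a for a in alts if not any(dominates(b, a) for b in alts)]

alternatives = [("steel frame", 120, 900, 410),      # hypothetical data
                ("precast concrete", 150, 780, 380),
                ("cast-in-place", 180, 760, 455)]
print(pareto_front(alternatives))
```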

Relevance: 100.00%

Publisher:

Abstract:

Predators exert strong direct and indirect effects on ecological communities by intimidating their prey. Non-consumptive effects (NCEs) of predators are important features of many ecosystems and have changed the way we understand predator-prey interactions, but they are not well understood in some systems. For my dissertation research, I combined a variety of approaches to examine the effect of predation risk on herbivore foraging and reproductive behaviors in a coral reef ecosystem. In the first part of my dissertation, I investigated how the diet and territoriality of herbivorous fish varied across multiple reefs with different levels of predator biomass in the Florida Keys National Marine Sanctuary. I show that predator and damselfish abundance affected within-population diet diversity for two herbivores in different ways. Additionally, reef protection and the associated recovery of large predators appeared to shape the trade-off reef herbivores made between territory size and quality. In the second part of my dissertation, I investigated context-dependent causal linkages between predation risk, herbivore foraging behavior, and resource consumption in multiple field experiments. I found that reef complexity, predator hunting mode, light availability, and prey hunger influenced prey perception of threat and their willingness to feed. This research argues for more emphasis on the role of predation risk in shaping individual herbivore foraging behavior in order to understand the implications of human-mediated predator removal and recovery in coral reef ecosystems.

Relevance: 100.00%

Publisher:

Abstract:

This dissertation discusses the relationship between inflation, currency substitution, and the dollarization that has taken place in Argentina over the past several decades. First, it is shown that when consumers are able to hold only domestic monetary balances (without capital mobility), an increase in the rate of inflation produces a balance of payments deficit. We then examine the same issue with heterogeneous consumers, the heterogeneity being generated by non-proportional lump-sum transfers. Second, we discuss some necessary assumptions of currency substitution models and conclude that there is no a priori answer to whether currencies should be assumed to be "cooperant" or "non-cooperant" in utility, that is, whether individuals hold different currencies together or one instead of the other. Third, we discuss currency substitution as a constraint on governments' inflationary objectives rather than a choice made by those governments to avoid hyperinflations. We show that imperfect substitutability between currencies does not "reduce the scope for rational (hyper)inflationary processes," as had previously been argued; the outcome ultimately depends on the parametrization used, not on the intrinsic characteristics of imperfect substitutability between currencies. We further show that in Argentina individuals have been able to endogenize the money supply by holding foreign monetary balances. We argue that the decision to hold foreign monetary balances is always a second best because of the trade-off between holding foreign monetary balances and consumption: for some levels of income, consumption, and foreign inflation, individuals would prefer to hold domestic monetary balances rather than foreign ones. We then model the distinction between dollarization and currency substitution, concluding that although dollarization is necessary for currency substitution to take place, the decision to use foreign monetary balances for transaction purposes is largely independent of the dollarization process. Finally, we conclude that Argentina should not fully dollarize its economy, because dollarization is always a second best to using a domestic currency. Further, we argue that a fixed exchange rate system would be better than a flexible exchange rate or a "crawling-peg" system, given the characteristics of the political system and the possibility of "mass praetorianism" developing, which is intricately linked to "populist" solutions.

Relevance: 100.00%

Publisher:

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., dependency of the application on a particular cloud platform, which is prejudicial in the case of degradation or failure of platform services, or of price increases for service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by a cloud platform, or due to the failure of a service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, with an equivalent service on another cloud platform. For an application to adopt the multi-cloud perspective, mechanisms must be created that can select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user-defined requirements for functionality and quality; (ii) continually monitoring dynamic information about cloud services (such as response time, availability, and price), given the wide variety of services; and (iii) adapting the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration would satisfy them more efficiently. The proposed strategy is composed of two phases. The first phase consists of application modeling, exploiting the capacity for representing commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. This phase uses an extended feature model to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability); furthermore, the non-functional requirements associated with cloud services are specified through properties in this model that describe dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work we implement it using several techniques, including aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process offers significant gains compared with a sequential approach; and (iii) which techniques offer the best trade-off between development effort/modularity and performance.
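
A minimal sketch of the autonomic loop's logic (hypothetical services, providers, and thresholds; not the thesis's implementation) might monitor QoS, detect a requirement violation, and rebind the service to the best feasible provider:

```python
# One MAPE-K-style iteration over a single service binding.
requirements = {"max_response_ms": 200, "min_availability": 0.99}

# Knowledge: candidate providers per service, with last-known QoS.
providers = {
    "storage": [
        {"name": "cloudA", "response_ms": 150, "availability": 0.995, "price": 0.10},
        {"name": "cloudB", "response_ms": 90,  "availability": 0.999, "price": 0.15},
    ]
}
active = {"storage": "cloudA"}

def violates(qos):
    return (qos["response_ms"] > requirements["max_response_ms"]
            or qos["availability"] < requirements["min_availability"])

def mape_k_step(service, monitored_qos):
    # Monitor: refresh knowledge for the active provider.
    for p in providers[service]:
        if p["name"] == active[service]:
            p.update(monitored_qos)
    # Analyze: adapt only if the active provider violates a requirement.
    if not violates(monitored_qos):
        return active[service]
    # Plan: cheapest provider that satisfies all requirements.
    feasible = [p for p in providers[service] if not violates(p)]
    if not feasible:
        return active[service]           # nothing better available
    best = min(feasible, key=lambda p: p["price"])
    # Execute: rebind the service to the chosen provider.
    active[service] = best["name"]
    return active[service]

print(mape_k_step("storage", {"response_ms": 250, "availability": 0.98}))
```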

Relevance: 100.00%

Publisher:

Abstract:

Thin layers of indium tin oxide are widely used as transparent coatings and electrodes in solar cells, flat-panel displays, antireflection coatings, radiation protection, and lithium-ion battery materials, because they combine low resistivity, strong absorption at ultraviolet wavelengths, high transmission in the visible, high reflectivity in the far-infrared, and strong attenuation in the microwave region. However, there is often a trade-off between electrical conductivity and transparency at visible wavelengths for indium tin oxide and other transparent conducting oxides. Here, we report the growth of layers of indium tin oxide nanowires that show optimal electronic and photonic properties, and demonstrate their use as fully transparent top contacts in the visible to near-infrared region for light-emitting devices.

Relevance: 100.00%

Publisher:

Abstract:

This dissertation explores the complex process of organizational change, applying a behavioral lens to understand change in processes, products, and search behaviors. Chapter 1 examines new practice adoption, exploring factors that predict the extent to which routines are adopted "as designed" within the organization. Using medical record data obtained from the hospital's Electronic Health Record (EHR) system, I develop a novel measure of the "gap" between the routine "as designed" and the routine "as realized." I link this to a survey administered to the hospital's professional staff following the adoption of a new EHR system and find that beliefs about the expected impact of the change shape the fidelity of the adopted practice to its design. This relationship is more pronounced in care units with experienced professionals and less pronounced when the care unit includes departmental leadership. This research offers new insights into the determinants of routine change in organizations, in particular suggesting that the beliefs held by rank-and-file members of an organization are critical in new routine adoption. Chapter 2 explores changes to products, specifically examining culling behaviors in the mobile device industry. Using a panel of quarterly mobile device sales in Germany from 2004 to 2009, this chapter suggests that an organization's response to performance feedback is conditional on the degree to which decisions are centralized. While much of the research on product exit has pointed to economic drivers or prior experience, the central finding of this chapter, that performance below aspirations decreases the rate of phase-out, suggests that firms seek local solutions when doing poorly, which is consistent with behavioral explanations of organizational action. Chapter 3 uses a novel text analysis approach to examine how the allocation of attention within organizational subunits shaped adaptation, in the form of search behaviors, at Motorola from 1974 to 1997. It develops a theory linking organizational attention to search, and the results suggest that both attentional specialization and attentional coupling trade search scope against depth: specialized unit attention to a narrower set of problems increases search scope but reduces search depth, and increased attentional coupling likewise increases search scope at the cost of depth. This novel approach and these findings help clarify extant research on the behavioral outcomes of attention allocation, which has offered mixed results.

Relevance: 100.00%

Publisher:

Abstract:

Computational fluid dynamics (CFD) studies of blood flow in cerebrovascular aneurysms have the potential to improve patient treatment planning by enabling clinicians and engineers to model patient-specific geometries and compute predictors and risks prior to neurovascular intervention. However, the use of patient-specific computational models in clinical settings is currently impractical due to their complexity and their computationally intensive, time-consuming nature. An important factor contributing to this challenge is the choice of outlet boundary conditions, which often involves a trade-off among physiological accuracy, patient-specificity, simplicity, and speed. In this study, we analyze how resistance and impedance outlet boundary conditions affect blood flow velocities, wall shear stresses, and pressure distributions in a patient-specific model of a cerebrovascular aneurysm. We also use geometric manipulation techniques to obtain a model of the patient's vasculature prior to aneurysm development, and study how forces and stresses may have been involved in the initiation of aneurysm growth. Our CFD results show that the nature of the prescribed outlet boundary conditions is not as important as the relative distribution of blood flow through each outlet branch. As long as appropriate parameters are chosen to keep these flow distributions consistent with physiology, resistance boundary conditions, which are simpler, easier to use, and more practical than their impedance counterparts, are sufficient for studying aneurysm pathophysiology, since they predict very similar wall shear stresses, time-averaged wall shear stresses, time-averaged pressures, and blood flow patterns and velocities. The only situations in which impedance boundary conditions should be prioritized are when pressure waveforms are being analyzed, or when local pressure distributions are being evaluated at specific time points, especially at peak systole, where resistance boundary conditions lead to unnaturally large pressure pulses. In addition, we show that in this specific patient, the region of the blood vessel where the neck of the aneurysm developed was subject to abnormally high wall shear stresses, and that the regions surrounding blebs on the aneurysmal surface were subject to low, oscillatory wall shear stresses. Computational models using resistance outlet boundary conditions may thus be suitable for studying patient-specific aneurysm progression in a clinical setting, although several other challenges must be addressed before these tools can be applied clinically.
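
To illustrate what a resistance outlet boundary condition involves (a sketch with assumed values, not the study's solver setup), the snippet below sizes branch resistances so that the outlet flow split follows an assumed Murray's-law distribution, which is the kind of physiological consistency the study identifies as the key requirement.

```python
# Resistance outlet BC: P_outlet = R * Q + P_distal. Choose R per branch
# so the flow split matches physiology (here, flow ~ diameter^3).
total_flow = 4.0e-6      # m^3/s, assumed total inflow
p_distal = 1.0e4         # Pa, assumed distal pressure
mean_pressure = 1.2e4    # Pa, assumed mean outlet pressure

outlet_diams = {"branch_1": 2.8e-3, "branch_2": 2.0e-3}  # m, hypothetical

cubes = {name: d ** 3 for name, d in outlet_diams.items()}
total = sum(cubes.values())
for name, d3 in cubes.items():
    q = total_flow * d3 / total          # target branch flow
    r = (mean_pressure - p_distal) / q   # resistance achieving that split
    print(f"{name}: Q = {q * 1e6:.2f} mL/s, R = {r:.3e} Pa*s/m^3")
```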

Relevance: 100.00%

Publisher:

Abstract:

I explore and analyze the problem of finding socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker weighs this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.

In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
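
Concretely, direct-exposure contagion models of this kind are commonly built on a clearing-payment equilibrium in the style of Eisenberg and Noe; the sketch below, with invented liabilities and external assets for a single shock scenario, illustrates that building block rather than the dissertation's stochastic programs.

```python
# Fictitious-default iteration for a clearing-payment fixed point in a
# network of direct interbank exposures.
import numpy as np

L = np.array([[0., 2., 1.],      # L[i, j]: nominal liability of i to j
              [1., 0., 2.],
              [1., 1., 0.]])
e = np.array([0.2, 0.1, 0.1])    # external assets in one shock scenario

p_bar = L.sum(axis=1)            # total obligations of each bank
Pi = L / p_bar[:, None]          # relative payout shares (p_bar > 0 here)

p = p_bar.copy()
for _ in range(100):
    p_new = np.minimum(p_bar, e + Pi.T @ p)  # pay min(owed, available)
    if np.allclose(p_new, p):
        break
    p = p_new

defaults = p < p_bar - 1e-9
print("clearing payments:", p)
print("defaulting banks:", defaults)
```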

Finally, I incorporate fire sales externalities into the model. In particular, I extend the analysis of systemic risk and optimal capital requirements from a model with a single illiquid asset to a model with multiple illiquid assets that incorporates the liquidation rules used by the banks. I provide an optimization formulation whose solution gives the equilibrium payments for a given liquidation rule.

I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.

Relevance: 100.00%

Publisher:

Abstract:

Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to utilize nearby existing radio signals to communicate. As there is no need to generate their own energetic radio signal, the devices benefit from a simple design, are very inexpensive, and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.

The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both sources of interference arise from the scattering of the transmitted signal off objects, both stationary and moving, in the environment. Additionally, measurement of the backscatter device's location is degraded by both the clutter and the modulation of the signal return.

This work proposes a channel coding framework for the backscatter channel, consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes run-length-limited (RLL) coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes, and provides an increase in rate of up to a factor of two compared with previous methods.
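
To make the run-length constraint concrete, here is a minimal sketch (our illustration, not the dissertation's code) that enumerates the binary words of a given length whose runs of identical symbols are bounded, and computes the achievable code rate; bounding run lengths keeps signal energy away from DC, where self-interference and slow-moving (spread-Doppler) clutter concentrate.

```python
# Enumerate run-length-limited binary words and compute the code rate.
import math
from itertools import product

def runs_ok(bits, max_run):
    """True if no symbol repeats more than max_run times in a row."""
    run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return False
    return True

n, max_run = 8, 2
codewords = [w for w in product((0, 1), repeat=n) if runs_ok(w, max_run)]
rate = math.log2(len(codewords)) / n
print(len(codewords), "valid words; achievable rate ~", round(rate, 3))
```

For scale, bi-phase (Manchester) coding halves the rate to 1/2, while the run-length constraint sketched here retains a rate of about 0.76 at the same maximum run length, illustrating how RLL codes can recover rate relative to bi-phase signaling.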

Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, optimally low range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference between the localization and communication tasks exists. A phase-discriminating algorithm is proposed that makes it possible to separate the waveform coding from the communication coding upon reception, achieving localization with up to 3 dB more signal energy than previously reported results.

The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.

Simulations comparing the performance of different codes corroborate the theoretical results and illustrate the possible trade-offs between information rate and clutter mitigation, as well as between choices of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.

Relevance: 100.00%

Publisher:

Abstract:

This research assesses the impact of user charges in the context of consumer choice to ascertain how user charges in healthcare affect patient behaviour in Ireland. Quantitative data are collected from a subset of the population in walk-in Urgent Care Clinics and General Practitioner surgeries to assess their responses to user charges and whether user charges are a viable source of part-funding for healthcare in Ireland. Drawing on the economic theories of Becker (1965) and Grossman (1972), the research assesses the impact of user charges on patient choice in terms of affordability and accessibility in healthcare. The research examined a number of private, public, and part-publicly funded healthcare services in Ireland for which varying levels of user charges exist depending on patients' healthcare cover. Firstly, the study identifies the factors affecting patient choice of privately funded walk-in Urgent Care Clinics in Ireland, given user charges. Secondly, the study assesses patient responses to user charges for a mainly public or part-publicly provided service: prescription drugs. Finally, the study examines patients' attitudes towards the potential application of user charges to both public and private healthcare services when patient choice is part of a time-money trade-off, a convenience choice, or a preference choice. These services are valued in the context of user charges becoming more prevalent in healthcare systems over time. The results indicate that the impact of user charges on healthcare services varies according to socio-economic status. The study shows that user charges can disproportionately affect lower-income groups and consequently lead to affordability and accessibility issues. However, when valuing the potential application of user charges to three healthcare services (MRI scans, blood tests, and a branded over a generic prescription drug), this research indicates that lower-income individuals are willing to pay for healthcare services, albeit at a lower user charge than higher-income earners. Consequently, this study suggests that user charges may be a feasible source of part-financing Irish healthcare, provided the user charge is determined from the patients' perspective, taking into account their ability to pay.

Relevance: 100.00%

Publisher:

Abstract:

As anthropogenic activities tip many ecosystems into different functional regimes, the resilience of social-ecological systems (SES) is becoming a pressing issue. Local actors, involved in a great diversity of groups ranging from independent local initiatives to large formal institutions, can act on these questions by collaborating on the development, promotion, or implementation of practices better aligned with what the environment can provide. Complex networks emerge from these repeated collaborations, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems in which they participate. The topology of actor networks that favors SES resilience is characterized by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing network homogenization) and closer to their own interests; it must be well connected and easily synchronizable, to facilitate consensus and to increase social capital and learning capacity; and it must be robust, so that the first two characteristics do not suffer when certain actors withdraw voluntarily or are sidelined. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyze the structural qualities of empirical actor networks. However, some of them are inherently incompatible: a network's modularity cannot increase at the same rate as its connectivity, for example, and connectivity cannot be improved while robustness is improved as well. This obstacle makes it difficult to create a global measure, because the degree to which an actor network helps improve SES resilience cannot be a simple sum of the characteristics above; it is instead the result of a subtle compromise among them. The work presented here aims (1) to explore the trade-offs among these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyze an empirical network in light of, among other things, these structural qualities. This thesis consists of an introduction and four chapters numbered 2 to 5. Chapter 2 is a literature review on SES resilience that identifies a series of structural characteristics (and the corresponding network measures) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews produced a list of the actors involved in the co-management of biodiversity on the peninsula. The collected data were used to develop an online questionnaire documenting the interactions among these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations.

Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded. The method proceeds in two steps: first, an optimization algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a compromise between high levels of modularity, connectivity, and robustness; second, an empirical network (such as that of the Eyre Peninsula) is compared with the archetypal network through a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in Chapter 4. As is usual for this kind of algorithm, that simulated annealing projected the dimensions of the multi-objective problem onto a single dimension (in the form of a weighted average). While this technique gives very good point results, it produces only one solution among the multitude of possible compromises between the objectives. To explore these compromises more fully, we propose a multi-objective simulated annealing algorithm that, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which focuses on the social part of social-ecological systems, improves our understanding of the actor structures that contribute to SES resilience. It shows that while some characteristics beneficial to resilience are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively measuring empirical actor networks, opening the way to, for example, comparisons across case studies or the monitoring of actor networks over time. In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups for coordinating collaboration and knowledge exchange among actors with potentially divergent interests.
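
As an illustration of the scalarized annealing described in Chapter 4 (with toy structural proxies standing in for the thesis's modularity, connectivity, and robustness measures), the sketch below rewires a random graph to maximize a weighted-average objective.

```python
# Scalarized simulated annealing over graph rewirings: maximize a
# weighted average of a connectivity proxy and a robustness proxy.
import math
import random
import networkx as nx

random.seed(1)
N, M = 30, 60
g = nx.gnm_random_graph(N, M, seed=1)

def score(graph, weights=(0.5, 0.5)):
    """Weighted average of connectivity and robustness proxies."""
    if not nx.is_connected(graph):
        return 0.0
    conn = nx.global_efficiency(graph)               # connectivity proxy
    robust = nx.node_connectivity(graph) / (N - 1)   # robustness proxy
    return weights[0] * conn + weights[1] * robust

current = score(g)
T = 1.0
for _ in range(2000):
    u, v = random.choice(list(g.edges()))
    a, b = random.sample(range(N), 2)
    if g.has_edge(a, b):
        continue
    g.remove_edge(u, v)                  # propose: rewire one edge
    g.add_edge(a, b)
    new = score(g)
    # Always accept improvements; accept worse moves with a
    # temperature-dependent probability.
    if new >= current or random.random() < math.exp((new - current) / T):
        current = new
    else:
        g.remove_edge(a, b)              # revert the move
        g.add_edge(u, v)
    T *= 0.998                           # cooling schedule
print("final combined score:", round(current, 3))
```

The Chapter 5 extension replaces this single weighted score with a multi-objective variant that maintains a whole front of non-dominated solutions instead of one compromise point.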