912 results for "Bias-Variance Trade-off"
Abstract:
Computational fluid dynamics (CFD) studies of blood flow in cerebrovascular aneurysms have the potential to improve patient treatment planning by enabling clinicians and engineers to model patient-specific geometries and compute predictors and risks prior to neurovascular intervention. However, the use of patient-specific computational models in clinical settings remains unfeasible because such models are complex, computationally intensive and time-consuming. An important factor contributing to this challenge is the choice of outlet boundary conditions, which often involves a trade-off between physiological accuracy, patient-specificity, simplicity and speed. In this study, we analyze how resistance and impedance outlet boundary conditions affect blood flow velocities, wall shear stresses and pressure distributions in a patient-specific model of a cerebrovascular aneurysm. We also use geometrical manipulation techniques to obtain a model of the patient's vasculature prior to aneurysm development, and study how forces and stresses may have been involved in the initiation of aneurysm growth. Our CFD results show that the nature of the prescribed outlet boundary conditions is not as important as the relative distributions of blood flow through each outlet branch. As long as the appropriate parameters are chosen to keep these flow distributions consistent with physiology, resistance boundary conditions, which are simpler, easier to use and more practical than their impedance counterparts, are sufficient to study aneurysm pathophysiology, since they predict very similar wall shear stresses, time-averaged wall shear stresses, time-averaged pressures, and blood flow patterns and velocities.
The only situations where impedance boundary conditions should be prioritized are when pressure waveforms are being analyzed, or when local pressure distributions are being evaluated at specific time points, especially at peak systole, where the use of resistance boundary conditions leads to unnaturally large pressure pulses. In addition, we show that in this specific patient, the region of the blood vessel where the neck of the aneurysm developed was subject to abnormally high wall shear stresses, and that regions surrounding blebs on the aneurysmal surface were subject to low, oscillatory wall shear stresses. Computational models using resistance outlet boundary conditions may be suitable for studying patient-specific aneurysm progression in a clinical setting, although several other challenges must be addressed before these tools can be applied clinically.
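The contrast between the two outlet conditions can be sketched numerically: a resistance condition sets outlet pressure proportional to instantaneous flow, P(t) = R·Q(t), while an impedance condition convolves the flow history with an impedance kernel. The sketch below is a minimal illustration of that difference, not the study's solver; the waveform, resistance value and kernel are illustrative assumptions.

```python
import math

# Minimal sketch (not the study's solver): outlet pressure under a resistance
# boundary condition, P(t) = R * Q(t), versus an impedance boundary condition,
# P(t) = (z * Q)(t). All values (R, tau, waveform) are illustrative.

T, N = 1.0, 200                  # cardiac period [s], samples per period
dt = T / N
R = 1.5e9                        # lumped resistance [Pa*s/m^3], illustrative

def q(t):
    """Toy pulsatile flow waveform [m^3/s]: mean flow plus a systolic pulse."""
    return 4e-6 + 2e-6 * max(0.0, math.sin(2 * math.pi * t / T))

# Resistance condition: pressure follows instantaneous flow.
p_res = [R * q(i * dt) for i in range(N)]

# Impedance condition: convolve the flow history with a decaying-exponential
# kernel, normalized so that its integral equals R (same mean pressure).
tau = 0.1
z = [math.exp(-i * dt / tau) for i in range(N)]
norm = sum(z) * dt
z = [R * zi / norm for zi in z]
p_imp = [sum(z[k] * q((i - k) * dt) * dt for k in range(N)) for i in range(N)]

peak_res, peak_imp = max(p_res), max(p_imp)
mean_res = sum(p_res) / N
mean_imp = sum(p_imp) / N
```

Because the kernel integrates to R, both conditions give the same mean pressure, but the impedance condition damps the systolic peak, consistent with the observation above that resistance conditions produce unnaturally large pressure pulses at peak systole.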
Abstract:
I explore and analyze the problem of finding socially optimal capital requirements for financial institutions in the presence of two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show that the optimal capital requirements can be found by solving a stochastic mixed-integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be executed efficiently. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
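For the direct-exposure channel, equilibrium payments of the kind described above are commonly characterized as a clearing vector in the style of Eisenberg and Noe. The sketch below is a textbook fixed-point iteration for such a clearing vector, offered only as an illustration of the mechanics: the three-bank liability matrix and outside assets are invented numbers, and the thesis itself works with stochastic programming formulations rather than this toy iteration.

```python
# Illustrative sketch: an Eisenberg-Noe-style clearing-payment iteration for a
# network of direct exposures. The liability matrix and outside assets are
# invented; the thesis uses stochastic programming, not this toy iteration.

L = [[0.0, 10.0, 5.0],       # L[i][j]: nominal liability of bank i to bank j
     [4.0, 0.0, 6.0],
     [3.0, 2.0, 0.0]]
e = [2.0, 1.0, 8.0]          # outside assets of each bank
n = len(L)

pbar = [sum(row) for row in L]                       # total nominal liabilities
# Pi[i][j]: fraction of bank i's payments that goes to bank j.
Pi = [[L[i][j] / pbar[i] if pbar[i] else 0.0 for j in range(n)]
      for i in range(n)]

p = pbar[:]                                          # start from full payment
for _ in range(1000):
    # Each bank pays min(what it owes, outside assets + interbank receipts).
    received = [sum(Pi[j][i] * p[j] for j in range(n)) for i in range(n)]
    new_p = [min(pbar[i], e[i] + received[i]) for i in range(n)]
    if max(abs(new_p[i] - p[i]) for i in range(n)) < 1e-12:
        p = new_p
        break
    p = new_p

# Banks whose clearing payment falls short of pbar are in default.
defaults = [i for i in range(n) if p[i] < pbar[i] - 1e-9]
```

In this example banks 0 and 1 default while bank 2 pays in full; raising a bank's capital (here, its outside assets e[i]) and re-running the iteration shows how a requirement propagates through the network, which is the trade-off the policymaker optimizes over.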
Abstract:
Backscatter communication is an emerging wireless technology that recently has gained an increase in attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low power devices to utilize nearby existing radio signals to communicate. As there is no need to generate their own energetic radio signal, the devices can benefit from a simple design, are very inexpensive and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.
The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both sources of interference arise from the scattering of the transmitted signal off of objects, both stationary and moving, in the environment. Additionally, the measurement of the location of the backscatter device is degraded by both the clutter and the modulation of the signal return.
This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes the use of run-length limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes and provides an increase in rate by up to a factor of two compared with previous methods.
Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, optimally low range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference exists between the localization and communication tasks. A phase-discriminating algorithm is proposed that makes it possible to separate the waveform coding from the communication coding upon reception, and to achieve localization with an increase in signal energy of up to 3 dB compared with previously reported results.
The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.
Simulations comparing the performance of different codes corroborate the theoretical results and illustrate the possible trade-offs between information rate and clutter mitigation, as well as between different choices of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.
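The rate cost of run-length limited coding can be illustrated with a short sketch that counts admissible (d, k)-RLL sequences and estimates the constraint's capacity. The specific (d, k) pair below is an assumption chosen for illustration, not the dissertation's code design.

```python
import math

# Hedged sketch: rate cost of a (d, k) run-length-limited constraint. In a
# (d, k)-RLL binary sequence, consecutive 1s are separated by at least d and
# at most k zeros. The (d, k) values below are illustrative assumptions.

def rll_count(d, k, n):
    """Count (d, k)-RLL sequences of length n built from phrases '0^j 1'."""
    counts = [0] * (n + 1)
    counts[0] = 1                    # empty sequence
    for i in range(1, n + 1):
        # The last phrase is a run of j zeros followed by a 1, d <= j <= k.
        counts[i] = sum(counts[i - (j + 1)]
                        for j in range(d, k + 1) if i - (j + 1) >= 0)
    return counts[n]

# Capacity (bits per channel symbol) ~ log2(count(n)) / n for large n.
d, k, n = 1, 3, 400
cap = math.log2(rll_count(d, k, n)) / n
```

For the (1, 3) constraint the estimate approaches the known capacity of about 0.55 bits per symbol, illustrating the point above that clutter mitigation costs only a bounded fraction of the communication rate.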
Abstract:
This research assesses the impact of user charges in the context of consumer choice to ascertain how user charges in healthcare affect patient behaviour in Ireland. Quantitative data were collected from a subset of the population in walk-in Urgent Care Clinics and General Practitioner surgeries to assess their responses to user charges and whether user charges are a viable source of part-funding healthcare in Ireland. Drawing on the economic theories of Becker (1965) and Grossman (1972), the research assesses the impact of user charges on patient choice in terms of affordability and accessibility in healthcare. The research examined a number of private, public and part-publicly funded healthcare services in Ireland for which varying levels of user charges exist depending on patients' healthcare cover. Firstly, the study identifies the factors affecting patient choice of privately funded walk-in Urgent Care Clinics in Ireland given user charges. Secondly, the study assesses patient response to user charges for a mainly public or part-publicly provided service: prescription drugs. Finally, the study examines patients' attitudes towards the potential application of user charges for both public and private healthcare services when patient choice is part of a time-money trade-off, convenience choice or preference choice. These services are valued in the context of user charges becoming more prevalent in healthcare systems over time. The results indicate that the impact of user charges on healthcare services varies according to socio-economic status. The study shows that user charges can disproportionately affect lower income groups and consequently lead to affordability and accessibility issues.
However, when valuing the potential application of user charges for three healthcare services (MRI scans, blood tests and a branded over a generic prescription drug), this research indicates that lower income individuals are willing to pay for healthcare services, albeit at a lower user charge than higher income earners. Consequently, this study suggests that user charges may be a feasible source of part-financing Irish healthcare, once the user charge is determined from the patients’ perspective, taking into account their ability to pay.
Abstract:
As anthropogenic activities tip many ecosystems into different functional regimes, the resilience of social-ecological systems is becoming a pressing issue. Local actors, involved in a wide variety of groups (ranging from independent local initiatives to large formal institutions), can act on these issues by collaborating on the development, promotion or implementation of practices better aligned with what the environment can supply. Complex networks emerge from these repeated collaborations, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SESs) in which they take part. The topology of actor networks that favours the resilience of their SES is characterized by a combination of several factors: the structure must be modular, in order to help the different groups develop and propose solutions that are both more innovative (by reducing the homogenization of the network) and closer to their own interests; it must be well connected and easily synchronizable, in order to facilitate consensus and to increase social capital and learning capacity; and it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or sidelining of certain actors. These characteristics, which are fairly intuitive both conceptually and in their mathematical application, are often used separately to analyse the structural qualities of empirical actor networks. However, some of them are inherently incompatible with one another. For example, the degree of modularity of a network cannot increase at the same rate as its connectivity, and connectivity cannot be improved while also improving robustness.
This obstacle makes it difficult to build a global measure, because the degree to which an actor network helps improve the resilience of its SES cannot be a simple sum of the characteristics listed above, but is rather the result of a subtle compromise between them. The work presented here aims (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyse an empirical network in light of, among other things, these structural qualities. This thesis is organized as an introduction followed by four chapters numbered 2 to 5. Chapter 2 is a review of the literature on SES resilience. It identifies a set of structural characteristics (together with the corresponding network measures) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change are contributing to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews made it possible to compile a list of the actors involved in the co-management of biodiversity on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations. Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded.
The method proceeds in two steps: first, an optimization algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a compromise between high levels of modularity, connectivity and robustness. Second, an empirical network (such as that of the Eyre Peninsula) is compared with the archetypal network through a structural distance measure: the shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves the simulated annealing algorithm used in Chapter 4. As is customary for this type of algorithm, the simulated annealing employed projected the dimensions of the multi-objective problem onto a single dimension (in the form of a weighted average). While this technique gives very good results for a single run, it produces only one solution among the multitude of possible compromises between the different objectives. To explore these trade-offs more fully, we propose a multi-objective simulated annealing algorithm that, rather than optimizing a single solution, optimizes a multidimensional surface of solutions. This study, which focuses on the social side of social-ecological systems, improves our understanding of the actor structures that contribute to SES resilience. It shows that while some characteristics beneficial to resilience are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronizability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively assessing empirical actor networks, and thus opens the way to, for example, comparisons across case studies, or the monitoring of actor networks over time.
In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups for coordinating collaborations and knowledge exchange between actors with potentially diverging interests.
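The scalarization issue raised in the final chapter can be made concrete with a small sketch: a plain simulated annealing that collapses two competing objectives into a weighted average returns a single compromise per weight choice, so sweeping the whole trade-off front requires one run per weight. The objectives and parameters below are toy stand-ins, not the thesis's modularity, connectivity or robustness measures.

```python
import math
import random

# Hedged sketch of scalarized simulated annealing: minimize a weighted
# average of two conflicting toy objectives. Each weight yields one point on
# the trade-off front; these objectives are illustrative stand-ins only.

random.seed(0)

def f1(x):
    return x ** 2                    # first objective, minimized at x = 0

def f2(x):
    return (x - 2.0) ** 2            # conflicting objective, minimized at x = 2

def anneal(w, steps=20000, t0=1.0):
    """Minimize w*f1 + (1 - w)*f2 over x in [-1, 3] by simulated annealing."""
    x = random.uniform(-1.0, 3.0)
    cost = w * f1(x) + (1 - w) * f2(x)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9          # linear cooling schedule
        cand = min(3.0, max(-1.0, x + random.gauss(0.0, 0.1)))
        c = w * f1(cand) + (1 - w) * f2(cand)
        # Accept improvements always, worsenings with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / t):
            x, cost = cand, c
    return x

# One run per weight: sweeping w traces the front point by point.
front = {w: anneal(w) for w in (0.25, 0.5, 0.75)}
```

The multi-objective variant proposed in the thesis avoids this one-run-per-weight limitation by maintaining a whole surface of solutions in a single run.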
Abstract:
Predators exert strong direct and indirect effects on ecological communities by intimidating their prey. Non-consumptive effects (NCEs) of predators are important features of many ecosystems and have changed the way we understand predator-prey interactions, but are not well understood in some systems. For my dissertation research I combined a variety of approaches to examine the effect of predation risk on herbivore foraging and reproductive behaviors in a coral reef ecosystem. In the first part of my dissertation, I investigated how diet and territoriality of herbivorous fish varied across multiple reefs with different levels of predator biomass in the Florida Keys National Marine Sanctuary. I show that both predator and damselfish abundance impacted diet diversity within populations for two herbivores in different ways. Additionally, reef protection and the associated recovery of large predators appeared to shape the trade-off reef herbivores made between territory size and quality. In the second part of my dissertation, I investigated context-dependent causal linkages between predation risk, herbivore foraging behavior and resource consumption in multiple field experiments. I found that reef complexity, predator hunting mode, light availability and prey hunger influenced prey perception of threat and their willingness to feed. This research argues for more emphasis on the role of predation risk in affecting individual herbivore foraging behavior in order to understand the implications of human-mediated predator removal and recovery in coral reef ecosystems.
Abstract:
Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require the same accuracy. Background traffic nevertheless has a significant impact on foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms existing traffic models in that it can correctly capture overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation.
Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on how to generate traffic on one single link, which cannot be extended easily to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for the evaluation work in network studies.
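The gain from a rate-based abstraction can be illustrated with the classic steady-state TCP throughput formula (the "Mathis" model). This is textbook material, not necessarily the dissertation's RTCP model, and the link parameters below are illustrative.

```python
import math

# Hedged sketch: the classic "Mathis" steady-state TCP throughput model,
# rate ~ (MSS / RTT) * sqrt(1.5 / p). This is textbook material, not
# necessarily the dissertation's RTCP model; parameters are illustrative.

def mathis_rate(mss_bytes, rtt_s, loss_prob):
    """Steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5 / loss_prob)

# A rate-based background source emits at this analytically derived rate
# instead of simulating every packet, which is where the orders-of-magnitude
# speedup over packet-oriented simulation comes from.
rate = mathis_rate(mss_bytes=1460, rtt_s=0.1, loss_prob=0.01)  # ~1.4 Mbit/s
```

Quadrupling the loss probability halves the rate, which is the kind of closed-form sensitivity a fluid model exploits in place of per-packet state.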
Abstract:
John le Carré's novels "The Spy Who Came in From the Cold" (1963), "Tinker, Tailor, Soldier, Spy" (1974), and "The Tailor of Panama" (1997) focus on how the main characters reflect the somber reality of working in the British intelligence service. Through a broad post-structuralist analysis, I identify the dichotomies (good/evil in "The Spy Who Came in From the Cold," past/future in "Tinker, Tailor, Soldier, Spy," and institution/individual in "The Tailor of Panama") that frame the role of the protagonists. Each character is defined by his ambiguity and swinging moral compass, transforming him into a hybrid creation of morality and adaptability during transitional periods in history, mainly the Cold War. Le Carré's novels reject the notion of the celebrated spy who stands above the group; instead, they portray spies as characters who trade off individualism and social belonging for a false sense of heroism, loneliness, and even death.
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system when wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered in which wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission.
Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings through time. Second, taxation on savings corrects for the misperceptions of workers and thus savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
Waterways have many more ties with society than as a medium for the transportation of goods alone. Waterway systems offer society many kinds of socio-economic value. Waterway authorities responsible for management and (re)development need to optimize the public benefits of the investments made. However, due to the many trade-offs in the system, these agencies have multiple options for achieving this goal. Because they can invest resources in a great many different ways, they need a way to assess the efficiency of the decisions they make. Transaction cost theory, and the analysis that goes with it, has emerged as an important means of justifying efficiency decisions in the economic arena. To improve our understanding of the value-creating and coordination problems facing waterway authorities, such a framework is applied to this sector. This paper describes the findings for two cases, which reflect two common multi-trade-off situations in waterway (re)development. Our first case study focuses on the Miami River, an urban revitalized waterway. The second describes the Inner Harbour Navigation Canal in New Orleans, a canal and lock in an industrialized zone in need of an upgrade to keep pace with market developments. The transaction cost framework proves useful in exposing a wide variety of value-creating opportunities and the resistances that come with them. These insights can offer infrastructure managers guidance on how to seize such opportunities.
Abstract:
Ground delay programs typically involve delaying aircraft that depart from origin airports within some set distance of a capacity-constrained destination airport; long-haul flights are not delayed in this way. A trade-off exists when fixing the distance parameter: increasing the 'scope' distributes delay among more aircraft and may reduce airborne holding delay, but can also result in unnecessary delay in the (frequently observed) case of early program cancellation. To overcome part of this drawback, the authors suggested in previous publications a fuel-based cruise speed reduction strategy aimed at realizing airborne delay. By flying slower, at a specific speed, aircraft that are already airborne can recover part of their initially assigned delay without incurring extra fuel consumption if the ground delay program is canceled earlier than planned. In this paper, the effect of the scope of the program is assessed when applying this strategy. A case study is presented analyzing all the ground delay programs that took place at San Francisco, Newark Liberty and Chicago O'Hare International airports during one year. Results show that with this technique it is possible to define larger scopes, partially reducing the amount of unrecovered delay.
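The speed-reduction mechanism rests on simple arithmetic: flying a cruise segment at a lower speed converts assigned ground delay into airborne delay, which is simply not incurred if the program is cancelled early. The sketch below uses illustrative speeds and distances, not the paper's data.

```python
# Hedged sketch of the delay-absorption arithmetic (illustrative numbers, not
# the paper's data): an airborne aircraft flying a cruise segment at a lower
# speed absorbs part of its assigned ground delay, and simply keeps the time
# saved if the ground delay program is cancelled earlier than planned.

def airborne_delay_min(dist_nm, v_nominal_kt, v_slow_kt):
    """Minutes of delay absorbed by flying dist_nm at v_slow instead of v_nominal."""
    return 60.0 * dist_nm * (1.0 / v_slow_kt - 1.0 / v_nominal_kt)

# A 400 NM cruise segment flown at 420 kt instead of 460 kt absorbs about
# 5 minutes of the assigned delay in the air.
absorbed = airborne_delay_min(400, 460, 420)
```

Because the absorbed delay grows linearly with the cruise distance, a larger scope (flights departing farther from the constrained airport) leaves more room to absorb delay airborne, which is the interaction the paper assesses.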
Abstract:
Different charging zones exist within European airspace. This allows airlines to select different routes between origin and destination that have different lengths and en-route charges. There is a trade-off between the shortest available route and other routes that might have different charges. This paper analyses the routes submitted by airlines to be operated on a given day and compares the costs of operating those routes, in terms of en-route charges and fuel consumption, with those of the shortest route available at the time. The flights are characterised by different variables with the aim of identifying a behaviour or pattern based on the airline or flight characteristics. Results show that in some areas of European airspace there might be an incentive to select a longer route, leading to both a lower charge and a lower total cost. However, more variables need to be considered, and other techniques such as factor analysis used, to identify the behaviour within an airline category.
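The charge side of this trade-off can be sketched with the standard EUROCONTROL-style route-charge formula, in which each charging zone bills unit rate x (distance flown in the zone / 100 km) x sqrt(MTOW / 50 t). The unit rates, distances and fuel figures below are invented for illustration and are not the paper's data.

```python
import math

# Hedged sketch using the standard EUROCONTROL-style route-charge formula:
# zone charge = unit rate * (distance in zone / 100 km) * sqrt(MTOW / 50 t).
# Unit rates, distances and fuel figures are invented for illustration.

def zone_charge(unit_rate_eur, dist_km, mtow_t):
    return unit_rate_eur * (dist_km / 100.0) * math.sqrt(mtow_t / 50.0)

def route_cost(legs, mtow_t, fuel_kg, fuel_eur_per_kg=0.8):
    """Total route cost: en-route charges over its charging zones plus fuel."""
    charges = sum(zone_charge(u, d, mtow_t) for u, d in legs)
    return charges + fuel_kg * fuel_eur_per_kg

mtow = 70.0                                  # tonnes
# Shortest route: 900 km through an expensive zone (unit rate 90 EUR).
short = route_cost([(90.0, 900.0)], mtow, fuel_kg=4500.0)
# Longer route: 980 km through a cheaper zone (55 EUR), slightly more fuel.
longer = route_cost([(55.0, 980.0)], mtow, fuel_kg=4700.0)
# Despite the detour, the longer route comes out cheaper overall here.
```

With these invented numbers the detour's lower unit rate more than offsets the extra distance and fuel, the incentive the paper identifies in some areas of European airspace.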
Abstract:
Calcifying marine phytoplankton - coccolithophores - are some of the most successful yet enigmatic organisms in the ocean, and are at risk from global change. In order to better understand how they will be affected we need to know 'why' coccolithophores calcify. Here we review coccolithophorid evolutionary history, cell biology, and insights from recent experiments to provide a critical assessment of the costs and benefits of calcification. We conclude that calcification has high energy demands, and that coccolithophores might have calcified initially to reduce grazing pressure, but that additional benefits such as protection from photo-damage and viral-bacterial attack further explain their high diversity and broad spectrum ecology. The cost-versus-benefit of these traits is illustrated by novel ecosystem modeling, although conclusive observations are still limited. In the future ocean, the trade-off between changing ecological and physiological costs of calcification and their benefits will ultimately decide how this important group is affected by ocean acidification and global warming.
Abstract:
The expression of animal personality is indicated by patterns of consistency in individual behaviour. Often, the differences exhibited between individuals are consistent across situations. However, between some situations, this consistency can be biased by variable levels of individual plasticity. The interaction between individual plasticity and animal personality can be illustrated by examining situation-sensitive personality traits such as boldness (i.e. risk-taking and exploration tendency). For the weakly electric fish Gnathonemus petersii, light condition is a major factor influencing behaviour. Adapted to navigate in low-light conditions, this species chooses to be more active in dark environments, where risk from visual predators is lower. However, G. petersii also exhibits individual differences in the degree of behavioural change from light to dark. The present study therefore aims to examine whether an increased motivation to explore in the safety of the dark affects not only mean levels of boldness but also the variation between individuals, as a result of differences in individual plasticity. Results: Boldness was consistent between a novel-object and a novel-environment situation in bright light. However, no consistency in boldness was noted between a bright (risky) and a dark (safe) novel environment. Furthermore, there was a negative association between boldness and the degree of change across novel environments, with shyer individuals exhibiting greater behavioural plasticity. Conclusions: This study highlights that individual plasticity can vary with personality. In addition, the effect of light suggests that variation in boldness is situation-specific. Finally, there appears to be a trade-off between personality and individual plasticity, with shy but plastic individuals minimizing costs when perceiving risk and bold but stable individuals consistently maximizing rewards, which can be maladaptive.