10 results for Trade-off
at Duke University
Abstract:
Relationships between aging, disease risks, and longevity are not yet well understood. For example, joint increases in cancer risk and total survival observed in many human populations and some experimental aging studies may be linked to a trade-off between cancer and aging as well as to trade-off(s) between cancer and other diseases, and their relative impact is not clear. While the former trade-off (between cancer and aging) has received broad attention in aging research, the latter has been little studied, although understanding it is important for developing optimal strategies for increasing both longevity and healthy life span. In this paper, we explore the possibility of trade-offs between risks of cancer and selected major disorders. First, we review current literature suggesting that trade-offs between cancer and other diseases may exist and be linked to the differential intensity of apoptosis. Then we select relevant disorders for the analysis (acute coronary heart disease [ACHD], stroke, asthma, and Alzheimer disease [AD]) and calculate the risk of cancer among individuals with each of these disorders, and vice versa, using data from the Framingham Study (5,209 individuals) and the National Long Term Care Survey (NLTCS) (38,214 individuals). We found a reduction in cancer risk among old (80+) men with stroke and in risk of ACHD among men (50+) with cancer in the Framingham Study. We also found an increase in ACHD and stroke among individuals with cancer, and a reduction in cancer risk among women with AD, in the NLTCS. The manifestation of trade-offs between risks of cancer and other diseases thus depended on sex, age, and study population. We discuss factors modulating the potential trade-offs between major disorders in populations, e.g., disease treatments. Further study is needed to clarify the possible impact of such trade-offs on longevity.
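The abstract does not state which risk estimator was used; purely to illustrate the kind of quantity involved, the sketch below (Python) computes an unadjusted relative risk with a Katz-style 95% confidence interval from a 2x2 disorder-by-cancer table. All counts are invented, not taken from the Framingham or NLTCS data.

```python
import math

def relative_risk(a, b, c, d):
    """Unadjusted relative risk from a 2x2 table.

    a: with disorder, developed cancer     b: with disorder, no cancer
    c: without disorder, developed cancer  d: without disorder, no cancer
    """
    rr = (a / (a + b)) / (c / (c + d))
    # Katz-style 95% confidence interval on the log scale
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts only, e.g., 80+ men with/without stroke vs. cancer:
print(relative_risk(a=12, b=188, c=45, d=455))
```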
Abstract:
Multiple functions of the beta2-adrenergic receptor (ADRB2) and angiotensin-converting enzyme (ACE) genes warrant studies of their associations with aging-related phenotypes. We focus on multimarker analyses and analyses of the effects of compound genotypes of two polymorphisms in the ADRB2 gene, rs1042713 and rs1042714, and 11 polymorphisms of the ACE gene, on the risk of an aging-associated phenotype, myocardial infarction (MI). We used data from a genotyped sample of the Framingham Heart Study Offspring (FHSO) cohort (n = 1500) followed for about 36 years with six examinations. The ADRB2 rs1042714 (C→G) polymorphism and two moderately correlated (r² = 0.77) ACE polymorphisms, rs4363 (A→G) and rs12449782 (A→G), were significantly associated with risks of MI in this aging cohort in multimarker models. Predominantly linked ACE genotypes exhibited opposite effects on MI risks: e.g., the AA (rs12449782) genotype had a detrimental effect, whereas the predominantly linked AA (rs4363) genotype exhibited a protective effect. This trade-off occurs as a result of the opposite effects of rare compound genotypes of the ACE polymorphisms with a single dose of the AG heterozygote. This genetic trade-off is further augmented by the selective modulating effect of the rs1042714 ADRB2 polymorphism. The associations were not altered by adjustment for common MI risk factors. The results suggest that effects of single specific genetic variants of the ADRB2 and ACE genes on MI can be readily altered by gene-gene and/or gene-environment interactions, especially in large heterogeneous samples. Multimarker genetic analyses should benefit studies of complex aging-associated phenotypes.
Abstract:
When solid material is removed in order to create flow channels in a load-carrying structure, the strength of the structure decreases. On the other hand, a structure with channels is lighter and easier to transport as part of a vehicle. Here, we show that this trade-off can be used for benefit, to design a vascular mechanical structure. When the total amount of solid is fixed and the sizes, shapes, and positions of the channels can vary, it is possible to morph the flow architecture such that it endows the mechanical structure with maximum strength. The result is a multifunctional structure that offers not only mechanical strength but also new capabilities necessary for volumetric functionalities such as self-healing and self-cooling. We illustrate the generation of such designs for strength and fluid flow for several classes of vasculatures: parallel channels and trees with one, two, and three bifurcation levels. The flow regime in every channel is laminar and fully developed. In each case, we found that it is possible to select not only the channel dimensions but also their positions such that the entire structure offers more strength and less flow resistance when the total volume (or weight) and the total channel volume are fixed. We show that the minimized peak stress is smaller when the channel volume fraction (φ) is smaller and the vasculature is more complex, i.e., with more levels of bifurcation. Diminishing returns are reached in both directions, decreasing φ and increasing complexity. For example, when φ=0.02 the minimized peak stress of a design with one bifurcation level is only 0.2% greater than the peak stress in the optimized vascular design with two levels of bifurcation. © 2010 American Institute of Physics.
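Since the abstract specifies laminar, fully developed flow in every channel, each channel's resistance follows the Hagen-Poiseuille law. As a hedged illustration of the dimensioning problem only (not the paper's actual stress-flow optimization), the following sketch computes the flow resistance of n identical parallel round channels when the total channel volume is fixed:

```python
import math

def poiseuille_resistance(mu, length, diameter):
    """Hagen-Poiseuille resistance of one round channel,
    valid for laminar, fully developed flow: R = 128*mu*L/(pi*D^4)."""
    return 128.0 * mu * length / (math.pi * diameter ** 4)

def parallel_network_resistance(mu, length, n_channels, channel_volume):
    """Resistance of n identical parallel channels sharing a fixed
    total channel volume (the constraint behind the fraction phi)."""
    area = channel_volume / (n_channels * length)   # cross-section per channel
    diameter = math.sqrt(4.0 * area / math.pi)
    return poiseuille_resistance(mu, length, diameter) / n_channels

# Illustrative values only (water-like viscosity, arbitrary geometry):
for n in (1, 2, 4, 8):
    print(n, parallel_network_resistance(1e-3, 0.1, n, 1e-6))
```

At fixed channel volume the parallel-network resistance grows with n, which is one side of the strength-versus-flow-resistance trade-off the paper navigates by also varying channel positions and tree complexity.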
Abstract:
In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to those of the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
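The soft-robust framework itself is not reproduced here; for orientation, this sketch sets up the standard single-guarantee robust allocation that the paper relaxes, assuming box uncertainty on the expected returns and using the cvxpy modeling library (both choices are illustrative assumptions, and the data are invented):

```python
import cvxpy as cp
import numpy as np

# Hypothetical inputs: point estimates of expected returns and a
# per-asset uncertainty half-width defining the box |mu - mu_hat| <= delta.
mu_hat = np.array([0.06, 0.05, 0.08, 0.03])
delta = np.array([0.02, 0.01, 0.04, 0.005])

w = cp.Variable(4)

# The worst case of mu @ w over the box is mu_hat @ w - delta @ |w|:
# a single guarantee, which soft robustness would vary smoothly.
worst_case_return = mu_hat @ w - delta @ cp.abs(w)

problem = cp.Problem(cp.Maximize(worst_case_return),
                     [cp.sum(w) == 1, w >= 0])
problem.solve()
print(w.value, problem.value)
```

Roughly, the paper's soft-robust relaxation replaces the single box with a nested family of uncertainty sets, each carrying its own protection level.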
Abstract:
Antigenically evolving pathogens such as influenza viruses are difficult to control owing to their ability to evade host immunity by producing immune escape variants. Experimental studies have repeatedly demonstrated that viral immune escape variants emerge more often from immunized hosts than from naive hosts. This empirical relationship between host immune status and within-host immune escape is not fully understood theoretically, nor has its impact on antigenic evolution at the population level been evaluated. Here, we show that this relationship can be understood as a trade-off between the probability that a new antigenic variant is produced and the level of viraemia it reaches within a host. Scaling this within-host trade-off up to a simple population-level model, we obtain a distribution for variant persistence times that is consistent with influenza A/H3N2 antigenic variant data. At the within-host level, our results show that target cell limitation, or a functional equivalent, provides a parsimonious explanation for how host immune status drives the generation of immune escape mutants. At the population level, our analysis also offers an alternative explanation for the observed tempo of antigenic evolution, namely that the production rate of immune escape variants is driven by the accumulation of herd immunity. Overall, our results suggest that disease control strategies should be further assessed by considering the impact that increased immunity, through vaccination, has on the production of new antigenic variants.
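The abstract names target cell limitation as the within-host mechanism; the sketch below integrates the standard target-cell-limited model (not the paper's exact equations, and with illustrative rather than fitted parameters) to show how the finite pool of susceptible cells caps the viraemia a variant can reach:

```python
import numpy as np
from scipy.integrate import odeint

def target_cell_limited(y, t, beta, delta, p, c):
    """Standard target-cell-limited viral dynamics: the susceptible
    cell pool T is depleted by infection, capping viraemia V."""
    T, I, V = y
    dT = -beta * T * V              # target cells become infected
    dI = beta * T * V - delta * I   # infected cells die at rate delta
    dV = p * I - c * V              # virions produced and cleared
    return [dT, dI, dV]

# Illustrative parameter values and initial conditions only:
t = np.linspace(0.0, 10.0, 500)              # days
y0 = [4e8, 0.0, 10.0]                        # initial T, I, V
sol = odeint(target_cell_limited, y0, t, args=(2.7e-5, 4.0, 1.2e-2, 3.0))
print("peak viraemia:", sol[:, 2].max())
```

Lowering the initial target cell count (a crude proxy for prior host immunity) reduces the peak viraemia, which is one half of the trade-off the abstract describes.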
Abstract:
Starvation during early development can have lasting effects that influence organismal fitness and disease risk. We characterized the long-term phenotypic consequences of starvation during early larval development in Caenorhabditis elegans to determine potential fitness effects and develop it as a model for mechanistic studies. We varied the amount of time that larvae were developmentally arrested by starvation after hatching ("L1 arrest"). Worms recovering from extended starvation grew slowly, taking longer to become reproductive, and were smaller as adults. Fecundity was also reduced, with the smallest individuals most severely affected. Feeding behavior was impaired, possibly contributing to deficits in growth and reproduction. Previously starved larvae were more sensitive to subsequent starvation, suggesting decreased fitness even in poor conditions. We discovered that smaller larvae are more resistant to heat, but this correlation does not require passage through L1 arrest. The progeny of starved animals were also adversely affected: Embryo quality was diminished, incidence of males was increased, progeny were smaller, and their brood size was reduced. However, the progeny and grandprogeny of starved larvae were more resistant to starvation. In addition, the progeny, grandprogeny, and great-grandprogeny were more resistant to heat, suggesting epigenetic inheritance of acquired resistance to starvation and heat. Notably, such resistance was inherited exclusively from individuals most severely affected by starvation in the first generation, suggesting an evolutionary bet-hedging strategy. In summary, our results demonstrate that starvation affects a variety of life-history traits in the exposed animals and their descendants, some presumably reflecting fitness costs but others potentially adaptive.
Abstract:
This dissertation explores the complex process of organizational change, applying a behavioral lens to understand change in processes, products, and search behaviors. Chapter 1 examines new practice adoption, exploring factors that predict the extent to which routines are adopted “as designed” within the organization. Using medical record data obtained from the hospital’s Electronic Health Record (EHR) system, I develop a novel measure of the “gap” between the routine “as designed” and the routine “as realized.” I link this to a survey administered to the hospital’s professional staff following the adoption of a new EHR system and find that beliefs about the expected impact of the change shape the fidelity of the adopted practice to its design. This relationship is more pronounced in care units with experienced professionals and less pronounced when the care unit includes departmental leadership. This research offers new insights into the determinants of routine change in organizations, in particular suggesting that the beliefs held by rank-and-file members of an organization are critical in new routine adoption. Chapter 2 explores changes to products, specifically examining culling behaviors in the mobile device industry. Using a panel of quarterly mobile device sales in Germany from 2004-2009, this chapter suggests that the organization’s response to performance feedback is conditional upon the degree to which decisions are centralized. While much of the research on product exit has pointed to economic drivers or prior experience, the central finding of this chapter, that performance below aspirations decreases the rate of phase-out, suggests that firms seek local solutions when doing poorly, which is consistent with behavioral explanations of organizational action. Chapter 3 uses a novel text analysis approach to examine how the allocation of attention within organizational subunits shapes adaptation in the form of search behaviors at Motorola from 1974-1997. It develops a theory that links organizational attention to search, and the results suggest that both attentional specialization and attentional coupling involve a trade-off between search scope and search depth. Specifically, specialized unit attention to a narrower set of problems increases search scope but reduces search depth; increased attentional coupling also increases search scope at the cost of depth. This novel approach and these findings help clarify extant research on the behavioral outcomes of attention allocation, which has offered mixed results.
Abstract:
Computational fluid dynamic (CFD) studies of blood flow in cerebrovascular aneurysms have the potential to improve patient treatment planning by enabling clinicians and engineers to model patient-specific geometries and compute predictors and risks prior to neurovascular intervention. However, the use of patient-specific computational models in clinical settings is infeasible due to their complex, computationally intensive, and time-consuming nature. An important factor contributing to this challenge is the choice of outlet boundary conditions, which often involves a trade-off between physiological accuracy, patient-specificity, simplicity, and speed. In this study, we analyze how resistance and impedance outlet boundary conditions affect blood flow velocities, wall shear stresses, and pressure distributions in a patient-specific model of a cerebrovascular aneurysm. We also use geometrical manipulation techniques to obtain a model of the patient’s vasculature prior to aneurysm development, and study how forces and stresses may have been involved in the initiation of aneurysm growth. Our CFD results show that the nature of the prescribed outlet boundary conditions is not as important as the relative distributions of blood flow through each outlet branch. As long as the appropriate parameters are chosen to keep these flow distributions consistent with physiology, resistance boundary conditions, which are simpler, easier to use, and more practical than their impedance counterparts, are sufficient to study aneurysm pathophysiology, since they predict very similar wall shear stresses, time-averaged wall shear stresses, time-averaged pressures, and blood flow patterns and velocities. The only situations in which impedance boundary conditions should be prioritized are when pressure waveforms are being analyzed, or when local pressure distributions are being evaluated at specific time points, especially at peak systole, where the use of resistance boundary conditions leads to unnaturally large pressure pulses. In addition, we show that in this specific patient, the region of the blood vessel where the neck of the aneurysm developed was subject to abnormally high wall shear stresses, and that regions surrounding blebs on the aneurysmal surface were subject to low, oscillatory wall shear stresses. Computational models using resistance outlet boundary conditions may be suitable to study patient-specific aneurysm progression in a clinical setting, although several other challenges must be addressed before these tools can be applied clinically.
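The resistance condition the abstract favors is simple enough to state in a few lines: each outlet's pressure is tied to the instantaneous flow through it, P = P_distal + R*Q, with resistances chosen to reproduce a physiological flow split. The sketch below uses hypothetical names and values, not the study's solver or data:

```python
P_DISTAL = 10.0 * 1333.2   # assumed distal pressure, dyn/cm^2 (10 mmHg)

def outlet_pressure(R, Q, p_distal=P_DISTAL):
    """Resistance outlet boundary condition: P = P_distal + R * Q."""
    return p_distal + R * Q

def resistances_for_flow_split(R_total, fractions):
    """Choose outlet resistances (in parallel) so outlet i carries
    fractions[i] of the total flow: R_i = R_total / f_i."""
    return [R_total / f for f in fractions]

# Hypothetical net resistance and a 60/25/15 flow split over 3 outlets:
R_outlets = resistances_for_flow_split(1.5e4, [0.60, 0.25, 0.15])
flows = [2.4, 1.0, 0.6]    # ml/s through each outlet branch
print([outlet_pressure(R, Q) for R, Q in zip(R_outlets, flows)])
```

An impedance condition would instead convolve each outlet's flow history with an impedance kernel, which is what makes it costlier but more faithful when pressure waveforms matter.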
Abstract:
I explore and analyze the problem of finding the socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show that the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be executed efficiently. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
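The dissertation's own algorithm is not detailed in the abstract; as a point of reference for the direct-exposures setting, the sketch below implements the classic Eisenberg-Noe fictitious-default iteration for clearing payments, the standard fixed point underlying such network models. The three-bank network is invented:

```python
import numpy as np

def clearing_payments(L, e, tol=1e-10, max_iter=1000):
    """Eisenberg-Noe-style clearing vector. L[i, j] is the nominal
    liability of bank i to bank j; e[i] is bank i's outside assets.
    Iterates p <- min(p_bar, e + Pi^T p), starting from p = p_bar."""
    p_bar = L.sum(axis=1)                        # total nominal obligations
    with np.errstate(divide="ignore", invalid="ignore"):
        Pi = np.where(p_bar[:, None] > 0.0, L / p_bar[:, None], 0.0)
    p = p_bar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Hypothetical 3-bank network of direct exposures:
L = np.array([[0.0, 10.0, 5.0], [4.0, 0.0, 6.0], [3.0, 2.0, 0.0]])
e = np.array([2.0, 1.0, 8.0])
print(clearing_payments(L, e))
```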
Finally, I incorporate fire sales externalities into the model. In particular, I extend the analysis of systemic risk and the optimal capital requirements from a model with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates the liquidation rules used by the banks. I provide an optimization formulation whose solution yields the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
Abstract:
Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to utilize nearby existing radio signals to communicate. Because there is no need to generate their own energetic radio signal, the devices can benefit from a simple design, are very inexpensive, and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.
The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both of these sources of interference arise from the scattering of the transmitted signal off of objects, both stationary and moving, in the environment. Additionally, the measurement of the location of the backscatter device is negatively affected by both the clutter and the modulation of the signal return.
This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes to use run-length limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature-amplitude modulation (QAM) schemes and provides an increase in rate by up to a factor of two compared with previous methods.
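The specific run-length limited code is not given in the abstract; as a rough illustration of the rate cost it refers to, the sketch below counts the binary blocks of length n whose runs of identical symbols never exceed k and reports the resulting code rate (the block lengths and k are arbitrary choices):

```python
from itertools import product
from math import log2

def max_run(bits):
    """Length of the longest run of identical symbols in a sequence."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def rll_rate(n, k):
    """Rate of a block code using all length-n binary words whose
    runs never exceed k (a simplified run-length limited constraint)."""
    valid = sum(1 for word in product((0, 1), repeat=n) if max_run(word) <= k)
    return log2(valid) / n

# Rate cost of run-length limiting for a few block lengths, k = 3:
for n in (4, 8, 12):
    print(n, round(rll_rate(n, k=3), 3))
```

Limiting run lengths forces frequent symbol transitions, keeping the modulated signal's energy away from DC, where slowly varying self-interference and clutter concentrate, which is consistent with the separation the abstract exploits.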
Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, optimally low range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference between the localization and communication tasks exists. A phase-discriminating algorithm is proposed that makes it possible to separate the waveform coding from the communication coding upon reception and to achieve localization with up to 3 dB more signal energy than previously reported results.
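The abstract does not name the bi-phase codes used; as one concrete example of bi-phase waveform coding with provably minimal peak range sidelobes, this sketch computes the aperiodic autocorrelation of the classic length-13 Barker code (mainlobe 13, all sidelobes of magnitude at most 1):

```python
import numpy as np

# Length-13 Barker code: a bi-phase (BPSK) interrogating waveform whose
# aperiodic autocorrelation sidelobes all have magnitude <= 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")
zero_lag = len(barker13) - 1
mainlobe = acf[zero_lag]
sidelobes = np.concatenate([acf[:zero_lag], acf[zero_lag + 1:]])

print("mainlobe:", mainlobe)                          # 13
print("peak sidelobe:", np.abs(sidelobes).max())      # 1
print("peak-to-sidelobe ratio (dB):",
      20 * np.log10(mainlobe / np.abs(sidelobes).max()))
```

In range estimation, lower autocorrelation sidelobes mean weaker false peaks in the matched-filter output, which is why sidelobe level is the figure of merit the abstract highlights.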
The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.
Simulations comparing the performance of different codes corroborate the theoretical results and illustrate possible trade-offs between information rate and clutter mitigation, as well as among choices of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.