6 results for Partial game
at Duke University
Abstract:
Allocating resources optimally is a nontrivial task, especially when multiple
self-interested agents with conflicting goals are involved. This dissertation
uses techniques from game theory to study two classes of such problems:
allocating resources to catch agents that attempt to evade them, and allocating
payments to agents in a team in order to stabilize it. Besides discussing what
allocations are optimal from various game-theoretic perspectives, we also study
how to compute them efficiently and, when no efficient algorithm is found, what
computational hardness results can be proved.
The first class of problems is inspired by real-world applications such as the
TOEFL iBT test, course final exams, driver's license tests, and airport security
patrols. We call them test games and security games. This dissertation first
studies test games separately, and then proposes a framework of Catcher-Evader
games (CE games) that generalizes both test games and security games. We show
that the optimal test strategy can be efficiently computed for scored test
games, but it is hard to compute for many binary test games. Optimal Stackelberg
strategies are hard to compute for CE games, but we give an empirically
efficient algorithm for computing their Nash equilibria. We also prove that the
Nash equilibria of a CE game are interchangeable.
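The abstract does not spell out the equilibrium computation itself; as background only, here is a minimal sketch that computes a maximin (Nash) strategy for a toy zero-sum catcher-evader matrix game by linear programming. The zero-sum restriction and the payoff matrix are illustrative assumptions; the dissertation's CE games are more general.

```python
# A minimal sketch: maximin (Nash) strategy for a toy zero-sum
# "catcher vs. evader" matrix game, solved as a linear program.
# The payoff matrix below is illustrative, not from the dissertation.
import numpy as np
from scipy.optimize import linprog

# Row player = catcher; A[i, j] is the catcher's payoff when the
# catcher inspects location i and the evader uses location j.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])  # catch only when locations match

m, n = A.shape
# Variables: x_1..x_m (catcher's mixed strategy) and v (game value).
# Maximize v  s.t.  sum_i x_i * A[i, j] >= v for every evader column j.
c = np.zeros(m + 1)
c[-1] = -1.0                                 # linprog minimizes, so min -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i x_i = 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("catcher's maximin strategy:", np.round(x, 3))  # uniform here
print("game value:", round(v, 3))                     # 1/3
```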
The second class of problems involves how to split a reward that is collectively
obtained by a team: for example, how should a startup distribute its shares, and
what salaries should an enterprise pay its employees? Several stability-based
solution concepts in cooperative game theory, such as the core, the least core,
and the nucleolus, are well suited to this purpose when the goal is to avoid
coalitions of agents breaking off. We show that some of these solution concepts
can be justified as the most stable payments under noise. Moreover, by adjusting
the noise models (to be arguably more realistic), we obtain new solution
concepts including the partial nucleolus, the multiplicative least core, and the
multiplicative nucleolus. We then study the computational complexity of those
solution concepts under the constraint of superadditivity. Our result is based
on what we call Small-Issues-Large-Team games and it applies to popular
representation schemes such as MC-nets.
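For the least core mentioned above, the standard linear-programming formulation minimizes the largest coalition excess. Below is a minimal sketch on a made-up three-player game; the characteristic function is an illustrative assumption, not from the dissertation.

```python
# A minimal sketch: least-core payments for a small cooperative game,
# computed by linear programming. The characteristic function v is an
# illustrative assumption (its core is empty, so the least core needs
# a positive slack eps).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

players = [0, 1, 2]
v = {(0,): 0, (1,): 0, (2,): 0,
     (0, 1): 5, (0, 2): 5, (1, 2): 5,
     (0, 1, 2): 6}

n = len(players)
# Variables: payments x_0..x_{n-1} and the excess bound eps.
# Minimize eps  s.t.  x(S) >= v(S) - eps for every proper coalition S,
# and x(N) = v(N) (budget balance).
coalitions = [S for r in range(1, n) for S in combinations(players, r)]
c = np.zeros(n + 1)
c[-1] = 1.0                                   # minimize eps
A_ub, b_ub = [], []
for S in coalitions:
    row = np.zeros(n + 1)
    row[list(S)] = -1.0                       # -x(S) - eps <= -v(S)
    row[-1] = -1.0
    A_ub.append(row)
    b_ub.append(-v[S])
A_eq = np.ones((1, n + 1))
A_eq[0, -1] = 0.0                             # eps is not a payment
b_eq = [v[(0, 1, 2)]]
bounds = [(None, None)] * (n + 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("least-core payments:", np.round(res.x[:n], 3))   # (2, 2, 2)
print("least-core eps:", round(res.x[-1], 3))           # 1.0
```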
Abstract:
In the event of a terrorist-mediated attack in the United States using radiological or improvised nuclear weapons, it is expected that hundreds of thousands of people could be exposed to life-threatening levels of ionizing radiation. We have recently shown that genome-wide expression analysis of the peripheral blood (PB) can generate gene expression profiles that predict radiation exposure and distinguish the dose level of exposure following total body irradiation (TBI). However, in the event of a radiation mass-casualty scenario, many victims will have heterogeneous exposure due to partial shielding, and it is unknown whether PB gene expression profiles would be useful in predicting the status of partially irradiated individuals. Here, we identified gene expression profiles in the PB that were characteristic of anterior hemibody-, posterior hemibody-, and single limb-irradiation at 0.5 Gy, 2 Gy, and 10 Gy in C57BL/6 mice. These PB signatures predicted the radiation status of partially irradiated mice with a high level of accuracy (range 79-100%) compared to non-irradiated mice. Interestingly, PB signatures of partial body irradiation were poorly predictive of radiation status by site of injury (range 16-43%), suggesting that the PB molecular response to partial body irradiation was anatomic-site specific. Importantly, PB gene signatures generated from TBI-treated mice failed completely to predict the radiation status of partially irradiated animals or non-irradiated controls. These data demonstrate that partial body irradiation, even to a single limb, generates a characteristic PB signature of radiation injury, and thus may necessitate the use of multiple signatures, both partial body and total body, to accurately assess the status of an individual exposed to radiation.
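At its core, the prediction task described above is supervised classification of expression profiles. The following is a minimal sketch of that generic idea on synthetic data; the gene counts, effect sizes, and random-forest classifier are illustrative assumptions, not the study's actual signatures or pipeline.

```python
# A minimal sketch: predicting radiation status from expression
# profiles as a generic supervised-classification task. All data here
# is synthetic; this is not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_genes = 20, 500
classes = ["non-irradiated", "limb-irradiated", "hemibody-irradiated"]

baseline = rng.normal(0.0, 1.0, n_genes)      # shared baseline expression
X, y = [], []
for label in range(len(classes)):
    # Each irradiated class perturbs its own small gene set, mimicking
    # an anatomic-site-specific signature (illustrative assumption).
    signature = rng.choice(n_genes, 25, replace=False)
    for _ in range(n_per_class):
        profile = baseline + rng.normal(0.0, 1.0, n_genes)
        if label > 0:
            profile[signature] += 2.0
        X.append(profile)
        y.append(label)

X, y = np.array(X), np.array(y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```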
Abstract:
The rivalry between the men's basketball teams of Duke University and the University of North Carolina-Chapel Hill (UNC) is one of the most storied traditions in college sports. A subculture of students at each university forms social bonds with fellow fans, develops expertise in college basketball rules, team statistics, and individual players, and self-identifies as a fan group. The present study capitalized on the high personal investment of these fans and the strong affective tenor of a Duke-UNC basketball game to examine the neural correlates of emotional memory retrieval for a complex sporting event. Male fans watched a competitive, archived game in a social setting. During a subsequent functional magnetic resonance imaging session, participants viewed video clips depicting individual plays of the game that ended with the ball being released toward the basket. For each play, participants recalled whether or not the shot went into the basket. Hemodynamic signal changes time-locked to correct memory decisions were analyzed as a function of emotional intensity and valence, according to the fan's perspective. Results showed intensity-modulated retrieval activity in midline cortical structures, sensorimotor cortex, the striatum, and the medial temporal lobe, including the amygdala. Positively valent memories specifically recruited processing in dorsal frontoparietal regions, with additional activity in the insula and medial temporal lobe for positively valent shots recalled with high confidence. This novel paradigm reveals how brain regions implicated in emotion, memory retrieval, visuomotor imagery, and social cognition contribute to the recollection of specific plays in the mind of a sports fan.
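The analysis described above, hemodynamic responses modeled as a function of emotional intensity, is an instance of a parametric-modulation general linear model. Here is a schematic sketch on simulated data; the HRF shape, event timings, and signal are all illustrative assumptions, not the study's design.

```python
# A schematic parametric-modulation GLM: regress a simulated voxel time
# series on an event regressor and an intensity-modulated regressor,
# both convolved with a canonical-style HRF. All numbers are made up.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200
hrf_t = np.arange(0, 32, TR)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)  # double-gamma-like

onsets = np.arange(10, 380, 20.0)            # one recalled play every 20 s
rng = np.random.default_rng(1)
intensity = rng.uniform(0, 1, len(onsets))   # per-play emotional intensity

def regressor(weights):
    stick = np.zeros(n_scans)
    for onset, w in zip(onsets, weights):
        stick[int(onset / TR)] += w
    return np.convolve(stick, hrf)[:n_scans]

main = regressor(np.ones_like(intensity))
modulated = regressor(intensity - intensity.mean())  # mean-centered

# Simulated voxel: responds to all events, more strongly to intense ones.
y = 1.0 * main + 0.8 * modulated + rng.normal(0, 0.2, n_scans)

X = np.column_stack([np.ones(n_scans), main, modulated])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("betas (intercept, main, intensity):", np.round(beta, 2))
```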
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers and large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
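For concreteness, the flow-time of a job is its completion time minus its release time. The toy single-machine sketch below contrasts FIFO with SRPT (shortest remaining processing time), which minimizes average flow-time on a single machine; the job instance is made up.

```python
# A minimal sketch: average flow-time (completion minus release) of
# jobs on one machine under FIFO vs. SRPT. The instance is illustrative.
import heapq

jobs = [(0, 10), (1, 2), (2, 2)]     # (release_time, processing_time)

def avg_flow_time(jobs, preemptive_srpt):
    t, done, flows, i = 0, 0, [], 0
    pending = sorted(jobs)           # sorted by release time
    heap = []                        # entries: [key, release, remaining]
    while done < len(jobs):
        while i < len(pending) and pending[i][0] <= t:
            r, p = pending[i]
            key = p if preemptive_srpt else r   # SRPT vs. FIFO priority
            heapq.heappush(heap, [key, r, p])
            i += 1
        if not heap:                 # idle until the next release
            t = pending[i][0]
            continue
        job = heapq.heappop(heap)
        job[2] -= 1                  # run for one unit; preemption points
        t += 1                       # then align with job releases
        if preemptive_srpt:
            job[0] = job[2]          # update remaining-time priority
        if job[2] == 0:
            flows.append(t - job[1])
            done += 1
        else:
            heapq.heappush(heap, job)
    return sum(flows) / len(jobs)

print("FIFO average flow-time:", avg_flow_time(jobs, False))  # 11.0
print("SRPT average flow-time:", avg_flow_time(jobs, True))   # ~6.33
```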
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of this thesis fall into the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and for minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, between fairness and non-clairvoyant scheduling, and between the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing total flow-time plus energy in the online, resource-augmentation model, for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that, for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear/convex programming duality for bounding the price of anarchy under general equilibrium concepts such as coarse correlated equilibria (a toy price-of-anarchy example follows this list).
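As a toy illustration of the price of anarchy (not the thesis's bounded-stretch policies or its duality framework): in a load-balancing game where each selfish job picks a machine and pays that machine's load, the worst pure Nash equilibrium's makespan can exceed the optimum. The instance below is an illustrative assumption.

```python
# A minimal sketch: price of anarchy in a toy load-balancing game.
# Each job picks one machine and its cost is the load of that machine;
# the social cost is the makespan. Brute force over all assignments.
from itertools import product

jobs = [2, 2, 1, 1]          # processing times (illustrative)
n_machines = 2

def loads(assign):
    load = [0] * n_machines
    for size, machine in zip(jobs, assign):
        load[machine] += size
    return load

def is_pure_nash(assign):
    load = loads(assign)
    for j, m in enumerate(assign):
        for m2 in range(n_machines):
            # Job j deviates if the target machine's load after moving
            # (load[m2] + jobs[j]) strictly beats its current load[m].
            if m2 != m and load[m2] + jobs[j] < load[m]:
                return False
    return True

all_assignments = list(product(range(n_machines), repeat=len(jobs)))
opt = min(max(loads(a)) for a in all_assignments)
worst_ne = max(max(loads(a)) for a in all_assignments if is_pure_nash(a))
print("OPT makespan:", opt)                     # 3: split as {2,1},{2,1}
print("worst Nash makespan:", worst_ne)         # 4: {2,2},{1,1} is stable
print("price of anarchy:", worst_ne / opt)      # 4/3
```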
Abstract:
In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively, addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2*-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed.
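As schematic background on the artifact mechanism (this is not the paper's multischeme reconstruction): partial-Fourier imaging samples only a fraction of the k-space lines, so an echo shifted toward, or out of, the unsampled region corrupts the zero-filled reconstruction. A minimal numpy simulation follows, with the image size, sampling fraction, and shift all as illustrative assumptions.

```python
# A minimal sketch of why an eddy-current-induced echo shift hurts
# partial-Fourier reconstruction: sample 5/8 of the k-space lines,
# zero-fill, and compare reconstructions with and without a shift
# (a rough analogue of Type 1-style signal loss). Schematic only.
import numpy as np

N = 128
img = np.zeros((N, N))
img[40:90, 40:90] = 1.0                            # toy square phantom
k = np.fft.fftshift(np.fft.fft2(img))              # centered k-space

def partial_fourier_recon(kspace, shift_lines=0):
    # Echo shift modeled as a circular shift along phase encoding.
    kshift = np.roll(kspace, shift_lines, axis=0)
    sampled = np.zeros_like(kshift)
    sampled[: 5 * N // 8, :] = kshift[: 5 * N // 8, :]  # keep 5/8 of lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(sampled)))

ref = partial_fourier_recon(k, shift_lines=0)
lost = partial_fourier_recon(k, shift_lines=20)    # echo pushed out of
                                                   # the sampled region
print("mean signal, no shift:      %.3f" % ref.mean())
print("mean signal, 20-line shift: %.3f" % lost.mean())  # severe loss
```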