Abstract:
In the literature, the outcome of contests is interpreted either as win probabilities or as shares of the prize. With this in mind, we examine two approaches to contest success functions. In the first, we analyze the implications of contestants' incomplete information concerning the "type" of the contest administrator. While in the case of two contestants this approach can rationalize prominent contest success functions, we show that it runs into difficulties when there are more agents. Our second approach interprets contest success functions as sharing rules and establishes a connection to bargaining and claims problems that is independent of the number of contestants. Both approaches provide foundations for popular contest success functions and guidelines for the definition of new ones. Keywords: Endogenous Contests, Contest Success Function. JEL Classification: C72 (Noncooperative Games), D72 (Economic Models of Political Processes: Rent-Seeking, Elections), D74 (Conflict; Conflict Resolution; Alliances).
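For reference, the most prominent contest success function in this literature is the Tullock (ratio) form, which maps contestant i's effort into either a win probability or a prize share, matching the two interpretations the abstract mentions. This is the standard textbook example, not necessarily the exact family the paper axiomatizes:

```latex
% Tullock (ratio-form) contest success function: the win probability or
% prize share of contestant i, given efforts x_1, ..., x_n and a
% decisiveness parameter r > 0 (r = 1 gives the simple lottery form).
p_i(x_1, \dots, x_n) = \frac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}}, \qquad r > 0.
```

Under the sharing-rule interpretation, the same expression divides the prize in proportion to (a power of) effort rather than randomizing over a winner.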
Abstract:
OBJECTIVE: Incomplete compliance is one of several possible causes of uncontrolled hypertension. Yet non-compliance remains largely unrecognized and is falsely interpreted as treatment resistance, because it is difficult to confirm or exclude objectively. The goal of this study was to evaluate the potential benefits of electronic monitoring of drug compliance in the management of patients with resistant hypertension. METHODS: Forty-one hypertensive patients resistant to a three-drug regimen (average blood pressure 156/106 +/- 23/11 mmHg, mean +/- SD) were studied prospectively. They were informed that for the next 2 months their currently prescribed drugs would be provided in electronic monitors, without any change in treatment, so as to provide the treating physician with a measure of their compliance. Thereafter, patients were offered the possibility of prolonging the monitoring of compliance for another 2-month period, during which treatment was adapted if necessary. RESULTS: Monitoring of compliance alone was associated with a significant improvement of blood pressure at 2 months (145/97 +/- 20/15 mmHg, P < 0.01). During monitoring, blood pressure was normalized (systolic < 140 mmHg or diastolic < 90 mmHg) in one-third of the patients, and insufficient compliance was unmasked in another 20%. When analysed according to tertiles of compliance, patients with the lowest compliance exhibited significantly higher achieved diastolic blood pressures (P = 0.04). In 30 patients, compliance was monitored for up to 4 months and drug therapy was adapted whenever necessary. In these patients, a further significant decrease in blood pressure was obtained (from 150/100 +/- 18/15 to 143/94 +/- 22/11 mmHg, P = 0.04/0.02).
CONCLUSIONS: These results suggest that objective monitoring of compliance using electronic devices may be a useful step in the management of patients with refractory hypertension, as it enables physicians to take rational decisions based on reliable and objective data of drug compliance and hence to improve blood pressure control.
Abstract:
We show that a flex-price two-sector open economy DSGE model can explain the low degree of international risk sharing and the exchange rate disconnect. We use a suite of model evaluation measures and examine the role of (i) traded and non-traded sectors; (ii) financial market incompleteness; (iii) preference shocks; (iv) deviations from the UIP condition for the exchange rate; and (v) creditor status in net foreign assets. We find that there is a good case for both traded and non-traded productivity shocks, as well as UIP deviations, in explaining the puzzles.
Abstract:
INTRODUCTION: Many clinical practice guidelines (CPG) have been published in response to the development of the concept of "evidence-based medicine" (EBM) and as a solution to the difficulty of synthesizing and selecting the relevant medical literature. Given the proliferation of new CPG, the question of choice arises: which CPG should be followed in a given clinical situation? Evaluating the quality of a CPG is of primary importance, but until recently there was no standardized tool for evaluating or comparing the quality of CPG. An instrument for evaluating the quality of CPG, called "AGREE" (Appraisal of Guidelines for Research and Evaluation), was validated in 2002. AIM OF THE STUDY: The six principal CPG concerning the treatment of schizophrenia are compared using the AGREE instrument: (1) "the Agence nationale pour le développement de l'évaluation médicale (ANDEM) recommendations"; (2) "The American Psychiatric Association (APA) practice guideline for the treatment of patients with schizophrenia"; (3) "The quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia"; (4) "The schizophrenia patient outcomes research team (PORT) treatment recommendations"; (5) "The Texas medication algorithm project (T-MAP)"; and (6) "The expert consensus guideline for the treatment of schizophrenia". RESULTS: The results of our study were then compared with those of a similar investigation published in 2005, which covered 24 CPG addressing the treatment of schizophrenia. That study also applied the AGREE tool, used by two investigators. In general, the scores of the two studies differed little and the two global evaluations of the CPG converged; however, each of the six CPG has room for improvement. DISCUSSION: The rigour of elaboration of the six CPG was, in general, average.
The consideration of the opinions of potential users was incomplete, and greater care in the presentation of the recommendations would facilitate their clinical use. Moreover, the authors gave little consideration to the applicability of the recommendations. CONCLUSION: Overall, two CPG are rated as strongly recommended: "the quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia" and "the T-MAP".
Abstract:
The recent identification and molecular characterization of tumor-associated antigens recognized by tumor-reactive CD8+ T lymphocytes has led to the development of antigen-specific immunotherapy of cancer. Among other approaches, clinical studies have been initiated to assess the in vivo immunogenicity of tumor antigen-derived peptides in cancer patients. In this study, we have analyzed the CD8+ T cell response of an ocular melanoma patient to a vaccine composed of four different tumor antigen-derived peptides administered simultaneously in incomplete Freund's adjuvant (IFA). Peptide NY-ESO-1(157-165) was remarkably immunogenic and induced a CD8+ T cell response detectable ex vivo at an early time point of the vaccination protocol. A CD8+ T cell response to the peptide analog Melan-A(26-35 A27L) was also detectable ex vivo at a later time point, whereas CD8+ T cells specific for peptide tyrosinase(368-376) were detected only after in vitro peptide stimulation. No detectable CD8+ T cell response to peptide gp100(457-466) was observed. Vaccine-induced CD8+ T cell responses declined rapidly after the initial response but increased again after further peptide injections. In addition, tumor antigen-specific CD8+ T cells were isolated from a vaccine injection site biopsy sample. Importantly, vaccine-induced CD8+ T cells specifically lysed tumor cells expressing the corresponding antigen. Together, these data demonstrate that simultaneous immunization with multiple tumor antigen-derived peptides can result in the elicitation of multiepitope-directed CD8+ T cell responses that are reactive against antigen-expressing tumors and able to infiltrate antigen-containing peripheral sites.
Abstract:
One of the striking aspects of recent sovereign debt restructurings is that, conditional on default, delay length is positively correlated with the size of the haircut. In this paper, we develop an incomplete information model of debt restructuring in which the prospect of uncertain economic recovery and signalling about sustainability concerns together generate multi-period delay. Our analysis shows a correlation between delay length and the size of the haircut, a result supported by the evidence. We show that the Pareto ranking of equilibria, conditional on default, can be altered once we take into account the ex ante incentives of the sovereign debtor. We use our results to evaluate proposals advocated to ensure the orderly resolution of sovereign debt crises.
Abstract:
We propose an elementary theory of wars fought by fully rational contenders. Two parties play a Markov game that combines stages of bargaining with stages where one side has the ability to impose surrender on the other. Under uncertainty and incomplete information, long confrontations occur in the unique equilibrium of the game: war arises when reality disappoints initial (rational) optimism, and it persists longer when both agents are optimists but reality proves both wrong. Bargaining proposals that are rejected initially might eventually be accepted after several periods of confrontation. We provide an explicit computation of the equilibrium, evaluating the probability of war and its expected losses as a function of i) the costs of confrontation, ii) the asymmetry of the split imposed under surrender, and iii) the contenders' strengths at attack and defense. Changes in these parameters display non-monotonic effects.
Abstract:
In this paper, we consider a producer who faces uninsurable business risks due to incomplete spanning of asset markets over stochastic goods market outcomes, and examine how the presence of these uninsurable business risks affects the producer's optimal pricing and production behaviour. We find three key (inter-related) results: (1) optimal prices in goods markets comprise a 'markup' reflecting the extent of market power and a 'premium' given by the shadow price of the risks; (2) the price inertia we observe in the data can be explained by the joint working of the risk-neutralization motive and the marginal-cost-equalization condition; (3) the relative responsiveness of risk neutralization and marginal cost equalization at the optimum is central to the cyclical variation of markups, providing a consistent explanation for both procyclical and countercyclical movements. Through these results, the proposed theory of the producer carries important implications, both micro and macro, and both empirical and theoretical.
Abstract:
Proponents of proportional electoral rules often argue that majority rule depresses turnout and may lower welfare due to the 'tyranny of the majority' problem. The present paper studies the impact of electoral rules on turnout and social welfare. We analyze a model of instrumental voting where citizens have private information over their individual cost of voting and over the alternative they prefer. The electoral rule used to select the winning alternative is a combination of majority rule and proportional rule. Results show that the above arguments against majority rule do not hold in this setup. Social welfare and turnout increase with the weight that the electoral rule gives to majority rule when the electorate is expected to be split, and they are independent of the electoral rule employed when the expected size of the minority group tends to zero. However, more proportional rules can increase turnout within the minority group, and this effect is stronger the smaller the minority group. We therefore conclude that majority rule fosters overall turnout and increases social welfare, whereas proportional rule fosters the participation of minorities.
Abstract:
This paper is a contribution to the growing literature on constrained inefficiencies in economies with financial frictions. The purpose is to present two simple examples, inspired by the stochastic models in Gersbach-Rochet (2012) and Lorenzoni (2008), of deterministic environments in which such inefficiencies arise through credit constraints. Common to both examples is a pecuniary externality, which operates through an asset price. In the second example, a simple transfer between two groups of agents can bring about a Pareto improvement. In a first best economy, there are no pecuniary externalities because marginal productivities are equalised. But when agents face credit constraints, there is a wedge between their marginal productivities and those of the non-credit-constrained agents. The wedge is the source of the pecuniary externality: economies with these kinds of imperfections in credit markets are not second-best efficient. This is akin to the constrained inefficiency of an economy with incomplete markets, as in Geanakoplos and Polemarchakis (1986).
Abstract:
Although standard incomplete market models can account for the magnitude of the rise in consumption inequality over the life cycle, they generate unrealistically concave age profiles of consumption inequality and unrealistically low wealth inequality. In this paper, I investigate the role of discount rate heterogeneity in consumption inequality in the context of incomplete market life cycle models. The distribution of discount rates is estimated using moments from the wealth distribution. I find that the model with heterogeneous income profiles (HIP) and discount rate heterogeneity can successfully account for the empirical age profile of consumption inequality, both in its magnitude and in its non-concave shape. Generating realistic wealth inequality, the simulated model also highlights the importance of ex ante heterogeneity as a main source of lifetime inequality.
Abstract:
In this study we elicit agents' prior information sets regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects, relative to their pre-existing information levels, can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
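The three updating rules being tested can be sketched on a binary state. This is a minimal illustration with hypothetical numbers, not the authors' experimental design: Bayesian updating applies Bayes' rule in full, incomplete updating moves the belief only a fraction of the way toward the Bayesian posterior, and non-informative updating leaves the prior untouched:

```python
# Sketch of the three updating rules on a binary state ("good" vs. "bad"
# quality of the public good). All numbers below are hypothetical.

def bayes_posterior(prior, lik_good, lik_bad):
    """Full Bayesian update of P(good) after observing one signal,
    where lik_good/lik_bad are the signal likelihoods under each state."""
    num = prior * lik_good
    return num / (num + (1 - prior) * lik_bad)

def incomplete_posterior(prior, lik_good, lik_bad, weight):
    """Incomplete updating: move only a fraction `weight` of the way
    from the prior toward the full Bayesian posterior (weight = 1 is
    Bayesian, weight = 0 is non-informative)."""
    return prior + weight * (bayes_posterior(prior, lik_good, lik_bad) - prior)

prior = 0.5                                        # flat prior
bayesian = bayes_posterior(prior, 0.8, 0.4)        # signal favors "good"
partial = incomplete_posterior(prior, 0.8, 0.4, 0.5)
noninformative = prior                             # signal ignored

print(round(bayesian, 3), round(partial, 3), noninformative)  # 0.667 0.583 0.5
```

Comparing elicited posteriors against these benchmarks is one way to classify which rule a respondent's updating most resembles.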
Abstract:
In this paper we study decision making in situations where the individual's preferences are not assumed to be complete. First, we identify conditions that are necessary and sufficient for choice behavior in general domains to be consistent with maximization of a possibly incomplete preference relation. In this model of maximally dominant choice, the agent defers/avoids choosing at those and only those menus where a most preferred option does not exist. This allows for simple explanations of conflict-induced deferral and choice overload. It also suggests a criterion for distinguishing between indifference and incomparability based on observable data. A simple extension of this model also incorporates decision costs and provides a theoretical framework that is compatible with the experimental design we propose for eliciting possibly incomplete preferences in the lab. The design builds on the introduction of monetary costs that induce choice of a most preferred feasible option if one exists and deferral otherwise. Based on this design, we found evidence suggesting that a quarter of the subjects in our study had incomplete preferences, and that these subjects made significantly more consistent choices than a group of subjects who were forced to choose. The latter effect, however, is mitigated once data on indifferences are accounted for.
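The maximally dominant choice rule described above can be sketched in a few lines. This is an illustrative sketch with a hypothetical menu and preference pairs, not the paper's formal model: the agent picks an option that is weakly preferred to everything else in the menu, and defers when no such option exists:

```python
# Maximally dominant choice with a possibly incomplete preference relation.
# `prefers` holds weakly-preferred pairs (a, b), read "a is at least as
# good as b"; if neither (a, b) nor (b, a) is present, a and b are
# incomparable. Menu and pairs below are hypothetical.

def choose_or_defer(menu, prefers):
    """Return a most preferred option if one exists, else None (defer)."""
    for x in menu:
        if all((x, y) in prefers for y in menu if y != x):
            return x
    return None  # no dominant option exists: the agent defers

prefers = {("a", "b"), ("a", "c")}  # a dominates b and c; b, c incomparable

print(choose_or_defer(["a", "b", "c"], prefers))  # a
print(choose_or_defer(["b", "c"], prefers))       # None (deferral)
```

Deferral here occurs exactly on menus containing only incomparable options, which is the observable signature the paper uses to separate incomparability from indifference.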
Abstract:
There are two ways of creating incentives for interacting agents to behave in a desired way. One is by providing appropriate payoff incentives, which is the subject of mechanism design. The other is by choosing the information that agents observe, which we refer to as information design. We consider a model of symmetric information where a designer chooses and announces the information structure about a payoff-relevant state. The interacting agents observe the signal realizations and take actions which affect the welfare of both the designer and the agents. We characterize the general finite approach to deriving the optimal information structure for the designer, namely the one that maximizes the designer's ex ante expected utility subject to agents playing a Bayes Nash equilibrium. We then apply the general approach to a symmetric two-state, two-agent, two-action environment in a parameterized underlying game and fully characterize the optimal information structure: it is never strictly optimal for the designer to use conditionally independent private signals; the optimal information structure may be a public signal or may consist of correlated private signals. Finally, we examine how changes in the underlying game affect the designer's maximum payoff. This exercise provides a joint mechanism/information design perspective.
Abstract:
Interaction, the act of mutual influence between two or more individuals, is an essential part of daily life and economic decisions. Yet the micro-foundations of interaction remain unexplored. This paper presents a first attempt in this direction. We study a decision procedure for interacting agents. According to our model, interaction occurs because individuals seek influence on those issues that they cannot solve on their own. Following a choice-theoretic approach, we provide simple properties that help detect interacting individuals. In this case, revealed preference analysis recovers not only the underlying preferences but also the influence acquired. Our baseline model features two interacting individuals, though we extend the analysis to multi-individual environments.