851 results for Competing Risk Model


Relevance: 30.00%

Abstract:

Background: Evidence for a better performance of highly atherogenic versus traditional lipid parameters for coronary heart disease (CHD) risk prediction is conflicting. We investigated the association between the ratios of small dense low-density lipoprotein (LDL)/apolipoprotein A-I, apolipoprotein B/apolipoprotein A-I and total cholesterol/HDL-cholesterol and CHD events in patients on combination antiretroviral therapy (cART).

Methods: Case-control study nested within the Swiss HIV Cohort Study: for each cART-treated patient with a first coronary event between April 1, 2000 and July 31, 2008 (case) we selected four control patients who (1) were without coronary events until the date of the event of the index case, (2) had a plasma sample within ±30 days of the sample date of the respective case, (3) received cART, and (4) were matched for age, gender and smoking status. Lipoproteins were measured by ultracentrifugation. Conditional logistic regression models were used to estimate the independent associations of the different lipid ratios with the occurrence of coronary events.

Results: In total, 98 cases (19 fatal myocardial infarctions [MI] and 79 non-fatal coronary events [53 definite MIs, 15 possible MIs and 11 coronary angioplasties or bypasses]) were matched with 392 controls. Cases were more often injecting drug users, less likely to be virologically suppressed and more often on abacavir-containing regimens. In separate multivariable models controlling for total cholesterol, triglycerides, HDL-cholesterol, systolic blood pressure, abdominal obesity, diabetes and family history of CHD, small dense-LDL and apolipoprotein B were each statistically significantly associated with CHD events (per 1 mg/dl increase: odds ratio [OR] 1.05, 95% CI 1.00-1.11 and OR 1.15, 95% CI 1.01-1.31, respectively), but the ratios of small dense-LDL/apolipoprotein A-I (OR 1.26, 95% CI 0.95-1.67), apolipoprotein B/apolipoprotein A-I (OR 1.02, 95% CI 0.97-1.07) and HDL-cholesterol/total cholesterol (OR 0.99, 95% CI 0.98-1.00) were not. Following adjustment for HIV-related and cART variables these associations were weakened in each model: apolipoprotein B (OR 1.27, 95% CI 1.00-1.30), small dense-LDL (OR 1.04, 95% CI 0.99-1.20), small dense-LDL/apolipoprotein A-I (OR 1.17, 95% CI 0.87-1.58), apolipoprotein B/apolipoprotein A-I (OR 1.02, 95% CI 0.97-1.07) and total cholesterol/HDL-cholesterol (OR 0.99, 95% CI 0.99-1.00).

Conclusions: In patients receiving cART, small dense-LDL and apolipoprotein B showed the strongest associations with CHD events in models controlling for traditional CHD risk factors including total cholesterol and triglycerides. Adding the small dense-LDL/apolipoprotein A-I, apolipoprotein B/apolipoprotein A-I and total cholesterol/HDL-cholesterol ratios did not further improve models of lipid parameters and their associations with increased risk of CHD events.
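As a concrete illustration of this study design, below is a minimal sketch of fitting a conditional logistic regression for a 1:4 matched case-control analysis in Python with statsmodels. The file name and column names are hypothetical stand-ins; this is not the authors' actual pipeline.

```python
# Illustrative sketch only: conditional logistic regression for a matched
# case-control design (1 case : 4 controls). All names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("matched_lipids.csv")  # hypothetical file: one row per subject

# 'case' = 1 for the index case, 0 for a matched control; 'matchset' identifies
# each case together with its four controls (matching already accounts for
# age, gender and smoking status, so they are not covariates here).
exog = df[["sd_ldl", "apob", "total_chol", "triglycerides", "hdl_chol"]]
fit = ConditionalLogit(df["case"], exog, groups=df["matchset"]).fit()

# Odds ratios per 1-unit increase, with 95% confidence intervals.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```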

Relevance: 30.00%

Abstract:

The emphasis on integrated care implies new incentives that promote coordination between levels of care. Considering a population as a whole, the resource allocation system has to adapt to this environment. This research aims to design a model that allows for morbidity-related prospective and concurrent capitation payment. The model can be applied in publicly funded health systems and managed competition settings.

Methods: We analyze the application of hybrid risk adjustment versus either prospective or concurrent risk adjustment formulae in the context of funding total health expenditures for the population of an integrated healthcare delivery organization in Catalonia during the years 2004 and 2005.

Results: The hybrid model reimburses integrated care organizations while avoiding excessive risk transfer and maximizing incentives for efficiency in the provision of care. At the same time, it eliminates incentives for risk selection for a specific set of high-risk individuals through the use of concurrent reimbursement, in order to assure a proper classification of patients.

Conclusion: Prospective risk adjustment is used to transfer the financial risk to the health provider and therefore provide incentives for efficiency. Within the context of a National Health System, such a transfer of financial risk is illusory, and the government has to cover the deficits. Hybrid risk adjustment is useful to provide the right combination of incentives for efficiency and an appropriate level of risk transfer for integrated care organizations.
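A toy sketch of the hybrid capitation idea described above (not the paper's actual formula): most individuals are reimbursed prospectively from prior-year morbidity groups, while a designated high-risk set is reimbursed concurrently. All rates and group names are invented for illustration.

```python
# Toy hybrid capitation: prospective payment by default, concurrent payment
# for a designated set of high-risk (high-cost) morbidity groups.

def hybrid_capitation(person, prospective_rate, concurrent_rate, high_risk):
    """Return the capitation payment for one individual.

    prospective_rate: dict mapping prior-year morbidity group -> payment
    concurrent_rate:  dict mapping current-year morbidity group -> payment
    high_risk:        set of current-year groups paid concurrently
    """
    if person["current_group"] in high_risk:
        # Concurrent reimbursement removes the gain from avoiding these
        # patients, eliminating incentives for risk selection.
        return concurrent_rate[person["current_group"]]
    # Prospective reimbursement transfers financial risk to the provider,
    # preserving incentives for efficiency.
    return prospective_rate[person["prior_group"]]

population = [
    {"prior_group": "low", "current_group": "low"},
    {"prior_group": "low", "current_group": "transplant"},  # high-cost event
]
budget = sum(
    hybrid_capitation(p, {"low": 900.0},
                      {"low": 900.0, "transplant": 250_000.0},
                      high_risk={"transplant"})
    for p in population
)
print(budget)
```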

Relevance: 30.00%

Abstract:

In this paper we address a problem arising in risk management, namely the study of price variations of different contingent claims in the Black-Scholes model due to anticipating future events. The method we propose to use is an extension of the classical Vega index, i.e. the price derivative with respect to the constant volatility, in the sense that we perturb the volatility in different directions. This directional derivative, which we denote the local Vega index, will serve as the main object in the paper, and one of our purposes is to relate it to the classical Vega index. We show that for all contingent claims studied in this paper the local Vega index can be expressed as a weighted average of the perturbation in volatility. In the particular case where the interest rate and the volatility are constant and the perturbation is deterministic, the local Vega index is an average of this perturbation multiplied by the classical Vega index. We also study the well-known goal problem of maximizing the probability of a perfect hedge and show that the speed of convergence is in fact dependent on the local Vega index.
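For reference, here is the classical Vega that the local Vega index generalizes: the derivative of the Black-Scholes price with respect to a constant volatility. A minimal sketch, with parameter values chosen arbitrarily; the directional (local) construction in the paper is more general.

```python
# Classical Black-Scholes Vega: dPrice/dSigma for a European option.
from math import exp, log, pi, sqrt

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_vega(spot, strike, rate, sigma, maturity):
    """Vega of a European call/put (identical for both) under Black-Scholes."""
    d1 = ((log(spot / strike) + (rate + 0.5 * sigma**2) * maturity)
          / (sigma * sqrt(maturity)))
    return spot * norm_pdf(d1) * sqrt(maturity)

# For a constant deterministic perturbation eps of the volatility, the
# abstract's special case says the first-order price change is, to leading
# order, the (average) perturbation times the classical Vega:
eps = 0.01
print(eps * bs_vega(spot=100.0, strike=100.0, rate=0.02, sigma=0.2, maturity=1.0))
```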

Relevance: 30.00%

Abstract:

We analyze risk sharing and fiscal spending in a two-region model with complete markets. Fiscal policy determines tax rates for each state of nature. When fiscal policy is decentralized, it can be used to affect the prices of securities. To manipulate prices to their benefit, regions choose pro-cyclical fiscal spending. This leads to incomplete risk sharing, despite the existence of complete markets and the absence of aggregate risk. When a fiscal union centralizes fiscal policy, securities prices can no longer be manipulated and complete risk sharing ensues. If regions are homogeneous, median-income residents of both regions prefer the fiscal union. If they are heterogeneous, the median resident of the rich region prefers the decentralized setting.

Relevance: 30.00%

Abstract:

We derive an international asset pricing model that assumes local investors have preferences of the type "keeping up with the Joneses." In an international setting investors compare their current wealth with that of their peers who live in the same country. In the process of inferring the country's average wealth, investors incorporate information from the domestic market portfolio. In equilibrium, this gives rise to a multifactor CAPM where, together with the world market price of risk, there exist country-specific prices of risk associated with deviations from the country's average wealth level. The model performs significantly better, in terms of explaining the cross-section of returns, than the international CAPM. Moreover, the results are robust, in both conditional and unconditional tests, to the inclusion of currency risk, macroeconomic sources of risk and the Fama and French HML factor.
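A stylized form of the pricing equation the abstract describes (illustrative notation, not necessarily the paper's): the expected excess return on asset i in country c loads on the world market and on a country-specific factor capturing deviations from the country's average wealth.

```latex
\begin{equation}
  \mathbb{E}[r_i] - r_f \;=\; \beta_{i,w}\,\lambda_w \;+\; \beta_{i,c}\,\lambda_c ,
\end{equation}
% where \lambda_w is the world market price of risk and \lambda_c is the
% country-specific price of risk induced by ``keeping up with the Joneses''
% preferences relative to country c's average wealth.
```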

Relevance: 30.00%

Abstract:

We model systemic risk in an interbank market. Banks face liquidity needs as consumers are uncertain about where they need to consume. Interbank credit lines allow banks to cope with these liquidity shocks while reducing the cost of maintaining reserves. However, the interbank market exposes the system to a coordination failure (gridlock equilibrium) even if all banks are solvent. When one bank is insolvent, the stability of the banking system is affected in various ways depending on the patterns of payments across locations. We investigate the ability of the banking industry to withstand the insolvency of one bank and whether the closure of one bank generates a chain reaction in the rest of the system. We analyze the coordinating role of the Central Bank in preventing systemic repercussions in the payment system, and we examine the justification of the too-big-to-fail policy.
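A deliberately simple contagion check in the spirit of the chain-reaction question above. This is not the paper's model: it just propagates write-downs of interbank claims when a counterparty is closed, with invented balance-sheet numbers.

```python
# Toy chain-reaction check: banks hold claims on each other via credit lines;
# when one bank is closed, its creditors write down those claims, and any bank
# whose equity is exhausted is closed in turn.

def chain_reaction(equity, exposures, failed):
    """equity: bank -> capital; exposures: (creditor, debtor) -> claim size."""
    closed = {failed}
    written_down = set()
    changed = True
    while changed:
        changed = False
        for (creditor, debtor), claim in exposures.items():
            if (debtor in closed and creditor not in closed
                    and (creditor, debtor) not in written_down):
                equity[creditor] -= claim          # write down the claim once
                written_down.add((creditor, debtor))
                if equity[creditor] <= 0:
                    closed.add(creditor)           # creditor fails in turn
                changed = True
    return closed

equity = {"A": 5.0, "B": 3.0, "C": 8.0}
exposures = {("A", "B"): 4.0, ("B", "C"): 2.0, ("C", "A"): 1.0}
print(chain_reaction(equity, exposures, failed="B"))  # banks closed at the end
```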

Relevance: 30.00%

Abstract:

We study a situation in which an auctioneer wishes to sell an object to one of N risk-neutral bidders with heterogeneous preferences. The auctioneer does not know the bidders' preferences but has private information about the characteristics of the object, and must decide how much information to reveal prior to the auction. We show that the auctioneer has incentives to release less information than would be efficient, and that the amount of information released increases with the level of competition (as measured by the number of bidders). Furthermore, in a perfectly competitive market the auctioneer would provide the efficient level of information.

Relevance: 30.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk.

The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover.

Finite-sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
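A toy, runnable instance of the two-part procedure: model classes are constant predictors and 1-D linear predictors, each class's "empirical cover" is a finite grid of rules, and the complexity penalty grows with the log of the cover size. This is a schematic illustration under those simplifying assumptions, not the paper's cover or penalty construction.

```python
# Toy complexity-penalized model selection via empirical covers.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 0.8 * x + 0.1 * rng.standard_normal(200)

cover_x, cover_y = x[:100], y[:100]   # first half: would be used to form covers
risk_x, risk_y = x[100:], y[100:]     # second half: empirical risk

def risk(f):  # empirical squared loss on the second half
    return np.mean((f(risk_x) - risk_y) ** 2)

# Two model classes, each covered by a fixed coefficient grid (the paper tunes
# the covering radius empirically; we keep it fixed for brevity).
covers = {
    "constants": [(lambda xx, c=c: np.full_like(xx, c))
                  for c in np.linspace(-1, 1, 9)],
    "linear": [(lambda xx, a=a, b=b: a * xx + b)
               for a in np.linspace(-1, 1, 9) for b in np.linspace(-1, 1, 9)],
}

best = None
for name, cover in covers.items():
    complexity = np.log(len(cover)) / len(risk_y)  # toy penalty from cover size
    f = min(cover, key=risk)                       # best candidate in the cover
    score = risk(f) + complexity                   # penalized empirical risk
    if best is None or score < best[0]:
        best = (score, name, f)
print("selected class:", best[1], "penalized risk:", round(best[0], 4))
```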

Relevance: 30.00%

Abstract:

A number of existing studies have concluded that the risk-sharing allocations supported by competitive, incomplete-markets equilibria are quantitatively close to first-best. Equilibrium asset prices in these models have been difficult to distinguish from those associated with a complete-markets model, the counterfactual features of which have been widely documented. This paper asks if life-cycle considerations, in conjunction with persistent idiosyncratic shocks which become more volatile during aggregate downturns, can reconcile the quantitative properties of the competitive asset pricing framework with those of observed asset returns. We begin by arguing that data from the Panel Study of Income Dynamics support the plausibility of such a shock process. Our estimates suggest a high degree of persistence as well as a substantial increase in idiosyncratic conditional volatility coincident with periods of low growth in U.S. GNP. When these factors are incorporated in a stationary overlapping-generations framework, the implications for the returns on risky assets are substantial. Plausible parameterizations of our economy are able to generate Sharpe ratios which match those observed in U.S. data. Our economy cannot, however, account for the level of variability of stock returns, owing in large part to the specification of its production technology.
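For readers unfamiliar with the headline statistic, the Sharpe ratio is the mean excess return of an asset divided by the standard deviation of that excess return. A one-liner with invented numbers:

```python
# Sharpe ratio = E[R - Rf] / std(R - Rf); toy return series for illustration.
import numpy as np

risk_free = 0.02
excess = np.array([0.08, -0.03, 0.12, 0.05, -0.01]) - risk_free
print(excess.mean() / excess.std(ddof=1))
```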

Relevance: 30.00%

Abstract:

In this paper I develop a general equilibrium model with risk-averse entrepreneurial firms and with public firms. The model predicts that an increase in uncertainty reduces the propensity of entrepreneurial firms to innovate, while it does not affect the propensity of public firms to innovate. Furthermore, it predicts that the negative effect of uncertainty on innovation is stronger for the less diversified entrepreneurial firms, and is stronger in the absence of financing frictions in the economy. In the second part of the paper I test these predictions on a dataset of small and medium Italian manufacturing firms.

Relevance: 30.00%

Abstract:

Coordination games arise very often in studies of industrial organization and international trade. This type of game has multiple strict equilibria, and therefore the identification of testable predictions is very difficult. We study a vertical product differentiation model with two asymmetric players choosing first qualities and then prices. This game has two equilibria for some parameter values. However, we apply the risk dominance criterion suggested by Harsanyi and Selten and show that it always selects the equilibrium where the leader is the firm having some initial advantage. We then perform an experimental analysis to test whether the risk dominance prediction is supported by the behaviour of laboratory agents. We show that the probability that the risk dominance prediction is right depends crucially on the degree of asymmetry of the game. The stronger the asymmetries, the higher the predictive power of the risk dominance criterion.
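To make the selection criterion concrete: in a 2x2 game with two strict equilibria, the Harsanyi-Selten risk-dominant equilibrium is the one with the larger Nash product of deviation losses. The sketch below uses invented payoffs, not the paper's quality-then-price game.

```python
# Risk dominance in a 2x2 coordination game with strict equilibria at
# (0,0) and (1,1): compare Nash products of deviation losses.

def risk_dominant(A, B):
    """A[i][j], B[i][j]: row and column payoffs for actions i, j."""
    nash_product_00 = (A[0][0] - A[1][0]) * (B[0][0] - B[0][1])
    nash_product_11 = (A[1][1] - A[0][1]) * (B[1][1] - B[1][0])
    return "(0,0)" if nash_product_00 > nash_product_11 else "(1,1)"

# An asymmetric game in which one player has an initial advantage at (0,0):
A = [[5, 0], [0, 3]]
B = [[4, 0], [0, 3]]
print(risk_dominant(A, B))  # -> "(0,0)"
```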

Relevance: 30.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal-discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
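The last observation translates directly into code: flip the labels on the second half of the training set, run empirical risk minimization once, and read off the discrepancy. The sketch below approximates exact ERM over the class of decision stumps with scikit-learn's depth-1 tree (which minimizes an impurity criterion rather than 0-1 loss, so this is only an approximation); the data are synthetic.

```python
# Approximate maximal discrepancy via ERM on a half-flipped sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] > 0).astype(int)

half = len(y) // 2
y_flipped = y.copy()
y_flipped[half:] = 1 - y_flipped[half:]   # flip labels on the second half

# Minimizing error on the flipped sample maximizes
# (error on second half) - (error on first half) for the original labels.
stump = DecisionTreeClassifier(max_depth=1).fit(X, y_flipped)
pred = stump.predict(X)
err_first = np.mean(pred[:half] != y[:half])
err_second = np.mean(pred[half:] != y[half:])
print("maximal discrepancy estimate:", err_second - err_first)
```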

Relevance: 30.00%

Abstract:

This paper analyses the application of hybrid risk adjustment versus either prospective or concurrent risk adjustment formulae in the context of funding pharmaceutical benefits for the population of an integrated healthcare delivery organization in Catalonia during the years 2002 and 2003. We apply a mixed formula and find that, compared to purely prospective models, a hybrid risk adjustment model increases incentives for efficiency in the provision of care to low-risk individuals, by reducing within-group variation of drug expenditures, not only for the health organization as a whole but also within each internal department.

Relevance: 30.00%

Abstract:

BACKGROUND: Prognosis prediction for resected primary colon cancer is based on the Tumor Node Metastasis (TNM) staging system. We investigated whether four well-documented gene expression risk scores can improve patient stratification.

METHODS: Microarray-based versions of the risk scores were applied to a large independent cohort of 688 stage II/III tumors from the PETACC-3 trial. Prognostic value for relapse-free survival (RFS), survival after relapse (SAR), and overall survival (OS) was assessed by regression analysis. Improvement over a reference prognostic model was assessed with the area under the curve (AUC) of receiver operating characteristic (ROC) curves. All statistical tests were two-sided, except for the AUC increase.

RESULTS: All four risk scores (RSs) showed a statistically significant association (single test, P < .0167) with OS or RFS in univariate models, but with HRs below 1.38 per interquartile range. Three scores were predictors of shorter RFS, one of shorter SAR. Each RS could only marginally improve an RFS or OS model that included the known factors T stage, N stage, and microsatellite instability (MSI) status (AUC gains < 0.025 units). The pairwise interscore discordance was never high (maximal Spearman correlation = 0.563). A combined score showed a trend toward higher prognostic value and a higher AUC increase for OS (HR = 1.74, 95% confidence interval [CI] = 1.44 to 2.10, P < .001, AUC from 0.6918 to 0.7321) and RFS (HR = 1.56, 95% CI = 1.33 to 1.84, P < .001, AUC from 0.6723 to 0.6945) than any single score.

CONCLUSIONS: The four tested gene expression-based risk scores provide prognostic information but contribute only marginally to improving models based on established risk factors. A combination of the risk scores might provide more robust information. Predictors of RFS and SAR might need to be different.
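A sketch of the kind of comparison reported above: does adding a gene-expression risk score to a clinical base model (T stage, N stage, MSI) increase the AUC? The study analyzes time-to-event outcomes; this sketch substitutes a binary endpoint with logistic regression for brevity, and all file, column and endpoint names are hypothetical.

```python
# Hypothetical AUC comparison: base clinical model vs. base model + risk score.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("petacc3_like_cohort.csv")   # hypothetical file
y = df["relapse_within_5y"]                   # hypothetical binary endpoint

base_cols = ["t_stage", "n_stage", "msi_status"]   # numerically coded
base = LogisticRegression().fit(df[base_cols], y)
full = LogisticRegression().fit(df[base_cols + ["risk_score"]], y)

auc_base = roc_auc_score(y, base.predict_proba(df[base_cols])[:, 1])
auc_full = roc_auc_score(y, full.predict_proba(df[base_cols + ["risk_score"]])[:, 1])
print(f"AUC gain from the risk score: {auc_full - auc_base:.4f}")
```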

Relevance: 30.00%

Abstract:

Second cancer risk assessment for radiotherapy is controversial due to the large uncertainties of the dose-response relationship. This could be improved by a better assessment of the peripheral doses to healthy organs in future epidemiological studies. In this framework, we developed a simple Monte Carlo (MC) model of the Siemens Primus 6 MV linac for both open and wedged fields, which we then validated against dose profiles measured in a water tank up to 30 cm from the central axis. The differences between the measured and calculated doses were comparable to those of other, more complex MC models and never exceeded 50%. We then compared our simple MC model with the peripheral dose profiles of five different linacs with different collimation systems. We found that the peripheral dose between two linacs could differ by up to a factor of 9 for small fields (5 × 5 cm²) and up to a factor of 10 for wedged fields. Considering that an uncertainty of 50% in dose estimation could be acceptable in the context of risk assessment, the MC model can be used as a generic model for large open fields (≥10 × 10 cm²) only. The uncertainties in peripheral doses should be considered in future epidemiological studies when designing the width of the dose bins used to stratify the risk as a function of the dose.
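The validation step described above, in miniature: compare a measured peripheral dose profile against Monte Carlo output and check the relative difference against the 50% acceptance criterion. The arrays are hypothetical stand-ins for water-tank measurements and MC scoring, not the paper's data.

```python
# Relative difference between measured and MC-calculated peripheral doses.
import numpy as np

distance_cm = np.array([5, 10, 15, 20, 25, 30])   # from the central axis
measured = np.array([1.2e-2, 4.1e-3, 1.9e-3, 9.5e-4, 5.2e-4, 3.0e-4])   # Gy/MU
simulated = np.array([1.0e-2, 3.6e-3, 2.1e-3, 1.1e-3, 6.0e-4, 4.1e-4])  # Gy/MU

relative_diff = 100 * (simulated - measured) / measured
for d, rd in zip(distance_cm, relative_diff):
    print(f"{d:2d} cm: {rd:+.1f}%")

# The abstract's acceptance criterion for risk assessment: within about 50%.
print("within 50%:", bool(np.all(np.abs(relative_diff) <= 50)))
```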