902 results for "Deterministic imputation"


Relevance: 10.00%

Abstract:

OBJECTIVE: Studies of major depression in twins and families have shown moderate to high heritability, but extensive molecular studies have failed to identify susceptibility genes convincingly. To detect genetic variants contributing to major depression, the authors performed a genome-wide association study using 1,636 cases of depression ascertained in the U.K. and 1,594 comparison subjects screened negative for psychiatric disorders. METHOD: Cases were collected from 1) a case-control study of recurrent depression (the Depression Case Control [DeCC] study; N=1346), 2) an affected sibling pair linkage study of recurrent depression (probands from the Depression Network [DeNT] study; N=332), and 3) a pharmacogenetic study (the Genome-Based Therapeutic Drugs for Depression [GENDEP] study; N=88). Depression cases and comparison subjects were genotyped at Centre National de Génotypage on the Illumina Human610-Quad BeadChip. After applying stringent quality control criteria for missing genotypes, departure from Hardy-Weinberg equilibrium, and low minor allele frequency, the authors tested for association to depression using logistic regression, correcting for population ancestry. RESULTS: Single nucleotide polymorphisms (SNPs) in BICC1 achieved suggestive evidence for association, which strengthened after imputation of ungenotyped markers, and in analysis of female depression cases. A meta-analysis of U.K. data with previously published results from studies in Munich and Lausanne showed some evidence for association near neuroligin 1 (NLGN1) on chromosome 3, but did not support findings at BICC1. CONCLUSIONS: This study identifies several signals for association worthy of further investigation but, as in previous genome-wide studies, suggests that individual gene contributions to depression are likely to have only minor effects, and very large pooled analyses will be required to identify them.
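The per-SNP association test described above can be sketched in miniature: a logistic regression of case/control status on an additively coded genotype (0/1/2 minor-allele count), fit by Newton-Raphson. This is a hedged stand-in, not the study's actual pipeline: the ancestry covariates, quality-control filters, and imputation steps are omitted, and all data below are simulated.

```python
# Toy per-SNP logistic regression: P(case) = sigmoid(b0 + b1 * genotype).
# Pure-stdlib Newton-Raphson on synthetic data; real GWAS software also
# adjusts for population-ancestry covariates, omitted here for brevity.
import math, random

def logistic_snp_test(genotypes, status, iters=25):
    """Fit the two-parameter model; return (b1, odds ratio per allele)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        # Score vector and 2x2 Fisher information of the log-likelihood.
        g0 = g1 = h00 = h01 = h11 = 0.0
        for g, y in zip(genotypes, status):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * g)))
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * g
            h00 += w
            h01 += w * g
            h11 += w * g * g
        det = h00 * h11 - h01 * h01
        # Newton step by Cramer's rule on the 2x2 system.
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b1, math.exp(b1)

random.seed(1)
geno = [random.choice([0, 0, 0, 1, 1, 2]) for _ in range(2000)]
true_or = 1.5  # simulated effect: odds multiply by 1.5 per minor allele
stat = [1 if random.random() < 1 / (1 + math.exp(-(-0.5 + math.log(true_or) * g)))
        else 0 for g in geno]
beta, odds_ratio = logistic_snp_test(geno, stat)
print(round(odds_ratio, 2))  # estimate should land near the simulated 1.5
```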

Relevance: 10.00%

Abstract:

That individuals contribute in social dilemma interactions even when contributing is costly is a well-established observation in the experimental literature. Since a contributor is always strictly worse off than a non-contributor, the question arises whether an intrinsic motivation to contribute can survive in an evolutionary setting. Using recent results on deterministic approximation of stochastic evolutionary dynamics, we give conditions for equilibria with a positive number of contributors to be selected in the long run.
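The "deterministic approximation" idea replaces a stochastic finite-population process by its mean-field ODE. A minimal sketch under assumed payoffs (a linear public-goods game with multiplier r in groups of n, not the paper's actual model): with contributor share x, the replicator dynamic x' = x(1-x)(pi_C - pi_D) shows why, absent any intrinsic motivation, contribution dies out whenever r < n.

```python
# Euler integration of the replicator dynamic for a two-strategy
# public-goods game. pi_C and pi_D are expected payoffs under random
# matching given contributor frequency x (hypothetical parameters).
def replicator_trajectory(x0, steps=10000, dt=0.01, r=1.8, c=1.0, n=4):
    x = x0
    for _ in range(steps):
        pi_c = r * c * (1 + (n - 1) * x) / n - c  # contributes, pays c
        pi_d = r * c * ((n - 1) * x) / n          # free-rides
        x += dt * x * (1 - x) * (pi_c - pi_d)
        x = min(1.0, max(0.0, x))                 # keep x a valid share
    return x

# With r < n, pi_c - pi_d = c*(r/n - 1) < 0, so contributors vanish.
print(replicator_trajectory(0.5))
```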

Relevance: 10.00%

Abstract:

OBJECTIVES: The purpose of this study was to evaluate the association between inflammation and heart failure (HF) risk in older adults. BACKGROUND: Inflammation is associated with HF risk factors and also directly affects myocardial function. METHODS: The association of baseline serum concentrations of interleukin (IL)-6, tumor necrosis factor-alpha, and C-reactive protein (CRP) with incident HF was assessed with Cox models among 2,610 older persons without prevalent HF enrolled in the Health ABC (Health, Aging, and Body Composition) study (age 73.6 ± 2.9 years; 48.3% men; 59.6% white). RESULTS: During follow-up (median 9.4 years), HF developed in 311 (11.9%) participants. In models controlling for clinical characteristics, ankle-arm index, and incident coronary heart disease, doubling of IL-6, tumor necrosis factor-alpha, and CRP concentrations was associated with 29% (95% confidence interval: 13% to 47%; p < 0.001), 46% (95% confidence interval: 17% to 84%; p = 0.001), and 9% (95% confidence interval: -1% to 24%; p = 0.087) increases in HF risk, respectively. In models including all 3 markers, IL-6 and tumor necrosis factor-alpha, but not CRP, remained significant. These associations were similar across sex and race and persisted in models accounting for death as a competing event. Post-HF ejection fraction was available in 239 (76.8%) cases; inflammatory markers had a stronger association with HF with preserved ejection fraction. Repeat IL-6 and CRP determinations at 1-year follow-up did not provide incremental information. Addition of IL-6 to the clinical Health ABC HF model improved model discrimination (C index from 0.717 to 0.734; p = 0.001) and fit (decreased Bayes information criterion by 17.8; p < 0.001). CONCLUSIONS: Inflammatory markers are associated with HF risk among older adults and may improve HF risk stratification.
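The "risk per doubling" phrasing above has a simple reading in a Cox model: if the marker enters on the natural-log scale with coefficient beta, the hazard ratio for a twofold increase is exp(beta * ln 2). The snippet below just illustrates that conversion, working backwards from the abstract's IL-6 figure of a 29% increase per doubling.

```python
# Convert between a ln-scale Cox coefficient and the hazard ratio per
# doubling of the marker concentration.
import math

def hr_per_doubling(beta_ln_scale):
    """Hazard ratio for a 2x increase when the covariate is ln(marker)."""
    return math.exp(beta_ln_scale * math.log(2))

# Back out the ln-scale coefficient implied by HR = 1.29 per doubling:
beta = math.log(1.29) / math.log(2)
print(round(hr_per_doubling(beta), 2))  # recovers 1.29
```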

Relevance: 10.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
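A much-simplified sketch of the overall recipe (split the data, fit one candidate per class, pick the class minimizing empirical risk plus a complexity penalty). The model classes here are piecewise-constant predictors with k bins and the penalty is a generic sqrt(k/n) term, not the paper's empirical-cover complexities; the data are synthetic.

```python
# Complexity-penalized model selection over histogram-regression classes.
import random

def histogram_fit(xs, ys, k):
    """Piecewise-constant predictor with k equal-width bins on [0, 1]."""
    sums, cnts = [0.0] * k, [0] * k
    for x, y in zip(xs, ys):
        b = min(k - 1, int(x * k))
        sums[b] += y
        cnts[b] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, cnts)]
    return lambda x: means[min(k - 1, int(x * k))]

def penalized_select(xs, ys, classes=range(1, 21)):
    n = len(xs) // 2
    fit_x, fit_y, val_x, val_y = xs[:n], ys[:n], xs[n:], ys[n:]
    best = None
    for k in classes:
        f = histogram_fit(fit_x, fit_y, k)  # candidate rule for class k
        risk = sum((f(x) - y) ** 2 for x, y in zip(val_x, val_y)) / n
        penalty = (k / n) ** 0.5            # generic complexity term
        if best is None or risk + penalty < best[0]:
            best = (risk + penalty, k)
    return best[1]

random.seed(0)
xs = [random.random() for _ in range(1000)]
ys = [(1.0 if x > 0.5 else 0.0) + random.gauss(0, 0.3) for x in xs]
print(penalized_select(xs, ys))  # a small k suffices for a step target
```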

Relevance: 10.00%

Abstract:

The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, with "optimal" solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently Adelman [1] and Topaloglu [18] have proposed alternate upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly-coupled dynamic programming.
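To see what the DLP heuristic does, it helps to strip the network away: demand is replaced by its mean and a linear program is solved. For a single resource that LP reduces to a greedy allocation in decreasing-fare order (the fractional-knapsack solution), which can be sketched without an LP solver; genuine network instances need one. The numbers below are made up for illustration.

```python
# Single-resource DLP: allocate capacity to fare classes in decreasing
# fare order, up to each class's mean demand. The optimal objective is
# the DLP upper bound on expected revenue; the allocations double as
# booking limits for accept/deny control.
def dlp_single_resource(capacity, fares, mean_demand):
    order = sorted(range(len(fares)), key=lambda j: -fares[j])
    alloc = [0.0] * len(fares)
    left = capacity
    for j in order:
        alloc[j] = min(mean_demand[j], left)
        left -= alloc[j]
    bound = sum(f * a for f, a in zip(fares, alloc))
    return alloc, bound

alloc, bound = dlp_single_resource(100, [300, 150, 90], [40, 50, 60])
print(alloc, bound)  # [40, 50, 10] and 300*40 + 150*50 + 90*10 = 20400
```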

Relevance: 10.00%

Abstract:

We present a model of price discrimination where a monopolist faces a consumer who is privately informed about the distribution of his valuation for an indivisible unit of good but has yet to learn privately the actual valuation. The monopolist sequentially screens the consumer with a menu of contracts: the consumer self-selects once by choosing a contract and then self-selects again when he learns the actual valuation. A deterministic sequential mechanism is a menu of refund contracts, each consisting of an advance payment and a refund amount in case of no consumption, but sequential mechanisms may involve randomization. We characterize the optimal sequential mechanism when some consumer types are more eager in the sense of first-order stochastic dominance, and when some types face greater valuation uncertainty in the sense of mean-preserving spread. We show that it can be optimal to subsidize consumer types with smaller valuation uncertainty (through low refund, as in airplane ticket pricing) in order to reduce the rent to those with greater uncertainty. The size of distortion depends both on the type distribution and on how informative the consumer's initial private knowledge is about his valuation, but not on how much he initially knows about the valuation per se.

Relevance: 10.00%

Abstract:

The network choice revenue management problem models customers as choosing from an offer-set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and approximated by a deterministic linear program called the CDLP which has an exponential number of columns. However, under the choice-set paradigm when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints, that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation and we believe is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.

Relevance: 10.00%

Abstract:

A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of residuals obtained after filtering a series through ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model. This is so because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness of fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
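Two ingredients of the estimator described above can be sketched directly: the fractional-difference filter (1-L)^d applied through its AR expansion truncated at the sample size, and the Box-Pierce statistic Q = n * sum of squared residual autocorrelations, which the abstract notes doubles as a goodness-of-fit check. This is an illustration of those building blocks only, not the full minimum-distance estimation loop.

```python
# Fractional differencing via the recursion pi_0 = 1,
# pi_j = pi_{j-1} * (j - 1 - d) / j, plus the Box-Pierce Q statistic.
import random

def frac_diff(series, d):
    """Apply (1 - L)^d to a series (truncated AR-infinity expansion)."""
    pis = [1.0]
    for j in range(1, len(series)):
        pis.append(pis[-1] * (j - 1 - d) / j)
    return [sum(pis[j] * series[t - j] for j in range(t + 1))
            for t in range(len(series))]

def box_pierce(resid, max_lag=10):
    """Q = n * sum of squared autocorrelations up to max_lag."""
    n = len(resid)
    mean = sum(resid) / n
    c0 = sum((e - mean) ** 2 for e in resid)
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((resid[t] - mean) * (resid[t - k] - mean)
                 for t in range(k, n))
        q += (ck / c0) ** 2
    return n * q

random.seed(2)
noise = [random.gauss(0, 1) for _ in range(500)]
# d = 0 leaves the series unchanged, so white noise stays white and Q
# stays small on the chi-square(10) scale.
print(round(box_pierce(frac_diff(noise, 0.0), 10), 1))
```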

Relevance: 10.00%

Abstract:

Models are presented for the optimal location of hubs in airline networks that take into consideration the congestion effects. Hubs, which are the most congested airports, are modeled as M/D/c queuing systems, that is, Poisson arrivals, deterministic service time, and {\em c} servers. A formula is derived for the probability of a number of customers in the system, which is later used to propose a probabilistic constraint. This constraint limits the probability of {\em b} airplanes in queue to be less than a value $\alpha$. Due to the computational complexity of the formulation, the model is solved using a meta-heuristic based on tabu search. Computational experience is presented.
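The shape of the chance constraint above, P(b or more airplanes in queue) <= alpha, can be illustrated numerically. The abstract's M/D/c formula is not reproduced here, so as a stand-in this sketch uses the classical M/M/c steady-state distribution, which shares the Poisson-arrival, c-server structure (exponential rather than deterministic service); all traffic parameters are made up.

```python
# M/M/c steady state: P(N = n) = p0 * a^n / n! for n < c and
# p0 * a^n / (c! * c^(n-c)) for n >= c, with a = lambda/mu.
import math

def mmc_prob_queue_at_least(lam, mu, c, b):
    """P(at least b customers waiting), i.e. P(N >= c + b), in M/M/c."""
    a = lam / mu               # offered load
    rho = a / c                # utilization; requires rho < 1
    p0 = 1.0 / (sum(a ** n / math.factorial(n) for n in range(c))
                + a ** c / (math.factorial(c) * (1 - rho)))
    # Geometric tail: P(N >= c + b) = p0 * a^c / c! * rho^b / (1 - rho).
    return p0 * a ** c / math.factorial(c) * rho ** b / (1 - rho)

def capacity_for_constraint(lam, mu, b, alpha):
    """Smallest c satisfying the chance constraint P(queue >= b) <= alpha."""
    c = int(lam / mu) + 1      # first server count with rho < 1
    while mmc_prob_queue_at_least(lam, mu, c, b) > alpha:
        c += 1
    return c

print(capacity_for_constraint(lam=30.0, mu=4.0, b=5, alpha=0.05))
```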

Relevance: 10.00%

Abstract:

This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one as in cash-in-advance models, and another one where velocity fluctuates as in Baumol (1952). Despite its simplicity, in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
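The Baumol (1952) benchmark cited above has a closed form worth recalling: with transaction cost b, spending Y, and nominal interest rate i, optimal average money holdings are M* = sqrt(b*Y / (2*i)), so velocity V = Y/M* = sqrt(2*i*Y / b) rises with the interest rate. That square-root channel is the source of the endogenous velocity fluctuations the abstract describes; the parameter values below are arbitrary.

```python
# Baumol square-root money demand and the implied velocity of money.
import math

def baumol_velocity(Y, b, i):
    """Velocity Y / M* with M* = sqrt(b * Y / (2 * i))."""
    m_star = math.sqrt(b * Y / (2 * i))
    return Y / m_star

low = baumol_velocity(100.0, 2.0, 0.02)
high = baumol_velocity(100.0, 2.0, 0.08)
print(round(low, 2), round(high, 2))  # velocity doubles when i quadruples
```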

Relevance: 10.00%

Abstract:

PRECON S.A is a manufacturing company dedicated to producing prefabricated concrete parts for several industries, such as rail transportation and agriculture. Recently, PRECON signed a contract with RENFE, the Spanish National Rail Transportation Company, to manufacture pre-stressed concrete sleepers for sidings of the new railways of the high-speed train AVE. The scheduling problem associated with the manufacturing process of the sleepers is very complex since it involves several constraints and objectives. The constraints are related to production capacity, the quantity of available moulds, satisfying demand, and other operational constraints. The two main objectives are related to maximizing the usage of the manufacturing resources and minimizing the mould movements. We developed a deterministic crowding genetic algorithm for this multiobjective problem. The algorithm has proved to be a powerful and flexible tool to solve the large-scale instance of this complex real scheduling problem.
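Deterministic crowding is a niching genetic algorithm in which each offspring competes only against the more similar of its two parents, which preserves population diversity. A minimal sketch on a toy multimodal function follows (the PRECON scheduling model itself, with its mould and capacity constraints, is not reproduced in the abstract, so everything below is illustrative).

```python
# Deterministic crowding GA on f(x) = sin(5*pi*x)^2, which has five
# equal peaks on [0, 1]. Each child replaces its nearer parent only if
# it is at least as fit.
import math, random

def fitness(x):
    return math.sin(5 * math.pi * x) ** 2

def deterministic_crowding(pop_size=40, gens=200, sigma=0.05):
    random.seed(3)
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(gens):
        random.shuffle(pop)
        for i in range(0, pop_size, 2):
            p1, p2 = pop[i], pop[i + 1]
            # Arithmetic crossover plus Gaussian mutation, clipped to [0, 1].
            w = random.random()
            c1 = min(1, max(0, w * p1 + (1 - w) * p2 + random.gauss(0, sigma)))
            c2 = min(1, max(0, w * p2 + (1 - w) * p1 + random.gauss(0, sigma)))
            # Pair each child with its more similar parent.
            if abs(c1 - p1) + abs(c2 - p2) <= abs(c1 - p2) + abs(c2 - p1):
                pairs = [(c1, i), (c2, i + 1)]
            else:
                pairs = [(c1, i + 1), (c2, i)]
            for child, j in pairs:
                if fitness(child) >= fitness(pop[j]):
                    pop[j] = child
    return pop

final = deterministic_crowding()
print(round(max(fitness(x) for x in final), 2))  # best is near a peak of 1.0
```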

Relevance: 10.00%

Abstract:

This paper proposes a new time-domain test of a process being I(d), $0 < d \le 1$, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the existing identification issue between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on $y_{t-1}$ in an OLS regression of $\Delta^d y_t$ on a simple transformation of the above-mentioned deterministic components and $y_{t-1}$, possibly augmented by a suitable number of lags of $\Delta^d y_t$ to account for serial correlation in the error terms. The case where d = 1 coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches if the break date is known or unknown, respectively. The statistic is labelled as the SB-FDF (Structural Break-Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided.
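In the d = 1 special case the statistic reduces to the familiar Dickey-Fuller t-ratio: OLS of the first difference on an intercept and the lagged level. The sketch below computes that t-ratio on simulated data (break terms and lag augmentation, which the SB-FDF test adds, are omitted for brevity).

```python
# Dickey-Fuller t-ratio: regress Delta y_t on (1, y_{t-1}) and return
# the t-statistic of the coefficient on y_{t-1}.
import math, random

def df_t_ratio(y):
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    x = y[:-1]
    n = len(dy)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, dy))
    rho = sxy / sxx                       # coefficient on y_{t-1}
    alpha = my - rho * mx
    resid = [yi - alpha - rho * xi for xi, yi in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)
    return rho / math.sqrt(s2 / sxx)      # t-ratio of rho

random.seed(4)
rw = [0.0]
for _ in range(500):
    rw.append(rw[-1] + random.gauss(0, 1))        # unit-root series
ar = [0.0]
for _ in range(500):
    ar.append(0.5 * ar[-1] + random.gauss(0, 1))  # stationary AR(1)
# The stationary series yields a strongly negative t-ratio; the random
# walk does not.
print(round(df_t_ratio(rw), 2), round(df_t_ratio(ar), 2))
```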

Relevance: 10.00%

Abstract:

A major challenge in community ecology is a thorough understanding of the processes that govern the assembly and composition of communities in time and space. The growing threat of climate change to the vascular plant biodiversity of fragile ecosystems such as mountains has made it equally imperative to develop comprehensive methodologies to provide insights into how communities are assembled. In this perspective, the primary objective of this PhD thesis is to contribute to the theoretical and methodological development of community ecology, by proposing new solutions to better detect the ecological and evolutionary processes that govern community assembly. As phylogenetic trees provide by far the most advanced tools to integrate the spatial, ecological and evolutionary dynamics of plant communities, they represent the cornerstone on which this work was based. In this thesis, I proposed new solutions to: (i) reveal trends in community assembly on phylogenies, depicted by the transition of signals at the nodes of the different species and lineages responsible for community assembly; (ii) help demonstrate the importance of evolutionarily labile traits in the distribution of mountain plant species. More precisely, I demonstrated that phylogenetic and functional compositional turnover in plant communities was driven by climate and human land use gradients mostly influenced by evolutionarily labile traits; and (iii) predict and spatially project the phylogenetic structure of communities using species distribution models, to identify the potential distribution of phylogenetic diversity, as well as areas of high evolutionary potential along elevation. The altitudinal setting of the Diablerets mountains (Switzerland) provided an appropriate model for this study. The elevation gradient served as a compression of large latitudinal variations similar to a collection of islands within a single area, and allowed investigations on a large number of plant communities.
Overall, this thesis highlights that stochastic and deterministic environmental filtering processes mainly influence the phylogenetic structure of plant communities in mountainous areas. Negative density-dependent processes implied through patterns of phylogenetic overdispersion were only detected at the local scale, whereas environmental filtering implied through phylogenetic clustering was observed at both the regional and local scale. Finally, the integration of indices of phylogenetic community ecology with species distribution models revealed the prospects of providing novel and insightful explanations on the potential distribution of phylogenetic biodiversity in high mountain areas. These results generally demonstrate the usefulness of phylogenies in inferring assembly processes, and are worth considering in the theoretical and methodological development of tools to better understand phylogenetic community structure.

Relevance: 10.00%

Abstract:

This paper analyses the predictive ability of quantitative precipitation forecasts (QPF) and the so-called "poor-man" rainfall probabilistic forecasts (RPF). With this aim, the full set of warnings issued by the Meteorological Service of Catalonia (SMC) for potentially-dangerous events due to severe precipitation has been analysed for the year 2008. For each of the 37 warnings, the QPFs obtained from the limited-area model MM5 have been verified against hourly precipitation data provided by the rain gauge network covering Catalonia (NE of Spain), managed by SMC. For a group of five selected case studies, a QPF comparison has been undertaken between the MM5 and COSMO-I7 limited-area models. Although MM5's predictive ability has been examined for these five cases by making use of satellite data, this paper only shows in detail the heavy precipitation event on 9–10 May 2008. Finally, the "poor-man" rainfall probabilistic forecasts (RPF) issued by SMC at regional scale have also been tested against hourly precipitation observations. Verification results show that for long events (>24 h) MM5 tends to overestimate total precipitation, whereas for short events (≤24 h) the model tends instead to underestimate precipitation. The analysis of the five case studies concludes that most of MM5's QPF errors are mainly triggered by very poor representation of some of its cloud microphysical species, particularly the cloud liquid water and, to a lesser degree, the water vapor. The models' performance comparison demonstrates that MM5 and COSMO-I7 are on the same level of QPF skill, at least for the intense-rainfall events dealt with in the five case studies, whilst the warnings based on RPF issued by SMC have proven fairly correct when tested against hourly observed precipitation for 6-h intervals and at a small region scale.
Throughout this study, we have only dealt with (SMC-issued) warning episodes in order to analyse deterministic (MM5 and COSMO-I7) and probabilistic (SMC) rainfall forecasts; we have therefore not taken into account those episodes that might (or might not) have been missed by the official SMC warnings. Thus, whenever we talk about "misses", it is always in relation to the deterministic LAMs' QPFs.
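Categorical verification of rainfall forecasts against gauge observations is conventionally summarized from a 2x2 contingency table. A generic sketch (not SMC's exact procedure): threshold exceedances are compared per interval and scored with the probability of detection (POD), false-alarm ratio (FAR), and frequency bias; the rainfall values below are invented.

```python
# Build hit/miss/false-alarm counts from paired forecast and observed
# precipitation amounts, then compute standard categorical scores.
def verify(forecast_mm, observed_mm, threshold):
    hits = misses = false_alarms = 0
    for f, o in zip(forecast_mm, observed_mm):
        fc, ob = f >= threshold, o >= threshold
        if fc and ob:
            hits += 1
        elif ob:
            misses += 1
        elif fc:
            false_alarms += 1
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false-alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return pod, far, bias

fcst = [12, 0, 25, 3, 18, 30, 0, 9]
obs  = [10, 2, 30, 0, 4, 22, 1, 15]
pod, far, bias = verify(fcst, obs, threshold=10)
print(pod, far, bias)  # 3 hits, 1 miss, 1 false alarm -> 0.75 0.25 1.0
```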

Relevance: 10.00%

Abstract:

An enormous burst of interest in the public health burden from chronic disease in Africa has emerged as a consequence of efforts to estimate global population health. Detailed estimates are now published for Africa as a whole and each country on the continent. These data have formed the basis for warnings about sharp increases in cardiovascular disease (CVD) in the coming decades. In this essay we briefly examine the trajectory of social development on the continent and its consequences for the epidemiology of CVD and potential control strategies. Since full vital registration has only been implemented in segments of South Africa and the island nations of Seychelles and Mauritius - formally part of WHO-AFRO - mortality data are extremely limited. Numerous sample surveys have been conducted but they often lack standardization or objective measures of health status. Trend data are even less informative. However, using the best quality data available, age-standardized trends in CVD are downward, and in the case of stroke, sharply so. While acknowledging that the extremely limited available data cannot be used as the basis for inference to the continent, we raise the concern that general estimates based on imputation to fill in the missing mortality tables may be even more misleading. No immediate remedies to this problem can be identified; however, bilateral collaborative efforts to strengthen local educational institutions and governmental agencies rank as the highest priority for near-term development.