946 results for Multiperiod mixed-integer convex model
Abstract:
The role of the local atmospheric forcing on the ocean mixed layer depth (MLD) over the global oceans is studied using ocean reanalysis data products and a single-column ocean model coupled to an atmospheric general circulation model. The focus of this study is on how the annual mean and the seasonal cycle of the MLD relate to various forcing characteristics in different parts of the world's oceans, and how anomalous variations in the monthly mean MLD relate to anomalous atmospheric forcings. By analysing both ocean reanalysis data and the single-column ocean model, regions with different dominant forcings and different mean and variability characteristics of the MLD can be identified. Many of the global oceans' MLD characteristics appear to be directly linked to different atmospheric forcing characteristics at different locations. Here, heating and wind-stress are identified as the main drivers; in some, mostly coastal, regions the atmospheric salinity forcing also contributes. The annual mean MLD is more closely related to the annual mean wind-stress, and the MLD seasonality is more closely related to the seasonality in heating. The single-column ocean model, however, also points out that the MLD characteristics over most global ocean regions, and in particular the tropics and subtropics, cannot be maintained by local atmospheric forcings only, but are also a result of ocean dynamics that are not simulated in a single-column ocean model. Thus, lateral ocean dynamics are essential for correctly simulating the observed MLD.
Abstract:
In order to accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which still contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
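As a rough illustration of this kind of preconditioning (a minimal Python sketch under assumed conventions, not the authors' algorithm), the integer coordinates can be used to replace sorting with bucketing: a single pass records the vertical extremes of every x-column, and only those extremes need to be kept, because a hull vertex can never lie strictly between two points of its own column.

def precondition(points, p):
    # Illustrative sketch only: bucket the integer x-coordinates (0 <= x < p)
    # and keep each occupied column's vertical extremes, with no comparison sort.
    lo = [None] * p  # smallest y seen in each x-column
    hi = [None] * p  # largest y seen in each x-column
    for x, y in points:
        if lo[x] is None or y < lo[x]:
            lo[x] = y
        if hi[x] is None or y > hi[x]:
            hi[x] = y
    reduced = []
    for x in range(p):  # emit extremes in increasing x, so no later sort is needed
        if lo[x] is not None:
            reduced.append((x, lo[x]))
            if hi[x] != lo[x]:
                reduced.append((x, hi[x]))
    return reduced  # at most 2p points; every convex-hull vertex is preserved

Because the reduced points come out already ordered by x, they can be handed straight to the scanning phase of Andrew's monotone-chain algorithm (skipping its sort), so the whole pipeline stays within O(n) time when min(p, q) ≤ n, provided the bucketing is done on the smaller box dimension.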
Abstract:
Phylogenetic analyses of chloroplast DNA sequences, morphology, and combined data have provided consistent support for many of the major branches within the angiosperm clade Dipsacales. Here we use sequences from three mitochondrial loci to test the existing broad-scale phylogeny and to attempt to resolve several relationships that have remained uncertain. Parsimony, maximum likelihood, and Bayesian analyses of a combined mitochondrial data set recover trees broadly consistent with previous studies, although resolution and support are lower than in the largest chloroplast analyses. Combining chloroplast and mitochondrial data results in a generally well-resolved and very strongly supported topology, but the previously recognized problem areas remain. To investigate why these relationships have been difficult to resolve, we conducted a series of experiments using different data partitions and heterogeneous substitution models. More complex modeling schemes are usually favored regardless of the partitions recognized, but model choice had little effect on topology or support values. In contrast, there are consistent but weakly supported differences in the topologies recovered from coding and non-coding matrices. These conflicts directly correspond to relationships that were poorly resolved in analyses of the full combined chloroplast-mitochondrial data set. We suggest that incongruent signal has contributed to our inability to confidently resolve these problem areas. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.
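For orientation only (this is the textbook mixed-model predictor, not the expanded finite population BLUP developed in the paper), the familiar two-stage setup shrinks each cluster's sample mean toward the overall mean:
\[
y_{ij} = \mu + b_i + e_{ij}, \qquad
\hat{\theta}_i = \hat{\mu} + k_i\,(\bar{y}_i - \hat{\mu}), \qquad
k_i = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_e^2 / m_i},
\]
where \(m_i\) is the number of sampled units in cluster \(i\). The expanded model enlarges the set of random variables underlying this construction precisely so that predictors of the realized cluster means remain well defined and more accurate when cluster sizes are unequal.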
Abstract:
We develop a job-market signaling model where signals may convey two pieces of information. This model is employed to study the GED exam and countersignaling (signals non-monotonic in ability). A result of the model is that countersignaling is more likely to occur in jobs that require a combination of skills that differs from the combination used in the schooling process. The model also produces testable implications consistent with evidence on the GED: (i) it signals both high cognitive and low non-cognitive skills and (ii) it does not affect wages. Additionally, it suggests modifications that would make the GED a more informative signal.
Abstract:
The real exchange rate is an important macroeconomic price in the economy and affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in empirical exchange rate misalignment studies to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States' real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
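In generic terms (a single-frequency sketch; the mixed-frequency machinery of the paper is not reproduced here), the misalignment is read off a vector error correction model
\[
\Delta y_t = \alpha\,\beta^{\top} y_{t-1} + \sum_{i=1}^{k-1}\Gamma_i\,\Delta y_{t-i} + \varepsilon_t ,
\]
where \(y_t\) stacks the real exchange rate \(q_t\) and its fundamentals: the equilibrium rate \(\hat{q}_t\) is the level of \(q_t\) implied by the estimated cointegrating relation \(\hat{\beta}^{\top} y_t = 0\), and the misalignment at time \(t\) is the deviation \(q_t - \hat{q}_t\).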
Abstract:
This work is concerned with the proposition of a so-called regular or convex solver potential to be used in numerical simulations involving a certain class of constitutive elastic-damage models. All the mathematical aspects involved are based on convex analysis, which is employed with the aim of obtaining a consistent variational formulation of the potential and its conjugate. It is shown that the constitutive relations for the class of damage models considered here can be derived from the solver potentials by means of sub-differential sets. The optimality conditions of the resulting minimisation problem represent, in particular, a linear complementarity problem. Finally, a simple example is presented in order to illustrate the possible integration errors that can be generated when finite step analysis is performed. (C) 2003 Elsevier Ltd. All rights reserved.
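For reference, the linear complementarity problem to which the optimality conditions reduce has, in generic notation, the standard form: given a matrix \(M\) and a vector \(q\), find \(z\) such that
\[
z \ge 0, \qquad w = Mz + q \ge 0, \qquad z^{\top} w = 0 ,
\]
i.e. \(z\) and \(w\) are non-negative and componentwise complementary, so that at most one of \(z_i\) and \(w_i\) is nonzero in each component.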
Abstract:
Sugarcane-breeding programs take at least 12 years to develop new commercial cultivars. Molecular markers offer a possibility to study the genetic architecture of quantitative traits in sugarcane, and they may be used in marker-assisted selection to speed up artificial selection. Although the performance of sugarcane progenies in breeding programs is commonly evaluated across a range of locations and harvest years, many of the QTL detection methods ignore two- and three-way interactions between QTL, harvest, and location. In this work, a strategy for QTL detection in multi-harvest-location trial data, based on interval mapping and mixed models, is proposed and applied to map QTL effects on a segregating progeny from a biparental cross of pre-commercial Brazilian cultivars, evaluated at two locations and three consecutive harvest years for cane yield (tonnes per hectare), sugar yield (tonnes per hectare), fiber percent, and sucrose content. In the mixed model, we have included appropriate (co)variance structures for modeling heterogeneity and correlation of genetic effects and non-genetic residual effects. Forty-six QTLs were found: 13 QTLs for cane yield, 14 for sugar yield, 11 for fiber percent, and 8 for sucrose content. In addition, QTL by harvest, QTL by location, and QTL by harvest by location interaction effects were significant for all evaluated traits (30 QTLs showed some interaction, and 16 none). Our results contribute to a better understanding of the genetic architecture of complex traits related to biomass production and sucrose content in sugarcane.
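One generic form that such a multi-harvest, multi-location QTL mixed model can take (the notation here is assumed for illustration and is not taken from the paper) is
\[
y_{ijk} = \mu + h_j + l_k + (hl)_{jk}
+ x_i\,\alpha + x_i\,(\alpha h)_j + x_i\,(\alpha l)_k + x_i\,(\alpha hl)_{jk}
+ g_i + e_{ijk},
\]
where \(x_i\) is the QTL genotype predictor for progeny \(i\) derived from interval-mapping probabilities, \(h_j\) and \(l_k\) are harvest and location effects, \(\alpha\) and its interaction terms are the QTL main and QTL-by-environment effects, \(g_i\) is a residual polygenic effect, and the random terms carry (co)variance structures that allow heterogeneity and correlation across harvests and locations.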
Abstract:
In this paper, we propose a random intercept Poisson model in which the random effect is assumed to follow a generalized log-gamma (GLG) distribution. This random effect accommodates (or captures) the overdispersion in the counts and induces within-cluster correlation. We derive the first two moments for the marginal distribution as well as the intraclass correlation. Even though numerical integration methods are, in general, required for deriving the marginal models, we obtain the multivariate negative binomial model from a particular parameter setting of the hierarchical model. An iterative process is derived for obtaining the maximum likelihood estimates for the parameters in the multivariate negative binomial model. Residual analysis is proposed and two applications with real data are given for illustration. (C) 2011 Elsevier B.V. All rights reserved.
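In symbols, the hierarchical model just described can be written (with a generic GLG parameterization, shown here only for illustration) as
\[
y_{ij} \mid b_i \sim \mathrm{Poisson}(\mu_{ij}), \qquad
\log \mu_{ij} = x_{ij}^{\top}\beta + b_i, \qquad
b_i \sim \mathrm{GLG}(\lambda, \sigma),
\]
so the counts in cluster \(i\) share the random intercept \(b_i\), which induces both the within-cluster correlation and the extra-Poisson variation; for a particular setting of the GLG parameters \(\exp(b_i)\) is gamma distributed and the marginal model reduces to the multivariate negative binomial.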
Abstract:
The aim of this thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-responses questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model is a combination of two models already presented in the literature, namely a latent trait model for mixed responses and an IRT model for a skew-normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item used to measure HRQoL in children, the EQ-5D-Y questionnaire. A large sample of children collected in schools was used. In comparison with a model for only discrete responses and a model for mixed responses and a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times, and precision of the estimates.
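For reference, the deviance information criterion used in the comparison is conventionally computed as
\[
\mathrm{DIC} = \overline{D(\theta)} + p_D, \qquad
p_D = \overline{D(\theta)} - D(\bar{\theta}), \qquad
D(\theta) = -2\log p(y \mid \theta),
\]
where \(\overline{D(\theta)}\) is the posterior mean deviance and \(\bar{\theta}\) the posterior mean of the parameters; smaller values indicate a better trade-off between fit and the effective number of parameters \(p_D\).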
Abstract:
We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
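In outline (the notation below is generic and assumed for illustration), the construction resembles a Hausman-type comparison: with \(\hat{\beta}_{\mathrm{M}}\) the marginal and \(\hat{\beta}_{\mathrm{C}}\) the conditional maximum likelihood estimates of the selected fixed effects, and \(\hat{V}\) an estimate of the asymptotic variance of their difference, a statistic of the form
\[
T = (\hat{\beta}_{\mathrm{M}} - \hat{\beta}_{\mathrm{C}})^{\top}\,\hat{V}^{-1}\,(\hat{\beta}_{\mathrm{M}} - \hat{\beta}_{\mathrm{C}})
\]
has a limiting \(\chi^2_{q}\) distribution under the null hypothesis that the mixing distribution is correctly specified, with \(q\) the number of fixed effects compared.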
Abstract:
A basin-wide interdecadal change in both the physical state and the ecology of the North Pacific occurred near the end of 1976. Here we use a physical-ecosystem model to examine whether changes in the physical environment associated with the 1976-1977 transition influenced the lower trophic levels of the food web and, if so, by what means. The physical component is an ocean general circulation model, while the biological component contains 10 compartments: two phytoplankton, two zooplankton, two detritus pools, nitrate, ammonium, silicate, and carbon dioxide. The model is forced with observed atmospheric fields during 1960-1999. During spring, there is an approximately 40% reduction in plankton biomass in all four plankton groups during 1977-1988 relative to 1970-1976 in the central Gulf of Alaska (GOA). The epoch difference in plankton appears to be controlled by the mixed layer depth. Enhanced Ekman pumping after 1976 caused the halocline to shoal, and thus the mixed layer depth, which extends to the top of the halocline in late winter, did not penetrate as deep in the central GOA. As a result, more phytoplankton remained in the euphotic zone, and phytoplankton biomass began to increase earlier in the year after the 1976 transition. Zooplankton biomass also increased, but then grazing pressure led to a strong decrease in phytoplankton by April, followed by a drop in zooplankton by May. Essentially, the mean seasonal cycle of plankton biomass was shifted earlier in the year. As the seasonal cycle progressed, the difference in plankton concentrations between epochs reversed sign again, leading to slightly greater zooplankton biomass during summer in the later epoch.