Abstract:
Granular matter, also known as bulk solids, consists of discrete particles with sizes between micrometers and meters. It is present in many industrial applications as well as in daily life, for example in food processing, pharmaceutics, and the oil and mining industries. When granular matter is handled, the bulk solids are stored, mixed, conveyed or filtered. These techniques are based on observations in macroscopic experiments, i.e. rheological examinations of the bulk properties. Despite ample investigation of bulk mechanics, the relation between single-particle motion and macroscopic behavior is still not well understood. Exploring the microscopic properties on the single-particle level requires 3D imaging techniques.
The objective of this work was the investigation of single-particle motions in a bulk system in 3D under an external mechanical load, i.e. compression and shear. During the mechanical load, the structural and dynamical properties of these systems were examined with confocal microscopy. Therefore, new granular model systems in the wet and dry state were designed and prepared. As the particles are solid bodies, their motion is described by six degrees of freedom. To explore their entire motion with all degrees of freedom, a technique to visualize the rotation of spherical micrometer-sized particles in 3D was developed.
One focus of this dissertation was a model system for dry cohesive granular matter. In such systems, the particle motion during compression of the granular matter was investigated. In general, the rotation of single particles was the more sensitive parameter compared to the translation. In regions with large structural changes, the rotation had an earlier onset than the translation. In granular systems under shear, shear dilatation and shear-zone formation were observed. Globally, the granular sediments showed a shear behavior already known from classical shear experiments, for example with Jenike cells.
Locally, shear-zone formation was enhanced when a pre-diluted region existed near the applied load. In regions with constant volume fraction, mixing between the different particle layers occurred; in particular, an exchange of particles between the flowing and the non-flowing region was observed.
The second focus was on model systems for wet granular matter, where an additional binding liquid is added to the particle suspension. To examine the 3D structure of the binding liquid on the micrometer scale independently of the particles, a second illumination and detection beam path was implemented. In shear and compression experiments on wet clusters and bulk systems, completely different dynamics occurred compared to dry cohesive model systems. In a Pickering-emulsion-like system, large structural changes predominantly occurred in the local environment of binding-liquid droplets. These large local structural changes were due to an interplay between the energy stored in a binding droplet during its deformation and the binding energy of particles at the droplet interface.
Confocal microscopy in combination with nanoindentation gave new insights into the single-particle motions and dynamics of granular systems under mechanical load. These novel experimental results can help to improve the understanding of the relationship between bulk properties of granular matter, such as volume fraction or yield stress, and the dynamics on the single-particle level.
Abstract:
In condensed-matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystalline structures, and is important for industrial applications. Despite this importance, the interfacial tension is hard to determine, both in experiments and in computer simulations. While sophisticated simulation methods exist for computing liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.
As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but shaped more like a convex polyhedron with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.
To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows one to compute the interfacial tension of liquid-vapor as well as solid-liquid interfaces with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions. As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified; these are of leading order and therefore must not be neglected. An astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
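The extrapolation step described in this abstract can be illustrated with a small numerical sketch. The functional form and all numbers below are assumptions for illustration, not taken from the thesis: synthetic finite-size estimates gamma_L are fitted by least squares to an ansatz with a leading ln(L)/L^2 correction (d = 3, interface area L^2) and extrapolated to infinite volume.

```python
import numpy as np

# Hypothetical finite-size estimates gamma_L of an interfacial tension for
# increasing box sizes L (illustrative synthetic data, not thesis results).
L = np.array([8, 12, 16, 24, 32, 48, 64], dtype=float)
gamma_inf_true, a_true, b_true = 1.20, 0.35, -0.60
gamma_L = gamma_inf_true + a_true * np.log(L) / L**2 + b_true / L**2

# Assumed scaling ansatz in d = 3 (interface area L^2):
#   gamma_L = gamma_inf + a * ln(L)/L^2 + b/L^2
# This is linear in (gamma_inf, a, b), so ordinary least squares suffices.
X = np.column_stack([np.ones_like(L), np.log(L) / L**2, 1.0 / L**2])
coef, *_ = np.linalg.lstsq(X, gamma_L, rcond=None)
gamma_inf, a, b = coef
print(round(gamma_inf, 4))  # recovers 1.2 for this synthetic data
```

Because the synthetic data follow the ansatz exactly, the fit recovers the infinite-volume value; with real simulation data one would also propagate statistical errors through the fit.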
Abstract:
Changes in marine net primary productivity (PP) and export of particulate organic carbon (EP) are projected over the 21st century with four global coupled carbon cycle-climate models. These models include representations of marine ecosystems and the carbon cycle that differ in structure and complexity. All four models show a decrease in global mean PP and EP of between 2 and 20% by 2100 relative to preindustrial conditions for the SRES A2 emission scenario. Two different regimes for productivity changes are consistently identified in all models. The first chain of mechanisms is dominant in the low- and mid-latitude ocean and in the North Atlantic: reduced input of macro-nutrients into the euphotic zone, related to enhanced stratification, reduced mixed-layer depth, and slowed circulation, causes a decrease in macro-nutrient concentrations and in PP and EP. The second regime is projected for parts of the Southern Ocean: an alleviation of light and/or temperature limitation leads to an increase in PP and EP as productivity is fueled by a sustained nutrient input. A region of disagreement among the models is the Arctic, where three models project an increase in PP while one model projects a decrease. Projected changes in seasonal and interannual variability are modest in most regions. Regional model skill metrics are proposed to generate multi-model mean fields that show improved skill in representing observation-based estimates compared to a simple multi-model average. Model results are compared to recent productivity projections with three different algorithms usually applied to infer net primary production from satellite observations.
Abstract:
Background: Rates of molecular evolution vary widely among species. While significant deviations from the molecular clock have been found in many taxa, the effects of life history on molecular evolution are not fully understood. In plants, annual/perennial life history traits have long been suspected to influence evolutionary rates at the molecular level. To date, however, the number of genes investigated on this subject is limited and the conclusions are mixed. To evaluate the possible heterogeneity in evolutionary rates between annual and perennial plants at the genomic level, we investigated 85 nuclear housekeeping genes, 10 non-housekeeping families, and 34 chloroplast genes using genomic data from model plants, including Arabidopsis thaliana and Medicago truncatula for annuals and grape (Vitis vinifera) and poplar (Populus trichocarpa) for perennials.
Results: According to the cross-comparisons among the four species, 74-82% of the nuclear genes and 71-97% of the chloroplast genes suggested higher rates of molecular evolution in the two annuals than in the two perennials. The significant heterogeneity in evolutionary rate between annuals and perennials was consistently found both at nonsynonymous and at synonymous sites. While a linear correlation of evolutionary rates in orthologous genes between species was observed at nonsynonymous sites, the correlation was weak or invisible at synonymous sites. This tendency was clearer in nuclear genes than in chloroplast genes, in which the overall evolutionary rate was small. The slope of the regression line was consistently lower than unity, further confirming the higher evolutionary rate in annuals at the genomic level.
Conclusions: The higher evolutionary rate in annuals than in perennials appears to be a universal phenomenon in both the nuclear and chloroplast genomes of the four dicot model plants we investigated. Therefore, such heterogeneity in evolutionary rate should result from factors that have genome-wide influence, most likely those associated with annual/perennial life history. Although we acknowledge the current limitations of this kind of study, mainly the small sample size available and the distant taxonomic relationship of the model organisms, our results indicate that such genome-wide surveys are a promising approach toward further understanding of the mechanisms determining molecular evolutionary rates at the genomic level.
Abstract:
The major route of transmission of Neospora caninum in cattle is transplacental, from an infected cow to its progeny. Therefore, a vaccine should be able to prevent both horizontal transmission from contaminated food or water and vertical transmission. We have previously shown that a chimeric vaccine composed of predicted immunogenic epitopes of NcMIC3, NcMIC1 and NcROP2 (recNcMIC3-1-R) significantly reduced cerebral infection in BALB/c mice. In this study, mice were first vaccinated and then mated, and pregnant mice were challenged with 2×10^6 N. caninum tachyzoites at day 7-9 of pregnancy. Partial protection was observed only in the mice vaccinated with a tachyzoite crude protein extract; no protection against vertical transmission or cerebral infection in the dams was observed in the group vaccinated with recNcMIC3-1-R. Serological and cytokine analyses showed overall lower cytokine levels in sera, associated with dominant IL-4 expression and high IgG1 titers. Thus, the Th2-type immune response observed in the pregnant mice was not protective against experimental neosporosis, in contrast to the mixed Th1-/Th2-type immune response observed in the non-pregnant mouse model. These results demonstrate that the immunomodulation that occurs during pregnancy was not favorable for the protection against N. caninum infection conferred by vaccination with recNcMIC3-1-R.
Abstract:
Parents and children, starting at very young ages, discuss religious and spiritual issues: where we come from, what happens to us after we die, whether there is a God, and so on. Unfortunately, few studies have analyzed the content and structure of parent-child conversation about religion and spirituality (Boyatzis & Janicki, 2003; Dollahite & Thatcher, 2009), and most studies have relied on self-report with no direct observation. The current study examined mother-child (M-C) spiritual discourse to learn about its content, structure, and frequency through a survey inventory in combination with direct video observation using a novel structured task. We also analyzed how mothers' religiosity along several major dimensions related to their communication behaviors within both methods. Mothers (N = 39, M age = 40) of children aged 3-12 completed a survey packet on M-C spiritual discourse and standard measures of mothers' religious fundamentalism, intrinsic religiosity, sanctification of parenting (how much the mother saw herself as doing God's work as a parent), and a new measure of parental openness to children's spirituality. Then, in a structured task in our lab, mothers (N = 33) and children (M age = 7.33) watched a short film or read a short book that explored death in an age-appropriate manner and then engaged in a videotaped conversation about the movie or book and their religious or spiritual beliefs. Frequency of M-C spiritual discourse was positively related to mothers' religious fundamentalism (r = .71, p = .00), intrinsic religiosity (r = .77, p = .00), and sanctification of parenting (r = .79, p = .00), but, surprisingly, was inversely related to mothers' openness to the child's spirituality (r = -.52, p = .00). Survey data showed that the two most common topics discussed were God (once a week) and religion as it relates to moral issues (once a week).
According to mothers, their children's most common method of initiating spiritual discourse was to repeat what they had heard parents or family say about religious issues (M = 2.97; once a week); mothers' most common method was to describe their own religious/spiritual beliefs (M = 2.92). Spiritual discourse most commonly occurred either at bedtime or at mealtime, as reported by 26% of mothers, with the most common triggers reported as daily routine/random thoughts (once a week) and observations of nature (once a week). Mothers' most important goals for spiritual discourse were to let their children know that they love them (M = 3.72; very important) and to help them become good and moral people (M = 3.67; very important). A regression model showed that significant variance in the frequency of mother-child spiritual discourse (R2 = .84, p = .00) was predicted by the mothers' importance of goals during discourse (β = 0.46, p = .00), the frequency with which the mother's spirituality was deepened through spiritual discourse (β = 0.39, p = .00), and the mother's fundamentalism (β = 0.20, p = .05). In a separate regression, the mother's comfort in the structured task (β = 0.70, p = .00) and the number of open-ended questions she asked (β = -0.26, p = .03) predicted the reciprocity between mother and child (R2 = .62, p = .00). In addition, the mother's age (β = 0.22, p = .059) and comfort during the task (β = 0.73, p = .00) predicted the child's engagement within the structured task. Other findings and theoretical and methodological implications will be discussed.
Abstract:
Khutoretsky dealt with the problem of maximising a linear utility function (MUF) over the set of short-term equilibria in a housing market by reducing it to a linear programming problem, and suggested a combinatorial algorithm for this problem. Two approaches to market adjustment were considered: the funding of housing construction and the granting of housing allowances. In both cases, locally optimal regulatory measures can be developed using the corresponding dual prices. The optimal effects (with the regulation expenditure restricted to an amount K) can be found using specialised models based on the MUF: a model M1 for choosing the optimal structure of investment in housing construction, and a model M2 for the optimal distribution of housing allowances. The linear integer optimisation problems corresponding to these models are initially difficult but can be solved after slight modifications of the parameters. In particular, the necessary modification of K does not exceed the maximum construction cost of one dwelling (for M1) or the maximum size of one housing allowance (for M2). The result is particularly useful since such a slight modification of K is immaterial in practice.
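Model M1, choosing which construction projects to fund within a budget K, has knapsack-like integer structure. A minimal sketch follows; the costs, utilities, and the simple dynamic program are illustrative assumptions standing in for, and not reproducing, Khutoretsky's combinatorial algorithm.

```python
def best_investment(costs, utilities, K):
    """0/1 knapsack by dynamic programming: choose which dwellings to fund
    so total utility is maximised while total cost stays within budget K.
    (Illustrative stand-in for a model-M1-type problem.)"""
    best = {0: 0}  # maps total cost spent -> maximum achievable utility
    for c, u in zip(costs, utilities):
        for spent, util in list(best.items()):  # snapshot before updating
            if spent + c <= K and best.get(spent + c, -1) < util + u:
                best[spent + c] = util + u
    return max(best.values())

# Four hypothetical projects (cost, utility) under budget K = 9.
print(best_investment([4, 3, 5, 2], [7, 5, 9, 3], K=9))  # → 16
```

Note how the answer changes only in jumps as K varies, which echoes the abstract's point that modifying K by less than the cost of one dwelling is often enough to make the integer problem tractable.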
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
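The rotation step described in this abstract can be sketched in a few lines of illustrative code (not the authors' implementation; the AR(1) covariance, sample size, and seed are arbitrary choices). Multiplying the marginal residual vector by a Cholesky factor of the inverse marginal covariance, implemented below as a triangular solve with the Cholesky factor of the covariance itself, decorrelates the residuals; their ECDF can then be compared with the standard normal CDF.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Build an AR(1) marginal covariance V, draw a correlated "residual"
# vector, and rotate it. Solving with chol(V) is numerically equivalent
# to multiplying by a Cholesky factor of V^{-1}.
n = 500
idx = np.arange(n)
V = 0.6 ** np.abs(np.subtract.outer(idx, idx))  # AR(1) correlation matrix
C = np.linalg.cholesky(V)
y = C @ rng.standard_normal(n)                  # correlated residual vector
r = np.linalg.solve(C, y)                       # rotated residuals, Var(r) = I

# ECDF of the sorted rotated residuals vs. the standard normal CDF.
r_sorted = np.sort(r)
ecdf = np.arange(1, n + 1) / n
Phi = np.array([0.5 * (1.0 + erf(t / sqrt(2.0))) for t in r_sorted])
ks = float(np.max(np.abs(ecdf - Phi)))          # informal KS-type distance
print(ks < 0.15)  # small distance: the rotated residuals look standard normal
```

A graphical diagnostic in the spirit of the paper would plot `ecdf` against `Phi`; under a well-specified model the curve hugs the diagonal.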
Abstract:
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model, normal base measures and Gibbs sampling procedures based on the Pólya urn scheme are often used to simulate posterior draws. These algorithms are applicable in the conjugate case when (for a normal base measure) the likelihood is normal. In the non-conjugate case, the algorithms proposed by MacEachern and Müller (1998) and Neal (2000) are often applied to generate posterior samples. Some common problems associated with simulation algorithms for non-conjugate MDP models include convergence and mixing difficulties. This paper proposes an algorithm based on the Pólya urn scheme that extends the Gibbs sampling algorithms to non-conjugate models with normal base measures and exponential family likelihoods. The algorithm proceeds by making Laplace approximations to the likelihood function, thereby reducing the procedure to that of conjugate normal MDP models. To ensure the validity of the stationary distribution in the non-conjugate case, the proposals are accepted or rejected by a Metropolis-Hastings step. In the special case where the data are normally distributed, the algorithm is identical to the Gibbs sampler.
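The core Laplace step of the algorithm described above can be illustrated for a Poisson likelihood (a hedged sketch with assumed notation; the Pólya urn bookkeeping and the Metropolis-Hastings correction are omitted): the log-likelihood in a random effect b is approximated by a normal centred at its mode, with variance given by the inverse negative curvature there.

```python
import math

# Hedged sketch of the Laplace approximation (notation assumed, not the
# paper's code): for a Poisson observation y with log-link and one random
# effect b, log p(y | b) = y*b - exp(b) + const. Approximate it in b by a
# normal with mean at the mode and variance 1 / (negative curvature).
def laplace_normal_approx(y):
    b_hat = math.log(y)            # mode: solves d/db (y*b - exp(b)) = 0
    var = 1.0 / math.exp(b_hat)    # inverse of -(d^2/db^2) = exp(b_hat)
    return b_hat, var

mu, var = laplace_normal_approx(4)
print(round(mu, 3), round(var, 3))  # 1.386 0.25
```

With the likelihood replaced by this normal, the conjugate Pólya urn updates for a normal base measure apply directly, which is the reduction the abstract describes; the Metropolis-Hastings step then corrects for the approximation error.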
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures like penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
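The sub-model iteration can be sketched for the simplest possible case, an ordinary linear model with two coefficient blocks (an illustrative toy with made-up data, not the paper's GLMM fitter): each block is refitted in turn against the residuals left by the other, Gauss-Seidel style, until the joint least-squares solution is reached.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gauss-Seidel / backfitting sketch: y = 2*x1 - 3*x2 + noise, with the
# two coefficients treated as separate "sub-models".
n = 200
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 2.0 * x1 - 3.0 * x2 + 0.1 * rng.standard_normal(n)

b1 = b2 = 0.0
for _ in range(50):  # iterate sub-model fits until convergence
    b1 = x1 @ (y - b2 * x2) / (x1 @ x1)  # fit block 1 with block 2 held fixed
    b2 = x2 @ (y - b1 * x1) / (x2 @ x2)  # fit block 2 with block 1 held fixed

# The iterates converge to the joint OLS solution, close to (2, -3).
print(abs(b1 - 2.0) < 0.05 and abs(b2 + 3.0) < 0.05)
```

Each sub-fit here touches only one column at a time, which mirrors (in miniature) how the paper's strategy fits sub-models to subsets of a data set too large to process at once.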
Abstract:
In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
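The notion of effective degrees of freedom underlying a conditional AIC can be sketched for a ridge-type shrinkage fit (an illustrative stand-in; the paper's analytic correction for estimated covariance parameters is more involved than this): when the conditional fit is y_hat = H y, the trace of the hat matrix H plays the role of the parameter count and lies strictly between zero and the number of columns.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ridge-type fit (assumed setup, not the paper's model):
# y_hat = H y with H = X (X'X + lam I)^{-1} X'. The effective degrees of
# freedom is trace(H), shrunk below the raw column count by the penalty.
n, p, lam = 50, 5, 3.0
X = rng.standard_normal((n, p))
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
edf = float(np.trace(H))
print(0.0 < edf < p)  # shrinkage: fewer effective parameters than p
```

In a conditional AIC this trace replaces the naive count of fixed plus random effects; the abstract's point is that treating the covariance parameters (here, lam) as known when they were in fact estimated biases such a criterion.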
Abstract:
We establish a fundamental equivalence between singular value decomposition (SVD) and functional principal components analysis (FPCA) models. The constructive relationship allows one to deploy the numerical efficiency of SVD to fully estimate the components of FPCA, even for extremely high-dimensional functional objects, such as brain images. As an example, a functional mixed effect model is fitted to high-resolution morphometric (RAVENS) images. The main directions of morphometric variation in brain volumes are identified and discussed.
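The SVD route to FPCA can be sketched as follows (illustrative random data, not the RAVENS images; smoothing and the mixed-effect structure are omitted): the right singular vectors of the centred subjects-by-voxels data matrix are the estimated principal component functions, and the squared singular values over n give the component variances, without ever forming the p-by-p covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "functional" data: n subjects observed on p grid points (voxels).
n, p = 40, 1000
Y = rng.standard_normal((n, p))
Y -= Y.mean(axis=0)                         # centre across subjects

# Thin SVD of the data matrix: Y = U diag(s) Vt.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
eigenfunctions = Vt                          # rows: principal directions
variances = s**2 / n                         # eigenvalues of Y'Y / n
scores = U * s                               # per-subject component scores

# SVD returns singular values in decreasing order, so the component
# variances come out sorted, just as an eigendecomposition would give,
# but the p x p covariance matrix is never built: the efficiency gain.
print(variances[0] > variances[1])
```

For brain images p can be in the millions, so avoiding the p-by-p covariance is exactly what makes the constructive SVD-FPCA relationship practically useful.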