883 results for Mixed integer models
Abstract:
A significant challenge in predicting climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting differing performance in different situations. Our aim was to quantify uncertainties associated with the application of two complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After covarying out other factors, average variation in predicted values of Sphagnum cover across UK peatlands was the next largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and of nitrogen and sulphur deposition, and of the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m2 unit area) but also highly uncertain. The peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, the Brecon Beacons and the western Lake District.
Abstract:
The application of real options theory to commercial real estate has developed rapidly over the last 15 years. In particular, several pricing models have been applied to value real options embedded in development projects. In this study we use a case study of a mixed-use development scheme and identify the major implied and explicit real options available to the developer. We offer the perspective of a real market application by exploring different binomial models and the associated methods of estimating the crucial parameter of volatility. We include simple binomial lattices and quadranomial lattices, and demonstrate the sensitivity of the results to the choice of inputs and method.
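As a rough illustration of the lattice methods discussed in this abstract, the sketch below values a hypothetical option to defer a development project on a Cox-Ross-Rubinstein binomial lattice; all parameter values (project value, build cost, volatility, rate, horizon) are invented for illustration and are not taken from the case study.

```python
import numpy as np

def crr_deferral_option(v0, cost, sigma, r, T, steps):
    """Value an American-style option to defer developing a project on a
    Cox-Ross-Rubinstein binomial lattice (illustrative sketch only)."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))      # up factor
    d = 1.0 / u                          # down factor
    p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal payoffs: develop (pay the cost, receive the project value) or not
    j = np.arange(steps + 1)
    option = np.maximum(v0 * u**j * d**(steps - j) - cost, 0.0)

    # Backward induction, checking early exercise at every node
    for n in range(steps - 1, -1, -1):
        j = np.arange(n + 1)
        values = v0 * u**j * d**(n - j)
        cont = disc * (p * option[1:n + 2] + (1 - p) * option[:n + 1])
        option = np.maximum(values - cost, cont)
    return option[0]

# Invented inputs: gross development value 100, build cost 95,
# 25% volatility, 4% risk-free rate, a 3-year deferral window.
print(crr_deferral_option(v0=100.0, cost=95.0, sigma=0.25, r=0.04, T=3.0, steps=300))
```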
Abstract:
Ice cloud representation in general circulation models remains a challenging task, due to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and the CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while underestimating it at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and the range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution due to different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version, with a diagnostic representation of precipitating snow and mixed-phase ice cloud, fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.
Abstract:
A process-oriented modeling approach is applied to simulate glacier mass balance for individual glaciers using statistically downscaled general circulation models (GCMs). Glacier-specific seasonal sensitivity characteristics based on a mass balance model of intermediate complexity are used to simulate mass balances of Nigardsbreen (Norway) and Rhonegletscher (Switzerland). Simulations using ECMWF reanalyses for the period 1979–93 are in good agreement with in situ mass balance measurements for Nigardsbreen. The method is applied to multicentury integrations of coupled (ECHAM4/OPYC) and mixed-layer (ECHAM4/MLO) GCMs excluding external forcing. A high correlation between decadal variations in the North Atlantic Oscillation (NAO) and the mass balance of the glaciers is found. The dominant factor in this relationship is the strong impact of winter precipitation associated with the NAO. A high NAO phase means enhanced (reduced) winter precipitation for Nigardsbreen (Rhonegletscher), typically leading to a higher (lower) than normal annual mass balance. This mechanism, entirely due to internal variations in the climate system, can explain the observed strongly positive mass balances for Nigardsbreen and other maritime Norwegian glaciers within the period 1980–95. It may also be partly responsible for the recent strong negative mass balances of Alpine glaciers.
Abstract:
The role of different sky conditions on the diffuse PAR fraction (ϕ), air temperature (Ta), vapor pressure deficit (vpd) and GPP in a deciduous forest is investigated using eddy covariance observations of CO2 fluxes and radiometer and ceilometer observations of sky and PAR conditions on hourly and growing-season timescales. Maximum GPP response occurred under moderate to high PAR and ϕ and low vpd. Light response models using a rectangular hyperbola showed a positive linear relation between ϕ and effective quantum efficiency (α = 0.023ϕ + 0.012, r2 = 0.994). Since PAR and ϕ are negatively correlated, there is a tradeoff between the greater use efficiency of diffuse light and lower vpd and the associated decrease in total PAR available for photosynthesis. To a lesser extent, light response was also modified by vpd and Ta. The net effect of these factors and their relation with sky conditions enhanced light response under sky conditions that produced higher ϕ. Six sky conditions were classified from cloud frequency and ϕ data: optically thick clouds, optically thin clouds, mixed sky (partial clouds within the hour), and high, medium and low optical aerosol. The frequency and light responses of each sky condition for the growing season were used to predict the role of changing sky conditions on annual GPP. The net effect of an increasing frequency of thick clouds is to decrease GPP, while changing low-aerosol conditions has a negligible effect; increases in the other sky conditions all lead to gains in GPP. Sky conditions that enhance intermediate levels of ϕ, such as thin or scattered clouds or higher aerosol concentrations from volcanic eruptions or anthropogenic emissions, will have a positive effect on annual GPP, while an increase in cloud cover will have a negative impact. Due to the ϕ/PAR tradeoff, and since GPP responses to changes in individual sky conditions differ in sign and magnitude, the net response of ecosystem GPP to future sky conditions is non-linear and tends toward moderation of change.
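The fitted light-response relation above translates directly into a small function. The sketch below assumes the standard rectangular-hyperbola form GPP = α·PAR·GPPmax/(α·PAR + GPPmax) with α = 0.023ϕ + 0.012 from the abstract; the asymptote gpp_max is an invented illustrative value.

```python
def gpp_light_response(par, phi, gpp_max=30.0):
    """Rectangular-hyperbola light response with a diffuse-fraction-dependent
    effective quantum efficiency (alpha = 0.023*phi + 0.012, as fitted above).
    par: total PAR; phi: diffuse PAR fraction in [0, 1];
    gpp_max: asymptotic GPP, an illustrative placeholder value."""
    alpha = 0.023 * phi + 0.012
    return alpha * par * gpp_max / (alpha * par + gpp_max)

# The phi/PAR tradeoff: diffuse light is used more efficiently,
# but high diffuse fractions coincide with lower total PAR.
print(gpp_light_response(par=1500, phi=0.2))  # clear sky: high PAR, low phi
print(gpp_light_response(par=800, phi=0.8))   # cloudy sky: low PAR, high phi
```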
Abstract:
Scattering and absorption by aerosol in anthropogenically perturbed air masses over Europe have been measured using instrumentation flown on the UK’s BAe-146-301 large Atmospheric Research Aircraft (ARA), operated by the Facility for Airborne Atmospheric Measurements (FAAM), on 14 flights during the EUCAARI-LONGREX campaign in May 2008. The geographical and temporal variations of the derived shortwave optical properties of the aerosol are presented. Values of the single scattering albedo of dry aerosol at 550 nm varied considerably, from 0.86 to near unity, with a campaign average of 0.93 ± 0.03. Dry aerosol optical depths ranged from 0.030 ± 0.009 to 0.24 ± 0.07. An optical-properties closure study comparing calculations from composition data and a Mie scattering code with the measured properties is presented. Agreement to within the measurement uncertainties of 30% can be achieved for both scattering and absorption, but the latter is shown to be sensitive to the refractive indices chosen for organic aerosols and, to a lesser extent, black carbon, as well as being highly dependent on the accuracy of the absorption measurements. Agreement with the measured absorption can be achieved either if organic carbon is assumed to be weakly absorbing, or if the organic aerosol is purely scattering and the absorption measurement is an overestimate due to the presence of large amounts of organic carbon. Refractive indices could not be inferred conclusively because of this uncertainty, despite the methodological enhancement over previous studies afforded by the use of the black carbon measurements. Hygroscopic growth curves derived from the wet nephelometer indicate moderate water uptake by the aerosol, with a campaign-mean f(RH) value (ratio in scattering) of 1.5 (range from 1.23 to 1.63) at 80% relative humidity. This value is qualitatively consistent with the major chemical components of the aerosol measured by the aerosol mass spectrometer, which are primarily mixed organics and nitrate, with some sulphate.
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem in which the items are grouped into classes, so that the overall knapsack has to be divided into compartments and each compartment is loaded with items from a single class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are lower and upper bounded. The objective is to maximize the total value of the items loaded into the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program; in this paper, we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found optimal solutions for most instances of the constrained compartmentalized knapsack problem. Heuristics, on the other hand, provide good solutions with low computational effort.
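A toy version of a compact integer linear model for this problem (not the authors' formulation) can be written with the PuLP modelling library: binary variables select items and compartments, a compartment consumes capacity and incurs a fixed cost when built, and each compartment's load is bounded. All data below are invented.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Illustrative data: two classes of items given as (value, weight) pairs.
items = {
    "A": [(10, 4), (7, 3), (5, 2)],
    "B": [(12, 6), (6, 3)],
}
W = 15            # knapsack capacity
setup_loss = 1    # capacity consumed by building a compartment
fixed_cost = 2    # cost of building a compartment
lb, ub = 3, 10    # lower/upper bound on each compartment's load

prob = LpProblem("compartmentalized_knapsack", LpMaximize)
x = {(k, i): LpVariable(f"x_{k}_{i}", cat=LpBinary)
     for k, its in items.items() for i in range(len(its))}
y = {k: LpVariable(f"y_{k}", cat=LpBinary) for k in items}

# Objective: total item value minus compartment fixed costs
prob += (lpSum(items[k][i][0] * x[k, i] for (k, i) in x)
         - fixed_cost * lpSum(y.values()))

# Overall capacity, including the capacity lost to each compartment built
prob += (lpSum(items[k][i][1] * x[k, i] for (k, i) in x)
         + setup_loss * lpSum(y.values())) <= W

# Each compartment's load is bounded, and items require their compartment
for k, its in items.items():
    load = lpSum(its[i][1] * x[k, i] for i in range(len(its)))
    prob += load >= lb * y[k]
    prob += load <= ub * y[k]
    for i in range(len(its)):
        prob += x[k, i] <= y[k]

prob.solve()
print("objective:", prob.objective.value())
```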
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that they have a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present a Bayesian case-deletion influence diagnostic based on the Kullback-Leibler divergence. Further, some discussion of model selection criteria is given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models.
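The hierarchical representation mentioned here is easy to see by simulation: a skew-normal variate is a combination of a half-normal and an independent normal variate, and dividing by the square root of a chi-square mixing variable scaled by its degrees of freedom gives a skew-t variate. The sketch below uses one standard parameterization (ξ location, ω scale, α skewness, ν degrees of freedom) purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rskew_t(n, xi=0.0, omega=1.0, alpha=3.0, nu=4.0):
    """Simulate skew-t draws via the scale-mixture/hierarchical
    representation: a skew-normal scaled by sqrt(nu / chi2(nu))."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    z0 = np.abs(rng.standard_normal(n))              # half-normal component
    z1 = rng.standard_normal(n)                      # independent normal
    sn = delta * z0 + np.sqrt(1.0 - delta**2) * z1   # standard skew-normal
    u = rng.chisquare(nu, size=n)                    # chi-square mixing variable
    return xi + omega * sn / np.sqrt(u / nu)

sample = rskew_t(10_000)
print(sample.mean(), sample.std())  # skewed, heavier-tailed than normal
```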
Abstract:
We report on integer and fractional microwave-induced resistance oscillations in a 2D electron system with high density and moderate mobility, and present results of measurements at high microwave intensity and temperature. Fractional microwave-induced resistance oscillations occur up to fractional denominator 8 and are quenched independently of their fractional order. We discuss our results and compare them with existing theoretical models.
Abstract:
Frutalin is a homotetrameric alpha-D-galactose (D-Gal)-binding lectin that activates natural killer cells in vitro and promotes leukocyte migration in vivo. Because lectins are potent lymphocyte stimulators, understanding the interactions that occur between them and cell surfaces can help elucidate the action mechanisms involved in this process. In this paper, we present a detailed investigation of the interactions of frutalin with phospho- and glycolipids using Langmuir monolayers as biomembrane models. The results confirm the specificity of frutalin for D-Gal attached to a biomembrane. Adsorption of frutalin was more efficient for lipids with a galactose polar head than for sulfated galactose, for which a lag time is observed, indicating a rearrangement of the monolayer to incorporate the protein. For ganglioside GM1 monolayers, lower quantities of the protein were adsorbed, probably because D-galactose is positioned farther from the interface. Binary mixtures containing galactocerebroside revealed small domains formed at high lipid packing in the presence of frutalin, suggesting that the lectin induces clusterization and domain formation in vitro, which may be a form of receptor internalization. This is the first experimental evidence of such a lectin effect, and it may be useful for understanding the mechanism of action of lectins at the molecular level.
Abstract:
In this paper we extend partial linear models with normal errors to Student-t errors. Penalized likelihood equations are applied to derive the maximum likelihood estimates, which appear to be robust against outlying observations in the sense of the Mahalanobis distance. In order to study the sensitivity of the penalized estimates under some usual perturbation schemes in the model or data, the local influence curvatures are derived and some diagnostic graphics are proposed. A motivating example, previously analyzed under normal errors, is reanalyzed under Student-t errors, and the local influence approach is used to compare the sensitivity of the model estimates.
Abstract:
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUPs) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher-dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach with an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also present simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model.
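For intuition only: under a textbook two-stage mixed model (not the expanded finite population model proposed here), the BLUP of a realized cluster mean shrinks the cluster sample mean toward the grand mean by a factor determined by the variance components, so small clusters are shrunk harder. A minimal sketch with invented variance components:

```python
import numpy as np

def blup_cluster_mean(y_cluster, grand_mean, var_between, var_within):
    """Textbook BLUP of a realized cluster mean under a two-stage mixed
    model: shrink the cluster sample mean toward the grand mean.
    (Illustrative only; not the expanded finite-population predictor.)"""
    n = len(y_cluster)
    lam = var_between / (var_between + var_within / n)  # shrinkage factor
    return grand_mean + lam * (np.mean(y_cluster) - grand_mean)

# Small clusters are shrunk harder than large ones.
print(blup_cluster_mean([5.0, 7.0], 4.0, var_between=1.0, var_within=4.0))
print(blup_cluster_mean([5.0, 7.0] * 10, 4.0, var_between=1.0, var_within=4.0))
```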
Abstract:
We present the hglm package for fitting hierarchical generalized linear models. It can be used for linear mixed models and generalized linear mixed models with random effects, supporting a variety of link functions and a variety of distributions for both the outcomes and the random effects. Fixed effects can also be fitted in the dispersion part of the model.
Abstract:
Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms.

Results: We propose the use of double hierarchical generalized linear models (DHGLM), in which the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that had previously been analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software, and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model.

Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
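The iteration described in this abstract can be sketched in its simplest form, with fixed effects only in both parts (a schematic of the idea, not the ASReml implementation): alternate a weighted least-squares fit of the mean model with a gamma GLM with log link fitted to the squared residuals, whose fitted values supply the next round of weights. Data and coefficients below are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated data with heteroscedastic residual variance
n = 2000
X = sm.add_constant(rng.standard_normal((n, 2)))   # mean-model design
Z = sm.add_constant(rng.standard_normal((n, 1)))   # variance-model design
true_var = np.exp(Z @ np.array([0.2, 0.8]))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(n) * np.sqrt(true_var)

weights = np.ones(n)
for _ in range(20):
    # (1) Mean model: weighted least squares given current variances
    mean_fit = sm.WLS(y, X, weights=weights).fit()
    resid2 = (y - mean_fit.fittedvalues) ** 2
    # (2) Dispersion model: gamma GLM with log link on squared residuals
    disp_fit = sm.GLM(resid2, Z,
                      family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    weights = 1.0 / disp_fit.fittedvalues          # inverse predicted variances

print("mean coefficients:", mean_fit.params)
print("dispersion coefficients:", disp_fit.params)
```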
Abstract:
This paper confronts the Capital Asset Pricing Model (CAPM) with the 3-factor Fama-French (FF) model using both Brazilian and US stock market data for the same sample period (1999-2007). The US data serve only as a benchmark for comparative purposes. We use two competing econometric methods, the Generalized Method of Moments (GMM) of Hansen (1982) and the Iterative Nonlinear Seemingly Unrelated Regression Estimation (ITNLSUR) of Burmeister and McElroy (1988). Both methods nest other options based on the procedure of Fama and MacBeth (1973). The estimations show that the FF model fits the Brazilian data better than the CAPM, although it is imprecise compared with its US analog. We argue that this is a consequence of an absence of clear-cut anomalies in the Brazilian data, especially those related to firm size. The tests of the efficiency of the models - nullity of intercepts and fit of the cross-sectional regressions - yielded mixed conclusions. The intercept tests failed to reject the CAPM when Brazilian value-premium-sorted portfolios were used, contrasting with the US data, a very well documented conclusion. The ITNLSUR estimates an economically reasonable and statistically significant market risk premium for Brazil of around 6.5% per year without resorting to any particular data set aggregation. However, we could not find the same for the US data over the identical period, or even using a larger data set.
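Both estimators nest the two-pass procedure of Fama and MacBeth (1973) referenced above; as a reminder of what that baseline does, here is a bare-bones sketch on simulated single-factor data (all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated panel: T months of excess returns on N assets and one factor
T, N = 240, 25
factor = 0.005 + 0.04 * rng.standard_normal(T)            # market excess return
betas_true = rng.uniform(0.5, 1.5, N)
returns = factor[:, None] * betas_true + 0.05 * rng.standard_normal((T, N))

# Pass 1: time-series regression of each asset on the factor -> betas
X = np.column_stack([np.ones(T), factor])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]     # slope per asset

# Pass 2: cross-sectional regression each period -> gamma_t, then average
W = np.column_stack([np.ones(N), betas])
gammas = np.linalg.lstsq(W, returns.T, rcond=None)[0]     # shape (2, T)
risk_premium = gammas[1].mean()
se = gammas[1].std(ddof=1) / np.sqrt(T)
print(f"estimated factor risk premium: {risk_premium:.4f} (se {se:.4f})")
```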