931 results for Mixed Linear Model
Abstract:
Three emissions inventories have been used with a fully Lagrangian trajectory model to calculate the stratospheric accumulation of water vapour emissions from aircraft, and the resulting radiative forcing. The annual and global mean radiative forcing due to present-day aviation water vapour emissions has been found to be 0.9 [0.3 to 1.4] mW m^-2. This is around a factor of three smaller than the value given in recent assessments, and the upper bound is much lower than a recently suggested 20 mW m^-2 upper bound. This forcing is sensitive to the vertical distribution of emissions, and, to a lesser extent, interannual variability in meteorology. Large differences in the vertical distribution of emissions within the inventories have been identified, which result in the choice of inventory being the largest source of differences in the calculation of the radiative forcing due to the emissions. Analysis of Northern Hemisphere trajectories demonstrates that the assumption of an e-folding time is not always appropriate for stratospheric emissions. A linear model is more representative for emissions that enter the stratosphere far above the tropopause.
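The abstract's contrast between an e-folding and a linear removal law can be sketched numerically. The sketch below uses arbitrary illustrative timescales (not values from the paper) and matches the two laws at the same initial removal rate, showing that the exponential tail doubles the time-integrated burden relative to linear decay:

```python
import math

def efold_mass(m0, tau, t):
    """Mass remaining under an e-folding (exponential) loss model."""
    return m0 * math.exp(-t / tau)

def linear_mass(m0, T, t):
    """Mass remaining under a linear loss model: all mass removed by time T."""
    return max(0.0, m0 * (1.0 - t / T))

m0, T = 1.0, 10.0
tau = T                    # equal initial removal rates: m0/tau == m0/T
dt, steps = 0.01, 100_000  # integrate to t = 1000, far past both lifetimes

efold_burden = sum(efold_mass(m0, tau, k * dt) * dt for k in range(steps))
linear_burden = sum(linear_mass(m0, T, k * dt) * dt for k in range(steps))
# The exponential tail doubles the time-integrated burden relative to the
# linear law, so the assumed decay form feeds directly into the forcing.
```

Since radiative forcing scales with the accumulated burden, the choice between the two decay laws matters even when both start from the same removal rate.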
Abstract:
A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.
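For orientation, the textbook neutral-stability baseline for friction velocity is the log-law inversion below; the paper's method for stable flow over hills is more elaborate, and the wind and roughness values here are purely illustrative:

```python
import math

def friction_velocity_loglaw(u_ref, z_ref, z0, kappa=0.4):
    """Neutral log-law inversion for friction velocity:
    U(z) = (u*/kappa) * ln(z/z0)  =>  u* = kappa * U(z) / ln(z/z0)."""
    return kappa * u_ref / math.log(z_ref / z0)

# Illustrative inputs: 8 m/s measured at 10 m over grassland-like roughness
u_star = friction_velocity_loglaw(u_ref=8.0, z_ref=10.0, z0=0.05)
```

Stable stratification modifies this profile (e.g. with Monin-Obukhov correction terms), which is the regime the proposed method addresses.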
Abstract:
An important feature of agribusiness promotion programs is their lagged impact on consumption. Efficient investment in advertising requires reliable estimates of these lagged responses, and it is desirable from both applied and theoretical standpoints to have a flexible method for estimating them. This note derives an alternative Bayesian methodology for estimating lagged responses when investments occur intermittently within a time series. The method exploits a latent-variable extension of the natural-conjugate normal linear model, Gibbs sampling and data augmentation. It is applied to a monthly time series on Turkish pasta consumption (1993:5-1998:3) and three non-consecutive promotion campaigns (1996:3, 1997:3, 1997:10). The results suggest that responses were greatest to the second campaign, which allocated its entire budget to television media; that its impact peaked in the sixth month following expenditure; and that the rate of return (measured in metric tons additional consumption per thousand dollars expended) was around a factor of 20.
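A deliberately stripped-down version of the Gibbs-sampling idea can be sketched for a plain normal linear model with flat priors; the paper's latent-variable, natural-conjugate extension for intermittent campaigns is more involved, and the data below are simulated, not the Turkish pasta series:

```python
import math
import random

random.seed(0)

# Simulated regression data with a known slope (illustrative only)
true_b, true_sigma = 2.0, 0.5
x = [i / 10 for i in range(50)]
y = [true_b * xi + random.gauss(0, true_sigma) for xi in x]

n = len(x)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

b, sigma2 = 0.0, 1.0
draws = []
for it in range(2000):
    # b | sigma2, y ~ Normal(sxy/sxx, sigma2/sxx) under a flat prior on b
    b = random.gauss(sxy / sxx, math.sqrt(sigma2 / sxx))
    # sigma2 | b, y ~ Inverse-Gamma(n/2, SSE/2) under a Jeffreys-type prior
    sse = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    sigma2 = (sse / 2) / random.gammavariate(n / 2, 1.0)
    if it >= 500:  # discard burn-in
        draws.append(b)

post_mean_b = sum(draws) / len(draws)
```

Alternating these two conditional draws is the core of the Gibbs sampler; data augmentation extends the same loop with a draw of the latent variables given everything else.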
Abstract:
Attribute non-attendance in choice experiments affects WTP estimates and therefore the validity of the method. A recent strand of literature uses attenuated estimates of marginal utilities of ignored attributes. Following this approach, we propose a generalisation of the mixed logit model whereby the distribution of marginal utility coefficients of a stated non-attender has a potentially lower mean and lower variance than those of a stated attender. Model comparison shows that our shrinkage approach fits the data better and produces more reliable WTP estimates. We further find that while reliability of stated attribute non-attendance increases in successive choice experiments, it does not increase when respondents report having ignored the same attribute twice.
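The shrinkage idea can be illustrated with a simulated mixed-logit choice probability, in which a stated non-attender of an attribute keeps a nonzero coefficient but with shrunken mean and variance. All numbers below are hypothetical, not estimates from the paper:

```python
import math
import random

random.seed(5)

def mixed_logit_prob(attrs_a, attrs_b, coef_means, coef_sds, n_draws=2000):
    """Simulated mixed-logit probability of choosing A over B with
    independent normal random coefficients."""
    total = 0.0
    for _ in range(n_draws):
        betas = [random.gauss(m, s) for m, s in zip(coef_means, coef_sds)]
        va = sum(b * a for b, a in zip(betas, attrs_a))
        vb = sum(b * a for b, a in zip(betas, attrs_b))
        total += 1.0 / (1.0 + math.exp(vb - va))  # logit choice probability
    return total / n_draws

# Alternative A carries attribute 0, alternative B carries attribute 1.
# A stated non-attender of attribute 0 gets a shrunken (not zeroed) coefficient.
attender = mixed_logit_prob([1, 0], [0, 1], coef_means=[1.0, 0.5], coef_sds=[0.5, 0.5])
non_attender = mixed_logit_prob([1, 0], [0, 1], coef_means=[0.2, 0.5], coef_sds=[0.1, 0.5])
```

Shrinking rather than zeroing the coefficient is what distinguishes this attenuation approach from simply dropping the ignored attribute.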
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. 
It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
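The EnKF analysis step mentioned above is essentially an ensemble Kalman update; a minimal scalar sketch with perturbed observations (none of the paper's balance-constrained machinery) looks like:

```python
import random

random.seed(1)

def enkf_update_scalar(ensemble, obs, obs_var):
    """Stochastic EnKF analysis step for a scalar state observed directly (H = 1)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    pf = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast error variance
    k = pf / (pf + obs_var)                                # Kalman gain
    # Perturbing the observation for each member keeps the analysis spread
    # consistent with the Kalman analysis error covariance.
    return [x + k * (obs + random.gauss(0, obs_var ** 0.5) - x) for x in ensemble]

forecast = [random.gauss(0.0, 1.0) for _ in range(200)]
analysis = enkf_update_scalar(forecast, obs=0.5, obs_var=0.25)
```

Because the gain is built from the ensemble's own covariance, a balanced forecast ensemble yields a (near-)balanced analysis, which is the property the abstract highlights.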
Abstract:
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data.
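A two-regime SETAR data-generating process of the kind used in such simulations can be sketched as follows; fitting a single AR(1) coefficient to it shows how a linear model averages over the two regimes (the coefficients are arbitrary illustrative choices, not the paper's designs):

```python
import random

random.seed(2)

def simulate_setar(n, phi_low=0.9, phi_high=-0.3, threshold=0.0, sigma=1.0):
    """Two-regime SETAR(1): the AR coefficient switches on the previous value."""
    y = [0.0]
    for _ in range(n - 1):
        phi = phi_low if y[-1] <= threshold else phi_high
        y.append(phi * y[-1] + random.gauss(0, sigma))
    return y

series = simulate_setar(5000)

# Least-squares AR(1) coefficient for the whole series: a single linear model
# can only blend the two regime dynamics.
num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
den = sum(v * v for v in series[:-1])
phi_linear = num / den
```

The fitted coefficient lands strictly between the two regime values, which is why point forecasts alone may not reveal the misspecification.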
Abstract:
The orographic gravity wave drag produced in flow over an axisymmetric mountain when both vertical wind shear and non-hydrostatic effects are important was calculated using a semi-analytical two-layer linear model, including unidirectional or directional constant wind shear in a layer near the surface, above which the wind is constant. The drag behaviour is determined by partial wave reflection at the shear discontinuity, wave absorption at critical levels (both of which exist in hydrostatic flow), and total wave reflection at levels where the waves become evanescent (an intrinsically non-hydrostatic effect), which produces resonant trapped lee wave modes. As a result of constructive or destructive wave interference, the drag oscillates with the thickness of the constant-shear layer and the Richardson number within it (Ri), generally decreasing at low Ri and when the flow is strongly non-hydrostatic. Critical level absorption, which increases with the angle spanned by the wind velocity in the constant-shear layer, shields the surface from reflected waves, keeping the drag closer to its hydrostatic limit. While, for the parameter range considered here, the drag seldom exceeds this limit, a substantial drag fraction may be produced by trapped lee waves, particularly when the flow is strongly non-hydrostatic, the lower layer is thick and Ri is relatively high. In directionally sheared flows with Ri = O(1), the drag may be misaligned with the surface wind in a direction opposite to the shear, a behaviour which is totally due to non-trapped waves. The trapped lee wave drag, whose reaction force on the atmosphere is felt at low levels, may therefore have a distinctly different direction from the drag associated with vertically propagating waves, which acts on the atmosphere at higher levels.
Abstract:
Global controls on month-by-month fractional burnt area (2000–2005) were investigated by fitting a generalised linear model (GLM) to Global Fire Emissions Database (GFED) data, with 11 predictor variables representing vegetation, climate, land use and potential ignition sources. Burnt area is shown to increase with annual net primary production (NPP), number of dry days, maximum temperature, grazing-land area, grass/shrub cover and diurnal temperature range, and to decrease with soil moisture, cropland area and population density. Lightning showed an apparent (weak) negative influence, but this disappeared when pure seasonal-cycle effects were taken into account. The model predicts observed geographic and seasonal patterns, as well as the emergent relationships seen when burnt area is plotted against each variable separately. Unimodal relationships with mean annual temperature and precipitation, population density and gross domestic product (GDP) are reproduced too, and are thus shown to be secondary consequences of correlations between different controls (e.g. high NPP with high precipitation; low NPP with low population density and GDP). These findings have major implications for the design of global fire models, as several assumptions in current models – most notably, the widely assumed dependence of fire frequency on ignition rates – are evidently incorrect.
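The paper's point that unimodal emergent relationships can be secondary consequences of correlated controls is easy to reproduce in a toy simulation: below, burnt area needs both fuel (NPP) and dryness, NPP tracks precipitation, and burnt area plotted against precipitation alone comes out unimodal. The functional forms are invented for illustration:

```python
import random

random.seed(3)

samples = []
for _ in range(20000):
    precip = random.uniform(0.0, 1.0)                # normalised annual precipitation
    npp = max(0.0, precip + random.gauss(0, 0.05))   # fuel load tracks precipitation
    dryness = 1.0 - precip                           # dry conditions fall with rain
    burnt = npp * dryness                            # fire needs fuel AND dry weather
    samples.append((precip, burnt))

def mean_burnt(lo, hi):
    """Mean burnt fraction within a precipitation bin."""
    vals = [b for p, b in samples if lo <= p < hi]
    return sum(vals) / len(vals)

low, mid, high = mean_burnt(0.0, 0.2), mean_burnt(0.4, 0.6), mean_burnt(0.8, 1.0)
```

Each control is monotonic on its own; the hump against precipitation appears only because the two controls are correlated through it.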
Abstract:
This research explores whether patterns of typographic differentiation influence readers’ impressions of documents. It develops a systematic approach to typographic investigation that considers relationships between different kinds of typographic attributes, rather than testing the influence of isolated variables. An exploratory study using multiple sort tasks and semantic differential scales identifies that readers form a variety of impressions in relation to how typographic elements are differentiated in document design. Building on the findings of the exploratory study and analysis of a sample of magazines, the research describes three patterns of typographic differentiation: high, moderate, and low. Each pattern comprises clusters of typographic attributes and organisational principles that are articulated in relation to a specified level of typographic differentiation (amplified, medium, or subtle). The patterns are applied to two sets of controlled test material. Using this purposely-designed material, the influence of patterns of typographic differentiation on readers’ impressions of documents is explored in a repertory grid analysis and a paired comparison procedure. The results of these studies indicate that patterns of typographic differentiation consistently shape readers’ impressions of documents, influencing judgments of credibility, document address, and intended readership; and suggesting particular kinds of engagement and genre associations. For example, high differentiation documents are likely to be considered casual, sensationalist, and young; moderate differentiation documents are most likely to be seen as formal and serious; and low differentiation examples are considered calm. Typographic meaning is shown to be created through complex, yet systematic, interrelationships rather than reduced to a linear model of increasing or decreasing variation. 
The research provides a way of describing typographic articulation that has application across a variety of disciplines and design practice. In particular, it illuminates the ways in which typographic presentation is meaningful to readers, providing knowledge that document producers can use to communicate more effectively.
Abstract:
Soil organic matter (SOM) is one of the main global carbon pools. It is a measure of soil quality, as its presence increases carbon sequestration and improves physical and chemical soil properties. The determination and characterisation of humic substances gives essential information about the maturity and stresses of soils, as well as their health. However, determining the exact nature and molecular structure of these substances has proven difficult. Several complex techniques exist to characterise SOM and its mineralisation and humification processes. One of the more widely accepted for its accuracy is nuclear magnetic resonance (NMR) spectroscopy. Despite its efficacy, NMR requires significant economic resources, equipment, material and time. Proxy measures like the fluorescence index (FI), cold- and hot-water extractable carbon (CWC and HWC) and SUVA-254 have the potential to characterise SOM and, in combination, provide qualitative and quantitative data on SOM and its processes. Spanish and British agricultural cambisols were used to measure SOM quality and determine whether similarities were found between optical techniques and 1H NMR results in these two regions with contrasting climatic conditions. High correlations (p < 0.001) were found between the specific aromatic fraction measured with 1H NMR and SUVA-254 (Rs = 0.95) and HWC (Rs = 0.90), which could be described using a linear model. A high correlation between FI and the aromatic fraction measured with 1H NMR (Rs = −0.976) was also observed. In view of our results, optical measures have the potential, in combination, to predict the aromatic fraction of SOM without the need for expensive and time-consuming techniques.
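The reported Rs values are Spearman rank correlations, which can be computed without any external library; the sketch below uses hypothetical SUVA-254 and aromatic-fraction values, not the paper's measurements:

```python
def rankdata(values):
    """Ranks starting at 1; ties receive the average of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie run (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's Rs: the Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired measurements (monotonically related, so Rs is 1.0)
suva = [1.2, 2.5, 3.1, 4.0, 5.2, 6.1]
aromatic = [0.8, 1.9, 2.6, 3.5, 4.9, 5.6]
rs = spearman(suva, aromatic)
```

Because Spearman's Rs depends only on ranks, it captures any monotonic relationship between a proxy and the NMR-measured fraction, not just a linear one.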
Abstract:
Aims Potatoes are a globally important source of food whose production requires large inputs of fertiliser and water. Recent research has highlighted the importance of the root system in acquiring resources. Here, measurements previously generated by field phenotyping were used to test the effect of root size on the maintenance of yield under drought (drought tolerance). Methods Twelve potato genotypes, including genotypes with extremes of root size, were grown to maturity in the field under a rain shelter and either irrigated or subjected to drought. Soil moisture, canopy growth, carbon isotope discrimination and final yields were measured. Destructively harvested field phenotype data were used as explanatory variables in a general linear model (GLM) to investigate yield under conditions of drought or irrigation. Results Drought severely affected the small-rooted genotype Pentland Dell but not the large-rooted genotype Cara. More plantlets, and longer and more numerous stolons and stolon roots, were associated with drought tolerance. Previously measured carbon isotope discrimination did not correlate with the effect of drought. Conclusions These data suggest that in-field phenotyping can be used to identify useful characteristics when known genotypes are subjected to an environmental stress. Stolon root traits were associated with drought tolerance in potato and could be used to select genotypes with resilience to drought.
Abstract:
Using data on 5509 foreign subsidiaries established in 50 regions of 8 EU countries over the period 1991–1999, we estimate a mixed logit model of the location choice of multinational firms in Europe. In particular, we focus on the role of EU Cohesion Policy in attracting foreign investors from both within and outside Europe. We find that, after controlling for the role of agglomeration economies as well as a number of other regional and country characteristics and allowing for a very flexible correlation pattern among choices, Structural and Cohesion funds allocated by the EU to laggard regions have indeed contributed to attracting multinationals. These policies as well as other determinants play a different role in the case of European investors as opposed to non-European ones.
Abstract:
Filter degeneracy is the main obstacle to implementing particle filters in non-linear, high-dimensional models. A new scheme, the implicit equal-weights particle filter (IEWPF), is introduced. In this scheme samples are drawn implicitly from proposal densities with a different covariance for each particle, such that all particle weights are equal by construction. We test and explore the properties of the new scheme using a 1,000-dimensional simple linear model and the 1,000-dimensional non-linear Lorenz96 model, and compare the performance of the scheme to a local ensemble Kalman filter. The experiments show that the new scheme can easily be implemented in high-dimensional systems and is never degenerate, with good convergence properties in both systems.
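The filter degeneracy that motivates the IEWPF is easy to demonstrate: for a standard bootstrap particle filter, the effective sample size collapses as the state dimension grows. A minimal sketch, with Gaussian prior and likelihood and dimensions chosen purely for illustration:

```python
import math
import random

random.seed(4)

def effective_sample_size(log_weights):
    """ESS = 1 / sum(w_i^2) for normalised weights, computed stably in log space."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    s = sum(w)
    w = [wi / s for wi in w]
    return 1.0 / sum(wi * wi for wi in w)

def ess_for_dimension(d, n_particles=500):
    # One bootstrap-filter weighting step: particles drawn from a N(0, I_d)
    # prior, each coordinate observed as 0.0 with unit observation noise.
    log_weights = []
    for _ in range(n_particles):
        x = [random.gauss(0, 1) for _ in range(d)]
        log_weights.append(-0.5 * sum(xi * xi for xi in x))
    return effective_sample_size(log_weights)

ess_low_dim, ess_high_dim = ess_for_dimension(1), ess_for_dimension(100)
```

In low dimension most particles keep appreciable weight; by dimension 100 one particle dominates, which is exactly the degeneracy the equal-weights construction is designed to avoid.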
Abstract:
The aim of this work was to study the effect of the hydrolysis degree (HD) and the concentration (C(PVA)) of two types of poly(vinyl alcohol) (PVA), and of the type (glycerol and sorbitol) and the concentration (C(P)) of plasticizers, on some physical properties of biodegradable films based on blends of gelatin and PVA, using a response-surface methodology. The films were prepared from film-forming solutions (FFS) with 2 g of macromolecules (gelatin + PVA) per 100 g of FFS. The responses analyzed were the mechanical properties, the solubility, the moisture content, the color difference and the opacity. The linear model was statistically significant and predictive for puncture force and deformation, elongation at break, solubility in water, moisture content and opacity. The C(PVA) strongly affected the elongation at break of the films, and the interaction of the HD and the C(P) also affected this property. Moreover, the puncture force was slightly affected by the C(PVA). Concerning the solubility in water, reducing the HD increased it, and this effect was greater for high C(PVA) values. In general, the most important effect observed on the physical properties of the films was that of the plasticizer type and concentration. The PVA hydrolysis degree and concentration had an important effect only on the elongation at break, puncture deformation and solubility in water. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
A one-dimensional atmospheric second-order closure model, coupled to an oceanic mixed-layer model, is used to investigate the short-term variation of the atmospheric and oceanic boundary layers in the coastal upwelling area of Cabo Frio, Brazil (23°S, 42°08'W). The numerical simulations were carried out to evaluate the impact caused by the thermal contrast between atmosphere and ocean on the vertical extent and other properties of both atmospheric and oceanic boundary layers. The simulations were designed taking as reference the observations carried out during the passage of a cold front that disrupted the upwelling regime in Cabo Frio in July of 1992. The simulations indicated that in 10 hours the mechanical mixing, sustained by a constant background flow of 10 m s^-1, deepens the atmospheric boundary layer by 214 m when the atmosphere is initially 2 K warmer than the ocean (the positive thermal contrast observed during the upwelling regime). For an atmosphere initially 2 K colder than the ocean (the negative thermal contrast observed during the passage of the cold front), the incipient thermal convection intensifies the mechanical mixing, increasing the vertical extent of the atmospheric boundary layer by 360 m. The vertical evolution of the atmospheric boundary layer is consistent with the observations carried out in Cabo Frio during the upwelling condition. When the upwelling is disrupted, the discrepancy between the simulated and observed atmospheric boundary-layer heights in Cabo Frio during July of 1992 increases considerably. Over the 10-hour period, the simulated oceanic mixed layer deepens by 2 m and 5.4 m for thermal contrasts of +2 K and -2 K, respectively. In the latter case, the larger vertical extent of the oceanic mixed layer is due to the presence of thermal convection in the atmospheric boundary layer, which in turn is associated with the absence of upwelling caused by the passage of cold fronts in Cabo Frio.