45 results for Linear model
Abstract:
An important feature of agribusiness promotion programs is their lagged impact on consumption. Efficient investment in advertising requires reliable estimates of these lagged responses, and it is desirable from both applied and theoretical standpoints to have a flexible method for estimating them. This note derives an alternative Bayesian methodology for estimating lagged responses when investments occur intermittently within a time series. The method exploits a latent-variable extension of the natural-conjugate normal linear model, Gibbs sampling, and data augmentation. It is applied to a monthly time series on Turkish pasta consumption (1993:5-1998:3) and three nonconsecutive promotion campaigns (1996:3, 1997:3, 1997:10). The results suggest that responses were greatest to the second campaign, which allocated its entire budget to television media; that its impact peaked in the sixth month following expenditure; and that the rate of return was around 20 metric tons of additional consumption per thousand dollars expended.
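The core machinery described — Gibbs sampling for a normal linear model with a conjugate prior — can be sketched in a few lines. The following is a minimal illustration on fabricated monthly data, using a semi-conjugate prior for simplicity (the paper uses the natural-conjugate form with latent-variable augmentation); the campaign dates, lag length, and coefficients are invented, and this is not the authors' implementation.

```python
# Minimal sketch: Gibbs sampler for a distributed-lag normal linear model.
# All data below are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical monthly data: consumption y_t and promotion spend x_t ---
T, L = 60, 6                               # months; maximum lag length
x = np.zeros(T)
x[[20, 32, 39]] = 100.0                    # three intermittent campaigns
X = np.zeros((T, L + 1))
for k in range(L + 1):                     # lagged-spend design matrix
    X[k:, k] = x[:T - k]
true_lags = np.array([0.5, 1.0, 1.5, 2.0, 1.2, 0.6, 0.3])
y = 50.0 + X @ true_lags + rng.normal(0.0, 1.0, T)

# --- Semi-conjugate prior and Gibbs sampler ---
D = np.column_stack([np.ones(T), X])       # intercept + lag terms
p = D.shape[1]
m0, V0inv = np.zeros(p), 1e-2 * np.eye(p)  # beta ~ N(m0, V0)
a0, b0 = 2.0, 2.0                          # sigma^2 ~ InvGamma(a0, b0)

beta, sigma2, draws = np.zeros(p), 1.0, []
for it in range(3000):
    # beta | sigma2, y : multivariate normal full conditional
    Vn = np.linalg.inv(V0inv + D.T @ D / sigma2)
    mn = Vn @ (V0inv @ m0 + D.T @ y / sigma2)
    beta = rng.multivariate_normal(mn, Vn)
    # sigma2 | beta, y : inverse-gamma full conditional
    resid = y - D @ beta
    sigma2 = 1.0 / rng.gamma(a0 + T / 2.0, 1.0 / (b0 + resid @ resid / 2.0))
    if it >= 1000:                         # discard burn-in
        draws.append(beta[1:])             # lag-response coefficients
print("posterior mean lag response:", np.mean(draws, axis=0).round(2))
```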
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
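The abstract's point that the EnKF analysis step is an ensemble-mean operation can be made concrete with the standard stochastic (perturbed-observation) analysis update. The sketch below is the generic textbook form with invented dimensions, not the paper's balanced-dynamics model.

```python
# Minimal sketch of a stochastic (perturbed-observation) EnKF analysis step.
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """ensemble: (n_state, n_ens); H: (n_obs, n_state); R: (n_obs, n_obs)."""
    n_ens = ensemble.shape[1]
    xbar = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - xbar                              # forecast anomalies
    Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturb the observations so the analysis spread stays consistent
    Y = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T
    return ensemble + K @ (Y - H @ ensemble)

rng = np.random.default_rng(1)
ens = rng.normal(size=(4, 50))                       # 4 state vars, 50 members
H = np.array([[1.0, 0.0, 0.0, 0.0]])                 # observe first component
R = np.array([[0.1]])
ens_a = enkf_analysis(ens, np.array([0.5]), H, R, rng)
print("analysis mean:", ens_a.mean(axis=1))
```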
Abstract:
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical of macroeconomic data.
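A minimal sketch of the data-generating setup: simulate a two-regime SETAR process and fit a (misspecified) linear AR(1) by ordinary least squares. The regime coefficients and sample size below are illustrative assumptions, not those used in the paper.

```python
# Minimal sketch: two-regime SETAR data-generating process + linear AR(1) fit.
import numpy as np

rng = np.random.default_rng(2)

def simulate_setar(T, phi_low=0.9, phi_high=0.3, threshold=0.0, sigma=1.0):
    """Two-regime SETAR(1): the AR coefficient depends on y[t-1]."""
    y = np.zeros(T)
    for t in range(1, T):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + sigma * rng.normal()
    return y

y = simulate_setar(150)                     # macro-style sample size
# OLS fit of the misspecified linear AR(1): y_t = c + phi * y_{t-1} + e_t
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"linear AR(1) fit: c = {c:.3f}, phi = {phi:.3f}")
```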
Abstract:
The orographic gravity wave drag produced in flow over an axisymmetric mountain when both vertical wind shear and non-hydrostatic effects are important was calculated using a semi-analytical two-layer linear model, including unidirectional or directional constant wind shear in a layer near the surface, above which the wind is constant. The drag behaviour is determined by partial wave reflection at the shear discontinuity, wave absorption at critical levels (both of which exist in hydrostatic flow), and total wave reflection at levels where the waves become evanescent (an intrinsically non-hydrostatic effect), which produces resonant trapped lee wave modes. As a result of constructive or destructive wave interference, the drag oscillates with the thickness of the constant-shear layer and the Richardson number within it (Ri), generally decreasing at low Ri and when the flow is strongly non-hydrostatic. Critical level absorption, which increases with the angle spanned by the wind velocity in the constant-shear layer, shields the surface from reflected waves, keeping the drag closer to its hydrostatic limit. While, for the parameter range considered here, the drag seldom exceeds this limit, a substantial drag fraction may be produced by trapped lee waves, particularly when the flow is strongly non-hydrostatic, the lower layer is thick and Ri is relatively high. In directionally sheared flows with Ri = O(1), the drag may be misaligned with the surface wind in a direction opposite to the shear, a behaviour which is totally due to non-trapped waves. The trapped lee wave drag, whose reaction force on the atmosphere is felt at low levels, may therefore have a distinctly different direction from the drag associated with vertically propagating waves, which acts on the atmosphere at higher levels.
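For reference, Ri here is the gradient Richardson number of the constant-shear layer. A textbook statement of the definition (not quoted from the paper), with N the buoyancy frequency and ∂U/∂z the constant shear:

```latex
% Gradient Richardson number of a layer with buoyancy frequency N and
% constant vertical wind shear \partial U/\partial z (textbook definition):
\[
  \mathrm{Ri} \;=\; \frac{N^{2}}{\left(\dfrac{\partial U}{\partial z}\right)^{2}}
\]
```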
Abstract:
Global controls on month-by-month fractional burnt area (2000–2005) were investigated by fitting a generalised linear model (GLM) to Global Fire Emissions Database (GFED) data, with 11 predictor variables representing vegetation, climate, land use and potential ignition sources. Burnt area is shown to increase with annual net primary production (NPP), number of dry days, maximum temperature, grazing-land area, grass/shrub cover and diurnal temperature range, and to decrease with soil moisture, cropland area and population density. Lightning showed an apparent (weak) negative influence, but this disappeared when pure seasonal-cycle effects were taken into account. The model predicts observed geographic and seasonal patterns, as well as the emergent relationships seen when burnt area is plotted against each variable separately. Unimodal relationships with mean annual temperature and precipitation, population density and gross domestic product (GDP) are reproduced too, and are thus shown to be secondary consequences of correlations between different controls (e.g. high NPP with high precipitation; low NPP with low population density and GDP). These findings have major implications for the design of global fire models, as several assumptions in current models – most notably, the widely assumed dependence of fire frequency on ignition rates – are evidently incorrect.
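A minimal sketch of this kind of analysis — a GLM of fractional burnt area on climate, vegetation, and human predictors. The column names, simulated values, and binomial-logit specification below are assumptions for illustration; the paper's actual link function and full 11-predictor set may differ.

```python
# Minimal sketch: GLM of fractional burnt area on a few predictors.
# Fabricated data; illustrative predictor set only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "burnt_frac": rng.beta(1, 50, n),        # fractional burnt area in (0, 1)
    "npp": rng.gamma(5, 100, n),             # annual net primary production
    "dry_days": rng.integers(0, 31, n),      # dry days per month
    "tmax": rng.normal(25, 8, n),            # maximum temperature
    "pop_density": rng.lognormal(2, 1, n),   # people per km^2
})
model = smf.glm(
    "burnt_frac ~ npp + dry_days + tmax + np.log(pop_density)",
    data=df,
    family=sm.families.Binomial(),           # logit link for a fraction
).fit()
print(model.summary())
```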
Abstract:
This research explores whether patterns of typographic differentiation influence readers’ impressions of documents. It develops a systematic approach to typographic investigation that considers relationships between different kinds of typographic attributes, rather than testing the influence of isolated variables. An exploratory study using multiple sort tasks and semantic differential scales identifies that readers form a variety of impressions in relation to how typographic elements are differentiated in document design. Building on the findings of the exploratory study and analysis of a sample of magazines, the research describes three patterns of typographic differentiation: high, moderate, and low. Each pattern comprises clusters of typographic attributes and organisational principles that are articulated in relation to a specified level of typographic differentiation (amplified, medium, or subtle). The patterns are applied to two sets of controlled test material. Using this purposely designed material, the influence of patterns of typographic differentiation on readers’ impressions of documents is explored in a repertory grid analysis and a paired comparison procedure. The results of these studies indicate that patterns of typographic differentiation consistently shape readers’ impressions of documents, influencing judgments of credibility, document address, and intended readership, and suggesting particular kinds of engagement and genre associations. For example, high differentiation documents are likely to be considered casual, sensationalist, and young; moderate differentiation documents are most likely to be seen as formal and serious; and low differentiation examples are considered calm. Typographic meaning is shown to be created through complex, yet systematic, interrelationships rather than reduced to a linear model of increasing or decreasing variation. The research provides a way of describing typographic articulation that has application across a variety of disciplines and design practice. In particular, it illuminates the ways in which typographic presentation is meaningful to readers, providing knowledge that document producers can use to communicate more effectively.
Abstract:
Soil organic matter (SOM) is one of the main global carbon pools. It is a measure of soil quality, as its presence increases carbon sequestration and improves physical and chemical soil properties. The determination and characterisation of humic substances gives essential information on the maturity, stresses, and health of soils. However, determining the exact nature and molecular structure of these substances has proven difficult. Several complex techniques exist to characterise SOM and mineralisation and humification processes. One of the more widely accepted for its accuracy is nuclear magnetic resonance (NMR) spectroscopy. Despite its efficacy, NMR requires significant economic resources, equipment, material and time. Proxy measures such as the fluorescence index (FI), cold- and hot-water extractable carbon (CWC and HWC) and SUVA-254 have the potential to characterise SOM and, in combination, provide qualitative and quantitative data on SOM and its processes. Spanish and British agricultural cambisols were used to measure SOM quality and to determine whether similarities were found between optical techniques and 1H NMR results in these two regions with contrasting climatic conditions. High correlations (p < 0.001) were found between the specific aromatic fraction measured with 1H NMR and SUVA-254 (Rs = 0.95) and HWC (Rs = 0.90), which could be described using a linear model. A high correlation between FI and the aromatic fraction measured with 1H NMR (Rs = −0.976) was also observed. In view of our results, optical measures have the potential, in combination, to predict the aromatic fraction of SOM without the need for expensive and time-consuming techniques.
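A minimal sketch of the correlation/regression analysis described: Spearman rank correlations (Rs) between optical proxies and the 1H NMR aromatic fraction, plus a simple linear fit. All data values below are fabricated for illustration.

```python
# Minimal sketch: Spearman correlations and linear fits between optical
# proxies and an NMR-derived aromatic fraction. Fabricated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
aromatic_nmr = rng.uniform(5, 30, 20)                    # % aromatic C (1H NMR)
suva254 = 0.1 * aromatic_nmr + rng.normal(0, 0.2, 20)    # optical proxy
hwc = 20 * aromatic_nmr + rng.normal(0, 30, 20)          # hot-water carbon

for name, proxy in [("SUVA-254", suva254), ("HWC", hwc)]:
    rs, p = stats.spearmanr(proxy, aromatic_nmr)
    slope, intercept, r, p_lin, se = stats.linregress(proxy, aromatic_nmr)
    print(f"{name}: Rs = {rs:.2f} (p = {p:.1e}); "
          f"aromatics = {slope:.2f} * proxy + {intercept:.2f}")
```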
Abstract:
Aims: Potatoes are a globally important source of food whose production requires large inputs of fertiliser and water. Recent research has highlighted the importance of the root system in acquiring resources. Here, measurements previously generated by field phenotyping were used to test the effect of root size on the maintenance of yield under drought (drought tolerance).
Methods: Twelve potato genotypes, including genotypes with extremes of root size, were grown to maturity in the field under a rain shelter and either irrigated or subjected to drought. Soil moisture, canopy growth, carbon isotope discrimination and final yields were measured. Destructively harvested field phenotype data were used as explanatory variables in a general linear model (GLM) to investigate yield under conditions of drought or irrigation.
Results: Drought severely affected the small-rooted genotype Pentland Dell but not the large-rooted genotype Cara. More plantlets, and longer and more numerous stolons and stolon roots, were associated with drought tolerance. Previously measured carbon isotope discrimination did not correlate with the effect of drought.
Conclusions: These data suggest that in-field phenotyping can be used to identify useful characteristics when known genotypes are subjected to an environmental stress. Stolon root traits were associated with drought tolerance in potato and could be used to select genotypes with resilience to drought.
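A minimal sketch of the kind of general linear model described, with a treatment-by-trait interaction to ask whether a root trait buffers yield against drought. Variable names and values are hypothetical; this is not the authors' analysis script.

```python
# Minimal sketch: GLM of yield on a root trait, water treatment, and
# their interaction. Fabricated data and hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 48                                       # e.g. 12 genotypes x replicates
df = pd.DataFrame({
    "yield_t_ha": rng.normal(40, 8, n),
    "stolon_root_len": rng.gamma(4, 5, n),
    "treatment": rng.choice(["irrigated", "drought"], n),
})
# The interaction term asks whether the trait's effect on yield differs
# between drought and irrigation, i.e. whether it confers tolerance.
fit = smf.ols("yield_t_ha ~ stolon_root_len * treatment", data=df).fit()
print(fit.summary())
```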
Abstract:
Filter degeneracy is the main obstacle to implementing particle filters in high-dimensional non-linear models. A new scheme, the implicit equal-weights particle filter (IEWPF), is introduced. In this scheme, samples are drawn implicitly from proposal densities with a different covariance for each particle, such that all particle weights are equal by construction. We test and explore the properties of the new scheme using a 1,000-dimensional simple linear model and the 1,000-dimensional non-linear Lorenz96 model, and compare the performance of the scheme to a Local Ensemble Kalman Filter. The experiments show that the new scheme can easily be implemented in high-dimensional systems and is never degenerate, with good convergence properties in both systems.
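The degeneracy problem the IEWPF is designed to avoid is easy to demonstrate: for a standard importance-weighted particle filter, the effective sample size of the weights collapses as the state dimension grows. A minimal illustration follows (this demonstrates the problem, not the IEWPF algorithm itself).

```python
# Minimal sketch: weight degeneracy of a standard particle filter as the
# state dimension grows. Effective sample size (ESS) collapses toward 1.
import numpy as np

rng = np.random.default_rng(6)
n_particles = 100
for dim in (1, 10, 100, 1000):
    particles = rng.normal(size=(n_particles, dim))   # prior draws
    obs = np.zeros(dim)                               # observe every component
    # Log-likelihood under unit observation-error variance
    logw = -0.5 * np.sum((particles - obs) ** 2, axis=1)
    w = np.exp(logw - logw.max())                     # stabilised weights
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)                        # effective sample size
    print(f"dim = {dim:5d}: ESS = {ess:6.1f} of {n_particles}")
```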
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts used to achieve good model generalisation in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
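As one concrete instance of the model-selection criteria reviewed, the sketch below chooses the number of radial-basis-function centres in a linear-in-the-parameter model by k-fold cross-validation. The basis width, centre placement, and data are illustrative assumptions.

```python
# Minimal sketch: k-fold cross-validation to select the size of a
# linear-in-the-parameter RBF model. Fabricated data.
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(-3, 3, 120))
y = np.sinc(x) + rng.normal(0, 0.1, 120)       # "unknown" non-linear system

def design(x, centres, width=0.5):
    """Gaussian RBF design matrix: non-linear in x, linear in the parameters."""
    return np.exp(-((x[:, None] - centres[None, :]) / width) ** 2)

def cv_error(n_centres, k=5):
    centres = np.linspace(-3, 3, n_centres)
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        theta = np.linalg.lstsq(design(x[train], centres),
                                y[train], rcond=None)[0]
        pred = design(x[fold], centres) @ theta
        errs.append(np.mean((pred - y[fold]) ** 2))
    return np.mean(errs)

for m in (2, 5, 10, 20, 40):
    print(f"{m:3d} centres: CV mse = {cv_error(m):.4f}")
```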
Abstract:
Linear models of bidirectional reflectance distribution are useful tools for understanding the angular variability of surface reflectance as observed by medium-resolution sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS). These models are operationally used to normalize data to common view and illumination geometries and to calculate integral quantities such as albedo. Currently, to compensate for noise in observed reflectance, these models are inverted against data collected during some temporal window for which the model parameters are assumed to be constant. Despite this, the retrieved parameters are often noisy for regions where sufficient observations are not available. This paper demonstrates the use of Lagrangian multipliers to allow arbitrarily large windows and, at the same time, produce individual parameter sets for each day, even for regions where only sparse observations are available.
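One common way to realise such a constraint is regularised least squares, in which the weight on a day-to-day smoothness penalty plays the role of the Lagrangian multiplier. The sketch below inverts a linear kernel-driven model for per-day parameter sets over a window containing an observation gap; the kernel values, noise levels, and penalty form are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal sketch: temporally constrained inversion of a linear kernel-driven
# BRDF-style model. Per-day parameters are tied together by a smoothness
# penalty weighted by lam (the multiplier). Fabricated data.
import numpy as np

rng = np.random.default_rng(8)
n_days, n_params = 16, 3             # e.g. isotropic + volumetric + geometric
obs, y = [], []                      # (day, kernel row) pairs and reflectances
for d in range(n_days):
    n_obs = 0 if 5 <= d <= 10 else rng.integers(1, 4)   # gap on days 5-10
    for _ in range(n_obs):
        k = np.r_[1.0, rng.normal(0, 0.3, n_params - 1)]
        obs.append((d, k))
        y.append(k @ np.array([0.3, 0.05, 0.02]) + rng.normal(0, 0.005))

A = np.zeros((len(obs), n_days * n_params))      # block design: one set per day
for i, (d, k) in enumerate(obs):
    A[i, d * n_params:(d + 1) * n_params] = k
# D penalizes day-to-day differences in each parameter.
D = np.zeros(((n_days - 1) * n_params, n_days * n_params))
for d in range(n_days - 1):
    for j in range(n_params):
        D[d * n_params + j, d * n_params + j] = -1.0
        D[d * n_params + j, (d + 1) * n_params + j] = 1.0

lam = 1.0                            # weight on the smoothness constraint
b = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ np.array(y))
print(b.reshape(n_days, n_params)[:, 0])   # daily isotropic term, gap filled
```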