970 results for non-linear oscillations
Abstract:
Optimal estimation (OE) is applied as a technique for retrieving sea surface temperature (SST) from thermal imagery obtained by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) on Meteosat 9. OE requires simulation of observations as part of the retrieval process, and this is done here using numerical weather prediction fields and a fast radiative transfer model. Bias correction of the simulated brightness temperatures (BTs) is found to be a necessary step before retrieval, and is achieved by filtered averaging of simulations minus observations over a time period of 20 days and spatial scale of 2.5° in latitude and longitude. Throughout this study, BT observations are clear-sky averages over cells of size 0.5° in latitude and longitude. Results for the OE SST are compared to results using a traditional non-linear retrieval algorithm (“NLSST”), both validated against a set of 30108 night-time matches with drifting buoy observations. For the OE SST the mean difference with respect to drifter SSTs is − 0.01 K and the standard deviation is 0.47 K, compared to − 0.38 K and 0.70 K respectively for the NLSST algorithm. Perhaps more importantly, systematic biases in NLSST with respect to geographical location, atmospheric water vapour and satellite zenith angle are greatly reduced for the OE SST. However, the OE SST is calculated to have a lower sensitivity of retrieved SST to true SST variations than the NLSST. This feature would be a disadvantage for observing SST fronts and diurnal variability, and raises questions as to how best to exploit OE techniques at SEVIRI's full spatial resolution.
Abstract:
Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the “split-window”, in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations of the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the global telecommunications system. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The “best possible” NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of kelvin in the NLSST. With no bias corrections to either prior fields or forward model, the SSTs retrieved by OE minus drifter SSTs have mean and SD of − 0.16 and 0.49 K respectively. The reduction in SD below the “best possible” regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with mean and SD of − 0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated. 
We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and retrieval error of order 0.25 K.
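The optimal estimation retrieval described in the two abstracts above can be sketched as a single linear (Gauss-Newton) update of a prior state using simulated brightness temperatures. This is a minimal illustration, not the papers' operational implementation: the state vector, Jacobian, and error covariances below are assumed toy values, not ARPEGE/RTTOV outputs.

```python
import numpy as np

def oe_retrieval(x_b, S_b, y, y_sim, H, S_e):
    """One optimal-estimation update: x_hat = x_b + K (y - F(x_b)),
    with gain K = S_b H^T (H S_b H^T + S_e)^-1."""
    K = S_b @ H.T @ np.linalg.inv(H @ S_b @ H.T + S_e)
    x_hat = x_b + K @ (y - y_sim)
    # Averaging kernel A = K H; its SST diagonal element is the
    # sensitivity of retrieved SST to true SST discussed above.
    A = K @ H
    return x_hat, A

# Toy example: reduced state = [SST (K), TCWV (kg m^-2)], two channels
x_b = np.array([290.0, 30.0])          # prior state (assumed)
S_b = np.diag([1.0**2, 5.0**2])        # prior error covariance
H = np.array([[0.9, -0.05],
              [0.8, -0.10]])           # Jacobian of BTs w.r.t. state (assumed)
S_e = np.diag([0.15**2, 0.15**2])      # observation + forward-model error
y_sim = H @ x_b                        # simulated BTs at the prior
y = y_sim + np.array([0.5, 0.4])       # "observed" BTs, warmer than simulated
x_hat, A = oe_retrieval(x_b, S_b, y, y_sim, H, S_e)
```

The SST diagonal of the averaging kernel comes out below unity, which is exactly the reduced sensitivity of retrieved SST to true SST variations that the first abstract flags as a drawback for observing fronts and diurnal variability.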
Abstract:
Objective: To investigate the sociodemographic determinants of diet quality of the elderly in four EU countries. Design: Cross-sectional study. For each country, a regression was performed of a multidimensional index of dietary quality v. sociodemographic variables. Setting: In Finland, Finnish Household Budget Survey (1998 and 2006); in Sweden, SNAC-K (2001–2004); in the UK, Expenditure & Food Survey (2006–07); in Italy, Multi-purpose Survey of Daily Life (2009). Subjects: One- and two-person households of over-50s (Finland, n 2994; UK, n 4749); over-50s living alone or in two-person households (Italy, n 7564); over-60s (Sweden, n 2023). Results: Diet quality among the EU elderly is both low on average and heterogeneous across individuals. The regression models explained a small but significant part of the observed heterogeneity in diet quality. Resource availability was associated with diet quality either negatively (Finland and UK) or in a non-linear or non-statistically significant manner (Italy and Sweden), as was the preference-for-food parameter. Education, not living alone and female gender were characteristics positively associated with diet quality consistently across the four countries, unlike socio-professional status, age and seasonality. Regional differences within countries persisted even after controlling for the other sociodemographic variables. Conclusions: Poor dietary choices among the EU elderly were not caused by insufficient resources, and informational measures could be successful in promoting healthy eating for healthy ageing. On the other hand, food habits appeared largely set in the latter part of life, with age and retirement having little influence on the healthiness of dietary choices.
Abstract:
In a series of papers, Killworth and Blundell have proposed to study the effects of a background mean flow and topography on Rossby wave propagation by means of a generalized eigenvalue problem formulated in terms of the vertical velocity, obtained from a linearization of the primitive equations of motion. However, it has been known for a number of years that this eigenvalue problem contains an error, which Killworth's unfortunate passing prevented him from correcting himself and whose correction is therefore taken up in this note. Here, the author shows in the context of quasigeostrophic (QG) theory that the error can ultimately be traced to the fact that the eigenvalue problem for the vertical velocity is fundamentally a non-linear one (the eigenvalue appears both in the numerator and denominator), unlike that for the pressure. The reason that this nonlinear term is lacking in the Killworth and Blundell theory comes from neglecting the depth dependence of a depth-dependent term. This nonlinear term is shown on idealized examples to alter significantly the Rossby wave dispersion relation in the high-wavenumber regime but is otherwise irrelevant in the long-wave limit, in which case the eigenvalue problems for the vertical velocity and pressure are both linear. In the general dispersive case, however, one should first solve the generalized eigenvalue problem for the pressure vertical structure and, if needed, diagnose the vertical structure of the vertical velocity from the latter.
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in their methods of calculating wetland size and location: some simulated wetland area prognostically, others relied on remotely sensed inundation datasets, and some took an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area.
In response to increasing global temperatures (+3.4 °C globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 % globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
The role of different sky conditions on diffuse PAR fraction (ϕ), air temperature (Ta), vapor pressure deficit (vpd) and GPP in a deciduous forest is investigated using eddy covariance observations of CO2 fluxes and radiometer and ceilometer observations of sky and PAR conditions on hourly and growing-season timescales. Maximum GPP response occurred under moderate to high PAR and ϕ and low vpd. Light response models using a rectangular hyperbola showed a positive linear relation between ϕ and effective quantum efficiency (α = 0.023ϕ + 0.012, r2 = 0.994). Since PAR and ϕ are negatively correlated, there is a tradeoff between the greater use efficiency of diffuse light and lower vpd and the associated decrease in total PAR available for photosynthesis. To a lesser extent, light response was also modified by vpd and Ta. The net effect of these factors and their relation with sky conditions helped enhance light response under sky conditions that produced higher ϕ. Six sky conditions were classified from cloud frequency and ϕ data: optically thick clouds, optically thin clouds, mixed sky (partial clouds within the hour), and high, medium and low optical aerosol. The frequency and light responses of each sky condition for the growing season were used to predict the role of changing sky conditions on annual GPP. The net effect of increasing the frequency of thick clouds is to decrease GPP, while changing low-aerosol conditions has a negligible effect. Increases in the other sky conditions all lead to gains in GPP. Sky conditions that enhance intermediate levels of ϕ, such as thin or scattered clouds or higher aerosol concentrations from volcanic eruptions or anthropogenic emissions, will have a positive effect on annual GPP, while an increase in cloud cover will have a negative impact.
Due to the ϕ/PAR tradeoff and since GPP response to changes in individual sky conditions differ in sign and magnitude, the net response of ecosystem GPP to future sky conditions is non-linear and tends toward moderation of change.
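The rectangular-hyperbola light response and the fitted efficiency relation α = 0.023ϕ + 0.012 quoted above can be sketched as follows. The saturating GPP value `p_max` and the PAR/ϕ pairs are assumed for illustration only; the fitted α relation is the one reported in the abstract.

```python
def quantum_efficiency(phi):
    """Effective quantum efficiency from the diffuse PAR fraction,
    using the linear fit reported above: alpha = 0.023*phi + 0.012."""
    return 0.023 * phi + 0.012

def gpp_rectangular_hyperbola(par, phi, p_max=30.0):
    """Rectangular-hyperbola light response; p_max (an assumed value)
    is the asymptotic GPP at saturating light."""
    alpha = quantum_efficiency(phi)
    return alpha * par * p_max / (alpha * par + p_max)

# Tradeoff sketch: diffuse skies raise alpha but lower total PAR,
# so modelled GPP under a bright clear sky and a dimmer diffuse sky
# can come out nearly equal.
gpp_clear = gpp_rectangular_hyperbola(par=1800.0, phi=0.2)
gpp_cloudy = gpp_rectangular_hyperbola(par=900.0, phi=0.9)
```

Because α rises with ϕ while total PAR falls, the two modelled GPP values differ only slightly, which is the ϕ/PAR tradeoff the abstract describes.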
Abstract:
This thesis is concerned with the development of improved management practices in indigenous chicken production systems, in a research process that includes participatory approaches with smallholder farmers and other stakeholders in Kenya. The research process involved a wide range of activities that included on-station experiments, field surveys, stakeholder consultations in workshops, seminars and visits, and on-farm farmer participatory research to evaluate the effect of some improved management interventions on the production performance of indigenous chickens. The participatory research was greatly informed by the collective experiences and lessons of the previous activities. The on-station studies focused on hatching, growth and nutritional characteristics of the indigenous chickens. Four research publications from these studies are included in this thesis. Quantitative statistical analyses were applied; they involved growth models estimated with non-linear regressions for the growth characteristics, chi-square determinations to investigate differences among different reciprocal crosses of indigenous chickens, and general linear models and covariance determination for the nutrition study. The on-station studies brought greater understanding of the performance and production characteristics of indigenous chickens and the influence of management practices on these characteristics. The field surveys and stakeholder consultations helped in understanding the overarching issues affecting the productivity of indigenous chicken systems and their place in the livelihoods of smallholder farmers. These activities created strong networking opportunities with stakeholders from a wide spectrum. The on-farm farmer participatory research involved the selection of 200 farmers in five regions, followed by training and the introduction of interventions on improved management practices, which included housing, vaccination, deworming and feed supplementation.
Implementation and monitoring were mainly done by individual farmers continuously for close to one and a half years. Six quarterly visits to the farms were made by the research team to monitor and provide support for on-going project activities. The data collected have been analysed for 5 consecutive 3-monthly periods. Descriptive and inferential statistics were applied to analyse the data collected, covering treatment applications, production characteristics and flock demography characteristics. Of the 200 farmers initially selected, 173 had records on treatment applications and flock demography characteristics, while 127 farmers had records on production characteristics. The demographic analysis with a dissimilarity index of flock size produced 7 distinct farm groups from among the 173 farms. Two of these farm groups were represented in similar numbers in each of the five regions. The research process also involved a number of dissemination and communication strategies that have brought the process and project outcomes into the domain of accessibility by a wider readership locally and globally. These include workshops, seminars, field visits and consultations, local and international conferences, electronic conferencing, publications and personal communication via email and conventional post. A number of research and development proposals were also developed based on the knowledge and experiences gained from the research process. The thesis captures the research process activities and outcomes in 8 chapters, which include in ascending order – introduction, theoretical concepts underpinning FPR, research methodology and process, on-station research output, FPR descriptive statistical analysis, FPR inferential statistical analysis on production characteristics, FPR demographic analysis and conclusions.
Various research approaches, both quantitative and qualitative, have been applied in the research process, indicating the possibilities and importance of combining both systems for greater understanding of the issues being studied. In our case, participatory studies of the improved management of indigenous chickens indicate their potential importance as livelihood assets for poor people.
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends on both how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
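Multi-step forecasts from a self-exciting threshold autoregressive (SETAR) model of the kind discussed above have no closed form, so they are typically computed by Monte Carlo simulation. The sketch below uses a two-regime SETAR(2;1,1) with illustrative parameters, not those estimated by Kräger and Kugler:

```python
import numpy as np

def setar_step(y_lag, rng):
    """One step of a two-regime SETAR model with an assumed
    threshold at zero and illustrative AR coefficients."""
    if y_lag <= 0.0:                     # regime 1
        y_next = 0.6 * y_lag
    else:                                # regime 2
        y_next = -0.4 * y_lag
    return y_next + rng.normal(scale=0.1)

def forecast_mc(y0, horizon, n_paths=5000, seed=0):
    """Monte Carlo point forecast: average over simulated paths,
    since the regime occupied at future steps is itself random."""
    rng = np.random.default_rng(seed)
    paths = np.full(n_paths, y0, dtype=float)
    for _ in range(horizon):
        paths = np.array([setar_step(y, rng) for y in paths])
    return paths.mean()

f = forecast_mc(y0=1.0, horizon=2)
```

Because the regime switches depend on the realised path, the two-step forecast differs from naively iterating the one-step map, which is one reason conventional accuracy measures can mask the value of such models.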
Abstract:
Methods of improving the coverage of Box–Jenkins prediction intervals for linear autoregressive models are explored. These methods use bootstrap techniques to allow for parameter estimation uncertainty and to reduce the small-sample bias in the estimator of the models’ parameters. In addition, we also consider a method of bias-correcting the non-linear functions of the parameter estimates that are used to generate conditional multi-step predictions.
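A minimal sketch of the bootstrap idea described above, for an AR(1) fitted by ordinary least squares: resample residuals, re-estimate the parameter on a rebuilt series, and simulate the forecast forward so that the interval reflects parameter-estimation uncertainty. All numbers are illustrative; this is not the paper's exact procedure.

```python
import numpy as np

def ar1_bootstrap_interval(y, horizon=1, n_boot=999, level=0.9, seed=0):
    """Bootstrap prediction interval for an AR(1) model."""
    rng = np.random.default_rng(seed)
    x, z = y[:-1], y[1:]
    phi_hat = x @ z / (x @ x)                 # OLS AR(1) estimate
    resid = z - phi_hat * x
    resid -= resid.mean()                     # centre the residuals
    sims = np.empty(n_boot)
    for b in range(n_boot):
        # Rebuild a series from resampled residuals, re-estimate phi
        e = rng.choice(resid, size=len(y))
        yb = np.empty(len(y)); yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = phi_hat * yb[t - 1] + e[t]
        xb, zb = yb[:-1], yb[1:]
        phi_b = xb @ zb / (xb @ xb)
        # Simulate the forecast path with the bootstrap parameter
        f = y[-1]
        for _ in range(horizon):
            f = phi_b * f + rng.choice(resid)
        sims[b] = f
    lo, hi = np.quantile(sims, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

# Synthetic AR(1) data for demonstration
rng = np.random.default_rng(1)
y = np.empty(200); y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()
lo, hi = ar1_bootstrap_interval(y)
```

Re-estimating the coefficient on each bootstrap series is what widens the interval relative to the plug-in Box–Jenkins interval, which conditions on the point estimate.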
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications, the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large for the filter to be usable. However, particle filters can be formulated using proposal densities, which gives greater freedom in how particles are sampled and allows for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
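For reference, the baseline SIR filter that the abstract says becomes degenerate in high dimensions can be sketched in a few lines for a scalar random-walk state. This is the standard forecast/weight/resample cycle, not the proposal-density filter the paper introduces; all model and error parameters are assumed.

```python
import numpy as np

def sir_step(particles, obs, rng, obs_std=0.5, proc_std=0.2):
    """One cycle of sequential importance resampling (SIR)."""
    # Forecast: propagate each particle through the stochastic model
    particles = particles + rng.normal(scale=proc_std, size=particles.size)
    # Weight each particle by the Gaussian observation likelihood
    w = np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    w /= w.sum()
    # Resample with replacement according to the weights
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]

rng = np.random.default_rng(0)
particles = rng.normal(loc=0.0, scale=2.0, size=1000)
for obs in [1.0, 1.1, 0.9, 1.0]:        # assimilate four observations
    particles = sir_step(particles, obs, rng)
```

In one dimension the weights stay well behaved; the degeneracy problem arises because, with many independent observations, almost all weight concentrates on a single particle, which is what proposal-density formulations are designed to avoid.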
Abstract:
This paper reviews nine software packages with particular reference to their GARCH model estimation accuracy when judged against a respected benchmark. We consider the numerical consistency of GARCH and EGARCH estimation and forecasting. Our results have a number of implications for published research and future software development. Finally, we argue that the establishment of benchmarks for other standard non-linear models is long overdue.
Abstract:
This paper explores a number of statistical models for predicting the daily stock return volatility of an aggregate of all stocks traded on the NYSE. An application of linear and non-linear Granger causality tests highlights evidence of bidirectional causality, although the relationship is stronger from volatility to volume than the other way around. The out-of-sample forecasting performance of various linear, GARCH, EGARCH, GJR and neural network models of volatility are evaluated and compared. The models are also augmented by the addition of a measure of lagged volume to form more general ex-ante forecasting models. The results indicate that augmenting models of volatility with measures of lagged volume leads only to very modest improvements, if any, in forecasting performance.
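The GARCH(1,1) conditional-variance recursion underlying the volatility models discussed in the two abstracts above is short enough to state directly. The parameters here are illustrative, not estimates from either paper:

```python
def garch11_variance(returns, omega=0.05, alpha=0.05, beta=0.90):
    """GARCH(1,1) conditional-variance recursion:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    initialised at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns:
        sigma2.append(omega + alpha * r**2 + beta * sigma2[-1])
    return sigma2

sig2 = garch11_variance([2.0, -1.5, 0.3])
```

A large squared return raises the next period's conditional variance, and the `beta` term makes that shock decay only slowly, which is the persistence the benchmark comparisons in such studies must reproduce accurately.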
Abstract:
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources.
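The two quantities the abstract trades off, sample size for a fixed effect size and detectable effect size for a fixed sample size, can be illustrated with the generic normal-approximation power formula for a two-sample comparison of means. This is a textbook sketch, not the voxel-based calculation used in the study:

```python
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sample z-test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardised effect size (Cohen's d)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return 2 * ((z_a + z_b) / effect_size) ** 2

def detectable_effect(n_per_group, alpha=0.05, power=0.8):
    """Invert the same relation: minimum detectable standardised
    effect size for a fixed per-group sample size."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) * (2 / n_per_group) ** 0.5

n = sample_size_per_group(effect_size=0.5)
d = detectable_effect(63)
```

The inverse relationship between the two functions is what lets a calibration experiment answer both design questions, how many participants for a target effect, and what effect a fixed cohort can detect, from the same variance estimates.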
Abstract:
Although there is a strong policy interest in the impacts of climate change corresponding to different degrees of climate change, there is so far little consistent empirical evidence of the relationship between climate forcing and impact. This is because the vast majority of impact assessments use emissions-based scenarios with associated socio-economic assumptions, and it is not feasible to infer impacts at other temperature changes by interpolation. This paper presents an assessment of the global-scale impacts of climate change in 2050 corresponding to defined increases in global mean temperature, using spatially-explicit impacts models representing impacts in the water resources, river flooding, coastal, agriculture, ecosystem and built environment sectors. Pattern-scaling is used to construct climate scenarios associated with specific changes in global mean surface temperature, and a relationship between temperature and sea level used to construct sea level rise scenarios. Climate scenarios are constructed from 21 climate models to give an indication of the uncertainty between forcing and response. The analysis shows that there is considerable uncertainty in the impacts associated with a given increase in global mean temperature, due largely to uncertainty in the projected regional change in precipitation. This has important policy implications. There is evidence for some sectors of a non-linear relationship between global mean temperature change and impact, due to the changing relative importance of temperature and precipitation change. In the socio-economic sectors considered here, the relationships are reasonably consistent between socio-economic scenarios if impacts are expressed in proportional terms, but there can be large differences in absolute terms. 
There are a number of caveats with the approach, including the use of pattern-scaling to construct scenarios, the use of one impacts model per sector, and the sensitivity of the shape of the relationships between forcing and response to the definition of the impact indicator.
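The pattern-scaling step described above reduces, at its simplest, to multiplying a normalised local response pattern (change per kelvin of global mean warming, diagnosed from a climate model) by a prescribed global mean temperature change. The grid values below are assumed for illustration:

```python
import numpy as np

def pattern_scale(local_pattern, delta_t_global):
    """Scale a per-kelvin local response pattern by a prescribed
    global mean temperature change to build a climate scenario."""
    return local_pattern * delta_t_global

# Toy 2x3 grid of local warming per 1 K of global warming (assumed)
pattern = np.array([[1.4, 1.1, 0.8],
                    [1.0, 0.9, 1.6]])
scenario_2K = pattern_scale(pattern, 2.0)   # scenario for +2 K global
```

Repeating this for patterns from each of the 21 climate models is what produces the spread of scenarios, and hence the impact uncertainty, at a given global mean temperature change.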