924 results for Divergence time estimation
Abstract:
We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results qualitatively support the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies. The added value of REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also influences spectrum estimation.
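As a minimal illustration of the building block this abstract refers to (not the REDFITmc2 implementation itself), the sketch below computes a Lomb-Scargle periodogram of an unevenly sampled series with SciPy; the sampling scheme, frequencies and noise level are assumptions made purely for the example.

```python
# Minimal sketch: Lomb-Scargle periodogram of an unevenly sampled series.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 300))            # uneven sampling times
y = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)
y = y - y.mean()                                      # remove the mean before LS

freqs = np.linspace(0.01, 0.5, 500)                   # cyclic frequencies to scan
ang = 2 * np.pi * freqs                               # lombscargle expects angular frequencies
power = lombscargle(t, y, ang, normalize=True)

peak = freqs[np.argmax(power)]
print(f"dominant frequency ~ {peak:.3f} (true value 0.1)")
```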
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and in a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-cost double-hurdle model. The model and procedures are demonstrated in an application to milk market participation in the Ethiopian highlands.
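To illustrate the kind of machinery the abstract invokes, the sketch below is a data-augmentation Gibbs sampler for a plain Tobit model with censoring at zero; it is not the paper's fixed-cost double-hurdle model, and the priors, data-generating values and variable names are assumptions made for the example.

```python
# Illustrative sketch: data-augmentation Gibbs sampler for a simple Tobit model.
import numpy as np
from scipy.stats import truncnorm, invgamma

rng = np.random.default_rng(1)
n, beta_true, sigma_true = 500, np.array([0.5, 1.2]), 1.0
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y_star = X @ beta_true + sigma_true * rng.standard_normal(n)
y = np.maximum(y_star, 0.0)                       # observed outcome, left-censored at 0
cens = y == 0.0

beta, sigma2 = np.zeros(2), 1.0
draws = []
for it in range(2000):
    # 1) impute latent outcomes for censored observations (truncated above at 0)
    mu_c = X[cens] @ beta
    z = y.copy()
    z[cens] = truncnorm.rvs(a=-np.inf, b=(0.0 - mu_c) / np.sqrt(sigma2),
                            loc=mu_c, scale=np.sqrt(sigma2), random_state=rng)
    # 2) draw beta | z, sigma2 under a flat prior
    V = np.linalg.inv(X.T @ X / sigma2)
    m = V @ (X.T @ z / sigma2)
    beta = rng.multivariate_normal(m, V)
    # 3) draw sigma2 | z, beta (inverse-gamma full conditional under a 1/sigma2 prior)
    resid = z - X @ beta
    sigma2 = invgamma.rvs(a=n / 2.0, scale=resid @ resid / 2.0, random_state=rng)
    if it >= 500:
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0))
```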
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
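The sketch below is a generic real-coded genetic algorithm for minimising a cost function over bounded parameters, the optimisation step the abstract describes; the placeholder cost function, bounds and hyperparameters are assumptions for illustration and do not represent the GWD parametrisation or its actual cost function.

```python
# Generic real-coded genetic algorithm for bounded cost-function minimisation.
import numpy as np

def genetic_minimise(cost, bounds, pop_size=60, n_gen=200, mut_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(n_gen):
        fit = np.array([cost(p) for p in pop])
        order = np.argsort(fit)
        parents = pop[order[: pop_size // 2]]            # truncation selection
        # uniform crossover between randomly paired parents
        idx = rng.integers(0, parents.shape[0], size=(pop_size, 2))
        mask = rng.random((pop_size, lo.size)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped back into the physical bounds
        children += mut_sigma * (hi - lo) * rng.standard_normal(children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                              # elitism: keep the best member
    fit = np.array([cost(p) for p in pop])
    return pop[np.argmin(fit)]

# toy usage: recover two "true" parameters from a quadratic misfit
truth = np.array([2.0, 0.5])
best = genetic_minimise(lambda p: np.sum((p - truth) ** 2),
                        bounds=np.array([[0.0, 5.0], [0.0, 2.0]]))
print(best)
```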
Abstract:
Numerous CCT domain genes are known to control flowering in plants. They belong to the CONSTANS-like (COL) and PSEUDO-RESPONSE REGULATOR (PRR) gene families, which in addition to a CCT domain possess B-box or response-regulator domains, respectively. Ghd7 is the most recently identified COL gene to have a proven role in the control of flowering time in the Poaceae. However, as it lacks B-box domains, its inclusion within the COL gene family is, technically, incorrect. Here, we show that Ghd7 belongs to a larger family of previously uncharacterized Poaceae genes which possess just a single CCT domain, termed here CCT MOTIF FAMILY (CMF) genes. We molecularly describe the CMF (and related COL and PRR) gene families in four sequenced Poaceae species, as well as in the draft genome assembly of barley (Hordeum vulgare). Genetic mapping of the ten barley CMF genes identified, as well as twelve previously unmapped HvCOL and HvPRR genes, finds that the majority map to collinear positions relative to their Poaceae orthologues. Combined inter-/intra-species comparative and phylogenetic analysis of the CMF, COL and PRR gene families indicates they evolved prior to the monocot/dicot divergence ~200 mya, with Poaceae CMF evolution described as the interplay between whole-genome duplication in the ancestral cereal and subsequent clade-specific mutation, deletion and duplication events. Given the proven role of CMF genes in the modulation of cereal flowering, the molecular, phylogenetic and comparative analysis of the Poaceae CMF, COL and PRR gene families presented here provides the foundation from which functional investigation can be undertaken.
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegative and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm that selects significant kernels one at a time based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is on the order of the number of training data N, which is much lower than the N² cost of the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to those of the classical Parzen window estimate and other existing sparse kernel density estimators.
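The sketch below conveys the general idea of a sparse kernel density estimator built by forward selection: kernels centred on training points are added greedily, with equal renormalised weights, to approximate the full Parzen window estimate on a grid. It is a deliberately simplified illustration under these assumptions, not the authors' MISE-based recursion or their weight-estimation step.

```python
# Simplified sketch: greedy forward selection of kernels against a Parzen reference.
import numpy as np

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 300)])
h = 0.3                                                 # kernel bandwidth (assumed)
grid = np.linspace(-5, 5, 400)

def gauss(x, c):                                        # Gaussian kernel evaluated on the grid
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

parzen = np.mean([gauss(grid, c) for c in data], axis=0)    # full Parzen reference density

selected = []
approx = np.zeros_like(grid)
for _ in range(10):                                     # keep only 10 of the 500 kernels
    errs = []
    for c in data:
        cand = (approx * len(selected) + gauss(grid, c)) / (len(selected) + 1)
        errs.append(np.mean((cand - parzen) ** 2))      # squared-error surrogate on the grid
    best = data[int(np.argmin(errs))]
    selected.append(best)
    approx = np.mean([gauss(grid, c) for c in selected], axis=0)

print("selected kernel centres:", np.round(sorted(selected), 2))
```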
Abstract:
Optimal estimation (OE) and probabilistic cloud screening were developed to provide lake surface water temperature (LSWT) estimates from the series of (advanced) along-track scanning radiometers (ATSRs). Variations in physical properties such as elevation, salinity, and atmospheric conditions are accounted for through the forward modelling of observed radiances. Therefore, the OE retrieval scheme developed is generic (i.e., applicable to all lakes). LSWTs were obtained for 258 of Earth's largest lakes from ATSR-2 and AATSR imagery from 1995 to 2009. Comparison to in situ observations from several lakes yields satellite minus in situ differences of −0.2 ± 0.7 K for daytime and −0.1 ± 0.5 K for nighttime observations (mean ± standard deviation). This compares with −0.05 ± 0.8 K for daytime and −0.1 ± 0.9 K for nighttime observations for previous methods based on operational sea surface temperature algorithms. The new approach also increases coverage (reducing misclassification of clear sky as cloud) and exhibits greater consistency between retrievals using different channel–view combinations. Empirical orthogonal function (EOF) techniques were applied to the LSWT retrievals (which contain gaps due to cloud cover) to reconstruct spatially and temporally complete time series of LSWT. The new LSWT observations and the EOF-based reconstructions offer benefits to numerical weather prediction and lake model validation, and improve our knowledge of the climatology of lakes globally. Both observations and reconstructions are publicly available from http://hdl.handle.net/10283/88.
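A hedged sketch of EOF-based gap filling in the spirit of the reconstruction step described above: missing values are filled iteratively using a truncated-SVD (EOF) reconstruction, similar to DINEOF-type approaches. The mode count, iteration count and the synthetic "LSWT" field are assumptions for the example, not the authors' exact procedure or data.

```python
# Sketch: iterative EOF (truncated-SVD) infilling of a gappy time x space field.
import numpy as np

def eof_reconstruct(field, n_modes=3, n_iter=50):
    """field: 2-D array (time x space) with NaNs where observations are missing."""
    gaps = np.isnan(field)
    filled = np.where(gaps, np.nanmean(field), field)    # first guess: global mean
    for _ in range(n_iter):
        anomaly = filled - filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes] + filled.mean(axis=0)
        filled[gaps] = recon[gaps]                        # update only the gap values
    return filled

# toy usage: a seasonal field with 30% of values removed
rng = np.random.default_rng(3)
t = np.arange(120)[:, None]
truth = 10 + 5 * np.sin(2 * np.pi * t / 12.0) + rng.normal(0, 0.3, (120, 50))
obs = truth.copy()
obs[rng.random(obs.shape) < 0.3] = np.nan
print("RMS error of infilled values:",
      np.sqrt(np.nanmean((eof_reconstruct(obs) - truth)[np.isnan(obs)] ** 2)))
```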
Abstract:
Optimal estimation (OE) is applied as a technique for retrieving sea surface temperature (SST) from thermal imagery obtained by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) on Meteosat 9. OE requires simulation of observations as part of the retrieval process, and this is done here using numerical weather prediction fields and a fast radiative transfer model. Bias correction of the simulated brightness temperatures (BTs) is found to be a necessary step before retrieval, and is achieved by filtered averaging of simulations minus observations over a time period of 20 days and spatial scale of 2.5° in latitude and longitude. Throughout this study, BT observations are clear-sky averages over cells of size 0.5° in latitude and longitude. Results for the OE SST are compared to results using a traditional non-linear retrieval algorithm (“NLSST”), both validated against a set of 30108 night-time matches with drifting buoy observations. For the OE SST the mean difference with respect to drifter SSTs is − 0.01 K and the standard deviation is 0.47 K, compared to − 0.38 K and 0.70 K respectively for the NLSST algorithm. Perhaps more importantly, systematic biases in NLSST with respect to geographical location, atmospheric water vapour and satellite zenith angle are greatly reduced for the OE SST. However, the OE SST is calculated to have a lower sensitivity of retrieved SST to true SST variations than the NLSST. This feature would be a disadvantage for observing SST fronts and diurnal variability, and raises questions as to how best to exploit OE techniques at SEVIRI's full spatial resolution.
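To make the retrieval step concrete, the sketch below performs a single linearised optimal-estimation (Rodgers-type) update for a reduced state vector [SST, TCWV], with a toy linear forward model standing in for the radiative transfer model. All numbers (prior values, Jacobian, error covariances, brightness temperatures) are illustrative assumptions, not values from the study.

```python
# Sketch: one linearised optimal-estimation update for a [SST, TCWV] state vector.
import numpy as np

x_a = np.array([290.0, 30.0])                  # prior state: SST (K), TCWV (kg m-2) from NWP
S_a = np.diag([2.0**2, 5.0**2])                # prior error covariance
K = np.array([[0.9, -0.05],                    # Jacobian of channel BTs w.r.t. [SST, TCWV]
              [0.8, -0.10],
              [0.7, -0.15]])
S_e = np.diag([0.15**2] * 3)                   # observation + forward-model error covariance
y = np.array([288.6, 287.9, 287.0])            # observed BTs (3 channels)
y_sim = np.array([288.9, 288.3, 287.6])        # BTs simulated from the prior state

S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - y_sim)
print("retrieved SST, TCWV:", np.round(x_hat, 2))
```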
Abstract:
Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the “split-window”, in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations of the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the global telecommunications system. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The “best possible” NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of kelvin in the NLSST. With no bias corrections to either prior fields or forward model, the SSTs retrieved by OE minus drifter SSTs have mean and SD of − 0.16 and 0.49 K respectively. The reduction in SD below the “best possible” regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with mean and SD of − 0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated. We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and retrieval error of order 0.25 K.
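The abstract mentions a statistic of the agreement between modelled and measured brightness temperatures used to screen retrievals. One common form of such a statistic, sketched below under stated assumptions, normalises the brightness-temperature misfit by its expected covariance (observation error plus prior uncertainty mapped through the Jacobian); the threshold and numbers are illustrative, not the paper's criterion.

```python
# Sketch: a chi-squared-like brightness-temperature consistency statistic.
import numpy as np

def bt_consistency(y_obs, y_sim_prior, K, S_a, S_e):
    S_y = S_e + K @ S_a @ K.T                  # expected covariance of the prior misfit
    d = y_obs - y_sim_prior
    return float(d @ np.linalg.solve(S_y, d))

# usage with the toy numbers from the previous sketch; threshold of 9.0 is an assumption
K = np.array([[0.9, -0.05], [0.8, -0.10], [0.7, -0.15]])
stat = bt_consistency(np.array([288.6, 287.9, 287.0]),
                      np.array([288.9, 288.3, 287.6]),
                      K, np.diag([4.0, 25.0]), np.diag([0.15**2] * 3))
print("retrieval accepted" if stat < 9.0 else "retrieval flagged", round(stat, 2))
```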
Abstract:
The use of Bayesian inference for time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing-spline-based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through tracking chirps and the analysis of musical data.
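The sketch below illustrates the combination the abstract describes, under heavy simplification: a bootstrap particle filter tracks a slowly varying AR(1) coefficient by weighting particles with a local Whittle likelihood evaluated on the periodogram of a sliding data block. The state dynamics, block length and noise levels are assumptions for the example; this is not the authors' algorithm.

```python
# Sketch: particle filter with a local Whittle likelihood for a drifting AR(1) coefficient.
import numpy as np

rng = np.random.default_rng(4)
T, block = 4000, 128
a_true = 0.3 + 0.6 * np.sin(np.linspace(0, np.pi, T))   # slowly varying AR(1) coefficient
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true[t] * x[t - 1] + rng.standard_normal()

def whittle_loglik(a, seg):
    """Local Whittle log-likelihood of an AR(1) spectrum (unit innovation variance)."""
    I = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)          # periodogram of the block
    w = 2 * np.pi * np.fft.rfftfreq(len(seg))             # angular frequencies
    f = 1.0 / (2 * np.pi * np.abs(1 - a * np.exp(-1j * w)) ** 2)
    return -np.sum(np.log(f[1:]) + I[1:] / (2 * np.pi * f[1:]))

n_part = 300
particles = rng.uniform(-0.95, 0.95, n_part)
estimates = []
for start in range(0, T - block, block):
    seg = x[start:start + block]
    # propagate particles with a small random walk, kept inside the unit circle
    particles = np.clip(particles + 0.05 * rng.standard_normal(n_part), -0.99, 0.99)
    logw = np.array([whittle_loglik(a, seg) for a in particles])
    w = np.exp(logw - logw.max()); w /= w.sum()
    estimates.append(np.sum(w * particles))               # posterior-mean estimate
    particles = rng.choice(particles, size=n_part, p=w)   # multinomial resampling

print("final estimate vs truth:", round(estimates[-1], 2), round(a_true[-1], 2))
```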
Abstract:
Vertical divergence of CO2 fluxes is observed over two Midwestern AmeriFlux forest sites. The differences in ensemble-averaged hourly CO2 fluxes measured at two heights above the canopy are relatively small (0.2–0.5 μmol m−2 s−1), but they are the major contributors to differences (76–256 g C m−2, or 41.8–50.6%) in estimated annual net ecosystem exchange (NEE) in 2001. A friction velocity criterion is used in these estimates, but mean flow advection is not accounted for. This study examines the effects of coordinate rotation, averaging time period, sampling frequency and co-spectral correction on CO2 fluxes measured at a single height, and on vertical flux differences measured between two heights. Both an offset in measured vertical velocity and the downflow/upflow caused by supporting tower structures in upwind directions lead to systematic over- or under-estimates of fluxes measured at a single height. An offset of 1 cm s−1 and an upflow/downflow of 1° lead to 1% and 5.6% differences in momentum fluxes and nighttime sensible heat and CO2 fluxes, respectively, but only 0.5% and 2.8% differences in daytime sensible heat and CO2 fluxes. The sign and magnitude of both the offset and the upflow/downflow angle vary between the sonic anemometers at the two measurement heights. This introduces a large systematic bias in vertical flux differences if these effects are not corrected in the coordinate rotation. A 1 h averaging time period is shown to be appropriate for the two sites. In the daytime, the absolute magnitudes of the co-spectra decrease with height in the natural frequency range 0.02–0.1 Hz but increase at lower frequencies (<0.01 Hz). Thus, air motions in these two frequency ranges counteract each other in determining vertical flux differences, whose magnitude and sign vary with averaging time period. At night, co-spectral densities of CO2 are more positive at the higher levels of both sites in the frequency range 0.03–0.4 Hz, and this vertical increase is also seen at most frequencies below 0.03 Hz. Differences in co-spectral corrections at the two heights lead to a positive shift in vertical CO2 flux differences throughout the day at both sites. At night, the vertical CO2 flux differences between the two measurement heights are 20–30% and 40–60% of the co-spectrally corrected CO2 fluxes measured at the lower levels of the two sites, respectively. Vertical differences of CO2 flux are relatively small in the daytime. Vertical differences in estimated mean vertical advection of CO2 between the two measurement heights generally do not improve the closure of the 1D (vertical) CO2 budget in the air layer between the two measurement heights. This may point to the significance of horizontal advection. However, a reliable assessment of mean advection contributions to annual NEE estimates at these two AmeriFlux sites remains an unsolved problem.
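For context on the coordinate-rotation step the abstract discusses, the sketch below applies the standard "double rotation" to sonic anemometer data (rotating the frame so the mean crosswind and mean vertical velocity are zero) before forming the eddy-covariance flux w'c'. The synthetic wind and scalar series, including the 1 cm s−1 offset, are assumptions for illustration; this is not the site-specific processing used in the study.

```python
# Sketch: double rotation of sonic anemometer data and an eddy-covariance flux.
import numpy as np

def double_rotate(u, v, w):
    theta = np.arctan2(np.mean(v), np.mean(u))        # first rotation: mean v -> 0
    u1 = u * np.cos(theta) + v * np.sin(theta)
    v1 = -u * np.sin(theta) + v * np.cos(theta)
    phi = np.arctan2(np.mean(w), np.mean(u1))         # second rotation: mean w -> 0
    u2 = u1 * np.cos(phi) + w * np.sin(phi)
    w2 = -u1 * np.sin(phi) + w * np.cos(phi)
    return u2, v1, w2

def eddy_flux(w, c):
    return np.mean((w - w.mean()) * (c - c.mean()))   # covariance w'c'

# toy usage: one hour of 10 Hz data with a 1 cm/s offset in measured w
rng = np.random.default_rng(5)
n = 36000
u = 2.0 + 0.5 * rng.standard_normal(n)
v = 0.3 * rng.standard_normal(n)
w = 0.01 + 0.2 * rng.standard_normal(n)               # includes the offset
c = 400 - 0.5 * (w - w.mean()) + rng.standard_normal(n)
_, _, w_rot = double_rotate(u, v, w)
print("flux before / after rotation:", eddy_flux(w, c), eddy_flux(w_rot, c))
```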
Abstract:
The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are examined for countering the small-sample bias of least-squares estimation for processes which have roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy–Fuller estimator in place of OLS; and the use of the Andrews–Chen estimator in place of OLS. All three methods of bias correction yield superior results to the bootstrap in the absence of bias correction. Of the three correction methods, the bootstrap prediction intervals based on the Roy–Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy–Fuller estimator is investigated when the order of the AR model is unknown and has to be determined using an information criterion.
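The sketch below illustrates the first of the three correction methods compared in the abstract: a bootstrap bias-corrected OLS estimator for an AR(1) model, followed by residual-bootstrap prediction intervals. The Roy–Fuller and Andrews–Chen estimators are not reproduced here, and the sample size, persistence and replication counts are assumptions for the example.

```python
# Sketch: bootstrap bias-corrected AR(1) OLS and residual-bootstrap prediction intervals.
import numpy as np

rng = np.random.default_rng(6)

def ols_ar1(y):
    x, z = y[:-1], y[1:]
    phi = np.sum((x - x.mean()) * (z - z.mean())) / np.sum((x - x.mean()) ** 2)
    c = z.mean() - phi * x.mean()
    return c, phi, z - c - phi * x

def simulate(c, phi, resid, y0, n, rng):
    y = np.empty(n); y[0] = y0
    e = rng.choice(resid - resid.mean(), size=n)
    for t in range(1, n):
        y[t] = c + phi * y[t - 1] + e[t]
    return y

# data from a highly persistent AR(1)
n, phi_true = 80, 0.95
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

c_hat, phi_hat, resid = ols_ar1(y)
# bootstrap bias correction: phi_bc = 2*phi_hat - mean of bootstrap re-estimates
boot = [ols_ar1(simulate(c_hat, phi_hat, resid, y[0], n, rng))[1] for _ in range(999)]
phi_bc = min(2 * phi_hat - np.mean(boot), 0.999)       # keep inside the unit circle

# residual-bootstrap prediction interval for horizons 1..5
h, paths = 5, []
for _ in range(999):
    e = rng.choice(resid - resid.mean(), size=h)
    path, last = [], y[-1]
    for s in range(h):
        last = c_hat + phi_bc * last + e[s]
        path.append(last)
    paths.append(path)
lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
print("95% interval at horizon 5:", round(lo[-1], 2), round(hi[-1], 2))
```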
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
Abstract:
Visual motion cues play an important role in animal and human locomotion, without the need to extract actual ego-motion information. This paper demonstrates a method for estimating the visual motion parameters, namely the Time-To-Contact (TTC), Focus of Expansion (FOE), and image angular velocities, from a sparse optical flow estimate registered by a downward-looking camera. The presented method is capable of estimating the visual motion parameters during complicated six-degree-of-freedom motion, in real time, and with accuracy suitable for mobile robot visual navigation.
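As a hedged sketch of the underlying geometry, the code below recovers the FOE and TTC from a sparse optical-flow field under the simplifying assumption of a purely diverging flow (camera translating towards a plane); the paper's full six-degree-of-freedom formulation is not reproduced, and the synthetic flow field is an illustrative assumption.

```python
# Sketch: FOE as the least-squares intersection of flow lines, TTC from radial flow rates.
import numpy as np

def estimate_foe(points, flows):
    """Least-squares intersection of the lines through each point along its flow vector."""
    A = np.zeros((2, 2)); b = np.zeros(2)
    for p, v in zip(points, flows):
        d = v / np.linalg.norm(v)
        P = np.eye(2) - np.outer(d, d)        # projector orthogonal to the flow direction
        A += P; b += P @ p
    return np.linalg.solve(A, b)

def estimate_ttc(points, flows, foe):
    r = points - foe                          # radial vectors from the FOE
    radial_speed = np.sum(flows * r, axis=1) / np.linalg.norm(r, axis=1)
    return np.median(np.linalg.norm(r, axis=1) / radial_speed)

# toy usage: synthetic diverging flow with FOE at (320, 240) and TTC of 2 s
rng = np.random.default_rng(7)
foe_true, ttc_true = np.array([320.0, 240.0]), 2.0
pts = rng.uniform([0, 0], [640, 480], size=(200, 2))
flow = (pts - foe_true) / ttc_true + 0.2 * rng.standard_normal((200, 2))
foe = estimate_foe(pts, flow)
print("FOE:", np.round(foe, 1), " TTC:", round(estimate_ttc(pts, flow, foe), 2))
```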
Abstract:
The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for conducting radar rainfall estimations in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l’Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built and several algorithms were specifically produced as part of this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. The assessment criteria for five intense and long-lasting Mediterranean rain events have proven that good quantitative precipitation estimates can be obtained from radar data alone within 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. Radar performance was shown to depend on type of rainfall, with better results obtained with deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step), as opposed to shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains basically unchanged. Because the Z–R relationships have not been optimized in this study, results are attributed to an improved processing of spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
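For the final reflectivity-to-rain-rate step that the abstract notes was not optimised in this study, the sketch below converts radar reflectivity in dBZ to rain rate with a Marshall–Palmer style power law Z = a·R^b, using the classical default coefficients (a = 200, b = 1.6) as an assumption for illustration.

```python
# Sketch: convert reflectivity (dBZ) to rain rate (mm/h) via a Z-R power law.
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)       # invert Z = a * R^b, R in mm/h

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {rain_rate_from_dbz(dbz):.1f} mm/h")
```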
Abstract:
This thesis describes a form of non-contact measurement using two-dimensional Hall effect sensing to resolve the location of a moving magnet which is part of a ‘magnetic spring’ type suspension system. This work was inspired by the field of space robotics, which currently relies on solid-link suspension techniques for rover stability. This thesis details the design, development and testing of a novel magnetic suspension system with a possible application in space- and terrestrial-based robotics, especially when the robot needs to traverse rough terrain. A number of algorithms were developed, using experimental data from testing, that can approximate the separation between magnets in the suspension module through observation of the magnetic fields. Experimental hardware was also developed to demonstrate how two-dimensional Hall effect sensor arrays could provide accurate feedback with respect to the magnetic suspension module's operation, so that future work can include the sensor array in a real-time control system to produce dynamic ride control for space robots. The research performed has shown that two-dimensional Hall effect sensing applied to magnetic suspension is accurate, effective and suitable for future testing.
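As a heavily hedged illustration of mapping a measured field strength back to magnet separation (not the calibration or algorithms developed in the thesis), the sketch below assumes a simple dipole-type fall-off B(r) = k / r³ whose constant is fitted from calibration measurements; all numbers are made up for the example.

```python
# Sketch: invert an assumed B = k / r^3 fall-off to estimate magnet separation.
import numpy as np

def calibrate_k(separations_m, fields_T):
    """Least-squares fit of k in B = k / r^3 from calibration measurements."""
    r3 = np.asarray(separations_m) ** -3
    return float(np.sum(r3 * np.asarray(fields_T)) / np.sum(r3 ** 2))

def separation_from_field(field_T, k):
    return (k / field_T) ** (1.0 / 3.0)

# toy usage with made-up calibration points (separation in metres, field in tesla)
k = calibrate_k([0.01, 0.02, 0.03], [2.0e-2, 2.5e-3, 7.4e-4])
print(f"estimated separation: {separation_from_field(1.0e-3, k) * 1000:.1f} mm")
```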