550 results for SMOOTHING SPLINES
Abstract:
In a recently published paper, spherical nonparametric estimators were applied to feature-track ensembles to determine a range of statistics for the atmospheric features considered. This approach obviates the types of bias normally introduced with traditional estimators. New spherical isotropic kernels with local support were introduced. In this paper the extension to spherical nonisotropic kernels with local support is introduced, together with a means of obtaining the shape and smoothing parameters in an objective way. The usefulness of spherical nonparametric estimators based on nonisotropic kernels is demonstrated with an application to an oceanographic feature-track ensemble.
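As a loose illustration of the kind of estimator described (not code from the paper), the sketch below builds a spherical nonparametric density field from feature-track points using an isotropic local-support kernel of great-circle distance; the quadratic kernel form, the bandwidth, and the toy track points are assumptions, and a nonisotropic version would replace the scalar distance with a direction-dependent metric.

```python
# Illustrative sketch only: spherical kernel density estimate with a
# local-support (compact) kernel of great-circle distance.  The kernel
# form and bandwidth are assumptions, not the estimator from the paper.
import numpy as np

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Angular separation (radians) between points given in radians."""
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

def spherical_kde(track_lat, track_lon, grid_lat, grid_lon, h):
    """Relative (unnormalised) density at grid points from track points.

    h is the kernel bandwidth (radians); the quadratic kernel
    (1 - (d/h)**2)**2 has local support, so points farther than h away
    contribute nothing.
    """
    density = np.zeros_like(grid_lat, dtype=float)
    for lat, lon in zip(track_lat, track_lon):
        d = great_circle_distance(lat, lon, grid_lat, grid_lon)
        density += np.where(d < h, (1.0 - (d / h) ** 2) ** 2, 0.0)
    return density / len(track_lat)

# Toy usage with made-up track points and a coarse grid.
rng = np.random.default_rng(0)
lat_pts = np.deg2rad(rng.uniform(30, 70, 200))
lon_pts = np.deg2rad(rng.uniform(-60, 30, 200))
glat, glon = np.meshgrid(np.deg2rad(np.arange(0, 90, 5)),
                         np.deg2rad(np.arange(-90, 60, 5)), indexing="ij")
field = spherical_kde(lat_pts, lon_pts, glat, glon, h=np.deg2rad(10))
```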
Abstract:
The aim of this paper is essentially twofold: first, to describe the use of spherical nonparametric estimators for determining statistical diagnostic fields from ensembles of feature tracks on a global domain, and second, to report the application of these techniques to data derived from a modern general circulation model. New spherical kernel functions are introduced that are more efficiently computed than the traditional exponential kernels. The data-driven techniques of cross-validation to determine the amount of smoothing objectively, and adaptive smoothing to vary the smoothing locally, are also considered. Also introduced are techniques for combining seasonal statistical distributions to produce longer-term statistical distributions. Although all calculations are performed globally, only the results for the Northern Hemisphere winter (December, January, February) and Southern Hemisphere winter (June, July, August) cyclonic activity are presented, discussed, and compared with previous studies. Overall, results for the two hemispheric winters are in good agreement with previous studies, both model-based and observational.
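The cross-validation idea mentioned here can be illustrated with a minimal, hedged sketch: leave-one-out likelihood cross-validation for a 1-D Gaussian kernel density estimate, standing in for the paper's spherical kernels; the data and candidate bandwidths are made up.

```python
# Illustrative sketch: leave-one-out likelihood cross-validation to pick
# a kernel bandwidth objectively.  The 1-D Gaussian kernel is a stand-in
# for spherical kernels; data and candidate bandwidths are toy values.
import numpy as np

def loo_log_likelihood(x, h):
    """Sum of log leave-one-out density estimates for bandwidth h."""
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                          # leave self out
    f_loo = k.sum(axis=1) / (n - 1)
    return np.sum(np.log(f_loo + 1e-300))

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 0.5, 150)])
bandwidths = np.linspace(0.05, 2.0, 40)
scores = [loo_log_likelihood(sample, h) for h in bandwidths]
best_h = bandwidths[int(np.argmax(scores))]
print(f"cross-validated bandwidth: {best_h:.2f}")
```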
Abstract:
Using the Met Office large-eddy model (LEM) we simulate a mixed-phase altocumulus cloud that was observed from Chilbolton in southern England by a 94 GHz Doppler radar, a 905 nm lidar, a dual-wavelength microwave radiometer and also by four radiosondes. It is important to test and evaluate such simulations with observations, since there are significant differences between results from different cloud-resolving models for ice clouds. Simulating the Doppler radar and lidar data within the LEM allows us to compare observed and modelled quantities directly, and allows us to explore the relationships between observed and unobserved variables. For general-circulation models, which currently tend to give poor representations of mixed-phase clouds, the case shows the importance of using: (i) separate prognostic ice and liquid water, (ii) a vertical resolution that captures the thin layers of liquid water, and (iii) an accurate representation of the subgrid vertical velocities that allow liquid water to form. It is shown that large-scale ascents and descents are significant for this case, and so the horizontally averaged LEM profiles are relaxed towards observed profiles to account for these. The LEM simulation then gives a reasonable cloud, with an ice-water path approximately two thirds of that observed, with liquid water at the cloud top, as observed. However, the liquid-water cells that form in the updraughts at cloud top in the LEM have liquid-water paths (LWPs) up to half those observed, and there are too few cells, giving a mean LWP five to ten times smaller than observed. In reality, ice nucleation and fallout may deplete ice-nuclei concentrations at the cloud top, allowing more liquid water to form there, but this process is not represented in the model. Decreasing the heterogeneous nucleation rate in the LEM increased the LWP, which supports this hypothesis. The LEM captures the increase in the standard deviation in Doppler velocities (and so vertical winds) with height, but values are 1.5 to 4 times smaller than observed (although values are larger in an unforced model run, this only increases the modelled LWP by a factor of approximately two). The LEM data show that, for values larger than approximately 12 cm s⁻¹, the standard deviation in Doppler velocities provides an almost unbiased estimate of the standard deviation in vertical winds, but provides an overestimate for smaller values. Time-smoothing the observed Doppler velocities and modelled mass-squared-weighted fallspeeds shows that observed fallspeeds are approximately two-thirds of the modelled values. Decreasing the modelled fallspeeds to those observed increases the modelled IWC, giving an IWP 1.6 times that observed.
Abstract:
Composites of wind speeds, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter for the 40-yr ECMWF Re-Analysis (ERA-40) and the high resolution global environment model (HiGEM). Features of conceptual models of cyclone structure—the warm conveyor belt, cold conveyor belt, and dry intrusion—have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt. In ERA-40, the high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM these are not so strongly associated. The cold conveyor belt is identified as rearward flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. The structure of the dry intrusion compares well between HiGEM and ERA-40 but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM’s ability to represent the key features of extratropical cyclone structure can give confidence in future predictions from this model.
Abstract:
Techniques for obtaining quantitative values of the temperatures and concentrations of remote hot gaseous effluents from their measured passive emission spectra have been examined in laboratory experiments. The high sensitivity of the spectrometer in the vicinity of the 2397 cm⁻¹ band head region of CO2 has allowed the gas temperature to be calculated from the relative intensity of the observed rotational lines. The spatial distribution of the CO2 in a methane flame has been reconstructed tomographically using a matrix inversion technique. The spectrometer has been calibrated against a black body source at different temperatures and a self-absorption correction has been applied to the data, avoiding the need to measure the transmission directly. Reconstruction artifacts have been reduced by applying a smoothing routine to the inversion matrix.
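A minimal sketch of tomographic reconstruction by regularized matrix inversion, in the spirit of the smoothing applied to the inversion; the toy forward matrix, the synthetic profile, and the Tikhonov-style second-difference penalty are assumptions rather than the paper's actual procedure.

```python
# Illustrative sketch: tomographic reconstruction by regularized matrix
# inversion.  A second-difference penalty plays the role of the smoothing
# used to suppress reconstruction artifacts; the forward matrix and the
# "measured" projections are toy stand-ins, not the paper's data.
import numpy as np

n = 40                                   # unknowns along a line of sight
m = 60                                   # number of projection measurements
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, (m, n))        # toy forward (path-length) matrix
x_true = np.exp(-0.5 * ((np.arange(n) - 20) / 5.0) ** 2)   # toy profile
y = A @ x_true + rng.normal(0, 0.05, m)  # noisy projections

# Second-difference operator penalises rough solutions.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 1.0                                # smoothing weight (assumed)
x_rec = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print("reconstruction RMS error:", np.sqrt(np.mean((x_rec - x_true) ** 2)))
```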
Abstract:
Capturing the pattern of structural change is a relevant task in applied demand analysis, as consumer preferences may vary significantly over time. Filtering and smoothing techniques have recently played an increasingly relevant role. A dynamic Almost Ideal Demand System with random walk parameters is estimated in order to detect modifications in consumer habits and preferences, as well as changes in the behavioural response to prices and income. Systemwise estimation, consistent with the underlying constraints from economic theory, is achieved through the EM algorithm. The proposed model is applied to UK aggregate consumption of alcohol and tobacco, using quarterly data from 1963 to 2003. Increased alcohol consumption is explained by a preference shift, addictive behaviour and a lower price elasticity. The dynamic and time-varying specification is consistent with the theoretical requirements imposed at each sample point.
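A minimal sketch of the state-space machinery underlying such estimation: a single regression with random-walk coefficients run through a Kalman filter and RTS smoother (an EM scheme would alternate this pass with re-estimation of the noise variances); the single-equation form, the variances, and the data are assumptions, not the paper's constrained multi-equation demand system.

```python
# Illustrative sketch: one regression with random-walk coefficients,
# estimated by a Kalman filter and RTS smoother.  The full demand system
# in the paper is multi-equation and theory-constrained; this is a toy.
import numpy as np

rng = np.random.default_rng(3)
T, k = 160, 2                                    # quarters, regressors
X = np.column_stack([np.ones(T), rng.normal(size=T)])  # intercept + toy regressor
beta_true = np.cumsum(rng.normal(0, 0.02, (T, k)), axis=0) + [0.5, -0.3]
y = np.sum(X * beta_true, axis=1) + rng.normal(0, 0.05, T)

sigma_eps, sigma_eta = 0.05 ** 2, 0.02 ** 2      # assumed noise variances
beta_f = np.zeros((T, k))
P_f = np.zeros((T, k, k))
beta_p, P_p = np.zeros(k), np.eye(k) * 10.0      # diffuse-ish prior

for t in range(T):                               # forward Kalman filter
    x = X[t]
    S = x @ P_p @ x + sigma_eps                  # innovation variance
    K = P_p @ x / S                              # Kalman gain
    beta_f[t] = beta_p + K * (y[t] - x @ beta_p)
    P_f[t] = P_p - np.outer(K, x @ P_p)
    beta_p, P_p = beta_f[t], P_f[t] + sigma_eta * np.eye(k)   # predict

beta_s, P_s = beta_f.copy(), P_f.copy()          # RTS backward smoother
for t in range(T - 2, -1, -1):
    P_pred = P_f[t] + sigma_eta * np.eye(k)
    J = P_f[t] @ np.linalg.inv(P_pred)
    beta_s[t] = beta_f[t] + J @ (beta_s[t + 1] - beta_f[t])
    P_s[t] = P_f[t] + J @ (P_s[t + 1] - P_pred) @ J.T

print("final smoothed coefficients:", beta_s[-1])
```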
Abstract:
A fully automated procedure to extract and to image local fibre orientation in biological tissues from scanning X-ray diffraction is presented. The preferred chitin fibre orientation in the flow sensing system of crickets is determined with high spatial resolution by applying synchrotron radiation based X-ray microbeam diffraction in conjunction with advanced sample sectioning using a UV micro-laser. The data analysis is based on an automated detection of azimuthal diffraction maxima after 2D convolution filtering (smoothing) of the 2D diffraction patterns. Under the assumption of crystallographic fibre symmetry around the morphological fibre axis, the evaluation method allows mapping the three-dimensional orientation of the fibre axes in space. The resulting two-dimensional maps of the local fibre orientations - together with the complex shape of the flow sensing system - may be useful for a better understanding of the mechanical optimization of such tissues.
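A minimal sketch of the smoothing-then-peak-finding step: 2D Gaussian filtering of a synthetic diffraction pattern followed by detection of azimuthal intensity maxima on a ring; the synthetic pattern, smoothing width, ring radius, and peak threshold are all assumptions, not the paper's pipeline.

```python
# Illustrative sketch: smooth a 2-D diffraction pattern, take an azimuthal
# intensity profile on a ring, and detect its maxima.  The synthetic
# pattern, smoothing width and ring radius are toy assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import find_peaks

# Synthetic detector image: two arcs (fibre reflections) plus noise.
size = 256
yy, xx = np.mgrid[:size, :size] - size // 2
r = np.hypot(xx, yy)
phi = np.arctan2(yy, xx)
rng = np.random.default_rng(4)
image = (np.exp(-((r - 80) ** 2) / 20) *
         (np.exp(-((phi - 1.0) ** 2) / 0.05) +
          np.exp(-((phi + 2.14) ** 2) / 0.05)))
image += rng.normal(0, 0.05, image.shape)

smoothed = gaussian_filter(image, sigma=2.0)      # 2-D convolution smoothing

# Azimuthal intensity profile on the ring r ~ 80 px.
ring = np.abs(r - 80) < 3
bins = np.linspace(-np.pi, np.pi, 361)
idx = np.digitize(phi[ring], bins)
profile = np.bincount(idx, weights=smoothed[ring], minlength=362)[1:361]
counts = np.bincount(idx, minlength=362)[1:361]
profile = profile / np.maximum(counts, 1)

peaks, _ = find_peaks(profile, height=0.2)        # azimuthal maxima
print("azimuthal maxima (deg):", np.degrees(bins[peaks]))
```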
Abstract:
Two experiments examine the effect on an immediate recall test of simulating a reverberant auditory environment in which auditory distracters in the form of speech are played to the participants (the 'irrelevant sound effect'). An echo-intensive environment simulated by the addition of reverberation to the speech reduced the extent of 'changes in state' in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo-free environment. Results are interpreted in terms of the changing-state hypothesis, which states that the acoustic content of irrelevant sound, rather than its phonology or semantics, determines the extent of the irrelevant sound effect (ISE).
Abstract:
The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain and time dependence that results in a cascade in tracer scales. Here, a quantitative study on the accuracy of such tracer studies is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. A knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of airmass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest “trustworthy” scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3, while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity in the TSF for the two flows is striking and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing initial contours is to introduce a spin-up time, after which wind field truncation errors take over from initialization errors (2–3 days). It is also shown that false detail from the proliferation of finescale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds. The minimum wind field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated and the implications for the modeling of atmospheric chemistry are briefly discussed.
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification by using a fuzzy membership function usually based on B-splines and algebraic operators for inference, etc. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions, and unity of support but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed followed by numerical examples to demonstrate the effectiveness of this new data based modelling approach.
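A minimal sketch of the forward de Casteljau evaluation of a Bernstein-Bezier function over a triangle from barycentric coordinates; the degree-2 control ordinates and the query point are toy values, and the inverse de Casteljau/backpropagation step described in the paper is not reproduced.

```python
# Illustrative sketch: forward de Casteljau evaluation of a triangular
# Bernstein-Bezier function given barycentric coordinates.  Control
# ordinates and the query point are toy values, not the paper's model.
import numpy as np

def de_casteljau_triangle(control, bary):
    """Evaluate a triangular Bezier function at barycentric coords (u, v, w).

    `control` maps multi-indices (i, j, k) with i + j + k = n to scalar
    ordinates.  Each de Casteljau step lowers the degree by one until a
    single value remains.
    """
    u, v, w = bary
    pts = dict(control)
    n = sum(next(iter(pts)))                     # polynomial degree
    for m in range(n - 1, -1, -1):               # degree of the next level
        nxt = {}
        for i in range(m + 1):
            for j in range(m + 1 - i):
                k = m - i - j
                nxt[(i, j, k)] = (u * pts[(i + 1, j, k)] +
                                  v * pts[(i, j + 1, k)] +
                                  w * pts[(i, j, k + 1)])
        pts = nxt
    return pts[(0, 0, 0)]

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle a, b, c."""
    T = np.column_stack([b - a, c - a])
    v, w = np.linalg.solve(T, p - a)
    return 1.0 - v - w, v, w

# Toy degree-2 control ordinates over the triangle (0,0), (1,0), (0,1).
control = {(2, 0, 0): 0.0, (0, 2, 0): 1.0, (0, 0, 2): 0.5,
           (1, 1, 0): 0.4, (1, 0, 1): 0.2, (0, 1, 1): 0.8}
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
bary = barycentric(np.array([0.3, 0.4]), a, b, c)
print("model output:", de_casteljau_triangle(control, bary))
```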
Abstract:
Volatility, or the variability of the underlying asset, is one of the key fundamental components of property derivative pricing and of the application of real option models in development analysis. There has been relatively little work on volatility in terms of its application to property derivatives and real options analysis. Most research on volatility stems from investment performance (Nathakumaran & Newell 1995; Brown & Matysiak 2000; Booth & Matysiak 2001). Historic standard deviation is often used as a proxy for volatility and there has been a reliance on indices, which are subject to valuation smoothing effects. Transaction prices are considered to be more volatile than the traditional standard deviations of appraisal-based indices. This could lead, arguably, to inefficiencies and mis-pricing, particularly if it is also accepted that changes evolve randomly over time and where future volatility, and not an ex-post measure, is the key (Sing 1998). If history does not repeat, or provides an unreliable measure, then estimating model-based (implied) volatility is an alternative approach (Patel & Sing 2000). This paper is the first of two that employ alternative approaches to calculating and capturing volatility in UK real estate for the purposes of applying the measure to derivative pricing and real option models. It draws on a uniquely constructed IPD/Gerald Eve transactions database, containing over 21,000 properties over the period 1983-2005. This first paper examines the magnitude of historic volatility associated with asset returns by sector and geographic spread. The subsequent paper will focus on model-based (implied) volatility.
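A minimal sketch of the historic-volatility proxy referred to above, i.e. the annualised sample standard deviation of a return series; the quarterly figures are invented and not drawn from the IPD/Gerald Eve database.

```python
# Illustrative sketch: historic standard deviation of returns as a
# volatility proxy, annualised from quarterly observations.  The return
# series below is made up.
import numpy as np

quarterly_returns = np.array([0.021, -0.004, 0.013, 0.030,
                              0.008, -0.012, 0.017, 0.025])
vol_quarterly = np.std(quarterly_returns, ddof=1)     # sample std dev
vol_annual = vol_quarterly * np.sqrt(4)               # scale to annual
print(f"annualised historic volatility: {vol_annual:.3%}")
```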
Abstract:
Identifying a periodic time-series model from environmental records, without imposing the positivity of the growth rate, does not necessarily respect the time order of the data observations. Consequently, subsequent observations, sampled in the environmental archive, can be inverted on the time axis, resulting in a non-physical signal model. In this paper an optimization technique with linear constraints on the signal model parameters is proposed that prevents time inversions. The activation conditions for this constrained optimization are based upon the physical constraint on the growth rate, namely, that it cannot take values smaller than zero. The actual constraints are defined for polynomials and first-order splines as basis functions for the nonlinear contribution in the distance-time relationship. The method is compared with an existing method that eliminates the time inversions, and its noise sensitivity is tested by means of Monte Carlo simulations. Finally, the usefulness of the method is demonstrated on measurements of vessel density in a mangrove tree, Rhizophora mucronata, and of Mg/Ca ratios in a bivalve, Mytilus trossulus.
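A minimal sketch of the positivity idea: a first-order-spline (piecewise-linear) distance-time fit whose growth rate is kept nonnegative by solving for nonnegative knot-interval increments with nonnegative least squares; the data, knots, and noise level are assumptions, and the paper's periodic signal model and activation conditions are not reproduced.

```python
# Illustrative sketch: a monotone (no time-inversion) distance-time fit.
# Distance is modelled as a first-order spline whose increments over
# successive knot intervals are forced to be nonnegative via nonnegative
# least squares.  Data, knots and noise level are toy assumptions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 10, 80))                      # sample times
d_true = 0.5 + 2.0 * t + 0.8 * np.sin(t)                 # increasing signal
d_obs = d_true + rng.normal(0, 0.3, t.size)

knots = np.linspace(0, 10, 11)                           # spline knots
# Column 0: offset; column j: fraction of knot interval j already traversed.
A = np.ones((t.size, knots.size))
for j in range(1, knots.size):
    A[:, j] = np.clip((t - knots[j - 1]) / (knots[j] - knots[j - 1]), 0, 1)

coef, _ = nnls(A, d_obs)          # offset and all increments >= 0
d_fit = A @ coef                  # nondecreasing in t by construction
print("most negative step in fit:", np.min(np.diff(d_fit)))
```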
Abstract:
The ASTER Global Digital Elevation Model (GDEM) has made elevation data at 30 m spatial resolution freely available, enabling reinvestigation of morphometric relationships derived from limited field data using much larger sample sizes. These data are used to analyse a range of morphometric relationships derived for dunes (between dune height, spacing, and equivalent sand thickness) in the Namib Sand Sea, which was chosen because there are a number of extant studies that could be used for comparison with the results. The relative accuracy of GDEM for capturing dune height and shape was tested against multiple individual ASTER DEM scenes and against field surveys, highlighting the smoothing of the dune crest and resultant underestimation of dune height, and the omission of the smallest dunes, because of the 30 m sampling of ASTER DEM products. It is demonstrated that morphometric relationships derived from GDEM data are broadly comparable with relationships derived by previous methods, across a range of different dune types. The data confirm patterns of dune height, spacing and equivalent sand thickness mapped previously in the Namib Sand Sea, but add new detail to these patterns.
Abstract:
The case for property has typically rested on the application of modern portfolio theory (MPT), in that property has been shown to offer increased diversification benefits within a multi-asset portfolio without hurting portfolio returns, especially for lower-risk portfolios. However, this view is based upon the use of historic, usually appraisal-based, data for property. Recent research suggests strongly that such data significantly underestimate the risk characteristics of property, because appraisals explicitly or implicitly smooth out much of the real volatility in property returns. This paper examines the portfolio diversification effects of including property in a multi-asset portfolio, using UK appraisal-based (smoothed) data and several derived de-smoothed series. Having considered the effects of de-smoothing, we then consider the inclusion of a further low-risk asset (cash) in order to investigate further whether property's place in a low-risk portfolio is maintained. The conclusions of this study are that the previously supposed benefits of including property have been overstated. Although property may still have a place in a 'balanced' institutional portfolio, the case for property needs to be reassessed and not be based simplistically on the application of MPT.
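A minimal sketch of one common de-smoothing adjustment for appraisal-based returns, r_t = (r*_t - a r*_{t-1}) / (1 - a) with smoothing parameter a; the return series and the parameter value are invented, and the paper derives several de-smoothed series rather than this single formula.

```python
# Illustrative sketch: first-order de-smoothing of an appraisal-based
# return series, r_t = (r_star_t - a * r_star_{t-1}) / (1 - a).  The
# appraisal returns and the smoothing parameter are made up.
import numpy as np

appraisal = np.array([0.015, 0.018, 0.012, -0.002, 0.009, 0.021, 0.016])
a = 0.6                                            # assumed smoothing weight
desmoothed = (appraisal[1:] - a * appraisal[:-1]) / (1 - a)

print("appraisal-based std :", np.std(appraisal[1:], ddof=1))
print("de-smoothed std     :", np.std(desmoothed, ddof=1))
```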
Abstract:
Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
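A minimal sketch of an equidistributing regrading of discrete data (histogram equalisation): each value is mapped through the empirical cumulative histogram so that the output histogram is approximately constant; the synthetic 8-bit image is an assumption and the paper's tunable smoothing of the regrading is not included.

```python
# Illustrative sketch: an "equidistributing" regrading of discrete data.
# Each grey level is mapped through the empirical cumulative histogram so
# the output histogram is roughly flat.  The 8-bit range and synthetic
# image are assumptions; the smoothing parameter is not included.
import numpy as np

rng = np.random.default_rng(6)
image = rng.normal(90, 20, (128, 128)).clip(0, 255).astype(np.uint8)

hist = np.bincount(image.ravel(), minlength=256)
cdf = np.cumsum(hist) / image.size                 # empirical CDF
lookup = np.round(255 * cdf).astype(np.uint8)      # regrading table
equalised = lookup[image]                          # apply the transformation

print("input grey-level range :", image.min(), image.max())
print("output grey-level range:", equalised.min(), equalised.max())
```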