936 results for pitch interpolation
Abstract:
Palaeoclimates across Europe for 6000 y BP were estimated from pollen data using the modern pollen analogue technique constrained with lake-level data. The constraint consists of restricting the set of modern pollen samples considered as analogues of the fossil samples to those locations where the implied change in annual precipitation minus evapotranspiration (P–E) is consistent with the regional change in moisture balance as indicated by lakes. An artificial neural network was used for the spatial interpolation of lake-level changes to the pollen sites, and for mapping palaeoclimate anomalies. The climate variables reconstructed were mean temperature of the coldest month (Tc), growing degree days above 5 °C (GDD), moisture availability expressed as the ratio of actual to equilibrium evapotranspiration (α), and P–E. The constraint improved the spatial coherency of the reconstructed palaeoclimate anomalies, especially for P–E. The reconstructions indicate clear spatial and seasonal patterns of Holocene climate change, which can provide a quantitative benchmark for the evaluation of palaeoclimate model simulations. Winter temperatures (Tc) were 1–3 K greater than present in the far N and NE of Europe, but 2–4 K less than present in the Mediterranean region. Summer warmth (GDD) was greater than present in NW Europe (by 400–800 K day at the highest elevations) and in the Alps, but >400 K day less than present at lower elevations in S Europe. P–E was 50–250 mm less than present in NW Europe and the Alps, but α was 10–15% greater than present in S Europe and P–E was 50–200 mm greater than present in S and E Europe.
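A minimal sketch of the constrained-analogue step described above, assuming pollen spectra stored as percentage arrays; the function name, the squared-chord distance metric and the k-nearest-analogue selection are illustrative choices, not necessarily the paper's exact procedure:

```python
import numpy as np

def constrained_analogues(fossil, modern, modern_dPE, pe_lo, pe_hi, k=5):
    """Return indices of the k closest modern pollen analogues whose
    implied P-E change lies within lake-level-derived bounds.

    fossil: (n_taxa,) fossil pollen percentages
    modern: (n_sites, n_taxa) modern pollen percentages
    modern_dPE: (n_sites,) implied change in P-E at each modern site
    """
    # Squared-chord distance, a standard dissimilarity for pollen data.
    d = np.sum((np.sqrt(modern) - np.sqrt(fossil)) ** 2, axis=1)
    # The lake-level constraint: discard analogues whose implied P-E
    # change is inconsistent with the regional moisture balance.
    d = np.where((modern_dPE >= pe_lo) & (modern_dPE <= pe_hi), d, np.inf)
    return np.argsort(d)[:k]
```

The reconstruction at a fossil site is then, for example, a (distance-weighted) mean of the modern climate values at the retained analogues.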
Abstract:
A one-dimensional surface energy-balance lake model, coupled to a thermodynamic model of lake ice, is used to simulate variations in the temperature of and evaporation from three Estonian lakes: Karujärv, Viljandi and Kirjaku. The model is driven by daily climate data, derived by cubic-spline interpolation from monthly mean data, and was run for periods of 8 years (Kirjaku) up to 30 years (Viljandi). Simulated surface water temperature is in good agreement with observations: mean differences between simulated and observed temperatures are from −0.8°C to +0.1°C. The simulated duration of snow and ice cover is comparable with observations. However, the model generally underpredicts ice thickness and overpredicts snow depth. Sensitivity analyses suggest that the model results are robust across a wide range (0.1–2.0 m⁻¹) of lake extinction coefficient: surface temperature differs by less than 0.5°C between the extreme values of the extinction coefficient. The model results are more sensitive to snow and ice albedos. However, changing the snow (0.2–0.9) and ice (0.15–0.55) albedos within realistic ranges does not improve the simulations of snow depth and ice thickness. The underestimation of ice thickness is correlated with the overestimation of snow cover, since a thick snow layer insulates the ice and limits ice formation. The overestimation of snow cover results from the assumption that all the simulated winter precipitation occurs as snow, a direct consequence of using daily climate data derived by interpolation from mean monthly data.
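Daily driving data of the kind described above can be derived along these lines (a sketch assuming SciPy; the mid-month node placement and periodic boundary condition are our assumptions, not necessarily the authors' exact scheme):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Monthly mean air temperature (deg C); values are illustrative.
monthly = np.array([-5.1, -4.8, -1.2, 4.3, 10.6, 15.2,
                    17.8, 16.9, 11.9, 6.4, 1.1, -3.0])
mid = np.array([15, 45, 74, 105, 135, 166,
                196, 227, 258, 288, 319, 349])  # approx. mid-month day of year

# Periodic spline so that 31 December joins smoothly onto 1 January.
spline = CubicSpline(np.append(mid, mid[0] + 365),
                     np.append(monthly, monthly[0]), bc_type="periodic")
daily = spline(np.arange(1, 366))
```

Applying the same smooth interpolation to precipitation spreads rain over every day of the month, which is exactly why the model above ends up treating all simulated winter precipitation as snow.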
Abstract:
We introduced photo-polymer networks into the various liquid crystalline phases of the antiferroelectric liquid crystal AS612 and studied the effects of these networks by measuring the temperature dependence of the Bragg wavelengths selectively reflected. After polymerization, the decrease in Bragg wavelengths with respect to the original values is consistent with a shorter helical pitch due to polymer network shrinkage. Also, by removing the liquid crystalline material, we are able to image the residual polymer network using scanning electron microscopy and polarized light microscopy. The polymer strands are a few microns thick and the networks show both chiral and non-chiral features.
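For selective reflection from a helical structure, the Bragg wavelength scales with the helical pitch, λ ≈ n̄·p, with n̄ the mean refractive index, so the measured wavelength shift translates directly into an implied pitch change. A toy calculation (all numbers illustrative, not from the paper):

```python
n_mean = 1.6                              # assumed mean refractive index
lam_before, lam_after = 620e-9, 590e-9    # Bragg wavelengths (m), illustrative

pitch_before = lam_before / n_mean        # lambda = n_mean * pitch
pitch_after = lam_after / n_mean
print(f"implied pitch shrinkage: {100 * (1 - pitch_after / pitch_before):.1f}%")
```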
Abstract:
The question is addressed whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme in which temperature observations are used to update only the density field is compared to a scheme in which the updates of the density field and the zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. It is found that equatorial zonal velocities can deteriorate if velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
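A minimal sketch of such a balanced update for a shallow-water layer (our own construction, with illustrative parameter values): near the equator geostrophy reads β·y·u = −g·∂h/∂y, which degenerates at y = 0, where the L'Hôpital limit u = −(g/β)·∂²h/∂y² applies.

```python
import numpy as np

g = 0.05        # reduced gravity (m s^-2), illustrative
beta = 2.3e-11  # equatorial beta-plane parameter (m^-1 s^-1)
dy = 50e3       # meridional grid spacing (m)

def balanced_u_increment(dh, y):
    """Zonal-velocity increment in geostrophic balance with a height
    increment dh(y) on an equatorial beta-plane."""
    dhdy = np.gradient(dh, dy)
    y_safe = np.where(np.abs(y) < dy / 2, np.inf, y)  # guard the equator row
    u = -g * dhdy / (beta * y_safe)                   # off-equatorial geostrophy
    eq = np.abs(y) < dy / 2
    u[eq] = -(g / beta) * np.gradient(dhdy, dy)[eq]   # L'Hopital limit at y = 0
    return u

y = dy * np.arange(-40, 41)              # +/- 2000 km around the equator
dh = 0.1 * np.exp(-(y / 500e3) ** 2)     # assumed height increment (m)
du = balanced_u_increment(dh, y)
```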
Abstract:
Eddy covariance has been used in urban areas to evaluate the net exchange of CO2 between the surface and the atmosphere. Typically, only the vertical flux is measured, at a height 2–3 times that of the local roughness elements; however, under conditions of relatively low instability, CO2 may accumulate in the airspace below the measurement height. This can result in inaccurate emissions estimates if the accumulated CO2 drains away or is flushed upwards during thermal expansion of the boundary layer. Some studies apply a single-height storage correction; however, this requires the assumption that the response of the CO2 concentration profile to forcing is constant with height. Here a full seasonal cycle (7th June 2012 to 3rd June 2013) of single-height CO2 storage data, calculated from concentrations measured at 10 Hz by an open-path gas analyser, is compared to a data set calculated from a concurrent switched vertical profile measured (2 Hz, closed-path gas analyser) at 10 heights within and above a street canyon in central London. The assumption required for the former storage determination is shown to be invalid. For approximately regular street canyons at least one other measurement is required. Continuous measurements at fewer locations are shown to be preferable to a spatially dense, switched profile, as temporal interpolation is ineffective. The majority of the spectral energy of the CO2 storage time series was found to be between 0.001 and 0.2 Hz (periods of 1000 and 5 s, respectively); however, sampling frequencies of 2 Hz and below still result in significantly lower CO2 storage values. An empirical method of correcting CO2 storage values from under-sampled time series is proposed.
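The storage term at stake is the height integral of the CO2 concentration tendency below the flux measurement level; a minimal sketch of the profile version (heights, averaging interval and units are illustrative assumptions):

```python
import numpy as np

def co2_storage_flux(c_before, c_after, z, dt):
    """Storage term (umol m-2 s-1) from profile concentrations
    (umol m-3) at heights z (m) measured dt seconds apart."""
    dcdt = (np.asarray(c_after) - np.asarray(c_before)) / dt
    # trapezoidal integral of the tendency over the measured levels
    return float(np.sum(0.5 * (dcdt[1:] + dcdt[:-1]) * np.diff(z)))

z = np.array([1.0, 2.5, 5, 8, 12, 16, 20, 25, 32, 40])  # 10 levels (m)
c0 = 16000 + 40 * np.exp(-z / 10)   # concentrations at t (umol m-3), illustrative
c1 = c0 + 25 * np.exp(-z / 10)      # 30 min later: build-up near street level
print(co2_storage_flux(c0, c1, z, dt=1800.0))
```

The single-height variant instead multiplies one roof-level tendency by the whole measurement height, which assumes the profile responds uniformly with height; that is the assumption the comparison above shows to be invalid.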
Abstract:
1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
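For orientation, the linear-interpolation index that the comparison refers to can be computed along these lines (a simplified sketch: a weekly protocol and an area-under-the-curve index are assumed):

```python
import numpy as np

def abundance_index(weekly_counts):
    """Annual abundance index from one site-year of weekly counts with
    missing visits coded as np.nan: linearly interpolate the gaps, then
    sum the weekly values (area under the seasonal flight curve)."""
    c = np.asarray(weekly_counts, dtype=float)
    weeks = np.arange(c.size)
    seen = ~np.isnan(c)
    filled = np.interp(weeks, weeks[seen], c[seen])
    return filled.sum()

# 26-week season with ~40% of visits missing (illustrative data)
rng = np.random.default_rng(1)
flight = 10 * np.exp(-0.5 * ((np.arange(26) - 13) / 4) ** 2)
counts = rng.poisson(flight).astype(float)
counts[rng.random(26) < 0.4] = np.nan
print(abundance_index(counts))
```

The regional GAM alternative instead pools counts from all sites in a climatic region to fit a shared flight curve and imputes missing weeks from it, which is what gives it the advantage when many counts are missing.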
Abstract:
We estimate crustal structure and thickness of South America north of roughly 40 degrees S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.
Abstract:
We consider incompressible Stokes flow with an internal interface at which the pressure is discontinuous, as happens for example in problems involving surface tension. We assume that the mesh does not follow the interface, which causes classical interpolation spaces to yield suboptimal convergence rates (typically, the interpolation error in the L²(Ω)-norm is of order h^(1/2)). We propose a modification of the P1-conforming space that accommodates discontinuities at the interface without introducing additional degrees of freedom or modifying the sparsity pattern of the linear system. The unknowns are the pressure values at the vertices of the mesh and the basis functions are computed locally at each element, so that the implementation of the proposed space into existing codes is straightforward. With this modification, numerical tests show that the interpolation order improves to O(h^(3/2)). The new pressure space is implemented for the stable P1+/P1 mini-element discretization, and for the stabilized equal-order P1/P1 discretization. Assessment is carried out for Poiseuille flow with a forcing surface and for a static bubble. In all cases the proposed pressure space leads to improved convergence orders and to more accurate results than the standard P1 space. In addition, two Navier-Stokes simulations with moving interfaces (Rayleigh-Taylor instability and merging bubbles) are reported to show that the proposed space is robust enough to carry out realistic simulations.
Abstract:
The structure and local ordering of 1,6-hexamethylenediisocyanate-(acetoxypropyl) cellulose (HDI-APC) liquid crystalline elastomer thin films are investigated using X-ray diffraction and scattering techniques. Optical microscopy and mechanical tests are performed to complement the investigation. The study is performed on films with and without applied uniaxial stress. Our results indicate that the film is constituted by bundles of helicoidal fiber-like structures, in which the cellobiose block spins around the axis of the fiber, like a string structure in a smectic-like packing, with the pitch defined by a smectic-like layer. The fibers are on average perpendicular to the smectic-like planes. Without stretching, these bundles are warped, with only a residual orientation along the casting direction. Stretching orients the bundles along the stress direction, increasing the smectic-like and nematic-like ordering of the fibers. Under stress, the network of molecules which connects the cellobiose blocks and forms the cellulosic matrix tends to organize its links in a hexagonal-like structure with a lattice parameter commensurate with the smectic-like structure.
Abstract:
The interest in attractive Bose-Einstein condensates arises from the instabilities generated when the number of trapped atoms exceeds a critical number; in this case, a recombination process promotes the collapse of the cloud. This behavior is normally geometry dependent. Within the mean-field approximation, the system is described by the Gross-Pitaevskii equation. We have considered an attractive Bose-Einstein condensate confined in a nonspherical trap, investigating the solutions numerically and analytically using controlled perturbation and self-similar approximation methods. The approximation is valid over the whole interval of the negative coupling parameter, allowing interpolation between the weak-coupling and strong-coupling limits. Using the self-similar approximation methods, accurate analytical formulas were derived. These expressions are discussed for several different traps and may contribute to the understanding of experimental observations.
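The mean-field description invoked above is the Gross-Pitaevskii equation; for an attractive condensate the s-wave scattering length a, and hence the coupling g, is negative:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + V_{\mathrm{trap}}(\mathbf{r})
           + g\,\lvert\psi\rvert^{2} \right]\psi ,
\qquad g = \frac{4\pi\hbar^{2} a}{m}, \quad a < 0 ,
```

with, for a nonspherical (cylindrically symmetric) trap, V_trap(r) = (m/2)[ω⊥²(x² + y²) + ω_z²z²].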
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for the European Telecommunications Standards Institute (ETSI) adaptive multi-rate (AMR) narrow-band (NB) and wide-band (WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via a wavelet filter bank and a wavelet-based pitch/tone detection algorithm. The wavelet filter bank divides the input speech signal into several frequency bands so that the signal power level at each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band by using the wavelet de-noising method. The wavelet filter bank is also used to detect correlated complex signals like music. The proposed algorithm then applies an SVM to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex-signal warning flag of input speech signals. Using the trained SVM, the proposed VAD algorithm produces more accurate detection results. Various experiments carried out on the Aurora speech database with different noise conditions show that the proposed algorithm gives VAD performance considerably superior to that of the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD.
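A minimal sketch of the wavelet sub-band features feeding such an SVM decision rule, assuming the PyWavelets and scikit-learn packages (the wavelet, decomposition depth and feature set are illustrative simplifications of the paper's design):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_features(frame, wavelet="db4", level=5):
    """Log power per wavelet sub-band plus a robust noise-level
    estimate; the paper's full feature vector also includes pitch
    period, tone flag and a complex-signal (music) warning flag."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    powers = [np.log10(np.mean(c ** 2) + 1e-12) for c in coeffs]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise from finest band
    return np.array(powers + [sigma])

# With labelled training frames X (n_frames x n_features), y (1 = speech):
#   clf = SVC(kernel="rbf").fit(X, y)
#   is_speech = clf.predict(subband_features(frame)[None, :])
```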
Abstract:
In this paper we present a new wavelet-based algorithm for low-cost computation of the cepstrum. It can be used for precise real-time pitch determination in automatic speech and speaker recognition systems. Many wavelet families are examined to determine the one that works best. The results confirm the efficacy and accuracy of the proposed technique for pitch extraction.
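The cepstral pitch-picking step itself is compact; a plain-FFT sketch (the paper's contribution is computing the cepstrum cheaply via wavelets, which is not reproduced here):

```python
import numpy as np

def cepstral_pitch(frame, fs, fmin=50.0, fmax=400.0):
    """Estimate f0 as the quefrency of the real-cepstrum peak within
    the plausible pitch-period range [1/fmax, 1/fmin] seconds."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    peak = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / peak

# e.g. for a 512-sample (64 ms) frame at 8 kHz:
#   f0 = cepstral_pitch(frame, fs=8000)
```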
Abstract:
The shuttle radar topography mission (SRTM) was flown on the space shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north and 56 degrees south latitude, using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface with coherence of angular properties (i.e. slope, aspect) between neighbouring pixels, an important characteristic when dealing with terrain analysis. This work intends to show how the proper adjustment of variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in interpolation, can be set to achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, including different adjustment schemes (changes in the nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with the national elevation dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighting (IDW)). The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, due to the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid point artifacts generated by predicted points at the same location as original data points; and (iii) using a small nugget effect value, to avoid smoothing that can obliterate terrain features. Drainage networks derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and splines are very similar, we consider the results produced by kriging superior, since the spline-interpolated surface still presented some noise and linear artifacts, which were removed by kriging.
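The three-point recipe above can be tried with, for example, the PyKrige package; a sketch on synthetic data (variogram parameters, jitter size and neighbourhood size are illustrative, not the paper's calibrated values):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
# Synthetic stand-in for 3-arcsec SRTM cell centres (degrees), elevations (m).
x = rng.uniform(0, 0.1, 400)
y = rng.uniform(0, 0.1, 400)
z = 100 + 50 * np.sin(60 * x) + 40 * np.cos(50 * y)

# (ii) tiny coordinate jitter avoids artifacts at exactly collocated points
x_j = x + rng.uniform(-1e-6, 1e-6, x.size)
y_j = y + rng.uniform(-1e-6, 1e-6, y.size)

ok = OrdinaryKriging(
    x_j, y_j, z,
    variogram_model="spherical",
    # (iii) small nugget so terrain features are not smoothed away
    variogram_parameters={"sill": 400.0, "range": 0.02, "nugget": 0.5},
)
# (i) local neighbourhood: moving-window kriging onto the finer 1-arcsec grid
gx = np.linspace(0, 0.1, 120)
gy = np.linspace(0, 0.1, 120)
z_fine, var = ok.execute("grid", gx, gy, backend="C", n_closest_points=16)
```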
Abstract:
Purpose: We present an iterative framework for CT reconstruction from transmission ultrasound data which accurately and efficiently models the strong refraction effects that occur in our target application: imaging the female breast. Methods: Our refractive ray tracing framework has its foundation in the fast marching method (FMM), which allows accurate as well as efficient modeling of curved rays. We also describe a novel regularization scheme that yields further significant reconstruction quality improvements. A final contribution is the development of a realistic anthropomorphic digital breast phantom based on the NIH Visible Female data set. Results: Our system is able to resolve very fine details even in the presence of significant noise, and it reconstructs both sound speed and attenuation data. Excellent correspondence with a traditional, but significantly more computationally expensive, wave-equation solver is achieved. Conclusions: Apart from the accurate modeling of curved rays, decisive factors have also been our regularization scheme and the high-quality interpolation filter we have used. An added benefit of our framework is that it accelerates well on GPUs, where we have shown that clinical 3D reconstruction speeds on the order of minutes are possible.
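The fast-marching backbone of such a bent-ray tracer can be reproduced with, for example, the scikit-fmm package (a sketch; the geometry and speed map are illustrative): travel times T solve the eikonal equation |∇T| = 1/c, and refracted rays follow the gradient of T.

```python
import numpy as np
import skfmm

n, dx = 256, 1e-3                        # grid size and spacing (m), illustrative
c = np.full((n, n), 1500.0)              # background sound speed (m/s)
yy, xx = np.mgrid[0:n, 0:n] * dx
c[(xx - 0.128) ** 2 + (yy - 0.128) ** 2 < 0.04 ** 2] = 1450.0  # slower inclusion

# The zero level set of phi marks the transmitter position.
phi = np.ones((n, n))
phi[n // 2, 0] = -1.0
T = skfmm.travel_time(phi, c, dx=dx)     # first-arrival times, |grad T| = 1/c

# Curved ray paths are recovered by descending the travel-time gradient
# from each receiver back towards the source.
gTy, gTx = np.gradient(np.asarray(T), dx)
```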
Abstract:
In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the model to a dimensional measurement method used for calculating testicular volume with a caliper, calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular, when the skewness parameter is near zero, leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
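The model-choice step then reduces to comparing information criteria computed from the two maximized log-likelihoods; a generic sketch (logL, parameter count k and sample size n are assumed inputs):

```python
import numpy as np

def info_criteria(logL, k, n):
    """AIC, BIC (Schwarz) and Hannan-Quinn criterion for a fitted model
    with maximized log-likelihood logL, k parameters, n observations.
    Smaller values indicate the preferred model for all three."""
    return {
        "AIC": 2 * k - 2 * logL,
        "BIC": k * np.log(n) - 2 * logL,
        "HQC": 2 * k * np.log(np.log(n)) - 2 * logL,
    }

# e.g. skew-normal calibration model vs. the usual normal-error model
print(info_criteria(logL=-142.3, k=4, n=60))   # illustrative numbers
print(info_criteria(logL=-145.9, k=3, n=60))
```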