944 results for Trigonometric interpolation
Abstract:
We present high-precision transit observations of the exoplanet WASP-21b, obtained with the Rapid Imager to Search for Exoplanets instrument mounted on the 2.0-m Liverpool Telescope. A transit model coupled with a Markov chain Monte Carlo routine is fitted to derive accurate system parameters. The two new high-precision transits allow us to estimate the stellar density directly from the light curve. Our analysis suggests that WASP-21 is evolving off the main sequence, which led to a previous overestimation of the stellar density. Using isochrone interpolation, we find a stellar mass of 0.86 ± 0.04 Msun, significantly lower than previously reported (1.01 ± 0.03 Msun). Consequently, we find a lower planetary mass of 0.27 ± 0.01 MJup. We also find a lower inclination (87.4° ± 0.3°) than previously reported, resulting in slightly larger stellar (R* = 1.10 ± 0.03 Rsun) and planetary (Rp = 1.14 ± 0.04 RJup) radii. The planet radius suggests a hydrogen/helium composition with no core, which strengthens the correlation between planetary density and host star metallicity. A new ephemeris is determined for the system: T0 = 2455084.51974 ± 0.00020 (HJD) and P = 4.3225060 ± 0.0000031 d. We find no transit timing variations in WASP-21b.
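The ephemeris quoted in this abstract fully determines future transit mid-times under a linear model, which is also the baseline against which transit timing variations are measured. A minimal sketch using the reported values (the function names are illustrative, not from the paper):

```python
# Linear ephemeris for WASP-21b from the abstract above:
# T0 = 2455084.51974 HJD, P = 4.3225060 d. The abstract reports
# no transit timing variations, so a linear model suffices.

T0 = 2455084.51974  # reference transit mid-time (HJD)
P = 4.3225060       # orbital period (days)

def transit_time(epoch):
    """Predicted mid-time of transit number `epoch`."""
    return T0 + epoch * P

def epoch_of(hjd):
    """Nearest integer epoch for an observed mid-time."""
    return round((hjd - T0) / P)

def o_minus_c(observed_hjd):
    """Observed-minus-calculated residual, the quantity inspected for TTVs."""
    return observed_hjd - transit_time(epoch_of(observed_hjd))
```

A significant trend in `o_minus_c` across epochs would indicate timing variations; the abstract finds none.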
Abstract:
The electronic properties of zircon and hafnon, two wide-gap high-κ materials, are investigated using many-body perturbation theory (MBPT) combined with the Wannier interpolation technique. For both materials, the band structures calculated within MBPT differ from those obtained within density-functional theory by (i) a slight displacement of the highest valence-band maximum from the Γ point and (ii) an opening of the indirect band gap to 7.6 and 8.0 eV for zircon and hafnon, respectively. The introduction of vertex corrections in the many-body self-energy does not modify the results except for a global rigid shift of the many-body corrections.
Abstract:
Shape corrections to the standard approximate Kohn-Sham exchange-correlation (xc) potentials are considered with the aim to improve the excitation energies (especially for higher excitations) calculated with time-dependent density functional perturbation theory. A scheme of gradient-regulated connection (GRAC) of inner to outer parts of a model potential is developed. Asymptotic corrections based either on the potential of Fermi and Amaldi or van Leeuwen and Baerends (LB) are seamlessly connected to the (shifted) xc potential of Becke and Perdew (BP) with the GRAC procedure, and are employed to calculate the vertical excitation energies of the prototype molecules N2, CO, CH2O, C2H4, C5NH5, C6H6, Li2, Na2, K2. The results are compared with those of the alternative interpolation scheme of Tozer and Handy as well as with the results of the potential obtained with the statistical averaging of (model) orbital potentials. Various asymptotically corrected potentials produce high quality excitation energies, which in quite a few cases approach the benchmark accuracy of 0.1 eV for the electronic spectra. Based on these results, the potential BP-GRAC-LB is proposed for molecular response calculations, which is a smooth potential and a genuine "local" density functional with an analytical representation. (C) 2001 American Institute of Physics.
Abstract:
We study the entanglement of two impurity qubits immersed in a Bose-Einstein condensate (BEC) reservoir. This open quantum system model allows for interpolation between a common dephasing scenario and an independent dephasing scenario by modifying the wavelength of the superlattice superposed on the BEC; we study how this influences the dynamical properties of the impurities. We demonstrate the existence of rich dynamics corresponding to different values of reservoir parameters, including phenomena such as entanglement trapping, revivals of entanglement, and entanglement generation. In the spirit of reservoir engineering, we present the optimal BEC parameters for entanglement generation and trapping, showing the key role of the ultracold-gas interactions. Copyright (C) EPLA, 2013
Abstract:
Artificial neural network (ANN) methods are used to predict forest characteristics. The data source is the Southeast Alaska (SEAK) Grid Inventory, a ground survey compiled by the USDA Forest Service at several thousand sites. The main objective of this article is to predict characteristics at unsurveyed locations between grid sites. A secondary objective is to evaluate the relative performance of different ANNs. Data from the grid sites are used to train six ANNs: multilayer perceptron, fuzzy ARTMAP, probabilistic, generalized regression, radial basis function, and learning vector quantization. A classification and regression tree method is used for comparison. Topographic variables are used to construct models: latitude and longitude coordinates, elevation, slope, and aspect. The models classify three forest characteristics: crown closure, species land cover, and tree size/structure. Models are constructed using n-fold cross-validation. Predictive accuracy is calculated using a method that accounts for the influence of misclassification as well as measuring correct classifications. The probabilistic and generalized regression networks are found to be the most accurate. The predictions of the ANN models are compared with a classification of the Tongass National Forest in southeast Alaska based on the interpretation of satellite imagery and are found to be of similar accuracy.
Abstract:
1. The prediction and mapping of climate in areas between climate stations is of increasing importance in ecology.
2. Four categories of model (simple interpolation, thin plate splines, multiple linear regression and mixed spline-regression) were tested for their ability to predict the spatial distribution of temperature on the British mainland. Their performance was assessed by external cross-verification.
3. The British distribution of mean daily temperature was predicted with the greatest accuracy by using a mixed model: a thin plate spline fitted to the surface of the country, after correction of the data by a selection from 16 independent topographical variables (such as altitude, distance from the sea, slope and topographic roughness), chosen by multiple regression from a digital terrain model (DTM) of the country.
4. The next most accurate method was a pure multiple regression model using the DTM. Both regression and thin plate spline models based on only a few variables (latitude, longitude and altitude) were comparatively unsatisfactory, but some rather simple methods of surface interpolation (such as bilinear interpolation after correction to sea level) gave moderately satisfactory results. Differences between the methods seemed to be dependent largely on their ability to model the effect of the sea on land temperatures.
5. Prediction of temperature by the best methods was greater than 95% accurate in all months of the year, as shown by the correlation between the predicted and actual values. The predicted temperatures were calculated at real altitudes, not subject to sea-level correction.
6. A minimum of just over 30 temperature recording stations would generate a satisfactory surface, provided the stations were well spaced.
7. Maps of mean daily temperature, using the best overall methods, are provided; further important variables, such as continentality and length of growing season, were also mapped. Many of these are believed to be the first detailed representations at real altitude.
8. The interpolated monthly temperature surfaces are available on disk.
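Point 4 mentions bilinear interpolation after correction to sea level. A minimal sketch of that idea: station temperatures are reduced to sea level with an assumed constant lapse rate (the 6.5 K/km value here is a standard atmospheric figure, not taken from the study), interpolated bilinearly, then lifted back to the target altitude:

```python
# Sketch of "bilinear interpolation after correction to sea level".
# The lapse rate is an assumed standard value, not from the paper.

LAPSE = 6.5 / 1000.0  # K per metre of altitude, assumed

def to_sea_level(temp, altitude_m):
    """Reduce an observed temperature to its sea-level equivalent."""
    return temp + LAPSE * altitude_m

def from_sea_level(temp, altitude_m):
    """Lift a sea-level temperature back to a real altitude."""
    return temp - LAPSE * altitude_m

def bilinear(x, y, q11, q21, q12, q22):
    """Interpolate on the unit square: q11=(0,0), q21=(1,0), q12=(0,1), q22=(1,1)."""
    return (q11 * (1 - x) * (1 - y) + q21 * x * (1 - y)
            + q12 * (1 - x) * y + q22 * x * y)

def interp_temperature(x, y, corners, target_alt):
    """corners: four (temperature, altitude) pairs at the grid-cell corners."""
    sl = [to_sea_level(t, a) for t, a in corners]
    return from_sea_level(bilinear(x, y, *sl), target_alt)
```

The sea-level step is what makes the simple scheme "moderately satisfactory": without it, altitude differences between stations dominate the interpolated field.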
Abstract:
Quantifying the similarity between two trajectories is a fundamental operation in the analysis of spatio-temporal databases. While a number of distance functions exist, the recent shift in the dynamics of the trajectory generation procedure violates one of their core assumptions: a consistent and uniform sampling rate. In this paper, we formulate a robust distance function called Edit Distance with Projections (EDwP) to match trajectories under inconsistent and variable sampling rates through dynamic interpolation. This is achieved by deploying the idea of projections that goes beyond matching only the sampled points while aligning trajectories. To enable efficient trajectory retrievals using EDwP, we design an index structure called TrajTree. TrajTree derives its pruning power by employing the unique combination of bounding boxes with Lipschitz embedding. Extensive experiments on real trajectory databases demonstrate EDwP to be up to 5 times more accurate than the state-of-the-art distance functions. Additionally, TrajTree increases the efficiency of trajectory retrievals by up to an order of magnitude over existing techniques.
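The "projections" idea can be illustrated concretely: rather than comparing a sample only against the sampled points of the other trajectory, it is compared against its projection onto the other trajectory's segments, which interpolates between samples. A minimal sketch of that building block (not the EDwP definition itself):

```python
# Illustrative building block for projection-based trajectory matching:
# project a sample onto the segments of the other trajectory instead of
# matching sampled points only. Not the EDwP formulation from the paper.

def project(p, a, b):
    """Orthogonal projection of point p onto segment a-b, clamped to the segment."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    if denom == 0.0:
        return a  # degenerate segment
    t = ((px - ax) * dx + (py - ay) * dy) / denom
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def point_to_traj(p, traj):
    """Distance from a sample to a polyline trajectory via its best projection."""
    return min(dist(p, project(p, traj[i], traj[i + 1]))
               for i in range(len(traj) - 1))
```

Because the projection lands between samples, this comparison is insensitive to where the other trajectory happened to be sampled, which is the property EDwP exploits.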
Abstract:
The cyclical properties of the Baltic Dry Index (BDI) and their implications for forecasting performance are investigated. We find that changes in the BDI can lead to permanent shocks to trade of major exporting economies. In our forecasting exercise, we show that commodities and trigonometric regression can lead to improved predictions and then use our forecasting results to perform an investment exercise and to show how they can be used for improved risk management in the freight sector.
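Trigonometric regression, mentioned above as a forecasting tool, fits a mean plus sine/cosine pairs of known period to the series. A minimal one-harmonic sketch (illustrative only, not the specification used in the paper); on an evenly spaced grid whose length is a multiple of the period, the least-squares coefficients reduce to discrete Fourier sums:

```python
import math

# One-harmonic trigonometric regression on evenly spaced observations.
# Assumes len(y) is a multiple of the period, so the cosine and sine
# regressors are orthogonal and the Fourier sums give the exact
# least-squares fit.

def trig_fit(y, period):
    """Fit y[t] ~ a0 + a1*cos(2*pi*t/period) + b1*sin(2*pi*t/period)."""
    n = len(y)
    w = 2.0 * math.pi / period
    a0 = sum(y) / n
    a1 = 2.0 / n * sum(y[t] * math.cos(w * t) for t in range(n))
    b1 = 2.0 / n * sum(y[t] * math.sin(w * t) for t in range(n))
    return a0, a1, b1

def trig_predict(coeffs, t, period):
    """Evaluate the fitted cycle at time t, including out-of-sample t."""
    a0, a1, b1 = coeffs
    w = 2.0 * math.pi / period
    return a0 + a1 * math.cos(w * t) + b1 * math.sin(w * t)
```

Forecasting then amounts to evaluating `trig_predict` beyond the sample, which is how a fitted cycle in a series like the BDI would be extrapolated.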
Abstract:
This thesis studies properties and applications of different generalized Appell polynomials in the framework of Clifford analysis. As an example of 3D-quasi-conformal mappings realized by generalized Appell polynomials, an analogue of the complex Joukowski transformation of order two is introduced. The consideration of a Pascal n-simplex with hypercomplex entries allows stressing the combinatorial relevance of hypercomplex Appell polynomials. The concept of totally regular variables and its relation to generalized Appell polynomials leads to the construction of new bases for the space of homogeneous holomorphic polynomials whose elements are all isomorphic to the integer powers of the complex variable. For this reason, such polynomials are called pseudo-complex powers (PCP). Different variants of them are the subject of a detailed investigation. Special attention is paid to the numerical aspects of PCP. An efficient algorithm based on complex arithmetic is proposed for their implementation. In this context a brief survey on numerical methods for inverting Vandermonde matrices is presented and a modified algorithm is proposed which illustrates advantages of a special type of PCP. Finally, combinatorial applications of generalized Appell polynomials are emphasized. The explicit expression of the coefficients of a particular type of Appell polynomials and their relation to a Pascal simplex with hypercomplex entries are derived. The comparison of two types of 3D Appell polynomials leads to the detection of new trigonometric summation formulas and combinatorial identities of Riordan-Sofo type characterized by their expression in terms of central binomial coefficients.
Abstract:
Between the Bullet and the Hole is a film centred on the elusive and complex effects of war on women's role in ballistic research and early computing. The film features new and archival high-speed bullet photography, schlieren and electric spark imagery, bullet sound wave imagery, forensic ballistic photography, slide rules, punch cards, computer diagrams, and a soundtrack by Scanner. Like a frantic animation storyboard, it explores the flickering space between the frames, testing the perceptual mechanics of visual interpolation, the possibility of reading or deciphering the gap between before and after. Interpolation - the main task of the women studying ballistics in WW2 - is the construction or guessing of missing data using only two known data points. The film tries to unpack this gap, open it up to interrogation. It questions how we read, interpolate or construct the gaps between bullet and hole, perpetrator and victim, presence and absence. The project involves exchanges with specialists in this area such as the Cranfield University Forensics department, the London-based Forensic Firearms consultancy, the Imperial War Museum, the ENIAC programmers project, the Smithsonian Institution, and forensic scientists at the Palm Beach County Sheriff's Office (USA). Exhibitions: solo exhibition at Dallas Contemporary (Texas, Jan-Mar 2016), including newly commissioned lenticular prints and a dual slide projector installation; group exhibition at the Sydney Biennale (Sydney, Mar-June 2016); UK premiere and solo retrospective screening at Whitechapel Gallery (London); forthcoming solo exhibition at Iliya Fridman Gallery (NY, Oct-Dec 2016). Film festivals and screenings: International Film Festival Rotterdam (Jan 2016); Whitechapel Gallery (London, Feb 2016); Cornerhouse/Home (Manchester, Nov 2016). Public lectures: Whitechapel Gallery with Prof. David Alan Grier and Morgan Quaintance; Carriageworks (Sydney) with Prof. Douglas Khan; Monash University (Melbourne); Gertrude Space (Melbourne).
Reviews and interviews: Artforum, Studio International, Mousse Magazine.
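The abstract defines interpolation as constructing missing data from only two known data points. The simplest instance of that definition is linear interpolation between the two points:

```python
# Linear interpolation: the value a fraction t (0..1) of the way
# between two known data points p0 and p1.

def lerp(p0, p1, t):
    return p0 + t * (p1 - p0)
```

Everything between the known points is guessed by this straight-line rule, which is exactly the "gap between before and after" the film interrogates.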
Abstract:
Best Young Researcher Paper Award, sponsored by the company Timberlake, presented at the 1st National Conference on Symbolic Computation in Education and Research (CSEI2012), held at IST on 2-3 April.
Abstract:
Master's dissertation, Quality in Analyses (Qualidade em Análises), Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
The principle of the Sparse Point Representation (SPR) method is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, a SPR grid is coarse in smooth regions, and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first one is a refinement procedure to extend the SPR by the inclusion of new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolation subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure, in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
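The significance test behind SPR can be sketched concretely. With the cubic (four-point, Deslauriers-Dubuc-type) interpolating subdivision scheme, the wavelet coefficient at a fine-grid point is the error of the midpoint prediction from the four nearest coarse-grid values; the point is retained when that error is significant. A minimal sketch under that assumption:

```python
# Interpolatory wavelet coefficient as an interpolation error, using the
# four-point cubic interpolating subdivision scheme. A sketch of the
# significance test described in the abstract, not the paper's full method.

def dd4_predict(fm1, f0, f1, f2):
    """Cubic interpolating-subdivision prediction of the midpoint of [f0, f1],
    from the four nearest coarse-grid values fm1, f0, f1, f2."""
    return (-fm1 + 9.0 * f0 + 9.0 * f1 - f2) / 16.0

def detail_coefficient(fine_value, fm1, f0, f1, f2):
    """Interpolation error at the fine-grid point; the point is kept in the
    SPR grid only when this coefficient is significant."""
    return fine_value - dd4_predict(fm1, f0, f1, f2)
```

For data sampled from a cubic polynomial the prediction is exact and the coefficient vanishes, which is why SPR grids stay coarse in smooth regions and refine only near irregularities.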
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with motion compensated errors that are rather significant for some frame regions and rather small for others, depending on the video content. In this paper, a low complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance with improvements up to 1.2 dB regarding a WZ coding mode only solution.
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably to raise it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has available some reference, decoded frames. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
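The decoder-side frame interpolation discussed in these two abstracts starts from motion estimation between the two decoded reference frames. A minimal sketch of the classic building block, full-search block matching by sum of absolute differences (far simpler than the regularized motion-field estimators the paper proposes; the block placed halfway along the resulting vector would form the interpolated side information):

```python
# Minimal decoder-side motion estimation for frame interpolation:
# full-search SAD block matching between two decoded reference frames.
# Illustrative only; not the regularized estimator from the paper.

def sad(frame_a, frame_b, ax, ay, bx, by, size):
    """Sum of absolute differences between two size*size blocks,
    anchored at (ax, ay) in frame_a and (bx, by) in frame_b."""
    return sum(abs(frame_a[ay + j][ax + i] - frame_b[by + j][bx + i])
               for j in range(size) for i in range(size))

def best_motion(prev, nxt, bx, by, size, search):
    """Motion vector (dx, dy) minimizing SAD for the block at (bx, by)
    of `prev`, searched over a (2*search+1)^2 window in `nxt`."""
    h, w = len(prev), len(prev[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= w - size and 0 <= y <= h - size:
                cost = sad(prev, nxt, bx, by, x, y, size)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best
```

Unregularized vectors like these are exactly what produce the region-dependent SI errors noted above, which motivates the motion-field regularization proposed in the paper.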