768 results for trilinear interpolation
Abstract:
Measurements of the sea surface obtained by satellite-borne radar altimetry are irregularly spaced and contaminated with various modelling and correction errors. The largest source of uncertainty for low Earth orbiting satellites such as ERS-1 and Geosat may be attributed to orbital modelling errors. The empirical correction of such errors is investigated by examination of single- and dual-satellite crossovers, with a view to identifying the extent of any signal aliasing: either the removal of long-wavelength ocean signals or the introduction of additional error signals. From these studies, it was concluded that sinusoidal approximation of the dominant one-cycle-per-revolution orbit error over arc lengths of 11,500 km did not remove a significant mesoscale ocean signal. The use of TOPEX/Poseidon dual crossovers with ERS-1 was shown to substantially improve the radial accuracy of ERS-1, at the cost of some absorption of small TOPEX/Poseidon errors. The extraction of marine geoid information is of great interest to the oceanographic community and was the subject of the second half of this thesis. First, through the determination of regional mean sea surfaces using Geosat data, it was demonstrated that a dataset with 70 cm orbit error contamination could produce a marine geoid map which agrees to better than 12 cm with an accurate regional high-resolution gravimetric geoid. This study was then developed into Optimal Fourier Transform Interpolation, a technique capable of analysing complete altimeter datasets for the determination of consistent global high-resolution geoid maps. The method exploits the regular nature of the ascending and descending data subsets, making possible the application of fast Fourier transform algorithms. Quantitative assessment of the method was limited by the lack of global ground-truth gravity data, but qualitative results indicate good signal recovery from a single 35-day cycle.
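The sinusoidal orbit-error correction lends itself to a compact illustration. Below is a minimal least-squares sketch of fitting a bias plus a one-cycle-per-revolution sinusoid to crossover height residuals along an arc; the orbital period, amplitudes and noise level are invented for illustration and are not the thesis's values.

```python
import numpy as np

def fit_one_cpr_error(t, residuals, t_rev):
    """Least-squares fit of a bias plus a one-cycle-per-revolution
    sinusoid, e(t) = c0 + c1*sin(w*t) + c2*cos(w*t), to crossover
    height residuals along a single arc."""
    w = 2.0 * np.pi / t_rev                 # 1-cpr angular frequency
    A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    coeffs, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coeffs

# Synthetic arc: a 0.4 m 1-cpr error plus 5 cm noise over one revolution.
t = np.linspace(0.0, 6035.0, 300)           # seconds; period illustrative
rng = np.random.default_rng(0)
w = 2.0 * np.pi / 6035.0
resid = 0.40 * np.sin(w * t) + 0.05 * rng.standard_normal(t.size)
c0, c1, c2 = fit_one_cpr_error(t, resid, t_rev=6035.0)
corrected = resid - (c0 + c1 * np.sin(w * t) + c2 * np.cos(w * t))
```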
Abstract:
A technique is presented for the development of a high-precision, high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. This methodology employs a novel technique that combines the Stokes and Vening Meinesz transformations, again in the spectral domain, and allows the presentation of a new enhanced GEOSAT gravity anomaly field.
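The Stokes transformation has a particularly simple spectral form in the planar (flat-earth) approximation, which is enough to sketch the idea; whether the thesis uses exactly this form is an assumption, and the function below is illustrative only.

```python
import numpy as np

def gravity_anomaly_from_geoid(N, dx, dy, gamma=9.81):
    """Inverse Stokes in the planar spectral approximation:
    Dg(k) = gamma * |k| * N(k), with |k| in radians per metre.
    N is a geoid-height grid in metres; the result is in mGal."""
    ny, nx = N.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kmag = np.hypot(*np.meshgrid(kx, ky))   # |k| over the grid
    dg = np.fft.ifft2(gamma * kmag * np.fft.fft2(N)).real
    return dg * 1e5                         # m/s^2 -> mGal
```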
Abstract:
Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts, a reservoir of real-time data such as temperature, rainfall and wind speed can now be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining usable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored toward the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even where no instrumentation exists, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
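The abstract does not say which interpolation method the system uses; as a flavour of the simplest option, here is an inverse-distance-weighting sketch for a point query, with all coordinates and readings invented.

```python
import numpy as np

def idw(query, stations, values, power=2.0):
    """Inverse-distance-weighted estimate at `query` (lon, lat)
    from nearby station readings."""
    d = np.linalg.norm(stations - np.asarray(query), axis=1)
    if np.any(d < 1e-12):                 # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Temperature at an un-instrumented point from three nearby stations.
stations = np.array([[-1.9, 52.5], [-2.1, 52.4], [-1.8, 52.3]])
temps = np.array([14.2, 13.8, 14.5])
print(idw((-2.0, 52.45), stations, temps))
```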
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services; this could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
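A minimal sketch of the Monte Carlo treatment mentioned at the end: observation errors described as marginal Gaussians are sampled, pushed through an arbitrary interpolator, and summarised as statistics. The interpolator and all numbers are placeholders, not INTAMAP's.

```python
import numpy as np

def propagate(obs_mean, obs_sd, interpolate, n_draws=1000, seed=0):
    """Draw realisations of the observations from marginal Gaussians,
    push each draw through the interpolator, and summarise the
    predictions; the realisations themselves could equally be returned."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(obs_mean, obs_sd, size=(n_draws, len(obs_mean)))
    preds = np.array([interpolate(d) for d in draws])
    return preds.mean(), preds.std()

# Toy interpolator: a plain average of the observed values.
mean, sd = propagate(np.array([1.0, 2.0, 4.0]),
                     np.array([0.1, 0.2, 0.1]),
                     lambda y: float(np.mean(y)))
```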
Abstract:
We show that inserting pilot tones with frequency intervals inversely proportional to the subcarrier index greatly improves dispersion estimation performance compared to the equal-spacing design in optical fast orthogonal frequency division multiplexing (F-OFDM). With the proposed design, a 20-Gbit/s four-amplitude-shift-keying optical F-OFDM system with 840-km transmission without optical dispersion compensation is experimentally demonstrated. It is shown that a single F-OFDM symbol with six pilot tones can achieve near-optimal estimation performance for the 840-km dispersion, in contrast to the minimum of ten pilot tones required by an equal-spacing design with either cubic or Fourier-transform-based interpolation. © 2013 Elsevier B.V. All rights reserved.
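The abstract gives only the rule "intervals inversely proportional to the subcarrier index"; one plausible reading, shown purely as an assumption, makes the gap after each pilot proportional to the reciprocal of its index, so pilots crowd together at higher indices.

```python
import numpy as np

def pilot_indices(n_pilots, start=4, c=64):
    """Pilot subcarrier indices with the interval after each pilot
    proportional to 1/index (our reading of the design rule; the
    paper's exact placement may differ)."""
    idx = [start]
    while len(idx) < n_pilots:
        step = max(1, round(c / idx[-1]))   # interval ~ c / index
        idx.append(idx[-1] + step)
    return np.array(idx)

print(pilot_indices(6))            # e.g. [ 4 20 23 26 28 30]
print(np.diff(pilot_indices(6)))   # shrinking intervals
```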
Abstract:
This paper presents a novel intonation modelling approach and demonstrates its applicability using the Standard Yorùbá language. Our approach is motivated by the theory that abstract and realised forms of intonation and other dimensions of prosody should be modelled within a modular and unified framework. In our model, this framework is implemented using the Relational Tree (R-Tree) technique. The R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. The R-Tree for an utterance is generated in two steps. First, the abstract structure of the waveform, called the Skeletal Tree (S-Tree), is generated using tone phonological rules for the target language. Second, the numerical values of the perceptually significant peaks and valleys on the S-Tree are computed using a fuzzy-logic-based model. The resulting points are then joined by applying interpolation techniques. The actual intonation contour is synthesised by the Pitch Synchronous Overlap and Add (PSOLA) technique using the Praat software. We performed both quantitative and qualitative evaluations of our model. The preliminary results suggest that, although the model does not predict the numerical speech data as accurately as contemporary data-driven approaches, it produces synthetic speech with comparable intelligibility and naturalness. Furthermore, our model is easy to implement, interpret and adapt to other tone languages.
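The abstract leaves the interpolation technique unspecified; a monotone cubic (PCHIP) join through the computed peaks and valleys is one natural choice because it avoids overshoot between targets. The target values below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Invented peak/valley targets: (time in s, F0 in Hz).
t_targets = np.array([0.10, 0.25, 0.42, 0.60, 0.85])
f0_targets = np.array([120.0, 145.0, 110.0, 132.0, 100.0])

# A monotone cubic join passes through every target without the
# overshoot a plain cubic spline can introduce between targets.
contour = PchipInterpolator(t_targets, f0_targets)
t = np.linspace(0.10, 0.85, 200)
f0 = contour(t)                  # contour to hand to PSOLA resynthesis
```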
Abstract:
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. Fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimises the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and the approximate interpolation of the multivariate functions involved. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this short paper.
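For reference, the objective FPD minimises is the Kullback-Leibler divergence of the closed-loop description from its desired (ideal) counterpart, written here in standard form; the notation is ours, not the paper's.

```latex
% Kullback-Leibler divergence of the closed-loop description f from
% the desired closed-loop description f^I, minimised in FPD over the
% admissible randomised control laws; d denotes the closed-loop data
% trajectory.
D\bigl(f \,\big\|\, f^{I}\bigr)
  = \int f(d)\,\ln\frac{f(d)}{f^{I}(d)}\,\mathrm{d}d
```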
Abstract:
The following problem, suggested by Laguerre's Theorem (1884), remains open: characterize all real sequences $\{\mu_k\}_{k=0}^{\infty}$ which have the zero-diminishing property; that is, if $p(x) = \sum_{k=0}^{n} a_k x^k$ is any real polynomial, then $\sum_{k=0}^{n} \mu_k a_k x^k$ has no more real zeros than $p(x)$. In this paper this problem is solved under the additional assumption of a weak growth condition on the sequence $\{\mu_k\}_{k=0}^{\infty}$, namely $\lim_{n \to \infty} |\mu_n|^{1/n} < \infty$. More precisely, it is established that the real sequence $\{\mu_k\}_{k \ge 0}$ is a weakly increasing zero-diminishing sequence if and only if there exist $\sigma \in \{+1, -1\}$ and an entire function $\Phi(z) = b\,e^{az} \prod_{n \ge 1} \left(1 + \tfrac{z}{\alpha_n}\right)$, with $a, b \in \mathbb{R}$, $b \neq 0$, $\alpha_n > 0$ for all $n \ge 1$ and $\sum_{n \ge 1} 1/\alpha_n < \infty$, such that $\mu_k = \sigma^k / \Phi(k)$ for all $k \ge 0$.
Abstract:
Cardiotocographic data provide physicians with information about foetal development and, through assessment of specific parameters (such as accelerations and uterine contractions), permit the assessment of conditions such as foetal distress. An incorrect evaluation of foetal status can of course be very dangerous. In recent decades, to improve the interpretation of cardiotocographic recordings, great interest has been dedicated to FHRV spectral analysis. It is worth recalling that FHR is intrinsically an unevenly sampled series and that, to obtain evenly sampled series, many commercial cardiotocographs use a zero-order interpolation (storage rate of CTG data equal to 4 Hz). This is not suitable for frequency analyses because the interpolation introduces alterations in the FHR power spectrum. In particular, this interpolation process can produce artifacts and an attenuation of the high-frequency components of the PSD that, for example, affects the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), which represents an important clinical parameter. In order to estimate the frequency spectrum alterations due to zero-order interpolation and other CTG storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different storage rates), and computed SVB values from the PSD. For PSD estimation, we chose the Lomb method, as suggested by other authors for application to uneven HR series. ©2009 IEEE.
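A minimal sketch of the suggested approach, using SciPy's Lomb periodogram directly on the uneven series; the LF/HF band edges below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import lombscargle

def svb_from_uneven(t, fhr, lf=(0.03, 0.15), hf=(0.15, 1.0)):
    """LF/HF power ratio (sympatho-vagal balance) computed directly on
    an unevenly sampled FHR series with the Lomb periodogram, avoiding
    the spectral distortion introduced by zero-order interpolation."""
    freqs = np.linspace(0.01, 1.0, 500)               # Hz
    pxx = lombscargle(t, fhr - fhr.mean(), 2 * np.pi * freqs)
    lf_power = pxx[(freqs >= lf[0]) & (freqs < lf[1])].sum()
    hf_power = pxx[(freqs >= hf[0]) & (freqs < hf[1])].sum()
    return lf_power / hf_power
```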
Abstract:
Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugate ocular oscillations that can arise in the first months of life. The pathogenesis of congenital nystagmus is still under investigation. In general, CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, image stabilisation is still achieved during the short periods in which eye velocity slows down while the target image is placed onto the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse its main features, such as shape, amplitude and frequency. Using eye movement recordings, it is also possible to compute estimated visual acuity predictors: analytical functions which estimate expected visual acuity from signal features such as foveation time and foveation position variability. The use of such functions adds information to typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning or monitoring. This study focuses on the robust detection of CN patients' foveations; specifically, it proposes a method to recognise the exact signal tracts in which a subject foveates. This paper also analyses foveation sequences. About 50 eye-movement recordings, either infrared-oculographic or electro-oculographic, from different CN subjects were acquired. The results suggest that an exponential interpolation for the slow phases of nystagmus could improve foveation time computation and reduce the influence of braking saccades and data noise. Moreover, a concise description of foveation sequence variability can be achieved using non-fitting splines. © 2009 Springer Berlin Heidelberg.
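A sketch of what an exponential interpolation of a single slow-phase tract might look like; the exact functional form is an assumption on our part (an increasing-velocity exponential is common in the CN literature), and all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def slow_phase(t, x0, amp, tau):
    """Increasing-velocity exponential, x(t) = x0 + amp*(exp(t/tau) - 1)."""
    return x0 + amp * (np.exp(t / tau) - 1.0)

# Fit one detected slow-phase tract: eye position (deg) versus time (s).
t = np.linspace(0.0, 0.30, 90)
rng = np.random.default_rng(1)
x = slow_phase(t, 0.2, 0.8, 0.25) + 0.05 * rng.standard_normal(t.size)
params, _ = curve_fit(slow_phase, t, x, p0=(0.0, 1.0, 0.2))
```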
Abstract:
This paper proposes a methodological scheme for photovoltaic (PV) simulator design. Taking advantage of a digital controller system, linear interpolation is proposed for precise curve fitting with high computational efficiency. A novel control strategy that directly handles two different duty cycles is proposed and implemented to achieve full-range operation, including short-circuit (SC) and open-circuit (OC) conditions. Systematic design procedures for both hardware and algorithm are explained, and a prototype is built. Experimental results confirm accurate steady-state performance under different load conditions, including SC and OC. This low-power apparatus can be adopted for PV education and research on a limited budget.
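A minimal sketch of the table-lookup idea behind such fitting: with an I-V characteristic stored as breakpoints, linear interpolation between them is cheap enough for a digital controller. The breakpoint values are invented for illustration.

```python
import numpy as np

# Invented I-V breakpoints for one irradiance/temperature operating
# point; a real table would come from the PV model or a datasheet.
v_table = np.array([0.0, 5.0, 10.0, 15.0, 18.0, 20.0, 21.5])    # V
i_table = np.array([3.00, 2.98, 2.95, 2.80, 2.30, 1.20, 0.00])  # A

def pv_current(v):
    """Reference current for a measured terminal voltage via linear
    interpolation, valid from short circuit (v=0) to open circuit."""
    return float(np.interp(v, v_table, i_table))

print(pv_current(12.3))
```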
Abstract:
2000 Mathematics Subject Classification: 26E25, 41A35, 41A36, 47H04, 54C65.
Abstract:
MSC 2010: 26A33, 34A08, 34K37
Abstract:
The reliability of power converters is of crucial importance in switched reluctance motor drives used for safety-critical applications. Open-circuit faults in power converters cause the motor to run in unbalanced states and, if left untreated, lead to damage to the motor and power modules, and even to catastrophic failure of the whole drive system. This study focuses on using a single current sensor to detect open-circuit faults accurately. An asymmetrical half-bridge converter is considered, and single-phase-open and two-phase-open faults are analysed. Three different bus positions are defined. On the basis of a fast Fourier transform algorithm with Blackman-window interpolation, the bus current spectra before and after open-circuit faults are analysed in detail. The fault characteristics are extracted accurately by normalisation of the phase fundamental frequency component and the double phase fundamental frequency component, and the fault characteristics of the three bus detection schemes are compared. The open-circuit faults can be located by finding the relationship between the bus current and the rotor position. The effectiveness of the proposed diagnosis method is validated by simulation results and experimental tests.
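A minimal sketch of the spectral part of the scheme, assuming a sampled bus current: a Blackman window is applied before the FFT and the fundamental and double-fundamental lines are read off. The paper's inter-bin interpolation is omitted, and its exact normalisation is not reproduced; normalising to the largest spectral line below is our assumption.

```python
import numpy as np

def fault_signature(i_bus, fs, f1):
    """Blackman-windowed FFT of the bus current; returns the amplitudes
    of the phase fundamental (f1) and double fundamental (2*f1) lines,
    normalised to the largest spectral line."""
    n = len(i_bus)
    win = np.blackman(n)
    spec = 2.0 * np.abs(np.fft.rfft(i_bus * win)) / win.sum()
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    a1 = spec[np.argmin(np.abs(freqs - f1))]
    a2 = spec[np.argmin(np.abs(freqs - 2.0 * f1))]
    return a1 / spec.max(), a2 / spec.max()
```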
Abstract:
Limited literature regarding parameter estimation of dynamic systems has been identified as the central reason for the absence of parametric bounds in chaotic time series. The literature suggests that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. Therefore, a parameter estimation technique could make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds it is possible to improve the predictability of a time series even when the dynamics, or the mathematical model, of that series is not known a priori. In our attempt to improve the predictability of various time series, we have established bounds for a set of unknown parameters. These are: (i) the embedding dimension used to unfold a set of observations in the phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use to avoid detection of false neighbors and (iv) the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view. In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
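Of the four bounded parameters, the first two define the standard time-delay embedding; a sketch with assumed values follows.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: row i is [x(i), x(i+tau), ...,
    x(i+(dim-1)*tau)], reconstructing the phase space from a scalar
    series.  `dim` and `tau` are two of the parameters the study
    establishes bounds for."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[j * tau : j * tau + n] for j in range(dim)])

rng = np.random.default_rng(2)
x = np.sin(np.linspace(0.0, 60.0, 600)) + 0.01 * rng.standard_normal(600)
X = delay_embed(x, dim=3, tau=8)   # 3-D reconstructed trajectory
```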