944 results for Trigonometric interpolation


Relevance:

10.00%

Publisher:

Abstract:

Urquhart, C., Spink, S., Thomas, R. & Weightman, A. (2007). Developing a toolkit for assessing the impact of health library services on patient care. Report to LKDN (Libraries and Knowledge Development Network). Aberystwyth: Department of Information Studies, Aberystwyth University. Sponsorship: Libraries and Knowledge Development Network/NHS

Relevance:

10.00%

Publisher:

Abstract:

Z. Huang and Q. Shen. Fuzzy interpolative reasoning via scale and move transformation. IEEE Transactions on Fuzzy Systems, 14(2):340-359, 2006.

Relevance:

10.00%

Publisher:

Abstract:

Z. Huang and Q. Shen. Scale and move transformation-based fuzzy interpolative reasoning: A revisit. Proceedings of the 13th International Conference on Fuzzy Systems, pages 623-628, 2004.

Relevance:

10.00%

Publisher:

Abstract:

Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three dimensional motion estimates from ultrasound images of the deforming material. The novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint in the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piecewise constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
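
The core idea of the second algorithm can be sketched in a few lines. The toy model below (an assumed spring-chain forward problem, not the thesis implementation) recovers a stiffness profile from noisy displacement data by minimizing a regularized misfit with a quasi-Newton optimizer, the family of method (BFGS) the abstract names; the quadratic smoothness penalty stands in for the TVD regularization used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

n = 20
rng = np.random.default_rng(0)
k_true = 1.0 + 2.0 * (np.arange(n) >= 10)          # stiff "inclusion" in a chain of springs
L = np.tril(np.ones((n, n)))                       # unit load: u_i = sum_{j<=i} 1/k_j
u_meas = L @ (1.0 / k_true) + 1e-3 * rng.standard_normal(n)

def objective(log_k, alpha=1e-4):
    k = np.exp(log_k)                              # log parametrization keeps stiffness positive
    misfit = L @ (1.0 / k) - u_meas                # predicted minus measured displacements
    smooth = np.diff(k) @ np.diff(k)               # simple quadratic smoothness penalty
    return 0.5 * misfit @ misfit + alpha * smooth

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
print(np.round(np.exp(res.x), 2))                  # recovered stiffness profile
```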

Relevance:

10.00%

Publisher:

Abstract:

The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
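
As a rough illustration of the ingredients named above (and not the published filter), the sketch below builds a small multiscale array of oriented Gabor detectors and records, per pixel, which orientation and size wins; the image content and all filter parameters are invented for the example.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor(size, sigma, theta, wavelength):
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

img = np.zeros((64, 64))
img[20:44, 30:34] = 1.0                            # a thin vertical bar as the "figure"

orientations = np.linspace(0.0, np.pi, 4, endpoint=False)
scales = [2.0, 4.0]
resp = np.stack([convolve(img, gabor(15, s, t, 2 * s))
                 for s in scales for t in orientations])
best = np.abs(resp).argmax(axis=0)                 # per-pixel winning detector index
scale_idx, orient_idx = np.divmod(best, len(orientations))
print(scale_idx[32, 31], orient_idx[32, 31])       # winning size and orientation on the bar
```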

Relevance:

10.00%

Publisher:

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
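
The general incremental algorithm itself is not reproduced here, but the term-order dependence it builds on is easy to see with an off-the-shelf Gröbner basis routine (sympy); the ideal below is an arbitrary illustrative example.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
ideal = [x**2 + y**2 - 1, x*y - 1]                 # arbitrary example ideal
print(groebner(ideal, x, y, order='lex'))          # basis under a lexicographic term order
print(groebner(ideal, x, y, order='grevlex'))      # a different order yields a different basis
```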

Relevance:

10.00%

Publisher:

Abstract:

The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0,∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c e^(Ax) b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is the minimal realization of the EPT function. The minimal triple is only unique up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European Options with specific times to maturity. Formulae for discretely monitored Lookback Options and 2-Period Bermudan Options are also provided. Certain Greeks, including Delta and Gamma, of these options are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
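
A minimal sketch of the (A, b, c) representation mentioned above: the Gamma(2,1) density x·exp(-x) is a simple EPT function, and evaluating c·exp(Ax)·b with one of its minimal realizations reproduces the closed form (the triple below is shown only for illustration).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [ 0.0, -1.0]])
b = np.array([0.0, 1.0])                           # column vector of the realization
c = np.array([1.0, 0.0])                           # row vector of the realization

def ept(x):
    return c @ expm(A * x) @ b                     # c * e^(Ax) * b

xs = np.linspace(0.0, 5.0, 6)
print([round(ept(x), 4) for x in xs])
print([round(x * np.exp(-x), 4) for x in xs])      # matches x * exp(-x)
```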

Relevance:

10.00%

Publisher:

Abstract:

Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
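
To make the evaluation/interpolation duality behind these decoders concrete, the toy sketch below encodes a Reed-Solomon codeword by evaluating the message polynomial over a small prime field and rebuilds a missing symbol by Lagrange interpolation; the field size, code parameters and message are invented for the example (practical implementations work over binary extension fields).

```python
p = 31                                   # prime field GF(31), small for illustration
k, n = 3, 7                              # message symbols, codeword symbols
msg = [5, 17, 2]                         # message = coefficients of a degree < k polynomial

def encode(msg, n, p):
    # Evaluation-based encoding: the codeword is the message polynomial evaluated at n points.
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p for x in range(n)]

def lagrange_eval(pts, x_eval, p):
    # Evaluate the unique degree < len(pts) polynomial through pts at x_eval (mod p).
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = num * (x_eval - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p   # Fermat inverse of den
    return total

codeword = encode(msg, n, p)
pts = [(x, codeword[x]) for x in range(k)]         # any k clean symbols determine the rest
print(lagrange_eval(pts, 6, p), codeword[6])       # interpolation rebuilds symbol 6
```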

Relevance:

10.00%

Publisher:

Abstract:

The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l, with L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths into the original field. The defining question is: which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods, using iterated function systems (IFS) and fractal Brownian surfaces (FBS), that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km²) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
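
The FBS side of the comparison can be illustrated with a standard spectral-synthesis construction (a generic fractional Brownian surface generator, not the authors' implementation): white noise is shaped in the Fourier domain so the field has a prescribed spectral exponent β, giving the kind of subgrid-scale variability a coarse product can be dressed with.

```python
import numpy as np

def fbm_surface(n, beta, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None]**2 + kx[None, :]**2)    # radial wavenumber
    k[0, 0] = 1.0                                   # avoid division by zero at the mean
    amp = k ** (-beta / 2.0)                        # power spectrum ~ k^(-beta)
    phase = np.exp(2j * np.pi * rng.random((n, n))) # random phases
    field = np.fft.ifft2(amp * phase).real
    return (field - field.mean()) / field.std()     # standardized fractal surface

surface = fbm_surface(64, beta=2.5)
print(surface.shape, round(float(surface.std()), 2))
```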

Relevance:

10.00%

Publisher:

Abstract:

Electromagnetic metamaterials are artificially structured media typically composed of arrays of resonant electromagnetic circuits, the dimension and spacing of which are considerably smaller than the free-space wavelengths of operation. The constitutive parameters for metamaterials, which can be obtained using full-wave simulations in conjunction with numerical retrieval algorithms, exhibit artifacts related to the finite size of the metamaterial cell relative to the wavelength. Liu showed that the complicated, frequency-dependent forms of the constitutive parameters can be described by a set of relatively simple analytical expressions. These expressions provide useful insight and can serve as the basis for more intelligent interpolation or optimization schemes. Here, we show that the same analytical expressions can be obtained using a transfer-matrix formalism applied to a one-dimensional periodic array of thin, resonant, dielectric or magnetic sheets. The transfer-matrix formalism breaks down, however, when both electric and magnetic responses are present in the same unit cell, as it neglects the magnetoelectric coupling between unit cells. We show that an alternative analytical approach based on the same physical model must be applied for such structures. Furthermore, in addition to the intercell coupling, electric and magnetic resonators within a unit cell may also exhibit magnetoelectric coupling. For such cells, we find an analytical expression for the effective index, which displays markedly characteristic dispersion features that depend on the strength of the coupling coefficient. We illustrate the applicability of the derived expressions by comparing to full-wave simulations on magnetoelectric unit cells. We conclude that the design of metamaterials with tailored simultaneous electric and magnetic response, such as negative index materials, will generally be complicated by potentially unwanted magnetoelectric coupling. © 2010 The American Physical Society.
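
The one-dimensional transfer-matrix picture can be sketched directly (the sheet response below is a made-up Lorentzian and all parameter values are illustrative): a unit cell is half a free-space propagation, a thin electric sheet, and another half propagation, and the effective Bloch index follows from cos(n_eff k0 d) = Tr(M)/2.

```python
import numpy as np

ETA0, C0 = 376.73, 3e8                              # free-space impedance, speed of light

def sheet_admittance(f, f0=10e9, F=0.4, gamma=2e8):
    # Toy Lorentzian electric sheet response; f0, F, gamma are illustrative only.
    return 1j * F * f**2 / (ETA0 * (f0**2 - f**2 + 1j * gamma * f))

def effective_index(f, d=0.005):
    k0 = 2 * np.pi * f / C0
    half = np.array([[np.cos(k0 * d / 2), 1j * ETA0 * np.sin(k0 * d / 2)],
                     [1j * np.sin(k0 * d / 2) / ETA0, np.cos(k0 * d / 2)]])
    cell = half @ np.array([[1, 0], [sheet_admittance(f), 1]]) @ half
    return np.arccos(np.trace(cell) / 2) / (k0 * d)  # Bloch index of the periodic lattice

for f in (8e9, 9.5e9, 11e9):
    print(f / 1e9, effective_index(f))
```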

Relevance:

10.00%

Publisher:

Abstract:

A new general cell-centered solution procedure based upon the conventional control or finite volume (CV or FV) approach has been developed for numerical heat transfer and fluid flow, which encompasses both structured and unstructured meshes for any kind of mixed polygon cell. Unlike conventional FV methods for structured and block-structured meshes and both FV and FE methods for unstructured meshes, the irregular control volume (ICV) method does not require the shape of the element or cell to be predefined, because it simply exploits the concept of fluxes across cell faces. That is, the ICV method enables meshes employing mixtures of triangular, quadrilateral, and any other higher order polygonal cells to be exploited using a single solution procedure. The ICV approach otherwise preserves all the desirable features of conventional FV procedures for a structured mesh; in the current implementation, collocation of variables at cell centers is used with a Rhie and Chow interpolation (to suppress pressure oscillation in the flow field) in the context of the SIMPLE pressure correction solution procedure. In fact, all other structured-mesh FV methods may be perceived as a subset of the ICV formulation. The new ICV formulation is benchmarked using two standard computational fluid dynamics (CFD) problems, i.e., the moving lid cavity and the natural convection driven cavity. Both cases were solved with a variety of structured and unstructured meshes, the latter exploiting mixed polygonal cell meshes. The polygonal mesh experiments show a higher degree of accuracy for equivalent meshes (in nodal density terms) using triangular or quadrilateral cells; these results may be interpreted in a manner similar to the CUPID scheme used in structured meshes for reducing numerical diffusion for flows with changing direction.
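
The face-based bookkeeping that makes the cell shape irrelevant reduces to a few lines (a schematic diffusion update on an assumed toy mesh, not the ICV solver itself): every face stores its owner and neighbour cells, and each face flux is added to one cell and subtracted from the other, so triangles, quadrilaterals and higher polygons are handled identically.

```python
import numpy as np

# Each face: (owner cell, neighbour cell, face area, centre-to-centre distance).
faces = [(0, 1, 1.0, 0.5), (1, 2, 1.0, 0.5), (2, 3, 0.7, 0.4)]
phi = np.array([1.0, 0.8, 0.5, 0.2])                # cell-centred scalar (e.g. temperature)
vol = np.array([0.25, 0.25, 0.2, 0.2])              # cell volumes
k, dt = 0.1, 0.01                                   # diffusivity, time step

flux_sum = np.zeros_like(phi)
for owner, nbr, area, dist in faces:
    flux = k * area * (phi[nbr] - phi[owner]) / dist  # diffusive flux across the face
    flux_sum[owner] += flux
    flux_sum[nbr] -= flux                             # equal and opposite: discrete conservation

phi_new = phi + dt * flux_sum / vol
print(np.round(phi_new, 4))
```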

Relevance:

10.00%

Publisher:

Abstract:

Semi-Lagrangian finite volume schemes for the numerical approximation of linear advection equations are presented. These schemes are constructed so that the conservation properties are preserved by the numerical approximation. This is achieved using an interpolation procedure based on area-weighting. Numerical results are presented illustrating some of the features of these schemes.
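
A one-dimensional sketch of the idea (an assumed setup, not the schemes in the paper): each cell is traced back along the flow, and its new average is the length-weighted (the 1D analogue of area-weighted) combination of the cells its departure interval overlaps, which keeps the total mass exactly conserved.

```python
import numpy as np

n, dx, u, dt = 50, 1.0, 0.3, 1.0                    # cells, cell width, velocity, time step
q = np.zeros(n)
q[20:25] = 1.0                                      # initial cell averages
shift = u * dt / dx                                 # displacement in cell widths

def step(q, shift):
    new = np.zeros_like(q)
    for i in range(len(q)):
        left = i - shift                            # left edge of the departure interval
        j = int(np.floor(left)) % len(q)
        w = left - np.floor(left)                   # fractional overlap = "area" weight
        new[i] = (1 - w) * q[j] + w * q[(j + 1) % len(q)]
    return new

q1 = step(q, shift)
print(round(q.sum(), 6), round(q1.sum(), 6))        # total mass is unchanged
```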

Relevance:

10.00%

Publisher:

Abstract:

A new finite volume method for solving the incompressible Navier-Stokes equations is presented. The main features of this method are the location of the velocity components and pressure on different staggered grids and a semi-Lagrangian method for the treatment of convection. An interpolation procedure based on area-weighting is used for the convection part of the computation. The method is applied to flow through a constricted channel, and results are obtained for Reynolds numbers, based on half the flow rate, up to 1000. The behavior of the vortex in the salient corner is investigated qualitatively and quantitatively, and excellent agreement is found with the numerical results of Dennis and Smith [Proc. Roy. Soc. London A, 372 (1980), pp. 393-414] and the asymptotic theory of Smith [J. Fluid Mech., 90 (1979), pp. 725-754].

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem space. It is an interpolative method which is proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem, and to a simulated sparse nominal value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest neighbour methods on sparse case bases.
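
A hedged sketch of the retrieval idea (not the authors' code): the prediction is the candidate solution minimizing a Shepard-weighted error measured with a solution-space metric, so nominal targets can be interpolated from the k nearest cases; the case base and both metrics below are invented for illustration.

```python
import numpy as np

def gsnn_predict(x_query, X, Y, candidates, d_sol, k=3, p=2):
    d = np.linalg.norm(X - x_query, axis=1)            # problem-space distances
    idx = np.argsort(d)[:k]                            # k nearest cases
    w = 1.0 / np.maximum(d[idx], 1e-12) ** p           # Shepard (inverse-distance) weights
    errors = [sum(wi * d_sol(c, Y[i]) ** 2 for wi, i in zip(w, idx))
              for c in candidates]
    return candidates[int(np.argmin(errors))]          # candidate with minimal weighted error

# Toy nominal problem with a discrete solution-space metric.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = ['low', 'low', 'high', 'high']
d_sol = lambda a, b: 0.0 if a == b else 1.0
print(gsnn_predict(np.array([1.8]), X, Y, ['low', 'high'], d_sol))
```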

Relevance:

10.00%

Publisher:

Abstract:

The overall objective of this work is to develop a computational model of particle degradation during dilute-phase pneumatic conveying. A key feature of such a model is the prediction of particle breakage due to particle–wall collisions in pipeline bends. This paper presents a method for calculating particle impact degradation propensity under a range of particle velocities and particle sizes. It is based on interpolation over impact data obtained in a new laboratory-scale degradation tester. The method is tested and validated against experimental results for degradation at a 90° impact angle of a full-size distribution sample of granulated sugar. In subsequent work, the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline.
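
The interpolation step can be pictured as below (the numbers are invented, not the tester's data): measured breakage fractions on a (velocity, particle size) grid are interpolated bilinearly so that degradation propensity can be queried at intermediate impact conditions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

velocities = np.array([5.0, 10.0, 15.0, 20.0])      # impact velocity, m/s
sizes = np.array([0.5, 1.0, 1.5])                   # particle size, mm
breakage = np.array([[0.02, 0.03, 0.05],            # hypothetical breakage fractions
                     [0.05, 0.08, 0.12],
                     [0.10, 0.16, 0.22],
                     [0.18, 0.26, 0.35]])

propensity = RegularGridInterpolator((velocities, sizes), breakage)
print(propensity([[12.0, 1.2]]))                    # breakage propensity at 12 m/s, 1.2 mm
```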