929 results for Matrix Transform Method


Relevance:

30.00%

Publisher:

Abstract:

Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with classical direct methods can take a very long time, as their cost grows with the size of the matrix. The computational complexity of stochastic Monte Carlo methods depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors. (C) 2007 Elsevier B.V. All rights reserved.
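As a rough sketch of the idea (not the load-balanced Grid implementation of the paper), a Monte Carlo estimator for one row of a dense inverse can sample the Neumann series B⁻¹ = Σ_k (I − B)^k with random walks; the hypothetical helper below assumes I − B has no zero rows and spectral radius below one:

```python
import numpy as np

def mc_inverse_row(B, i, n_chains=2000, chain_len=20, seed=0):
    """Estimate row i of B^{-1} by sampling the Neumann series
    B^{-1} = sum_k (I - B)^k, valid when the spectral radius of I - B is < 1.
    Cost depends on the number and length of the chains, not directly on n."""
    n = B.shape[0]
    C = np.eye(n) - B
    row_sums = np.abs(C).sum(axis=1)            # assumed nonzero in this sketch
    P = np.abs(C) / row_sums[:, None]           # transition probabilities
    rng = np.random.default_rng(seed)
    est = np.zeros(n)
    for _ in range(n_chains):
        state, w = i, 1.0
        est[state] += w                          # k = 0 term (identity)
        for _ in range(chain_len):
            nxt = rng.choice(n, p=P[state])
            w *= C[state, nxt] / P[state, nxt]   # importance weight
            state = nxt
            est[state] += w                      # contributes a (C^k)_{i,.} term
    return est / n_chains
```

Each chain is independent of the others, which is why such methods parallelize naturally across Grid nodes.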


In this paper we introduce a new algorithm, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and solving systems of linear algebraic equations. The algorithm consists of two parts: approximate inversion by Monte Carlo, and iterative refinement of the approximate inverse using a deterministic method. We present a parallel hybrid Monte Carlo algorithm that uses Monte Carlo to generate an approximate inverse and then improves its accuracy by iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations, Bx = b, the inverse matrix is used to compute the solution vector x = B⁻¹b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
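The deterministic refinement half of such a hybrid can be illustrated with the Newton–Schulz iteration (a common choice for polishing an approximate inverse; the paper's actual refinement step may differ):

```python
import numpy as np

def refine_inverse(B, X0, iters=8):
    """Newton-Schulz iterative refinement of an approximate inverse X0 of B.
    Converges quadratically when ||I - B @ X0|| < 1, so even a rough Monte
    Carlo approximation of B^{-1} is quickly polished to machine precision."""
    I = np.eye(B.shape[0])
    X = X0.copy()
    for _ in range(iters):
        X = X @ (2 * I - B @ X)   # X_{k+1} = X_k (2I - B X_k)
    return X
```

Solving Bx = b then reduces to the matrix-vector product x = X @ b.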


In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results are given for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power, i.e., an algorithm for which the probabilistic and systematic errors are of the same order, and this cost is compared with that of a corresponding deterministic method.
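For reference, the deterministic baseline for the bilinear form of a matrix power, (v, A^k h), is k matrix-vector products at O(kn²) cost for a dense matrix; ratios of successive such forms also estimate extremal eigenvalues, as in the power method. A minimal sketch:

```python
import numpy as np

def bilinear_form_power(v, A, h, k):
    """Deterministic evaluation of v^T A^k h via k matrix-vector products.
    Cost is O(k * n^2) for a dense n x n matrix A."""
    x = np.asarray(h, dtype=float).copy()
    for _ in range(k):
        x = A @ x
    return float(np.asarray(v, dtype=float) @ x)
```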


In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities, and a cavity domain, are solved to around 10 digits of accuracy with a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
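The linear-algebra core of such methods, fitting a basis of Helmholtz fundamental solutions by least squares at boundary sample points, can be sketched generically (a Python illustration with hypothetical geometry and wavenumber, not MPSpack, which is a MATLAB toolbox):

```python
import numpy as np
from scipy.special import hankel1

k = 5.0                                    # illustrative wavenumber
theta_b = 2 * np.pi * np.arange(80) / 80   # boundary samples on the unit circle
theta_c = 2 * np.pi * np.arange(40) / 40   # source points on a circle of radius 0.7
bdry = np.exp(1j * theta_b)                # points encoded as complex numbers
srcs = 0.7 * np.exp(1j * theta_c)

def fund(z, w):
    """Fundamental solution of the 2-D Helmholtz equation, up to a constant."""
    return hankel1(0, k * np.abs(z - w))

# target field: radiated by a single interior point source (known test case)
rhs = fund(bdry, 0.3 + 0.2j)
# collocation matrix: each column is one fundamental solution on the boundary
A = fund(bdry[:, None], srcs[None, :])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
residual = np.linalg.norm(A @ coef - rhs) / np.linalg.norm(rhs)
```

The least-squares residual decays exponentially as basis functions are added, the same mechanism behind the exponential convergence reported above.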


This paper introduces a method for simulating multivariate samples that have exact means, covariances, skewness and kurtosis. We introduce a new class of rectangular orthogonal matrices, which are fundamental to the methodology; we call these L matrices. They may be deterministic, parametric or data-specific in nature. The target moments determine the L matrix; infinitely many random samples with the same exact moments may then be generated by multiplying the L matrix by arbitrary random orthogonal matrices. This methodology is thus termed “ROM simulation”. Considering certain elementary types of random orthogonal matrices, we demonstrate that they generate samples with different characteristics. ROM simulation has applications to many problems that are usually resolved with standard Monte Carlo methods, but no parametric assumptions are required (unless parametric L matrices are used), so there is no sampling error caused by the discrete approximation of a continuous distribution, which is a major source of error in standard Monte Carlo simulations. For illustration, we apply ROM simulation to determine the value-at-risk of a stock portfolio.
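A minimal sketch of matching the first two moments exactly (the full ROM construction with L matrices, which also fixes skewness and kurtosis, is not reproduced here): recenter and rewhiten a random draw so its sample mean and covariance are exact, after which rotations preserving these moments generate further samples.

```python
import numpy as np

def exact_moment_sample(mu, cov, n, seed=0):
    """Generate an n x d sample whose SAMPLE mean and covariance equal mu and
    cov exactly (first two moments only; a simplified illustration, not the
    full ROM construction). Requires n > d."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    Z = rng.standard_normal((n, d))
    Z -= Z.mean(axis=0)                        # exact zero sample mean
    C = np.cov(Z, rowvar=False)                # current sample covariance
    W = np.linalg.cholesky(np.linalg.inv(C))   # whitening transform: W^T C W = I
    L = np.linalg.cholesky(cov)                # target covariance factor
    return mu + (Z @ W) @ L.T
```

There is no sampling error in the first two moments, mirroring the paper's motivation for avoiding the discrete approximation of a continuous distribution.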


The background error covariance matrix, B, is often used in variational data assimilation for numerical weather prediction as a static, and hence poor, approximation to the fully dynamic forecast error covariance matrix, Pf. In this paper the concept of an Ensemble Reduced Rank Kalman Filter (EnRRKF) is outlined. In the EnRRKF, forecast error statistics are found in a subspace defined by an ensemble of states forecast by the dynamic model. These statistics are merged in a formal way with the static statistics, which apply in the remainder of the space. The combined statistics may then be used in a variational data assimilation setting. It is hoped that the nonlinear error growth of small-scale weather systems will be accurately captured by the EnRRKF, producing accurate analyses and ultimately improved forecasts of extreme events.
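The flavour of merging ensemble statistics in their subspace with static statistics in the complement can be sketched as follows; this is only an illustrative construction under simplifying assumptions, not the EnRRKF algorithm itself:

```python
import numpy as np

def merged_covariance(ensemble, B):
    """Combine the sample covariance in the span of the ensemble perturbations
    with a static covariance B restricted to the orthogonal complement.
    ensemble: n x m array of m model states of dimension n (m < n)."""
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # perturbations
    U, _, _ = np.linalg.svd(X, full_matrices=False)       # subspace basis
    Pe = X @ X.T / (X.shape[1] - 1)                       # ensemble covariance
    Pi = np.eye(B.shape[0]) - U @ U.T                     # complement projector
    return Pe + Pi @ B @ Pi                               # Pe lives in span(U)
```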


Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subject to arbitrary noise power gain and robustness constraints, a Pareto front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work we consider designs using a loop-shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify the range of objectives and a multi-objective genetic algorithm (MOGA) to optimize the controller weights. A clonal selection algorithm is used to direct the search of the genetic algorithm towards the Pareto front. We demonstrate that, with the proposed methodology, it is possible to design higher-order controllers with superior performance in terms of response time, noise power gain and robustness.
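For two competing objectives such as noise power gain and normalized response time, the Pareto front is simply the non-dominated subset of candidate designs; a minimal sketch:

```python
def pareto_front(points):
    """Return the non-dominated subset of (objective1, objective2) pairs,
    both objectives to be minimized (e.g. noise power gain, response time).
    A point is dominated if another distinct point is at least as good in
    both objectives."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

In a MOGA, this filtering is applied to each generation's population so that selection pressure pushes the whole population towards the front.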


This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
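The sign freedom that makes an analytic SVD possible can be imitated numerically by aligning successive SVD factors along the path; a naive continuation sketch (not the paper's Euler-like methods):

```python
import numpy as np

def align_svd(X_prev, X, S, Y):
    """Flip columns of X (and the corresponding entries of S) so the
    factorization varies continuously from the previous step, keeping the
    product X @ diag(S) @ Y.T unchanged. Negative diagonal entries of S are
    allowed, exactly as in an analytic SVD."""
    signs = np.sign(np.sum(X_prev * X, axis=0))   # column-wise alignment
    signs[signs == 0] = 1.0
    return X * signs, S * signs, Y
```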


We study boundary value problems posed in a semistrip for the elliptic sine-Gordon equation, which is the paradigm of an elliptic integrable PDE in two variables. We use the method introduced by one of the authors, which provides a substantial generalization of the inverse scattering transform and can be used for the analysis of boundary- as opposed to initial-value problems. We first express the solution in terms of a 2 by 2 matrix Riemann-Hilbert problem whose "jump matrix" depends on both the Dirichlet and the Neumann boundary values. For a well-posed problem, one of these boundary values is an unknown function. This unknown function is characterised in terms of the so-called global relation, but in general this characterisation is nonlinear. We then concentrate on the case where the prescribed boundary conditions are zero along the unbounded sides of the semistrip and constant along the bounded side. This corresponds to a case of the so-called linearisable boundary conditions; however, a major difficulty for this problem is the existence of non-integrable singularities of the function q_y at the two corners of the semistrip; these singularities are generated by the discontinuities of the boundary condition at these corners. Motivated by the recent solution of the analogous problem for the modified Helmholtz equation, we introduce an appropriate regularisation which overcomes this difficulty. Furthermore, by mapping the basic Riemann-Hilbert problem to an equivalent modified Riemann-Hilbert problem, we show that the solution can be expressed in terms of a 2 by 2 matrix Riemann-Hilbert problem whose jump matrix depends explicitly on the width of the semistrip L, on the constant value d of the solution along the bounded side, and on the residues at the given poles of a certain spectral function denoted by h. The determination of the function h remains open.


If acid-sensitive drugs or cells are administered orally, there is often a reduction in efficacy associated with gastric passage. Formulation into a polymer matrix is a potential method to improve their stability. The visualization of pH within these materials may help better understand the action of these polymer systems and allow comparison of different formulations. We herein describe the development of a novel confocal laser-scanning microscopy (CLSM) method for visualizing pH changes within polymer matrices and demonstrate its applicability to an enteric formulation based on chitosan-coated alginate gels. The system in question is first shown to protect an acid-sensitive bacterial strain from low pH, before being studied by our technique. Prior to this study, it had been claimed that protection by these materials is a result of buffering, but this had not been demonstrated. The visualization of pH within these matrices during exposure to a pH 2.0 simulated gastric solution showed an encroachment of acid from the periphery of the capsule, and a persistence of pH values above 2.0 within the matrix. This implies that the protective effect of the alginate-chitosan matrices is most likely due to a combination of buffering of acid as it enters the polymer matrix and the slowing of acid penetration.


We consider the numerical treatment of second kind integral equations on the real line of the form

ϕ(s) = ψ(s) + ∫_{−∞}^{+∞} κ(s−t) z(t) ϕ(t) dt,  s ∈ ℝ

(abbreviated ϕ = ψ + K_z ϕ), in which κ ∈ L₁(ℝ), z ∈ L_∞(ℝ) and ψ ∈ BC(ℝ), the space of bounded continuous functions on ℝ, are assumed known and ϕ ∈ BC(ℝ) is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to [−A, A]) via bounds on (1 − K_z)^{−1} as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on ℝ is then analysed: in the case when z is compactly supported, this leads to a coefficient matrix which allows a rapid matrix-vector multiply via the FFT. To utilise this possibility we propose a modified two-grid iteration, a feature of which is that the coarse grid matrix is approximated by a banded matrix, and analyse convergence and computational cost. In cases where z is not compactly supported, a combined finite section and two-grid algorithm can be applied and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if z (related to the boundary impedance in the application) takes values in an appropriate compact subset Q of the complex plane, then the difference between ϕ(s) and its finite section approximation computed numerically using the iterative scheme proposed is ≤ C₁ [kh log(1/(kh)) + (1 − Θ)^{−1/2} (kA)^{−1/2}] in the interval [−ΘA, ΘA] (Θ < 1) for kh sufficiently small, where k is the wavenumber and h the grid spacing. Moreover, this numerical approximation can be computed in ≤ C₂ N log N operations, where N = 2A/h is the number of degrees of freedom. The values of the constants C₁ and C₂ depend only on the set Q and not on the wavenumber k or the support of z.
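On a uniform grid the convolution kernel κ(s − t) yields a Toeplitz coefficient matrix, and the rapid matrix-vector multiply via the FFT works by circulant embedding; a generic sketch, independent of the scattering application:

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply the Toeplitz matrix T (T[i, j] = t[i - j]) by x in O(N log N)
    by embedding T in a circulant matrix of twice the size and using the FFT.
    first_col = [t_0, t_1, ..., t_{n-1}]; first_row = [t_0, t_{-1}, ..., t_{-(n-1)}]."""
    n = len(x)
    # first column of the 2n x 2n circulant embedding
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))  # circular convolution
    return y[:n].real
```

This O(N log N) multiply inside an iteration is what makes the stated C₂ N log N total cost attainable.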


Infrared polarization and intensity imagery provide complementary and discriminative information in image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information with various combination rules. It makes use of both low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. Then the low-frequency component images are combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum. The support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. With two newly defined features as variables, i.e. the low-frequency difference feature for low-frequency component images and the support-value difference feature for support value image sequences, trapezoidal membership functions are proposed and developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results of visual inspection and quantitative evaluation both indicate the superiority of the proposed method over its counterparts in fusing infrared polarization and intensity images.
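The trapezoidal membership functions used to tune the fuzzy rules have a simple closed form; the corner parameters a ≤ b ≤ c ≤ d below are illustrative, as the paper's actual values are not given here:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside (a, d), 1 on [b, c],
    with linear ramps on (a, b) and (c, d). Requires a < b <= c < d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising ramp
    return (d - x) / (d - c)       # falling ramp
```

Evaluated on a difference feature, the membership value weights how strongly each sub-combination method contributes at a given pixel or region.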


Two recent works have adapted the Kalman–Bucy filter to an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither formulation requires matrix inversion, except of the observation error covariance, which is frequently diagonal. We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background error to observational error variance, and that using the integration scheme proposed in both formulations can lead to failure. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables. Finally, the performance of our ensemble transform Kalman–Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium-complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
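The stiffness can be seen in a scalar toy reduction of the analysis-step ODE (an illustration only, not the full ensemble formulation): dp/ds = −(h²/r)p², integrated over pseudo-time s ∈ [0, 1], has exact endpoint equal to the Kalman posterior variance p₀r/(r + p₀h²).

```python
def analysis_variance_ode(p0, h, r, steps):
    """Integrate dp/ds = -(h^2 / r) * p^2 from s = 0 to 1 with explicit Euler.
    The exact endpoint is the Kalman posterior variance p0*r / (r + p0*h**2);
    for large p0/r the ODE is stiff and explicit Euler with few steps diverges."""
    p = float(p0)
    ds = 1.0 / steps
    for _ in range(steps):
        p += ds * (-(h * h / r) * p * p)
    return p
```

With p0 = h = r = 1 and many steps the scheme lands near the exact value 0.5, while a background-to-observation variance ratio of 100 with a few steps diverges to large negative values, mirroring the failure mode analysed above.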


Due to their broad differentiation potential and their persistence into adulthood, human neural crest-derived stem cells (NCSCs) harbour great potential for autologous cellular therapies, which include the treatment of neurodegenerative diseases and the replacement of complex tissues containing various cell types, as in the case of musculoskeletal injuries. Serum-free approaches often result in insufficient proliferation of stem cells, while foetal calf serum entails the use of xenogeneic medium components. Thus, there is much need for alternative cultivation strategies. In this study we describe, for the first time, a novel human-blood-plasma-based semi-solid medium for the cultivation of human NCSCs. We cultivated human neural crest-derived inferior turbinate stem cells (ITSCs) within a blood plasma matrix, where they revealed higher proliferation rates compared to a standard serum-free approach. The three-dimensionality of the matrix was investigated using helium ion microscopy. ITSCs grew within the matrix, as revealed by laser scanning microscopy. Genetic stability and maintenance of stemness characteristics were assured in 3D-cultivated ITSCs, as demonstrated by an unchanged expression profile and the capability for self-renewal. ITSCs pre-cultivated in the 3D matrix differentiated efficiently into ectodermal and mesodermal cell types, particularly including osteogenic cell types. Furthermore, ITSCs cultivated as described here could be easily infected with lentiviruses directly within the substrate for potential tracing or gene-therapeutic approaches. Taken together, the use of human blood plasma as an additive for a completely defined medium points towards a personalisable and autologous cultivation of human neural crest-derived stem cells under clinical-grade conditions.


Purpose: The aim of this study was to compare the accuracy of fit of three types of implant-supported frameworks cast in Ni-Cr alloy: specifically, a framework cast as one piece compared to frameworks cast separately in sections along the transverse or the diagonal axis and later laser welded. Materials and Methods: Three sets of similar implant-supported frameworks were constructed. The first group of six 3-unit implant-supported frameworks was cast as one piece, the second group of six was sectioned along the transverse axis of the pontic region prior to casting, and the last group of six was sectioned along the diagonal axis of the pontic region prior to casting. The sectioned frameworks were positioned in the matrix (10 N·cm torque) and laser welded. To evaluate passive fit, readings were made with an optical microscope with both screws tightened and with only one screw tightened. Data were submitted to ANOVA and the Tukey–Kramer test (p < 0.05). Results: When both screws were tightened, no differences were found between the three groups (p > 0.05). In the single-screw-tightened test, with readings made opposite to the tightened side, the group cast as one piece (57.02 ± 33.48 µm) was significantly different (p < 0.05) from the group sectioned diagonally (18.92 ± 4.75 µm) but not different (p > 0.05) from the group sectioned transversally (31.42 ± 20.68 µm). On the tightened side, no significant differences were found between the groups (p > 0.05). Conclusions: Results of this study showed that casting diagonally sectioned frameworks lowers the misfit levels of prosthetic implant-supported frameworks and also improves their passivity when compared to structures cast as one piece.