58 results for Kikuchi approximations

in Queensland University of Technology - ePrints Archive


Relevance: 20.00%

Abstract:

On the microscale, migration, proliferation and death are crucial in the development, homeostasis and repair of an organism; on the macroscale, such effects are important in the sustainability of a population in its environment. Depending on the relative rates of migration, proliferation and death, spatial heterogeneity may arise within an initially uniform field; this leads to the formation of spatial correlations and can have a negative impact upon population growth. Usually, such effects are neglected in modeling studies and simple phenomenological descriptions, such as the logistic model, are used to model population growth. In this work we outline some methods for analyzing exclusion processes which include agent proliferation, death and motility in two and three spatial dimensions with spatially homogeneous initial conditions. The mean-field description for these types of processes is of logistic form; we show that, under certain parameter conditions, such systems may display large deviations from the mean field, and suggest computationally tractable methods to correct the logistic-type description.

Relevance: 20.00%

Abstract:

We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
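A bare-bones version of the standard Krylov recipe for φ(tA)b — project onto an Arnoldi basis and evaluate φ on the small projected matrix — can be sketched as follows. This is the generic textbook construction (specialised here to symmetric A so that an eigendecomposition applies), not the error bound or the harmonic Ritz approximant developed in the paper:

```python
import numpy as np

def phi(z):
    # phi(z) = (exp(z) - 1)/z, with the removable singularity phi(0) = 1
    z = np.asarray(z, dtype=float)
    out = np.ones_like(z)
    nz = np.abs(z) > 1e-12
    out[nz] = np.expm1(z[nz]) / z[nz]
    return out

def phi_dense(M, v):
    # dense phi(M) @ v for symmetric M via eigendecomposition (reference)
    w, Q = np.linalg.eigh(M)
    return Q @ (phi(w) * (Q.T @ v))

def krylov_phi(A, b, t=1.0, m=20):
    # Arnoldi projection: phi(tA) b ~ ||b|| * V_m * phi(t H_m) * e_1
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # for symmetric A, H_m is tridiagonal, so the eigh-based helper applies
    return beta * V[:, :m] @ phi_dense(t * H[:m, :m], e1)
```

For a stiff Jacobian (eigenvalues well into the left half-plane), a modest subspace dimension m already reproduces the dense result to many digits.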

Relevance: 20.00%

Abstract:

In this paper, spectral approximations are used to compute the fractional integral and the Caputo derivative. Effective recursive formulae based on the Legendre, Chebyshev and Jacobi polynomials are developed to approximate the fractional integral, and a succinct scheme for approximating the Caputo derivative is also derived. The collocation method is proposed to solve fractional initial value problems and boundary value problems. Numerical examples are provided to illustrate the effectiveness of the derived methods.
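The recursive spectral formulas are specific to the paper; as a generic point of comparison, the Caputo derivative can also be approximated by the standard L1 finite-difference scheme, sketched here (a different, non-spectral method, shown only for illustration):

```python
import math

def caputo_l1(u, dt, alpha):
    # L1 finite-difference approximation of the Caputo derivative of
    # order 0 < alpha < 1, evaluated at the last grid point t_n.
    n = len(u) - 1
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    s = 0.0
    for j in range(n):
        w = (n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)
        s += w * (u[j + 1] - u[j])
    return c * s
```

For u(t) = t^2 the exact half-order derivative at t = 1 is Γ(3)/Γ(5/2) ≈ 1.5045, which the scheme approaches as dt shrinks.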

Relevance: 20.00%

Abstract:

Biological systems involving proliferation, migration and death are observed across all scales. For example, they govern cellular processes such as wound-healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behaviour. For example, the logistic model is frequently chosen, which is the mean-field approximation in this case. This mean-field description can be corrected by including a system of ordinary differential equations for pair-wise correlations between lattice site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification, in the form of a partial differential equation description for the evolution of pair-wise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behaviour in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, which has not been examined in detail before, and our method is successful at correcting the deviations observed in the mean-field model in these parameter regimes.

Relevance: 20.00%

Abstract:

In this paper the method of renormalization group (RG) [Phys. Rev. E 54, 376 (1996)] is related to the well-known approximations of Rytov and Born used in wave propagation in deterministic and random media. Certain problems in linear and nonlinear media are examined from the viewpoint of RG and compared with the literature on Born and Rytov approximations. It is found that the Rytov approximation forms a special case of the asymptotic expansion generated by the RG, and as such it gives a superior approximation to the exact solution compared with its Born counterpart. Analogous conclusions are reached for nonlinear equations with an intensity-dependent index of refraction where the RG recovers the exact solution. © 2008 Optical Society of America.
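The Born/Rytov distinction can be illustrated with the simplest toy case: one-way propagation u'(x) = i·k_dn·u, whose exact solution is a pure accumulated phase. First-order Rytov (exponentiating the perturbation) reproduces it exactly, while first-order Born (adding the perturbation) degrades with propagation distance. This toy problem is our illustration of the general point, not the paper's RG calculation:

```python
import cmath

def exact(k_dn, x):
    # exact solution of u'(x) = 1j*k_dn*u, u(0) = 1: a pure phase
    return cmath.exp(1j * k_dn * x)

def born1(k_dn, x):
    # first-order Born: additive correction u ~ 1 + 1j*k_dn*x
    return 1.0 + 1j * k_dn * x

def rytov1(k_dn, x):
    # first-order Rytov: exponentiated correction u ~ exp(1j*k_dn*x)
    return cmath.exp(1j * k_dn * x)

# Born error grows with distance; Rytov stays exact for this problem.
errors = [(x, abs(born1(0.3, x) - exact(0.3, x)),
              abs(rytov1(0.3, x) - exact(0.3, x)))
          for x in (1.0, 5.0, 20.0)]
```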

Relevance: 20.00%

Abstract:

Classifying each stage of a progressive disease such as Alzheimer's disease is a key issue for disease prevention and treatment. In this study, we derived structural brain networks from diffusion-weighted MRI (DWI) using whole-brain tractography, since there is growing interest in relating connectivity measures to clinical, cognitive, and genetic data. Relatively little work has used machine learning to make inferences about variations in brain networks in the progression of Alzheimer's disease. Here we developed a framework that utilizes generalized low rank approximations of matrices (GLRAM) and modified linear discrimination analysis for unsupervised feature learning and classification of connectivity matrices. We apply the methods to brain networks derived from DWI scans of 41 people with Alzheimer's disease, 73 people with early mild cognitive impairment (EMCI), 38 people with late mild cognitive impairment (LMCI), 47 elderly healthy controls and 221 young healthy controls. Our results show that this new framework can significantly improve classification accuracy when combining multiple datasets; this suggests the value of using data beyond the classification task at hand to model variations in brain connectivity.
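The GLRAM step — alternating eigendecompositions that fit common row and column projections to a set of matrices (connectivity matrices, in the study) — can be sketched generically. Dimensions, iteration count and data below are arbitrary illustrative choices:

```python
import numpy as np

def glram(As, r, n_iter=5):
    # Generalized Low Rank Approximations of Matrices: find shared
    # projections L (m x r) and R (n x r) so that A_i ~ L @ M_i @ R.T.
    m, n = As[0].shape
    R = np.eye(n, r)                          # simple initial guess
    for _ in range(n_iter):
        SL = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(SL)[1][:, -r:]     # top-r eigenvectors
        SR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(SR)[1][:, -r:]
    Ms = [L.T @ A @ R for A in As]            # reduced feature matrices
    return L, R, Ms
```

The small M_i are the features a downstream classifier (linear discriminant analysis in the paper) would consume.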

Relevance: 20.00%

Abstract:

This paper investigates several competing procedures for computing the prices of vanilla European options, such as puts, calls and binaries, in which the underlying model has a characteristic function that is known in semi-closed form. The algorithms investigated here are the half-range Fourier cosine series, the half-range Fourier sine series and the full-range Fourier series. Their performance is assessed in simulation experiments in which an analytical solution is available and also for a simple affine model of stochastic volatility in which there is no closed-form solution. The results suggest that the half-range sine series approximation is the least effective of the three proposed algorithms. It is rather more difficult to distinguish between the performance of the half-range cosine series and the full-range Fourier series. However there are two clear differences. First, when the interval over which the density is approximated is relatively large, the full-range Fourier series is at least as good as the half-range Fourier cosine series, and outperforms the latter in pricing out-of-the-money call options, in particular with maturities of three months or less. Second, the computational time required by the half-range Fourier cosine series is uniformly longer than that required by the full-range Fourier series for an interval of fixed length. Taken together, these two conclusions make a case for pricing options using a full-range Fourier series as opposed to a half-range Fourier cosine series if a large number of options are to be priced in as short a time as possible.
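The half-range cosine approach rests on recovering the (unknown) density's cosine coefficients directly from the characteristic function. A minimal sketch of that density-recovery step, checked against the standard normal; the payoff integration and the stochastic-volatility model are omitted, and the truncation interval [a, b] and term count N are illustrative choices:

```python
import numpy as np

def cos_density(cf, x, a=-10.0, b=10.0, N=64):
    # Half-range Fourier cosine reconstruction of a density on [a, b]
    # from its characteristic function cf (assumes negligible mass
    # outside [a, b]).
    k = np.arange(N)
    u = k * np.pi / (b - a)
    A = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))
    A[0] *= 0.5                     # the k = 0 term is halved
    return A @ np.cos(np.outer(u, x - a))

# sanity check against the standard normal, cf(u) = exp(-u^2/2)
x = np.linspace(-3.0, 3.0, 7)
f = cos_density(lambda u: np.exp(-u ** 2 / 2.0), x)
```

An option price then follows by integrating the payoff against the recovered density, which reduces to a sum over the same coefficients.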

Relevance: 10.00%

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables.
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕn = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions by which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
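The `square root' replacement at the heart of the sampling scheme can be checked directly on a small dense example: the principal inverse square root built from an eigendecomposition satisfies A^(-1/2) A^(-1/2) = A^(-1), so x = A^(-1/2)z has covariance A^(-1), exactly as x = L^(-T)z does. The thesis approximates this product with Krylov methods for large sparse A; the dense construction below is only the identity check, with an arbitrary small test matrix:

```python
import numpy as np

def inv_sqrt(A):
    # principal A^(-1/2) of a symmetric positive definite matrix
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(w ** -0.5) @ Q.T

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8.0 * np.eye(8)        # stand-in SPD precision matrix
S = inv_sqrt(A)
# x = S @ z with z ~ N(0, I) has covariance S @ S = A^(-1),
# matching the usual Cholesky route x = solve(L.T, z).
```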

Relevance: 10.00%

Abstract:

In this paper, we consider a variable-order fractional advection-diffusion equation with a nonlinear source term on a finite domain. Explicit and implicit Euler approximations for the equation are proposed. Stability and convergence of the methods are discussed. Moreover, we also present a fractional method of lines, a matrix transfer technique, and an extrapolation method for the equation. Some numerical examples are given, and the results demonstrate the effectiveness of theoretical analysis.

Relevance: 10.00%

Abstract:

High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
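The Levenberg-Marquardt estimation step can be sketched generically. The toy model a/(1 + b·x²) below stands in for the Zernike-based rational functions, which are paper-specific; the damping schedule and forward-difference Jacobian are common textbook choices, not necessarily those used in the study:

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-3):
    # Minimal Levenberg-Marquardt with a forward-difference Jacobian.
    p = np.asarray(p0, dtype=float)
    r = residual(p)
    for _ in range(n_iter):
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):              # numeric Jacobian column
            dp = np.zeros_like(p)
            dp[j] = 1e-7
            J[:, j] = (residual(p + dp) - r) / 1e-7
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        r_new = residual(p + step)
        if r_new @ r_new < r @ r:            # accept step, relax damping
            p, r, lam = p + step, r_new, lam * 0.5
        else:                                # reject step, damp harder
            lam *= 10.0
    return p

# toy rational surrogate a / (1 + b x^2) fitted to noiseless data
x = np.linspace(0.0, 2.0, 20)
y = 2.0 / (1.0 + 0.5 * x ** 2)
p_fit = levenberg_marquardt(lambda p: p[0] / (1.0 + p[1] * x ** 2) - y,
                            [1.0, 1.0])
```

The damping parameter lam interpolates between Gauss-Newton (small lam) and gradient descent (large lam), which is what makes the method robust for nonlinear surface models.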

Relevance: 10.00%

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of sinusoidal DPLLs based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the IF adaptive algorithm and have proved efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
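The motivation for replacing the Hilbert transformer with a delay is that a delay's phase shift is signal-dependent: delaying by τ shifts a tone of frequency ω by ωτ, so the shift equals 90 degrees only at the frequency τ was chosen for, whereas the HT gives 90 degrees at every frequency. A toy numerical check of this fact (frequencies and τ chosen arbitrarily for illustration):

```python
import math

def rms_diff(f, g, n=1000):
    # RMS difference between two functions sampled over [0, 2*pi)
    return math.sqrt(sum((f(2 * math.pi * i / n) - g(2 * math.pi * i / n)) ** 2
                         for i in range(n)) / n)

w0 = 1.0
tau = math.pi / (2 * w0)                 # quarter period at w0

# Delayed tone vs the ideal 90-degree (Hilbert-style) shift of sin, -cos:
at_w0 = rms_diff(lambda t: math.sin(w0 * (t - tau)),
                 lambda t: -math.cos(w0 * t))
at_2w0 = rms_diff(lambda t: math.sin(2 * w0 * (t - tau)),
                  lambda t: -math.cos(2 * w0 * t))
```

At w0 the delay reproduces the quadrature signal exactly; at 2·w0 the same delay produces a 180-degree shift instead, which is the signal dependence the TDTL analysis has to account for.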

Relevance: 10.00%

Abstract:

A review of the main rolling models is conducted to assess their suitability for modelling the foil rolling process. Two such models are Fleck and Johnson's Hertzian model and Fleck, Johnson, Mear and Zhang's Influence Function model. Both of these models are approximated through the use of perturbation methods, resulting in a decrease in computation time compared with the full numerical solutions. The Hertzian model was approximated using the ratio of the yield stress of the strip to the plane-strain Young's modulus of the rolls as the small perturbation parameter. The Influence Function model approximation takes advantage of the solution of the well-known Aerofoil Integral Equation to gain insight into how the choice of interior boundary points affects the stability of the numerical solution of the model's equations. These approximations require less computation than their full models and, in the case of the Hertzian approximation, introduce only a small error in the predictions of roll force and roll torque; hence the Hertzian approximate method is suitable for on-line control. The predictions from the Influence Function approximation underestimate the numerical results; the approximate treatment of the pressure in the plastic reduction regions is the main source of this error.

Relevance: 10.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
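The simulation experiment's point — heterogeneous low-exposure Bernoulli ("Poisson") trials produce more zeros than a single Poisson fitted to the overall mean predicts, with no dual-state process involved — can be reproduced in a few lines. Site counts, trial counts and the two risk levels below are arbitrary illustrative values, not those of the study:

```python
import math
import random

def zero_fractions(n_sites=10000, n_trials=100, seed=1):
    # Each site runs independent Bernoulli trials with unequal success
    # probabilities ("Poisson trials"); half the sites are low-exposure.
    rng = random.Random(seed)
    counts = []
    for i in range(n_sites):
        p = 0.001 if i % 2 == 0 else 0.02    # heterogeneous crash risk
        counts.append(sum(rng.random() < p for _ in range(n_trials)))
    mean = sum(counts) / n_sites
    observed = counts.count(0) / n_sites     # empirical share of zeros
    poisson = math.exp(-mean)                # single-Poisson prediction
    return observed, poisson
```

With these numbers roughly half the sites sit near zero risk, so the observed zero share comes out near 0.52 while a single Poisson fitted to the grand mean predicts about 0.35: "excess" zeros from heterogeneity and low exposure alone.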

Relevance: 10.00%

Abstract:

It is predicted that with increased life expectancy in the developed world, there will be a greater demand for synthetic materials to repair or regenerate lost, injured or diseased bone (Hench & Thompson 2010). There are still few synthetic materials having true bone inductivity, which limits their application for bone regeneration, especially in large-size bone defects. To solve this problem, growth factors, such as bone morphogenetic proteins (BMPs), have been incorporated into synthetic materials in order to stimulate de novo bone formation in the center of large-size bone defects. The greatest obstacle with this approach is the rapid diffusion of the protein from the carrier material, leading to a precipitous loss of bioactivity; the result is often insufficient local induction or failure of bone regeneration (Wei et al. 2007). It is critical that the protein is loaded into the carrier material under conditions which maintain its bioactivity (van de Manakker et al. 2009). For this reason, the efficient loading and controlled release of a protein from a synthetic material has remained a significant challenge. The use of microspheres as protein/drug carriers has received considerable attention in recent years (Lee et al. 2010; Pareta & Edirisinghe 2006; Wu & Zreiqat 2010). Compared to macroporous block scaffolds, the chief advantage of microspheres is their superior protein-delivery properties and ability to fill bone defects with irregular and complex shapes and sizes. Upon implantation, the microspheres easily conform to the irregular implant site, and the interstices between the particles provide space for both tissue and vascular ingrowth, which are important for effective and functional bone regeneration (Hsu et al. 1999). Alginates are natural polysaccharides and their production does not carry the implicit risk of contamination with allo- or xeno-proteins or viruses (Xie et al. 2010).
Because alginate is generally cytocompatible, it has been used extensively in medicine, including cell therapy and tissue engineering applications (Tampieri et al. 2005; Xie et al. 2010; Xu et al. 2007). Calcium cross-linked alginate hydrogel is considered a promising material as a delivery matrix for drugs and proteins, since its gel microspheres form readily in aqueous solutions at room temperature, eliminating the need for harsh organic solvents and thereby maintaining the bioactivity of proteins in the process of loading into the microspheres (Jay & Saltzman 2009; Kikuchi et al. 1999). In addition, calcium cross-linked alginate hydrogel is degradable under physiological conditions (Kibat et al. 1990; Park et al. 1993), which makes alginate stand out as an attractive candidate material for protein carriers and bone regeneration (Hosoya et al. 2004; Matsuno et al. 2008; Turco et al. 2009). However, the major disadvantages of alginate microspheres are their low loading efficiency and rapid release of proteins due to the mesh-like networks of the gel (Halder et al. 2005).
Previous studies have shown that a core-shell structure in drug/protein carriers can overcome the issues of limited loading efficiency and rapid release of drug or protein (Chang et al. 2010; Molvinger et al. 2004; Soppimath et al. 2007). We therefore hypothesized that introducing a core-shell structure into the alginate microspheres could overcome the shortcomings of pure alginate. Calcium silicate (CS) has been tested as a biodegradable biomaterial for bone tissue regeneration. CS is capable of inducing bone-like apatite formation in simulated body fluid (SBF), and its apatite-formation rate in SBF is faster than that of Bioglass® and A-W glass-ceramics (De Aza et al. 2000; Siriphannon et al. 2002). Titanium alloys plasma-spray coated with CS have excellent in vivo bioactivity (Xue et al. 2005), and porous CS scaffolds have enhanced in vivo bone formation ability compared to porous β-tricalcium phosphate ceramics (Xu et al. 2008). In light of the many advantages of this material, we decided to prepare CS/alginate composite microspheres by combining a CS shell with an alginate core to improve their protein delivery and mineralization for potential protein delivery and bone repair applications.