67 results for Tridiagonal Kernel


Relevance: 10.00%

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to enforce the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the high-dimensional ill-conditioned nonlinear optimization problems associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact yet accurate density estimates.
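
The multiplicative nonnegative quadratic programming (MNQP) weight update mentioned above can be sketched as follows. This is a generic multiplicative step with simplex renormalisation, under our own toy matrices B and c; it is not the paper's exact update rule.

```python
import numpy as np

def mnqp_weights(B, c, n_iter=200, eps=1e-12):
    """Multiplicative update for min 0.5*w@B@w - c@w subject to
    w >= 0 and sum(w) == 1 (a sketch; renormalisation enforces unity).
    Requires B positive definite and c positive."""
    n = len(c)
    w = np.full(n, 1.0 / n)                      # feasible starting point
    for _ in range(n_iter):
        w = w * c / np.maximum(B @ w, eps)       # multiplicative step keeps w >= 0
        w = w / w.sum()                          # project back onto the simplex
    return w

# toy example: B from kernel inner products, c from correlations with a target
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 5))
B = G.T @ G / 20 + 0.1 * np.eye(5)               # positive definite by construction
c = np.abs(rng.standard_normal(5))
w = mnqp_weights(B, c)
```

In practice some weights are driven towards zero, which is the model-size-reducing behaviour the abstract refers to.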

Relevance: 10.00%

Abstract:

The Prony fitting theory is applied in this paper to solve the deconvolution problem. There are two cases in which an unstable solution easily appears in deconvolution: (1) the frequency band of the known kernel is narrower than that of the unknown kernel; (2) noise is present. These two cases are studied thoroughly and the effectiveness of the Prony fitting method is shown. Finally, the method is simulated on a computer, and the simulation results are compared with those obtained by applying the FFT method directly.
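
The instability of direct frequency-domain division, and a standard regularised alternative, can be illustrated as below. The Tikhonov/Wiener-style damping shown here is a common textbook remedy, not the Prony fitting method of the paper; the signal and kernel are our own toy choices.

```python
import numpy as np

def fft_deconvolve(y, h, reg=1e-3):
    """Recover x from y = h * x (circular convolution) in the frequency
    domain.  reg damps frequencies where |H| is small, which is exactly
    where the unstable solutions described above arise."""
    H = np.fft.fft(h, len(y))
    Y = np.fft.fft(y)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + reg)   # regularised division
    return np.real(np.fft.ifft(X))

# a smooth (narrow-band) kernel convolved with an impulse pair
n = 64
x = np.zeros(n); x[10] = 1.0; x[20] = 0.5
h = np.exp(-0.5 * (np.arange(n) - 5) ** 2 / 4.0)
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
x_hat = fft_deconvolve(y, h, reg=1e-6)
```

Setting reg=0 reproduces the direct FFT division the paper compares against, which amplifies any noise at frequencies outside the kernel's band.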

Relevance: 10.00%

Abstract:

We study the heat, linear Schrödinger and linear KdV equations in the domain l(t) < x < ∞, 0 < t < T, with prescribed initial and boundary conditions and with l(t) a given differentiable function. For the first two equations, we show that the unknown Neumann or Dirichlet boundary value can be computed as the solution of a linear Volterra integral equation with an explicit weakly singular kernel. This integral equation can be derived from the formal Fourier integral representation of the solution. For the linear KdV equation we show that the two unknown boundary values can be computed as the solution of a system of linear Volterra integral equations with explicit weakly singular kernels. The derivation in this case makes crucial use of analyticity and certain invariance properties in the complex spectral plane. The above Volterra equations are shown to admit a unique solution.
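
A Volterra equation with a weakly singular kernel of the kind arising here can be solved numerically by product integration, integrating the singular factor exactly over each sub-interval. The Abel-type kernel (t - s)^(-1/2) below is a generic illustration, not the explicit kernels derived in the paper.

```python
import numpy as np

def abel_volterra(f, T=1.0, n=200):
    """Product-integration solution of
        phi(t) = f(t) + int_0^t (t - s)^(-1/2) phi(s) ds.
    The singular factor is integrated exactly on each sub-interval,
    with phi taken piecewise constant (value at the right endpoint)."""
    t = np.linspace(0.0, T, n + 1)
    phi = np.empty(n + 1)
    phi[0] = f(t[0])                   # the integral term vanishes at t = 0
    for i in range(1, n + 1):
        # exact integrals of (t_i - s)^(-1/2) over each [t_j, t_{j+1}]
        w = 2.0 * (np.sqrt(t[i] - t[:i]) - np.sqrt(t[i] - t[1:i + 1]))
        rhs = f(t[i]) + np.dot(w[:-1], phi[1:i])
        phi[i] = rhs / (1.0 - w[-1])   # solve for the implicit new value
    return t, phi

# manufactured check: phi == 1 corresponds to f(t) = 1 - 2*sqrt(t)
t, phi = abel_volterra(lambda s: 1.0 - 2.0 * np.sqrt(s))
```

Because the quadrature is exact for constant phi, the manufactured solution is recovered to rounding error.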

Relevance: 10.00%

Abstract:

A new sparse kernel probability density function (pdf) estimator based on a zero-norm constraint is constructed using the classical Parzen window (PW) estimate as the target function. The so-called zero-norm of the parameters is used in order to achieve enhanced model sparsity, and it is suggested to minimize an approximation of the zero-norm. It is shown that under certain conditions the kernel weights of the proposed pdf estimator based on the zero-norm approximation can be updated using the multiplicative nonnegative quadratic programming algorithm. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
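
One common smooth surrogate for the zero-norm is shown below; the paper's exact approximating function may differ, so treat this as a generic sketch of the idea.

```python
import numpy as np

def approx_zero_norm(w, alpha=5.0):
    """Smooth surrogate for the number of non-zero weights:
    sum_i (1 - exp(-alpha*|w_i|)) -> ||w||_0 as alpha -> infinity.
    (A common choice in the sparsity literature; the paper's exact
    surrogate may differ.)"""
    return np.sum(1.0 - np.exp(-alpha * np.abs(w)))

w = np.array([0.0, 0.5, 0.0, 0.25, 0.25])
z = approx_zero_norm(w, alpha=50.0)   # close to 3.0, the true zero-norm
```

Unlike the discontinuous zero-norm itself, this surrogate is differentiable, which is what makes weight updates such as MNQP applicable.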

Relevance: 10.00%

Abstract:

This paper provides a new proof of a theorem of Chandler-Wilde, Chonchaiya, and Lindner that the spectra of a certain class of infinite, random, tridiagonal matrices contain the unit disc almost surely. It also obtains an analogous result for a more general class of random matrices whose spectra contain a hole around the origin. The presence of the hole forces substantial changes to the analysis.
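
The flavour of such spectra can be explored numerically with finite sections. The random hopping structure below (zero diagonal, ones on one off-diagonal, independent random signs on the other) is our assumption of the matrix class; finite truncations only approximate the spectrum of the infinite matrix, so this is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# finite section of a random hopping tridiagonal matrix: zero diagonal,
# ones above the diagonal, independent random signs below it
A = np.diag(np.ones(n - 1), 1) + np.diag(rng.choice([-1.0, 1.0], n - 1), -1)
eigs = np.linalg.eigvals(A)
# the 2-norm of A is at most 2, so all eigenvalues lie in the disc |z| <= 2
radius = np.max(np.abs(eigs))
```

Plotting eigs for many samples and growing n hints at the filled-in region the theorem describes, though convergence of finite-section spectra to the infinite-matrix spectrum is itself a delicate matter.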

Relevance: 10.00%

Abstract:

Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and of CGH microarray data for examining genetic stability in oncogenes, there are none specifically designed to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
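
A dynamic, KDE-based cut-off between gene presence and absence can be sketched as below. The bandwidth, the assumed minimum separation between the two modes, and the synthetic log-ratio data are all our illustrative choices, not the paper's settings.

```python
import numpy as np

def kde_cutoff(log_ratios, bw=0.2, min_sep=1.0):
    """Presence/absence cut-off taken as the density minimum between the
    two modes of the log-ratio distribution (a sketch of the dynamic KDE
    idea; bw and min_sep are illustrative assumptions)."""
    grid = np.linspace(log_ratios.min(), log_ratios.max(), 500)
    dens = np.exp(-0.5 * ((grid[:, None] - log_ratios[None, :]) / bw) ** 2).sum(axis=1)
    # local maxima of the density, tallest first
    peaks = sorted((i for i in range(1, 499)
                    if dens[i] >= dens[i - 1] and dens[i] >= dens[i + 1]),
                   key=lambda i: dens[i], reverse=True)
    a = peaks[0]                                            # dominant mode
    b = next(i for i in peaks if abs(grid[i] - grid[a]) > min_sep)  # other mode
    lo, hi = min(a, b), max(a, b)
    return grid[lo + int(np.argmin(dens[lo:hi + 1]))]       # valley between them

rng = np.random.default_rng(4)
log_ratios = np.concatenate([rng.normal(0.0, 0.3, 800),     # 'present' genes
                             rng.normal(-2.5, 0.3, 200)])   # 'absent'/divergent
cut = kde_cutoff(log_ratios)
```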

Relevance: 10.00%

Abstract:

Liquid clouds play a profound role in the global radiation budget, but it is difficult to retrieve their vertical profile remotely. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds, but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables exploitation of returns from spaceborne lidar and radar subject to multiple scattering more rigorously than previously possible.
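
The Gauss-Newton scheme whose convergence is compared above has the generic skeleton below. The toy exponential model is entirely our own; the lidar forward model and its adjoint are far more involved.

```python
import numpy as np

def gauss_newton(F, J, x0, y, n_iter=30):
    """Generic Gauss-Newton iteration for minimising ||y - F(x)||^2:
    at each step solve the linearised least-squares problem."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = y - F(x)                              # residual
        Jx = J(x)                                 # Jacobian at current state
        x = x + np.linalg.solve(Jx.T @ Jx, Jx.T @ r)
    return x

# toy nonlinear model y = exp(a*t) + b with unknown parameters (a, b)
t = np.linspace(0.0, 1.0, 50)
F = lambda x: np.exp(x[0] * t) + x[1]
J = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
y = F(np.array([1.5, 0.3]))                       # noise-free synthetic data
x_hat = gauss_newton(F, J, np.array([1.3, 0.2]), y)
```

Quasi-Newton methods replace the explicit J^T J system with a gradually built-up approximation of the Hessian, trading per-iteration cost against iteration count.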

Relevance: 10.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of all models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
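
The variance ratio method used as a reference model above can be sketched as a linear map matching the mean and standard deviation of the concurrent site data; the synthetic Weibull series below are our own toy stand-ins for the measurement campaign.

```python
import numpy as np

def variance_ratio_mcp(site, ref, ref_longterm):
    """Measure-correlate-predict via the variance ratio method: a linear
    relation fitted so predicted site winds match the mean and standard
    deviation of the concurrent site record, then applied to the
    long-term reference record."""
    slope = site.std() / ref.std()
    return site.mean() + slope * (ref_longterm - ref.mean())

rng = np.random.default_rng(2)
ref = rng.weibull(2.0, 2000) * 8.0                # concurrent reference winds
site = 0.9 * ref + rng.normal(0.0, 0.8, 2000)     # correlated site winds
longterm = rng.weibull(2.0, 20000) * 8.0          # historic reference record
pred = variance_ratio_mcp(site, ref, longterm)
```

Unlike simple linear regression, which matches the conditional mean and so underestimates variance, this map preserves the spread of the site record, one reason it serves as a standard benchmark.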

Relevance: 10.00%

Abstract:

Simulating spiking neural networks is of great interest to scientists wanting to model the functioning of the brain. However, large-scale models are expensive to simulate due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present NeMo, a platform for such simulations which achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.
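
The Izhikevich neuron update that NeMo parallelises has the simple form below; this is a minimal NumPy sketch with regular-spiking parameters and a coarse 1 ms Euler step, not NeMo's CUDA kernel.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the Izhikevich model (regular-spiking
    parameters).  v is the membrane potential, u the recovery variable;
    a spike is registered and the neuron reset when v reaches 30 mV."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    v = np.where(fired, c, v)        # reset membrane potential
    u = np.where(fired, u + d, u)    # bump recovery variable
    return v, u, fired

# drive a single neuron with constant current for one simulated second
v, u = -65.0 * np.ones(1), -13.0 * np.ones(1)
spikes = 0
for _ in range(1000):                # 1000 steps of 1 ms
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += int(fired[0])
```

On a GPU the same arithmetic is applied to tens of thousands of neurons at once, with the bulk of the work going into delivering spikes across synapses.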

Relevance: 10.00%

Abstract:

The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real world ensembles. An alternative is to consider an ensemble as merely a source of information rather than as the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically not acting on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
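
Kernel dressing with an affine map on the ensemble can be sketched as below. The parameters a, b and sigma are placeholders; fitting them (together with a climatology weight) from past forecast-verification pairs, as AKD does, is omitted here.

```python
import numpy as np

def dressed_density(x, ensemble, sigma, a=0.0, b=1.0):
    """Kernel-dressed forecast density at x: a Gaussian kernel of width
    sigma on each affinely mapped ensemble member z -> a + b*z, with
    equal weights (a sketch; AKD fits a, b, sigma from archived data)."""
    z = a + b * np.asarray(ensemble)
    k = np.exp(-0.5 * ((x - z) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return k.mean()

ens = np.array([14.2, 15.1, 15.6, 16.0, 17.3])   # e.g. temperature forecasts
p = dressed_density(15.5, ens, sigma=0.8)
```

Because b scales the whole ensemble rather than each member's distance from its own mean, the map can shrink an overdispersive ensemble instead of widening it further.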

Relevance: 10.00%

Abstract:

We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegative and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm to select significant kernels one at a time based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is on the order of the number of training data points N, which is much lower than the order N^2 cost of the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate and other existing sparse kernel density estimators.
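
The idea of selecting kernels one at a time against an integrated-squared-error criterion can be sketched with the brute-force greedy loop below (cost O(mN^2), so this is emphatically not the paper's O(N) recursive algorithm, and it uses uniform rather than fitted mixing weights).

```python
import numpy as np

def gauss(d, s):
    return np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def greedy_sparse_kde(x, h, m):
    """Pick m of the N data-centred Gaussian kernels one at a time so the
    uniformly weighted sparse estimate stays close, in integrated squared
    error, to the full Parzen estimate (an illustrative sketch only)."""
    N = len(x)
    G = gauss(x[:, None] - x[None, :], np.sqrt(2.0) * h)  # kernel cross-integrals
    parzen_cross = G.sum(axis=1) / N       # <parzen, kernel_i> for each centre i
    S = []
    for _ in range(m):
        best, best_ise = None, np.inf
        for i in range(N):
            if i in S:
                continue
            T = S + [i]
            kk = len(T)
            # ISE up to a constant (the parzen-parzen term is dropped,
            # as it does not affect the comparison between candidates)
            ise = G[np.ix_(T, T)].sum() / kk**2 - 2.0 * parzen_cross[T].sum() / kk
            if ise < best_ise:
                best, best_ise = i, ise
        S.append(best)
    return np.array(sorted(S))

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 0.5, 50), rng.normal(2.0, 0.5, 50)])
S = greedy_sparse_kde(x, h=0.4, m=4)
```

For the bimodal sample above the greedy criterion places kernels in both modes, since piling all mass into one mode would double the density there while leaving the other mode uncovered.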

Relevance: 10.00%

Abstract:

We consider in this paper the solvability of linear integral equations on the real line, in operator form (λ − K)φ = ψ, where λ is a complex constant and K is an integral operator. We impose conditions on the kernel, k, of K which ensure that K is bounded as an operator on X, the space of bounded continuous functions on the real line. Let Xa denote the weighted space of those x ∈ X satisfying x(s) = O(|s|^(-a)) as |s| → ∞. Our first result is that if, additionally, |k(s, t)| ⩽ κ(s − t), with κ ∈ L1(R) and κ(s) = O(|s|^(-b)) as |s| → ∞, for some b > 1, then the spectrum of K is the same on Xa as on X, for 0 < a ⩽ b. The same result holds when the kernel takes the form k(s, t) = κ(s − t)z(t), with κ ∈ L1(R), z ∈ L∞(R), and κ(s) = O(|s|^(-b)) as |s| → ∞, for some b > 1. As an example where kernels of this latter form occur we discuss a boundary integral equation formulation of an impedance boundary value problem for the Helmholtz equation in a half-plane.

Relevance: 10.00%

Abstract:

We propose a Nyström/product integration method for a class of second kind integral equations on the real line which arise in problems of two-dimensional scalar and elastic wave scattering by unbounded surfaces. Stability and convergence of the method are established, with convergence rates dependent on the smoothness of components of the kernel. The method is applied to the problem of acoustic scattering by a sound-soft one-dimensional surface which is the graph of a function f, and superalgebraic convergence is established in the case when f is infinitely smooth. Numerical results are presented illustrating this behavior for the case when f is periodic (the diffraction grating case). The Nyström method for this problem is stable and convergent uniformly with respect to the period of the grating, in contrast to standard integral equation methods for diffraction gratings, which fail at a countable set of grating periods.
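
The basic Nyström idea, replacing the integral by a quadrature rule and collocating at the quadrature nodes, is sketched below on a bounded interval with a smooth kernel of our own choosing; the paper's setting (the real line, product integration for non-smooth kernel components) is considerably more delicate.

```python
import numpy as np

def nystrom_solve(k, psi, n=41):
    """Nystrom discretisation of phi(s) - int_0^1 k(s,t) phi(t) dt = psi(s)
    on [0,1] with the trapezoidal rule: collocation at the quadrature
    nodes turns the equation into the linear system (I - K_h) phi = psi."""
    t = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid weights
    A = np.eye(n) - k(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, psi(t))

# manufactured example: pick phi, derive the matching right-hand side psi
k = lambda s, t: 0.5 * np.exp(-(s - t) ** 2)       # ||K|| < 1, so I - K invertible
phi_true = lambda s: np.cos(2.0 * np.pi * s)
t_fine = np.linspace(0.0, 1.0, 2001)
wf = np.full(t_fine.size, 1.0 / (t_fine.size - 1)); wf[0] *= 0.5; wf[-1] *= 0.5

def psi(s):
    # psi = phi_true - K(phi_true), with the integral done on a fine grid
    return phi_true(s) - (k(s[:, None], t_fine[None, :]) * phi_true(t_fine)) @ wf

t, phi = nystrom_solve(k, psi, n=41)
```

With smooth data the trapezoidal rule already gives small errors; product integration plays the role of the quadrature rule when kernel components are singular or oscillatory.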

Relevance: 10.00%

Abstract:

This paper considers general second kind integral equations of the form φ(s) − ∫R k(s, t)φ(t) dt = ψ(s), s ∈ R (in operator form φ − kφ = ψ), where the functions k and ψ are assumed known, with ψ ∈ Y, the space of bounded continuous functions on R, and k such that the mapping s → k(s, · ), from R to L1(R), is bounded and continuous. The function φ ∈ Y is the solution to be determined. Conditions on a set W ⊂ BC(R, L1(R)) are obtained such that a generalised Fredholm alternative holds: if W satisfies these conditions and I − k is injective for all k ∈ W, then I − k is also surjective for all k ∈ W and, moreover, the inverse operators (I − k)^(-1) on Y are uniformly bounded for k ∈ W. The approximation of the kernel in the integral equation by a sequence (kn) converging in a weak sense to k is also considered, and results on stability and convergence are obtained. These general theorems are used to establish results for two special classes of kernels: k(s, t) = κ(s − t)z(t) and k(s, t) = κ(s − t)λ(s − t, t), where κ ∈ L1(R), z ∈ L∞(R), and λ ∈ BC((R\{0}) × R). Kernels of both classes arise in problems of time-harmonic wave scattering by unbounded surfaces. The general integral equation results are here applied to prove the existence of a solution for a boundary integral equation formulation of scattering by an infinite rough surface and to consider the stability and convergence of approximation of the rough surface problem by a sequence of diffraction grating problems of increasingly large period.

Relevance: 10.00%

Abstract:

We consider second kind integral equations of the form x(s) − ∫Ω k(s, t)x(t) dt = y(s) (abbreviated x − Kx = y), in which Ω is some unbounded subset of Rn. Let Xp denote the weighted space of functions x continuous on Ω and satisfying x(s) = O(|s|^(-p)), s → ∞. We show that if the kernel k(s, t) decays like |s − t|^(-q) as |s − t| → ∞ for some sufficiently large q (and some other mild conditions on k are satisfied), then K ∈ B(Xp) (the set of bounded linear operators on Xp), for 0 ≤ p ≤ q. If also (I − K)^(-1) ∈ B(X0) then (I − K)^(-1) ∈ B(Xp) for 0 < p < q, and (I − K)^(-1) ∈ B(Xq) if further conditions on k hold. Thus, if k(s, t) = O(|s − t|^(-q)), |s − t| → ∞, and y(s) = O(|s|^(-p)), s → ∞, the asymptotic behaviour of the solution x may be estimated as x(s) = O(|s|^(-r)), |s| → ∞, with r := min(p, q). The case when k(s, t) = κ(s − t), so that the equation is of Wiener-Hopf type, receives especial attention. Conditions, in terms of the symbol of I − K, for I − K to be invertible or Fredholm on Xp are established for certain cases (Ω a half-space or cone). A boundary integral equation, which models three-dimensional acoustic propagation above flat ground, absorbing apart from an infinite rigid strip, illustrates the practical application and sharpness of the above results. This integral equation models, in particular, road traffic noise propagation along an infinite road surface surrounded by absorbing ground. We prove that the sound propagating along the rigid road surface eventually decays with distance at the same rate as sound propagating above the absorbing ground.