923 results for boundary integral equation method


Relevance: 100.00%

Publisher:

Abstract:

We consider the approximation of some highly oscillatory weakly singular surface integrals, arising from boundary integral methods with smooth global basis functions for solving problems of high frequency acoustic scattering by three-dimensional convex obstacles, described globally in spherical coordinates. As the frequency of the incident wave increases, the performance of standard quadrature schemes deteriorates. Naive application of asymptotic schemes also fails due to the weak singularity. We propose here a new scheme based on a combination of an asymptotic approach and exact treatment of singularities in an appropriate coordinate system. For the case of a spherical scatterer we demonstrate via error analysis and numerical results that, provided the observation point is sufficiently far from the shadow boundary, a high level of accuracy can be achieved with a minimal computational cost.
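
For orientation, the integrals in question are of the following schematic form (the notation here is assumed, not taken from the paper): a smooth amplitude modulated by an oscillatory phase proportional to the wavenumber k, divided by a weakly singular kernel centred at the observation point x_0 on the surface Γ:

```latex
I(k) \;=\; \int_{\Gamma} \frac{v(\mathbf{x})\, e^{\,\mathrm{i} k g(\mathbf{x})}}{\lVert \mathbf{x} - \mathbf{x}_0 \rVert}\, \mathrm{d}S(\mathbf{x}), \qquad \mathbf{x}_0 \in \Gamma .
```

Standard quadrature requires ever finer resolution as k grows, while a purely asymptotic (stationary-phase type) treatment is spoiled by the 1/|x − x_0| singularity, which is what motivates the hybrid scheme described above.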

Relevance: 100.00%

Publisher:

Abstract:

Synthetic aperture radar (SAR) data have proved useful in remote sensing studies of deserts, enabling different surfaces to be discriminated by differences in roughness properties. Roughness is characterized in SAR backscatter models using the standard deviation of surface heights (σ), the correlation length (L) and the autocorrelation function (ρ(ξ)). Previous research has suggested that these parameters are of limited use for characterizing surface roughness, and are often unreliable due to the collection of too few roughness profiles, or under-sampling in terms of resolution or profile length (L_p). This paper reports on work aimed at establishing the effects of L_p and sampling resolution on SAR backscatter estimations and site discrimination. Results indicate significant relationships between the average roughness parameters and L_p, but large variability in roughness parameters prevents any clear understanding of these relationships. Integral equation model simulations demonstrate limited change with L_p and under-estimate backscatter relative to SAR observations. However, modelled and observed backscatter conform in pattern and magnitude for C-band systems but not for L-band data. Variation in surface roughness alone does not explain variability in site discrimination. Other factors (possibly sub-surface scattering) appear to play a significant role in controlling backscatter characteristics at lower frequencies.
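
As a concrete (hypothetical) illustration of the roughness parameters named above, the sketch below estimates σ, L and ρ(ξ) from a single height profile; the synthetic profile, the sampling interval and the 1/e definition of the correlation length are assumptions for illustration, not the paper's protocol.

```python
# Sketch (not from the paper): estimating sigma, correlation length L and the
# autocorrelation rho(xi) from a 1-D surface-height profile sampled at spacing dx.
import numpy as np

def roughness_parameters(z, dx):
    """Return (sigma, corr_length, rho) for a 1-D height profile z [m]."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                       # remove the mean (no detrending here)
    sigma = z.std(ddof=1)                  # RMS height

    # Normalised autocorrelation rho(xi) for lags xi = 0, dx, 2*dx, ...
    n = len(z)
    acov = np.correlate(z, z, mode="full")[n - 1:] / np.arange(n, 0, -1)
    rho = acov / acov[0]

    # Correlation length: first lag at which rho drops below 1/e (one common choice).
    below = np.where(rho < 1.0 / np.e)[0]
    corr_length = below[0] * dx if below.size else np.nan
    return sigma, corr_length, rho

# Example with a purely synthetic profile (hypothetical values):
rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(scale=0.002, size=500))   # metres
sigma, L, rho = roughness_parameters(z, dx=0.01)
print(f"sigma = {sigma:.4f} m, L = {L:.2f} m")
```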

Relevance: 100.00%

Publisher:

Abstract:

Effective medium approximations for the frequency-dependent and complex-valued effective stiffness tensors of cracked/porous rocks with multiple solid constituents are developed on the basis of the T-matrix approach (based on integral equation methods for quasi-static composites), the elastic-viscoelastic correspondence principle, and a unified treatment of the local and global flow mechanisms, which is consistent with the principle of fluid mass conservation. The main advantage of using the T-matrix approach, rather than the first-order approach of Eshelby or the second-order approach of Hudson, is that it produces physically plausible results even when the volume concentrations of inclusions or cavities are no longer small. The new formulae, which operate with an arbitrary homogeneous (anisotropic) reference medium and contain terms of all orders in the volume concentrations of solid particles and communicating cavities, take explicit account of inclusion shape and spatial distribution independently. We show analytically that an expansion of the T-matrix formulae to first order in the volume concentration of cavities (in agreement with the dilute estimate of Eshelby) has the correct dependence on the properties of the saturating fluid, in the sense that it is consistent with the Brown-Korringa relation, when the frequency is sufficiently low. We present numerical results for the (anisotropic) effective viscoelastic properties of a cracked permeable medium with finite storage porosity, indicating that the complete T-matrix formulae (including the higher-order terms) are generally consistent with the Brown-Korringa relation, at least if we assume the spatial distribution of cavities to be the same for all cavity pairs. We have found an efficient way to treat statistical correlations in the shapes and orientations of the communicating cavities, and also obtained a reasonable match between theoretical predictions (based on a dual porosity model for quartz-clay mixtures, involving relatively flat clay-related pores and more rounded quartz-related pores) and laboratory results for the ultrasonic velocity and attenuation spectra of a suite of typical reservoir rocks. (C) 2003 Elsevier B.V. All rights reserved.
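
For orientation only, the first-order (dilute, Eshelby-type) truncation referred to above can be written schematically in generic T-matrix notation (the symbols here are assumptions, not the paper's):

```latex
\mathbf{C}^{*}(\omega) \;\approx\; \mathbf{C}^{(0)} + \sum_{r} v_r\, \mathbf{t}^{(r)}(\omega),
```

where C^(0) is the stiffness of the reference medium, v_r the volume concentration of inclusion family r and t^(r)(ω) its t-matrix; the complete formulae of the paper retain the higher-order interaction terms that this truncation drops.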

Relevance: 100.00%

Publisher:

Abstract:

We have favoured the variational (secular equation) method for the determination of the (ro-)vibrational energy levels of polyatomic molecules. We use predominantly the Watson Hamiltonian in normal coordinates with an associated given potential in the variational code 'Multimode'. The dominant cost is the construction and diagonalization of matrices of ever-increasing size. Here we address this problem, using perturbation theory to select dominant expansion terms within the Davidson-Liu iterative diagonalization method. Our chosen example is the twelve-mode molecule methanol, for which we have an ab initio representation of the potential which includes the internal rotational motion of the OH group relative to CH3. Our new algorithm allows us to obtain converged energy levels for matrices of dimension in excess of 100 000.
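
A minimal sketch of the Davidson-type iteration underlying such calculations is given below, using dense NumPy linear algebra, a diagonal preconditioner and a random test matrix (all assumptions for illustration); the perturbation-theory selection of expansion terms described above is the paper's refinement and is not reproduced here.

```python
# Basic Davidson iteration for the lowest eigenvalue of a large symmetric matrix.
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=100):
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 0))                           # current subspace basis
    v = np.zeros(n); v[np.argmin(diag)] = 1.0      # start from the lowest diagonal
    for _ in range(max_iter):
        v -= V @ (V.T @ v)                         # orthogonalise new direction
        norm = np.linalg.norm(v)
        if norm < 1e-12:
            break
        V = np.column_stack([V, v / norm])
        H = V.T @ A @ V                            # Rayleigh-Ritz step
        theta, s = np.linalg.eigh(H)
        theta0, s0 = theta[0], s[:, 0]
        x = V @ s0                                 # Ritz vector
        r = A @ x - theta0 * x                     # residual
        if np.linalg.norm(r) < tol:
            return theta0, x
        denom = diag - theta0                      # diagonal (Davidson) preconditioner
        denom[np.abs(denom) < 1e-8] = 1e-8
        v = r / denom
    return theta0, x

# Example on a random diagonally dominant symmetric matrix:
rng = np.random.default_rng(1)
n = 400
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
val, vec = davidson_lowest(A)
print("lowest eigenvalue ~", val)
```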

Relevance: 100.00%

Publisher:

Abstract:

In this work we consider the rendering equation derived from the Cook-Torrance illumination model. A Monte Carlo (MC) estimator for the numerical treatment of this equation, which is a Fredholm integral equation of the second kind, is constructed and studied.
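
A minimal sketch of such an estimator is given below, assuming a toy one-dimensional kernel and source term rather than the Cook-Torrance kernel: a random-walk (Neumann-series) collision estimator for a Fredholm equation of the second kind.

```python
# Monte Carlo sketch for u(x) = f(x) + \int_0^1 K(x, y) u(y) dy with a toy kernel
# whose integral operator is a contraction, so the Neumann series converges.
import numpy as np

def K(x, y):            # toy kernel
    return 0.5 * np.exp(-abs(x - y))

def f(x):               # toy source term
    return np.cos(np.pi * x)

def mc_solution(x0, n_walks=100_000, p_absorb=0.3, rng=None):
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):
        x, w, est = x0, 1.0, f(x0)
        while rng.random() > p_absorb:            # Russian-roulette termination
            y = rng.random()                      # uniform proposal on [0, 1]
            w *= K(x, y) / (1.0 - p_absorb)       # importance weight update
            est += w * f(y)                       # collision-estimator tally
            x = y
        total += est
    return total / n_walks

print("u(0.5) ~", mc_solution(0.5))
```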

Relevance: 100.00%

Publisher:

Abstract:

We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners, which operate on image intensity discriminative features that are defined on small patches and are fast to compute. A database that is designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, which requires a ribbon-like search region and can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking and compare its performance with other existing methods for line search boundary detection.
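
The line-search step described above can be sketched as follows; the trained patch classifier is replaced by a hypothetical stand-in probability function, and the sampling geometry is an assumption for illustration.

```python
# Scan along the normal of an initial boundary estimate and keep the most
# probable texture-transition point according to some classifier score.
import numpy as np

def refine_boundary_point(p_boundary, point, normal, half_range=10, step=1.0):
    """p_boundary(xy) -> probability of a texture transition at pixel xy."""
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)
    offsets = np.arange(-half_range, half_range + 1) * step
    candidates = point + offsets[:, None] * normal      # samples along the normal
    scores = np.array([p_boundary(c) for c in candidates])
    return candidates[np.argmax(scores)], scores.max()

# Toy stand-in classifier: a vertical texture transition near x = 50.
p = lambda xy: float(np.exp(-0.5 * ((xy[0] - 50.0) / 2.0) ** 2))
best, score = refine_boundary_point(p, point=np.array([47.0, 20.0]), normal=(1.0, 0.0))
print(best, score)
```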

Relevance: 100.00%

Publisher:

Abstract:

Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves.
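
For reference, a planar neural field of the kind described, with a Heaviside firing rate H, threshold h and connectivity kernel w, reads (a standard Amari-type form; the notation is assumed here):

```latex
\frac{\partial u(\mathbf{r},t)}{\partial t} \;=\; -\,u(\mathbf{r},t)
  + \int_{\mathbb{R}^2} w\!\left(\lvert \mathbf{r}-\mathbf{r}'\rvert\right)\,
    H\!\left(u(\mathbf{r}',t)-h\right)\,\mathrm{d}\mathbf{r}' .
```

The interfaces are the level sets u(r, t) = h separating high (u > h) from low activity, and it is for these curves that the equations of motion mentioned above are derived.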

Relevance: 100.00%

Publisher:

Abstract:

Airborne measurements within the urban mixing layer (360 m) over Greater London are used to quantify CO2 emissions at the meso-scale. Daytime CO2 fluxes, calculated by the Integrative Mass Boundary Layer (IMBL) method, ranged from 46 to 104 μmol CO2 m−2 s−1 for four days in October 2011. The day-to-day variability of the IMBL fluxes is of the same order of magnitude as that of surface eddy-covariance fluxes observed in central London. Compared to fluxes derived from an emissions inventory, the IMBL method gives both lower (by −37%) and higher (by 19%) estimates. The sources of uncertainty of applying the IMBL method in urban areas are discussed and guidance for future studies is given.
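
A deliberately simplified boundary-layer budget sketch in the spirit of the IMBL approach is shown below; advection and entrainment terms (which the full method must account for) are neglected, and all numerical values are hypothetical, so this is only the back-of-envelope core of the method.

```python
# Surface flux estimated from the rate of change of the column-mean CO2 mixing
# ratio within a mixing layer of depth h (simplified box budget, no advection).
h = 360.0                        # mixing-layer depth [m]
c_start, c_end = 410.0, 446.0    # column-mean CO2 [ppm], hypothetical values
dt = 3.0 * 3600.0                # elapsed time [s]

molar_density_air = 41.6         # mol of air per m^3 (approx., near-surface)
dc_dt = (c_end - c_start) * 1e-6 / dt            # mol CO2 per mol air per second
flux = h * molar_density_air * dc_dt * 1e6       # umol CO2 m^-2 s^-1
print(f"boundary-layer budget flux ~ {flux:.1f} umol m^-2 s^-1")
```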

Relevance: 100.00%

Publisher:

Abstract:

The general assumption under which the X̄ chart is designed is that the process mean has a constant in-control value. However, there are situations in which the process mean wanders. When it wanders according to a first-order autoregressive (AR(1)) model, a complex approach involving Markov chains and integral equation methods is used to evaluate the properties of the X̄ chart. In this paper, we propose the use of a pure Markov chain approach to study the performance of the X̄ chart. The performances of the X̄ chart with variable parameters and the X̄ chart with double sampling are compared. (C) 2011 Elsevier B.V. All rights reserved.
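
A sketch of such a pure Markov-chain calculation is given below: the wandering mean is discretised into states, the transition matrix is combined with the per-state no-signal probability, and the in-control average run length (ARL) follows from a linear solve. The discretisation, chart parameters and AR(1) values are assumptions for illustration, not the paper's.

```python
# Markov-chain ARL for an X-bar chart whose process mean wanders as an AR(1).
import numpy as np
from math import erf, sqrt

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

phi, sig_w = 0.5, 0.3        # AR(1): mu_t = phi*mu_{t-1} + N(0, sig_w^2)
n, Lsig = 5, 3.0             # subgroup size and control-limit width (sigma = 1)
m, span = 41, 4.0            # number of mean states and half-range covered

centres = np.linspace(-span, span, m)
edges = np.concatenate(([-np.inf], 0.5 * (centres[:-1] + centres[1:]), [np.inf]))

# P[i, j]: probability the wandering mean moves from state i to state j.
P = np.array([[Phi((edges[j + 1] - phi * centres[i]) / sig_w)
               - Phi((edges[j] - phi * centres[i]) / sig_w)
               for j in range(m)] for i in range(m)])

# beta[j]: probability of NO signal when the mean sits at centres[j].
se = 1.0 / sqrt(n)
beta = np.array([Phi((Lsig * se - mu) / se) - Phi((-Lsig * se - mu) / se)
                 for mu in centres])

Q = P * beta                 # Q[i, j] = P[i, j] * beta[j]  (transient kernel)
arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
print("in-control ARL starting from mu = 0:", arl[m // 2])
```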

Relevance: 100.00%

Publisher:

Abstract:

In this work we study a new risk model for a firm which is sensitive to its credit quality, proposed by Yang (2003). We obtain recursive equations for the finite-time ruin probability and the distribution of the ruin time, as well as Volterra-type integral equation systems for the ultimate ruin probability, the severity of ruin, and the distribution of the surplus before and after ruin.
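
For orientation only, in the classical Cramér-Lundberg model (premium rate c, claim intensity λ, claim-size distribution F) the ultimate ruin probability ψ(u) satisfies a Volterra-type integral equation of the kind referred to above; the equations for Yang's credit-sensitive model are of a similar Volterra type but with model-specific ingredients:

```latex
\psi(u) \;=\; \frac{\lambda}{c}\int_{u}^{\infty} \bigl(1-F(y)\bigr)\,\mathrm{d}y
\;+\; \frac{\lambda}{c}\int_{0}^{u} \psi(u-y)\,\bigl(1-F(y)\bigr)\,\mathrm{d}y .
```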

Relevance: 100.00%

Publisher:

Abstract:

We continue our discussion of the q-state Potts models for q ≤ 4, in the scaling regimes close to their critical and tricritical points. In a previous paper, the spectrum and full S-matrix of the models on an infinite line were elucidated; here, we consider finite-size behaviour. TBA equations are proposed for all cases related to φ_21 and φ_12 perturbations of unitary minimal models. These are subjected to a variety of checks in the ultraviolet and infrared limits, and compared with results from a recently-proposed non-linear integral equation. A non-linear integral equation is also used to study the flows from tricritical to critical models, over the full range of q. Our results should also be of relevance to the study of the off-critical dilute A models in regimes 1 and 2. (C) 2003 Elsevier B.V. All rights reserved.
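
For orientation, diagonal-scattering TBA systems take the generic form below (pseudo-energies ε_a, kernels φ_ab, masses m_a, circumference R); the specific systems proposed in the paper carry their own kernels and particle content, so this is only the standard template:

```latex
\epsilon_a(\theta) \;=\; m_a R \cosh\theta
  - \sum_b \int_{-\infty}^{\infty} \frac{\mathrm{d}\theta'}{2\pi}\,
    \phi_{ab}(\theta-\theta')\, \ln\!\bigl(1+e^{-\epsilon_b(\theta')}\bigr),
\qquad
c_{\mathrm{eff}}(R) \;=\; \frac{3}{\pi^2} \sum_a m_a R
  \int_{-\infty}^{\infty} \mathrm{d}\theta\, \cosh\theta\,
  \ln\!\bigl(1+e^{-\epsilon_a(\theta)}\bigr).
```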

Relevance: 100.00%

Publisher:

Abstract:

The AlMCM-41 material with Si/Al = 50 was synthesized by the hydrothermal method, using cetyltrimethylammonium as template. The protonic acid form H-AlMCM-41 was obtained by ion exchange with ammonium chloride solution and subsequent calcination. The characterization of the material by several techniques showed that a good-quality MCM-41 material was obtained. High-density polyethylene (HDPE) was submitted to thermal degradation alone and in the presence of the exchanged H-AlMCM-41 catalyst at a 1:1 mass ratio (H-AlMCM-41/HDPE). The reactor was connected on-line to a gas chromatograph coupled to a mass spectrometer. The process was evaluated by thermogravimetry (TG), from 350 to 600 °C, under a dynamic helium atmosphere, with heating rates of 5.0, 10.0 and 20.0 °C/min. From the TG curves, the activation energy, calculated using a multiple heating rate integral kinetic method, decreased from 225.5 kJ mol⁻¹ for the pure polymer (HDPE) to 184.7 kJ mol⁻¹ in the presence of the catalyst (H-AlMCM-41/HDPE).
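
One common multiple-heating-rate integral method is the Ozawa-Flynn-Wall construction (an assumption here, since the abstract does not name the method actually used): at a fixed conversion, ln β plotted against 1/T is approximately linear with slope −1.052 Ea/R. A sketch with hypothetical temperatures follows.

```python
# Ozawa-Flynn-Wall estimate of the apparent activation energy from three
# heating rates. The temperatures at alpha = 0.5 below are hypothetical
# illustration values, not the HDPE data of the paper.
import numpy as np

R = 8.314                                    # J mol^-1 K^-1
betas = np.array([5.0, 10.0, 20.0])          # heating rates [K/min]
T_alpha = np.array([713.0, 726.0, 740.0])    # T at alpha = 0.5 [K], hypothetical

slope, intercept = np.polyfit(1.0 / T_alpha, np.log(betas), 1)
Ea = -slope * R / 1.052                      # Doyle approximation factor
print(f"apparent activation energy ~ {Ea / 1000:.1f} kJ/mol")
```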

Relevance: 100.00%

Publisher:

Abstract:

The result that we treat in this article allows the use of classic tools of convex analysis in the study of optimality conditions for the convex optimal control process governed by a Volterra-Stieltjes linear integral equation in the Banach space G([a, b], X) of the regulated functions on [a, b], that is, the functions f : [a, b] → X that have only discontinuities of the first kind, in the Dushnik (or interior) sense, and with a linear equality restriction. In this work we introduce a convex functional Lβf(x) of Nemytskii type, and we present conditions for its lower semicontinuity. As a consequence, the Weierstrass theorem guarantees (under compactness conditions) the existence of a solution to the problem min{Lβf(x)}. © 2009 Academic Publications.
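
The existence argument invoked above follows the classical Weierstrass pattern, stated here only generically (the topology and the compact set K are left unspecified, as they are not given in the abstract):

```latex
K \subset G([a,b],X) \ \text{compact}, \quad
L_{\beta}f \ \text{sequentially lower semicontinuous on } K
\;\Longrightarrow\;
\exists\, x^{*} \in K : \; L_{\beta}f(x^{*}) = \min_{x \in K} L_{\beta}f(x).
```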

Relevance: 100.00%

Publisher:

Abstract:

A challenge in mesonic three-body decays of heavy mesons is to quantify the contribution of re-scattering between the final mesons. D decays have the unique feature that makes them a key to light meson spectroscopy, in particular to access the Kπ S-wave phase shifts. We built a relativistic three-body model for the final state interaction in the D+ → K−π+π+ decay based on the ladder approximation of the Bethe-Salpeter equation projected on the light-front. The decay amplitude is separated into a smooth term, given by the direct partonic decay amplitude, and a three-body fully interacting contribution, which is factorized into the standard two-meson resonant amplitude times a reduced complex amplitude that carries the effect of the three-body rescattering mechanism. The off-shell reduced amplitude is a solution of an inhomogeneous Faddeev-type three-dimensional integral equation that includes only the isospin-1/2 K−π+ interaction in the S-wave channel. The elastic K−π+ scattering amplitude is parameterized according to the LASS data [1]. The integral equation is solved numerically and preliminary results are presented and compared to the experimental data from the E791 Collaboration [2, 3] and the FOCUS Collaboration [4, 5].
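
As a generic illustration of the "solve the integral equation numerically" step, a one-dimensional Nystrom discretisation of an inhomogeneous Fredholm equation is sketched below; the kernel and driving term are toy choices, and the actual three-dimensional Faddeev-type equation of the paper has a far richer structure.

```python
# Nystrom-type solution of f(x) = g(x) + \int_a^b K(x, y) f(y) dy on Gauss-
# Legendre nodes; toy kernel and inhomogeneous term, unrelated to the K-pi amplitude.
import numpy as np

a, b, n = 0.0, 1.0, 200
x, wgt = np.polynomial.legendre.leggauss(n)       # nodes/weights on [-1, 1]
x = 0.5 * (b - a) * x + 0.5 * (b + a)             # map to [a, b]
wgt = 0.5 * (b - a) * wgt

K = lambda xi, yj: 0.4 * np.cos(xi - yj)          # toy kernel (contraction)
g = lambda xi: np.sin(np.pi * xi)                 # toy inhomogeneous term

A = np.eye(n) - K(x[:, None], x[None, :]) * wgt[None, :]
f = np.linalg.solve(A, g(x))                      # f at the quadrature nodes
print("f(x) at the first few nodes:", f[:3])
```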

Relevance: 100.00%

Publisher:

Abstract:

The use of the equivalent layer technique for interpolating potential-field data makes it possible to take into account that the anomaly, gravimetric or magnetic, to be interpolated is a harmonic function. However, the computational application of this technique is restricted to surveys with a small number of data, since it requires the solution of a least-squares problem whose order equals this number. To make the equivalent layer technique feasible for surveys with a large number of data, we developed the concept of equivalent observations and the EGTG method, which, respectively, reduce the computer memory demand and optimize the evaluation of the inner products inherent to the solution of least-squares problems. Basically, the concept of equivalent observations consists of selecting some observations, among all the original observations, such that the least-squares fit to the selected observations automatically fits (within a pre-established tolerance criterion) all the remaining observations that were not chosen. The selected observations are called equivalent observations and the remaining ones are called redundant observations. This corresponds to partitioning the original linear system into two linear systems of smaller order, the first containing only the equivalent observations and the second only the redundant observations, in such a way that the least-squares solution obtained from the first linear system is also the solution of the second system. This procedure makes it possible to fit all the sampled data using only the equivalent observations (and not all the original observations), which reduces the number of operations and the computer memory usage. The EGTG method consists, first, of identifying the inner product as a discrete integration of a known analytic integral and, then, of replacing the discrete integration by the evaluation of the analytic integral. This method should be applied when evaluating the analytic integral requires fewer computations than evaluating the discrete integral. To determine the equivalent observations, we developed two iterative algorithms called DOE and DOEg. The first algorithm identifies the equivalent observations of the linear system as a whole, whereas the second identifies them in disjoint subsystems of the original linear system. Each iteration of the DOEg algorithm consists of an application of the DOE algorithm to a partition of the original linear system. In interpolation, the DOE algorithm provides an interpolating surface that fits all the data, allowing interpolation in global form. The DOEg algorithm, on the other hand, optimizes interpolation in local form, since it employs only the equivalent observations, in contrast to the existing algorithms for local interpolation, which employ all the observations. The interpolation methods using the equivalent layer technique and the minimum curvature method were compared with respect to their ability to recover the true anomaly values during the interpolation process. The tests used synthetic data (produced by prismatic source models) from which the interpolated values on a regular grid were obtained.
These interpolated values were compared with the theoretical values, computed from the source model on the same grid, allowing the efficiency of the interpolation method in recovering the true anomaly values to be assessed. In all the tests performed, the equivalent layer method recovered the true anomaly value more faithfully than the minimum curvature method. Particularly in undersampling situations, the minimum curvature method proved unable to recover the true anomaly value in the places where the anomaly exhibited more pronounced curvatures. For data acquired at different levels, the minimum curvature method showed its worst performance, in contrast to the equivalent layer method, which performed the interpolation and the leveling simultaneously. Using the DOE algorithm it was possible to apply the equivalent layer technique to the (global) interpolation of the 3137 free-air anomaly data from part of the Equant-2 marine survey and of the 4941 total-field magnetic anomaly data from part of the Carauari-Norte aeromagnetic survey. The numbers of equivalent observations identified in each case were 294 and 299, respectively. Using the DOEg algorithm we optimized the (local) interpolation of all the data from both surveys. None of these interpolations would have been possible without the application of the concept of equivalent observations. The ratio between the CPU time (running the programs in the same memory space) spent by the minimum curvature method and by the equivalent layer method (global interpolation) was 1:31. For the local interpolation this ratio was practically 1:1.
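
A compact sketch of the equivalent layer fit itself (without the equivalent-observation selection performed by DOE/DOEg) is given below; the kernel, source depth, damping-free least-squares solve and synthetic data are simplifying assumptions for illustration only.

```python
# Equivalent-layer interpolation sketch: fit scattered potential-field values
# with a layer of point sources below the data, then predict on a regular grid.
import numpy as np

rng = np.random.default_rng(2)
n_obs = 300
xo = rng.uniform(0, 10_000, n_obs)               # observation coordinates [m]
yo = rng.uniform(0, 10_000, n_obs)
zo = np.zeros(n_obs)                             # observations on z = 0
data = np.sin(xo / 2_000) * np.cos(yo / 3_000)   # synthetic anomaly values

# Equivalent layer: one point source beneath each observation, at 500 m depth.
xs, ys, zs = xo, yo, zo + 500.0

def kernel(xa, ya, za, xb, yb, zb):
    """Harmonic point-source kernel (vertical component, up to a constant)."""
    dx = xa[:, None] - xb[None, :]
    dy = ya[:, None] - yb[None, :]
    dz = za[:, None] - zb[None, :]
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return -dz / r**3

G = kernel(xo, yo, zo, xs, ys, zs)               # n_obs x n_obs sensitivity matrix
coeffs = np.linalg.lstsq(G, data, rcond=None)[0] # least-squares source strengths

# Predict (interpolate) the fitted harmonic field on a regular grid at z = 0.
gx, gy = np.meshgrid(np.linspace(0, 10_000, 50), np.linspace(0, 10_000, 50))
Gp = kernel(gx.ravel(), gy.ravel(), np.zeros(gx.size), xs, ys, zs)
grid_values = (Gp @ coeffs).reshape(gx.shape)
print("interpolated grid:", grid_values.shape)
```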