980 results for PARAXIAL APPROXIMATION


Relevance:

70.00%

Publisher:

Abstract:

The theory and experimental applications of optical Airy beams have been under active development in recent years. Airy beams are characterised by very special properties: they are non-diffractive and propagate along parabolic trajectories. Striking applications of optical Airy beams include optical micro-manipulation implemented as the transport of small particles along a parabolic trajectory, Airy-Bessel linear light bullets, electron acceleration by Airy beams, and plasmonic energy routing. A detailed analysis of the mathematical aspects, as well as the physical interpretation, of electromagnetic Airy beams has been carried out by considering the wave as a function of the spatial coordinates only, with a parabolic relation between the transverse and the longitudinal coordinates; the time dependence is assumed to be harmonic. Only a few papers consider a more general temporal dependence in which such a relationship exists between the temporal and the spatial variables. This relationship is derived mostly by applying the Fourier transform to the expressions obtained for the harmonic time dependence, or by a Fourier synthesis using a specific modulated spectrum near some central frequency. Spatial-temporal Airy pulses in the form of contour integrals have been analysed near the caustic, and the numerical solution of the nonlinear paraxial equation in the time domain shows soliton shedding from the Airy pulse in a Kerr medium. In this paper the explicitly time-dependent solutions of the electromagnetic problem in the form of time-spatial pulses are derived in the paraxial approximation through the Green's function for the paraxial equation. It is shown that a Gaussian and an Airy pulse can be obtained by applying the Green's function to a proper source current. We emphasize that the processes in the time domain are directional, which leads to unexpected conclusions, especially for the paraxial approximation.
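As background for the Green's-function construction mentioned above, a minimal reminder of the standard spatial paraxial equation and its free-space propagator (a textbook result, not the time-domain Green's function derived in the paper); here u is the slowly varying envelope, k the carrier wavenumber and z the propagation coordinate:

\[ 2ik\,\partial_z u + \nabla_\perp^2 u = 0, \qquad u(\mathbf{r}_\perp,z) = \int G(\mathbf{r}_\perp-\mathbf{r}'_\perp,z)\,u(\mathbf{r}'_\perp,0)\,d^2 r'_\perp, \qquad G(\boldsymbol{\rho},z) = \frac{k}{2\pi i z}\,\exp\!\left(\frac{ik|\boldsymbol{\rho}|^2}{2z}\right). \]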

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Physics - FEG

Relevance:

60.00%

Publisher:

Abstract:

Common-reflection-surface stacking (CRS stacking, also known in Portuguese as empilhamento SRC) is a new method for seismic processing that simulates zero-offset (ZO) and common-offset (CO) sections. The method is based on a second-order hyperbolic paraxial approximation of the reflection traveltimes in the vicinity of a central ray. For the simulation of a ZO section the central ray is a normal ray, whereas for the simulation of a CO section the central ray is a finite-offset ray. In addition to the ZO section, the CRS stacking method also provides estimates of the kinematic wavefield attributes, which are used, for example, in the determination (by an inversion process) of interval velocities, in the computation of geometrical spreading, in the estimation of the Fresnel zone, and also in the simulation of diffraction traveltime events, the latter being of great importance for pre-stack migration. In this work a new strategy is proposed for performing pre-stack depth migration using the kinematic wavefield attributes derived from CRS stacking, known as the CRS-based pre-stack depth migration (CRS-PSDM) method. The CRS-PSDM method uses the results obtained from the CRS method, that is, the sections of kinematic wavefield attributes, to build a stacking traveltime surface along which the amplitudes of the multi-coverage seismic data are summed, the result of the sum being assigned to a given depth point in the target migration zone, which is defined on a regular grid. Similarly to the conventional Kirchhoff-type migration method (K-PSDM), the CRS-PSDM method requires a migration velocity model. In contrast to K-PSDM, the CRS-PSDM method only needs to compute zero-offset traveltimes, that is, along a single ray connecting the considered depth point to a given coincident source-receiver position at the surface. The final result of this procedure is a depth-domain seismic image of the reflectors obtained from the multi-coverage data.
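For context, a commonly used 2D form of the second-order hyperbolic paraxial traveltime approximation referred to above (the standard zero-offset CRS operator); the attributes β0, R_NIP and R_N are those discussed in this and the following abstract, while t0, x0 (central-ray coordinates), xm, h (midpoint and half-offset) and the near-surface velocity v0 are the usual auxiliary symbols assumed here:

\[ t^2(x_m,h) = \left[t_0 + \frac{2\sin\beta_0}{v_0}\,(x_m-x_0)\right]^2 + \frac{2\,t_0\cos^2\beta_0}{v_0}\left[\frac{(x_m-x_0)^2}{R_N} + \frac{h^2}{R_{NIP}}\right]. \]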

Relevance:

60.00%

Publisher:

Abstract:

The common-reflection-surface (CRS) seismic stacking method produces simulated zero-offset (ZO) sections from multi-coverage data. For 2D media, the CRS stacking operator depends on three parameters: the emergence angle of the zero-offset central ray (β0), the radius of curvature of the normal-incidence-point wave (R_NIP), and the radius of curvature of the normal wave (R_N). The crucial problem in implementing the CRS stacking method is the determination, from the seismic data, of the three optimal parameters associated with each sample point of the ZO section to be simulated. In the present work a new processing sequence was developed for the simulation of ZO sections by means of the CRS stacking method. In this new algorithm, the determination of the three optimal parameters that define the CRS stacking operator is carried out in three steps. In the first step, two parameters (β0° and R°_NIP) are estimated by means of a two-dimensional global search in the multi-coverage data. In the second step, the estimated value of β0° is used to determine the third parameter (R°_N) through a one-dimensional global search in the ZO section resulting from the first step. In both steps the global searches are performed with the Simulated Annealing (SA) optimization method. In the third step, the three final parameters (β0, R_NIP and R_N) are determined through a three-dimensional local search in the multi-coverage data using the Variable Metric (VM) optimization method, with the parameter triplet (β0°, R°_NIP, R°_N) estimated in the two previous steps used as the initial approximation. In order to correctly simulate events with conflicting dips, this new algorithm provides for the determination of two parameter triplets at sample points of the ZO section where events intersect. In other words, at points of the ZO section where two seismic events cross, two CRS parameter triplets are determined and used jointly in the simulation of the conflicting-dip events. To evaluate the accuracy and efficiency of the new algorithm, it was applied to synthetic data from two models: one with continuous interfaces and another with a discontinuous interface. The simulated ZO sections have a high signal-to-noise ratio and show a clear definition of the reflected and diffracted events. Comparison of the simulated ZO sections with their counterparts obtained by forward modelling shows a correct simulation of reflections and diffractions. Moreover, comparison of the values of the three optimized parameters with their corresponding exact values computed by forward modelling also reveals a high degree of accuracy. Using the hyperbolic traveltime approximation, but under the condition R_NIP = R_N, a new algorithm was developed for the simulation of ZO sections containing predominantly diffracted wavefields. Similarly to the CRS stacking algorithm, this algorithm, called common-diffraction-surface (CDS) stacking, also uses the SA and VM optimization methods to determine the optimal parameter pair (β0, R_NIP) that defines the best CDS stacking operator. In the first step the SA optimization method is used to determine the initial parameters β0° and R°_NIP using the stacking operator with a large aperture. In the second step, using the estimated values of β0° and R°_NIP, the estimate of the parameter R_NIP is improved by applying the VM algorithm to the ZO section resulting from the first step. In the third step, the best values of β0° and R°_NIP are determined by applying the VM algorithm to the multi-coverage data. It is worth noting that this apparent repetition of steps has the effect of progressively attenuating the reflected events. The application of the CDS stacking algorithm to synthetic data containing reflected and diffracted wavefields produces, as its main result, a simulated ZO section with clearly defined diffraction events. As a direct application of this result to seismic data interpretation, the post-stack depth migration of the simulated ZO section produces a section with the correct location of the diffraction points associated with the discontinuities of the model.
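As an illustration of the global-plus-local search strategy described above (a sketch, not the authors' implementation), the following Python fragment uses SciPy: dual_annealing stands in for the Simulated Annealing global search and a BFGS (quasi-Newton, i.e. variable-metric) refinement stands in for the VM step; the data, the semblance objective and all numerical values are placeholders.

# Illustrative two-stage CRS-style parameter search: a global simulated-
# annealing-like search followed by a local variable-metric (quasi-Newton)
# refinement. Data, objective and parameter ranges are placeholders.
import numpy as np
from scipy.optimize import dual_annealing, minimize

rng = np.random.default_rng(0)
n_xm, n_h, n_t = 21, 11, 501            # midpoints, half-offsets, time samples
dt, t0, x0, v0 = 0.004, 1.0, 0.0, 1500.0
xm = np.linspace(-500.0, 500.0, n_xm)
h = np.linspace(0.0, 500.0, n_h)
data = rng.standard_normal((n_xm, n_h, n_t))   # stands in for real CMP gathers

def crs_traveltime(beta0, r_nip, r_n, xm, h):
    """Hyperbolic 2D CRS traveltime around the central ray (standard form)."""
    dx = xm - x0
    term1 = (t0 + 2.0 * np.sin(beta0) / v0 * dx) ** 2
    term2 = 2.0 * t0 * np.cos(beta0) ** 2 / v0 * (dx ** 2 / r_n + h ** 2 / r_nip)
    return np.sqrt(np.maximum(term1 + term2, 0.0))

def negative_semblance(params):
    """Coherence of the data along the CRS operator (negated for minimization)."""
    beta0, r_nip, r_n = params
    t = crs_traveltime(beta0, r_nip, r_n, xm[:, None], h[None, :])
    tf = t / dt
    i0 = np.clip(np.floor(tf).astype(int), 0, n_t - 2)
    w = np.clip(tf - i0, 0.0, 1.0)      # linear interpolation keeps the objective smooth
    ix = np.arange(n_xm)[:, None]
    ih = np.arange(n_h)[None, :]
    amps = (1.0 - w) * data[ix, ih, i0] + w * data[ix, ih, i0 + 1]
    return -(amps.sum() ** 2) / (amps.size * (amps ** 2).sum() + 1e-12)

bounds = [(-np.pi / 3, np.pi / 3), (50.0, 5000.0), (50.0, 5000.0)]
# Global stage (analogue of steps 1-2): simulated-annealing-type search
global_fit = dual_annealing(negative_semblance, bounds, maxiter=200)
# Local stage (analogue of step 3): variable-metric (BFGS) refinement
local_fit = minimize(negative_semblance, global_fit.x, method="BFGS")
print("beta0, R_NIP, R_N =", local_fit.x)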

Relevance:

60.00%

Publisher:

Abstract:

The explicit expression for the spatial-temporal Airy pulse is derived from Maxwell's equations in the paraxial approximation. The trajectory of the pulse in time-space coordinates is analysed. The existence of a bifurcation point that separates regions with qualitatively different features of the pulse propagation is demonstrated. At this point the velocity of the pulse becomes infinite and its orientation reverses.

Relevance:

60.00%

Publisher:

Abstract:

During the last decade, microfabrication of photonic devices by means of intense femtosecond (fs) laser pulses has emerged as a novel technology. A common requirement for the production of these devices is that the refractive-index modification pitch be smaller than the inscribing wavelength. This can be achieved by making use of the nonlinear propagation of intense fs laser pulses, an extremely complicated phenomenon featuring complex multiscale spatiotemporal dynamics of the laser pulses. We have utilized an approach based on finite-difference time-domain (FDTD) modeling of the full set of Maxwell's equations coupled to the conventional Drude model for the generated plasma. Nonlinear effects, such as self-phase modulation and multiphoton absorption, are included. Such an approach resolves most of the problems related to the inscription of subwavelength structures, for which the paraxial approximation cannot correctly describe the creation of, and scattering on, the structures. In a representative simulation of the inscription process, the signature of degenerate four-wave mixing has been found. © 2012 Optical Society of America.
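A minimal 1D illustration of the FDTD-plus-Drude coupling described above (linear only, without the Kerr self-phase-modulation and multiphoton-absorption terms); the grid, the plasma frequency, the collision rate and the source parameters are arbitrary assumptions, not values from the paper.

# Minimal 1D FDTD (Yee) scheme coupled to a Drude current for the generated
# plasma. Linear propagation only: no self-phase modulation, no multiphoton
# absorption. All parameter values are illustrative.
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4e-7 * np.pi

nz, nt = 2000, 4000
dz = 20e-9                       # 20 nm cells
dt = 0.5 * dz / c0               # Courant-stable time step

ex = np.zeros(nz)                # E_x at integer grid points
hy = np.zeros(nz)                # H_y staggered half a cell to the right
jx = np.zeros(nz)                # Drude current density J_x

wp = np.zeros(nz)                # plasma frequency profile [rad/s]
wp[nz // 2:] = 2.0e15            # plasma occupies the right half of the grid
gamma = 1.0e14                   # collision rate [1/s]

src = nz // 4
lam0 = 800e-9                    # fs-laser central wavelength
omega0 = 2 * np.pi * c0 / lam0
t_pulse, t_delay = 15e-15, 60e-15

for n in range(nt):
    t = n * dt
    # Faraday: dH_y/dt = -(1/mu0) dE_x/dz
    hy[:-1] -= dt / (mu0 * dz) * (ex[1:] - ex[:-1])
    # Drude auxiliary equation: dJ/dt = eps0*wp^2*E - gamma*J (semi-implicit in gamma)
    jx = (jx * (1 - 0.5 * gamma * dt) + dt * eps0 * wp ** 2 * ex) / (1 + 0.5 * gamma * dt)
    # Ampere: dE_x/dt = -(1/eps0) dH_y/dz - J_x/eps0
    ex[1:] -= dt / (eps0 * dz) * (hy[1:] - hy[:-1]) + dt / eps0 * jx[1:]
    # Soft source: Gaussian-enveloped carrier injected at one grid point
    ex[src] += np.exp(-((t - t_delay) / t_pulse) ** 2) * np.sin(omega0 * t)

print("peak |E_x| in the plasma region:", np.abs(ex[nz // 2:]).max())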

Relevance:

60.00%

Publisher:

Abstract:

A time-dependent electromagnetic pulse generated by a current running laterally to the direction of the pulse propagation is considered in the paraxial approximation. It is shown that the pulse envelope moves in time-space coordinates on the surface of a parabolic cylinder for the Airy pulse and of a hyperbolic cylinder for the Gaussian pulse. These pulses propagate in time with deceleration along the dominant propagation direction and drift uniformly in the lateral direction. The Airy pulse comes to rest at infinity, while the asymptotic velocity of the Gaussian pulse is nonzero. © 2013 Optical Society of America.

Relevance:

60.00%

Publisher:

Abstract:

It is shown that the electromagnetic wave equation in the time domain reduces, in the paraxial approximation, to an equation similar to the Schrödinger equation but in which the time and space variables play opposite roles. This equation has solutions in the form of time-varying pulses with the Airy function as an envelope. The pulses are generated by a source point with an Airy time-varying field and propagate in vacuum preserving their shape and magnitude. The motion follows a quadratic law, with the velocity changing from infinity at the source point to zero at infinity. These one-dimensional results are extended to the 3D+time case, in which a similar Airy-Bessel pulse is excited by the field at a plane aperture. The same behaviour of the pulses, namely their non-diffractive shape preservation and their deceleration, is found. © 2011 IEEE.
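For comparison, the classical non-spreading Airy solution of the dimensionless Schrödinger-type paraxial equation (the Berry-Balazs form, a textbook result); in the time-domain equation described in the abstract the roles of the evolution variable ξ and the transverse variable s are interchanged, so this is only the familiar spatial analogue:

\[ i\,\partial_\xi\phi + \tfrac{1}{2}\,\partial_s^2\phi = 0, \qquad \phi(s,\xi) = \mathrm{Ai}\!\left(s - \tfrac{\xi^2}{4}\right)\exp\!\left[i\left(\tfrac{s\xi}{2} - \tfrac{\xi^3}{12}\right)\right], \]

whose envelope maximum follows the parabola s = ξ²/4, i.e. the quadratic law mentioned above.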

Relevance:

60.00%

Publisher:

Abstract:

The explicit expression for the spatial-temporal Airy pulse is derived from Maxwell's equations in the paraxial approximation. The trajectory of the pulse in time-space coordinates is analysed. The existence of a bifurcation point that separates regions with qualitatively different features of the pulse propagation is demonstrated. At this point the velocity of the pulse becomes infinite and its orientation reverses. © 2011 IEEE.

Relevance:

60.00%

Publisher:

Abstract:

We propose an accurate technique for obtaining highly collimated beams, which also allows the degree of collimation of a beam to be tested. It is based on comparing the periods of two different self-images produced by a single diffraction grating. In this way, variations in the period of the diffraction grating do not affect the measuring procedure. The self-images are acquired by two CMOS cameras and their periods are determined by fitting the variogram function of each self-image to a cosine function with polynomial envelopes. This avoids the loss of accuracy caused by imperfections of the measured self-images. As usual, collimation is obtained by displacing the collimating element with respect to the source along the optical axis; when the periods of the two self-images coincide, collimation is achieved. With this method, neither strict control of the period of the diffraction grating nor the transverse displacement required in other techniques is necessary. As an example, an LED, under the paraxial approximation and point-source illumination, is collimated, resulting in a resolution in the divergence of the beam of σ_φ = ± μrad.
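A minimal sketch of the period-estimation step described above: a noisy fringe profile is fitted to a cosine with a (here linear) polynomial envelope and the period is recovered with scipy.optimize.curve_fit; the variogram pre-processing, the camera geometry and all numerical values are illustrative assumptions, not details from the paper.

# Illustrative fringe-period estimation: fit a cosine with a polynomial
# (here linear) envelope to a noisy self-image profile. All values are
# synthetic and stand in for the variogram-based fit of the abstract.
import numpy as np
from scipy.optimize import curve_fit

def fringes(x, amp0, amp1, period, phase, offset):
    """Cosine fringes with a first-order polynomial envelope."""
    return (amp0 + amp1 * x) * np.cos(2 * np.pi * x / period + phase) + offset

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0e-3, 2000)          # 2 mm line of 'camera' pixels [m]
true_period = 100e-6
profile = fringes(x, 1.0, 50.0, true_period, 0.3, 0.1)
profile += 0.05 * rng.standard_normal(x.size)

# Initial period guess from the dominant peak of the spectrum
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
period_guess = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]

popt, _ = curve_fit(fringes, x, profile, p0=[1.0, 0.0, period_guess, 0.0, 0.0])
print(f"estimated period: {popt[2] * 1e6:.2f} um (true {true_period * 1e6:.0f} um)")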

Relevance:

60.00%

Publisher:

Abstract:

An accurate and simple technique for determining the focal length of a lens is presented. It consists of measuring the period of the fringes produced by a diffraction grating in the near field when it is illuminated with a beam focused by the unknown lens. In the paraxial approximation, the period of the fringes varies linearly with distance. After some calculations, a simple extrapolation of the data yields the locations of the principal plane and the focal plane of the lens; the focal length is then obtained as the distance between these two planes. The accuracy of the method is limited by the degree of collimation of the incident beam and by the algorithm used to obtain the period of the fringes. We have checked the technique with two commercial lenses, one convergent and one divergent, with nominal focal lengths of (+100 ± 1) mm and (−100 ± 1) mm, respectively. The experimentally obtained focal lengths fall within the interval given by the manufacturer, but with an uncertainty of 0.1%, one order of magnitude smaller than the manufacturer's uncertainty.
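A sketch of the extrapolation step implied above: since the fringe period depends linearly on distance, a straight-line fit of measured periods versus grating position can be extrapolated to the position of vanishing period, which locates the focal (beam-waist) plane; the synthetic numbers, the zero-period criterion and the geometry are illustrative assumptions, not the authors' exact procedure.

# Illustrative extrapolation: fringe periods measured at several grating
# positions are fitted to a straight line and extrapolated to zero period,
# locating the focal plane of the focused beam. Synthetic data only.
import numpy as np

grating_period = 100e-6                 # grating period [m]
z_focus_true = 0.250                    # assumed focal-plane position [m]

# Grating positions along the optical axis and the noisy measured periods;
# the linear model below simply encodes 'period proportional to distance'.
z = np.linspace(0.30, 0.40, 11)
rng = np.random.default_rng(1)
period = grating_period * (z - z_focus_true) / 0.1
period += 1e-7 * rng.standard_normal(z.size)

a, b = np.polyfit(z, period, 1)         # period(z) = a*z + b
z_focus_est = -b / a                    # extrapolate to period = 0
print(f"estimated focal-plane position: {z_focus_est * 1e3:.2f} mm "
      f"(true {z_focus_true * 1e3:.0f} mm)")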

Relevance:

20.00%

Publisher:

Abstract:

Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last μ steps (the memory), producing a deterministic partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory μ = 2, as well as the joint distribution of the transient and the period. This result enables us to explain the abrupt change in the exploratory behavior between the cases μ = 1 (memoryless walker, driven by extreme-value statistics) and μ = 2 (walker with memory, driven by combinatorial statistics). In the μ = 1 case, the mean number of newly visited points in the thermodynamic limit (N >> 1) is just ⟨n⟩ = e = 2.72..., while in the μ = 2 case the mean number ⟨n⟩ of visited points grows proportionally to N^(1/2). This result also allows us to establish an equivalence between the random link model with μ = 2 and the random map (uncorrelated back-and-forth distances) with μ = 0, and to explain the abrupt change between the probabilities for a null transient time and subsequent ones.
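A small sketch of the tourist-walk dynamics on the random link model as described above: pairwise distances are i.i.d. random numbers (no correlations) and the walker always moves to the nearest point not visited during its last μ steps; N, the distance distribution and the step cap are arbitrary choices made only for illustration.

# Deterministic tourist walk on the random link model: i.i.d. symmetric
# pairwise distances, walker moves to the nearest point not visited in
# the last mu steps. Counts the distinct points explored.
import numpy as np

def tourist_walk(dist, start, mu, max_steps):
    """Return the sequence of points visited by a walker with memory mu."""
    n = dist.shape[0]
    path = [start]
    for _ in range(max_steps):
        current = path[-1]
        forbidden = set(path[-mu:]) if mu > 0 else set()
        candidates = [j for j in range(n) if j != current and j not in forbidden]
        if not candidates:
            break
        path.append(min(candidates, key=lambda j: dist[current, j]))
    return path

rng = np.random.default_rng(0)
N = 500
upper = np.triu(rng.random((N, N)), 1)     # i.i.d. distances, no correlations
dist = upper + upper.T

for mu in (1, 2):
    walk = tourist_walk(dist, start=0, mu=mu, max_steps=5 * N)
    print(f"mu = {mu}: {len(set(walk))} distinct points visited out of N = {N}")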

Relevance:

20.00%

Publisher:

Abstract:

The local-density approximation (LDA), together with half occupation (the transition state), is notoriously successful in the calculation of atomic ionization potentials. When it comes to extended systems, such as an infinite semiconductor, it has been very difficult to find a way to half-ionize, because the hole tends to be infinitely extended (a Bloch wave). The answer to this problem lies in the LDA formalism itself. One proves that the half occupation is equivalent to introducing the hole self-energy (electrostatic and exchange-correlation) into the Schrödinger equation. The argument then becomes simple: the eigenvalue minus the self-energy has to be minimized because the atom has a minimal energy. One then simply proves that the hole is localized, not infinitely extended, because it must have maximal self-energy. One also arrives at an equation similar to the self-interaction correction equation, but corrected for the removal of just 1/2 electron. Applied to the calculation of band gaps and effective masses, we use the self-energy calculated in atoms and attain a precision similar to that of GW, but with the great advantage that it requires no more computational effort than standard LDA.
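For background, the textbook Slater transition-state relation that underlies the half-occupation procedure mentioned above (a standard result, not a formula quoted from the paper): the ionization potential is approximated by minus the eigenvalue of the ionized level evaluated at half occupation,

\[ I = E(N-1) - E(N) \approx -\,\varepsilon_\alpha\!\left(n_\alpha = \tfrac{1}{2}\right), \]

which follows from Janak's theorem, \( \partial E/\partial n_\alpha = \varepsilon_\alpha \), by integrating the occupation from 1 to 0 and evaluating the integrand at the midpoint.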

Relevance:

20.00%

Publisher:

Abstract:

We study the spin-1/2 Ising model on a Bethe lattice in the mean-field limit, with the interaction constants following one of two deterministic aperiodic sequences, the Fibonacci or the period-doubling one. New sequence-generation algorithms were implemented, which were fundamental in obtaining long sequences and, therefore, precise results. We calculate the exact critical temperature for both sequences, as well as the critical exponents beta, gamma, and delta. For the Fibonacci sequence the exponents are classical, while for the period-doubling one they depend on the ratio between the two exchange constants. The usual relations between critical exponents are satisfied, within error bars, for the period-doubling sequence. Therefore, we show that mean-field-like procedures may lead to nonclassical critical exponents.
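As an illustration of the two aperiodic sequences named above, a short sketch generating them by their standard substitution rules (Fibonacci: A -> AB, B -> A; period-doubling: A -> AB, B -> AA); the abstract does not specify the authors' generation algorithm, so this is only the textbook construction.

# Generate the Fibonacci and period-doubling substitution sequences used
# as deterministic aperiodic modulations of the exchange constants.
def substitute(word, rules):
    """Apply the substitution rule to every letter of the word."""
    return "".join(rules[ch] for ch in word)

def aperiodic_sequence(rules, generations, seed="A"):
    """Iterate the substitution 'generations' times starting from 'seed'."""
    word = seed
    for _ in range(generations):
        word = substitute(word, rules)
    return word

FIBONACCI = {"A": "AB", "B": "A"}
PERIOD_DOUBLING = {"A": "AB", "B": "AA"}

fib = aperiodic_sequence(FIBONACCI, 8)
pdbl = aperiodic_sequence(PERIOD_DOUBLING, 8)
print("Fibonacci       :", fib[:34], "... length", len(fib))
print("Period-doubling :", pdbl[:34], "... length", len(pdbl))
# Each letter is then mapped to one of the two exchange constants (e.g. J_A
# or J_B) along successive generations of the Bethe lattice.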

Relevance:

20.00%

Publisher:

Abstract:

We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, the minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem, together with both a standard finite element method and classical nonlinear programming techniques, to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results indicating that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field for the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
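To make the distinction between the two formulations concrete, generic exterior (quadratic) and interior (logarithmic barrier) penalty functionals for the local-invertibility constraint are written below; these are common textbook choices under the assumption that the deformation is \( \boldsymbol{\varphi}(\mathbf{x}) = \mathbf{x} + \mathbf{u}(\mathbf{x}) \), and they are not necessarily the exact functionals used by the authors:

\[ \Pi^{\mathrm{ext}}_{\varepsilon}(\mathbf{u}) = \Pi(\mathbf{u}) + \frac{1}{\varepsilon}\int_{\Omega}\big[\min\{\det(\mathbf{I}+\nabla\mathbf{u}),\,0\}\big]^{2}\,dA, \qquad \Pi^{\mathrm{int}}_{\varepsilon}(\mathbf{u}) = \Pi(\mathbf{u}) - \varepsilon\int_{\Omega}\ln\!\big(\det(\mathbf{I}+\nabla\mathbf{u})\big)\,dA, \]

where \( \Pi \) is the total potential energy of the linear elastic body and the constraint \( \det(\mathbf{I}+\nabla\mathbf{u}) > 0 \) is enforced in the limit \( \varepsilon \to 0 \).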