966 results for electromagnetic scattering
Abstract:
A scheme is presented to incorporate a mixed potential integral equation (MPIE) using Michalski's formulation C with the method of moments (MoM) for analyzing the scattering of a plane wave from conducting planar objects buried in a dielectric half-space. The robust complex image method with a two-level approximation is used for the calculation of the Green's functions for the half-space. To further speed up the computation, an interpolation technique for filling the matrix is employed. While the induced current distributions on the object's surface are obtained in the frequency domain, the corresponding time domain responses are calculated via the inverse fast Fourier transform (FFT). The complex natural resonances of the targets are then extracted from the late-time response using the generalized pencil-of-function (GPOF) method. We investigate the pole trajectories as we vary the distance between strips and the depth and orientation of single buried strips. The variation from the pole position of a single strip in a homogeneous dielectric medium was only a few percent for most of these parameter variations.
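The last two steps above (inverse FFT of the frequency-domain currents followed by pole extraction from the late-time response) can be illustrated with a small numerical sketch. The snippet below is a minimal matrix-pencil style estimator, closely related to GPOF but not the authors' implementation; the sampling interval, the pencil length, and the synthetic two-pole signal are illustrative assumptions.

```python
import numpy as np

def matrix_pencil_poles(x, dt, model_order):
    """Estimate complex natural resonances s_i from uniformly sampled late-time
    data x[n] ~ sum_i R_i * exp(s_i * n * dt).

    Minimal matrix-pencil sketch (closely related to GPOF); no noise filtering
    or automatic model-order selection is attempted here.
    """
    N = len(x)
    L = N // 2                                   # pencil parameter (typical: N/3..N/2)
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]                 # shifted submatrices
    z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = z[np.argsort(-np.abs(z))][:model_order]  # keep the dominant poles
    return np.log(z) / dt                        # map z -> s = ln(z)/dt

# Illustrative synthetic late-time response with two damped resonances
dt = 1e-10
n = np.arange(200)
s_true = [-0.2e9 + 1j * 2 * np.pi * 1.1e9, -0.5e9 + 1j * 2 * np.pi * 2.3e9]
x = sum(np.exp(s * n * dt) for s in s_true).real
print(matrix_pencil_poles(x, dt, model_order=4))   # expect s_true and their conjugates
```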
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^⌊d/2⌋+1), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
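The projection loop described above (project the data onto a direction orthogonal to the span of the endmembers already found, and take the extreme of the projection as the next endmember) can be written down compactly. The following is a hedged toy illustration of that pure-pixel extraction idea, not the published VCA code; the random direction choice, the synthetic mixtures, and the number of endmembers p are assumptions made for the example.

```python
import numpy as np

def extract_endmembers(R, p, rng=np.random.default_rng(0)):
    """R: (bands x pixels) matrix of spectral vectors; p: number of endmembers.

    Sketch of the pure-pixel extraction loop: at each step the data are
    projected onto a direction orthogonal to the subspace spanned by the
    endmembers already found, and the pixel with the extreme projection
    is taken as the next endmember.
    """
    bands, pixels = R.shape
    E = np.zeros((bands, p))                  # endmember signatures found so far
    indices = []
    for k in range(p):
        if k == 0:
            P = np.eye(bands)                 # no endmembers yet: full space
        else:
            A = E[:, :k]
            P = np.eye(bands) - A @ np.linalg.pinv(A)   # orthogonal complement of span(A)
        d = P @ rng.standard_normal(bands)    # random direction in the complement
        proj = d @ R                          # project every pixel onto d
        idx = int(np.argmax(np.abs(proj)))    # extreme of the projection
        E[:, k] = R[:, idx]
        indices.append(idx)
    return E, indices

# Illustrative use on synthetic linear mixtures of 3 endmembers
rng = np.random.default_rng(1)
true_E = rng.uniform(0.1, 1.0, size=(50, 3))        # 50 bands, 3 signatures
abund = rng.dirichlet(np.ones(3), size=2000).T      # abundances sum to one
data = true_E @ abund
E_hat, idx = extract_endmembers(data, p=3)
print(idx)
```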
Abstract:
Electromagnetic scattering inverse problems, microwave imaging, reconstruction of dielectric media, remote sensing, tomography
Abstract:
The electromagnetic scattering behaviour of a superstrate-loaded metallo-dielectric structure based on Sierpinski carpet fractal geometry is reported. The results indicate that the frequency at which backscattering is minimum can be tuned by varying the thickness of the superstrate. A reduction in backscattered power of 44 dB is obtained simultaneously for both TE and TM polarisations of the incident field.
Abstract:
The goal of the review is to provide a state-of-the-art survey on sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems will be presented. We study the concepts and analytical results for several recent sampling and probe methods. We will give an introduction to the basic idea behind each method using a simple model problem and then provide some general formulation in terms of particular configurations to study the range of the arguments which are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch) and the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey about applications to different situations.
Abstract:
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the earlier proofs relying on the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
Abstract:
The goal of this paper is to study and further develop the orthogonality sampling or stationary waves algorithm for the detection of the location and shape of objects from the far field pattern of scattered waves in electromagnetics or acoustics. Orthogonality sampling can be seen as a special beamforming algorithm with some links to the point source method and to the linear sampling method. The basic idea of orthogonality sampling is to sample the space under consideration by calculating scalar products of the measured far field pattern with a test function, for all points y in a subset Q of the space ℝ^m, m = 2, 3. The way in which this is carried out is important to extract the information which the scattered fields contain. The theoretical foundation of orthogonality sampling is only partly resolved, and the goal of this work is to initiate further research by numerical demonstration of the high potential of the approach. We implement the method for a two-dimensional setting for the Helmholtz equation, which represents electromagnetic scattering when the setup is independent of the third coordinate. We show reconstructions of the location and shape of objects from measurements of the scattered field for one or several directions of incidence and one or many frequencies or wave numbers, respectively. In particular, we visualize the indicator function both for the Dirichlet and for the Neumann boundary condition and for complicated inhomogeneous media.
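To make the sampling idea concrete, the following is a minimal numerical sketch (not the authors' code) of an orthogonality-sampling type indicator in 2D: for each grid point z it evaluates the modulus of the scalar product of a measured far field pattern with the test function e^(i*kappa*x_hat.z) over the observation directions. The synthetic far field of a point-like scatterer, the wave number, and the sign convention of the test function are illustrative assumptions.

```python
import numpy as np

def orthogonality_sampling_indicator(u_far, directions, grid, kappa):
    """u_far: far field samples at unit observation directions (n,),
    directions: (n, 2) unit vectors x_hat, grid: (m, 2) sampling points z.

    Returns |<u_far, exp(i*kappa*x_hat.z)>| for every grid point, a simple
    discretization of an orthogonality-sampling functional.
    """
    phases = np.exp(1j * kappa * grid @ directions.T)      # (m, n)
    return np.abs(phases @ u_far) / len(u_far)

# Synthetic example: far field of a single point-like scatterer at z0
kappa = 10.0
z0 = np.array([0.3, -0.2])
theta = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
x_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)
u_far = np.exp(-1j * kappa * x_hat @ z0)                   # point-source far field model

# Sample a grid and locate the maximum of the indicator
xs = np.linspace(-1, 1, 81)
X, Y = np.meshgrid(xs, xs)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
I = orthogonality_sampling_indicator(u_far, x_hat, grid, kappa)
print(grid[np.argmax(I)])    # should be close to z0
```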
Abstract:
This work presents the development of a computational algorithm for the analysis of electromagnetic scattering from isolated plasmonic nanostructures. The three-dimensional Method of Moments (MoM-3D) was used to numerically solve the electric field integral equation, and the Lorentz-Drude model was used to represent the complex permittivity of the metallic nanostructures. Based on this mathematical modeling, a computational algorithm written in the C language was developed. As an example of application and validation of the code, two classical electromagnetic scattering problems involving metallic nanoparticles were analyzed, a nanosphere and a nanorod, for which the spectral response and the near-field distribution were calculated. The results were compared with results calculated by other models, and good agreement and convergence between them was observed.
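The Lorentz-Drude permittivity model mentioned above has a standard closed form: a free-electron Drude term plus a sum of Lorentz oscillators. The snippet below is a minimal sketch of that formula; the particular oscillator parameters shown are illustrative placeholders, not the values used in the work.

```python
import numpy as np

def lorentz_drude_permittivity(omega, omega_p, f, gamma, omega_res):
    """Relative permittivity eps(w) = 1 - f[0]*wp^2 / (w^2 + i*gamma[0]*w)
    + sum_j f[j]*wp^2 / (w_j^2 - w^2 - i*gamma[j]*w).

    omega: angular frequencies (rad/s); f, gamma, omega_res: oscillator
    strengths, damping rates and resonance frequencies (omega_res[0] = 0
    corresponds to the free-electron Drude term).
    """
    omega = np.asarray(omega, dtype=complex)
    eps = np.ones_like(omega)
    eps -= f[0] * omega_p**2 / (omega**2 + 1j * gamma[0] * omega)     # Drude term
    for fj, gj, wj in zip(f[1:], gamma[1:], omega_res[1:]):           # Lorentz terms
        eps += fj * omega_p**2 / (wj**2 - omega**2 - 1j * gj * omega)
    return eps

# Illustrative (placeholder) parameters for a generic metal, in rad/s
omega_p = 1.37e16
f = [0.76, 0.05]
gamma = [8.0e13, 6.0e14]
omega_res = [0.0, 6.3e14]
wavelengths = np.linspace(400e-9, 1000e-9, 5)
omega = 2 * np.pi * 3e8 / wavelengths
print(lorentz_drude_permittivity(omega, omega_p, f, gamma, omega_res))
```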
Abstract:
In this paper, we present an analysis of the resonant response of modified triangular metallic nanoparticles with polynomial sides. The particles are illuminated by an incident plane wave and the method of moments is used to solve the electromagnetic scattering problem numerically. We investigate the spectral response and near-field distribution as a function of the length and polynomial order of the nanoparticles. Our results show that, in the analyzed wavelength range (0.5-1.8 µm), these particles possess a smaller number of resonances, and that their resonant wavelengths, near-field enhancement and field confinement are higher than those of the conventional triangular particle with linear sides.
Abstract:
In the present thesis, the factorization method is studied for the detection of regions whose material parameters deviate discontinuously from the background. Using an abstract formulation, we prove the range identity underlying the method for general real elliptic problems and deduce both already known and new applications of the method. For the specific problem of localizing magnetic or perfectly electrically conducting objects by low-frequency electromagnetic radiation, we show the unique solvability of the direct problem for sufficiently small frequencies and the convergence of the solutions to those of the elliptic equations of magnetostatics. By applying our general result, we obtain the unique reconstructability of the sought objects from electromagnetic measurements and a numerical algorithm for localizing the objects. Using a model problem, we investigate how inclusions described by parabolic differential equations can be reconstructed inside a region described by elliptic differential equations. We prove the unique solvability of the underlying parabolic-elliptic direct problem and, by an extension of the factorization method, obtain the unique reconstructability of the inclusions as well as a numerical algorithm for the practical implementation of the method.
Abstract:
The electromagnetic nucleon form factors are fundamental quantities that are closely related to the electromagnetic structure of the nucleons. The behaviour of the electric and magnetic Sachs form factors G_E and G_M as functions of Q^2, the negative square of the four-momentum transfer in the electromagnetic scattering process, is directly related, via the Fourier transform, to the spatial charge and current distributions in the nucleons. Precise measurements of the form factors over a wide Q^2 range are therefore needed for a quantitative understanding of nucleon structure.

Since there are no free neutron targets, measuring the neutron form factors is difficult compared to measurements on the proton. As a consequence, the accuracy of the available neutron form factor data is considerably lower than that of the proton form factors, and the covered Q^2 range is smaller as well. The electric Sachs form factor of the neutron, G_E^n, is particularly difficult to measure because, owing to the vanishing net charge of the neutron, it is very small compared to the other nucleon form factors. G_E^n characterizes the charge distribution of the electrically neutral neutron and is therefore especially sensitive to the inner structure of the neutron.

In the work presented here, G_E^n was determined from beam helicity asymmetries in the quasielastic scattering of polarized electrons off polarized ^3He, ^3He(e,e'n)pp, at a momentum transfer of Q^2 = 1.58 (GeV/c)^2. The measurement took place in Mainz at the Mainz Microtron electron accelerator facility within the A1 collaboration in the summer of 2008.

Longitudinally polarized electrons with an energy of 1.508 GeV were scattered off a polarized ^3He gas target, which served as an effective polarized neutron target. The scattered electrons were detected in coincidence with the knocked-out neutrons; the electrons were detected in a magnetic spectrometer, and by detecting the neutrons in a matrix of plastic scintillators the contribution of quasielastic scattering off the proton was suppressed.

Cross-section asymmetries with respect to the electron helicity are sensitive to G_E^n / G_M^n when the target polarization is oriented in the scattering plane and perpendicular to the momentum transfer; from their measurement G_E^n can be determined, since the magnetic form factor G_M^n is known with comparatively high precision. Additional measurements of the asymmetry with the polarization oriented parallel to the momentum transfer were used to reduce systematic errors.

The measurement, including the statistical (stat) and systematic (sys) errors, yielded G_E^n = 0.0244 +/- 0.0057_stat +/- 0.0016_sys.
Abstract:
In the present thesis we address the problem of detecting and localizing a small spherical target with characteristic electrical properties inside a volume of cylindrical shape, representing the female breast, with MWI. One of the main contributions of this project is to properly extend the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two different approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna is used to illuminate the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is registered. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. During this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and performance analysis for the real model are reported.
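The pseudospectrum construction referred to above can be sketched numerically: from a multistatic data matrix one extracts the noise subspace via the SVD and evaluates, at each voxel, the inverse of the projection of a test Green's-function vector onto that subspace; single-frequency maps are then combined incoherently. The snippet below is a hedged, simplified free-space illustration of that idea, not the thesis implementation; the antenna layout, the rank-one Born-type data model, and the frequencies are assumptions.

```python
import numpy as np

def music_pseudospectrum(K, antennas, grid, k0, n_signal=1):
    """K: (n_ant x n_ant) multistatic matrix at one frequency,
    antennas: (n_ant, 3) positions, grid: (m, 3) voxel centres,
    k0: wavenumber. Returns the MUSIC pseudospectrum on the grid."""
    U, s, Vh = np.linalg.svd(K)
    Un = U[:, n_signal:]                       # noise subspace
    r = np.linalg.norm(grid[:, None, :] - antennas[None, :, :], axis=2)
    G = np.exp(1j * k0 * r) / (4 * np.pi * r)  # free-space Green's vectors (m, n_ant)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    return 1.0 / (np.linalg.norm(G @ Un.conj(), axis=1) ** 2 + 1e-12)   # 1/||P_noise g(z)||^2

# Toy multi-frequency example: one point scatterer seen by a ring of antennas
rng = np.random.default_rng(0)
antennas = np.array([[0.1 * np.cos(a), 0.1 * np.sin(a), 0.0]
                     for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)])
target = np.array([0.03, -0.02, 0.0])
xs = np.linspace(-0.06, 0.06, 25)
grid = np.array([[x, y, 0.0] for x in xs for y in xs])

combined = np.zeros(len(grid))
for freq in [1.0e9, 1.5e9, 2.0e9]:             # incoherent multi-frequency combination
    k0 = 2 * np.pi * freq / 3e8
    d = np.linalg.norm(antennas - target, axis=1)
    g_t = np.exp(1j * k0 * d) / (4 * np.pi * d)
    K = np.outer(g_t, g_t)                     # rank-one Born-type multistatic matrix
    K = K + 1e-6 * rng.standard_normal(K.shape)    # small perturbation / noise
    combined += music_pseudospectrum(K, antennas, grid, k0)
print(grid[np.argmax(combined)])               # should be near the target position
```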
Abstract:
Radio-frequency (RF) coils are designed such that they induce homogeneous magnetic fields within some region of interest inside a magnetic resonance imaging (MRI) scanner. Loading the scanner with a patient disrupts the homogeneity of these fields and can lead to a considerable degradation of the quality of the acquired image. In this paper, an inverse method is presented for designing RF coils, in which the presence of a load (patient) within the MRI scanner is accounted for in the model. To approximate the finite length of the coil, a Fourier series expansion is considered for the coil current density and for the induced fields. Regularization is used to solve this ill-conditioned inverse problem for the unknown Fourier coefficients. That is, the error between the induced and homogeneous target fields is minimized along with an additional constraint, chosen in this paper to represent the curvature of the coil windings. Smooth winding patterns are obtained for both unloaded and loaded coils. RF fields with a high level of homogeneity are obtained in the unloaded case, and a limit to the level of homogeneity attainable is observed in the loaded case.
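The regularized inversion described above (minimize the misfit between induced and target fields plus a curvature penalty on the winding pattern) reduces, for a truncated Fourier series, to a standard Tikhonov-type least-squares problem for the coefficient vector. Below is a minimal, hedged sketch of that generic step; the system matrix, target field, and curvature operator are random or illustrative placeholders, not the paper's electromagnetic model.

```python
import numpy as np

def regularized_coefficients(A, b, L, lam):
    """Solve min_c ||A c - b||^2 + lam * ||L c||^2 for the Fourier coefficients c.

    A: maps coefficients to the induced field at target points,
    b: desired homogeneous target field, L: penalty operator (e.g. curvature),
    lam: regularization parameter balancing homogeneity against smoothness.
    """
    lhs = A.conj().T @ A + lam * (L.conj().T @ L)
    rhs = A.conj().T @ b
    return np.linalg.solve(lhs, rhs)

# Illustrative placeholder problem: 200 field points, 20 Fourier coefficients
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
b = np.ones(200)                               # homogeneous target field
L = np.diff(np.eye(20), n=2, axis=0)           # second-difference (curvature-like) penalty
for lam in [1e-4, 1e-2, 1.0]:
    c = regularized_coefficients(A, b, L, lam)
    print(lam, np.linalg.norm(A @ c - b), np.linalg.norm(L @ c))
```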
Abstract:
We have obtained total and differential cross sections for the strangeness-changing charged-current weak reaction ν̄_L + p → Λ(Σ⁰) + L⁺ using standard dipole form factors, where L stands for an electron, muon, or tau lepton, and L⁺ stands for a positron, anti-muon, or anti-tau lepton. We calculated these reactions from near threshold (a few hundred MeV) to 8 GeV of incoming neutrino energy and obtained the contributions of the various form factors to the total and differential cross sections. We did this in support of possible experiments which might be carried out by the MINERνA collaboration at Fermilab. The calculation is phenomenologically based and makes use of SU(3) relations to obtain the standard vector current form factors and data from Λ beta decay to obtain the axial current form factor. We also made estimates for the contributions of the pseudoscalar form factor and of the F_E and F_S form factors to the total and differential cross sections. We discuss our results and consider under what circumstances we might extract the various form factors. In particular, we wish to test the SU(3) assumptions made in determining all the form factors over a range of q² values. Recently, new form factors were obtained from recoil proton measurements in electron-proton electromagnetic scattering at Jefferson Lab. We thus calculated the contributions of the individual form factors to the total and differential cross sections for this new set of form factors. We found that the differential and total cross sections for Λ production change only slightly between the two sets of form factors, but that the differential and total cross sections change substantially for Σ⁰ production. We discuss the possibility of distinguishing between the two cases for the experiments planned by the MINERνA Collaboration. We also undertook the calculation for the inverse reaction e⁻ + p → Λ + ν_e for a polarized outgoing Λ, which might be performed at Jefferson Lab, and provided additional analysis of the contributions of the individual form factors to the differential cross sections for this case.
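The "standard dipole form factors" referred to above have a simple closed form, G(Q²) = G(0) / (1 + Q²/M²)². The snippet below is a small illustrative sketch of that parametrization only; the normalizations and dipole masses shown are common textbook values, not necessarily those used in the thesis.

```python
import numpy as np

def dipole_form_factor(Q2, g0, M_dipole):
    """Standard dipole parametrization G(Q^2) = g0 / (1 + Q^2 / M^2)^2.

    Q2 in GeV^2, M_dipole in GeV, g0 = G(0) is the normalization at Q^2 = 0.
    """
    return g0 / (1.0 + Q2 / M_dipole**2) ** 2

# Illustrative values: a magnetic form factor normalized to mu_p ~ 2.793 with
# the common vector dipole mass M_V ~ 0.84 GeV, and an axial form factor with
# g_A ~ 1.27 and M_A ~ 1.0 GeV (textbook numbers, used here only as examples).
Q2 = np.linspace(0.0, 2.0, 5)
print(dipole_form_factor(Q2, 2.793, 0.84))
print(dipole_form_factor(Q2, 1.27, 1.0))
```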
Abstract:
We analytically calculate the time-averaged electromagnetic energy stored inside a nondispersive magnetic isotropic cylinder that is obliquely irradiated by an electromagnetic plane wave. An expression for the optical-absorption efficiency in terms of the magnetic internal coefficients is also obtained. In the low absorption limit, we derive a relation between the normalized internal energy and the optical-absorption efficiency that is not affected by the magnetism and the incidence angle. This relation, indeed, seems to be independent of the shape of the scatterer. This universal aspect of the internal energy is connected to the transport velocity and consequently to the diffusion coefficient in the multiple scattering regime. Magnetism favors high internal energy for low size parameter cylinders, which leads to a low diffusion coefficient for electromagnetic propagation in 2D random media. (C) 2010 Optical Society of America