919 results for Discrete Fourier analysis
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also discussed. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed. © 2004 Elsevier Inc. All rights reserved.
Abstract:
This paper presents a rectangular array antenna with a suitable signal-processing algorithm that is able to steer the beam in azimuth over a wide frequency band. In the previous approach, which was reported in the literature, an inverse discrete Fourier transform technique was proposed for obtaining the signal weighting coefficients. This approach was demonstrated for large arrays in which the physical parameters of the antenna elements were not considered. In this paper, a modified signal-weighting algorithm that works for arbitrary-size arrays is described. Its validity is demonstrated in examples of moderate-size arrays with real antenna elements. It is shown that in some cases the original beam-forming algorithm fails, while the new algorithm is able to form the desired radiation pattern over a wide frequency band. The performance of the new algorithm is assessed for two cases: one in which mutual coupling between array elements is neglected, and one in which it is taken into account.
Abstract:
We present a new method of modeling imaging of laser beams in the presence of diffraction. Our method is based on the concept of first orthogonally expanding the resultant diffraction field (that would have otherwise been obtained by the laborious application of the Huygens diffraction principle) and then representing it by an effective multimodal laser beam with different beam parameters. We show not only that the process of obtaining the new beam parameters is straightforward but also that it permits a different interpretation of the diffraction-caused focal shift in laser beams. All of the criteria that we have used to determine the minimum number of higher-order modes needed to accurately represent the diffraction field show that the mode-expansion method is numerically efficient. Finally, the characteristics of the mode-expansion method are such that it allows modeling of a vast array of diffraction problems, regardless of the characteristics of the incident laser beam, the diffracting element, or the observation plane. (C) 2005 Optical Society of America.
Abstract:
This article presents an array antenna with beam-steering capability in azimuth over a wide frequency band using real-valued weighting coefficients that can be realized in practice by amplifiers or attenuators. The described beamforming scheme relies on a 2D (instead of 1D) array structure in order to make sure that there are enough degrees of freedom to realize a given radiation pattern in both the angular and frequency domains. In the presented approach, weights are determined using an inverse discrete Fourier transform (IDFT) technique by neglecting the mutual coupling between array elements. Because of the presence of mutual coupling, the actual array produces a radiation pattern with increased side-lobe levels. In order to counter this effect, the design aims to realize the initial radiation pattern with a lower side-lobe level. This strategy is demonstrated in the design example of a 4 × 4 element array. (C) 2005 Wiley Periodicals, Inc.
Abstract:
This article presents the design of a wideband rectangular array of planar monopoles, which is able to steer its beam and nulls over a wide frequency band using real-valued weights. These weights can be realized in practice by amplifiers or attenuators, leading to low-cost development of a wideband array antenna with beam and null steering capability. The weights are determined by applying an inverse discrete Fourier transform to an assumed radiation pattern. This wideband beam and null forming concept is verified by full electromagnetic simulations which take into account mutual coupling effects between the array elements.
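The IDFT weighting step that the beamforming abstracts above describe — sampling an assumed radiation pattern on a uniform grid and applying a 2-D inverse discrete Fourier transform to obtain real-valued element weights — can be sketched as follows. This is an illustrative sketch only, not the authors' algorithm: the 4 × 4 size echoes the design example above, but the sample pattern and all values are invented.

```python
import numpy as np

def idft_weights(desired_pattern):
    """Map a desired pattern, sampled on a uniform angle/frequency
    grid, to element weights via a 2-D inverse DFT (sketch only)."""
    w = np.fft.ifft2(desired_pattern)
    # Keep only the real part: real-valued weights can be realized
    # with amplifiers or attenuators, as the abstracts note.
    return np.real(w)

# Hypothetical 4 x 4 example: a single dominant beam/frequency bin.
pattern = np.zeros((4, 4), dtype=complex)
pattern[1, 2] = 1.0  # invented beam position
weights = idft_weights(pattern)
print(weights.shape)  # (4, 4)
```

In practice the papers refine these initial weights (e.g. designing for a lower side-lobe level to absorb the degradation caused by mutual coupling); the sketch shows only the core transform.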
Abstract:
We evaluate the performance of composite leading indicators of turning points of inflation in the Euro area, constructed by combining the techniques of Fourier analysis and Kalman filters with the National Bureau of Economic Research methodology. In addition, the study compares the empirical performance of Euro Simple Sum and Divisia monetary aggregates and provides a tentative answer to the issue of whether or not the UK should join the Euro area. Our findings suggest that, first, the cyclical pattern of the different composite leading indicators very closely reflects that of the inflation cycle for the Euro area; second, the empirical performance of the Euro Divisia is better than its Simple Sum counterpart; and third, the UK is better off out of the Euro area. © 2005 Taylor & Francis Group Ltd.
Abstract:
Simulations examining pattern competition have been performed on a horizontal homogeneously heated layer that is bounded by an isothermal plane above an adiabatic plane. Several different circulation patterns arose as the heating regime applied to the horizontal layer was modified. The sequence of the patterns formed as the Grashof number was increased had the following order: laminar, z-axis rolls, squares, hexagons and pentagons, pentagons and then two different square modes of differing orientations. Fourier analysis was used to determine how the key modes interact in the presence of different patterns.
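The Fourier step mentioned above — identifying which wavevector modes dominate a convection planform — can be sketched with a 2-D FFT. The synthetic "pattern" field below (rolls plus weaker cross-rolls, wavenumber 3) and the grid size are invented for illustration; only the mode-identification idea comes from the abstract.

```python
import numpy as np

# Synthetic planform on a periodic domain: rolls along x plus weaker
# cross-rolls along y (wavenumber 3 in each direction; values invented).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.cos(3 * X) + 0.5 * np.cos(3 * Y)

# 2-D FFT; the amplitude spectrum reveals which wavevectors dominate.
spec = np.abs(np.fft.fft2(field)) / field.size
strongest = np.argsort(spec, axis=None)[::-1][:4]   # four strongest bins
kx, ky = np.unravel_index(strongest, spec.shape)
print(sorted(zip(kx.tolist(), ky.tolist())))
# [(0, 3), (0, 61), (3, 0), (61, 0)]  -- each real mode appears with its
# conjugate bin; (3, 0) is the roll mode, (0, 3) the cross-roll mode.
```

Tracking the amplitudes of such bins as the Grashof number varies is one standard way to see how competing modes (rolls, squares, hexagons) exchange dominance.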
Abstract:
The last decade has seen a considerable increase in the application of quantitative methods in the study of histological sections of brain tissue and especially in the study of neurodegenerative disease. These disorders are characterised by the deposition and aggregation of abnormal or misfolded proteins in the form of extracellular protein deposits such as senile plaques (SP) and intracellular inclusions such as neurofibrillary tangles (NFT). Quantification of brain lesions and studying the relationships between lesions and normal anatomical features of the brain, including neurons, glial cells, and blood vessels, has become an important method of elucidating disease pathogenesis. This review describes methods for quantifying the abundance of a histological feature such as density, frequency, and 'load' and the sampling methods by which quantitative measures can be obtained including plot/quadrat sampling, transect sampling, and the point-quarter method. In addition, methods for determining the spatial pattern of a histological feature, i.e., whether the feature is distributed at random, regularly, or is aggregated into clusters, are described. These methods include the use of the Poisson and binomial distributions, pattern analysis by regression, Fourier analysis, and methods based on mapped point patterns. Finally, the statistical methods available for studying the degree of spatial correlation between pathological lesions and neurons, glial cells, and blood vessels are described.
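One of the simplest tests in the family this review covers — deciding whether quadrat counts of a lesion depart from a random (Poisson) pattern — is the variance-to-mean ratio. The sketch below is illustrative: the counts are invented, and only the criterion itself (ratio ≈ 1 for random, > 1 for aggregated, < 1 for regular) is standard spatial statistics.

```python
import numpy as np

def variance_mean_ratio(counts):
    """Variance-to-mean ratio of quadrat counts: ~1 for a random
    (Poisson) pattern, >1 for clustering, <1 for a regular pattern."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical lesion counts in ten equal-sized quadrats:
clustered = [0, 0, 1, 0, 12, 9, 0, 1, 0, 11]   # aggregated into clusters
regular = [3, 4, 3, 4, 3, 3, 4, 3, 4, 3]       # evenly spread
print(variance_mean_ratio(clustered))  # well above 1
print(variance_mean_ratio(regular))    # well below 1
```

A formal test would compare the ratio against its sampling distribution (it is chi-square-related under the Poisson null), but the ratio alone already separates the three qualitative patterns the review describes.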
Abstract:
Simulations examining pattern competition have been performed on a horizontal homogeneously heated layer that is bounded by an isothermal plane above an adiabatic plane. Several different circulation patterns arose as the heating regime applied to the horizontal layer was modified. The sequence of the patterns formed as the Grashof number was increased had the following order: laminar layer, rolls, squares, hexagons and pentagons, and then two square modes of differing orientations. Fourier analysis was used to determine how the key modes interact with each pattern.
Abstract:
The feasibility of using a small-scale avalanche tester to measure the flow properties of pharmaceutical lactose powders was examined. The modes of behaviour observed in larger systems were displayed, with a clear distinction between angular, free-flowing particles and more spherical particles with similar flow characteristics. Angular Lactohale LH100 particles showed slumping behaviour at a rotational frequency of 0.33 Hz, which disappeared at higher frequencies. Spherical lactose powder with a similar flow function to LH100 showed only rolling behaviour under the same conditions, as did the more cohesive powders LH200 and LH300. Further investigation of the LH100 data using fast Fourier analysis showed that the slumping frequency was 1/10th of the rotational frequency.
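The Fourier step described above — picking the slumping frequency out of an avalanche-tester record — can be sketched on synthetic data. The signal model, sample rate, and noise level below are invented; only the 0.33 Hz rotational frequency and the 1/10th ratio come from the abstract.

```python
import numpy as np

fs = 20.0                      # sample rate in Hz (assumed)
t = np.arange(0, 300, 1 / fs)  # a 300 s record
f_rot = 0.33                   # drum rotational frequency (from the abstract)
f_slump = f_rot / 10           # slumping at 1/10th of the rotational frequency

# Synthetic avalanche-tester signal: a slow slumping cycle plus noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * f_slump * t) + 0.3 * rng.standard_normal(t.size)

# FFT of the record; the dominant non-DC peak gives the slumping frequency.
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[1 + np.argmax(spec[1:])]
print(peak)  # close to 0.033 Hz, i.e. f_rot / 10
```

The frequency resolution is fs / N (here 1/300 Hz), so a record several minutes long is needed to resolve a peak this slow; the real measurement would face the same constraint.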
Abstract:
We analyze the physical-chemical surface properties of single-slit, single-groove subwavelength-structured silver films with high-resolution transmission electron microscopy and calculate exact solutions to Maxwell’s equations corresponding to recent far-field interferometry experiments using these structures. Contrary to a recent suggestion, the surface analysis shows that the silver films are free of detectable contaminants. The finite-difference time-domain calculations, in excellent agreement with experiment, show a rapid fringe amplitude decrease in the near zone (slit-groove distance out to 3–4 wavelengths). Extrapolation to slit-groove distances beyond the near zone shows that the surface wave evolves to the expected bound surface plasmon polariton (SPP). Fourier analysis of these results indicates the presence of a distribution of transient, evanescent modes around the SPP that dephase and dissipate as the surface wave evolves from the near to the far zone.
Abstract:
Highlights of Data Expedition:
• Students explored daily observations of local climate data spanning the past 35 years.
• Topological Data Analysis, or TDA for short, provides cutting-edge tools for studying the geometry of data in arbitrarily high dimensions.
• Using TDA tools, students discovered intrinsic dynamical features of the data and learned how to quantify periodic phenomena in a time-series.
• Since nature invariably produces noisy data which rarely has exact periodicity, students also considered the theoretical basis of almost-periodicity and even invented and tested new mathematical definitions of almost-periodic functions.
Summary
The dataset we used for this data expedition comes from the Global Historical Climatology Network. “GHCN (Global Historical Climatology Network)-Daily is an integrated database of daily climate summaries from land surface stations across the globe.” Source: https://www.ncdc.noaa.gov/oa/climate/ghcn-daily/ We focused on the daily maximum and minimum temperatures from January 1, 1980 to April 1, 2015 collected from RDU International Airport. Through a guided series of exercises designed to be performed in Matlab, students explore these time-series, initially by direct visualization and basic statistical techniques. Then students are guided through a special sliding-window construction which transforms a time-series into a high-dimensional geometric curve. These high-dimensional curves can be visualized by projecting down to lower dimensions as in the figure below (Figure 1); however, our focus here was to use persistent homology to study the high-dimensional embedding directly. The shape of these curves carries meaningful information, but how one describes the “shape” of data depends on the scale at which the data is considered, and choosing the appropriate scale is rarely obvious. Persistent homology overcomes this obstacle by allowing us to quantitatively study geometric features of the data across multiple scales.
Through this data expedition, students are introduced to numerically computing persistent homology using the Rips collapse algorithm and interpreting the results. In the specific context of sliding-window constructions, 1-dimensional persistent homology can reveal the nature of periodic structure in the original data. I created a special technique to study how these high-dimensional sliding-window curves form loops in order to quantify the periodicity. Students are guided through this construction and learn how to visualize and interpret this information. Climate data is extremely complex (as anyone who has suffered from a bad weather prediction can attest), and numerous variables play a role in determining our daily weather and temperatures. This complexity, coupled with imperfections of measuring devices, results in very noisy data, which causes the annual seasonal periodicity to be far from exact. To this end, I have students explore existing theoretical notions of almost-periodicity and test them on the data. They find that some existing definitions are inadequate in this context. Hence I challenged them to invent new mathematics by proposing and testing their own definitions. These students rose to the challenge and suggested a number of creative definitions. While autocorrelation and spectral methods based on Fourier analysis are often used to explore periodicity, the construction here provides an alternative paradigm to quantify periodic structure in almost-periodic signals using tools from topological data analysis.
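The sliding-window construction described above — turning a 1-D time-series into a curve in a high-dimensional space — can be sketched directly. The window dimension, delay, and toy series below are illustrative, and the persistent-homology step itself (which needs a TDA library) is omitted; the point is only the embedding the expedition builds on.

```python
import numpy as np

def sliding_window(series, dim, tau=1):
    """Embed a 1-D series as the point cloud
    {(x[i], x[i+tau], ..., x[i+(dim-1)*tau])} in R^dim."""
    series = np.asarray(series, dtype=float)
    n = series.size - (dim - 1) * tau
    return np.stack([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

# A periodic toy series: its sliding-window curve closes into a loop,
# which 1-dimensional persistent homology would report as a long-lived
# cycle in the persistence diagram.
t = np.linspace(0, 4 * np.pi, 200)
cloud = sliding_window(np.sin(t), dim=3, tau=10)
print(cloud.shape)  # (180, 3)
```

Feeding `cloud` to a persistent-homology package would then quantify how loop-like the curve is, which is the periodicity measure the expedition develops; for the noisy, almost-periodic climate data the loop is imperfect, and the persistence of the 1-dimensional class measures how close to periodic the signal is.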