946 results for finite difference time-domain analysis
Abstract:
A theoretical analysis of a symmetric T-shaped microstrip-fed rectangular microstrip antenna using the finite-difference time-domain (FDTD) method is presented in this paper. The resonant frequency, return loss, impedance bandwidth, and radiation patterns are predicted and are in good agreement with the measured results.
Abstract:
There is a recent trend to describe physical phenomena without the use of infinitesimals or infinities. This has been accomplished by replacing differential calculus with finite difference theory. Discrete function theory was first introduced in 1941; it is concerned with the study of functions defined on a discrete set of points in the complex plane. The theory was extensively developed for functions defined on a Gaussian lattice. In 1972 a very suitable lattice H = {q^m x_0 + i q^n y_0 : x_0 > 0, y_0 > 0, 0 < q < 1, m, n in Z} was found, and discrete analytic function theory was developed on it. Very recently some work has been done in discrete monodiffric function theory for functions defined on H. The theory of pseudoanalytic functions is a generalisation of the theory of analytic functions: when the generator becomes the identity, i.e., (1, i), the theory of pseudoanalytic functions reduces to the theory of analytic functions. Though the theory of pseudoanalytic functions plays an important role in analysis, no discrete theory is available in the literature. This thesis is an attempt in that direction: a discrete pseudoanalytic theory is derived for functions defined on H.
Abstract:
In the theory of the Navier-Stokes equations, the proofs of some basic known results, like for example the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually only use a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove (by means of counterexamples) that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that proof strategies used for a number of classical results are not sufficient to affirmatively answer these open questions. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be non-stable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces, the uniqueness of solutions (under smallness assumptions) to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces, the second order convergence of a version of the Crank-Nicolson scheme, and a new proof of the first order convergence of the implicit Euler scheme for the Navier-Stokes equations. 
For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
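For reference, the two time-stepping schemes at the heart of these results can be written, in standard notation for the Navier-Stokes system (the notation below is assumed for illustration, not taken from the thesis, and the Crank-Nicolson scheme admits several nonlinear variants as the abstract notes):

```latex
% Implicit (backward) Euler step:
\frac{u^{n+1}-u^{n}}{\Delta t}
  + (u^{n+1}\cdot\nabla)u^{n+1} - \nu\Delta u^{n+1} + \nabla p^{n+1} = f^{n+1},
  \qquad \nabla\cdot u^{n+1} = 0.

% One common nonlinear Crank--Nicolson step, writing
% N(u) = (u\cdot\nabla)u - \nu\Delta u:
\frac{u^{n+1}-u^{n}}{\Delta t}
  + \tfrac{1}{2}\bigl(N(u^{n+1}) + N(u^{n})\bigr) + \nabla p^{n+1/2} = f^{n+1/2},
  \qquad \nabla\cdot u^{n+1} = 0.
```

The averaging of N between time levels is what yields second-order accuracy for Crank-Nicolson, at the cost of the stability and uniqueness subtleties discussed above.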
Abstract:
Introduction: Glaucoma is the third leading cause of blindness worldwide, and timely diagnosis requires evaluating the optic nerve cup, which is related to the optic disc area. Some reports suggest that large disc areas (macrodiscs) may be protective, while others associate them with susceptibility to glaucoma. Objective: To establish whether there is an association between macrodisc and glaucoma in individuals examined with Optical Coherence Tomography (OCT) at the Fundación Oftalmológica Nacional. Methods: Cross-sectional association study including 25 eyes with primary open-angle glaucoma and 74 healthy eyes. Each individual underwent an ophthalmologic examination, computerized visual field testing, and optic nerve OCT. Optic disc areas and the number of macrodiscs were compared between groups, with macrodisc defined according to Jonas as an area greater than the mean plus two standard deviations, and according to Adabache, who studied a Mexican population, as an area ≥3.03 mm2. Results: The mean optic disc area was 2.78 vs. 2.80 mm2 (glaucoma vs. healthy). By the Jonas criterion, one macrodisc was observed, in the healthy group; by the Adabache criterion, eight and twenty-five macrodiscs were found (glaucoma vs. healthy) (OR=0.92, 95% CI=0.35-2.43). Discussion: There was no significant difference (P=0.870) in disc area between the two groups, and the percentage of macrodiscs was similar in both; however, the small number of macrodiscs did not allow statistical conclusions about the association between macrodisc and glaucoma.
Abstract:
The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to the researcher to gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods to analyze weather/climate modes of variability and to reduce the dimensionality of the system. Simple structure rotation of EOFs can enhance interpretability of the obtained patterns but cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. The ICA is viewed here as a method of EOF rotation. Starting from an initial EOF solution, rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence between the components in the time domain. If the underlying climate signals have an independent forcing, one can expect to find loadings with interpretable patterns whose time coefficients have properties that go beyond the simple noncorrelation observed in EOFs. The methodology is presented and an application to the monthly mean sea level pressure (SLP) field is discussed. Among the rotated (to independence) EOFs, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation–like pattern, and a Scandinavian-like pattern have been identified. There is the suggestion that the NAO is an intrinsic mode of variability independent of the Pacific.
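The EOF-then-ICA rotation pipeline can be sketched in a few lines on synthetic data (the `sklearn` estimators below are stand-ins for the paper's implementation, and the three mixed sources are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.arange(500)

# Three statistically independent "climate" time signals
S = np.c_[np.sign(np.sin(0.07 * t)),                 # quasi-oscillatory mode
          rng.laplace(size=t.size),                  # heavy-tailed mode
          np.sin(0.013 * t)]                         # slow mode
A = rng.normal(size=(3, 50))                         # spatial loading patterns
X = S @ A + 0.1 * rng.normal(size=(t.size, 50))      # observed field (time x space)

# Step 1: EOF analysis = PCA of the anomaly field
pcs = PCA(n_components=3).fit_transform(X - X.mean(axis=0))

# Step 2: rotate the retained EOFs toward temporal independence (ICA)
ics = FastICA(n_components=3, random_state=0).fit_transform(pcs)
print(pcs.shape, ics.shape)   # → (500, 3) (500, 3)
```

The key design point mirrors the abstract: ICA is applied to the truncated EOF solution, so it acts as a rotation of the leading modes rather than a decomposition of the raw field.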
Abstract:
We provide a system identification framework for the analysis of THz-transient data. The subspace identification algorithm for both deterministic and stochastic systems is used to model the time-domain responses of structures under broadband excitation. Structures with additional time delays can be modelled within the state-space framework using additional state variables. We compare the numerical stability of the commonly used least-squares ARX models to that of the subspace N4SID algorithm by using examples of fourth-order and eighth-order systems under pulse and chirp excitation conditions. These models correspond to structures having two and four simultaneously propagating modes, respectively. We show that chirp excitation combined with the subspace identification algorithm can provide a better identification of the underlying mode dynamics than the ARX model does as the complexity of the system increases. The use of an identified state-space model for mode demixing, upon transformation to a decoupled realization form, is illustrated. Applications of state-space models and the N4SID algorithm to THz transient spectroscopy as well as to optical systems are highlighted.
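For reference, the least-squares ARX baseline that the comparison starts from can be sketched as follows (a low-order toy system with hypothetical coefficients, not the fourth- or eighth-order structures of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a known two-pole ARX process (stable: poles at |z| ≈ 0.84):
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
a1, a2, b1, b2 = 1.5, -0.7, 1.0, 0.5
u = rng.normal(size=2000)                     # broadband excitation
y = np.zeros_like(u)
for k in range(2, u.size):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 0.01*rng.normal()

# Least-squares ARX fit: regress y[k] on lagged outputs and inputs
Phi = np.c_[y[1:-1], y[:-2], u[1:-1], u[:-2]]
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)   # ≈ [1.5, -0.7, 1.0, 0.5]
```

The paper's point is that as the model order grows, this normal-equations formulation becomes ill-conditioned, whereas the N4SID subspace algorithm (built on numerically robust projections and SVDs) degrades more gracefully.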
Abstract:
We show that an analysis of the mean and variance of discrete wavelet coefficients of coaveraged time-domain interferograms can be used as a specification for determining when to stop coaveraging. We also show that, if a prediction model built in the wavelet domain is used to determine the composition of unknown samples, a stopping criterion for the coaveraging process can be developed with respect to the uncertainty tolerated in the prediction.
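A minimal sketch of this idea, assuming a single-level Haar transform and a synthetic interferogram (the authors' actual wavelet, decomposition depth, and tolerance are not specified here), monitors the change in detail-coefficient variance as the number of coaveraged scans grows:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-((t - 0.3) ** 2) / 1e-3)         # idealized interferogram burst

stop_n, prev_var = None, None
for n in (4, 16, 64, 256):                        # candidate coaverage counts
    avg = clean + rng.normal(0.0, 0.5, size=(n, t.size)).mean(axis=0)
    _, d = haar_dwt(avg)
    var = d.var()                                 # noise variance falls like 1/n
    if prev_var is not None and abs(prev_var - var) < 5e-3:
        stop_n = n                                # variance has levelled off: stop
        break
    prev_var = var
print(stop_n)
```

The detail coefficients of a smooth burst are dominated by noise, so their variance tracks the residual noise level of the coaverage; once further averaging no longer changes it appreciably, continuing brings little benefit.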
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating observed meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, the objective analysis is a three-dimensional (or mostly two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been intensively used for many years. Weather services have thus based their analysis not only on synoptic data at the time of the analysis and climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique on the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified also for the conventional observations. We have a fairly good coverage of surface observations 8 times a day and several upper air stations are making radiosonde and radiowind observations 4 times a day. If we have a 3-hour step in the analysis-forecasting cycle instead of 12 hours, which is applied most often, we may without any difficulties treat all observations as synoptic. 
No observation would thus be more than 90 minutes off time and the observations even during strong transient motion would fall within a horizontal mesh of 500 km * 500 km.
Abstract:
We discuss the modeling of dielectric responses of electromagnetically excited networks which are composed of a mixture of capacitors and resistors. Such networks can be employed as lumped-parameter circuits to model the response of composite materials containing conductive and insulating grains. The dynamics of the excited network systems are studied using a state space model derived from a randomized incidence matrix. Time and frequency domain responses from synthetic data sets generated from state space models are analyzed for the purpose of estimating the fraction of capacitors in the network. Good results were obtained by using either the time-domain response to a pulse excitation or impedance data at selected frequencies. A chemometric framework based on a Successive Projections Algorithm (SPA) enables the construction of multiple linear regression (MLR) models which can efficiently determine the ratio of conductive to insulating components in composite material samples. The proposed method avoids restrictions commonly associated with Archie’s law, the application of percolation theory or Kohlrausch-Williams-Watts models and is applicable to experimental results generated by either time domain transient spectrometers or continuous-wave instruments. Furthermore, it is quite generic and applicable to tomography, acoustics as well as other spectroscopies such as nuclear magnetic resonance, electron paramagnetic resonance and, therefore, should be of general interest across the dielectrics community.
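The variable-selection-plus-MLR step can be illustrated with a toy example (synthetic single-RC impedance magnitudes standing in for the network responses; all values are illustrative, and the routine below is a simplified SPA, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: |Z| of a parallel RC cell at 20 frequencies, with the capacitance
# scaled by the capacitor fraction p of each synthetic "network" sample.
p = rng.uniform(0.2, 0.8, size=100)                  # capacitor fractions
freqs = np.logspace(0, 3, 20)                        # 1 Hz .. 1 kHz
Z = 1.0 / (1.0 / 100.0 + 1j * 2 * np.pi * freqs * 1e-6 * p[:, None])
X = np.abs(Z) + 0.01 * rng.normal(size=(100, 20))    # noisy impedance magnitudes

def spa_select(X, k):
    """Simplified Successive Projections Algorithm: greedily pick k columns,
    each time choosing the one with the largest residual after projecting
    out the columns already selected (i.e. the least collinear one)."""
    Xc = X - X.mean(axis=0)
    sel = [int(np.argmax(np.linalg.norm(Xc, axis=0)))]
    for _ in range(k - 1):
        Q, _ = np.linalg.qr(Xc[:, sel])
        resid = Xc - Q @ (Q.T @ Xc)                  # project out selected columns
        resid[:, sel] = 0.0
        sel.append(int(np.argmax(np.linalg.norm(resid, axis=0))))
    return sel

cols = spa_select(X, 3)
A = np.c_[np.ones(p.size), X[:, cols]]               # MLR design matrix
coef, *_ = np.linalg.lstsq(A, p, rcond=None)         # multiple linear regression
pred = A @ coef
print(np.corrcoef(pred, p)[0, 1])                    # close to 1
```

Selecting a few minimally collinear frequencies before the regression is what keeps the MLR model well-conditioned, which is the role SPA plays in the chemometric framework described above.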
Abstract:
This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an algebraic explicit model. This investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper and an algebraic model to predict the extrudate swell problem is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. Mass and momentum conservation, together with the constitutive equation (UCM or algebraic model), were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growths, beyond those usually obtained with standard differential viscoelastic models.
Abstract:
This paper considers the stability of explicit, implicit and Crank-Nicolson schemes for the one-dimensional heat equation on a staggered grid. Furthermore, we consider the cases when both explicit and implicit approximations of the boundary conditions are employed. Why we choose to do this is clearly motivated and arises from solving fluid flow equations with free surfaces when the Reynolds number can be very small, in at least parts of the spatial domain. A comprehensive stability analysis is supplied; a novel result is the precise stability restriction on the Crank-Nicolson method when the boundary conditions are approximated explicitly, that is, at t = n delta t rather than t = (n + 1) delta t. The two-dimensional Navier-Stokes equations were then solved by a marker and cell approach for two simple problems that had analytic solutions. It was found that the stability results provided in this paper were qualitatively very similar, thereby providing insight as to why a Crank-Nicolson approximation of the momentum equations is only conditionally stable.
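The contrast between the conditionally stable explicit scheme and the unconditionally stable Crank-Nicolson scheme (here with implicitly enforced Dirichlet boundary rows) can be demonstrated numerically. This sketch uses the 1-D heat equation on a regular, non-staggered grid with a time step four times the explicit limit, so it illustrates the general stability mechanism rather than the paper's staggered-grid analysis:

```python
import numpy as np

# 1-D heat equation u_t = nu*u_xx on [0,1] with homogeneous Dirichlet ends.
nu, nx = 1.0, 51
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)

def step_explicit(u, dt):
    """Forward-Euler step; stable only if nu*dt/dx**2 <= 1/2."""
    un = u.copy()
    un[1:-1] += nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def step_crank_nicolson(u, dt):
    """Crank-Nicolson step with implicit (Dirichlet) boundary rows."""
    r = nu * dt / (2.0 * dx**2)
    n = u.size
    A = (1.0 + 2.0 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    B = (1.0 - 2.0 * r) * np.eye(n) + r * np.eye(n, k=1) + r * np.eye(n, k=-1)
    for M in (A, B):                       # boundary rows: keep end values fixed
        M[0, :] = 0.0; M[-1, :] = 0.0
        M[0, 0] = 1.0; M[-1, -1] = 1.0
    return np.linalg.solve(A, B @ u)

# Smooth initial data plus a tiny high-frequency seed that exposes instability
u0 = np.sin(np.pi * x) + 1e-6 * (-1.0) ** np.arange(nx)
dt = 2.0 * dx**2 / nu                      # four times the explicit limit
ue = uc = u0
for _ in range(50):
    ue = step_explicit(ue, dt)
    uc = step_crank_nicolson(uc, dt)
print(np.abs(ue).max(), np.abs(uc).max())  # explicit blows up, CN stays bounded
```

The highest spatial mode is amplified by roughly a factor of 7 per explicit step at this time step, while every Crank-Nicolson amplification factor stays below one in magnitude.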
Abstract:
Autonomic dysfunction is associated with increased mortality in diabetic patients, especially those with cardiovascular disease. Peripheral neuropathy, poor glycemic control, dyslipidemia and hypertension are among the risk factors for the development of peripheral vascular disease (PVD) in these patients. The aim of this study was to evaluate the risk factors associated with the presence of PVD in patients with type 2 diabetes mellitus (DM). A cross-sectional study was conducted in 84 patients with type 2 DM (39 men, mean age 64.9 ± 7.5 years). Patients underwent clinical and laboratory evaluation. The presence of PVD was defined using a handheld Doppler ultrasound device (ankle-brachial index < 0.9). Autonomic activity was assessed through heart rate variability (HRV) analysis by time- and frequency-domain (spectral analysis) methods, and by the three-dimensional return map, during daytime and nighttime periods. For the HRV analysis, a 24-hour electrocardiogram was recorded and the tapes were analyzed on a Mars 8000 (Marquette) Holter analyzer. Spectral power was quantified as the area in two frequency bands: 0.04-0.15 Hz, low frequency (LF), and 0.15-0.5 Hz, high frequency (HF). The LF/HF ratio was calculated for each patient. The three-dimensional return map was constructed with a mathematical model analyzing the RR intervals versus the difference between adjacent RR intervals versus the number of counts observed, and was quantified by three indices reflecting sympathetic (P1) and vagal (P2 and P3) modulation. PVD was present in 30 (36%) patients. In univariate analysis, patients with PVD had lower indices of autonomic modulation (spectral analysis) than patients without PVD: LF = 0.19 ± 0.07 m/s2 vs. 0.29 ± 0.11 m/s2, P = 0.0001; LF/HF = 1.98 ± 0.9 vs. 3.35 ± 1.83, P = 0.001.
In addition, the index reflecting sympathetic activity in the three-dimensional return map (P1) was lower in patients with PVD (61.7 ± 9.4 vs. 66.8 ± 9.7 arbitrary units, P = 0.04) during the night, reflecting greater sympathetic activation in this period. These patients also had a longer duration of diabetes (20 ± 8.1 vs. 15.3 ± 6.7 years, P = 0.006), and higher systolic blood pressure (154 ± 20 vs. 145 ± 20 mmHg, P = 0.04), waist-hip ratio (0.98 ± 0.09 vs. 0.92 ± 0.08, P = 0.01) and HbA1c levels (7.7 ± 1.6 vs. 6.9 ± 1.7%, P = 0.04), as well as higher triglyceride values (259 ± 94 vs. 230 ± 196 mg/dl, P = 0.03) and urinary albumin excretion (685.5 ± 1359.9 vs. 188.2 ± 591.1 μg/min, P = 0.02) than patients without PVD. Patients with PVD also showed a higher prevalence of diabetic nephropathy (73.3% vs. 29.6%, P = 0.0001), retinopathy (73.3% vs. 44.4%, P = 0.02) and peripheral neuropathy (70.5% vs. 35.1%, P = 0.006). The groups did not differ in age, body mass index, smoking or presence of coronary artery disease. In multivariate logistic analysis, PVD remained associated with autonomic dysfunction even after controlling for systolic blood pressure, DM duration, HbA1c, triglycerides and urinary albumin excretion. In conclusion, patients with PVD and type 2 DM show reduced indices of autonomic modulation, which may represent an additional risk factor for increased mortality in these patients.
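The frequency-domain HRV indices used in this kind of study (LF and HF band powers and their ratio, with the standard band edges LF 0.04-0.15 Hz and HF 0.15-0.5 Hz) can be sketched on a synthetic, evenly resampled RR tachogram; everything else in the snippet is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 4.0                                    # evenly resampled tachogram rate, Hz
t = np.arange(0, 300, 1.0 / fs)             # 5 minutes of data
# Toy RR series: an LF (0.10 Hz) and an HF (0.25 Hz) oscillation plus noise
rr = (0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t)
          + 0.02 * np.sin(2 * np.pi * 0.25 * t)
          + 0.005 * rng.normal(size=t.size))

x = rr - rr.mean()                          # remove the mean RR level
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)   # one-sided periodogram
f = np.fft.rfftfreq(x.size, 1.0 / fs)

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return psd[m].sum() * (f[1] - f[0])     # integrate PSD over the band

lf = band_power(0.04, 0.15)                 # low-frequency power
hf = band_power(0.15, 0.50)                 # high-frequency power
print(lf / hf)                              # ≈ (0.03 / 0.02)**2 = 2.25
```

The LF/HF ratio summarizes sympathovagal balance: the two synthetic oscillation amplitudes were chosen so the expected ratio is simply the squared amplitude ratio.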
Abstract:
The scheme is based on Ami Harten's ideas (Harten, 1994), with the main tools coming from wavelet theory, in the framework of multiresolution analysis for cell averages. But instead of evolving cell averages on the finest uniform level, we propose to evolve only the cell averages on the grid determined by the significant wavelet coefficients. Typically, there are few cells at each time step: big cells in smooth regions, and smaller ones close to irregularities of the solution. For the numerical flux, we use a simple uniform central finite difference scheme, adapted to the size of each cell. If any required neighboring cell average is not present, it is interpolated from coarser scales; in the finest part of the grids we switch to an ENO scheme. To show the feasibility and efficiency of the method, it is applied to a system arising in polymer flooding of an oil reservoir. In terms of CPU time and memory requirements, it outperforms Harten's multiresolution algorithm.

The proposed method applies to systems of conservation laws in 1D,

$$\partial_t u(x,t) + \partial_x f(u(x,t)) = 0, \qquad u(x,t) \in \mathbb{R}^m. \quad (1)$$

In the spirit of finite volume methods, we consider the explicit scheme

$$v_\mu^{n+1} = v_\mu^n - \frac{\Delta t}{h_\mu}\left(\bar f_\mu - \bar f_{\mu^-}\right) = [D v^n]_\mu, \quad (2)$$

where $\mu$ is a point of an irregular grid $\Gamma$, $\mu^-$ is the left neighbor of $\mu$ in $\Gamma$, $v_\mu^n \approx \frac{1}{\mu-\mu^-}\int_{\mu^-}^{\mu} u(x,t_n)\,dx$ are approximate cell averages of the solution, $\bar f_\mu = \bar f_\mu(v^n)$ are the numerical fluxes, and $D$ is the numerical evolution operator of the scheme.

Depending on the definition of $\bar f_\mu$, several schemes of this type have been proposed and successfully applied (LeVeque, 1990); Godunov, Lax-Wendroff, and ENO are some of the popular names. The Godunov scheme resolves shocks well, but its first-order accuracy is poor in smooth regions. Lax-Wendroff is of second order, but produces dangerous oscillations close to shocks. ENO schemes are good alternatives, with high order and without serious oscillations, but the price is a high computational cost.

Harten proposed in (Harten, 1994) a simple strategy to save expensive ENO flux calculations. The basic tools come from multiresolution analysis for cell averages on uniform grids, and the principle is that wavelet coefficients can be used to characterize local smoothness. Typically, only a few wavelet coefficients are significant. At the finest level, they indicate discontinuity points, where ENO numerical fluxes are computed exactly; elsewhere, cheaper fluxes can be safely used, or simply interpolated from coarser scales. Different applications of this principle have been explored by several authors, see for example (G.-Müller and Müller, 1998). Our scheme also uses Harten's ideas, but instead of evolving the cell averages on the finest uniform level, we evolve the cell averages on sparse grids associated with the significant wavelet coefficients. This means that the total number of cells is small, with big cells in smooth regions and smaller ones close to irregularities. This task requires improved new tools, which are described next.
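The uniform-grid building block of such finite-volume schemes can be sketched with the classical Lax-Friedrichs flux. The snippet below is illustrative only: it uses Burgers' equation rather than the polymer-flooding system, and a periodic uniform grid rather than the adaptive irregular grid of the method:

```python
import numpy as np

# Uniform-grid finite-volume update u_i <- u_i - (dt/h)*(F_{i+1/2} - F_{i-1/2})
# for Burgers' equation u_t + (u^2/2)_x = 0, with the Lax-Friedrichs flux.
nx = 400
h = 1.0 / nx
x = (np.arange(nx) + 0.5) * h                   # cell centers, periodic domain
u = np.where(x < 0.5, 1.0, 0.0)                 # step data: a right-moving shock

f = lambda v: 0.5 * v**2                        # Burgers flux
dt = 0.4 * h                                    # CFL number 0.4 (max |f'(u)| = 1)
for _ in range(200):                            # advance to t = 0.2
    ul, ur = u, np.roll(u, -1)                  # states left/right of face i+1/2
    F = 0.5 * (f(ul) + f(ur)) - 0.5 * (h / dt) * (ur - ul)
    u = u - dt / h * (F - np.roll(F, 1))        # conservative update
print(u.min(), u.max())                         # solution stays within [0, 1]
```

Lax-Friedrichs is monotone under this CFL condition, so the shock (which travels at speed 1/2 and sits near x = 0.6 at t = 0.2) is captured without oscillations, at the cost of heavy numerical diffusion; that robustness-versus-smearing trade-off is exactly what motivates reserving ENO fluxes for the cells flagged by significant wavelet coefficients.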
Abstract:
This article analyzes the electrical parameters of a 3-phase transmission line using a 280-m-high steel tower that has been proposed for the Amazon transmission system in Brazil. The height of the line conductors and the distance between them are intrinsically related to the longitudinal and transverse parameters of the line. Hence, an accurate study is carried out in order to show the electrical variations between a transmission line using the new technology and a conventional 3-phase 440-kV line, considering a wide range of frequencies and variable soil resistivity. First, a brief review of the fundamental theory of line parameters is presented. In addition, by using a digital line model, simulations are carried out in the time domain to analyze possible and critical over-voltage transients on the proposed line representation.
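As a back-of-the-envelope illustration of how conductor height enters the longitudinal and transverse line parameters, the single-conductor image-method formulas can be evaluated for a conventional tower and a very tall one. This is a simplified sketch: the heights and conductor radius are illustrative assumptions (not the paper's line data), and a real 3-phase line requires the full parameter matrices with bundled conductors and frequency-dependent earth-return corrections:

```python
import math

mu0 = 4e-7 * math.pi       # vacuum permeability, H/m
eps0 = 8.854e-12           # vacuum permittivity, F/m

def line_params(h, r):
    """External inductance and capacitance per metre of a single conductor of
    radius r at height h above a perfectly conducting ground plane (image method)."""
    L = mu0 / (2.0 * math.pi) * math.log(2.0 * h / r)   # H/m
    C = 2.0 * math.pi * eps0 / math.log(2.0 * h / r)    # F/m
    return L, C

# Conventional (~25 m) vs. very tall (280 m) tower, 1.5 cm conductor radius
for h in (25.0, 280.0):
    L, C = line_params(h, 0.015)
    Zc = math.sqrt(L / C)                               # surge impedance, ohm
    print(f"h = {h:5.1f} m: L = {L*1e6:.2f} uH/m, "
          f"C = {C*1e12:.2f} pF/m, Zc = {Zc:.0f} ohm")
```

Raising the conductor increases the per-unit-length inductance and decreases the capacitance, which raises the surge impedance; this is the basic mechanism behind the parameter differences and transient behaviour the article studies for the 280 m tower.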