924 results for Space-time block coding (STBC)
Abstract:
Graduate studies in Arts - IA
Abstract:
It has been shown that the vertical structure of the Brazil Current (BC)-Intermediate Western Boundary Current (IWBC) System is dominated by the first baroclinic mode at 22°S-23°S. In this work, we employed the Miami Isopycnic Coordinate Ocean Model to investigate whether the rich mesoscale activity of this current system, between 20°S and 28°S, is reproduced by a two-layer approximation of its vertical structure. The model results showed cyclonic and anticyclonic meanders propagating southwestward along the current axis, resembling the dynamical pattern of Rossby waves superposed on a mean flow. Analysis of the upper layer zonal velocity component, using a space-time diagram, revealed a dominant wavelength of about 450 km and a phase velocity of about 0.20 m s⁻¹ southwestward. The results also showed that the eddy-like structures slowly grew in amplitude as they moved downstream. Despite the simplified design of the numerical experiments conducted here, these results compared favorably with observations and seem to indicate that weakly unstable long baroclinic waves are responsible for most of the variability observed in the BC-IWBC system. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering of data distributed across different sites. These methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we augment two FCM-based algorithms for clustering distributed data by arriving at constructive ways of determining their essential parameters (including the number of clusters) and by forming a set of systematically structured guidelines, such as selecting the specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments illustrates the main ideas discussed in the study.
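For readers unfamiliar with the base method the collaborative and parallel variants extend, here is a minimal single-site FCM sketch (the standard alternating update of memberships and centers); the collaborative/parallel machinery discussed in the abstract is not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain (single-site) Fuzzy C-Means.
    X: (n, d) data array; c: number of clusters; m > 1: fuzzifier.
    Returns cluster centers and the fuzzy membership matrix U (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # random normalized memberships
    for _ in range(max_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted means
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))                       # u_ik ∝ d_ik^(-2/(m-1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

On two well-separated blobs the centers converge to the blob means, which is the behavior the distributed variants must preserve across sites.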
Abstract:
The use of geoid models to estimate the Mean Dynamic Topography (MDT) was stimulated by the launch of the GRACE satellite system, since its models offer unprecedented precision and space-time resolution. In the present study, besides the DNSC08 mean sea level model, the following geoid models were used to compute the MDTs: EGM96, EIGEN-5C and EGM2008. In the method adopted, geostrophic currents for the South Atlantic were computed from the MDTs. It was found that the degree and order of the geoid models directly affect the determination of the MDT and the currents. The presence of noise in the MDT requires efficient filtering techniques, such as the filter based on Singular Spectrum Analysis, which presents significant advantages over conventional filters. Geostrophic currents derived from the geoid models were compared with the HYCOM hydrodynamic numerical model. In conclusion, the results show that the MDTs and respective geostrophic currents calculated with the EIGEN-5C and EGM2008 models are similar to the results of the numerical model, especially regarding the main large-scale features such as boundary currents and the retroflection at the Brazil-Malvinas Confluence.
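The step from an MDT grid to surface geostrophic currents uses the standard geostrophic balance, u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x with f = 2Ω sin(lat). The finite-difference sketch below illustrates that step only; the grid handling is an assumption, and the noise filtering the study emphasizes is omitted.

```python
import numpy as np

OMEGA = 7.2921e-5    # Earth's rotation rate, rad/s
G = 9.81             # gravitational acceleration, m/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def geostrophic_currents(mdt, lat, lon):
    """Surface geostrophic velocities (u, v) in m/s from an MDT grid
    mdt[j, i] (metres) on a regular lat/lon grid (degrees):
        u = -(g/f) dMDT/dy,  v = (g/f) dMDT/dx,  f = 2*Omega*sin(lat).
    Not valid near the equator, where f -> 0."""
    lat_r = np.deg2rad(lat)
    f = 2 * OMEGA * np.sin(lat_r)[:, None]                 # Coriolis parameter
    dy = R_EARTH * np.gradient(lat_r)                      # metres per lat index
    dx = R_EARTH * np.cos(lat_r)[:, None] * np.deg2rad(np.gradient(lon))[None, :]
    deta_dy = np.gradient(mdt, axis=0) / dy[:, None]
    deta_dx = np.gradient(mdt, axis=1) / dx
    return -G / f * deta_dy, G / f * deta_dx
```

A uniform meridional MDT slope in the Southern Hemisphere yields a uniform eastward-or-westward flow whose magnitude follows directly from the balance, which makes the routine easy to sanity-check.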
Abstract:
We propose an alternative, nonsingular cosmic scenario based on gravitationally induced particle production. The model is an attempt to evade the coincidence and cosmological constant problems of the standard model (ΛCDM) and also to connect the early and late time accelerating stages of the Universe. Our space-time emerges from a pure initial de Sitter stage, thereby providing a natural solution to the horizon problem. Subsequently, due to an instability provoked by the production of massless particles, the Universe evolves smoothly to the standard radiation-dominated era, thereby ending the production of radiation as required by conformal invariance. Next, the radiation becomes subdominant, with the Universe entering the cold dark matter dominated era. Finally, the negative pressure associated with the creation of cold dark matter (CCDM model) particles accelerates the expansion and drives the Universe to a final de Sitter stage. The late time cosmic expansion history of the CCDM model is exactly like that of the standard ΛCDM model; however, there is no dark energy. The model evolves between two limiting (early and late time) de Sitter regimes. All the stages are also discussed in terms of a scalar field description. This complete scenario is fully determined by two extreme energy densities, or equivalently, the associated de Sitter Hubble scales, connected by ρ_I/ρ_f = (H_I/H_f)² ∼ 10¹²², a result that has no correlation with the cosmological constant problem. We also study the linear growth of matter perturbations at the final accelerating stage. It is found that the CCDM growth index can be written as a function of the Λ growth index, γ_Λ ≃ 6/11. In this framework, we also compare the observed growth rate of clustering with that predicted by the current CCDM model. Performing a χ² statistical test, we show that the CCDM model provides growth rates that match the observed growth rate of structure sufficiently well.
Abstract:
We study, in a d-dimensional space-time, the nonanalyticity of the thermal free energy in the scalar φ⁴ theory as well as in QED. We find that the infrared divergent contributions induce, when d is even, a nonanalyticity in the coupling α of the form α^((d-1)/2), whereas when d is odd the nonanalyticity is only logarithmic.
Abstract:
We prove that the hard thermal loop contribution to static thermal amplitudes can be obtained by setting all the external four-momenta to zero before performing the Matsubara sums and loop integrals. At one-loop order we apply an iterative procedure to all the one-particle irreducible one-loop diagrams, and at two-loop order we consider the self-energy. Our approach is general enough to include theories with any kind of interaction vertices, such as gravity in the weak field approximation, in d space-time dimensions. This result is valid whenever the external fields are all bosonic.
Abstract:
We present an analytic description of numerical results for the Landau-gauge SU(2) gluon propagator D(p²), obtained from lattice simulations (in the scaling region) for the largest lattice sizes to date, in d = 2, 3 and 4 space-time dimensions. Fits to the gluon data in 3d and in 4d show very good agreement with the tree-level prediction of the refined Gribov-Zwanziger (RGZ) framework, supporting a massive behavior for D(p²) in the infrared limit. In particular, we investigate the propagator's pole structure and provide estimates of the dynamical mass scales that can be associated with dimension-two condensates in the theory. In the 2d case, fitting the data requires a noninteger power of the momentum p in the numerator of the expression for D(p²). In this case, an infinite-volume-limit extrapolation gives D(0) = 0. Our analysis suggests that this result is related to a particular symmetry in the complex-pole structure of the propagator and not to purely imaginary poles, as would be expected in the original Gribov-Zwanziger scenario.
Abstract:
We compute the effective Lagrangian of static gravitational fields interacting with thermal fields. Our approach employs the usual imaginary time formalism as well as the equivalence between static and space-time independent external gravitational fields. This allows us to obtain a closed-form expression for the thermal effective Lagrangian in d space-time dimensions.
Abstract:
We construct analytical and numerical vortex solutions for an extended Skyrme-Faddeev model in a (3+1)-dimensional Minkowski space-time. The extension is obtained by adding to the Lagrangian a quartic term, which is the square of the kinetic term, and a potential which breaks the SO(3) symmetry down to SO(2). The construction makes use of an ansatz, invariant under the joint action of the internal SO(2) and three commuting U(1) subgroups of the Poincaré group, which reduces the equations of motion to an ordinary differential equation for a profile function depending on the distance to the x₃ axis. The vortices have finite energy per unit length, and have waves propagating along them at the speed of light. The analytical vortices are obtained for a special choice of potentials, and the numerical ones are constructed using the successive over-relaxation method for more general potentials. The spectrum of solutions is analyzed in detail, especially its dependence upon special combinations of coupling constants.
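The successive over-relaxation (SOR) method named above is a classical iterative relaxation scheme. As an illustration of the idea only, here is SOR applied to a 2-D Poisson problem with Dirichlet boundaries; the vortex profile equation in the paper is a different (1-D, nonlinear) ODE solved with the same relaxation principle.

```python
import numpy as np

def sor_poisson(rhs, h, omega=1.8, tol=1e-8, max_iter=20000):
    """Successive over-relaxation for  laplacian(u) = rhs  on a square
    grid with spacing h and zero Dirichlet boundaries.
    omega in (1, 2) over-relaxes the Gauss-Seidel update to speed
    convergence; omega = 1 recovers plain Gauss-Seidel."""
    u = np.zeros_like(rhs)
    n, m = rhs.shape
    for _ in range(max_iter):
        diff = 0.0
        for j in range(1, n - 1):
            for i in range(1, m - 1):
                new = (1 - omega) * u[j, i] + omega * 0.25 * (
                    u[j + 1, i] + u[j - 1, i] + u[j, i + 1] + u[j, i - 1]
                    - h * h * rhs[j, i])
                diff = max(diff, abs(new - u[j, i]))
                u[j, i] = new                     # in-place (Gauss-Seidel) sweep
        if diff < tol:
            break
    return u
```

Checking against the exact solution u = sin(πx)sin(πy) of ∇²u = -2π² sin(πx)sin(πy) confirms the scheme converges to within discretization error.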
Abstract:
We study general properties of the Landau-gauge Gribov ghost form factor σ(p²) for SU(N_c) Yang-Mills theories in the d-dimensional case. We find a qualitatively different behavior for d = 3, 4 with respect to the d = 2 case. In particular, considering any (sufficiently regular) gluon propagator D(p²) and the one-loop-corrected ghost propagator, we prove in the 2d case that the function σ(p²) blows up in the infrared limit p → 0 as -D(0) ln(p²). Thus, for d = 2, the no-pole condition σ(p²) < 1 (for p² > 0) can be satisfied only if the gluon propagator vanishes at zero momentum, that is, D(0) = 0. On the contrary, in d = 3 and 4, σ(p²) is finite even if D(0) > 0. The same results are obtained by evaluating the ghost propagator G(p²) explicitly at one loop, using fitting forms for D(p²) that describe well the numerical data of the gluon propagator in two, three and four space-time dimensions in the SU(2) case. These evaluations also show that, if one considers the coupling constant g² as a free parameter, the ghost propagator admits a one-parameter family of behaviors (labeled by g²), in agreement with previous works by Boucaud et al. In this case the condition σ(0) ≤ 1 implies g² ≤ g_c², where g_c² is a "critical" value. Moreover, a free-like ghost propagator in the infrared limit is obtained for any value of g² smaller than g_c², while for g² = g_c² one finds an infrared-enhanced ghost propagator. Finally, we analyze the Dyson-Schwinger equation for σ(p²) and show that, for infrared-finite ghost-gluon vertices, one can bound the ghost form factor σ(p²). Using these bounds we find again that only in the d = 2 case does one need to impose D(0) = 0 in order to satisfy the no-pole condition. The d = 2 result is also supported by an analysis of the Dyson-Schwinger equation using a spectral representation for the ghost propagator. Thus, if the no-pole condition is imposed, solving the d = 2 Dyson-Schwinger equations cannot lead to a massive behavior for the gluon propagator. These results apply to any Gribov copy inside the so-called first Gribov horizon; i.e., the 2d result D(0) = 0 is not affected by Gribov noise. These findings are also in agreement with lattice data.
Abstract:
We consider modifications of the nonlinear Schrödinger (NLS) model to examine the recently introduced concept of quasi-integrability. We show that such models possess an infinite number of quasi-conserved charges which present intriguing properties in relation to very specific space-time parity transformations. For the case of two-soliton solutions where the fields are eigenstates of this parity, those charges are asymptotically conserved in the scattering process of the solitons: even though the charges vary in time, their values in the far past and the far future are the same. Such results are obtained through analytical and numerical methods, and employ adaptations of algebraic techniques used in integrable field theories. Our findings may have important consequences for the applications of these models in several areas of nonlinear science. We make a detailed numerical study of the modified NLS potential of the form V ∼ (|ψ|²)^(2+ε), with ε a perturbation parameter. We perform numerical simulations of soliton scattering in this model and find good agreement with the results predicted by the analytical considerations. Our paper shows that the quasi-integrability concepts recently proposed in the context of modifications of the sine-Gordon model remain valid for perturbations of the NLS model.
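Numerical soliton experiments of this kind are commonly run with a split-step Fourier integrator. The sketch below uses a nonlinear term (|ψ|²)^(1+ε)ψ as an illustrative stand-in consistent with a potential V ∼ (|ψ|²)^(2+ε); the normalization and exponent conventions are assumptions, not the authors' scheme, and ε = 0 recovers the integrable NLS.

```python
import numpy as np

def split_step_nls(psi0, L, dt, steps, eps=0.0):
    """Strang split-step Fourier integrator for
        i psi_t = -1/2 psi_xx - (|psi|^2)^(1+eps) psi
    on a periodic grid of physical length L.
    Each step: half nonlinear phase rotation, exact linear propagation
    in Fourier space, half nonlinear phase rotation."""
    n = psi0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    lin = np.exp(-0.5j * k ** 2 * dt)          # exact linear propagator
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi *= np.exp(1j * (np.abs(psi) ** 2) ** (1 + eps) * dt / 2)
        psi = np.fft.ifft(lin * np.fft.fft(psi))
        psi *= np.exp(1j * (np.abs(psi) ** 2) ** (1 + eps) * dt / 2)
    return psi
```

Both substeps are pure phase rotations (pointwise or per Fourier mode), so the L² norm is conserved to rounding error; for ε = 0 the sech soliton keeps its shape, a standard correctness check before perturbing the potential.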
Abstract:
Overpopulation of urban areas results from constant migrations that cause disordered urban growth, constituting clusters defined as sets of people or activities concentrated in relatively small physical spaces that often involve precarious conditions. Aim. Using residential grouping, the aim was to identify possible clusters of individuals in São José do Rio Preto, São Paulo, Brazil, who have or have had leprosy. Methods. A population-based, descriptive, ecological study using the MapInfo and CrimeStat techniques, geoprocessing, and space-time analysis evaluated the location of 425 people treated for leprosy between 1998 and 2010. Clusters were defined as concentrations of at least 8 people with leprosy; a distance of up to 300 meters between residences was adopted. Additionally, the year of starting treatment and the clinical forms of the disease were analyzed. Results. Ninety-eight (23.1%) of 425 geocoded cases were located within one of ten clusters identified in this study, and 129 cases (30.3%) were in the region of a second-order cluster, an area considered of high risk for the disease. Conclusion. This study identified ten clusters of leprosy cases in the city and identified an area of high risk for the appearance of new cases of the disease.
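The clustering rule stated in the Methods (chain residences up to 300 m apart, keep groups of at least 8 cases) can be sketched as a simple union-find pass over projected coordinates. This is an illustrative sketch only, not the MapInfo/CrimeStat workflow the study actually used.

```python
import math

def find_clusters(points, max_dist_m=300.0, min_size=8):
    """Group case residences into clusters: chain together points no more
    than max_dist_m apart, then keep groups with at least min_size members.
    points: list of (x, y) in metres (projected coordinates).
    Returns lists of point indices, one per retained cluster."""
    n = len(points)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path compression
            a = parent[a]
        return a

    # Union every pair within the distance threshold (O(n^2), fine for 425 cases).
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= max_dist_m:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_size]
```

A chain of eight residences 30 m apart forms one cluster while isolated far-away cases are discarded, mirroring the study's minimum-size and distance criteria.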
Abstract:
The frequent use of predictive models for the analysis of complex systems, natural or artificial, is changing the traditional approach to environmental and risk problems. The steady improvement in computer processing power eases the use and solution of numerical methods based on a space-time discretization, which allows predictive modeling of complex real systems, reproducing the evolution of their spatial patterns and quantifying the accuracy of the simulation. In this thesis we present an application of different predictive methods (Geomatic, Neural Networks, Land Cover Modeler and Dinamica EGO) in a test area of the Petén, Guatemala. Over recent decades this region, which lies within the Maya Biosphere Reserve, has experienced rapid population growth and uncontrolled pressure on its natural resources. The test area can be divided into sub-regions characterized by different land-use dynamics. Understanding and quantifying these differences allows a better approximation of the real system; it is also necessary to integrate all the physical and socio-economic parameters for a more complete representation of the complexity of human impact. Given the lack of detailed information on the study area, almost all the data were derived from the processing of 11 ETM+, TM and SPOT images; we then carried out a multitemporal analysis of past land-use changes and built the input to feed the predictive models. The 1998 and 2000 data were used in the calibration phase to simulate the land-cover changes of 2003, chosen as the reference date for validating the results. This validation highlights the strengths and limits of each model in the different sub-regions.