293 results for Inhomogeneity
Abstract:
Naturally occurring or man-made systems displaying periodic spatial modulations of their properties on the nanoscale constitute superlattices. Such modulated structures are important both as prototypes of simple nanotechnological devices and as particular examples of emerging spatial inhomogeneity in interacting many-electron systems. Here we investigate the effect that different types of modulation of the system parameters have on the ground-state energy and the charge-density distribution. The superlattices are described by the inhomogeneous attractive Hubbard model, and the calculations are performed with density-functional and density-matrix renormalization group techniques. We find that modulations in local electric potentials are much more effective in shaping the system's properties than modulations in the attractive on-site interaction. This is the same conclusion we previously obtained for repulsive interactions [M.F. Silva, N.A. Lima, A.L. Malvezzi, K. Capelle, Phys. Rev. B 71 (2005) 125130], suggesting that it is not an artifact of a specific state but a general property of modulated structures. (c) 2007 Elsevier Ltd. All rights reserved.
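The abstract's calculations are density-functional and DMRG treatments of the interacting attractive Hubbard model. As a much simpler, hedged illustration of how a periodic modulation of the local potential shapes the charge-density profile of a superlattice, the sketch below diagonalizes a non-interacting tight-binding chain; all parameter names and values (n_sites, period, v_amp, etc.) are illustrative choices, not taken from the paper.

import numpy as np

# Minimal sketch (not the paper's DFT/DMRG calculation): a non-interacting
# tight-binding chain with a periodic on-site potential, illustrating how a
# superlattice modulation shapes the ground-state charge-density profile.

n_sites = 40          # chain length
n_electrons = 20      # number of (spinless) electrons, half filling
t = 1.0               # hopping amplitude
period = 4            # superlattice period
v_amp = 0.5           # amplitude of the on-site potential modulation

# Site-dependent potential: v_amp on every 'period'-th site, 0 elsewhere.
v = np.array([v_amp if (i % period == 0) else 0.0 for i in range(n_sites)])

# Single-particle Hamiltonian with open boundary conditions.
H = np.diag(v)
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = -t

# Fill the lowest single-particle orbitals and accumulate the site densities.
energies, orbitals = np.linalg.eigh(H)
density = (np.abs(orbitals[:, :n_electrons]) ** 2).sum(axis=1)

print("ground-state energy:", energies[:n_electrons].sum())
print("charge density per site:", np.round(density, 3))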
Abstract:
Given the possibility that the universe's matter density is low (Ω0 < 1), cosmologies can be considered with the metric of Friedmann's open universe but with closed hyperbolic manifolds as the physical three-space. These models have nontrivial spatial topology, with the property of producing multiple images of cosmic sources. Here a fit is attempted of 10 of these models to the physical cold and hot spots found by Cayon & Smoot in the COBE/DMR maps. These spots are interpreted as early, distant images of much nearer sources of inhomogeneity. The source for one of the cold spots is seen as the seed of a known supercluster.
Abstract:
This paper deals with two aspects of relativistic cosmologies with closed spatial sections. These spacetimes are based on the theory of general relativity and admit a foliation into space sections S(t), which are spacelike hypersurfaces satisfying the postulate of the closure of space: each S(t) is a three-dimensional closed Riemannian manifold. The topics discussed are: (i) a previously obtained comparison between Thurston geometries and Bianchi-Kantowski-Sachs metrics for such three-manifolds is clarified and developed; and (ii) the implications of global inhomogeneity for locally homogeneous three-spaces of constant curvature are analyzed from an observational viewpoint.
Abstract:
The structural evolution during the drying of wet silica sonogels, with the liquid phase exchanged by acetone and obtained from tetraethoxysilane sonohydrolysis, was studied in situ by small-angle x-ray scattering (SAXS). The periods associated with the structural evolution as determined by SAXS agree with the classical ones established on the basis of the evaporation rate of the liquid phase during the preparation of xerogels. The wet gel can be described as formed by primary particles (microclusters) with characteristic length a ∼ 0.67 nm and a fractal surface, linking together to form mass-fractal structures with mass-fractal dimension D = 2.24 on a length scale ξ ∼ 6.7 nm. As the network collapses while the liquid/vapor meniscus is kept out of the gel volume, the mass-fractal structure becomes more compact, with increasing D, decreasing ξ, and a smoothing of the fractal surface of the microclusters. The time evolution of the density of the wet gels was evaluated exclusively from the SAXS parameters ξ, D, and a. The final dried acetone-exchanged gel presents a Porod inhomogeneity length of about 2.8 nm and apparently exhibits an interesting singularity D → 3, as determined by the mass-fractal modeling used to fit the SAXS intensity data to obtain the parameters ξ and D.
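The abstract does not state which mass-fractal model was fitted. As a hedged reconstruction, the sketch below evaluates a commonly used Teixeira-type mass-fractal structure factor S(q) for primary particles of size a, fractal dimension D and cutoff ξ, together with the standard mass-fractal scaling ρ/ρ_particle ∼ (ξ/a)^(D−3) for the apparent density; this is illustrative and not necessarily the authors' exact procedure.

import numpy as np
from scipy.special import gamma

def mass_fractal_Sq(q, a, D, xi):
    """Teixeira-type mass-fractal structure factor (one common choice)."""
    qa = q * a
    qxi = q * xi
    return 1.0 + (D * gamma(D - 1.0) / qa**D) * \
        np.sin((D - 1.0) * np.arctan(qxi)) / (1.0 + 1.0 / qxi**2)**((D - 1.0) / 2.0)

def relative_density(a, D, xi):
    """Apparent density of a mass-fractal cluster relative to the particles."""
    return (xi / a)**(D - 3.0)

# Values quoted in the abstract for the wet gel.
a, D, xi = 0.67, 2.24, 6.7          # nm, dimensionless, nm
q = np.array([0.1, 1.0, 5.0])       # nm^-1, illustrative momentum transfers
print("S(q):", mass_fractal_Sq(q, a, D, xi))
print("relative density of the wet gel:", relative_density(a, D, xi))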
Abstract:
Locating the subsurface region that most influences the measurements obtained at the Earth's surface is a problem of great relevance in any area of geophysics. In this work, the location of that region, here called the principal zone, is studied for frequency-domain electromagnetic methods, using as source a current line on the surface of a conducting half-space. In the model studied, the interior of this half-space contains a heterogeneity in the form of an infinite layer, or of a prism with square cross-section and infinite length, oriented along the direction of the current line. The difference between the measurement obtained over the half-space containing the heterogeneity and that obtained over the homogeneous half-space depends, among other parameters, on the location of the heterogeneity relative to the transmitter-receiver system. Therefore, with the other parameters held constant, there is a position of the heterogeneity at which its influence on the measurements is maximal. Since this position depends on the conductivity contrast, on the dimensions of the heterogeneity, and on the frequency of the transmitter current, what is characterized is a region, rather than a single position, in which the heterogeneity produces the maximum influence on the measurements. This region was named the principal zone. Once the principal zone is identified, it becomes possible to locate precisely the subsurface bodies that cause the observed anomalies. These are generally conductive bodies of interest for some specific purpose. Locating these bodies during prospecting, besides facilitating exploration, reduces production costs. To locate the principal zone, a detectability function (∆) was defined, capable of measuring the influence of the heterogeneity on the measurements. The function ∆ was computed for the amplitude and phase of the components of the magnetic field measured at the receiver that are tangential (Hx) and normal (Hz) to the Earth's surface. By studying the extrema of ∆ under variations of the conductivity, size, and depth of the heterogeneity, in one-dimensional and two-dimensional models, the dimensions of the principal zone were obtained, both laterally and in depth. The electromagnetic fields in one-dimensional models were obtained in a hybrid manner, by numerically solving the integrals derived from the analytical formulation. For two-dimensional models, the solution was obtained with the finite-element technique. The maxima of ∆ computed for the amplitude of Hx proved to be the most suitable for locating the principal zone. The location obtained from this quantity was more stable than that obtained from the other quantities, under variation of the physical properties and geometric dimensions, in both the one-dimensional and the two-dimensional models. When the conductive heterogeneity is an infinite horizontal layer (1D case), the depth of the central plane of that layer is given by the relation po = 0.17 δo, where po is that depth and δo is the plane-wave skin depth (in a homogeneous medium with conductivity equal to that of the host medium (σ1), at the frequency given by the value of w at which the maximum of ∆ computed for the amplitude of Hx occurs).
For a two-dimensional heterogeneity (2D case), the coordinates of the central axis of the principal zone are given by do = 0.77 r0 (do being the horizontal distance from the axis to the transmitting source) and po = 0.36 δo (po being the depth of the central axis of the principal zone), where r0 is the transmitter-receiver distance and δo is the plane-wave skin depth under the same conditions stipulated for the 1D case. Knowing the values of r0 and δo at which the maximum of ∆ computed for the amplitude of Hx occurs, (do, po) can be determined. To locate the principal zone (or, equivalently, an anomalous conductive zone in the subsurface), a method is suggested that consists in associating each value of the function ∆ of the amplitude of Hx with a point (d, p), generated through the relations d = 0.77 r and p = 0.36 δ, for each w over the entire frequency spectrum of the measurements, for a given set of transmitter-receiver configurations. Contour curves of the isovalues of ∆ are then drawn; as the value of ∆ approaches its maximum, these contours converge on the location and approximate geometric dimensions of the heterogeneity (principal zone).
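A small sketch of the mapping described above, using the relations quoted in the abstract (po = 0.17 δo for the 1D case, do = 0.77 r0 and po = 0.36 δo for the 2D case) together with the standard plane-wave skin-depth expression δ = sqrt(2 / (μ0 σ ω)). Variable names and the numerical example are ours, for illustration only.

import numpy as np

MU0 = 4.0e-7 * np.pi  # magnetic permeability of free space (H/m)

def skin_depth(sigma1, w):
    """Plane-wave skin depth in a homogeneous medium of conductivity sigma1."""
    return np.sqrt(2.0 / (MU0 * sigma1 * w))

def principal_zone_1d(sigma1, w):
    """Depth of the central plane of an infinite conducting layer (1D case)."""
    return 0.17 * skin_depth(sigma1, w)

def principal_zone_2d(r0, sigma1, w):
    """Coordinates (do, po) of the central axis of the principal zone (2D case)."""
    delta0 = skin_depth(sigma1, w)
    return 0.77 * r0, 0.36 * delta0

# Illustrative numbers: sigma1 = 0.01 S/m, f = 1 kHz, r0 = 200 m, with w the
# angular frequency at which the maximum of Delta (amplitude of Hx) occurs.
w = 2.0 * np.pi * 1.0e3
print("1D depth po:", principal_zone_1d(0.01, w), "m")
print("2D axis (do, po):", principal_zone_2d(200.0, 0.01, w), "m")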
Abstract:
This work assessed the homogeneity of the climate series of the Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG) weather station using various statistical techniques. The record from this target station is one of the longest in Brazil, having commenced in 1933 with observations of precipitation, with temperature and other variables added in 1936. It is thus one of the few stations in Brazil with enough data for long-term climate variability and climate change studies. There is, however, a possibility that its data have been contaminated by artifacts over time. Admittedly, there was an intervention on the observations in 1958, with the replacement of instruments, whose impact has not yet been evaluated. The station's surroundings also changed over time from rural to urban, which may have affected the homogeneity of the observations and makes the station less representative for climate studies over larger spatial scales. Homogeneity of the target station was assessed on an annual scale by applying both absolute (single-station) tests and tests relative to the regional climate, to daily precipitation, relative humidity, and maximum (TMax), minimum (TMin), and wet-bulb temperatures. Among these quantities, only precipitation does not exhibit any inhomogeneity. A clear signal of the change of instruments in 1958 was detected in the TMax and relative-humidity data, the latter certainly because of its strong dependence on temperature. This signal is not very clear in TMin, which instead presents non-climatic discontinuities around 1953 and around 1970. A significant homogeneity break is found around 1990 for TMax and wet-bulb temperature. The discontinuities detected after 1958 may have been caused by urbanization, as the observed warming trend at the station is considerably greater than that of the regional climate.
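The abstract does not name the specific homogeneity tests employed. As an illustration of an absolute (single-station) change-point test on an annual series, the sketch below implements a Pettitt test; the synthetic series and all names are ours, not the station's data.

import numpy as np

def pettitt_test(x):
    """Return (change-point index, K statistic, approximate p-value)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t = sum_{i <= t} sum_{j > t} sign(x_i - x_j)
    signs = np.sign(x[:, None] - x[None, :])
    u = np.array([signs[: t + 1, t + 1 :].sum() for t in range(n - 1)])
    k = np.abs(u).max()
    t_break = int(np.abs(u).argmax())
    p_value = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))
    return t_break, k, p_value

# Synthetic annual series with an artificial break, for illustration only.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(20.0, 0.5, 40), rng.normal(21.0, 0.5, 40)])
print(pettitt_test(series))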
Abstract:
The current cosmological dark sector (dark matter plus dark energy) is challenging our comprehension of the physical processes taking place in the Universe. Recently, some authors tried to falsify the basic underlying assumptions of such dark matter-dark energy paradigm. In this Letter, we show that oversimplifications of the measurement process may produce false positives to any consistency test based on the globally homogeneous and isotropic Λ cold dark matter (ΛCDM) model and its expansion history based on distance measurements. In particular, when local inhomogeneity effects due to clumped matter or voids are taken into account, an apparent violation of the basic assumptions (Copernican Principle) seems to be present. Conversely, the amplitude of the deviations also probes the degree of reliability underlying the phenomenological Dyer-Roeder procedure by confronting its predictions with the accuracy of the weak lensing approach. Finally, a new method is devised to reconstruct the effects of the inhomogeneities in a ΛCDM model, and some suggestions of how to distinguish clumpiness (or void) effects from different cosmologies are discussed.
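The Letter builds on the phenomenological Dyer-Roeder (ZKDR) prescription. As a hedged sketch, the code below numerically integrates the standard Dyer-Roeder angular-diameter distance equation in a flat ΛCDM background with smoothness parameter α (α = 1 recovers the homogeneous FLRW distance); parameter values are illustrative and not the Letter's.

import numpy as np
from scipy.integrate import solve_ivp

def E(z, omega_m):
    """Dimensionless Hubble rate of flat LambdaCDM."""
    return np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))

def dyer_roeder_distance(z_max, alpha=0.8, omega_m=0.3, n=500):
    """Angular-diameter distance D_A(z), in units of c/H0, along a clumpy beam."""
    def rhs(z, y):
        d, dprime = y
        e = E(z, omega_m)
        de_dz = 1.5 * omega_m * (1.0 + z)**2 / e
        d2 = -(2.0 / (1.0 + z) + de_dz / e) * dprime \
             - 1.5 * alpha * omega_m * (1.0 + z) / e**2 * d
        return [dprime, d2]

    zs = np.linspace(0.0, z_max, n)
    sol = solve_ivp(rhs, (0.0, z_max), [0.0, 1.0], t_eval=zs, rtol=1e-8)
    return zs, sol.y[0]

zs, d_clumpy = dyer_roeder_distance(2.0, alpha=0.8)
_, d_flrw = dyer_roeder_distance(2.0, alpha=1.0)
print("fractional D_A difference at z = 2:", d_clumpy[-1] / d_flrw[-1] - 1.0)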
Abstract:
Two types of mesoscale wind-speed jet and their effects on boundary-layer structure were studied. The first is a coastal jet off the northern California coast, and the second is a katabatic jet over Vatnajökull, Iceland. Coastal regions are highly populated, and studies of coastal meteorology are of general interest for environmental protection, the fishing industry, and air and sea transportation. Far fewer people live in direct contact with glaciers, but the properties of katabatic flows are important for understanding the response of glaciers to climatic change. Hence, the two jets can potentially influence a vast number of people. Flow response to terrain forcing, transient behavior in time and space, and adherence to simplified theoretical models were examined. The turbulence structure in these stably stratified boundary layers was also investigated. Numerical modeling is the main tool in this thesis; observations are used primarily to ensure realistic model behavior. Simple shallow-water theory provides a useful framework for analyzing high-velocity flows along mountainous coastlines, but for an unexpected reason: waves are trapped in the inversion by the curvature of the wind-speed profile, rather than by an infinite stability in the inversion separating two neutral layers, as assumed in the theory. In the absence of blocking terrain, observations of steady-state supercritical flows are not likely, due to the diurnal variation of flow criticality. In many simplified models, non-local processes are neglected. In the flows studied here, we showed that this is not always a valid approximation. Discrepancies between the simulated katabatic flow and that predicted by an analytical model are hypothesized to be due to non-local effects, such as surface inhomogeneity and slope geometry, neglected in the theory. On a different scale, a reason for variations in the shape of local similarity scaling functions between studies is suggested to be differences in non-local contributions to the velocity variance budgets.
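The abstract discusses flow criticality within a shallow-water framework. As a minimal illustration of the criticality diagnostic (not taken from the thesis), the sketch below computes the Froude number of an inversion-capped layer; all numbers are illustrative.

import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def froude_number(wind_speed, layer_depth, dtheta, theta0=290.0):
    """Froude number Fr = U / sqrt(g' h) of a single shallow layer capped by an
    inversion, with reduced gravity g' = g * dtheta / theta0; Fr > 1 marks
    supercritical flow."""
    g_reduced = G * dtheta / theta0
    return wind_speed / np.sqrt(g_reduced * layer_depth)

# Example: 15 m/s jet, 400 m deep marine layer, 6 K inversion strength.
print("Fr =", round(froude_number(15.0, 400.0, 6.0), 2))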
Abstract:
The g-factor is a constant which connects the magnetic moment $\vec{\mu}$ of a charged particle, of charge q and mass m, with its angular momentum $\vec{J}$. Thus, the magnetic moment can be written $\vec{\mu}_J = g_J \frac{q}{2m}\vec{J}$. The g-factor for a free particle of spin s = 1/2 should take the value g = 2, but due to quantum-electrodynamical effects it deviates from this value by a small amount, the so-called g-factor anomaly $a_e$, which is of the order of $10^{-3}$ for the free electron. This deviation is even bigger if the electron is exposed to high electric fields. Therefore highly charged ions, where the electric field strength reaches values of the order of $10^{13}-10^{16}$ V/cm at the position of the bound electron, are an interesting field of investigation for testing QED calculations. In previous experiments [Häf00, Ver04] using a single hydrogen-like ion confined in a Penning trap, an accuracy of a few parts in $10^{-9}$ was obtained. In the present work a new method for the precise measurement of the electronic g-factor of hydrogen-like ions is discussed. Due to the unavoidable magnetic field inhomogeneity in a Penning trap, a very important contribution to the systematic uncertainty in the previous measurements arose from the elevated energy of the ion required for the measurement of its motional frequencies; it was then necessary to extrapolate the result to vanishing energies. In the new method the energy in the cyclotron degree of freedom is reduced to the minimum attainable energy. The method consists in measuring the reduced cyclotron frequency $\nu_{+}$ indirectly, by coupling the axial to the reduced cyclotron motion through irradiation at the radio frequency $\nu_{coup} = \nu_{+} - \nu_{ax} + \delta$, where $\delta$ is, in principle, an unknown detuning that can be obtained from knowledge of the coupling process. The only unknown parameter is then the desired value of $\nu_+$. As a test, a measurement with, for simplicity, artificially increased axial energy was performed, yielding the result $g_{exp} = 2.000~047~020~8(24)(44)$. This is in perfect agreement with both the theoretical result $g_{theo} = 2.000~047~020~2(6)$ and the previous experimental result $g_{exp1} = 2.000~047~025~4(15)(44)$. In the experimental results the second error bar is due to the uncertainty in the accepted value of the electron's mass. Thus, with the new method, a higher accuracy in the g-factor could, by comparison with the theoretical value, lead to an improved value of the electron's mass. [Häf00] H. Häffner et al., Phys. Rev. Lett. 85 (2000) 5308. [Ver04] J. Verdú et al., Phys. Rev. Lett. 92 (2004) 093002-1.
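A hedged sketch of the frequency bookkeeping described above: the coupling relation ν_coup = ν_+ − ν_ax + δ is taken from the abstract, while the invariance theorem ν_c² = ν_+² + ν_ax² + ν_-² is the standard Penning-trap relation and is not quoted there. All numerical values are illustrative placeholders, not experimental data.

import numpy as np

def reduced_cyclotron_frequency(nu_coup, nu_ax, delta):
    """Recover nu_+ from the measured coupling and axial frequencies."""
    return nu_coup + nu_ax - delta

def free_cyclotron_frequency(nu_plus, nu_ax, nu_minus):
    """Invariance theorem for an ideal Penning trap."""
    return np.sqrt(nu_plus**2 + nu_ax**2 + nu_minus**2)

# Illustrative placeholder frequencies (Hz).
nu_coup, nu_ax, delta, nu_minus = 23.35e6, 0.65e6, 12.0, 9.0e3
nu_plus = reduced_cyclotron_frequency(nu_coup, nu_ax, delta)
print("nu_+ :", nu_plus, "Hz")
print("nu_c :", free_cyclotron_frequency(nu_plus, nu_ax, nu_minus), "Hz")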
Abstract:
In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.
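A schematic sketch of the factorization-method test (Picard criterion), not the paper's implementation: it assumes a discretized, symmetric difference of Neumann-to-Dirichlet maps A and, for each sampling point z, a discretized dipole test function g_z on the boundary. Here A and g_z are random placeholders that only illustrate the linear-algebra structure of the test.

import numpy as np

def factorization_indicator(A, g_z, tol=1e-12):
    """Positive indicator value when the Picard series converges, i.e. g_z lies
    in the range of |A|^(1/2); values near zero classify z as outside the
    inclusion."""
    eigvals, eigvecs = np.linalg.eigh(A)
    coeffs = eigvecs.T @ g_z
    mask = np.abs(eigvals) > tol          # discard the numerical null space
    picard_sum = np.sum(coeffs[mask]**2 / np.abs(eigvals[mask]))
    return 1.0 / picard_sum

# Placeholder data standing in for the discretized ND-map difference and a
# sampled test function (purely illustrative).
rng = np.random.default_rng(1)
B = rng.normal(size=(32, 32))
A = B @ B.T / 32.0
g_z = rng.normal(size=32)
print("indicator value:", factorization_indicator(A, g_z))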
Abstract:
In this thesis we have developed solutions to common issues regarding widefield microscopes, addressing the problem of intensity inhomogeneity within an image and two strong limitations: the impossibility of acquiring high-detail images representative of whole samples, and of imaging deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, named vignetting. In particular, we proposed, for both light and fluorescence microscopy, non-parametric multi-image-based methods, where the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single image completely in focus from objects having significant depth or lying in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. To compare the outcome of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
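As a rough illustration of the multi-image idea behind retrospective vignetting (flat-field) correction, not the algorithm developed in the thesis, the sketch below estimates the vignetting function as the smoothed per-pixel median over a stack of images of different fields of view and divides it out; all names and the synthetic data are ours.

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_vignetting(stack, smooth_sigma=15.0):
    """Estimate a multiplicative vignetting field from an image stack of shape
    (n_images, height, width), assuming sample content averages out."""
    field = np.median(stack, axis=0)
    field = gaussian_filter(field, smooth_sigma)   # suppress residual texture
    return field / field.max()                     # normalize to 1 at the peak

def correct(image, vignetting, eps=1e-6):
    """Divide out the estimated vignetting function."""
    return image / (vignetting + eps)

# Synthetic demonstration: random flat scenes attenuated by a radial fall-off.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
true_vignetting = 1.0 - 0.5 * (((yy - h / 2)**2 + (xx - w / 2)**2) / (h * w / 2))
rng = np.random.default_rng(0)
stack = rng.uniform(0.5, 1.5, (50, h, w)) * true_vignetting
estimated = estimate_vignetting(stack)
print("max abs error:", np.max(np.abs(estimated - true_vignetting)))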