936 results for Maximum entropy
Abstract:
The combined influences of the westerly phase of the quasi-biennial oscillation (QBO-W) and solar maximum (Smax) conditions on the Northern Hemisphere extratropical winter circulation are investigated using reanalysis data and Center for Climate System Research/National Institute for Environmental Studies chemistry climate model (CCM) simulations. The composite analysis of the reanalysis data indicates a strengthened polar vortex in December followed by a weakened polar vortex in February–March under QBO-W during Smax (QBO-W/Smax) conditions. This relationship need not be specific to QBO-W/Smax conditions but may simply require a strengthened vortex in December, which is more likely under QBO-W/Smax. Both the reanalysis data and CCM simulations suggest that the dynamical processes of planetary wave propagation and meridional circulation related to QBO-W around the polar vortex in December are similar in character to those related to Smax; furthermore, both processes may work in concert to maintain a stronger vortex during QBO-W/Smax. In the reanalysis data, the strengthened polar vortex in December is associated with the development of a north–south dipole tropospheric anomaly in the Atlantic sector, similar to the North Atlantic oscillation (NAO), during December–January. The structure of the north–south dipole anomaly has a zonal wavenumber 1 (WN1) component, and the longitude of the anomalous ridge overlaps with that of the climatological ridge in the North Atlantic in January. This implies amplification of the WN1 wave and enhancement of upward WN1 propagation from the troposphere into the stratosphere in January, leading to the weakened polar vortex in February–March. Although WN2 waves do not play a direct role in forcing the stratospheric vortex evolution, their tropospheric response to QBO-W/Smax conditions appears to be related to the maintenance of the NAO-like anomaly in the high-latitude troposphere in January.
These results may provide a possible explanation for the mechanisms underlying the seasonal evolution of wintertime polar vortex anomalies during QBO-W/Smax conditions and the role of the troposphere in this evolution.
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method of determining past salinity uses oxygen isotope (δ18Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ18Ow and salinity through time. Here we use the isotope-enabled fully coupled General Circulation Model (GCM) HadCM3 to test the application of spatially and temporally independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ18Oc), which primarily reflect δ18Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ18Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ18Ow-salinity relationships. Spatial gradients are generally shallower than, but within ∼50% of, the actual simulated LH-to-LGM and LH-to-LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both the spatial and the actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are changes in regional freshwater budgets, ocean circulation, and sea-ice regimes; these can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian, and tropical Pacific Oceans should be the most robust, since these regions exhibit relatively constant δ18Ow-salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
Abstract:
The Last Glacial Maximum (LGM) exhibited different large-scale atmospheric conditions compared to the present-day climate due to altered boundary conditions. The regional atmospheric circulation and associated precipitation patterns over Europe are characterized for the first time with a weather-typing approach (circulation weather types, CWT) for LGM paleoclimate simulations. The CWT approach is applied to four representative regions across Europe. While the CWTs over Western Europe are predominantly westerly for both present-day and LGM conditions, considerable differences are identified elsewhere: Southern Europe experienced more frequent westerly and cyclonic CWTs under LGM conditions, while Central and Eastern Europe were predominantly affected by southerly and easterly flow patterns. Under LGM conditions, rainfall is enhanced over Western Europe but reduced over most of Central and Eastern Europe. These differences are explained by changing CWT frequencies and evaporation patterns over the North Atlantic Ocean. The regional differences in the CWTs and precipitation patterns are linked to the North Atlantic storm track, which was stronger over Europe in all considered models during the LGM, explaining the overall increase of the cyclonic CWT. Enhanced evaporation over the North Atlantic leads to higher moisture availability over the ocean. Despite the overall cooling during the LGM, this explains the enhanced precipitation over southwestern Europe, particularly Iberia. This study links large-scale atmospheric dynamics to the regional circulation and associated precipitation patterns and provides an improved regional assessment of the European climate under LGM conditions.
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hamper the follow-up of the components in time and consequently contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession-model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
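The core of the cross-entropy method for continuous optimization is easy to sketch. The precession model and jet data from the abstract are not reproduced here; the toy quadratic objective below is a stand-in, and all function and parameter names are illustrative:

```python
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=200, elite_frac=0.1,
                           n_iter=60, seed=0):
    """Cross-entropy method for continuous minimization: draw candidates
    from a Gaussian, keep the elite fraction with the lowest objective
    values, and refit the Gaussian to those elites."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.array([f(s) for s in samples])
        elites = samples[np.argsort(scores)[:n_elite]]
        # Refit the sampling distribution; the floor keeps sigma nonzero
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-12
    return mu

# Toy objective with a known minimum at (3, -2); a precession model's
# misfit function would take its place.
best = cross_entropy_minimize(lambda p: (p[0] - 3.0)**2 + (p[1] + 2.0)**2,
                              mu=[0.0, 0.0], sigma=[5.0, 5.0])
```

The sampling distribution contracts around the elites each iteration, which is what lets the method escape the multiple local extrema mentioned in the abstract better than a single-start gradient search.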
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized).
As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
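The model image and the squared-difference performance function described above can be sketched directly. The exact six-parameter parameterization is not spelled out in the abstract, so the choice below (peak position, peak intensity, semimajor axis, eccentricity, orientation) is an assumption for illustration:

```python
import numpy as np

def elliptical_gaussian(shape, x0, y0, peak, major, ecc, theta):
    """Render one elliptical Gaussian component on a pixel grid.
    Parameters: peak position (x0, y0), peak intensity, semimajor
    axis, eccentricity, and orientation angle of the major axis."""
    y, x = np.indices(shape)
    minor = major * np.sqrt(1.0 - ecc**2)   # semi-minor axis from eccentricity
    # Rotate coordinates into the ellipse frame
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return peak * np.exp(-0.5 * ((xr / major)**2 + (yr / minor)**2))

def performance(components, observed):
    """Sum of squared differences between the model image (a sum of
    Gaussian components) and the observed image -- the quantity the
    cross-entropy optimizer would minimize."""
    model = sum(elliptical_gaussian(observed.shape, *c) for c in components)
    return float(np.sum((model - observed)**2))
```

An optimizer such as the cross-entropy routine sketched earlier would search the 6·N_s-dimensional parameter space for the component list minimizing `performance`.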
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools: the classification of the temporal organization of a data set might indicate one series as relatively less ordered than another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control-parameter values, and random noise). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn does so correctly. In order to validate the tool we performed shuffled and surrogate data analysis. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
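A minimal sketch of the underlying ApEn statistic, on which vApEn is built, is shown below. This is the standard textbook formulation (embedding dimension m, tolerance r as a fraction of the series' standard deviation), not code from the paper; vApEn would evaluate it over a grid of window lengths and tolerances and aggregate the resulting surface:

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy of series x with embedding dimension m and
    tolerance r (given as a fraction of the standard deviation).
    Lower values indicate a more ordered, regular series."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def phi(k):
        n = len(x) - k + 1
        emb = np.array([x[i:i + k] for i in range(n)])
        # Chebyshev distance between every pair of length-k templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= tol).mean(axis=1)   # self-matches included, as in ApEn
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

regular = np.sin(np.linspace(0, 16 * np.pi, 400))        # highly ordered
noisy = np.random.default_rng(1).standard_normal(400)    # disordered
```

On these two prototypes the ordered sine gives a much lower ApEn than white noise; the consistency problem in the abstract arises when such orderings flip for particular (m, r) choices.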
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in color and texture. The method allows the extraction of aggregate features from images and their automatic classification based on surface characteristics. The concept of entropy is used to extract features from digital images. An analysis of the use of this concept is presented and two classification approaches, based on neural network architectures, are proposed. The classification performance of the proposed approaches is compared to the results obtained by other algorithms commonly considered for classification purposes. The obtained results confirm that the presented method strongly supports the detection of weathered aggregates.
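The entropy feature the abstract refers to is, in its simplest common form, the Shannon entropy of a patch's gray-level histogram; the paper's exact feature definition is not given, so this is a generic sketch of that idea:

```python
import numpy as np

def entropy_feature(patch, bins=256):
    """Shannon entropy (in bits) of a patch's gray-level histogram.
    Rough, weathered surfaces tend to produce broader histograms and
    therefore higher entropy than smooth, uniform ones."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

flat = np.full((32, 32), 100)                               # uniform surface
rough = np.random.default_rng(2).integers(0, 256, (32, 32)) # varied surface
```

Such per-patch entropy values (possibly per color channel) would then be fed to a neural network classifier as in the two approaches the paper proposes.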
Abstract:
We study and compare the information loss of a large class of Gaussian bipartite systems. It includes the usual Caldeira-Leggett-type model as well as Anosov models (parametric oscillators, the inverted oscillator environment, etc.), which exhibit instability, one of the most important characteristics of chaotic systems. We establish a rigorous connection between the quantum Lyapunov exponents and coherence loss, and show that in the case of unstable environments coherence loss is completely determined by the upper quantum Lyapunov exponent, a behavior which is more universal than that of the Caldeira-Leggett-type model.
Abstract:
Measurements of X-ray diffraction, electrical resistivity, and magnetization are reported across the Jahn-Teller phase transition in LaMnO3. Using a thermodynamic equation, we obtained the pressure derivative of the critical temperature (TJT), dTJT/dP = -28.3 K GPa^-1. This approach also reveals that 5.7(3) J mol^-1 K^-1 of the entropy change comes from the volume change and 0.8(2) J mol^-1 K^-1 from the change in the magnetic exchange interaction across the phase transition. Around TJT, a robust increase in the electrical conductivity takes place, and the electronic entropy change, which is assumed to be negligible for the majority of electronic systems, was found to be 1.8(3) J mol^-1 K^-1.
Abstract:
An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way for automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate change and general surveillance. Regions representing aquatic, rural and urban areas are identified and the accuracy of the proposed segmentation methodology is evaluated. The comparison with gray-level images revealed that the color information is fundamental to obtaining an accurate segmentation. (C) 2010 Elsevier B.V. All rights reserved.
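The abstract does not spell out its segmentation criterion, so as an illustration of the entropy-based family it belongs to, here is Kapur's classic maximum-entropy thresholding, which picks the gray level that maximizes the combined entropy of the two resulting classes:

```python
import numpy as np

def kapur_threshold(gray):
    """Entropy-based segmentation threshold (Kapur's method): choose the
    threshold t that maximizes the sum of the Shannon entropies of the
    background (< t) and foreground (>= t) histograms."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()

    def h(q):  # Shannon entropy of a normalized histogram slice
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    best_t, best_score = 0, -np.inf
    for t in range(1, 256):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        score = h(p[:t] / pb) + h(p[t:] / pf)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Synthetic two-class image: dark "aquatic" and bright "urban" pixels
rng = np.random.default_rng(3)
img = np.clip(np.concatenate([rng.normal(60, 10, 2000),
                              rng.normal(190, 10, 2000)]), 0, 255)
t = kapur_threshold(img)
```

On a color image this would be applied per channel (or on a derived channel), which is where the abstract's finding about the value of color information comes in.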
Abstract:
Recently, the deterministic tourist walk has emerged as a novel approach for texture analysis. This method employs a traveler visiting image pixels using a deterministic walk rule. The resulting trajectories provide clues about pixel interaction in the image that can be used for image classification and identification tasks. This paper proposes a new walk rule for the tourist which is based on the contrast direction of a neighborhood. The results yielded by this approach are comparable with those from traditional texture analysis methods in the classification of a set of Brodatz textures and their rotated versions, thus confirming the potential of the method as a feasible texture analysis methodology. (C) 2010 Elsevier B.V. All rights reserved.
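For readers unfamiliar with the technique, the classical tourist walk rule (move to the neighbor with the closest gray level, avoiding recently visited pixels) can be sketched as below. This is the baseline rule, not the contrast-direction rule the paper proposes, and the neighborhood and memory choices are illustrative:

```python
import numpy as np

def tourist_walk(img, start, memory=2, max_steps=50):
    """Deterministic tourist walk on a gray-level image: from the current
    pixel, always move to the 4-neighbor whose intensity is closest to
    the current pixel's, never revisiting any of the last `memory`
    pixels. Returns the visited path."""
    h, w = img.shape
    path = [start]
    for _ in range(max_steps):
        y, x = path[-1]
        recent = set(path[-memory:])   # short-term memory, current pixel included
        neighbors = [(yy, xx)
                     for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= yy < h and 0 <= xx < w and (yy, xx) not in recent]
        if not neighbors:
            break                      # the tourist is trapped; the walk ends
        path.append(min(neighbors,
                        key=lambda q: abs(int(img[q]) - int(img[y, x]))))
    return path

img = (np.arange(36).reshape(6, 6) * 7) % 256   # toy gray-level image
walk = tourist_walk(img, (3, 3))
```

Texture descriptors are then derived from statistics of such trajectories (e.g., transient and attractor lengths) over all starting pixels.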
Abstract:
The time evolution of the out-of-equilibrium Mott insulator is investigated numerically through calculations of space-time-resolved density and entropy profiles resulting from the release of a gas of ultracold fermionic atoms from an optical trap. For adiabatic, moderate and sudden switching-off of the trapping potential, the out-of-equilibrium dynamics of the Mott insulator is found to differ profoundly from that of the band insulator and the metallic phase, displaying a self-induced stability that is robust within a wide range of densities, system sizes and interaction strengths. The connection between the entanglement entropy and changes of phase, known for equilibrium situations, is found to extend to the out-of-equilibrium regime. Finally, the relation between the system's long-time behavior and the thermalization limit is analyzed. Copyright (C) EPLA, 2011
Abstract:
Nuclear receptors are important targets for pharmaceuticals, but similarities between family members cause difficulties in obtaining highly selective compounds. Synthetic ligands that are selective for thyroid hormone (TH) receptor beta (TR beta) vs. TR alpha reduce cholesterol and fat without effects on heart rate; thus, it is important to understand TR beta-selective binding. Binding of three selective ligands (GC-1, KB141, and GC-24) is characterized at the atomic level; preferential binding depends on a nonconserved residue (Asn-331 beta) in the TR beta ligand-binding cavity (LBC), and GC-24 gains extra selectivity from insertion of a bulky side group into an extension of the LBC that only opens up with this ligand. Here we report that the natural TH 3,5,3'-triiodothyroacetic acid (Triac) exhibits a previously unrecognized mechanism of TR beta selectivity. TR X-ray structures reveal a better fit of the ligand with the TR alpha LBC. The TR beta LBC, however, expands relative to TR alpha in the presence of Triac (549 Å^3 vs. 461 Å^3), and molecular dynamics simulations reveal that water occupies the extra space. Increased solvation compensates for weaker interactions of the ligand with TR beta and permits greater flexibility of the Triac carboxylate group in TR beta than in TR alpha. We propose that this effect results in lower entropic restraint and decreases the free energy of interactions between Triac and TR beta, explaining subtype-selective binding. Similar effects could potentially be exploited in nuclear receptor drug design.
Abstract:
Nowadays, noninvasive methods of diagnosis have increased due to the demands of a population that requires fast, simple and painless exams. These methods have become possible because of the growth of technology that provides the necessary means of collecting and processing signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals, aiming at the exploration of their dynamic nature. The purpose of this paper is to characterize healthy and pathological voice signals with the aid of relative entropy measures. The phase-space reconstruction technique is also used as a way to select interesting regions of the signals. Three groups of samples were used: one from healthy individuals and the other two from people with a nodule in the vocal fold or Reinke's edema. All of them are recordings of the sustained vowel /a/ from Brazilian Portuguese. The paper shows that nonlinear dynamical methods seem to be a suitable technique for voice signal analysis, due to the chaotic component of the human voice. Relative entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. The results showed that the pathological groups had higher entropy values, in accordance with other vocal acoustic parameters presented. This suggests that these techniques may improve and complement the voice analysis methods currently available to clinicians. (C) 2008 Elsevier Inc. All rights reserved.
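The relative entropy measure at the heart of this approach is the Kullback-Leibler divergence between two signal distributions. The sketch below estimates it from amplitude histograms of two signals; the phase-space reconstruction step and the actual voice recordings are omitted, and the "healthy"/"pathological" stand-in signals are purely illustrative:

```python
import numpy as np

def relative_entropy(x, y, bins=64):
    """Kullback-Leibler divergence D(P||Q) between the amplitude
    distributions of two signals, estimated from histograms over a
    shared range; Laplace smoothing keeps every bin nonzero."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = (p + 1.0) / (p.sum() + bins)
    q = (q + 1.0) / (q.sum() + bins)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(4)
smooth = np.sin(np.linspace(0, 20 * np.pi, 2000))      # stand-in regular signal
irregular = smooth + 0.5 * rng.standard_normal(2000)   # stand-in noisy signal
```

The divergence is zero only when the two distributions coincide, which is what makes it sensitive to the added complexity and unpredictability that the abstract associates with pathological voices.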
Abstract:
A bipartite graph G = (V, W, E) is convex if there exists an ordering of the vertices of W such that, for each v ∈ V, the neighbors of v are consecutive in W. We describe both a sequential and a BSP/CGM algorithm to find a maximum independent set in a convex bipartite graph. The sequential algorithm improves over the running time of the previously known algorithm, and the BSP/CGM algorithm is a parallel version of the sequential one. The complexity of the algorithms does not depend on |W|.