980 results for empirical correlation
Abstract:
Recent empirical studies have shown that multi-angle spectral data can be useful for predicting canopy height, but the physical reason for this correlation was not understood. We follow the concept of canopy spectral invariants, specifically escape probability, to gain insight into the observed correlation. Airborne Multi-angle Imaging SpectroRadiometer (AirMISR) and airborne Laser Vegetation Imaging Sensor (LVIS) data acquired during a NASA Terrestrial Ecology Program aircraft campaign underlie our analysis. Two multivariate linear regression models were developed to estimate LVIS height measures, one from 28 AirMISR multi-angle spectral reflectances and one from the spectrally invariant escape probability at 7 AirMISR view angles. Both models achieved nearly the same accuracy, suggesting that canopy spectral invariant theory can explain the observed correlation. We hypothesize that the escape probability is sensitive to the aspect ratio (crown diameter to crown height). The multi-angle spectral data alone therefore may not provide enough information to retrieve canopy height globally.
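The two regressions described above can be sketched with ordinary least squares. Everything below (data shapes, coefficients, noise levels) is synthetic and purely illustrative, not the actual AirMISR/LVIS data; the escape probabilities and reflectances are constructed to share one geometric signal, mirroring the abstract's finding that both predictor sets perform similarly.

```python
import numpy as np

# Synthetic stand-in: n pixels, an escape probability per view angle, and
# 4 spectral bands per view angle carrying the same geometric signal.
rng = np.random.default_rng(0)
n = 500
p_escape = rng.uniform(0.1, 0.9, size=(n, 7))               # 7 view angles
band_scale = rng.uniform(0.5, 1.5, size=28)
X_spectral = np.repeat(p_escape, 4, axis=1) * band_scale \
    + rng.normal(scale=0.02, size=(n, 28))                  # 28 reflectances
height = 30.0 - 20.0 * p_escape.mean(axis=1) \
    + rng.normal(scale=0.5, size=n)                         # "LVIS" height (m)

def fit_ols(X, y):
    """Multivariate linear regression (OLS with intercept); returns R^2."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, 1.0 - resid.var() / y.var()

coef28, r2_spectral = fit_ols(X_spectral, height)
coef7, r2_escape = fit_ols(p_escape, height)
print(f"R^2 from 28 reflectances:        {r2_spectral:.3f}")
print(f"R^2 from 7 escape probabilities: {r2_escape:.3f}")
```

Because both predictor sets carry the same underlying signal here, the two fits reach nearly the same R².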
Abstract:
Empirical Mode Decomposition is presented as an alternative to traditional analysis methods for decomposing geomagnetic time series into spectral components. Important comments on the algorithm and its variations are given. Using this technique, planetary wave modes with mean periods of 5, 10, and 16 days can be extracted from the magnetic field components of three different stations in Germany. In a second step, the amplitude modulation functions of these wave modes are shown to contain a significant contribution from solar cycle variation, through correlation with smoothed sunspot numbers. Additionally, the data indicate connections with geomagnetic jerk occurrences, supported by a second data set providing a reconstruction of the near-Earth magnetic field over 150 years. Jerks are usually attributed to internal dynamo processes within the Earth's outer core, so the question of which phenomenon is impacting which is briefly discussed here.
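A minimal sketch of the sifting step at the core of Empirical Mode Decomposition, under simplifying assumptions (fixed number of sifting iterations, no boundary treatment, clean invented two-tone signal rather than geomagnetic data); production implementations such as the PyEMD package add stopping criteria and edge handling.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_imf(x, t, n_sift=10):
    """One simplified EMD sifting pass: extract the fastest oscillatory mode
    (intrinsic mode function) by repeatedly subtracting the mean of the upper
    and lower extrema envelopes."""
    h = x.copy()
    for _ in range(n_sift):
        # indices of local maxima and minima (strict comparisons)
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if maxima.size < 3 or minima.size < 3:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - 0.5 * (upper + lower)        # remove the local envelope mean
    return h

# Two superimposed "planetary wave modes" with 5- and 16-day periods
t = np.linspace(0.0, 160.0, 4000)
x = np.sin(2 * np.pi * t / 5) + 2.0 * np.sin(2 * np.pi * t / 16)
imf1 = sift_imf(x, t)        # captures mostly the fast 5-day mode
residual = x - imf1          # retains mostly the slow 16-day mode
```

Subsequent modes are obtained by sifting the residual again; the amplitude modulation function of each mode can then be correlated with, e.g., smoothed sunspot numbers as described above.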
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill score) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
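The probabilistic scoring step can be illustrated with the closed-form continuous ranked probability score (CRPS) of a Gaussian predictive distribution. The hindcast data below are invented stand-ins (hypothetical CO2-equivalent trend and ENSO-like index), not the real predictors.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    evaluated at observation y (lower is better)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# Synthetic hindcast in the spirit of the system above
rng = np.random.default_rng(1)
years = np.arange(1961, 2014)
co2 = 340.0 + 1.7 * (years - 1961)        # hypothetical CO2-eq concentration
enso = rng.normal(size=years.size)        # hypothetical ENSO-like index
temp = 0.02 * co2 + 0.3 * enso + rng.normal(scale=0.2, size=years.size)

# Multiple linear regression, then a Gaussian predictive distribution whose
# spread is the residual standard deviation.
A = np.column_stack([np.ones_like(co2), co2, enso])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
mu = A @ coef
sigma = (temp - mu).std(ddof=3)
print(f"mean CRPS of hindcast: {crps_gaussian(mu, sigma, temp).mean():.3f}")
```

A skill score is then obtained by comparing this mean CRPS against that of a reference forecast such as climatology.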
Abstract:
The matrix-tolerance hypothesis suggests that the species most abundant in the inter-habitat matrix should be less vulnerable to fragmentation of their habitat. We tested this model with leaf-litter frogs in the Atlantic Forest, where the fragmentation process is older and more severe than in the Amazon, where the model was first developed. Frog abundance data from the agricultural matrix, forest fragments and continuous forest localities were used. We found the expected negative correlation between the abundance of frogs in the matrix and their vulnerability to fragmentation; however, results varied with fragment size and species traits. Smaller fragments exhibited a stronger matrix-vulnerability correlation than intermediate fragments, while no significant relation was observed for large fragments. Moreover, some species that avoid the matrix were not sensitive to a decrease in patch size, and the opposite was also true, indicating significant departures from the model's expectations. Most of the species that use the matrix were forest species with aquatic larval development, but these species do not necessarily respond to fragmentation or fragment size, and thus they most strongly affect the strength of the expected relationship. Therefore, the main relationship predicted by the matrix-tolerance hypothesis was observed in the Atlantic Forest; however, we note that the predictions of this hypothesis can be substantially affected by fragment size and by species traits. We propose that the matrix-tolerance model should be broadened to become a more effective model, including other patch characteristics, particularly fragment size, and individual species traits (e.g., reproductive mode and habitat preference).
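The size-dependent correlation test described above can be sketched with a rank correlation computed separately per fragment size class. All numbers below are invented for illustration, not the Atlantic Forest data.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented species-level data: matrix abundance vs. vulnerability to
# fragmentation, compared across fragment size classes.
rng = np.random.default_rng(2)
n_species = 30
matrix_abundance = rng.gamma(2.0, 5.0, size=n_species)

# Small fragments: vulnerability tracks matrix intolerance tightly.
vuln_small = -0.8 * matrix_abundance + rng.normal(scale=2.0, size=n_species)
# Large fragments: the relation washes out, as reported above.
vuln_large = rng.normal(scale=5.0, size=n_species)

for label, vuln in [("small fragments", vuln_small),
                    ("large fragments", vuln_large)]:
    rho, p = spearmanr(matrix_abundance, vuln)
    print(f"{label}: Spearman rho = {rho:+.2f} (p = {p:.3g})")
```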
Abstract:
Attention is a critical mechanism for visual scene analysis. By means of attention, it is possible to break down the analysis of a complex scene into the analysis of its parts through a selection process. Empirical studies demonstrate that attentional selection is conducted on visual objects as a whole. We present a neurocomputational model of object-based selection in the framework of oscillatory correlation. By segmenting an input scene and integrating the segments with their conspicuity obtained from a saliency map, the model selects salient objects rather than salient locations. The proposed system is composed of three modules: a saliency map providing saliency values of image locations, image segmentation for breaking the input scene into a set of objects, and object selection, which allows one object of the scene to be selected at a time. This object selection system has been applied to real gray-level and color images, and the simulation results show the effectiveness of the system. (C) 2010 Elsevier Ltd. All rights reserved.
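The core idea of selecting salient objects rather than salient locations can be sketched as follows, assuming the segmentation and the saliency map are already computed (the oscillatory-correlation machinery of the actual model is not reproduced here).

```python
import numpy as np

def select_object(labels, saliency):
    """Select the segment with the highest mean conspicuity: a minimal
    sketch of object-based (rather than location-based) selection."""
    ids = [i for i in np.unique(labels) if i != 0]          # 0 = background
    conspicuity = {i: float(saliency[labels == i].mean()) for i in ids}
    return max(conspicuity, key=conspicuity.get)

# Toy 6x6 scene with two segments; segment 2 is smaller but far more salient
labels = np.zeros((6, 6), dtype=int)
labels[0:4, 0:4] = 1
labels[4:6, 4:6] = 2
saliency = np.where(labels == 2, 0.9, np.where(labels == 1, 0.3, 0.0))
print(select_object(labels, saliency))   # segment 2 is selected as a whole
```

A location-based scheme would instead pick the single most salient pixel; here the whole object owning the high-conspicuity region wins.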
Abstract:
Universal properties of the Coulomb interaction energy apply to all many-electron systems. Bounds on the exchange-correlation energy, in particular, are important for the construction of improved density functionals. Here we investigate one such universal property, the Lieb-Oxford lower bound, for ionic and molecular systems. In recent work [J Chem Phys 127, 054106 (2007)], we observed that for atoms and electron liquids this bound may be substantially tightened. Calculations for a few ions and molecules suggested the same tendency, but were not conclusive due to the small number of systems considered. Here we extend that analysis to many different families of ions and molecules, and find that for these, too, the bound can be empirically tightened by a similar margin as for atoms and electron liquids. Tightening the Lieb-Oxford bound will have consequences for the performance of various approximate exchange-correlation functionals. (C) 2008 Wiley Periodicals Inc.
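For context, the bound under study is usually written as below; the rigorous constant proved by Lieb and Oxford is about 1.68, and the empirical tightening discussed above amounts to replacing it by a smaller effective constant.

```latex
% Lieb-Oxford lower bound on the exchange-correlation (indirect Coulomb) energy
E_{\mathrm{xc}}[n] \;\ge\; -\,C_{\mathrm{LO}} \int n(\mathbf{r})^{4/3}\, d^{3}r,
\qquad C_{\mathrm{LO}} \approx 1.68 .
```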
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moment estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and, when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
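The kind of method-of-moments variance component estimation mentioned above can be sketched with the classical ANOVA estimators for a balanced one-way random-effects layout. This is a minimal illustration with simulated data; the finite population mixed model itself carries more structure than this.

```python
import numpy as np

def anova_moment_estimates(y):
    """ANOVA-type method-of-moments variance components for a balanced
    one-way random-effects layout y[cluster, unit]."""
    k, m = y.shape
    cluster_means = y.mean(axis=1)
    grand_mean = y.mean()
    msb = m * ((cluster_means - grand_mean) ** 2).sum() / (k - 1)    # between
    msw = ((y - cluster_means[:, None]) ** 2).sum() / (k * (m - 1))  # within
    var_within = msw
    var_between = max((msb - msw) / m, 0.0)    # truncated at zero
    icc = var_between / (var_between + var_within)
    return var_between, var_within, icc

# Simulated clustered data with known components (between = 4, within = 1)
rng = np.random.default_rng(3)
k, m = 200, 10
b = rng.normal(scale=2.0, size=k)                 # cluster effects
y = 5.0 + b[:, None] + rng.normal(size=(k, m))    # unit-level errors
vb, vw, icc = anova_moment_estimates(y)
print(f"between = {vb:.2f}, within = {vw:.2f}, ICC = {icc:.2f}")
```

Plugging such estimates into the theoretical predictors in place of the unknown variances yields the "empirical" predictors compared in the simulation study.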
Abstract:
Several empirical studies in the literature have documented the existence of a positive correlation between income inequality and unemployment. I provide a theoretical framework under which this correlation can be better understood. The analysis is based on a dynamic model of job search under uncertainty. I start by proving the uniqueness of a stationary distribution of wages in the economy. Drawing upon this distribution, I provide a general expression for the Gini coefficient of income inequality. The expression has the advantage of not requiring a particular specification of the distribution of wage offers. Next, I show how the Gini coefficient varies as a function of the parameters of the model, and how it can be expected to be positively correlated with the rate of unemployment. Two examples are offered. The first, of a technical nature, shows that the convergence of the measures implied by the underlying Markov process can fail in some cases. The second provides a quantitative assessment of the model and of the mechanism linking unemployment and inequality.
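The Gini computation at the heart of the argument can be sketched as follows. The composition effect shown (more agents at a low benefit income raising measured inequality) is a deliberately crude stand-in with invented numbers, not the paper's search model.

```python
import numpy as np

def gini(w):
    """Sample Gini coefficient via the sorted-sample formula
    G = sum_i (2i - n - 1) w_(i) / (n^2 * mean(w)), i = 1..n ascending."""
    w = np.sort(np.asarray(w, dtype=float))
    n = len(w)
    return (2.0 * np.arange(1, n + 1) - n - 1) @ w / (n * n * w.mean())

# Toy composition effect: a higher unemployment rate puts more agents at a
# low benefit income b, raising measured inequality.
rng = np.random.default_rng(4)
offers = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)   # accepted wages
ginis = []
for u in (0.05, 0.15):                                     # unemployment rate
    n_u = int(u * len(offers) / (1 - u))                   # unemployed count
    income = np.concatenate([np.full(n_u, 5.0), offers])   # benefit b = 5
    ginis.append(gini(income))
    print(f"u = {u:.0%}: Gini = {ginis[-1]:.3f}")
```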
Abstract:
The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents have heterogeneity in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk-averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but also the monotonicity of contracts, i.e., the prediction that high (low) risk-averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the latter case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have either sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing and non-single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the work of Dionne, Gouriéroux and Vanasse (2001).
In particular, they found empirical evidence that seems to be compatible with asymmetric information and non-single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.
Abstract:
Development of empirical potentials for amorphous silica
Amorphous silica (SiO2) is of great importance in geoscience and mineralogy as well as a raw material in the glass industry. Its structure is characterized as a disordered continuous network of SiO4 tetrahedra. Many efforts have been undertaken to understand the microscopic properties of silica by classical molecular dynamics (MD) simulations. In this method the interatomic interactions are modeled by an effective potential that does not take explicitly into account the electronic degrees of freedom. In this work, we propose a new methodology to parameterize such a potential for silica using ab initio simulations, namely the Car-Parrinello (CP) method [Phys. Rev. Lett. 55, 2471 (1985)]. The new potential is compared to the BKS potential [Phys. Rev. Lett. 64, 1955 (1990)], which is considered the benchmark potential for silica. First, CP simulations were performed on a liquid silica sample at 3600 K. The structural features so obtained were compared to those predicted by the classical BKS potential. Regarding the bond lengths, the BKS potential tends to underestimate the Si-O bond whereas the Si-Si bond is overestimated. The inter-tetrahedral angular distribution functions are also not well described by the BKS potential: the corresponding mean value of the Si-O-Si angle is found to be ≈ 147°, while CP yields a Si-O-Si angle distribution centered around 135°. Our aim is to fit a classical Born-Mayer/Coulomb pair potential using ab initio calculations. To this end, we use the force-matching method proposed by Ercolessi and Adams [Europhys. Lett. 26, 583 (1994)]. The CP configurations and their corresponding interatomic forces were used in a least-squares fitting procedure. Classical MD simulations with the resulting potential led to a structure that is very different from the CP one. Therefore, a different fitting criterion, based on the CP partial pair correlation functions, was applied.
Using this approach the resulting potential shows better agreement with the CP data than the BKS one: pair correlation functions, angular distribution functions, structure factors, density of states and the pressure/density relation are all improved. At low temperature, the diffusion coefficients appear to be three times higher than those predicted by the BKS model, while showing a similar temperature dependence. Calculations have also been carried out on crystalline samples in order to check the transferability of the potential. The equilibrium geometry as well as the elastic constants of α-quartz at 0 K are well described by our new potential, although the crystalline phases were not considered in the parameterization. We have developed a new potential for silica which represents an improvement over the class of pair potentials proposed so far. Furthermore, the fitting methodology developed in this work can be applied to other network-forming systems such as germania, as well as mixtures of SiO2 with other oxides (e.g. Al2O3, K2O, Na2O).
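The force-matching idea can be sketched on a toy one-dimensional problem: fit the parameters of the short-range part of a Born-Mayer pair potential so that its pair forces reproduce reference forces. The "reference" forces below are generated from invented (though BKS-like) parameters plus noise, standing in for the CP forces; a real fit uses full atomic configurations, all force components, and the Coulomb term.

```python
import numpy as np
from scipy.optimize import least_squares

def pair_force(r, A, rho, C):
    """Radial force -dV/dr for the short-range part
    V(r) = A exp(-r/rho) - C r^-6 (Coulomb term omitted here)."""
    return (A / rho) * np.exp(-r / rho) - 6.0 * C / r**7

# Synthetic reference forces from invented "true" parameters, with noise
true = (18000.0, 0.20, 130.0)            # A (eV), rho (Angstrom), C (eV A^6)
r = np.linspace(1.4, 5.0, 200)
rng = np.random.default_rng(5)
f_ref = pair_force(r, *true) + rng.normal(scale=0.01, size=r.size)

def residuals(p):
    return pair_force(r, *p) - f_ref     # least-squares force mismatch

fit = least_squares(residuals, x0=(10000.0, 0.25, 100.0),
                    bounds=([1.0, 0.05, 1.0], [1e6, 1.0, 1e4]))
print("fitted (A, rho, C):", np.round(fit.x, 3))
```

The abstract's alternative criterion, fitting to partial pair correlation functions instead of forces, amounts to swapping the residual function while keeping the same least-squares machinery.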
Abstract:
Gels are elastic porous polymer networks with pronounced mechanical properties. Due to their biocompatibility, ‘responsive hydrogels’ (HG) have many biomedical applications ranging from biosensors and drug delivery to tissue engineering. They respond to external stimuli such as temperature and salt by changing their dimensions. Of paramount importance is the ability to engineer penetrability and diffusion of interacting molecules in the crowded HG environment, as this would enable one to optimize a specific functionality. Even though the conditions under which biomedical devices operate are rather complex, a bottom-up approach can reduce the complexity of mutually coupled parameters influencing tracer mobility. The present thesis focuses on interaction-induced tracer diffusion in polymer solutions and their homologous gels, probed by means of Fluorescence Correlation Spectroscopy (FCS). This is a single-molecule-sensitive technique with the advantage of optimal performance at the ultralow tracer concentrations typically employed in biosensors. Two different types of hydrogels have been investigated: a conventional one, with broad polydispersity in the distance between crosslink points, and a so-called ‘ideal’ one, with uniform mesh size distribution. The former is based on a thermoresponsive polymer exhibiting phase separation in water at temperatures close to human body temperature. The latter represents an optimal platform to study tracer diffusion. The mobilities of tracers differing in size, geometry and tracer-polymer attractive strength have been investigated in each network under different stimuli.
The thesis constitutes a systematic effort towards elucidating the role of the strength and nature of different tracer-polymer interactions in tracer mobility. It outlines that interactions can still be very important even in the simplified case of dilute polymer solutions; it also demonstrates that the presence of permanent crosslinks exerts a distinct tracer slowdown, depending on the tracer type and the nature of the tracer-polymer interactions, expressed differently by each tracer with regard to the selected stimulus. In aqueous polymer solutions, the tracer slowdown is found to be system-dependent and no universal trend seems to hold, in contrast to predictions from scaling theory for non-interacting nanoparticle mobility and empirical relations concerning the mesh size in polymer solutions. Complex tracer dynamics in polymer networks may be distinctly expressed by FCS, depending on the specific synergy among (at least some of) the following parameters: nature of interactions, external stimuli employed, tracer size and type, crosslink density and swelling ratio.
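For orientation, FCS extracts mobilities by fitting the measured intensity autocorrelation with a diffusion model; the standard form for free 3D diffusion through a 3D Gaussian focal volume is sketched below. The numerical values (beam waist, diffusion time) are typical illustrative choices, not taken from the thesis.

```python
import numpy as np

def fcs_autocorrelation(tau, N, tau_d, S=5.0):
    """Normalized FCS autocorrelation for free 3D diffusion through a 3D
    Gaussian focal volume:
        G(tau) = (1/N) (1 + tau/tau_d)^-1 (1 + tau/(S^2 tau_d))^-1/2
    with N the mean number of tracers in the focus and S the axial-to-
    lateral aspect ratio of the focus."""
    return (1.0 / N) / ((1.0 + tau / tau_d)
                        * np.sqrt(1.0 + tau / (S**2 * tau_d)))

# A fitted diffusion time converts to a diffusion coefficient via the
# (assumed) lateral beam waist w0, using tau_d = w0^2 / (4 D).
w0 = 0.25e-6                  # m, illustrative confocal waist
tau_d = 1.0e-4                # s, e.g. obtained by fitting measured G(tau)
D = w0**2 / (4.0 * tau_d)
print(f"D = {D:.3g} m^2/s")
```

Interacting or crosslink-hindered tracers show up as slower, and often non-single-component, decays of G(tau) relative to this free-diffusion form.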
Abstract:
The study is based on experimental work conducted on alpine snow. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to its specific surface area (SSA), and converted the SSA into a correlation length. From measurements of snow radiances at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature using cold laboratory work. The technique shows great potential for determining the snow correlation length under field conditions.
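The SSA-to-correlation-length conversion step can be sketched with the Debye relation for a two-phase medium, as commonly used in the snow literature; the abstract's own empirical NIR-reflectance-to-SSA relation is not reproduced here, and the example values are typical rather than taken from the study.

```python
# Debye relation: l_c = 4 (1 - phi) / (rho_ice * SSA), phi = rho_snow/rho_ice,
# with SSA expressed per unit mass of ice.
RHO_ICE = 917.0                     # kg m^-3

def correlation_length(ssa, rho_snow):
    """Correlation length (m) from SSA (m^2 per kg of ice) and snow density
    (kg m^-3)."""
    phi = rho_snow / RHO_ICE        # ice volume fraction
    return 4.0 * (1.0 - phi) / (RHO_ICE * ssa)

# e.g. SSA = 20 m^2/kg at density 300 kg/m^3 gives l_c of order 0.1 mm
lc = correlation_length(20.0, 300.0)
print(f"l_c = {lc * 1e3:.3f} mm")
```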