34 results for Empirical Functions
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass data for benthic macrofauna from two Danish estuaries. (C) 2011 Elsevier B.V. All rights reserved.
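For orientation, the baseline that the abstract says is biased for sparse data is ordinary least squares on the log-log scale. A minimal sketch of that baseline follows (it is not the paper's bias-corrected Pearson estimating-function approach; the function name and the synthetic data are illustrative only):

    import numpy as np

    def taylor_power_law_ols(samples):
        """Estimate Taylor's power law V = a * m^b by OLS of log(variance)
        on log(mean) across sites. `samples` is a list of 1-D arrays of
        abundance counts, one array per site. Returns (a, b)."""
        means = np.array([np.mean(s) for s in samples])
        variances = np.array([np.var(s, ddof=1) for s in samples])
        keep = (means > 0) & (variances > 0)   # logs require positive values
        slope, intercept = np.polyfit(np.log(means[keep]),
                                      np.log(variances[keep]), 1)
        return np.exp(intercept), slope

    # Hypothetical example with Poisson counts at four sites:
    rng = np.random.default_rng(0)
    sites = [rng.poisson(lam, size=20) for lam in (0.5, 2.0, 5.0, 12.0)]
    a, b = taylor_power_law_ols(sites)

With sparse counts many sites have zero mean or zero empirical variance and must be dropped before taking logs, which gives some intuition for why this OLS estimate degrades in exactly the regime the paper addresses.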
Abstract:
Discussions opposing the Theory of the Firm to the Theory of Stakeholders are contemporaneous and polemical. One focal point of such debates refers to which objective function companies should choose, whether that of the shareholders or that of the stakeholders, and whether it is possible to opt for both simultaneously. Several empirical studies have attempted to test a possible correlation between both functions, and no consensus has been reached so far. The objective of the present research is to examine a gap in such discussions: is there (or not) a subordination of the stakeholders' objective function to that of the shareholders? The research is empirical and analytical and employs quantitative methods. Hypotheses were tested and data analyzed using non-parametric (chi-square test) and parametric procedures (frequency, correlation coefficient). Secondary data were collected from the Economática database and from the Brazilian Institute of Social and Economic Analyses (IBASE) website, relative to public companies that published their Social Balance Statements following the IBASE model from 1999 to 2006; the sample amounted to 65 companies. In order to assess the objective function of shareholders, a proxy was created based on the following three indices: ROE (return on equity), Enterprise Value and Tobin's Q. In order to assess the objective function of stakeholders, a proxy was created employing the following IBASE social balance indices: internal (ISI), external (ISE) and environmental (IAM). The results show no evidence of subordination of the stakeholders' objective function to that of the shareholders in the analyzed companies, negating initial expectations and calling for deeper investigation of the results. The main conclusion, that the hypothesized subordination does not take place, is limited to the sample investigated here and calls for ongoing research aiming at improvements which may lead to sample enlargement and, as a consequence, may make feasible the application of other statistical techniques yielding a more thorough analysis of the studied phenomenon.
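As an illustration of the two statistical procedures named in the abstract (correlation coefficient and chi-square test), here is a minimal sketch; the proxy construction and every number below are hypothetical placeholders, not the study's data:

    import numpy as np
    from scipy.stats import chi2_contingency, pearsonr

    rng = np.random.default_rng(42)
    # Hypothetical composite proxies for 65 companies (illustrative only):
    shareholder_proxy = rng.normal(size=65)          # e.g. built from ROE, Enterprise Value, Tobin's Q
    stakeholder_proxy = 0.1 * shareholder_proxy + rng.normal(size=65)  # e.g. built from IBASE ISI, ISE, IAM

    # Correlation between the two objective-function proxies.
    r, p_corr = pearsonr(shareholder_proxy, stakeholder_proxy)

    # Chi-square test of independence on above/below-median classifications.
    table = np.zeros((2, 2), dtype=int)
    for s, k in zip(shareholder_proxy > np.median(shareholder_proxy),
                    stakeholder_proxy > np.median(stakeholder_proxy)):
        table[int(s), int(k)] += 1
    chi2, p_chi2, dof, expected = chi2_contingency(table)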
Abstract:
Universal properties of the Coulomb interaction energy apply to all many-electron systems. Bounds on the exchange-correlation energy, in particular, are important for the construction of improved density functionals. Here we investigate one such universal property-the Lieb-Oxford lower bound-for ionic and molecular systems. In recent work [J Chem Phys 127, 054106 (2007)], we observed that for atoms and electron liquids this bound may be substantially tightened. Calculations for a few ions and molecules suggested the same tendency, but were not conclusive due to the small number of systems considered. Here we extend that analysis to many different families of ions and molecules, and find that for these, too, the bound can be empirically tightened by a similar margin as for atoms and electron liquids. Tightening the Lieb-Oxford bound will have consequences for the performance of various approximate exchange-correlation functionals. (C) 2008 Wiley Periodicals Inc.
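For reference, the Lieb-Oxford lower bound discussed here is conventionally written (in Hartree atomic units) as

    E_{\mathrm{xc}}[n] \;\ge\; -\,C \int n(\mathbf{r})^{4/3}\, d^{3}r ,

where the best rigorously proven constant is C = 2.273 (Lieb and Oxford, 1981); "tightening" the bound means finding a smaller C that still holds for the class of systems considered.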
Abstract:
We consider a nontrivial one-species population dynamics model with finite and infinite carrying capacities. Time-dependent intrinsic and extrinsic growth rates are considered in these models. Through the model's per capita growth rate we obtain a heuristic general procedure to generate scaling functions that collapse data onto a simple linear behavior even if an extrinsic growth rate is included. With this data collapse, all the models studied become independent of the parameters and the initial condition. Analytical solutions are found when time-dependent coefficients are considered. These solutions allow us to identify nontrivial transitions between species extinction and survival and to calculate the transitions' critical exponents. Considering an extrinsic growth rate as a cancer treatment, we show that the relevant quantity depends not only on the intensity of the treatment, but also on when the cancerous cell growth is maximum.
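The abstract does not spell out the governing equation, so purely as a generic illustration of the class of models described, a per capita growth rate with a time-dependent intrinsic rate r(t), a carrying capacity K (with K → ∞ as the infinite-capacity limit) and an extrinsic term ε(t), e.g. a treatment, could be written as

    \frac{1}{N}\frac{dN}{dt} \;=\; r(t)\left[\,1-\left(\frac{N}{K}\right)^{\theta}\right] \;-\; \varepsilon(t),

where θ is a shape exponent (θ = 1 recovers the Verhulst/logistic form); the scaling functions mentioned in the abstract are constructed so that solutions of such models collapse onto a single linear curve.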
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine directly genotyped marker information with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near zero cost approach to allow the comparison and combination of data generated in different studies. Several reports state that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained based on empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers showing specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks or strongly deviating from local patterns of association are prone to have inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
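As a small illustration of the kind of per-marker comparison described, here is a basic 2x2 allelic chi-square association test; the abstract does not specify the exact test used, so both the test choice and the allele counts below are assumptions for the sketch:

    import numpy as np
    from scipy.stats import chi2_contingency

    def allelic_chi2_p(case_alt, case_ref, ctrl_alt, ctrl_ref):
        """2x2 allelic chi-square test of case/control association
        (illustrative only; not the pipeline used in the study)."""
        table = np.array([[case_alt, case_ref],
                          [ctrl_alt, ctrl_ref]])
        chi2, p, dof, expected = chi2_contingency(table)
        return p

    # Hypothetical allele counts for one marker, genotyped vs. imputed:
    p_genotyped = allelic_chi2_p(420, 580, 350, 650)
    p_imputed   = allelic_chi2_p(455, 545, 330, 670)
    # A large gap between p_genotyped and p_imputed would flag the kind of
    # discordance the abstract reports for roughly 47% of associated markers.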
Abstract:
The Ca II triplet (CaT) feature in the near-infrared has been employed as a metallicity indicator for individual stars as well as integrated light of Galactic globular clusters (GCs) and galaxies with varying degrees of success, and sometimes puzzling results. Using the DEIMOS multi-object spectrograph on Keck we obtain a sample of 144 integrated light spectra of GCs around the brightest group galaxy NGC 1407 to test whether the CaT index can be used as a metallicity indicator for extragalactic GCs. Different sets of single stellar population models make different predictions for the behavior of the CaT as a function of metallicity. In this work, the metallicities of the GCs around NGC 1407 are obtained from CaT index values using an empirical conversion. The measured CaT/metallicity distributions show unexpected features, the most remarkable being that the brightest red and blue GCs have similar CaT values despite their large difference in mean color. Suggested explanations for this behavior in the NGC 1407 GC system are (1) the CaT may be affected by a population of hot blue stars, (2) the CaT may saturate earlier than predicted by the models, and/or (3) color may not trace metallicity linearly. Until these possibilities are understood, the use of the CaT as a metallicity indicator for the integrated spectra of extragalactic GCs will remain problematic.
Abstract:
We use the Kharzeev-Levin-Nardi (KLN) model of the low-x gluon distributions to fit recent HERA data on F_L and F_2^c (F_2^b). Having checked that this model gives a good description of the data, we use it to predict F_L and F_2^c to be measured in a future electron-ion collider. The results are similar to those obtained with the de Florian-Sassot and Eskola-Paukkunen-Salgado nuclear gluon distributions. The conclusion of this exercise is that the KLN model, simple as it is, may still be used as an auxiliary tool to make estimates for both heavy-ion and electron-ion collisions.
Abstract:
A correlated many-body basis function is used to describe the ⁴He trimer and small helium clusters (⁴He_N) with N = 4-9. A realistic helium dimer potential is adopted. The ground state results for the ⁴He dimer and trimer are in close agreement with earlier findings, but no evidence is found for the existence of an Efimov state in the trimer for the actual ⁴He-⁴He interaction. However, on decreasing the potential strength we calculate several excited states of the trimer which exhibit Efimov character. We also solve for the excited state energies of these clusters, which are in good agreement with the Monte Carlo hyperspherical description. (C) 2011 American Institute of Physics. [doi:10.1063/1.3583365]
Abstract:
Balance functions have been measured for charged-particle pairs, identified charged-pion pairs, and identified charged-kaon pairs in Au + Au, d + Au, and p + p collisions at √s_NN = 200 GeV at the Relativistic Heavy Ion Collider using the STAR detector. These balance functions are presented in terms of relative pseudorapidity, Δη, relative rapidity, Δy, relative azimuthal angle, Δφ, and invariant relative momentum, q_inv. For charged-particle pairs, the width of the balance function in terms of Δη scales smoothly with the number of participating nucleons, while HIJING and UrQMD model calculations show no dependence on centrality or system size. For charged-particle and charged-pion pairs, the balance function widths in terms of Δη and Δy are narrower in central Au + Au collisions than in peripheral collisions. The width for central collisions is consistent with thermal blast-wave models where the balancing charges are highly correlated in coordinate space at breakup. This strong correlation might be explained by either delayed hadronization or limited diffusion during the reaction. Furthermore, the narrowing trend is consistent with the lower kinetic temperatures inherent to more central collisions. In contrast, the width of the balance function for charged-kaon pairs in terms of Δy shows little centrality dependence, which may signal a different production mechanism for kaons. The widths of the balance functions for charged pions and kaons in terms of q_inv narrow in central collisions compared to peripheral collisions, which may be driven by the change in the kinetic temperature.
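As a reminder of the observable (this is the standard charge balance function definition from the literature, not restated in the abstract), for a relative-rapidity separation Δy one typically writes

    B(\Delta y) \;=\; \frac{1}{2}\left[\frac{N_{+-}(\Delta y)-N_{++}(\Delta y)}{N_{+}} \;+\; \frac{N_{-+}(\Delta y)-N_{--}(\Delta y)}{N_{-}}\right],

where N_{+-}(Δy) counts pairs of one positive and one negative particle separated by Δy and N_± are the numbers of positive and negative particles in the event sample; the width of B(Δy) measures how tightly balancing charges are correlated, which is why narrowing in central collisions is the key signal discussed above.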
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time τ(G_N) of a planar graph G_N of N vertices and maximal degree d is lower bounded by τ(G_N) ≥ C(d) N (ln N)², with C(d) = (d/4π) tan(π/d), with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
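A minimal Monte Carlo sketch of the kind of estimate involved, for the regular square lattice (d = 4), assuming periodic boundaries and a fixed starting vertex (the paper's exact setup may differ):

    import numpy as np

    def cover_time_square_torus(L, rng):
        """Steps for a simple random walk to visit every site of an L x L
        square lattice with periodic boundaries."""
        visited = np.zeros((L, L), dtype=bool)
        x = y = 0
        visited[x, y] = True
        remaining = L * L - 1
        steps = 0
        moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
        while remaining:
            dx, dy = moves[rng.integers(4)]
            x, y = (x + dx) % L, (y + dy) % L
            steps += 1
            if not visited[x, y]:
                visited[x, y] = True
                remaining -= 1
        return steps

    rng = np.random.default_rng(1)
    L = 32
    N = L * L
    estimate = np.mean([cover_time_square_torus(L, rng) for _ in range(20)])
    conjectured = (4 / (4 * np.pi)) * np.tan(np.pi / 4) * N * np.log(N) ** 2  # C(d) N (ln N)^2, d = 4

Comparing the averaged estimate against the conjectured C(d) N (ln N)² value, over a range of lattice sizes, is the basic experiment the abstract scales up to large simulations.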
Abstract:
Using nonequilibrium Green's functions we calculate the spin-polarized current and shot noise in a ferromagnet-quantum-dot-ferromagnet system. Both parallel (P) and antiparallel (AP) magnetic configurations are considered. Coulomb interaction and coherent spin flip (similar to a transverse magnetic field) are taken into account within the dot. We find that the interplay between Coulomb interaction and spin accumulation in the dot can result in a bias-dependent current polarization p. In particular, p can be suppressed in the P alignment and enhanced in the AP case depending on the bias voltage. The coherent spin flip can also result in a switch of the current polarization from the emitter to the collector lead. Interestingly, for a particular set of parameters it is possible to have a polarized current in the collector and an unpolarized current in the emitter lead. We also found a suppression of the Fano factor to values well below 0.5.
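For reference, the two quantities discussed here, the current polarization and the Fano factor, are conventionally defined as (standard definitions, not given explicitly in the abstract)

    p \;=\; \frac{I_{\uparrow}-I_{\downarrow}}{I_{\uparrow}+I_{\downarrow}}, \qquad F \;=\; \frac{S(0)}{2eI},

where I_↑ and I_↓ are the spin-resolved currents, S(0) is the zero-frequency shot-noise power and I is the average current; F = 1 corresponds to uncorrelated Poissonian transport, so values well below 0.5 indicate strongly sub-Poissonian noise.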
Abstract:
The problem of semialgebraic Lipschitz classification of quasihomogeneous polynomials on a Hölder triangle is studied. For this problem, the "moduli" are described completely in certain combinatorial terms.
Abstract:
The Community Climate Model (CCM3) from the National Center for Atmospheric Research (NCAR) is used to investigate the effect of South Atlantic sea surface temperature (SST) anomalies on interannual to decadal variability of South American precipitation. Two ensembles composed of multidecadal simulations forced with monthly SST data from the Hadley Centre for the period 1949 to 2001 are analysed. A statistical treatment based on signal-to-noise ratio and Empirical Orthogonal Functions (EOF) is applied to the ensembles in order to reduce the internal variability among the integrations. The ensemble treatment shows a spatial and temporal dependence of reproducibility. A high degree of reproducibility is found in the tropics, while the extratropics is apparently less reproducible. Austral autumn (MAM) and spring (SON) precipitation appears to be more reproducible over the South America-South Atlantic region than the summer (DJF) and winter (JJA) rainfall. While the Intertropical Convergence Zone (ITCZ) region is dominated by external variance, the South Atlantic Convergence Zone (SACZ) over South America is predominantly determined by internal variance, which makes it a difficult phenomenon to predict. Alternatively, the SACZ over the western South Atlantic appears to be more sensitive to the subtropical SST anomalies than over the continent. An attempt is made to separate the atmospheric response forced by the South Atlantic SST anomalies from that associated with the El Niño-Southern Oscillation (ENSO). Results show that both the South Atlantic and Pacific SSTs modulate the intensity and position of the SACZ during DJF. In particular, the subtropical South Atlantic SSTs are more important than ENSO in determining the position of the SACZ over the southeast Brazilian coast during DJF. On the other hand, the ENSO signal seems to influence the intensity of the SACZ not only in DJF but especially its oceanic branch during MAM. Both local and remote influences, however, are confounded by the large internal variance in the region. During MAM and JJA, the South Atlantic SST anomalies affect the magnitude and the meridional displacement of the ITCZ. In JJA, ENSO has relatively little influence on the interannual variability of the simulated rainfall. During SON, however, ENSO seems to counteract the effect of the subtropical South Atlantic SST variations on convection over South America.
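Since the analysis hinges on Empirical Orthogonal Functions, a generic EOF sketch (PCA of the space-time anomaly field via SVD) may be useful; it is not the paper's ensemble signal-to-noise treatment, and the grid, record length and data below are placeholders:

    import numpy as np

    def eof_analysis(field, n_modes=3):
        """EOF analysis of a (time, space) anomaly field via SVD.
        Returns (eofs, pcs, explained_variance_fraction)."""
        anomalies = field - field.mean(axis=0)       # remove the time mean
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        eofs = vt[:n_modes]                          # spatial patterns
        pcs = u[:, :n_modes] * s[:n_modes]           # principal component time series
        varfrac = (s ** 2 / np.sum(s ** 2))[:n_modes]
        return eofs, pcs, varfrac

    # Hypothetical example: 53 years of monthly precipitation anomalies on a 10 x 10 grid.
    rng = np.random.default_rng(0)
    field = rng.normal(size=(53 * 12, 10 * 10))
    eofs, pcs, varfrac = eof_analysis(field)

The leading EOFs summarise the dominant modes of covariability; in an ensemble setting, comparing the modes that survive averaging across members is one way of separating the externally forced signal from internal noise.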
Abstract:
This paper presents a controller design method for fuzzy dynamic systems based on piecewise Lyapunov functions with constraints on the closed-loop pole location. The main idea is to use switched controllers to place the poles of the system so as to obtain a satisfactory transient response. It is shown that the global fuzzy system satisfies the requirements for the design and that the control law can be obtained by solving a set of linear matrix inequalities, which can be efficiently solved with commercially available software. An example is given to illustrate the application of the proposed method. Copyright (C) 2009 John Wiley & Sons, Ltd.
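For context, the simplest instance of a pole-location constraint expressed as a linear matrix inequality is the standard decay-rate condition for a single closed-loop matrix A_cl; the paper's piecewise-Lyapunov, switched-fuzzy conditions are more elaborate, so this is only the basic building block:

    A_{cl}^{\top} P + P A_{cl} + 2\alpha P \prec 0, \qquad P = P^{\top} \succ 0,

which is feasible exactly when every closed-loop pole lies to the left of Re(s) = -α; solving families of such matrix inequalities is what off-the-shelf LMI solvers automate.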
Abstract:
Purpose - The purpose of this paper is to examine whether the level of logistics information systems (LIS) adoption in manufacturing companies is influenced by organizational profile variables, such as the company's size, the nature of its operations and their subsectors. Design/methodology/approach - A review of the mainstream literature on LIS was carried out to identify the factors influencing the adoption of such information systems and also some research gaps. The empirical study's strategy is based on survey research in Brazilian manufacturing firms from the capital goods industry. Data collected were analyzed through Kruskal-Wallis and Mann-Whitney non-parametric tests. Findings - The analysis indicates that characteristics such as the size of companies and the nature of their operations influence the levels of LIS adoption, whilst comparisons regarding the subsectors appeared to be of little influence. Originality/value - This is the first known study to examine the influence of organizational profiles such as size, nature of operations and subsector on the level of LIS adoption in manufacturing companies. Moreover, it is unique in portraying the Brazilian scenario on this topic and addressing the adoption of seven types of LIS in a single study.
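As an illustration of the two non-parametric tests named above, a minimal sketch; the grouping by company size and all scores below are made up for the example, not the study's data:

    from scipy.stats import kruskal, mannwhitneyu

    # Hypothetical LIS-adoption scores (e.g. number of LIS types adopted, 0-7)
    # grouped by company size; data are illustrative only.
    small  = [1, 2, 2, 3, 1, 0, 2]
    medium = [2, 3, 4, 3, 5, 2, 4]
    large  = [4, 5, 6, 7, 5, 6, 4]

    # Kruskal-Wallis: does the adoption level differ across the three size groups?
    h_stat, p_kw = kruskal(small, medium, large)

    # Mann-Whitney U: pairwise follow-up comparison between two groups.
    u_stat, p_mw = mannwhitneyu(small, large, alternative="two-sided")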