993 results for Parameter-free functionals
Abstract:
It is suggested here that the ultimate accuracy of DFT methods arises from the type of hybridization scheme followed. This idea can be cast into a mathematical formulation through an integrand connecting the noninteracting and the fully interacting particle systems. We consider two previously developed models for it, dubbed HYB0 and QIDH, and assess a large number of exchange-correlation functionals against the AE6, G2/148, and S22 reference data sets. An interesting consequence of these hybridization schemes is that the error bars, including the standard deviation, markedly decrease with respect to the density-based (nonhybrid) case. This improvement substantially exceeds the variations due to the underlying density functional employed. We finally hypothesize about the universal character of the HYB0 and QIDH models.
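For context, a minimal sketch (our notation, not taken from the paper itself) of the adiabatic-connection reasoning that underlies such hybridization schemes:

```latex
% Adiabatic connection between the noninteracting (\lambda = 0) and the
% fully interacting (\lambda = 1) system:
\[
  E_{xc} = \int_0^1 W_\lambda \,\mathrm{d}\lambda ,
  \qquad W_0 = E_x^{\text{exact}} .
\]
% A simple polynomial model of the integrand,
% W_\lambda = W_1 + (W_0 - W_1)(1-\lambda)^{\,n-1},
% integrates to an exact-exchange fraction of 1/n:
\[
  E_{xc} \approx W_1 + \frac{1}{n}\,(W_0 - W_1) .
\]
% n = 4 recovers the familiar 25% mixing of PBE0-type hybrids; as we
% understand it, the quadratic-integrand double-hybrid (QIDH)
% construction instead leads to a_x = 3^{-1/3} \approx 0.69 of exact
% exchange combined with a_c = 1/3 of PT2 correlation.
```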
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
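A minimal Python sketch of the biased-randomization idea described above (illustrative names, not the authors' code): instead of always taking the greedy-best candidate, candidates are sorted by the heuristic's criterion and one is drawn with geometrically decaying probability, so better-ranked candidates stay more likely.

```python
import math
import random

def biased_choice(candidates, key, beta=0.3, rng=random):
    """Pick a candidate using a geometric(beta) bias over the greedy ranking.

    key: heuristic score (lower is better); beta in (0, 1): bias strength,
    where beta near 1 approaches pure greedy and beta near 0 approaches
    uniform random. All names here are illustrative.
    """
    ranked = sorted(candidates, key=key)
    # Geometric index, wrapped to the list length so it is always valid.
    idx = int(math.log(rng.random()) / math.log(1.0 - beta)) % len(ranked)
    return ranked[idx]

def multi_start(elements, greedy_key, evaluate, n_runs=100, beta=0.3, seed=42):
    """Rebuild the constructive solution many times with biased choices
    and keep the best one found (a simple multi-start wrapper)."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(n_runs):
        remaining, solution = list(elements), []
        while remaining:
            nxt = biased_choice(remaining, key=greedy_key, beta=beta, rng=rng)
            solution.append(nxt)
            remaining.remove(nxt)
        val = evaluate(solution)  # problem-specific objective
        if val < best_val:
            best, best_val = solution, val
    return best, best_val
```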
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered to be among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a diversification of the initial solution, attained by applying a biased randomization to a classical PFSP heuristic in order to generate different alternative starting solutions of similar quality. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating our biased randomization process with a high-quality pseudo-random number generator.
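A schematic of the generic ILS loop this kind of algorithm builds on; the actual ILS-ESP operators and acceptance rule are described in the article, so everything below is an illustrative sketch only.

```python
def iterated_local_search(initial, local_search, perturb, cost,
                          max_iters=1000):
    """Generic Iterated Local Search skeleton.

    initial: starting solution (e.g., from a biased-randomized PFSP
    heuristic); local_search / perturb: problem-specific operators;
    cost: objective to minimize (makespan for the PFSP).
    """
    current = local_search(initial)
    best = current
    for _ in range(max_iters):
        candidate = local_search(perturb(current))
        # One simple, parameter-free acceptance rule (illustrative):
        # keep the candidate only if it does not worsen the incumbent.
        if cost(candidate) <= cost(current):
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best
```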
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e., subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e., males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort, which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example.

In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking; that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, in contrast, we find that the effect of incorporating losses into the outcomes is not so clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that, compared to gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk-elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories like PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
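As an illustration of the dimensionality-reduction step, a minimal Python sketch of applying PCA to a subjects-by-panels choice matrix; all data and names here are hypothetical, and the actual SGG design is the one described above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: one row per subject, one column per lottery panel;
# entries encode the riskiness of the option chosen in that panel.
rng = np.random.default_rng(0)
n_subjects, n_panels = 200, 4
choices = rng.integers(1, 6, size=(n_subjects, n_panels)).astype(float)

pca = PCA(n_components=2)
scores = pca.fit_transform(choices)  # PCA centers the columns internally

# Under the interpretation in the abstract, a first component with
# similar loadings on all panels tracks average risk taking, while a
# second component with contrasting loadings tracks sensitivity to
# changes in the risk-return trade-off across panels.
print(pca.explained_variance_ratio_)
print(pca.components_)
```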
Abstract:
Rationale: Focal onset epileptic seizures arise from abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intracerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, a binary connectivity matrix has to be derived from the cross-correlation matrix by applying a threshold. Classically, a unique threshold is used, which constrains the topology [1]. Our method instead sets the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed-threshold method and study the dynamics of the functional topology.
Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches for deriving the corresponding binary networks. For each time window:
* Our parameter-free method derives from the cross-correlation strength matrix (CCS) [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals). In practice, a threshold is evaluated for each pair of channels independently, in a data-driven way.
* The fixed mean degree (FMD) method uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree.
* The varying mean degree (VMD) method uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix.
* Finally, the connectivity (c), the connectedness (given by k, the number of disconnected sub-networks), and the mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS, and VMD networks and their corresponding random and lattice networks.
Results: Compared to FMD and VMD, CCS networks present:
* topologies that differ in terms of c, k, Eg, and El;
* topological-feature time courses that, from the pre-ictal to the ictal and then the post-ictal period, are more stable within each period and more contrasted from one period to the next.
For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at seizure offset. k shows a "U-curve" underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the values of the corresponding random and lattice networks in a reproducible manner.
Conclusions: A data-driven threshold definition provides new insights into the topology of epileptic functional networks.
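A minimal Python sketch of one of the compared baselines, the FMD thresholding, on a single sliding window (synthetic data; the CCS method would instead set one data-driven threshold per channel pair):

```python
import numpy as np
import networkx as nx

def binary_network_fixed_mean_degree(data, target_mean_degree=5):
    """Threshold the equal-time cross-correlation matrix of a
    channels-by-samples window so the binary network reaches a
    user-defined mean degree (the FMD approach sketched above)."""
    n = data.shape[0]
    corr = np.abs(np.corrcoef(data))      # equal-time cross-correlation
    np.fill_diagonal(corr, 0.0)
    n_edges = int(round(target_mean_degree * n / 2))
    iu = np.triu_indices(n, k=1)
    # Keep the n_edges strongest correlations as edges.
    order = np.argsort(corr[iu])[::-1][:n_edges]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(zip(iu[0][order], iu[1][order]))
    return g

# Illustrative use on fake iEEG (20 channels, 512 samples per window):
window = np.random.randn(20, 512)
g = binary_network_fixed_mean_degree(window)
print(nx.number_connected_components(g),  # k: connectedness
      nx.global_efficiency(g),            # Eg
      nx.local_efficiency(g))             # El
```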
Abstract:
We present parameter-free calculations of electronic properties of InGaN, InAlN, and AlGaN alloys. The calculations are based on a generalized quasichemical approach, to account for disorder and composition effects, and first-principles calculations within the density functional theory with the LDA-1/2 approach, to accurately determine the band gaps. We provide precise results for AlGaN, InGaN, and AlInN band gaps for the entire range of compositions, and their respective bowing parameters. (C) 2011 American Institute of Physics. [doi:10.1063/1.3576570]
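For reference, the standard one-parameter bowing form in which such alloy gaps are typically reported (the specific endpoint gaps and b values are given in the paper):

```latex
% Composition dependence of an A_xB_{1-x}N alloy band gap with a single
% bowing parameter b:
\[
  E_g(x) = x\,E_g^{\mathrm{AN}} + (1-x)\,E_g^{\mathrm{BN}} - b\,x(1-x)
\]
% e.g. for In_xGa_{1-x}N the endpoints are the InN and GaN gaps, and b
% quantifies the deviation from linear (Vegard-like) interpolation.
```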
Abstract:
We investigate, via numerical simulations and mean-field and density-functional theories, the magnetic response of a dipolar hard-sphere fluid at low temperatures and densities, in the region of strong association. The proposed parameter-free theory captures both the density and the temperature dependence of the ring-chain equilibrium and the contribution to the susceptibility of a chain of generic length. The theory predicts a nonmonotonic temperature dependence of the initial (zero-field) magnetic susceptibility, arising from the competition between magnetically inert particle rings and magnetically active chains. Monte Carlo simulation results closely agree with the theoretical findings. DOI: 10.1103/PhysRevLett.110.148306
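A hedged sketch (notation ours, not the paper's) of the cluster decomposition implied by the abstract:

```latex
% Zero-field susceptibility decomposed over self-assembled clusters:
% closed rings carry no net moment, so only chains contribute,
\[
  \chi \;=\; \sum_{n \ge 1} \rho^{\mathrm{chain}}_{n}\, \chi_{n} ,
\]
% where \rho^{chain}_n is the equilibrium number density of chains of
% length n (fixed by the ring-chain equilibrium) and \chi_n the
% susceptibility of a single n-particle chain. Cooling first grows
% chains (raising \chi) and then converts chains into magnetically
% inert rings (lowering \chi), producing the nonmonotonic maximum.
```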
Abstract:
The indexable Symbolic Aggregate approXimation (iSAX) is widely used in time series data mining. Its popularity arises from the fact that it greatly reduces time series size, is symbolic, allows lower bounding, and is space efficient. However, it requires setting two parameters, the symbolic word length and the alphabet size, which limits the applicability of the technique. The optimal parameter values are highly application dependent; typically, they are either set to a fixed value or experimentally probed for the best configuration. In this work we propose an approach to automatically estimate iSAX's parameters. The approach, AutoiSAX, not only discovers the best parameter setting for each time series in the database but also finds the alphabet size for each iSAX symbol within the same word. It is based on simple and intuitive ideas from time series complexity and statistics, and it can be smoothly embedded in existing data mining tasks as an efficient sub-routine. We analyze its impact on visualization interpretability, classification accuracy, and motif mining. Our contribution aims to make iSAX a more general approach as it evolves towards a parameter-free method.
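To make the two parameters concrete, a minimal Python sketch of plain SAX symbolization, the representation whose word length and alphabet size AutoiSAX would choose automatically (illustrative code, not the authors'):

```python
import numpy as np
from scipy.stats import norm

def sax(series, word_length=8, alphabet_size=4):
    """Plain SAX: z-normalize, reduce with Piecewise Aggregate
    Approximation (PAA), then map each segment mean to a symbol via
    equiprobable N(0,1) breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalization
    paa = x.reshape(word_length, -1).mean(axis=1)   # assumes len % word_length == 0
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)     # 0 .. alphabet_size-1
    return "".join(chr(ord("a") + s) for s in symbols)

# Prints an 8-symbol word over the alphabet {a, b, c, d}:
print(sax(np.sin(np.linspace(0, 4 * np.pi, 64))))
```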
Abstract:
Intuitively, music has both predictable and unpredictable components. In this work we assess this qualitative statement in a quantitative way, using common time series models fitted to state-of-the-art music descriptors. These descriptors cover different musical facets and are extracted from a large collection of real audio recordings comprising a variety of musical genres. Our findings show that music descriptor time series exhibit a certain predictability not only over short time intervals, but also over mid-term and relatively long intervals. This holds independently of the descriptor, musical facet, and time series model we consider. Moreover, we show that our findings are not only of theoretical relevance but can also have practical impact. To this end we demonstrate that music predictability over relatively long time intervals can be exploited in a real-world application, namely the automatic identification of cover songs (i.e., different renditions or versions of the same musical piece). Importantly, this prediction strategy yields a parameter-free approach to cover song identification that is substantially faster, requires less storage, and still maintains highly competitive accuracy when compared to state-of-the-art systems.
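A minimal Python sketch of the kind of predictability measurement described above: fit an autoregressive model to the first half of a descriptor time series and measure the forecast error at increasing horizons on the second half. This is an illustrative protocol, not the paper's exact one.

```python
import numpy as np

def ar_forecast_error(x, order=4, horizon=10):
    """Least-squares AR(order) fit on the first half of a series;
    returns absolute iterated-forecast errors at horizons 1..horizon
    evaluated on the second half."""
    x = np.asarray(x, dtype=float)
    half = len(x) // 2
    train, test = x[:half], x[half:]
    # Design matrix of lagged values for one-step-ahead least squares.
    X = np.column_stack([train[i:len(train) - order + i] for i in range(order)])
    y = train[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    hist, preds = list(test[:order]), []
    for _ in range(horizon):
        pred = float(np.dot(coef, hist[-order:]))
        preds.append(pred)
        hist.append(pred)                 # iterate the model forward
    actual = test[order:order + horizon]
    return np.abs(np.array(preds) - actual)  # error vs. horizon

print(ar_forecast_error(np.cumsum(np.random.randn(400))))
```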
Abstract:
Magnetic interactions in ionic solids are studied using parameter-free methods designed to provide accurate energy differences associated with the quantum states defining the Heisenberg constant J. For a series of ionic solids including KNiF3, K2NiF4, KCuF3, K2CuF4, and the high-Tc parent compound La2CuO4, the experimental J value is quantitatively reproduced. This result has fundamental implications, because the J values have been calculated from a finite cluster model whereas the experiments refer to infinite solids. The present study permits us to firmly establish that in these wide-gap insulators J is determined by strongly local electronic interactions involving two magnetic centers only, thus providing ab initio support for commonly used model Hamiltonians.
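A short reminder (our notation; sign conventions for J vary across the literature) of how a two-center energy difference fixes the Heisenberg constant:

```latex
% Two-center Heisenberg model used to extract J from ab initio
% total-energy differences:
\[
  \hat{H} = J\,\hat{\mathbf{S}}_1 \cdot \hat{\mathbf{S}}_2 .
\]
% For two S = 1/2 centers (e.g. Cu^{2+}), the singlet-triplet gap
% gives J directly,
\[
  J = E(S{=}1) - E(S{=}0),
\]
% so one accurate energy difference between the two spin states of a
% two-center cluster determines the magnetic coupling.
```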
Abstract:
Normally either the Güntelberg or the Davies equation is used to predict activity coefficients of electrolytes in dilute solutions when no better equation is available. The validity of these equations and, additionally, of the parameter-free equations used in the Bates-Guggenheim convention and in the Pitzer formalism for activity coefficients were tested with experimentally determined activity coefficients of HCl, HBr, HI, LiCl, NaCl, KCl, RbCl, CsCl, NH4Cl, LiBr, NaBr and KBr in aqueous solutions at 298.15 K. The experimental activity coefficients of these electrolytes can usually be reproduced within experimental error by means of a two-parameter equation of the Hückel type. The best Hückel equations were also determined for all electrolytes considered. The data used in the calculations of this study cover almost all reliable galvanic cell results available in the literature for the electrolytes considered. The results of the calculations reveal that the parameter-free activity coefficient equations can only be used for very dilute electrolyte solutions in thermodynamic studies.
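For reference, the parameter-free forms being tested and the two-parameter Hückel form they are compared against (standard textbook expressions; A is the Debye-Hückel slope, I the ionic strength, z the charge number):

```latex
% Güntelberg:
\[
  \log_{10}\gamma_i = -\,\frac{A\,z_i^2\,\sqrt{I}}{1+\sqrt{I}}
\]
% Davies (0.3 is the commonly used coefficient; 0.2 also appears):
\[
  \log_{10}\gamma_i = -\,A\,z_i^2\left(\frac{\sqrt{I}}{1+\sqrt{I}} - 0.3\,I\right)
\]
% versus the two-parameter Hückel form, with B^{*} and b fitted per
% electrolyte, which reproduces the data within experimental error:
\[
  \log_{10}\gamma_\pm = -\,\frac{A\,|z_+z_-|\,\sqrt{I}}{1+B^{*}\sqrt{I}} + b\,I
\]
% (The Bates-Guggenheim convention fixes the denominator coefficient
% at 1.5 in a Debye-Hückel form, leaving no free parameters.)
```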
Abstract:
Normally either the Güntelberg or the Davies equation is used to predict activity coefficients of electrolytes in dilute solutions when no better equation is available. The validity of these equations and, additionally, of the parameter-free equation used in the Bates-Guggenheim convention for activity coefficients were tested with experimentally determined activity coefficients of LaCl3, CaCl2, SrCl2 and BaCl2 in aqueous solutions at 298.15 K. The experimental activity coefficients of these electrolytes can usually be reproduced within experimental error by means of a two-parameter equation of the Hückel type. The best Hückel equations were also determined for all electrolytes considered. The data used in the calculations of this study cover almost all reliable galvanic cell results available in the literature for the electrolytes considered. The results of the calculations reveal that the parameter-free activity coefficient equations can only be used for very dilute electrolyte solutions in thermodynamic studies.
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
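A minimal Python sketch of the Monte Carlo test logic the paper exploits: when a statistic is nuisance-parameter-free (pivotal) under the null, its finite-sample null distribution can be simulated and the p-value computed from the rank of the observed value. The chi-square example below is purely illustrative.

```python
import numpy as np

def monte_carlo_pvalue(observed_stat, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo test: rank the observed statistic among n_rep draws
    simulated under the null. The resulting test is exact when
    alpha * (n_rep + 1) is an integer (e.g. n_rep = 99, alpha = 0.05)."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # Count how many simulated statistics are at least as extreme.
    ge = int(np.sum(sims >= observed_stat))
    return (ge + 1) / (n_rep + 1)

# Illustrative use with a chi-square-like statistic under H0:
sim = lambda rng: rng.chisquare(df=3)
print(monte_carlo_pvalue(observed_stat=9.2, simulate_stat=sim))
```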