992 results for Generalized seduction theory
Abstract:
A simple and completely general representation of the exact exchange-correlation functional of density-functional theory is derived from the universal Lieb-Oxford bound, which holds for any Coulomb-interacting system. This representation leads to an alternative point of view on popular hybrid functionals, providing a rationale for why they work and how they can be constructed. A similar representation of the exact correlation functional allows the construction of fully nonempirical hyper-generalized-gradient approximations (HGGAs), radically departing from established paradigms of functional construction. Numerical tests of these HGGAs for atomic and molecular correlation energies and molecular atomization energies show that even simple HGGAs match or outperform state-of-the-art correlation functionals currently used in solid-state physics and quantum chemistry.
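As a point of reference (not stated in this abstract), the Lieb-Oxford bound invoked here is usually written, for any N-electron density n, as

    E_{\mathrm{xc}}[n] \;\ge\; -\,C_{\mathrm{LO}} \int n(\mathbf{r})^{4/3}\, d^3 r , \qquad C_{\mathrm{LO}} \approx 2.27 ,

i.e., the exchange-correlation energy can never be more negative than a universal constant times an LDA-like integral of n^{4/3}; representations of E_xc built on this inequality inherit its universality.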
Abstract:
Knowledge of the atomic structure of clusters composed of a few atoms is a basic prerequisite for gaining insight into the mechanisms that determine their chemical and physical properties as a function of diameter, shape, and surface termination, as well as for understanding the mechanism of bulk formation. Given the wide use of metal systems in modern life, the accurate determination of the properties of 3d, 4d, and 5d metal clusters is a major challenge for nanoscience. In this work, we report a density functional theory study of the atomic structure, binding energies, effective coordination numbers, average bond lengths, and magnetic properties of the 3d, 4d, and 5d metal (30 elements) clusters containing 13 atoms, M(13). First, a set of lowest-energy local minimum structures (as supported by vibrational analysis) was obtained by combining high-temperature first-principles molecular-dynamics simulation, structure crossover, and the selection of five well-known M(13) structures. Several new lower-energy configurations were identified, e.g., for Pd(13), W(13), and Pt(13), and previously known structures were confirmed by our calculations. Furthermore, the following trends were identified: (i) compact icosahedral-like forms occur at the beginning of each metal series, more open structures such as hexagonal-bilayer-like and double simple-cubic layers occur in the middle of each series, and structures with an increasing effective coordination number occur for large d-state occupation. (ii) For Au(13), we found that spin-orbit coupling favors three-dimensional (3D) structures, i.e., a 3D structure is about 0.10 eV lower in energy than the lowest-energy known two-dimensional configuration. (iii) Magnetic exchange interactions play an important role for particular systems such as Fe, Cr, and Mn. (iv) The analysis of the binding energies and average bond lengths shows a parabolic-like shape as a function of the occupation of the d states; hence, most of the properties can be explained by the chemical picture of the occupation of bonding and antibonding states.
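For context, a hedged reminder (a definition standard in cluster studies of this type, not quoted from the abstract): the effective coordination number of atom i is commonly obtained self-consistently from the interatomic distances d_ij as

    \mathrm{ECN}_i = \sum_{j \ne i} \exp\!\left[ 1 - \left( d_{ij} / d_{\mathrm{av},i} \right)^{6} \right], \qquad
    d_{\mathrm{av},i} = \frac{ \sum_{j \ne i} d_{ij}\, \exp\!\left[ 1 - ( d_{ij} / d_{\mathrm{av},i} )^{6} \right] }{ \sum_{j \ne i} \exp\!\left[ 1 - ( d_{ij} / d_{\mathrm{av},i} )^{6} \right] },

iterated until d_av,i converges, which yields a smooth, non-integer coordination measure that distinguishes the structural motifs discussed above.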
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The resulting system of momentum and continuity equations coincides with the one obtained by applying the Galerkin method to the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics.
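A minimal sketch of the governing equations being discretized, in standard notation chosen here (velocity v, pressure p, temperature T, entropy per unit volume s_v, viscous dissipation \Phi); the paper's specific bond graph variables are not reproduced:

    \nabla \cdot \mathbf{v} = 0,
    \rho \left( \partial_t \mathbf{v} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \mu \nabla^2 \mathbf{v} + \mathbf{f},
    T \left( \partial_t s_v + \nabla \cdot ( s_v \mathbf{v} ) \right) = \nabla \cdot ( k \nabla T ) + \Phi, \qquad \Phi = 2\mu\, \mathbf{D}:\mathbf{D},

where the dissipation \Phi is precisely the term through which the mechanical and thermal energy domains couple in the bond graph.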
Abstract:
Polytomous Item Response Theory Models provides a unified, comprehensive introduction to the range of polytomous models available within item response theory (IRT). It begins by outlining the primary structural distinction between the two major types of polytomous IRT models. This focuses on the two types of response probability that are unique to polytomous models and their associated response functions, which are modeled differently by the different types of IRT model. It describes, both conceptually and mathematically, the major specific polytomous models, including the Nominal Response Model, the Partial Credit Model, the Rating Scale Model, and the Graded Response Model. Important variations, such as the Generalized Partial Credit Model, are also described, as are less common variations, such as the Rating Scale version of the Graded Response Model. Relationships among the models are also investigated, and the operation of measurement information is described for each major model. Practical examples of major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
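To make the relationships among these models concrete, a standard statement (hedged: the notation is generic, not the book's) of the Generalized Partial Credit Model for an item i with categories x = 0, ..., m_i is

    P_{ix}(\theta) = \frac{ \exp\!\left[ \sum_{k=0}^{x} a_i (\theta - b_{ik}) \right] }{ \sum_{h=0}^{m_i} \exp\!\left[ \sum_{k=0}^{h} a_i (\theta - b_{ik}) \right] }, \qquad \text{with the convention } \sum_{k=0}^{0} \equiv 0 ;

fixing a_i = 1 recovers the Partial Credit Model, and constraining the category parameters to a common set of thresholds shifted by an item location, b_{ik} = b_i + \tau_k, recovers the Rating Scale Model, while the Graded Response Model instead models cumulative probabilities P^{*}_{ix}(\theta) = [1 + \exp(-a_i(\theta - b_{ix}))]^{-1} and sets P_{ix} = P^{*}_{ix} - P^{*}_{i,x+1}.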
Abstract:
A model for finely layered visco-elastic rock proposed by us in previous papers is revisited and generalized to include couple stresses. We begin with an outline of the governing equations for the standard continuum case and apply a computational simulation scheme suitable for problems involving very large deformations. We then consider buckling instabilities in a finite, rectangular domain. Embedded within this domain, parallel to the longer dimension, we consider a stiff, layered beam under compression. We analyse folding up to 40% shortening. The standard continuum solution becomes unstable for extreme values of the shear/normal viscosity ratio. The instability is a consequence of the neglect of the bending stiffness/viscosity in the standard continuum model. We suggest considering these effects within the framework of a couple stress theory. Couple stress theories involve second-order spatial derivatives of the velocities/displacements in the virtual work principle. To avoid the need for C1 continuity in the finite element formulation, we introduce the spin of the cross sections of the individual layers as an independent variable and enforce its equality to the spin of the unit normal vector to the layers (the director of the layer system) by means of a penalty method. We illustrate the convergence of the penalty method by means of numerical solutions of simple shears of an infinite layer for increasing values of the penalty parameter. For the shear problem we present solutions assuming that the internal layering is initially oriented orthogonal to the surfaces of the shear layer. For high values of the ratio of the normal to the shear viscosity, the deformation concentrates in thin bands near the layer surfaces. The effect of couple stresses on the evolution of folds in layered structures is also investigated.
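A minimal sketch of the penalty idea described above (notation is mine, not the paper's): with \omega_c the independent cross-sectional spin and \omega_n the spin of the layer normal (the director), the constraint \omega_c = \omega_n is imposed by augmenting the virtual work principle with a term of the form

    \delta W_{p} = \kappa \int_{\Omega} ( \omega_c - \omega_n )\, \delta( \omega_c - \omega_n )\, d\Omega ,

so that for increasing penalty parameter \kappa the two spins are driven together while the formulation retains only first-order derivatives and can therefore use standard C0 finite elements.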
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become decisive, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index returns, the DJI, FTSE 100, and NIKKEI 225, representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (within the in-sample estimation, this is so for the right tail of the distribution of returns).
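The two-step GARCH-EVT procedure of McNeil and Frey can be outlined as follows; this is an illustrative sketch under assumed package choices (arch, scipy) and variable names, not the paper's code:

    # Illustrative sketch of two-step GARCH-EVT tail estimation (packages and
    # names are assumptions for illustration, not taken from the paper).
    import numpy as np
    from arch import arch_model          # AR-GARCH filtering
    from scipy.stats import genpareto    # generalized Pareto distribution (GPD)

    def conditional_var(returns, q=0.99, tail_frac=0.10):
        """One-step-ahead VaR at level q from an AR(1)-GARCH(1,1) filter + GPD tail."""
        # Step 1: filter returns so the standardized residuals are roughly i.i.d.
        res = arch_model(100 * returns, mean="AR", lags=1,
                         vol="GARCH", p=1, q=1).fit(disp="off")
        z = np.asarray(res.std_resid.dropna())

        # Step 2: fit a GPD to residual losses exceeding a high threshold u.
        losses = -z
        u = np.quantile(losses, 1.0 - tail_frac)
        excess = losses[losses > u] - u
        xi, _, beta = genpareto.fit(excess, floc=0.0)   # shape, (loc fixed), scale
        n, n_u = losses.size, excess.size
        z_q = u + (beta / xi) * (((1.0 - q) / (n_u / n)) ** (-xi) - 1.0)  # assumes xi != 0

        # Step 3: combine with the forecast conditional mean and volatility.
        fc = res.forecast(horizon=1)
        mu = fc.mean.iloc[-1, 0]
        sigma = np.sqrt(fc.variance.iloc[-1, 0])
        return (sigma * z_q - mu) / 100.0   # VaR as a positive loss fraction

The EVT step only reshapes the tail of the standardized residuals; the GARCH step supplies the time-varying location and scale, which is why the combined conditional quantile tracks volatility clustering.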
Abstract:
This paper presents a game-theory-based methodology to allocate transmission costs, considering cooperation and competition between producers. As an original contribution, it determines the degree of participation in the additional costs according to demand behavior. A comparative study was carried out between the results obtained using the Nucleolus and the Shapley Value and those from other techniques, such as the Averages Allocation method and the Generalized Generation Distribution Factors (GGDF) method. As an example, a six-node network was used for the simulations. The results demonstrate the ability of the approach to find adequate solutions in an open-access network environment.
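For reference, the Shapley Value used in the comparison assigns to each producer i (hedged: this is the generic definition, with v(S) read here as the transmission cost attributable to a coalition S of producers) the amount

    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{ |S|!\,(|N|-|S|-1)! }{ |N|! } \left[ v(S \cup \{i\}) - v(S) \right],

i.e., the average marginal cost that producer i adds over all orders in which the grand coalition can form; the Nucleolus instead selects the allocation that lexicographically minimizes the largest coalition excess.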
Abstract:
We have generalized earlier work on anchoring of nematic liquid crystals by Sullivan, and Sluckin and Poniewierski, in order to study transitions which may occur in binary mixtures of nematic liquid crystals as a function of composition. Microscopic expressions have been obtained for the anchoring energy of (i) a liquid crystal in contact with a solid aligning surface; (ii) a liquid crystal in contact with an immiscible isotropic medium; (iii) a liquid crystal mixture in contact with a solid aligning surface. For (iii), possible phase diagrams of anchoring angle versus dopant concentration have been calculated using a simple liquid crystal model. These exhibit some interesting features including re-entrant conical anchoring, for what are believed to be realistic values of the molecular parameters. A way of relaxing the most drastic approximation implicit in the above approach is also briefly discussed.
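As a hedged point of reference (not from this abstract), the macroscopic quantity such microscopic calculations feed into is usually the Rapini-Papoular form of the anchoring energy,

    f_s(\theta) = \tfrac{1}{2}\, W \sin^2( \theta - \theta_e ),

where \theta_e is the easy-axis direction and W the anchoring strength; composition-dependent changes in the effective W and \theta_e are what drive the anchoring transitions and the conical (tilted) anchoring mentioned above.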
Abstract:
An abstract theory of generalized synchronization of a system of several oscillators coupled by a medium is given. By generalized synchronization we mean the existence of an invariant manifold that allows a reduction in dimension. The case of a concrete system modeling the dynamics of a chemical solution in two containers connected to a third container is studied, from the basic setting to arbitrary perturbations. Conditions under which synchronization occurs are given. Our theoretical results are complemented with a numerical study.
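A minimal formal sketch of the notion used here (notation is generic, not necessarily the paper's): for oscillators x_1, ..., x_k coupled through a medium variable z, generalized synchronization corresponds to the existence of an attracting invariant manifold

    M = \{ (x_1, \dots, x_k, z) : x_i = \Phi_i(z), \; i = 1, \dots, k \},

so that, once trajectories reach M, the dynamics is captured by the lower-dimensional evolution of z alone.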
Abstract:
Generalized multiresolution analyses (GMRAs) are increasing sequences of subspaces of a Hilbert space H that fail to be multiresolution analyses (MRAs) in the sense of wavelet theory because the core subspace does not have an orthonormal basis generated by a fixed scaling function. Previous authors have studied a multiplicity function m which, loosely speaking, measures the failure of the GMRA to be an MRA. When the Hilbert space H is L^2(R^n), the possible multiplicity functions have been characterized by Baggett and Merrill. Here we start with a function m satisfying a consistency condition which is known to be necessary, and build a GMRA in an abstract Hilbert space with multiplicity function m.
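For orientation (standard definitions, hedged, not quoted from the paper): a GMRA is an increasing sequence of closed subspaces V_j of H satisfying

    V_j \subset V_{j+1}, \qquad \overline{\bigcup_j V_j} = H, \qquad \bigcap_j V_j = \{0\}, \qquad f \in V_j \iff Df \in V_{j+1},

with the core space V_0 invariant under the unitary translation operators; an MRA additionally requires V_0 to have an orthonormal basis of translates of a single scaling function, and the multiplicity function m records, via the representation of the translations on V_0, how far V_0 is from admitting such a basis.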
Abstract:
This paper develops and estimates a model of demand for environmental public goods which allows consumers to learn about their preferences through consumption experiences. We develop a theoretical model of Bayesian updating, perform comparative statics on the model, and show how the theoretical model can be consistently incorporated into a reduced-form econometric model. We then estimate the model using data collected for two environmental goods. We find that the predictions of the theoretical exercise, that additional experience makes consumers more certain about their preferences in both mean and variance, are supported in each case.
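A minimal sketch of the kind of updating rule involved (standard normal-normal learning, with notation assumed here rather than taken from the paper): if a consumer's prior over her taste parameter is \theta \sim N(\mu_0, \sigma_0^2) and each consumption experience t delivers a noisy signal x_t = \theta + \varepsilon_t with \varepsilon_t \sim N(0, \sigma_\varepsilon^2), then after T experiences with sample mean \bar{x},

    \mu_T = \frac{ \sigma_\varepsilon^2 \mu_0 + T \sigma_0^2 \bar{x} }{ \sigma_\varepsilon^2 + T \sigma_0^2 }, \qquad
    \sigma_T^2 = \frac{ \sigma_0^2 \sigma_\varepsilon^2 }{ \sigma_\varepsilon^2 + T \sigma_0^2 },

so the posterior mean moves toward the experienced signals and the posterior variance falls monotonically in T, which is the comparative-statics pattern tested empirically.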
Abstract:
Concerns for fairness, workers' morale, and reciprocity influence firms' wage-setting policy. In this paper we formalize a theory of wage-setting behavior in a simple and tractable model that explicitly considers these behavioral aspects. A worker is assumed to have reference-dependent preferences and displays loss aversion when evaluating the fairness of a wage contract. The theory establishes a wage-effort relationship that captures the worker's reference-dependent reciprocity, which in turn influences the firm's optimal wage policy. The paper makes two key contributions: it identifies loss aversion as an explanation for a worker's asymmetric reciprocity, and it provides a realistic and generalized microfoundation for downward wage rigidity. We further illustrate the implications of our theory for both wage-setting and hiring behavior. Downward wage rigidity generates several implications for the outcome of the initial employment contract. The worker's reference wage, the extent of his negative reciprocity, and the firm's expectations are the key drivers of the propositions derived.
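A hedged illustration of the preference structure described (a generic reference-dependent form; parameters and notation are assumptions, not the paper's): evaluating a wage offer w against a reference wage r,

    u(w \mid r) = \begin{cases} w + \eta\,(w - r), & w \ge r \\ w + \eta \lambda\,(w - r), & w < r \end{cases}, \qquad \lambda > 1,

so a wage shortfall below r is weighted \lambda times more heavily than an equal-sized gain; this generates the asymmetric effort response (negative reciprocity stronger than positive reciprocity) and pushes the firm away from cutting wages, i.e., toward downward wage rigidity.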
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other hand. Moreover, we explain the working of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
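For concreteness, a hedged statement of the shortage function in the mean-variance-skewness setting (a generic Briec-Kerstens-type formulation, not necessarily the authors' exact notation): given a portfolio with moment vector (\mu, \sigma^2, s) and a direction g = (g_\mu, g_{\sigma^2}, g_s) \ge 0,

    S(\mu, \sigma^2, s; g) = \sup \{ \delta \ge 0 : (\mu + \delta g_\mu,\; \sigma^2 - \delta g_{\sigma^2},\; s + \delta g_s) \text{ is attainable} \},

so S measures how far a portfolio can simultaneously be improved in mean and skewness and reduced in variance before hitting the frontier, whereas PGP reaches frontier points by minimizing weighted deviations from separately optimized moment targets.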
Abstract:
A new graph-based construction of generalized low-density codes (GLD-Tanner) with binary BCH constituents is described. The proposed family of GLD codes is optimal on block-erasure channels and quasi-optimal on block-fading channels, where optimality is considered in the outage probability sense. A classical GLD code for ergodic channels (e.g., the AWGN channel, the i.i.d. Rayleigh fading channel, and the i.i.d. binary erasure channel) is built by connecting bit nodes and subcode nodes via a unique random edge permutation. In the proposed construction of full-diversity GLD codes (referred to as root GLD), bit nodes are divided into 4 classes, subcodes are divided into 2 classes, and both sides of the Tanner graph are linked via 4 random edge permutations. The study focuses on non-ergodic channels with two states and can be easily extended to channels with 3 states or more.
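Two hedged reminders of the performance notions invoked above (standard definitions, not specific to this paper): on a block-fading channel with channel state h and target rate R, the outage probability is

    P_{\mathrm{out}}(R) = \Pr\left[\, I(h) < R \,\right],

and a coding scheme achieves full diversity when its error rate decays with the maximal diversity order d = -\lim_{\mathrm{SNR} \to \infty} \log P_e(\mathrm{SNR}) / \log \mathrm{SNR} permitted by the number of independent fading blocks; the root-GLD construction targets exactly this full-diversity behavior on the two-state channel.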