Abstract:
The principal objective of this paper is to identify the relationship between the results of the Canadian policies implemented to protect female workers against the impact of globalization on the garment industry and the institutional setting in which this labour market is embedded in Winnipeg. This research paper begins with a brief summary of the institutional theory approach that sheds light on the analysis of the effects of institutions on the policy options to protect female workers of the Winnipeg garment industry. Next, this paper identifies the set of beliefs, formal procedures, routines, norms and conventions that characterize the institutional environment of the female workers of Winnipeg’s garment industry. Subsequently, this paper describes the impact of free trade policies on the garment industry of Winnipeg. Afterward, this paper presents an analysis of the barriers that the institutional features of the garment sector in Winnipeg can set to the successful achievement of policy options aimed at protecting the female workforce of this sector. Three policy options are considered: ethical purchasing; training/retraining programs and social engagement support for garment workers; and protection of migrant workers through promoting and facilitating bonds between Canada’s trade unions and trade unions of the labour-sending countries. Finally, this paper concludes that the formation of isolated cultural groups inside factories; the belief that there is gender and race discrimination on the part of the garment industry management against workers; the powerless social conditions of immigrant women; the economic rationality of garment factories’ managers; and the lack of political will on the part of Canada and the labour-sending countries to establish effective bilateral agreements to protect migrant workers are the principal barriers that divide the actors involved in the garment industry in Winnipeg. This division among the principal actors of Winnipeg’s garment industry impedes the change toward more efficient institutions and, hence, the successful achievement of policy options aimed at protecting women workers.
Abstract:
The purpose of this article is to analyze the coverage by CNN and Al Jazeera (in Arabic) of Operation Cast Lead and the Goldstone Report during 2008 and 2009. The investigation is based on the qualitative content analysis approach of Wildemuth and Zhang; the methodology follows the one proposed by those authors, complemented with Gamson and Modigliani's framing theory. The thesis of this article is that geopolitical influences determined differences in how the coverage developed, with CNN more influenced by a Western, pro-US and pro-Israeli discourse, while Al Jazeera was more prone to support the Palestinian cause. In the course of the investigation the thesis proved to be only partially accurate: CNN was not completely supportive of the Israeli arguments during the coverage, but Al Jazeera did show a preferential discourse in favour of the Palestinian cause.
Abstract:
Our new simple method for calculating accurate Franck-Condon factors including nondiagonal (i.e., mode-mode) anharmonic coupling is used to simulate the C2H4+ X̃2B3u ← C2H4 X̃1Ag band in the photoelectron spectrum. An improved vibrational basis set truncation algorithm, which permits very efficient computations, is employed. Because the torsional mode is highly anharmonic, it is separated from the other modes and treated exactly. All other modes are treated through second-order perturbation theory. The perturbation-theory corrections are significant and lead to good agreement with experiment, although the separability assumption for torsion causes the C2D4 results to be not as good as those for C2H4. A variational formulation to overcome this limitation, and to deal with large anharmonicities in general, is suggested.
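For reference, the Franck-Condon factors computed here are, in the standard definition (stated as background, not specific to this paper's anharmonic treatment), the squared overlaps of the vibrational wavefunctions of the initial and final electronic states:

\[
q_{v'v''} = \bigl| \langle \Psi_{v'} \mid \Psi_{v''} \rangle \bigr|^2 ,
\]

where v'' labels vibrational levels of the neutral ground state and v' those of the cation; the paper's contribution is evaluating these overlaps with mode-mode anharmonic coupling included.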
Abstract:
According to linear response theory, all relaxation functions in the linear regime can be obtained using time correlation functions calculated under equilibrium. In this paper, we demonstrate that the cross correlations make a significant contribution to the partial stress relaxation functions in polymer melts. We present two illustrations in the context of polymer rheology using (1) Brownian dynamics simulations of a single chain model for entangled polymers, the slip-spring model, and (2) molecular dynamics simulations of a multichain model. Using the single chain model, we analyze the contribution of the confining potential to the stress relaxation and the plateau modulus. Although the idea is illustrated with a particular model, it applies to any single chain model that uses a potential to confine the motion of the chains. This leads us to question some of the assumptions behind the tube theory, especially the meaning of the entanglement molecular weight obtained from the plateau modulus. To shed some light on this issue, we study the contribution of the nonbonded excluded-volume interactions to the stress relaxation using the multichain model. The proportionality of the bonded/nonbonded contributions to the total stress relaxation (after a density dependent "colloidal" relaxation time) provides some insight into the success of the tube theory in spite of using questionable assumptions. The proportionality indicates that the shape of the relaxation spectrum can indeed be reproduced using the tube theory and the problem is reduced to that of finding the correct prefactor. (c) 2007 American Institute of Physics
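For orientation, the linear-response statement invoked above is the standard Green-Kubo relation (a generic form, not quoted from the paper): the shear relaxation modulus follows from the equilibrium autocorrelation of the shear stress,

\[
G(t) = \frac{V}{k_B T} \bigl\langle \sigma_{xy}(t)\, \sigma_{xy}(0) \bigr\rangle .
\]

Splitting the stress into, say, bonded and nonbonded parts, \( \sigma_{xy} = \sigma^{b}_{xy} + \sigma^{nb}_{xy} \), then gives

\[
G(t) = G_{b,b}(t) + G_{nb,nb}(t) + 2\,G_{b,nb}(t),
\]

where \( G_{b,nb}(t) \) is the symmetrized cross-correlation term; the abstract's point is that such cross terms make a significant contribution to the partial relaxation functions.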
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling design with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
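As a minimal sketch of the accumulation step described above (invented numbers and variable names; the paper's own code is Fortran for the analysis of variance plus the listed REML code):

```python
import numpy as np

# Hypothetical variance components from a hierarchical ANOVA (or REML),
# ordered from the finest stage (shortest separating distance) to the coarsest.
lags = np.array([1.0, 3.0, 9.0, 27.0])          # separating distances in geometric progression
components = np.array([0.12, 0.25, 0.18, 0.09])  # estimated variance component per stage

# The rough variogram: the semivariance at each stage's lag is the sum of the
# components accumulated from the shortest lag up to and including that stage.
rough_variogram = np.cumsum(components)

for h, g in zip(lags, rough_variogram):
    print(f"lag {h:5.1f}: semivariance ~ {g:.3f}")
```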
Abstract:
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Both stochastic and deterministic stability criteria for macrostates rely on macro-level contexts, which makes them sensitive to differences between macro-levels.
Abstract:
This book is a collection of articles devoted to the theory of linear operators in Hilbert spaces and its applications. The subjects covered range from the abstract theory of Toeplitz operators to the analysis of very specific differential operators arising in quantum mechanics, electromagnetism, and the theory of elasticity; the stability of numerical methods is also discussed. Many of the articles deal with spectral problems for not necessarily self-adjoint operators. Some of the articles are surveys outlining the current state of the subject and presenting open problems.
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions on the phase space, averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be expressed as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale, by resorting to only well-selected simulations and by taking full advantage of ensemble methods. The specific case of the response of the globally averaged surface temperature to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
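For context, the Lorenz 96 model used as a test bed has the standard form (stated here as background, in our notation rather than the paper's):

\[
\frac{dx_i}{dt} = \left( x_{i+1} - x_{i-2} \right) x_{i-1} - x_i + F, \qquad i = 1, \dots, N,
\]

with cyclic indices \( x_{i+N} = x_i \): the quadratic term mimics advection, the linear term dissipation, and F the external forcing. The Kramers-Kronig relations mentioned above are the usual causality-based dispersion relations linking the real and imaginary parts of a susceptibility, e.g.

\[
\operatorname{Re}\chi(\omega) = \frac{1}{\pi}\, P\!\int_{-\infty}^{\infty} \frac{\operatorname{Im}\chi(\omega')}{\omega' - \omega}\, d\omega' .
\]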
Abstract:
This short contribution examines the difficulties that have not yet been fully overcome in the many developments made from the simplest (and original) tube model for entangled polymers. It is concluded that many more length scales have to be considered sequentially when deriving a continuum rheological model from molecular considerations than have been considered in the past. In particular, most unresolved issues of the tube theory are related to the length scale of the tube diameter, and molecular dynamics simulations are the perfect route to resolve them. The power of molecular simulations is illustrated by two examples: stress contributions from bonded and non-bonded interactions, and the inter-chain coupling, which is usually neglected in the tube theory.
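For reference, the entanglement molecular weight at issue in such discussions is conventionally extracted from the plateau modulus via the standard relation (Doi-Edwards convention; some authors omit the 4/5 prefactor, and this is background rather than a result of the paper):

\[
G_N^0 = \frac{4}{5}\,\frac{\rho R T}{M_e},
\]

so any cross-correlation or non-bonded contribution to \( G_N^0 \) propagates directly into the inferred \( M_e \).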
Abstract:
Current mathematical models in building research have in most studies been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that chaos models are valid as a basis and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I) reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help to better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to building simulation scientists, initiates a dialogue and builds bridges between scientists and engineers, and stimulates future research about a wide range of issues on building environmental systems.
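As a minimal illustration of the sensitivity to initial conditions that is central to the chaos concepts discussed here (a generic textbook example, not taken from the paper):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a classic minimal chaotic system.
# Two trajectories starting 1e-9 apart diverge to order-one differences,
# illustrating why long-term prediction of chaotic systems is limited.
r = 4.0
x, y = 0.2, 0.2 + 1e-9
for n in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(f"after 60 iterations: x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.3e}")
```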
Abstract:
Current mathematical models in building research have in most studies been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that chaos models are valid as a basis and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I), published in the previous issue, reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help to better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to (1) building simulation scientists and designers, (2) initiating a dialogue between scientists and engineers, and (3) stimulating future research on a wide range of issues involved in designing and managing building environmental systems.
Abstract:
This paper discusses concepts of value from the point of view of the user of the space and the counter view of the provider of the same. Land and property are factors of production. The value of the land flows from the use to which it is put, and that, in turn, is dependent upon the demand (and supply) for the product or service that is produced/provided from that space. If there is high demand for the product (at a fixed level of supply), the price will increase and the economic rent for the land/property will increase accordingly. This is the underlying paradigm of Ricardian rent theory, where the supply of land is fixed and a single good is produced. In such a case the rent of land is wholly an economic rent. Economic theory generally distinguishes between two kinds of price: price of production or “value in use” (as determined by the labour theory of value), and market price or “value in exchange” (as determined by supply and demand). It is based on a coherent and consistent theory of value and price. Effectively the distinction is between what space is ‘worth’ to an individual and that space’s price of exchange in the market place. In a perfect market, where any individual has access to the same information as all others in the market, price and worth should coincide. However, in a market where access to information is not uniform, and where different uses compete for the same space, it is more likely that the two figures will diverge. This paper argues that valuers’ traditional reliance on methods of comparison to determine “price” has led to an artificial divergence of “value in use” and “value in exchange”; now that such comparisons are becoming more difficult, owing to the diversity of lettings in the market place, there will be a requirement to return to fundamentals and pay heed to the thought process of the user in assessing the worth of the space to be let.
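As a purely illustrative worked example of the Ricardian mechanism described above (invented figures, supplied only for exposition): if the output produced on a site sells for 100 per period and all non-land costs, including normal profit, amount to 70, the economic rent the site can command is roughly 100 - 70 = 30; if higher demand for the product raises revenue to 120 at the fixed supply of land, the surplus, and hence the rent, rises to 120 - 70 = 50 while costs are unchanged.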
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
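For orientation, the linearized Debye-Hückel pair distribution function referred to above has the standard form (Gaussian units; stated as background in our notation):

\[
g_{ij}(r) \simeq 1 - \frac{q_i q_j}{k_B T}\,\frac{e^{-\kappa r}}{r},
\]

with \( \kappa \) the inverse Debye screening length. It retains the correct exponential screening at large r but diverges to negative, unphysical values as \( r \to 0 \) for like charges; removing that short-range defect while keeping the screened tail is the idea the paper generalizes to multicomponent mixtures.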
Abstract:
We study linear variable-coefficient control problems in descriptor form. Based on a behaviour approach and the general theory of linear differential-algebraic systems, we give a theoretical analysis and describe numerically stable methods to determine the structural properties of the system.
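For reference, the linear variable-coefficient descriptor form studied in this setting can be written as (our notation, assumed rather than quoted from the paper):

\[
E(t)\,\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t), \qquad y(t) = C(t)\,x(t),
\]

where the matrix \( E(t) \) may be singular. That singularity is what makes the system differential-algebraic rather than an ordinary state-space system, and it is the source of the structural properties (e.g., the index) that numerically stable methods must determine.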