62 results for Relativistic many-body perturbation theory
Abstract:
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that, by using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities from which the response of the system to general perturbations can be constructed. Starting from the definition of the SRB measure, we then study the consequences of adopting different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation yields the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that in this case the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results for strategies for analysing the outputs of numerical experiments, providing a critical review of a formula proposed by Reick.
Abstract:
In the first half of this memoir we explore the interrelationships between the abstract theory of limit operators (see e.g. the recent monographs of Rabinovich, Roch and Silbermann (2004) and Lindner (2006)) and the concepts and results of the generalised collectively compact operator theory introduced by Chandler-Wilde and Zhang (2002). We build up to results obtained by applying this generalised collectively compact operator theory to the set of limit operators of an operator (its operator spectrum). In the second half of this memoir we study bounded linear operators on a generalised sequence space of vector-valued sequences taking values in some complex Banach space. We make what seems to be a more complete study than hitherto of the connections between Fredholmness, invertibility, invertibility at infinity, and invertibility or injectivity of the set of limit operators, with some emphasis on the case when the operator is a locally compact perturbation of the identity. In particular, we obtain stronger results than previously known for the subtle limiting cases. Our tools in this study are the results from the first half of the memoir and an exploitation of the partial duality between these sequence spaces and its implications for bounded linear operators which are also continuous with respect to the weaker topology (the strict topology) introduced in the first half of the memoir. Results in this second half of the memoir include a new proof that injectivity of all limit operators (the classic Favard condition) implies invertibility for a general class of almost periodic operators, and characterisations of invertibility at infinity and Fredholmness for operators in the so-called Wiener algebra. In two final chapters our results are illustrated by and applied to concrete examples. Firstly, we study the spectra and essential spectra of discrete Schrödinger operators (both self-adjoint and non-self-adjoint), including operators with almost periodic and random potentials.
In the final chapter we apply our results to integral operators.
Abstract:
A three-point difference scheme recently proposed in Ref. 1 for the numerical solution of a class of linear, singularly perturbed, two-point boundary-value problems is investigated. The scheme is derived from a first-order approximation to the original problem with a small deviating argument. It is shown here that, in the limit, as the deviating argument tends to zero, the difference scheme converges to a one-sided approximation to the original singularly perturbed equation in conservation form. The limiting scheme is shown to be stable on any uniform grid. Therefore, no advantage arises from using the deviating argument, and the most accurate and efficient results are obtained with the deviation at its zero limit.
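For context, the limiting one-sided (upwind) discretisation can be illustrated on a model singularly perturbed problem −εu″ + bu′ = 0, u(0) = 0, u(1) = 1. This is a hedged sketch with invented parameter values, not the scheme of Ref. 1 itself:

```python
import numpy as np

def upwind_solve(eps, b, n):
    """Solve -eps*u'' + b*u' = 0 on (0,1), u(0)=0, u(1)=1,
    using a backward (upwind) difference for u' (assumes b > 0).
    Returns the uniform grid and the discrete solution."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    lower = -eps / h**2 - b / h      # coefficient of u_{i-1}
    diag = 2 * eps / h**2 + b / h    # coefficient of u_i
    upper = -eps / h**2              # coefficient of u_{i+1}
    for i in range(n - 1):
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = lower
        if i < n - 2:
            A[i, i + 1] = upper
    rhs[-1] -= upper * 1.0           # move boundary value u(1)=1 to rhs
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    u[-1] = 1.0
    return x, u
```

On a uniform grid this upwind scheme is monotone and stable for any ε > 0, consistent with the abstract's conclusion that the limiting one-sided scheme is stable and the deviating argument confers no advantage.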
Abstract:
The probability of a quantum particle being detected in a given solid angle is determined by the S-matrix. The explanation of this fact in time-dependent scattering theory is often linked to the quantum flux, since the quantum flux integrated against a (detector) surface and over a time interval can be viewed as the probability that the particle crosses this surface within the given time interval. Regarding many-particle scattering, however, this argument is no longer valid, as each particle arrives at the detector at its own random time. While various treatments of this problem can be envisaged, here we present a straightforward Bohmian analysis of many-particle potential scattering from which the S-matrix probability emerges in the limit of large distances.
Abstract:
Equilibrium theory occupies an important position in chemistry and is traditionally based on thermodynamics. A novel mathematical approach to chemical equilibrium theory for gaseous systems at constant temperature and pressure is developed. Six theorems are presented which illustrate the power of mathematics to explain chemical observations, and these are combined logically into a coherent system. This mathematical treatment provides more insight into chemical equilibrium and creates more tools that can be used to investigate complex situations. Although some of the issues covered have previously been given in the literature, new mathematical representations are provided. Compared to traditional treatments, the new approach relies on straightforward mathematics and less on thermodynamics, thus giving a new and complementary perspective on equilibrium theory. It provides a new theoretical basis for a thorough and deep presentation of traditional chemical equilibrium. This work demonstrates that new research in a traditional field such as equilibrium theory, generally thought to have been completed many years ago, can still offer new insights, and that more efficient ways to present the contents can be established. The work presented here can be considered appropriate as part of a mathematical chemistry course at university level.
Abstract:
This work presents two schemes for measuring the linear and angular kinematics of a rigid body using a kinematically redundant array of triple-axis accelerometers, with potential applications in biomechanics. A novel angular velocity estimation algorithm is proposed and evaluated that can compensate for angular velocity errors using measurements of the direction of gravity. Analysis and discussion of optimal sensor array characteristics are provided. A damped two-axis pendulum was used to excite all six DoF of a suspended accelerometer array through determined complex motion, and is the basis of both simulation and experimental studies. The relationship between accuracy and sensor redundancy is investigated for arrays of up to 100 triple-axis accelerometers (300 accelerometer axes) in simulation and 10 equivalent sensors (30 accelerometer axes) in the laboratory test rig. The paper also reports on the sensor calibration techniques and hardware implementation.
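The kinematics underlying such accelerometer-array schemes can be sketched with the standard rigid-body acceleration field a(r) = a₀ + W r, where W = [α]ₓ + [ω]ₓ² combines angular acceleration α and angular velocity ω. Below is a minimal least-squares fit over a redundant array; the sensor positions and motion values are invented for illustration, and this is not the paper's algorithm (which additionally exploits gravity-direction measurements):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x such that [v]x @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fit_acceleration_field(positions, accels):
    """Least-squares fit of a_i = a0 + W @ r_i over all sensors.

    Unknowns: a0 (3 entries) and the 3x3 matrix W (9 entries,
    row-major), so each triple-axis sensor contributes 3 equations
    and >= 4 non-coplanar sensors determine all 12 unknowns.
    """
    n = len(positions)
    A = np.zeros((3 * n, 12))
    b = np.concatenate(accels)
    for i, r in enumerate(positions):
        A[3*i:3*i+3, 0:3] = np.eye(3)
        A[3*i:3*i+3, 3:12] = np.kron(np.eye(3), r)  # rows of W act on r
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    a0, W = sol[:3], sol[3:].reshape(3, 3)
    # W = [alpha]x + [omega]x^2: the skew part encodes the angular
    # acceleration alpha; the symmetric part encodes omega.
    alpha = 0.5 * np.array([W[2,1] - W[1,2],
                            W[0,2] - W[2,0],
                            W[1,0] - W[0,1]])
    return a0, W, alpha
```

In the noise-free case four non-coplanar sensors already determine the field; with noisy readings, additional sensors reduce the variance of the estimate, which is the redundancy-accuracy trade-off the paper investigates.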
Abstract:
Many physical systems exhibit dynamics with vastly different time scales. Often the different motions interact only weakly and the slow dynamics is naturally constrained to a subspace of phase space, in the vicinity of a slow manifold. In geophysical fluid dynamics this reduction in phase space is called balance. Classically, balance is understood by way of the Rossby number R or the Froude number F; either R ≪ 1 or F ≪ 1. We examined the shallow-water equations and Boussinesq equations on an f-plane and determined a dimensionless parameter ε, small values of which imply a time-scale separation. In terms of R and F, ε = RF/√(R² + F²). We then developed a unified theory of (extratropical) balance based on ε that includes all cases of small R and/or small F. The leading-order systems are ensured to be Hamiltonian and turn out to be governed by the quasi-geostrophic potential-vorticity equation. However, the height field is not necessarily in geostrophic balance, so the leading-order dynamics are more general than in quasi-geostrophy. Thus the quasi-geostrophic potential-vorticity equation (as distinct from the quasi-geostrophic dynamics) is valid more generally than its traditional derivation would suggest. In the case of the Boussinesq equations, we have found that balanced dynamics generally implies hydrostatic balance without any assumption on the aspect ratio; only when the Froude number is not small and it is the Rossby number that guarantees a timescale separation must we impose the requirement of a small aspect ratio to ensure hydrostatic balance.
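Taking the dimensionless parameter to be ε = RF/√(R² + F²) (an assumption about the garbled expression above), its limiting behaviour can be checked directly:

```latex
\epsilon = \frac{RF}{\sqrt{R^2 + F^2}} \le \min(R, F),
\qquad
\epsilon \approx R \quad (R \ll F),
\qquad
\epsilon \approx F \quad (F \ll R).
```

So ε is small precisely when at least one of R, F is small, as a unified theory covering all cases of small Rossby and/or small Froude number requires.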
Abstract:
Despite many decades investigating scalp-recordable 8–13-Hz (alpha) electroencephalographic activity, no consensus has yet emerged regarding its physiological origins or its functional role in cognition. Here we outline a detailed, physiologically meaningful theory for the genesis of this rhythm that may provide important clues to its functional role. In particular we find that electroencephalographically plausible model dynamics, obtained with physiologically admissible parameterisations, reveal a cortex perched on the brink of stability, which when perturbed gives rise to a range of unanticipated complex dynamics that include 40-Hz (gamma) activity. Preliminary experimental evidence, involving the detection of weak nonlinearity in resting EEG using an extension of the well-known surrogate data method, suggests that nonlinear (deterministic) dynamics are more likely to be associated with weakly damped alpha activity. Thus rather than the “alpha rhythm” being an idling rhythm it may be more profitable to conceive of it as a readiness rhythm.
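The surrogate-data method mentioned above rests on generating surrogates that preserve a signal's linear (spectral) properties while destroying any nonlinear structure. A minimal sketch of the classic phase-randomisation step follows; this is the generic version, not the authors' extension, and the test signal is synthetic:

```python
import numpy as np

def phase_randomised_surrogate(x, rng):
    """Return a surrogate with the same amplitude spectrum as x but
    randomised Fourier phases: linear correlations are preserved,
    nonlinear (deterministic) structure is destroyed."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.size)
    phases[0] = 0.0          # keep the DC component real
    if x.size % 2 == 0:
        phases[-1] = 0.0     # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)
```

A discriminating statistic (e.g. a nonlinear prediction error) computed on the original EEG and on an ensemble of such surrogates then yields a test for weak nonlinearity.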
Abstract:
This paper represents the second part of a study of semi-geostrophic (SG) geophysical fluid dynamics. SG dynamics shares certain attractive properties with the better known and more widely used quasi-geostrophic (QG) model, but is also a good prototype for balanced models that are more accurate than QG dynamics. The development of such balanced models is an area of great current interest. The goal of the present work is to extend a central body of QG theory, concerning the evolution of disturbances to prescribed basic states, to SG dynamics. Part 1 was based on the pseudomomentum; Part 2 is based on the pseudoenergy. A pseudoenergy invariant is a conserved quantity, of second order in disturbance amplitude relative to a prescribed steady basic state, which is related to the time symmetry of the system. We derive such an invariant for the semi-geostrophic equations, and use it to obtain: (i) a linear stability theorem analogous to Arnol'd's ‘first theorem’; and (ii) a small-amplitude local conservation law for the invariant, obeying the group-velocity property in the WKB limit. The results are analogous to their quasi-geostrophic forms, and reduce to those forms in the limit of small Rossby number. The results are derived for both the f-plane Boussinesq form of semi-geostrophic dynamics, and its extension to β-plane compressible flow by Magnusdottir & Schubert. Novel features particular to semi-geostrophic dynamics include apparently unnoticed lateral boundary stability criteria. Unlike the boundary stability criteria found in the first part of this study, however, these boundary criteria do not necessarily preclude the construction of provably stable basic states. The interior semi-geostrophic dynamics has an underlying Hamiltonian structure, which guarantees that symmetries in the system correspond naturally to the system's invariants. This is an important motivation for the theoretical approach used in this study. 
The connection between symmetries and conservation laws is made explicit using Noether's theorem applied to the Eulerian form of the Hamiltonian description of the interior dynamics.
Abstract:
More than thirty years ago, Wind's seminal review of research in market segmentation culminated in a research agenda for the subject area. In the intervening period, research has focused on the development of segmentation bases and models, segmentation research techniques and the identification of statistically sound solutions. Practical questions about implementation and the integration of segmentation into marketing strategy have received less attention, even though practitioners are known to struggle with the actual practice of segmentation. This special issue is motivated by this tension between theory and practice, which has shaped and continues to influence the research priorities for the field. Although many years may have elapsed since Wind's original research agenda, pressing questions about effectiveness and productivity apparently remain; namely: (i) concerns about the link between segmentation and performance, and its measurement; and (ii) the notion that productivity improvements arising from segmentation are only achievable if the segmentation process is effectively implemented. These concerns formed the central themes of the call for papers for this special issue, which aims to develop our understanding of segmentation value, productivity and strategies, and managerial issues and implementation.
Abstract:
Social tagging has become very popular around the Internet as well as in research. The main idea behind tagging is to allow users to provide metadata on web content from their perspective, to facilitate categorization and retrieval. There are many factors that influence users' tag choice. Many studies have been conducted to reveal these factors by analysing tagging data. This paper uses two theories to identify these factors, namely semiotics theory and activity theory. The former treats tags as signs and the latter treats tagging as an activity. The paper uses both theories to analyse tagging behaviour by explaining all aspects of a tagging system, including tags, tagging system components and the tagging activity. The theoretical analysis produced a framework that was used to identify a number of factors. These factors can be considered as categories that can be consulted to redirect user tagging choice in order to support particular tagging behaviour, such as cross-lingual tagging.
Abstract:
Body Sensor Networks (BSNs) have been recently introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach for storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform and BSN data streams middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study for the real-time monitoring and analysis of cardiac data streams of many individuals.
Abstract:
Body area networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor a single assisted person. Data generated by a BAN can be processed in real time by the BAN coordinator and/or transmitted to a server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces large amounts of contextual data that require a scalable and efficient approach to elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs along with the main challenges that need to be addressed for their development and management. The current state of the art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, dynamic deployment and management are discussed.
Abstract:
During the last 30 years, significant debate has taken place regarding multilevel research. However, the extent to which multilevel research is overtly practiced remains to be examined. This article analyzes 10 years of organizational research within a multilevel framework (from 2001 to 2011). The goals of this article are (a) to understand what has been done, during this decade, in the field of organizational multilevel research and (b) to suggest new arenas of research for the next decade. A total of 132 articles were selected for analysis through ISI Web of Knowledge. Through a broad-based literature review, results suggest that there is a balance between the numbers of empirical and conceptual papers regarding multilevel research, with most studies addressing the cross-level dynamics between teams and individuals. In addition, this study also found that time still has little presence in organizational multilevel research. Implications, limitations, and future directions are addressed in the end. Organizations are made of interacting layers. That is, between layers (such as divisions, departments, teams, and individuals) there is often some degree of interdependence that leads to bottom-up and top-down influence mechanisms. Teams and organizations are contexts for the development of individual cognitions, attitudes, and behaviors (top-down effects; Kozlowski & Klein, 2000). Conversely, individual cognitions, attitudes, and behaviors can also influence the functioning and outcomes of teams and organizations (bottom-up effects; Arrow, McGrath, & Berdahl, 2000). For example, an organization's reward system may influence employees' intention to quit and the presence or absence of extra-role behaviors. At the same time, many studies have shown the importance of bottom-up emergent processes that yield higher level phenomena (Bashshur, Hernández, & González-Romá, 2011; Katz-Navon & Erez, 2005; Marques-Quinteiro, Curral, Passos, & Lewis, in press).
For example, the affectivity of individual employees may influence their team’s interactions and outcomes (Costa, Passos, & Bakker, 2012). Several authors agree that organizations must be understood as multilevel systems, meaning that adopting a multilevel perspective is fundamental to understand real-world phenomena (Kozlowski & Klein, 2000). However, whether this agreement is reflected in practicing multilevel research seems to be less clear. In fact, how much is known about the quantity and quality of multilevel research done in the last decade? The aim of this study is to compare what has been proposed theoretically, concerning the importance of multilevel research, with what has really been empirically studied and published. First, this article outlines a review of the multilevel theory, followed by what has been theoretically “put forward” by researchers. Second, this article presents what has really been “practiced” based on the results of a review of multilevel studies published from 2001 to 2011 in business and management journals. Finally, some barriers and challenges to true multilevel research are suggested. This study contributes to multilevel research as it describes the last 10 years of research. It quantitatively depicts the type of articles being written, and where we can find the majority of the publications on empirical and conceptual work related to multilevel thinking.
Abstract:
The incorporation of cobalt in mixed metal carbonates is a possible route to the immobilization of this toxic element in the environment. However, the thermodynamics of (Ca,Co)CO3 solid solutions are still unclear due to conflicting data from experiment and from the observation of natural occurrences. We report here the results of a computer simulation study of the mixing of calcite (CaCO3) and spherocobaltite (CoCO3), using density functional theory calculations. Our simulations suggest that previously proposed thermodynamic models, based only on the range of observed compositions, significantly overestimate the mutual solubility of the two solids and therefore underestimate the extension of the miscibility gap under ambient conditions. The enthalpy of mixing of the disordered solid solution is strongly positive and moderately asymmetric: calcium incorporation in spherocobaltite is more endothermic than cobalt incorporation in calcite. Ordering of the impurities in (0001) layers is energetically favourable with respect to the disordered solid solution at low temperatures and intermediate compositions, but the ordered phase is still unstable to demixing. We calculate the solvus and spinodal lines in the phase diagram using a sub-regular solution model, and conclude that many Ca1-xCoxCO3 mineral solid solutions (with observed compositions of up to x=0.027, and above x=0.93) are metastable with respect to phase separation. We also calculate solid/aqueous distribution coefficients to evaluate the effect of the strong non-ideality of mixing on the equilibrium with aqueous solution, showing that the thermodynamically driven incorporation of cobalt in calcite (and of calcium in spherocobaltite) is always very low, regardless of the Co/Ca ratio of the aqueous environment.
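A generic sub-regular (Margules-type) free-energy model of the kind used for such solvus/spinodal constructions can be sketched as follows; the interaction parameters below are invented for illustration and are not the DFT-derived values of this study:

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def d2G_dx2(x, T, W1, W2):
    """Curvature of the molar Gibbs free energy of mixing for a
    sub-regular solution,
        G = RT[x ln x + (1-x) ln(1-x)] + x(1-x)[W1(1-x) + W2 x],
    where W1 != W2 gives the asymmetric mixing behaviour.
    Negative curvature marks the spinodal (unstable) region."""
    ideal = R_GAS * T * (1.0 / x + 1.0 / (1.0 - x))
    # excess part G_ex = W1(x - 2x^2 + x^3) + W2(x^2 - x^3),
    # differentiated twice with respect to x:
    excess = W1 * (6.0 * x - 4.0) + W2 * (2.0 - 6.0 * x)
    return ideal + excess

def spinodal_mask(T, W1, W2, n=999):
    """Grid of compositions and a boolean mask of those inside the
    spinodal at temperature T."""
    x = np.linspace(1e-3, 1.0 - 1e-3, n)
    return x, d2G_dx2(x, T, W1, W2) < 0.0
```

With strongly positive, asymmetric W1 and W2, intermediate compositions are unstable to demixing while the dilute limits remain stable, reproducing the qualitative picture of a wide, asymmetric miscibility gap described in the abstract.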