24 results for Many-body problem
in CentAUR: Central Archive University of Reading - UK
Abstract:
The many-body effect in the kinetic responses of ER fluids is studied by a molecular-dynamics simulation method. The mutual polarization effects of the particles are considered by self-consistently calculating the dipole strength on each particle according to the external field and the dipole field due to all the other particles in the fluid. The many-body effect is found to increase with increasing particle concentration and with the permittivity ratio between the solvent and the particles. The calculated response times are shorter than those predicted by the 'point-dipole' model and agree very well with experimental results. The many-body effect enhances the shear stresses of the fluids by several times, but the stresses are not proportional to the many-body correction factor lambda, as might be expected. This is because larger interaction forces between the particles lead to coarsening of the fibers formed in the suspensions. The results show that the many-body and multipolar interactions between the particles must be treated comprehensively in the simulations in order to obtain reliable results.
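A minimal Python sketch (not taken from the paper) of the self-consistent mutual-polarization step described above: each particle's dipole moment is iterated until it agrees with the applied field plus the dipole fields of all the other particles. The polarisability alpha, the particle positions and the field are hypothetical and in arbitrary units.

import numpy as np

def dipole_field(p, r):
    """Field at displacement r from a point dipole p (arbitrary units)."""
    d = np.linalg.norm(r)
    n = r / d
    return (3.0 * np.dot(n, p) * n - p) / d**3

def self_consistent_dipoles(positions, E0, alpha, tol=1e-10, max_iter=200):
    """Iterate p_i = alpha * (E0 + sum over j of the dipole field of p_j) to self-consistency."""
    n = len(positions)
    p = np.tile(alpha * E0, (n, 1))            # 'point-dipole' starting guess
    for _ in range(max_iter):
        p_new = np.empty_like(p)
        for i in range(n):
            E_local = E0.copy()
            for j in range(n):
                if j != i:
                    E_local += dipole_field(p[j], positions[i] - positions[j])
            p_new[i] = alpha * E_local
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# two particles aligned with the field: mutual polarization enhances the
# dipoles relative to the isolated point-dipole value alpha * E0
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.2]])
E0 = np.array([0.0, 0.0, 1.0])
print(self_consistent_dipoles(pos, E0, alpha=1.0))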
Abstract:
The formulation of four-dimensional variational data assimilation allows the incorporation into the cost function of constraints which need only be weakly satisfied. In this paper we investigate the value of imposing conservation properties as weak constraints. Using the example of the two-body problem of celestial mechanics, we compare weak constraints based on conservation laws with a constraint on the background state. We show how the imposition of conservation-based weak constraints changes the nature of the gradient equation. Assimilation experiments demonstrate how this can add extra information to the assimilation process, even when the underlying numerical model is conserving.
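As a schematic illustration (notation assumed here rather than taken from the paper), a conserved quantity c(x), such as the energy or angular momentum of the two-body system, can be added to a standard strong-constraint 4D-Var cost function as a weakly weighted penalty:

\[
J(x_0) \;=\; \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm T} B^{-1} (x_0 - x_b)
\;+\; \tfrac{1}{2}\sum_{i=0}^{N} \bigl(y_i - H_i(x_i)\bigr)^{\mathrm T} R_i^{-1} \bigl(y_i - H_i(x_i)\bigr)
\;+\; \tfrac{\alpha}{2}\sum_{i=0}^{N} \bigl(c(x_i) - c(x_0)\bigr)^{2},
\qquad x_{i+1} = M_i(x_i),
\]

where x_b is the background state, B and R_i are the background and observation error covariances, H_i the observation operators, M_i the (conserving or non-conserving) model, and \alpha the weight given to the weak constraint. The extra penalty contributes additional terms to the gradient of J with respect to x_0, which is the change in the gradient equation referred to above.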
Abstract:
A potential energy function has been derived for the ground-state surface of C2H2 as a many-body expansion. The 2- and 3-body terms have been obtained by preliminary investigation of the ground-state surfaces of CH2(3B1) and C2H(2Σ+). A 4-body term has been derived which reproduces the energy, geometry and harmonic force field of C2H2. The potential has a secondary minimum corresponding to the vinylidene structure, and the geometry and energy of this are in close agreement with predictions from ab initio calculations. The saddle point for the HCCH-H2CC rearrangement is predicted to lie 2.530 eV above the acetylene minimum.
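For reference, the generic form of such a many-body expansion for a tetra-atomic molecule (a standard form, not reproduced from the paper itself) is

\[
V_{\mathrm{C_2H_2}}(\mathbf{R}) \;=\; \sum_{A} V^{(1)}_{A}
\;+\; \sum_{A<B} V^{(2)}_{AB}(R_{AB})
\;+\; \sum_{A<B<C} V^{(3)}_{ABC}(R_{AB},R_{AC},R_{BC})
\;+\; V^{(4)}_{\mathrm{CCHH}}(\mathbf{R}),
\]

where the one-body terms are atomic energies, the two- and three-body terms run over all atom pairs and triples (here fixed by the CH2 and C2H fragment surfaces), and the single four-body term is fitted to reproduce the properties of C2H2 itself.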
Abstract:
A full-dimensional, ab initio-based semiglobal potential energy surface for C2H3+ is reported. The ab initio electronic energies for this molecule are calculated using the spin-restricted coupled-cluster method restricted to single and double excitations with triples corrections [RCCSD(T)]. The RCCSD(T) method is used with the correlation-consistent polarized valence triple-zeta basis augmented with diffuse functions (aug-cc-pVTZ). The ab initio potential energy surface is represented by a many-body (cluster) expansion, each term of which uses functions that are fully invariant under permutations of like nuclei. The fitted potential energy surface is validated by comparing normal-mode frequencies at the global minimum and secondary minimum with previous and new direct ab initio frequencies. The potential surface is then used in vibrational analyses with the "single-reference" and "reaction-path" versions of the code MULTIMODE. (c) 2006 American Institute of Physics.
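Schematically (with notation assumed here, not taken from the paper), a fit of this kind expands the surface in basis functions of internuclear-distance variables that are explicitly symmetrized over permutations of the like nuclei (the two carbon and three hydrogen atoms of C2H3+):

\[
V(\mathbf{R}) \;\approx\; \sum_{n} c_n\, \mathcal{S}\!\left[\prod_{i<j} y_{ij}^{\,a^{(n)}_{ij}}\right],
\qquad y_{ij} = e^{-r_{ij}/\lambda},
\]

where \mathcal{S} averages each monomial over all permutations of like nuclei, the y_{ij} are Morse-type variables in the internuclear distances r_{ij}, and the coefficients c_n are determined by a least-squares fit to the RCCSD(T) energies.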
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine which is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
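The multiplication and division rules quoted above can be sketched in a few lines of Python (a toy illustration only; the paper's full axiomatisation also covers addition, subtraction and further identities not reproduced here, and uses a dedicated nullity value rather than the IEEE NaN used as a stand-in below).

import math

NULLITY = float("nan")   # stand-in for the transreal number nullity (0/0)
INF = float("inf")

def is_nullity(x):
    return math.isnan(x)

def t_recip(x):
    """Transreal reciprocal: total, i.e. defined for every argument."""
    if is_nullity(x):
        return NULLITY
    if x == 0:
        return INF               # 1/0 = +infinity
    if math.isinf(x):
        return 0.0               # reciprocal of either infinity is zero
    return 1.0 / x

def t_mul(a, b):
    """Transreal multiplication: zero times an infinity gives nullity, not unity."""
    if is_nullity(a) or is_nullity(b):
        return NULLITY
    if (a == 0 and math.isinf(b)) or (b == 0 and math.isinf(a)):
        return NULLITY
    return a * b

def t_div(a, b):
    """Transreal division a/b = a * (1/b); in particular 0/0 = nullity."""
    return t_mul(a, t_recip(b))

print(t_div(0.0, 0.0))    # nan, standing in for nullity
print(t_div(1.0, 0.0))    # inf
print(t_mul(0.0, INF))    # nan: the product of zero with its reciprocal is nullity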
Abstract:
Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
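A minimal sketch (not the article's procedure, and with hypothetical numbers in units where the gravitational parameter is 1) of the kind of inversion described above: the six parameters are taken here as the body's initial position and velocity rather than the classical orbital elements, and are recovered from noisy position observations by nonlinear least squares on the two-body equation of motion.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

MU = 1.0  # gravitational parameter (assumed units)

def two_body(t, y):
    """Equation of motion r'' = -MU * r / |r|^3, written as a first-order system."""
    r, v = y[:3], y[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def predict(params, t_obs):
    """Predicted positions at the observation times for a given set of six parameters."""
    sol = solve_ivp(two_body, (t_obs[0], t_obs[-1]), params,
                    t_eval=t_obs, rtol=1e-9, atol=1e-12)
    return sol.y[:3].T

# synthetic 'truth' and noisy, indirect observations
true_params = np.array([1.0, 0.0, 0.0, 0.0, 1.1, 0.1])   # hypothetical orbit
t_obs = np.linspace(0.0, 4.0, 40)
rng = np.random.default_rng(0)
obs = predict(true_params, t_obs) + 1e-3 * rng.standard_normal((t_obs.size, 3))

# inversion: adjust the parameters until the predicted positions match the data
def residuals(params):
    return (predict(params, t_obs) - obs).ravel()

fit = least_squares(residuals, x0=true_params + 0.05)
print("estimated parameters:", fit.x)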
Abstract:
In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (Preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the equation ∇ · γ(x)∇ + c(x) with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
Abstract:
Preface. Iron is considered to be a minor element employed, in a variety of forms, by nearly all living organisms. In some cases, it is utilised in large quantities, for instance for the formation of magnetosomes within magnetotactic bacteria or during the use of iron as a respiratory donor or acceptor by iron-oxidising or iron-reducing bacteria. However, in most cases the role of iron is restricted to its use as a cofactor or prosthetic group assisting the biological activity of many different types of protein. The key metabolic processes that are dependent on iron as a cofactor are numerous; they include respiration, light harvesting, nitrogen fixation, the Krebs cycle, redox stress resistance, amino acid synthesis and oxygen transport. Indeed, it is clear that Life in its current form would be impossible in the absence of iron. One of the main reasons for the reliance of Life upon this metal is the ability of iron to exist in multiple redox states, in particular the relatively stable ferrous (Fe2+) and ferric (Fe3+) forms. The availability of these stable oxidation states allows iron to engage in redox reactions over a wide range of midpoint potentials, depending on the coordination environment, making it an extremely adaptable mediator of electron exchange processes. Iron is also one of the most common elements within the Earth’s crust (5% abundance) and thus is considered to have been readily available when Life evolved on our early, anaerobic planet. However, as oxygen accumulated (the ‘Great oxidation event’) within the atmosphere some 2.4 billion years ago, and as the oceans became less acidic, the iron within primordial oceans was converted from its soluble reduced form to its weakly soluble oxidised ferric form, which precipitated (~1.8 billion years ago) to form the ‘banded iron formations’ (BIFs) observed today in Precambrian sedimentary rocks around the world. These BIFs provide a geological record marking a transition point away from the ancient anaerobic world towards the modern aerobic Earth. They also indicate a period over which the bio-availability of iron shifted from abundance to limitation, a condition that extends to the modern day. Thus, it is considered likely that the vast majority of extant organisms face the common problem of securing sufficient iron from their environment – a problem that Life on Earth has had to cope with for some 2 billion years. This struggle for iron is exemplified by the competition for this metal amongst co-habiting microorganisms, who resort to stealing (pirating) each other’s iron supplies! The reliance of micro-organisms upon iron can be disadvantageous to them, and to our innate immune system it represents a chink in the microbial armour, offering an opportunity that can be exploited to ward off pathogenic invaders. In order to infect body tissues and cause disease, pathogens must secure all their iron from the host. To fight such infections, the host specifically withdraws available iron through the action of various iron-depleting processes (e.g. the release of lactoferrin and lipocalin-2) – this represents an important strategy in our defence against disease. However, pathogens are frequently able to deploy iron acquisition systems that target host iron sources such as transferrin, lactoferrin and hemoproteins, and thus counteract the iron-withdrawal approaches of the host.
Inactivation of such host-targeting iron-uptake systems often attenuates the pathogenicity of the invading microbe, illustrating the importance of ‘the battle for iron’ in the infection process. The role of iron sequestration systems in facilitating microbial infections has been a major driving force in research aimed at unravelling the complexities of microbial iron transport processes. But the intricacy of such systems also offers a challenge that stimulates curiosity. One such challenge is to understand how balanced levels of free iron within the cytosol are achieved in a way that avoids toxicity whilst providing sufficient levels for metabolic purposes – this is a requirement that all organisms have to meet. Although the systems involved in achieving this balance can be highly variable amongst different microorganisms, the overall strategy is common. On a coarse level, the homeostatic control of cellular iron is maintained through strict control of the uptake, storage and utilisation of available iron, and is co-ordinated by integrated iron-regulatory networks. However, much yet remains to be discovered concerning the fine details of these different iron-regulatory processes. As already indicated, perhaps the most difficult task in maintaining iron homeostasis is simply the procurement of sufficient iron from external sources. The importance of this problem is demonstrated by the plethora of distinct iron transporters often found within a single bacterium, each targeting a different form (complex or redox state) of iron or a different environmental condition. Thus, microbes devote considerable cellular resource to securing iron from their surroundings, reflecting how successful acquisition of iron can be crucial in the competition for survival. The aim of this book is to provide the reader with an overview of iron transport processes within a range of microorganisms and to provide an indication of how microbial iron levels are controlled. This aim is promoted through the inclusion of expert reviews on several well-studied examples that illustrate the current state of play concerning our comprehension of how iron is translocated into the bacterial (or fungal) cell and how iron homeostasis is controlled within microbes. The first two chapters (1-2) consider the general properties of microbial iron-chelating compounds (known as ‘siderophores’), and the mechanisms used by bacteria to acquire haem and utilise it as an iron source. The following twelve chapters (3-14) focus on specific types of microorganism that are of key interest, covering an array of pathogens of humans, animals and plants (e.g. species of Bordetella, Shigella, Erwinia, Vibrio, Aeromonas, Francisella, Campylobacter and Staphylococci, and EHEC) as well as a number of prominent non-pathogens (e.g. the rhizobia, E. coli K-12, Bacteroides spp., cyanobacteria, Bacillus spp. and yeasts). The chapters relay the common themes in microbial iron-uptake approaches (e.g. the use of siderophores, TonB-dependent transporters, and ABC transport systems), but also highlight many distinctions (such as the use of different types of iron regulator and the impact of the presence/absence of a cell wall) in the strategies employed. We hope that those both within and outside the field will find this book useful, stimulating and interesting. We intend that it will provide a source for reference that will assist relevant researchers and provide an entry point for those initiating their studies within this subject.
Finally, it is important that we acknowledge and thank wholeheartedly the many contributors who have provided the 14 excellent chapters from which this book is composed. Without their considerable efforts, this book, and the understanding that it relays, would not have been possible. Simon C Andrews and Pierre Cornelis
Abstract:
This note presents a robust method for estimating response surfaces that consist of linear response regimes and a linear plateau. The linear response-and-plateau model has fascinated production scientists since von Liebig (1855) and, as Upton and Dalton indicated some years ago in this Journal, the response-and-plateau model seems to fit the data in many empirical studies. The estimation algorithm evolves from a Bayesian implementation of a switching-regression (finite mixtures) model and demonstrates routine application of Gibbs sampling and data augmentation, techniques that are now in widespread application in other disciplines.
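For illustration only, a Python sketch of the response-and-plateau functional form itself, fitted here by ordinary nonlinear least squares to hypothetical data; the note's actual estimator is the Bayesian switching-regression model fitted by Gibbs sampling with data augmentation, which is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, beta0, beta1, x_knot):
    """Linear response up to the knot x_knot, constant plateau beyond it."""
    return np.where(x < x_knot, beta0 + beta1 * x, beta0 + beta1 * x_knot)

# hypothetical yield-response data, e.g. yield against input (fertilizer) rate
x = np.linspace(0.0, 200.0, 50)
rng = np.random.default_rng(1)
y = linear_plateau(x, 2.0, 0.05, 120.0) + 0.3 * rng.standard_normal(x.size)

popt, _ = curve_fit(linear_plateau, x, y, p0=[1.0, 0.1, 100.0])
print("intercept, slope, plateau onset:", popt)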
Abstract:
1. Management of lowland mesotrophic grasslands in north-west Europe often makes use of inorganic fertilizers, high stocking densities and silage-based forage systems to maximize productivity. The impact of these practices has been a simplification of the plant community, combined with wide-scale declines in the species richness of grassland invertebrates. We aim to identify how field margin management can be used to promote invertebrate diversity across a suite of functionally diverse taxa (beetles, planthoppers, true bugs, butterflies, bumblebees and spiders). 2. Using an information-theoretic approach, we identify the impacts of management (cattle grazing, cutting and inorganic fertilizer) and plant community composition (forb species richness, grass species richness and sward architecture) on invertebrate species richness and body size. As many of these management practices are common to grassland systems throughout the world, understanding invertebrate responses to them is important for the maintenance of biodiversity. 3. Sward architecture was identified as the primary factor promoting increased species richness at both predatory and phytophagous trophic levels, as well as being positively correlated with mean body size. In all cases phytophagous invertebrate species richness was positively correlated with measures of plant species richness. 4. The direct effects of management practices appear to be comparatively weak, suggesting that their impacts are indirect and mediated through continuous measures of plant community structure, such as sward architecture or plant species richness. 5. Synthesis and applications. By partitioning field margins from the remainder of the field, economically viable intensive grassland management can be combined with extensive management aimed at promoting native biodiversity. The absence of inorganic fertilizer, combined with a reduction in the intensity of both cutting and grazing regimes, promotes floral species richness and sward architectural complexity. As sward architecture increased, the total biomass of invertebrates also increased (by c. 60% across the range of sward architectural measures seen in this study), increasing the food available for higher trophic levels, such as birds and mammals.
Abstract:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m2 range rather than the few J/m2 of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m2 values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional ‘plasticity and friction only’ analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems to have less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
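The role of the intercept can be illustrated with a back-of-the-envelope Python calculation (numbers hypothetical, not from the paper): if the cutting force per unit width carries a toughness term R in addition to the plastic shearing term, then dividing the whole force by the shear-plane area, as in a 'plasticity and friction only' analysis, inflates the apparent shear yield stress at small uncut chip thickness, which is the size effect referred to above.

import numpy as np

tau_y = 400e6   # shear yield stress, Pa (assumed)
R = 20e3        # specific surface work / fracture toughness, J/m^2 (assumed)
geom = 2.5      # lumped shear-plane geometry factor (assumed constant here)

t = np.array([5e-6, 10e-6, 25e-6, 50e-6, 100e-6, 250e-6, 500e-6])  # uncut chip thickness, m
Fc_per_width = tau_y * geom * t + R    # cutting force per unit width, N/m

# apparent shear stress inferred if the toughness intercept R is neglected
tau_apparent = Fc_per_width / (geom * t)

for ti, ta in zip(t, tau_apparent):
    print(f"t = {ti * 1e6:6.1f} um   apparent shear stress = {ta / 1e6:7.1f} MPa")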
Abstract:
This paper tackles the problem of computing smooth, optimal trajectories on the Euclidean group of motions SE(3). The problem is formulated as an optimal control problem in which the cost function to be minimized is the integral of the classical curvature squared. This problem is analogous to the elastic problem from differential geometry, and thus the resulting rigid body motions will trace elastic curves. An application of the Maximum Principle to this optimal control problem shifts the emphasis to the language of symplectic geometry and to the associated Hamiltonian formalism. This results in a system of first-order differential equations that yield coordinate-free necessary conditions for optimality for these curves. From these necessary conditions we identify an integrable case, and this particular set of curves is solved analytically. These analytic solutions provide interpolating curves between an initial given position and orientation and a desired position and orientation, which would be useful in motion planning for systems such as robotic manipulators and autonomous vehicles.
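Schematically (notation assumed here rather than taken from the paper), the variational problem is to find a rigid body motion g(t) in SE(3) joining prescribed initial and final poses while minimizing the integral of the squared curvature of the traced path:

\[
\min_{g(\cdot)} \; J = \tfrac{1}{2}\int_0^T \kappa(t)^2\,dt,
\qquad \dot g(t) = g(t)\,\hat\xi(t), \quad g(t) \in SE(3),
\qquad g(0) = g_0, \;\; g(T) = g_T,
\]

where \hat\xi(t) is the body-frame velocity and κ is the curvature of the trajectory. Applying the Maximum Principle to this formulation yields the Hamiltonian necessary conditions referred to above.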
Abstract:
The number of overseas trained teachers (OTTs) has grown during the past decade, particularly in London and the South East of England. Amid this recruitment explosion, many OTTs have experienced difficulties. In the professional literature, as well as in press coverage, OTTs often become part of a deficit discourse. A small-scale pilot investigation of OTT experience has begun to suggest why OTTs have been successful, as well as the principal challenges they have faced. An important factor in their success was felt to be the quality of support in school from others on the staff. Major challenges included the complexity of the primary curriculum. The argument that globalisation leads to brain-drain may be exaggerated. Suggestions for further research are made, which might indicate the positive benefits OTTs can bring to a school.
Abstract:
Research in the last four decades has brought a considerable advance in our understanding of how the brain synthesizes information arising from different sensory modalities. Indeed, many cortical and subcortical areas, beyond those traditionally considered to be ‘associative,’ have been shown to be involved in multisensory interaction and integration (Ghazanfar and Schroeder 2006). Visuo-tactile interaction is of particular interest, because of the prominent role played by vision in guiding our actions and anticipating their tactile consequences in everyday life. In this chapter, we focus on the functional role that visuo-tactile processing may play in driving two types of body-object interactions: avoidance and approach. We will first review some basic features of visuo-tactile interactions, as revealed by electrophysiological studies in monkeys. These will prove to be relevant for interpreting the subsequent evidence arising from human studies. A crucial point that will be stressed is that these visuo-tactile mechanisms have not only sensory, but also motor-related activity that qualifies them as multisensory-motor interfaces. Evidence will then be presented for the existence of functionally homologous processing in the human brain, both from neuropsychological research in brain-damaged patients and in healthy participants. The final part of the chapter will focus on some recent studies in humans showing that the human motor system is provided with a multisensory interface that allows for continuous monitoring of the space near the body (i.e., peripersonal space). We further demonstrate that multisensory processing can be modulated on-line as a consequence of interacting with objects. This indicates that, far from being passive, the monitoring of peripersonal space is an active process subserving actions between our body and objects located in the space around us.
Abstract:
This paper examines the implications of policy fracture and arm's-length governance within the decision-making processes currently shaping curriculum design within the English education system. In particular, it argues that an unresolved ‘ideological fracture’ at government level has been passed down to school leaders, whose response to the dilemma is distorted by the target-driven agenda of arm's-length agencies. Drawing upon the findings of a large-scale online survey of history teaching in English secondary schools, this paper illustrates the problems that occur when policy making is divorced from curriculum theory, and in particular from any consideration of the nature of knowledge. Drawing on the social realist theory of knowledge elaborated by Young (2008), we argue that the rapid spread of alternative curricular arrangements, implemented in the absence of an understanding of curriculum theory, undermines the value of disciplined thinking, to the detriment of many young people, particularly those in areas of social and economic deprivation.