998 results for massive gravitational models
Abstract:
We consider the coupling of quantum massless and massive scalar particles with exact gravitational plane waves. The cross section for scattering of the quantum particles by the waves is shown to coincide with the classical cross section for scattering of geodesics. The expectation value of the scalar field stress tensor between scattering states diverges at the points where classical test particles focus after colliding with the wave. This indicates that back-reaction effects cannot be ignored for plane waves propagating in the presence of quantum particles and that classical singularities are likely to develop.
Abstract:
We present the concept of a sensitive and broadband resonant-mass gravitational wave detector. A massive sphere is suspended inside a second, hollow one. Short, high-finesse Fabry-Perot optical cavities read out the differential displacements of the two spheres as their quadrupole modes are excited. At cryogenic temperatures, one approaches the standard quantum limit for broadband operation with reasonable choices for the cavity finesses and the intracavity light power. A molybdenum detector with an overall size of 2 m would reach spectral strain sensitivities of 2×10⁻²³ Hz⁻¹/² between 1000 and 3000 Hz.
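As a rough plausibility check on the quoted figure, the sketch below estimates the free-mass standard-quantum-limit strain for a detector of this size and material. The SQL formula S_x(Ω) = 2ħ/(mΩ²), the solid-sphere mass, and the 2 m baseline are simplifying assumptions standing in for the paper's detailed two-sphere detector model.

```python
# Order-of-magnitude check against the free-mass standard quantum limit (SQL).
# The displacement SQL used here, S_x(Omega) = 2*hbar/(m*Omega^2), and the
# solid Mo sphere standing in for the nested-sphere geometry are illustrative
# assumptions, not the detector model of the paper.
import math

hbar = 1.054571817e-34      # J s
rho_mo = 10.28e3            # kg/m^3, density of molybdenum
radius = 1.0                # m, half the quoted 2 m overall size
f = 2000.0                  # Hz, middle of the 1000-3000 Hz band

mass = rho_mo * (4.0 / 3.0) * math.pi * radius**3
omega = 2.0 * math.pi * f

# SQL displacement spectral density for a free test mass, converted to strain
# by dividing the displacement noise by the baseline L ~ 2*radius.
S_x = 2.0 * hbar / (mass * omega**2)        # m^2/Hz
sqrt_S_h = math.sqrt(S_x) / (2.0 * radius)  # strain / sqrt(Hz)

print(f"mass ~ {mass:.2e} kg")
print(f"sqrt(S_h) ~ {sqrt_S_h:.1e} 1/sqrt(Hz)")  # ~3e-24, within an order of
                                                 # magnitude of the quoted 2e-23
```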
Abstract:
An important episode of evaporitic sedimentation occurred during the Paleogene (Eocene to lower Oligocene) in the Barberà sector of the southeastern margin of the Tertiary Ebro Basin. This sedimentation took place in shallow lacustrine environments and was controlled by a number of factors: 1) the tectonic structuration of the margin; 2) the high calcium sulphate content of the meteoric waters coming from the marginal reliefs; 3) the semiarid climate; and 4) the development of large alluvial fans along the basin margin, which also conditioned the location of the saline lakes. The evaporites are currently composed of secondary gypsum at the surface and anhydrite at depth. There are, however, vestiges of the local presence of sodium sulphates. The evaporite units, with individual thicknesses ranging between 50 and 100 m, are intercalated within various lithostratigraphic formations and exhibit a paleogeographical pattern. The units located closer to the basin margin are characterized by a massive gypsum lithofacies (originally, bioturbated gypsum) bearing chert, and locally by meganodular gypsum (originally, meganodules of anhydrite) in association with red lutites and clastic intercalations (gypsarenites, sandstones and conglomerates). Chert, which is only linked to the thickest gypsum layers, seems to be an early diagenetic, lacustrine product. Cyclicity in these proximal units indicates the progressive development of low-salinity lacustrine bodies on red mud flats. At the top of some cycles, exposure episodes commonly resulted in dissolution, erosion, and the formation of edaphic features. In contrast, the units located in a more distal position with respect to the basin margin are formed by an alternation of banded-nodular gypsum and laminated gypsum layers in association with grey lutites and few clastic intercalations. These distal units formed in saline lakes with a higher ionic concentration. Exposure episodes in these lakes resulted in the formation of synsedimentary anhydrite and sabkha cycles. In some of these units, however, outer rims characterized by a lithofacies association similar to that of the proximal units occur (nodular gypsum, massive gypsum and chert nodules).
Abstract:
We study new supergravity solutions related to large-N_c, N = 1 supersymmetric gauge field theories with a large number N_f of massive flavors. We use a recently proposed framework based on configurations with N_c color D5 branes and a distribution of N_f flavor D5 branes, governed by a function N_f S(r). Although the system admits many solutions, under plausible physical assumptions the relevant solution is uniquely determined for each value of x ≡ N_f/N_c. In the IR region, the solution smoothly approaches the deformed Maldacena-Núñez solution. In the UV region it approaches a linear dilaton solution. For x < 2 the gauge coupling β function β_g computed holographically is negative definite, in the UV approaching the NSVZ β function with anomalous dimension γ_0 = −1/2 (approaching −3/(32π²)(2N_c − N_f)g³), and with β_g → −∞ in the IR. For x = 2, β_g has a UV fixed point at strong coupling, suggesting the existence of an IR fixed point at a lower value of the coupling. We argue that the solutions with x > 2 describe a "Seiberg dual" picture in which N_f − 2N_c flips sign.
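As a consistency check on the quoted asymptotics, the weak-coupling limit of the NSVZ β function with γ_0 = −1/2 indeed reproduces the expression above (the NSVZ normalization written below is an assumption about the convention the abstract has in mind):

```latex
\[
\beta_g \;=\; -\frac{g^3}{16\pi^2}\,
  \frac{3N_c - N_f\,(1-\gamma_0)}{1 - N_c\,g^2/8\pi^2}
\;\xrightarrow[g\to 0]{}\;
-\frac{g^3}{16\pi^2}\Big(3N_c - \tfrac{3}{2}N_f\Big)
\;=\; -\frac{3}{32\pi^2}\,(2N_c - N_f)\,g^3 ,
\]
where $\gamma_0=-\tfrac12$ has been inserted; the prefactor makes explicit
why the sign of $\beta_g$ flips at $N_f = 2N_c$, i.e. at $x = 2$.
```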
Abstract:
In order to shed light on the main physical processes controlling fragmentation of massive dense cores, we present a uniform study of the density structure of 19 massive dense cores, selected to be at similar evolutionary stages, for which the relative fragmentation level was assessed in a previous work. We inferred the density structure of the 19 cores through a simultaneous fit of the radial intensity profiles at 450 and 850 μm (or 1.2 mm in two cases) and the spectral energy distribution, assuming spherical symmetry and that the density and temperature of the cores decrease with radius following power-laws. Even though the estimated fragmentation level is, strictly speaking, a lower limit, its relative value is significant, and several trends could be explored with our data. We find a weak (inverse) trend between fragmentation level and density power-law index, with steeper density profiles tending to show lower fragmentation, and vice versa. In addition, we find a trend of fragmentation increasing with density within a given radius, which arises from a combination of a flat density profile and a high central density and is consistent with Jeans fragmentation. We considered the effects of the rotational-to-gravitational energy ratio, non-thermal velocity dispersion, and turbulence mode on the density structure of the cores, and found that compressive turbulence seems to yield higher central densities. Finally, a possible explanation for the origin of cores with concentrated density profiles, which are the cores showing no fragmentation, could be related to a strong magnetic field, consistent with the outcome of radiation magnetohydrodynamic simulations.
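To illustrate the kind of profile fit described, here is a minimal sketch of recovering a power-law intensity index from a single radial profile. The synthetic data, noise level, and single-wavelength fit are invented for illustration; the actual analysis fits the 450 and 850 μm profiles and the SED simultaneously.

```python
# Minimal sketch of fitting a radial power-law, I(r) ~ r**(-p), to a dust
# continuum intensity profile. Synthetic single-wavelength data only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def power_law(r, i0, p):
    """Intensity profile I(r) = i0 * r**(-p)."""
    return i0 * r**(-p)

# Synthetic 850-um radial profile (arbitrary units) with 10% noise.
r = np.linspace(0.05, 0.5, 20)               # pc
i_true = power_law(r, 1.0, 1.3)
i_obs = i_true * (1 + 0.1 * rng.standard_normal(r.size))

popt, pcov = curve_fit(power_law, r, i_obs, p0=[1.0, 1.0])
p_err = np.sqrt(np.diag(pcov))[1]
print(f"fitted intensity index p = {popt[1]:.2f} +/- {p_err:.2f}")
# For optically thin emission the density index is roughly p + 1 - q,
# with q the temperature power-law index (a further modeling assumption).
```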
Abstract:
The cosmological standard view is based on the assumptions of homogeneity, isotropy and general relativistic gravitational interaction. These alone are not sufficient to describe the current cosmological observations of the accelerated expansion of space. Although general relativity has been tested extremely accurately as a description of local gravitational phenomena, there is a strong demand for modifying either the energy content of the universe or the gravitational interaction itself to account for the accelerated expansion. By adding a non-luminous matter component and a constant energy component with negative pressure, the observations can be explained within general relativity. Gravitation, cosmological models and their observational phenomenology are discussed in this thesis. Several classes of dark energy models motivated by theories outside the standard formulation of physics were studied, with emphasis on the observational interpretation. All cosmological models that seek to explain the cosmological observations must also conform to local phenomena. This poses stringent conditions on physically viable cosmological models. Predictions from a supergravity quintessence model were compared to Type Ia supernova data, and several metric gravity models were tested against local experimental results. Polytropic stellar configurations of solar-type, white dwarf and neutron stars were studied numerically within modified gravity models. The main interest was to study the spacetime around the stars. The results shed light on the viability of the studied cosmological models.
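As a baseline illustration of what a numerical study of polytropic configurations involves, the sketch below integrates the Newtonian Lane-Emden equation for two classic polytropic indices; the thesis instead solves the corresponding equations in modified gravity models, which this sketch does not attempt.

```python
# Minimal sketch: integrate the Newtonian Lane-Emden equation
#   (1/x^2) d/dx (x^2 dtheta/dx) = -theta^n
# for a polytrope of index n, stopping at the stellar surface theta = 0.
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(n):
    def rhs(x, y):
        theta, dtheta = y
        # max(theta, 0) keeps fractional powers real near the surface.
        return [dtheta, -max(theta, 0.0)**n - 2.0 * dtheta / x]
    surface = lambda x, y: y[0]          # theta = 0 marks the surface
    surface.terminal = True
    # Start slightly off x = 0 using the series expansion theta ~ 1 - x^2/6.
    x0 = 1e-6
    sol = solve_ivp(rhs, [x0, 50.0], [1.0 - x0**2 / 6.0, -x0 / 3.0],
                    events=surface, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]            # dimensionless surface radius xi_1

for n in (1.5, 3.0):                     # n=1.5: white dwarf; n=3: solar model
    print(f"n = {n}: xi_1 = {lane_emden(n):.4f}")
# Known values xi_1 ~ 3.6538 (n=1.5) and 6.8968 (n=3) serve as a sanity check.
```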
Abstract:
Formed during the gravitational collapse of a molecular gas cloud, newborn stars have masses ranging from 0.08 to about 100 M☉. The majority of the Galaxy's stellar population consists of stars with masses below about 0.6 M☉. The most recent star-formation event in the solar neighbourhood occurred in the Local Bubble at most 100 million years ago, most likely triggered by the passage of a shock wave through the local arm of the Galaxy. This gave rise to young stellar associations whose members share, in particular, a common space velocity and position in the Galaxy. Because young associations are sparsely populated and relatively close to the Sun, their members are rare and scattered across the entire sky. Until now, mostly the most massive (brightest) stars have been catalogued. Young low-mass stars, which make up the majority of the population, remain largely unidentified. Young low-mass stars are a key population for constraining evolutionary models of M stars and brown dwarfs. They are also excellent candidates for exoplanet searches via direct-imaging techniques. This thesis presents a new method that combines a kinematic model with a Bayesian statistical analysis to identify young low-mass stars in the beta Pictoris, Tucana-Horologium and AB Doradus associations. Starting from a sample of 1080 K and M stars, all showing youth indicators such as Halpha emission and strong X-ray luminosity, their kinematic (proper motion) and photometric properties are analysed to extract 98 highly probable candidate members of one of the three associations. Confirming their membership will require, in particular, a measurement of their radial velocity (predicted by our analysis) and of the equivalent width of the lithium line at 6708 Å to better constrain their age.
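A minimal sketch of the kind of Bayesian membership computation described is given below. The two-component (association vs. field) proper-motion model, its Gaussian parameters, and the prior member fraction are placeholder assumptions, not the kinematic model of the thesis.

```python
# Minimal sketch of Bayesian membership classification from proper motion:
# P(member | mu) ∝ P(mu | member) P(member). The association and field models
# below are placeholder 2-D Gaussians in (mu_RA, mu_Dec), units mas/yr.
import numpy as np
from scipy.stats import multivariate_normal

association = multivariate_normal(mean=[30.0, -15.0], cov=[[4, 0], [0, 4]])
field = multivariate_normal(mean=[0.0, 0.0], cov=[[2500, 0], [0, 2500]])
prior_member = 0.01                      # placeholder prior member fraction

def membership_probability(mu):
    """Posterior probability that a star with proper motion mu is a member."""
    l_member = association.pdf(mu) * prior_member
    l_field = field.pdf(mu) * (1.0 - prior_member)
    return l_member / (l_member + l_field)

for mu in ([29.0, -14.0], [5.0, 3.0]):   # one association-like, one field-like
    print(f"mu = {mu}: P(member) = {membership_probability(mu):.3f}")
```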
Abstract:
In this paper, the available potential energy (APE) framework of Winters et al. (J. Fluid Mech., vol. 289, 1995, p. 115) is extended to the fully compressible Navier–Stokes equations, with the aims of clarifying (i) the nature of the energy conversions taking place in turbulent thermally stratified fluids; and (ii) the role of surface buoyancy fluxes in the Munk & Wunsch (Deep-Sea Res., vol. 45, 1998, p. 1977) constraint on the mechanical energy sources of stirring required to maintain diapycnal mixing in the oceans. The new framework reveals that the observed turbulent rate of increase in the background gravitational potential energy GPE_r, commonly thought to occur at the expense of the diffusively dissipated APE, actually occurs at the expense of internal energy, as in the laminar case. The APE dissipated by molecular diffusion, on the other hand, is found to be converted into internal energy (IE), similar to the viscously dissipated kinetic energy KE. Turbulent stirring, therefore, does not introduce a new APE/GPE_r mechanical-to-mechanical energy conversion, but simply enhances the existing IE/GPE_r conversion rate, in addition to enhancing the viscous dissipation and the entropy production rates. This, in turn, implies that molecular diffusion contributes to the dissipation of the available mechanical energy ME = APE + KE, along with viscous dissipation. This result has important implications for the interpretation of the concepts of mixing efficiency γ_mixing and flux Richardson number R_f, for which new physically based definitions are proposed and contrasted with previous definitions. The new framework allows for a more rigorous and general re-derivation from first principles of Munk & Wunsch (1998, hereafter MW98)'s constraint, also valid for a non-Boussinesq ocean: G(KE) ≈ [(1 − ξR_f)/(ξR_f)] W_{r,forcing} = [(1 + (1 − ξ)γ_mixing)/(ξγ_mixing)] W_{r,forcing}, where G(KE) is the work rate done by the mechanical forcing, W_{r,forcing} is the rate of loss of GPE_r due to high-latitude cooling and ξ is a nonlinearity parameter such that ξ = 1 for a linear equation of state (as considered by MW98), but ξ < 1 otherwise. The most important result is that G(APE), the work rate done by the surface buoyancy fluxes, must be numerically as large as W_{r,forcing} and, therefore, as important as the mechanical forcing in stirring and driving the oceans. As a consequence, the overall mixing efficiency of the oceans is likely to be larger than the value γ_mixing = 0.2 presently used, thereby possibly eliminating the apparent shortfall in mechanical stirring energy that results from using γ_mixing = 0.2 in the above formula.
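The two forms of the constraint are equivalent under the standard relation between the flux Richardson number and the mixing efficiency, which is assumed here:

```latex
\[
R_f = \frac{\gamma_{\mathrm{mixing}}}{1+\gamma_{\mathrm{mixing}}}
\quad\Longrightarrow\quad
\frac{1-\xi R_f}{\xi R_f}
= \frac{1+\gamma_{\mathrm{mixing}}-\xi\gamma_{\mathrm{mixing}}}
       {\xi\gamma_{\mathrm{mixing}}}
= \frac{1+(1-\xi)\gamma_{\mathrm{mixing}}}{\xi\gamma_{\mathrm{mixing}}} .
\]
For $\xi=1$ and $\gamma_{\mathrm{mixing}}=0.2$ this gives
$G(KE)\approx 5\,W_{r,\mathrm{forcing}}$, which is why a larger
$\gamma_{\mathrm{mixing}}$ shrinks the required mechanical stirring power.
```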
Abstract:
There is a growing need for massive computational resources for the analysis of new astronomical datasets. To tackle this problem, we present here our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g., AstroGrid) and the computational grid (e.g. TeraGrid, COSMOS etc.). We discuss the construction of VOTechBroker, which is a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs to a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We discuss our planned usages of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS data and massive model-fitting of millions of CMBfast models to WMAP data. We also discuss other applications, including the determination of the XMM Cluster Survey selection function and the construction of new WMAP maps.
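To make concrete what "abstracting the tasks of submission and management of a large number of computational jobs" can look like, here is a toy sketch; the class name and interface are invented, and a local thread pool stands in for the grid back-ends and AstroGrid integration of the real VOTechBroker.

```python
# Toy sketch of a job broker that abstracts submission and management of many
# independent jobs. A local thread pool stands in for a grid back-end;
# the submit_all/collect interface is invented for illustration.
from concurrent.futures import ThreadPoolExecutor, as_completed

class ToyBroker:
    """Submit many independent jobs and gather results as they finish."""

    def __init__(self, max_workers=8):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._futures = {}

    def submit_all(self, func, params):
        """Submit func(p) for every parameter set p (e.g. one model fit each)."""
        for p in params:
            self._futures[self._pool.submit(func, p)] = p

    def collect(self):
        """Yield (params, result) pairs as jobs complete; shut down when done."""
        for fut in as_completed(self._futures):
            yield self._futures[fut], fut.result()
        self._pool.shutdown()

# Example: "fit" a small grid of toy models, standing in for the millions of
# CMBfast evaluations mentioned above.
broker = ToyBroker(max_workers=4)
broker.submit_all(lambda p: p["n_s"] ** 2,
                  [{"n_s": 0.9 + 0.02 * i} for i in range(5)])
for params, result in broker.collect():
    print(params, "->", result)
```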
Abstract:
The research network "Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models" was organized with European funding (COST Action ES0905) for the period 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from the point of view of a robust theoretical basis. Our main caution concerns the current emphasis on massive observational data analyses and process studies. The closure and the entrainment–detrainment problems are identified as the two highest priorities for convection parameterization under the mass-flux formulation. We emphasize the need for a drastic change in the current European research culture, in its policies and funding, in order not to further deplete the visions of the European researchers focusing on these basic issues.
Abstract:
Massive Open Online Courses (MOOCs) have become very popular among learners: millions of users from around the world have registered with leading platforms, and hundreds of universities (and other organizations) offer MOOCs. However, the sustainability of MOOCs is a pressing concern, as MOOCs incur up-front creation costs, maintenance costs to keep content relevant, and ongoing support costs to provide facilitation while a course is being run. At present, charging a fee for certification (for example Coursera Signature Track and FutureLearn Statement of Completion) seems the most popular business model. In this paper, the authors discuss other possible business models and their pros and cons, including:

- Freemium model: providing content freely but charging for premium services such as course support, tutoring and proctored exams.
- Sponsorships: courses can be created in collaboration with industry, where industry sponsorships cover the costs of course production and offering. For example, the Teaching Computing course was offered by the University of East Anglia on the FutureLearn platform with sponsorship from British Telecom, while the UK Government sponsored the course Introduction to Cyber Security offered by the Open University on FutureLearn.
- Initiatives and grants: governments, the EU Commission or corporations could commission the creation of courses through grants and initiatives according to the skills gaps identified in the economy. For example, the UK Government's National Cyber Security Programme has supported a course on cyber security; similar initiatives could also fund relevant course development and offering.
- Donations: free software, Wikipedia and early OER initiatives such as MIT OpenCourseWare accept donations from the public, and this could well be used as a business model where learners contribute (if they wish) to the maintenance and facilitation of a course.
- Merchandise: selling merchandise could also bring revenue to MOOCs. As many participants do not seek formal recognition for completing a MOOC (European Commission, 2014), merchandise that presents their achievement in a playful way could well be attractive to them.
- Sale of supplementary material: supplementary course material, in the form of an online or physical book or similar, could be sold, with the revenue reinvested in course delivery.
- Selective advertising: courses could carry advertisements relevant to learners.
- Data sharing: though a controversial topic, sharing learner data with relevant employers or similar could be another revenue model for MOOCs.
- Follow-on events: courses could lead to follow-on summer schools, courses, or other real-life or online events that are paid for, in which case a percentage of the revenue could be passed on to the MOOC for its upkeep.

Though these models are all possible ways of generating revenue for MOOCs, some are more controversial and sensitive than others. Nevertheless, unless appropriate business models are identified, the sustainability of MOOCs will remain problematic.
Abstract:
The first stars that formed after the Big Bang were probably massive(1), and they provided the Universe with the first elements heavier than helium ('metals'), which were incorporated into low-mass stars that have survived to the present(2,3). Eight stars in the oldest globular cluster in the Galaxy, NGC 6522, were found to have surface abundances consistent with the gas from which they formed having been enriched by massive stars(4) (that is, with higher alpha-element/Fe and Eu/Fe ratios than those of the Sun). However, the same stars have anomalously high abundances of Ba and La with respect to Fe(4), which usually arise through nucleosynthesis in low-mass stars(5) (via the slow neutron-capture process, or s-process). Recent theory suggests that metal-poor fast-rotating massive stars are able to boost the s-process yields by up to four orders of magnitude(6), which might provide a solution to this contradiction. Here we report a reanalysis of the earlier spectra, which reveals that Y and Sr are also over-abundant with respect to Fe, showing a large scatter similar to that observed in extremely metal-poor stars(7), whereas C abundances are not enhanced. This pattern is best explained as originating in metal-poor fast-rotating massive stars, which might point to a common property of the first stellar generations and even of the 'first stars'.
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates within models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we find no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
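For context, the lensing mass estimate referred to is the projected mass enclosed by the Einstein radius, which follows from the standard circular-lens relation; a minimal sketch with placeholder distances (not values from the paper):

```python
# Minimal sketch: projected mass inside the Einstein radius,
#   M_E = (c^2 / 4G) * theta_E^2 * D_l * D_s / D_ls,
# the standard circular-lens relation. The Einstein radius and angular-diameter
# distances below are placeholders for a typical SLACS-like lens.
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 2.998e8                # m/s
Mpc = 3.086e22             # m
M_sun = 1.989e30           # kg
arcsec = math.pi / (180 * 3600)

theta_E = 1.2 * arcsec     # Einstein radius (placeholder)
D_l, D_s, D_ls = 700 * Mpc, 1600 * Mpc, 1100 * Mpc  # angular-diameter distances

M_E = (c**2 / (4 * G)) * theta_E**2 * D_l * D_s / D_ls
print(f"M_E ~ {M_E / M_sun:.2e} M_sun")   # of order 10^11 M_sun
```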
Abstract:
The problem of cosmological particle creation in a spatially flat, homogeneous and isotropic universe is discussed in the context of f(R) theories of gravity. In contrast to cosmological models based on general relativity, it is found that a conformally invariant metric does not forbid the creation of massless particles during the early stages (radiation era) of the universe.