948 results for gravitational lensing: strong
Abstract:
Music making was a common practice during the 1989−90 strike against the Pittston Coal Company, an action led by the United Mine Workers of America. The types of music made varied greatly based on the contexts in which musicians and protesters were participating. In this thesis, I discuss how performers and audiences engaged with the music of the Pittston strike, with a focus on how different participatory and presentational contexts included music with similar or the same lyrics to achieve different goals. I argue that the musicians’ understanding of the people around them as potential participants, audiences, or inherent audiences shifted their use of music as they worked to deploy it strategically and effectively for the strike. The musical methods and considerations of the Pittston strike protesters have had a lasting impact on more recent protest movements.
Abstract:
"Sponsored mainly by the Bureau of Naval Weapons, through Contract NOw 62-0604-c under Brown University Subcontract No. 168319."
Abstract:
With over 30 years of tradition, breaking in Germany provides fascinating insights into the learning of dance in Hip Hop culture, reaching from informal street learning to the introduction of courses in educational institutions. This article draws information from a qualitative empirical study based on the Grounded Theory Methodology. The study asked subjects ranging from first-generation German B-Boys and B-Girls to teenage students about how they have learned and currently learn to break. The interview material reveals a rich and self-regulated learning culture with a strong impact on its protagonists. A synergy of social, aesthetic, and ethical principles seems to be characteristic, creating a gravitational field of learning with a unique and complex form of imitation at its core. (DIPF/Orig.)
Abstract:
Only a few months ago, physicists officially announced that gravitational waves exist; from a geometrical point of view, however, they have always been ``real objects'' and their properties have been widely investigated. The aim of this talk is to introduce generalized plane waves and to discuss some of their properties, such as geodesic connectedness and geodesic completeness.
Abstract:
In this thesis we present a mathematical formulation of the interaction between microorganisms, such as bacteria or amoebae, and chemicals, often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive some models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, and it is derived within two different frameworks: a macroscopic perspective and a microscopic perspective, in which we start from a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by the introduction of a degenerate diffusion parameter, which depends on the density itself via a power law. Then we derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system via a friction force. In the limit of strong friction, the model reduces to the parabolic model, whereas in the limit of weak friction, we recover a hyperbolic model. Finally, we analyze the instability condition, which is the condition that leads to aggregation, and we describe the different kinds of aggregates we may obtain: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse and the gravitational collapse (Jeans instability).
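For reference, the parabolic-parabolic Keller-Segel model mentioned above is usually written (in standard notation, which may differ from the thesis's own) as a coupled system for the cell density ρ and the chemoattractant concentration c:

```latex
\partial_t \rho = \nabla \cdot \left( D\,\nabla \rho - \chi\,\rho\,\nabla c \right),
\qquad
\partial_t c = D_c\,\Delta c + a\,\rho - b\,c ,
```

where D is the cell diffusivity, χ the chemotactic sensitivity, D_c the chemical diffusivity, and a and b the production and degradation rates of the chemical; the degenerate generalization replaces the constant D with a density-dependent, power-law diffusion.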
Abstract:
This dissertation examines the quality of hazard mitigation elements in a coastal, hazard prone state. I answer two questions. First, in a state with a strong mandate for hazard mitigation elements in comprehensive plans, does plan quality differ among county governments? Second, if such variation exists, what drives this variation? My research focuses primarily on Florida’s 35 coastal counties, which are all at risk for hurricane and flood hazards, and all fall under Florida’s mandate to have a comprehensive plan that includes a hazard mitigation element. Research methods included document review to rate the hazard mitigation elements of all 35 coastal county plans and subsequent analysis against demographic and hazard history factors. Following this, I conducted an electronic, nationwide survey of planning professionals and academics, informed by interviews of planning leaders in Florida counties. I found that hazard mitigation element quality varied widely among the 35 Florida coastal counties, but was close to a normal distribution. No plans were of exceptionally high quality. Overall, historical hazard effects did not correlate with hazard mitigation element quality, but some demographic variables that are associated with urban populations did. The variance in hazard mitigation element quality indicates that while state law may mandate, and even prescribe, hazard mitigation in local comprehensive plans, not all plans will result in equal, or even adequate, protection for people. Furthermore, the mixed correlations with demographic variables representing social and disaster vulnerability show that, at least at the county level, vulnerability to hazards does not have a strong effect on hazard mitigation element quality. From a theory perspective, my research is significant because it compares assumptions about vulnerability based on hazard history and demographics to plan quality.
The only vulnerability-related variables that appeared to correlate, and only mildly so, with hazard mitigation element quality were those typically representing more urban areas. In terms of the theory of Neo-Institutionalism and theories related to learning organizations, my research shows that planning departments appear to have set norms and rules of operating that preclude both significant public involvement and learning from prior hazard events.
Abstract:
BACKGROUND Field vaccination trials with Mycobacterium bovis BCG, an attenuated mutant of M. bovis, are ongoing in Spain, where the Eurasian wild boar (Sus scrofa) is regarded as the main driver of animal tuberculosis (TB). The oral baiting strategy consists of deploying vaccine baits twice each summer, in order to reach a high proportion of wild boar piglets. The aim of this study was to assess the response of wild boar to re-vaccination with BCG and to subsequent challenge with an M. bovis field strain. RESULTS BCG re-vaccinated wild boar showed reductions of 75.8% in lesion score and 66.9% in culture score, as compared to unvaccinated controls. Only one of nine vaccinated wild boar had a culture-confirmed lung infection, as compared to seven of eight controls. Serum antibody levels were highly variable and did not differ significantly between BCG re-vaccinated wild boar and controls. Gamma IFN levels differed significantly between BCG re-vaccinated wild boar and controls. The mRNA levels for IL-1b, C3 and MUT were significantly higher in vaccinated wild boar when compared to controls after vaccination, and decreased after mycobacterial challenge. CONCLUSIONS Oral re-vaccination of wild boar with BCG yields a strong protective response against challenge with a field strain. Moreover, re-vaccination of wild boar with BCG is not counterproductive. These findings are relevant given that re-vaccination is likely to happen under real (field) conditions.
Abstract:
The relative role of drift versus selection underlying the evolution of bacterial species within the gut microbiota remains poorly understood. The large sizes of bacterial populations in this environment suggest that even adaptive mutations with weak effects, thought to be the most frequently occurring, could substantially contribute to a rapid pace of evolutionary change in the gut. We followed the emergence of intra-species diversity in a commensal Escherichia coli strain that previously acquired an adaptive mutation with strong effect during one week of colonization of the mouse gut. Following this first step, which consisted of inactivating a metabolic operon, one third of the subsequent adaptive mutations were found to have a selective effect as high as the first. Nevertheless, the order of the adaptive steps was strongly affected by a mutational hotspot with an exceptionally high mutation rate of 10⁻⁵. The pattern of polymorphism emerging in the populations evolving within different hosts was characterized by periodic selection, which reduced diversity, but also frequency-dependent selection, actively maintaining genetic diversity. Furthermore, the continuous emergence of similar phenotypes due to distinct mutations, known as clonal interference, was pervasive. Evolutionary change within the gut is therefore highly repeatable within and across hosts, with adaptive mutations of selection coefficients as strong as 12% accumulating without strong constraints on genetic background. In vivo competitive assays showed that one of the second steps (focA) exhibited positive epistasis with the first, while another (dcuB) exhibited negative epistasis. The data show that strong-effect adaptive mutations continuously recur in gut commensal bacterial species.
Abstract:
Over the past decades star formation has been a very attractive field, because knowledge of star formation leads to a better understanding of the formation of planets, and thus of our solar system, but also of the evolution of galaxies. Conditions leading to the formation of high-mass stars are still under investigation, but an evolutionary scenario has been proposed: as a cold pre-stellar core collapses under gravitational force, the medium warms up until it reaches a temperature of 100 K and enters the hot molecular core (HMC) phase. The forming central proto-star accretes material, increasing its mass and luminosity, and eventually it becomes sufficiently evolved to emit UV photons which irradiate the surrounding environment, forming a hyper-compact (HC) and then an ultracompact (UC) HII region. At this stage, a very dense and very thin internal photon-dominated region (PDR) forms between the HII region and the molecular core. Information on the chemistry allows us to trace the physical processes occurring in these different phases of star formation. Formation and destruction routes of molecules are influenced by the environment, as reaction rates depend on the temperature and radiation field. Therefore, chemistry also allows the determination of the evolutionary stage of astrophysical objects through the use of chemical models including the time evolution of the temperature and radiation field. Because HMCs host a very rich chemistry with high abundances of complex organic molecules (COMs), several astrochemical models have been developed to study the gas-phase chemistry as well as grain chemistry in these regions. In addition to HMC models, models of PDRs have also been developed to study photo-chemistry in particular. So far, few studies have investigated internal PDRs, and only in the presence of outflow cavities. Thus, these unique regions around HC/UC HII regions remain to be examined thoroughly.
My PhD thesis focuses on the spatio-temporal chemical evolution in HC/UC HII regions with internal PDRs, as well as in HMCs. The purpose of this study is first to understand the impact and effects of the radiation field, usually very strong in these regions, on the chemistry. Secondly, the goal is to study the emission of various tracers of HC/UC HII regions and compare it with HMC models, where the UV radiation field does not impact the region as it is immediately attenuated by the medium. Ultimately, we want to determine the age of a given region using chemistry in combination with radiative transfer.
Abstract:
This thesis investigates how the strong verb system inherited from Old English evolved in the regional dialects of Middle English (ca. 1100-1500). Old English texts preserve a relatively complex system of strong verbs, in which traditionally seven different ablaut classes are distinguished. This system becomes seriously disrupted from the Late Old English and Early Middle English periods onwards. As a result, many strong verbs die out, or have their ablaut patterns affected by sound change and morphological analogy, or transfer to the weak conjugation. In my thesis, I study the beginnings of two of these developments in two strong verb classes to find out what the evidence from Middle English regional dialects can tell us about their origins and diffusion. Chapter 2 concentrates on the strong-to-weak shift in Class III verbs, and investigates to what extent strong, mixed and weak past tense and participle forms vary in Middle English dialects, and whether the variation is more pronounced in the paradigms of specific verbs or sub-classes. Chapter 3 analyses the regional distribution of ablaut levelling in strong Class IV verbs throughout the Middle English period. The Class III and IV data for the Early Middle English period are drawn from A Linguistic Atlas of Early Middle English, and the data for the Late Middle English period from a sub-corpus of files from The Penn-Helsinki Parsed Corpus of Middle English and The Middle English Grammar Corpus. Furthermore, The English Dialect Dictionary and Grammar are consulted as an additional reference point to find out to what extent the Middle English developments are reflected in Late Modern English dialects. 
Finally, referring to modern insights into language variation and change and linguistic interference, Chapter 4 discusses to what extent intra- and extra-linguistic factors, such as token and type frequency, stem structure and language contact, might correlate with the strong-to-weak shift and ablaut levelling in Class III and IV verbs in the Middle English period. The thesis is accompanied by six appendices that contain further information about my distinction of Middle English dialect areas (Appendix A), historical Class III and IV verbs (B and C) and the text samples and linguistic data from the Middle English text corpora (D, E and F).
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were manually generated starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach is another way of handling these noises, presented by Romano and Woan, which simplifies the data analysis by removing the need to construct the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which occurs in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome, that is, data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths which will appear in the covariance matrix and, from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction in the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
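The core of the principal-component idea described above — eigendecomposing a noise covariance matrix whose channels share one dominant common noise — can be illustrated with a toy model. This is only a sketch under invented assumptions (three equal-variance channels, arbitrary noise levels), not the thesis pipeline or the actual LISA configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 20000, 3

# Each channel records the SAME large laser-frequency noise plus its own
# small, independent photodetector noise (stationary, equal variances).
laser = 100.0 * rng.standard_normal(n_samples)
photo = 0.1 * rng.standard_normal((n_samples, n_channels))
data = laser[:, None] + photo

# Noise covariance matrix across channels and its eigendecomposition.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# The spectrum splits into two distinct sets: small eigenvalues whose
# eigenvectors cancel the common laser noise, and one laser-dominated value.
print(eigvals)

# Projecting the raw data onto the small-eigenvalue eigenvectors yields
# channel combinations free of laser frequency noise.
clean = data @ eigvecs[:, :-1]
print(np.std(clean, axis=0))   # at the photodetector-noise level, not ~100
```

The correlations induced by the shared noise are what make the separation possible: the laser-dominated direction is (approximately) the common mode of the channels, and everything orthogonal to it is laser-noise-free, mirroring the two eigenvalue sets discussed above.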
Abstract:
This Thesis explores two novel and independent cosmological probes, Cosmic Chronometers (CCs) and Gravitational Waves (GWs), to measure the expansion history of the Universe. CCs provide direct and cosmology-independent measurements of the Hubble parameter H(z) up to z∼2. In parallel, GWs provide a direct measurement of the luminosity distance without requiring additional calibration, thus yielding a direct measurement of the Hubble constant H0=H(z=0). This Thesis extends the methodologies of both of these probes to maximize their scientific yield. This is achieved by accounting for the interplay of cosmological and astrophysical parameters to derive them jointly, study possible degeneracies, and eventually minimize potential systematic effects. As a legacy value, this work also provides interesting insights into galaxy evolution and compact binary population properties. The first part presents a detailed study of intermediate-redshift passive galaxies as CCs, with a focus on the selection process and the study of their stellar population properties using specific spectral features. From their differential aging, we derive a new measurement of the Hubble parameter H(z) and thoroughly assess potential systematics. In the second part, we develop a novel methodology and pipeline to obtain joint cosmological and astrophysical population constraints using GWs in combination with galaxy catalogs. This is applied to GW170817 to obtain a measurement of H0. We then perform realistic forecasts to predict joint cosmological and astrophysical constraints from black hole binary mergers for upcoming gravitational wave observatories and galaxy surveys. Using these two probes we provide an independent reconstruction of H(z) with direct measurements of H0 from GWs and H(z) up to z∼2 from CCs and demonstrate that they can be powerful independent probes to unveil the expansion history of the Universe.
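The differential-aging idea behind cosmic chronometers rests on a standard relation (written here in conventional notation, not necessarily the Thesis's own):

```latex
H(z) = \frac{\dot{a}}{a} = -\frac{1}{1+z}\,\frac{dz}{dt}
\;\approx\; -\frac{1}{1+z}\,\frac{\Delta z}{\Delta t},
```

so that measuring the differential age Δt between passive galaxies separated by a small redshift interval Δz yields H(z) directly, without assuming a cosmological model.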