948 results for gravitational capture
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises, which is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were constructed by hand, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms: with the same structure, they no longer led to cancellation. The principal component approach, presented by Romano and Woan, is another way of handling these noises that simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data with the corresponding eigenvectors also produced data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These eigenvalues are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths which will appear in the covariance matrix and, from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which will affect any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
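As a toy sketch only (the shared-noise model, the variances and the three-channel array below are illustrative assumptions, not the thesis' actual LISA covariance computation), the eigenvalue split described in this abstract can be reproduced with NumPy: channels that share one very large noise produce a sample covariance whose eigendecomposition separates into one large eigenvalue carrying the shared noise and small eigenvalues whose principal components are nearly free of it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Three "raw readings" that all contain the SAME large "laser
# frequency" noise on top of small independent "photodetector"
# noises (delays between channels are ignored in this toy model).
laser = 1.0e3 * rng.standard_normal(n)       # shared, very large noise
photo = rng.standard_normal((3, n))          # small independent noises
data = laser + photo                         # broadcasting -> shape (3, n)

cov = np.cov(data)                           # 3x3 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

# The largest eigenvalue is ~O(3e6): it carries the shared noise.
# The two smallest are ~O(1): their principal components are
# (nearly) free of the shared noise, analogous to the laser-noise-free set.
clean = eigvecs[:, :2].T @ data
print(eigvals)                               # two small values, one large
print(clean.std(axis=1))                     # ~O(1): shared noise removed
```

The point of the sketch is only the mechanism: the small-eigenvalue eigenvectors are orthogonal to the direction in which the shared noise lives, so projecting the raw data onto them cancels it, in the same spirit as the TDI combinations.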
Abstract:
In this work the maximum carbon dioxide adsorption capacity of carbon aerogels, obtained by a sol-gel process using 2,4-dihydroxybenzoic acid/formaldehyde (DHBAF) and resorcinol/formaldehyde (RF) as precursors, was studied. The effect of increasing the carbonization temperature and of physically activating the DHBAF samples was also studied. The results showed that the maximum adsorption capacity is favoured at lower temperatures, that adsorption and desorption are rapid, and that the performance is maintained over several cycles of CO2 adsorption/desorption. A comparison with commercial carbon samples was also made, and it was concluded that carbon aerogels exhibit a behaviour comparable or superior to that of the commercial carbons studied.
Abstract:
The late Paleozoic collision between Gondwana and Laurussia resulted in the polyphase deformation and magmatism that characterizes the Iberian Massif of the Variscan orogen. In the Central Iberian Zone, initial continental thickening (D1; folding and thrusting) was followed by extensional orogenic collapse (D2) responsible for the exhumation of high-grade rocks coeval with the emplacement of granitoids. This study presents a tectonometamorphic analysis of the Trancoso-Pinhel region (Central Iberian Zone) to explain the processes in place during the transition from an extension-dominated state (D2) to a compression-dominated one (D3). We reveal the existence of low-dipping D2 extensional structures later affected by several pulses of subhorizontal shortening, each of them typified by upright folds and strike-slip shearing (D3, D4 and D5, as identified by superimposition of structures). The D2 Pinhel extensional shear zone separates a low-grade domain from an underlying high-grade domain, and it contributed to the thermal reequilibration of the orogen by facilitating heat advection from lower parts of the crust, crustal thinning, decompression melting, and magma intrusion. Progressive lessening of the gravitational disequilibrium accommodated by this D2 shear zone led to a switch from subhorizontal extension to compression and the eventual cessation and capture of the Pinhel shear zone by strike-slip tectonics during renewed crustal shortening. High-grade domains of the Pinhel shear zone were folded together with low-grade domains to define the current upright folded structure of the Trancoso-Pinhel region, the D3 Tamames-Marofa-Sátão synform. New dating of syn-orogenic granitoids (SHRIMP U–Pb zircon dating) intruding the Pinhel shear zone, together with the already published ages of early extensional fabrics, constrains the functioning of this shear zone to ca. 331–311 Ma, with maximum tectonomagmatic activity at ca. 321–317 Ma. The capture and apparent cessation of movement of the Pinhel shear zone occurred at ca. 317–311 Ma.
Abstract:
Pure hydrogen production from methane is a multi-step process run on a large scale for economic reasons. However, hydrogen can also be produced in a one-pot continuous process for small-scale applications, namely Low Temperature Steam Reforming. Here, steam reforming is carried out in a reactor whose walls are composed of a membrane selective toward hydrogen. Pd is the most used membrane material due to its high permeability and selectivity. However, Pd deteriorates at temperatures higher than 500°C, so the operating temperature of the reaction has to be lowered. The use of a membrane reactor may nevertheless give high yields thanks to hydrogen removal, which shifts the reaction toward the products; moreover, pure hydrogen is produced. This work concentrates on the synthesis of a catalytic system and the investigation of its performance in different processes, namely oxy-reforming, steam reforming and water gas shift, to find appropriate conditions for hydrogen production in a catalytic membrane reactor. The catalyst supports were CeZr and Zr oxides synthesized by microemulsion and impregnated with different noble metals. Pt, Rh and PtRh based catalysts were tested in the oxy-reforming process at 500°C, where Rh on CeZr gave the most interesting results. By contrast, the best performance in low temperature steam reforming was obtained with Rh impregnated on Zr oxide. This catalyst was selected to perform low temperature steam reforming in a Pd membrane reactor. The hydrogen removal provided by the membrane allowed the methane conversion to be increased beyond the equilibrium of a classical fixed bed reactor thanks to an equilibrium-shift effect. High hydrogen production and recovery were also obtained, and no other compound permeated through the membrane, which proved to be hydrogen selective.
Abstract:
The scope of this dissertation is to study the transport phenomena of small molecules in polymers and membranes for gas separation applications, with particular attention to energy efficiency and environmental sustainability. This work seeks to contribute to the development of new competitive selective materials through the characterization of novel organic polymers such as CANALs and ROMPs, as well as through the combination of selective materials to obtain mixed matrix membranes (MMMs), to make membrane technologies competitive with the traditional ones. Kinetic and thermodynamic aspects of the transport properties were investigated in ideal and non-ideal scenarios, such as mixed-gas experiments. The information we gathered contributed to the fundamental understanding of phenomena such as CO2-induced plasticization and physical aging. Among the most significant results, ZIF-8/PPO MMMs provided materials whose permeability and selectivity were higher than those of the pure materials for He/CO2 separation. The CANALs feature a norbornyl benzocyclobutene backbone and thereby introduce a third typology of ladder polymers into the gas separation field, expanding the structural diversity of microporous materials. CANALs have a completely hydrocarbon-based, non-polar rigid backbone, which makes them an ideal model system to investigate structure-property correlations. ROMPs were synthesized by means of ring-opening metathesis living polymerization, which allowed the formation of bottlebrush polymers. CF3-ROMP turned out to be ultrapermeable to CO2, with unprecedented plasticization resistance. Mixed-gas experiments in glassy polymers showed that solubility-selectivity controls the separation efficiency of materials in multicomponent conditions. Finally, it was determined that the plasticization pressure is not an intrinsic property of a material and does not represent a state of the system, but rather arises from the contributions of the solubility and diffusivity coefficients in the framework of the solution-diffusion model.
Abstract:
This Thesis explores two novel and independent cosmological probes, Cosmic Chronometers (CCs) and Gravitational Waves (GWs), to measure the expansion history of the Universe. CCs provide direct and cosmology-independent measurements of the Hubble parameter H(z) up to z∼2. In parallel, GWs provide a direct measurement of the luminosity distance without requiring additional calibration, thus yielding a direct measurement of the Hubble constant H0=H(z=0). This Thesis extends the methodologies of both of these probes to maximize their scientific yield. This is achieved by accounting for the interplay of cosmological and astrophysical parameters to derive them jointly, study possible degeneracies, and eventually minimize potential systematic effects. As a legacy, this work also provides interesting insights into galaxy evolution and compact binary population properties. The first part presents a detailed study of intermediate-redshift passive galaxies as CCs, with a focus on the selection process and the study of their stellar population properties using specific spectral features. From their differential aging, we derive a new measurement of the Hubble parameter H(z) and thoroughly assess potential systematics. In the second part, we develop a novel methodology and pipeline to obtain joint cosmological and astrophysical population constraints using GWs in combination with galaxy catalogs. This is applied to GW170817 to obtain a measurement of H0. We then perform realistic forecasts to predict joint cosmological and astrophysical constraints from black hole binary mergers for upcoming gravitational wave observatories and galaxy surveys. Using these two probes we provide an independent reconstruction of H(z), with direct measurements of H0 from GWs and of H(z) up to z∼2 from CCs, and demonstrate that they can be powerful independent probes to unveil the expansion history of the Universe.
Abstract:
According to the Standard Model (SM), while lepton flavour violation is allowed in the neutral sector, Charged Lepton Flavour Violation (CLFV) processes are forbidden. The Mu2e experiment at Fermilab will search for the CLFV process of neutrinoless conversion of a muon into an electron within the field of an Al nucleus. The Mu2e detectors and their state-of-the-art superconducting magnet system are presented, with special focus on the electromagnetic crystal calorimeter. The calorimeter is composed of two annular disks, each hosting pure CsI crystals read out by custom silicon photomultipliers (SiPMs). The SiPMs are amplified by custom front-end electronics (FEE) and are glued to copper holders in groups of two SiPMs and two FEE boards, thus forming a crystal Readout Unit. These Readout Units are being tested at the Quality Control (QC) Station, whose design, realization and operation are presented in this work. The QC Station makes it possible to determine the gain, the response and the photon detection efficiency of each unit and to evaluate the dependence of these parameters on the supply voltage and temperature. The station is powered by two remotely controlled power supplies and monitored by a Slow Control system, which is also illustrated in this work. In this thesis, we also demonstrate that the calorimeter can perform its own measurement of the Mu2e normalization factor, i.e. the counting of the 1.8 MeV photon line produced in nuclear muon captures. A specific calorimeter sub-system called CAPHRI, composed of four LYSO crystals with SiPM readout, has been designed and tested. We simulated the capability of this system to perform this task, showing that it can provide a faster and more reliable measurement of the muon capture rates with respect to the current Mu2e detector dedicated to this measurement. The characterization of the energy resolution and response uniformity of the four procured LYSO crystals is also illustrated.
Abstract:
Grand Unification Theories (GUTs) predict the unification of three of the fundamental forces and are a possible extension of the Standard Model; some of them also predict neutrino masses and the baryon asymmetry. We consider a minimal non-supersymmetric $SO(10)$ GUT model that can reproduce the observed fermionic masses and mixing parameters of the Standard Model. We calculate the scales of spontaneous symmetry breaking from the GUT to the Standard Model gauge group using two-loop renormalisation group equations. This procedure determines the proton decay rate, the scale of $U(1)_{B-L}$ breaking, which generates cosmic strings, and the right-handed neutrino mass scales. Consequently, the regions of parameter space where thermal leptogenesis is viable are identified and correlated with the fermion masses and mixing, the neutrinoless double beta decay rate, the proton decay rate, and the gravitational wave signal resulting from the network of cosmic strings. We demonstrate that this framework, which can explain the Standard Model fermion masses and mixing and the observed baryon asymmetry, will be highly constrained by the next generation of gravitational wave detectors and by neutrino oscillation experiments, which will also constrain the proton lifetime.
Abstract:
Gravitational lensing is a powerful tool to investigate the properties of the distribution of matter, be it baryonic or dark. In this work we take advantage of strong gravitational lensing to infer the properties of one of the galaxy-scale substructures that make up the cluster MACSJ1206. It is relatively easy to model the morphology of the visible components of a galaxy, while the morphology of the dark matter distribution cannot be so easily constrained. Being sensitive to the whole mass, strong lensing provides a way to probe the dark matter (DM) distribution, and this is the reason why it is the best tool to study the substructure. The goal of this work is to analyse the substructure previously mentioned, an early-type galaxy (ETG), by modelling the highly magnified Einstein ring around it, in order to put stringent constraints on its matter distribution, which, for an ETG, is commonly well described by an isothermal profile. This is interesting for several reasons. It is well known that galaxies in clusters are subject to interaction processes, both dynamic and hydrodynamic, that can significantly modify the distribution of matter within them; therefore, finding a profile different from the one usually expected could be a sign that the galaxy has undergone processes that have changed its structure. Studying the mass distribution also means studying the dark matter component, which not only still presents great open questions today, but which is also not obviously distributed in the same way as in an isolated galaxy. What emerges from the analysis is that the total mass distribution of the galaxy under examination has a slope much steeper than the isothermal one usually expected.
Abstract:
Despite the success of the ΛCDM model in describing the Universe, a possible tension between early- and late-Universe cosmological measurements is calling for new independent cosmological probes. Amongst the most promising ones, gravitational waves (GWs) can provide a self-calibrated measurement of the luminosity distance. However, to obtain cosmological constraints, additional information is needed to break the degeneracy between parameters in the gravitational waveform. In this thesis, we exploit the latest LIGO-Virgo-KAGRA Gravitational Wave Transient Catalog (GWTC-3) of GW sources to constrain the background cosmological parameters together with the astrophysical properties of Binary Black Holes (BBHs), using information from their mass distribution. We expand the public code MGCosmoPop, previously used for the application of this technique, by implementing a state-of-the-art model for the mass distribution, needed to account for the presence of non-trivial features, i.e. a truncated power law with two additional Gaussian peaks, referred to as Multipeak. We then analyse GWTC-3 comparing this model with simpler and more commonly adopted ones, both with fixed and with varying cosmology, and assess their goodness-of-fit with different model selection criteria, as well as their constraining power on the cosmological and population parameters. We also start to explore different sampling methods, namely Markov Chain Monte Carlo and Nested Sampling, comparing their performance and evaluating the advantages of each. We find concurring evidence that the Multipeak model is favoured by the data, in line with previous results, and show that this conclusion is robust to the variation of the cosmological parameters. We find a constraint on the Hubble constant of H0 = 61.10 (+38.65/−22.43) km/s/Mpc (68% C.L.), which shows the potential of this method in providing independent constraints on cosmological parameters. The results obtained in this work have been included in [1].
Abstract:
In this thesis, we explore the constraints that can be put on the primordial power spectrum of curvature perturbations beyond the scales probed by the anisotropies of the cosmic microwave background (CMB) and by galaxy surveys. We exploit present and future measurements of CMB spectral distortions, and their synergy with CMB anisotropies, as well as existing and future upper limits on the stochastic background of gravitational waves. We derive for the first time phenomenological templates that fit small-scale bumps in the primordial power spectrum generated in multi-field models of inflation. Using such templates, we study for the first time the imprints of primordial peaks on the anisotropies and spectral distortions of the cosmic microwave background, and we investigate their contribution to the stochastic background of gravitational waves. Through a Markov chain Monte Carlo analysis we infer for the first time the constraints on the amplitude, the width and the location of such bumps using Planck and FIRAS data. We also forecast how a future spectrometer like PIXIE could improve on the FIRAS limits. The results derived in this thesis have implications for the possibility of primordial black holes from inflation.
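For illustration only (this specific lognormal parameterization is an assumption commonly used in the literature, not necessarily the template derived in the thesis), a small-scale bump added to the nearly scale-invariant primordial spectrum can be written as

\[
\mathcal{P}_\zeta(k) \;=\; A_s \left(\frac{k}{k_\star}\right)^{n_s - 1} \;+\; A_b \,\exp\!\left[-\frac{\ln^2 (k/k_b)}{2\sigma_b^2}\right],
\]

where the first term is the standard power-law spectrum with amplitude \(A_s\), tilt \(n_s\) and pivot scale \(k_\star\), while \(A_b\), \(\sigma_b\) and \(k_b\) play the role of the amplitude, width and location parameters that the analysis described above constrains.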
Abstract:
The aim of this study was to test fear, anxiety and control related to dental treatment. The subjects were 364 children aged between 7 and 13 years. Three questionnaires with multiple-choice questions were applied in groups of 10 children. The first instrument was the 15-item dental subscale from the Children's Fear Survey Schedule [9]. The subjects rated their level of fear on a 5-point scale. The second survey instrument was the 20-item subscale from the State Trait Anxiety Inventory for Children [16]. This measure was used to capture how anxious the child was in general. The third instrument was the Child Dental Control Assessment [19]. It contained 20 items to assess perceived control and 20 items to assess desired control. The results of the survey indicated that dental fear and anxiety were slightly higher for female than for male subjects (P < 0.05). Older children (11 to 13 years old) obtained higher fear scores than younger ones (7 to 9 years old). Concerning perceived control, the results indicate that younger children perceive more control than older ones. For desired control, the results indicate that younger children reported higher percentages than older ones. In this study, patients who had undergone anesthesia during treatment revealed higher fear scores when compared with those who had not. Dental fear etiology seems to be related to procedures that may involve pain or lack of control.
Abstract:
Culture supernatant of Staphylococcus aureus 722 grown in 3% tryptone plus 1% yeast extract was used for SEA purification, comparing dye-ligand Red A affinity chromatography with classic chromatography. The capture of SEA with Amberlite CG-50 allowed rapid enterotoxin concentration from the culture supernatant. However, a ratio of 15 mg of the resin to a total of 150 mg of the toxin saturated the resin, giving only 10 to 30% recovery of SEA from the supernatant. The elution of the concentrated material through the Red A column resulted in a recovery of 60.87% of the toxin and required 76 hours, indicating an advantage over classic chromatography. An ion exchange column followed by gel filtration recovered only 6.5% of the SEA and required 114 hours to conclude the procedure. Electrophoresis of the purified SEA indicated a high grade of purity for the toxin obtained from the Red A column (90%), compared to 60% for the classic column.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Prostaglandins control osteoblastic and osteoclastic function under physiological and pathological conditions and are important modulators of the bone healing process. Non-steroidal anti-inflammatory drugs (NSAIDs) inhibit cyclooxygenase (COX) activity and consequently prostaglandin synthesis. Experimental and clinical evidence has indicated a risk for reparative bone formation related to the use of non-selective (COX-1 and COX-2) and COX-2 selective NSAIDs. Ketorolac is a non-selective NSAID which, at low doses, has a preferential COX-1 inhibitory effect, and etoricoxib is a new selective COX-2 inhibitor. Although literature data have suggested that ketorolac can interfere negatively with long bone fracture healing, there seems to be no study associating etoricoxib with reparative bone formation. Paracetamol/acetaminophen, one of the first choices for pain control in clinical dentistry, has been considered a weak anti-inflammatory drug, although supposedly capable of inhibiting COX-2 activity in inflammatory sites. OBJECTIVE: The purpose of the present study was to investigate whether paracetamol, ketorolac and etoricoxib can hinder alveolar bone formation, taking the filling of the rat extraction socket with newly formed bone as the experimental model. MATERIAL AND METHODS: The degree of new bone formation inside the alveolar socket was estimated two weeks after tooth extraction by a differential point-counting method, using an optical microscope with a digital camera for image capture and histometry software. Differences between groups were analyzed by ANOVA after confirming a normal distribution of the sample data. RESULTS AND CONCLUSIONS: The histometric results showed that none of the tested drugs had a detrimental effect on the volume fraction of bone trabeculae formed inside the alveolar socket.