968 results for Non-polarizable Water Models
Abstract:
Graduate program in Agronomy (Plant Production) - FCAV
Abstract:
The main objective of this thesis is the analysis and quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin (HS) fields. In the first part of this work we describe the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. Upon quantization, these (non)linear sigma models describe the dynamics of particles with spin N/2. We then carefully analyze the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We show, in particular, that these models of spinning particles describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we consider the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed using orthogonal polynomial techniques. Finally, we extend our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces (A)dS HSC as the physical states of the Hilbert space; we use an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces.
Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the notorious difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information about HS fields. In the last part of this work we construct spinning particle models with sp(2) R-symmetry, coupled to hyper-Kähler and quaternionic-Kähler (QK) backgrounds.
Abstract:
The main scope of my PhD is the reconstruction of the large-scale bivalve phylogeny on the basis of four mitochondrial genes, with samples taken from all major groups of the class. To my knowledge, it is the first attempt of such breadth in Bivalvia. I decided to focus on both ribosomal and protein-coding DNA sequences (two ribosomal genes, 12S and 16S, and two protein-coding ones, cytochrome c oxidase I and cytochrome b), since both the literature and my preliminary results confirmed the importance of combined gene signals in resolving the evolutionary pathways of the group. Moreover, I wanted to propose a methodological pipeline that proved useful for obtaining robust results in bivalve phylogeny. Best-performing taxon sampling and alignment strategies were tested, and several data partitioning and molecular evolution models were analyzed, demonstrating the importance of molding and implementing non-trivial evolutionary models. In line with a more rigorous approach to data analysis, I also proposed a new method to assess taxon sampling, by developing Clarke and Warwick statistics: taxon sampling is a major concern in phylogenetic studies, and incomplete, biased, or improper taxon assemblies can lead to misleading results in reconstructing evolutionary trees. Theoretical methods are already available to optimize taxon choice in phylogenetic analyses, but most involve some knowledge of the genetic relationships of the group of interest, or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. The method I propose measures the "phylogenetic representativeness" of a given sample or set of samples, and it is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators. Moreover, it also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all analyses.
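Clarke and Warwick's average taxonomic distinctness, on which a phylogenetic-representativeness measure of this kind can be built, is sketched below; the toy taxonomy, rank weights, and function names are illustrative assumptions, not data or code from the thesis or from PhyRe:

```python
from itertools import combinations

# Toy taxonomy (species -> (genus, family, order)) and rank weights; both are
# illustrative assumptions, not taken from the thesis or from PhyRe.
taxonomy = {
    "sp1": ("g1", "f1", "o1"),
    "sp2": ("g1", "f1", "o1"),
    "sp3": ("g2", "f1", "o1"),
    "sp4": ("g3", "f2", "o1"),
}

def path_weight(a, b, taxonomy, weights=(1, 2, 3, 4)):
    """Distinctness of two species: weight of the lowest rank they share."""
    ra, rb = taxonomy[a], taxonomy[b]
    for level, w in enumerate(weights[:-1]):
        if ra[level] == rb[level]:
            return w
    return weights[-1]          # only joined at the root

def avg_taxonomic_distinctness(species, taxonomy):
    """Average taxonomic distinctness (Delta+) over all pairs of species."""
    pairs = list(combinations(species, 2))
    return sum(path_weight(a, b, taxonomy) for a, b in pairs) / len(pairs)

delta_plus = avg_taxonomic_distinctness(sorted(taxonomy), taxonomy)
```

A sample whose Delta+ is close to that of the full taxonomy spreads evenly across the tree; a sample of close relatives scores low.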
Abstract:
In this thesis we further develop the functional renormalization group (RG) approach to quantum field theory (QFT) based on the effective average action (EAA) and on the exact flow equation that it satisfies. The EAA is a generalization of the standard effective action that interpolates smoothly between the bare action for k → ∞ and the standard effective action for k → 0. In this way, the problem of performing the functional integral is converted into the problem of integrating the exact flow of the EAA from the UV to the IR. The EAA formalism deals naturally with several different aspects of a QFT. One aspect is related to the discovery of non-Gaussian fixed points of the RG flow that can be used to construct continuum limits. In particular, the EAA framework is a useful setting in which to search for Asymptotically Safe theories, i.e. theories valid up to arbitrarily high energies. A second aspect in which the EAA reveals its usefulness is non-perturbative calculations. In fact, the exact flow that it satisfies is a valuable starting point for devising new approximation schemes. In the first part of this thesis we review and extend the formalism; in particular, we derive the exact RG flow equation for the EAA and the related hierarchy of coupled flow equations for the proper vertices. We show how standard perturbation theory emerges as a particular way to iteratively solve the flow equation if the starting point is the bare action. Next, we explore both technical and conceptual issues by means of three different applications of the formalism: to QED, to general non-linear sigma models (NLσM) and to matter fields on curved spacetimes. In the main part of this thesis we construct the EAA for non-abelian gauge theories and for quantum Einstein gravity (QEG), using the background field method to implement the coarse-graining procedure in a gauge invariant way.
We propose a new truncation scheme where the EAA is expanded in powers of the curvature or field strength. Crucial to the practical use of this expansion is the development of new techniques to manage functional traces, such as the algorithm proposed in this thesis. This allows one to project the flow of all terms in the EAA that are analytic in the fields. As an application we show how the low energy effective action for quantum gravity emerges as the result of integrating the RG flow. In any treatment of theories with local symmetries that introduces a reference scale, the question of preserving gauge invariance along the flow becomes predominant. In the EAA framework this problem is dealt with through the use of the background field formalism. This comes at the cost of enlarging the theory space where the EAA lives to the space of functionals of both fluctuation and background fields. In this thesis, we study how the identities dictated by the symmetries are modified by the introduction of the cutoff, and we study so-called bimetric truncations of the EAA that contain both fluctuation and background couplings. In particular, working in the enlarged bimetric theory space, where the running of the cosmological constant and of Newton's constant is influenced by fluctuation couplings, we confirm the existence of a non-Gaussian fixed point for QEG, which is at the heart of the Asymptotic Safety scenario in quantum gravity.
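For reference, the exact flow equation satisfied by the EAA (the starting point of the hierarchy of flow equations for the proper vertices) is the standard one, with RG time t = ln k and infrared cutoff kernel R_k:

```latex
\partial_t \Gamma_k[\varphi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[
      \left(\Gamma_k^{(2)}[\varphi] + R_k\right)^{-1}\partial_t R_k
    \right],
  \qquad t \equiv \ln k ,
```

where $\Gamma_k^{(2)}$ is the second functional derivative of the EAA with respect to the fields and the trace runs over all field indices and momenta.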
Abstract:
This thesis examines two panel data sets of 48 states from 1981 to 2009 and uses ordinary least squares (OLS) and fixed effects models to explore the relationship between rural Interstate speed limits and fatality rates, and whether rural Interstate speed limits affect non-Interstate safety. The models provide evidence that rural Interstate speed limits higher than 55 MPH lead to higher fatality rates on rural Interstates, though this effect is somewhat tempered by reductions in fatality rates on roads other than rural Interstates. These results provide some, but not unequivocal, support for the traffic diversion hypothesis that rural Interstate speed limit increases lead to decreases in fatality rates on other roads. To the author's knowledge, this paper is the first econometric study to differentiate between the effects of 70 MPH speed limits and speed limits above 70 MPH on fatality rates using a multi-state data set. Considering both rural Interstates and other roads, rural Interstate speed limit increases above 55 MPH are responsible for 39,700 net fatalities, 4.1 percent of total fatalities from 1987, the year limits were first raised, to 2009.
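The fixed effects estimator used in studies of this kind can be illustrated by the within transformation: demean each variable by state, then run OLS on the demeaned data. A minimal sketch on synthetic data (the panel dimensions mirror the study's 48 states and 29 years, but the data and the slope of 2.0 are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel mirroring the study's dimensions: 48 states, 29 years.
n_states, n_years = 48, 29
state = np.repeat(np.arange(n_states), n_years)

# Invented data: unobserved state effects plus a known slope of 2.0; noiseless,
# so the within estimator recovers the slope exactly.
alpha = rng.normal(size=n_states)[state]
x = rng.normal(size=state.size)
y = alpha + 2.0 * x

def within_ols(y, x, group):
    """Within (fixed effects) estimator: subtract group means, then OLS slope."""
    counts = np.bincount(group)
    yd = y - (np.bincount(group, weights=y) / counts)[group]
    xd = x - (np.bincount(group, weights=x) / counts)[group]
    return (xd @ yd) / (xd @ xd)

beta = within_ols(y, x, state)
```

Because the state effects are constant within each group, the demeaning removes them exactly, which is what lets fixed effects models absorb unobserved state-level heterogeneity.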
Abstract:
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model, normal base measures and Gibbs sampling procedures based on the Pólya urn scheme are often used to simulate posterior draws. These algorithms are applicable in the conjugate case when (for a normal base measure) the likelihood is normal. In the non-conjugate case, the algorithms proposed by MacEachern and Müller (1998) and Neal (2000) are often applied to generate posterior samples. Some common problems associated with simulation algorithms for non-conjugate MDP models include convergence and mixing difficulties. This paper proposes an algorithm based on the Pólya urn scheme that extends the Gibbs sampling algorithms to non-conjugate models with normal base measures and exponential family likelihoods. The algorithm proceeds by making Laplace approximations to the likelihood function, thereby reducing the procedure to that of conjugate normal MDP models. To ensure the validity of the stationary distribution in the non-conjugate case, the proposals are accepted or rejected by a Metropolis-Hastings step. In the special case where the data are normally distributed, the algorithm is identical to the Gibbs sampler.
Abstract:
The distribution of Cd and Pb in sea ice and in under-ice water of the Amur Bay at the end of February 1998 is considered. The metals were determined by stripping voltammetry. The contribution of Cd and Pb from atmospheric precipitation and from under-ice water to the sea ice examined is discussed. On the basis of the analysis of vertical distribution in ice, atmospheric fluxes supplying metals to the aquatic area of the bay are estimated at 100 and 2000 µg/m²/year for Cd and Pb, respectively. Concentrations of Cd and Pb found in the middle and lower parts of ice cores suggest that their accumulation relative to the main ions of seawater occurs in the ice. Estimated enrichment factors of Cd and Pb in sea ice relative to seawater are ~9 and ~5, respectively. A possible mechanism of relative metal accumulation in sea ice is considered.
Abstract:
We investigated the multivariate relationships between adipose tissue residue levels of 48 individual organohalogen contaminants (OHCs) and circulating thyroid hormone (TH) levels in polar bears (Ursus maritimus) from East Greenland (1999-2001, n = 62), using projection to latent structure (PLS) regression for four groupings of polar bears; subadults (SubA), adult females with cubs (AdF_N), adult females without cubs (AdF_S) and adult males (AdM). In the resulting significant PLS models for SubA, AdF_N and AdF_S, some OHCs were especially important in explaining variations in circulating TH levels: polybrominated diphenylether (PBDE)-99, PBDE-100, PBDE-153, polychlorinated biphenyl (PCB)-52, PCB-118, cis-nonachlor, trans-nonachlor, trichlorobenzene (TCB) and pentachlorobenzene (QCB), and both negative and positive relationships with THs were found. In addition, the models revealed that DDTs had a positive influence on total 3,5,3'-triiodothyronine (TT3) in AdF_S, and that a group of 17 higher chlorinated ortho-PCBs had a positive influence on total 3,5,3',5'-tetraiodothyronine (thyroxine, TT4) in AdF_N. TH levels in AdM seemed less influenced by OHCs because of non-significant PLS models. TH levels were also influenced by biological factors such as age, sex, body size, lipid content of adipose tissue and sampling date. When controlling for biological variables, the major relationships from the PLS models for SubA, AdF_N and AdF_S were found significant in partial correlations. The most important OHCs that influenced TH levels in the significant PLS models may potentially act through similar mechanisms on the hypothalamic-pituitary-thyroid (HPT) axis, suggesting that both combined effects by dose and response addition and perhaps synergistic potentiation may be a possibility in these polar bears. Statistical associations are not evidence per se of biological cause-effect relationships. 
Still, the results of the present study indicate that OHCs may affect circulating TH levels in East Greenland polar bears, adding to the "weight of evidence" suggesting that OHCs might interfere with thyroid homeostasis in polar bears.
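PLS regression of the kind used above projects many collinear contaminant measurements onto a few latent components chosen for their covariance with the response. A one-component PLS1 (NIPALS-style) sketch on invented data, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented stand-in: 60 "bears", 10 collinear "contaminant" predictors and one
# "hormone" response, all driven by a single latent factor (not the study's data).
n, p = 60, 10
latent = rng.normal(size=n)
X = np.outer(latent, rng.normal(size=p)) + 0.1 * rng.normal(size=(n, p))
y = 2.0 * latent + 0.1 * rng.normal(size=n)

def pls1_one_component(X, y):
    """One-component PLS1 (NIPALS-style): choose the X-direction with maximal
    covariance with y, then regress y on the resulting score."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)      # weight (projection) vector
    t = Xc @ w                  # scores: one latent value per observation
    q = (t @ yc) / (t @ t)      # regression coefficient of y on the score
    return w, t, q

w, t, q = pls1_one_component(X, y)
resid = y - y.mean() - q * t
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

The signs and magnitudes of the entries of `w` play the role of the variable importances that identify which contaminants drive the model, analogous to how individual OHCs were flagged in the abstract.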
Abstract:
Several chemical reactions are able to produce swelling of concrete for decades after its initial curing, a problem that affects a considerable number of concrete dams around the world. Principia has had several contracts to study this problem in recent years, which have required reviewing the state of the art, adopting appropriate mathematical descriptions, programming them into user routines in Abaqus, determining model parameters on the basis of some parts of the dams' monitored histories, checking reliability against other parts, and finally predicting the future evolution of the dams and their safety margins. The paper describes some of the above experience, including the programming of sophisticated non-isotropic swelling models, which must be compatible with cracking and the other nonlinearities involved in concrete behaviour. The applications concentrate on two specific cases, an arch-gravity dam and a double-curvature arch dam, both with a long history of concrete swelling and which, interestingly, entailed different degrees of success in the modelling efforts.
Abstract:
The present study investigates the potential use of a non-catalyzed, water-soluble blocked polyurethane prepolymer (PUP) as a bifunctional cross-linker for collagenous scaffolds. The effects of concentration (5, 10, 15 and 20%), time (4, 6, 12 and 24 h), medium volume (50, 100, 200 and 300%) and pH (7.4, 8.2, 9 and 10) on the stability, microstructure and tensile mechanical behavior of acellular pericardial matrix were studied. The cross-linking index increased by up to 81% and the denaturation temperature increased by up to 12 °C after PUP cross-linking. The PUP-treated scaffold resisted collagenase degradation (0.167 ± 0.14 mmol/g of liberated amine groups vs. 598 ± 60 mmol/g for the non-cross-linked matrix). The collagen fiber network was coated with PUP, while viscoelastic properties were altered after cross-linking. The treatment of the pericardial scaffold with PUP allows (i) different cross-linking densities depending on the process parameters and (ii) tensile properties similar to those obtained with the glutaraldehyde method.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The aim of this work was to synthesise a series of hydrophilic derivatives of cis-1,2-dihydroxy-3,5-cyclohexadiene (cis-DHCD) and copolymerise them with 2-hydroxyethyl methacrylate (HEMA), to produce a completely new range of hydrogel materials. It is theorised that hydrogels incorporating such derivatives of cis-DHCD will exhibit good strength and elasticity in addition to good water-binding ability. The synthesis of derivatives was attempted by both enzymatic and chemical methods. Enzymatic synthesis involved the transesterification of cis-DHCD with a number of trichloroethyl and trifluoroethyl esters, using porcine pancreatic lipase to catalyse the reaction in organic solvent. Cyclohexanol was used in initial studies to assess the viability of enzyme-catalysed reactions. Chemical synthesis involved the epoxidation of a number of unsaturated carboxylic acids and the subsequent reaction of these epoxy acids with cis-DHCD in DCC/DMAP-catalysed esterifications. The silylation of cis-DHCD using TBDCS and BSA was also studied. The rate of aromatisation of cis-DHCD at room temperature was studied in order to assess its stability, and 1H NMR studies were also undertaken to determine the conformations adopted by derivatives of cis-DHCD. The copolymerisation of the diepoxybutanoate, diepoxyundecanoate, dibutenoate and silyl-protected derivatives of cis-DHCD with HEMA, to produce a new group of hydrogels, was investigated. The EWC and mechanical properties of these hydrogels were measured, and DSC was used to determine the amount of freezing and non-freezing water in the membranes. The effect on EWC of opening the epoxide rings of the comonomers was also investigated.
Abstract:
This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the European Union (EU) over a 21-year period (not all member states of the EU could be included, owing to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was to provide an understanding of how telecoms in the EU have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage, a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research.
The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed, alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has reached or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted mainly a proceduralist model, some have implemented more of a substantive model, and others have adopted a mix between both. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix between both approaches.
Abstract:
This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
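As an illustration of the kernel side of such a comparison, the sketch below fits a one-step-ahead forecast by kernel ridge regression, a batch analogue of the kernel recursive least squares predictor named in the abstract, on an invented nonlinear AR(1) series, and compares it with the naive random-walk benchmark:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented stand-in for an inflation series: nonlinear AR(1) plus noise.
T = 300
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * np.tanh(2.0 * y[t - 1]) + 0.1 * rng.normal()

def rbf(a, b, gamma=2.0):
    """Gaussian (RBF) kernel matrix between two 1-D sample vectors."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def kernel_ridge_forecast(y, lam=1e-2):
    """Fit y[t+1] as a kernel function of y[t] (batch analogue of kernel
    recursive least squares), then forecast the final observation."""
    x_tr, y_tr = y[:-2], y[1:-1]                      # pairs (y_t, y_{t+1})
    K = rbf(x_tr, x_tr)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_tr)), y_tr)
    return (rbf(y[-2:-1], x_tr) @ alpha).item()       # predict y[-1]

pred = kernel_ridge_forecast(y)
naive = y[-2]                       # random-walk benchmark: no change
err_krr = abs(pred - y[-1])
err_rw = abs(naive - y[-1])
```

A recursive (online) version would update `alpha` observation by observation instead of solving the full system, which is what makes kernel recursive least squares a finite-memory predictor in practice.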