44 results for Flavor symmetry
Abstract:
A smooth map is said to be stable if small perturbations of the map only differ from the original one by a smooth change of coordinates. Smoothly stable maps are generic among the proper maps between given source and target manifolds when the source and target dimensions belong to the so-called nice dimensions, but outside this range of dimensions, smooth maps cannot generally be approximated by stable maps. This leads to the definition of topologically stable maps, where the smooth coordinate changes are replaced with homeomorphisms. The topologically stable maps are generic among proper maps for any dimensions of source and target. The purpose of this thesis is to investigate methods for proving topological stability by constructing extremely tame (E-tame) retractions onto the map in question from one of its smoothly stable unfoldings. In particular, we investigate how to use E-tame retractions from stable unfoldings to find topologically ministable unfoldings for certain weighted homogeneous maps or germs. Our first results are concerned with the construction of E-tame retractions and their relation to topological stability. We study how to construct the E-tame retractions from partial or local information, and these results form our toolbox for the main constructions. In the next chapter we study the group of right-left equivalences leaving a given multigerm f invariant, and show that when the multigerm is finitely determined, the group has a maximal compact subgroup and that the corresponding quotient is contractible. This means, essentially, that the group can be replaced with a compact Lie group of symmetries without much loss of information. We also show how to split the group into a product whose components only depend on the monogerm components of f. In the final chapter we investigate representatives of the E- and Z-series of singularities, discuss their instability and use our tools to construct E-tame retractions for some of them. The construction is based on describing the geometry of the set of points where the map is not smoothly stable, and on the observation that, using induction and our construction tools, we already know how to construct local E-tame retractions along this set. The local solutions can then be glued together using our knowledge about the symmetry group of the local germs. We also discuss how to generalize our method to the whole E- and Z-series.
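For orientation, the notion of stability underlying the discussion can be stated as a standard formula (a textbook formulation, not quoted from the thesis): a proper smooth map f: N -> P is smoothly stable if every map g sufficiently close to f satisfies
\[
  g = \psi \circ f \circ \varphi^{-1}
\]
for some diffeomorphisms \varphi of N and \psi of P; topological stability requires only that \varphi and \psi be homeomorphisms.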
Abstract:
The module of a quadrilateral is a positive real number which divides quadrilaterals into conformal equivalence classes. This is an introductory text to the module of a quadrilateral with some historical background and some numerical aspects. This work discusses the following topics: 1. Preliminaries; 2. The module of a quadrilateral; 3. The Schwarz-Christoffel mapping; 4. Symmetry properties of the module; 5. Computational results; 6. Other numerical methods. The appendices include the numerical evaluation of elliptic integrals of the first kind, Matlab programs and scripts, and possible topics for future research. The numerical results section covers additive quadrilaterals and the module of a quadrilateral under the movement of one of its vertices.
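As a taste of the numerical aspects, the complete elliptic integral of the first kind treated in the appendices can be evaluated by direct quadrature; the short sketch below is illustrative only (it is not one of the thesis's Matlab scripts) and assumes the standard definition K(k) = integral from 0 to pi/2 of dtheta / sqrt(1 - k^2 sin^2 theta):

import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

def K(k):
    # Complete elliptic integral of the first kind by direct numerical quadrature.
    integrand = lambda theta: 1.0 / np.sqrt(1.0 - (k * np.sin(theta)) ** 2)
    value, _ = quad(integrand, 0.0, np.pi / 2.0)
    return value

k = 0.5
print(K(k))            # quadrature result
print(ellipk(k ** 2))  # scipy's ellipk takes the parameter m = k^2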
Abstract:
The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows the subscribers to express their interests very accurately. In order to implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network. A content-based network is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually data structures based on partially ordered sets (posets). In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
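To illustrate the content-based interaction scheme (a hypothetical sketch with invented attribute names, not the algorithm presented in this work), a subscription can be modeled as a set of attribute constraints, and a content-based router forwards a notification only to subscribers whose constraints the notification satisfies:

# Hypothetical illustration of content-based matching.
OPS = {
    "=":  lambda a, b: a == b,
    "<":  lambda a, b: a < b,
    ">":  lambda a, b: a > b,
    "<=": lambda a, b: a <= b,
    ">=": lambda a, b: a >= b,
}

def matches(notification, subscription):
    # A notification (attribute -> value) matches a subscription
    # (attribute -> (operator, bound)) if every constraint is satisfied.
    for attr, (op, bound) in subscription.items():
        if attr not in notification or not OPS[op](notification[attr], bound):
            return False
    return True

sub = {"type": ("=", "stock-quote"), "price": ("<", 50.0)}
event = {"type": "stock-quote", "symbol": "XYZ", "price": 42.5}
print(matches(event, sub))  # True: the router forwards the event to this subscriber

Poset-based routing tables organize subscriptions under a covering relation of this kind, so that a subscription covered by a broader one need not be propagated separately.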
The Mediated Immediacy: João Batista Libanio and the Question of Latin American Liberation Theology
Abstract:
This study is a systematic analysis of mediated immediacy in the production of the Brazilian professor of theology João Batista Libanio. He stresses both ethical mediation and the immediate character of the faith. Libanio has sought an answer to the problem of science and faith. He makes use of the neo-scholastic distinction between matter and form. According to St. Thomas Aquinas, God cannot be known as a scientific object, but it is possible to predicate a formal theological content of other subject matter with the help of revelation. This viewpoint was emphasized in neo-Thomism and supported by the liberation theologians. For them, the material starting point was social science. It becomes a theologizable or revealable (revelabile) reality. This social science has its roots in Latin American Marxism, which was influenced by the school of Louis Althusser and considered Marxism a science of history. The synthesis of Thomism and Marxism is a challenge Libanio faced, especially in his Teologia da libertação from 1987. He emphasized the need for a genuinely spiritual and ethical discernment, and was particularly critical of the ethical implications of class struggle. Libanio's thinking has a strong hermeneutic flavor. It is more important to understand than to explain. He does not deny the need for social scientific data, but maintains that they cannot be the exclusive starting point of theology. There are different readings of the world, both scientific and theological. A holistic understanding of the nature of religious experience is needed. Libanio follows the interpretation given by H. C. de Lima Vaz, according to whom the Hegelian dialectic is a rational circulation between the totality and its parts. He also recalls Oscar Cullmann's idea of God's Kingdom that is already and not yet. In other words, there is a continuous mediation of grace into the natural world. This dialectic is reflected in ethics. Faith must be verified in good works. Libanio uses the Thomist fides caritate formata principle and the modern orthopraxis thinking represented by Edward Schillebeeckx. One needs both the ortho of good faith and the praxis of the right action. The mediation of praxis is the mediation of human and divine love. Libanio's theology has strong roots in the Jesuit spirituality that places the emphasis on contemplation in action.
Abstract:
Symmetry is a key principle in viral structures, especially in the protein capsid shells. However, symmetry mismatches are very common, and often correlate with dynamic functionality of biological significance. The three-dimensional structures of two isometric viruses, bacteriophage phi8 and the archaeal virus SH1, were reconstructed using electron cryo-microscopy. Two image reconstruction methods were used: the classical icosahedral method yielded high resolution models for the symmetrical parts of the structures, and a novel asymmetric in-situ reconstruction method allowed us to resolve the symmetry mismatches at the vertices of the viruses. Evidence was found that the hexameric packaging enzyme at the vertices of phi8 does not rotate relative to the capsid. The large two-fold symmetric spikes of SH1 were found not to be responsible for infectivity. Both virus structures provided insight into the evolution of viruses. Comparison of the phi8 polymerase complex capsid with those of phi6 and other dsRNA viruses suggests that the quaternary structure in dsRNA bacteriophages differs from that of other dsRNA viruses. SH1 is unusual because there are two major types of capsomers building up the capsid, both of which seem to be composed mainly of single beta-barrels perpendicular to the capsid surface. This indicates that the beta-barrel may be ancestral to the double beta-barrel fold.
Abstract:
Angiosperms represent a huge diversity in floral structures. Thus, they provide an attractive target for comparative developmental genetics studies. Research on flower development has focused on a few main model plants, and studies on these species have revealed the importance of transcription factors, such as MADS-box and TCP genes, for regulating the floral form. The MADS-box genes determine floral organ identities, whereas the TCP genes are known to regulate flower shape and the number of floral organs. In this study, I have concentrated on these two gene families and their role in regulating flower development in Gerbera hybrida, a species belonging to the large sunflower family (Asteraceae). The Gerbera inflorescence is composed of hundreds of tightly clustered flowers that differ in their size, shape and function according to their position in the inflorescence. The presence of distinct flower types sets Gerbera apart from the common model species, which bear only a single kind of flower in their inflorescences. The marginally located ray flowers have large bilaterally symmetrical petals and non-functional stamens. The centrally located disc flowers are smaller, have less pronounced bilateral symmetry and carry functional stamens. Early stages of flower development were studied in Gerbera to better understand the differentiation of flower types. After morphological analysis, we compared gene expression between ray and disc flowers to reveal transcriptional differences between flower types. Interestingly, MADS-box genes showed differential expression, suggesting that they might take part in defining flower types by forming flower-type-specific regulatory complexes. Functional analysis of a CYCLOIDEA-like TCP gene, GhCYC2, provided evidence that TCP transcription factors are involved in flower type differentiation in Gerbera. The expression of GhCYC2 is ray-flower-specific at early stages of development and activated only later in disc flowers. Overexpression of GhCYC2 in transgenic Gerbera lines causes disc flowers to obtain ray-flower-like characters, such as elongated petals and disrupted stamen development. The expression pattern and transgenic phenotypes further suggest that GhCYC2 may shape ray flowers by promoting organ fusion. Cooperation of GhCYC2 with other Gerbera CYC-like TCP genes is most likely needed for proper flower type specification, and by this means for shaping the elaborate inflorescence structure. Gerbera flower development was also approached by characterizing B class MADS-box genes, which in the main model plants are known regulators of petal and stamen identity. The four Gerbera B class genes were phylogenetically grouped into three clades: GGLO1 into the PI/GLO clade, GDEF2 and GDEF3 into the euAP3 clade, and GDEF1 into the TM6 clade. Putative orthologs for GDEF2 and GDEF3 were identified in other Asteraceae species, which suggests that they appeared through an Asteraceae-specific duplication. Functional analyses indicated that GGLO1 and GDEF2 perform conventional B-function, as they determine petal and stamen identities. Our studies on GDEF1 represent the first functional analysis of a TM6-like gene outside the Solanaceae lineage and provide further evidence for the role of TM6 clade members in specifying stamen development. Overall, the Gerbera B class genes showed both commonalities and diversifications with respect to the conventional B-function described in the main model plants.
Abstract:
Microneurovascular free muscle transfer with cross-over nerve grafts in facial reanimation. Loss of facial symmetry and mimetic function as seen in facial paralysis has an enormous impact on the psychosocial conditions of the patients. Patients with severe long-term facial paralysis are often reanimated with a two-stage procedure combining cross-facial nerve grafting and, 6 to 8 months later, microneurovascular (MNV) muscle transfer. In this thesis, we recorded the long-term results of MNV surgery in facial paralysis and examined the possible contributing factors to the final functional and aesthetic outcome after this procedure. Twenty-seven out of forty patients operated on were interviewed, and the functional outcome was graded. Magnetic resonance imaging (MRI) of the MNV muscle flaps was performed, nerve graft samples (n=37) were obtained in the second stage of the operation, and muscle biopsies (n=18) were taken during secondary operations. The structure of the MNV muscles and nerve grafts was evaluated using histological and immunohistochemical methods (Ki-67, anti-myosin fast, S-100, NF-200, CD-31, p75NGFR, VEGF, Flt-1, Flk-1). Statistical analysis was performed. In our studies, we found that almost two-thirds of the patients achieved a good result in facial reanimation. The longer the follow-up time after muscle transfer, the weaker the muscle function. A majority of the patients (78%) considered their quality of life improved after surgery. In the MRI study, the free MNV flaps were significantly smaller than their original size. A correlation was found between good functional outcome and normal muscle structure in MRI. In the muscle biopsies, the mean muscle fiber diameter was diminished to 40% of the control values. Proliferative activity of satellite cells was seen in 60% of the samples, and it tended to decline with increasing follow-up time. All samples showed intramuscular innervation. Severe muscle atrophy correlated with prolonged intraoperative ischaemia. Good long-term functional outcome correlated with dominance of fast fibers in the muscle grafts. In the nerve grafts, the mean number of viable axons amounted to 38% of that in the control samples. The grafted nerves were characterized by fibrosis, and the regenerated axons were thinner than in the control samples although they were well vascularized. A longer time between cross-facial nerve grafting and biopsy sampling correlated with a higher number of viable axons. P75 nerve growth factor receptor (p75NGFR) was expressed in every nerve graft sample. The expression of p75NGFR was lower in older than in younger patients. A high expression of p75NGFR was often seen with better function of the transplanted muscle. In the grafted nerves, vascular endothelial growth factor (VEGF) and its receptors were expressed in the nervous tissue. In conclusion, most of the patients achieved a good result in facial reanimation and were satisfied with the functional outcome. The mimetic function was poorer in patients with longer follow-up times. MRI can be used to evaluate the structure of the microneurovascular muscle flaps. Regeneration of the muscle flaps was still ongoing many years after the transplantation, and reinnervation was seen in all muscle samples. Grafted nerves were characterized by fibrosis and fewer, thinner axons compared to control nerves, although they were well vascularized. P75NGFR and VEGF were expressed in human nerve grafts with higher intensity than in control nerves, which is described here for the first time.
Abstract:
This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of the hadronic tau lepton decays, i.e. tau-jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, the tau-jet identification is the single most important event selection criterion to separate the tiny Higgs boson signal from a large number of background events. The tau-jet identification is studied with methods based on a signature of a low charged track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from the W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons. The Higgs boson(s), whose existence has not yet been experimentally verified, are part of the standard model and its most popular extensions. They are a manifestation of a mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe for finding out properties of the microcosm of particles and their interactions in the energy scales beyond the standard model of particle physics.
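As a rough illustration of how a cut-based identification of this kind combines several discriminating variables (the thresholds and variable names below are invented for the example and are not the CMS selection values):

from dataclasses import dataclass

@dataclass
class TauJetCandidate:
    n_charged_tracks: int        # charged track multiplicity in the signal cone
    signal_cone_fraction: float  # fraction of jet energy inside the narrow cone
    em_isolation: float          # electromagnetic energy in the isolation annulus (GeV)
    flight_path_sig: float       # significance of the reconstructed flight path
    has_electron_or_muon: bool   # leptonic-decay veto
    visible_mass: float          # invariant mass of the visible decay products (GeV)

def passes_tau_id(c: TauJetCandidate) -> bool:
    # Illustrative cut-based tau-jet identification with invented thresholds.
    return (c.n_charged_tracks in (1, 3)
            and c.signal_cone_fraction > 0.9
            and c.em_isolation < 1.0
            and c.flight_path_sig > 3.0
            and not c.has_electron_or_muon
            and c.visible_mass < 1.8)

print(passes_tau_id(TauJetCandidate(1, 0.95, 0.4, 4.2, False, 0.9)))  # True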
Abstract:
Inelastic x-ray scattering can be used to study the electronic structure of matter. The x rays scattered from the target both induce and carry information on the electronic excitations taking place in the system. These excitations are the manifestations of the electronic structure and the physics governing the many-body system. This work presents results of non-resonant inelastic x-ray scattering experiments on a range of materials including metallic, insulating and semiconducting compounds as well as an organic polymer. The experiments were carried out at the National Synchrotron Light Source, USA and at the European Synchrotron Radiation Facility, France. The momentum transfer dependence of the experimental valence- and core-electron excitation spectra is compared with the results of theoretical first principles computations that incorporate the electron-hole interaction. A recently developed method for analyzing the momentum transfer dependence of core-electron excitation spectra is studied in detail. This method is based on real space multiple scattering calculations and is used to extract the angular symmetry components of the local unoccupied density of final states.
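For context, the non-resonant scattering cross section factorizes in the standard way into the Thomson cross section and the dynamic structure factor (a textbook relation, not quoted from the thesis),
\[
  \frac{d^2\sigma}{d\Omega\, d\omega}
  = \left(\frac{d\sigma}{d\Omega}\right)_{\mathrm{Th}} S(\mathbf{q}, \omega),
\]
so that spectra recorded at different momentum transfers q probe the excitations encoded in S(q, ω).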
Abstract:
Acceleration of the universe has been established but not explained. During the past few years precise cosmological experiments have confirmed the standard big bang scenario of a flat universe undergoing an inflationary expansion in its earliest stages, during which the perturbations are generated that eventually grow into galaxies and other structure in matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered into another period of acceleration. Such a result is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. The vacuum energy density that the constant describes should then be about a dozen times the present energy density in visible matter, but particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration. Generically, such agents are called dark energy. If some symmetry turns off the vacuum energy, its value is no longer a problem, but one still needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled by just dark matter and baryons. Such possibilities are taken here under investigation. The main goal is to uncover observational consequences of different models of dark energy, the emphasis being on their implications for the formation of large-scale structure of the universe. Possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is devoted to modifications of gravity resulting in second order field equations. It is shown that in a general class of such models the viable ones represent effectively the cosmological constant, while from another class one might find interesting modifications of the standard cosmological scenario yet allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
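The connection between negative pressure and acceleration can be made explicit with the standard acceleration equation of a homogeneous and isotropic universe (a textbook relation, included here for context):
\[
  \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3},
\]
so accelerated expansion requires either a positive cosmological constant or a component with sufficiently negative pressure, p < -\rho/3.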
Abstract:
When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, strongly interacting quark-gluon plasma. While the color charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions, and confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high temperature dynamics of static observables of the full four-dimensional theory are described using a simpler three-dimensional effective theory, having only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. In addition to this, numerical lattice simulations have, however, shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, the effective theory, however, ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them. In the introductory part of the thesis, the status of dimensionally reduced effective theories of high temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak coupling expansion of the pressure of QCD is computed in the effective theory framework at an arbitrary number of colors. The two last papers, on the other hand, focus on the construction of the center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show, in sharp contrast to the perturbative setup, that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.
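For orientation, the center symmetry in question acts on the Polyakov loop; in the standard formulation (not specific to this thesis), for SU(N) the loop
\[
  L(\mathbf{x}) = \frac{1}{N}\,\mathrm{Tr}\,\mathcal{P}
  \exp\!\left( i g \int_0^{1/T} d\tau\, A_0(\tau, \mathbf{x}) \right)
\]
transforms as L -> z L with z an element of the center Z(N), and its expectation value serves as an order parameter for deconfinement, vanishing in the confined phase and non-zero in the deconfined phase. An effective theory that breaks this symmetry by construction therefore cannot describe the transition region correctly.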
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of 2 T_c.
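The quark number susceptibilities referred to above are conventionally defined as second derivatives of the pressure with respect to the quark chemical potentials (standard definition, quoted for context):
\[
  \chi_{fg}(T) = \left. \frac{\partial^2 p(T, \mu)}{\partial \mu_f\, \partial \mu_g} \right|_{\mu = 0},
\]
where f = g gives the diagonal and f ≠ g the off-diagonal susceptibilities.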
Abstract:
When heated to high temperatures, the behavior of matter changes dramatically. The standard model fields go through phase transitions, where the strongly interacting quarks and gluons are liberated from their confinement to hadrons, and the Higgs field condensate melts, restoring the electroweak symmetry. The theoretical framework for describing matter at these extreme conditions is thermal field theory, combining relativistic field theory and quantum statistical mechanics. For static observables the physics is simplified at very high temperatures, and an effective three-dimensional theory can be used instead of the full four-dimensional one via a method called dimensional reduction. In this thesis dimensional reduction is applied to two distinct problems, the pressure of electroweak theory and the screening masses of mesonic operators in quantum chromodynamics (QCD). The introductory part contains a brief review of finite-temperature field theory, dimensional reduction and the central results, while the details of the computations are contained in the original research papers. The electroweak pressure is shown to converge well to a value slightly below the ideal gas result, whereas the pressure of the full standard model is dominated by the QCD pressure with worse convergence properties. For the mesonic screening masses a small positive perturbative correction is found, and the interpretation of dimensional reduction on the fermionic sector is discussed.
Abstract:
We present a measurement of the electric charge of the top quark using ppbar collisions corresponding to an integrated luminosity of 2.7 fb^-1 at the CDF II detector. We reconstruct ttbar events in the lepton+jets final state and use kinematic information to determine which b-jet is associated with the leptonically or hadronically decaying t-quark. Soft lepton taggers are used to determine the b-jet flavor. Along with the charge of the W boson decay lepton, this information permits the reconstruction of the top quark's electric charge. Out of 45 reconstructed events with 2.4 ± 0.8 expected background events, 29 are reconstructed as ttbar with the standard model +2/3 charge, whereas 16 are reconstructed as ttbar with an exotic -4/3 charge. This is consistent with the standard model and excludes the exotic scenario at 95% confidence level. This is the strongest exclusion of the exotic charge scenario and the first to use soft leptons for this purpose.
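The two charge hypotheses follow from simple charge bookkeeping once the decay lepton is paired with a b-jet (the arithmetic below is standard, not quoted from the abstract):
\[
  Q_t = Q_W + Q_b:\qquad
  t \to W^+ b:\ +1 - \tfrac{1}{3} = +\tfrac{2}{3},
  \qquad
  \text{exotic pairing } W^- b:\ -1 - \tfrac{1}{3} = -\tfrac{4}{3}.
\]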
Abstract:
The cross section for jets from b quarks produced with a W boson has been measured in ppbar collision data from 1.9 fb^-1 of integrated luminosity recorded by the CDF II detector at the Tevatron. The W+b-jets process poses a significant background in measurements of top quark production and in prominent searches for the Higgs boson. We measure a b-jet cross section of 2.74 ± 0.27 (stat.) ± 0.42 (syst.) pb in association with a single flavor of leptonic W boson decay over a limited kinematic phase space. This measured result is not accommodated by several available theoretical predictions.