521 results for ERS


Relevance: 10.00%

Abstract:

Background: Exercise referral schemes (ERS) aim to identify inactive adults in the primary care setting. The primary care professional refers the patient to a third-party service, which takes responsibility for prescribing and monitoring an exercise programme tailored to the needs of the patient. This paper examines the cost-effectiveness of ERS in promoting physical activity compared with usual care in a primary care setting.

Methods: A decision-analytic model was developed to estimate the cost-effectiveness of ERS from a UK NHS perspective. The costs and outcomes of ERS were modelled over the patient's lifetime. Data were derived from a systematic review of the literature on the clinical and cost-effectiveness of ERS and on parameter inputs for the modelling framework. Outcomes were expressed as incremental cost per quality-adjusted life-year (QALY). Deterministic and probabilistic sensitivity analyses investigated the impact of varying ERS cost and effectiveness assumptions. Sub-group analyses explored the cost-effectiveness of ERS in sedentary people with an underlying condition.

Results: Compared with usual care, the mean incremental lifetime cost per patient for ERS was £169 and the mean incremental QALY gain was 0.008, giving a base-case incremental cost-effectiveness ratio (ICER) of £20,876 per QALY for ERS in sedentary individuals without a diagnosed medical condition. There was a 51% probability that ERS was cost-effective at £20,000 per QALY and an 88% probability that it was cost-effective at £30,000 per QALY. In sub-group analyses, the cost per QALY for ERS in sedentary obese individuals was £14,618, and in sedentary individuals with hypertension and sedentary individuals with depression the estimated cost per QALY was £12,834 and £8,414 respectively. Incremental lifetime costs and benefits associated with ERS were small, reflecting the preventative public health context of the intervention; as a result, the estimates of cost-effectiveness are sensitive to variations in the relative risk of becoming physically active and in the cost of ERS.

Conclusions: ERS is associated with a modest increase in lifetime costs and benefits. The cost-effectiveness of ERS is highly sensitive to small changes in the effectiveness and cost of ERS and is subject to significant uncertainty, mainly due to limitations in the clinical effectiveness evidence base.
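As a quick orientation to the headline numbers, the base-case ICER follows directly from the reported incremental cost and QALY gain. The sketch below (plain Python, using the rounded figures quoted in the abstract, which is why it does not reproduce the published £20,876 exactly) illustrates the calculation and the threshold comparison:

    # Illustrative ICER calculation from the rounded incrementals quoted above.
    # The published base case (GBP 20,876/QALY) was computed from unrounded model output.
    incremental_cost = 169.0       # GBP per patient, ERS vs usual care
    incremental_qalys = 0.008      # QALYs gained per patient

    icer = incremental_cost / incremental_qalys
    print(f"ICER ~ GBP {icer:,.0f} per QALY")          # ~21,125 with these rounded inputs

    for threshold in (20_000, 30_000):                 # willingness-to-pay thresholds
        verdict = "cost-effective" if icer <= threshold else "not cost-effective"
        print(f"at GBP {threshold:,}/QALY the point estimate is {verdict}")

The 51% and 88% probabilities quoted above come from repeating this comparison over probabilistic draws of the model parameters (a cost-effectiveness acceptability analysis), not from the single point estimate.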

Relevance: 10.00%

Abstract:

Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g. due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness to pay increases with residential or recreational contact with the gulf, higher-than-average income, younger-than-average age, and the absence of dependent children in the household. On average, the most important characteristic of water quality for Finns is water clarity, followed by fewer occurrences of blue-green algae. For future nutrient reduction scenarios, the annual mean household willingness-to-pay estimates range from 271 to 448 euros and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either "Certain" or "Quite certain" about their answer when choosing the preferred alternative in the experiment. Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of the estimates, depending on the chosen modeling approach. Introducing uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate increased credibility of the proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
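For readers unfamiliar with the method, marginal willingness to pay in a conditional (multinomial) logit choice model is usually obtained as the ratio of an attribute coefficient to the cost coefficient. The sketch below uses purely hypothetical coefficients, not the estimates from the thesis:

    # Hypothetical conditional-logit coefficients for a water-quality choice experiment.
    # None of these numbers come from the thesis; they only illustrate the WTP formula.
    beta = {
        "water_clarity":   0.65,    # utility per one-step improvement in clarity
        "bluegreen_algae": 0.40,    # utility per one-step reduction in algal occurrences
        "cost":           -0.002,   # utility per euro of annual household payment
    }

    # Marginal WTP for an attribute = -(attribute coefficient) / (cost coefficient)
    for attr in ("water_clarity", "bluegreen_algae"):
        wtp = -beta[attr] / beta["cost"]
        print(f"WTP for {attr}: {wtp:.0f} EUR per household per year")

A mixed logit model extends this by letting the non-cost coefficients vary randomly across respondents, which is one standard way to capture the preference heterogeneity discussed above.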

Relevance: 10.00%

Abstract:

Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. The merging of these two complementary aspects of nature, quantum and gravity, is one of the greatest goals of modern fundamental physics. Achieving it would help us understand the short-distance structure of spacetime, thus shedding light on the events in the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, troublesome divergences to which our previously well-developed renormalization methods for quantum field theories do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this context, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem. The final solution remains a challenge for the future.
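For concreteness, "noncommutative spacetime" is most often made precise through the canonical (Moyal-type) relations below; this is the standard textbook formulation, not necessarily the exact setting of the new results mentioned above:

    [\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu}, \qquad \theta^{\mu\nu} \ \text{a constant antisymmetric matrix},

    (f \star g)(x) = f(x)\, \exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,\theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x).

Here the star product replaces the ordinary pointwise product of fields; expanding the exponential shows that interaction vertices acquire momentum-dependent phase factors, which is the technical origin of the UV/IR mixing problem.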

Relevance: 10.00%

Abstract:

The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in recent years. Both experimental and computational methods have been used in the studies. One method for studying the intra- and intermolecular bindings in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of photon scattering at a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions necessary to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
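To make the quantity concrete: within the commonly used impulse approximation the Compton profile, and the difference profile compared between systems, can be written as follows (standard definitions; the exact conventions of the thesis may differ):

    J(p_z) = \iint n(\mathbf{p})\, \mathrm{d}p_x\, \mathrm{d}p_y,
    \qquad
    \Delta J(p_z) = J_{\text{mixture}}(p_z) - \sum_i x_i\, J_i(p_z),

where n(p) is the electron momentum density computed from the electronic wave functions, J_i are the profiles of the pure components and x_i their fractions; changes in intra- and intermolecular bonding show up as small but systematic features in the difference profile.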

Relevance: 10.00%

Abstract:

One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce such an asymmetry? One theory that addresses this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the Standard Model. This addition introduces explicit lepton number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If these decays are CP-violating, they produce a net lepton number. This lepton number is then partially converted to baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce Sakharov's conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillations and explain why they require the existence of neutrino mass. We introduce the different kinds of mass terms which can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed mass scales for neutrinos, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing it and give analytical approximations to them. Finally we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions involving both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant to the theory.
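A commonly quoted schematic form of the leptogenesis Boltzmann equations, written in the variable z = M_1/T (the conventions used in the thesis may differ in detail), is:

    \frac{\mathrm{d}N_{N_1}}{\mathrm{d}z} = -(D + S)\,\big(N_{N_1} - N_{N_1}^{\mathrm{eq}}\big),
    \qquad
    \frac{\mathrm{d}N_{B-L}}{\mathrm{d}z} = -\varepsilon_1\, D\,\big(N_{N_1} - N_{N_1}^{\mathrm{eq}}\big) - W\, N_{B-L},

where D, S and W are decay, scattering and washout terms and \varepsilon_1 is the CP asymmetry in the decays of the lightest heavy neutrino N_1; the sphaleron process then converts part of the B-L asymmetry into baryon number, roughly N_B \simeq (28/79)\, N_{B-L} in the Standard Model.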

Relevance: 10.00%

Abstract:

Dimeric phenolic compounds, lignans and dilignols, form in the so-called oxidative coupling reaction of phenols. Enzymes such as peroxidases and laccases catalyze the reaction using hydrogen peroxide or oxygen, respectively, as the oxidant, generating phenoxy radicals which couple together according to certain rules. In this thesis, the effects of the structures of the starting materials, monolignols, and the effects of reaction conditions such as pH and solvent system on this coupling mechanism and on its regio- and stereoselectivity have been studied. After the primary coupling of two phenoxy radicals a very reactive quinone methide intermediate is formed. This intermediate reacts quickly with a suitable nucleophile, which can be, for example, an intramolecular hydroxyl group or another nucleophile such as water, methanol, or a phenolic compound in the reaction system. This reaction is catalyzed by acids. After the nucleophilic addition to the quinone methide, other hydrolytic reactions, rearrangements, and elimination reactions occur, leading finally to stable dimeric structures called lignans or dilignols. Similar reactions occur also in the so-called lignification process, when a monolignol (or dilignol) reacts with the growing lignin polymer. New kinds of structures have been observed in this thesis. Dimeric compounds with a so-called spirodienone structure have been observed to form both in the dehydrodimerization of methyl sinapate and in the beta-1-type cross-coupling reaction of two different monolignols. This beta-1-type dilignol with a spirodienone structure was the first synthesized and published dilignol model compound, and at present it is recognized as a fundamental structural unit in lignins. The enantioselectivity of the oxidative coupling reaction was also studied with the aim of obtaining enantiopure lignans and dilignols. Rather good enantioselectivity was obtained in the oxidative coupling reaction of two monolignols with chiral auxiliary substituents using peroxidase/H2O2 as the oxidation system. This observation was published as one of the first enantioselective oxidative coupling reactions of phenols. Pure enantiomers of lignans were also obtained by using chiral cryogenic chromatography as a chiral resolution technique. This technique was shown to be an alternative route for preparing enantiopure lignans or lignin model compounds on a preparative scale.

Relevance: 10.00%

Abstract:

The importance of intermolecular interactions to chemistry, physics, and biology is difficult to overestimate. Without intermolecular forces, condensed-phase matter could not form. The simplest way to categorize different types of intermolecular interactions is to describe them as van der Waals and hydrogen-bonded (H-bonded) interactions. In the H-bond, the intermolecular interaction appears between a positively charged hydrogen atom and an electronegative fragment, and it originates from strong electrostatic interactions. H-bonding is important when considering the properties of condensed-phase water and in many biological systems, including the structures of DNA and proteins. Vibrational spectroscopy is a useful tool for studying complexes and the solvation of molecules. The vibrational frequency shift has been used to characterize complex formation. In an H-bonded system A∙∙∙H-X (A and X are the acceptor and donor species, respectively), the vibrational frequency of the H-X stretching vibration usually decreases from its value in free H-X (red-shift). This frequency shift has been used as evidence for H-bond formation, and the magnitude of the shift has been used as an indicator of the H-bonding strength. In contrast to this normal behavior are the blue-shifting H-bonds, in which the H-X vibrational frequency increases upon complex formation. In the last decade, there has been active discussion regarding these blue-shifting H-bonds. Noble gases have been considered inert due to their limited reactivity with other elements. In the early 1930s, Pauling predicted the stable noble-gas compounds XeF6 and KrF6. It was not until three decades later, in 1962, that Neil Bartlett synthesized the first noble-gas compound, XePtF6. A renaissance of noble-gas chemistry began in 1995 with the discovery of noble-gas hydride molecules at the University of Helsinki. The first hydrides were HXeCl, HXeBr, HXeI, HKrCl, and HXeH. These molecules have the general formula HNgY, where H is a hydrogen atom, Ng is a noble-gas atom (Ar, Kr, or Xe), and Y is an electronegative fragment. At present, this class of molecules comprises 23 members, including both inorganic and organic compounds. The first and only argon-containing neutral chemical compound, HArF, was synthesized in 2000 and its properties have since been investigated in a number of studies. A helium-containing chemical compound, HHeF, was predicted computationally, but its lifetime has been predicted to be severely limited by hydrogen tunneling. Helium and neon are the only elements in the periodic table that do not form neutral, ground-state molecules. A noble-gas matrix is a useful medium in which to study unstable and reactive species, including ions. A solvated proton forms a centrosymmetric NgHNg+ (Ng = Ar, Kr, and Xe) structure in a noble-gas matrix, and this is probably the simplest example of a solvated proton. Interestingly, the hypothetical NeHNe+ cation is isoelectronic with the water-solvated proton H5O2+ (the Zundel ion). In addition to the NgHNg+ cations, the isoelectronic YHY- (Y = halogen atom or pseudohalogen fragment) anions have been studied with the matrix-isolation technique. These species have been known to exist in alkali metal salts (YHY)-M+ (M = alkali metal, e.g. K or Na) for more than 80 years. Hydrated HF forms the FHF- structure in aqueous solutions, and these ions participate in several important chemical processes.

In this thesis, studies of the intermolecular interactions of HNgY molecules and centrosymmetric ions with various species are presented. The HNgY complexes show unusual spectral features, e.g. large blue-shifts of the H-Ng stretching vibration upon complexation. It is suggested that the blue-shift is a normal effect for these molecules and that it originates from the enhanced (HNg)+Y- ion-pair character upon complexation. It is also found that the HNgY molecules are energetically stabilized in the complexed form, and this effect is computationally demonstrated for the HHeF molecule. The NgHNg+ and YHY- ions also show blue-shifts in their asymmetric stretching vibration upon complexation with nitrogen. Additionally, the matrix-site structure and hindered rotation (libration) of the HNgY molecules were studied. The librational motion is a much-discussed solid-state phenomenon, and the HNgY molecules embedded in noble-gas matrices are good model systems for studying this effect. The formation mechanisms of the HNgY molecules and the decay mechanism of NgHNg+ cations are discussed. A new electron tunneling model for the decay of NgHNg+ absorptions in noble-gas matrices is proposed. Studies of the NgHNg+∙∙∙N2 complexes support this electron tunneling mechanism.
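In terms of a formula, the complexation-induced shift discussed above is simply

    \Delta\nu = \nu_{\mathrm{H\!-\!X}}^{\mathrm{complex}} - \nu_{\mathrm{H\!-\!X}}^{\mathrm{free}},
    \qquad \Delta\nu < 0 \ \text{(red-shift)}, \qquad \Delta\nu > 0 \ \text{(blue-shift)},

so a red-shift means the stretching frequency decreases upon complex formation and a blue-shift means it increases; for the HNgY molecules the relevant mode is the H-Ng stretch.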

Relevance: 10.00%

Abstract:

This literature review focused primarily on the biosynthesis and coupling reactions of phenylpropanoids and ferulic acids in the cell walls of woody and herbaceous plants. The phenylpropanoid pathway begins with phenylalanine and leads to the formation of many precursors such as lignans, flavonoids, salicylic acids, and lignin precursors. The thesis concentrated on the formation of lignin precursors and, in particular, on the oxidative coupling reactions of ferulic acid, an intermediate of that biosynthetic pathway, in plant cell walls. For years, a starting point of phenylpropanoid research has been to elucidate the biosynthetic routes and the methods by which lignin can be solubilized from the plant cell wall and the carbohydrate recovered. One way to identify these degradation events is to study the formation of phenylpropanoid couplings. In this Master's thesis, the regulation of the enzymes acting on the intermediates of the phenylpropanoid pathway was examined in wild-type and genetically modified plants. The biosynthetic pathway became considerably clearer. In addition, entirely new couplings and structures were observed in transgenic plants. With certain gene combinations the carbohydrate content could be increased substantially while the total amount of lignin decreased; it was suggested that this could be used to increase the amount of biomass in woody plants. Under enzymatically oxidative conditions, ferulic acids are dehydrogenated to phenoxy radicals, which react further to form couplings with another radical monomer or polymer. The cell wall itself produces the oxidants and enzymes needed in these radical reactions, namely hydrogen peroxide and peroxidase. Ferulic acid monomers and dimers form ester bonds with the hemicellulose of the cell wall. The ferulate dimers and trimers formed in this way created cross-links between carbohydrates and lignin as well as between one or more polysaccharide chains. Ferulic acids were regarded as initiation sites of lignification in the cell wall, linking two large polymeric network structures together. The composition of the cell-wall carbohydrates was also found to influence the structure of the couplings formed. Finally, the antioxidative properties of ferulic acid were examined. The amounts of lignin and ferulic acid were found to correlate with the amounts of peroxidase and hydrogen peroxide in the cell wall. In plants, many pathogens, damaged plant parts, and UV radiation increased peroxidase production and, in turn, the amount of ferulic acids. As a phenoxy radical, ferulic acid was able to eliminate the effects of harmful oxygen radicals derived from hydrogen peroxide by reducing them, itself being oxidized in a radical coupling reaction. This led to interesting future prospects for ferulic acid as a functional food, a pharmaceutical, and a cosmetic preparation.

Relevance: 10.00%

Abstract:

Postglacial climate changes and vegetation responses were studied using a combination of biological and physical indicators preserved in lake sediments. Low-frequency trends, high-frequency events, and rapid shifts in temperature and moisture balance were probed using pollen-based quantitative temperature reconstructions and oxygen isotopes from authigenic carbonate and aquatic cellulose, respectively. Pollen and plant macrofossils were employed to shed light on the presence and response rates of plant populations in response to climate changes, particularly focusing on common boreal and temperate tree species. Additional geochemical and isotopic tracers facilitated the interpretation of the pollen and oxygen-isotope data. The results show that the common boreal trees were present in the Baltic region (~55°N) during the Lateglacial, which contrasts with the traditional view of species refuge locations in the south-European peninsulas during the glacial/interglacial cycles. The findings of this work are in agreement with recent paleoecological and genetic evidence suggesting that scattered populations of tree species persisted at higher latitudes, and that these taxa were likely limited to boreal trees. Moreover, the results demonstrate that stepwise changes in plant communities took place in concert with the major climate fluctuations of the glacial/interglacial transition. Postglacial climate trends in northern Europe were characterized by a rise, maximum, and fall in temperatures and by related changes in moisture balance. Following the deglaciation of the Northern Hemisphere and the early Holocene reorganization of the ice-ocean-atmosphere system, the long-term temperature trends followed gradually decreasing summer insolation. The early Holocene (~11,700-8000 cal yr BP) was overall cool, moist and oceanic, although the earliest Holocene effective humidity may have been low, particularly in the eastern part of northern Europe. The gradual warming trend was interrupted by a cold event ~8200 cal yr BP. The maximum temperatures, ~1.5-3.0°C above modern values, were attained ~8000-4000 cal yr BP. This mid-Holocene peak warmth was coupled with low lake levels, low effective humidity and summertime drought. The late Holocene (~4000 cal yr BP-present) was characterized by gradually decreasing temperatures, higher lake levels and higher effective humidity. Moreover, higher-frequency variability was probably superimposed on the gradual trends of the late Holocene. The spatial variability of the Holocene temperature and moisture balance patterns was tentatively attributed to the differing heat capacities of continents and oceans, changes in atmospheric circulation modes, and the position of sites and subregions with respect to large water bodies and topographic barriers. The combination of physical and biological proxy archives is a pivotal aspect of this work, because non-climatic factors, such as postglacial migration, disturbances and competitive interactions, can influence the reshuffling of vegetation and hence pollen-based climate reconstructions. The oxygen-isotope records and other physical proxies presented in this work show that postglacial climate changes were the main driver of the establishment and expansion of temperate and boreal tree populations, and hence large-scale and long-term vegetation patterns were in dynamic equilibrium with climate. A notable exception to this pattern may be the postglacial invasion of Norway spruce and the related suppression of mid-Holocene temperate forest.

This salient step in north-European vegetation history, the development of the modern boreal ecosystem, cannot be unambiguously explained by current evidence of postglacial climate changes. The results of this work highlight that plant populations, including long-lived trees, may be able to respond strikingly rapidly to changes in climate. Moreover, interannual and seasonal variation and extreme events can exert an important influence on vegetation reshuffling. Importantly, the studies imply that the presence of diffuse refuge populations or local stands among the prevailing vegetation may have provided the means for extraordinarily rapid vegetation responses. Hence, if scattered populations are not present and tree populations must migrate long distances, their capacity to keep up with predicted rates of future climate change may be lower than previously thought.

Relevance: 10.00%

Abstract:

A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements of host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability of the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated to be suitable at the site scale. By using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters to be included in the HRC-system is based on an extensive study of the rock mass properties and their various influences on the long-term safety, the constructability and the layout and location of the repository. The parameters proposed for the classification at the repository scale include fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed for the classification at the tunnel scale include hydraulic conductivity, Q' and fracture zones, and the parameters proposed for the classification at the canister scale include hydraulic conductivity, Q', fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values are used to determine the suitability classes for the volumes of rock to be classified. The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is thereby one possible design tool that aids in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock. The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto.

Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
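To illustrate the flavour of such a parameter-based suitability assessment, the sketch below encodes a canister-scale style decision rule in Python. The parameter names follow the abstract, but every threshold value and class label is a hypothetical placeholder, not a value defined in the HRC-system:

    # Hypothetical sketch of a rule-based suitability decision in the spirit of the
    # HRC-system's canister scale. All thresholds and class names are placeholders.
    def canister_scale_class(hydraulic_conductivity_m_per_s, q_prime, in_fracture_zone,
                             fracture_width_mm, fracture_trace_length_m):
        if in_fracture_zone:
            return "not suitable"            # avoid identified fracture zones
        if hydraulic_conductivity_m_per_s > 1e-8:
            return "not suitable"            # hypothetical conductivity limit
        if q_prime < 4 or fracture_width_mm > 1.0 or fracture_trace_length_m > 10.0:
            return "possibly suitable"       # acceptable only after further checks
        return "suitable"

    print(canister_scale_class(1e-10, 20, False, 0.2, 3.0))   # -> suitable

In the actual system, the parameter values at each scale are combined into the defined suitability classes (four at the repository and tunnel scales, three at the canister scale), and the resulting class then feeds into the siting decisions described above.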

Relevance: 10.00%

Abstract:

The monograph dissertation deals with kernel integral operators and their mapping properties on Euclidean domains. The associated kernels are weakly singular, and examples of such kernels are given by the Green functions of certain elliptic partial differential equations. It is well known that the mapping properties of the corresponding Green operators can be used to deduce a priori estimates for the solutions of these equations. In the dissertation, natural size and cancellation conditions are quantified for kernels defined in domains. These kernels induce integral operators which are then composed with any partial differential operator of prescribed order, depending on the size of the kernel. Since the main object of study in this dissertation is the boundedness properties of such compositions, the main result is a characterization of their Lp-boundedness on suitably regular domains. In case the aforementioned kernels are defined on the whole Euclidean space, their partial derivatives of prescribed order turn out to be so-called standard kernels, which arise in connection with singular integral operators. The Lp-boundedness of singular integrals is characterized by the T1 theorem, which is originally due to David and Journé and was published in 1984 (Ann. of Math. 120). The main result of the dissertation can be interpreted as a T1 theorem for weakly singular integral operators. The dissertation also deals with special convolution-type weakly singular integral operators defined on Euclidean spaces.
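For orientation, the kind of conditions being quantified can be written schematically as follows (standard Calderón–Zygmund-style formulations; the dissertation's domain-adapted conditions are more refined):

    |K(x,y)| \le C\, |x-y|^{\,m-n} \quad (x \ne y,\ 0 < m < n),
    \qquad
    |\partial_x^{\alpha} K(x,y)| \le C\, |x-y|^{-n} \quad (|\alpha| = m).

The first bound makes the induced operator Tf(x) = \int K(x,y) f(y)\,dy weakly singular and smoothing of order m, so composing it with a differential operator of order m yields an operator of Calderón–Zygmund type; the classical T1 theorem then characterizes L^2-boundedness by T1 \in BMO and T^{*}1 \in BMO together with the weak boundedness property.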

Relevance: 10.00%

Abstract:

Let X be a topological space and K the real algebra of the reals, the complex numbers, the quaternions, or the octonions. The functions from X to K form an algebra T(X,K) with pointwise addition and multiplication. We study the first-order definability of the constant function set N', corresponding to the set of the naturals, in certain subalgebras of T(X,K). In the vocabulary the symbols Constant, +, *, 0', and 1' are used, where Constant denotes the predicate defining the constants, and 0' and 1' denote the constant functions with values 0 and 1 respectively. The most important result is the following. Let X be a topological space, K the real algebra of the reals, the complex numbers, the quaternions, or the octonions, and R a subalgebra of the algebra of all functions from X to K containing all constants. Then N' is definable in the structure (R; Constant, +, *, 0', 1') if at least one of the following conditions is true. (1) The algebra R is a subalgebra of the algebra of all continuous functions containing a piecewise open mapping from X to K. (2) The space X is sigma-compact, and R is a subalgebra of the algebra of all continuous functions containing a function whose range contains a nonempty open set of K. (3) The algebra K is the set of the reals or the complex numbers, and R contains a piecewise open mapping from X to K and does not contain an everywhere unbounded function. (4) The algebra R contains a piecewise open mapping from X to the set of the reals and a function whose range contains a nonempty open subset of K; furthermore, R does not contain an everywhere unbounded function.

Relevance: 10.00%

Abstract:

This PhD Thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second-quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations of "m particles and n anti-particles". Geometrically the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, which is a typical example of an infinite-dimensional Grassmannian manifold one encounters in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H its subgroup. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation for loop groups (of compact groups), which in turn are vitally important in string theory / conformal field theory. The Thesis consists of three chapters: the first chapter is an introduction to the background material and the other two chapters are individually written research articles. The first article deals in a new way with the well-known question in Yang-Mills theory of when one can lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second-quantized Dirac operator. In general there is an obstruction to this (called the Mickelsson-Faddeev anomaly), and various geometric interpretations for this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation of the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence not within the scope of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sum makes sense as a well-defined operator acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
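The finite-dimensional analogue mentioned above can be stated in one line (the Plücker/Borel-Weil description; conventions on which dual appears vary from author to author):

    H^{0}\big(\mathrm{Gr}(k, V),\, \mathrm{Det}^{*}\big) \;\cong\; \Lambda^{k} V^{*},

where Gr(k, V) is the Grassmannian of k-dimensional subspaces of a finite-dimensional vector space V and Det is the top exterior power of the tautological subbundle; the fermionic Fock space construction replaces Gr(k, V) by the restricted Grassmannian and Det* by the dual of the infinite-dimensional determinant line bundle.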

Relevance: 10.00%

Abstract:

In this thesis we study a few games related to non-wellfounded and stationary sets. Games have turned out to be an important tool in mathematical logic, ranging from semantic games that define the truth of a sentence in a given logic to, for example, games on real numbers whose determinacy has important consequences for the consistency of certain large cardinal assumptions. The equality of non-wellfounded sets can be determined by a so-called bisimulation game, already used to identify processes in theoretical computer science and possible-world models for modal logic. Here we present a game to classify non-wellfounded sets according to their branching structure. We also study games on stationary sets, moving back to classical wellfounded set theory. We also describe a way to approximate non-wellfounded sets with hereditarily finite wellfounded sets; the framework used to do this is domain theory. In the Banach-Mazur game, also called the ideal game, the players play a descending sequence of stationary sets and the second player tries to keep their intersection stationary. The game is connected to the precipitousness of the corresponding ideal. In the pressing-down game, the first player plays regressive functions defined on stationary sets and the second player responds with a stationary set on which the function is constant, trying to keep the intersection stationary. This game has applications in model theory to the determinacy of the Ehrenfeucht-Fraisse game. We show that it is consistent that these games are not equivalent.

Relevance: 10.00%

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the essential to be separated from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask whether accuracy-versus-effort trade-offs can be controlled after training. For another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask whether problem-specific organization is necessary.
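One simple way to picture the accuracy-versus-effort control described above is a two-stage cascade in which a cheap classifier handles confident inputs and delegates the rest to a more expensive one. The sketch below is an illustration of that general idea, not the specific framework of the thesis; the classifiers, costs and threshold are all stand-ins:

    # Two-stage cascade sketch: run a cheap classifier first, delegate uncertain inputs.
    # Raising the confidence threshold tau trades extra effort for accuracy.
    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class Stage:
        predict: Callable[[object], Tuple[str, float]]   # returns (label, confidence)
        cost: float                                       # relative computational effort

    def classify(x, cheap: Stage, expensive: Stage, tau: float = 0.9):
        label, confidence = cheap.predict(x)
        if confidence >= tau:                  # confident enough: stop early
            return label, cheap.cost
        label, _ = expensive.predict(x)        # delegate the hard cases
        return label, cheap.cost + expensive.cost

    # Toy usage with stand-in classifiers:
    cheap = Stage(lambda x: ("face", 0.95), cost=1.0)
    expensive = Stage(lambda x: ("face", 0.99), cost=25.0)
    print(classify("image", cheap, expensive, tau=0.9))   # ('face', 1.0)

Here the per-input decision to stop early or delegate is what keeps the average effort low, while the threshold tau gives a single knob for moving along the accuracy-effort curve after training.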