30 results for GALAXIES: CLUSTERS: GENERAL
Abstract:
Background and aims. Type 1 diabetes (T1D), an autoimmune disease in which the insulin-producing beta cells are gradually destroyed, is preceded by a prodromal phase characterized by the appearance of diabetes-associated autoantibodies in the circulation. Both the timing of the appearance of autoantibodies and their quality have been used in the prediction of T1D among first-degree relatives of diabetic patients (FDRs). So far, no general strategies have been established for identifying individuals at increased disease risk in the general population, although the majority of new cases originate in this population. The current work aimed to assess the predictive role of diabetes-associated immunologic and metabolic risk factors in the general population, and to compare these factors with data obtained from studies on FDRs. Subjects and methods. Study subjects were subcohorts of participants of the Childhood Diabetes in Finland Study (DiMe; n=755), the Cardiovascular Risk in Young Finns Study (LASERI; n=3475), and the Finnish Type 1 Diabetes Prediction and Prevention Study (DIPP; n=7410). These children were observed for signs of beta-cell autoimmunity and progression to T1D, and the results were compared between the FDRs and the general population cohorts. Results and conclusions. By combining HLA and autoantibody screening, T1D risks similar to those reported for autoantibody-positive FDRs are observed in the pediatric general population. The progression rate to T1D is high in genetically susceptible children with persistent multipositivity. Measurement of IAA affinity failed to stratify the risk assessment in young IAA-positive children with HLA-conferred disease susceptibility, among whom the affinity of IAA did not increase during the prediabetic period.
Young age at seroconversion, increased weight-for-height, decreased early insulin response, and increased IAA and IA-2A levels predict T1D in young children with genetic disease susceptibility and signs of advanced beta-cell autoimmunity. Since the incidence of T1D continues to increase, efforts aimed at preventing T1D are important, and reliable disease prediction is needed both for intervention trials and for effective and safe preventive therapies in the future. Our observations confirmed that combined HLA-based screening and regular autoantibody measurements reveal disease risks in the pediatric general population similar to those seen in prediabetic FDRs, and that risk assessment can be stratified further by studying the glucose metabolism of prediabetic subjects. As these screening efforts are feasible in practice, the knowledge now obtained can be exploited when designing intervention trials aimed at secondary prevention of T1D.
Abstract:
Background: Irritable bowel syndrome (IBS) is a common functional gastrointestinal (GI) disorder characterised by abdominal pain and abnormal bowel function. It is associated with a high rate of healthcare consumption and significant health care costs. The prevalence and economic burden of IBS in Finland has not been studied before. The aims of this study were to assess the prevalence of IBS according to various diagnostic criteria and to study the rates of psychiatric and somatic comorbidity in IBS. In addition, health care consumption and the societal costs of IBS were evaluated. Methods: The study was a two-phase postal survey. Questionnaire I, identifying IBS by the Manning 2 (at least two of the six Manning symptoms), Manning 3 (at least three Manning symptoms), Rome I, and Rome II criteria, was mailed to a random sample of 5,000 working-age subjects. It also covered extra-GI symptoms such as headache, back pain, and depression. Questionnaire II, covering rates of physician visits and use of GI medication, was sent to subjects fulfilling the Manning 2 or Rome II IBS criteria in Questionnaire I. Results: The response rates were 73% and 86% for Questionnaires I and II, respectively. The prevalence of IBS was 15.9%, 9.6%, 5.6%, and 5.1% according to the Manning 2, Manning 3, Rome I, and Rome II criteria, respectively. Of those meeting the Rome II criteria, 97% also met the Manning 2 criteria. Severe abdominal pain was reported more often by subjects meeting either of the Rome criteria than by those meeting either of the Manning criteria. Depression, anxiety, and several somatic symptoms were more common among subjects meeting any IBS criterion than among controls. Of subjects with depressive symptoms, 11.6% met the Rome II IBS criteria compared to 3.7% of those with no depressiveness. Subjects meeting any IBS criteria made more physician visits than controls. Intensity of GI symptoms and presence of dyspeptic symptoms were the strongest predictors of GI consultations.
Presence of dyspeptic symptoms and a history of abdominal pain in childhood also predicted non-GI visits. Annual GI-related individual costs were higher in the Rome II group (€497) than in the Manning 2 group (€295). Direct expenses of GI symptoms and non-GI physician visits ranged from €98 million for the Rome II criteria to €230 million for the Manning 2 criteria. Conclusions: The prevalence of IBS varies substantially depending on the criteria applied. The Rome II criteria are more restrictive than Manning 2, and they identify an IBS population with more severe GI symptoms, more frequent health care use, and higher individual health care costs. Subjects with IBS demonstrate high rates of psychiatric and somatic comorbidity regardless of health care seeking status. Perceived symptom severity rather than psychiatric comorbidity predicts health care seeking for GI symptoms. IBS incurs considerable medical costs: the direct GI and non-GI costs are equivalent to up to 5% of outpatient health care and medicine costs in Finland. A more integral approach to IBS by physicians, also accounting for comorbid conditions, may produce a more favourable course in IBS patients and reduce health care expenditures.
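The population-level cost figures above follow from scaling per-person costs by prevalence; a minimal sketch of that arithmetic (the working-age population figure is an assumption for illustration, not taken from the study):

```python
def annual_direct_cost(prevalence, per_person_cost_eur, population=3.2e6):
    """Scale annual per-person GI-related costs to the population level.
    `population` roughly approximates Finland's working-age population
    (an assumed figure, not from the study)."""
    return prevalence * population * per_person_cost_eur

# Prevalence and per-person cost figures as reported in the abstract:
rome2_total = annual_direct_cost(0.051, 497)     # Rome II criteria
manning2_total = annual_direct_cost(0.159, 295)  # Manning 2 criteria
```

This reproduces the reported order of magnitude (tens to hundreds of millions of euros per year); exact agreement is not expected, since the study's totals also include non-GI physician visits.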
Abstract:
The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, an effect ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since the developing voids expand faster the lower their matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology.
A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, faster local expansion rate due to a large local void and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to account for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
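The mechanisms classified above are commonly discussed in terms of Buchert's spatially averaged equations for irrotational dust, in which a large enough variance of the local expansion rate θ (relative to the shear σ) makes the kinematical backreaction positive, so the average expansion can accelerate even though no observer measures local acceleration. A standard form of these equations, included here as a hedged illustration (the thesis's own models may be formulated differently):

```latex
% Buchert's averaged Raychaudhuri equation for a dust domain D
% with volume scale factor a_D:
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}}
  = -4\pi G\,\langle \rho \rangle_{\mathcal D} + \mathcal Q_{\mathcal D},
\qquad
\mathcal Q_{\mathcal D}
  = \frac{2}{3}\left(\langle \theta^{2}\rangle_{\mathcal D}
      - \langle \theta \rangle_{\mathcal D}^{2}\right)
    - 2\,\langle \sigma^{2}\rangle_{\mathcal D}.
```

Average acceleration requires \( \mathcal Q_{\mathcal D} > 4\pi G \langle \rho \rangle_{\mathcal D} \), i.e. the variance of the expansion rate must dominate both the shear and the matter density.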
Abstract:
A better understanding of the limiting step in a first order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules depending on the interaction potential and temperature. However, the error made in modeling the smallest of clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory by Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations. Finally, we show how the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
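The liquid drop model underlying Classical Nucleation Theory can be sketched numerically. The parameter values below are illustrative water-like numbers assumed for the example, not the thesis's simulation data:

```python
import math

def cnt_barrier(sigma, v_mol, T, S, k_B=1.380649e-23):
    """Classical Nucleation Theory (liquid drop model): critical radius,
    free energy barrier, and critical cluster size for homogeneous
    vapour-to-liquid nucleation at supersaturation S > 1."""
    kT_ln_s = k_B * T * math.log(S)
    r_crit = 2.0 * sigma * v_mol / kT_ln_s                    # Kelvin radius
    delta_g = 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * kT_ln_s**2)
    n_crit = (4.0 / 3.0) * math.pi * r_crit**3 / v_mol        # molecules in the critical cluster
    return r_crit, delta_g, n_crit

# Illustrative water-like parameters (assumed, not the thesis's values):
sigma = 0.072    # surface tension, N/m
v_mol = 3.0e-29  # molecular volume in the liquid, m^3
r, dG, n = cnt_barrier(sigma, v_mol, T=280.0, S=5.0)
```

At these conditions the critical cluster contains only a few tens of molecules, which is precisely the size regime where, as the abstract notes, the liquid drop assumption is most questionable.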
Abstract:
In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is still valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early universe. To this aim, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with the relevant strength and length scales, provided we use a high string tension and an efficient dynamo.
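The delta-N computation of non-Gaussianity mentioned above is conventionally summarized by the following standard expressions (standard notation for the multi-field case, not specific to this thesis's extension of the formalism):

```latex
% Curvature perturbation to second order in the field fluctuations
% (delta-N formalism; N_{,I} denotes the derivative of the e-fold
% number N with respect to the field value phi^I):
\zeta = \sum_I N_{,I}\,\delta\varphi^I
      + \tfrac{1}{2}\sum_{I,J} N_{,IJ}\,\delta\varphi^I \delta\varphi^J ,
% and the local non-linearity parameter constrained by observations:
\frac{6}{5} f_{\mathrm{NL}}
  = \frac{\sum_{I,J} N_{,I}\, N_{,J}\, N_{,IJ}}
         {\left(\sum_K N_{,K}\, N_{,K}\right)^{2}} .
```

For N-flation with slow roll, the sums over the many axion fields largely average out, which is one way to see why such models tend to be observationally indistinguishable.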
Abstract:
Acceleration of the universe has been established but not explained. During the past few years precise cosmological experiments have confirmed the standard big bang scenario of a flat universe undergoing inflationary expansion in its earliest stages, during which the perturbations are generated that eventually form galaxies and other structure in the matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered another period of acceleration. Such a result is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large-scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. The vacuum energy density the constant describes should then be about a dozen times the present energy density in visible matter, but particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration. Generically, these are called dark energy. If some symmetry turns off the vacuum energy, its value is no longer a problem, but one then needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled by just dark matter and baryons. Such possibilities are investigated here. The main goal is to uncover the observational consequences of different models of dark energy, the emphasis being on their implications for the formation of the large-scale structure of the universe.
Possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is devoted to modifications of gravity resulting in second-order field equations. It is shown that in a general class of such models the viable ones effectively represent the cosmological constant, while another class yields interesting modifications of the standard cosmological scenario that are still allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
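Phenomenological parameterizations of dark energy are often of the Chevallier-Polarski-Linder (CPL) form; whether the thesis uses this exact form is an assumption, but it illustrates how an equation of state w(a) feeds into the expansion rate:

```python
import math

def h_of_z(z, h0=70.0, omega_m=0.3, w0=-1.0, wa=0.0):
    """Hubble rate H(z) in km/s/Mpc for a flat universe with CPL dark
    energy, w(a) = w0 + wa*(1 - a); w0 = -1, wa = 0 recovers a pure
    cosmological constant. All parameter values are illustrative."""
    a = 1.0 / (1.0 + z)
    omega_de = 1.0 - omega_m  # flatness assumed
    # Analytic solution of the continuity equation for CPL dark energy:
    f_de = a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
    return h0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_de * f_de)
```

A less negative w0 makes the dark energy density dilute with expansion, raising H(z) at fixed H(0); structure-formation observables such as the growth rate inherit this dependence.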
Abstract:
Both management scholars and economic geographers have studied knowledge and argued that the ability to transfer knowledge is critical to competitive success. Networks and other forms of cooperation are often the context for analyzing knowledge transfer within management research, while economic geographers focus on the role of the cluster in knowledge transfer and creation. Despite the common interest in knowledge transfer, few attempts at interdisciplinary research have been made. The aim of this paper is to outline the knowledge transfer concepts in the two strands of literature, management and economic geography (EG). The paper takes an analytical approach to review the existing contributions and seeks to identify the benefits of further interaction between the disciplines. Furthermore, it offers an interpretation of the concepts of cluster and network, and suggests a clearer distinction between their respective definitions. The paper posits that studies of internal networks transcending national borders and clusters are not necessarily mutually exclusive when it comes to the transfer of knowledge and the learning process of the firm. Our conclusion is that researchers in general seem to increasingly acknowledge the importance of studying both the effect of and the need for geographical proximity and external networks in the knowledge transfer process, but that ambiguity remains in the definitions of clusters and networks.
Abstract:
Forecasting daily flow in the master planning of municipal water supply and sewerage works.
Abstract:
All protein-encoding genes in eukaryotes are transcribed into messenger RNA (mRNA) by RNA Polymerase II (RNAP II), whose activity therefore needs to be tightly controlled. An important and only partially understood level of regulation is the multiple phosphorylation of the RNAP II large subunit C-terminal domain (CTD). Sequential phosphorylations regulate transcription initiation and elongation, and recruit factors involved in co-transcriptional processing of mRNA. Based largely on studies in yeast models and in vitro, the kinase activity responsible for the phosphorylation of the serine-5 (Ser5) residues of the RNAP II CTD has been attributed to the Mat1/Cdk7/CycH trimer as part of Transcription Factor IIH (TFIIH). However, due to the lack of good mammalian genetic models, studies of the roles of both RNAP II Ser5 phosphorylation and the TFIIH kinase in transcription have yielded ambiguous results, and the in vivo kinase of Ser5 has remained elusive. The primary objective of this study was to elucidate the role of mammalian TFIIH, and specifically its Mat1 subunit, in CTD phosphorylation and general RNAP II-mediated transcription. The approach utilized the Cre-LoxP system to conditionally delete murine Mat1 in cardiomyocytes and hepatocytes in vivo and in cell culture models. The results identify the TFIIH kinase as the major mammalian Ser5 kinase and demonstrate its requirement for general transcription, as shown by nascent mRNA labeling. A role for Mat1 in regulating general mRNA turnover was also identified, providing a possible rationale for earlier negative findings. A secondary objective was to identify potential gene- and tissue-specific roles of Mat1 and the TFIIH kinase through the use of tissue-specific Mat1 deletion. Mat1 was found to be required for the transcriptional function of PGC-1 in cardiomyocytes.
Transcriptional activation of lipogenic SREBP1 target genes following Mat1 deletion in hepatocytes revealed a repressive role for Mat1, apparently mediated via the co-repressor DMAP1 and the DNA methyltransferase Dnmt1. Finally, Mat1 and Cdk7 were also identified as negative regulators of adipocyte differentiation through the inhibitory phosphorylation of peroxisome proliferator-activated receptor (PPAR) γ. Together, these results demonstrate gene- and tissue-specific roles for the Mat1 subunit of TFIIH and open up new therapeutic possibilities in the treatment of diseases such as type II diabetes, hepatosteatosis and obesity.
Abstract:
Positron emission tomography (PET) is a molecular imaging technique that utilises radiopharmaceuticals (radiotracers) labelled with a positron-emitting radionuclide, such as fluorine-18 (18F). Development of a new radiotracer requires an appropriate radiosynthesis method, the most common of which with 18F is nucleophilic substitution with [18F]fluoride ion. The success of the labelling reaction depends on various factors, such as the reactivity of [18F]fluoride, the structure of the target compound, and the chosen solvent. The overall radiosynthesis procedure must be optimised in terms of radiochemical yield and the quality of the final product. Therefore, both quantitative and qualitative radioanalytical methods are essential in developing radiosynthesis methods. Furthermore, the biological properties of a tracer candidate need to be evaluated in various pre-clinical studies in animal models. In this work, the feasibility of various nucleophilic 18F-fluorination strategies was studied and a labelling method for a novel radiotracer, N-3-[18F]fluoropropyl-2beta-carbomethoxy-3beta-(4-fluorophenyl)nortropane ([18F]beta-CFT-FP), was optimised. The effect of the solvent was studied by labelling a series of model compounds, 4-(R1-methyl)benzyl R2-benzoates. 18F-Fluorination reactions were carried out in both polar aprotic and protic solvents (tertiary alcohols). Assessment of the 18F-fluorinated products by mass spectrometry (MS), in addition to conventional radiochromatographic methods, was studied using the radiosynthesis of 4-[18F]fluoro-N-[2-[4-(2-methoxyphenyl)-1-piperazinyl]ethyl]-N-2-pyridinyl-benzamide (p-[18F]MPPF) as a model reaction. Labelling of [18F]beta-CFT-FP was studied using two 18F-fluoroalkylation reagents, [18F]fluoropropyl bromide and [18F]fluoropropyl tosylate, as well as by direct 18F-fluorination of a sulfonate ester precursor.
Subsequently, the suitability of [18F]beta-CFT-FP for imaging the dopamine transporter (DAT) was evaluated by determining its biodistribution in rats. The results showed that protic solvents can be useful co-solvents in aliphatic 18F-fluorinations, especially in the labelling of sulfonate esters. Aromatic 18F-fluorination was not promoted in tert-alcohols. The sensitivity of the ion trap MS was sufficient for the qualitative analysis of the 18F-labelled products; p-[18F]MPPF was identified from the isolated product fraction with a mass-to-charge (m/z) ratio of 435 (i.e. the protonated molecule [M+H]+). [18F]beta-CFT-FP was produced most efficiently via [18F]fluoropropyl tosylate, giving sufficient radiochemical yield and specific radioactivity for PET studies. The ex vivo studies in rats showed fast kinetics as well as specific uptake of [18F]beta-CFT-FP in the DAT-rich brain regions. Thus, it was concluded that [18F]beta-CFT-FP has potential as a radiotracer for imaging DAT by PET.
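Radiochemical yields obtained with a short-lived nuclide like 18F are conventionally decay-corrected to the start of synthesis; a minimal sketch of that correction (function and variable names are illustrative, not from the thesis):

```python
F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def decay_corrected_yield(measured_yield, elapsed_min, t_half=F18_HALF_LIFE_MIN):
    """Correct a radiochemical yield measured `elapsed_min` minutes into the
    procedure back to the start of synthesis, compensating for 18F decay
    during the synthesis and purification steps."""
    return measured_yield * 2.0 ** (elapsed_min / t_half)
```

For example, a yield measured after one half-life (about 110 minutes) doubles when decay-corrected, which is why fast reactions such as the tosylate route matter for tracers like [18F]beta-CFT-FP.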