16 results for Galaxy: general
Abstract:
The present study addressed the epistemology of teachers’ practical knowledge. Drawing on the literature, teachers’ practical knowledge is defined as all teacher cognitions (e.g., beliefs, values, motives, procedural knowing, and declarative knowledge) that guide the practice of teaching. The reasoning that lies behind teachers’ practical knowledge is addressed to gain insight into its epistemic nature. I studied the practical knowledge of six class teachers working in the metropolitan region of Helsinki. Relying on the assumptions of phenomenographic inquiry, I collected and analyzed the data in two stages: the first stage involved an abductive procedure and the second an inductive procedure for interpretation, through which the system of categories was developed. Finally, a quantitative analysis was nested into the qualitative findings to study the patterns of the teachers’ reasoning. The results indicated that teachers justified their practical knowledge on the basis of morality and efficiency of action; efficiency of action was expressed in two different ways: authentic efficiency and naïve efficiency. The epistemic weight of morality was embedded in what I call “moral care”. The core intention of teachers in moral care was the commitment that they felt to the “whole character” of students. From this perspective, the “dignity” and moral character of the students should not be replaced by any “instrumental price”. “Caring pedagogy” was the epistemic value of teachers’ reasoning in authentic efficiency. The central idea in caring pedagogy was teachers’ intention to improve the “intellectual properties” of “all or most” of the students using “flexible” and “diverse” pedagogies. In contrast, “regulating pedagogy” was the epistemic condition of practice in the cases corresponding to naïve efficiency.
Teachers argued that effective practical knowledge should regulate and manage classroom activities, but the targets of that practical knowledge were mainly other “issues” or a certain percentage of the students. In these cases, the teachers’ arguments were based mainly on the notion of “what worked” without reflecting on “what did not work”. Drawing on the theoretical background and the data, teachers’ practical knowledge qualifies as “praxial knowledge” when they used the epistemic conditions of “caring pedagogy” and “moral care”. However, it qualifies for a “practicable” epistemic status when teachers used the epistemic condition of regulating pedagogy. As such, praxial knowledge, with the dimensions of caring pedagogy and moral care, represents the “normative” perspective on teachers’ practical knowledge and thus reflects a higher epistemic status than “practicable” knowledge, which represents a “descriptive” perception of teachers’ practical knowledge and teaching.
Abstract:
Spirometry is the most widely used lung function test in the world. It is fundamental in the diagnostic and functional evaluation of various pulmonary diseases. In the studies described in this thesis, the spirometric assessment of reversibility of bronchial obstruction, its determinants, and its variation are described in a general population sample from Helsinki, Finland. This study is part of the FinEsS study, a collaborative study of the clinical epidemiology of respiratory health between Finland (Fin), Estonia (Es), and Sweden (S). Asthma and chronic obstructive pulmonary disease (COPD) constitute the two major obstructive airways diseases. The prevalence of asthma has increased, with around 6% of the population in Helsinki reporting physician-diagnosed asthma. The main cause of COPD is smoking, with changes in the population's smoking habits affecting its prevalence with a delay. Whereas airway obstruction in asthma is by definition reversible, COPD is characterized by fixed obstruction. Cough and sputum production, the first symptoms of COPD, are often misinterpreted as smoker's cough and not recognized as the first signs of a chronic illness. Therefore, COPD is widely underdiagnosed. More extensive use of spirometry in primary care is advocated to focus smoking cessation interventions on populations at risk. The use of forced expiratory volume in six seconds (FEV6) instead of forced vital capacity (FVC) has been suggested to enable office spirometry to be used in earlier detection of airflow limitation. Despite spirometry being a widely accepted standard method of assessing lung function, its methodology and interpretation are constantly developing.
In 2005, the ATS/ERS Task Force issued a joint statement that endorsed the 12% and 200 ml thresholds for a significant change in forced expiratory volume in one second (FEV1) or FVC during bronchodilation testing, but included the notion that in cases where only FVC improves, it should be verified that this is not caused by a longer exhalation time in post-bronchodilator spirometry. This elicited new interest in the assessment of forced expiratory time (FET), a spirometric variable not usually reported or used in assessment. In this population sample, we examined FET and found it to be on average 10.7 (SD 4.3) s and to increase with ageing and with airflow limitation in spirometry. The intrasession repeatability of FET was the poorest of the spirometric variables assessed. Based on the intrasession repeatability, a limit of 3 s was suggested as a significant change in FET during bronchodilation testing. FEV6 was found to perform as well as FVC in the population and in a subgroup of subjects with airways obstruction. In the bronchodilation test, decreases were frequently observed in FEV1 and particularly in FVC. The limit of significant increase based on the 95th percentile of the population sample was 9% for FEV1 and 6% for FEV6 and FVC; these are slightly lower than the current limits for single bronchodilation tests (ATS/ERS guidelines). FEV6 also proved to be a valid alternative to FVC in the bronchodilation test and would remove the need to control the duration of exhalation during the spirometric bronchodilation test.
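The 2005 ATS/ERS significance rule mentioned above (an increase of at least 12% of baseline and at least 200 ml) can be expressed as a small check. This is a minimal illustrative sketch, not code from the thesis; the function name and litre-based units are assumptions:

```python
def significant_bd_response(pre_l, post_l, pct_limit=12.0, ml_limit=200.0):
    """True if the bronchodilator-induced change in FEV1 or FVC meets the
    2005 ATS/ERS criteria: an increase of >= 12% of baseline AND >= 200 ml.
    Inputs are pre- and post-bronchodilator volumes in litres."""
    delta_ml = (post_l - pre_l) * 1000.0      # litres -> millilitres
    pct = 100.0 * (post_l - pre_l) / pre_l    # percent change from baseline
    return pct >= pct_limit and delta_ml >= ml_limit

print(significant_bd_response(2.50, 2.85))   # +14%, +350 ml -> True
print(significant_bd_response(2.50, 2.60))   # +4%, +100 ml -> False
```

Note that both conditions must hold: a large percentage change on a very small baseline volume, or a 200 ml change on a very large one, is not counted as significant on its own.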
Abstract:
Background and aims. Type 1 diabetes (T1D), an autoimmune disease in which the insulin-producing beta cells are gradually destroyed, is preceded by a prodromal phase characterized by the appearance of diabetes-associated autoantibodies in the circulation. Both the timing of the appearance of autoantibodies and their quality have been used in the prediction of T1D among first-degree relatives of diabetic patients (FDRs). So far, no general strategies for identifying individuals at increased disease risk in the general population have been established, although the majority of new cases originate in this population. The current work aimed at assessing the predictive role of diabetes-associated immunologic and metabolic risk factors in the general population, and at comparing these factors with data obtained from studies on FDRs. Subjects and methods. The study subjects in the current work were subcohorts of participants of the Childhood Diabetes in Finland Study (DiMe; n=755), the Cardiovascular Risk in Young Finns Study (LASERI; n=3475), and the Finnish Type 1 Diabetes Prediction and Prevention Study (DIPP; n=7410). These children were observed for signs of beta-cell autoimmunity and progression to T1D, and the results obtained were compared between the FDRs and the general population cohorts. --- Results and conclusions. By combining HLA and autoantibody screening, T1D risks similar to those reported for autoantibody-positive FDRs are observed in the pediatric general population. The progression rate to T1D is high in genetically susceptible children with persistent multipositivity. Measurement of IAA affinity failed to stratify the risk assessment in young IAA-positive children with HLA-conferred disease susceptibility, among whom the affinity of IAA did not increase during the prediabetic period.
Young age at seroconversion, increased weight-for-height, decreased early insulin response, and increased IAA and IA-2A levels predict T1D in young children with genetic disease susceptibility and signs of advanced beta-cell autoimmunity. Since the incidence of T1D continues to increase, efforts aimed at preventing T1D are important, and reliable disease prediction is needed both for intervention trials and for effective and safe preventive therapies in the future. Our observations confirmed that combined HLA-based screening and regular autoantibody measurements reveal disease risks in the pediatric general population similar to those seen in prediabetic FDRs, and that risk assessment can be stratified further by studying the glucose metabolism of prediabetic subjects. As these screening efforts are feasible in practice, the knowledge now obtained can be exploited in designing intervention trials aimed at the secondary prevention of T1D.
Abstract:
Background: Irritable bowel syndrome (IBS) is a common functional gastrointestinal (GI) disorder characterised by abdominal pain and abnormal bowel function. It is associated with a high rate of healthcare consumption and significant health care costs. The prevalence and economic burden of IBS in Finland had not been studied before. The aims of this study were to assess the prevalence of IBS according to various diagnostic criteria and to study the rates of psychiatric and somatic comorbidity in IBS. In addition, health care consumption and the societal costs of IBS were evaluated. Methods: The study was a two-phase postal survey. Questionnaire I, identifying IBS by the Manning 2 (at least two of the six Manning symptoms), Manning 3 (at least three Manning symptoms), Rome I, and Rome II criteria, was mailed to a random sample of 5,000 working-age subjects. It also covered extra-GI symptoms such as headache, back pain, and depression. Questionnaire II, covering rates of physician visits and use of GI medication, was sent to subjects fulfilling the Manning 2 or Rome II IBS criteria in Questionnaire I. Results: The response rates were 73% and 86% for Questionnaires I and II, respectively. The prevalence of IBS was 15.9%, 9.6%, 5.6%, and 5.1% according to the Manning 2, Manning 3, Rome I, and Rome II criteria, respectively. Of those meeting the Rome II criteria, 97% also met the Manning 2 criteria. Severe abdominal pain was more often reported by subjects meeting either of the Rome criteria than by those meeting either of the Manning criteria. Depression, anxiety, and several somatic symptoms were more common among subjects meeting any IBS criterion than among controls. Of subjects with depressive symptoms, 11.6% met the Rome II IBS criteria, compared to 3.7% of those without depressiveness. Subjects meeting any IBS criteria made more physician visits than controls. The intensity of GI symptoms and the presence of dyspeptic symptoms were the strongest predictors of GI consultations.
The presence of dyspeptic symptoms and a history of abdominal pain in childhood also predicted non-GI visits. Annual GI-related individual costs were higher in the Rome II group (€497) than in the Manning 2 group (€295). Direct expenses of GI symptoms and non-GI physician visits ranged between €98M for the Rome II and €230M for the Manning 2 criteria. Conclusions: The prevalence of IBS varies substantially depending on the criteria applied. The Rome II criteria are more restrictive than Manning 2, and they identify an IBS population with more severe GI symptoms, more frequent health care use, and higher individual health care costs. Subjects with IBS demonstrate high rates of psychiatric and somatic comorbidity regardless of health care seeking status. Perceived symptom severity rather than psychiatric comorbidity predicts health care seeking for GI symptoms. IBS incurs considerable medical costs. The direct GI and non-GI costs are equivalent to up to 5% of outpatient health care and medicine costs in Finland. A more integral approach to IBS by physicians, accounting also for comorbid conditions, may produce a more favourable course in IBS patients and reduce health care expenditures.
Abstract:
The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, ignored in the standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since the developing voids expand faster the lower their matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology.
A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, faster local expansion rate due to a large local void and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to account for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
Abstract:
Black hole X-ray binaries, binary systems where matter from a companion star is accreted by a stellar-mass black hole, thereby releasing enormous amounts of gravitational energy converted into radiation, are seen as strong X-ray sources in the sky. As a black hole can only be detected via its interaction with its surroundings, these binary systems provide important evidence for the existence of black holes. There are now at least twenty cases where the measured mass of the X-ray emitting compact object in a binary exceeds the upper limit for a neutron star, thus implying the presence of a black hole. These binary systems serve as excellent laboratories not only to study the physics of accretion but also to test predictions of general relativity in strongly curved space-time. An understanding of the accretion flow onto these, the most compact objects in our Universe, is therefore of great importance to physics. We are only now slowly beginning to understand the spectra and variability observed in these X-ray sources. During the last decade, a framework has developed that interprets the spectral evolution as a function of changes in the physics and geometry of the accretion flow driven by a variable accretion rate. This doctoral thesis presents studies of two black hole binary systems, Cygnus X-1 and GRS 1915+105, plus the possible black hole candidate Cygnus X-3, and the results of an attempt to interpret their observed properties within this emerging framework. The main result presented in this thesis is an interpretation of the spectral variability in the enigmatic source Cygnus X-3, including the nature and accretion geometry of its so-called hard spectral state.
The results suggest that the compact object in this source, which has not been uniquely identified as a black hole on the basis of standard mass measurements, is most probably a massive, ~30 Msun, black hole, and thus the most massive black hole observed in a binary in our Galaxy so far. In addition, results concerning a possible observation of limit-cycle variability in the microquasar GRS 1915+105 are presented, as well as evidence of 'mini-hysteresis' in the extreme hard state of Cygnus X-1.
Abstract:
New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well-known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties similar to those of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions across molecular clouds, and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths. 
For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions which suggest that high-mass stars form in a similar way as their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang. -- This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. The currently available data indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of the isocurvature modes would have a considerable impact due to their power in model selection.
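The idea behind destriping can be illustrated with a toy version. This is a hedged sketch under simplifying assumptions, not the thesis pipeline: each time-ordered sample is modelled as the sky signal at the observed pixel plus one constant baseline offset per data chunk, and the solver alternates between binning a sky map and re-fitting the chunk offsets. All names and the data setup are illustrative:

```python
import numpy as np

def destripe(tod, pix, n_pix, chunk_len, n_iter=50):
    """Toy destriper: model each sample as sky[pix] + offset[chunk] and
    alternate between binning a sky map and re-fitting the chunk offsets."""
    n_chunks = len(tod) // chunk_len
    chunk_idx = np.repeat(np.arange(n_chunks), chunk_len)
    offsets = np.zeros(n_chunks)
    hits = np.bincount(pix, minlength=n_pix)          # samples per pixel
    for _ in range(n_iter):
        # Bin the offset-cleaned data into a sky map.
        sky = np.bincount(pix, weights=tod - offsets[chunk_idx],
                          minlength=n_pix) / hits
        # Re-fit one baseline offset per chunk from the map residuals.
        offsets = np.bincount(chunk_idx, weights=tod - sky[pix]) / chunk_len
        offsets -= offsets.mean()  # remove the offset/monopole degeneracy
    return sky, offsets

# Toy data: 4 sky pixels, 4 chunks of 100 samples each, noiseless.
rng = np.random.default_rng(0)
true_sky = np.array([0.0, 1.0, -0.5, 2.0])
true_off = np.array([0.5, -0.5, 1.0, -1.0])           # zero-mean baselines
pix = rng.integers(0, 4, size=400)                    # well-crossed pointing
tod = true_sky[pix] + np.repeat(true_off, 100)
sky, off = destripe(tod, pix, n_pix=4, chunk_len=100)
```

With noiseless data and pointing that crosses every pixel from several chunks, the alternating scheme recovers the sky and the offsets up to the overall constant fixed by the zero-mean convention; real destripers solve the same least-squares problem for far longer data streams with correlated noise.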
Abstract:
Forecasting the daily flow in the general planning of municipal water supply and sewerage works.
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model, and in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than points farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the formed sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours. Nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. Even CART could be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
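The neighbourhood correction described above can be sketched as follows. This is a simplified k-nearest-neighbour mean-residual correction, not the full variogram-based kriging used in the thesis, and all names and data are illustrative:

```python
import numpy as np

def knn_residual_correction(xy_obs, resid_obs, xy_new, pred_new, k=30):
    """Correct global-model predictions at new points by adding the mean
    residual of the k nearest observations (Euclidean distance), mimicking
    a local trend correction with a fixed-size neighbourhood."""
    out = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        d = np.linalg.norm(xy_obs - p, axis=1)   # Euclidean distances
        nearest = np.argsort(d)[:k]              # indices of k nearest obs
        out[i] = pred_new[i] + resid_obs[nearest].mean()
    return out

# Illustration: the global model under-predicts by 1 on the left side of
# the area and over-predicts by 1 on the right.
xy_obs = np.array([[-5., 0.], [-4., 0.], [-6., 0.],
                   [ 5., 0.], [ 4., 0.], [ 6., 0.]])
resid_obs = np.array([1., 1., 1., -1., -1., -1.])
local = knn_residual_correction(xy_obs, resid_obs,
                                np.array([[-5., 0.5]]), np.array([10.0]), k=3)
print(local)  # the left-side prediction 10.0 is corrected to 11.0
```

Unlike kriging, this sketch weights all k neighbours equally; a variogram would instead down-weight the more distant neighbours, and the study's finding that more than 30 neighbours gave stable estimates corresponds to the choice of `k`.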
Abstract:
All protein-encoding genes in eukaryotes are transcribed into messenger RNA (mRNA) by RNA Polymerase II (RNAP II), whose activity therefore needs to be tightly controlled. An important and only partially understood level of regulation is the multiple phosphorylation of the RNAP II large subunit C-terminal domain (CTD). Sequential phosphorylations regulate transcription initiation and elongation, and recruit factors involved in the co-transcriptional processing of mRNA. Based largely on studies in yeast models and in vitro, the kinase activity responsible for the phosphorylation of the serine-5 (Ser5) residues of the RNAP II CTD has been attributed to the Mat1/Cdk7/CycH trimer as part of Transcription Factor IIH (TFIIH). However, due to the lack of good mammalian genetic models, studies of the roles of both RNAP II Ser5 phosphorylation and the TFIIH kinase in transcription have yielded ambiguous results, and the in vivo kinase of Ser5 has remained elusive. The primary objective of this study was to elucidate the role of mammalian TFIIH, and specifically its Mat1 subunit, in CTD phosphorylation and general RNAP II-mediated transcription. The approach utilized the Cre-LoxP system to conditionally delete murine Mat1 in cardiomyocytes and hepatocytes in vivo and in cell culture models. The results identify the TFIIH kinase as the major mammalian Ser5 kinase and demonstrate its requirement for general transcription, as shown by nascent mRNA labeling. A role for Mat1 in regulating general mRNA turnover was also identified, providing a possible rationale for earlier negative findings. A secondary objective was to identify potential gene- and tissue-specific roles of Mat1 and the TFIIH kinase through the use of tissue-specific Mat1 deletion. Mat1 was found to be required for the transcriptional function of PGC-1 in cardiomyocytes.
Transcriptional activation of lipogenic SREBP1 target genes following Mat1 deletion in hepatocytes revealed a repressive role for Mat1, apparently mediated via the co-repressor DMAP1 and the DNA methyltransferase Dnmt1. Finally, Mat1 and Cdk7 were also identified as negative regulators of adipocyte differentiation through the inhibitory phosphorylation of peroxisome proliferator-activated receptor (PPAR) γ. Together, these results demonstrate gene- and tissue-specific roles for the Mat1 subunit of TFIIH and open up new therapeutic possibilities in the treatment of diseases such as type II diabetes, hepatosteatosis and obesity.
Abstract:
Positron emission tomography (PET) is a molecular imaging technique that utilises radiopharmaceuticals (radiotracers) labelled with a positron-emitting radionuclide, such as fluorine-18 (18F). Development of a new radiotracer requires an appropriate radiosynthesis method, the most common of which with 18F is nucleophilic substitution with the [18F]fluoride ion. The success of the labelling reaction depends on various factors, such as the reactivity of [18F]fluoride and the structure of the target compound, in addition to the chosen solvent. The overall radiosynthesis procedure must be optimised in terms of radiochemical yield and the quality of the final product. Therefore, both quantitative and qualitative radioanalytical methods are essential in developing radiosynthesis methods. Furthermore, the biological properties of a tracer candidate need to be evaluated in various pre-clinical studies in animal models. In this work, the feasibility of various nucleophilic 18F-fluorination strategies was studied, and a labelling method for a novel radiotracer, N-3-[18F]fluoropropyl-2beta-carbomethoxy-3beta-(4-fluorophenyl)nortropane ([18F]beta-CFT-FP), was optimised. The effect of the solvent was studied by labelling a series of model compounds, 4-(R1-methyl)benzyl R2-benzoates. 18F-Fluorination reactions were carried out both in polar aprotic and in protic solvents (tertiary alcohols). The 18F-fluorinated products were assessed by mass spectrometry (MS) in addition to conventional radiochromatographic methods, using the radiosynthesis of 4-[18F]fluoro-N-[2-[1-(2-methoxyphenyl)-1-piperazinyl]ethyl]-N-2-pyridinyl-benzamide (p-[18F]MPPF) as a model reaction. Labelling of [18F]beta-CFT-FP was studied using two 18F-fluoroalkylation reagents, [18F]fluoropropyl bromide and [18F]fluoropropyl tosylate, as well as by direct 18F-fluorination of a sulfonate ester precursor.
Subsequently, the suitability of [18F]beta-CFT-FP for imaging the dopamine transporter (DAT) was evaluated by determining its biodistribution in rats. The results showed that protic solvents can be useful co-solvents in aliphatic 18F-fluorinations, especially in the labelling of sulfonate esters. Aromatic 18F-fluorination was not promoted in tert-alcohols. The sensitivity of the ion trap MS was sufficient for the qualitative analysis of the 18F-labelled products; p-[18F]MPPF was identified from the isolated product fraction with a mass-to-charge (m/z) ratio of 435 (i.e. the protonated molecule [M+H]+). [18F]beta-CFT-FP was produced most efficiently via [18F]fluoropropyl tosylate, giving sufficient radiochemical yield and specific radioactivity for PET studies. The ex vivo studies in rats showed fast kinetics as well as specific uptake of [18F]beta-CFT-FP in the DAT-rich brain regions. Thus, it was concluded that [18F]beta-CFT-FP has potential as a radiotracer for imaging DAT with PET.