42 results for SAMI
Abstract:
The aim of the study was to compare the effects of physical exercise and bright light on mood in healthy, working-age subjects with varying degrees of depressive symptoms. Previous research suggests that exercise may have beneficial effects on mood, at least in subjects with depression. Bright light exposure is an effective treatment for winter depression, and possibly for non-seasonal depression as well. Limited data exist on the effect of exercise and bright light on mood in non-clinical populations, and no research has been done on the combination of these interventions. Working-age subjects were recruited through occupational health centres, and 244 subjects were randomized into intervention groups: exercise in either bright light or normal lighting, and relaxation/stretching sessions in either bright light or normal gym lighting. During the eight-week intervention in midwinter, subjects rated their mood using a self-rating version of the Hamilton Depression Scale with additional questions for atypical depressive symptoms. The main finding of the study was that both exercise and bright-light exposure were effective in treating depressive symptoms. When the interventions were combined, the relative reduction in the Hamilton Depression Scale score was 40-66%, and in atypical depressive symptoms even higher, 45-85%. Bright light exposure was more effective than exercise in treating atypical depressive symptoms. No single factor was found that would predict a good response to these interventions. In conclusion, aerobic physical exercise twice a week during wintertime was effective in treating depressive symptoms. Adding bright light exposure to exercise increased the benefit, especially by reducing atypical depressive symptoms. This treatment could therefore help prevent subsequent major depressive episodes in the general population.
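The relative reductions quoted above follow from simple arithmetic on baseline and follow-up scores; a minimal sketch with hypothetical scores (the abstract does not give the actual trial scores):

```python
def relative_reduction(baseline, followup):
    """Relative reduction (%) of a depression-scale score from baseline to follow-up."""
    return 100.0 * (baseline - followup) / baseline

# Hypothetical numbers for illustration only (not from the study): a baseline
# score of 15 falling to 6 is a 60% relative reduction, inside the 40-66%
# range reported for the combined interventions.
print(relative_reduction(15, 6))  # → 60.0
```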
Abstract:
Cardiovascular diseases (CVD) are the leading cause of mortality in developed countries. The majority of premature deaths and disability caused by CVD are due to atherosclerosis, a degenerative inflammatory disease affecting arterial walls. Early identification of lesions and initiation of treatment are crucial, because the first manifestations are quite often major disabling cardiovascular events. Methods of finding individuals at high risk for these events are under development. Because magnetic resonance imaging (MRI) is an excellent non-invasive tool for studying the structure and function of the vascular system, we sought to discover whether existing MRI methods can show any difference in aortic and intracranial atherosclerotic lesions between patients at high risk for atherosclerosis and healthy controls. Our younger group (age 6-48) comprised 39 symptomless familial hypercholesterolemia (FH) patients and 25 healthy controls. Our older group (age 48-64) comprised 19 FH patients and 18 type 2 diabetes mellitus (DM) patients with coronary heart disease (CHD), and 29 healthy controls. Intracranial and aortic MRI was compared with carotid and femoral ultrasound (US). In neither age group did MRI reveal any difference in the number of ischemic brain lesions or white matter hyperintensities (WMHIs) - possible signs of intracranial atherosclerosis - between patients and controls. Furthermore, MRI showed no difference in the structure or function of the aorta between FH patients and controls in either group. DM patients had lower aortic compliance than did controls, while no difference appeared between DM and FH patients. However, ultrasound showed a greater plaque burden and increased thickness of the carotid arterial walls in FH and DM patients in both age groups, suggesting more advanced atherosclerosis. The mortality of FH patients has decreased substantially since the late 1980s, when statin treatment became available.
With statins, the progression of atherosclerotic lesions slows. We think that this, together with improvements in the treatment of other risk factors, is one reason for the lack of differences between FH patients and controls in MRI measurements of the aorta and brain, despite the more advanced disease of the carotid arteries assessed with US. Furthermore, although atherosclerotic lesions in different vascular territories correlate, differences may still exist in the extent and location of these lesions among different diseases. Small (<5 mm in diameter) WMHIs are more likely a phenomenon related to aging, but the larger ones may be related to CVD and may be intermediate surrogates of stroke. The image quality in aortic imaging, although constantly improving, is not yet optimal and is thus a source of bias.
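The compliance comparison reported for the DM patients rests on a standard definition: compliance relates the pulse-driven change in aortic cross-sectional area to the pulse pressure. A minimal sketch under that usual definition (the study's exact formula and units are not given in the abstract):

```python
def aortic_compliance(area_systole_mm2, area_diastole_mm2, pulse_pressure_mmHg):
    """Local aortic compliance: change in cross-sectional area per unit pulse
    pressure (mm^2/mmHg). Lower values indicate a stiffer aorta."""
    return (area_systole_mm2 - area_diastole_mm2) / pulse_pressure_mmHg

# Illustrative numbers only: a 40 mm^2 systolic-diastolic area change over a
# 40 mmHg pulse pressure gives a compliance of 1.0 mm^2/mmHg.
print(aortic_compliance(640.0, 600.0, 40.0))  # → 1.0
```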
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range of about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
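The inertial-oscillation mechanism invoked for the Gulf of Finland LLJ has a characteristic timescale set by the Coriolis parameter. A short sketch of that standard calculation (the latitude is an assumption, chosen as roughly that of the Gulf of Finland):

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def inertial_period_hours(latitude_deg):
    """Inertial oscillation period T = 2*pi / f, with f = 2*Omega*sin(lat)."""
    f = 2.0 * OMEGA * math.sin(math.radians(latitude_deg))
    return 2.0 * math.pi / f / 3600.0

# At ~60 N the inertial period is close to 14 hours, which sets the timescale
# on which a wind maximum can develop after daytime frictional decoupling.
print(round(inertial_period_hours(60.0), 1))  # → 13.8
```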
Abstract:
Arguments arising from quantum mechanics and gravitation theory, as well as from string theory, indicate that the description of space-time as a continuous manifold is not adequate at very short distances. An important candidate for the description of space-time at such scales is noncommutative space-time, where the coordinates are promoted to noncommuting operators. Thus, the study of quantum field theory in noncommutative space-time provides an interesting interface where ordinary field-theoretic tools can be used to study the properties of quantum space-time. The three original publications in this thesis encompass various aspects of the still-developing area of noncommutative quantum field theory, ranging from fundamental concepts to model building. One of the key features of noncommutative space-time is the apparent loss of Lorentz invariance, which has been addressed in different ways in the literature. One recently developed approach is to eliminate the Lorentz-violating effects by integrating over the parameter of noncommutativity. Fundamental properties of such theories are investigated in this thesis. Another issue addressed is model building, which is difficult in the noncommutative setting due to severe restrictions on the possible gauge symmetries imposed by the noncommutativity of the space-time. Possible ways to relieve these restrictions are investigated and applied, and a noncommutative version of the Minimal Supersymmetric Standard Model is presented. While putting the results obtained in the three original publications into their proper context, the introductory part of this thesis aims to provide an overview of the present situation in the field.
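The promotion of coordinates to noncommuting operators mentioned above is conventionally written in terms of a constant antisymmetric parameter; the standard Moyal-type relations (a common convention in the literature, not necessarily the exact one used in the thesis) are:

```latex
% Canonical noncommutativity of the coordinate operators:
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu},
\qquad \theta^{\mu\nu} = -\theta^{\nu\mu} \ \text{constant}.

% Field theories on such a space are often realized on ordinary functions
% by replacing pointwise multiplication with the Moyal star product:
(f \star g)(x) = f(x)\,
\exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,
\theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x).
```

The parameter of noncommutativity referred to in the text is this tensor theta, and integrating over it is the approach used to restore Lorentz invariance on average.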
Abstract:
Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry.
Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively high level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects depends heavily on the form of the curvaton potential. Future observations that provide more accurate information about the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
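The connection between the curvaton's energy share and the level of non-Gaussianity can be made explicit. In the simplest case of a quadratic curvaton potential, the nonlinearity parameter scales inversely with the curvaton's fractional energy density at decay (a standard result quoted here for orientation; the thesis studies how this changes for other forms of the potential):

```latex
% r_dec: fraction of the total energy density carried by the curvaton
% at the time of its decay. For a quadratic potential and r_dec << 1,
f_{\mathrm{NL}} \simeq \frac{5}{4\, r_{\mathrm{dec}}},
% so a curvaton that is subdominant at decay generically produces large
% non-Gaussianity, which observations of the perturbation statistics
% can use to constrain the scenario.
```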
Abstract:
Neuroblastoma has successfully served as a model system for the identification of neuroectoderm-derived oncogenes. However, in spite of various efforts, only a few clinically useful prognostic markers have been found. Here, we present a framework that integrates DNA, RNA and tissue data to identify and prioritize genetic events representing clinically relevant new therapeutic targets and prognostic biomarkers for neuroblastoma.
Abstract:
Aneuploidy is among the most obvious differences between normal and cancer cells. However, the mechanisms contributing to the development and maintenance of aneuploid cell growth are diverse and incompletely understood. Functional genomics analyses have shown that aneuploidy in cancer cells is correlated with diffuse gene expression signatures and that aneuploidy can arise by a variety of mechanisms, including cytokinesis failures, DNA endoreplication and possibly polyploid intermediate states. Here, we used a novel cell spot microarray technique to identify genes whose loss of function induces polyploidy and/or allows maintenance of polyploid growth in breast cancer cells. Integrative genomics profiling of the candidate genes highlighted GINS2 as a potential oncogene frequently overexpressed in clinical breast cancers as well as in several other cancer types. Multivariate analysis indicated GINS2 to be an independent prognostic factor for breast cancer outcome (p = 0.001). Suppression of GINS2 expression effectively inhibited breast cancer cell growth and induced polyploidy. In addition, protein-level detection of nuclear GINS2 accurately distinguished actively proliferating cancer cells, suggesting potential use as an operational biomarker.
Abstract:
Of the factors affecting the quality of Scots pine, this study examined which factors cause branchiness, crookedness (including sweep) and forking in young pines. Three models at different levels of detail were constructed for each quality defect. The main focus was on the factors causing branchiness, because branchiness affects the internal knottiness of the pine, which is the most common characteristic determining the quality grade of a piece of sawn timber. In addition, crookedness and especially forking proved to occur rather randomly, which makes it difficult to determine the factors affecting their development. The study used data from the first and third measurement rounds of the sapling-stand inventory experiments (TINKA experiments) of the Finnish Forest Research Institute. The interval between these measurement rounds was 15 years. Being based on more than one measurement round distinguishes this study from many other studies of pine quality, which have relied on cross-sectional data from a single measurement. On the basis of the three models, the development of branchiness and crookedness in pines can be estimated moderately well from basic stand information alone (regeneration method, site type, temperature sum), which the forest owner already knows when establishing the sapling stand. Tree-level measurements clearly improve the estimate of whether a pine will become branchy or not. By contrast, tree-level measurements improve the estimate of crookedness development only slightly. No model that reliably predicts the occurrence of forking could be constructed. According to the models, the factors increasing branchiness included large diameter growth of the pine, large height relative to the other trees in the same stand, low stand density in the early development stage of the pine sapling stand, and the omission of sapling-stand tending. The factors increasing crookedness included a high temperature sum, i.e. location of the pine stand in southern Finland, large diameter growth of the pine, small relative height, low stand density in the early development stage of the pine sapling stand, and the omission of sapling-stand tending. Of the different regeneration methods, sowing proved best for quality with respect to both branchiness and crookedness. The factors increasing forking included a low temperature sum, i.e. location of the pine stand in northern Finland, large diameter growth of the pine, and low stand density in the early development stage of the pine sapling stand.
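Models of the kind described above, which predict a binary quality defect (e.g. branchy vs. not branchy) from stand-level and tree-level variables, are typically fitted as logistic regressions. A minimal sketch on synthetic data (variable names, coefficients and the sign pattern are illustrative assumptions only, not the study's fitted models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictors: temperature sum (degree days), height relative to the
# stand, and stand density (stems/ha) -- stand- and tree-level variables of
# the kind used in the study, with made-up values, centred and scaled.
n = 500
temp_sum = rng.uniform(800, 1400, n)
rel_height = rng.uniform(0.6, 1.4, n)
density = rng.uniform(1000, 5000, n)
X = np.column_stack([np.ones(n),
                     (temp_sum - 1100) / 300,
                     rel_height - 1.0,
                     (density - 3000) / 2000])

# Synthetic outcome: branchiness made more likely for relatively tall trees in
# sparse stands (sign pattern chosen to mirror the reported findings).
logit = -0.2 + 1.0 * X[:, 2] - 0.8 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Fit by plain gradient ascent on the log-likelihood (no external fitter).
beta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / n

p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
print(beta.round(2), round(float(p_hat.mean()), 2))
```

A tree-level model in the study's sense simply adds predictors such as relative height on top of the stand-level ones, which is why it can sharpen the branchiness prediction.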
Abstract:
ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at LHC energies. The ALICE Inner Tracking System (ITS) consists of six cylindrical layers of silicon detectors using three different technologies; in the outward direction: two layers of pixel detectors, two layers of drift detectors and two layers of strip detectors. The number of parameters to be determined in the spatial alignment of the 2198 sensor modules of the ITS is about 13,000. The target alignment precision is well below 10 microns in some cases (pixels). The sources of alignment information include survey measurements and reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach. An iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008, with the ALICE solenoidal magnet switched off.
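The Millepede approach mentioned above solves one large least-squares problem in which global (alignment) and local (track) parameters are fitted simultaneously. A toy sketch of the underlying idea, estimating a single module offset from straight-track residuals, under heavily simplified assumptions (one global parameter, straight tracks, three layers, no Millepede library):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: straight tracks y = a + b*z cross three detector layers at fixed
# z positions; the middle layer carries a constant misalignment offset.
z_layers = np.array([0.0, 10.0, 20.0])
true_offset = 0.3          # misalignment of layer 1 (the "global" parameter)
sigma = 0.05               # hit resolution
offsets = np.array([0.0, true_offset, 0.0])

# Fit each track locally from the two layers assumed well aligned, then
# accumulate the residuals of the suspect layer; their mean estimates the
# global offset (the 1-parameter analogue of the Millepede normal equations).
num, den = 0.0, 0.0
for _ in range(200):
    a, b = rng.uniform(-1, 1), rng.uniform(-0.1, 0.1)
    hits = a + b * z_layers + offsets + rng.normal(0, sigma, 3)
    coef = np.polyfit(z_layers[[0, 2]], hits[[0, 2]], 1)   # local track fit
    predicted = np.polyval(coef, z_layers[1])
    num += hits[1] - predicted
    den += 1.0
print(round(num / den, 2))  # estimated offset of layer 1, close to 0.3
```

The real problem couples ~13,000 global parameters with the local parameters of every track, which is what the Millepede machinery is built to handle.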
Abstract:
Measurements of inclusive charged-hadron transverse-momentum and pseudorapidity distributions are presented for proton-proton collisions at sqrt(s) = 0.9 and 2.36 TeV. The data were collected with the CMS detector during the LHC commissioning in December 2009. For non-single-diffractive interactions, the average charged-hadron transverse momentum is measured to be 0.46 +/- 0.01 (stat.) +/- 0.01 (syst.) GeV/c at 0.9 TeV and 0.50 +/- 0.01 (stat.) +/- 0.01 (syst.) GeV/c at 2.36 TeV, for pseudorapidities between -2.4 and +2.4. At these energies, the measured pseudorapidity densities in the central region, dN(charged)/d(eta) for |eta|
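The pseudorapidity used in the distributions above is a standard function of the polar angle, eta = -ln tan(theta/2), and together with the transverse momentum it is computed directly from the momentum components. A minimal sketch of these textbook definitions (generic kinematics, not CMS-specific code):

```python
import math

def pseudorapidity(px, py, pz):
    """eta = -ln tan(theta/2), theta being the polar angle w.r.t. the beam (z) axis."""
    p = math.sqrt(px**2 + py**2 + pz**2)
    theta = math.acos(pz / p)
    return -math.log(math.tan(theta / 2.0))

def transverse_momentum(px, py):
    """pT = sqrt(px^2 + py^2), the momentum component transverse to the beam."""
    return math.sqrt(px**2 + py**2)

# A particle with pz = pT * sinh(2.4) sits exactly at the |eta| = 2.4 edge
# of the measured acceptance quoted above.
print(round(pseudorapidity(1.0, 0.0, math.sinh(2.4)), 3))  # → 2.4
```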