21 results for Lyapunov coefficient

in Helda - Digital Repository of University of Helsinki


Relevance:

10.00%

Publisher:

Abstract:

The aim of the study was to explore why the MuPSiNet project - a computer and network supported learning environment for the field of health care and social work - did not develop as expected. To grasp the problem, some hypotheses were formulated. The hypotheses regarded the teachers' skills in and attitudes towards computing and their attitudes towards constructivist study methods. An online survey containing 48 items was performed. The survey targeted all the teachers within the field of health care and social work in the country, and it produced 461 responses that were analysed against the hypotheses. The reliability of the variables was tested using the Cronbach alpha coefficient and t-tests. Poor basic computing skills among the teachers combined with a vulnerable technical solution, and inadequate project management combined with a lack of administrative models for transforming economic resources into manpower, were the factors that turned out to play a decisive role in the project. Other important findings were that the teachers had rather poor skills and knowledge in computing, computer safety and computer supported instruction, and that these skills were significantly poorer among female teachers, who were in the majority in the sample. The fraction of teachers who were familiar with software for electronic patient records (EPR) was low. The attitudes towards constructivist teaching methods were positive, and further education seemed to further increase the teachers' readiness to use alternative teaching methods. The most important conclusions were the following: In order to integrate EPR software as a natural tool when teaching the planning and documentation of health care, it is crucial that the teachers have sufficient basic skills in computing and that more teachers have personal experience of using EPR software. In order for computer supported teaching to become accepted, it is necessary to arrange extensive further education for the teachers presently working, and for that further education to succeed it should be backed up locally, among other things by sufficient support in matters concerning computer supported teaching. The attitudes towards computing showed significant gender differences. Based on the findings, it is suggested that basic skills in computing should also include an awareness of data safety in relation to work in different kinds of computer networks, and that projects of this kind should be built up around a proper project organisation with sufficient resources. Suggestions concerning curricular development and further education are also presented. Conclusions concerning the research method were that reminders have a greater effect, and that respondents tend to answer open-ended questions more verbosely, in electronically distributed online surveys than in traditional surveys. A method of utilising randomized passwords to guarantee respondent anonymity while maintaining sample control is presented. Keywords: computer-assisted learning, computer-assisted instruction, health care, social work, vocational education, computerized patient record, online survey
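
For reference, the internal-consistency reliability reported above is conventionally computed with Cronbach's alpha; the standard textbook form of the coefficient (not specific to this survey) is

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)

where k is the number of items, \sigma_i^{2} the variance of item i, and \sigma_X^{2} the variance of the summed scale.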

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and profits above average are impossible in the long run. The main research question is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, which includes an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore not a random walk. In order to find out the extent of predictability, the price series is modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that the predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfill the requirements of a random walk. A market review supports the conclusion, but it is clear that the market is still maturing.
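
As an illustration of the two weak-form efficiency checks named above (autocorrelation and a variance ratio), the sketch below computes simplified versions of both on a return series. It is a generic outline using simulated data standing in for daily EUA closing prices, not the thesis's own code, data, or bias-corrected test statistics.

import numpy as np

def lag1_autocorrelation(returns: np.ndarray) -> float:
    """Sample lag-1 autocorrelation of the return series (zero under a random walk)."""
    r = returns - returns.mean()
    return float(np.sum(r[1:] * r[:-1]) / np.sum(r * r))

def variance_ratio(returns: np.ndarray, q: int) -> float:
    """VR(q) = Var(q-period returns) / (q * Var(1-period returns)); close to 1 under a random walk."""
    r = returns - returns.mean()
    var1 = np.var(r, ddof=1)
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period return sums
    varq = np.var(rq, ddof=1)
    return float(varq / (q * var1))

# Hypothetical price series (assumption: 500 daily closes simulated as a random walk).
prices = 20.0 * np.cumprod(1 + 0.01 * np.random.default_rng(0).standard_normal(500))
returns = np.diff(np.log(prices))
print(lag1_autocorrelation(returns), variance_ratio(returns, q=5))

A statistically significant deviation of either quantity from its random-walk value would point to the kind of weak autocorrelation the thesis reports.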

Relevance:

10.00%

Publisher:

Abstract:

Photosynthesis is a chemical process in which the energy of light quanta is transformed into chemical energy. Chlorophyll (Chl) molecules play a key role in photosynthesis; they function in the antenna systems and in the photosynthetic reaction center where the primary charge separation (CS) takes place. Bio-inspired mimicry of the CS is an essential unit in dye-sensitized solar cells. The aim of this study was to design and develop electron donor-acceptor (EDA) pairs from Chls and fullerenes (C60) or carbon nanotubes (CNT). The supramolecular approach was chosen, as the long synthetic sequences required by the covalent approach lead to long reaction schemes and low yields. Here, a π-interaction between soluble CNTs and Chl was used in EDA construction. Also, a beta-face selective, two-point bound Chl-C60 EDA dyad was introduced. In addition, the photophysical properties of the supramolecular EDA dyads were analyzed. In organic chemistry, nuclear magnetic resonance (NMR) spectroscopy is the most vital analytical technique in use. Multi-dimensional NMR experiments have enabled the structural analysis of complex natural products and proteins. However, NMR still faces difficulties in mixture analysis. In many cases overlapping signals cannot be resolved even with the help of multi-dimensional experiments. In this work, an NMR tool based on simple host-guest chemistry between analytes and macromolecules was developed. Diffusion ordered NMR spectroscopy (DOSY) measures the mobilities of compounds in an NMR sample. In a liquid state NMR sample, each analyte has a characteristic diffusion coefficient, which is inversely related to the size of the analyte. With a normal DOSY experiment, provided that the diffusion coefficients of the analytes differ enough, individual spectra of the analytes can be extracted. When similar sized analytes differ chemically, an additive can be introduced into the sample. Since macromolecules in a liquid state NMR sample can be considered practically stationary, even a faint supramolecular interaction can change the diffusion coefficient of the analyte sufficiently for a successful resolution in DOSY. In this thesis, polyvinylpyrrolidone- and polyethyleneglycol-enhanced DOSY NMR techniques, which enable mixture analysis of natural products that are similar in size but chemically different, are introduced.
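
The link between diffusion coefficient and analyte size that DOSY exploits is usually rationalised with the Stokes-Einstein relation (a standard result quoted here for orientation, not a formula from the thesis):

D = \frac{k_B T}{6 \pi \eta \, r_H}

where \eta is the solvent viscosity and r_H the hydrodynamic radius; transient binding to a slowly diffusing polymer additive lowers the apparent D of the bound analyte and thereby separates it in the diffusion dimension.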

Relevance:

10.00%

Publisher:

Abstract:

This study brings new insights into the magmatic evolution of natural F-enriched peraluminous granitic systems. The Artjärvi, Sääskjärvi and Kymi granite stocks within the 1.64 Ga Wiborg rapakivi granite batholith have been investigated by petrographic, geochemical, experimental and melt inclusion methods. These stocks represent late-stage leucocratic and weakly peraluminous intrusive phases typical of rapakivi granites worldwide. The Artjärvi and Sääskjärvi stocks are multiphase intrusions in which the most evolved phase is topaz granite. The Kymi stock contains topaz throughout and has a well-developed zoned structure, from the rim to the center: stockscheider pegmatite, equigranular topaz granite, and porphyritic topaz granite. Geochemically the topaz granites are enriched in F, Li, Be, Ga, Rb, Sn and Nb and depleted in Mg, Fe, Ti, Ba, Sr, Zr and Eu. The anomalous geochemistry and mineralogy of the topaz granites are essentially magmatic in origin; postmagmatic reactions have only slightly modified the compositions. The Kymi equigranular topaz granite shows the most evolved character, and the topaz granites at Artjärvi and Sääskjärvi resemble the less evolved porphyritic topaz granite of the Kymi stock. Stockscheiders are found at the roof contacts of the Artjärvi and Kymi stocks. The stockscheider at Artjärvi is composed of biotite-rich schlieren and pegmatite layers parallel to the contact. The schlieren layering is considered to have formed by a velocity-gradient sorting mechanism parallel to the flow, which led to the accumulation of mafic minerals along the upper contact of the topaz granite. Cooling and contraction of the topaz granite formed fractures parallel to the roof contact, and residual pegmatite magmas were injected along the fractures, forming the pegmatite layers. The zoned structure of the Kymi stock is the result of intrusion of highly evolved residual melt from deeper parts of the magma chamber along the fractured contact between the porphyritic granite crystal mush and the country rock. The equigranular topaz granite and marginal pegmatite (stockscheider) crystallized from this evolved melt. Phase relations of the Kymi equigranular topaz granite have been investigated utilizing crystallization experiments at 100 to 500 MPa as a function of water activity and F content. Fluorite and topaz can crystallize as liquidus phases in F-rich peraluminous systems, but the F content of the melt should exceed 2.5 - 3.0 wt % to facilitate crystallization of topaz. In peraluminous F-bearing melts containing more than 1 wt % F, topaz and muscovite are expected to be the first F-bearing phases to crystallize at high pressure, whereas fluorite and topaz should crystallize first at low pressure. Overall, the saturation of fluorite and topaz follows the reaction: CaAl2Si2O8 (plagioclase) + 2[AlF3]melt = CaF2 (fluorite) + 2Al2SiO4F2 (topaz). The partition coefficient obtained for F between biotite and glass, D(F)Bt/glass, ranges from 0.80 to 1.89 (average 1.29) and can be used as an empirical fluormeter to determine the F content of coexisting melts. In order to study the magmatic evolution of the Kymi stock, crystallized melt inclusions in quartz and topaz grains in the porphyritic and the equigranular topaz granites and the marginal pegmatite were rehomogenized and analyzed. The homogenization conditions for the melt inclusions from the granites were 700 °C, 300 MPa, and 24 h, and for melt inclusions from the pegmatite, 700 °C, 100 MPa, and 24/96 h.
The majority of the melt inclusions are chemically similar to the bulk rocks (excluding H2O content), but a few melt inclusions in the equigranular granite show clearly higher F and lower K2O contents (on average 11.6 wt % F, 0.65 wt % K2O). The melt inclusion compositions indicate the coexistence of two melt fractions, a prevailing peraluminous melt and a very volatile-rich, possibly peralkaline, melt. Combined petrological, experimental and melt inclusion studies of the Kymi equigranular topaz granite indicate that plagioclase was the liquidus phase at nearly water-saturated (fluid-saturated) conditions and that the F content of the melt was at least 2 wt %. The early crystallization of biotite and the presence of muscovite in crystallization experiments at 200 MPa contrast with the late-stage crystallization of biotite and the absence of muscovite in the equigranular granite, indicating that the crystallization pressure may have been lower than 200 MPa for the granite.
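
As a worked illustration of the empirical fluormeter, using the average partition coefficient quoted above (the biotite F content below is a hypothetical input, not a value from the thesis):

C_F^{melt} = \frac{C_F^{Bt}}{D(F)^{Bt/glass}}, \qquad \text{e.g. } \frac{1.3\ \mathrm{wt\,\%}}{1.29} \approx 1.0\ \mathrm{wt\,\%\ F}

in the coexisting melt.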

Relevance:

10.00%

Publisher:

Abstract:

The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane have an atomic decomposition. However, a decomposition resulting from a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain into which it has been mapped. The lattice of points at which the atoms of the decomposition are evaluated usually follows the geometry of the original domain, but after mapping the domain into another this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain, but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex space LV-infinity of measurable functions and its closed subspace HV-infinity of analytic functions on the unit disk. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, the space HV-infinity is the smallest possible substitute for the space H-infinity of bounded analytic functions. In the second article we extend the above result to a smoothly bounded strictly pseudoconvex domain. Here the related reproducing kernels are usually not known explicitly, and thus the proof of continuity of the Bergman projection is based on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown by using peaking functions first constructed by Bell (1981). Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. This result is generalised in the third article by constructing an atomic decomposition for the space HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be presented as a linear combination of atoms such that the coefficient sequence belongs to a suitable Köthe co-echelon space.
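
Schematically, an atomic decomposition in the Coifman-Rochberg spirit represents every f in the weighted Bergman space A^p_\alpha of the unit disk as

f(z) = \sum_{k} c_k \, \frac{(1-|a_k|^2)^{(pb-2-\alpha)/p}}{(1-\bar{a}_k z)^{b}}, \qquad (c_k) \in \ell^p,

for a suitable lattice (a_k) and sufficiently large exponent b. The normalisation of the atoms varies between sources, and the articles summarised here work with more general domains and weight systems; the display is given only as orientation, not as the precise statement proved in the thesis.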

Relevance:

10.00%

Publisher:

Abstract:

This work develops methods to account for shoot structure in models of coniferous canopy radiative transfer. Shoot structure, as it varies along the light gradient inside the canopy, affects the efficiency of light interception per unit needle area, foliage biomass, or foliage nitrogen. The clumping of needles in the shoot volume also causes a notable amount of multiple scattering of light within coniferous shoots. The effect of shoot structure on light interception is treated in the context of canopy-level photosynthesis and resource use models, and the phenomenon of within-shoot multiple scattering in the context of physical canopy reflectance models for remote sensing purposes. Light interception. A method for estimating the amount of PAR (Photosynthetically Active Radiation) intercepted by a conifer shoot is presented. The method combines modelling of the directional distribution of radiation above the canopy, fish-eye photographs taken at shoot locations to measure canopy gap fraction, and geometrical measurements of shoot orientation and structure. Data on light availability, shoot and needle structure, and nitrogen content were collected from canopies of Pacific silver fir (Abies amabilis (Dougl.) Forbes) and Norway spruce (Picea abies (L.) Karst.). Shoot structure acclimated to the light gradient inside the canopy such that more shaded shoots had better light interception efficiency. Light interception efficiency of shoots varied about two-fold per needle area, about four-fold per needle dry mass, and about five-fold per nitrogen content. Comparison of fertilized and control stands of Norway spruce indicated that light interception efficiency is not greatly affected by fertilization. Light scattering. The structure of coniferous shoots gives rise to multiple scattering of light between the needles of the shoot. Using geometric models of shoots, multiple scattering was studied by photon tracing simulations. Based on the simulation results, the dependence of the scattering coefficient of a shoot on the scattering coefficient of its needles is shown to follow a simple one-parameter model. The single parameter, termed the recollision probability, describes the level of clumping of the needles in the shoot, is wavelength independent, and can be connected to previously used clumping indices. By using the recollision probability to correct for within-shoot multiple scattering, canopy radiative transfer models that have used leaves as basic elements can use shoots as basic elements, and thus be applied to coniferous forests. Preliminary testing of this approach seems to explain, at least partially, why coniferous forests appear darker than broadleaved forests in satellite data.
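
The one-parameter model referred to above is commonly written in the standard photon recollision probability form (quoted here as the generic formulation rather than verbatim from the thesis):

\omega_{shoot} = \frac{(1-p)\,\omega_{needle}}{1 - p\,\omega_{needle}}

where \omega denotes the scattering coefficient (single-scattering albedo) and p is the probability that a photon scattered by a needle interacts with the shoot again. Because p is a purely structural, wavelength-independent quantity, a single parameter suffices to carry the shoot-level correction into canopy reflectance models.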

Relevance:

10.00%

Publisher:

Abstract:

Aim: To characterize the inhibition of platelet function by paracetamol in vivo and in vitro, to evaluate the possible interaction of paracetamol with diclofenac or valdecoxib in vivo, and to assess the analgesic effect of the drugs in an experimental pain model. Methods: Healthy volunteers received increasing doses of intravenous paracetamol (15, 22.5 and 30 mg/kg), or the combination of paracetamol 1 g and diclofenac 1.1 mg/kg or valdecoxib 40 mg (as the pro-drug parecoxib). Inhibition of platelet function was assessed with photometric aggregometry, the platelet function analyzer (PFA-100), and release of thromboxane B2 (TxB2). Analgesia was assessed with the cold pressor test. The inhibition coefficient of platelet aggregation by paracetamol was determined, as was the nature of the interaction between paracetamol and diclofenac, by isobolographic analysis in vitro. Results: Paracetamol inhibited platelet aggregation and TxB2 release dose-dependently in volunteers and concentration-dependently in vitro. The inhibition coefficient was 15.2 mg/L (95% CI 11.8 - 18.6). Paracetamol augmented the platelet inhibition by diclofenac in vivo, and the isobole showed that this interaction is synergistic. Paracetamol showed no interaction with valdecoxib. The PFA-100 appeared insensitive for detecting platelet dysfunction caused by paracetamol, and the cold pressor test showed no analgesia. Conclusions: Paracetamol inhibits platelet function in vivo and shows synergism when combined with diclofenac. This effect may increase the risk of bleeding in surgical patients with an impaired haemostatic system. The combination of paracetamol and valdecoxib may be useful in patients with low risk for thromboembolism. The PFA-100 seems unsuitable for detection of platelet dysfunction and the cold pressor test seems unsuitable for detection of analgesia by paracetamol.
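
If the inhibition coefficient quoted above is read as a half-maximal inhibitory concentration (an interpretation assumed here, not stated explicitly in the abstract), the concentration dependence can be sketched with a standard inhibitory E_max model:

I(C) = \frac{I_{max}\, C}{IC_{50} + C}, \qquad IC_{50} \approx 15.2\ \mathrm{mg/L},

so that half of the maximal inhibition of aggregation would be expected at a paracetamol concentration of about 15 mg/L.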

Relevance:

10.00%

Publisher:

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye deteriorated from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. Coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3 - 0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
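
The coefficient of repeatability and the measurement-error standard deviation quoted above are linked by the usual Bland-Altman-type relation (a standard definition, not a quotation from the thesis):

CR = 1.96\sqrt{2}\, s_w \approx 2.77 \times 0.06\ \mathrm{logMAR} \approx 0.17\ \mathrm{logMAR},

which is consistent with the reported ±0.18 logMAR for all eyes.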

Relevance:

10.00%

Publisher:

Abstract:

Objectives: To evaluate the applicability of visual feedback posturography (VFP) for quantification of postural control, and to characterize the horizontal angular vestibulo-ocular reflex (AVOR) by use of a novel motorized head impulse test (MHIT). Methods: In VFP, subjects standing on a platform were instructed to move their center of gravity to symmetrically placed peripheral targets as fast and accurately as possible. The active postural control movements were measured in healthy subjects (n = 23), and in patients with vestibular schwannoma (VS) before surgery (n = 49), one month (n = 17), and three months (n = 36) after surgery. In MHIT we recorded head and eye position during motorized head impulses (mean velocity 170°/s, mean acceleration 1550°/s²) in healthy subjects (n = 22), and in patients with VS before surgery (n = 38) and about four months afterwards (n = 27). The gain, asymmetry and latency in MHIT were calculated. Results: The intraclass correlation coefficient for VFP parameters during repeated tests was significant (r = 0.78-0.96; p < 0.01), although two of the four VFP parameters improved slightly during five test sessions in controls. At least one VFP parameter was abnormal pre- and postoperatively in almost half the patients, and these abnormal preoperative VFP results correlated significantly with abnormal postoperative results. The mean accuracy of postural control in patients was reduced pre- and postoperatively. A significant side difference with VFP was evident in 10% of patients. In the MHIT, the normal gain was close to unity, the asymmetry in gain was within 10%, and the latency was 3.4 ± 6.3 milliseconds (mean ± standard deviation). Ipsilateral gain or asymmetry in gain was preoperatively abnormal in 71% of patients, whereas it was abnormal in every patient after surgery. Preoperative gain (mean ± 95% confidence interval) was significantly lowered to 0.83 ± 0.08 on the ipsilateral side compared to 0.98 ± 0.06 on the contralateral side. The ipsilateral postoperative mean gain of 0.53 ± 0.05 was significantly different from the preoperative gain. Conclusion: The VFP is a repeatable, quantitative method for assessing active postural control within individual subjects. The mean postural control in patients with VS was disturbed before and after surgery, although not severely. A side difference in postural control in the VFP was rare. The horizontal AVOR results in healthy subjects and in patients with VS, measured with the MHIT, were in agreement with published data obtained using other techniques with head impulse stimuli. The MHIT is a non-invasive method which allows reliable clinical assessment of the horizontal AVOR.
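
For orientation, the head-impulse quantities reported above are conventionally defined as follows (standard conventions; the thesis may use slightly different analysis windows or normalisations):

G = \frac{\text{eye velocity}}{\text{head velocity}}, \qquad \text{asymmetry} = 100\% \times \frac{|G_{contra} - G_{ipsi}|}{G_{contra} + G_{ipsi}},

so that a normal gain near unity and an asymmetry within 10% correspond to a symmetric, fully compensatory AVOR.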

Relevance:

10.00%

Publisher:

Abstract:

The objective of this study was to assess the utility of two subjective facial grading systems, to evaluate the etiologic role of human herpesviruses in peripheral facial palsy (FP), and to explore the characteristics of Melkersson-Rosenthal syndrome (MRS). Intrarater repeatability and interrater agreement were assessed for the Sunnybrook (SFGS) and House-Brackmann (H-B FGS) facial grading systems. Eight video-recorded FP patients were graded in two sittings by 26 doctors. Repeatability for SFGS was from good to excellent and agreement between doctors from moderate to excellent by intraclass correlation coefficient and coefficient of repeatability. For H-B FGS, repeatability was from fair to good and agreement from poor to fair by agreement percentage and kappa coefficients. Because SFGS was at least as good in repeatability as H-B FGS and showed more reliable results in agreement between doctors, we encourage the use of SFGS over H-B FGS. The etiologic role of human herpesviruses in peripheral FP was studied by searching for DNA of herpes simplex virus (HSV) -1 and -2, varicella-zoster virus (VZV), human herpesvirus (HHV) -6A, -6B, and -7, Epstein-Barr virus (EBV), and cytomegalovirus (CMV) by PCR/microarray methods in the cerebrospinal fluid (CSF) of 33 peripheral FP patients and 36 controls. Three patients and five controls had HHV-6 or -7 DNA in CSF. No DNA of HSV-1 or -2, VZV, EBV, or CMV was found. Detecting HHV-7 and dual HHV-6A and -6B DNA in the CSF of FP patients is intriguing, but does not allow etiologic conclusions as such. These DNA findings in association with FP and the other diseases that they accompanied require further exploration. MRS is classically defined as a triad of recurrent labial or oro-facial edema, recurrent peripheral FP, and plicated tongue. All three signs are present in a minority of patients. Edema-dominated forms are more common in the literature, while MRS with FP has received little attention. The etiology and true incidence of MRS are unknown. Characteristics of MRS were evaluated at the Departments of Otorhinolaryngology and Dermatology, focusing on patients with FP. There were 35 MRS patients, 20 of them with FP; these were mailed a questionnaire (17 answered) and clinically examined (14 patients). At the Department of Otorhinolaryngology, every MRS patient had FP and half had the triad form of MRS. Tissue biopsies taken from two patients during an acute edema episode revealed nonnecrotizing granulomatous findings typical of MRS, one of the patients without persisting edema and with symptoms for less than a year. Peripheral blood DNA was searched for gene mutations leading to UNC-93B protein deficiency, which predisposes to HSV-1 infections; no gene mutations were found. Edema in most MRS FP patients did not dominate the clinical picture, and no progression of the disease was observed, contrary to existing knowledge. At the Department of Dermatology, two patients had triad MRS and 15 had monosymptomatic granulomatous cheilitis with frequent or persistent edema and typical MRS tissue histology. The clinical picture of MRS varied according to the department where the patient was treated. More studies from otorhinolaryngology departments and on patients with FP would clarify the actual incidence and clinical picture of the syndrome.
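
The kappa coefficient used above for interrater agreement has the standard textbook form

\kappa = \frac{p_o - p_e}{1 - p_e},

where p_o is the observed and p_e the chance-expected proportion of agreement; values near zero indicate agreement no better than chance, which is why the "poor to fair" kappa results count against the House-Brackmann system here.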

Relevance:

10.00%

Publisher:

Abstract:

The superconducting (or cryogenic) gravimeter (SG) is based on the levitation of a superconducting sphere in a stable magnetic field created by current in superconducting coils. Depending on frequency, it is capable of detecting gravity variations as small as 10⁻¹¹ m s⁻². For a single event, the detection threshold is higher, conservatively about 10⁻⁹ m s⁻². Due to its high sensitivity and low drift rate, the SG is eminently suitable for the study of geodynamical phenomena through their gravity signatures. I present investigations of Earth dynamics with the superconducting gravimeter GWR T020 at Metsähovi from 1994 to 2005. The history and key technical details of the installation are given. The data processing methods and the development of the local tidal model at Metsähovi are presented. The T020 is part of the worldwide GGP (Global Geodynamics Project) network, which consists of 20 operating stations. The data of the T020 and of the other participating SGs are available to the scientific community. The SG T020 has been used as a long-period seismometer to study microseismicity and the Earth's free oscillations. The annual variation, spectral distribution, amplitude and sources of microseisms at Metsähovi are presented. Free oscillations excited by three large earthquakes were analyzed: the spectra, attenuation and rotational splitting of the modes. The lowest modes of all the different oscillation types are studied, i.e. the radial mode 0S0, the "football mode" 0S2, and the toroidal mode 0T2. The very low level (0.01 nm s⁻¹) incessant excitation of the Earth's free oscillations was detected with the T020. The recovery of global and regional variations in gravity with the SG requires the modelling of local gravity effects. The most important of these is hydrology. The variation in the groundwater level at Metsähovi, as measured in a borehole in the fractured bedrock, correlates significantly (0.79) with gravity. The influence of local precipitation, soil moisture and snow cover is detectable in the gravity record. The gravity effect of the variation in atmospheric mass and that of the non-tidal loading by the Baltic Sea were investigated together, as sea level and air pressure are correlated. Using Green's functions it was calculated that a 1 metre uniform layer of water in the Baltic Sea increases the gravity at Metsähovi by 31 nm s⁻² and produces a vertical deformation of -11 mm. The regression coefficient for sea level is 27 nm s⁻² m⁻¹, which is 87% of the uniform model. These studies were complemented by temporal height variations derived from the GPS data of the Metsähovi permanent station. The long time series at Metsähovi demonstrate the high quality of the data and the correctly applied offset and drift corrections. The superconducting gravimeter T020 has proved to be an excellent and versatile tool in studies of Earth dynamics.
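
The quoted loading figures are mutually consistent: the observed regression coefficient relative to the uniform-layer prediction is

\frac{27\ \mathrm{nm\,s^{-2}\,m^{-1}}}{31\ \mathrm{nm\,s^{-2}\,m^{-1}}} \approx 0.87,

i.e. 87% of the uniform Baltic Sea model, as stated above.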

Relevance:

10.00%

Publisher:

Abstract:

The Antarctic system comprises the continent itself, Antarctica, and the ocean surrounding it, the Southern Ocean. The system plays an important part in the global climate due to its size, its high latitude location and the negative radiation balance of its large ice sheets. Antarctica has also been in focus for several decades due to increased ultraviolet (UV) levels caused by stratospheric ozone depletion, and the disintegration of its ice shelves. In this study, measurements were made during three austral summers to study the optical properties of the Antarctic system and to produce radiation information for additional modeling studies. These are related to specific phenomena found in the system. During the summer of 1997-1998, measurements of beam absorption and beam attenuation coefficients, and downwelling and upwelling irradiance, were made in the Southern Ocean along a S-N transect at 6°E. The attenuation of photosynthetically active radiation (PAR) was calculated and used together with hydrographic measurements to judge whether the phytoplankton in the investigated areas of the Southern Ocean are light limited. Using the Kirk formula, the diffuse attenuation coefficient was linked to the absorption and scattering coefficients. The diffuse attenuation coefficients (KPAR) for PAR were found to vary between 0.03 and 0.09 1/m. Using the values for KPAR and the definition of the Sverdrup critical depth, the studied Southern Ocean plankton systems were found not to be light limited. Variability in the spectral and total albedo of snow was studied in the Queen Maud Land region of Antarctica during the summers of 1999-2000 and 2000-2001. The measurement areas were the vicinity of the South African Antarctic research station SANAE 4, and a traverse near the Finnish Antarctic research station Aboa. The midday mean total albedos for snow were between 0.83 (clear skies) and 0.86 (overcast skies) at Aboa, and between 0.81 and 0.83 at SANAE 4. The mean spectral albedo levels at Aboa and SANAE 4 were very close to each other. The variations in the spectral albedos were due more to differences in ambient conditions than to variations in snow properties. A Monte Carlo model was developed to study the spectral albedo and to develop a novel nondestructive method to measure the diffuse attenuation coefficient of snow. The method was based on the decay of upwelling radiation moving horizontally away from a source of downwelling light, which was assumed to be related to the diffuse attenuation coefficient. In the model, the attenuation coefficient obtained from the upwelling irradiance was higher than that obtained using vertical profiles of downwelling irradiance. The model results were compared to field measurements made on dry snow in Finnish Lapland and correlated reasonably well with them. Low-elevation (below 1000 m) blue-ice areas may experience substantial melt-freeze cycles due to absorbed solar radiation and the small heat conductivity of the ice. A two-dimensional (x-z) model was developed to simulate the formation of, and water circulation in, subsurface ponds. The model results show that for a physically reasonable parameter set the formation of liquid water within the ice can be reproduced. The results, however, are sensitive to the chosen parameter values, whose exact values are not well known.
Vertical convection and a weak overturning circulation are generated, stratifying the fluid and transporting warmer water downward, thereby causing additional melting at the base of the pond. In a 50-year integration, a global warming scenario, mimicked by a decadal-scale increase in air temperature of 3 degrees per 100 years, leads to a general increase in subsurface water volume. The ice did not disintegrate as a result of the air temperature increase over the 50-year integration.
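
The Kirk formula mentioned above relates the diffuse attenuation coefficient to the inherent optical properties; in its commonly used form (quoted here as the standard expression rather than from the thesis) it reads

K_d = \frac{1}{\mu_0}\sqrt{a^2 + G(\mu_0)\, a\, b},

where a and b are the beam absorption and scattering coefficients, \mu_0 is the cosine of the refracted solar zenith angle, and G(\mu_0) is an empirically fitted coefficient that depends on the scattering phase function.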

Relevance:

10.00%

Publisher:

Abstract:

The structure and mechanical properties of the wood of Norway spruce (Picea abies [L.] Karst.) were studied using small samples from Finland and Sweden. X-ray diffraction (XRD) was used to determine the orientation of cellulose microfibrils (microfibril angle, MFA), the dimensions of cellulose crystallites and the average shape of the cell cross-section. X-ray attenuation and x-ray fluorescence measurements were used to study the chemical composition and the trace element content. Tensile testing with in situ XRD was used to characterise the mechanical properties of wood and the deformation of crystalline cellulose within the wood cell walls. Cellulose crystallites were found to be 192-284 Å long and 28.9-33.4 Å wide in chemically untreated wood, and they were longer and wider in mature wood than in juvenile wood. The MFA distribution of individual Norway spruce tracheids and larger samples was asymmetric. In individual cell walls, the mean MFA was 19-30 degrees, while the mode of the MFA distribution was 7-21 degrees. Both the mean MFA and the mode of the MFA distribution decreased as a function of the annual ring. Tangential cell walls exhibited a smaller mean MFA and mode of the MFA distribution than radial cell walls. Maceration of the wood material caused narrowing of the MFA distribution and removed contributions observed at around 90 degrees. In wood of both untreated and fertilised trees, the average shape of the cell cross-section changed from circular via ambiguous to rectangular as the cambial age increased. The average shape of the cell cross-section and the MFA distribution did not change as a result of fertilisation. The mass absorption coefficient for x-rays was higher in wood of fertilised trees than in that of untreated trees, and wood of fertilised trees contained more of the elements S, Cl, and K, but a smaller amount of Mn. Cellulose crystallites were longer in wood of fertilised trees than in that of untreated trees. Kraft cooking caused widening and shortening of the cellulose crystallites. Tensile tests parallel to the cells showed that if the mean MFA is initially around 10 degrees or smaller, no systematic changes occur in the MFA distribution due to strain. The role of mean MFA in determining the tensile strength or the modulus of elasticity of wood was not as dominant as reported earlier. Crystalline cellulose elongated much less than the entire samples. The Poisson ratio νca of crystalline cellulose in Norway spruce wood was shown to depend largely on the surroundings of the crystalline cellulose in the cell wall, varying between -1.2 and 0.8. The Poisson ratio was negative in kraft-cooked wood and positive in chemically untreated wood. In chemically untreated wood, νca was larger in mature wood and in latewood compared to juvenile wood and earlywood.
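
Crystallite dimensions of the kind reported above are typically extracted from the widths of XRD reflections via the Scherrer equation (the standard relation; the thesis may apply additional instrumental or strain corrections not reproduced here):

L = \frac{K\lambda}{\beta \cos\theta},

where L is the crystallite dimension, \lambda the X-ray wavelength, \beta the breadth (e.g. FWHM) of the reflection in radians, \theta the Bragg angle, and K a shape factor close to 0.9.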

Relevance:

10.00%

Publisher:

Abstract:

Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the ratio of diagnostic benefit to radiation risk as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors which affect the dose, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability of late radiation effects, since a longer life expectancy is combined with the higher radiation sensitivity of developing organs. Experience with particular paediatric examinations may be very limited and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations were discovered in patient dose surveys. There were variations between different hospitals and examination rooms, between different sized patients, and between imaging techniques, emphasising the need for harmonisation of the examination protocols. For computed tomography, a correction coefficient which takes individual patient size into account in patient dosimetry was created. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared to multi-slice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation. For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about the patient data, exposure parameters and procedures provides tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
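
Conceptually, a patient-size correction of the kind described above acts as a multiplicative conversion on the scanner-reported dose index (a generic size-specific-dose sketch; the exact functional form and coefficient derived in the thesis are not reproduced here):

D_{patient} \approx c(d_{eff}) \times \mathrm{CTDI}_{vol},

where d_{eff} is the patient's effective diameter and c(\cdot) a size-dependent conversion coefficient that is larger than unity for small (paediatric) patients and smaller than unity for large ones.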

Relevance:

10.00%

Publisher:

Abstract:

Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons such as protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, in which some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of about 2 T_c.
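
The quark number susceptibilities studied above are defined in the standard way as second derivatives of the pressure with respect to the quark chemical potentials:

\chi_{ij}(T,\mu) = \frac{\partial^2 p(T,\mu)}{\partial \mu_i \, \partial \mu_j},

with i = j giving the diagonal and i \neq j the off-diagonal components, corresponding respectively to the singlet and non-singlet channels discussed above.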