Abstract:
This study is part of an ongoing collaborative research and development project, the Vantaa Depression Study (VDS), between the National Public Health Institute, Helsinki, and the Department of Psychiatry of Helsinki University Hospital (HUCH), Peijas Hospital, Vantaa. The VDS is a prospective, naturalistic cohort study of 269 secondary-level care psychiatric out- and inpatients with a new episode of DSM-IV major depressive disorder (MDD). The 269 patients (72 males, 197 females) with a current DSM-IV MDD were interviewed with semistructured interviews to assess all other psychiatric diagnoses, and the interviews were repeated at the 6- and 18-month follow-ups. Suicidal behaviour was investigated both at intake and at follow-up using a psychometric scale (the Scale for Suicidal Ideation), interviewer questions and the patients' psychiatric records. Patients who reported suicidal ideation on entering the study were followed up weekly, and their levels of suicidal ideation, hopelessness, anxiety and depression were measured. In this study, suicidal ideation was common among psychiatric patients with MDD: almost 60% of the depressed patients reported suicidal ideation and 15% had attempted suicide at baseline. Patients with suicidal ideation or attempts had a clearly higher level of overall psychopathology than non-suicidal patients. During the 18-month follow-up period, 8% of the patients attempted suicide. The risk of an attempt was markedly higher (RR = 7.54) during an episode of major depression than during a period of remission. A suicide attempt during the follow-up period was predicted by lack of a partner, a history of previous suicide attempts and time spent in depression. Suicidal ideation resolved for most of the suicidal patients during the first 2 to 3 months; its duration was longer for patients with an initially higher level of psychopathology. Declines in both depression and hopelessness independently predicted the subsequent decline in suicidal ideation, and both could have a causal role in reversing the suicidal process. Thus, effective treatment of depression is a credible measure in suicide prevention. Patients with suicidal behaviour often received more antidepressants and had more frequent appointments with mental health professionals than non-suicidal patients. Suicidal patients also had more favourable attitudes towards antidepressant treatment and comparable adherence to treatment. This study therefore does not support the notion that patient attitudes or adherence to treatment differentiate suicidal from non-suicidal patients; instead, problems with adherence and attitudes appear to be generic to psychiatric care as a whole.
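As a numerical aside, the relative risk quoted above is an incidence-rate ratio; here is a minimal sketch (the person-time figures are invented to reproduce RR = 7.54 and are not taken from the study):

```python
def relative_risk(events_exposed, time_exposed, events_unexposed, time_unexposed):
    """Incidence-rate ratio: (events per person-time) exposed vs unexposed."""
    rate_exposed = events_exposed / time_exposed
    rate_unexposed = events_unexposed / time_unexposed
    return rate_exposed / rate_unexposed

# Hypothetical: 20 attempts over 500 person-months spent in a depressive
# episode vs 4 attempts over 754 person-months spent in remission.
print(f"RR = {relative_risk(20, 500, 4, 754):.2f}")  # -> RR = 7.54
```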
Abstract:
Background: Brachial plexus birth palsy (BPBP) most often occurs as a result of foetal-maternal disproportion. The C5 and C6 nerve roots of the brachial plexus are most frequently affected. The C7 to Th1 roots, whose involvement together with C5 and C6 results in total injury, are affected in fewer than half of the patients. BPBP was first described by Smellie in 1764. Erb published his classical description of the injury in 1874, and his name became linked with the paralysis associated with upper root injury. Since then, the early results of brachial plexus surgery have been reasonably well documented. From a clinical point of view, however, not all primary results are maintained, so later follow-up results are also needed. In addition, most published studies emanate from highly specialised clinics, and no nationwide epidemiological reports are available. One type of plexus injury is the avulsion injury, in which the nerve root or roots are torn from the spinal cord. It has been speculated whether this might cause injury to the whole neural system, whether shoulder asymmetry and upper limb inequality result in postural deformities of the spine, or whether avulsion could manifest as other signs and symptoms throughout the musculoskeletal system. In addition, no information is available on activities of daily living after obstetric brachial plexus surgery. Patients and methods: This was a population-based cross-sectional study of all patients who had undergone brachial plexus surgery with at least 5 years of follow-up. An incidence of 3.05/1000 for BPBP was obtained from the registers for the study period. A total of 1706 BPBP patients needing hospital treatment out of 1 717 057 newborns were registered in Finland between 1971 and 1997 inclusive. Of these, 124 (7.3%) underwent brachial plexus surgery at a mean age of 2.8 months (range: 0.4–13.2 months). Surgery was most often performed by direct neurorrhaphy after neuroma resection (53%). Depending on the phase of the study, 105 to 112 patients (85-90%) participated in a clinical and radiological follow-up assessment. The mean follow-up time exceeded 13 years (range: 5.0–31.5 years). Functional status of the upper extremity was evaluated using the Mallet, Gilbert and Raimondi scales. Isometric strength of the upper limb, sensation of the hand and stereognosis were evaluated for both the affected and unaffected sides, and the differences and their ratios were calculated and recorded. In addition to the upper extremity, the spine and lower extremities were assessed. Activities of daily living (ADL), participation in normal physical activities, and the use of physiotherapy and occupational therapy were recorded with a questionnaire. Results: The unaffected limb functioned as the dominant hand in all but four patients. The affected upper limb was on average 6 cm (range: 1–13.5 cm) shorter in 106 (95%) patients. Shoulder function was moderate, with a mean Mallet score of 3 (range: 2–4). Both elbow and hand function were good: the mean Gilbert elbow scale value was 3 (range: −1 to 5) and the mean Raimondi hand scale value was 4 (range: 1–5). One-third of the patients experienced pain in the affected limb, including all those (n=9) who had clavicular non-union resulting from surgery. A total of 61 patients (57%) had active shoulder external rotation of less than 0°, and an active elbow extension deficit was noted in 82 patients (77%), with a mean of 26° (range: 5°–80°). Compared with the unaffected side, shoulder external rotation strength was impaired in all but two patients (mean ratio 35%, range: 0–83%), and elbow flexion strength was impaired in all patients (mean ratio 41%, range: 0–79%). Radiographs showed incongruence of the glenohumeral joint in 15 (16%) patients and of the radiohumeral joint in 20 (21%) patients. Fine sensation was normal in 34/49 (69%) patients with C5-6 injury, in 15/31 (48%) with C5-7 injury and in only 8/25 (32%) with total injury. Loss of protective sensation or absent sensation was noted in some palmar areas of the hand in 12/105 patients (11%). Normal stereognosis was recorded in 88/105 patients (84%). No significant leg-length inequalities were found, and the incidence of structural scoliosis (1.7%) did not differ from that of the reference population. Nearly half of the patients (43%) had asynchronous motion of the upper limbs during gait, which was associated with impaired upper limb function. The questionnaire data indicated that two-thirds (63%) of the patients were satisfied with the functional outcome of the affected hand, although one-third of all patients needed help with ADL. Only a few patients were unable to participate in physical activities such as bicycling, cross-country skiing or swimming; however, 71% of the patients reported problems related to the affected upper limb, such as muscle weakness and/or joint stiffness, during these activities. In multivariate analyses, incongruity of the radiohumeral joint, extent of the injury, avulsion-type injury, age of less than three months at the time of plexus surgery and inexperience of the surgeon were related to poor results. Conclusions: Most of the patients had persistent sequelae, especially of shoulder function. Almost all measurements were poorer in the total injury group than in the C5-6 injury group. Most of the patients had asymmetry of the shoulder region and a shorter affected upper limb, which probably explains the abnormal gait. However, BPBP did not affect the normal growth of the lower extremities or the spine. Although participation in physical activities was similar to that of the normal population, two-thirds of the patients reported problems, and one-third needed help with ADL. During the study period, 7.3% of BPBP patients needing hospital treatment had a brachial plexus operation, which amounts to fewer than 10 operations per year in Finland. Better results of obstetric plexus surgery and more careful follow-up, including opportunities for late reconstructive procedures, can be expected if treatment is concentrated in a few specialised teams.
Abstract:
In order to predict the current state and future development of the Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing of the present-day climate are related to the direct and indirect effects of aerosols. In this work, aerosol properties were studied at Pallas and Utö in Finland and at Mount Waliguan in western China. Approximately two years of data from each site were analyzed, together with data from two intensive measurement campaigns at Pallas. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentration of aerosol particles at Mount Waliguan was much higher than that measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplet number concentration. Several parameters important in cloud droplet activation were found to depend strongly on air mass history, and the effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using long time-series data sets. The ratio was found to vary more than in earlier studies, but less than either aerosol particle number concentration or volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for the partitioning between cloud droplets and cloud interstitial particles was developed. It uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, although its accuracy is slightly lower. It was also compared with directly observed cloud droplet number concentration data, and good agreement was found.
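To make the central quantity concrete, here is a minimal sketch of how a number-to-volume concentration ratio follows from a measured size distribution (the bin diameters and counts below are invented for illustration, not DMPS data from the thesis):

```python
import numpy as np

# Hypothetical size-distribution bins: mid-point diameters (m) and
# number concentrations per bin (cm^-3); real DMPS data would supply these.
d = np.array([20e-9, 50e-9, 100e-9, 200e-9, 500e-9])
n = np.array([800.0, 1200.0, 600.0, 150.0, 10.0])

N = n.sum()                         # total number concentration (cm^-3)
V = (np.pi / 6.0 * d**3 * n).sum()  # total volume concentration (m^3 cm^-3)

print(f"N/V ratio: {N / V:.3e} m^-3")
```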
Abstract:
In this thesis, the solar wind-magnetosphere-ionosphere coupling is studied observationally, with the main focus on the ionospheric currents in the auroral region. The thesis consists of five research articles and an introductory part that summarises the most important results reached in the articles and places them in a wider context within the field of space physics. Ionospheric measurements are provided by the International Monitor for Auroral Geomagnetic Effects (IMAGE) magnetometer network, the low-orbit CHAllenging Minisatellite Payload (CHAMP) satellite, the European Incoherent SCATter (EISCAT) radar, and the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Magnetospheric observations, on the other hand, are acquired from the four spacecraft of the Cluster mission, and solar wind observations from the Advanced Composition Explorer (ACE) and Wind spacecraft. Within the framework of this study, a new method for determining the ionospheric currents from low-orbit satellite-based magnetic field data is developed. In contrast to previous techniques, all three current density components can be determined on a matching spatial scale, and the validity of the necessary one-dimensionality approximation, and thus the quality of the results, can be estimated directly from the data. The new method is applied to derive an empirical model for estimating the Hall-to-Pedersen conductance ratio from ground-based magnetic field data, and to investigate the statistical dependence of the large-scale ionospheric currents on solar wind and geomagnetic parameters. Equations describing the amount of field-aligned current in the auroral region, as well as the location of the auroral electrojets, as functions of these parameters are derived. Moreover, the mesoscale (10-1000 km) ionospheric equivalent currents related to two magnetotail plasma sheet phenomena, bursty bulk flows and flux ropes, are studied. Based on the analysis of 22 events, the typical equivalent current pattern related to bursty bulk flows is established. For the flux ropes, on the other hand, only two conjugate events are found. As the equivalent current patterns during these two events are not similar, it is suggested that the ionospheric signatures of a flux rope depend on the orientation and length of the structure; analysis of additional events is required to determine the possible ionospheric connection of flux ropes.
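For intuition about current determination from along-track magnetic data, here is a simplified sketch (not the method developed in the thesis): under a one-dimensionality assumption with variations only along the track direction x, Ampère's law reduces to j = (1/μ0) dB/dx for the transverse field component. The field values below are synthetic.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

# Synthetic along-track samples: position x (m) and transverse magnetic
# perturbation B (T), a 400 nT rotation across a model current sheet.
x = np.linspace(0.0, 200e3, 201)          # 200 km of track, 1 km steps
b = 200e-9 * np.tanh((x - 100e3) / 20e3)

# 1-D approximation: current density j = (1/mu0) * dB/dx (A/m^2).
j = np.gradient(b, x) / MU0

print(f"peak current density: {j.max():.2e} A/m^2")
```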
Abstract:
Emissions of coal-combustion fly ash through full-scale electrostatic precipitators (ESPs) were studied under different coal-combustion and operating conditions. Sub-micron fly-ash aerosol emissions from a power-plant boiler and the ESP were determined and, from these, the aerosol penetration was derived on the basis of electrical mobility measurements, giving an estimate of the size of the small particles that escape and the maximum extent to which they can do so. The experiments indicate a maximum penetration of 4% to 20% for the small particles, counted on a number basis instead of the normally used mass basis, while the ESP simultaneously operates at nearly 100% collection efficiency on a mass basis. Although the size range as such appears independent of the coal, of the boiler and even of the device used for emission control, the maximum penetration level on a number basis depends on the ESP operating parameters. The measured emissions were stable during stable boiler operation for a given coal, and differed from coal to coal, indicating that the sub-micron size distribution of the fly ash could be used as a specific characteristic for recognition, for instance for authentication, provided there is an indication of known stable operation. Consequently, the results on the emissions suggest an optimum particle size range for environmental monitoring with respect to the probability of finding traces in samples. The current work also embodies an authentication system for aerosol samples allowing post-inspection of any macroscopic sample piece. The system can comprise newly introduced devices, for mutually independent use or for use in combination with each other, arranged so as to extend the length of the sampling operation and/or the diversity of tag selection. The tag for the samples can be based on naturally occurring and/or added measures of authenticity in a suitable combination. The method has not only military-related applications but also applications in civil industries. Besides aerosol samples, the system can be applied to ink for printing banknotes and other papers of monetary value, as well as to the marking of fibrous filters in filter manufacturing.
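A minimal sketch of the number-basis versus mass-basis distinction (the size distributions below are hypothetical, not the measured data): penetration is the downstream-to-upstream concentration ratio, and mass weighting goes as d³, so an ESP can pass around 10% of particles by number while still collecting nearly 100% by mass.

```python
import numpy as np

# Hypothetical number size distributions up- and downstream of the ESP:
# particle diameters (m) and concentrations (cm^-3).
d = np.array([0.05e-6, 0.1e-6, 0.3e-6, 1e-6, 10e-6])
n_up = np.array([1e6, 5e5, 1e5, 1e4, 1e3])
n_down = np.array([1e5, 5e4, 3e3, 50.0, 0.1])

penetration = n_down / n_up          # size-resolved penetration, number basis

# Mass weighting ~ d^3 (the constant particle density cancels in the ratio).
m_up = (n_up * d**3).sum()
m_down = (n_down * d**3).sum()

print("max number penetration: {:.0%}".format(penetration.max()))
print("mass collection efficiency: {:.2%}".format(1 - m_down / m_up))
```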
Abstract:
Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are the air and water drag forces, sea surface tilt, the Coriolis force and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on three basic material properties: compressive strength, yield curve and flow rule. A high-resolution three-category sea ice model is applied to investigate the sea ice dynamics in two small basins, the whole Gulf of Riga and Pärnu Bay within it, focusing on the calibration of the compressive strength for thin ice. These two basins are on scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to about 30 kPa, consistent with the values from most large-scale sea ice dynamics studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they can be divided into three basic groups: intersecting leads, uniaxial opening leads and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparisons of this yield curve with those from other methods show that it possesses almost all the advantages identified by the other methods. A new constitutive law is proposed, in which the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules. The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of the formation of LKFs but also has the advantage of avoiding overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule for pack ice during deformation.
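To illustrate the idea behind such an inversion, here is a sketch under a Mohr-Coulomb idealisation (an assumption of this sketch, not the thesis's more general characteristic treatment): conjugate slip lines intersect at an acute angle 2θ = 90° − φ, where tan φ is the local slope of the yield curve, so an observed LKF intersection angle can be inverted for the slope.

```python
import math

def intersection_angle_deg(yield_slope):
    """Acute intersection angle (degrees) of conjugate slip lines for a
    Mohr-Coulomb material whose yield curve has local slope tan(phi);
    sketch assumption: 2*theta = 90 deg - phi."""
    phi = math.degrees(math.atan(yield_slope))
    return 90.0 - phi

# Invert a hypothetical observed 30-degree intersection angle for the slope.
observed = 30.0
phi = 90.0 - observed
slope = math.tan(math.radians(phi))
print(f"implied yield-curve slope: {slope:.2f}")
print(f"round trip: {intersection_angle_deg(slope):.1f} deg")
```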
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range from about 40 km to 10 km, and recently the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution mesoscale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional mesoscale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner mesoscale model domain is small.
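As an example of the kind of simple empirical clear-sky scheme referred to above, here is a Brunt-type longwave formula with commonly quoted coefficients (neither the scheme choice nor the coefficient values are taken from the thesis):

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def brunt_longwave_down(t_air_k, e_hpa, a=0.52, b=0.065):
    """Clear-sky downwelling longwave flux (W m^-2) from a Brunt-type
    empirical scheme: L = (a + b*sqrt(e)) * sigma * T^4, with screen-level
    air temperature T (K) and water vapour pressure e (hPa). The
    coefficients a, b are commonly quoted values, for illustration only."""
    emissivity = a + b * math.sqrt(e_hpa)
    return emissivity * SIGMA * t_air_k**4

print(f"{brunt_longwave_down(283.15, 10.0):.0f} W m^-2")
```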
Abstract:
There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. Thus, the analyzed dataset covered by far the most comprehensive long-term measurements of turbulent fluxes reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas of high vegetation cover. Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes, reached in the direction of the road, were 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source of both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 in the vegetation sector exceeded the anthropogenic sources and a downward median flux of 8 µmol m⁻² s⁻¹ was observed. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces, accompanied by numerical modelling, are required.
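The eddy covariance technique itself reduces to a covariance of fast-response measurements; here is a minimal sketch with synthetic data (real processing adds despiking, coordinate rotation and density corrections, all omitted here):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux as the covariance of vertical wind speed w (m/s)
    and a scalar concentration c over an averaging period: F = mean(w'c')."""
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return (w_prime * c_prime).mean()

# Hypothetical 30-min record at 10 Hz; with c as CO2 molar density
# (umol m^-3), the flux comes out in umol m^-2 s^-1.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 18000)
c = 16000.0 + 50.0 * w + rng.normal(0.0, 5.0, 18000)
print(f"F_CO2 ~ {eddy_covariance_flux(w, c):.1f} umol m^-2 s^-1")
```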
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is the set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage of the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes, and previously disregarded usable observations were found in the press. The improved collection of IDPs for the 1888 earthquake shows that this event was a rare occurrence in the area: in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns (Tornio, Haparanda and Piteå) located in the centre of the area of perceptibility. The event thus posed an indirect hazard of fire, although its magnitude of around 4.6 was minor on the global scale. The distribution of slightly damaging intensities was larger than previously outlined, which may have resulted from the amplification of ground shaking in the soft soils of the coast and river valleys, where most of the population lived. The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and to assess methodologies for dealing with macroseismic intensity. The data set was evaluated using correspondence analysis, and different approaches such as gridding were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration, and a more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and the instrumental era was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate the seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
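In the spirit of the gridding tests mentioned above, here is a generic inverse-distance-weighting sketch (not the thesis's actual procedure; the coordinates and intensity values are invented):

```python
import numpy as np

def idw_grid(x, y, val, xi, yi, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of irregular intensity data
    points (x, y, val) onto grid nodes (xi, yi). Macroseismic intensity is
    an ordinal quantity, so such averaging must be interpreted with care."""
    grid = np.empty((len(yi), len(xi)))
    for j, gy in enumerate(yi):
        for i, gx in enumerate(xi):
            d2 = (x - gx) ** 2 + (y - gy) ** 2
            w = 1.0 / (d2 ** (power / 2.0) + eps)
            grid[j, i] = (w * val).sum() / w.sum()
    return grid

# Hypothetical IDPs: coordinates (km) and intensity values.
x = np.array([10.0, 40.0, 70.0, 55.0])
y = np.array([20.0, 60.0, 30.0, 80.0])
v = np.array([5.0, 4.0, 3.0, 4.0])
grid = idw_grid(x, y, v, np.linspace(0, 100, 11), np.linspace(0, 100, 11))
print(grid.round(1))
```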
Abstract:
The Antarctic system comprises the continent itself, Antarctica, and the ocean surrounding it, the Southern Ocean. The system plays an important part in the global climate due to its size, its high-latitude location and the negative radiation balance of its large ice sheets. Antarctica has also been in focus for several decades due to the increased ultraviolet (UV) levels caused by stratospheric ozone depletion, and due to the disintegration of its ice shelves. In this study, measurements were made during three austral summers to study the optical properties of the Antarctic system and to produce radiation information for additional modelling studies related to specific phenomena found in the system. During the summer of 1997-1998, measurements of the beam absorption and beam attenuation coefficients, and of downwelling and upwelling irradiance, were made in the Southern Ocean along a S-N transect at 6°E. The attenuation of photosynthetically active radiation (PAR) was calculated and used together with hydrographic measurements to judge whether the phytoplankton in the investigated areas of the Southern Ocean are light limited. Using the Kirk formula, the diffuse attenuation coefficient was linked to the absorption and scattering coefficients. The diffuse attenuation coefficients for PAR (K_PAR) were found to vary between 0.03 and 0.09 m⁻¹. Using the values of K_PAR and the definition of the Sverdrup critical depth, the studied Southern Ocean plankton systems were found not to be light limited. Variability in the spectral and total albedo of snow was studied in the Queen Maud Land region of Antarctica during the summers of 1999-2000 and 2000-2001. The measurement areas were the vicinity of the South African Antarctic research station SANAE 4 and a traverse near the Finnish Antarctic research station Aboa. The midday mean total albedos for snow were between 0.83 (clear skies) and 0.86 (overcast skies) at Aboa, and between 0.81 and 0.83 at SANAE 4. The mean spectral albedo levels at Aboa and SANAE 4 were very close to each other, and the variations in the spectral albedos were due more to differences in ambient conditions than to variations in snow properties. A Monte Carlo model was developed to study the spectral albedo and to develop a novel non-destructive method for measuring the diffuse attenuation coefficient of snow. The method was based on the decay of upwelling radiation with horizontal distance from a source of downwelling light, which was assumed to be related to the diffuse attenuation coefficient. In the model, the attenuation coefficient obtained from the upwelling irradiance was higher than that obtained using vertical profiles of downwelling irradiance. The model results were compared to field measurements made on dry snow in Finnish Lapland, and they correlated reasonably well. Low-elevation (below 1000 m) blue-ice areas may experience substantial melt-freeze cycles due to absorbed solar radiation and the low heat conductivity of the ice. A two-dimensional (x-z) model was developed to simulate the formation of, and water circulation in, subsurface ponds. The model results show that, for a physically reasonable parameter set, the formation of liquid water within the ice can be reproduced; the results are, however, sensitive to the chosen parameter values, whose exact values are not well known. Vertical convection and a weak overturning circulation are generated, stratifying the fluid and transporting warmer water downward, thereby causing additional melting at the base of the pond. In a 50-year integration, a global warming scenario, mimicked by an increase in air temperature of 3 degrees per 100 years, leads to a general increase in subsurface water volume, but the ice does not disintegrate as a result of the air-temperature increase.
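As a concrete illustration of the Kirk-type relation mentioned above, here is a sketch (the g1, g2 coefficients are commonly quoted literature values and the inputs are invented; none are taken from the thesis):

```python
import math

def kirk_kd(a, b, mu0, g1=0.425, g2=0.19):
    """Mean diffuse attenuation coefficient (1/m) from Kirk's relation
    Kd = sqrt(a**2 + G(mu0)*a*b) / mu0, with G(mu0) = g1*mu0 - g2.
    a: absorption coefficient (1/m), b: scattering coefficient (1/m),
    mu0: cosine of the in-water solar zenith angle."""
    g = g1 * mu0 - g2
    return math.sqrt(a * a + g * a * b) / mu0

# Hypothetical open-ocean values:
kd = kirk_kd(a=0.04, b=0.15, mu0=0.75)
print(f"K_PAR ~ {kd:.3f} 1/m")
```

With these invented inputs the result (about 0.065 m⁻¹) falls within the 0.03-0.09 m⁻¹ range reported above.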
Abstract:
Together with cosmic spherules, interplanetary dust particles and the lunar samples returned by the Apollo and Luna missions, meteorites are the only source of extraterrestrial material on Earth. The physical properties of meteorites, especially their magnetic susceptibility, bulk and grain density, porosity and paleomagnetic information, have wide applications in planetary research and can reveal information about the origin and internal structure of asteroids. Thus, an expanded database of meteorite physical properties was compiled, with new measurements made in meteorite collections across Europe using a mobile laboratory facility. However, the scale problem may introduce discrepancies into the comparison of asteroid and meteorite properties: due to inhomogeneity, the physical properties of meteorites studied on a centimeter or millimeter scale may differ from those of asteroids determined on kilometer scales. Further differences may arise from shock effects, space and terrestrial weathering, and from differences in material properties at various temperatures. Close attention was given to the reliability of the paleomagnetic and paleointensity information in meteorites, and a methodology to test for magnetic overprints was prepared and verified.
Abstract:
In recent years there has been growing interest in selecting suitable wood raw material to improve end-product quality and the efficiency of industrial processes. Genetic background and growing conditions are known to affect the properties of growing trees, but only a few parameters reflecting wood quality, such as volume and density, can be measured on an industrial scale. Research on the cellular-level structures of trees grown in different conditions is therefore needed to increase understanding of how the growth process of trees leads to desired wood properties. In this work the cellular and cell wall structures of wood were studied. Parameters such as the mean microfibril angle (MFA), the spiral grain angles, the fibre length, the tracheid cell wall thickness and the cross-sectional shape of the tracheid were determined as functions of distance from the pith towards the bark, and the mutual dependencies of these parameters were discussed. Samples were measured from fast-grown trees belonging to the same clone, grown in fertile soil, and also from fertilised trees. In fast-grown trees the mean MFA was found to decrease more gradually from the pith to the bark than in reference stems. In fast-grown samples the cells were shorter and more thin-walled, and their cross-sections rounder, than in slower-grown reference trees. An increased growth rate was found to cause an increase in spiral grain variation both within and between annual rings. Furthermore, methods for determining the mean MFA using x-ray diffraction were evaluated, and several experimental arrangements, including synchrotron radiation based microdiffraction, were compared. For the evaluation of the data analysis procedures, a general form for the diffraction conditions was derived in terms of the angles describing the fibre orientation and the shape of the cell, and the effects of these parameters on the obtained microfibril angles were discussed. The use of symmetrical transmission geometry and tangentially cut samples gave the most reliable MFA values.
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, cf. black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has so far been no direct experimental evidence contradicting general relativity (on the contrary, it has passed a variety of observational tests), it is worth asking why the effective theory of gravity should be of the exact form of general relativity. If general relativity is modified, how do the predictions of the theory change? How far can we go with the changes before we are faced with contradictions with experiment? And could the changes bring new phenomena that we could measure to find hints of the form of the quantum theory of gravity? This thesis is on a class of modified gravity theories called f(R) models, and in particular on the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models that do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars such as neutron stars. Due to the nature of f(R) models, the role of an independent spacetime connection is emphasized throughout the thesis.
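For reference, the field equations of f(R) gravity in the metric formalism, obtained by varying the action with respect to the metric (a standard textbook form, not specific to this thesis; the Palatini formulation with an independent connection, also discussed in the thesis, leads to different equations):

```latex
% Metric f(R) gravity: vary S = (16\pi G)^{-1} \int d^4x \sqrt{-g}\, f(R) + S_m
% with respect to g_{\mu\nu} to obtain the field equations:
f'(R)\, R_{\mu\nu} - \frac{1}{2} f(R)\, g_{\mu\nu}
  - \left( \nabla_\mu \nabla_\nu - g_{\mu\nu} \Box \right) f'(R)
  = 8 \pi G\, T_{\mu\nu}
% Setting f(R) = R gives f'(R) = 1, the derivative terms vanish,
% and Einstein's equations are recovered.
```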
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has lacked a robust method until now. The methods are built on the solid foundation of statistical orbital inversion, which properly takes the observational uncertainties into account and thus allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. The aim of identification is therefore to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost, as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible non-gravitational effects on the time frame of the analysis. The methods have been successfully applied to various identification problems. Simulations have shown that they are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region of the celestial sphere, long linking intervals and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
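To make the complexity claim concrete, here is a generic sketch of tree-based candidate search (not the thesis algorithm; the reduction of an observation set to a feature vector is invented for illustration): if each set maps to a low-dimensional point, for example in a reduced orbital-element space, candidate linkages within a tolerance can be found in roughly O(n log n) instead of testing all O(n²) pairs.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical: each of n observation sets reduced to a 3-D feature vector
# (e.g. coordinates in a reduced orbital-element space).
n = 10000
features = rng.uniform(0.0, 1.0, size=(n, 3))

# Build a k-d tree and query candidate pairs within a tolerance, instead of
# comparing all n*(n-1)/2 pairs; candidates would then be verified by
# full statistical orbital inversion.
tree = cKDTree(features)
candidate_pairs = tree.query_pairs(r=0.01)

print(f"{len(candidate_pairs)} candidate linkages to verify")
```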
Abstract:
The ever-increasing demand for faster computers in areas ranging from entertainment electronics to computational science is pushing the semiconductor industry towards its limits in decreasing the size of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at sizes on the nanometer scale, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group of the periodic table. Carbon is an especially interesting material for nanometer-sized devices because it naturally forms different nanostructures, some of which have unique properties. The most widely suggested carbon allotrope for electronics is a tubular molecule with an atomic structure resembling that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a long list of exciting properties. For example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, due to some remaining difficulties in the large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations at different levels of theory. Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation process of nanotubes, in order to introduce different impurity atoms into these structures and thereby gain control over their electronic character. Ion irradiation is shown to be a very efficient method for replacing carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve the contact between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects in carbon nanotubes on macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory based methods indicate that it is energetically favourable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible.
The kinetic Monte Carlo studies reveal that elevated temperatures during processing significantly enhance the self-healing of nanotubes, ensuring low defect concentrations after treatment with energetic ions; nanotubes can thereby retain their desired properties after the irradiation. Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
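A minimal residence-time (kinetic Monte Carlo) sketch of the temperature effect described above (the healing barrier, attempt frequency and defect counts are illustrative assumptions, not values from the thesis):

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant (eV/K)

def kmc_anneal(n_defects, barrier_ev=1.0, prefactor=1e13, temp_k=900.0,
               t_max=1.0, seed=0):
    """Anneal independent defects, each healing at a thermally activated
    rate r = prefactor * exp(-barrier / kT); returns defects left at t_max."""
    rng = random.Random(seed)
    rate = prefactor * math.exp(-barrier_ev / (KB * temp_k))
    t = 0.0
    while n_defects > 0:
        total_rate = n_defects * rate               # sum over defects
        t += -math.log(rng.random()) / total_rate   # residence-time step
        if t > t_max:
            break
        n_defects -= 1                              # one defect heals
    return n_defects

print(kmc_anneal(100, temp_k=900.0))  # elevated T -> strong self-healing
print(kmc_anneal(100, temp_k=300.0))  # room T -> defects persist
```

With the assumed 1 eV barrier, the healing rate at 900 K is on the order of 10⁷ per second, so all defects heal within the simulated second, whereas at 300 K essentially none do; this mirrors the qualitative temperature dependence described above.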