Abstract:
Acute pain has substantial survival value because of its protective function in the everyday environment. Chronic pain, in contrast, lacks survival and adaptive value, causes a great amount of individual suffering, and consumes societal resources through treatment costs and loss of productivity. The treatment of chronic pain has remained challenging because of inadequate understanding of the mechanisms working at different levels of the nervous system in the development, modulation, and maintenance of chronic pain. Especially in unclear chronic pain conditions, the treatment may be suboptimal because it cannot be targeted at the underlying mechanisms. Non-invasive neuroimaging techniques have greatly contributed to our understanding of brain activity associated with pain in healthy individuals. Many previous studies, focusing on brain activations to acute experimental pain in healthy individuals, have consistently demonstrated a widely distributed network of brain regions that participates in the processing of acute pain. The aim of the present thesis was to employ non-invasive brain imaging to better understand the brain mechanisms in patients suffering from chronic pain. In Study I, we used magnetoencephalography (MEG) to measure cortical responses to painful laser stimulation in healthy individuals in order to optimize the stimulus parameters for patient studies. In Studies II and III, we monitored with MEG the cortical processing of touch and acute pain in patients with complex regional pain syndrome (CRPS). We found persisting plastic changes in the hand representation area of the primary somatosensory (SI) cortex, suggesting that chronic pain causes cortical reorganization. Responses in the posterior parietal cortex to both tactile and painful laser stimulation were attenuated, which could be associated with the neglect-like symptoms of the patients. The reactivity of the primary motor cortex to acute pain was reduced in patients who had stronger spontaneous pain and weaker grip strength in the painful hand. The tight coupling between spontaneous pain and motor dysfunction supports the idea that motor rehabilitation is important in CRPS. In Studies IV and V, we used MEG and functional magnetic resonance imaging (fMRI) to investigate the central processing of touch and acute pain in patients who suffered from recurrent herpes simplex virus infections and from chronic widespread pain on one side of the body. With MEG, we found plastic changes in the SI cortex, suggesting that many different types of chronic pain may be associated with similar cortical reorganization. With fMRI, we found functional and morphological changes in the central pain circuitry, indicating a central contribution to the pain. These results show that chronic pain is associated with morphological and functional changes in the brain, and that such changes can be measured with functional imaging.
Abstract:
Cardiovascular diseases (CVD) are the leading cause of mortality in developed countries. The majority of premature deaths and disability caused by CVD are due to atherosclerosis, a degenerative inflammatory disease affecting arterial walls. Early identification of lesions and initiation of treatment are crucial because the first manifestations are quite often major disabling cardiovascular events. Methods of finding individuals at high risk for these events are under development. Because magnetic resonance imaging (MRI) is an excellent non-invasive tool for studying the structure and function of the vascular system, we sought to discover whether existing MRI methods are able to show any difference in aortic and intracranial atherosclerotic lesions between patients at high risk for atherosclerosis and healthy controls. Our younger group (age 6-48) comprised 39 symptomless familial hypercholesterolemia (FH) patients and 25 healthy controls. Our older group (age 48-64) comprised 19 FH patients and 18 type 2 diabetes mellitus (DM) patients with coronary heart disease (CHD) and 29 healthy controls. Intracranial and aortic MRI was compared with carotid and femoral ultrasound (US). In neither age group did MRI reveal any difference in the number of ischemic brain lesions or white matter hyperintensities (WMHIs) - possible signs of intracranial atherosclerosis - between patients and controls. Furthermore, MRI showed no difference in the structure or function of the aorta between FH patients and controls in either group. DM patients had lower compliance of the aorta than did controls, while no difference appeared between DM and FH patients. However, ultrasound showed a greater plaque burden and increased thickness of the carotid arterial walls in FH and DM patients in both age groups, suggesting more advanced atherosclerosis. The mortality of FH patients has decreased substantially since the late 1980s, when statin treatment became available. With statins, the progression of atherosclerotic lesions slows. We think that this, in concert with improvements in the treatment of other risk factors, is one reason for the lack of differences between FH patients and controls in MRI measurements of the aorta and brain despite the more advanced disease of the carotid arteries assessed with US. Furthermore, whereas atherosclerotic lesions in different vascular territories correlate, differences might still exist in the extent and location of these lesions among different diseases. Small (<5 mm in diameter) WMHIs are more likely a phenomenon related to aging, but the larger ones may be related to CVD and may be intermediate surrogates of stroke. The image quality in aortic imaging, although constantly improving, is not yet optimal and thus is a source of bias.
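For clarity, a minimal sketch of how aortic compliance and distensibility are commonly derived from cine-MRI cross-sectional areas and pulse pressure; the formulas are standard definitions, while the function names and the numerical values are illustrative assumptions, not measurements from the study groups.

```python
# Illustrative only: standard definitions of aortic compliance and
# distensibility from maximal/minimal lumen area and pulse pressure.
def aortic_compliance(a_max_mm2, a_min_mm2, pulse_pressure_mmhg):
    """Compliance in mm^2/mmHg: absolute area change per unit pulse pressure."""
    return (a_max_mm2 - a_min_mm2) / pulse_pressure_mmhg

def aortic_distensibility(a_max_mm2, a_min_mm2, pulse_pressure_mmhg):
    """Distensibility in 1/mmHg: relative area change per unit pulse pressure."""
    return (a_max_mm2 - a_min_mm2) / (a_min_mm2 * pulse_pressure_mmhg)

# Assumed example values: systolic/diastolic areas 520/470 mm^2, pulse pressure 55 mmHg.
print(aortic_compliance(520.0, 470.0, 55.0))       # mm^2/mmHg
print(aortic_distensibility(520.0, 470.0, 55.0))   # 1/mmHg
```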
Abstract:
An HIV outbreak among Finnish injecting drug users (IDUs) occurred in 1998. By the end of 2005, 282 IDUs were infected, most of them with the HIV recombinant virus CRF01_AE. After a rapid spread, the outbreak subsided, and the prevalence of HIV among IDUs remained low (<2%). The purpose of the study was to describe the outbreak in order to recognise factors that have influenced the spread and restriction of the outbreak, and thus to find tools for HIV prevention. Data on Finnish IDUs newly diagnosed HIV-positive between 1998 and 2005 were collected through interviews and patient documents. Study I compared markers of disease progression between 93 Finnish IDUs and 63 Dutch IDUs. In Study II, the geographical spread of the HIV outbreak was examined and compared with the spatial distribution of employed males. In Study III, risk behaviour data from interviews of 89 HIV-positive and 207 HIV-negative IDUs were linked, and the prevalence of and risk factors for unprotected sex were evaluated. In Study IV, data on 238 newly diagnosed IDUs were combined with data on 675 sexually transmitted HIV cases, and risk factors for late HIV diagnosis (CD4 cell count <200/µL, or AIDS at HIV diagnosis) were analysed. Finnish IDUs infected with CRF01_AE exhibited higher viral loads than did Amsterdam IDUs infected with subtype B, but there was no difference in CD4 development. The Finnish IDU outbreak spread and was restricted socially in a marginalised IDU population and geographically in areas characterised by low proportions of employed males. Up to 40% of the cases in the two clusters outside the city centre had no contact with the centre, where needle exchange services had been available since 1997. Up to 63% of HIV-positive and 80% of HIV-negative sexually active IDUs reported inconsistent condom use, which was associated with steady relationships and recent inpatient addiction care. Compared to other transmission groups, HIV-positive IDUs were diagnosed earlier in their infection. The proportion of late-diagnosed HIV cases in all transmission groups was 23%, but was only 6% among IDUs diagnosed during the first four years of the epidemic. The high viral load in early HIV infection may have contributed to the rapid spread of the recombinant virus in the Finnish outbreak. The outbreak was restricted to a marginalised IDU population, and limited spatially to local pockets of poverty. To prevent HIV among IDUs, these pockets should be recognised and reached early through outreach work, needle exchange and other prevention activities. To prevent the sexual transmission of HIV among IDUs, prevention programmes should be combined with addiction care services and targeted at every IDU. The early detection of the outbreak and the early implementation of needle exchange programmes likely played a crucial role in reversing the IDU outbreak.
Abstract:
The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer (-t ≈ p^2 theta^2) up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires the simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few times 10^-3 GeV^2, corresponding to leading protons scattered at angles of a few microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (<20 µm), fast response (of the order of 10 ns) to particles, and radiation hardness up to 10^14 n/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 µm, compatible with the requirements of the experiment.
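As a rough numerical illustration of the small-angle relation quoted above, the sketch below evaluates -t ≈ (p·theta)^2 for a few scattering angles; the 7 TeV beam momentum and the particular angles are assumptions chosen for the example, not values quoted from the thesis.

```python
# Illustrative only: small-angle approximation -t ≈ (p*theta)^2 for elastic
# proton scattering; beam momentum and angles are assumed example values.
def four_momentum_transfer(p_gev, theta_rad):
    """Return -t in GeV^2 for beam momentum p (GeV/c) and scattering angle theta (rad)."""
    return (p_gev * theta_rad) ** 2

for theta_urad in (5, 10, 20):
    t = four_momentum_transfer(7000.0, theta_urad * 1e-6)
    print(f"theta = {theta_urad:2d} urad  ->  -t = {t:.1e} GeV^2")
```

With microradian-scale angles this indeed lands in the few-times-10^-3 GeV^2 regime mentioned in the abstract.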
Abstract:
Differentiation of various types of soft tissue is of high importance in medical imaging, because changes in soft tissue structure are often associated with pathologies, such as cancer. However, the densities of different soft tissues may be very similar, making it difficult to distinguish them in absorption images. This is especially true when consideration of the patient dose limits the available signal-to-noise ratio. Refraction is more sensitive than absorption to changes in density, while small-angle x-ray scattering contains information about the macromolecular structure of the tissues. Both of these can be used as potential sources of contrast when soft tissues are imaged, but little is known about the visibility of the signals in realistic imaging situations. In this work, the visibility of small-angle scattering and refraction in the context of medical imaging has been studied using computational methods. The work focuses on analyzer-based imaging, where the information about the sample is recorded in the rocking curve of the analyzer crystal. Computational phantoms based on simple geometrical shapes with differing material properties are used. The objects have realistic dimensions and attenuation properties that could be encountered in real imaging situations, and their scattering properties mimic various features of measured small-angle scattering curves. Ray-tracing methods are used to calculate the refraction and attenuation of the beam, and a scattering halo is accumulated, including the effect of multiple scattering. The changes in the shape of the rocking curve are analyzed with different methods, including diffraction enhanced imaging (DEI), extended DEI (E-DEI) and multiple image radiography (MIR). A wide-angle DEI, called W-DEI, is introduced and its performance is compared with that of the established methods. The results indicate that the differences in scattered intensities from healthy and malignant breast tissues are distinguishable to some extent with a reasonable dose. Especially the fraction of total scattering shows differences large enough to serve as a useful source of contrast. The peaks related to the macromolecular structure appear at rather large angles and have intensities that are only a small fraction of the total scattered intensity; such peaks are found to have only limited usefulness in medical imaging. It is also found that W-DEI performs rather well when most of the intensity remains in the direct beam, indicating that dark-field imaging methods may produce the best results when scattering is weak. Altogether, the analysis of scattered intensity is found to be a viable option even in medical imaging, where the patient dose is the limiting factor.
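As background for the DEI analysis mentioned above, the following is a minimal sketch of the standard two-image DEI reconstruction, in which images recorded at the low- and high-angle half-slope points of the analyzer rocking curve are combined into apparent-absorption and refraction-angle images. It illustrates the principle only: the rocking-curve values and slopes are assumed to be known from a measurement without the sample, and this is not the W-DEI method introduced in the work.

```python
import numpy as np

# Sketch of the standard two-image DEI reconstruction: solve the pair
#   I_lo = I_app * (R_lo + dR_lo * dtheta)
#   I_hi = I_app * (R_hi + dR_hi * dtheta)
# for the apparent-absorption image I_app and refraction-angle image dtheta.
# R_lo/R_hi and dR_lo/dR_hi are the rocking-curve values and slopes at the
# two analyzer working points, measured without the sample (assumed known).
def dei_reconstruct(I_lo, I_hi, R_lo, dR_lo, R_hi, dR_hi):
    I_app = (I_lo * dR_hi - I_hi * dR_lo) / (R_lo * dR_hi - R_hi * dR_lo)
    dtheta = (I_hi * R_lo - I_lo * R_hi) / (I_lo * dR_hi - I_hi * dR_lo)
    return I_app, dtheta

# Toy example with uniform 2x2 "images"; slopes are given per microradian,
# so the refraction angle comes out in microradians.
I_lo = np.full((2, 2), 0.42)
I_hi = np.full((2, 2), 0.58)
I_app, dtheta = dei_reconstruct(I_lo, I_hi, 0.5, 0.1, 0.5, -0.1)
print(I_app)
print(dtheta)
```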
Abstract:
There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. Thus, the analyzed dataset comprised by far the most comprehensive long-term measurements of turbulent fluxes reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single measurement point. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas of high vegetation cover. Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10^9 m^-2 s^-1 and 20 µmol m^-2 s^-1, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 in the vegetation sector exceeded the anthropogenic sources and we observed a downward median flux of 8 µmol m^-2 s^-1. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces, accompanied by numerical modelling, are required.
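As a minimal illustration of the eddy covariance technique mentioned above, the sketch below computes a turbulent flux as the covariance of vertical-wind and scalar fluctuations over one averaging period; the 10 Hz rate, 30-minute window and synthetic data are assumptions for this example, and the real processing chain (coordinate rotation, detrending, spectral and density corrections) is omitted.

```python
import numpy as np

# Eddy covariance in its simplest form: the turbulent flux of a scalar is
# the covariance of the fluctuations of vertical wind speed w and the
# scalar concentration c over one averaging period.
def eddy_covariance_flux(w, c):
    w_prime = w - np.mean(w)           # fluctuations of vertical wind (m/s)
    c_prime = c - np.mean(c)           # fluctuations of the scalar
    return np.mean(w_prime * c_prime)  # flux = <w'c'>

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 18000)                       # 30 min at 10 Hz (synthetic)
c = 400.0 + 5.0 * w + rng.normal(0.0, 1.0, 18000)     # scalar, e.g. CO2 in µmol m^-3
print(f"flux ≈ {eddy_covariance_flux(w, c):.2f} µmol m^-2 s^-1")
```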
Abstract:
This thesis reports investigations into the paper wetting process and its effects on the surface roughness and the out-of-plane (ZD) stiffness of machine-made paper. The aim of this work was to test the feasibility of employing air-borne ultrasound methods to determine the surface roughness (by reflection) and ZD stiffness (by through-transmission) of paper during the penetration of distilled water, isopropanol and their mixtures. Air-borne ultrasound provides a non-contact way to evaluate sample structure and mechanics during the liquid penetration event. In contrast to liquid immersion techniques, an air-borne measurement allows the study of partial wetting of paper. In addition, two optical methods were developed to reveal the liquid location in paper during wetting. The laser-light through-transmission method was developed to monitor the liquid location in partially wetted paper. The white-light reflection method was primarily used to monitor the penetration of the liquid front in the thickness direction; in the latter experiment the paper was fully wetted. The main results of the thesis were: 1) Liquid-penetration-induced surface roughening was quantified by monitoring the ultrasound reflection from the paper surface. 2) Liquid-penetration-induced stiffness alteration in the ZD of paper could be followed by measuring the change in the ultrasound ZD resonance of the paper. 3) Through-transmitted light revealed the liquid location in partially wetted paper. 4) Liquid movement in the ZD of the paper could be observed by light reflection. The results imply that the presented ultrasonic means can measure, without contact, the alteration of paper roughness and stiffness during liquid transport. These methods can help to avoid over-engineering the paper, which reduces raw material and energy consumption in paper manufacturing. The presented optical means can estimate paper-specific wetting properties, such as liquid penetration speed, transport mechanisms and liquid location within the paper structure. In process monitoring, these methods allow process tuning and the manufacturing of paper with engineered liquid transport characteristics. With such knowledge, the paper behaviour during printing can be predicted. These findings provide new methods for paper printing, surface sizing, and paper coating research.
Abstract:
Among the most striking natural phenomena affecting ozone are solar proton events (SPE), during which high-energy protons precipitate into the middle atmosphere in the polar regions. Ionisation caused by the protons results in changes in the lower ionosphere and in the production of neutral odd nitrogen and odd hydrogen species, which then destroy ozone in well-known catalytic chemical reaction chains. Large SPEs are able to decrease the ozone concentration of the upper stratosphere and mesosphere, but are not expected to significantly affect the ozone layer at 15-30 km altitude. In this work we have used the Sodankylä Ion and Neutral Chemistry Model (SIC) in studies of the short-term effects caused by SPEs. The model results were found to be in good agreement with ionospheric observations from incoherent scatter radars, riometers, and VLF radio receivers, as well as with measurements from the GOMOS/Envisat satellite instrument. For the first time, GOMOS was able to observe the SPE effects on odd nitrogen and ozone in the winter polar region. Ozone observations from GOMOS were validated against those from the MIPAS/Envisat instrument, and good agreement was found throughout the middle atmosphere. For the SPE of October/November 2003, long-term ozone depletion was observed in the upper stratosphere. The depletion was further enhanced by the descent of odd nitrogen from the mesosphere inside the polar vortex, until recovery occurred in late December. During the event, substantial diurnal variation of the ozone depletion was seen in the mesosphere, caused mainly by the strong diurnal cycle of the odd hydrogen species. In the lower ionosphere, SPEs increase the electron density, which is very low in normal conditions; SPEs therefore make radar observations easier. For the SPE of October 1989, we studied the sunset transition of negative charge from electrons to ions, a long-standing problem. The observed phenomenon, which is controlled by the amount of solar radiation, was successfully explained by considering twilight changes both in the rate of photodetachment of negative ions and in the concentrations of minor neutral species. Changes in the magnetic field of the Earth control the extent of the SPE-affected area. For the SPE of November 2001, the results indicated that for low and middle levels of geomagnetic disturbance, the cosmic radio noise absorption levels estimated with a magnetic field model are in good agreement with ionospheric observations. For high levels of disturbance, the model overestimates the stretching of the geomagnetic field and the geographical extent of the SPE-affected area. This work shows the importance of ionosphere-atmosphere interaction for SPE studies. By using both ionospheric and atmospheric observations, we have been able to cover, for the most part, the whole chain of SPE-triggered processes, from proton-induced ionisation to the depletion of ozone.
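As one example of the catalytic chains referred to above, the well-established odd-nitrogen (NOx) cycle is shown here for illustration; it is not a list of the reactions actually included in the SIC model:

  NO  + O3 -> NO2 + O2
  NO2 + O  -> NO  + O2
  ---------------------
  net: O + O3 -> 2 O2

Analogous cycles driven by OH and HO2 (odd hydrogen) dominate the short-lived ozone loss in the mesosphere, while the longer-lived odd nitrogen controls the stratospheric depletion described above.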
Abstract:
Solar ultraviolet (UV) radiation has a broad range of effects on life on Earth. Soon after the mid-1980s, it was recognized that the stratospheric ozone content was declining over large areas of the globe. Because the stratospheric ozone layer protects life on Earth from harmful UV radiation, this led to concern about possible changes in UV radiation due to anthropogenic activity. Prompted by this concern, many stations for monitoring surface UV radiation were founded in the late 1980s and early 1990s. As a consequence, there is an apparent lack of information on UV radiation further in the past: measurements cannot tell us how UV radiation levels have changed on time scales of, for instance, several decades. The aim of this thesis was to improve our understanding of past variations in surface UV radiation by developing techniques for UV reconstruction. Such techniques utilize commonly available meteorological data together with measurements of the total ozone column to reconstruct, or estimate, the amount of UV radiation that reached the Earth's surface in the past. Two different techniques for UV reconstruction were developed. Both are based on first calculating the clear-sky UV radiation using a radiative transfer model. The clear-sky value is then corrected for the effect of clouds based on either (i) sunshine duration or (ii) pyranometer measurements. Both techniques also account for the variations in surface albedo caused by snow, whereas aerosols are included as a typical climatological aerosol load. Using these methods, long time series of reconstructed UV radiation were produced for five European locations, namely Sodankylä and Jokioinen in Finland, Bergen in Norway, Norrköping in Sweden, and Davos in Switzerland. Both UV reconstruction techniques developed in this thesis account for the greater part of the factors affecting the amount of UV radiation reaching the Earth's surface. Thus, they are considered reliable and trustworthy, as suggested also by the good performance of the methods. The pyranometer-based method performs better than the sunshine-based method, especially for daily values. For monthly values, the difference between the performances of the methods is smaller, indicating that the sunshine-based method is roughly as good as the pyranometer-based method for assessing long-term changes in surface UV radiation. The time series of reconstructed UV radiation produced in this thesis provide new insight into the past UV radiation climate and how UV radiation has varied over the years. Especially the sunshine-based UV time series, extending back to 1926 and 1950 at Davos and Sodankylä, respectively, also put the recent changes driven by the ozone decline observed over the last few decades into perspective. At Davos, the reconstructed UV over the period 1926-2003 shows considerable variation throughout the entire period, with high values in the mid-1940s, early 1960s, and the 1990s. Moreover, the variations prior to 1980 were found to be caused primarily by variations in cloudiness, while the increase of 4.5 %/decade over the period 1979-1999 was supported both by the decline in the total ozone column and by changes in cloudiness. Of the other stations included in this work, both Sodankylä and Norrköping show a clear increase in UV radiation since the early 1980s (3-4 %/decade), driven primarily by changes in cloudiness, and to a lesser extent by the diminution of total ozone.
At Jokioinen, a weak increase was found, while at Bergen there was no considerable overall change in the UV radiation level.
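As a minimal illustration of the reconstruction principle described above, the sketch below scales a modelled clear-sky UV dose by an empirical cloud modification factor (CMF) derived either from sunshine duration or from pyranometer data. The functional forms, coefficients and numbers are illustrative assumptions, not the relations fitted in the thesis.

```python
# Illustrative cloud modification factors (CMF) for UV reconstruction.
def cmf_from_sunshine(sunshine_fraction, a=0.35):
    """CMF from relative sunshine duration (0..1); a simple linear form is assumed."""
    return a + (1.0 - a) * sunshine_fraction

def cmf_from_pyranometer(global_measured, global_clear, exponent=0.8):
    """CMF from the ratio of measured to modelled clear-sky global radiation."""
    return (global_measured / global_clear) ** exponent

uv_clear = 3.2  # modelled clear-sky daily UV dose, kJ/m^2 (assumed)
print("sunshine-based estimate:  ", uv_clear * cmf_from_sunshine(0.6))
print("pyranometer-based estimate:", uv_clear * cmf_from_pyranometer(14.0, 20.0))
```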
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is the set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage in the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes. Previously disregarded usable observations were found in the press. The improved collection of IDPs of the 1888 earthquake shows that this event was a rare occurrence in the area: in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, i.e. Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. This event posed the indirect hazard of fire, although its magnitude of around 4.6 was minor on the global scale. The distribution of slightly damaging intensities was larger than previously outlined. This may have resulted from the amplification of the ground shaking in the soft soil of the coast and river valleys, where most of the population was found. The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and to assess methodologies that can be used when dealing with macroseismic intensity. It was evaluated using correspondence analysis. Different approaches, such as gridding, were tested to estimate the macroseismic field from the intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration. A more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and the instrumental era was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate the seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
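To illustrate why long recurrence times matter when interpreting hazard estimates, the sketch below evaluates the textbook Poisson exceedance probability for a given exposure time. This is a generic relation shown under assumed recurrence times, not the parametric-historic methodology applied in the thesis.

```python
import math

# Poisson model: P(at least one exceedance in t years) = 1 - exp(-t/T),
# where T is the mean recurrence time of the exceedance. T values assumed.
def exceedance_probability(recurrence_time_years, exposure_years):
    return 1.0 - math.exp(-exposure_years / recurrence_time_years)

for T in (475, 2500, 10000):
    p = exceedance_probability(T, 50)
    print(f"T = {T:>5} a  ->  P(exceedance within 50 a) = {p:.3f}")
```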
Abstract:
Snow cover is very sensitive to climate change and has a large feedback effect on the climate system due to its high albedo. Snow covers almost all surfaces in Antarctica, and small changes in snow properties can mean large changes in absorbed radiation. In the ongoing discussion of climatic change, the mass balance of Antarctica has received increasing attention during recent decades, since its reaction to global warming strongly influences sea-level change. The aim of the present work was to examine the spatial and temporal variations in the physical and chemical characteristics of surface snow and in annual accumulation rates in western Dronning Maud Land, Antarctica. The data were collected along a 350-km-long transect from the coast to the plateau during the years 1999-2004 as a part of the Finnish Antarctic Research Programme (FINNARP). The research focused on the most recent annual accumulation in the coastal area. The results show that the distance from the sea, the moisture source, was the predominant factor controlling the variations in both the physical (conductivity, grain size, oxygen isotope ratio and accumulation) and the chemical snow properties. Sea-salt and sulphur-containing components predominated in the coastal region. The local influences of nunataks and topographic highs were also visible in the snow. The variations in all measured properties were wide within single sites, mostly due to redistribution by winds and sastrugi topography, which reveals the importance of spatially representative measurements. The mean accumulation rates on the ice shelf, in the coastal region and on the plateau were 312 ± 28, 215 ± 43 and 92 ± 25 mm w.e., respectively. Depth hoar layers were usually found under a thin ice crust and were associated with a low dielectric constant and high concentrations of nitrate. Taking into account the vast size of the Antarctic ice sheet and its geographic characteristics, it is important to extend the investigation of the distribution of surface snow properties and accumulation to provide well-documented data.
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly by using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. Normally this dependence is steep, and it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure involve uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for the verification of the correct treatment location are the most important subjects. The most important factor in technical QC is the verification that the radiation production of an accelerator, called the output, is within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators by using measured beam data, and the uncertainty of such data sets the limit for the best achievable calculation accuracy. All these dosimetric measurements require good experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because of the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on the investigation of the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of the use of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the results of the QC measurements was proposed for the estimation of both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is intended to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling the lowering of action levels and the prolongation of the measurement time interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
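As a minimal sketch of the model-fitting idea for dosimetric QC results, the example below fits a linear output drift plus random measurement noise to a series of monthly accelerator output checks, separating long-term output stability from the reproducibility of the QC measurement itself. The drift, noise level and linear model are illustrative assumptions, not data or procedures from the thesis.

```python
import numpy as np

# Synthetic monthly output checks: slow drift plus random measurement noise.
rng = np.random.default_rng(1)
months = np.arange(24)                                     # time since calibration
output = 100.0 + 0.08 * months + rng.normal(0.0, 0.3, 24)  # measured output (%)

# Fit a linear model: slope ~ output stability, residual SD ~ reproducibility.
drift, baseline = np.polyfit(months, output, 1)
residual_sd = np.std(output - (baseline + drift * months), ddof=2)

print(f"fitted drift   : {drift:+.3f} % per month")
print(f"reproducibility: {residual_sd:.2f} % (1 SD of residuals)")
```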
Abstract:
Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, in which it has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse of the very first moments in its history. At the same time, they require theorists to focus on the fundamental challenges awaiting at the boundary of high energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What implications do the various models of physics beyond the Standard Model have for the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on the one hand by reviewing our current knowledge of the history of the early universe, and on the other hand by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct our attention to the specific questions addressed in the three original articles that form the main scientific content of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin or very early stage of the universe. They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases, and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction, and they also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations, and we show how the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
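For orientation, a minimal sketch of the Classical Nucleation Theory quantities referred to above: the liquid-drop work of forming an n-molecule cluster, dG(n) = -n*k*T*ln(S) + sigma*A(n), and the critical cluster size obtained from its maximum. The water-like surface tension and molecular volume below are rough assumed values for illustration, not results of the thesis.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def work_of_formation(n, saturation_ratio, temperature, sigma, v_molecule):
    """Liquid-drop model: dG(n) = -n*k*T*ln(S) + sigma*A(n)."""
    radius = (3.0 * n * v_molecule / (4.0 * math.pi)) ** (1.0 / 3.0)
    area = 4.0 * math.pi * radius ** 2
    return -n * K_B * temperature * math.log(saturation_ratio) + sigma * area

T, S = 280.0, 4.0  # temperature (K) and saturation ratio (assumed example values)
dG = [work_of_formation(n, S, T, 0.072, 3.0e-29) for n in range(1, 200)]
n_star = 1 + max(range(len(dG)), key=dG.__getitem__)  # size at the barrier maximum
print(f"critical cluster size n* ≈ {n_star}, barrier ≈ {max(dG) / (K_B * T):.1f} kT")
```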
Abstract:
Transport plays an important role in the distribution of long-lived gases such as ozone and water vapour in the atmosphere. Understanding the observed variability in these gases, as well as predicting future changes, therefore depends on our knowledge of the relevant atmospheric dynamics. This dissertation studies certain dynamical processes in the stratosphere and upper troposphere which influence the distribution of ozone and water vapour in the atmosphere. The planetary waves that originate in the troposphere drive the stratospheric circulation. They influence both the meridional transport of substances and the parameters of the polar vortices. In turn, temperatures inside the polar vortices influence the abundance of Polar Stratospheric Clouds (PSC) and therefore the chemical ozone destruction. Wave forcing of the stratospheric circulation is not uniform during winter. The November-December averaged stratospheric eddy heat flux shows a significant anticorrelation with the January-February averaged eddy heat flux in the midlatitude stratosphere and troposphere. These intraseasonal variations are attributable to internal stratospheric vacillations. In the period 1979-2002, the wave forcing exhibited a negative trend which was confined to the second half of winter only. In the period 1958-2002, the area, strength and longevity of the Arctic polar vortices do not exhibit significant long-term changes, while the area with temperatures lower than the threshold temperature for PSC formation shows a statistically significant increase. However, the Arctic vortex parameters show significant decadal changes which are mirrored in the ozone variability. Monthly ozone tendencies in the Northern Hemisphere show significant correlations (|r| = 0.7) with proxies of the stratospheric circulation. In the Antarctic, the springtime vortex in the lower stratosphere shows statistically significant trends in temperature, longevity and strength (but not in area) in the period 1979-2001. Analysis of the ozone and water vapour vertical distributions in the Arctic upper troposphere and lower stratosphere (UTLS) shows that layering below and above the tropopause is often associated with poleward Rossby wave breaking. These observations, together with calculations of cross-tropopause fluxes, emphasize the importance of poleward Rossby wave breaking for stratosphere-troposphere exchange in the Arctic.