37 results for Calculation, Arithmetical.
in Helda - Digital Repository of the University of Helsinki
Abstract:
From Arithmetic to Algebra. Changes in skills in the comprehensive school over 20 years. In recent decades the understanding of calculation has been emphasized in mathematics teaching. Many studies have found that better understanding helps to apply skills in new conditions and that the ability to think on an abstract level increases transfer to new contexts. In my research I treat competence as a matrix, with content on the horizontal axis and levels of thinking on the vertical axis. Know-how here means intellectual and strategic flexibility and understanding. The resources and limitations of memory affect learning in different ways at different phases; therefore both flexible conceptual thinking and automatization must be considered in learning. The research questions I examine are what kinds of changes have occurred in mathematical skills in the comprehensive school over the last 20 years, and what kind of conceptual thinking is demonstrated by students in this decade. The study consists of two parts. The first part is a statistical analysis of mathematical skills and their changes over the last 20 years in the comprehensive school. In the test the pupils did not use calculators. The second part is a qualitative analysis of the conceptual thinking of comprehensive school pupils in this decade. The study shows significant differences in algebra and in some parts of arithmetic. The largest differences were detected in calculation skills with fractions. In the 1980s two out of three pupils were able to complete tasks with fractions, but in the 2000s only one out of three pupils was able to do the same tasks. Also remarkable is that of the pupils who could complete the fraction tasks, only one out of three was on the conceptual level in his or her thinking. This means that only about 10% of pupils are able to understand an algebraic expression that has the same isomorphic structure as the corresponding arithmetical expression.
This finding is important because the ability to think innovatively is built when the basic concepts are learned. Keywords: arithmetic, algebra, competence
Abstract:
The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps: first, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. Results indicate that the information content of implied volatility is useful when predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
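The two-step procedure described above can be sketched in code. This is a minimal illustration, not the paper's actual model: the toy portfolio, the intrinsic-value "pricer", and the heavy-tailed Student-t scenario generator (standing in here for the hyperbolic distribution) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def portfolio_value(spot):
    """Toy portfolio: one long call, valued at intrinsic value only
    (a placeholder for a full option pricing model)."""
    strike = 100.0
    return np.maximum(spot - strike, 0.0)

# Step 1: revalue the portfolio under simulated market scenarios.
# A Student-t return distribution stands in for the heavy-tailed
# hyperbolic distribution used in the study.
spot0 = 105.0
returns = rng.standard_t(df=4, size=10_000) * 0.02
scenario_values = portfolio_value(spot0 * np.exp(returns))

# Step 2: summarize the scenario P&L into a single-sum risk figure,
# here the loss at the 95% confidence level (a VaR-style quantile).
pnl = scenario_values - portfolio_value(spot0)
var_95 = -np.quantile(pnl, 0.05)
print(f"95% expected risk exposure: {var_95:.2f}")
```

The same two-step structure holds regardless of the pricing model or return distribution plugged in; only the scenario generation and the summary quantile change.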
Abstract:
Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in the neuropsychological evaluation of MS patients. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from the Department of Neurology, Seinäjoki Central Hospital, and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up; additionally, a sample of 10 patients and controls was evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among relapsing-remitting MS patients in the present study was 42%. Impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among MS patients at the group level. However, practice effects in cognitive tests were less pronounced among MS patients than among healthy controls.
At the individual level, the spectrum of MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment: 69% of patients were correctly classified as cognitively impaired or unimpaired when comprehensive neuropsychological assessment was used as the "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
Abstract:
Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and the anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. In contrast, the turbulent fluxes of sensible and latent heat, as well as net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate into the residual term, and special attention must be paid to the accurate determination of the components. One source of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high-frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high-frequency corrections to water vapour fluxes and to assess the temporal variation of the energy fluxes. The turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study consist of widely accepted eddy covariance post-processing methods, in addition to Fourier and wavelet analysis. The high-frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high-frequency variation deficit.
Nevertheless, the resulting wavelet correction remains small compared to the traditional transfer function correction. The energy fluxes exhibit behaviour characteristic of urban environments: the energy input is channelled into sensible heat, as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m-2 in summer to -35 W m-2 in winter, indicating heat storage in the ground during summer. Furthermore, the anthropogenic heat flux is approximated to be 50 W m-2 during winter, when residential heating is important.
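The residual term described above is simple arithmetic on the measured energy balance components: the storage plus anthropogenic heat flux equals net radiation minus the turbulent fluxes. A minimal sketch with illustrative numbers (not SMEAR III data):

```python
import numpy as np

# Surface energy balance: Q* = Q_H + Q_E + (dQ_S + Q_F),
# so the unmeasurable term is solved as: residual = Q* - Q_H - Q_E.
# All values below are illustrative, in W m-2.

net_radiation = np.array([120.0, 300.0, 450.0])   # Q*  (measured)
sensible_heat = np.array([60.0, 180.0, 290.0])    # Q_H (eddy covariance)
latent_heat = np.array([30.0, 80.0, 130.0])       # Q_E (eddy covariance)

# Storage heat flux + anthropogenic heat flux, solved as a residual.
residual = net_radiation - sensible_heat - latent_heat
print(residual)  # -> [30. 40. 30.]
```

Because the residual is a difference of measured terms, any bias in the flux measurements (such as an uncorrected high-frequency attenuation of the latent heat flux) lands directly in it, which is why the spectral corrections matter.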
Abstract:
Determination of testosterone and related compounds in body fluids is of utmost importance in doping control and the diagnosis of many diseases. Capillary electromigration techniques are a relatively new approach for steroid research. Owing to the electrical neutrality of steroids, however, their separation by capillary electromigration techniques requires the use of charged electrolyte additives that interact with the steroids either specifically or non-specifically. The analysis of testosterone and related steroids by non-specific micellar electrokinetic chromatography (MEKC) was investigated in this study. The partial filling (PF) technique was employed, as it is suitable for detection by both ultraviolet spectrophotometry (UV) and electrospray ionization mass spectrometry (ESI-MS). Efficient, quantitative PF-MEKC UV methods for steroid standards were developed through the use of optimized pseudostationary phases comprising surfactants and cyclodextrins. PF-MEKC UV proved to be a more sensitive, efficient and repeatable method for the steroids than PF-MEKC ESI-MS. It was discovered that in PF-MEKC analyses of electrically neutral steroids, ESI-MS interfacing sets significant limitations not only on the chemistry affecting the ionization and detection processes, but also on the separation. The new PF-MEKC UV method was successfully employed in the determination of testosterone in male urine samples after microscale immunoaffinity solid-phase extraction (IA-SPE). The IA-SPE method, relying on specific interactions between testosterone and a recombinant anti-testosterone Fab fragment, is the first such method described for testosterone. Finally, new data on interactions between steroids and human and bovine serum albumins were obtained through the use of affinity capillary electrophoresis. A new algorithm for the calculation of association constants between proteins and neutral ligands is introduced.
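As an illustration of the kind of calculation involved in estimating association constants from affinity capillary electrophoresis, the sketch below fits the standard 1:1 binding isotherm to mobility-shift data. This is the generic textbook model, not necessarily the new algorithm introduced in the thesis, and all numerical values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(protein_conc, K, mu_free, mu_complex):
    """Observed mobility for 1:1 binding:
    mu_obs = (mu_free + K*[P]*mu_complex) / (1 + K*[P])."""
    return (mu_free + K * protein_conc * mu_complex) / (1 + K * protein_conc)

# Hypothetical mobility-shift data: protein concentration in mol/L,
# effective mobilities in cm^2 V^-1 s^-1 (invented values).
protein = np.array([0.0, 1e-5, 3e-5, 1e-4, 3e-4, 1e-3])
mu_obs = np.array([0.0, -0.9, -2.0, -3.3, -4.2, -4.7]) * 1e-5

# Nonlinear least-squares fit for K, mu_free and mu_complex.
popt, _ = curve_fit(binding_isotherm, protein, mu_obs,
                    p0=[1e4, 0.0, -5e-5])
K_assoc = popt[0]
print(f"estimated association constant K = {K_assoc:.3g} 1/M")
```

A nonlinear fit over the whole isotherm is generally preferred over the classical linearizations (double-reciprocal plots), since it weights the data points evenly.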
Abstract:
There is intense activity in the area of the theoretical chemistry of gold. It is now possible to predict new molecular species, and more recently solids, by combining relativistic methodology with isoelectronic thinking. In this thesis we predict a series of solid sheet-type crystals for Group-11 cyanides, MCN (M=Cu, Ag, Au), and Group-2 and 12 carbides, MC2 (M=Be-Ba, Zn-Hg). The idea of sheets is then extended to nanostrips, which can be bent into nanorings. The bending energies and deformation frequencies can be systematized by treating these molecules as elastic bodies. In these species Au atoms act as an 'intermolecular glue'. Further suggested molecular species are the new uncongested aurocarbons and the neutral Au_nHg_m clusters. Many of the suggested species are expected to be stabilized by aurophilic interactions. We also estimate the MP2 basis-set limit of the aurophilicity for the model compounds [ClAuPH_3]_2 and [P(AuPH_3)_4]^+. Besides investigating the size of the basis set applied, our research confirms that the 19-VE TZVP+2f level, used a decade ago, already produced 74% of the present aurophilic attraction energy for the [ClAuPH_3]_2 dimer. Likewise we verify the preferred C4v structure for the [P(AuPH_3)_4]^+ cation at the MP2 level. We also perform the first calculations on model aurophilic systems using the SCS-MP2 method and compare the results to high-accuracy CCSD(T) ones. The recently obtained high-resolution microwave spectra of MCN molecules (M=Cu, Ag, Au) provide an excellent testing ground for quantum chemistry. MP2 and CCSD(T) calculations, correlating all 19 valence electrons of Au and including BSSE and SO corrections, are able to give bond lengths to 0.6 pm or better. Our calculated vibrational frequencies are expected to be better than the currently available experimental estimates. Qualitative evidence for multiple Au-C bonding in triatomic AuCN is also found.
Abstract:
In the present work the methods of relativistic quantum chemistry have been applied to a number of small systems containing heavy elements, for which relativistic effects are important. First, a thorough introduction to the methods used is presented. This includes some of the general methods of computational chemistry and a special section dealing with how to include the effects of relativity in quantum chemical calculations. Second, the results obtained are presented. Investigations of high-valent mercury compounds are presented and new ways to synthesise such compounds are proposed. The methods described were applied to certain systems containing short Pt-Tl contacts, and it was possible to explain the interesting bonding situation in these compounds. One of the most common actinide compounds, uranium hexafluoride, was investigated and a new picture of its bonding was presented. Furthermore, the rarity of uranium-cyanide compounds was discussed. In a foray into the chemistry of gold, well known for its strong relativistic effects, investigations of different gold systems were performed. Analogies between Au$^+$ and platinum on the one hand, and oxygen on the other, were found. New systems with multiple bonds to gold were proposed to experimentalists; one of the proposed systems was spectroscopically observed shortly afterwards. A very interesting molecule, which was theoretically predicted a few years ago, is WAu$_{12}$. Some of its properties were calculated and the bonding situation was discussed. In a further study on gold compounds it was possible to explain the substitution pattern in bis[phosphane-gold(I)] thiocyanate complexes. This is of some help to experimentalists, as the systems could not be crystallised and the structure was therefore unknown. Finally, computations on one of the heaviest elements in the periodic table were performed.
Calculations on compounds containing element 110, darmstadtium, showed that it behaves similarly to its lighter homologue platinum. The extreme importance of relativistic effects for these systems was also shown.
Abstract:
In Finland, peat harvesting sites are utilized down almost to the mineral soil. In this situation the properties of the mineral subsoil are likely to have considerable influence on the suitability of a site for the various after-use forms. The aims of this study were to identify the chemical and physical properties of mineral subsoils that may limit the after-use of cut-over peatlands, to define a minimum practice for mineral subsoil studies, and to describe the role of different geological areas. The future percentages of the different after-use forms were predicted, which also made it possible to predict carbon accumulation in this future situation. The mineral subsoils of 54 different peat production areas were studied. Their general features and grain size distribution were analysed. Other general items studied were pH, electrical conductivity, organic matter, water-soluble nutrients (P, NO3-N, NH4-N, S and Fe) and exchangeable nutrients (Ca, Mg and K). In some cases other elements were analysed as well. In an additional case study, carbon accumulation effectiveness before the intervention was evaluated on three sites in the Oulu area (representing sites typically considered for peat production). Areas with relatively sulphur-rich mineral subsoil and pool-forming areas with very fine and compact mineral subsoil together covered approximately one fifth of all areas. These areas were unsuitable for commercial use and were recommended, for example, for mire regeneration. Another approximately one fifth of the areas included very coarse or very fine sediments. Commercial use of these areas would demand special techniques, such as using the remaining peat layer to compensate for properties missing from the mineral subsoil. A single after-use form was seldom suitable for a whole released peat production area. Three typical distribution patterns (models) of different mineral subsoils within individual peatlands were found. 57% of the studied cut-over peatlands were well suited for forestry.
In a conservative calculation, 26% of the areas were clearly suitable for agriculture, horticulture or energy crop production. If till without large boulders is included, the percentage of areas suitable for field crop production rises to 42%. 9-14% of all areas were well suited for mire regeneration or bird sanctuaries, but all areas were considered possible for mire regeneration with the correct techniques. A further 11% was recommended for mire regeneration to avoid disturbing the mineral subsoil, so in total 20-25% of the areas would be used for rewetting. High sulphur concentrations and acidity were typical of the areas below the highest shoreline of the ancient Litorina Sea and of the Lake Ladoga-Bothnian Bay zone. Differences related to nutrient status were also detected: in coarse sediments the natural nutrient concentration was clearly higher in the Lake Ladoga-Bothnian Bay zone and in the areas of Svecokarelian schists and gneisses than in the Granitoid area of central Finland and in the Archaean gneiss areas. Based on this study, the recommended minimum analysis for after-use planning covers pH, sulphur content and the percentage of fine material (<0.06 mm). Nutrient capacity can be analysed using the natural concentrations of calcium, magnesium and potassium. Carbon accumulation scenarios were developed based on the land-use predictions. These scenarios were calculated for the areas in peat production and the areas released from peat production (59,300 ha + 15,671 ha). The carbon accumulation of the scenarios varied between 0.074 and 0.152 million t C a-1. In the three peatlands considered for peat production, the long-term carbon accumulation rates varied between 13 and 24 g C m-2 a-1. The natural annual carbon accumulation had been decreasing towards the time of the possible intervention.
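The unit conversion behind such carbon accumulation figures can be illustrated as follows: an areal rate in g C m-2 a-1 multiplied by a land area in hectares gives an annual total in tonnes of carbon. The rate and area used below are illustrative values only, not results from the study.

```python
def annual_carbon_accumulation(rate_g_per_m2, area_ha):
    """Total annual accumulation in t C a-1 from an areal rate
    (g C m-2 a-1) and an area (ha)."""
    area_m2 = area_ha * 10_000          # 1 ha = 10,000 m2
    grams_per_year = rate_g_per_m2 * area_m2
    return grams_per_year / 1_000_000   # 1 t = 1e6 g

# e.g. a hypothetical 15,000 ha rewetted area accumulating 20 g C m-2 a-1
print(annual_carbon_accumulation(20.0, 15_000))  # -> 3000.0 (t C a-1)
```

Summing such terms over the predicted after-use areas gives scenario totals of the same form as the quoted 0.074-0.152 million t C a-1 range.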
Abstract:
This study explores the decline of terrorism by conducting source-based case studies on two left-wing terrorist campaigns of the 1970s: those of the Rode Jeugd in the Netherlands and the Symbionese Liberation Army in the United States. The purpose of the case studies is to shed more light on the interplay of different external and internal factors in the development of terrorist campaigns. This is done by presenting the history of the two chosen campaigns as narratives from the participants' points of view, based on interviews with participants and extensive archival material. Organizational resources and dynamics clearly influenced the course of the two campaigns, but in different ways. This divergence derives at least partly from dissimilarities in organizational design and incentive structure. Comparison of even these two cases shows that organizations using terrorism as a strategy can differ significantly, even when they share an ideological orientation, are of the same size and operate in the same time period. Theories on the dynamics of terrorist campaigns would benefit from being more sensitive to this. The study also highlights that the demise of a terrorist organization does not necessarily lead to the decline of the terrorist campaign; research should therefore look at the development of terrorist activity beyond the lifespan of a single organization. The collective ideological beliefs and goals functioned primarily as a sustaining force, a lens through which the participants interpreted all developments. On the other hand, it appears that the role of ideology should not be overstated: not all participants in the campaigns under study fully internalized the radical ideology. Rather, their participation was mainly based on their friendship with other participants. Instead of ideology per se, it is more instructive to look at how those involved described their organization, themselves and their role in the revolutionary struggle.
In both cases under study, the choice of the terrorist strategy was not merely the result of a cost-benefit calculation, but an important part of the participants' self-image. Indeed, the way the groups portrayed themselves corresponded closely with the forms of action in which they became involved. Countermeasures and the lack of support were major reasons for the decline of the campaigns. However, it is noteworthy that the countermeasures would not have had the same impact had it not been for certain weaknesses of the groups themselves. Moreover, besides the direct impact the countermeasures had on the campaigns, equally important was how they affected the attitudes of the larger left-wing community and of the public in general. In this context, attitudes towards both the terrorist campaign and the authorities were relevant to the outcome of the campaigns.
Abstract:
Lead contamination of the environment is of particular concern, as lead is a known toxin. Until recently, however, much less attention has been given to the local contamination caused by activities at shooting ranges than to large-scale industrial contamination. In Finland, more than 500 tons of Pb is produced each year for shotgun ammunition. The contaminant threatens various organisms, groundwater and the health of human populations. However, the forest at shooting ranges usually shows no visible signs of stress compared to nearby clean environments. The aboveground biota normally reflects the belowground ecosystem; thus, the soil microbial communities appear to be strongly resistant to contamination, despite the influence of lead. The studies forming this thesis investigated a shooting range site at Hälvälä in southern Finland, which is heavily contaminated by lead pellets. It had previously been shown experimentally that the growth of grasses and the degradation of litter are retarded there. Measurements of the acute toxicity of the contaminated soil or soil extracts gave conflicting results: enchytraeid worms used as toxicity reporters were strongly affected, while reporter bacteria showed no or only very minor decreases in viability. Measurements using sensitive inducible luminescent reporter bacteria suggested that the bioavailability of lead in the soil is indeed low, and this notion was supported by the very low water extractability of the lead. Nevertheless, the frequency of lead-resistant cultivable bacteria was elevated, based on the isolation of cultivable strains. The bacterial and fungal diversity in heavily lead-contaminated shooting sectors was compared with that of pristine sections of the shooting range area. The bacterial 16S rRNA gene and the fungal ITS rRNA gene were amplified, cloned and sequenced using total DNA extracted from the soil humus layer as the template.
Altogether, 917 sequenced bacterial clones and 649 sequenced fungal clones revealed a high soil microbial diversity. No effect of lead contamination was found on bacterial richness or diversity, while fungal richness and diversity differed significantly between lead-contaminated and clean control areas. However, even in the case of fungi, genera deemed sensitive were not totally absent from the contaminated area: only their relative frequency was significantly reduced. Some operational taxonomic units (OTUs) assigned to Basidiomycota were clearly affected and were much rarer in the lead-contaminated areas. The studies of this thesis surveyed ectomycorrhizal (EcM) sporocarps, analyzed morphotyped EcM root tips by direct sequencing, and 454-pyrosequenced fungal communities in in-growth bags. A total of 32 EcM fungi that formed conspicuous sporocarps, 27 EcM fungal OTUs from 294 root tips, and 116 EcM fungal OTUs from a total of 8,194 ITS2 454 sequences were recorded. Ordination analyses by non-parametric multidimensional scaling (NMS) indicated that Pb enrichment induced a shift in the EcM community composition. This was visible as indicative trends in the sporocarp and root tip datasets, but was explicitly clear in the communities observed in the in-growth bags. The compositional shift in the EcM community was mainly attributable to an increase in the frequencies of OTUs assigned to the genus Thelephora, and to a decrease in the OTUs assigned to Pseudotomentella, Suillus and Tylospora, in Pb-contaminated areas compared to the control. The enrichment of Thelephora in contaminated areas was also observed when examining the total fungal communities in soil using DNA cloning and sequencing. While the compositional shifts are clear, their functional consequences for the dominant trees or the soil ecosystem remain undetermined. The results indicate that at the Hälvälä shooting range, lead influences the fungal communities but not the bacterial communities.
The forest ecosystem shows apparent functional redundancy, since no significant effects were seen on forest trees. With 454 pyrosequencing, the number of sequences in a single analysis run can now be up to one million, and the method has been applied in microbial ecology to characterize microbial communities. The handling of such sequence data with traditional programs is becoming difficult and exceedingly time-consuming, and novel tools are needed to handle the vast amounts of data being generated. The field of microbial ecology has recently benefited from the availability of a number of tools for describing and comparing microbial communities using robust statistical methods. However, although these programs provide methods for rapid calculation, it has become necessary to make them more amenable to the larger datasets and numbers of samples produced by pyrosequencing. As part of this thesis, a new program, MuSSA (Multi-Sample Sequence Analyser), was developed to handle sequence data from novel high-throughput sequencing approaches in microbial community analyses. The greatest advantage of the program is that large volumes of sequence data can be manipulated, and general OTU series with frequency values can be calculated across a large number of samples.
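The core bookkeeping that a multi-sample tool of this kind performs, tabulating OTU frequencies across many samples so communities can be compared, can be sketched as follows. The data structures, sample names and OTU labels here are hypothetical illustrations, not the actual MuSSA implementation.

```python
from collections import Counter, defaultdict

# (sample_id, otu_id) pairs, as produced by clustering reads into OTUs.
# Invented example data echoing the genera discussed above.
reads = [
    ("contaminated_1", "Thelephora"), ("contaminated_1", "Thelephora"),
    ("contaminated_1", "Suillus"),
    ("control_1", "Suillus"), ("control_1", "Tylospora"),
    ("control_1", "Pseudotomentella"),
]

# Build an OTU-by-sample frequency table in a single pass.
table = defaultdict(Counter)
for sample, otu in reads:
    table[otu][sample] += 1

for otu in sorted(table):
    print(otu, dict(table[otu]))
```

With real pyrosequencing data the same table is built from millions of reads across hundreds of samples, after which per-sample frequencies feed into diversity indices and ordination analyses.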
Abstract:
Lipid analysis is commonly performed by gas chromatography (GC) in laboratory conditions. Spectroscopic techniques, however, are non-destructive and can be implemented noninvasively in vivo. Excess fat (triglycerides) in visceral adipose tissue and the liver is known to predispose to metabolic abnormalities, collectively known as the metabolic syndrome. Insulin resistance is the likely cause, with diets high in saturated fat known to impair insulin sensitivity. Tissue triglyceride composition has been used as a marker of dietary intake, but it can also be influenced by tissue-specific handling of fatty acids. Recent studies have shown that adipocyte insulin sensitivity correlates positively with saturated fat content, contradicting the common view of dietary effects. A better understanding of the factors affecting tissue triglyceride composition is needed to provide further insights into tissue function in lipid metabolism. In this thesis two spectroscopic techniques were developed for the in vitro and in vivo analysis of tissue triglyceride composition. The in vitro studies (Study I) used infrared spectroscopy (FTIR), a fast and cost-effective analytical technique well suited for multivariate analysis. Infrared spectra are characterized by peak overlap, leading to poorly resolved absorbances and limited analytical performance. The in vivo studies (Studies II, III and IV) used proton magnetic resonance spectroscopy (1H-MRS), an established non-invasive clinical method for measuring metabolites in vivo. 1H-MRS has been limited in its ability to analyze triglyceride composition due to poorly resolved resonances. Using an attenuated total reflection accessory, we were able to obtain pure triglyceride infrared spectra from adipose tissue biopsies. Using multivariate curve resolution (MCR), we were able to resolve the overlapping double-bond absorbances of monounsaturated and polyunsaturated fat.
MCR also resolved the isolated trans double bond and conjugated linoleic acids from an overlapping background absorbance. Using oil phantoms to study the effects of different fatty acid compositions on the echo time behaviour of triglycerides, it was concluded that the use of long echo times improved peak separation, with T2 weighting having a negligible impact. It was also discovered that the echo time behaviour of the methyl resonance of omega-3 fats differed from that of other fats due to characteristic J-coupling. This novel insight could be used to detect omega-3 fats in human adipose tissue in vivo at very long echo times (TE = 470 and 540 ms). A comparison of 1H-MRS of adipose tissue in vivo with GC of adipose tissue biopsies in humans showed that long-TE spectra resulted in improved peak fitting and better correlations with GC data. The study also showed that the calculation of fatty acid fractions from 1H-MRS data is unreliable and should not be used. The omega-3 fatty acid content derived from long-TE in vivo spectra (TE = 540 ms) correlated with the total omega-3 fatty acid concentration measured by GC. The long-TE protocol used for the adipose tissue studies was subsequently extended to the analysis of liver fat composition. Respiratory triggering and a long TE resulted in spectra with the olefinic and tissue water resonances resolved. Conversion of the derived unsaturation to double-bond content per fatty acid showed that the results were in accordance with previously published gas chromatography data on liver fat composition. In patients with the metabolic syndrome, liver fat was found to be more saturated than subcutaneous or visceral adipose tissue. The higher saturation observed in liver fat may be a result of a higher rate of de novo lipogenesis in the liver than in adipose tissue. This thesis has introduced the first non-invasive method for determining adipose tissue omega-3 fatty acid content in humans in vivo.
The methods introduced here have also shown that liver fat is more saturated than adipose tissue fat.
Abstract:
The methods for estimating patient exposure in x-ray imaging are based on the measurement of the radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected. Therefore, real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (at the 95% confidence level). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibrated against a reference KAP meter. This method provides a practical way to calibrate field KAP meters; however, the reference KAP meters themselves require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities, but these do not entirely correspond to the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter. The accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, in which part of the radiation beam is always aimed directly at the detector without attenuation by tissue.
This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, which is the method generally used for this purpose, and the recommended accuracy can be achieved with the studied method. Radiation qualities and dose levels need to be optimized anew when new detector types are introduced. In this work, the optimal selections were examined with one direct digital detector type. For this device, the use of radiation qualities with higher energies was recommended, and appropriate image quality was achieved by increasing the low dose level of the system.
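The tandem calibration idea described above can be sketched in a few lines. This is a minimal illustration under assumed names: the function and its arguments are hypothetical, and a real calibration would also account for radiation quality, geometry and temperature/pressure corrections.

```python
def tandem_calibration_coefficient(ref_reading: float,
                                   ref_coefficient: float,
                                   field_reading: float) -> float:
    """Calibration coefficient for a field KAP meter from a tandem setup
    (illustrative sketch).

    The "true" kerma-area product is taken to be the reference meter's
    reading multiplied by its laboratory-derived calibration coefficient;
    the field meter's coefficient is then whatever factor maps its own
    simultaneous reading onto that value.
    """
    kap_true = ref_reading * ref_coefficient   # reference KAP in Gy·cm²
    return kap_true / field_reading            # field meter coefficient
```

The energy-dependence finding above matters here: this coefficient is only valid near the radiation quality at which it was determined, which is why a reference meter with weak energy dependence improves the accuracy of the method.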
Resumo:
The Earth's ecosystems are protected from the dangerous part of solar ultraviolet (UV) radiation by stratospheric ozone, which absorbs most of the harmful UV wavelengths. Severe depletion of stratospheric ozone has been observed in the Antarctic region, and to a lesser extent in the Arctic and at midlatitudes. Concern about the effects of increasing UV radiation on human beings and the natural environment has led to ground-based monitoring of UV radiation. In order to achieve high-quality UV time series for scientific analyses, proper quality control (QC) and quality assurance (QA) procedures have to be followed. In this work, QC and QA practices were developed for Brewer spectroradiometers and NILU-UV multifilter radiometers, which measure in the Arctic and Antarctic regions, respectively. These practices are applicable to other UV instruments as well. The spectral features and the effect of different factors affecting UV radiation were studied for the spectral UV time series at Sodankylä. The QA of the Finnish Meteorological Institute's (FMI) two Brewer spectroradiometers included daily maintenance, laboratory characterizations, the calculation of long-term spectral responsivity, data processing and quality assessment. New methods were developed for the cosine correction, the temperature correction and the calculation of long-term changes in spectral responsivity. Reconstructed UV irradiances were used as a QA tool for the spectroradiometer data. The actual cosine correction factor was found to vary within 1.08-1.12 for one Brewer and 1.08-1.13 for the other. The temperature characterization showed a linear dependence between the instrument's internal temperature and the photon counts per cycle. Both Brewers have participated in international spectroradiometer comparisons and have shown good stability. The differences between the Brewers and the portable reference spectroradiometer QASUME were within 5% during 2002-2010.
The features of the spectral UV radiation time series at Sodankylä were analysed for the period 1990-2001. No statistically significant long-term changes in UV irradiances were found, and the results depended strongly on the time period studied. Ozone was the dominant factor affecting UV radiation during springtime, whereas clouds played a more important role during summertime. During this work, the Antarctic NILU-UV multifilter radiometer network was established by the Instituto Nacional de Meteorología (INM) as a joint Spanish-Argentinian-Finnish cooperation project. As part of this work, the QC/QA practices of the network were developed. They included training of the operators, daily maintenance, regular lamp tests and solar comparisons with the travelling reference instrument. Drifts of up to 35% in the sensitivity of the channels of the NILU-UV multifilter radiometers were found during the first four years of operation. This work emphasized the importance of proper QC/QA, including regular lamp tests, for the multifilter radiometers as well. The effects of the drifts were corrected by a method scaling the channels of each site NILU-UV to those of the travelling reference NILU-UV. After correction, the mean ratios of erythemally-weighted UV dose rates measured during solar comparisons between the reference NILU-UV and the site NILU-UVs were 1.007±0.011 and 1.012±0.012 for Ushuaia and Marambio, respectively, for solar zenith angles up to 80°. Solar comparisons between the NILU-UVs and spectroradiometers showed a ±5% difference near local noon, which can be seen as proof of successful QC/QA procedures and transfer of irradiance scales. This work also showed that UV measurements made in the Arctic and Antarctic can be made comparable with each other.
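The drift correction described above, scaling each site channel to the travelling reference, can be sketched as follows. The function names and the simple ratio-based scaling are illustrative assumptions; the thesis's actual correction may use a more elaborate model (e.g. time- or zenith-angle-dependent factors).

```python
from statistics import mean

def channel_scaling_factor(ref_signals: list[float],
                           site_signals: list[float]) -> float:
    """Scaling factor mapping a drift-affected site NILU-UV channel onto
    the travelling reference NILU-UV, estimated from simultaneous
    solar-comparison measurements (illustrative sketch)."""
    ratios = [r / s for r, s in zip(ref_signals, site_signals)]
    return mean(ratios)

def correct_drift(site_signals: list[float], factor: float) -> list[float]:
    """Apply the scaling factor to subsequent site measurements."""
    return [factor * s for s in site_signals]
```

With regular lamp tests and solar comparisons, a 35% sensitivity drift shows up as a scaling factor drifting away from 1, and the corrected channels can then be compared against the reference as in the dose-rate ratios quoted above.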
Resumo:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly by using megavoltage beams of linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, and it is therefore crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important element of technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. Output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators by using measured beam data, and the uncertainty of these data sets a limit on the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because steep dose gradients are produced within or close to healthy tissues located only a few millimetres from the targeted volume. This thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT.
A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling lower action levels and prolongation of the measurement time interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
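The idea of model fitting the QC results to separate long-term output stability from measurement reproducibility can be sketched with an ordinary least-squares line. The linear model and the function interface are assumptions for illustration; the thesis's actual fitting model is not specified in the abstract.

```python
def fit_output_trend(times: list[float],
                     outputs: list[float]) -> tuple[float, float, float]:
    """Least-squares line through accelerator output QC measurements
    (illustrative sketch).

    Returns (slope, intercept, residual_sd): the slope characterizes
    long-term output drift (stability), while the residual standard
    deviation estimates the reproducibility of the QC measurement itself.
    """
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(outputs) / n
    sxx = sum((t - t_mean) ** 2 for t in times)
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, outputs))
    slope = sxy / sxx
    intercept = y_mean - slope * t_mean
    residuals = [y - (intercept + slope * t) for t, y in zip(times, outputs)]
    residual_sd = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return slope, intercept, residual_sd
```

Separating the two components in this way is what allows action levels and measurement intervals to be tuned: if the residual scatter is small and the drift slow, a longer interval between output checks can maintain the same overall dose accuracy.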