21 results for correctness verification

in Helda - Digital Repository of University of Helsinki


Relevance: 20.00%

Abstract:

In recent years, concern has arisen over the effects of increasing carbon dioxide (CO2) in the earth's atmosphere due to the burning of fossil fuels. One way to mitigate the increase in atmospheric CO2 concentration and climate change is carbon sequestration into forest vegetation through photosynthesis. Comparable regional-scale estimates of the carbon balance of forests are therefore needed for scientific and political purposes. The aim of the present dissertation was to improve methods for quantifying and verifying inventory-based carbon pool estimates of boreal forests on mineral soils. Ongoing forest inventories provide data based on statistically sound sampling for estimating the level of carbon stocks and stock changes, but improved modelling tools and comparisons of methods are still needed. In this dissertation, the entire inventory-based large-scale forest carbon stock assessment method was presented together with separate methods for enhancing and comparing it. The enhancement methods presented here include ways to quantify the biomass of understorey vegetation as well as to estimate the litter production of needles and branches. In addition, the optical remote sensing method illustrated in this dissertation can be used for comparison with independent data. The forest inventory-based large-scale carbon stock assessment method demonstrated here provided reliable carbon estimates when compared with independent data. Future activity to improve the accuracy of this method could consist of reducing the uncertainties regarding belowground biomass and litter production, as well as the soil compartment. The methods developed will serve the needs of UNFCCC reporting and reporting under the Kyoto Protocol. The method is principally intended for analysts or planners interested in quantifying carbon over extensive forest areas.
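The bookkeeping behind an inventory-based assessment of this kind (biomass from inventory data, litter production feeding a soil decomposition model) can be caricatured in a few lines. The sketch below is entirely hypothetical: the function names and every coefficient are invented for illustration and are not the dissertation's models or values.

```python
# A hypothetical back-of-the-envelope sketch of inventory-based carbon
# bookkeeping: biomass carbon from inventory volumes, and a first-order
# soil model fed by litter input. All coefficients are invented.

def biomass_carbon(stem_volume_m3, expansion_factor=0.7, carbon_fraction=0.5):
    """Stem volume -> total biomass carbon, via illustrative factors."""
    return stem_volume_m3 * expansion_factor * carbon_fraction

def soil_carbon_step(soil_c, litter_input, decay_rate=0.05):
    """One annual step of a first-order soil model: dC/dt = input - k*C."""
    return soil_c + litter_input - decay_rate * soil_c

soil = 100.0
for _ in range(200):
    soil = soil_carbon_step(soil, litter_input=6.0)
print(round(soil, 1))   # → 120.0, the steady state input / k
```

The first-order form is the simplest possible stand-in for a dynamic soil carbon model; its point here is only that the soil stock keeps accumulating while the detritus input exceeds decomposition, as the abstract describes.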

Relevance: 10.00%

Abstract:

Objectives. The sentence span task is a complex working memory span task used for estimating total working memory capacity for both processing (sentence comprehension) and storage (remembering a set of words). Several traditional models of working memory suggest that performance on these tasks relies on phonological short-term storage. However, long-term memory effects, as well as the effects of expertise and strategies, have challenged this view. This study uses a working memory task that aids the creation of retrieval structures in the form of stories, which have been shown to form integrated structures in long-term memory. The research question is whether sentence and story contexts boost memory performance in a complex working memory task. The hypothesis is that storage of the words in the task takes place in long-term memory. Evidence of this would be better recall for words as parts of sentences than for separate words, and, in particular, a beneficial effect for words that are part of an organized story. Methods. Twenty stories consisting of five sentences each were constructed, and the stimuli in all experimental conditions were based on these sentences and sentence-final words, reordered and recombined for the other conditions. Participants read aloud sets of five sentences that either formed a story or did not. In one condition they had to report all the last words at the end of the set; in another, they memorised an additional separate word with each sentence. The sentences were presented on the screen one word at a time (500 ms). After the presentation of each sentence, the participant verified a statement about the sentence. After five sentences, the participant repeated back the words in their correct positions. Experiment 1 (n=16) used immediate recall; Experiment 2 (n=21) used both immediate recall and recall after a distraction interval (the operation span task).
In Experiment 2, a distracting mental arithmetic task was presented instead of recall in half of the trials, and an individual word was added before each sentence in the two experimental conditions in which the participants were to memorize the sentence-final words. Subjects also performed a listening span task (Experiment 1) or an operation span task (Experiment 2) to allow comparison of the estimated span with performance in the story task. Results were analysed using correlations, repeated-measures ANOVA and a chi-square goodness-of-fit test on the distribution of errors. Results and discussion. Both the relatedness of the sentences (the story condition) and the inclusion of the words in sentences helped memory. An interaction showed that the story condition had a greater effect on last words than on separate words. The beneficial effect of the story was shown in all serial positions, and the effects remained in delayed recall. When the sentences formed stories, performance in verifying the statements about sentence content was better. This, as well as the differing distributions of errors across experimental conditions, suggests that different levels of representation are in use in the different conditions. In the story condition, these representations could take the form of an organized memory structure, a situation model. The other working memory tasks had only a few weak correlations with the story task, which could indicate that different processes are in use in the tasks. The results do not support short-term phonological storage, but are instead compatible with the words being encoded into LTM during the task.
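The chi-square goodness-of-fit test on error distributions mentioned above can be sketched as follows. The error categories and counts below are invented for the example; only the statistic itself is standard.

```python
# An illustrative chi-square goodness-of-fit computation of the kind used to
# compare error distributions across conditions. The error categories and
# counts below are invented, not the experiment's data.

def chi_square_statistic(observed, expected):
    """Pearson's chi-square statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical error counts by type (omission, order error, intrusion)
observed = [30, 12, 8]
total = sum(observed)
expected = [total / 3] * 3     # uniform distribution as the null hypothesis

stat = chi_square_statistic(observed, expected)
print(round(stat, 2))          # → 16.48 with these made-up counts
```

Comparing such statistics (against a chi-square distribution with k-1 degrees of freedom) is what allows the claim that error distributions differ between conditions.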

Relevance: 10.00%

Abstract:

Sea-surface wind observations from previous-generation scatterometers have been successfully assimilated into Numerical Weather Prediction (NWP) models, and impact studies conducted with these assimilation implementations have shown a distinct improvement in model analysis and forecast accuracy. The Advanced Scatterometer (ASCAT), flown on Metop-A, offers improved sea-surface wind accuracy and better data coverage than the previous-generation scatterometers. Five individual case studies are carried out. The effect of including ASCAT data in the High Resolution Limited Area Model (HIRLAM) assimilation system (4D-Var) is found to be neutral to positive for situations with a general flow direction from the Atlantic Ocean. For northerly flow regimes the effect is negative; this is attributed to problems in modelling northerly flows, and also to the lack of a suitable verification method. Suggestions for, and an example of, an improved verification method are presented. A closer examination of a polar low evolution is also shown. It is found that the ASCAT assimilation scheme improves the forecast of the initial evolution of the polar low, but the model advects the strong low-pressure centre eastward too fast. Finally, the flaws of the implementation are found to be minor, and implementing the ASCAT assimilation scheme in the operational HIRLAM suite is feasible, although validation over a longer time period is still required.
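As a toy illustration of the general kind of verification metric at issue here, the root-mean-square error of the wind-vector difference between forecast and observation can be computed as below. The wind values are invented, and this is not the improved verification method proposed in the thesis.

```python
# Illustrative forecast-verification metric for sea-surface winds:
# RMSE of the (u, v) wind-vector difference. All values are invented.
import math

def vector_rmse(forecast, observed):
    """RMSE of (u, v) wind vectors, in the same units as the input."""
    sq = [(uf - uo) ** 2 + (vf - vo) ** 2
          for (uf, vf), (uo, vo) in zip(forecast, observed)]
    return math.sqrt(sum(sq) / len(sq))

forecast = [(10.0, 2.0), (8.0, -1.0), (12.0, 3.0)]   # hypothetical model winds
observed = [(9.0, 2.5), (8.5, -0.5), (11.0, 2.0)]    # hypothetical ASCAT winds
print(round(vector_rmse(forecast, observed), 2))     # → 1.12 with these values
```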

Relevance: 10.00%

Abstract:

The main objective of this study is to evaluate selected geophysical, structural and topographic methods on regional, local, and tunnel and borehole scales, as indicators of the properties of fracture zones or fractures relevant to groundwater flow. Such information serves, for example, groundwater exploration and the prediction of the risk of groundwater inflow in underground construction. This study aims to address how the features detected by these methods are linked to groundwater flow in qualitative and semi-quantitative terms, and how well the methods reveal the properties of fracturing affecting groundwater flow at the studied sites. The investigated areas are: (1) the Päijänne water-conveyance tunnel, whose study serves as a verification of structures identified on regional and local scales; (2) the Oitti fuel spill site, to telescope across scales and compare geometries of structural assessment; and (3) Leppävirta, where fracturing and the hydrogeological environment have been studied on the scale of a drilled well. The methods applied in this study include: the interpretation of lineaments from topographic data and their comparison with aeromagnetic data; the analysis of geological structures mapped in the Päijänne Tunnel; borehole video surveying; groundwater inflow measurements; groundwater level observations; and information on the tunnel's deterioration as demonstrated by block falls. The study combined geological and geotechnical information on the relevant factors governing groundwater inflow into a tunnel and indicators of fracturing, as well as environmental datasets, as overlays for spatial analysis using GIS. Geophysical borehole logging and fluid logging were used in Leppävirta to compare the responses of different methods to fracturing and other geological features on the scale of a drilled well. Results from some of the geophysical borehole measurements were affected by the large diameter (gamma radiation) or uneven surface (caliper) of the boreholes.
However, the various anomalies indicating a more fractured upper part of the bedrock traversed by well HN4 in Leppävirta suggest that several methods can be used for detecting fracturing. Fracture trends appear to align similarly on different scales in the zone of the Päijänne Tunnel. For example, similar patterns were found between the regional magnetic trends and the orientations of topographic lineaments interpreted as expressions of fracture zones. The same structural orientations as those of the larger structures on local or regional scales were observed in the tunnel, even though a match could not be made in every case. The size and orientation of the observation space (a patch of terrain at the surface, a tunnel section, or a borehole), the characterization method with its typical sensitivity, and the characteristics of the location all influence the identification of the fracture pattern. Through due consideration of the influence of the sampling geometry, and by utilizing complementary fracture characterization methods in tandem, some of the complexities of the relationship between fracturing and groundwater flow can be addressed. The flow connections demonstrated by the response of the groundwater level in monitoring wells to the pressure decrease in the tunnel, and by the transport of MTBE through fractures in the bedrock at Oitti, highlight the importance of protecting the tunnel water from the risk of contamination. In general, the largest drawdown values occurred in monitoring wells closest to the tunnel and/or close to the topographically interpreted fracture zones. It seems that, to some degree, the rate of inflow correlates positively with the level of reinforcement, as both are connected with the fracturing of the bedrock.
The following geological features increased the vulnerability of tunnel sections to pollution, especially when several factors affected the same locations: (1) fractured bedrock, particularly with associated groundwater inflow; (2) thin or permeable overburden above fractured rock; (3) a hydraulically conductive layer underneath the surface soil; and (4) a relatively thin bedrock roof above the tunnel. The observed anisotropy of the geological media should ideally be taken into account in the assessment of vulnerability of tunnel sections and eventually for directing protective measures.

Relevance: 10.00%

Abstract:

The purpose of this study is to analyse the development and understanding of the idea of consensus in bilateral dialogues among Anglicans, Lutherans and Roman Catholics. The source material consists of representative dialogue documents from the international, regional and national dialogues from the 1960s until 2006. In general, the dialogue documents argue for agreement/consensus based on commonality or compatibility. Each of the three dialogue processes has specific characteristics and formulates its argument in a unique way. The Lutheran-Roman Catholic dialogue has a particular interest in hermeneutical questions. In the early phases, the documents endeavoured to describe the interpretative principles that would allow the churches to proclaim the Gospel together, and to identify the foundation on which agreement in the church is based. This investigation ended up proposing a notion of "basic consensus", which later developed into a form of consensus that seeks to embrace, not to dismiss, differences (so-called "differentiated consensus"). The Lutheran-Roman Catholic agreement is based on a perspectival understanding of doctrine. The Anglican-Roman Catholic dialogue emphasises the correctness of interpretations. The documents consciously look towards a "common future", not the separated past. The dialogue's primary interpretative concept is koinonia. The texts develop a hermeneutics of authoritative teaching that has been described as the "rule of communion". The Anglican-Lutheran dialogue is characterised by an instrumental understanding of doctrine. Doctrinal agreement is facilitated by the ideas of coherence, continuity and substantial emphasis in doctrine. The Anglican-Lutheran dialogue proposes a form of "sufficient consensus" that considers a wide set of doctrinal statements and liturgical practices to determine whether an agreement has been reached to a degree that, although not complete, is sufficient for concrete steps towards unity.
Chapter V discusses the current challenges of consensus as an ecumenically viable concept. In this part, I argue that the acceptability of consensus as an ecumenical goal is based not only on the understanding of the church but, more importantly, on the understanding of the nature and function of doctrine. The understanding of doctrine has undergone significant changes during the time of the ecumenical dialogues. The major shift has been from a modern paradigm towards a postmodern paradigm. I conclude with proposals towards a way to construct a form of consensus that would survive philosophical criticism and be theologically valid and ecumenically acceptable.

Relevance: 10.00%

Abstract:

DNA and the genes it contains direct all cellular activity. However, mutations accumulate in DNA molecules both through environmental influences and as a result of the cells' own activity. If these errors are not repaired, a cell may be transformed into a cancer cell. Cells therefore employ several DNA repair mechanisms, one of which is mismatch repair (MMR). MMR corrects the errors that arise during DNA replication. Inherited mutations in the genes encoding MMR proteins impair DNA repair and predispose their carriers to hereditary nonpolyposis colorectal cancer (HNPCC). The most commonly mutated MMR genes are MLH1 and MSH2. HNPCC is inherited dominantly: a gene defect inherited from just one parent predisposes to cancer. A carrier of an MMR gene defect has a high lifetime probability of developing cancer, with an age of onset of only about 40 years. Identifying the predisposing gene defect in mutation carriers is very important, since regular surveillance allows a developing tumour to be detected and removed at an early stage; this has been shown to reduce cancer mortality significantly. Reliable knowledge of the origin of the predisposition is also important for those members of a cancer family who do not carry the mutation in question. Alongside predisposing mutations, MMR genes regularly harbour changes that represent normal genetic variation between individuals and are not expected to increase cancer risk. Distinguishing predisposing mutations from these neutral variants is difficult, but necessary to ensure effective surveillance of those at risk. This dissertation investigated 18 mutations of the MSH2 gene. The mutations had been found in families with a high incidence of cancer, but their effect on DNA repair capacity and cancer predisposition was unclear.
The effect of each mutation on the normal function of the MSH2 protein was studied, and the results were compared with the clinical data of the patients and families. Of the mutations studied, 12 caused deficiencies in MMR; these mutations were interpreted as cancer-predisposing. The 4 mutations that functioned normally in the analyses are unlikely to be the cause of cancer in the families concerned. The interpretation was left open for 2 mutations. The study directly benefited the carrier families of the described mutations, for whom information on the cancer risk of their gene defect was obtained, enabling genetic counselling and surveillance to be targeted to those who need them. The work also clarified the mechanisms by which a mutated MSH2 protein can lose its function.

Relevance: 10.00%

Abstract:

During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool to identify and predict functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and has become a routinely used tool in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in the genomes of living organisms. Flower development is a genetically regulated process which has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in these species are partly applicable to other plant systems. However, not all biological phenomena can be approached with just a few model systems. In order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied. Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9000 probes, representing all the different ESTs in the database. Of the gerbera ESTs, 20% were unique to gerbera, while 373 were specific to the Asteraceae family of flowering plants. Gerbera has composite inflorescences with three types of flowers that vary from each other morphologically. The marginal ray flowers are large, often pigmented and female, while the central disc flowers are smaller, more radially symmetrical perfect flowers. Intermediate trans flowers are similar to ray flowers but smaller in size. This feature, together with the molecular tools applied to gerbera, makes gerbera a unique system in comparison with the common model plants, which have only a single kind of flower in their inflorescences.
In the first part of this thesis, conditions for gerbera microarray analysis were optimised, including experimental design, sample preparation and hybridization, as well as data analysis and verification. Moreover, in the first study, flower- and flower organ-specific genes were identified. After the reliability and reproducibility of the method were confirmed, the microarrays were used to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about morphological development as well as the transcriptional regulation of the early stages of development in the various flower types of gerbera. The most interesting finding was the differential expression of MADS-box genes, suggesting the existence of flower type-specific regulatory complexes in the specification of the different types of flowers. The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray flower petals are large, which makes them an ideal model for studying organogenesis. Six different stages were compared and specifically analysed. Expression profiles of genes related to cell structure and growth implied that during stage 2 cells divide, a process marked by the expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion, and by stage 6 cells had stopped dividing and instead underwent expansion. Interestingly, at the last analysed stage, stage 9, when cells no longer grew, the highest number of upregulated genes was detected. The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for gene expression mining and marker gene discovery in future studies performed in the Gerbera Laboratory.
The publicly available data will also serve the plant research community world-wide.

Relevance: 10.00%

Abstract:

Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) is the most common hereditary vascular dementia. CADASIL is a systemic disease of small and medium-sized arteries, although the symptoms are almost exclusively neurological, including migrainous headache, recurrent ischemic episodes, cognitive impairment and, finally, subcortical dementia. CADASIL is caused by over 170 different mutations in the NOTCH3 gene, which encodes a receptor expressed in adults predominantly in vascular smooth muscle cells. The function of NOTCH3 is not crucial for embryonic development but is needed after birth: NOTCH3 directs postnatal arterial maturation and helps to maintain arterial integrity. It is involved in the regulation of vascular tone and in the wound healing of vascular injury. In addition, NOTCH3 promotes cell survival by inducing the expression of anti-apoptotic proteins. NOTCH3 is a membrane-spanning protein with a large extracellular domain (N3ECD) containing 34 epidermal growth factor-like (EGF) repeats and a smaller intracellular domain with six ankyrin repeats. All CADASIL mutations are located in the EGF repeats, and the majority cause the gain or loss of one cysteine residue in one of these repeats, leading to an odd number of cysteine residues, which in turn leads to misfolding of N3ECD. This misfolding most likely alters the maturation, targeting, degradation and/or function of the NOTCH3 receptor. CADASIL mutations do not seem to affect the canonical NOTCH3 signalling pathway. The main pathological findings are the accumulation of the NOTCH3 extracellular domain on degenerating vascular smooth muscle cells (VSMCs), the accumulation of granular osmiophilic material (GOM) in close vicinity to the VSMCs, and fibrosis and thickening of the arterial walls.
Narrowing of the arterial lumen and local thrombosis cause insufficient blood flow, mainly in small arteries of the cerebral white matter, resulting in tissue damage and lacunar infarcts. CADASIL is suspected in patients with a suggestive family history and clinical picture, as well as characteristic white matter alterations in magnetic resonance imaging. A definitive verification of the diagnosis can be achieved by identifying a pathogenic mutation in the NOTCH3 gene or through the detection of GOM by electron microscopy. To understand the pathology underlying CADASIL, we have generated a unique set of cultured vascular smooth muscle cell (VSMC) lines from umbilical cord, placental, systemic and cerebral arteries of CADASIL patients and controls. Analyses of these VSMCs suggest that mutated NOTCH3 is misfolded, thus causing endoplasmic reticulum stress, activation of the unfolded protein response and increased production of reactive oxygen species. In addition, mutation in NOTCH3 causes alterations in actin cytoskeletal structures and protein expression, increased branching and abnormal node formation. These changes correlate with NOTCH3 expression levels within the different VSMC lines, suggesting that the phenotypic differences of SMCs may affect the vulnerability of the VSMCs; therefore, the pathogenic impact of mutated NOTCH3 appears to vary in arteries at different locations. Furthermore, we identified PDGFR-β as an immediate downstream target gene of NOTCH3 signalling. Activation of NOTCH induces up-regulation of PDGFR-β expression in control VSMCs, whereas this up-regulation is impaired in CADASIL VSMCs; this might serve as an alternative molecular mechanism contributing to CADASIL pathology. In addition, we have established the congruence between NOTCH3 mutations and the electron microscopic detection of GOM, with a view to constructing a strategy for CADASIL diagnostics.
In cases where the genetic analysis is not available or the mutation is difficult to identify, a skin biopsy is an easy-to-perform and highly reliable diagnostic method. Importantly, it is invaluable in setting guidelines concerning how far one should proceed with the genetic analyses.

Relevance: 10.00%

Abstract:

The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly that of CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The famous Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were (i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, (ii) to evaluate the applied methodology by using empirical data, (iii) to assess the reliability of the estimates by means of uncertainty analysis, (iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, (v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a⁻¹, and the mean soil C sink was 0.7 Tg C a⁻¹. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated in relation to the current detritus input to soil, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source).
The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools imply the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these uncertainties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, due to the non-linear decomposition process that is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
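The precision-versus-accuracy point made here (including an uncertain sink widens the confidence interval of the total even as the inventory becomes more complete) can be illustrated with a small Monte Carlo sketch. All figures below are invented and do not correspond to the Finnish inventory numbers.

```python
# A minimal Monte Carlo sketch: adding an uncertain sink term widens the
# spread of the total inventory estimate. All numbers are invented.
import random
import statistics

random.seed(1)
N = 50_000

def total_sd(include_sink):
    """Std. dev. of the simulated national total (emissions + optional sink)."""
    totals = []
    for _ in range(N):
        emissions = random.gauss(100.0, 3.0)                # well-known sources
        sink = random.gauss(-20.0, 10.0) if include_sink else 0.0
        totals.append(emissions + sink)
    return statistics.stdev(totals)

sd_without = total_sd(False)   # ~ 3
sd_with = total_sd(True)       # ~ sqrt(3**2 + 10**2) ~ 10.4
print(sd_without < sd_with)    # True: including the sink reduces precision
```

The widened spread corresponds to the larger percentage uncertainty after including the sinks, while the shifted mean corresponds to the improved accuracy from covering more of the real balance.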

Relevance: 10.00%

Abstract:

Aerosol particles play a role in the earth's ecosystem and affect human health. A significant pathway for producing aerosol particles in the atmosphere is new particle formation, in which condensable vapours nucleate and the newly formed clusters grow by condensation and coagulation. However, this phenomenon is still not fully understood. This thesis provides insight into new particle formation from an experimental point of view. Laboratory experiments were conducted both on the nucleation process and on physicochemical properties related to new particle formation. Nucleation rate measurements are used to test nucleation theories. These theories, in turn, are used to predict nucleation rates in atmospheric conditions. However, nucleation rate measurements have proven quite difficult to conduct, as different devices can yield nucleation rates differing by several orders of magnitude for the same substances. In this thesis, work has been done to gain a better understanding of nucleation measurements, especially those conducted in a laminar flow diffusion chamber. Systematic studies of nucleation were also made for future verification of nucleation theories. Surface tensions and densities of substances related to atmospheric new particle formation were measured. The ternary system sulphuric acid + ammonia + water is a proposed candidate for participation in atmospheric nucleation. Surface tensions of an alternative candidate for nucleation in boreal forest areas, sulphuric acid + dimethylamine + water, were also measured. Binary systems consisting of organic acids + water are possible candidates for participation in the early growth of freshly nucleated particles. All the measured surface tensions and densities were fitted with equations, thermodynamically consistent where possible, so that they can easily be applied in atmospheric model calculations of nucleation and the subsequent evolution of particle size.

Relevance: 10.00%

Abstract:

Radiation therapy (RT) plays currently significant role in curative treatments of several cancers. External beam RT is carried out mostly by using megavoltage beams of linear accelerators. Tumor eradication and normal tissue complications correlate to dose absorbed in tissues. Normally this dependence is steep and it is crucial that actual dose within patient accurately correspond to the planned dose. All factors in a RT procedure contain uncertainties requiring strict quality assurance. From hospital physicist´s point of a view, technical quality control (QC), dose calculations and methods for verification of correct treatment location are the most important subjects. Most important factor in technical QC is the verification that radiation production of an accelerator, called output, is within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators by using measured beam data. The uncertainty of such data sets limits for best achievable calculation accuracy. All these dosimetric measurements require good experience, are workful, take up resources needed for treatments and are prone to several random and systematic sources of errors. Appropriate verification of treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT. This is due to steep dose gradients produced within or close to healthy tissues locating only a few millimetres from the targeted volume. The thesis was concentrated in investigation of the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. 
A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose. Data were provided to facilitate the choice of a sufficient QC program. The method takes into account local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as long as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8 %. In addition, their use may make the strictest recommended overall dose accuracy level of 3 % (1 SD) achievable.
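The model-fitting idea for separating the two factors can be sketched in a few lines. This is an illustrative simplification, not the thesis' actual model: it assumes output stability can be summarized by a linear drift, with the scatter of QC measurements around that trend taken as the reproducibility (1 SD); the measurement series is invented.

```python
import numpy as np

# Hypothetical monthly accelerator output QC measurements, % of nominal.
days = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)
output = np.array([100.0, 100.2, 100.1, 100.4, 100.3, 100.6, 100.5])

# Fit a line: the slope estimates long-term output drift, the residual
# scatter estimates the reproducibility of the QC measurement itself.
slope, intercept = np.polyfit(days, output, 1)
residuals = output - (intercept + slope * days)
reproducibility_sd = residuals.std(ddof=2)   # 1 SD scatter around the trend

drift_per_month = slope * 30.0
print(f"drift: {drift_per_month:.2f} %/month, "
      f"reproducibility: {reproducibility_sd:.2f} % (1 SD)")
```

Given such estimates, the combined dose uncertainty for a candidate QC program (action level and measurement interval) can be evaluated, which is what makes the choice of a sufficient program data-driven.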

Resumo:

The main object of the investigation was the syntactic functions of adjectives. The reason for the interest in these functions is the different modes of use in which an adjective can occur. Altogether, an adjective can take three different modes of use: attributive (e.g. a fast car), predicative (e.g. the car is fast) and adverbial (e.g. the car drives fast). Since an adjective cannot always take every function, some dictionaries (especially learner's dictionaries) provide information within the lexical entry about any restrictions. The purpose of the research was to compare the lexical entries of adjectives in four selected monolingual German dictionaries. The syntactical data on the adjectives were compared in order to work out the differences and common characteristics of the lexical entries concerning the different modes of use, and to analyse and assess them. In the foreground, however, were the differences in the syntactical information. Concerning those differences, it had to be worked out which entry is the grammatically correct one, or whether an entry is in fact wrong. To find that out, an empirical analysis was needed, based on the question of how an adjective is actually used in context wherever the data in the dictionaries do not agree. The correctness and homogeneity of lexical entries in German dictionaries are very important for supporting learners of the German language and for ensuring the user-friendliness of dictionaries. The investigations made clear that in almost half of the cases (over 40 %) the syntactical information on an adjective differs between the dictionaries. These differences naturally make it very difficult for non-native speakers to understand the correct usage of an adjective.
Thus the main aim of the doctoral thesis was to deliver and demonstrate the correct syntactical usage of a certain set of adjectives.
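The cross-dictionary comparison can be illustrated with a small sketch. The dictionary names and entries below are invented for illustration; the point is only the mechanism: record which modes of use each dictionary licenses for an adjective, and flag the adjectives on which the dictionaries disagree.

```python
# Hypothetical syntactic information from two (invented) dictionaries.
# Each entry lists the modes of use the dictionary allows for the adjective.
entries = {
    "schnell": {"Dict A": {"attributive", "predicative", "adverbial"},
                "Dict B": {"attributive", "predicative", "adverbial"}},
    "mutmaßlich": {"Dict A": {"attributive"},
                   "Dict B": {"attributive", "adverbial"}},
}

def disagreements(entries):
    """Return adjectives whose syntactic information differs across dictionaries."""
    return [adj for adj, by_dict in entries.items()
            if len({frozenset(modes) for modes in by_dict.values()}) > 1]

print(disagreements(entries))   # only the inconsistently labelled adjective remains
```

Adjectives flagged this way are exactly the cases that require empirical analysis of actual usage to decide which entry is correct.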

Resumo:

This thesis explores Finnish business repatriates’ coping strategies. Managing repatriation has been recognized as a demanding task for companies and an important issue in international human resource management. However, we still know relatively little about how repatriates respond to the demands of the return. This thesis addresses this problem by applying a process approach to coping with repatriation. The focus is on identifying repatriates’ coping strategies and their various forms. This study also aims to investigate what might influence the use of repatriates’ coping strategies and forms of coping. The background of this doctoral study is provided by earlier research that identified factors influencing repatriates’ adjustment, either positively or negatively. The empirical material of this doctoral thesis comprises twenty-two Phase I semi-structured interviews and ten Phase II follow-up interviews conducted for the purposes of verification. The main findings of the study are formulated as propositions. For instance, it was suggested that repatriates are likely to use different forms of problem-focused strategy more often than various forms of emotion-focused strategy. Moreover, they are also likely to use a larger range of problem-focused strategies than emotion-focused strategies. In addition, in contrast to specialists, repatriates occupying managerial positions are likely to use a greater number and a greater variety of different forms of problem-focused strategy than of emotion-focused strategy, especially in the context of preparing for their return and in different work role changes. This thesis contributes to research on repatriation, expatriation and coping, and identifies implications for management.

Resumo:

Mesoscale weather phenomena, such as the sea breeze circulation or lake effect snow bands, are typically too large to be observed at one point, yet too small to be caught by a traditional network of weather stations. Hence, the weather radar is one of the best tools for observing, analyzing and understanding their behavior and development. A weather radar network is a complex system with many structural and technical features to be tuned, from the location of each radar to the number of pulses averaged in the signal processing. These design parameters have no universal optimal values; their selection depends on the nature of the weather phenomena to be monitored as well as on the applications for which the data will be used. The priorities and critical values are different for forest fire forecasting, aviation weather service or the planning of snow ploughing, to name a few radar-based applications. The main objective of the work performed within this thesis has been to combine knowledge of the technical properties of radar systems with our understanding of weather conditions in order to produce better applications able to efficiently support decision making in weather- and safety-related service duties for modern society in northern conditions. When a new application is developed, it must be tested against "ground truth". Two new verification approaches for radar-based hail estimates are introduced in this thesis. For mesoscale applications, finding a representative reference can be challenging, since these phenomena are by definition difficult to catch with surface observations. Hence, almost any valuable information that can be distilled from unconventional data sources, such as newspapers and holiday snapshots, is welcome. However, as important as obtaining the data is obtaining estimates of data quality, and judging to what extent the two disparate information sources can be compared.
The new applications presented here do not rely on radar data alone, but ingest information from auxiliary sources such as temperature fields. The author concludes that in the future the radar will continue to be a key source of data and information, especially when used effectively together with other meteorological data.
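Verification of radar-based estimates against ground truth is conventionally summarized with contingency-table scores. The sketch below is illustrative, not the thesis' specific verification method: the event lists are invented, and in practice matching radar detections to surface hail reports requires the data-quality judgments discussed above.

```python
# Hypothetical per-case comparison: did radar indicate hail, and was hail
# actually reported at the surface?
radar_hail   = [True, True, False, True, False, False, True, False]
ground_truth = [True, False, False, True, False, True, True, False]

hits         = sum(r and g for r, g in zip(radar_hail, ground_truth))
false_alarms = sum(r and not g for r, g in zip(radar_hail, ground_truth))
misses       = sum(g and not r for r, g in zip(radar_hail, ground_truth))

pod = hits / (hits + misses)                  # probability of detection
far = false_alarms / (hits + false_alarms)    # false alarm ratio
csi = hits / (hits + misses + false_alarms)   # critical success index

print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f}")
```

Scores like these make disparate verification exercises comparable, but they are only as trustworthy as the quality estimates of the underlying reference data.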