55 results for level sets
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, so it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, remains within narrow acceptable limits. Output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets a limit on the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are labour-intensive, take up resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity-modulated radiation therapy (IMRT) than in conventional RT, because steep dose gradients are produced within or close to healthy tissues located only a few millimetres from the targeted volume. This thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. 
A method was developed for estimating the effect of using different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
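The model-fitting idea can be sketched in miniature (a hedged toy, not the method actually developed in the thesis; the drift model, numbers and names are invented for illustration): fitting a trend line to a series of output QC readings separates the long-term output behaviour (the slope) from the measurement reproducibility (the residual scatter), the two factors a dosimetric QC program must account for.

```python
import random
import statistics

def fit_output_drift(days, outputs):
    """Least-squares line through QC output readings (% of nominal).

    Returns (drift_per_day, intercept, residual_sd): the slope estimates
    long-term output stability, while the residual SD estimates the
    reproducibility of the QC measurement itself.
    """
    n = len(days)
    mx = sum(days) / n
    my = sum(outputs) / n
    sxx = sum((x - mx) ** 2 for x in days)
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, outputs))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(days, outputs)]
    return slope, intercept, statistics.stdev(residuals)

# Toy monthly QC series: 0.01 %/day drift plus 0.3 % measurement noise.
random.seed(1)
days = [30 * i for i in range(12)]
outputs = [100.0 + 0.01 * d + random.gauss(0.0, 0.3) for d in days]
drift, level, repro = fit_output_drift(days, outputs)
```

With such estimates in hand, the trade-off described above becomes quantifiable: the smaller the reproducibility term, the lower the action levels that can be sustained for a given measurement interval.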
Abstract:
The influence of the architecture of the Byzantine capital spread to the Mediterranean provinces with travelling masters and architects. In this study the architecture of the Constantinopolitan School has been identified on the basis of the typology of churches, supplemented by certain morphological aspects where necessary. The impact of the Constantinopolitan workshops appears to have been more important than previously realized. This research revealed that the Constantinopolitan composite domed inscribed-cross type, or cross-in-square, spread throughout the Balkans and was soon adopted by the local schools of architecture. In addition, two novel variants were invented on the basis of this model: the semi-composite type and the so-called Athonite type; in the latter, lateral conches, the choroi, were added for liturgical reasons. In contrast, the origin of the domed ambulatory church was partly provincial. One result of this study is that the origin of the Middle Byzantine domed octagonal types was traced to Constantinople, as attested by the archaeological evidence. Some other architectural elements that have not been preserved in the destroyed capital have also survived at the provincial level: the domed hexagonal type, the multi-domed superstructure, the pseudo-octagon and the narthex known as the lite. The Constantinopolitan architecture of the period in question was based on Early Christian and Late Antique forms, practices and innovations, and this also emerges at the provincial level.
Abstract:
L'Amour de loin: The semantics of the unattainable in Kaija Saariaho's opera. Kaija Saariaho (born 1952) is one of the most internationally successful Finnish composers there has ever been. Her first opera, L'Amour de loin (Love from Afar, 1999-2000), has been staged all over the world and has won a number of important prizes. The libretto, written for L'Amour de loin by Amin Maalouf (born 1949), sets the work firmly in the culture of courtly love and the troubadours, which flourished in Occitania in the south of France during the Middle Ages. The male lead in the opera is the troubadour Jaufré Rudel, who lived in the twelfth century and is known to have taken part in the Second Crusade in 1147-1148. This doctoral thesis, L'Amour de loin: The semantics of the unattainable in Kaija Saariaho's opera, which comes within the field of musicology and opera research, examines the dimensions of meaning contained in Kaija Saariaho's opera L'Amour de loin. This hermeneutic-semiotic study is the first doctoral thesis dealing with Saariaho to be completed at the University of Helsinki, and the first thesis-level study of Saariaho's opera to be completed anywhere in the world. The study focuses on the libretto and music of the opera, that is to say the dramatic text (L'Amour de loin 1980), and examines on the one hand the dimensions of meaning produced by the dramatic text and, on the other, the way in which they fix the dramatic text in a historical and cultural context. Thus the study helps to answer questions about the dimensions of meaning contained in the dramatic text of the opera and how they can be interpreted. The most important methodological viewpoint is Lawrence Kramer's hermeneutic window (1990), supplemented by Raymond Monelle's semiotic theory of musical topics (2000, 2006) and the philosophical concept of Emmanuel Levinas (1996, 2002), the latter acting as an instrument of semantic interpretation in building up the analysis. 
The analytical section of the study is built around the three characters in the opera: Jaufré Rudel, Clémence the Countess of Tripoli, and the Pilgrim. The study shows that the music of Saariaho, who belongs to the third generation of Finnish modernists, has moved away from the post-serial aesthetic towards a more diatonic form of expression. There is diatonicism, for instance, in the sonorous individuality of the male lead, which is based on the actual melodies of the historical Jaufré Rudel; the use of outside material in this context is exceptional in Saariaho's work. At the same time, the opera contains a wealth of expressive devices she has used in her earlier work. It became apparent during the study that, as a piece of music, L'Amour de loin is a many-layered and multi-dimensional work that does not unambiguously represent any single stylistic trend or aesthetic. Despite the composer's post-serial background and its abrasive relationship with opera, L'Amour de loin is firmly attached to the tradition of Western opera. The analysis based on the theory of musical topics shows that topics referring to death and resurrection, used in opera since the seventeenth century, appear in L'Amour de loin. The troubadour topic, mainly identified with the harp, also emerges in the work. The study also shows that the work is attached to the tradition of Western opera in other respects, too, such as the travesti or trouser role played by the Pilgrim, and the idea of the deus ex machina derived from Ancient Greek theatre. The study shows that the concept of love based on the medieval practices of courtly love, and the associated longing for another shaped by almost 1,000 years of Western culture, are both manifested in the semantics of Kaija Saariaho's opera, a work of the contemporary music genre.
Abstract:
The aim of this dissertation was to adapt a questionnaire for assessing students’ approaches to learning and their experiences of the teaching-learning environment, and to explore the validity of the modified Experiences of Teaching and Learning Questionnaire (ETLQ) by examining how the instrument measures the underlying dimensions of students’ experiences and learning. The focus was on the relation between students’ experiences of their teaching-learning environment and their approaches to learning. Moreover, the relation between students’ experiences and students’ and teachers’ conceptions of good teaching was examined. In Study I the focus was on the use of the ETLQ in two different contexts, Finnish and British. The study aimed to explore the similarities and differences between the factor structures that emerged from the two data sets. The results showed that the factor structures concerning students’ experiences of their teaching-learning environment and their approaches to learning were highly similar in the two contexts. Study I also examined how students’ experiences of the teaching-learning environment are related to their approaches to learning in the two contexts. The results showed that students’ positive experiences of their teaching-learning environment were positively related to a deep approach to learning and negatively to a surface approach to learning in both the Finnish and British data sets. This result was replicated in Study II, which examined the relation between approaches to learning and experiences of the teaching-learning environment at the group level. Furthermore, Study II aimed to explore students’ approaches to learning and their experiences of the teaching-learning environment in different disciplines. The results showed that the deep approach to learning was more common in the soft sciences than in the hard sciences. 
In Study III, students’ conceptions of good teaching were explored by using qualitative methods, more precisely, by open-ended questions. The aim was to examine students’ conceptions, disciplinary differences and their relation to students’ approaches to learning. The focus was on three disciplines, which differed in terms of students’ experiences of their teaching-learning environment. The results showed that students’ conceptions of good teaching were in line with the theory of good teaching and there were disciplinary differences in their conceptions. Study IV examined university teachers’ conceptions of good teaching, which corresponded to the learning-focused approach to teaching. Furthermore, another aim in this doctoral dissertation was to compare the students’ and teachers’ conceptions of good teaching, the results of which showed that these conceptions appear to have similarities. The four studies indicated that the ETLQ appears to be a sufficiently robust measurement instrument in different contexts. Moreover, its strength is its ability to be at the same time a valid research instrument and a practical tool for enhancing the quality of students’ learning. In addition, the four studies emphasise that in order to enhance teaching and learning in higher education, various perspectives have to be taken into account. This study sheds light on the interaction between students’ approaches to learning, their conceptions of good teaching, their experiences of the teaching-learning environment, and finally, the disciplinary culture.
Abstract:
The aim of this dissertation was to examine the determinants of severe back disorders leading to hospital admission in Finland. First, back-related hospitalisations were considered from the perspective of socioeconomic status, occupation, and industry. Secondly, psychosocial factors at work, sleep disturbances, and lifestyle factors such as smoking and overweight were studied as predictors of hospitalisation due to back disorders. Two sets of data were used: 1) population-based data comprising all occupationally active Finns aged 25-64, including hospitalisations due to back disorders in 1996, and 2) a cohort of employees followed up for hospitalisation due to back disorders from 1973 to 2000. The results of the population-based study showed that people in physically strenuous industries and occupations, such as agriculture and manufacturing, were at an increased risk of being hospitalised for back disorders. The lowest hospitalisation rates were found in sedentary occupations. Occupational class and the level of formal education were independently associated with hospitalisation for back disorders, and this stratification was fairly consistent across age groups and genders. Men had a slightly higher risk of being hospitalised than women, and the risk increased with age in both genders. The results of the prospective cohort study showed that psychosocial factors at work, such as low job control and low supervisor support, predicted subsequent hospitalisation for back disorders even when adjustments were made for occupational class and physical workload history. However, psychosocial factors did not predict hospital admissions due to intervertebral disc disorders, only admissions due to other back disorders. Smoking and overweight, in contrast, predicted only hospitalisation for intervertebral disc disorders. These results suggest that the etiological factors of disc disorders and other back disorders differ from each other. 
The study concerning the association of sleep disturbances and other distress symptoms with hospitalisation for back disorders revealed that sleep disturbances predicted subsequent hospitalisation for all back disorders after adjustment for chronic back disorders and recurrent back symptoms at baseline, as well as for work-related load and lifestyle factors. Other distress symptoms were not predictive of hospitalisation.
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for the efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict blood collection volumes so as to provide cellular blood components in a timely manner, as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) poses a specific challenge for balancing supply with requests. Labour has proven to be a primary cost driver and should be managed efficiently. International comparisons of blood banking could identify inefficiencies and allow the reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the centres' blood component preparation departments. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002, and were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA; II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The comparisons focused on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP; IV) were used to adjust the costs of working hours and make them comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 across the centres (I). 
Significant variation was also observed in the annual volumes of red blood cells (RBCs) and PLTs produced. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and showed similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation among the centres (median 60%, range 41%-100%) (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably production equipment) to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed even greater variation (13%-100%) and an overall lower efficiency level than with labour as the only input. In cost efficiency, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas both the labour and cost savings potentials were more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency of component preparation departments revealed remarkable variation. 
A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and efficiency was not directly related to production volume. Evaluating the reasons for discarding components may offer a novel approach to studying efficiency. DEA proved applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and to promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
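To make the DEA idea concrete, consider the special case of one input and one output (say, working hours in, components out), where input-oriented CCR-DEA reduces to scaling each unit's productivity ratio by the best observed ratio. This is a hedged sketch with invented figures, not data or code from the study; the general multi-input, multi-output case requires solving a linear program per department.

```python
def ccr_efficiency(inputs, outputs):
    """Relative efficiency scores for the single-input, single-output
    case of CCR-DEA: each unit's output/input ratio is divided by the
    best observed ratio, so units on the frontier score 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Illustrative departments: annual working hours vs. components produced.
hours = [20_000, 35_000, 50_000, 15_000]
units = [40_000, 49_000, 100_000, 21_000]
scores = ccr_efficiency(hours, units)
```

A score of, say, 0.7 is read exactly as in the thesis: the department could in principle produce its output with 70% of the input it uses, a 30% savings potential.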
Abstract:
Due to the improved prognosis of many forms of cancer, an increasing number of cancer survivors are willing to return to work after their treatment. It is generally believed, however, that people with cancer are unemployed, stay at home, or retire more often than people without cancer. This study investigated the problems that cancer survivors experience on the labour market, as well as the disease-related, sociodemographic and psychosocial factors at work that are associated with the employment and work ability of cancer survivors. The impact of cancer on employment was studied by combining data from the Finnish Cancer Registry with census data from Statistics Finland for the years 1985, 1990, 1995 or 1997. There were two data sets, containing 46 312 and 12 542 people with cancer. The results showed that cancer survivors were slightly less often employed than their referents. Two to three years after the diagnosis, the employment rate of the cancer survivors was 9 percentage points lower than that of their referents (64% vs. 73%), whereas the employment rate had been the same before the diagnosis (78%). The employment rate varied greatly according to cancer type and education. The probability of being employed was greater in the lower than in the higher educational groups. People with cancer were less often employed than people without cancer mainly because of their higher retirement rate (34% vs. 27%). Like employment, retirement varied by cancer type. The risk of retirement was twofold for people with cancer of the nervous system or leukaemia compared to their referents, whereas people with skin cancer, for example, had no increased risk of retirement. The aim of the questionnaire study was to investigate whether the work ability of cancer survivors differs from that of people without cancer and whether cancer had impaired their work ability. There were 591 cancer survivors and 757 referents in the data. 
Even though the current work ability of the cancer survivors did not differ from that of their referents, 26% of the survivors reported that their physical work ability, and 19% that their mental work ability, had deteriorated due to cancer. The survivors who had other diseases or had received chemotherapy reported impaired work ability most often, whereas survivors with a strong commitment to their work organization, or a good social climate at work, reported impairment less frequently. The aim of the second questionnaire study, covering 640 people with a history of cancer, was to examine the extent of social support that cancer survivors needed and had received from their work community. The cancer survivors had received the most support from their co-workers, and they hoped for more support especially from occupational health care personnel (39% of women and 29% of men). More support was especially needed by men who had lymphoma, had received chemotherapy or had a low level of education. The results of this study show that the majority of survivors are able to return to work. There is, however, a group of cancer survivors who leave working life early, have impaired work ability due to their illness, and suffer from a lack of support from their workplace and the occupational health services. Treatment-related as well as sociodemographic factors play an important role in survivors' work-related problems, and presumably in their possibilities of continuing to work.
Design and testing of stand-specific bucking instructions for use on modern cut-to-length harvesters
Abstract:
This study addresses three important issues in tree bucking optimization in the context of cut-to-length harvesting. (1) Would the fit between the log demand and log output distributions be better if the price and/or demand matrices controlling the bucking decisions on modern cut-to-length harvesters were adjusted to the unique conditions of each individual stand? (2) In what ways can stand- and product-specific price and demand matrices be generated? (3) What alternatives are there for measuring the fit between the log demand and log output distributions, and what would an ideal goodness-of-fit measure be? Three iterative search systems were developed for seeking stand-specific price and demand matrix sets: (1) a fuzzy logic control system for calibrating the price matrix of one log product for one stand at a time (the stand-level one-product approach); (2) a genetic algorithm system for adjusting the price matrices of one log product in parallel for several stands (the forest-level one-product approach); and (3) a genetic algorithm system for dividing the overall demand matrix of each of several log products into stand-specific sub-demands simultaneously for several stands and products (the forest-level multi-product approach). The stem material used for testing the performance of the stand-specific price and demand matrices against that of the reference matrices comprised 9 155 Norway spruce (Picea abies (L.) Karst.) sawlog stems gathered by harvesters from 15 mature spruce-dominated stands in southern Finland. The reference price and demand matrices were either direct copies or slightly modified versions of those used by two Finnish sawmilling companies. Two types of stand-specific bucking matrices were compiled for each log product: one from the harvester-collected stem profiles and the other from the pre-harvest inventory data. 
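The genetic-algorithm search can be illustrated with a deliberately simplified sketch. Everything here is an invented stand-in, not the systems built in the study: the "harvester" is a softmax over class prices rather than the real stem-by-stem dynamic-programming bucking, and a mutation-only GA tunes a price vector for one product so that the simulated output distribution matches the demanded one, with the apportionment degree as the fitness function.

```python
import math
import random

def output_shares(prices):
    """Toy 'harvester': a log class's relative price drives how often it
    is cut, modelled here by a softmax over the price vector."""
    exps = [math.exp(p) for p in prices]
    total = sum(exps)
    return [e / total for e in exps]

def fitness(prices, demand):
    """Apportionment degree (0..1) between simulated output and demand."""
    return sum(min(a, b) for a, b in zip(output_shares(prices), demand))

def evolve_prices(demand, pop=30, gens=200, sigma=0.1, seed=2):
    """Minimal mutation-only genetic search: keep the better half of the
    population each generation and refill it with mutated copies."""
    rng = random.Random(seed)
    n = len(demand)
    population = [[rng.uniform(0.0, 1.0) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, demand), reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = [[g + rng.gauss(0.0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop - len(parents))]
        population = parents + children
    return max(population, key=lambda p: fitness(p, demand))

demand = [0.5, 0.3, 0.2]      # demanded shares of three log classes
best = evolve_prices(demand)
```

The forest-level approaches in the study differ in what the chromosome encodes (price matrices for several stands, or stand-specific sub-demands), but the search loop follows the same select-mutate-evaluate pattern.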
Four goodness-of-fit measures were analyzed for their appropriateness in determining the similarity between the log demand and log output distributions: (1) the apportionment degree (index), (2) the chi-square statistic, (3) the Laspeyres quantity index, and (4) the price-weighted apportionment degree. The study confirmed that any improvement in the fit between the log demand and log output distributions can only be realized at the expense of the log volume produced. Stand-level pre-control of price matrices was found to be advantageous, provided the control is done with perfect stem data. Forest-level pre-control of price matrices resulted in no improvement in the cumulative apportionment degree. Cutting stands under the control of stand-specific demand matrices yielded a better total fit between the demand and output matrices at the forest level than was obtained by cutting each stand with non-stand-specific reference matrices. The theoretical and experimental analyses suggest that none of the three alternative goodness-of-fit measures clearly outperforms the traditional apportionment degree. Keywords: harvesting, tree bucking optimization, simulation, fuzzy control, genetic algorithms, goodness-of-fit
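Two of the four measures can be sketched directly (a minimal sketch with invented class shares; the study's exact definitions, e.g. the volume and price weightings, may differ in detail). The apportionment degree here is the summed element-wise minimum of the demanded and produced shares, which is algebraically the same as 100 minus half the total absolute deviation; the chi-square statistic weights the squared share deviations by the demanded shares.

```python
def apportionment_degree(demand, output):
    """Overlap of two log distributions expressed as percentage shares:
    100 means the output apportionment matches the demand exactly.
    Computed as the sum of element-wise minima of the share vectors."""
    d = [x / sum(demand) for x in demand]
    o = [x / sum(output) for x in output]
    return 100.0 * sum(min(a, b) for a, b in zip(d, o))

def chi_square(demand, output):
    """Pearson chi-square of output shares against demanded shares."""
    d = [x / sum(demand) for x in demand]
    o = [x / sum(output) for x in output]
    return sum((b - a) ** 2 / a for a, b in zip(d, o) if a > 0)

demand = [30, 50, 20]   # demanded share of each log length/diameter class
output = [25, 45, 30]   # share actually bucked
ad = apportionment_degree(demand, output)
x2 = chi_square(demand, output)
```

The two measures rank misfits differently: the apportionment degree penalizes all share deviations equally, whereas chi-square penalizes deviations in small demand classes relatively more heavily.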
Abstract:
The objective was to measure productivity growth and its components in Finnish agriculture, especially in dairy farming. A further objective was to compare different methods and models, both parametric (stochastic frontier analysis) and non-parametric (data envelopment analysis), in estimating the components of productivity growth, and to assess the sensitivity of the results to the different approaches. The parametric approach was also applied in the investigation of various aspects of heterogeneity. A common feature of the first three of the five articles is that they concentrate empirically on technical change, technical efficiency change and the scale effect, mainly on the basis of decompositions of the Malmquist productivity index. The last two articles explore an intermediate route between the Fisher and Malmquist productivity indices and develop a detailed but meaningful decomposition for the Fisher index, including empirical applications. Distance functions play a central role in the decomposition of the Malmquist and Fisher productivity indices. Three panel data sets from the 1990s were used in the study. A common feature of all the data is that they cover the periods before and after Finland's EU accession. Another common feature is that the analysis mainly concentrates on dairy farms or their roughage production systems. Productivity growth on Finnish dairy farms was relatively slow in the 1990s: approximately one percent per year, independent of the method used. Despite considerable annual variation, productivity growth seems to have accelerated towards the end of the period, after a slowdown in the mid-1990s at the time of EU accession. No clear immediate effects of EU accession on technical efficiency could be observed. Technical change has been the main contributor to productivity growth on dairy farms. 
However, average technical efficiency often showed a declining trend, meaning that the deviations from the best practice frontier are increasing over time. This suggests different paths of adjustment at the farm level. However, different methods to some extent provide different results, especially for the sub-components of productivity growth. In most analyses on dairy farms the scale effect on productivity growth was minor. A positive scale effect would be important for improving the competitiveness of Finnish agriculture through increasing farm size. This small effect may also be related to the structure of agriculture and to the allocation of investments to specific groups of farms during the research period. The result may also indicate that the utilization of scale economies faces special constraints in Finnish conditions. However, the analysis of a sample of all types of farms suggested a more considerable scale effect than the analysis on dairy farms.
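The role of distance functions can be made concrete with the standard constant-returns decomposition of the Malmquist index into efficiency change and technical change (the studies above additionally separate a scale effect, which this sketch omits, and the distance values below are invented for illustration):

```python
from math import sqrt

def malmquist(d1_t1, d1_t2, d2_t1, d2_t2):
    """Malmquist productivity index between periods 1 and 2.

    d{s}_t{r} denotes the output distance function of the period-s
    technology evaluated at the period-r observation, e.g.
    d1_t2 = D^1(x_2, y_2).  Returns the index and its standard
    decomposition M = EC * TC (efficiency change x technical change).
    """
    m = sqrt((d1_t2 / d1_t1) * (d2_t2 / d2_t1))
    ec = d2_t2 / d1_t1                              # catching up to the frontier
    tc = sqrt((d1_t2 / d2_t2) * (d1_t1 / d2_t1))    # shift of the frontier itself
    return m, ec, tc

# Illustrative distance values for one dairy farm (1.0 = on the frontier).
m, ec, tc = malmquist(d1_t1=0.90, d1_t2=1.05, d2_t1=0.85, d2_t2=0.95)
```

In this example the index exceeds one (productivity growth), and both components contribute: the farm moves closer to the frontier (EC > 1) while the frontier itself shifts outward (TC > 1), matching the finding that technical change dominated on dairy farms.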
Abstract:
Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do so it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption, including consumption by industries, public consumption, and household consumption. In addition to the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth’s population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for assessing the environmental relevance of consumption activities, and another for identifying the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream, that is, the belief that economic growth can cure all our problems. The proponents therefore have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative. 
Efforts have to be made on multiple levels by multiple actors, and all of them are needed, as they constitute the individual strings that together make up the rope. Everyone must, however, ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to articulate better to the public the message that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel, it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies, such as regulatory and economic instruments, are the most effective. As for Civil Society Organizations, it is recommended that they abandon the habit of promoting Sustainable (in fact, green) Consumption through marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro-structures to foster them.
Abstract:
The topic of this dissertation lies at the intersection of harmonic analysis and fractal geometry. In particular, we consider singular integrals in Euclidean spaces with respect to general measures, and we study how the geometric structure of the measures affects certain analytic properties of the operators. The thesis consists of three research articles and an overview. In the first article we construct singular integral operators on lower-dimensional Sierpinski gaskets associated with homogeneous Calderón-Zygmund kernels. While these operators are bounded, their principal values fail to exist almost everywhere. Conformal iterated function systems generate a broad range of fractal sets. In the second article we prove that many of these limit sets are porous in a very strong sense, by showing that they contain holes spread in every direction. We then connect these results with singular integrals: exploiting the fractal structure of the limit sets, we establish that singular integrals associated with very general kernels converge weakly. Boundedness questions constitute a central topic of investigation in the theory of singular integrals. In the third article we study singular integrals defined with respect to two different measures. We prove a very general boundedness result in the case where the two underlying measures are separated by a Lipschitz graph. As a consequence we show that a certain weak convergence holds for a large class of singular integrals.
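For orientation, the central objects of this abstract can be written out explicitly. The following are the standard textbook definitions of the truncated singular integral with respect to a measure, its principal value, and porosity; they are not quoted from the thesis itself:

```latex
% epsilon-truncated singular integral of f with respect to the measure mu,
% and its principal value:
T_{\mu,\varepsilon}f(x) = \int_{|x-y|>\varepsilon} K(x-y)\, f(y)\, d\mu(y),
\qquad
\operatorname{p.v.}\, T_{\mu}f(x) = \lim_{\varepsilon \to 0} T_{\mu,\varepsilon}f(x).

% A set E \subset \mathbb{R}^n is porous if there exists c > 0 such that
% for every x \in E and every 0 < r < \operatorname{diam} E there is a point y with
B(y, cr) \subset B(x, r) \setminus E.
```

The non-existence of principal values in the first article means the limit above fails to exist for the operators constructed there, even though the truncated operators remain uniformly bounded.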
Abstract:
Advances in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations; examples include DNA sequences and the amino acid sequences of proteins. The scale and quality of the data promise answers to various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify regions in an amino acid sequence that are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we use a partition model, which describes the structure of the data by dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make inference feasible in practice. The partition model fulfills both requirements. Problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
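The final point can be illustrated with a toy version of such a partition model: below, short DNA sequences are clustered under a Dirichlet-multinomial marginal likelihood, so the per-cluster base frequencies are integrated out in closed form and a stochastic search only has to score partitions. This is a minimal sketch under assumed simplifications (pooled base counts per cluster, greedy accept/reject moves), not the actual model of the thesis:

```python
import math
import random

def log_marginal(cluster_seqs, alpha=1.0, alphabet_size=4):
    """Log marginal likelihood of one cluster: base frequencies get a
    symmetric Dirichlet(alpha) prior that is integrated out analytically
    (Dirichlet-multinomial), so no cluster parameters are ever sampled."""
    counts, n = {}, 0
    for seq in cluster_seqs:
        for base in seq:
            counts[base] = counts.get(base, 0) + 1
            n += 1
    lm = math.lgamma(alphabet_size * alpha) - math.lgamma(alphabet_size * alpha + n)
    for c in counts.values():
        lm += math.lgamma(alpha + c) - math.lgamma(alpha)
    return lm

def partition_score(labels, data):
    """Score of a partition = sum of per-cluster marginal likelihoods."""
    clusters = {}
    for label, seq in zip(labels, data):
        clusters.setdefault(label, []).append(seq)
    return sum(log_marginal(seqs) for seqs in clusters.values())

def stochastic_search(data, iterations=2000, seed=0):
    """Greedy stochastic search: propose moving one item to another cluster,
    keep the move only if the analytic partition score improves."""
    rng = random.Random(seed)
    labels = list(range(len(data)))          # start from singleton clusters
    best = partition_score(labels, data)
    for _ in range(iterations):
        i = rng.randrange(len(data))
        old, new = labels[i], rng.randrange(len(data))
        if new == old:
            continue
        labels[i] = new
        score = partition_score(labels, data)
        if score > best:
            best = score
        else:
            labels[i] = old                  # reject the move
    return labels, best
```

On data such as ["AAAA", "AAAT", "TTTT", "TTTA"], the search groups the A-rich and T-rich sequences into two clusters, because merging similar sequences raises the integrated likelihood while merging dissimilar ones lowers it.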
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA hybridized on the chip. The large number of processing steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to the three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of each gene. Third, a novel image segmentation approach that separates the fluorescent signal from undesired noise is developed using an additional dye, SYBR Green RNA II. This technique identifies signal arising only from the hybridized DNA, so that signal corresponding to dust, scratches, spilled dye, and other noise sources is avoided. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects, and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only for cDNA microarrays, but can also, with some modifications, be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
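The motivation for multiple scans can be shown with a much simpler, non-Bayesian stand-in: bright spots saturate at high scanner gain while dim spots are unreliable at low gain, so a calibrated estimate can discard saturated readings and average the rest. The gains, saturation ceiling, and averaging rule below are illustrative assumptions, not the latent intensity model of the thesis:

```python
SATURATION = 65535  # ceiling of a 16-bit scanner, assumed for illustration

def estimate_signal(observations, gains, saturation=SATURATION):
    """Combine readings of one spot taken at increasing scanner gains.
    Saturated readings carry no information beyond a lower bound, so they
    are dropped; the rest are divided by their gain and averaged."""
    usable = [obs / g for obs, g in zip(observations, gains) if obs < saturation]
    if not usable:
        # every scan saturated: only a lower bound on the true signal remains
        return saturation / min(gains)
    return sum(usable) / len(usable)
```

For a spot with true signal 10000 scanned at gains 1, 4, and 16, the third reading saturates and is ignored: `estimate_signal([10000, 40000, 65535], [1, 4, 16])` returns 10000.0. The Bayesian model of the thesis replaces this hard cutoff with a probabilistic treatment of censoring and measurement error.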
Abstract:
Quasiconformal mappings are natural generalizations of conformal mappings. They are homeomorphisms with 'bounded distortion', a notion that admits several equivalent definitions. In this work we study the dimension distortion properties of quasiconformal mappings, both in the plane and in the higher-dimensional Euclidean setting. The thesis consists of a summary and three research articles. A basic property of quasiconformal mappings is local Hölder continuity. It has long been conjectured that this regularity holds at the Sobolev level (Gehring's higher integrability conjecture). Optimal regularity would also provide sharp bounds for the distortion of Hausdorff dimension. The higher integrability conjecture was solved in the plane by Astala in 1994, but it is still open in higher dimensions. Thus in the plane we have a precise description of how Hausdorff dimension changes under quasiconformal deformations of general sets. The first two articles contribute to two remaining issues in the planar theory. The first concerns the distortion of more special sets: for rectifiable sets we expect improved bounds to hold. The second issue consists of understanding the distortion of dimension on a finer level, namely on the level of Hausdorff measures. In the third article we study flatness properties of quasiconformal images of spheres in a quantitative way. These also lead to nontrivial bounds for their Hausdorff dimension, even in the n-dimensional case.
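The precise planar description alluded to above is Astala's dimension-distortion theorem; stating it makes the abstract's claim concrete. For a K-quasiconformal mapping f of the plane and a set E with 0 < dim E ≤ 2 (a standard formulation, not quoted from the thesis):

```latex
% Astala's sharp bound for distortion of Hausdorff dimension in the plane:
\frac{1}{\dim f(E)} - \frac{1}{2}
\;\ge\; \frac{1}{K}\left(\frac{1}{\dim E} - \frac{1}{2}\right),
\qquad \text{equivalently} \qquad
\dim f(E) \;\le\; \frac{2K \dim E}{2 + (K-1)\dim E}.
```

For K = 1 the map is conformal and the bound reduces to dim f(E) = dim E; as K grows, sets of any positive dimension can be blown up arbitrarily close to dimension 2.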